Version: 2.11.0

Configure ECA for STIG-Enabled PowerScale Environments

Introduction

This document outlines the configuration required to operate the Extended Cluster Appliance (ECA) for Data Security features in environments where Dell PowerScale clusters are secured using the OneFS STIG security profile. It is intended for scenarios where the appliance is already deployed and focuses on the post-deployment steps needed to ensure interoperability with STIG-enabled clusters.

Use this guide to align your Superna configuration with the security requirements of a STIG-enabled PowerScale environment.

Requirements

You must complete all steps in the Eyeglass STIG Configuration Guide before proceeding.

Configuration Steps

info

Perform these steps on each ECA node.

  1. Bring the ECA cluster down

    Run the following command:

    ecactl cluster down
  2. Open an SSH session to each ECA node using the ecaadmin account. After logging in, switch to the root user:

    sudo su -
  3. Update the Network Time Protocol (NTP) configuration:

    • Edit the file /etc/chrony.d/pool.conf.
    • Set the NTP server to match the PowerScale cluster's configuration, as shown in the example below.
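
    For example, a minimal /etc/chrony.d/pool.conf pointing at a single time source could look like the following, where ntp.example.com is a placeholder for the NTP server used by the PowerScale cluster:

    server ntp.example.com iburst

    After saving the file, restart the time service so the change takes effect:

    sudo systemctl restart chronyd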
  4. Configure Kerberos client services:

    1. Install the required packages:

      sudo zypper in krb5-client adcli sssd sssd-ldap sssd-ad sssd-tools realmd
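
      (Optional) Confirm the packages are installed:

      rpm -q krb5-client adcli sssd sssd-ldap sssd-ad sssd-tools realmd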
    2. Stop the nscd service:

      sudo systemctl stop nscd
    3. Join the Kerberos realm:

      Replace the placeholders with your environment's actual domain and credentials.

      /usr/sbin/realm join <your-domain> -U <your-user> --automatic-id-mapping=no
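
      (Optional) Confirm the node joined the realm successfully:

      realm list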
    4. Update the Kerberos configuration:

      Edit the /etc/krb5.conf file with the following template, replacing placeholder values with your domain-specific information:

      includedir  /etc/krb5.conf.d

      [libdefaults]
      default_realm = EXAMPLE.COM
      dns_lookup_kdc = true
      forwardable = true
      default_cache_name = FILE:/tmp/krb5cc_%{uid}

      [realms]
      EXAMPLE.COM = {
          admin_server = dc1.example.com
          kdc = kdc1.example.com
      }

      [domain_realm]
      .example.com = EXAMPLE.COM
      example.com = EXAMPLE.COM

      [logging]
      kdc = FILE:/var/log/krb5/krb5kdc.log
      admin_server = FILE:/var/log/krb5/kadmind.log
      default = SYSLOG:NOTICE:DAEMON
    5. Configure the SSSD service:

      Edit the file /etc/sssd/sssd.conf. Replace values as required by your environment:

      [sssd]
      config_file_version = 2
      services = nss,pam
      domains = example.com

      [nss]
      filter_users = root
      filter_groups = root

      [pam]

      [domain/EXAMPLE.COM]
      id_provider = ad
      auth_provider = ad
      ad_domain = example.com
      cache_credentials = true
      enumerate = false
      override_homedir = /home/%d/%u
      ldap_id_mapping = true
      ldap_referrals = false
      ldap_schema = ad
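
      SSSD typically refuses to start if its configuration file is not owned by root with restrictive permissions, so ensure the following after editing:

      sudo chown root:root /etc/sssd/sssd.conf
      sudo chmod 600 /etc/sssd/sssd.conf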
    6. (Optional) If sss is not already listed, edit /etc/nsswitch.conf so that passwd and group lookups include SSSD:

      passwd: compat sss
      group: compat sss
    7. Join the directory using adcli:

      adcli join -D <your-domain>
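
      (Optional) Confirm the machine account keys were written to the host keytab (this assumes the default keytab location, /etc/krb5.keytab):

      sudo klist -k /etc/krb5.keytab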
    8. Start required services:

      systemctl start sssd rpc-gssd
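
      (Optional) Confirm both services are active:

      systemctl status sssd rpc-gssd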
    9. Update PAM configuration:

      pam-config -a --sss
      pam-config -a --mkhomedir
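
      (Optional) Verify that each module was added to the PAM configuration:

      pam-config -q --sss
      pam-config -q --mkhomedir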
    10. Edit /etc/openldap/ldap.conf:

      Replace values with domain-appropriate entries.

      URI ldap://example.com
      BASE dc=example,dc=com
      REFERRALS OFF
    11. Create and validate a Kerberos ticket for an Active Directory user:

      1. Run the following command to create the ticket and ensure there are no errors:

        kinit <AD_user>@<DOMAIN>
      2. Validate the Kerberos ticket:

        klist
      3. Retrieve Active Directory server information:

        adcli info <your-domain>
      4. Verify the user ID of the Active Directory user:

        id <AD_user>
    caution

    The Kerberos ticket will expire. Refresh it while the ticket is still valid using the following command:

    kinit -R

    To automate the renewal, add the command to the crontab of the user that created the ticket, so it refreshes the same credential cache. Example:

    0 */12 * * * kinit -R

    If the ticket is allowed to expire, Java-based processes may encounter read errors on the mounted paths.

  5. Update the NFS auto-mount configuration

    Edit the auto.nfs file to enable centralized NFS access on all ECA nodes.

    1. Enable and start the autofs service on all nodes:

      ecactl cluster exec "sudo systemctl unmask autofs"
      ecactl cluster exec "sudo systemctl start autofs"

      (Optional) Confirm the service is active:

      ecactl cluster exec "sudo systemctl status autofs"
    2. Update the auto.nfs file on the primary ECA node. Replace the placeholders with environment-specific values:

      echo -e "\n/opt/superna/mnt/audit/<cluster_GUID>/<cluster_name> --fstype=nfs,nfsvers=3,sec=krb5p,ro,soft <nfs_pool_SmartConnect_zonename>:/ifs/.ifsvar/audit/logs" >> /opt/superna/eca/data/audit-nfs/auto.nfs
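
      (Optional) Confirm the entry was appended as a single line:

      cat /opt/superna/eca/data/audit-nfs/auto.nfs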
    3. Push the updated configuration to all nodes:

      ecactl cluster push-config
    4. Restart autofs to apply the update:

      ecactl cluster exec 'sudo systemctl restart autofs'
    5. Confirm the mount is available:

      df -h
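
      To limit the output to the audit mount, you can filter on the mount prefix used above:

      df -h | grep /opt/superna/mnt/audit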
  6. Bring the ECA cluster back up

    Run the following command:

    ecactl cluster up
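
    Once the cluster is back up, you can verify overall health from the primary node; the command below assumes your ECA release provides ecactl cluster status:

    ecactl cluster status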