Core Agent Appliance & ECA - Re-deployment Upgrade

Core Agent Appliance Re-deployment Upgrade Prechecks

  1. Take the following screenshots from the Core Agent Appliance GUI:

    a. About/Contact

    note

    This provides details like the version, openSUSE OS version, and Appliance ID.

    b. Continuous Op Dashboard

    note

    Check that the Connectivity status is OK (green).

    c. Easy Auditor >

    • i. Report Schedule
    • ii. Saved Queries
    • iii. Active Auditor (Data Loss Protection, Mass Delete, Custom)
    • iv. Robo Audit
    note

    Verify that this job completes successfully.

    d. Inventory View

    note

    Make sure all the clusters are populated with their configuration details.

    e. Jobs >

    • i. Job Definitions — Verify the status of all jobs.
    • ii. Running Jobs — Make sure all the jobs are completing successfully.

    f. License Management

    note

    Review license details and Support License Expiry date.

    g. Ransomware Defender >

    • i. Learned Thresholds
    • ii. Ignored List
    • iii. Monitor Only Settings
    • iv. Threshold
    • v. File Filters
    • vi. Security Guard — Verify if this job is completing successfully.

    h. Manage Services

    • i. ECA Monitor
      • Make sure ECA VMs are receiving and sending events.
      • Verify the status of all VMs.
  2. Take the following screenshots from the Core Agent Appliance CLI:

    a. free -h

    note

    Verify the RAM size and assign the same RAM size to the newly deployed VM.

    b. Collect Networking details:

    ip addr; ip route; tail -6 /etc/resolv.conf; cat /etc/chrony.d/pool.conf
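    The outputs above are easiest to compare against the new VM later if they are captured in a single file. A minimal sketch; the output path is an arbitrary choice for illustration:

```shell
# Save the precheck networking details in one file for later comparison.
# /tmp/precheck-network.txt is an arbitrary path; use any location you like.
OUT=/tmp/precheck-network.txt
{
  ip addr
  ip route
  tail -6 /etc/resolv.conf
  cat /etc/chrony.d/pool.conf
} > "$OUT" 2>&1

wc -l "$OUT"
```

    Copy the resulting file off the appliance along with the screenshots.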

    c. Copy the following files for the SMTP configs and replace them in the new environment:

    • /etc/syslog-ng/conf.d/eca-syslog.conf
    • /etc/syslog-ng/syslog-ng.conf

    d. Check the syncIsilonsToZK setting:

    grep syncIsilonsToZK /opt/superna/sca/data/system.xml

    note

    Make sure it is set to true. If it is false, update it to true after the upgrade.

  3. Take a Restore Backup of the old Core Agent Appliance and copy it to your local machine:

    a. Go to About/Contact > Backup > Restore Backup.

    b. Go to Jobs > Running Jobs > Monitor the status of the Archive Creation job and wait for it to complete.

    c. Once the Archive Creation job completes, go to About/Contact > Backup > Select the latest restore backup > Download.


Core Agent Appliance Re-deployment Upgrade Steps

  1. Download the latest Core Agent Appliance OVF file from the support portal: https://support.superna.net/hc/en-us

  2. Deploy the new VM with the same or different network details.

  3. Once deployed, Power Off the old VM and Power On the new VM.

  4. Using CMD (command prompt), ping the new VM IP to make sure the network settings are correct.

  5. Use WinSCP to transfer the restore backup zip file under /home/admin/ in the new VM.

  6. SSH to the new VM using admin/3y3gl4ss and run the following command to switch to root user:

    sudo su

  7. Run the following command to restore the backup:

    igls app restore /home/admin/<restore backup filename> --anyrelease

    • Press Y when prompted.
  8. Once the restore completes, continue with Step 9.

  9. Copy the files below (saved during the prechecks) to the new Core Agent Appliance using WinSCP and move them to their respective paths:

    • /etc/syslog-ng/conf.d/eca-syslog.conf
    • /etc/syslog-ng/syslog-ng.conf
  10. Update NTP server IP if required:

    a. nano /etc/chrony.d/pool.conf

    b. Remove default entries and update with the internal NTP server IP.

    c. Save the file:

    • Press Ctrl+X
    • Answer yes to save and exit the nano editor.

    d. Restart the chronyd service: systemctl restart chronyd.service
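    The pool.conf edit above can also be scripted. A minimal sketch, demonstrated on a scratch copy so nothing on the appliance is touched; point CONF at /etc/chrony.d/pool.conf (after backing it up) when running for real, and treat the server IP as a placeholder:

```shell
# Demonstrated on a scratch copy; set CONF=/etc/chrony.d/pool.conf on the
# appliance after taking a backup of the file.
CONF=/tmp/pool.conf.demo
printf 'pool pool.ntp.org iburst\npool 2.opensuse.pool.ntp.org iburst\n' > "$CONF"

NTP_SERVER="10.0.0.10"                       # placeholder: internal NTP server IP
sed -i 's/^pool /#pool /' "$CONF"            # comment out the default pool entries
echo "server $NTP_SERVER iburst" >> "$CONF"  # add the internal server

grep '^server' "$CONF"                       # -> server 10.0.0.10 iburst
```

    After editing the real file and restarting chronyd, chronyc sources should show the internal server being polled.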


Core Agent Appliance Re-deployment Post Upgrade Checks

  1. Take the following screenshots from the Core Agent Appliance GUI:

    a. About/Contact

    note

    Verify the upgraded version.

    b. Continuous Op Dashboard

    note

    Check that the Connectivity status is OK (green).

    c. Easy Auditor >

    • i. Report Schedule
    • ii. Saved Queries
    • iii. Active Auditor (Data Loss Protection, Mass Delete, Custom)
    • iv. Robo Audit
    note

    Initiate Robo Audit job and make sure it completes successfully.

    d. Inventory View

    note

    Make sure all the clusters are populated with their configuration details.

    e. Jobs >

    • i. Job Definitions
    note

    If jobs are not present under Job Definitions, check Running Jobs and make sure the initial inventory has finished. If it has not finished, wait. If it has finished and jobs are still not present, open a support ticket to troubleshoot further.

    • ii. Running Jobs — Make sure all the jobs are completing successfully.

    f. License Management

    g. Ransomware Defender >

    • i. Learned Thresholds
    • ii. Ignored List
    • iii. Monitor Only Settings
    • iv. Threshold
    • v. File Filters
    • vi. Security Guard
    note

    Initiate Security Guard job and make sure it completes successfully.

    h. Manage Services

    • i. ECA Monitor
      • Make sure ECA VMs are receiving and sending events.
      • Make sure all VMs are in OK (green) status.
  2. Take the following screenshots from the Core Agent Appliance CLI:

    a. df -h

    note

    Make sure disk space usage is less than 80%. If it is above 80%, open a support ticket to troubleshoot further.
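    The 80% check can be scripted rather than read off the df output by eye. A minimal sketch; the GNU df --output flag and the 80% threshold are the only assumptions:

```shell
# Warn on any filesystem above the 80% usage threshold.
check_usage() {                 # $1 = usage like "85%", $2 = mount point
  use=${1%\%}                   # strip the trailing % sign
  if [ "$use" -gt 80 ]; then
    echo "WARNING: $2 at $1"
  fi
}

# GNU coreutils df supports --output; pcent prints usage as e.g. " 45%".
df --output=pcent,target | tail -n +2 | while read -r pcent target; do
  check_usage "$pcent" "$target"
done
```

    Any WARNING line printed here corresponds to a filesystem that warrants a support ticket.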

    b. grep syncIsilonsToZK /opt/superna/sca/data/system.xml

    note

    Make sure it is set to true. If it is false, update it to true using the steps below:

    1. SSH to the Core Agent Appliance as the admin user
      • Switch to the root user: sudo su
      • nano /opt/superna/sca/data/system.xml
      • Search for syncIsilonsToZK and update to true.
      • Save the file:
        • Press Ctrl+X
        • Answer yes to save and exit the nano editor.
      • Restart sca service: systemctl restart sca
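    The nano edit above can be replaced with a one-line sed, assuming the flag appears as an element value of the form <syncIsilonsToZK>false</syncIsilonsToZK> (confirm the exact layout with the grep first). Demonstrated on a scratch copy:

```shell
# Demonstrated on a scratch copy; on the appliance the file is
# /opt/superna/sca/data/system.xml (take a backup before editing).
# The element layout below is an assumption - verify it with grep first.
XML=/tmp/system.xml.demo
echo '<syncIsilonsToZK>false</syncIsilonsToZK>' > "$XML"

sed -i 's#<syncIsilonsToZK>false#<syncIsilonsToZK>true#' "$XML"
grep syncIsilonsToZK "$XML"     # -> <syncIsilonsToZK>true</syncIsilonsToZK>
```

    Remember to restart the sca service afterwards, as in the steps above.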

ECA Re-deployment Upgrade Prechecks

  1. Complete the following steps by SSH to the current ECA Node 1 as the ecaadmin user:

    a. Copy the following files to the local system:

    • /opt/superna/eca/eca-env-common.conf
    • /opt/superna/eca/data/audit-nfs/auto.nfs
    • /opt/superna/eca/conf/syslogpublisher/log4j2.xml
    • /opt/superna/eca/docker-compose.overrides.yml
    • /opt/superna/eca/conf/common/overrides/ThreatLevels.json — If it exists. Otherwise ok to ignore.

    b. free -h

    note

    Verify the RAM size and assign the same RAM size to the newly deployed VM.

    c. Collect Networking details:

    ip addr; ip route; tail -6 /etc/resolv.conf; cat /etc/chrony.d/pool.conf


ECA Re-deployment Upgrade Steps

  1. Download the latest ECA Cluster Appliance OVF file from the support portal: https://support.superna.net/hc/en-us

  2. Deploy the new ECA VMs with the same or different IPs and leave them powered off.

  3. When deploying, use the same ECA cluster name as the old cluster. You can find the name with the command below:

    grep ECA_CLUSTER_ID /opt/superna/eca/eca-env-common.conf

  4. On old ECA VM 1, run ecactl cluster down to bring the cluster down, then Power Off all the old ECA VMs.

  5. Power On the new ECA VMs and ping each one of them.

    warning

    Do NOT continue if you cannot ping.

  6. WinSCP to the new ECA VM 1 and replace the default files with the files collected during the prechecks:

    • /opt/superna/eca/eca-env-common.conf
    • /opt/superna/eca/data/audit-nfs/auto.nfs
    • /opt/superna/eca/conf/syslogpublisher/log4j2.xml
    • /opt/superna/eca/conf/common/overrides/ThreatLevels.json — If available and copied from old ECA. Otherwise ok to ignore.
    • /opt/superna/eca/docker-compose.overrides.yml
    note

    Do not copy docker-compose.overrides.yml directly. Review its contents and copy only the necessary flags (additional networking flags or mini-ECA configuration flags) into the file on the new ECA VM, as files from some older ECA versions contain obsolete flags.

  7. If the Core Agent Appliance IP has changed, update the line export EYEGLASS_LOCATION= in /opt/superna/eca/eca-env-common.conf with the new IP. Otherwise, continue with Step 8.

  8. If ECA VM IPs are different from the old ECA VMs, perform the steps below. If IPs are the same, skip to Step 9.

    a. Update NFS Exports on all PowerScale clusters:

    • isi nfs exports list
    • isi nfs exports modify <export_id> --root-clients="<ECA_VM_1_IP>,<ECA_VM_2_IP>,<ECA_VM_3_IP>,<ECA_VM_n_IP>" --clients="<ECA_VM_1_IP>,<ECA_VM_2_IP>,<ECA_VM_3_IP>,<ECA_VM_n_IP>" -f
      • Replace export_id with the ID from the first command.
      • Replace all ECA VM IPs.
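    Building the comma-separated client list once avoids typos when there are several ECA VMs. A sketch with placeholder IPs; the isi command itself must be run on the PowerScale cluster, not here:

```shell
# Placeholder ECA VM IPs - replace with the new addresses.
ECA_IPS=("192.0.2.11" "192.0.2.12" "192.0.2.13")

# Join the array with commas for the --clients/--root-clients arguments.
CLIENTS=$(IFS=,; echo "${ECA_IPS[*]}")
echo "$CLIENTS"   # -> 192.0.2.11,192.0.2.12,192.0.2.13

# Then, on the PowerScale cluster:
# isi nfs exports modify <export_id> --root-clients="$CLIENTS" --clients="$CLIENTS" -f
```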

    b. Update HDFS virtual rack client IPs with the new ECA VM IPs:

    note

    Skip this step if the Easy Auditor solution is not deployed.

    • Log in to the PowerScale cluster OneFS GUI hosting the Easy Auditor HDFS Database.
    • Go to Protocols > Hadoop (HDFS) > Current access zone > Select Eyeglass Easy Auditor HDFS Access Zone > Update Client IP range.
  9. Check that the mount entries are present: cat /opt/superna/eca/data/audit-nfs/auto.nfs

  10. Update NTP server IP on all ECA VMs if required:

    On each ECA VM:

    a. nano /etc/chrony.d/pool.conf

    b. Remove default entries and update with the internal NTP server IP.

    c. Save the file:

    • Press Ctrl+X
    • Answer yes to save and exit the nano editor.

    d. Restart the chronyd service: systemctl restart chronyd.service

  11. Create Passwordless SSH: ecactl components configure-nodes

  12. Push config to all nodes: ecactl cluster push-config

  13. Cluster up: ecactl cluster up

  14. Once the cluster is up and running, follow the ECA Re-deployment Post Upgrade checks.


ECA Re-deployment Post Upgrade Checks

  1. Make sure Core Agent Appliance and ECA VMs version match.

  2. Take the following screenshots from ECA Node 1:

    a. SSH to ECA Node 1 as ecaadmin user.

    b. ecactl cluster exec "df -h"

    note

    Make sure disk space usage is less than 80% on all ECA VMs. If it is above 80%, open a support ticket to troubleshoot further.

    c. Manage Services (Core Agent Appliance GUI)

    • i. ECA Monitor
      • Make sure ECA VMs are receiving and sending events.
      • Make sure all VMs are in OK (green) status.