Version: 2.9.0

ECA VM Installation

Introduction

The Eyeglass Clustered Agent (ECA) is a tool that facilitates auditing and ransomware defense through the deployment of virtual machines (VMs) on Hyper-V and VMware, including the Mini-ECA option for distributed sites. This guide provides step-by-step instructions for deploying the ECA VM in these environments, ensuring that the auditing of critical data is carried out efficiently, with support for both centralized and distributed cluster modes.

One key deployment option in this guide is the Mini-ECA, designed specifically for environments that require distributed cluster mode. The Mini-ECA enhances security by forwarding audit data from remote sites to a central cluster, making it particularly useful in scenarios where remote site processing is necessary due to high latency or slow WAN connections.

Deployment Scenarios

Centralized with NFS over WAN

In this setup, a central ECA cluster accesses audit data from remote PowerScale OneFS clusters over a WAN link using NFS. This configuration is ideal for metro WAN environments where latency is low.

Centralized with Remote Mini-ECA

If your WAN connection has higher latency (>10 ms RTT) or is slow, deploying a Mini-ECA at remote sites helps mitigate these issues. In this configuration, the Mini-ECA locally collects audit data via NFS and forwards it to the central ECA cluster for processing.

important

Mini-ECA is optional: Before proceeding with its setup, assess your environment’s network conditions to determine if Mini-ECA is necessary. Installing it in environments with low latency may lead to unnecessary configurations.

Requirements

The Eyeglass appliance is required for installation and configuration. The ECA Cluster operates in a separate group of VMs from Eyeglass.

Key Components

  • Eyeglass: Responsible for taking actions on the cluster and notifying administrators.
  • PowerScale Cluster: Stores the analytics database (can be the same cluster that is monitored for audit events).
  • Licenses:
    • Eyeglass Appliance: Requires either Ransomware Defender Agent Licenses, Easy Auditor Agent Licenses, or Performance Auditor Licenses.
  • HDFS License (for Easy Auditor):
    • PowerScale cluster requires an HDFS license to store the analytics database for Easy Auditor.
    note

    Ransomware Defender deployments no longer require an HDFS pool.

System Requirements and Network Latency Considerations

ECA Hyper-V

The ECA appliance uses two disks: one for the OS and one for data.

  • OS Disk: Requires 20 GB (default disk).
  • Data Disk: Requires 80 GB. (Read the instructions below on how to create the data disk).

OVA Install Prerequisites

info

The OVA file will deploy 3 VMs. To build a 6-node cluster, deploy the OVA twice and move the VMs into the first Cluster object in vCenter. Follow the instructions below to correctly move the VMs into a single vApp in vCenter.

Configuration Items

  • Number of ECA VMs: see the scaling section
  • vSphere 6.x or higher
  • 1x IP address on the same subnet for each node
  • Gateway
  • Network Mask
  • DNS IP
  • NTP server IP
  • IP Address of Eyeglass
  • API token from Eyeglass
  • Unique cluster name (lower case, no special characters)

Mini-ECA

Latency Requirements

  • Latency between the main ECA cluster and the remote mini ECAs must be below a ping time of 80 ms.
    • Latency above 80 ms may not be supported.

Required Mounting Method

  • The FSTAB method is required for mounting the cluster audit folder.
    • See detailed instructions in the following section.

Network Impact Calculation

  1. To calculate the bandwidth requirement, you need to know the audit event rate for the cluster.

  2. Run the following command to get the average disk operations per PowerScale OneFS node:

    • Command:

      isi statistics query current --nodes=all --stats=node.disk.xfers.rate.sum
    • This command returns the average per node at the bottom of the results. Use this value in the calculation below.

  3. Calculate the network bandwidth by taking the following steps:

    • Take the average per node and multiply by the number of nodes.
      • Example: If the command reports an average of 2200 and there are 7 nodes:
        • 2200 * 7 = 15,400
    • Divide this number by the ratio of audit events to disk transfers (1.83415365 in this case).
      • Example:

        15,400 / 1.83415365 = 8396 events/second
  4. Use the following calculation to compute the required network bandwidth to forward events to the central site for processing:

    • Given: 5 Mbps of network traffic @ 1000 events/sec

    • Example for 8396 events/sec:

      • (8396 / 1000) * 5 Mbps ≈ 42 Mbps
    info

    The required network bandwidth in this example is approximately 42 Mbps to handle the audit event traffic.
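
The same estimate can be scripted. The sketch below is a minimal example that assumes the ratios stated above (1.83415365 disk transfers per audit event, 5 Mbps per 1000 events/sec); AVG_XFERS and NODES are placeholder inputs taken from the worked example.

    # Minimal sketch of the bandwidth estimate above (assumed ratios from this guide).
    # AVG_XFERS and NODES are example inputs; use the average reported by
    # 'isi statistics query current' and your own node count.
    AVG_XFERS=2200
    NODES=7
    awk -v x="$AVG_XFERS" -v n="$NODES" 'BEGIN {
      events = (x * n) / 1.83415365;      # audit events per second
      mbps   = (events / 1000) * 5;       # Mbps needed to forward events to the central site
      printf "Estimated %.0f events/sec -> %.1f Mbps WAN bandwidth\n", events, mbps
    }'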

ECA Cluster Sizing and Performance Considerations

ECA clusters can consist of 3 to 12 nodes or more, depending on the following factors:

  • The applications running on the cluster.
  • The number of events generated per second.
  • The number of cluster nodes producing audit events.

Minimum ECA Node Configurations

The supported minimum configurations for all ECA deployments are listed below.

note

New applications or releases with features that require additional resources may necessitate expanding the ECA cluster to handle multiple clusters or new application services.

Ransomware Defender Only (3, 6, 8, 9)
  • ECA VM nodes required: 3-node ECA cluster (1 to 2 managed clusters OR < 6000 audit events per second); 6-node ECA cluster (> 2 managed clusters OR > 6000 events per second)
  • ESX hosts to split VM workload and ensure high availability: 2
  • ECA node VM size: 4 x vCPU, 16G RAM, 30G OS partition + 80G disk
  • Network latency, NFS mount (Ransomware Defender & Easy Auditor): < 10 ms RTT
  • Easy Auditor database network latency between ECA and the PowerScale OneFS cluster storing the DB: NA
  • Host hardware requirements: 2-socket CPU, 2 GHz or greater; average disk IO read and write latency < 20 ms

Easy Auditor Only (2, 3, 5, 7, 8, 9)
  • ECA VM nodes required: 6-node ECA cluster
  • ESX hosts to split VM workload and ensure high availability: 2
  • ECA node VM size: 4 x vCPU, 16G RAM, 30G OS partition + 80G disk
  • Network latency, NFS mount: < 10 ms RTT
  • Easy Auditor database network latency: < 5 ms RTT
  • Host hardware requirements: 2-socket CPU, 2 GHz or greater; average disk IO read and write latency < 20 ms

Ransomware Defender and Easy Auditor Unified Deployment (< 18K events per second; 3, 5, 7, 8, 9)
  • ECA VM nodes required: 6-node ECA cluster
  • ESX hosts to split VM workload and ensure high availability: 2
  • ECA node VM size: 4 x vCPU, 16G RAM, 30G OS partition + 80G disk
  • Network latency, NFS mount: < 10 ms RTT
  • Easy Auditor database network latency: < 5 ms RTT
  • Host hardware requirements: 2-socket CPU, 2 GHz or greater; average disk IO read and write latency < 10 ms

Very High IO Rate Clusters (> 18K events per second; 3, 5, 7, 8, 9, 10)
  • ECA VM nodes required: 9-node ECA cluster
  • ESX hosts to split VM workload and ensure high availability: 3
  • ECA node VM size: 4 x vCPU, 16G RAM, 30G OS partition + 80G disk
  • Network latency, NFS mount: < 10 ms RTT
  • Easy Auditor database network latency: < 5 ms RTT
  • Host hardware requirements: 2-socket CPU, 2 GHz or greater; average disk IO read and write latency < 10 ms

Large Node Count Clusters (> 20 nodes; 3, 5, 7, 8, 9)
  • ECA VM nodes required: 20 - 30 cluster nodes = 9 VMs; > 30 cluster nodes = 12 VMs
  • ESX hosts to split VM workload and ensure high availability: 3
  • ECA node VM size: 4 x vCPU, 16G RAM, 30G OS partition + 80G disk
  • Network latency, NFS mount: < 10 ms RTT
  • Easy Auditor database network latency: < 5 ms RTT
  • Host hardware requirements: 2-socket CPU, 2 GHz or greater; average disk IO read and write latency < 10 ms

Unified Ransomware Defender, Easy Auditor, and Performance Auditor Deployments (3, 4, 5, 7, 8, 9)
  • ECA VM nodes required: 6 - 9 ECA VMs depending on event rates
  • ESX hosts to split VM workload and ensure high availability: 3
  • ECA node VM size: 6 x vCPU, 20G RAM, 30G OS partition + 80G disk
  • Network latency, NFS mount: < 10 ms RTT
  • Easy Auditor database network latency: < 5 ms RTT
  • Host hardware requirements: 2-socket CPU, 2 GHz or greater; average disk IO read and write latency < 10 ms

ECA Appliance Platforms

VMware OVA and Microsoft Hyper-V VHDX appliance platforms are available.

Low Event Rate Environments

Contact support for reduced footprint configuration with 3 VMs only for low event rate environments.

ECA Cluster Configuration Guidelines
  1. OVA Resource Limits: The OVA default sets a resource limit of 18000 MHz, shared by all ECA VM nodes in the cluster. This limit can be increased if the audit event load requires more CPU processing. Consult support before making any changes in VMware.

  2. Real-Time Distributed Processing: ECA clusters must operate in the same Layer 2 subnet with low latency between VMs. Splitting VMs across data centers is not supported. The only supported distributed mode is the Mini-ECA deployment architecture covered in this guide.

  3. Resource Requirements for Additional Applications: Unified Ransomware Defender, Easy Auditor, and Performance Auditor require additional resources beyond event rate sizing requirements. Add 4 GB of RAM and 2 additional vCPUs per ECA node. High event rates may require further resource increases. Consult the Eyeglass Scalability table for RAM upgrade requirements.

  4. Audit Data Retention: Retaining audit data for more than 1 year increases database size, requiring at least 3 additional ECA VMs to maintain performance. Data retention longer than 365 days requires extra resources and VMs.

  5. High Availability (HA) Requirements: For HA, multiple physical hosts are required. ECA clusters with 3 VMs can tolerate N-1 VM failures, clusters with 6 VMs can tolerate N-2 failures, and larger clusters tolerate N-3 failures.

  6. OneFS 8.2 or Later: Customers using OneFS 8.2 or later must disable directory open and directory close events to reduce the audit rate and the ECA VM footprint.

  7. VMware Settings: Storage vMotion, SDRS, and DRS should be disabled, as ECA VMs are real-time processing systems.

  8. Archiving GZ Files: To maintain performance, old gz files collected on OneFS nodes must be archived. Performance degrades when the gz file count exceeds 5000. Follow the procedures provided, or use the auto-archive feature in OneFS 9.x (a quick way to check the count is sketched after this list).

  9. Database Save Rates: Database save rates exceeding 1000 events per second per ECA node require additional database VMs to handle save operations efficiently.
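
As a quick way to check the gz count referenced in guideline 8, the following command (a sketch; run as root on the OneFS cluster) counts compressed audit log files under the audit logs directory:

    # Count compressed audit log files across all node subdirectories (sketch)
    find /ifs/.ifsvar/audit/logs -name '*.gz' | wc -l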

IP Connection and Pool Requirements for the Analytics Database - Requires HDFS on the Cluster (Easy Auditor)


ECA Cluster Network Bandwidth Requirements to PowerScale OneFS (Ransomware Defender, Easy Auditor, Performance Auditor)

Each ECA node processes audit events and writes data to the Analytics Database using HDFS on the same network interface. Therefore, the combined TX (Transmit) and RX (Receive) data flow constitutes the peak bandwidth requirement per node.

Below is a table that provides minimum bandwidth requirements per ECA VM based on an example calculation for HDFS Bandwidth. This includes estimates and guidelines for Analytics Database network bandwidth access to PowerScale OneFS.

Product Configuration | Audit Event Rate Per Second | Peak bandwidth requirement - NFS reads of events from PowerScale OneFS into the ECA cluster (input) | Peak bandwidth requirement - audit data writes over HDFS from the ECA cluster (output)
Ransomware Defender only | 2000 events/sec | Input to ECA → 50 Mbps | Out of ECA ← < 150 Mbps
Unified Ransomware Defender and Easy Auditor - steady state storing events | > 4000 events/sec | Input to ECA → 125 Mbps | Out of ECA ← 500 Mbps - 1.2 Gbps
Easy Auditor analysis reports (long-running reports) | NA | Input to ECA (HDFS reads from PowerScale OneFS) ← 800 Mbps - 1.5 Gbps while a report runs | NA

Hyper-V or VMware Requirements

VMware ESX Host Compute Sizing for ECA nodes (Ransomware Defender, Easy Auditor, Performance Auditor)

For VMware environments with DRS and SDRS, it is best practice to exempt the ECA and vApp from dynamic relocation. This is because ECA is a real-time application with time synchronization requirements between VMs for processing and database operations.

While DRS movement of running VMs can negatively affect these processes, it is acceptable to migrate VMs for maintenance purposes as needed.

Number of active concurrent users per cluster ¹ | ECA VM per physical host recommendation | Estimated events guideline
1 to 1000 | 1 host | 5000 * 1.25 = 6,250 events per second
5000 - 10000 | 2 hosts | 10,000 * 1.25 = 12,500 events per second
> 10000 | 3 hosts | Number of users * 1.25 events/second

info

¹ Active TCP connection with file IO to the cluster.
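
For example, applying the guideline in the last row, a cluster with 20,000 active concurrent users would be estimated at 20,000 * 1.25 = 25,000 events per second, which falls under the 3-host recommendation.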

Firewall Configurations

Security - Firewall Port Requirements for Ransomware Defender, Easy Auditor, and Performance Auditor

Firewall Rules and Direction Table

info

These rules apply to both incoming and outgoing traffic for the virtual machines (VMs). It is important to ensure that all ports remain open between VMs. Private VLANs and firewalls between VMs are not supported in this configuration.

To enhance the security of the Eyeglass Clustered Agent (ECA), we recommend the following measures:

  1. Firewall Configuration:

    • Configure firewalls to restrict access to the ports between the Eyeglass VM and the ECA VM.
    • No external access is required for the ECA, aside from SSH access for management purposes.
    • This is the most important step to secure the ECA.
  2. Securing Troubleshooting GUIs:

    • Limit access to the troubleshooting tools (HBASE, Spark, and Kafka) by configuring them to only be accessible on a management subnet.

Eyeglass GUI VM (Applies to Ransomware Defender & Easy Auditor)

Please consult the Eyeglass Firewall Ports documentation for the required ports that must be in place for Eyeglass add-on products such as Ransomware Defender and Easy Auditor. These ports are listed in the table for specific features of these products.

Port | Direction | Function
Operating System: openSUSE 15.x | - | It is the customer's responsibility to patch the operating system and allow Internet repository access for automatic patching. The OS is not covered by support.
TCP 9090 | Eyeglass → ECA | Prometheus database for event stats
2181 (TCP) | Eyeglass → ECA | Zookeeper
9092 (TCP) | Eyeglass → ECA | Kafka
5514 (TCP), as of 2.5.6 build 84 | ECA → Eyeglass | Syslog
443 (TCP) | ECA → Eyeglass | TLS messaging
NFS v3: UDP/TCP port 111, TCP and UDP port 2049, TCP/UDP 300 | ECA → PowerScale OneFS | NFS export mounting the audit data folder on managed clusters (NOTE: Kerberized NFS is not supported)
NFS 4.x: TCP port 2049 | ECA → PowerScale OneFS | NFS export mounting the audit data folder on managed clusters
REST API 8080 TCP (new, mandatory) | ECA → PowerScale OneFS | Needed for REST API audit log monitoring
NTP (UDP) 123 | ECA → NTP server | Time sync
ICMP | Vault cluster → Prod cluster(s) | AirGap Solution: Enterprise Edition - ping from the prod cluster to the vault cluster, used to assess network reachability and vault isolation
ICMP | ECA VMs → PowerScale OneFS nodes | Reachability check used to filter out nodes in the SmartConnect pool that are not reachable via ICMP ping (audit data ingestion for the system zone)

Additional Ports for Easy Auditor | Direction | Function
8020 and 585 (TCP) | ECA → PowerScale OneFS | HDFS (NOTE: Encrypted HDFS is not supported)
18080 | Eyeglass → ECA node 1 only | HBase history, required for Easy Auditor
16000, 16020 | Eyeglass → ECA | HBase
6066 (TCP) | Eyeglass → ECA | Spark job engine
9092 (TCP) | Eyeglass → ECA | Kafka broker
443 (TCP) | Admin browser → ECA | Secure access to management tools with authentication required

Firewall for Mini ECA

Port | Direction
22 (SSH) | ECA main cluster <--> Mini-ECA; admin PC --> Mini-ECA
2181, 2888 (TCP) | ECA main cluster <--> Mini-ECA
9092, 9090 (TCP) | ECA main cluster <--> Mini-ECA
5514 (TCP), as of 2.5.6 build 84 | Mini-ECA --> Eyeglass
443 (TCP) | Mini-ECA --> Eyeglass; admin PC --> Mini-ECA
NFS: UDP/TCP port 111, TCP and UDP 2049, UDP 300 (NFSv3 only) | Mini-ECA --> cluster
NTP (UDP) 123 | Mini-ECA --> NTP server
DNS (UDP) 53 | Mini-ECA --> DNS server
TCP 5000 to ECA node 1 (during upgrades only) | All ECA and Mini-ECA nodes --> node 1 of the main ECA cluster

Eyeglass VM Prerequisites

To ensure proper deployment of Eyeglass with the Eyeglass Clustered Agent (ECA), follow these steps to add licenses for Easy Auditor or Ransomware Defender to the Eyeglass VM.

Steps to Add Eyeglass Licenses

  1. Verify Compatibility:

    • Ensure that Eyeglass is deployed or upgraded to the compatible release version for the ECA release being installed.
  2. Login to Eyeglass:

    • Access the Eyeglass interface.
  3. Open the License Manager:

    • Click on the License Manager icon.
  4. Download License Key:

    • Follow the instructions to download the license key using the email token provided with your purchase.
  5. Upload License Key:

    • Upload the license key zip file obtained in Step 4.
    • Once uploaded, the webpage will refresh automatically.
  6. Open License Manager Again:

    • After the page refreshes, open the License Manager.
  7. Set License Status:

    • Navigate to the Licensed Devices tab.
    • For each cluster you wish to monitor using Ransomware Defender or Easy Auditor, set the license status to User Licensed.
    • For clusters that should not be licensed, set the license status to Unlicensed. This ensures licenses are applied correctly and prevents them from being used on unintended clusters.

License

ECA VM Deployment


Step-by-Step Guide for Hyper-V Deployment

Create ECA Hyper-V Virtual Machine

Follow the steps below to create an Eyeglass Clustered Agent (ECA) Virtual Machine on Hyper-V:

  1. Download ECA VHDX File:

  2. Deploy a New Virtual Machine:

    • Open Hyper-V Manager and start the process to create a new Virtual Machine.

    Download ECA VHDX File

  3. Configure the Virtual Machine:

    • Enter a Name for the virtual machine. Configure VM Name
    • Select Generation 1 for the virtual machine generation. Select Generation 1
    • Set the Startup Memory to 16384 MB (16 GB). Set Startup Memory
  4. Configure Network:

    • Select the appropriate Network Adapter for the virtual machine. Configure Network Adapter
  5. Attach the ECA VHDX:

    • In the virtual hard disk options, choose Use an existing virtual hard disk.
    • Browse to and select the downloaded ECA vhdx file. Attach ECA VHDX File
  6. Complete the Wizard:

    • Follow the prompts to complete the virtual machine creation process. Complete VM Creation

Configure ECA Data Disk

After deploying the Eyeglass Clustered Agent (ECA) Virtual Machine, follow the steps below to configure the data disk:

  1. Open VM Settings:

    • Go to the new VM in Hyper-V Manager.
    • Right-click the VM and select Settings. Open VM Settings in Hyper-V
  2. Add a Hard Drive:

    • Under IDE Controller 0, click Add and select Hard Drive. Add Hard Drive
  3. Create a New Virtual Hard Disk:

    • Choose Create New to configure a new virtual hard disk. Create New Virtual Hard Disk
  4. Configure Disk Format and Type:

    • Disk Format: Select VHDX. Select VHDX Format
    • Disk Type: Select Fixed size. Choose Fixed Size Disk
  5. Name and Size the Data Disk:

    • Enter a Name for the data disk. Enter Data Disk Name
    • Set the size to 80 GB for the new blank virtual hard disk. Set Data Disk Size
  6. Complete the Wizard:

    • Follow the prompts to complete the data disk creation process. Complete Data Disk Setup

Configuration of ECA cluster

SSH Access

  • Username: <your-username>
  • Password: <your-password>

Steps to Configure the ECA Cluster

  1. Power Up the VM:

    • Start the ECA VM and wait 5-10 minutes for the Superna on-boot script to run.

    • To monitor the script, use the following command:

      tail -f /var/log/superna-on-boot.log
    • Wait for the script to finish and follow the on-screen instructions.

    Monitor On-Boot Script

  2. Setup the First Node (Node 1):

    • Run the command to set up your ECA Hyper-V node 1:

      sudo spy-hyperv-setup
    • When prompted, enter the following network configuration details:

      • Admin password
      • IP Address
      • Netmask
      • Gateway
      • Hostname
      • DNS
      • NTP

    Setup Node 1 Network Configuration

  3. Configure the Cluster (Node 1):

    • Follow the instructions carefully:
      • Do not press y until Node 2-N is configured.
      • Move on to the next step for Node 2-N setup.

    Cluster Configuration Instructions

  4. Setup Additional Nodes (Node 2-N):

    • Repeat STEP 1 and STEP 2 on Node 2-N for each additional node you wish to deploy.
    • When prompted for the master node during setup on Node 2-N, enter n.

    Setup Node 2-N

  5. Complete Setup on the Master Node (Node 1):

    • Return to Node 1 (the master node) and press y to complete the setup.
    • Enter the following details:
      • ECA Cluster Name (use lowercase, no uppercase, underscores, or special characters).
      • Child Nodes IPs (space-separated).

    Complete Master Node Setup

  6. Verify Completion:

    • After setup is complete, verify that all nodes are configured properly.
    • You will see a "Setup complete" message once everything is successfully configured.

    Verify Cluster Setup Completion

VMware OVA Installation Procedure

Installation ECA Vmware OVA

The deployment involves three ECA appliances. Follow the steps below to complete the installation.

  1. Download the Superna Eyeglass™ OVF:

  2. Unzip the OVF File:

    • Extract the downloaded file into a directory on a machine with vSphere Client installed. Unzip OVF File
  3. OVA Contents:

    • The unzipped download contains 1, 3, 6, and 9 VM OVF files.

    • Use the 1 VM OVF if you do not have a VMware license for vAppliance objects and need to deploy N x VMs.

    • Select the 3, 6, 9 OVF + VMDK files to deploy an ECA cluster, matching the VM count from the scaling table in this guide.

  4. Install the OVF using HTML vCenter Web Interface:

    warning

    Access vCenter with an FQDN DNS name, not an IP address; otherwise a bug in vCenter will generate an error during OVA validation.

  5. MANDATORY STEP: Power on the vApp After Deployment to Ensure IP Addresses Get Assigned:

    • DO NOT remove the VMs from the vApp before powering them on.
  6. First Boot Verification:

    • Make sure the first boot steps complete by reviewing the logs. Run the following commands on each ECA VM:

      • Check the status of the boot process:
      sudo systemctl status superna-on-boot
      • Verify the process has completed:
      cat /var/log/superna-on-boot.log
      • Ensure the log shows "done" before proceeding. Do not proceed until this step is complete.
  7. Procedures After First Boot:

    • Once the VMs are pingable, you can move the VMs from the vApp object if needed to rename each VM according to your naming convention.
      note

      Make sure the first boot script has completed using the procedures above on each VM. This can take up to 10 minutes per VM on the first boot as it sets up docker containers and must complete successfully along with SSH key creation.

      Verify First Boot Steps
  8. Deploy from a File or URL:

    • Deploy the OVA from the file or URL where it was saved.
  9. Configure VM Settings:

    • Using vCenter Client, set the required VM settings for datastore and networking.

    • NOTE: Leave the setting as Fixed IP address.

  10. Complete the Networking Sections:

    • ECA Cluster Name:

      • The name must be lowercase, less than 8 characters, and contain no special characters, with only letters.

        warning

        The ECA cluster name cannot include underscores (_) as this will cause some services to fail.

    • Ensure that all VMs are on the same subnet.

    • Enter the Network Mask (this will be applied to all VMs).

    • Enter the Gateway IP.

    • Enter the DNS Server:

      • The DNS server must be able to resolve igls.<your domain name here>. Use the nameserver IP address.
      note

      Agent node 1 is the master node where all ECA CLI commands are executed for cluster management.

  11. vCenter client examples:

    • vCenter Windows Client Example
    • vCenter HTML Client Example

  12. Example OVA vApp after deployment:

    OVA vApp Deployment Example

  13. Enter IP Information:

    • Enter all IP information for each ECA VM in the vCenter UI.
  14. After Deployment is Complete:

    • Power on the vApp (Recommended to stop here and wait for services to complete the remaining steps).

    • Ping each IP address to make sure each node has finished booting.

    • Login to the Master Node:

      • Login via SSH to Node 1 (the Master Node) using the <your-user> account.
      • Default password: <your-password>.
    • Configure Keyless SSH:

      • Run the following command to configure keyless SSH for the <your-user> to manage the cluster:

        ecactl components configure-nodes
    • Generate API Token on Eyeglass Appliance:

      • On the Eyeglass Appliance, generate a unique API Token from the Superna Eyeglass REST API Window.
      • Once the token is generated for the ECA Cluster, it will be used in the ECA startup command for authentication.
    • Login to Eyeglass:

      • Go to the main menu and navigate to the Eyeglass REST API menu item.

      • Create a new API token, which will be used in the startup file for the ECA cluster to authenticate with the Eyeglass VM and register ECA services.

      Generate API Token in Eyeglass

  15. On the ECA Cluster Master Node (Node 1):

    1. Log in to that VM over SSH as the ecaadmin user (default password <your-password>). From this point on, commands will only be executed on the master node.

    2. On the master node, edit the file with nano /opt/superna/eca/eca-env-common.conf and change these five settings to reflect your environment, replacing the variables accordingly (a consolidated example appears at the end of this procedure).

    3. Set the IP address or FQDN of the Eyeglass appliance and the API token (created above), and uncomment the parameter lines before saving the file. For example:

      export EYEGLASS_LOCATION=ip_addr_of_eyeglass_appliance
      export EYEGLASS_API_TOKEN=Eyeglass_API_token
    4. Verify the IP addresses for the nodes in your cluster. It is important that NODE_1 be the master (i.e., the IP address of the node you're currently logged into).

      info

      Add an additional ECA_LOCATION_NODE_X=x.x.x.x line for each additional node in the ECA cluster, depending on the ECA cluster size. All nodes in the cluster must be listed in the file. Copy an existing line, paste it, and change the node number. For example, to add a 4th ECA VM:

      export ECA_LOCATION_NODE_1=ip_addr_of_node_1  # set by first boot from the OVF
      export ECA_LOCATION_NODE_2=ip_addr_of_node_2  # set by first boot from the OVF
      export ECA_LOCATION_NODE_3=ip_addr_of_node_3  # set by first boot from the OVF
      export ECA_LOCATION_NODE_4=ip_addr_of_node_4
    5. Set the HDFS path to the SmartConnect name setup in the Analytics database configuration steps. Replace the FQDN hdfs_sc_zone_name with <your smartconnect FQDN>.

      note

      Do not change any other value. Whatever is entered here is created as a subdirectory of the HDFS root directory that was set earlier.

    6. export ISILON_HDFS_ROOT='hdfs://hdfs_sc_zone_name:8020/eca1'
  16. Done: Continue on to the Cluster Auditing Configuration Section.
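
For reference, a consolidated sketch of the edited lines in eca-env-common.conf is shown below; every IP address, the FQDN, and the token value are hypothetical placeholders, not values from your environment.

      # /opt/superna/eca/eca-env-common.conf (excerpt) - hypothetical example values
      export EYEGLASS_LOCATION=192.0.2.10               # Eyeglass appliance IP or FQDN
      export EYEGLASS_API_TOKEN=<api-token-from-eyeglass>
      export ECA_LOCATION_NODE_1=192.0.2.21             # master node (the node you are logged in to)
      export ECA_LOCATION_NODE_2=192.0.2.22
      export ECA_LOCATION_NODE_3=192.0.2.23
      export ISILON_HDFS_ROOT='hdfs://hdfs.example.com:8020/eca1'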

Isilon/PowerScale Protocol Audit Configuration - Required

This section configures PowerScale file auditing, which is required to monitor user behaviors. Protocol auditing can be enabled independently on each Access Zone that requires monitoring.

Enable Protocol Access Auditing OneFS GUI

  1. Click Cluster Management > Auditing.

  2. In the Settings area, select the Enable Configuration Change Auditing and Enable Protocol Access Auditing checkboxes.

  3. In the Audited Zones area, click Add Zones.

  4. In the Select Access Zones dialog box, select one or more access zones, and click Add Zones (do not add Eyeglass access zone).

    note

    Any zone that does not have auditing enabled is unprotected.

Disable High Rate Audit Events OneFS 8.2 and Later (Mandatory Step)

Directory Open and Directory Close events generate unnecessary load on the cluster when logged. They are not used by Ransomware Defender or Easy Auditor (default settings do not store these events in the database), and they cause performance issues and high overhead on the cluster. Disabling these events is required.

Procedure to Disable High Rate Events

  1. Log in to the OneFS cluster over SSH as the root user.

  2. Replace <zone_name> with each access zone name that is enabled for auditing. This change takes effect immediately, reduces audit overhead, and increases auditing performance.

    isi audit settings modify --zone=<zone_name> --remove-audit-success=open_directory,close_directory
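
If several access zones are audited, the command can be repeated in a small loop; this is a minimal sketch that assumes two hypothetical zone names ("prod" and "eng") and should be adjusted to your audited zones:

    # Disable the two high-rate event types in each audited zone (hypothetical zone names)
    for ZONE in prod eng; do
      isi audit settings modify --zone="$ZONE" --remove-audit-success=open_directory,close_directory
      isi audit settings view --zone="$ZONE"   # confirm the change for the zone
    done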

Preparation of Analytics Database or Index - Required

Prerequisites Analytics Database or Index

  • Easy Auditor only
  • Must add a minimum of 3 PowerScale nodes to the new IP pool and assign the pool to the access zone created for the audit database.
  • Must configure the SmartConnect zone name with FQDN.
  • Must complete DNS delegation to the FQDN assigned to the new pool for HDFS access.
  • Must enable the HDFS protocol on the new access zone (protocols tab in OneFS GUI) – Easy Auditor only.
  • Must have an HDFS license applied to the cluster.
  • Must configure a Snapshot schedule on the access zone path below every day at midnight with 30-day retention.
  • Optional: Create a SyncIQ policy to replicate the database to a DR site.

Steps Analytics Database or Index

  1. Activate a license for HDFS. When a license is activated, the HDFS service is enabled by default.

  2. Create an "eyeglass" Access Zone with the path "/ifs/data/eyeglass/analyticsdb" for the HDFS connections from Hadoop compute clients (ECA) and under Available Authentication Providers, select only the Local System authentication provider.

  3. Select/Create Zone Base Directory

    note
    • Ensure that the Local System provider is at the top of the list. Additional AD providers are optional and not required.
    • In OneFS 8.0.1, the Local System provider must be added using the command line. After adding, the GUI can be used to move the Local System provider to the top of the list.
    isi zone zones modify eyeglass --add-auth-providers=local:system
    • Set the HDFS root directory in the Eyeglass access zone that supports HDFS connections:

      (OneFS 8.x)

      isi hdfs settings modify --root-directory=path_to_hdfs_root_dir --zone=access_zone_name_for_hdfs

      Example:

      isi hdfs settings modify --root-directory=/ifs/data/igls/analyticsdb/ --zone=eyeglass
    • Create an IP pool for HDFS access with at least 3 nodes in the pool to ensure high availability for each ECA node. The pool will be configured with round-robin load balancing.

      (OneFS 8.0)

      isi network pools create groupnet0.subnet0.hdfspool --ranges=172.22.1.22-172.22.1.22 --ifaces 1-4:10gige-1 --access-zone eyeglass --sc-dns-zone hdfs-mycluster.ad1.test --alloc-method static
    • Configure virtual HDFS racks on the PowerScale Cluster:

      note

      The ip_address_range_for_client refers to the IP range used by the ECA cluster VMs.

      (OneFS 8.0)

      isi hdfs racks create /hdfs_rack_name --zone=access_zone_name_for_hdfs --client-ip-ranges=ip_address_range_for_client --ip-pools=subnet:pool

      Example:

      isi hdfs racks create /hdfs-iglsrack0 --client-ip-ranges=172.22.1.18-172.22.1.20 --ip-pools=subnet0:hdfspool --zone=eyeglass
      • Verify the rack creation:

        isi hdfs racks list --zone=eyeglass

        Output:

        Name        Client IP Ranges        IP Pools
        -------------------------------------------------------------
        /hdfs-rack0 172.22.1.18-172.22.1.20 subnet0:hdfspool
        -------------------------------------------------------------
        Total: 1
    • Create a local Hadoop user in the System access zone.

      note

      The User ID must be eyeglasshdfs.

      (OneFS 8.0)

      isi auth users create --name=eyeglasshdfs --provider=local --enabled=yes --zone=system

      Example:

      isi auth users create --name=eyeglasshdfs --provider=local --enabled=yes --password-expires=no --zone=system
    • Log in via SSH to the PowerScale cluster as the root user to change ownership, permissions, and block inherited permissions from parent folders on the HDFS path used by Eyeglass ECA clusters.

      chown -R eyeglasshdfs:'Isilon Users' /ifs/data/igls/analyticsdb/
      chmod -R 755 /ifs/data/igls/analyticsdb/
      chmod -c +dacl_protected /ifs/data/igls/analyticsdb/
      note

      If using a cluster in compliance mode, do not run the commands above. Instead, run:

      chmod 777 /ifs/data/igls/analyticsdb/

Configure an NFS Mount Point on Each ECA Node to Read Audit Data from Isilon/Powerscale OneFS - Required

Audit events are ingested over NFS mounts on ECA nodes 1 - X (where X is the size of your ECA cluster). Follow the steps below to add the export to each of the VMs.

Make sure you have

  • Cluster GUID and Cluster Name for each cluster to be indexed.
  • Cluster Name as shown in the top-right corner after login to the OneFS GUI.

The cluster name is case-sensitive, and the NFS mount must match the exact case of the cluster name.

Refer to the example in the OneFS GUI for obtaining this information.

  1. Log in to each ECA node.

  2. Configure the NFS mount point for each node using the exact cluster name obtained from OneFS. NFS Mount Setup Example

  3. Repeat the process on nodes 2 - X, where X is the last node in the ECA cluster.

  4. Login to ECA node 1:

    • ssh ecaadmin@x.x.x.x (where x.x.x.x is node 1 IP of the ECA cluster)
  5. Create local mount directory and sync to all nodes:

    1. Run the following command:

      ecactl cluster exec "sudo mkdir -p /opt/superna/mnt/audit/GUID/clusternamehere/"
    2. Replace GUID and clusternamehere with the correct values.

      note

      The cluster name is case-sensitive and must match the cluster name case as shown in OneFS.

    3. Enter the admin password when prompted on each ECA node.

    4. Verify the folder exists on all ECA nodes

    ecactl cluster exec "ls -l /opt/superna/mnt/audit/"
  6. NFS Mount Setup with Centralized Mount File for All Nodes with Auto-Mount

    note

    This option will mount on cluster up using a centralized file to control the mount. This simplifies changing mounts on nodes and provides cluster-up mount diagnostics.

    1. Configuration Steps for Auto-Mount:

      1. Open the configuration file:

        nano /opt/superna/eca/eca-env-common.conf
      2. Add a variable to ensure the cluster stops if the NFS mount fails:

        export STOP_ON_AUTOMOUNT_FAIL=true
      3. SSH to ECA node 1 as <your-username> user to enable auto-mount and ensure it starts on OS reboot.

        note

        For each node, you will be prompted for the <your-username> password.

      4. Enable auto-mount:

        ecactl cluster exec "sudo systemctl unmask autofs"
        ecactl cluster exec "sudo systemctl start autofs"
      5. Check and ensure the service is running:

        ecactl cluster exec "sudo systemctl status autofs"
    2. Add a New Entry to auto.nfs File on ECA Node 1:

      note
      • The FQDN should be a smartconnect name for a pool in the System Access Zone IP Pool. <NAME> is the cluster name collected from the section above. GUID is the cluster GUID from the General Settings screen of OneFS.

      • Add 1 line for each Isilon/PowerScale cluster that will be monitored from this ECA cluster (a filled-in example entry is shown at the end of this section).

      1. For NFS v3:

        echo -e "\n/opt/superna/mnt/audit/<GUID>/<NAME> --fstype=nfs,nfsvers=3,ro,soft <FQDN>:/ifs/.ifsvar/audit/logs" >> /opt/superna/eca/data/audit-nfs/auto.nfs
      2. For NFS v4.x:

        echo -e "\n/opt/superna/mnt/audit/<GUID>/<NAME> --fstype=nfs,nfsvers=4,ro,soft <FQDN>:/ifs/.ifsvar/audit/logs" >> /opt/superna/eca/data/audit-nfs/auto.nfs
      3. Verify the contents of the auto.nfs file:

        cat /opt/superna/eca/data/audit-nfs/auto.nfs
    3. Push the Configuration to All ECA Nodes:

      ecactl cluster push-config
    4. Start Auto-Mount and Verify the Mount:

      1. Restart auto-mount:

        ecactl cluster exec "sudo systemctl restart autofs"
        note

        You will be asked to enter the <your-username> password for each ECA node.

      2. Check the mount by typing the following command:

        mount
    5. Cluster Up Command:

      • Run the cluster up command and mount each ECA node during cluster up.
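
For illustration, a single completed NFSv3 entry in auto.nfs might look like the line below; the GUID, cluster name, and SmartConnect FQDN are hypothetical and must be replaced with your own values:

      # /opt/superna/eca/data/audit-nfs/auto.nfs - hypothetical example entry
      /opt/superna/mnt/audit/0050569960fcd70161594d21dd22a3c10cbe/prod-cluster-8 --fstype=nfs,nfsvers=3,ro,soft audit-nfs.example.com:/ifs/.ifsvar/audit/logs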

Start up the ECA Cluster

  1. Start up the cluster:

    • At this point, you can start up the cluster.
  2. SSH to ECA node 1:

    • SSH to ECA node 1 as <your-username> and run the following command:
      • ecactl cluster up
      • Note: This process can take 5-8 minutes to complete.
  3. Verify startup and post-startup status:

    • Refer to the troubleshooting section below for commands to verify startup and post-startup status.

Creating and Configuring ECA Nodes

Mini-ECA Deployment Diagram

Mini-ECA Deployment Diagram

How to Deploy Mini-ECA VM's

  1. Deploy the OVA:

    • Follow the standard ECA OVA deployment instructions to deploy the OVA.
  2. Delete ECA Nodes:

    • For a single Mini-ECA deployment, delete ECA node 2 and ECA node 3.
    note

    Mini-ECA supports High Availability (HA) configurations and can operate with ECA nodes 1 and 2. If you want to enable HA, only delete node 3 from the vApp.

  3. Completion:

    • After deleting the necessary nodes, the deployment is complete.

Network and Storage Setup

Configuring NFS Mounts and Network Settings

How to configure NFS mount on Mini-ECA

Each mini ECA will need to mount the cluster it has been assigned. Follow the steps below to create the export and mount the cluster on each ECA node.

  1. Create the export:

    • The steps to create the export are the same as in the section How to Configure Audit Data Ingestion on the Vast/PowerScale OneFS.
  2. Add the mount to /etc/fstab:

    1. Create mount path:

      • sudo mkdir -p /opt/superna/mnt/audit/GUID/clusternamehere/
        • Replace GUID and clusternamehere with the correct values.
        • Note: The cluster name is case-sensitive and must match the cluster name as shown in OneFS.
        • Enter the admin password when prompted on each ECA node.
    2. Edit /etc/fstab:

      1. This will add a mount for content indexing to /etc/fstab on all nodes.
      2. Build the mount command using the cluster GUID and cluster name, replacing the placeholders with the correct values for your cluster.
        • Note: This is only an example.
      3. You will need a SmartConnect name to mount the snapshot folder on the cluster. The SmartConnect name should be a System zone IP pool.
      4. Replace <CLUSTER_NFS_FQDN> with the DNS SmartConnect name.
      5. Replace <GUID> with the cluster GUID.
      6. Replace <name> with the cluster name (a worked example appears after this list).
  3. On the mini ECA VM:

    1. SSH to the node as ecaadmin.
    2. Run the command: sudo -s.
    3. Enter the ecaadmin password.
    4. Run the command:
      • echo '<CLUSTER_NFS_FQDN>:/ifs/.ifsvar/audit/logs /opt/superna/mnt/audit/GUID/clusternamehere/ nfs defaults,nfsvers=3 0 0' | sudo tee -a /etc/fstab
    5. Mount the filesystem:
      • mount -a
    6. Verify the mount.
    7. Exit.
  4. Completion:

    • Done.
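
A worked example of the resulting /etc/fstab entry and a quick verification is shown below; the SmartConnect FQDN, GUID, and cluster name are hypothetical placeholders:

    # /etc/fstab on the Mini-ECA - hypothetical NFSv3 example entry
    audit-nfs.example.com:/ifs/.ifsvar/audit/logs /opt/superna/mnt/audit/0050569960fcd70161594d21dd22a3c10cbe/prod-cluster-8 nfs defaults,nfsvers=3 0 0

    # Mount everything in fstab and confirm the audit mount is present
    sudo mount -a
    mount | grep /opt/superna/mnt/audit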
How to verify the configuration

  1. Start up the cluster on all nodes. Log in to node 1 of the ECA cluster.
  2. Run ecactl cluster up.
  3. Verify any startup issues on all nodes.
  4. Generate test events on each cluster.
  5. Use the wiretap feature to view these events on each managed cluster.

Setting Up Audit Data Ingestion

How to Configure Audit Data Ingestion on the VAST/PowerScale OneFS

Prerequisites Audit Data NFS Export
  1. SmartConnect Name: Configure a SmartConnect name in the system zone for the NFS export, created on /ifs/.ifsvar/audit/logs.
  2. IP Pool Configuration: Set the PowerScale OneFS IP pool to dynamic mode for the NFS mount used by ECA cluster nodes, ensuring high availability.
  3. Firewall:
    • Open Port TCP 8080 from each ECA node to all PowerScale OneFS nodes in the management pool within the system zone for audit data ingestion.
    • Ensure NFS Ports are open from all ECA nodes to all PowerScale OneFS nodes in the management pool for audit data ingestion.
  4. NFS Support:
    • NFS v4.x is supported with Appliance OS version 15.3 and later.
    • Kerberized NFS is not supported.

Create a Read-Only NFS Export on the PowerScale OneFS Cluster(s) to Be Managed

  • NFS v4.x (recommended for all deployments) – NFSv3 is also supported.

  • Create the NFSv4 or NFSv3 export from the CLI using the following command:

isi nfs exports create /ifs/.ifsvar --root-clients="<ECA_IP_1>,<ECA_IP_2>,<ECA_IP_3>" --clients="<ECA_IP_1>,<ECA_IP_2>,<ECA_IP_3>" --read-only=true -f --description "Easy Auditor Audit Log Export" --all-dirs true
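
For example, with a hypothetical three-node ECA cluster at 192.0.2.21 - 192.0.2.23, the same command would look like this (adjust the IP list and description to your environment):

    isi nfs exports create /ifs/.ifsvar --root-clients="192.0.2.21,192.0.2.22,192.0.2.23" --clients="192.0.2.21,192.0.2.22,192.0.2.23" --read-only=true -f --description "Easy Auditor Audit Log Export" --all-dirs true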

Prerequisite

  • Enable NFS 4.2.
note

You should not enable NFSv4 if your hosts do not specify the mount protocol in fstab or auto-mount. Before enabling NFSv4, consult the Dell technical team for considerations and an enabling checklist.


REST API Audit Ingestion - Mandatory for All deployments

Prerequisites API Audit

  • Version Requirements: 2.5.8.1 or higher
  • NFS Requirements: NFS v4 or NFSv3
  • Update the Eyeglass service account role and add backup read permissions to the Eyeglass admin role. Add the permissions for either 9.x or 8.2 as shown below.

Steps

  1. Login to the Eyeglass VM.

  2. Run sudo -s (enter the admin password).

  3. Open the system file using nano /opt/superna/sca/data/system.xml.

  4. Locate the <process> tag.

  5. Insert the following tag (a scripted alternative is sketched after these steps):

    <syncIsilonsToZK>true</syncIsilonsToZK>
  6. Save and exit using control + x.

  7. Restart the service: systemctl restart sca.

  8. Done.
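
If you prefer to script the edit in steps 3 - 5 instead of using nano, a sed one-liner such as the following can insert the tag. This is a sketch that assumes the <process> tag appears exactly once in system.xml and that the new tag belongs directly inside it, so review the file afterwards:

    # Back up the file, then insert the tag right after the opening <process> element (sketch)
    sudo cp /opt/superna/sca/data/system.xml /opt/superna/sca/data/system.xml.bak
    sudo sed -i 's|<process>|<process>\n  <syncIsilonsToZK>true</syncIsilonsToZK>|' /opt/superna/sca/data/system.xml
    grep -A1 '<process>' /opt/superna/sca/data/system.xml   # confirm the tag was added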

Role Permissions for Backup

  • For 9.x clusters, add the Backup Files from /ifs privilege:

    isi auth roles modify EyeglassAdmin --add-priv-ro ISI_PRIV_IFS_BACKUP
  • For 8.2 or later, add the following permissions to the role:

    isi auth roles modify EyeglassAdmin --add-priv-ro ISI_PRIV_IFS_BACKUP
    isi auth roles modify EyeglassAdmin --add-priv-ro ISI_PRIV_IFS_RESTORE
note

You should not enable NFSv4 if your hosts do not specify the mount protocol in fstab or auto-mount. Before enabling NFSv4, consult the Dell technical team for considerations and an enabling checklist.

On Each Source Cluster to Be Protected

Follow these steps to enable REST API audit folder monitoring:

  1. Login as the root user and create the following symlink:

    ln -s /ifs/.ifsvar/audit/logs /ifs/auditlogs
  2. Login to ECA node 1.

  3. Open the configuration file:

    nano /opt/superna/eca/eca-env-common.conf
  4. Add this variable:

    export TURBOAUDIT_AUDITLOG_PATH=/ifs/auditlogs
  5. Save the file and exit (use control + x).

  6. Restart the cluster for the change to take effect:

    ecactl cluster down
    ecactl cluster up
    note

    ECA VMs require port https TCP 8080 to be open from ECA to the protected cluster.

Final Configuration

Configuring Services and Joining the Central ECA Cluster

  1. Login to the Central ECA Node:

    • SSH into ECA Central Node 1 as <your-username>.
  2. Edit the Configuration File:

    • Open the configuration file:

      vim /opt/superna/eca/eca-env-common.conf
    • Add additional Mini-ECA nodes by adding a line for each Mini-ECA at remote sites, incrementing the node ID for each new line:

      export ECA_LOCATION_NODE_7=x.x.x.x
  3. Configure Passwordless SSH for New Nodes:

    • Run the following command to add Mini-ECA nodes for passwordless SSH:

      ecactl components configure-nodes
  4. Modify the neOverrides.json File:

    • Open the neOverrides.json file:

      vi /opt/superna/eca/data/common/neOverrides.json
    • Copy and edit the following text based on your configuration, ensuring to:

      • Replace the cluster names with the Mini-ECA cluster names.
      • Align the nodes mapping to correspond with the ECA node IDs configured in the eca-env-common.conf file.
    • Example:

      [
        {
          "name": "SC-8100A",
          "nodes": ["2", "3"]
        },
        {
          "name": "SC-8100B",
          "nodes": ["7"]
        }
      ]
      note

      Ensure the mapping is done correctly to process and tag events for the correct cluster.

    • Save the file by typing :wq.

  5. Configure Services for Mini-ECA Nodes:

    • Use the overrides file to specify services for Mini-ECA nodes:

      cp /opt/superna/eca/templates/docker-compose.mini_7_8_9.yml /opt/superna/eca/docker-compose.overrides.yml
    note

    This file automatically configures the services for Mini-ECA nodes 7-9, if they exist. No additional configuration is required.

Verifying the Installation and Network Setup

Verify ECA Remote Monitoring Connection from the Eyeglass Appliance

  1. Login to Eyeglass as the admin user.
  2. Check the status of the ECA Cluster. Click the Manage Service icon and then click the + to expand the container or services for each ECA node (review the image below).
  3. Verify the IP addresses of the ECA nodes are listed.
  4. Verify that all cluster nodes and all Docker containers show green health.
note

HBase status can take up to 5 minutes to transition from warning to green.

Monitoring

How to Backup and Protect the Audit Database with SnapshotIQ

Use the PowerScale native SnapshotIQ feature to back up the audit data.

How to Upgrade the ECA Cluster Software for Easy Auditor, Ransomware Defender, and Performance Auditor

Important Notes for Upgrading

note
  • Contact support first before upgrading the cluster to ensure compatibility with the Eyeglass version. Both Eyeglass and ECA must be running the same version.

  • Upgrade assistance is scheduled and is a service not covered under 24/7 support. Please review the EULA terms and conditions.

  • Always take a VM-level snapshot before any upgrade steps to allow for rollback to the previous release if needed.

Steps for a Carrier-Grade Upgrade - No Downtime

  1. Requirements:

    • 2.5.8.2 or later release
  2. Login to node 1 as <your-username> and copy the run file to node 1.

  3. Change file permissions:

    chmod 777 xxxx (name of the run file)
  4. Run the upgrade with the following command:

    ./eca-xxxxx.run --rolling-upgrade
  5. Provide the password for <your-username> when prompted.

  6. Nodes will be upgraded in a manner that allows audit data and all ECA products to continue operating fully.

  7. The upgrade will manage all node upgrades and will exit when done. The final state will have all containers running the new code.

Steps to Upgrade

  1. Take a Hypervisor-level VM snapshot to enable a rollback if needed. This is a mandatory step.

  2. Disable Ransomware Defender, Easy Auditor, and Performance Auditor functionality before beginning the upgrade – required first step:

    • Log in to ECA Node 1 using <your-username> credentials.
    • Issue the following command: ecactl cluster down.
    • Wait for the procedure to complete on all involved ECA nodes.
    • Done!
  3. Upgrade Eyeglass VM first and download the latest release from here.

    note

    Eyeglass and ECA cluster software must be upgraded to the same version.

    • Follow the guide here.
    • Double-check that licenses are assigned to the correct clusters based on the information here.
    • Double-check that Ransomware Defender, Easy Auditor, and Performance Auditor settings match the ones before the upgrade.
  4. Download the latest GA Release for the ECA upgrade, following instructions from here.

  5. Log in to ECA Node 1 using <your-username> credentials.

  6. note

    ECA is in a down state – ecactl cluster down was already done in step 2.

  7. Verify by executing the following command:

    ecactl cluster status
  8. Ensure no containers are running.

  9. If containers are still running, stop them by executing the command and waiting for it to complete on all nodes:

    ecactl cluster down
  10. Once the above steps are complete:

    • Use WinSCP to transfer the run file to Node 1 (Master Node) in /home/ecaadmin directory.

    • SSH to ECA Node 1 as <your-username>:

      ssh ecaadmin@x.x.x.x
      cd /home/ecaadmin
      chmod +x ecaxxxxxxx.run (xxxx is the name of the file)
      ./ecaxxxxxxx.run
    • Enter the <your-username> password when prompted.

    • Wait for the installation to complete.

    • Capture the upgrade log for support if needed.

  11. Complete the software upgrade.

  12. Bring up the ECA cluster:

    • Execute:

      ecactl cluster exec "sudo systemctl enable --now zkcleanup.timer"

      (Enter the <your-username> password for each node.)

    • Start the cluster:

      ecactl cluster up
    • Wait until all services start on all nodes. If there are any errors, copy the upgrade log and use WinSCP to transfer it to your PC or attach it to a support case.

  13. Once completed, log in to Eyeglass, open the Manage Services icon, and verify that all ECA nodes show green and are online. If any services show a warning or are inactive, wait at least 5 minutes. If the condition persists, open a support case.

  14. If all steps pass and all ECA nodes show green:

    • Use the Security Guard test in Ransomware Defender or run the RoboAudit feature in Easy Auditor to validate that audit data ingestion is functioning.
  15. Consult the admin guide for each product to start a manual test of these features.

How to Migrate ECA Cluster Settings to a New ECA Cluster Deployment - To Upgrade the openSUSE OS

To upgrade an ECA cluster OS, it is easier to migrate the settings to a new ECA cluster deployed with the new OS. Follow these steps to deploy a new ECA cluster and migrate configuration to the new ECA cluster.

  1. Retrieve the ECA Cluster Name:

    • The ECA cluster has a logical name shared between nodes. When deploying a new OVA, the deployment will prompt for the ECA cluster name. This should be the same as the previous ECA cluster name.
    • To get the ECA cluster name:
      1. Log in to ECA Node 1 via SSH as <your-username> (e.g., ssh ecaadmin@x.x.x.x).

      2. Run the following command:

        cat /opt/superna/eca/eca-env-common.conf | grep ECA_CLUSTER_ID
      3. Use the value returned after the = sign when deploying the new ECA cluster.

      4. Use WinSCP to copy the following files from ECA Node 1 of the existing ECA cluster (logged in as <your-username>):

        • /opt/superna/eca/eca-env-common.conf
        • /opt/superna/eca/docker-compose.overrides.yml
        • /opt/superna/eca/conf/common/overrides/ThreatLevels.json
        • /opt/superna/eca/data/audit-nfs/auto.nfs
    note

    This procedure assumes the IP addresses will stay the same, so the cluster NFS export doesn't need to be changed, and there will be no impact on any firewall rules.

  2. Deploy a New OVA:

    • Deploy a new ECA cluster using the latest OS OVA.
    • Follow the deployment instructions in this guide and use the same ECA cluster name captured earlier when prompted during the installation of the OVA.
    note

    Use the same IP addresses as the current ECA cluster.

  3. Shutdown the Old ECA Cluster:

    1. Log in to Node 1 as <your-username>.

    2. Run the following command:

      ecactl cluster down
    3. Wait for the shutdown to finish.

    4. Using the vCenter UI, power off the VApp.

  4. Startup the New ECA Cluster:

    1. Power on the VApp using the vCenter UI.

    2. Ping each IP address in the cluster until all VMs respond.

      warning

      Do not continue if you cannot ping each VM in the cluster.

    3. Using WinSCP, log in as <your-username> and copy the files from the steps above into the new ECA OVA cluster.

    4. On Node 1, replace the files with the backup copies:

      • /opt/superna/eca/eca-env-common.conf
      • /opt/superna/eca/docker-compose.overrides.yml
      • /opt/superna/eca/conf/common/overrides/ThreatLevels.json
      • /opt/superna/eca/data/audit-nfs/auto.nfs
    5. On Nodes 1 to X (where X is the last node in the cluster):

      1. On each node, complete the following steps:
        • SSH to the node as <your-username>:

          ssh ecaadmin@x.x.x.x
        • Run the following commands:

          sudo -s
          mkdir -p /opt/superna/mnt/audit/<cluster GUID>/<cluster name>

          Example:

          /opt/superna/mnt/audit/0050569960fcd70161594d21dd22a3c10cbe/prod-cluster-8
        • Repeat for each cluster managed by this ECA cluster. View the contents of the auto.nfs file to get the cluster GUID and name.

    6. Restart the Autofs process to read the auto.nfs file and mount all clusters:

      • Run the following commands on each node:

        ecactl cluster exec "sudo systemctl restart autofs"
        ecactl cluster exec "mount"
      • Verify that the mount is present on all nodes in the output from the mount command.

    7. Start up the new ECA cluster:

      • Log in to ECA Node 1 as <your-username>:

        ecactl cluster up
      • Review startup messages for errors.

    8. Done.

Monitor Health and Performance of an ECA Cluster - Optional

The sections below provide instructions on how to monitor the health and performance of an ECA cluster. Always contact support before taking any actions. Note that ECA clusters are designed to consume high CPU resources for most operations, and it is expected to see high CPU usage on all nodes most of the time.

Verifying ECA Cluster Status

To check the status of an ECA cluster, follow these steps:

  1. Access the master node and run the following command:

    ecactl db shell
  2. Once in the shell, execute the command:

    status
  3. The output should show:

    • 1 active master
    • 2 backup master servers


Verifying ECA Containers are Running

To verify that ECA containers are running, execute the following command:

ecactl containers ps

ECA_Cluster_Status_Screenshot

Check Cluster Status and Verify Analytics Tables - Optional

This section explains how to check the status of the cluster and ensure all analytics tables are available for Ransomware Defender, Easy Auditor, and Performance Auditor.

  1. Run the following command to check the cluster status:

    ecactl cluster status

    This command verifies:

    • All containers are running on every node.
    • Each node can mount the necessary tables in the Analytics database.
  2. If any errors are encountered, follow these steps:

    • Open a support case to resolve the issue.
    • Alternatively, retry the cluster commands below:
    ecactl cluster down
    ecactl cluster up
  3. Once the cluster is back up, send the ECA cluster startup log to support for further assistance.

Check ECA Node Container CPU and Memory Usage - Optional

To monitor the real-time CPU and memory usage of containers on an ECA node, follow these steps:

  1. Log in to the ECA node as the <your-username> user.

  2. Run the following command to view the real-time resource utilization of the containers:

    ecactl stats

Enable Real-time Monitoring of ECA Cluster Performance (If Directed by Support)

Follow this procedure to enable container monitoring and to verify that CPU resources (GHz) are set correctly for query and write performance to PowerScale.

Steps to Enable Monitoring

  1. To enable cadvisor across all cluster nodes, add the following line to the eca-env-common.conf file:

    export LAUNCH_MONITORING=true

    This will launch cadvisor on all ECA cluster nodes.

  2. If you need to launch cadvisor on a single node, log in to that specific node and run the following command:

    ecactl containers up -d cadvisor
  3. Once the cadvisor service is running, you can access the web UI by navigating to:

    http://<IP OF ECA NODE>:9080

    Replace <IP OF ECA NODE> with the actual IP address of the node (a quick reachability check is sketched below).

Done! You can now monitor the real-time performance of the ECA cluster.
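
As a quick sanity check that cadvisor is listening, you can request the UI port from a workstation; the IP below is a hypothetical node address, and cadvisor may answer with a redirect code rather than 200:

    # Print the HTTP status code returned by the cadvisor UI port (hypothetical node IP)
    curl -sS -o /dev/null -w "%{http_code}\n" http://192.0.2.21:9080/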

ECA Cluster Modification Procedures - Optional

How to Expand Easy Auditor Cluster Size

note

Always contact support before proceeding. Support will determine if your installation requires expansion.

To enhance analytics performance for handling higher event rates or long-running queries in a large database, follow these steps to add 3 or 6 more VMs:

  1. Deploy the ECA OVA.
  2. Copy the new VMs into the vAPP.
  3. Remove the vAPP created during the deployment.
note

The ECA name during the OVA deployment is not important, as it will be synchronized from the existing ECA cluster during the cluster startup procedures.

  1. Log in to the master ECA node.

  2. Run the following command to take the cluster down:

    ecactl cluster down
  3. Deploy one or two more ECA OVAs. No special configuration is needed on the newly deployed ECA OVA.

  4. Edit the configuration file to add more nodes:

    nano /opt/superna/eca/eca-env-common.conf
  5. Add the IP addresses of the new nodes, for example:

    export ECA_LOCATION_NODE_4=<IP>
    export ECA_LOCATION_NODE_5=<IP>
  6. You can add nodes from 4 to 9, depending on the number of VMs added to the cluster.

  7. Run the following command to configure the new nodes:

    ecactl components configure-nodes
  8. Bring the cluster back up:

    ecactl cluster up
  9. This will expand the HBASE and Spark containers for faster read and analytics performance.

  10. Log in to Eyeglass and open the managed services.

  11. Now balance the load across the cluster for improved read performance:

    • Log in to the Region Master VM (typically node 1).
    • Open the UI at http://x.x.x.x:16010/ and verify that each region server (6 total) is visible.
    • Ensure each server has assigned regions and verify that requests are visible for each region server.
    • Check that the tables section shows no regions offline, and no regions are in the "other" column.
    • Example screenshots of six region servers with regions and normal table views can be used for reference.


Advanced Configurations

How to Configure a Ransomware Defender Only Configuration (Skip if Running Multiple Products)

Follow this procedure before starting up the cluster to ensure unnecessary Docker containers are disabled during startup.

  1. Log in to node 1 over SSH as the <your-username> user.

  2. Open the configuration file:

    nano /opt/superna/eca/eca-env-common.conf
  3. Add the following variable:

    export RSW_ONLY_CFG=true
  4. Save and exit the file (Control + X).
  5. Continue with the startup steps below.

How to Configure NFS Audit Data Ingestion with Non-System Access Zone

  1. Create an access zone in /ifs/ named "auditing".

  2. Create an IP pool in the new access zone with 3 nodes and 3 IPs in the pool.

  3. Create a SmartConnect name and delegate this name to DNS.

  4. Auditing will be disabled by default in this zone.

  5. Use this SmartConnect name in the auto.nfs file.

  6. Log in to node 1 of the ECA as <your-username>.

  7. Open the auto.nfs file:

    nano /opt/superna/eca/data/audit-nfs/auto.nfs
  8. Follow the syntax in this guide to enter the mount of the cluster audit folder, then save the file with Control + X.

  9. Log in to the cluster and move the NFS export used for auditing to the new access zone:

    isi nfs exports modify --id=1 --zone=system --new-zone=auditing --force
  10. Verify the new export:

    isi nfs exports list --zone=auditing
  11. Restart the autofs service to remount the NFS export in the new zone:

    • Push the updated config:

      ecactl cluster push-config
    • Restart the autofs service on all nodes:

      ecactl cluster exec sudo systemctl restart autofs
  12. Enter the <your-username> password on each node when prompted.

Done.