ECA VM Installation
Introduction
The Eyeglass Clustered Agent (ECA) is a tool that facilitates auditing and ransomware defense through the deployment of virtual machines (VMs) on Hyper-V and VMware, including the Mini-ECA option for distributed sites. This guide provides step-by-step instructions for deploying the ECA VM in these environments, ensuring that the auditing of critical data is carried out efficiently, with support for both centralized and distributed cluster modes.
One key deployment option in this guide is the Mini-ECA, designed specifically for environments that require distributed cluster mode. The Mini-ECA enhances security by forwarding audit data from remote sites to a central cluster, making it particularly useful in scenarios where remote site processing is necessary due to high latency or slow WAN connections.
Deployment Scenarios
Centralized with NFS over WAN
In this setup, a central ECA cluster accesses audit data from remote PowerScale OneFS clusters over a WAN link using NFS. This configuration is ideal for metro WAN environments where latency is low.
Centralized with Remote Mini-ECA
If your WAN connection has higher latency (>10 ms RTT) or is slow, deploying a Mini-ECA at remote sites helps mitigate these issues. In this configuration, the Mini-ECA locally collects audit data via NFS and forwards it to the central ECA cluster for processing.
Mini-ECA is optional: Before proceeding with its setup, assess your environment’s network conditions to determine if Mini-ECA is necessary. Installing it in environments with low latency may lead to unnecessary configurations.
Requirements
The Eyeglass appliance is required for installation and configuration. The ECA Cluster operates in a separate group of VMs from Eyeglass.
Key Components
- Eyeglass: Responsible for taking actions on the cluster and notifying administrators.
- PowerScale Cluster: Stores the analytics database (can be the same cluster that is monitored for audit events).
- Licenses:
  - Eyeglass Appliance: Requires either Data Security Agent Licenses, Easy Auditor Agent Licenses, or Performance Auditor Licenses.
  - HDFS License (for Easy Auditor): The PowerScale cluster requires an HDFS license to store the analytics database for Easy Auditor.
Note: Data Security deployments no longer require an HDFS pool.
System Requirements and Network Latency Considerations
ECA Hyper-V
The ECA appliance uses two disks: one for the OS and one for data.
- OS Disk: Requires 20 GB (default disk).
- Data Disk: Requires 80 GB. (Read the instructions below on how to create the data disk).
OVA Install Prerequisites
The OVA file will deploy 3 VMs. To build a 6-node cluster, deploy the OVA twice and move the VMs into the first Cluster object in vCenter. Follow the instructions below to correctly move the VMs into a single vApp in vCenter.
Gather the following configuration items before deploying:
- Number of VMs: see the scaling section
- vSphere 6.x or higher
- 1x IP address on the same subnet for each node
- Gateway
- Network Mask
- DNS IP
- NTP server IP
- IP Address of Eyeglass
- API token from Eyeglass
- Unique cluster name (lower case, no special characters)
Mini-ECA
Latency Requirements
- Latency between the main ECA cluster and the remote mini ECAs must be below 80 ms (ping RTT).
- Latency above 80 ms may not be supported.
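A quick way to verify this requirement is to sample the average RTT from the mini ECA site; a minimal sketch (MAIN_ECA_IP is a placeholder for the main ECA node 1 address; it defaults to 127.0.0.1 here only so the script runs anywhere):

```shell
#!/bin/sh
# Check that average RTT to the main ECA cluster is below the 80 ms limit.
# MAIN_ECA_IP is a placeholder; set it to the main ECA node 1 address.
MAIN_ECA_IP="${MAIN_ECA_IP:-127.0.0.1}"

# Send 5 pings quietly and pull the average RTT out of the
# "rtt min/avg/max/mdev = a/b/c/d ms" summary line.
avg=$(ping -c 5 -q "$MAIN_ECA_IP" | awk -F'/' '/^rtt|^round-trip/ {print $5}')

echo "Average RTT to $MAIN_ECA_IP: $avg ms"
if awk -v a="$avg" 'BEGIN {exit !(a + 0 < 80)}'; then
  echo "OK: latency is within the supported Mini-ECA range."
else
  echo "WARNING: latency exceeds 80 ms and may not be supported."
fi
```

Run it from each remote site before committing to the Mini-ECA design.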
Required Mounting Method
- The FSTAB method is required for mounting the cluster audit folder.
- See detailed instructions in the following section.
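As an illustration only, an NFSv3 entry in /etc/fstab on the mini ECA might look like the following; the SmartConnect name and local mount point below are placeholders, so use the values from the detailed instructions:

```
# Hypothetical example only - substitute your own names and paths.
# <cluster-smartconnect-fqdn>:/ifs/.ifsvar/audit/logs  <local-mount-point>  nfs  ro,nfsvers=3  0  0
prod-nas.example.com:/ifs/.ifsvar/audit/logs  /opt/superna/mnt/audit  nfs  ro,nfsvers=3  0  0
```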
Network Impact Calculation
To calculate the bandwidth requirement, you need to know the audit event rate for the cluster.
- Run the following command to get the average disk operations per PowerScale OneFS node:
  isi statistics query current --nodes=all --stats=node.disk.xfers.rate.sum
  This command returns the average per node at the bottom of the results. Use this value in the calculation below.
- Calculate the audit event rate:
  - Take the average per node and multiply by the number of nodes. Example: if the command reports an average of 2200 and there are 7 nodes, 2200 * 7 = 15,400.
  - Divide this number by the ratio of disk transfers to audit events (1.83415365 in this example): 15,400 / 1.83415365 ≈ 8,396 events/second.
- Compute the required network bandwidth to forward events to the central site for processing, given 5 Mbps of network traffic per 1000 events/sec:
  - Example for 8,396 events/sec: (8396 / 1000) * 5 Mbps ≈ 42 Mbps.
Info: In this example, the required network bandwidth is approximately 42 Mbps to handle the audit event traffic.
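The worked example above can be scripted end to end; a minimal sketch using the example's numbers (substitute your own per-node average and node count from the isi statistics output):

```shell
#!/bin/sh
# Estimate WAN bandwidth needed to forward audit events to the central ECA.
# Inputs from the worked example: 2200 disk xfers/node, 7 nodes, and the
# 1.83415365 disk-transfers-per-audit-event ratio.
AVG_PER_NODE=2200
NODES=7
RATIO=1.83415365
MBPS_PER_1000_EVENTS=5

# Audit event rate = (per-node average * node count) / ratio.
events=$(awk -v a="$AVG_PER_NODE" -v n="$NODES" -v r="$RATIO" \
  'BEGIN {printf "%d", (a * n) / r}')
# Bandwidth = (events / 1000) * 5 Mbps, rounded to the nearest Mbps.
mbps=$(awk -v e="$events" -v m="$MBPS_PER_1000_EVENTS" \
  'BEGIN {printf "%.0f", (e / 1000) * m}')

echo "Estimated audit event rate: $events events/second"
echo "Required WAN bandwidth: ~$mbps Mbps"
```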
ECA Cluster Sizing and Performance Considerations
ECA clusters can consist of 3 to 12 nodes or more, depending on the following factors:
- The applications running on the cluster.
- The number of events generated per second.
- The number of cluster nodes producing audit events.
Minimum ECA Node Configurations
The supported minimum configurations for all ECA deployments are listed below.
New applications or releases with features that require additional resources may necessitate expanding the ECA cluster to handle multiple clusters or new application services.
Separate tables are provided below for each platform combination: PowerScale only, VAST only, Qumulo only, and VAST and Qumulo.
| Environment Size | Number of VM Nodes Required | ESX Hosts to Split VM Workload and Ensure High Availability | ECA Node VM Size | Network Latency NFS Mount For Ransomware Defender & Easy Auditor | EZA DB Network Latency Between ECA and Storing the DB | Host Hardware Configuration Requirements |
|---|---|---|---|---|---|---|
| <18K events per second | - 1 VM for Core Agent (Eyeglass) - 6 ECA VMs | 26 | 4 x vCPU, 16 GB RAM, 30 GB OS partition + 80 GB disk | < 10 ms RTT | < 5 ms RTT | 2 socket CPU, 2 GHz or greater; Disk IO latency average read and write < 20 ms |
| >18K events per second | - 1 VM for Core Agent (Eyeglass) - 9 ECA VMs | 36 | 4 x vCPU, 16 GB RAM, 30 GB OS partition + 80 GB disk | < 10 ms RTT | < 5 ms RTT | 2 socket CPU, 2 GHz or greater; Disk IO latency average read and write < 10 ms |
| Large node count clusters >20 nodes | - 1 VM for Core Agent (Eyeglass) - 9 ECA VMs (20-30 nodes) / 12 ECA VMs (>30 nodes) | 36 | 4 x vCPU, 16 GB RAM, 30 GB OS partition + 80 GB disk | < 10 ms RTT | < 5 ms RTT | 2 socket CPU, 2 GHz or greater; Disk IO latency average read and write < 10 ms |
VAST Only
| Environment Size | Number of VM Nodes Required | ESX Hosts to Split VM Workload and Ensure High Availability | ECA Node VM Size | Network Latency NFS Mount For Ransomware Defender & Easy Auditor | EZA DB Network Latency Between ECA and Storing the DB | Host Hardware Configuration Requirements |
|---|---|---|---|---|---|---|
| <18K events per second | - 1 VM for Core Agent (Eyeglass) - 3 ECA VMs - 1 VM audit database | 26 | 4 x vCPU, 16 GB RAM, 30 GB OS partition + 80 GB disk | | | 2 socket CPU, 2 GHz or greater; Disk IO latency average read and write < 10 ms |
| >18K events per second | - 1 VM for Core Agent (Eyeglass) - 6 ECA VMs - 1 VM audit database | 36 | 4 x vCPU, 16 GB RAM, 30 GB OS partition + 80 GB disk | | | 2 socket CPU, 2 GHz or greater; Disk IO latency average read and write < 10 ms |
| Large node count clusters >20 CNodes | - 1 VM for Core Agent (Eyeglass) - 9 ECA VMs - 1 VM audit database | 36 | 4 x vCPU, 16 GB RAM, 30 GB OS partition + 80 GB disk | | | 2 socket CPU, 2 GHz or greater; Disk IO latency average read and write < 10 ms |
Qumulo Only
It is recommended to allocate more resources for the TA responsible for receiving events from Qumulo, as a single TA will be handling the entire event load.
| Environment Size | Number of VM Nodes Required | ESX Hosts to Split VM Workload and Ensure High Availability | ECA Node VM Size | Network Latency NFS Mount For Ransomware Defender & Easy Auditor | EZA DB Network Latency Between ECA and Storing the DB | Host Hardware Configuration Requirements |
|---|---|---|---|---|---|---|
| <18K events per second | - 1 VM for Core Agent (Eyeglass) - 3 VMs for ECA - 1 VM audit database (after 4.0) | 26 | 4 x vCPU, 16 GB RAM, 30 GB OS partition + 80 GB disk | | | 2 socket CPU, 2 GHz or greater; Disk IO latency average read and write < 10 ms |
| >18K events per second | - 1 VM for Core Agent (Eyeglass) - 6 ECA VMs - 1 VM audit database | 36 | 4 x vCPU, 16 GB RAM, 30 GB OS partition + 80 GB disk | | | 2 socket CPU, 2 GHz or greater; Disk IO latency average read and write < 10 ms |
VAST and Qumulo Only
It is recommended to allocate more resources for the TA responsible for receiving events from Qumulo, as a single TA will be handling the entire event load.
| Environment Size | Number of VM Nodes Required | ESX Hosts to Split VM Workload and Ensure High Availability | ECA Node VM Size | Network Latency NFS Mount For Ransomware Defender & Easy Auditor | EZA DB Network Latency Between ECA and Storing the DB | Host Hardware Configuration Requirements |
|---|---|---|---|---|---|---|
| <18K events per second | - 1 VM for Core Agent (Eyeglass) - 6 VMs for ECA nodes - 1 VM audit database (after 4.0) | 26 | 4 x vCPU, 16 GB RAM, 30 GB OS partition + 80 GB disk | | | 2 socket CPU, 2 GHz or greater; Disk IO latency average read and write < 10 ms |
| >18K events per second | - 1 VM for Core Agent (Eyeglass) - 9 ECA VMs - 1 VM audit database | 36 | 4 x vCPU, 16 GB RAM, 30 GB OS partition + 80 GB disk | | | 2 socket CPU, 2 GHz or greater; Disk IO latency average read and write < 10 ms |
| Large node count clusters >20 VAST CNodes | - 1 VM for Core Agent (Eyeglass) - 12 ECA VMs - 1 VM audit database | 36 | 4 x vCPU, 16 GB RAM, 30 GB OS partition + 80 GB disk | | | 2 socket CPU, 2 GHz or greater; Disk IO latency average read and write < 10 ms |
ECA Appliance Platforms
VMware OVA and Microsoft Hyper-V VHDX appliance platforms are available.
Low Event Rate Environments
Contact support for reduced footprint configuration with 3 VMs only for low event rate environments.
- OVA Resource Limits: The OVA default sets a resource limit of 18000 MHz, shared by all ECA VM nodes in the cluster. This limit can be increased if the audit event load requires more CPU processing. Consult support before making any changes in VMware.
- Real-Time Distributed Processing: ECA clusters must operate in the same Layer 2 subnet with low latency between VMs. Splitting VMs across data centers is not supported. The only supported distributed mode is the Mini-ECA deployment architecture covered in this guide.
- Resource Requirements for Additional Applications: Unified Data Security, Easy Auditor, and Performance Auditor require additional resources beyond event rate sizing requirements. Add 4 GB of RAM and 2 additional vCPUs per ECA node. High event rates may require further resource increases. Consult the Eyeglass Scalability table for RAM upgrade requirements.
- Audit Data Retention: Retaining audit data for more than 365 days increases database size and requires extra resources, including at least 3 additional ECA VMs, to maintain performance.
- High Availability (HA) Requirements: For HA, multiple physical hosts are required. ECA clusters with 3 VMs can tolerate N-1 VM failures, clusters with 6 VMs can tolerate N-2 failures, and larger clusters tolerate N-3 failures.
- OneFS 8.2 or Later: Customers using OneFS 8.2 or later must disable directory open and directory close events to reduce the audit rate and the ECA VM footprint.
- VMware Settings: Storage vMotion, SDRS, and DRS should be disabled, as ECA VMs are real-time processing systems.
- Archiving GZ Files: To maintain performance, old gz files collected on OneFS nodes must be archived. Performance degrades when the gz file count exceeds 5000. Follow the procedures provided, or use the auto-archive feature in OneFS 9.x.
- Database Save Rates: Database save rates exceeding 1000 events per second per ECA node require additional database VMs to handle save operations efficiently.
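As a quick check against the Archiving GZ Files point above, the compressed audit file count can be sampled; a sketch, assuming the audit logs live under /ifs/.ifsvar/audit/logs (adjust the path for your cluster):

```shell
#!/bin/sh
# Count compressed audit files; performance degrades above ~5000.
# AUDIT_DIR is an assumption; adjust to your cluster's audit log path.
AUDIT_DIR="${AUDIT_DIR:-/ifs/.ifsvar/audit/logs}"
THRESHOLD=5000

count=$(find "$AUDIT_DIR" -name '*.gz' 2>/dev/null | wc -l | tr -d ' ')
echo "gz files under $AUDIT_DIR: $count"

if [ "$count" -gt "$THRESHOLD" ]; then
  echo "WARNING: more than $THRESHOLD gz files; archive old audit logs."
else
  echo "OK: gz file count is within the recommended limit."
fi
```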
IP Connection and Pool Requirements for the Analytics Database (Easy Auditor; Requires HDFS on the Cluster)

ECA Cluster Network Bandwidth Requirements to PowerScale OneFS (Data Security, Easy Auditor, Performance Auditor)
Each ECA node processes audit events and writes data to the Analytics Database using HDFS on the same network interface. Therefore, the combined TX (Transmit) and RX (Receive) data flow constitutes the peak bandwidth requirement per node.
Below is a table that provides minimum bandwidth requirements per ECA VM based on an example calculation for HDFS Bandwidth. This includes estimates and guidelines for Analytics Database network bandwidth access to PowerScale OneFS.
| Product Configuration | Audit Event rate Per Second | Peak Bandwidth requirement - Events per second per ECA cluster (input NFS Reading events from PowerScale OneFS to ECA cluster) | Peak Bandwidth requirement - Audit data Writes Mbps per ECA cluster (output HDFS writing events) |
|---|---|---|---|
| Data Security only | 2000 evts | Input to ECA → 50 Mbps | Out of ECA ← < 150 Mbps |
| Unified Ransomware and Easy Auditor - Steady state storing events | > 4000 evts | Input to ECA → 125 Mbps | Out of ECA ← 500 Mbps - 1.2 Gbps |
| Easy Auditor Analysis Reports (long running reports) | NA | Input to ECA (HDFS from PowerScale OneFS) ← 800 Mbps - 1.5 Gbps while report runs | |
Hyper-V or VMware Requirements
VMware ESX Host Compute Sizing for ECA nodes (Data Security, Easy Auditor, Performance Auditor)
For VMware environments with DRS and SDRS, it is best practice to exempt the ECA and vApp from dynamic relocation. This is because ECA is a real-time application with time synchronization requirements between VMs for processing and database operations.
While DRS movement of running VMs can negatively affect these processes, it is acceptable to migrate VMs for maintenance purposes as needed.
| Number of active concurrent Users per cluster ¹ | ECA VM per Physical Host Recommendation | Estimated Events Guideline |
|---|---|---|
| 1 to 5000 | 1 Host | 5000 * 1.25 = 6,250 events per second |
| 5000 - 10000 | 2 Hosts | 10,000 * 1.25 = 12,500 events per second |
| > 10000 | 3 Hosts | Number of users * 1.25 events/second |
¹ Active TCP connections with file IO to the cluster.
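The estimated-events guideline in the table (users × 1.25) can be computed directly; a one-liner sketch (the user count is an example value):

```shell
#!/bin/sh
# Convert an active concurrent user count into an estimated audit event
# rate using the table's guideline of 1.25 events per user per second.
USERS="${USERS:-12000}"
events=$(awk -v u="$USERS" 'BEGIN {printf "%d", u * 1.25}')
echo "$USERS users -> ~$events events/second"
```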
Firewall Configurations
Security - Firewall Port Requirements: Data Security, Easy Auditor, and Performance Auditor
Firewall Rules and Direction Table
These rules apply to both incoming and outgoing traffic for the virtual machines (VMs). It is important to ensure that all ports remain open between VMs. Private VLANs and firewalls between VMs are not supported in this configuration.
To enhance the security of the Eyeglass Clustered Agent (ECA), we recommend the following measures:
- Firewall Configuration:
  - Configure firewalls to restrict access to the ports between the Eyeglass VM and the ECA VM.
  - No external access is required for the ECA, aside from SSH access for management purposes.
  - This is the most important step to secure the ECA.
- Securing Troubleshooting GUIs:
  - Limit access to the troubleshooting tools (HBASE, Spark, and Kafka) by configuring them to only be accessible on a management subnet.
Eyeglass GUI VM (Applies to Data Security & Easy Auditor)
Note: The operating system is openSUSE 15.x. It is the customer's responsibility to patch the operating system and allow Internet repository access for automatic patching. The OS is not covered by the support agreement.
| Port | Direction | Function |
|---|---|---|
| TCP 443 | Eyeglass → VAST/Qumulo | Authenticated access to the API |
| TCP 8080 | Eyeglass → PowerScale | Authenticated access to the API |
| TCP 9090 | Eyeglass → ECA | Prometheus database for event stats |
| 2181 (TCP) | Eyeglass → ECA | Zookeeper |
| 9092 (TCP) | Eyeglass → ECA | Kafka |
| 5514 (TCP) as of 2.5.6 build 84 | ECA → Eyeglass | Syslog |
| 443 (TCP) | ECA → Eyeglass | TLS messaging |
| 514 | Qumulo → ECA | Syslog information from Qumulo to ECAs |
| NFS v3 UDP/TCP port 111, TCP and UDP port 2049 and TCP/UDP 300 | ECA → Storage Platform | NFS export mounting audit data folder on managed clusters (NOTE: Kerberized NFS is not supported) |
| NFS 4.x TCP port 2049 | ECA → Storage Platform | NFS export mounting audit data folder on managed clusters |
| SMB TCP 445 | Eyeglass → Storage Platform | Security Guard |
| REST API 8080 TCP | ECA → PowerScale (OneFS, mandatory) | Needed for REST API audit log monitoring |
| NTP (UDP) 123 | ECA → NTP server | Time sync |
| ICMP | ECA VMs using REST API mode → PowerScale nodes in system zone | Used to provide a reachability check and filter out nodes in the SmartConnect pool that are not reachable via ICMP ping |
| SMTP 25 (TCP) | Eyeglass → Email Server | Used by default in Notification center |
| SSH 22 (TCP) | → ECA, → Eyeglass | Port used to connect to ECA, Eyeglass using SSH |
Additional Ports for Easy Auditor
| Port | Direction | Function |
|---|---|---|
| 8020 AND 585 (TCP) | ECA → PowerScale | HDFS (NOTE: Encrypted HDFS is not supported) |
| 18080 | Eyeglass → ECA node 1 only | Hbase history required for Easy Auditor |
| 16000, 16020 | Eyeglass → ECA | Hbase |
| TCP 1433, 4022, 135, 1434, UDP 1434 | ECA → VAST and Qumulo | SQL Database |
| 6066 (TCP) | Eyeglass → ECA | Spark job engine |
| 7077 | Eyeglass → ECA node 1 and 3 | Spark job submission |
| 9092 (TCP) | Eyeglass → ECA | Kafka broker |
| 443 (TCP) | Admin browser → ECA | Secure access to management tools with authentication required. |
| 443 (TCP) | Eyeglass → Superna | Phonehome |
AirGap
| Port | Direction | Function |
|---|---|---|
| ICMP | Vault cluster → prod cluster(s) | AirGap Solution (Enterprise Edition). Ping between the prod cluster and the vault cluster, used to assess network reachability and vault isolation |
| 15000 (TCP) | ECA → Eyeglass | Transferring vaultagent logs to Eyeglass |
Firewall for Mini ECA
| Port | Direction |
|---|---|
| 22 (TCP) SSH | ECA main cluster <--> mini ECA; admin pc --> mini ECA |
| 2181, 2888 (TCP) | ECA main cluster <--> mini ECA |
| 9092, 9090 (TCP) | ECA main cluster <--> mini ECA |
| 5514 (TCP) as of 2.5.6 build 84 | mini ECA --> Eyeglass |
| 443 (TCP) | mini ECA --> Eyeglass; admin pc --> mini ECA |
| NFS UDP/TCP port 111, TCP and UDP 2049, UDP 300 (NFSv3 only) | mini ECA --> cluster |
| NTP (UDP) 123 | mini ECA --> NTP server |
| DNS UDP 53 | mini ECA --> DNS server |
| TCP port 5000 for node 1 ECA (during upgrades only) | all ECA and mini ECA --> node 1 main ECA cluster IP |
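Before starting services, basic reachability of the ports above can be spot-checked from the mini ECA; a sketch using bash's /dev/tcp (HOST is a placeholder for the main ECA node; it defaults to 127.0.0.1 only so the script runs anywhere):

```shell
#!/bin/sh
# Probe the Zookeeper/Kafka/Prometheus ports the mini ECA needs to reach.
HOST="${HOST:-127.0.0.1}"

for port in 2181 2888 9090 9092; do
  # /dev/tcp is a bash feature, so call bash explicitly for each probe.
  if timeout 3 bash -c "exec 3<>/dev/tcp/$HOST/$port" 2>/dev/null; then
    echo "port $port: open"
  else
    echo "port $port: closed or filtered"
  fi
done
```

A "closed or filtered" result for a required port usually means a firewall rule is missing or the service is not yet running.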
Eyeglass VM Prerequisites
To ensure proper deployment of Eyeglass with the Eyeglass Clustered Agent (ECA), follow these steps to add licenses for Easy Auditor or Data Security to the Eyeglass VM.
Steps to Add Eyeglass Licenses
- Verify Compatibility: Ensure that Eyeglass is deployed or upgraded to the compatible release version for the ECA release being installed.
- Login to Eyeglass: Access the Eyeglass interface.
- Open the License Manager: Click on the License Manager icon.
- Download License Key: Follow the instructions to download the license key using the email token provided with your purchase.
- Upload License Key: Upload the license key zip file obtained in the previous step. Once uploaded, the webpage will refresh automatically.
- Open License Manager Again: After the page refreshes, open the License Manager.
- Set License Status:
  - Navigate to the Licensed Devices tab.
  - For each cluster you wish to monitor using Data Security or Easy Auditor, set the license status to User Licensed.
  - For clusters that should not be licensed, set the license status to Unlicensed. This ensures licenses are applied correctly and prevents them from being used on unintended clusters.

ECA VM Deployment

Step-by-Step Guide for Hyper-V Deployment
Create ECA Hyper-V Virtual Machine
Follow the steps below to create an Eyeglass Clustered Agent (ECA) Virtual Machine on Hyper-V:
- Download ECA VHDX File: Visit the Superna Support Portal and download the ECA Hyper-V VHDX file.
- Deploy a New Virtual Machine: Open Hyper-V Manager and start the process to create a new virtual machine.
- Configure the Virtual Machine:
  - Enter a Name for the virtual machine.
  - Select Generation 1 for the virtual machine generation.
  - Set the Startup Memory to 16384 MB (16 GB).
- Configure Network: Select the appropriate Network Adapter for the virtual machine.
- Attach the ECA VHDX:
  - In the virtual hard disk options, choose Use an existing virtual hard disk.
  - Browse to and select the downloaded ECA VHDX file.
- Complete the Wizard: Follow the prompts to complete the virtual machine creation process.
Configure ECA Data Disk
After deploying the Eyeglass Clustered Agent (ECA) Virtual Machine, follow the steps below to configure the data disk:
- Open VM Settings:
  - Go to the new VM in Hyper-V Manager.
  - Right-click the VM and select Settings.
- Add a Hard Drive: Under IDE Controller 0, click Add and select Hard Drive.
- Create a New Virtual Hard Disk: Choose Create New to configure a new virtual hard disk.
- Configure Disk Format and Type:
  - Disk Format: Select VHDX.
  - Disk Type: Select Fixed size.
- Name and Size the Data Disk:
  - Enter a Name for the data disk.
  - Set the size to 80 GB for the new blank virtual hard disk.
- Complete the Wizard: Follow the prompts to complete the data disk creation process.
Configuration of ECA cluster
SSH Access
- Username: <your-username>
- Password: <your-password>
Steps to Configure the ECA Cluster
- Power Up the VM:
  - Start the ECA VM and wait 5-10 minutes for the Superna on-boot script to run.
  - To monitor the script, use the following command:
    tail -2 /var/log/superna-on-boot.log
  - Wait for the script to finish and follow the on-screen instructions.
- Setup the First Node (Node 1):
  - Run the command to set up your ECA Hyper-V node 1:
    sudo spy-hyperv-setup
  - When prompted, enter the following network configuration details:
    - Admin password
    - IP Address
    - Netmask
    - Gateway
    - Hostname
    - DNS
    - NTP
- Configure the Cluster (Node 1):
  - Follow the instructions carefully: do not press y until Node 2-N is configured.
  - Move on to the next step for Node 2-N setup.
- Setup Additional Nodes (Node 2-N):
  - Repeat the power-up and node setup steps above on each additional node you wish to deploy.
  - When prompted for the master node during setup on Node 2-N, enter n.
- Complete Setup on the Master Node (Node 1):
  - Return to Node 1 (the master node) and press y to complete the setup.
  - Enter the following details:
    - ECA Cluster Name (lowercase; no uppercase, underscores, or special characters).
    - Child Node IPs (space-separated).
- Verify Completion:
  - After setup is complete, verify that all nodes are configured properly.
  - You will see a "Setup complete" message once everything is successfully configured.

VMware OVA Installation Procedure
Installing the ECA VMware OVA
The deployment involves three ECA appliances. Follow the steps below to complete the installation.
- Download the Superna Eyeglass™ OVF: Visit Superna Eyeglass Downloads to download the OVF file.
- Unzip the OVF File: Extract the downloaded file into a directory on a machine with vSphere Client installed.
- OVA Contents:
  - The unzipped download contains 1, 3, 6, and 9 VM OVF files.
  - Use the 1 VM OVF if you do not have a VMware license for vApp objects and need to deploy N x VMs.
  - Select the 3, 6, or 9 VM OVF + VMDK files to deploy an ECA cluster, matching the VM count from the scaling table in this guide.
- Install the OVF using the HTML vCenter Web Interface:
  Warning: Access vCenter with an FQDN DNS name, not an IP address. A bug in vCenter will generate an error during OVA validation.
- MANDATORY STEP - Power on the vApp after deployment to ensure IP addresses get assigned: DO NOT remove the VMs from the vApp before powering them on.
- First Boot Verification: Make sure the first boot steps complete by reviewing the logs. Run the following commands on each ECA VM:
  - Check the status of the boot process:
    sudo systemctl status superna-on-boot
  - Verify the process has completed:
    cat /var/log/superna-on-boot.log
  - Ensure the log shows "done" before proceeding. Do not proceed until this step is complete.
- Procedures After First Boot: Once the VMs are pingable, you can move the VMs from the vApp object if needed to rename each VM according to your naming convention.
  Note: Make sure the first boot script has completed, using the procedures above, on each VM. This can take up to 10 minutes per VM on the first boot as it sets up Docker containers and must complete successfully along with SSH key creation.
- Deploy from a File or URL: Deploy the OVA from the file or URL where it was saved.
- Configure VM Settings: Using the vCenter Client, set the required VM settings for datastore and networking. NOTE: Leave the setting as Fixed IP address.
- Complete the Networking Sections:
  - ECA Cluster Name: The name must be lowercase, fewer than 8 characters, letters only, with no special characters.
    Warning: The ECA cluster name cannot include underscores (_), as this will cause some services to fail.
  - Ensure that all VMs are on the same subnet.
  - Enter the Network Mask (this will be applied to all VMs).
  - Enter the Gateway IP.
  - Enter the DNS Server. The DNS server must be able to resolve igls.<your domain name here>. Use the nameserver IP address.
    Note: Agent node 1 is the master node where all ECA CLI commands are executed for cluster management.
- vCenter Windows client example
- vCenter HTML Client example
- Example OVA vApp after deployment
- Enter IP Information: Enter all IP information for each ECA VM in the vCenter UI.
- After Deployment is Complete:
  - Power on the vApp (recommended: stop here and wait for services to complete the remaining steps).
  - Ping each IP address to make sure each node has finished booting.
- Login to the Master Node: Login via SSH to Node 1 (the Master Node) using the <your-user> account and the default password.
- Configure Keyless SSH: Run the following command to configure keyless SSH for the <your-user> account to manage the cluster:
  ecactl components configure-nodes
Generate API Token on the Eyeglass Appliance:
  - On the Eyeglass appliance, generate a unique API token from the Superna Eyeglass REST API window.
  - Once the token is generated for the ECA cluster, it will be used in the ECA startup command for authentication.
- Login to Eyeglass:
  - Go to the main menu and navigate to the Eyeglass REST API menu item.
  - Create a new API token, which will be used in the startup file for the ECA cluster to authenticate with the Eyeglass VM and register ECA services.
- On the ECA Cluster Master Node (Node 1):
  - Login to that VM using SSH as the ecaadmin user (default password <your-password>). From this point on, commands will only be executed on the master node.
  - On the master node, edit the file /opt/superna/eca/eca-env-common.conf (for example, nano /opt/superna/eca/eca-env-common.conf) and change these five settings to reflect your environment. Replace the variables accordingly.
  - Set the IP address or FQDN of the Eyeglass appliance and the API token (created above), and uncomment the parameter lines before saving the file. For example:
    export EYEGLASS_LOCATION=ip_addr_of_eyeglass_appliance
    export EYEGLASS_API_TOKEN=Eyeglass_API_token
  - Verify the IP addresses for the nodes in your cluster. NODE_1 must be the master (the IP address of the node you are currently logged into).
    export ECA_LOCATION_NODE_1=ip_addr_of_node_1 # set by first boot from the OVF
    export ECA_LOCATION_NODE_2=ip_addr_of_node_2 # set by first boot from the OVF
    export ECA_LOCATION_NODE_3=ip_addr_of_node_3 # set by first boot from the OVF
    Info: Add an additional ECA_LOCATION_NODE_X=x.x.x.x line for each additional node, depending on ECA cluster size. All nodes in the cluster must be listed in the file. Copy a line, paste it, and change the node number. For example, to add the 4th ECA VM: export ECA_LOCATION_NODE_4=ip_addr_of_node_4
  - Set the HDFS path to the SmartConnect name set up in the Analytics database configuration steps. Replace the FQDN hdfs_sc_zone_name with <your smartconnect FQDN>:
    export ISILON_HDFS_ROOT='hdfs://hdfs_sc_zone_name:8020/eca1'
    Note: Do not change any other value. Whatever is entered here is created as a subdirectory of the HDFS root directory that was set earlier.
- Done: Continue on to the Cluster Auditing Configuration section.
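Before continuing, a quick sanity check that the required settings are present and uncommented can help; a sketch (the EYEGLASS_* variable names follow the examples above — check your conf file for the exact spelling, and the conf path is taken from this guide):

```shell
#!/bin/sh
# Verify the key exports in eca-env-common.conf are set and uncommented.
CONF="${CONF:-/opt/superna/eca/eca-env-common.conf}"

for key in EYEGLASS_LOCATION EYEGLASS_API_TOKEN ECA_LOCATION_NODE_1; do
  # A valid line starts with "export KEY=" followed by a value.
  if grep -Eq "^export ${key}=." "$CONF" 2>/dev/null; then
    echo "$key: set"
  else
    echo "$key: missing or commented out"
  fi
done
```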
Isilon/PowerScale Protocol Audit Configuration - Required
This section configures the PowerScale file auditing required to monitor user behaviors. Protocol auditing can be enabled independently on each Access Zone that requires monitoring.
Enable Protocol Access Auditing OneFS GUI
- Click Cluster Management > Auditing.
- In the Settings area, select the Enable Configuration Change Auditing and Enable Protocol Access Auditing checkboxes.
- In the Audited Zones area, click Add Zones.
- In the Select Access Zones dialog box, select one or more access zones, and click Add Zones (do not add the Eyeglass access zone).
  Note: Any zone that does not have auditing enabled is unprotected.
Disable High Rate Audit Events OneFS 8.2 and Later (Mandatory Step)
Directory Open and Directory Close events generate unnecessary load on the cluster. These events are not used by Data Security or Easy Auditor (default settings do not store them in the database), and logging them causes performance issues and high overhead on the cluster. Disabling these events is required.
Procedure to Disable High Rate Events
- Log in to the OneFS cluster over SSH as the root user.
- Run the following command, replacing <zone_name> with each access zone name that is enabled for auditing. The change takes effect immediately, reduces audit overhead, and increases auditing performance.
  isi audit settings modify --zone=<zone_name> --remove-audit-success=open_directory,close_directory
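When more than one access zone has auditing enabled, the same command can be applied to each zone in one pass; a sketch (zone names are placeholders, and the script checks for the isi CLI so it degrades gracefully when run off-cluster):

```shell
#!/bin/sh
# Disable the high-rate directory open/close audit events in every
# audited access zone. Zone names are placeholders - replace them.
ZONES="zone1 zone2"

for zone in $ZONES; do
  if command -v isi >/dev/null 2>&1; then
    # Run on the OneFS cluster as root; takes effect immediately.
    isi audit settings modify --zone="$zone" \
      --remove-audit-success=open_directory,close_directory
  else
    echo "isi CLI not found; run this on a OneFS node (zone: $zone)"
  fi
done
```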