Installation
Overview
This guide is designed to help IT professionals successfully install and initially configure the Superna Data Security Edition solution, ensuring that data is protected and systems are resilient.
The solution is designed to be flexible and adaptable, offering integration across multiple platforms, including on-premises, cloud, and hybrid cloud setups.
Installation types
We are in the process of rebuilding our documentation.
For installation instructions for PowerScale and AWS, visit the legacy documentation portal.
Platforms documented here:
- VAST Data
- Qumulo
If you encounter any issues during the installation or configuration process, Superna support is available to assist you. Please reach out to our support team for guidance and troubleshooting.
Before you begin
The installation process involves:
- Downloading and deploying virtual machines
- Configuring platform-specific settings
- Initializing and configuring Superna appliances
- Setting up additional features like Security Guard
System requirements
Before you begin, ensure you have the following:
- Valid licenses for Superna solutions.
- An account named "superna" in VAST with full administrative permissions. This account is required to install the Eyeglass and ECA components.
- A dedicated AD user account for the Security Guard feature.
- A network share accessible by the AD user.
Step 1 - Download and deploy virtual machines
The Core Agent Appliance (formerly Eyeglass) must be installed and operational on the cluster.
1.1 - Download links from the support portal
For security reasons, download links are not publicly available. Access them through our support site here.
To start, sign in using your Superna Support credentials.
Once signed in, scroll down to display the links to the latest versions of the Superna installation files.
If you're doing a fresh installation, select Download VM Install Files.
For appliances to be hosted with VMware, select Download OVF Installer.
After your selection, you'll be prompted to enter your email and accept the Subscription Terms and Conditions before continuing.
Click the link to download the necessary installers for your solution.
1.2 - Deploy the appliances
You must deploy two separate machines: one for the Core Agent Appliance (formerly Eyeglass) and one for the Extended Cluster Appliance.
Unzip the download package on a machine with vSphere installed. Select both the .ovf and .vmdk files during the OVF template deployment.
Click GO TO DOWNLOADS in VMware vSphere Hypervisor (ESXi). This opens the Download Product page.
Select the required VM settings for VM name and folder, compute resource, datastore, and networking. Complete the networking section as requested.
1.3 - Login to the Superna Dashboard
Access the Eyeglass Web UI via https://<Eyeglass IP address>.
Log in using the default credentials provided for new deployments.
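Optionally, before logging in you can confirm the web UI is reachable from your workstation. A minimal check, assuming the appliance uses its default self-signed certificate (-k skips certificate validation):

# Confirm the Eyeglass web UI responds with an HTTP status line
curl -k -I https://<Eyeglass IP address>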
Step 2 - Upload licenses
2.1 - Retrieve License Keys
Log in to the Superna Support Desk and submit your license request using the Appliance ID and Transaction Token from your license email.
The Appliance ID and Transaction Token must be entered exactly as shown in the license email, with all dashes and without any leading or trailing spaces.
Example: EMC-xxx-xxx-xxx-xxx
Download the provided zipped license file.
Do not unzip the license file! You will upload the .zip file in the next step.
2.2 - Upload License File
Navigate to License Management > Manage Licenses > Browse
Upload the zipped license file and accept the Eyeglass EULA.
Step 3 - VAST Platform configuration
To support Data Security Edition, some configuration must first be done on the VAST platform itself.
First we will enable auditing; then we will create a VAST View that allows the Extended Cluster Agent to access audit information over NFS.
3.1 - Enable auditing
Auditing is essential for detecting suspicious activities.
Open the VAST UI.
Navigate to: Settings -> Auditing
Add a name for the audit directory (the default .vast_audit_dir is fine) and add the root user to Read-access Users.
Auditing can be enabled on specific view policies, allowing for targeted auditing of user activities. Administrators have the flexibility to configure which activities are audited per View Policy.
To audit all views and collect all user activities (global auditing), navigate to Settings -> Auditing -> Global Baseline Audit Settings and make sure all settings are enabled.
3.2 - Configure NFS Protocol View
Now that auditing is enabled, VAST will track all user activity.
To read the VAST audit data from our own systems, we will create an NFS export for the audit data. Creating an NFS export requires a VAST View on that path.
3.2.1 Create a View Policy
To create a View, first create a View Policy.
Navigate to: Element Store -> View Policies, and select Create Policy.
In the General Tab:
- Tenant: Default
- Name: choose a name for the policy; you will select it when creating the View
- Security Flavor: NFS
- VIP Pools: Any Virtual IP Pool can be chosen, but please take note of the name for later.
- Group Membership Source: Client or Client And Providers
If a Virtual IP Pool does not yet exist, it can be created under Network Access -> Virtual IP Pools
The VIP Pool must be assigned the PROTOCOLS role. The range can be any valid IP range in the network.
In the Host-Based Access tab:
NFS:
- Add your ECA IPs to No Squash by clicking Add New Rule
- Add your ECA IPs to Read Only by clicking Add New Rule
- Root Squash, All Squash and Read/Write should be empty
- Default values can be used for everything else in the View Policy.
3.2.2 Create a View
Now, we need to create a View.
Navigate to Element Store -> Views
Select Create View
VAST will display the window for creating/adding a new view.
In the General Tab:
- Path: should be the same as the audit directory (i.e. /.vast_audit_dir)
- Protocols: NFS
- Policy name: should be the View Policy created in the previous step.
- NFS alias: choose an alias for the NFS export (optional)
- Use default values for everything else.
- There is no need to enable the 'create directory' value.
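Optionally, you can confirm that the export is visible from an ECA node (or any Linux host with NFS client utilities installed). A quick check, assuming showmount is available and <vip_pool_ip> is an IP from your VIP Pool:

# List the NFS exports published on the VAST VIP; the audit path should appear
showmount -e <vip_pool_ip>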
This concludes the setup needed on the VAST machine itself. We will now proceed to the configuration needed on Eyeglass and ECA.
Step 4 - Eyeglass and ECA configuration
4.1 - Add cluster to Eyeglass
Make sure you've added the appropriate licenses according to the instructions in Step 2.
Select Add Managed Device from the Eyeglass Main Menu at the bottom left corner.
Enter the IP address of the VAST cluster, as well as the username and password.
Click Submit to continue. Eyeglass will display a confirmation when the job is successfully submitted for inventory collection.
To see the cluster just added, return to the Eyeglass desktop and select Inventory View.
Inventory View displays the cluster just added in the list of Managed Devices.
4.2 - Enable functionality on ECA
SSH into the primary ECA node using an SSH client of your choosing.
Before editing the configuration file, bring down the ECA cluster with the command ecactl cluster down. Next, open the file /opt/superna/eca/eca-env-common.conf in the vi text editor by entering vi /opt/superna/eca/eca-env-common.conf.
In the vi editor, add the following lines to enable VAST support on ECA:
export TURBOAUDIT_VAST_ENABLED=true
export VAST_LOG_MOUNT_PATH="/opt/superna/mnt/vastaudit"
After making these changes, save the file and exit the editor. Finally, bring the ECA cluster back up by executing ecactl cluster up.
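If you want to confirm the two settings landed in the file, a quick check can be run at any time:

# Print the VAST-related settings from the ECA configuration file
grep -E 'TURBOAUDIT_VAST_ENABLED|VAST_LOG_MOUNT_PATH' /opt/superna/eca/eca-env-common.conf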
4.3 - Mount the Audit Path to the Secondary ECA Node
To set up the NFS mount for accessing audit data from VAST on the secondary ECA node (ECA Node 2), follow these steps:
Run the following commands on ECA Node 2. Note the items in <angle brackets>, which must be replaced with the appropriate information.
If this is the first time following this installation guide on this ECA cluster, use the commands as provided. If they have been run before, they may add duplicate or conflicting entries to the existing configuration; steps for checking the file contents follow below.
ecactl cluster exec "mkdir -p /opt/superna/mnt/vastaudit/<VAST_cluster_name>/<VAST_cluster_name>"
echo -e "/opt/superna/mnt/vastaudit/<VAST_cluster_name>/<VAST_cluster_name> --fstype=nfs <vip_pool_ip>:/<audit_dir_name>" >> /opt/superna/eca/data/audit-nfs/auto.nfs
For <vip_pool_ip>, select any IP within the range of the VIP Pool created earlier.
These commands create the necessary directory structure and append the NFS mount configuration to the auto.nfs file.
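For illustration only, here is what the two commands might look like with hypothetical values filled in (cluster name vast01, VIP 10.10.10.50, and the default audit directory; substitute your own values):

# Hypothetical example - do not copy verbatim
ecactl cluster exec "mkdir -p /opt/superna/mnt/vastaudit/vast01/vast01"
echo -e "/opt/superna/mnt/vastaudit/vast01/vast01 --fstype=nfs 10.10.10.50:/.vast_audit_dir" >> /opt/superna/eca/data/audit-nfs/auto.nfs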
To check the file configuration, read the contents of /opt/superna/eca/data/audit-nfs/auto.nfs with a utility or text editor such as less or nano.
Here is an example of a correct auto.nfs file configuration:
# Superna ECA configuration file for Autofs mounts
# Syntax:
# /opt/superna/mnt/audit/<GUID>/<NAME> --fstype=nfs,nfsvers=3,ro,soft <FQDN>:/ifs/.ifsvar/audit/logs
#
# <GUID> is the Isilon cluster's GUID
# <NAME> is the Isilon cluster's name
# <FQDN> is the SmartConnect zone name of a network pool in the Isilon
# cluster's System zone.
Finally, we need to restart the cluster.
ecactl cluster down
ecactl cluster up
After this, the ECA should be able to raise VAST ransomware (RSW) events.
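To verify that the autofs mount works, list the mount directory; accessing it triggers the mount. A quick check using the placeholder path from above:

# Listing the directory triggers the autofs NFS mount; audit files should appear
ls /opt/superna/mnt/vastaudit/<VAST_cluster_name>/<VAST_cluster_name>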
Continue to the following articles to configure Security Guard for VAST and Recovery Manager for VAST.
Qumulo Installation and Configuration
Deploy the most recent Eyeglass and ECA OVFs, then download and run the Qumulo installer for each. The currently supported versions and the support portal download links are below:
| Package | Link |
| --- | --- |
| Eyeglass OVF | Use Support Portal for download |
| ECA OVF | Use Support Portal for download |
| Eyeglass Installer | Use Support Portal for download |
| ECA Installer | Use Support Portal for download |
Prerequisites - Qumulo
The Security Guard appliance requires a dedicated AD user account, and a network share where that AD user has access permissions.
Eyeglass must be operational on the cluster.
Eyeglass Configuration
Licensing
Log in to Superna Eyeglass and open License Management, where Qumulo-type licenses are displayed.
Add a Qumulo-type license to the system.
A Qumulo license must be added via the Eyeglass UI for the Qumulo UI functionality to be displayed. The License Devices tab displays all devices added to the system.
Add a Qumulo cluster via the Eyeglass UI
When the cluster is successfully added, the confirmation window will appear.
Open the Jobs menu to check Running Jobs. Wait until the add job is complete, and validate that the cluster can be browsed in the inventory view:
ECA Configuration
Configure Active Directory on Eyeglass
See here for a guide on Eyeglass CLI Commands
Enable Qumulo functionality on ECA
Add the following parameter to /opt/superna/eca/eca-env-common.conf before cluster up:
export TURBOAUDIT_QM_SERVER_ENABLED=true
Configure the following setting in /opt/superna/eca/eca-env-common.conf to start in Ransomware Only mode:
export RSW_ONLY_CFG=true
Set the following to false so that cluster up continues even if there is no NFS mount (expected, because Qumulo uses syslog):
export STOP_ON_AUTOMOUNT_FAIL=false
Add Eyeglass IP and API token:
export EYEGLASS_LOCATION=
export EYEGLASS_API_TOKEN=
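Taken together, the additions to /opt/superna/eca/eca-env-common.conf would look like the sketch below; the EYEGLASS_LOCATION and EYEGLASS_API_TOKEN values are placeholders for your own deployment:

# Qumulo support on ECA (values in angle brackets are placeholders)
export TURBOAUDIT_QM_SERVER_ENABLED=true
export RSW_ONLY_CFG=true
export STOP_ON_AUTOMOUNT_FAIL=false
export EYEGLASS_LOCATION=<Eyeglass IP address>
export EYEGLASS_API_TOKEN=<Eyeglass API token>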
Kafka Additional Memory
Additional memory needs to be allocated to the Kafka docker container.
Do the following:
- SSH to ECA1 (user: ecaadmin, password: 3y3gl4ss).
- Open the docker-compose.overrides.yml file for editing: vim /opt/superna/eca/docker-compose.overrides.yml
- Add the following lines. IMPORTANT: Maintain the spacing at the start of each line.
version: '2.4'
#services:
#  cadvisor:
#    labels:
#      eca.cluster.launch.all: 1
services:
  kafka:
    mem_limit: 2048MB
    mem_reservation: 2048MB
    memswap_limit: 2048MB
- Save changes with ESC, then :wq!
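Once the cluster is up (see Cluster up below), you can optionally confirm the limit took effect. A quick check, noting that the exact container name may differ in your deployment:

# Find the Kafka container name, then print its memory limit in bytes (0 means unlimited)
docker ps --format '{{.Names}}' | grep -i kafka
docker inspect --format '{{.HostConfig.Memory}}' <kafka container name>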
Zookeeper Retention
We will implement the following changes to prevent zk-ramdisk exhaustion; when the zk-ramdisk reaches 100% utilization, event processing halts.
Do the following:
- SSH to ECA1 (user: ecaadmin, password: 3y3gl4ss).
- vim /opt/superna/eca/conf/zookeeper/conf/zoo.cfg.template
- Add the following configurations to the bottom of the file:
snapCount=1000
preAllocSize=1000
- Save changes with ESC, then :wq!
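For context: snapCount lowers the number of transactions ZooKeeper logs between snapshots, and preAllocSize shrinks the transaction log preallocation size (in KB), which keeps the ramdisk footprint small. To confirm the two settings were appended:

# The retention settings should appear at the end of the template
tail -n 5 /opt/superna/eca/conf/zookeeper/conf/zoo.cfg.template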
Cron Jobs
A cron job needs to be created to restart the fastanalysis Docker container on a schedule. Do the following:
- SSH to ECA1 (user: ecaadmin, password: 3y3gl4ss).
ecactl cluster exec "sudo -E USER=ecaadmin ecactl components restart-cron set fastanalysis 0 0,6,12,18 \'*\' \'*\' \'*\'"
- Validate cron job added:
ecactl cluster exec 'cat /etc/cron.d/eca-*'
Cluster up from ECA1 (must be done before configuring auditing):
- ecactl cluster up
Qumulo Configuration
To launch the Qumulo UI, browse to the cluster's IP address or open it from the Inventory View.
Add ECA node 2 as syslog consumer
On the Qumulo interface, go to Cluster -> Audit.
Enter the IP address of ECA node 2 and save.
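To confirm audit messages are arriving, you can optionally capture traffic on ECA node 2. A quick check, assuming Qumulo sends audit syslog over port 514 (verify the port in your Qumulo audit settings) and tcpdump is installed:

# Watch for incoming syslog traffic from the Qumulo cluster (Ctrl+C to stop)
sudo tcpdump -i any -n port 514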
We are in the process of rebuilding our documentation.
For now, see the table below for links to Data Security feature guides from our legacy documentation.