Installation
Overview
This guide is designed to help IT professionals successfully install and initially configure the Superna Data Security Edition solution, ensuring that data is protected and systems are resilient.
The solution is designed to be flexible and adaptable, offering integration across multiple platforms, including on-premise, cloud, and hybrid cloud setups.
Installation types
We are in the process of rebuilding our documentation.
For installation instructions for PowerScale OneFS and AWS, visit the legacy documentation portal.
Platforms documented here:
- VAST Data
- Qumulo
If you encounter any issues during the installation or configuration process, Superna support is available to assist you. Please reach out to our support team for guidance and troubleshooting.
Before you begin
The installation process involves:
- Downloading and deploying virtual machines
- Configuring platform-specific settings
- Initializing and configuring Superna appliances
- Setting up additional features like Security Guard
- For VAST and Qumulo, the Security Guard appliance requires a dedicated Active Directory (AD) user account and a network share where that AD user has the necessary access permissions. The network share is created automatically.
System requirements
Before you begin, ensure you have the following:
- Valid licenses for Superna solutions.
- The Security Guard appliance requires a dedicated AD user account and a network share where that AD user has the necessary access permissions.
- An account in VAST with full administrative permissions. This account is required to install the Eyeglass and ECA components:
- Create a role named supernaAdmin with the following permissions. Be sure to leave tenants blank.

Permissions Details

Realm | Permissions required | Description |
---|---|---|
Hardware | View | Needed to display the inventory and information about clusters. |
Events | No access required | |
Logical | Create, View, Edit, Delete | Access to Views and Snapshots is required for Ransomware Defender. Protected Paths, Replication stream, and Quotas are required for Disaster Recovery. |
Security | Create, View, Edit, Delete | To accurately display user data, user groups, and review permissions (including user lockout and restoring user access). |
Application | No access required | |
Database | No access required | |
Settings | No access required | |
Monitoring | No access required | |
Support | No access required | |

- Create a superna user account, and then add the supernaAdmin role to the superna user.

Important: No additional user role permissions are required for reading the audit data. However, the root user must be added to the Read Access Users in the audit settings, because it is the root user who accesses the audit mount in ECA. To reduce access, only the ECA IPs should be added to the Read Only access for the view policy, and No Squash must be set to '*'.
- Audit settings for Ransomware Defender. To enable Ransomware Defender events, ensure that the audit settings are correctly configured:
  - Auditing must be enabled on the view policy.
  - All monitored protocols and operations must be enabled.
  - Full path and username options must be enabled in the audit record options.

  Tip: Alternatively, you can enable these settings globally using the Global Baseline Audit Settings to apply across all view policies.
- A dedicated AD user account for the Security Guard feature.
- A network share accessible by the AD user.
VAST Deployment and Configuration
Step 1 - Download and deploy virtual machines
The Core Agent Appliance (formerly Eyeglass) must be installed and operational on the cluster.
1.1 - Download links from the support portal
For security reasons, download links are not publicly available. Access them through our support site here.
To start, sign in using your Superna Support credentials.
Once you enter the Superna support site, scroll down to display the links to the latest versions of Superna installation files.
If you're doing a fresh installation, select Download VM Install Files.
For appliances to be hosted with VMware, select Download OVF Installer.
After your selection, you'll be prompted to enter your email and accept the Subscription Terms and Conditions to continue the installation process.
Click the link to download the necessary installers for your solution.
1.2 - Deploy the appliances
You must deploy two separate machines: one for the Core Agent Appliance (formerly Eyeglass) and one for the Extended Cluster Appliance.
Unzip the download package on a machine with vSphere installed. Select both .ovf and .vmdk files under the OVF template deployment.
Click GO TO DOWNLOADS in VMware vSphere Hypervisor (ESXi). This opens the Download Product page:
Select required VM settings for VM name and folder, computer resource, datastore, and networking. Complete the networking section as requested.
1.3 - Login to the Superna Dashboard
Access the Eyeglass Web UI via https://<Eyeglass IP address>.
Login using the default credentials provided for new deployments.
Step 2 - Upload licenses
2.1 - Retrieve License Keys
Login to the Superna Support Desk and submit your license request using the Appliance ID and Transaction Token from your license email.
The Appliance ID and Transaction Token must be entered exactly as shown in the license email, with all dashes and without any leading or trailing spaces.
Ex. EMC-xxx-xxx-xxx-xxx
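A quick local check can catch stray whitespace or missing dashes before you submit the request. The pattern below is only an assumption inferred from the EMC-xxx-xxx-xxx-xxx example above; your actual Appliance ID format may differ.

```shell
# Hypothetical sanity check for a copied Appliance ID; the pattern is
# assumed from the EMC-xxx-xxx-xxx-xxx example and may not match all licenses.
appliance_id="EMC-abc-def-ghi-jkl"   # placeholder value
if printf '%s' "$appliance_id" | grep -Eq '^[A-Z]+(-[A-Za-z0-9]+){4}$'; then
  echo "format looks OK"
else
  echo "unexpected format: check for spaces or missing dashes" >&2
fi
```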
Download the provided zipped license file.
Do not unzip the license file! You will upload the .zip file in the next step.
2.2 - Upload License File
Navigate to License Management > Manage Licenses > Browse
Upload the zipped license file and accept the Eyeglass EULA.
Step 3 - VAST Platform configuration
In order to support Data Security Edition, some configuration must first be set on the VAST platform itself.
First we will enable auditing, then we will create a VAST view that will allow the Extended Cluster Agent to access audit information through NFS.
3.1 VAST - Enable auditing
Auditing is essential for detecting suspicious activities.
Open the VAST UI.
Navigate to: Settings -> Auditing
Add a name for the audit directory (the default .vast_audit_dir is fine) and add the root user to Read-access Users.
If the option to add a root user is not available in the VAST interface, you can add one using the following command:
curl -k -s -u admin:123456 'https://<vast_ip_address>/api/latest/clusters/1/auditing/' \
-X 'PATCH' \
-H 'accept: application/json, text/plain, */*' \
-H 'Content-Type: application/json' \
--data-raw '{"audit_dir_name":".vast_audit_dir","read_access_users":["root"],"read_access_users_groups":[],"max_file_size":1000000000,"max_retention_period":1,"max_retention_timeunit":"h","protocols_audit":{"log_full_path":false,"log_hostname":false,"log_username":false,"log_deleted_files_dirs":false,"create_delete_files_dirs_objects":false,"modify_data":false,"modify_data_md":false,"read_data":false,"read_data_md":false,"session_create_close":false},"protocols":[]}' --insecure
While this command does indeed add the root user, it will also reset the maximum file size, maximum retention period, and the global audit settings.
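To avoid that reset, one option is to fetch the current settings first and change only the read_access_users field before patching. This is an informal sketch, not an official Superna or VAST procedure: it assumes the same /api/latest/clusters/1/auditing/ endpoint shown above, uses python3 to edit the JSON locally, and all IPs and credentials are placeholders.

```shell
# Sketch: add root to read_access_users while preserving the other settings.
# Assumes the VAST REST endpoint used above; VAST_IP and AUTH are placeholders.
VAST_IP="192.0.2.50"; AUTH="admin:123456"

# 1) Fetch the current settings (commented out here; requires a live cluster):
# current=$(curl -k -s -u "$AUTH" "https://$VAST_IP/api/latest/clusters/1/auditing/")
current='{"audit_dir_name":".vast_audit_dir","read_access_users":[],"max_file_size":1000000000}'

# 2) Add root without touching any other field:
patched=$(printf '%s' "$current" | python3 -c '
import json, sys
d = json.load(sys.stdin)
users = d.get("read_access_users", [])
if "root" not in users:
    users.append("root")
d["read_access_users"] = users
print(json.dumps(d))')
echo "$patched"

# 3) PATCH the merged document back (commented out; verify the payload first):
# curl -k -s -u "$AUTH" -X PATCH -H 'Content-Type: application/json' \
#   --data-raw "$patched" "https://$VAST_IP/api/latest/clusters/1/auditing/"
```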
Auditing can be enabled on specific view policies, allowing for targeted auditing of user activities. Administrators have the flexibility to configure which activities are audited per View Policy.
To audit all views and collect all user activities (global auditing), navigate to Settings -> Auditing -> Global Baseline Audit Settings and make sure all settings are enabled.
3.2 VAST - Configure NFS Protocol View
Now that auditing is enabled, VAST will track all user activity.
In order to read the VAST audit data on our own systems, we will create an NFS export to read the audit data from. In order to create an NFS export, we need to create a VAST View on that path.
3.2.1 Create a View Policy
To create a View, first create a View Policy.
Navigate to: Element Store -> View Policies, and select Create Policy.
In the General Tab:
- Tenant: Default
- Name:
- Security Flavor: NFS
- VIP Pools: Any Virtual IP Pool can be chosen, but please take note of the name for later.
- Group Membership Source: Client or Client And Providers
If a Virtual IP Pool does not yet exist, it can be created under Network Access -> Virtual IP Pools
The VIP Pool must be assigned the PROTOCOLS role. The range can be any valid IP range in the network.
In the Host-Based Access tab:
NFS:
- Add '*' to No Squash by clicking Add New Rule
- Add your ECA IPs to Read Only by clicking Add New Rule
- Root Squash, All Squash and Read/Write should be empty
- Default values can be used for everything else in the View Policy.
3.2.2 Create a View
Now, we need to create a View.
Navigate to Element Store -> View
Select Create View
VAST will display the window for creating/adding a new view.
In the General Tab:
- Path: Should be the same as the audit directory (i.e. /.vast_audit_dir). Use the complete path wherever applicable.
- Protocols: NFS
- Policy name: should be the View Policy created in the previous step.
- NFS alias: choose an alias for the NFS export (optional)
- Use default values for everything else.
- There is no need to enable the 'create directory' value.
This concludes the setup needed on the VAST cluster itself. We will now proceed to the configuration needed on Eyeglass and ECA.
Step 4 - Eyeglass and ECA configuration
4.1 - Add cluster to Eyeglass
Make sure you've added the appropriate licenses according to the instructions in Step 2.
Select Add Managed Device from the Eyeglass Main Menu at the bottom left corner.
Fill the IP Address of the VAST Data, as well as the username and password.
Click Submit to continue. Eyeglass will display a confirmation when the job is successfully submitted for inventory collection.
To see the cluster just added, return to the Eyeglass desktop and select Inventory View.
Inventory View displays the cluster just added in the list of Managed Devices.
4.2 - Enable functionality on ECA
- SSH into the primary ECA node using an SSH client of your choosing.
- Run the command to set up keyless SSH for the ecaadmin user to manage the cluster:
  ecactl components configure-nodes
- Bring down the ECA cluster before editing the configuration file:
  ecactl cluster down
- Edit the configuration file /opt/superna/eca/eca-env-common.conf using the vi text editor:
  vi /opt/superna/eca/eca-env-common.conf
- In the vi editor, add the following lines to enable VAST support on ECA:
  export TURBOAUDIT_VAST_ENABLED=true
  export VAST_LOG_MOUNT_PATH="/opt/superna/mnt/vastaudit"
- Change the automount line to true:
  export STOP_ON_AUTOMOUNT_FAIL=true
- After making these changes, save the file and exit the editor.
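After these edits, the VAST-related portion of /opt/superna/eca/eca-env-common.conf should contain lines like the following (values taken from the steps above):

```shell
# VAST-related settings in /opt/superna/eca/eca-env-common.conf
export TURBOAUDIT_VAST_ENABLED=true
export VAST_LOG_MOUNT_PATH="/opt/superna/mnt/vastaudit"
export STOP_ON_AUTOMOUNT_FAIL=true   # stop cluster up if the audit mount fails
```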
4.3 - Mount the Audit Path to the ECA Node
To set up the NFS mount for accessing audit data from VAST on the primary ECA node (ECA Node 1), follow these steps:
Run the following commands on ECA Node 1. Note the items in <angle brackets>, which must be replaced with the appropriate information.
If this is your first time following the installation guide on an ECA Node 1 cluster, use the provided command. However, if the command has already been executed, running it again may overwrite existing configurations. Follow the next steps to verify the current file configuration before proceeding.
ecactl cluster exec "mkdir -p /opt/superna/mnt/vastaudit/<VAST_cluster_name>/<VAST_cluster_name>"
echo -e "/opt/superna/mnt/vastaudit/<VAST_cluster_name>/<VAST_cluster_name> --fstype=nfs <vip_pool_ip>:/<audit_dir_name>" >> /opt/superna/eca/data/audit-nfs/auto.nfs
For vip_pool_ip, select any IP within the range of the VIP pool.
This command creates the necessary directory structure and adds NFS mount configuration to the auto.nfs file.
To check the file configuration, use a command utility or text editor to read the contents of /opt/superna/eca/data/audit-nfs/auto.nfs
- Push the configuration to all ECA nodes:
  ecactl cluster push-config
- Restart the automount service:
  ecactl cluster exec "sudo systemctl restart autofs"
Here is an example of a correct auto.nfs file configuration:
# Superna ECA configuration file for Autofs mounts
# Syntax:
# /opt/superna/mnt/audit/<GUID>/<NAME> --fstype=nfs,nfsvers=3,ro,soft <FQDN>:/ifs/.ifsvar/audit/logs
#
# <GUID> is the VAST cluster's GUID
# <NAME> is the VAST cluster's name
# <FQDN> is the SmartConnect zone name of a network pool in the VAST cluster's System zone.
/opt/superna/mnt/vastaudit/VASTEMEA/VASTEMEA --fstype=nfs 10.252.30.1:/.vast_audit_dir
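Before restarting the cluster, each entry can be sanity-checked for the expected three-field shape (absolute mount point, fstype option, server:/export). This is an informal check, not a Superna tool; it runs against a scratch copy of the example line above, and on a real ECA node you would point AUTO_NFS at /opt/superna/eca/data/audit-nfs/auto.nfs instead.

```shell
# Informal format check for auto.nfs entries; the line below is the
# documentation's example, written to a scratch file for illustration.
AUTO_NFS=/tmp/auto.nfs.example
echo "/opt/superna/mnt/vastaudit/VASTEMEA/VASTEMEA --fstype=nfs 10.252.30.1:/.vast_audit_dir" > "$AUTO_NFS"

# Non-comment lines should have: absolute mount point, --fstype=nfs..., server:/path
awk '!/^#/ && NF {
  if ($1 !~ /^\// || $2 !~ /^--fstype=nfs/ || $3 !~ /:\//) bad++
} END { exit bad ? 1 : 0 }' "$AUTO_NFS" && echo "auto.nfs entries look well-formed"
```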
Finally, we need to restart the cluster.
ecactl cluster down
ecactl cluster up
After this, you should be able to raise VAST events.
Continue to the following articles to configure Security Guard for VAST and Recovery Manager for VAST.
Qumulo Installation and Configuration
Deploy the most recent Eyeglass and ECA OVF. Download and install the Qumulo installer for each, and run the installation. The current supported versions and links to the support portal for download are below:
Package | Link |
---|---|
Eyeglass OVF | Use Support Portal for download |
ECA OVF | Use Support Portal for download |
Eyeglass Installer | Use Support Portal for download |
ECA Installer | Use Support Portal for download |
Prerequisites - Qumulo
The Security Guard appliance requires a dedicated AD user account and a network share where that AD user has the necessary access permissions. The network share is created automatically.
Eyeglass must be operational on the cluster.
Active Directory configuration
Set USERRESOLUTION_SERVICE to true to store user IDs as SIDs instead of usernames in the events.
Data Security Edition requires the following Active Directory accounts to function properly:
- AD account for name resolution service
- Security Guard
- RoboAudit (not available yet, coming soon)
Account | Permissions | Command |
---|---|---|
AD account for name resolution service | This account is solely used to query Active Directory for retrieving a list of AD users to display correctly in the events list. In most environments, being part of the "Authenticated Users" group is sufficient for basic querying. The user should have at least read permissions on the user objects within the AD. | igls adv adserver set --server=<AD.domain> --basedn=CN=Users,DC=<distinguished_name>,DC=<domain_controller> --logindn=CN=Administrator,CN=Users,DC=<distinguished_name>,DC=<domain_controller> --domain=<AD.domain> --loginhost=<ad_server_ip> --ssl=false --port=<port_number> --password=<> |
Security Guard - igls-securityguard | Grant full permissions to a dedicated share for the Security Guard. | New-SmbShare -Name "securityguard" -Path "<path>" -FullAccess "DOMAIN\igls-securityguard" |
Eyeglass Configuration
Licensing
Log in to Superna Eyeglass and open License Management, where Qumulo type licenses are displayed.
Add a Qumulo type license to the system.
In order for the UI functionality to be displayed, a Qumulo license must be added via the Eyeglass UI. The License Devices tab displays all devices added to the system.
Add a Qumulo cluster via the Eyeglass UI
When the cluster is successfully added, the confirmation window will appear.
Open the Jobs menu to check Running Jobs. Wait until the add job is complete, and validate that the cluster can be browsed in the inventory view:
ECA Configuration
Configure Active Directory on Eyeglass
See here for a guide on Eyeglass CLI Commands
Enable Qumulo functionality on ECA
Add the following parameter to /opt/superna/eca/eca-env-common.conf before cluster up:
export TURBOAUDIT_QM_SERVER_ENABLED=true
Configure the following setting in /opt/superna/eca/eca-env-common.conf to start in Ransomware Only mode:
export RSW_ONLY_CFG=true
After setting up Easy Auditor, this setting will need to be changed to false. Please review your specific requirements to determine if a Ransomware Only configuration is necessary for your environment.
Configure as false to continue cluster up even if no NFS mount (expected because Qumulo uses Syslog):
export STOP_ON_AUTOMOUNT_FAIL=false
Add Eyeglass IP and API token:
export EYEGLASS_LOCATION=
export EYEGLASS_API_TOKEN=
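Taken together, the Qumulo-related additions to /opt/superna/eca/eca-env-common.conf from this section look like the following (the IP is a placeholder; the token is left elided as in the steps above):

```shell
# Qumulo-related settings in /opt/superna/eca/eca-env-common.conf
export TURBOAUDIT_QM_SERVER_ENABLED=true
export RSW_ONLY_CFG=true              # change to false after setting up Easy Auditor
export STOP_ON_AUTOMOUNT_FAIL=false   # no NFS mount expected; Qumulo uses syslog
export EYEGLASS_LOCATION=192.0.2.10   # placeholder Eyeglass IP
export EYEGLASS_API_TOKEN=
```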
Kafka Additional Memory
Additional memory needs to be allocated to the Kafka docker container.
Do the following:
- SSH to ECA1 (user: ecaadmin, password: 3y3gl4ss).
- Open the docker-compose.overrides.yml file for editing:
  vim /opt/superna/eca/docker-compose.overrides.yml
- Add the following lines. IMPORTANT: Maintain the spacing at the start of each line.
version: '2.4'
#services:
# cadvisor:
# labels:
# eca.cluster.launch.all: 1
services:
kafka:
mem_limit: 2048MB
mem_reservation: 2048MB
memswap_limit: 2048MB
- Save changes with ESC, then :wq!
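Because YAML is whitespace-sensitive, a quick grep can confirm the spacing survived the edit. This sketch runs against a scratch copy of the override shown above; on the ECA node you would check /opt/superna/eca/docker-compose.overrides.yml instead.

```shell
# Scratch copy reproducing the override from the step above; the real file
# lives at /opt/superna/eca/docker-compose.overrides.yml.
f=/tmp/docker-compose.overrides.example.yml
cat > "$f" <<'EOF'
version: '2.4'
services:
  kafka:
    mem_limit: 2048MB
    mem_reservation: 2048MB
    memswap_limit: 2048MB
EOF
# Service names must be indented two spaces, their settings four:
grep -q '^  kafka:$' "$f" && grep -q '^    mem_limit: 2048MB$' "$f" \
  && echo "indentation OK"
```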
Zookeeper Retention
We will implement the following changes to prevent zk-ramdisk exhaustion. When the zk-ramdisk reaches 100% utilization, event processing halts.
Do the following:
- SSH to ECA1 (user: ecaadmin, password: 3y3gl4ss).
- Open the Zookeeper configuration template for editing:
  vim /opt/superna/eca/conf/zookeeper/conf/zoo.cfg.template
- Add the following configurations to the bottom of the file:
snapCount=1000
preAllocSize=1000
- Save changes with ESC, then :wq!
Cron Jobs
Cron job needs to be created to restart the fastanalysis docker container on a schedule. Do the following:
- SSH to ECA1 (user: ecaadmin, password: 3y3gl4ss).
ecactl cluster exec "sudo -E USER=ecaadmin ecactl components restart-cron set fastanalysis 0 0,6,12,18 \'*\' \'*\' \'*\'"
- Validate cron job added:
ecactl cluster exec 'cat /etc/cron.d/eca-*'
Cluster up from ECA1 (must be done before configuring auditing):
ecactl cluster up
Qumulo Configuration
To launch Qumulo, use the IP address or open it from the Inventory View.
Add ECA node 2 as syslog consumer
On the Qumulo interface, go to Cluster -> Audit.
Enter the IP address of ECA node 2, and save.