Version: 2.8.3

Installation

Overview

This guide is designed to help IT professionals successfully install and initially configure the Superna Data Security Edition solution, ensuring that data is protected and systems are resilient.

The solution is designed to be flexible and adaptable, offering integration across multiple platforms, including on-premises, cloud, and hybrid cloud setups.

Installation types

info

We are in the process of rebuilding our documentation.

For installation instructions for PowerScale and AWS, visit the legacy documentation portal.

Platforms documented here:

  • VAST Data
  • Qumulo

If you encounter any issues during the installation or configuration process, Superna support is available to assist you. Please reach out to our support team for guidance and troubleshooting.


Before you begin

The installation process involves:

  1. Downloading and deploying virtual machines
  2. Configuring platform-specific settings
  3. Initializing and configuring Superna appliances
  4. Setting up additional features like Security Guard

System requirements

Before you begin, ensure you have the following:

  • Valid licenses for Superna solutions.
  • An account named "superna" in VAST with full administrative permissions. This account is required to install the Eyeglass and ECA components.
  • A dedicated AD user account for the Security Guard feature.
  • A network share accessible by the AD user.

Step 1 - Download and deploy virtual machines

The Core Agent Appliance (formerly Eyeglass) must be installed and operational on the cluster.

For security reasons, download links are not publicly available. Access them through our support site here.

To start, sign in using your Superna Support credentials.

Superna Support Sign-In page

Once you enter the Superna support site, scroll down to display the links to the latest versions of Superna installation files.

If you're doing a fresh installation, select Download VM Install Files.

For appliances to be hosted with VMware, select Download OVF Installer.

After your selection, you'll be prompted to enter your email and accept the Subscription Terms and Conditions to continue the installation process.

Click the link to download the necessary installers for your solution.

Download Eyeglass DR Edition

1.2 - Deploy the appliances

warning

You must deploy two separate machines: one for the Core Agent Appliance (formerly Eyeglass) and one for the Extended Cluster Appliance.

Unzip the download package on a machine with access to vSphere. When deploying the OVF template, select both the .ovf and .vmdk files.

Deploy OVF template

Click GO TO DOWNLOADS in VMware vSphere Hypervisor (ESXi). This opens the Download Product page:

Select the required VM settings for VM name and folder, compute resource, datastore, and networking. Complete the networking section as requested.
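
If you prefer the command line over the vSphere UI, the same OVF can typically be deployed with VMware's ovftool utility. The sketch below is illustrative only; the OVF file name, VM name, datastore, port group, and vCenter address are hypothetical placeholders for your environment.

# Hypothetical example: deploy the downloaded OVF with VMware ovftool.
# Replace the file name, datastore, network, and vCenter/ESXi address
# with values from your own environment.
ovftool --acceptAllEulas --diskMode=thin \
  --name=superna-eyeglass \
  --datastore=datastore1 \
  --network="VM Network" \
  ./eyeglass.ovf \
  vi://administrator@vcenter.example.com/Datacenter/host/Cluster

Remember that two separate appliances must be deployed, so the same procedure is repeated for the ECA appliance with its own OVF file and VM name.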

1.3 - Login to the Superna Dashboard

Access the Eyeglass Web UI via https://<Eyeglass IP address>.

Eyeglass UI Sign In
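
If the sign-in page does not load, a quick reachability check from another machine on the network can help rule out basic connectivity issues. This is an optional sanity check; -k is used because a fresh appliance presents a self-signed certificate.

# Optional: confirm the Eyeglass Web UI answers over HTTPS (self-signed certificate expected).
curl -k -I https://<Eyeglass IP address>/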

Login using the default credentials provided for new deployments.

Superna Dashboard


Step 2 - Upload licenses

2.1 - Retrieve License Keys

Login to the Superna Support Desk and submit your license request using the Appliance ID and Transaction Token from your license email.

note

The Appliance ID and Transaction Token must be entered exactly as shown in the license email, with all dashes and without any leading or trailing spaces.

Ex. EMC-xxx-xxx-xxx-xxx

Download the provided zipped license file.

warning

Do not unzip the license file! You will upload the .zip file in the next step.

2.2 - Upload License File

Navigate to License Management > Manage Licenses > Browse

Manage Licenses Browse

Upload the zipped license file and accept the Eyeglass EULA.


Step 3 - VAST Platform configuration

To support Data Security Edition, some configuration must first be applied on the VAST platform itself.

First we will enable auditing, then we will create a VAST view that will allow the Extended Cluster Agent to access audit information through NFS.

3.1 - Enable auditing

Auditing is essential for detecting suspicious activities.

Open the VAST UI.

Navigate to: Settings -> Auditing

Navigate to Auditing

Add a name for the audit directory (default .vast_audit_dir is fine) and add the root user to Read-access Users.

Name Audit Directory

Auditing can be enabled on specific view policies, allowing for targeted auditing of user activities. Administrators have the flexibility to configure which activities are audited per View Policy.

To audit all views and collect all user activities (global auditing), navigate to Settings -> Auditing -> Global Baseline Audit Settings and make sure all settings are enabled.

Global Auditing

3.2 - Configure NFS Protocol View

Now that auditing is enabled, VAST will track all user activity.

To read the VAST audit data from the Superna appliances, we will create an NFS export for the audit directory. Creating an NFS export requires first creating a VAST View on that path.

3.2.1 Create a View Policy

To create a View, first create a View Policy.

Navigate to: Element Store -> View Policies, and select Create Policy.

VAST Create Policy

In the General Tab:

  • Tenant: Default
  • Name: choose a name for the View Policy (you will select it by name when creating the View)
  • Security Flavor: NFS
  • VIP Pools: Any Virtual IP Pool can be chosen, but please take note of the name for later.
  • Group Membership Source: Client or Client And Providers

info

If a Virtual IP Pool does not yet exist, it can be created under Network Access -> Virtual IP Pools

The VIP Pool must be assigned the PROTOCOLS role. The range can be any valid IP range in the network.

In the Host-Based Access tab:

NFS:

  • Add your ECA IPs to No Squash by clicking Add New Rule
  • Add your ECA IPs to Read Only by clicking Add New Rule
  • Root Squash, All Squash and Read/Write should be empty
  • Default values can be used for everything else in the View Policy.

3.2.2 Create a View

Now, we need to create a View.

Navigate to Element Store -> View

Select Create View

Create View

VAST will display the window for creating/adding a new view.

Add View

In the General Tab:

  • Path: should be the same as the audit directory (i.e. /.vast_audit_dir)
  • Protocols: NFS
  • Policy name: should be the View Policy created in the previous step.
  • NFS alias: choose an alias for the NFS export (optional)
  • Use default values for everything else.
  • There is no need to enable the 'create directory' value.

This concludes the setup needed on the VAST platform itself. We will now proceed to the configuration needed on Eyeglass and ECA.
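
Before moving on, it can be worth confirming from an ECA node that the new export is visible over NFS. This is an optional sanity check using standard NFS client tools; <vip_pool_ip> and <audit_dir_name> are the VIP pool IP and audit directory noted earlier, and the commands assume root or sudo access on the node.

# Optional: list the NFS exports published by the VIP pool IP.
showmount -e <vip_pool_ip>

# Optional: attempt a read-only NFSv3 test mount, then unmount it.
sudo mount -t nfs -o ro,nfsvers=3 <vip_pool_ip>:/<audit_dir_name> /mnt
ls /mnt
sudo umount /mnt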


Step 4 - Eyeglass and ECA configuration

4.1 - Add cluster to Eyeglass

note

Make sure you've added the appropriate licenses according to the instructions in Step 2.

Select Add Managed Device from the Eyeglass Main Menu at the bottom left corner.

Fill in the IP address of the VAST Data cluster, as well as the username and password.

Click Submit to continue. Eyeglass will display a confirmation when the job is successfully submitted for inventory collection.

To see the cluster just added, return to the Eyeglass desktop and select Inventory View.

Inventory View displays the cluster just added in the list of Managed Devices.

4.2 - Enable functionality on ECA

SSH into the primary ECA node using an SSH client of your choosing.

Before editing the configuration file, bring down the ECA cluster with the command ecactl cluster down. Next, open the file /opt/superna/eca/eca-env-common.conf in the vi text editor. You can do this by entering the command vi /opt/superna/eca/eca-env-common.conf.

In the vi editor, add the following lines to enable VAST support on ECA:

export TURBOAUDIT_VAST_ENABLED=true
export VAST_LOG_MOUNT_PATH="/opt/superna/mnt/vastaudit"

After making these changes, save the file and exit the editor. Finally, bring the ECA cluster back up by executing ecactl cluster up.
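
To confirm the two variables were saved as expected, a quick check of the file on the primary ECA node can be run at any point, for example:

# Confirm both VAST settings are present in the ECA configuration file.
grep -E 'TURBOAUDIT_VAST_ENABLED|VAST_LOG_MOUNT_PATH' /opt/superna/eca/eca-env-common.conf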

4.3 - Mount the Audit Path to the Secondary ECA Node

To set up the NFS mount for accessing audit data from VAST on the secondary ECA node (ECA Node 2), follow these steps:

Run the following commands in your ECA Node 2. Note the items in <angle brackets>, which must be replaced with the appropriate information.

note

If this is the first time this installation guide has been followed on this ECA cluster, use the commands as provided. If the commands have been run before, running them again may modify the existing configuration (the echo command appends another entry to auto.nfs). Continue below for steps to check the file contents.

ecactl cluster exec "mkdir -p /opt/superna/mnt/vastaudit/<VAST_cluster_name>/<VAST_cluster_name>"
echo -e "/opt/superna/mnt/vastaudit/<VAST_cluster_name>/<VAST_cluster_name> --fstype=nfs <vip_pool_ip>:/<audit_dir_name>" >> /opt/superna/eca/data/audit-nfs/auto.nfs

For <vip_pool_ip>, select any IP within the range of the VIP pool created earlier.

These commands create the necessary directory structure and add the NFS mount configuration to the auto.nfs file.

To check the file configuration, use a utility or text editor such as less or nano to read the contents of /opt/superna/eca/data/audit-nfs/auto.nfs.

Here is an example of a correct auto.nfs file configuration:

# Superna ECA configuration file for Autofs mounts
# Syntax:
# /opt/superna/mnt/audit/<GUID>/<NAME> --fstype=nfs,nfsvers=3,ro,soft <FQDN>:/ifs/.ifsvar/audit/logs
#
# <GUID> is the Isilon cluster's GUID
# <NAME> is the Isilon cluster's name
# <FQDN> is the SmartConnect zone name of a network pool in the Isilon
# cluster's System zone.
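
Beneath that comment header, the entry appended by the echo command above would look something like the following line. The cluster name and VIP pool IP shown here are hypothetical; the audit directory uses the default name from Step 3.

# Hypothetical VAST entry (placeholder values, not defaults):
/opt/superna/mnt/vastaudit/vastcluster01/vastcluster01 --fstype=nfs 10.20.30.40:/.vast_audit_dir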

Finally, we need to restart the cluster.

ecactl cluster down
ecactl cluster up

After this, we should be able to raise VAST RSWEvents.
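
To confirm the automount works end to end, accessing the mount path from ECA Node 2 should trigger autofs and list the VAST audit files. A minimal check, reusing the placeholders from above:

# Accessing the path should trigger the autofs mount and list the audit log files.
ls /opt/superna/mnt/vastaudit/<VAST_cluster_name>/<VAST_cluster_name>

# The mount should then also appear in the mount table.
mount | grep vastaudit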

Continue to the following articles to configure Security Guard for VAST and Recovery Manager for VAST.