
DFS Failover Configuration Prerequisites

Introduction

This article covers the requirements for performing a DFS (Distributed File System) Failover with Superna DR. Review this guide before attempting to execute a DFS Failover.

Requirements

Cluster Version Requirements

Clusters participating in a Microsoft DFS Mode Failover must be running a PowerScale OneFS version supported for this feature. See the Feature Release Compatibility matrix in the release notes specific to your appliance version.

SyncIQ Policy Requirements - Blocks Failover

For a successful Microsoft DFS Mode failover, the Configuration Replication Job linked to the associated SyncIQ Policy must be enabled.

Note: The failover will be blocked if either the SyncIQ Policy is disabled in PowerScale OneFS or the associated Configuration Replication Job is disabled.
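As a quick sketch, the SyncIQ policy state can be confirmed from the OneFS CLI (the policy name below is a placeholder):

  isi sync policies list
  isi sync policies view MyDFSPolicy

The state of the associated Configuration Replication Job is checked in the Eyeglass Jobs window.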

Failover Target Cluster Requirements - Blocks Failover

For a successful Microsoft DFS Mode failover with Superna DR Edition, ensure that the Eyeglass appliance has IP network access to the target PowerScale OneFS Cluster, with all necessary ports open.
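A simple reachability check from the Eyeglass appliance is to probe the OneFS Platform API port on the target cluster (commonly TCP 8080; confirm the full required ports list for your release). The hostname below is a placeholder:

  nc -vz target-cluster.example.com 8080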

Eyeglass Quota Job Requirements

For a Microsoft DFS Mode failover with Superna DR Edition, there are no Quota Job state requirements. Quotas will be failed over whether the Quota Job is in the Enabled or Disabled state.

Active Directory

  • AD clients must have both paths of the folder target cached after the failover
  • AD clients must be able to contact a Domain Controller
  • UNC paths to mount the DFS folder must use the DFS UNC syntax: \\domain_name\dfsrootname\dfs_folder_name (see the example after this list)
  • SmartConnect zone names in the UNC targets must be delegated and resolvable by clients
  • SmartConnect zone name SPNs for folder target UNC paths must be correctly registered in Active Directory (AD)
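For example, a client maps the DFS folder with a domain-based UNC path of the following form (all names are placeholders), and can inspect or flush its cached referrals with the Windows dfsutil tool:

  \\example.corp\dfsroot\engineering

  dfsutil /pktinfo
  dfsutil /pktflush

dfsutil /pktinfo displays the client's cached DFS referrals; dfsutil /pktflush clears the cache so the next access requests a fresh referral.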

Windows OS Compatibility

The following Windows client and server operating systems have been tested for compatibility:

  • Windows Client OS: Windows 7, 8.x, 10
  • Server OS: Windows Server 2008, 2012, 2012 R2, 2016
  • Samba Version: Samba 4.8.3 (configured for Microsoft DFS mounts and dual referral paths)
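For context, a Samba MS-DFS root with a dual referral path can be configured along these lines. This is a minimal sketch using standard Samba msdfs options; the share, path, and cluster names are placeholders:

  [global]
      host msdfs = yes

  [dfsroot]
      path = /srv/dfsroot
      msdfs root = yes

  # A DFS link with two referral targets is created as a special symlink
  # inside the share path (quoted to protect the backslashes in the shell):
  # ln -s 'msdfs:clusterA\share1,clusterB\share1' folder1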

Release Note

PowerScale OneFS 8 includes a CA (Continuous Availability) compatibility mode feature known as persistent file handles, which is required for Continuous Availability Mode support with PowerScale OneFS. Windows servers in cluster mode will only advertise this capability when using CA mode.

Issue

An issue arises in PowerScale OneFS 8 where this capability is still advertised to clients even when not using CA mode. This can trigger the issue described in the KB article linked below:

"The computer can take more time to determine whether a shared folder is available if there is a failover of the shared folder."

Resolution

This issue only affects client machines with no active connection to PowerScale OneFS shares mounted over DFS. A delay of up to one minute was observed when mounting the DFS root; actively mounted shares, or shares mapped directly to DFS folders, did not experience this delay.

  • Windows 10 and Windows Server 2012 have a registry setting, described in the following link, that corrects this behavior.
  • Windows 8 and Windows Server 2012 require a hotfix to be applied.

DFS Feature Changes and Share Names Used on DFS-Synced Shares

To streamline DFS mode with normal configuration sync, the feature has been enhanced as shown in the table below. This change introduces new options for customers requiring access to DR cluster data in a read-only state, while preserving DFS mode functionality. It also reduces the risk of issues during failover.

A new feature to hide shares on the DR cluster or read-only cluster is now available. After switching the tag, the next configuration sync cycle will apply the changes to the DR cluster share names.

Eyeglass 1.4.x and earlier
  • DFS Mode Sync Behavior: Delete shares of the same name on the DR cluster.
  • DFS Mode Failover Behavior: Create shares on the DR cluster and delete them on the source cluster.
  • Prefix on Shares: N/A
  • Post-fix on Shares for Security (DR Cluster): N/A

Eyeglass 1.5 and beyond
  • DFS Mode Sync Behavior: Create shares on the DR cluster with a prefix added to the share name.
  • DFS Mode Failover Behavior: Rename shares on the DR cluster, and rename source cluster shares with the prefix added to the share name.
  • Prefix on Shares: Default prefix is igls-dfs-. You can change this by editing the tag in /opt/superna/sca/data/system.xml:
    <dfsshareprefix>igls-dfs-</dfsshareprefix>
    WARNING: Changing this tag will NOT automatically clean up shares with the old prefix. You must manually delete old shares; new shares will be created with the updated tag.
  • Post-fix on Shares for Security (DR Cluster): N/A

Eyeglass 1.6 and beyond
  • DFS Mode Sync Behavior: Same as 1.5.
  • DFS Mode Failover Behavior: Same as 1.5.
  • Prefix on Shares: N/A
  • Post-fix on Shares for Security (DR Cluster): New feature: the source (active) cluster share can remain visible, and a post-fix $ can be applied to hide the share on the DR cluster after failover. To enable, edit the /opt/superna/sca/data/system.xml file and change the <dfssharesuffix> tag to $. Note: manual deletion of old DFS renamed shares is required.
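As a minimal sketch, assuming both tags live in /opt/superna/sca/data/system.xml as described above (surrounding elements omitted; the exact location within the file may vary), the relevant entries would look like this:

  <dfsshareprefix>igls-dfs-</dfsshareprefix>
  <dfssharesuffix>$</dfssharesuffix>

After editing, the next configuration sync cycle applies the new names; remember that shares created with the old prefix or suffix must be deleted manually.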
Info: When 1.5 DFS mode is enabled, all shares found on the source will be created on the target cluster with the prefix applied to the share name. If upgrading from 1.4.x DFS mode, no action is required; shares will be created using the 1.5 logic.

DFS Fast Failover Mode - Superna Eyeglass 1.5.2 and beyond

  • 1.5.2 and later: For DFS Failover (Microsoft DFS Mode or a DFS-enabled job in an Access Zone Failover), the share renaming step occurs after the Data sync step and before the Policy Failover step (Allow Writes, Resync Prep). This minimizes the time DFS clients remain directed to the failover source cluster once the failover has started, and ensures DFS clients are already directed to the target cluster when the file system becomes writeable.
  • 1.6.0 and later: Parallelized rename. The rename process can now use up to 10 threads to rename 10 shares in parallel across all policies in a failover job, redirecting DFS clients up to 10x faster. Failovers with large share or policy counts benefit the most. (A sketch of this pattern follows below.)
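The parallel rename described above is a standard bounded thread pool pattern. The Python sketch below illustrates that pattern only; it is not Eyeglass's implementation, and rename_share is a hypothetical stand-in for the per-share rename call:

  from concurrent.futures import ThreadPoolExecutor, as_completed

  def rename_share(cluster, old_name, new_name):
      # Hypothetical stand-in: replace with the real per-share rename call.
      raise NotImplementedError

  def rename_all(cluster, renames, max_workers=10):
      # Submit every (old, new) rename; at most max_workers run at once,
      # mirroring the "up to 10 shares in parallel" behavior described above.
      failures = []
      with ThreadPoolExecutor(max_workers=max_workers) as pool:
          futures = {pool.submit(rename_share, cluster, old, new): (old, new)
                     for (old, new) in renames}
          for future in as_completed(futures):
              old, new = futures[future]
              try:
                  future.result()
              except Exception as exc:
                  failures.append((old, new, exc))
      return failures  # an empty list means every share renamed cleanly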
Note (Version 1.5.2)

During failover, clients with open files will now receive a read-only error message if they attempt to save data once redirection has occurred but before the target is writeable. This is expected and provides user feedback that writes will not be successful. Each application may return a read-only error differently.

DFS Failover Enhancements

  • 1.9 and later: For DFS Failover (Microsoft DFS Mode or a DFS-enabled job in an Access Zone Failover), the following Share Rename Step enhancement has been made:
      • If share renaming fails for all shares on a cluster, the failover status is marked as error and the failover is stopped. This aborts the failover and leaves the data accessible on the source cluster.
      • If share renaming fails for only some shares, the failover status is marked as warning and manual recovery of the failed shares is required.
Summary

This enhancement eliminates the possibility of data access outage from the share rename step. It ensures that if some shares are renamed successfully, the failover will continue.

Recommendations for Configuration

The following are highly recommended to ensure that all automated Eyeglass Microsoft DFS Mode failover steps can be completed.

SyncIQ Policy Recommendations

  • The SyncIQ Job in PowerScale OneFS should complete without errors and display a green status.
Impact

Failover will be blocked if SyncIQ policies are in an error state on the cluster. Eyeglass will attempt to run the policy, which will fail. Correct this on the PowerScale OneFS cluster. Data loss may occur due to unreplicated data.

  • PowerScale OneFS does not support SyncIQ Policies with exclude (or include) settings for failover.
Impact

This configuration is not supported for failback.

  • PowerScale OneFS best practices recommend that SyncIQ Policies use the Restrict Source Nodes option, which requires an IP pool created with the target SmartConnect zone.
Impact

If this option is not used, the subnet pool used for data replication is not controlled, meaning that all nodes in the cluster can replicate data from all IP pools. This complicates bandwidth management and requires all nodes to have WAN access.
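These recommendations can be spot-checked from the OneFS CLI; a quick sketch (output fields vary by OneFS release):

  isi sync policies list     # confirm the policy exists and is enabled
  isi sync reports list      # confirm the most recent job completed without errors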

Next Steps

From this point, proceed to the DFS Failover Configuration Procedures article to begin the configuration.