Version: 2.12.0

Machine Learning Module Setup

Configuring the ML Module is essential for effective threat hunting. This guide walks you through setting up the environment and configuring the necessary components for data exfiltration detection.

Installation Tracking

You can monitor installation progress at any point by running kubectl -n seed-ml get pods, which displays the current status of all components being deployed.
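
For example, a check during installation might look like this (pod names are illustrative; the exact set depends on which components have been deployed so far):

kubectl -n seed-ml get pods
NAME           READY   STATUS    RESTARTS   AGE
kafka-0        1/1     Running   0          5m
clickhouse-0   0/1     Pending   0          2m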

Setting Up the ML Module Environment

  1. Connect to the ML Module VM

    Connect to your ML Module VM using SSH:

    ssh <username>@<ML-module-VM-IP-address>
    info

    Replace <username> with your VM's username (typically ecadmin) and <ML-module-VM-IP-address> with your actual ML Module VM IP address.

  2. Install K3s

    Installing K3s sets up the K3s service along with essential utilities, including kubectl, crictl, k3s-killall.sh, and k3s-uninstall.sh. A kubeconfig file will be created at /etc/rancher/k3s/k3s.yaml. By default, K3s operates with root privileges. The option --write-kubeconfig-mode=644 ensures that the kubeconfig file is generated with read permissions for all users. Consider adjusting this setting if it raises any security concerns.

    Install K3s server with disabled Traefik and ServiceLB, and set kubeconfig permissions:

    curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='server' sh -s - --disable=traefik --disable=servicelb --write-kubeconfig-mode=644
    Expected Result

    This command will download and install K3s. You should see output similar to:

    [INFO]  Finding release for channel stable
    [INFO] Using v1.32.5+k3s1 as release
    [INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.32.5+k3s1/sha256sum-amd64.txt
    [INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.32.5+k3s1/k3s
    [INFO] Verifying binary download
    [INFO] Installing k3s to /usr/local/bin/k3s

    Configure kubectl to use the K3s kubeconfig file:

    mkdir -p ~/.kube && chmod 700 ~/.kube && cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
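
    To verify that kubectl can reach the cluster, list the nodes (the node name shown is illustrative):

    kubectl get nodes
    Expected Result

    NAME        STATUS   ROLES                  AGE   VERSION
    ml-module   Ready    control-plane,master   2m    v1.32.5+k3s1

    If the world-readable kubeconfig mentioned above is a concern, you can also tighten permissions on your copy:

    chmod 600 ~/.kube/config
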
  3. Configure Firewall for K3s Operation

    Update firewall rules for correct operation of K3s:

    sudo firewall-cmd --permanent --add-port=6443/tcp
    sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
    sudo firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16
    sudo firewall-cmd --reload
    Expected Result

    Each command should respond with success if properly executed.
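
    To double-check the configuration, you can list the open ports and trusted sources:

    sudo firewall-cmd --list-ports
    sudo firewall-cmd --zone=trusted --list-sources
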

  4. Install Git

    Install Git using the package manager:

    sudo zypper install git
    tip

    Accept the default options when prompted during the installation process.

  5. Install Helm

    Download and install Helm:

    curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | sh

    Add Bitnami Helm repository:

    helm repo add bitnami https://charts.bitnami.com/bitnami

    Add Kafka UI Helm repository:

    helm repo add kafka-ui https://provectus.github.io/kafka-ui-charts

    Add Apache Superset Helm repository:

    helm repo add superset https://apache.github.io/superset

    Add HAProxy Technologies Helm repository:

    helm repo add haproxytech https://haproxytech.github.io/helm-charts

    Update all Helm repositories:

    helm repo update
    Expected Result

    When successful, you should see output similar to:

    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "kafka-ui" chart repository
    ...Successfully got an update from the "haproxytech" chart repository
    ...Successfully got an update from the "superset" chart repository
    ...Successfully got an update from the "bitnami" chart repository
    Update Complete. ⚡Happy Helming!⚡
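
    You can also confirm that all repositories were registered (the list should include bitnami, kafka-ui, superset, and haproxytech):

    helm repo list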
  6. Add NFS Provisioner

    Add the NFS Provisioner Helm chart to enable ReadWriteMany (RWX) volumes in K3s; it provides an in-cluster NFS server and dynamic storage provisioning.

    Install NFS utilities on the host machine:

    sudo zypper install nfs-utils

    Add the NFS Ganesha Server and External Provisioner Helm repository:

    helm repo add nfs-ganesha-server-and-external-provisioner \
    https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner/

    Install the NFS server provisioner Helm chart:

    helm upgrade --install nfs-server nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner \
    -n seed-ml --create-namespace \
    --set storageClass.name=efs \
    --set persistence.size=8Gi
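
    To confirm the provisioner is ready, check that the efs storage class was created and, optionally, exercise it with a small test claim (the PVC name test-rwx below is only an example):

    kubectl get storageclass efs

    cat << EOF | kubectl -n seed-ml apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-rwx
    spec:
      accessModes:
        - ReadWriteMany
      storageClassName: efs
      resources:
        requests:
          storage: 100Mi
    EOF

    kubectl -n seed-ml get pvc test-rwx
    kubectl -n seed-ml delete pvc test-rwx
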
  7. Set Up Charts

    Clone the charts repository and check out the release you wish to deploy. When performing the checkout, it is essential to select the correct version: for example, to deploy version 0.2.1 you would set export RELEASE="0.2.1".

    Clone the charts repository (you'll be prompted for the username supernadev and the temporary access token provided to you):

    git clone https://github.com/supernadev/chart-seed-ml.git

    Set the desired release version:

    export RELEASE="0.8.0"

    Enter the repository, check out the tag for the release, and return to the previous directory:

    cd chart-seed-ml && git checkout v$RELEASE && cd ..
    Expected Result

    The tag and commit in your output will match the $RELEASE you set; the example below is from a v1.0.0 checkout:

    Note: switching to 'v1.0.0'.

    You are in 'detached HEAD' state. You can look around, make experimental
    changes and commit them, and you can discard any commits you make in this
    state without impacting any branches by switching back to a branch.

    If you want to create a new branch to retain commits you create, you may
    do so (now or later) by using -c with the switch command. Example:

    git switch -c <new-branch-name>

    Or undo this operation with:

    git switch -

    Turn off this advice by setting config variable advice.detachedHead to false

    HEAD is now at 959d899 Merged develop into delivery/v1.0.0
    tip

    Building dependencies may take a few minutes. Let each command finish before running the next.

    Rebuild the Kafka and Kafka UI chart dependencies:

    helm dependency build chart-seed-ml/charts/kafka
    helm dependency build chart-seed-ml/charts/kafka-ui

    Rebuild ClickHouse chart dependencies:

    helm dependency build chart-seed-ml/charts/clickhouse

    Rebuild PostgreSQL chart dependencies:

    helm dependency build chart-seed-ml/charts/postgres

    Rebuild Superset chart dependencies:

    helm dependency build chart-seed-ml/charts/superset
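
    If any build fails, you can inspect a chart's declared dependencies and their status with helm dependency list, for example:

    helm dependency list chart-seed-ml/charts/kafka
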
  8. Install HAProxy

    Install the HAProxy ingress controllers (haproxy-controller) in the K3s cluster.

    Create configuration file for the primary HAProxy controller:

    cat << EOF > haproxy-controller-values.yaml
    controller:
      service:
        type: NodePort
        nodePorts:
          http: 31080
          https: 31443
          stat: 31024
          admin: 31060
    EOF

    Install the primary HAProxy ingress controller:

    helm upgrade --install haproxy-ingress haproxytech/kubernetes-ingress --create-namespace --namespace haproxy-controller -f haproxy-controller-values.yaml
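
    To verify the controller is running and exposing the expected NodePorts, you can check its pods and service:

    kubectl -n haproxy-controller get pods
    kubectl -n haproxy-controller get svc
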

    Create the second configuration file for the Superset HAProxy controller:

    cat << EOF > haproxy-controller-superset-values.yaml
    controller:
      ingressClassResource:
        name: haproxy-superset
      ingressClass: haproxy-superset
      service:
        type: NodePort
        nodePorts:
          http: 30080
          https: 30443
          stat: 30024
          admin: 30060
    EOF

    Install a second haproxy-controller for Superset:

    helm upgrade --install haproxy-ingress-superset haproxytech/kubernetes-ingress --create-namespace --namespace haproxy-controller-superset -f haproxy-controller-superset-values.yaml
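
    Both controllers should now be registered as ingress classes. You can confirm this (the primary controller's class is typically named haproxy, the second one haproxy-superset as configured above):

    kubectl get ingressclass
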

Connect to ECA

Set up your ECA nodes to work with the ML module by configuring the necessary network connections, following the steps below.

  1. Log In to Main ECA Node

    Connect to your main ECA node using SSH:

    ssh <username>@<primary-ECA-node-IP>
    info

    Replace <username> with your ECA node's username (typically ecadmin) and <primary-ECA-node-IP> with your actual primary ECA node IP address.

    tip

    If you need help identifying your primary node, refer to the ECA Management Guide.

  2. Open the Firewall Settings File

    Open the network access control file using either vi or nano:

    vi /opt/superna/eca/scripts/eca_iptables.sh
    tip

    If you're more comfortable with nano, use this alternative command:

    nano /opt/superna/eca/scripts/eca_iptables.sh
    info

    For comprehensive network configuration information, consult the Network Configuration Guide.

  3. Add Connection Permission

    Locate rule #5 in the file and add the following line:

    sudo /usr/sbin/iptables -A $CHAIN -p tcp --dport 9092 -s x.x.x.x -j ACCEPT
    info

    Replace x.x.x.x with your ML server's actual IP address. This rule allows your ML module to connect to the ECA Kafka data service.

    tip

    For additional details about configuring connections between services, refer to the Connection Guide.

  4. Save Configuration

    Save your changes to the file:

    info
    • If using vi: Press ESC, then type :wq and press Enter.
    • If using nano:
      • Press Ctrl + O to save (Write Out)
      • Press Enter to confirm the filename
      • Press Ctrl + X to exit nano
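
    Optionally, confirm the new line is present before restarting:

    grep -e '--dport 9092' /opt/superna/eca/scripts/eca_iptables.sh
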
  5. Apply Your Changes

    warning

    The following commands will briefly disconnect services, so notify your team before proceeding.

    Restart the ECA system to apply your firewall changes:

    ecactl cluster down
    ecactl cluster up
    tip

    If you encounter any issues during this process, consult the Troubleshooting Guide for ECA operations assistance.
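
    Once the cluster is back up, you can confirm the rule is active (the grep below assumes the port from the rule added in step 3):

    sudo /usr/sbin/iptables -L -n | grep 9092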

After Upgrading ECA

If you've upgraded your ECA nodes, you'll need to recreate the IP table rules to maintain connectivity with the ML module. Follow the same steps outlined in the Connect to ECA section above to re-establish the firewall configuration.