Machine Learning Module Setup
Configuring the ML Module is essential for effective threat hunting. This guide walks you through setting up the environment and configuring the necessary components for data exfiltration detection.
Monitor your installation progress at any point by running kubectl -n seed-ml get pods, which displays the current status of all components being deployed.
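While components are starting, the output will look something like the following; the pod names and counts depend on the release you deploy, and the names below are only placeholders:

NAME           READY   STATUS    RESTARTS   AGE
kafka-0        1/1     Running   0          5m
clickhouse-0   1/1     Running   0          4m
postgres-0     1/1     Running   0          4m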
Setting Up the ML Module Environment
- Connect to the ML Module VM

Connect to your ML Module VM using SSH:

ssh <username>@<ML-module-VM-IP-address>

Info: Replace <username> with your VM's username (typically ecadmin) and <ML-module-VM-IP-address> with your actual ML Module VM IP address.
- Install K3s

Installing K3s sets up the K3s service along with essential utilities, including kubectl, crictl, k3s-killall.sh, and k3s-uninstall.sh. A kubeconfig file will be created at /etc/rancher/k3s/k3s.yaml. By default, K3s operates with root privileges. The option --write-kubeconfig-mode=644 ensures that the kubeconfig file is generated with read permissions for all users; consider adjusting this setting if it raises any security concerns.

Install the K3s server with Traefik and ServiceLB disabled, and set the kubeconfig permissions:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='server' sh -s - --disable=traefik --disable=servicelb --write-kubeconfig-mode=644
Expected result: this command downloads and installs K3s. You should see output similar to:

[INFO] Finding release for channel stable
[INFO] Using v1.32.5+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.32.5+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.32.5+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s

Configure kubectl to use the K3s kubeconfig file:

mkdir ~/.kube && chmod 700 ~/.kube && cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
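Before moving on, you can optionally confirm that K3s is up with two standard kubectl checks:

kubectl get nodes
kubectl -n kube-system get pods

The node should report a Ready status, and the kube-system pods should reach Running within a minute or two.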
- Configure Firewall for K3s Operation

Update the firewall rules for correct operation of K3s:

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
sudo firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16
sudo firewall-cmd --reload

Expected result: each command responds with success when properly executed.
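To double-check that the rules took effect, you can list the open ports and trusted sources with standard firewall-cmd queries:

sudo firewall-cmd --list-ports
sudo firewall-cmd --zone=trusted --list-sources

The output should include 6443/tcp and both cluster networks (10.42.0.0/16 and 10.43.0.0/16).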
- Install Git

Install Git using the package manager:

sudo zypper install git

Tip: Accept the default options when prompted during the installation process.
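You can confirm the installation succeeded by checking the installed version:

git --version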
- Install Helm

Download and install Helm:

curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | sh

Add the Bitnami Helm repository:

helm repo add bitnami https://charts.bitnami.com/bitnami

Add the Kafka UI Helm repository:

helm repo add kafka-ui https://provectus.github.io/kafka-ui-charts

Add the Apache Superset Helm repository:

helm repo add superset https://apache.github.io/superset

Add the HAProxy Technologies Helm repository:

helm repo add haproxytech https://haproxytech.github.io/helm-charts

Update all Helm repositories:

helm repo update

Expected result: when successful, you should see output similar to:

Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "kafka-ui" chart repository
...Successfully got an update from the "haproxytech" chart repository
...Successfully got an update from the "superset" chart repository
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⚡Happy Helming!⚡
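To confirm all four repositories are registered, you can list them:

helm repo list

The bitnami, kafka-ui, superset, and haproxytech entries should all appear with their URLs.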
- Add NFS Provisioner

Add the NFS Provisioner Helm chart to enable ReadWriteMany (RWX) volumes in K3s; it provides easy NFS server integration and storage creation.

Install the NFS utilities on the host machine:

sudo zypper install nfs-utils

Add the NFS Ganesha Server and External Provisioner Helm repository:

helm repo add nfs-ganesha-server-and-external-provisioner \
  https://kubernetes-sigs.github.io/nfs-ganesha-server-and-external-provisioner/

Install the NFS server provisioner Helm chart:

helm upgrade --install nfs-server nfs-ganesha-server-and-external-provisioner/nfs-server-provisioner \
  -n seed-ml --create-namespace \
  --set storageClass.name=efs \
  --set persistence.size=8Gi
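To verify the provisioner can actually serve RWX storage, you can create a throwaway claim against the efs storage class and check that it binds; the claim name rwx-test below is just an example:

cat << EOF | kubectl -n seed-ml apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwx-test
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: efs
  resources:
    requests:
      storage: 1Gi
EOF

kubectl -n seed-ml get pvc rwx-test
kubectl -n seed-ml delete pvc rwx-test

The claim's STATUS should change to Bound within a few seconds; delete it afterwards so it does not consume space.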
- Set Up Charts

Clone the charts repository and work with the correct release tag. When performing the checkout, it is essential to select the version you wish to deploy: for example, if you are deploying version 0.2.1, set the release version with export RELEASE="0.2.1".

Clone the charts repository (you'll be prompted for the username supernadev and the temporary token provided to you):

git clone https://github.com/supernadev/chart-seed-ml.git

Set the desired release version:

export RELEASE="0.8.0"

Navigate to the repository, check out the release tag, and return to the previous directory:

cd chart-seed-ml; git checkout v$RELEASE ; cd ..
Expected result: output similar to the following (the tag and commit will match the release you set):

Note: switching to 'v1.0.0'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false

HEAD is now at 959d899 Merged develop into delivery/v1.0.0

Tip: Building dependencies may take a few minutes. Each command should complete before proceeding to the next.
Rebuild the Kafka and Kafka UI chart dependencies:

helm dependency build chart-seed-ml/charts/kafka
helm dependency build chart-seed-ml/charts/kafka-ui

Rebuild the ClickHouse chart dependencies:

helm dependency build chart-seed-ml/charts/clickhouse

Rebuild the PostgreSQL chart dependencies:

helm dependency build chart-seed-ml/charts/postgres

Rebuild the Superset chart dependencies:

helm dependency build chart-seed-ml/charts/superset
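If you want to confirm a chart's dependencies were fetched correctly, helm dependency list shows their status, for example:

helm dependency list chart-seed-ml/charts/kafka

Each dependency should be reported with STATUS ok.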
- Install HAProxy

Install the haproxy-controller in Kubernetes.

Create the configuration file for the primary HAProxy controller:

cat << EOF > haproxy-controller-values.yaml
controller:
  service:
    type: NodePort
    nodePorts:
      http: 31080
      https: 31443
      stat: 31024
      admin: 31060
EOF

Install the primary HAProxy ingress controller:

helm upgrade --install haproxy-ingress haproxytech/kubernetes-ingress --create-namespace --namespace haproxy-controller -f haproxy-controller-values.yaml
Create the second configuration file for the Superset HAProxy controller:

cat << EOF > haproxy-controller-superset-values.yaml
controller:
  ingressClassResource:
    name: haproxy-superset
  ingressClass: haproxy-superset
  service:
    type: NodePort
    nodePorts:
      http: 30080
      https: 30443
      stat: 30024
      admin: 30060
EOF

Install a second haproxy-controller for Superset:

helm upgrade --install haproxy-ingress-superset haproxytech/kubernetes-ingress --create-namespace --namespace haproxy-controller-superset -f haproxy-controller-superset-values.yaml
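To check that both controllers deployed cleanly, you can inspect their namespaces:

kubectl -n haproxy-controller get pods,svc
kubectl -n haproxy-controller-superset get pods,svc

Each namespace should show a running ingress controller pod and a NodePort service exposing the ports configured above.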
Connect to ECA
Set up your ECA nodes to work with the ML module by configuring the necessary network connections, following the steps below.
- Log In to Main ECA Node

Connect to your main ECA node using SSH:

ssh <username>@<primary-ECA-node-IP>

Info: Replace <username> with your ECA node's username (typically ecadmin) and <primary-ECA-node-IP> with your actual primary ECA node IP address.

Tip: If you need help identifying your primary node, refer to the ECA Management Guide.
- Open the Firewall Settings File

Open the network access control file using either vi or nano:

vi /opt/superna/eca/scripts/eca_iptables.sh

Tip: If you're more comfortable with nano, use this alternative command:

nano /opt/superna/eca/scripts/eca_iptables.sh

Info: For comprehensive network configuration information, consult the Network Configuration Guide.
- Add Connection Permission

Locate rule #5 in the file and add the following line:

sudo /usr/sbin/iptables -A $CHAIN -p tcp --dport 9092 -s x.x.x.x -j ACCEPT

Info: Replace x.x.x.x with your ML server's actual IP address. This rule allows your ML module to connect to the ECA Kafka data service.

Tip: For additional details about configuring connections between services, refer to the Connection Guide.
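If you run more than one ML module VM, you can add one rule per source address using the same pattern; the addresses below are placeholders:

sudo /usr/sbin/iptables -A $CHAIN -p tcp --dport 9092 -s 192.0.2.10 -j ACCEPT
sudo /usr/sbin/iptables -A $CHAIN -p tcp --dport 9092 -s 192.0.2.11 -j ACCEPT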
- Save Configuration

Save your changes to the file:

Info:
- If using vi: press ESC, then type :wq and press Enter.
- If using nano: type the configuration exactly as shown above, press Ctrl + O to save (Write Out), press Enter to confirm the filename, then press Ctrl + X to exit.
- Apply Your Changes

Warning: The following commands will briefly disconnect services, so notify your team before proceeding.

Restart the ECA system to apply your firewall changes:

ecactl cluster down
ecactl cluster up

Tip: If you encounter any issues during this process, consult the Troubleshooting Guide for ECA operations assistance.
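Once the cluster is back up, you can confirm from the ML module VM that the ECA Kafka port is reachable; this assumes the nc (netcat) utility is available on the ML module VM:

nc -zv <primary-ECA-node-IP> 9092

A successful connection confirms that the new firewall rule is in effect.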
After Upgrading ECA
If you've upgraded your ECA nodes, you'll need to recreate the IP table rules to maintain connectivity with the ML module. Follow the same steps outlined in the Connect to ECA section above to re-establish the firewall configuration.