Commit 0a47f745cc by Ivan Nardi: Avoid useless host automa lookup (#1724)
The host automa is used for two tasks:
* protocol sub-classification (obviously);
* DGA evaluation: the idea is that if a domain is present in this
automa, it can't be a DGA, regardless of its format/name.

In most dissectors both checks are executed, i.e. the code is something
like:

```
ndpi_match_host_subprotocol(..., flow->host_server_name, ...);
ndpi_check_dga_name(..., flow->host_server_name, ...);
```

In that common case, we can perform only one automa lookup: if we check the
sub-classification before the DGA, we can avoid the second lookup in
the DGA function itself.
2022-09-05 13:59:51 +02:00
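The optimization described in the commit can be sketched in a language-agnostic way. Below is a hedged Python analogue (the names `known_hosts`, `match_host_subprotocol`, and `check_dga_name` are illustrative stand-ins, not nDPI APIs): the caller performs the host lookup once and passes the result along, so the DGA check only repeats the lookup when no earlier result is available.

```python
# Toy illustration of the commit's optimization: do the host lookup once,
# then reuse its result so the DGA check can skip a second lookup.

known_hosts = {"google.com", "github.com"}  # stand-in for the host automa

def match_host_subprotocol(name):
    """First check: sub-classification via the 'automa' (here, a set)."""
    return name in known_hosts

def check_dga_name(name, already_matched):
    """Second check: a domain present in the automa is never a DGA.
    If the caller already did the lookup, we avoid repeating it."""
    if already_matched or name in known_hosts:
        return False
    # Placeholder heuristic (not nDPI's): long vowel-free labels look algorithmic.
    label = name.split(".")[0]
    return len(label) > 12 and not any(c in "aeiou" for c in label)

matched = match_host_subprotocol("github.com")
is_dga = check_dga_name("github.com", matched)  # second lookup skipped
```

The design point is simply call ordering: because the sub-classification result already answers "is this host in the automa?", running it first makes the DGA function's own lookup redundant.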

DGA detection testing workflow

Overview

nDPI provides a set of threat detection features exposed through NDPI_RISK flags.

As part of these features, we provide DGA detection.

Domain generation algorithms (DGAs) are algorithms, seen in various malware families, that periodically generate large numbers of domain names to serve as rendezvous points with command-and-control servers.

The DGA detection heuristic is implemented in ndpi_check_dga_name().

DGA performance testing and tracking allow us to detect automatically whether a modification is harmful.

The modification can be a simple threshold change or a future lightweight ML approach.
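To make the idea of a threshold-based heuristic concrete, here is a deliberately simplified sketch. This is not nDPI's heuristic: the entropy measure, the length cutoff, and the threshold value are all illustrative assumptions, chosen only to show how a single tunable threshold separates "random-looking" labels from ordinary ones.

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits per character of the string's character distribution."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_dga(domain, threshold=3.5):
    """Toy heuristic: flag long, high-entropy second-level labels.
    The 3.5-bit threshold is arbitrary, purely for illustration."""
    label = domain.split(".")[0]
    return len(label) >= 10 and shannon_entropy(label) > threshold

legit = looks_like_dga("wikipedia.org")        # short, common word
random_name = looks_like_dga("xj4k2q9vbl7m3z8w.net")  # 16 distinct chars
```

Tuning `threshold` trades false positives against missed detections, which is exactly the kind of change the tracking workflow below is meant to evaluate.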

Used data

The original dataset is a balanced collection of legitimate and DGA domains; it can be obtained as follows:

```
wget https://raw.githubusercontent.com/chrmor/DGA_domains_dataset/master/dga_domains_full.csv
```

We split the dataset into DGA and non-DGA domains, keeping 10% of each as the test set and 90% as the training set.

```
python3 -m pip install pandas
python3 -m pip install scikit-learn
```

Instructions using python3:

```
from sklearn.model_selection import train_test_split
import pandas as pd

df = pd.read_csv("dga_domains_full.csv", header=None, names=["type", "family", "domain"])
df_dga = df[df.type == "dga"]
df_non_dga = df[df.type == "legit"]
train_non_dga, test_non_dga = train_test_split(df_non_dga, test_size=0.1, shuffle=True, random_state=27)
train_dga, test_dga = train_test_split(df_dga, test_size=0.1, shuffle=True, random_state=27)

test_dga["domain"].to_csv("test_dga.csv", header=False, index=False)
test_non_dga["domain"].to_csv("test_non_dga.csv", header=False, index=False)
train_dga["domain"].to_csv("train_dga.csv", header=False, index=False)
train_non_dga["domain"].to_csv("train_non_dga.csv", header=False, index=False)
```

The detection approach must be built on top of the training set only; the test set must be kept as unseen cases for testing.

dga_evaluate

After compiling nDPI, you can use the dga_evaluate helper to check the number of detections for an input file:

```
dga_evaluate <file name>
```

You can evaluate the performance of your modifications before submitting them as follows:

```
./do-dga.sh
```

If your modifications decrease baseline performance, the test will fail. If not (well done!), the test passes and you must update the baseline metrics with the values you obtained.
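The pass/fail logic of such a baseline check can be sketched as follows. This is an assumed reconstruction, not the actual contents of do-dga.sh: the function names, the metric pair, and the numeric values are all illustrative.

```python
# Hypothetical sketch of baseline tracking: compare current detection
# metrics against stored baseline values and fail on any regression.

def evaluate(detected_dga, total_dga, false_pos, total_non_dga):
    """Return (detection_rate, false_positive_rate) from raw counts."""
    return detected_dga / total_dga, false_pos / total_non_dga

def passes_baseline(rate, fpr, baseline_rate, baseline_fpr):
    """A change must not lower detections or raise false positives."""
    return rate >= baseline_rate and fpr <= baseline_fpr

# Illustrative counts, not real test-set results.
rate, fpr = evaluate(detected_dga=910, total_dga=1000,
                     false_pos=30, total_non_dga=1000)
ok = passes_baseline(rate, fpr, baseline_rate=0.90, baseline_fpr=0.05)
```

When `ok` is true, the baseline metrics would then be updated to the newly obtained values, so future changes are measured against the improved result.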