Right now there is, in essence, a static mapping between flow protocols
and flow breeds.
Make it dynamic: allow different flows to have the same classification
but different breeds. This is the same logic that we already have for
categories.
Preliminary work to support breed in category lists.
API change from the app POV: to get the flow breed, don't use
`ndpi_get_proto_breed()` anymore, but access `struct ndpi_proto->breed`
directly.
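A minimal before/after sketch of this change from the application side
(the surrounding variables are illustrative):
```
ndpi_protocol proto = ndpi_detection_process_packet(ndpi_struct, ndpi_flow,
                                                    packet, payload_len,
                                                    time_ms, NULL);
ndpi_protocol_breed_t breed;

/* Old API: query the static per-protocol breed mapping */
breed = ndpi_get_proto_breed(ndpi_struct, proto.app_protocol);

/* New API: the breed is carried by the flow classification itself */
breed = proto.breed;
```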
The functions `ndpi_domain_classify_*()` and
`ndpi_get_host_domain_suffix()` now take a `u_int32_t` parameter as
`class_id` (instead of `u_int16_t`), with the following logic:
```
class_id = (breed << 16) | category
```
instead of the old:
```
class_id = category
```
Please note that this change is backward-compatible: if you are not
interested in breeds, you don't need to update the application code.
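For example, a caller interested in breeds could build the new
`class_id` as follows (a minimal sketch; the specific breed and
category values are only illustrative):
```
u_int16_t category = NDPI_PROTOCOL_CATEGORY_WEB; /* example category */
u_int16_t breed    = NDPI_PROTOCOL_SAFE;         /* example breed */

/* New encoding: breed in the upper 16 bits, category in the lower ones */
u_int32_t class_id = ((u_int32_t)breed << 16) | category;

/* Legacy callers can keep passing just the category: the breed bits
   stay 0 and the old behavior is preserved */
u_int32_t legacy_class_id = category;
```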
The idea is to remove the limitation of only two protocols ("master" and
"app") in the flow classification.
This is quite handy, especially for STUN flows and, in general, for any
flow where there is some kind of transition from a cleartext protocol
to TLS: HTTP_PROXY -> TLS/Youtube; SMTP -> SMTPS (via the STARTTLS
message).
In the vast majority of the cases, the protocol stack is simply
Master/Application.
Examples of real stacks (from the unit tests) different from the standard
"master/app":
* "STUN.WhatsAppCall.SRTP": a WA call
* "STUN.DTLS.GoogleCall": a Meet call
* "Telegram.STUN.DTLS.TelegramVoip": a Telegram call
* "SMTP.SMTPS.Google": a SMTP connection to Google server started in
cleartext and updated to TLS
* "HTTP.Google.ntop": a HTTP connection to a Google domain (match via
"Host" header) and to a ntop server (match via "Server" header)
The logic to create the stack is still a bit coarse: we have a decade of
code trying to push everything into only two protocols... Therefore, the
content of the stack is still **highly experimental** and might change
in the near future; do you have any suggestions?
It is quite likely that the legacy fields "master_protocol" and
"app_protocol" will be there for a long time.
Add some helpers to use the stack (the return types of the first two
follow from the asserts below):
```
u_int16_t ndpi_stack_get_upper_proto(struct ndpi_proto_stack *s);
u_int16_t ndpi_stack_get_lower_proto(struct ndpi_proto_stack *s);
bool ndpi_stack_contains(struct ndpi_proto_stack *s, u_int16_t proto_id);
bool ndpi_stack_is_tls_like(struct ndpi_proto_stack *s);
bool ndpi_stack_is_http_like(struct ndpi_proto_stack *s);
```
Make sure the new stack logic is compatible with the legacy code:
```
assert(ndpi_stack_get_upper_proto(&flow->detected_protocol.protocol_stack) ==
ndpi_get_upper_proto(flow->detected_protocol));
assert(ndpi_stack_get_lower_proto(&flow->detected_protocol.protocol_stack) ==
ndpi_get_lower_proto(flow->detected_protocol));
```
This way, the `ndpiReader` output doesn't change if we change the
internal logic about the order in which we set/check the various flow
risks.
Note that the flow risk *list* is already printed by `ndpiReader`
in order.
This cache was added in b6b4967aa, when there was no real Zoom support.
With 63f349319, proper identification of multimedia streams was added,
making this cache pretty much useless: any improvement to Zoom
classification should be done properly in the Zoom dissector.
Tested for some months on a few 10 Gbit/s links of residential traffic:
the cache almost never returned a valid hit.
This piece of code has multiple problems:
* nDPI is able to detect some TCP protocols even for mid-flows (i.e.
without the initial packets of the session); TLS is the most
significant example
* since e6b332aa4a it is perfectly valid
to not pass the TCP handshake packets to nDPI
* in any case, we shouldn't call `ndpi_detection_giveup()`. That
function is usually called by the application, so we end up calling it
twice in some cases.
The simple solution is to completely remove that code and process these
kinds of flows like any other.
Note that the application can always avoid passing to nDPI any TCP flow
without the initial handshake; flow management is always up to the
application.
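If an application still wants the old behavior, it can filter flows on
its own side; a hypothetical sketch (the `seen_syn`/`midstream` flags
are application-side state, not nDPI fields):
```
/* Application-side filter: skip TCP flows observed mid-stream, i.e.
   flows whose first seen packet is not the start of the handshake */
if(app_flow->num_packets == 0 && tcp_header != NULL) {
  if(tcp_header->syn && !tcp_header->ack)
    app_flow->seen_syn = 1;   /* handshake observed from the start */
  else
    app_flow->midstream = 1;  /* mid-flow: the application may skip it */
}

if(!app_flow->midstream)
  ndpi_detection_process_packet(ndpi_struct, ndpi_flow,
                                packet, payload_len, time_ms, NULL);
```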
Looking at the CI results, some rare flows now take significantly
longer to be classified. As a follow-up, we could look into that.
Extend internal unit tests to handle multiple configurations.
As examples, add tests for:
* disabling some protocols
* disabling Ookla aggressiveness
Each configuration's data is stored in a dedicated directory under
`tests/cfgs`.
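For instance, a layout along these lines (the directory and file names
are illustrative):
```
tests/cfgs/
  default/              <- default configuration
    config.txt          <- nDPI configuration used for this run
    pcap/               <- input traces
    result/             <- expected outputs
  disable_protocols/    <- e.g. a run with some protocols disabled
    ...
```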