nDPI/python/ndpi/ndpi.py
Ivan Nardi 400cd516b5
Allow multiple struct ndpi_detection_module_struct to share some state (#2271)
Add the concept of "global context".

Right now every instance of `struct ndpi_detection_module_struct` (we
will call it the "local context" in this description) is completely
independent of the others. This provides optimal performance in
multithreaded environments, where we pin each local context to a thread,
and each thread to a specific CPU core: no data is shared
across the cores.

Each local context also holds, internally, some information correlating
**different** flows; something like:
```
if flow1 (PeerA <-> Peer B) is PROTOCOL_X; then
  flow2 (PeerC <-> PeerD) will be PROTOCOL_Y
```
To get optimal classification results, both flow1 and flow2 must be
processed by the same local context. This is not an issue at all in the
by far most common scenario where there is only one local context, but it
can be impractical in more complex setups.

Create the concept of a "global context": multiple local contexts can use
the same global context and share some data (structures) through it.
This way, the data correlating multiple flows can be read/written from
different local contexts.
This is an optional feature, disabled by default.
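The idea can be illustrated with a toy model (plain Python for illustration only, not nDPI's actual API; the `GlobalContext`/`LocalContext` names and the `expectations` map are invented here to mirror the description above):

```python
from threading import Lock

class GlobalContext:
    """Toy stand-in for a global context: state shared by many local contexts."""
    def __init__(self):
        self.lock = Lock()
        self.expectations = {}   # e.g. "PeerC<->PeerD" -> "PROTOCOL_Y"

class LocalContext:
    """Toy stand-in for a per-thread detection module (local context)."""
    def __init__(self, global_ctx=None):
        # With no global context each instance keeps private state,
        # mirroring the default (shared-nothing) behaviour.
        self.global_ctx = global_ctx
        self.private_expectations = {}

    def record_expectation(self, flow_key, protocol):
        if self.global_ctx is not None:
            # Shared state must be written under a lock (thread safety).
            with self.global_ctx.lock:
                self.global_ctx.expectations[flow_key] = protocol
        else:
            self.private_expectations[flow_key] = protocol

    def lookup_expectation(self, flow_key):
        if self.global_ctx is not None:
            return self.global_ctx.expectations.get(flow_key)
        return self.private_expectations.get(flow_key)

# Without a global context, ctx2 cannot see what ctx1 learned.
ctx1, ctx2 = LocalContext(), LocalContext()
ctx1.record_expectation("PeerC<->PeerD", "PROTOCOL_Y")
assert ctx2.lookup_expectation("PeerC<->PeerD") is None

# With a shared global context, the correlation survives.
g = GlobalContext()
ctx1, ctx2 = LocalContext(g), LocalContext(g)
ctx1.record_expectation("PeerC<->PeerD", "PROTOCOL_Y")
assert ctx2.lookup_expectation("PeerC<->PeerD") == "PROTOCOL_Y"
```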

Obviously, data structures shared in a global context must be thread safe.
This PR updates the code of the LRU implementation to be, optionally,
thread safe.
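The "optionally thread safe" idea can be sketched like this (a minimal Python illustration of the concept; nDPI's actual LRU caches are implemented in C):

```python
from collections import OrderedDict
from contextlib import nullcontext
from threading import Lock

class LRUCache:
    def __init__(self, max_size, thread_safe=False):
        # The cost of locking is only paid when the cache is shared.
        self._lock = Lock() if thread_safe else nullcontext()
        self._data = OrderedDict()
        self._max_size = max_size

    def add(self, key, value):
        with self._lock:
            self._data[key] = value
            self._data.move_to_end(key)          # mark as most recently used
            if len(self._data) > self._max_size:
                self._data.popitem(last=False)   # evict least recently used

    def find(self, key):
        with self._lock:
            if key not in self._data:
                return None
            self._data.move_to_end(key)
            return self._data[key]

cache = LRUCache(max_size=2, thread_safe=True)
cache.add("a", 1)
cache.add("b", 2)
cache.find("a")        # "a" becomes most recently used
cache.add("c", 3)      # evicts "b"
assert cache.find("b") is None
assert cache.find("a") == 1
```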

Right now, only the LRU caches can be shared; the other main structures
(trees and automas) are basically read-only, so there is little point in
sharing them. Furthermore, these structures don't hold any information
correlating multiple flows.

Every LRU cache can be shared independently of the others, via
`ndpi_set_config(ndpi_struct, NULL, "lru.$CACHE_NAME.scope", "1")`.

It's up to the user to find the right trade-off between performance
(i.e. no shared data) and classification results (i.e. some data
shared among the local contexts), depending on the specific traffic
patterns and on the algorithm used to balance the flows across the
threads/cores/local contexts.
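For example, with a common per-flow-hash balancer, two correlated flows can easily land on different local contexts (a toy Python sketch, not nDPI code; the flow tuples are made up):

```python
def worker_for(flow, n_workers):
    # Typical balancing: hash the flow tuple, one worker
    # (and one local context) per bucket.
    return hash(flow) % n_workers

flow1 = ("PeerA", "PeerB", 5060, 5060, "UDP")
flow2 = ("PeerC", "PeerD", 40000, 40002, "UDP")

# Nothing guarantees worker_for(flow1, 4) == worker_for(flow2, 4),
# so any correlation between the two flows is lost unless the
# relevant structures are shared via a global context.
```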

Add some basic examples of library initialization in
`doc/library_initialization.md`.

This code needs libpthread as an external dependency. That shouldn't be a
big issue; nevertheless, a configure flag has been added to disable global
context support. A new CI job has been added to test it.

TODO: we still need to find a proper way to add some tests in a
multithreaded environment... not an easy task...

*** API changes ***

If you are not interested in this feature, simply add a NULL parameter to
every `ndpi_init_detection_module()` call.
2024-02-01 15:33:11 +01:00


"""
------------------------------------------------------------------------------------------------------------------------
ndpi.py
Copyright (C) 2011-22 - ntop.org
This file is part of nDPI, an open source deep packet inspection library.
nDPI is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later
version.
nDPI is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty
of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License along with nDPI.
If not, see <http://www.gnu.org/licenses/>.
------------------------------------------------------------------------------------------------------------------------
"""
from collections import namedtuple
from _ndpi import ffi, lib
ndpi_protocol = namedtuple('NDPIProtocol', ['C',
                                            'master_protocol',
                                            'app_protocol',
                                            'category'])

ndpi_confidence = namedtuple('NDPIConfidence', ['id',
                                                'name'])


class NDPI(object):
    __slots__ = ("_api_version",
                 "_revision",
                 "_detection_module")

    def __init__(self):
        self._detection_module = lib.ndpi_init_detection_module(ffi.NULL)
        if self._detection_module == ffi.NULL:
            raise MemoryError("Unable to instantiate NDPI object")
        lib.ndpi_py_setup_detection_module(self._detection_module)

    @property
    def api_version(self):
        return lib.ndpi_get_api_version()

    @property
    def revision(self):
        return ffi.string(lib.ndpi_revision()).decode('utf-8', errors='ignore')

    def process_packet(self, flow, packet, packet_time_ms, input_info):
        p = lib.ndpi_detection_process_packet(self._detection_module,
                                              flow.C,
                                              packet,
                                              len(packet),
                                              int(packet_time_ms),
                                              input_info)
        return ndpi_protocol(C=p,
                             master_protocol=p.master_protocol,
                             app_protocol=p.app_protocol,
                             category=p.category)

    def giveup(self, flow):
        p = lib.ndpi_detection_giveup(self._detection_module,
                                      flow.C,
                                      ffi.new("uint8_t*", 0))
        return ndpi_protocol(C=p,
                             master_protocol=p.master_protocol,
                             app_protocol=p.app_protocol,
                             category=p.category)

    def protocol_name(self, protocol):
        buf = ffi.new("char[40]")
        lib.ndpi_protocol2name(self._detection_module, protocol.C, buf, ffi.sizeof(buf))
        return ffi.string(buf).decode('utf-8', errors='ignore')

    def protocol_category_name(self, protocol):
        return ffi.string(lib.ndpi_category_get_name(self._detection_module,
                                                     protocol.C.category)).decode('utf-8',
                                                                                  errors='ignore')

    def __del__(self):
        if self._detection_module != ffi.NULL:
            lib.ndpi_exit_detection_module(self._detection_module)


class NDPIFlow(object):
    __slots__ = "C"

    @property
    def confidence(self):
        confidence = self.C.confidence
        return ndpi_confidence(id=confidence,
                               name=ffi.string(lib.ndpi_confidence_get_name(confidence)).decode('utf-8',
                                                                                                errors='ignore'))

    def __init__(self):
        self.C = lib.ndpi_py_initialize_flow()

    def __del__(self):
        if self.C != ffi.NULL:
            lib.ndpi_flow_free(self.C)
            self.C = ffi.NULL