Bug #8141


affinity: memory affinity is not NUMA-aware in suricata 8.0.2, 8.0.1

Added by Evgeniy M 2 days ago.

Status: New
Priority: Normal
Assignee: -
Target version:
Affected Versions:
Effort: high
Difficulty:
Label:
Description

I use Suricata as a traffic analyzer in IDS mode on SPAN/mirroring interfaces with DPDK capture. Suricata 8.0.1 and 8.0.2 were compiled with --enable-dpdk.
The hardware platform is dual-NUMA: 24 hardware cores on NUMA node 0 and 24 hardware cores on NUMA node 1. One capture interface is attached to NUMA node 0, the second to NUMA node 1.
The OS is Debian 12.9.
2 MB hugepages are set up in grub.cfg as hugepages=0:1536,1:1536, 6 GB of memory in total. The interfaces are Mellanox ConnectX-5 En 100G PCIe 4.0 NICs. The machine has 128 GB of RAM.

The Suricata process runs with detection switched off on the command line, with no signatures loaded, and with every app-layer parser set to no or detection-only. I only need the L7 protocol of every session in eve.log plus some additional data; alerts are not needed.
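
For clarity, this is a minimal sketch of roughly what the relevant parser and output settings look like, assuming the standard app-layer and eve-log options; the protocol names and the filename here are only examples, the real configuration lists more parsers:

    app-layer:
      protocols:
        http:
          enabled: detection-only   # protocol is detected but the full parser does not run
        tls:
          enabled: detection-only
        smb:
          enabled: no               # parsers that are not needed at all are switched off

    outputs:
      - eve-log:
          enabled: yes
          filetype: regular
          filename: eve.log
          types:
            - flow                  # flow records carry app_proto, the detected L7 protocol per session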

The main configuration is one Suricata process handling both interfaces. Both interfaces are specified in the dpdk section, with 8 to 20 threads and the same number of RX queues per interface; tx-queues is 0. Both interfaces are also listed in the cpu-affinity section, and interface-specific affinity is set as well; a sketch of such a configuration is shown below.
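
A minimal sketch of such a configuration, assuming the Suricata 8 dictionary-style cpu-affinity format with the interface-specific-cpu-set option; the PCIe addresses, thread counts and core lists are placeholders, not the exact values used:

    dpdk:
      eal-params:
        proc-type: primary
      interfaces:
        - interface: 0000:3b:00.0      # NIC attached to NUMA node 0
          threads: 8                   # in the DPDK runmode this also sets the number of RX queues
          promisc: true
          mempool-size: 65535
          mempool-cache-size: 257
          rx-descriptors: 1024
          tx-descriptors: 1024
          copy-mode: none
        - interface: 0000:af:00.0      # NIC attached to NUMA node 1
          threads: 8
          promisc: true
          mempool-size: 65535
          mempool-cache-size: 257
          rx-descriptors: 1024
          tx-descriptors: 1024
          copy-mode: none

    threading:
      set-cpu-affinity: yes
      cpu-affinity:
        management-cpu-set:
          cpu: [ 0 ]
        worker-cpu-set:
          cpu: [ "all" ]
          mode: "exclusive"
          interface-specific-cpu-set:
            - interface: 0000:3b:00.0
              cpu: [ "1-8" ]           # cores on NUMA node 0
            - interface: 0000:af:00.0
              cpu: [ "25-32" ]         # cores on NUMA node 1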

To test this configuration I used traffic on the MIRRORED ports. The traffic is HTTP 1.1 with about 1 KB of data in each request/response per session.

I ran into some problems and did a number of tests to clarify them.

If the traffic is mirrored to the Suricata interface on NUMA node 0, the result is 80 kCPS / 45 Gbps with no capture errors and a minimal number of unlogged sessions for the 8-thread configuration, and 160 kCPS / 90 Gbps for the 20-thread configuration.

If the traffic is mirrored to the Suricata interface on NUMA node 1, the results for the same loads show 50 to 99+% losses in capture and logging.

I tried to fix this by adjusting the hugepages in grub.cfg, setting the socket-mem EAL parameter in the dpdk section of suricata.yaml (sketched below), and running two separate Suricata processes, each on its own interface with the corresponding affinity. I repeated the test on another machine with a different configuration of interfaces, NUMA layout and hugepages. I also tried launching Suricata under numactl with memory binding. Nothing helped. This leads me to think that...

...Suricata allocates memory in a non-optimal way, so when Suricata threads running on one NUMA node have to access memory on the other NUMA node, huge losses appear.
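
For reference, the socket-mem and numactl attempts mentioned above were of roughly this shape; the values are placeholders matching the hugepage layout above, and the per-node yaml file name is hypothetical. The EAL memory reservation was passed through the dpdk eal-params block, and the two-process variant was additionally started as numactl --cpunodebind=1 --membind=1 suricata --dpdk -c suricata-numa1.yaml.

    dpdk:
      eal-params:
        proc-type: primary
        socket-mem: "3072,3072"    # MB of hugepage memory reserved per NUMA node (node0,node1)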

Please help me with a workaround, an appropriate yaml configuration, or a code fix.

Thank you! )
