Project

General

Profile

Actions

Bug #282

closed

Verdict Threats not processing equally

Added by Fernando Ortiz about 13 years ago. Updated over 12 years ago.

Status:
Closed
Priority:
Normal
Assignee:
-
Target version:
-
Affected Versions:
Effort:
Difficulty:
Label:

Description

I am running suricata 1.1beta1 (rev 0d6d0ae) testing the multiqueue patch and setting a custom configuration of cpu affinity.

Cpu affinity: (It is not in yaml for reducing space)
 - management_cpu_set:   cpu: [ 0 ]  
 - receive_cpu_set:      cpu: [ "4-7" ]    mode: "exclusive" 
 - decode_cpu_set:       cpu: [ 0, 1 ]     mode: "balanced" 
 - stream_cpu_set:       cpu: [ "0-1" ]
 - detect_cpu_set:       cpu: [ "0-3" ]    mode: "exclusive" 
 - verdict_cpu_set:      cpu: [ "5-7" ]    mode: "exclusive" 
 - reject_cpu_set:       cpu: [ 0 ]
 - output_cpu_set:       cpu: [ "0-3" ]

Load Distribution in iptables:

iptables -A FORWARD -m statistic --mode nth --every 4 -j NFQUEUE --queue-num 4
iptables -A FORWARD -m statistic --mode nth --every 3 -j NFQUEUE --queue-num 4
iptables -A FORWARD -m statistic --mode nth --every 2 -j NFQUEUE --queue-num 2
iptables -A FORWARD -j NFQUEUE --queue-num 1

  • Doing a $iptables -nvL shows that distribution is equall between queueus

Suricata is running with:

  $suricata -D -c /etc/suricata/suricata.yaml -q 1 -q 2 -q 3 -q 4

Using hping3 for stressing the server
  $hping3 somesrv -d 1400 --flood 

In a few seconds after doing hping packets begin to drop

  Apr 11 18:02:48 ips2 kernel: nf_conntrack: table full, dropping packet.

I know that as we are dealing with a flood traffic queues would eventually be filled an packets dropped, the problem that I see is that only one queue is overwhelmed.

     $watch cat /proc/net/netfilter/nfnetlink_queue
              queue  pid                 dropped  
         1  -4281 17110 2 65535     0     0   398751  1                              
         2  13918  5736 2 65535     0     0   398751  1    
         3  -4279  5743 2 65535     0     0   398751  1    
         4  -4280 79972 2 65535 18256     0   398751  1

As you can see, queues 2 and 3 have plenty room yet, and so does queue 1 but queue 4 is being a bottleneck. I double check the queue distribution and there is not a problem in there. I assume is a problem with Verdicts threats.
By the way, no cpu is saturated or close to saturation.
Is this a bug or something I am missing in my configuration.

Actions

Also available in: Atom PDF