Bug #282
Verdict threads not processing equally (Closed)
Description
I am running Suricata 1.1beta1 (rev 0d6d0ae), testing the multiqueue patch with a custom CPU affinity configuration.
CPU affinity (condensed from the YAML to save space):
- management_cpu_set: cpu: [ 0 ]
- receive_cpu_set: cpu: [ "4-7" ] mode: "exclusive"
- decode_cpu_set: cpu: [ 0, 1 ] mode: "balanced"
- stream_cpu_set: cpu: [ "0-1" ]
- detect_cpu_set: cpu: [ "0-3" ] mode: "exclusive"
- verdict_cpu_set: cpu: [ "5-7" ] mode: "exclusive"
- reject_cpu_set: cpu: [ 0 ]
- output_cpu_set: cpu: [ "0-3" ]
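One way to verify that such pinning actually takes effect is to look at which CPU each Suricata thread runs on; a minimal sketch, not part of the original report (thread names and the taskset target are placeholders):

# Sketch: list the threads of the running Suricata process with the CPU they last ran on (psr) and their CPU usage.
ps -L -o tid,psr,pcpu,comm -p $(pidof suricata)
# Check the allowed CPU mask of one specific thread (replace <tid> with a real thread id):
taskset -cp <tid>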
Load Distribution in iptables:
iptables -A FORWARD -m statistic --mode nth --every 4 -j NFQUEUE --queue-num 4
iptables -A FORWARD -m statistic --mode nth --every 3 -j NFQUEUE --queue-num 4
iptables -A FORWARD -m statistic --mode nth --every 2 -j NFQUEUE --queue-num 2
iptables -A FORWARD -j NFQUEUE --queue-num 1
- Doing a $iptables -nvL shows that the distribution is equal between the queues.
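For comparison, a cascading statistic/nth rule set that splits packets evenly over queues 1-4 would look roughly like the sketch below (rule order matters, since each rule only sees the packets the earlier rules did not take):

# Sketch: even 4-way split over NFQUEUE queues 1-4.
iptables -A FORWARD -m statistic --mode nth --every 4 --packet 0 -j NFQUEUE --queue-num 1
iptables -A FORWARD -m statistic --mode nth --every 3 --packet 0 -j NFQUEUE --queue-num 2
iptables -A FORWARD -m statistic --mode nth --every 2 --packet 0 -j NFQUEUE --queue-num 3
iptables -A FORWARD -j NFQUEUE --queue-num 4

Depending on the iptables and kernel version, a single rule with -j NFQUEUE --queue-balance 1:4 can achieve the same fan-out.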
Suricata is running with:
$suricata -D -c /etc/suricata/suricata.yaml -q 1 -q 2 -q 3 -q 4
Using hping3 to stress the server:
$hping3 somesrv -d 1400 --flood
A few seconds after starting hping, packets begin to drop:
Apr 11 18:02:48 ips2 kernel: nf_conntrack: table full, dropping packet.
I know that, since we are dealing with flood traffic, the queues would eventually fill up and packets would be dropped; the problem I see is that only one queue is overwhelmed.
$watch cat /proc/net/netfilter/nfnetlink_queue
(columns: queue number, peer portid, queue length, copy mode, copy range, queue dropped, user dropped, id sequence)
1 -4281 17110 2 65535     0 0 398751 1
2 13918  5736 2 65535     0 0 398751 1
3 -4279  5743 2 65535     0 0 398751 1
4 -4280 79972 2 65535 18256 0 398751 1
As you can see, queues 2 and 3 still have plenty of room, and so does queue 1, but queue 4 is a bottleneck. I double-checked the queue distribution and there is no problem there. I assume it is a problem with the verdict threads.
By the way, no CPU is saturated or close to saturation.
Is this a bug, or is there something I am missing in my configuration?
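One way to double-check that per thread rather than per core is to look at the per-thread CPU usage of the Suricata process; a minimal sketch (assumes pidstat from the sysstat package is available):

# Sketch: per-thread CPU usage of Suricata, refreshed every second.
pidstat -t -p $(pidof suricata) 1
# Alternative using top's thread view:
top -H -p $(pidof suricata)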
Updated by Victor Julien over 13 years ago
Can you use iptables -vnL to see if each of your iptables rules received the same number of packets?
Updated by Victor Julien over 13 years ago
Btw, the iptables rules you list have two that go to queue 4 and none to queue 3. Also, in the threading config I see:
- receive_cpu_set: cpu: [ "4-7" ] mode: "exclusive"
- verdict_cpu_set: cpu: [ "5-7" ] mode: "exclusive"
Shouldn't that last one be 4-7 too?
Updated by Fernando Ortiz over 13 years ago
Yes
Chain FORWARD (policy ACCEPT 17 packets, 9080 bytes)
 pkts bytes target   prot opt in  out  source     destination
  11M   12G NFQUEUE  all  --  *   *    0.0.0.0/0  0.0.0.0/0    statistic mode nth every 4 NFQUEUE num 3
  11M   12G NFQUEUE  all  --  *   *    0.0.0.0/0  0.0.0.0/0    statistic mode nth every 3 NFQUEUE num 4
  11M   12G NFQUEUE  all  --  *   *    0.0.0.0/0  0.0.0.0/0    statistic mode nth every 2 NFQUEUE num 3
  11M   12G NFQUEUE  all  --  *   *    0.0.0.0/0  0.0.0.0/0    NFQUEUE num 2
Also, it is odd that the packets processed in nfnetlink_queue are equally distributed.
Victor Julien wrote:
Can you use iptables -vnL to see if each of your iptables rules received the same amount of packets?
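A quick way to re-check the per-rule distribution from a clean baseline is to zero the counters before sampling them again (a sketch):

# Sketch: reset the FORWARD chain packet counters, let traffic flow, then re-read them.
iptables -Z FORWARD
sleep 10
iptables -vnL FORWARD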
Updated by Fernando Ortiz over 13 years ago
You are right, sorry for that. But the result I pasted with the queues was with iptables distributing among 4 queues, not 3, and with verdict cpu: [ "4-7" ].
I was messing around with the configuration trying to understand what was going on, and I forgot to correct that when I submitted the issue.
Victor Julien wrote:
Btw, the iptables rules you list have 2 that go to queue 4, none to 3. Also in the threading config I see:
- receive_cpu_set: cpu: [ "4-7" ] mode: "exclusive"
- verdict_cpu_set: cpu: [ "5-7" ] mode: "exclusive"
Shouldn't that last one be 4-7 too?
Updated by Victor Julien over 13 years ago
This time I see queue num 3 twice; is that right? Quite confusing.
Updated by Fernando Ortiz over 13 years ago
Victor Julien wrote:
This time I see queue num 3 two times, is that right? Quite confusing.
I know, I am really sorry about that. Next time I will triple check the snapshot before sending.
I observed that when I change the order of the queues in iptables (using all 4 queues, really) it doesn't change which queue saturates. But when I stop using one queue, or change the verdict_cpu_set setting, the queue that saturates changes.
Another thing, in this snapshot:
$watch cat /proc/net/netfilter/nfnetlink_queue
1 -4281 17110 2 65535     0 0 398751 1
2 13918  5736 2 65535     0 0 398751 1
3 -4279  5743 2 65535     0 0 398751 1
4 -4280 79972 2 65535 18256 0 398751 1
Queues 1, 2 and 3 stay around those values, while in queue 4 packets keep being dropped.
In /var/log/messages I see this notification:
Apr 12 15:42:30 ips2 kernel: nf_conntrack: table full, dropping packet.
I did a
echo 524288 > /proc/sys/net/ipv4/netfilter/ip_conntrack_max
And now all queues got saturated, although the order of saturation was q4, q1, q2 and q3.
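For reference, the same conntrack bump can also be applied via sysctl, and the conntrack hash table usually needs to grow along with the entry limit (a sketch; exact paths and sizing vary by kernel version):

# Sketch: raise the connection tracking limit and the hash table size.
sysctl -w net.netfilter.nf_conntrack_max=524288
echo 65536 > /sys/module/nf_conntrack/parameters/hashsize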
Now I get this message, which makes sense:
Apr 12 15:46:47 ips2 kernel: nf_queue: full at 80000 entries, dropping packets(s). Dropped: 1160456
So the reason the queues got overwhelmed is that I am flooding the network, but I don't understand why one queue saturates much faster than the others instead of the load being proportional.
Testing with tomahawk, I could not stress Suricata that much, so I didn't get any relevant results.
1 -4297 29 2 65535 0 0 1084077 1
2 30061 29 2 65535 0 0 1084077 1
3 -4298 38 2 65535 0 0 1084077 1
4 -4299 42 2 65535 0 0 1084077 1
For the moment I am trying to simulate more realistic traffic until I can test Suricata in a real environment.
Updated by Eric Leblond over 13 years ago
Did you generate the traffic on the same box?
If that is the case, it is possible that the traffic generator (or another service or device) is using the CPU. It could be interesting to use the Linux perf tools to study how the CPU with the slow verdict thread is used.
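A minimal sketch of such a check, assuming the verdict threads are pinned to CPUs 4-7 as in the configuration above (the CPU list is an assumption):

# Sketch: watch live what runs on the CPUs hosting the verdict threads.
perf top -C 4-7
# Or record for 30 seconds and inspect afterwards:
perf record -a -g -C 4-7 -- sleep 30
perf report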