pcap logging out of order
Using 3.1dev (rev 5db3220)
and the pcap attached (malicious test) I tried the following:
(1) /usr/bin/suricata -c /etc/suricata/suricata.yaml -v -r httptunneled.pcap -S /dev/null
(2) /usr/bin/suricata -c /etc/suricata/suricata.yaml -v -r httptunneled.pcap -S /dev/null --runmode=single
while pcap logging was enabled in suricata.yaml.
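For reference, pcap logging is controlled by the pcap-log section of suricata.yaml; a minimal example of the setup used here (values illustrative, not the exact config from the report):

```yaml
pcap-log:
  enabled: yes          # turn on pcap logging
  filename: log.pcap    # base name of the output capture
  limit: 1000mb         # rotate when a file reaches this size
  max-files: 2000       # cap on rotated files kept on disk
  mode: normal          # single shared output file for all threads
```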
In case (1) a full pcap is written (all 72 packets), but the packets are written out of order, and Wireshark shows a lot of "TCP Spurious Retransmission/Previous Segment not captured" warnings. So the packets are all there, just not in the right order.
In case (2) a full pcap is written (all 72 packets), identical to the one being read.
Updated by Jason Ish about 6 years ago
I'm going to guess that if you run in the single threaded run mode that they are all in order? (Oops, you commented on that, sorry).
This is probably because packets are logged by multiple threads, and Suricata does not implement a re-ordering queue. That may be beyond the scope of Suricata, but I have a few ideas.
The best long term solution is probably a pcap file per thread, plus an external tool that is aware of the multiple files and can re-order them. I'm looking into this for dumpy; Mergecap (https://www.wireshark.org/docs/wsug_html_chunked/AppToolsmergecap.html) is another option as well.
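An already-written single pcap can also be re-ordered offline (Wireshark's reordercap tool does exactly this). The classic pcap format is simple enough to sort by hand: a 24-byte file header followed by records, each with a 16-byte header carrying the timestamp and captured length. A minimal sketch, assuming classic pcap (not pcapng):

```python
import struct

GLOBAL_HDR_LEN = 24   # classic pcap file header
REC_HDR_LEN = 16      # per-record header: ts_sec, ts_frac, incl_len, orig_len

def sort_pcap(data: bytes) -> bytes:
    """Return a copy of a classic pcap byte string with its packet
    records stably sorted by capture timestamp."""
    magic = data[:4]
    if magic in (b"\xd4\xc3\xb2\xa1", b"\x4d\x3c\xb2\xa1"):
        fmt = "<IIII"   # little-endian (usec or nsec resolution)
    elif magic in (b"\xa1\xb2\xc3\xd4", b"\xa1\xb2\x3c\x4d"):
        fmt = ">IIII"   # big-endian
    else:
        raise ValueError("not a classic pcap file (pcapng not supported)")
    records = []
    off = GLOBAL_HDR_LEN
    while off + REC_HDR_LEN <= len(data):
        ts_sec, ts_frac, incl_len, _orig_len = struct.unpack_from(fmt, data, off)
        end = off + REC_HDR_LEN + incl_len
        records.append(((ts_sec, ts_frac), data[off:end]))
        off = end
    records.sort(key=lambda r: r[0])  # stable: equal timestamps keep file order
    return data[:GLOBAL_HDR_LEN] + b"".join(body for _, body in records)
```

For per-thread output files, mergecap already merges chronologically by packet timestamp by default, so no extra sorting step would be needed there.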
Updated by Victor Julien about 3 years ago
Here's what I think is happening:
- the packets that are written to pcap are the original tunnel packets. These packets are not part of a flow, so they are distributed round-robin over the available threads
- the encapsulated packets are split off and processed separately. They are part of a flow, so handled by a single thread.
- the pcap log is written to by all threads, so the outer layer packets are written to file by all the threads. Due to timing non-determinism, they won't be in-order.
- this means the encapsulated flow is also out of order in the pcap
I can't really think of an easy solution, other than these 2:
1. force outer layer packets to go through a single thread (e.g. always the first thread). This would be a hack and would not scale well.
2. track flow for the outer layer. This would automagically have the same effect as (1), except it will be more scalable as the flow balancing would kick in. A side effect is that flow logging would happen too and detection of IP-only rules would change.
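The difference between the two distribution schemes can be illustrated with a toy model (hypothetical sketch, not Suricata code): round-robin spreads a flow's packets across threads, while flow-hash balancing pins every packet of a flow to one thread, preserving per-flow order.

```python
from collections import defaultdict

def distribute_round_robin(packets, n_threads):
    """Assign each packet to a thread in turn, ignoring its flow."""
    buckets = defaultdict(list)
    for i, pkt in enumerate(packets):
        buckets[i % n_threads].append(pkt)
    return buckets

def distribute_by_flow(packets, n_threads):
    """Assign packets by a hash of their flow tuple, so all packets of
    one flow land on the same thread and keep their relative order."""
    buckets = defaultdict(list)
    for pkt in packets:
        buckets[hash(pkt["flow"]) % n_threads].append(pkt)
    return buckets
```

With a single outer-layer flow, round-robin splits its packets across every thread (and the shared pcap log then interleaves them non-deterministically), whereas the flow-hash variant keeps them all on one thread.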