Feature #2343
Add "flush" command to unix socket
Status: Open
Description
If network data is not sent continuously to a live traffic-capturing interface, some flow information can remain stuck inside the Suricata engine and will never be written to the logs until other traffic is processed or Suricata is shut down. This is due to the "laziness" of Suricata's cleanup procedures. Adding a "flush" command to the unix socket interface would make it possible to trigger the cleanup procedures manually.
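For context, Suricata's unix socket speaks JSON messages with an initial version handshake, and commands are sent as `{"command": ...}` objects. A minimal Python sketch of how a client might issue the proposed "flush" command — the command itself does not exist yet, and the socket path and protocol version below are assumptions:

```python
import json
import socket

# Default path on many installs; adjust to match your suricata.yaml (assumption).
SOCKET_PATH = "/var/run/suricata/suricata-command.socket"

def build_message(obj):
    """Encode one JSON message for Suricata's unix socket protocol."""
    return json.dumps(obj).encode("utf-8")

def send_flush(path=SOCKET_PATH, version="0.2"):
    """Connect, perform the version handshake, then issue the
    hypothetical 'flush' command proposed in this ticket."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        s.sendall(build_message({"version": version}))
        print(s.recv(4096).decode())  # server answers the handshake
        s.sendall(build_message({"command": "flush"}))  # not yet implemented
        print(s.recv(4096).decode())
```

In practice the same could be done through the `suricatasc` client once the command exists; the raw-socket version is shown only to make the protocol explicit.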
Files
Updated by Andreas Herz almost 7 years ago
- Assignee set to OISF Dev
- Target version set to TBD
Updated by Victor Julien almost 7 years ago
Chris, could you describe your scenario here again? Reading it here again it doesn't really make sense to me anymore. Even without packets a live instance of Suricata should do its flow cleaning w/o issues.
Updated by Chris Knott almost 7 years ago
- File test.pcap test.pcap added
- File test_missing_end10.pcap test_missing_end10.pcap added
- File test_end10.pcap test_end10.pcap added
- File single_packet.pcap single_packet.pcap added
- File eve_test2.json eve_test2.json added
- File eve_test3.json eve_test3.json added
I've tested some scenarios in order to understand the behavior.
First I want to explain my test setup:
In order to get a reproducible result I am using a dummy network interface (dummy kernel module) without any IP configuration ... so the interface is completely silent and only sends data that I want it to send:
dummy: flags=195<UP,BROADCAST,RUNNING,NOARP> mtu 1500
ether 52:54:00:7e:27:af txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
In order to get data onto the interface I am using prerecorded PCAP files and tcpreplay (e.g. "tcpreplay --intf1=dummy /home/christoph/single_packet.pcap"). Suricata listens to the dummy interface and ignores checksum errors recorded in the PCAP files ("suricata -i dummy -k none").
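The setup described above can be reproduced roughly as follows. This is a command sketch, not a tested script: the interface name matches the post, creating the interface requires root, and exact paths will differ per system.

```shell
# Load the dummy module and create a silent interface named "dummy" (requires root)
modprobe dummy
ip link add dummy type dummy
ip link set dummy up

# Start Suricata on the dummy interface, ignoring PCAP checksum errors
suricata -i dummy -k none &

# Replay a prerecorded capture onto the interface
tcpreplay --intf1=dummy /home/christoph/single_packet.pcap
```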
My findings:
Test 1: Sending a complete flow at once (file: test.pcap) ... all information was inside the eve.json file. So no findings there.
Test 2: I was curious what happens if I send the content of the file in chunks (keeping all flow timeout values at their defaults in the configuration). So the second test was: sending the beginning of the flow (file: test_missing_end10.pcap) ... waiting a bit (more than 10 minutes ... so all timeouts should hit) ... and sending the end of the flow (file: test_end10.pcap). Surprisingly, the flow did not time out after the 10 minutes as I would have expected. Instead it timed out after sending the second part (file: eve_test2.json).
Test 3: My question was now: What happens if I send the beginning of the flow (file: test_missing_end10.pcap) ... wait a bit (more than 10 minutes ... so all timeouts should hit) ... and send another packet of a different flow (file: single_packet.pcap). Also after sending the single packet the original flow timed out (file: eve_test3.json).
So it seems that to clean up a flow (and send the flow information to the eve.json file), a network packet needs to arrive at the interface (the timeout checks for flows are apparently done when a packet is received). If the packet flow suddenly stops, no cleanup is done any more. I don't know whether this is intentional.
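Tests 2 and 3 boil down to the following replay sequence (a command sketch; it assumes the dummy interface and Suricata instance from the test setup are already running, and uses the attachment paths from the post):

```shell
# Test 2: send the beginning of the flow, wait past all flow timeouts,
# then send the end of the same flow
tcpreplay --intf1=dummy /home/christoph/test_missing_end10.pcap
sleep 660   # > 10 minutes, so every default flow timeout should have expired
tcpreplay --intf1=dummy /home/christoph/test_end10.pcap

# Test 3: same start, but finish with a single packet from an unrelated flow
tcpreplay --intf1=dummy /home/christoph/test_missing_end10.pcap
sleep 660
tcpreplay --intf1=dummy /home/christoph/single_packet.pcap
```

In both runs the first flow's eve.json record only appears after the second replay, which is what suggests that timeout processing is driven by packet arrival.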
Updated by Victor Julien about 5 years ago
This is not intentional, no. Is this still reproducible? We've fixed some bugs with how packets are handled at flow timeout.