Feature #2343

Add "flush" command to unix socket

Added by Chris Knott over 6 years ago. Updated over 4 years ago.

Status: New
Priority: Normal
Assignee:
Target version:
Effort:
Difficulty:
Label:

Description

If network data is not sent continuously to a live traffic capture interface, some flow information can get stuck inside the Suricata engine and is never written to the logs until other traffic is processed or Suricata is shut down. This is due to the "laziness" of Suricata's cleanup procedures. Adding a "flush" command to the unix socket interface would make it possible to trigger the cleanup procedures manually.
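
A minimal sketch of how this could look from the command line, assuming the new command would be exposed through the existing suricatasc client like the commands that are already there (the "flush" command itself is only the proposal of this ticket and does not exist yet):

# existing unix socket commands are invoked like this:
suricatasc -c "uptime"
suricatasc -c "dump-counters"

# proposed: trigger the flow cleanup / log flush manually
suricatasc -c "flush"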


Files

test.pcap (7.01 KB) Chris Knott, 02/06/2018 04:44 AM
test_missing_end10.pcap (6.87 KB) Chris Knott, 02/06/2018 04:44 AM
test_end10.pcap (170 Bytes) Chris Knott, 02/06/2018 04:45 AM
single_packet.pcap (345 Bytes) Chris Knott, 02/06/2018 04:45 AM
eve_test2.json (292 KB) Chris Knott, 02/06/2018 04:45 AM
eve_test3.json (289 KB) Chris Knott, 02/06/2018 04:45 AM
Actions #1

Updated by Andreas Herz over 6 years ago

  • Assignee set to OISF Dev
  • Target version set to TBD
Actions #2

Updated by Victor Julien over 6 years ago

Chris, could you describe your scenario here again? Reading it here again it doesn't really make sense to me anymore. Even without packets a live instance of Suricata should do its flow cleaning w/o issues.

Actions #3

Updated by Chris Knott about 6 years ago

I've tested some scenarios in order to understand the behavior.

First I want to explain my test setup:
In order to get a reproducible result I am using a dummy network interface (dummy kernel module) without any IP configuration, so the interface is completely silent and only sends the data that I want it to send:

dummy: flags=195<UP,BROADCAST,RUNNING,NOARP> mtu 1500
ether 52:54:00:7e:27:af txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

In order to get data onto the interface I am using prerecorded PCAP files and tcpreplay (e.g. "tcpreplay --intf1=dummy /home/christoph/single_packet.pcap"). Suricata listens on the dummy interface and ignores the checksum errors recorded in the PCAP files ("suricata -i dummy -k none").
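
For reference, a minimal sketch of that setup (the interface name, the Suricata and tcpreplay invocations and the file path are taken from the description above; the ip commands are just my assumption of how the dummy interface gets created):

# load the dummy driver and bring up an interface without any IP configuration
modprobe dummy
ip link add dummy type dummy
ip link set dummy up

# start Suricata on the dummy interface, ignoring checksum errors
suricata -i dummy -k none

# replay prerecorded traffic onto the interface
tcpreplay --intf1=dummy /home/christoph/single_packet.pcap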

My findings:

Test 1: Sending a complete flow at once (file: test.pcap): all information was inside the eve.json file. So no findings there.

Test 2: I was curious what happens if I send the content of the file in chunks (keeping all flow timeout values at their defaults in the configuration). So the second test was: send the beginning of the flow (file: test_missing_end10.pcap), wait a bit (more than 10 minutes, so all timeouts should hit), and then send the end of the flow (file: test_end10.pcap). Surprisingly the flow did not time out after the 10 minutes I would have expected. Instead it timed out after the second part was sent (file: eve_test2.json).
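
For context, the timeouts in question are configured in the flow-timeouts section of suricata.yaml; as far as I know the stock established TCP timeout is 600 seconds, which is where the 10 minute expectation comes from (the config path below is the usual default, adjust for your install):

# show the flow timeout settings actually in use
grep -A 20 '^flow-timeouts:' /etc/suricata/suricata.yaml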

Test 3: My question was now: what happens if I send the beginning of the flow (file: test_missing_end10.pcap), wait a bit (more than 10 minutes, so all timeouts should hit), and then send a single packet of a completely different flow (file: single_packet.pcap)? Again, the original flow only timed out after the single packet was sent (file: eve_test3.json).
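
One way to check when the flow record actually shows up is to filter the flow events out of the eve output, for example with jq (assuming jq is available; the file names are the eve outputs attached to this ticket):

# list the flow events with their timestamps
jq 'select(.event_type == "flow") | {timestamp, flow}' eve_test2.json
jq 'select(.event_type == "flow") | {timestamp, flow}' eve_test3.json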

So it seems that for a flow to be cleaned up (and for its flow record to be written to the eve.json file) a network packet has to arrive at the interface; the timeout checks for flows appear to be done when a packet is received. If the packet stream suddenly stops, no cleanup happens any more. I don't know if this was done intentionally?

Actions #4

Updated by Victor Julien over 4 years ago

This is not intentional, no. Is this still reproducible? We've fixed some bugs with how packets are handled at flow timeout.
