Feature #2958

Suricata 5.0.0beta1 and way too much anomaly logging

Added by Andre ten Bohmer about 2 months ago. Updated about 1 month ago.

If the anomaly output (outputs: -> eve-log: -> types: -> anomaly) is enabled in suricata.yaml, eve.json gets flooded with event type anomaly.
I've seen more than 13 million of these in 5 minutes, which also drastically reduces performance, as seen in capture.kernel_drops.
Under v4.1.3, capture.kernel_drops was way below 0.01%; now I see numbers like:
capture.kernel_packets | Total | 47542250
capture.kernel_drops | Total | 37202776

Event logged in eve.json:
{"timestamp":"2019-05-03T09:11:57.277701+0200","in_iface":"ens2f0","event_type":"anomaly","vlan":[403],"anomaly":{"type":"packet","event":"decoder.ipv4.trunc_pkt"}}
{"timestamp":"2019-05-03T09:11:55.623627+0200","in_iface":"ens2f1","event_type":"anomaly","vlan":[403],"anomaly":{"type":"packet","event":"decoder.ipv4.trunc_pkt"}}

Is it possible to limit this logging? Or is there another option/solution?



Updated by Victor Julien about 1 month ago

For now, a change has been merged that disables the log by default and adds a warning to the yaml we ship. We'll be working on this further.


Updated by Jeff Lucovsky about 1 month ago

  • Assignee set to Jeff Lucovsky

Here are some possible directions for reducing anomaly log activity:

  • Rate limit log records. Use a mechanism like the Linux kernel's "printk ratelimit" that restricts the number of messages logged within a time interval. Records that exceed the threshold are dropped; when drops occur, the next record written is accompanied by a record stating how many were dropped. The advantage of this approach is simplicity; the disadvantage is lost records (see the first sketch after this list).
  • Store and forward. Batch successive log records into a fixed-size memory area (size TBD). When the memory area reaches capacity, the accumulated records are written. This maintains ordering at the expense of latency. An option would be to buffer messages until either (1) a time threshold or (2) a size/count threshold is reached, whichever occurs first. This approach increases Suricata's memory footprint but amortizes the write cost over many records: it is simple, loses no information, and smooths jitter, at the cost of more memory (see the second sketch after this list).
  • Compress adjacent like records. Adjacent log records that are the same (sameness TBD) would be accumulated and marked with an occurrence count. This approach stores the last record that would have been logged and increases its occurrence count as long as subsequent records are identical. When a non-identical record is submitted, the held record is written and the new record is held in its place, again for as long as subsequent records are identical. This is like the store-and-forward mechanism with a store size of 1 plus the semantic that identical records are combined and the duplicates discarded. The chief disadvantages are complexity, and that performance may continue to suffer when few successive records are identical (see the third sketch after this list).
  • Filtering options (see the last sketch after this list). The chief drawback is that no relief may be provided when the filter choice doesn't match, or isn't suitable for, the anomalies that are occurring. Some ideas for filter choices:
      • Filter on stream or packet events using the event code. Log records are packet events when the event code is less than or equal to DECODE_EVENT_PACKET_MAX.
      • Filter on layer 3 protocol (unable to determine, ip, icmp).
      • Filter on layer 4 protocol (udp, tcp, ...).
      • Filter on layer 7 protocol (if available).
      • Filter on whether the packet is invalid (PKT_IS_INVALID) or not.
      • Filter on specific decode events. This would be difficult to explain and configure.
      • A combination of one or more of the preceding choices.
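
First sketch (rate limiting): a minimal, printk-ratelimit-style limiter for anomaly records. All names and thresholds (AnomalyRateLimit, RL_INTERVAL_SECS, RL_BURST) are hypothetical and not part of Suricata; records beyond the per-window budget are dropped and a summary note is written when the window rolls over:

    /* Sketch: printk-style rate limiting of anomaly records.
     * Hypothetical names and thresholds; not Suricata code. */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    #define RL_INTERVAL_SECS 5    /* window length */
    #define RL_BURST         100  /* max records written per window */

    typedef struct {
        time_t   window_start;
        uint32_t emitted;         /* records written in the current window */
        uint64_t dropped;         /* records suppressed since the last report */
    } AnomalyRateLimit;

    /* Returns 1 if the record may be written, 0 if it is dropped. */
    static int AnomalyRateLimitCheck(AnomalyRateLimit *rl, time_t now)
    {
        if (now - rl->window_start >= RL_INTERVAL_SECS) {
            /* New window: report what was lost, then reset the counters. */
            if (rl->dropped > 0) {
                printf("{\"event_type\":\"anomaly\",\"note\":\"%llu records dropped\"}\n",
                       (unsigned long long)rl->dropped);
                rl->dropped = 0;
            }
            rl->window_start = now;
            rl->emitted = 0;
        }
        if (rl->emitted < RL_BURST) {
            rl->emitted++;
            return 1;
        }
        rl->dropped++;
        return 0;
    }

    int main(void)
    {
        AnomalyRateLimit rl = { .window_start = time(NULL) };
        for (int i = 0; i < 1000; i++) {
            if (AnomalyRateLimitCheck(&rl, time(NULL)))
                printf("{\"event_type\":\"anomaly\",\"event\":\"decoder.ipv4.trunc_pkt\"}\n");
        }
        return 0;
    }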
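
Second sketch (store and forward): a minimal buffering scheme that flushes on whichever of a count or age threshold is reached first. The AnomalyBuffer type and its thresholds are hypothetical, not Suricata code:

    /* Sketch: store-and-forward buffering of anomaly records.
     * Hypothetical names and thresholds; not Suricata code. */
    #include <stdio.h>
    #include <time.h>

    #define BUF_MAX_RECORDS  64   /* size/count threshold */
    #define BUF_MAX_AGE_SECS 2    /* time threshold */
    #define REC_LEN          256

    typedef struct {
        char   records[BUF_MAX_RECORDS][REC_LEN];
        int    count;
        time_t first_ts;          /* arrival time of the oldest buffered record */
    } AnomalyBuffer;

    static void AnomalyBufferFlush(AnomalyBuffer *buf, FILE *out)
    {
        for (int i = 0; i < buf->count; i++)
            fputs(buf->records[i], out);   /* ordering is preserved */
        fflush(out);
        buf->count = 0;
    }

    static void AnomalyBufferAdd(AnomalyBuffer *buf, const char *rec, FILE *out)
    {
        time_t now = time(NULL);
        if (buf->count == 0)
            buf->first_ts = now;

        snprintf(buf->records[buf->count++], REC_LEN, "%s", rec);

        /* Flush on whichever threshold is hit first. */
        if (buf->count >= BUF_MAX_RECORDS || now - buf->first_ts >= BUF_MAX_AGE_SECS)
            AnomalyBufferFlush(buf, out);
    }

    int main(void)
    {
        AnomalyBuffer buf = { .count = 0 };
        for (int i = 0; i < 200; i++)
            AnomalyBufferAdd(&buf,
                "{\"event_type\":\"anomaly\",\"event\":\"decoder.ipv4.trunc_pkt\"}\n",
                stdout);
        AnomalyBufferFlush(&buf, stdout);  /* drain whatever is left at shutdown */
        return 0;
    }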
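
Third sketch (compress adjacent like records): the held-record-plus-occurrence-count idea, treating "sameness" as an exact string match purely for illustration. The AnomalyDedup type and its functions are hypothetical:

    /* Sketch: compress adjacent identical anomaly records into one record
     * plus an occurrence count. Hypothetical names; "sameness" is an exact
     * string match here. Not Suricata code. */
    #include <stdio.h>
    #include <string.h>

    #define REC_LEN 256

    typedef struct {
        char     held[REC_LEN];   /* last record seen, not yet written */
        unsigned occurrences;     /* how many times it has repeated */
    } AnomalyDedup;

    static void AnomalyDedupEmit(AnomalyDedup *d, FILE *out)
    {
        if (d->occurrences == 0)
            return;
        /* Write the held record once, annotated with its repeat count. */
        fprintf(out, "%s occurrences=%u\n", d->held, d->occurrences);
        d->occurrences = 0;
    }

    static void AnomalyDedupAdd(AnomalyDedup *d, const char *rec, FILE *out)
    {
        if (d->occurrences > 0 && strcmp(d->held, rec) == 0) {
            d->occurrences++;          /* identical to the held record: just count */
            return;
        }
        AnomalyDedupEmit(d, out);      /* flush the previous run */
        snprintf(d->held, REC_LEN, "%s", rec);
        d->occurrences = 1;
    }

    int main(void)
    {
        AnomalyDedup d = { .occurrences = 0 };
        const char *a = "{\"event\":\"decoder.ipv4.trunc_pkt\"}";
        const char *b = "{\"event\":\"decoder.udp.pkt_too_small\"}";
        for (int i = 0; i < 5; i++)
            AnomalyDedupAdd(&d, a, stdout);
        AnomalyDedupAdd(&d, b, stdout);    /* forces the run of 5 to be written */
        AnomalyDedupEmit(&d, stdout);      /* drain the held record at shutdown */
        return 0;
    }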
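
Last sketch (filtering): one way a combined filter predicate could look. DECODE_EVENT_PACKET_MAX and PKT_IS_INVALID are the Suricata symbols mentioned above; everything else (AnomalyRecord, AnomalyFilterConfig, their fields, and the placeholder value used in main) is hypothetical, and only a subset of the filter ideas (packet/stream split, L4 protocol, invalid flag) is shown:

    /* Sketch: a combined anomaly filter predicate. The AnomalyRecord and
     * AnomalyFilterConfig types and their fields are hypothetical.
     * Not Suricata code. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <netinet/in.h>   /* IPPROTO_TCP, IPPROTO_UDP */

    typedef struct {
        uint16_t event_code;   /* decode/stream event id */
        uint8_t  l4_proto;     /* IPPROTO_TCP, IPPROTO_UDP, ... */
        bool     pkt_invalid;  /* result of the PKT_IS_INVALID check */
    } AnomalyRecord;

    typedef struct {
        bool log_packet_events;  /* event_code <= DECODE_EVENT_PACKET_MAX */
        bool log_stream_events;  /* event_code >  DECODE_EVENT_PACKET_MAX */
        bool invalid_only;       /* only log records for invalid packets */
        int  l4_proto;           /* -1 = any, else IPPROTO_* to match */
    } AnomalyFilterConfig;

    static bool AnomalyFilterAccept(const AnomalyFilterConfig *cfg,
                                    const AnomalyRecord *rec,
                                    uint16_t decode_event_packet_max)
    {
        bool is_packet_event = rec->event_code <= decode_event_packet_max;

        if (is_packet_event && !cfg->log_packet_events)
            return false;
        if (!is_packet_event && !cfg->log_stream_events)
            return false;
        if (cfg->invalid_only && !rec->pkt_invalid)
            return false;
        if (cfg->l4_proto >= 0 && rec->l4_proto != cfg->l4_proto)
            return false;

        return true;             /* record passes all enabled filters */
    }

    int main(void)
    {
        AnomalyFilterConfig cfg = {
            .log_packet_events = true,
            .log_stream_events = false,
            .invalid_only = true,
            .l4_proto = -1,        /* any L4 protocol */
        };
        AnomalyRecord rec = {
            .event_code = 12, .l4_proto = IPPROTO_UDP, .pkt_invalid = true,
        };
        /* 200 is a placeholder; the real DECODE_EVENT_PACKET_MAX lives in Suricata. */
        printf("accept=%d\n", AnomalyFilterAccept(&cfg, &rec, 200));
        return 0;
    }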
