Bug #4502

TCP reassembly memuse approaching memcap value results in TCP detection being stopped

Added by Andreas Herz almost 3 years ago. Updated over 1 year ago.

Status: Closed
Priority: Normal
Assignee: Victor Julien
Target version: 7.0.0-beta1
Affected Versions:
Effort: medium
Difficulty:
Label:

Description

We discovered that the majority of our 6.0 deployments show an increase in overall memory usage, most notably for TCP reassembly: its memuse keeps rising until it hits the memcap (without a corresponding .memcap entry in stats.log), and at that point nearly all TCP app-layer detection stops.

We see some amount of tcp.reassembly_gap on all of those deployments, and after the point where the TCP reassembly memuse reaches the memcap those gaps increase much faster and tcp.insert_data_normal_fail starts to occur as well. As you can see in the Grafana output there are some correlations:

  • TCP reassembly memuse hits the limit after less than 24 hours with an 8 Gbit/s traffic ingest (generated via T-Rex)
  • tcp.insert_data_normal_fail rises as soon as the limit is hit
  • TCP reassembly gaps jump at the same point
  • TCP app-layer detection is gone entirely; only UDP (DNS) is still inspected
  • traffic, load and flows/min stay stable, with no drops

The most interesting thing is that we can't reproduce it with 5.0.3 while everything else stays the same (we only changed the config to not use the new app-layer parsers that exist only in 6.0).
(In general the overall memory usage with 5.0.3 is much smaller compared to 6.0.2.)
In 5.0.3 the TCP memuse as well as the flow memuse settle at a plateau, which makes much more sense given the stable T-Rex traffic ingest.
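
For reference, the counters above can be tracked from the periodic stats records in eve.json. Below is a minimal monitoring sketch; the eve.json path and the 4 GB memcap are assumptions, so adjust them to the values in your own suricata.yaml (stream.reassembly.memcap):

```python
#!/usr/bin/env python3
"""Track tcp.reassembly_memuse against the configured memcap using eve.json
stats records. Assumes the stats output is enabled in eve-log and that
REASSEMBLY_MEMCAP matches stream.reassembly.memcap in suricata.yaml."""
import json
import sys

EVE_PATH = "/var/log/suricata/eve.json"  # assumed default install path
REASSEMBLY_MEMCAP = 4 * 1024 ** 3        # assumed example value: 4gb

def main(path):
    with open(path) as fh:
        for line in fh:
            try:
                rec = json.loads(line)
            except json.JSONDecodeError:
                continue
            if rec.get("event_type") != "stats":
                continue
            tcp = rec.get("stats", {}).get("tcp", {})
            memuse = tcp.get("reassembly_memuse", 0)
            ratio = memuse / REASSEMBLY_MEMCAP
            print(f'{rec.get("timestamp", "?")} reassembly_memuse={memuse} '
                  f'({ratio:.0%} of memcap) '
                  f'gaps={tcp.get("reassembly_gap", 0)} '
                  f'insert_fail={tcp.get("insert_data_normal_fail", 0)}')
            if ratio > 0.9:
                print("  -> within 10% of the memcap, app-layer impact likely",
                      file=sys.stderr)

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else EVE_PATH)
```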

Some system details:

Debian Buster with the 5.10 backports kernel.
Capture on X710 10GE SFP+ NICs with AF_PACKET v3; there was no difference between using cluster_qm with an RSS setting and cluster_flow with a single queue.

The current workaround is a daily restart of Suricata 6.0.2; a rollback to 5.0.3 might become necessary.

Any hints on where to look on the code side would help; we will also check the git log ourselves to see if we can spot the cause of the regression.


Files

tcpreassemblymemuse.png (482 KB) Andreas Herz, 05/25/2021 09:11 AM
buildinfo_50.txt (3.86 KB) Andreas Herz, 05/27/2021 09:15 AM
buildinfo_60.txt (3.89 KB) Andreas Herz, 05/27/2021 09:15 AM
flowmemuse.png (27.8 KB) Andreas Herz, 05/27/2021 09:15 AM
perftop50.png (23.6 KB) Andreas Herz, 05/27/2021 09:15 AM
perftop60.png (19 KB) Andreas Herz, 05/27/2021 09:15 AM
tcpmemuse.png (30.3 KB) Andreas Herz, 05/27/2021 09:15 AM
top50.png (35.9 KB) Andreas Herz, 05/27/2021 09:15 AM
top60.png (34.8 KB) Andreas Herz, 05/27/2021 09:15 AM
massif2.out (31.3 KB) Andreas Herz, 07/07/2021 12:35 PM
massif.out (58.1 KB) Andreas Herz, 07/07/2021 12:35 PM

Related issues (4): 0 open, 4 closed

  • Related to Suricata - Bug #4650: Stream TCP raw reassembly is leaking (Closed, Victor Julien)
  • Related to Suricata - Optimization #4652: GAP handling improvements seem expensive (Rejected)
  • Related to Suricata - Optimization #4653: Flow cleaning with chunked approach is memory hungry (Closed, Victor Julien)
  • Related to Suricata - Bug #4654: tcp: insert_data_normal_fail can hit without triggering memcap (Closed, Eric Leblond)
Actions #1

Updated by Andreas Herz almost 3 years ago

  • Effort set to medium
Actions #2

Updated by Andreas Herz almost 3 years ago

We also saw, on live deployments, that the flow.timeout_inuse counter is between 30-70% of the overall flows. This might be another correlation hinting that some flows don't time out as they did before.

Actions #3

Updated by Peter Manev almost 3 years ago

Is it possible to post build-info and perf top / top -H -p <suricata pid> output when that starts to happen?
Also, midstream pickup is not enabled in the config, right?

Actions #5

Updated by Andreas Herz almost 3 years ago

Midstream pickup is not enabled.

Both runs you see use the same traffic (via T-Rex) and the same config; the only difference is the Suricata version.

With 5.0 the memuse is stable, as expected given the stable traffic; with 6.0 it keeps increasing and after some hours hits the memcap.

Actions #6

Updated by Peter Manev almost 3 years ago

I am not sure if it is related, but I noticed the following for 5.x from the info provided:

compiled with LibHTP v0.5.33, linked against LibHTP v0.5.37

Also, Rust is not the same version: 1.34 for 5.x and 1.42 for 6.0.
Wondering if it has any effect - just something worth "equalizing" to rule it out as a potential cause.
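
To make that "equalizing" easier, the two attached build-info captures can be compared mechanically; a small sketch using only the standard library (the file names are those of the attachments above):

```python
#!/usr/bin/env python3
"""Print the lines that differ between two `suricata --build-info` captures,
e.g. the attached buildinfo_50.txt and buildinfo_60.txt."""
import difflib
import sys

def diff_buildinfo(old_path, new_path):
    with open(old_path) as a, open(new_path) as b:
        diff = difflib.unified_diff(a.read().splitlines(),
                                    b.read().splitlines(),
                                    fromfile=old_path, tofile=new_path,
                                    lineterm="")
    for line in diff:
        # Skip the file headers; only show actual added/removed lines.
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---")):
            print(line)

if __name__ == "__main__":
    diff_buildinfo(sys.argv[1], sys.argv[2])  # e.g. buildinfo_50.txt buildinfo_60.txt
```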

Actions #7

Updated by Andreas Herz almost 3 years ago

Good catch. We did a rebuild with the same versions that are used for 6.0, but no change: 5.0 still looks good and 6.0 bad. So we can rule that out as well.

We could at least narrow it down to this commit set:

last known good commit: https://github.com/OISF/suricata/commit/be57f6d3cc74f3bb1315439218c24a928cae7c48
first known bad commit: https://github.com/OISF/suricata/commit/999af4f62a234d78f682c2b842f434410a61f56f (documentation only)

So in between are several flow-related commits, which I also see in https://github.com/OISF/suricata/pull/5284. That would make sense, since the relevant sections have been touched/changed. But the change is rather complex, so it might take some time to find out what causes the big memory increase/"leak".

So either the whole rework or a specific commit within it seems to be the reason.

Actions #8

Updated by Andreas Herz almost 3 years ago

A deeper dive revealed:

last known good commit https://github.com/OISF/suricata/commit/48605f48674b1478a7dfcb03d67fa2d92eff4f51#diff-ae20d432e75e77294238b0a3633016e6efaa39c1b759d2a1d0b18e135bed7eac
first bad one https://github.com/OISF/suricata/commit/b3599507f4eb891841417575587d690ea13fe6c0#diff-ae20d432e75e77294238b0a3633016e6efaa39c1b759d2a1d0b18e135bed7eac

In between, the commit https://github.com/OISF/suricata/commit/2fd7c87f2271b14ba8c1ed4246c04fbe0457b1cf#diff-ae20d432e75e77294238b0a3633016e6efaa39c1b759d2a1d0b18e135bed7eac introduced another regression that led to very low TCP reassembly memuse but high flow usage and many more events and flows on a T-Rex test setup, which also resulted in much higher drops but nearly zero TCP reassembly gaps.

Actions #9

Updated by Odin Jenseg almost 3 years ago

I think this issue is related to what I observed in https://redmine.openinfosecfoundation.org/issues/4200. We observed increasing numbers of flows in the eBPF map that were never deleted, and a huge difference between the number of "closed" bypass flows in 5 and 6. It's been a while since I did any testing on 6, but using Suricata 5 did resolve our issue.

Actions #10

Updated by Andreas Herz almost 3 years ago

I added two valgrind runs with massif enabled; as you can see, one has stable memory usage while the other increases steadily, even though both see the same traffic.
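
For anyone who wants to read the attached files without ms_print, here is a minimal sketch that only relies on the snapshot/mem_heap_B fields of the massif output format and prints the heap size per snapshot, so the plateau vs. steady-increase difference between massif.out and massif2.out is easy to see:

```python
#!/usr/bin/env python3
"""Summarize the heap usage recorded in valgrind massif output files
(e.g. the attached massif.out and massif2.out): one line per snapshot,
so a stable plateau vs. a steady increase is easy to spot."""
import sys

def heap_per_snapshot(path):
    """Return a list of (snapshot, time, mem_heap_B) tuples."""
    snapshots, snap, time = [], None, None
    with open(path) as fh:
        for line in fh:
            if line.startswith("snapshot="):
                snap = int(line.split("=", 1)[1])
            elif line.startswith("time="):
                time = int(line.split("=", 1)[1])
            elif line.startswith("mem_heap_B="):
                snapshots.append((snap, time, int(line.split("=", 1)[1])))
    return snapshots

if __name__ == "__main__":
    for path in sys.argv[1:]:
        snaps = heap_per_snapshot(path)
        peak = max(heap for _, _, heap in snaps)
        print(f"{path}: {len(snaps)} snapshots, peak heap {peak / 1024 ** 2:.1f} MiB")
        for snap, time, heap in snaps:
            print(f"  #{snap:3d} time={time} heap_bytes={heap}")
```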

Actions #11

Updated by Andreas Herz almost 3 years ago

Comparing the two outputs, the good one has 92% in the SCMallocFunc tree and just 4% for SCReallocFunc, while in the bad one the SCReallocFunc share increases to 42%. This could explain the increase in memory usage if that memory is not released, or at least not released fast enough.
So looking into the flow-worker and tm-threads changes will be the next step.

Actions #12

Updated by Andreas Herz almost 3 years ago

I did a check on all deployments and, while I can see the issue everywhere, I can also confirm that it's not solely related to `tcp.reassembly_gap`, since several deployments with the issue don't have any gaps at all.
So IMHO most of the correlations are just side effects.
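
The per-deployment check can be scripted against the unix socket; a hedged sketch, assuming unix-command is enabled in suricata.yaml and that the dump-counters reply is a JSON object with the counters under "message" (the exact reply layout is an assumption, adjust the key lookups if it differs):

```python
#!/usr/bin/env python3
"""Query a running Suricata via suricatasc and print the counters discussed
in this ticket. The shape of the dump-counters reply is an assumption and
may need adjusting to the actual output on your deployment."""
import json
import subprocess

WATCHED = ("reassembly_memuse", "reassembly_gap", "insert_data_normal_fail")

def dump_tcp_counters():
    out = subprocess.run(["suricatasc", "-c", "dump-counters"],
                         capture_output=True, text=True, check=True).stdout
    reply = json.loads(out)
    # Assumed reply layout: {"message": {"tcp": {...}, ...}, "return": "OK"}
    return reply.get("message", {}).get("tcp", {})

if __name__ == "__main__":
    tcp = dump_tcp_counters()
    for name in WATCHED:
        print(f"tcp.{name} = {tcp.get(name, 'n/a')}")
```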

Actions #13

Updated by Andreas Herz over 2 years ago

  • Related to Bug #4650: Stream TCP raw reassembly is leaking added
Actions #14

Updated by Andreas Herz over 2 years ago

  • Related to Optimization #4652: GAP handling improvements seem expensive added

Actions #15

Updated by Andreas Herz over 2 years ago

  • Related to Optimization #4653: Flow cleaning with chunked approach is memory hungry added
Actions #16

Updated by Andreas Herz over 2 years ago

  • Related to Bug #4654: tcp: insert_data_normal_fail can hit without triggering memcap added
Actions #18

Updated by Victor Julien over 2 years ago

  • Status changed from New to In Progress
  • Assignee set to Victor Julien
  • Target version changed from TBD to 7.0.0-beta1
Actions #19

Updated by Victor Julien over 1 year ago

  • Status changed from In Progress to Closed

Closing this, as a lot has been done to improve it. If this is still or again an issue, it will need new analysis, tool output, etc., so a new ticket would be best then.
