
Bug #1178

tcp.reassembly_memuse misprint in stats.log

Added by Peter Manev almost 5 years ago. Updated about 3 years ago.

Status:
Closed
Priority:
Normal
Assignee:
-
Target version:
-
Affected Versions:
Effort:
Difficulty:

Description

Using 2.0dev (rev ab50387)

In my suricata.yaml I have:

  reassembly:
    memcap: 30gb

Two minutes after startup, tcp.reassembly_memuse shows the whole amount (30 GB) as used by every thread:


root@suricata:~# grep memuse /var/log/suricata/stats.log | tail -64
tcp.memuse                | AFPacketeth31             | 302434368
tcp.reassembly_memuse     | AFPacketeth31             | 32212254681
http.memuse               | AFPacketeth31             | 1385239
dns.memuse                | AFPacketeth32             | 14468098
tcp.memuse                | AFPacketeth32             | 302137200
tcp.reassembly_memuse     | AFPacketeth32             | 32212254681
http.memuse               | AFPacketeth32             | 1387903
dns.memuse                | AFPacketeth33             | 14017181
tcp.memuse                | AFPacketeth33             | 302169856
tcp.reassembly_memuse     | AFPacketeth33             | 32212254683
http.memuse               | AFPacketeth33             | 1385537
dns.memuse                | AFPacketeth34             | 14019163
tcp.memuse                | AFPacketeth34             | 302172560
tcp.reassembly_memuse     | AFPacketeth34             | 32212254683
http.memuse               | AFPacketeth34             | 1380014
....

tcp.memuse                | AFPacketeth316            | 302176576
tcp.reassembly_memuse     | AFPacketeth316            | 32212254683
http.memuse               | AFPacketeth316            | 1377703

However, htop (attached) shows about 12 GB of memory used in total, while tcp.reassembly_memuse in stats.log reports that roughly 30 GB are in use for every thread right away, which is not true. (30 GB = 32,212,254,720 bytes, so the reported 32,212,254,681 is essentially the full configured memcap.)


Files

htop.PNG (33.6 KB) - Peter Manev, 04/13/2014 07:11 AM

History

#1

Updated by Andreas Moe over 4 years ago

Is this a misprint, or just mislabeled? If memcap is set to 30gb, could tcp.reassembly_memuse be reporting not the memory in use, but the cap?

#2

Updated by Peter Manev over 4 years ago

I think this should depict "current" usage on a global level (not per thread) - the cap is already known from the yaml.

#3

Updated by Ken Steele over 4 years ago

Looking at the Suricata source code, it appears that the memcaps are global, but reported by every thread. There is only one global for TCP reassembly (ra_memuse in stream-tcp-reassemble.c).

The reporting is confusing, since it looks like each worker thread is using the memuse amount of memory.

#4

Updated by Victor Julien over 4 years ago

Since the memuse is global, we should probably not update it from each of the threads either. Maybe the flow manager could register the counter instead. Or we need a different method entirely.

#5

Updated by Ken Steele over 4 years ago

I agree that it should only be reported once, not per-thread, given it is a global number.

I also see that the value in the global ra_memuse is only copied into the stats counter by StreamTcpReassembleMemuseCounter(), which is only called at the end of StreamTcpReassembleHandleSegment(). This means the stats value does not get reduced when flows expire. It also requires extra work to copy the value into the stat.

It would be better if the stats reporting thread could simply read the value from ra_memuse.

#6

Updated by Andreas Herz about 3 years ago

  • Assignee set to OISF Dev
  • Target version set to TBD

#7

Updated by Victor Julien about 3 years ago

  • Status changed from New to Closed
  • Assignee deleted (OISF Dev)
  • Target version deleted (TBD)

This should be fixed in 3.0.

#8

Updated by Peter Manev about 3 years ago

Confirmed - it is.
