Support #3679

Memory usage rises but does not fall

Added by ice cheng over 4 years ago. Updated almost 3 years ago.

Status: Closed
Priority: Normal
Affected Versions:
Label:

Description

After the Suricata program has been running on CentOS 7.6 for a while, tcp.memuse, tcp.reassembly_memuse, http.memuse, and flow.memuse increase normally.
Next, stop the network traffic.
After waiting 10 minutes, all the TCP flows have timed out and tcp.memuse, tcp.reassembly_memuse, http.memuse, and flow.memuse drop back to their initial values,
but the memory usage of Suricata has increased by about 1 GB and never decreases, even if you wait a few days.
(PS: memory usage is checked with "pmap -p `pidof suricata` -d" and "top | grep suricata")

May I ask: if the memory is freed by Suricata, why is it not reclaimed by the operating system? (PS: no memory leak was detected by Valgrind)

Looking forward to your help, thanks very much!
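
To make that comparison easier to follow over time, here is a minimal sampling sketch that pairs the kernel's view (process RSS) with Suricata's own counters; it assumes a single running suricata process and the stats.log path /data/logs used in this setup, and the 30-second interval is an arbitrary choice:

# Log process RSS next to the Suricata memuse counters at a fixed interval.
while true; do
    echo "=== $(date '+%F %T') ==="
    # Resident set size of the Suricata process in KiB, as seen by the kernel.
    ps -o rss= -p "$(pidof suricata)"
    # Most recent memuse / capture counters appended to the stats log.
    grep -E "(memuse|capture)" /data/logs/stats.log | tail -6
    sleep 30
done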

#1

Updated by Andreas Herz over 4 years ago

  • Assignee changed from Victor Julien to Community Ticket

Please provide us with more details about your setup/configuration.

#2

Updated by ice cheng over 4 years ago

#Modified the following configuration:
(PS: rules and the eve-log output are not enabled; a runtime memcap check sketch follows the configuration)

runmode: workers

af-packet:
  - interface: default
    threads: 1
    # cluster-id: 99
    cluster-type: cluster_flow
    defrag: no
    rollover: yes
    use-mmap: yes
    tpacket-v3: yes
    ring-size: 400000
    block-size: 524288

flow:
  memcap: 4gb
  hash-size: 1048576
  prealloc: 300000
  emergency-recovery: 30

flow-timeouts:
  tcp:
    new: 60
    established: 480
    closed: 20
    bypassed: 100
    emergency-new: 10
    emergency-established: 30
    emergency-closed: 5
    emergency-bypassed: 50

stream:
  memcap: 4gb
  prealloc-sessions: 40000
  checksum-validation: no      # reject wrong csums
  inline: no                   # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 8gb
    depth: 5mb                 # reassemble 5mb into a stream
    toserver-chunk-size: 5120
    toclient-chunk-size: 5120
    randomize-chunk-size: yes
    raw: yes
    segment-prealloc: 20000
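
If the unix socket is enabled in suricata.yaml and the Suricata version in use supports the memcap socket commands, the memcaps configured above can also be checked at runtime. A sketch, where the exact config names (flow, stream, stream-reassembly) are assumptions that may differ between versions:

# Compare the running memcap values against the configuration
# (assumes unix-command is enabled and the suricatasc client is installed).
suricatasc -c "memcap-list"                    # list configs that expose a memcap
suricatasc -c "memcap-show flow"               # expected: 4gb
suricatasc -c "memcap-show stream"             # expected: 4gb
suricatasc -c "memcap-show stream-reassembly"  # expected: 8gb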

#Operating system: CentOS Linux release 7.6.1810 (Core)
#Start parameters:
./suricata -c ./suricata.yaml --af-packet=enp14s0 -l /data/logs -D -v

#Initial memory usage of suricata:
[root@localhost ~]# top
top - 10:13:17 up 17 days, 23:24, 6 users, load average: 0.00, 0.05, 0.15
Tasks: 272 total, 1 running, 271 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 0.1 sy, 0.0 ni, 99.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 32748604 total, 18092928 free, 4599472 used, 10056204 buff/cache
KiB Swap: 33554428 total, 32934652 free, 619776 used. 27771016 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4494 root 20 0 1393024 943920 654212 S 2.3 2.9 0:05.58 Suricata-Main

[root@localhost ~]# pmap -p `pidof suricata` -d |tail -1
mapped: 1393028K writeable/private: 342120K shared: 650240K

#Initial stat log:
[root@localhost ~]# tailf /data/logs/stats.log |grep -E "(memuse|capture)"
capture.kernel_packets | Total | 541
tcp.memuse | Total | 11200000
tcp.reassembly_memuse | Total | 960000
flow.memuse | Total | 71511944

#Replay pcap packets: tcpreplay -i enp14s0 -M 200 http_10.20.121.33_190920.pcap
#Pcap file size: 8.6G
[root@localhost pcap_test]# tcpreplay -i enp14s0 -M 200 http_10.20.121.33_190920.pcap
Actual: 12949068 packets (9022972114 bytes) sent in 360.91 seconds
Rated: 25000061.2 Bps, 200.00 Mbps, 35878.14 pps
Statistics for network device: enp14s0
Successful packets: 12949068
Failed packets: 0
Truncated packets: 0
Retried packets (ENOBUFS): 0
Retried packets (EAGAIN): 0

#Memory usage of Suricata after tcpreplay finished:
[root@localhost ~]# top
........
KiB Mem : 32748604 total, 17533316 free, 5158848 used, 10056440 buff/cache
KiB Swap: 33554428 total, 32934652 free, 619776 used. 27211652 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4494 root 20 0 1925508 1.4g 654596 S 2.0 4.6 0:36.69 Suricata-Main

[root@localhost src]# pmap -p `pidof suricata` -d |tail -1
mapped: 1925512K writeable/private: 923140K shared: 650240K

[root@localhost ~]# tailf /data/logs/stats.log |grep -E "(memuse|capture)"
capture.kernel_packets | Total | 12958154
tcp.memuse | Total | 13166848
tcp.reassembly_memuse | Total | 105577768
http.memuse | Total | 144767956
flow.memuse | Total | 81482344

#Memory usage of Suricata after about 10 minutes:

[root@localhost ~]# top
............
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4494 root 20 0 1925508 1.4g 654656 S 1.7 4.6 0:53.70 Suricata-Main

[root@localhost ~]# pmap -p `pidof suricata` -d |tail -1
mapped: 1925512K writeable/private: 923140K shared: 650240K

[root@localhost ~]# tailf /data/logs/stats.log |grep -E "(memuse|capture)"
capture.kernel_packets | Total | 12962197
tcp.memuse | Total | 11200280
tcp.reassembly_memuse | Total | 968192
flow.memuse | Total | 71511504

#My conclusion:
All memuse counters have dropped back to their initial values, but the process memory usage never changes,
so perhaps there is a memory leak that has not been found.
I tested with threads: 1; if I change it to threads: auto, the test results are the same.
If I replay more pcap packets with tcpreplay for a long time, the memory usage of Suricata keeps increasing.
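
One explanation consistent with these numbers (not confirmed in this ticket) is that the memory has been free()d by Suricata but is retained by the glibc allocator for reuse, so the process RSS stays high even though all memuse counters drop back; that would also match Valgrind finding no leak. A rough experiment to test this, assuming glibc malloc, gdb installed on the box, and that briefly attaching a debugger to the sensor is acceptable:

# RSS / mapped memory before trimming the heap.
pmap -p "$(pidof suricata)" -d | tail -1
# Ask glibc to hand unused heap pages back to the kernel (malloc_trim is a glibc function).
gdb --batch -p "$(pidof suricata)" -ex 'call (int) malloc_trim(0)' -ex detach
# RSS / mapped memory after trimming.
pmap -p "$(pidof suricata)" -d | tail -1

If RSS drops noticeably after the trim, the retained memory is allocator cache and fragmentation rather than a leak; how much glibc holds back depends on allocation patterns and the number of malloc arenas (for example the MALLOC_ARENA_MAX environment variable).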

#3

Updated by Andreas Herz about 4 years ago

Do you see this with real traffic or just on the tcpreplay setup?

#4

Updated by Andreas Herz almost 3 years ago

  • Status changed from New to Closed

Hi, we're closing this issue since there have been no further responses.
If you think this issue is still relevant, try to test it again with the
most recent version of suricata and reopen the issue. If you want to
improve the bug report please take a look at
https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Reporting_Bugs
