Bug #5363


Memory leak in Rust SMB file tracker

Added by Maayan Fish about 2 years ago. Updated about 1 year ago.



Hey All,
We've experienced a memory leak in Suricata 6.0.5 while processing SMB2 traffic that contains transfers of many files.
These are the steps I took to reproduce the problem:
1. Started Suricata with the default 6.0.5 suricata.yaml
2. Checked its memory consumption with htop - ~100MB
3. Replayed the PCAPs with tcpreplay
4. Waited 45 minutes
5. Checked with htop again - Suricata's memory is around ~450MB

It should be noted that Suricata with the default config in our lab, with almost ZERO TRAFFIC, always sits at around 120MB RAM; the fact that it stayed at 450MB and never came back down is not normal.

The discovery of this bug started at one of our customers, where we saw in dmesg that the kernel had killed Suricata multiple times because it exhausted all system memory.
We saw lines like the following:
[19808.216017] Out of memory: Kill process 12368 (Suricata-Main) score 748 or sacrifice child
[19808.216076] Killed process 12368 (Suricata-Main) total-vm:18705720kB, anon-rss:11259404kB, file-rss:0kB, shmem-rss:0kB
This means Suricata consumed about 18GB of RAM before the kernel terminated it.
Suricata usually consumes 300-600MB at our customers, so 18GB was obviously very strange.
We then recorded PCAPs, and I could reproduce the memory leak on a local VM using the steps above.
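As a sanity check on the OOM-killer numbers, the kB fields in that dmesg line can be converted to GiB. A minimal Rust sketch; the helpers `field_kb` and `kb_to_gib` are written just for this illustration and are not part of Suricata or any log-parsing library:

```rust
// Hypothetical helper: extract a "key:<n>kB" field from an OOM-killer
// log line and convert it to GiB for sanity-checking the reported sizes.
fn field_kb(line: &str, key: &str) -> Option<u64> {
    let start = line.find(key)? + key.len() + 1; // skip past "key:"
    let rest = &line[start..];
    let end = rest.find("kB")?;
    rest[..end].parse().ok()
}

fn kb_to_gib(kb: u64) -> f64 {
    kb as f64 / (1024.0 * 1024.0)
}

fn main() {
    let line = "Killed process 12368 (Suricata-Main) total-vm:18705720kB, \
                anon-rss:11259404kB, file-rss:0kB, shmem-rss:0kB";
    let vm = field_kb(line, "total-vm").unwrap();
    let rss = field_kb(line, "anon-rss").unwrap();
    // total-vm comes out to ~17.8 GiB, anon-rss to ~10.7 GiB, matching
    // the "18GB" figure in the report.
    println!("total-vm = {:.1} GiB, anon-rss = {:.1} GiB",
             kb_to_gib(vm), kb_to_gib(rss));
}
```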

I also compiled Suricata in debug mode and did the following extra checks:
1. Ran it under the "heaptrack" memory profiler, which showed memory leaks and high memory usage in smb/
2. To verify this is 100% related to SMB files, I commented out the "c.chunk.extend(d);" lines, and afterwards there was no memory leak and no high memory usage!
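To illustrate the suspected pattern, here is a deliberately simplified Rust sketch; the `FileChunk` type and `append` method are hypothetical and not Suricata's actual code. A per-file buffer that is only ever extended, like the `c.chunk.extend(d);` line above, grows with every replayed record unless something drains or caps it:

```rust
// Simplified sketch of the suspected pattern (NOT Suricata's actual code):
// each SMB read/write record appends its payload to a per-file chunk buffer.
struct FileChunk {
    chunk: Vec<u8>,
}

impl FileChunk {
    fn new() -> Self {
        FileChunk { chunk: Vec::new() }
    }

    // Analogous to the `c.chunk.extend(d);` line the reporter commented out.
    fn append(&mut self, d: &[u8]) {
        self.chunk.extend(d);
    }
}

fn main() {
    let mut c = FileChunk::new();
    let record = [0u8; 4096];
    // Replaying many SMB records: without a cap or a drain step,
    // the buffer grows linearly with traffic and is never released.
    for _ in 0..1000 {
        c.append(&record);
    }
    println!("buffered {} bytes", c.chunk.len()); // prints "buffered 4096000 bytes"
}
```

Commenting out the `append` call here makes memory flat, which matches the reporter's observation when the `extend` line in the SMB code was removed.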

Unfortunately I cannot attach the PCAPs that reproduce this because they contain data from a customer, but I'd be happy to collaborate and give any needed extra information or to do a joint debug session.

Attaching the following files:
1. build info
2. htop before running pcap - 100MB
3. htop after running pcap - 480MB
4. dmesg output - suricata out of memory
5. heaptrack memory profiler memory leak
6. suricata.yaml - default 6.0.5



build_info.txt (4.07 KB), Maayan Fish, 05/16/2022 05:44 PM
default-config-htop-after-45min.png (109 KB), Maayan Fish, 05/16/2022 05:44 PM
default-config-htop-before.png (110 KB), Maayan Fish, 05/16/2022 05:44 PM
Suricata-dmesg.txt (2.11 KB), Maayan Fish, 05/16/2022 05:44 PM
heaptrack-memory.png (636 KB), Maayan Fish, 05/16/2022 05:44 PM
suricata.yaml (71.5 KB), Maayan Fish, 05/16/2022 05:48 PM
smb-thresholds-after.png (140 KB), 498MB RAM after 30 minutes - mem leak, Maayan Fish, 05/30/2022 12:05 PM
smb-thresholds-before.png (160 KB), 94MB - Initial RAM, Maayan Fish, 05/30/2022 12:12 PM

Related issues: 2 (0 open, 2 closed)

Related to Suricata - Optimization #5782: smb: set defaults for file chunk limits (Closed, Victor Julien)
Related to Suricata - Bug #5781: smb: unbounded file chunk queuing after gap (Closed, Victor Julien)

Updated by Maayan Fish almost 2 years ago

Victor, I checked the new smb configuration, and the memory leak still occurs.
I'll explain what I did:
1. I wanted to verify that what I have here is indeed a memory leak (i.e. memory that never gets freed), and not just high memory usage
2. I used the following smb configuration:
enabled: yes
dp: 139, 445
max-read-size: 64mb
max-write-size: 64mb
max-read-queue-size: 256mb
max-read-queue-cnt: 100
max-write-queue-size: 256mb
max-write-queue-cnt: 100
3. Yes - I used high numbers deliberately, because I wanted the leak to show up clearly rather than as a small one. The test was to see whether the memory gets freed after 30 minutes.
4. I ran Suricata with the default 6.0.5 config plus the smb config above, on a very quiet network.
5. Suricata initial memory - 100MB
6. I ran the SMB PCAPs with tcpreplay
7. The memory leak still happens - after 30 minutes, Suricata is at 498MB RAM
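For context, the queue limits in the config above can be thought of as byte/count caps on a per-file chunk queue. The following Rust sketch is a hypothetical illustration (`ChunkQueue` and `push` are invented names, not Suricata's API) of how such caps might be enforced. Note that caps only bound how much a single queue can hold; they do not free memory that is never released after a transfer, which is consistent with the leak persisting under these settings:

```rust
// Hypothetical queue with byte and count caps (illustrative, not Suricata's code).
struct ChunkQueue {
    chunks: Vec<Vec<u8>>,
    bytes: usize,
    max_bytes: usize, // e.g. max-read-queue-size: 256mb
    max_cnt: usize,   // e.g. max-read-queue-cnt: 100
}

impl ChunkQueue {
    fn new(max_bytes: usize, max_cnt: usize) -> Self {
        ChunkQueue { chunks: Vec::new(), bytes: 0, max_bytes, max_cnt }
    }

    // Rejects the chunk when either limit would be exceeded.
    fn push(&mut self, chunk: Vec<u8>) -> bool {
        if self.chunks.len() >= self.max_cnt || self.bytes + chunk.len() > self.max_bytes {
            return false;
        }
        self.bytes += chunk.len();
        self.chunks.push(chunk);
        true
    }
}

fn main() {
    // Limits matching the test config above.
    let mut q = ChunkQueue::new(256 * 1024 * 1024, 100);
    for _ in 0..100 {
        assert!(q.push(vec![0u8; 64 * 1024]));
    }
    // The 101st chunk is rejected by the count limit; but nothing here
    // ever frees the 100 queued chunks - a cap is not a cleanup.
    assert!(!q.push(vec![0u8; 64 * 1024]));
    println!("queued {} chunks, {} bytes", q.chunks.len(), q.bytes);
}
```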

I cannot share the PCAPs because they contain customer data, but I'll do whatever you ask, including a joint debug session.
Thanks!

Actions #3

Updated by Victor Julien about 1 year ago

  • Related to Optimization #5782: smb: set defaults for file chunk limits added

Actions #4

Updated by Victor Julien about 1 year ago

  • Related to Bug #5781: smb: unbounded file chunk queuing after gap added
Actions #5

Updated by Victor Julien about 1 year ago

  • Status changed from New to Feedback

Can you rerun your tests with 6.0.10 or 7.0.0-rc1? Several issues that are possibly related have been addressed.

Actions #6

Updated by Orion Poplawski about 1 year ago

I'd be curious to hear the results of Maayan's tests as well, but on our network we are still seeing significant Suricata memory growth with 6.0.10.

