Bug #5363
Memory leak in rust SMB file tracker
Description
Hey All,
We've experienced a memory leak in Suricata 6.0.5 while processing SMB2 traffic that contains transfers of many files.
These are the steps I took to reproduce the problem:
1. Started Suricata with the default 6.0.5 suricata.yaml
2. Checked its memory consumption with htop - ~100MB
3. Replayed the PCAPs with tcpreplay
4. Waited 45 minutes
5. Checked with htop - Suricata's memory was around 450MB
It must be noted that Suricata with the default config in our lab, with almost ZERO TRAFFIC, always sits at around 120MB RAM; the fact that it stayed at 450MB and did not come back down is not normal.
The discovery of this bug started at a customer of ours, where we saw in dmesg that the kernel had killed Suricata multiple times because it exhausted all system memory.
We saw lines like the following:
[19808.216017] Out of memory: Kill process 12368 (Suricata-Main) score 748 or sacrifice child
[19808.216076] Killed process 12368 (Suricata-Main) total-vm:18705720kB, anon-rss:11259404kB, file-rss:0kB, shmem-rss:0kB
This means Suricata consumed 18GB of RAM, and the kernel terminated it.
Suricata usually consumes 300-600MB at our customers, so 18GB was obviously very strange.
Then we recorded PCAPs, and I could reproduce a memory leak using the steps above on a local VM.
I also compiled suricata in debug mode and did the following extra checks:
1. Ran it under the memory profiler "heaptrack", which showed memory leaks and high memory usage in smb/files.rs and filetracker.rs
2. To verify this is 100% related to SMB files, I commented out the "c.chunk.extend(d);" lines in filetracker.rs, and after that there was no memory leak and no high memory usage (see the sketch below)!
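To make the pattern heaptrack points at easier to picture, here is a minimal, hypothetical Rust sketch of a file tracker that keeps appending incoming chunks to per-file buffers. All names here (FileTracker, FileChunks, append, close) are invented for illustration; this is not the Suricata source, it only shows why buffers that are extended but never released keep the process's resident size growing.

use std::collections::HashMap;

#[derive(Default)]
struct FileChunks {
    // Grows with every call to extend(), mirroring the "c.chunk.extend(d);" pattern.
    chunk: Vec<u8>,
}

#[derive(Default)]
struct FileTracker {
    // Keyed by a hypothetical per-file id.
    files: HashMap<u64, FileChunks>,
}

impl FileTracker {
    // Buffer another data chunk for the given file.
    fn append(&mut self, file_id: u64, data: &[u8]) {
        let c = self.files.entry(file_id).or_default();
        c.chunk.extend(data);
    }

    // Unless something like this runs when a transfer ends (or after a gap),
    // the buffers stay allocated indefinitely.
    #[allow(dead_code)]
    fn close(&mut self, file_id: u64) {
        self.files.remove(&file_id);
    }

    fn total_buffered(&self) -> usize {
        self.files.values().map(|c| c.chunk.len()).sum()
    }
}

fn main() {
    let mut tracker = FileTracker::default();
    // 1000 files x 64 KiB each: ~64 MiB stays resident because close() is never reached.
    for id in 0..1000u64 {
        tracker.append(id, &[0u8; 64 * 1024]);
    }
    println!("buffered: {} bytes", tracker.total_buffered());
}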
Unfortunately I cannot attach the PCAPs that reproduce this because they contain data from a customer, but I'd be happy to collaborate, provide any extra information needed, or do a joint debug session.
Attaching the following files:
1. build info
2. htop before running pcap - 100MB
3. htop after running pcap - 480MB
4. dmesg output - suricata out of memory
5. heaptrack memory profiler output showing the memory leak
6. suricata.yaml - default 6.0.5
Thanks
Maayan
Updated by Victor Julien over 2 years ago
Could be related to #4861, for which we did a workaround in #4842, see:
https://suricata.readthedocs.io/en/suricata-6.0.5/configuration/suricata-yaml.html#resource-limits
Updated by Maayan Fish over 2 years ago
- File smb-thresholds-after.png added
- File smb-thresholds-before.png added
Victor, I checked the new smb configuration, and the memory leak still occurs.
I'll explain what I did:
1. I wanted to verify that what I have here is indeed a memory leak (memory that never gets freed), and not just high memory usage.
2. I used the following smb configuration:
smb:
  enabled: yes
  detection-ports:
    dp: 139, 445
  max-read-size: 64mb
  max-write-size: 64mb
  max-read-queue-size: 256mb
  max-read-queue-cnt: 100
  max-write-queue-size: 256mb
  max-write-queue-cnt: 100
3. Yes - I used high numbers on purpose, because I wanted any leak to show up clearly rather than as a small one. The test here was to see whether the memory gets freed after 30 minutes (see the sketch after this list).
4. I ran Suricata with the default 6.0.5 config plus the smb config above, on a very quiet network.
5. Suricata initial memory - 100MB
6. I replayed the SMB PCAPs with tcpreplay
7. The memory leak still happens - after 30 minutes, Suricata was at 498MB RAM
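For context on what the queue caps above are meant to do, here is another minimal, hypothetical Rust sketch; the type and field names are invented and this is not the Suricata implementation. Per-file caps like max-read-queue-size / max-read-queue-cnt bound how much a single file may buffer, but they cannot help if buffers belonging to finished transfers are never released, which is exactly what the 30-minute test is probing.

struct ChunkQueue {
    chunks: Vec<Vec<u8>>,
    total_bytes: usize,
    max_bytes: usize, // e.g. the 256mb used in the test above
    max_cnt: usize,   // e.g. the 100 used in the test above
    truncated: bool,
}

impl ChunkQueue {
    fn new(max_bytes: usize, max_cnt: usize) -> Self {
        Self { chunks: Vec::new(), total_bytes: 0, max_bytes, max_cnt, truncated: false }
    }

    // Accept a chunk only while both caps hold; after that, stop buffering.
    fn push(&mut self, data: Vec<u8>) {
        if self.truncated
            || self.chunks.len() >= self.max_cnt
            || self.total_bytes + data.len() > self.max_bytes
        {
            self.truncated = true;
            return;
        }
        self.total_bytes += data.len();
        self.chunks.push(data);
    }
}

fn main() {
    // Each individual queue stays under its caps, but if finished queues are
    // never dropped, total memory still grows with the number of files seen.
    let mut per_file_queues: Vec<ChunkQueue> = Vec::new();
    for _ in 0..1000 {
        let mut q = ChunkQueue::new(256 * 1024 * 1024, 100);
        q.push(vec![0u8; 64 * 1024]);
        per_file_queues.push(q);
    }
    let resident: usize = per_file_queues.iter().map(|q| q.total_bytes).sum();
    println!("still resident: {} bytes", resident);
}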
I cannot share the PCAPs because they contain customer data, but I can do whatever you ask of me, including a joint debug session.
Thanks !
Updated by Victor Julien almost 2 years ago
- Related to Optimization #5782: smb: set defaults for file chunk limits added
Updated by Victor Julien almost 2 years ago
- Related to Bug #5781: smb: unbounded file chunk queuing after gap added
Updated by Victor Julien almost 2 years ago
- Status changed from New to Feedback
Can you rerun your tests with 6.0.10 or 7.0.0-rc1? Several issues that are possibly related have been addressed.
Updated by Orion Poplawski over 1 year ago
I'd be curious to hear the results of Maayan's tests as well, but on our network we are still seeing significant suricata memory growth with 6.0.10.