Security #7280
dns: quadratic complexity in logging and invalid json as output
Status: Closed
Added by Philippe Antoine 10 months ago. Updated 19 days ago.
Git IDs:
3a5671739f5b25e5dd973a74ca5fd8ea40e1ae2d
37f4c52b22fcdde4adf9b479cb5700f89d00768d
19cf0f81335d9f787d587450f7105ad95a648951
Description
Running /src/suricata -l log -c suricata.yaml -k none -r poc2.pcap --disable-detection
yields invalid JSON, as if an append_String call was not completed:
123456789abcdef123456789abcdef.123456789abc{"timestamp"
Files
poc.py (755 Bytes) | Philippe Antoine, 09/26/2024 09:39 AM
pocs.py (437 Bytes) | Philippe Antoine, 09/26/2024 09:39 AM
poc2.pcap (17.3 KB) | Philippe Antoine, 09/26/2024 09:40 AM
poc.pcapng (64.2 KB) | Philippe Antoine, 09/26/2024 01:29 PM
Updated by Philippe Antoine 10 months ago
Warning: buffer: Mem buffer asked to create buffer with size greater than API limit - 10485760 [MemBufferExpand:util-buffer.c:64]
Updated by Philippe Antoine 10 months ago
A first fix is to simply check the return value of MemBufferExpand in OutputJsonBuilderBuffer.
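A self-contained Rust analogue of that fix, just to illustrate the failure mode (the real code is C, in OutputJsonBuilderBuffer / util-buffer.c; the names and the tiny limit here are stand-ins): if the expand fails, the record is dropped instead of being written truncated, which is what produced the invalid JSON line above.

```rust
/// Stand-in for the 10485760-byte MemBuffer API limit from the warning above.
const BUF_LIMIT: usize = 32;

struct MemBuf {
    data: Vec<u8>,
}

impl MemBuf {
    /// Analogue of MemBufferExpand: refuses to grow past the hard cap.
    fn expand_to(&mut self, needed: usize) -> Result<(), ()> {
        if needed > BUF_LIMIT {
            return Err(());
        }
        self.data.reserve(needed.saturating_sub(self.data.len()));
        Ok(())
    }

    /// The fix: check the expand result and bail out, rather than copying a
    /// record into a buffer that was never enlarged.
    fn write_record(&mut self, rec: &[u8]) -> Result<(), ()> {
        self.expand_to(self.data.len() + rec.len())?;
        self.data.extend_from_slice(rec);
        Ok(())
    }
}

fn main() {
    let mut buf = MemBuf { data: Vec::new() };
    let rec = br#"{"event_type":"dns","rrname":"123456789abcdef.example.com"}"#;
    if buf.write_record(rec).is_err() {
        eprintln!("record larger than buffer limit, dropped");
    }
    assert!(buf.data.is_empty()); // nothing half-written into the log
}
```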
Updated by Philippe Antoine 10 months ago
- File poc.pcapng added
Optimized PoC: a 112 MB log for a 65 KB DNS record
Updated by Victor Julien 10 months ago
Will this also affect alert records? Or just dns records?
Updated by Philippe Antoine 10 months ago
Need to add 1053 to the DNS detection ports in suricata.yaml.
Updated by Jason Ish 10 months ago · Edited
2 issues right?
- quadratic complexity in decoding the large DNS message (do I have that right?): could put a size limit on name sizes and the number of elements to decode, I suppose, and set an event when reached
- how do we handle a log record that is over the buffer size? (and I suppose other protocols could trigger this as well with correctly crafted pcaps)
Updated by Philippe Antoine 10 months ago
Jason Ish wrote in #note-7:
> 2 issues right?

Right

> - quadratic complexity in decoding the large DNS message (do I have that right?): could put a size limit on name sizes and the number of elements to decode, I suppose, and set an event when reached

Another solution: see how we handled this in HTTP/2: we just log the first occurrence, using Rust's Rc to avoid duplication in memory as well.
Putting a size limit on the domain name looks standard...

> - how do we handle a log record that is over the buffer size? (and I suppose other protocols could trigger this as well with correctly crafted pcaps)

Just keep the eve header and add a field "error": "the dns metadata was over 10Mbytes"? (A sketch of such a fallback record follows at the end of this note.)

> I suppose other protocols could trigger this as well with correctly crafted pcaps

Examples are welcome if you can find one.
Maybe we should have fuzzing find them...
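A rough sketch of that fallback record (the field name and wording are hypothetical, nothing implemented):

```rust
/// Hypothetical fallback: when the full record exceeds the output buffer
/// limit, emit only the eve header fields plus an "error" field instead of
/// the oversized dns object.
fn fallback_record(timestamp: &str, flow_id: u64) -> String {
    format!(
        r#"{{"timestamp":"{timestamp}","flow_id":{flow_id},"event_type":"dns","error":"dns metadata exceeded the output buffer limit"}}"#
    )
}

fn main() {
    println!("{}", fallback_record("2024-09-26T09:39:00.000000+0000", 1));
}
```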
Updated by Jason Ish 10 months ago
Philippe Antoine wrote in #note-8:
> Jason Ish wrote in #note-7:
> > 2 issues right?
>
> Right
>
> > - quadratic complexity in decoding the large DNS message (do I have that right?): could put a size limit on name sizes and the number of elements to decode, I suppose, and set an event when reached
>
> Another solution: see how we handled this in HTTP/2: we just log the first occurrence, using Rust's Rc to avoid duplication in memory as well.
> Putting a size limit on the domain name looks standard...

Not sure if Rc would work here. We do matching on the re-assembled buffer. We already cap the "components" to 255; maybe we need to cap the length as well. Then we also shouldn't error out, but return what we have and a flag that it's truncated or whatever. So a bit of change to the parsing is needed (a sketch of such a cap follows at the end of this note).

> > - how do we handle a log record that is over the buffer size? (and I suppose other protocols could trigger this as well with correctly crafted pcaps)
>
> Just keep the eve header and add a field "error": "the dns metadata was over 10Mbytes"?

The JSON buffer is already complete, so we'd have to lightly parse it to know how to do that. Funny that it's already in memory, but it's the abstraction to the underlying writers where we hit this limit. IMO, if we decode it we should probably log it, and enforce limits in the decoding; but if it makes it to the logger and is then deemed too big, maybe some special engine event... Just thinking out loud. Or better yet, if it's logging a JsonBuilder, use the built buffer directly instead of copying?

> > I suppose other protocols could trigger this as well with correctly crafted pcaps
>
> Examples are welcome if you can find one.
> Maybe we should have fuzzing find them...
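A minimal sketch of that cap-and-flag parsing change (names, cap values, and structure here are assumptions for illustration; Suricata's actual DNS parser in rust/src/dns/ differs): bound the expanded name length and the number of compression-pointer jumps, and return what we have plus a truncated flag instead of erroring out.

```rust
/// Cap on the expanded name; cf. the dnsmasq 1025 limit mentioned later.
const NAME_MAX_LEN: usize = 1025;
/// Cap on compression-pointer jumps, so crafted pointer chains cannot
/// force quadratic re-expansion work per name.
const MAX_PTR_JUMPS: usize = 255;

/// Decodes the name starting at `off` into dot-separated labels. Returns
/// the name and whether a cap truncated it; `None` on malformed input.
fn dns_name_decode(msg: &[u8], off: usize) -> Option<(Vec<u8>, bool)> {
    let mut name = Vec::new();
    let mut pos = off;
    let mut jumps = 0;
    loop {
        let len = *msg.get(pos)? as usize;
        if (len & 0xc0) == 0xc0 {
            // Compression pointer: 14-bit offset back into the message.
            let lo = *msg.get(pos + 1)? as usize;
            jumps += 1;
            if jumps > MAX_PTR_JUMPS {
                return Some((name, true)); // cap hit: keep what we have
            }
            pos = (len & 0x3f) << 8 | lo;
        } else if len == 0 {
            return Some((name, false)); // end of name
        } else {
            let label = msg.get(pos + 1..pos + 1 + len)?;
            if name.len() + 1 + len > NAME_MAX_LEN {
                return Some((name, true)); // cap hit: keep what we have
            }
            if !name.is_empty() {
                name.push(b'.');
            }
            name.extend_from_slice(label);
            pos += 1 + len;
        }
    }
}

fn main() {
    // "www.example.com", then "files" plus a pointer back to offset 0.
    let msg: &[u8] = &[
        3, b'w', b'w', b'w', 7, b'e', b'x', b'a', b'm', b'p', b'l', b'e',
        3, b'c', b'o', b'm', 0, 5, b'f', b'i', b'l', b'e', b's', 0xc0, 0x00,
    ];
    let (name, truncated) = dns_name_decode(msg, 17).unwrap();
    println!(
        "{}{}",
        String::from_utf8_lossy(&name),
        if truncated { " (truncated)" } else { "" }
    );
}
```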
Updated by Philippe Antoine 10 months ago
> > - quadratic complexity in decoding the large DNS message (do I have that right?): could put a size limit on name sizes and the number of elements to decode, I suppose, and set an event when reached
> >
> > Another solution: see how we handled this in HTTP/2: we just log the first occurrence, using Rust's Rc to avoid duplication in memory as well.
> > Putting a size limit on the domain name looks standard...
>
> Not sure if Rc would work here. We do matching on the re-assembled buffer. We already cap the "components" to 255; maybe we need to cap the length as well. Then we also shouldn't error out, but return what we have and a flag that it's truncated or whatever. So a bit of change to the parsing is needed.

The DNS name compression looks like a fit for Rc: instead of recopying the whole domain name, we put a reference to it... (a minimal sketch follows at the end of this note)
So I think we could implement the DNS parsing to use Rc.
But I guess this is only beneficial if we use it to avoid over-logging...

> maybe we need to cap the length as well.

That seems a good idea indeed.
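A minimal sketch of what that could look like (hypothetical types, not Suricata code): a name that ends in a compression pointer keeps a cheap Rc clone of the shared suffix instead of a recopied expansion.

```rust
use std::rc::Rc;

/// Hypothetical parsed name: labels read at this position, plus an Rc'd
/// suffix reached through a compression pointer, shared rather than copied.
struct DnsName {
    prefix: Vec<u8>,
    suffix: Option<Rc<[u8]>>,
}

fn main() {
    // One expanded copy of the long target name...
    let target: Rc<[u8]> = Rc::from(&b"www.example.com"[..]);
    // ...and many names referencing it for the cost of a pointer each.
    let names: Vec<DnsName> = (0..1000)
        .map(|i| DnsName {
            prefix: format!("host{i}").into_bytes(),
            suffix: Some(Rc::clone(&target)),
        })
        .collect();
    println!(
        "{}.{} (suffix bytes stored once, shared by {} names)",
        String::from_utf8_lossy(&names[0].prefix),
        String::from_utf8_lossy(names[0].suffix.as_ref().unwrap()),
        Rc::strong_count(names[0].suffix.as_ref().unwrap()) - 1
    );
}
```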
Updated by Jason Ish 10 months ago
Philippe Antoine wrote in #note-11:
> > - quadratic complexity in decoding the large DNS message (do I have that right?): could put a size limit on name sizes and the number of elements to decode, I suppose, and set an event when reached
> >
> > Another solution: see how we handled this in HTTP/2: we just log the first occurrence, using Rust's Rc to avoid duplication in memory as well.
> > Putting a size limit on the domain name looks standard...
> >
> > Not sure if Rc would work here. We do matching on the re-assembled buffer. We already cap the "components" to 255; maybe we need to cap the length as well. Then we also shouldn't error out, but return what we have and a flag that it's truncated or whatever. So a bit of change to the parsing is needed.
>
> The DNS name compression looks like a fit for Rc: instead of recopying the whole domain name, we put a reference to it...
> So I think we could implement the DNS parsing to use Rc.
> But I guess this is only beneficial if we use it to avoid over-logging...

Maybe so. How would something like dns_query content matching work?
Updated by Philippe Antoine 10 months ago
> > The DNS name compression looks like a fit for Rc: instead of recopying the whole domain name, we put a reference to it...
> > So I think we could implement the DNS parsing to use Rc.
> > But I guess this is only beneficial if we use it to avoid over-logging...
>
> Maybe so. How would something like dns_query content matching work?

Well, for HTTP/2, with commit 5bd17934df321b88f502d48afdd6cc8bad4787a7 we just removed the duplication of headers, because you only need the first of the duplicates to tell if you match or not, right?
Updated by Jason Ish 10 months ago
Philippe Antoine wrote in #note-13:
> > The DNS name compression looks like a fit for Rc: instead of recopying the whole domain name, we put a reference to it...
> > So I think we could implement the DNS parsing to use Rc.
> > But I guess this is only beneficial if we use it to avoid over-logging...
> >
> > Maybe so. How would something like dns_query content matching work?
>
> Well, for HTTP/2, with commit 5bd17934df321b88f502d48afdd6cc8bad4787a7 we just removed the duplication of headers, because you only need the first of the duplicates to tell if you match or not, right?

I don't think that would work for DNS. We might have compression to efficiently store google.google.google.com, but one might then write a rule to match dns_query; content:"google.google.";.
Updated by Philippe Antoine 10 months ago
Indeed, HTTP/2 compression is more straightforward: you copy a reference to a whole header; with DNS you just have an offset into whatever...
Updated by Philippe Antoine 10 months ago
Jason Ish wrote in #note-16:
> DNSMasq appears to limit hostnames in their presentation state to 1025. I believe this is larger than what the RFC recommends, but DNSMasq is in heavy use today.

Still much smaller than the 16 KB we can get now with the attached pcap :-)
Updated by Philippe Antoine 10 months ago
Also https://redmine.openinfosecfoundation.org/issues/7211 may help to detect DNS with too many names ;-)
Updated by Philippe Antoine 8 months ago
- Status changed from In Progress to In Review
Updated by Jason Ish 8 months ago
- Blocks Optimization #7430: dns: parse more than 255 name segments to find end of name added
Updated by Juliana Fajardini Reichow 8 months ago
- CVE set to 2024-55628
Updated by Victor Julien 8 months ago
- Status changed from In Review to Closed
- Git IDs updated (diff)
Updated by Juliana Fajardini Reichow 19 days ago
- Private changed from Yes to No