Security #7280

dns: quadratic complexity in logging and invalid json as output

Added by Philippe Antoine 10 months ago. Updated 19 days ago.

Status:
Closed
Priority:
Normal
Assignee:
Target version:
Affected Versions:
Label:
Git IDs:

3a5671739f5b25e5dd973a74ca5fd8ea40e1ae2d
37f4c52b22fcdde4adf9b479cb5700f89d00768d
19cf0f81335d9f787d587450f7105ad95a648951

Severity:
HIGH
Disclosure Date:

Description

Running /src/suricata -l log -c suricata.yaml -k none -r poc2.pcap --disable-detection yields invalid JSON, as if an append_String call was not completed:

123456789abcdef123456789abcdef.123456789abc{"timestamp"


Files

poc.py (755 Bytes) poc.py Philippe Antoine, 09/26/2024 09:39 AM
pocs.py (437 Bytes) pocs.py Philippe Antoine, 09/26/2024 09:39 AM
poc2.pcap (17.3 KB) poc2.pcap Philippe Antoine, 09/26/2024 09:40 AM
poc.pcapng (64.2 KB) poc.pcapng Philippe Antoine, 09/26/2024 01:29 PM

Subtasks 3 (0 open, 3 closed)

Bug #7300: output: oversized records lead to invalid json (Closed, Jason Ish)
Bug #7301: output: oversized records lead to invalid json (7.0.x backport) (Closed, Jason Ish)
Security #7405: dns: quadratic complexity in logging and invalid json as output (7.0.x backport) (Closed, Jason Ish)

Related issues 1 (1 open, 0 closed)

Blocks Suricata - Optimization #7430: dns: parse more than 255 name segments to find end of name (Assigned, Jason Ish)
Actions #1

Updated by Philippe Antoine 10 months ago

Warning: buffer: Mem buffer asked to create buffer with size greater than API limit - 10485760 [MemBufferExpand:util-buffer.c:64]

Actions #2

Updated by Philippe Antoine 10 months ago

A first fix is to simply check the return value of MemBufferExpand in OutputJsonBuilderBuffer
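The actual function is C (MemBufferExpand in util-buffer.c, called from OutputJsonBuilderBuffer); a Rust-flavored sketch of the same guard is below: treat a would-be over-limit expansion as an error the caller must handle, instead of ignoring the failure and writing a partial record. MemBuffer, write_record, and the 10MB constant here are illustrative stand-ins, not Suricata's API.

```rust
// 10MB "API limit" from the warning in note #1; illustrative only.
const MEM_BUFFER_MAX: usize = 10 * 1024 * 1024;

struct MemBuffer {
    data: Vec<u8>,
}

impl MemBuffer {
    // Refuse to grow past the limit; the caller must handle the error.
    fn expand(&mut self, needed: usize) -> Result<(), &'static str> {
        if self.data.len() + needed > MEM_BUFFER_MAX {
            return Err("buffer would exceed API limit");
        }
        self.data.reserve(needed);
        Ok(())
    }
}

fn write_record(buf: &mut MemBuffer, record: &[u8]) -> Result<(), &'static str> {
    // Previously the expand result was effectively ignored, so a failed
    // expansion still wrote a truncated, invalid JSON line.
    buf.expand(record.len())?;
    buf.data.extend_from_slice(record);
    Ok(())
}
```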

Actions #3

Updated by Philippe Antoine 10 months ago

Optimized POC: a 112MB log for a 65KB DNS record
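A back-of-envelope model of why the PoC amplifies so much: with name compression, name i can be a single short label plus a pointer to name i-1, so the decompressed length of name i grows linearly with i, and the total logged output grows quadratically with the number of names in the message. total_decompressed below is an illustrative model, not the parser's actual accounting.

```rust
// Model: name i decompresses to i labels of `label_len` bytes, each followed
// by a separator, so its length is i * (label_len + 1). The total over all
// names is the quadratic sum 1 + 2 + ... + n times (label_len + 1).
fn total_decompressed(names: usize, label_len: usize) -> usize {
    (1..=names).map(|i| i * (label_len + 1)).sum()
}
```

The wire cost of each extra name is only a few bytes (length, label, 2-byte pointer), so a 65KB message can pack thousands of names, and the decompressed total lands in the hundreds of megabytes, consistent with the 112MB log.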

Actions #4

Updated by Victor Julien 10 months ago

Will this also affect alert records? Or just dns records?

Actions #5

Updated by Philippe Antoine 10 months ago

Both alert and dns records...

Actions #6

Updated by Philippe Antoine 10 months ago

Need to add 1053 to suricata.yaml detection port for DNS

Actions #7

Updated by Jason Ish 10 months ago · Edited

2 issues right?

- quadratic complexity in decoding the large DNS message (do I have that right?): could put a size limit on name sizes, and number of elements to decode I suppose, set an event when reached
- how do we handle a log record that is over the buffer size (and I suppose other protocols could trigger this as well with correctly crafted pcaps)

Actions #8

Updated by Philippe Antoine 10 months ago

Jason Ish wrote in #note-7:

2 issues right?

Right

- quadratic complexity in decoding the large DNS message (do I have that right?): could put a size limit on name sizes, and number of elements to decode I suppose, set an event when reached

Another solution: see how we handled this in HTTP2: we just log the first occurrence, using Rust Rc to avoid duplication in memory as well.
Putting a size limit on domain names looks standard...

- how do we handle a log record that is over the buffer size (and I suppose other protocols could trigger this as well with correctly crafted pcaps)

Just keep the eve header and add a field "error" : "the dns metadata was over 10Mbytes" ?

I suppose other protocols could trigger this as well with correctly crafted pcaps

Examples are welcome if you can find one
Maybe we should have fuzzing find them...
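The "keep the eve header and add an error field" fallback suggested above could look roughly like this; fallback_record is hypothetical and hand-builds the JSON for illustration, rather than using Suricata's actual JsonBuilder.

```rust
// Hypothetical sketch: when the rendered record exceeds the output buffer
// limit, emit a minimal record that keeps the eve header fields and reports
// the problem, instead of writing a partial (invalid) JSON line.
fn fallback_record(timestamp: &str, event_type: &str, limit: usize) -> String {
    format!(
        "{{\"timestamp\":\"{}\",\"event_type\":\"{}\",\"error\":\"record exceeded output buffer limit of {} bytes\"}}",
        timestamp, event_type, limit
    )
}
```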

Actions #9

Updated by Jason Ish 10 months ago

Philippe Antoine wrote in #note-8:

Jason Ish wrote in #note-7:

2 issues right?

Right

- quadratic complexity in decoding the large DNS message (do I have that right?): could put a size limit on name sizes, and number of elements to decode I suppose, set an event when reached

Another solution : See also how we handled this in HTTP2 : we just log the first using rust Rc stuff to avoid duplication in memory as well.
Putting a size limit on domain name looks standard...

Not sure if Rc would work here. We do matching on the re-assembled buffer. We already cap the "components" to 255; maybe we need to cap the length as well. Then we also shouldn't error out, but return what we have plus a flag that it's truncated or whatever. So a bit of change to parsing is needed.

- how do we handle a log record that is over the buffer size (and I suppose other protocols could trigger this as well with correctly crafted pcaps)

Just keep the eve header and add a field "error" : "the dns metadata was over 10Mbytes" ?

The JSON buffer is already complete, so we'd have to lightly parse it to know how to do that. Funny that it's already in memory, but it's the abstraction over the underlying writers where we hit this limit. IMO, if we decode it we should probably log it and enforce limits in the decoding, but if it makes it to the logger and is then deemed too big, maybe some special engine event... Just thinking out loud. Or better yet, if it's logging a JsonBuilder, use the built buffer directly instead of copying?

I suppose other protocols could trigger this as well with correctly crafted pcaps

Examples are welcome if you can find one
Maybe we should have fuzzing find them...
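The capped, non-failing name decode discussed in the comments above could be sketched as follows: decode labels and compression pointers, but stop at hard caps and return the partial name with a truncated flag instead of erroring out. The 255 cap comes from the thread; the 1025-byte length cap and all names here are illustrative, not Suricata's actual parser.

```rust
const MAX_LABELS: usize = 255;    // existing "components" cap from the thread
const MAX_NAME_LEN: usize = 1025; // hypothetical byte cap (DNSMasq-like)

// Decode a DNS name starting at `off` in message `msg`. Returns the decoded
// (possibly partial) name and a flag saying whether a cap or a malformed
// input stopped the decode early.
fn decode_name(msg: &[u8], mut off: usize) -> (Vec<u8>, bool) {
    let mut out: Vec<u8> = Vec::new();
    let mut labels = 0usize;
    let mut jumps = 0usize;
    loop {
        if off >= msg.len() {
            return (out, true); // ran off the end: partial + truncated
        }
        let b = msg[off];
        if b == 0 {
            return (out, false); // root label: clean end of name
        }
        if (b & 0xC0) == 0xC0 {
            // Compression pointer: 14-bit offset into the message.
            if off + 1 >= msg.len() || jumps > MAX_LABELS {
                return (out, true); // pointer loop / overflow: truncate
            }
            jumps += 1;
            off = (((b & 0x3F) as usize) << 8) | msg[off + 1] as usize;
            continue;
        }
        let len = b as usize;
        if off + 1 + len > msg.len() {
            return (out, true);
        }
        labels += 1;
        if labels > MAX_LABELS || out.len() + len + 1 > MAX_NAME_LEN {
            return (out, true); // cap hit: keep what we have, flag it
        }
        if !out.is_empty() {
            out.push(b'.');
        }
        out.extend_from_slice(&msg[off + 1..off + 1 + len]);
        off += 1 + len;
    }
}
```

A self-pointing pointer (the degenerate loop case) simply exhausts the jump budget and comes back truncated rather than hanging or failing the whole transaction.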

Actions #10

Updated by Philippe Antoine 10 months ago

  • Subtask #7300 added
Actions #11

Updated by Philippe Antoine 10 months ago

- quadratic complexity in decoding the large DNS message (do I have that right?): could put a size limit on name sizes, and number of elements to decode I suppose, set an event when reached

Another solution : See also how we handled this in HTTP2 : we just log the first using rust Rc stuff to avoid duplication in memory as well.
Putting a size limit on domain name looks standard...

Not sure if Rc would work here. We do matching on the re-assembled buffer. We already cap the "components" to 255, maybe need to cap the length as well. Then we also shouldn't error out, but return what we have and a flag that its truncated or whatever. So a bit of change to parsing is needed.

The DNS name compression looks like a fit for Rc: instead of recopying the whole domain name, we put a reference to it...
So, I think we could implement the DNS parsing to use Rc.
But I guess this is only beneficial if we use it to avoid over-logging...

maybe need to cap the length as well.

That seems a good idea indeed
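The Rc idea could be sketched as a per-message cache keyed by offset, where resolving a compression pointer clones an Rc (one refcount bump) instead of recopying the name bytes. NameCache and its methods are hypothetical; as the later comments note, detection buffers like dns_query still need the fully expanded name, so sharing like this mainly avoids duplicated storage and over-logging.

```rust
use std::collections::HashMap;
use std::rc::Rc;

// Hypothetical per-message cache: decoded names indexed by the offset at
// which they appear, so a compression pointer to that offset can share the
// existing allocation instead of copying it.
struct NameCache {
    by_offset: HashMap<usize, Rc<Vec<u8>>>,
}

impl NameCache {
    fn new() -> Self {
        NameCache { by_offset: HashMap::new() }
    }

    // The real parser would decode the name first; here we only show sharing.
    fn insert(&mut self, off: usize, name: Vec<u8>) -> Rc<Vec<u8>> {
        let rc = Rc::new(name);
        self.by_offset.insert(off, Rc::clone(&rc));
        rc
    }

    // Resolving a pointer is a refcount bump, not a byte copy.
    fn resolve_pointer(&self, target: usize) -> Option<Rc<Vec<u8>>> {
        self.by_offset.get(&target).map(Rc::clone)
    }
}
```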

Actions #12

Updated by Jason Ish 10 months ago

Philippe Antoine wrote in #note-11:

- quadratic complexity in decoding the large DNS message (do I have that right?): could put a size limit on name sizes, and number of elements to decode I suppose, set an event when reached

Another solution : See also how we handled this in HTTP2 : we just log the first using rust Rc stuff to avoid duplication in memory as well.
Putting a size limit on domain name looks standard...

Not sure if Rc would work here. We do matching on the re-assembled buffer. We already cap the "components" to 255, maybe need to cap the length as well. Then we also shouldn't error out, but return what we have and a flag that its truncated or whatever. So a bit of change to parsing is needed.

The DNS name compression looks like fit for Rc : instead of recopying the whole domain name, we put a reference to it...
So, I think we could implement the dns parsing to use Rc.
But I guess this is only beneficial if we use it to avoid over logging...

Maybe so. How would something like dns_query content matching work?

Actions #13

Updated by Philippe Antoine 10 months ago

The DNS name compression looks like fit for Rc : instead of recopying the whole domain name, we put a reference to it...
So, I think we could implement the dns parsing to use Rc.
But I guess this is only beneficial if we use it to avoid over logging...

Maybe so. How would something like dns_query content matching work?

Well, for HTTP/2, with commit 5bd17934df321b88f502d48afdd6cc8bad4787a7 we just removed the duplication of headers, because you only need the first of the duplicates to tell whether you match or not, right?

Actions #14

Updated by Jason Ish 10 months ago

Philippe Antoine wrote in #note-13:

The DNS name compression looks like fit for Rc : instead of recopying the whole domain name, we put a reference to it...
So, I think we could implement the dns parsing to use Rc.
But I guess this is only beneficial if we use it to avoid over logging...

Maybe so. How would something like dns_query content matching work?

Well, for HTTP/2 with commit 5bd17934df321b88f502d48afdd6cc8bad4787a7 we just removed the duplication of headers because of you only need the first of the duplicates to tell if you match or not, right ?

I don't think that would work for DNS. We might have compression to efficiently store google.google.google.com, but one might then write a rule to match dns_query; content:"google.google.";.

Actions #15

Updated by Philippe Antoine 10 months ago

Indeed, HTTP2 compression is more straightforward: you copy a reference to a whole header, while with DNS you just have an offset into whatever...

Actions #16

Updated by Jason Ish 10 months ago

DNSMasq appears to limit hostnames in their presentation state to 1025. I believe this is larger than what the RFC recommends, but DNSMasq is in heavy use today.

Actions #17

Updated by Philippe Antoine 10 months ago

Jason Ish wrote in #note-16:

DNSMasq appears to limit hostnames in their presentation state to 1025. I believe this is larger than what the RFC recommends, but DNSMasq is in heavy use today.

Still much smaller than the 16KB we can get now with the attached pcap :-)

Actions #18

Updated by Philippe Antoine 10 months ago

Also https://redmine.openinfosecfoundation.org/issues/7211 may help to detect DNS with too many names ;-)

Actions #19

Updated by Jason Ish 9 months ago

  • Status changed from New to In Progress
Actions #20

Updated by Victor Julien 8 months ago

  • Label Needs backport to 7.0 added
Actions #21

Updated by OISF Ticketbot 8 months ago

  • Subtask #7405 added
Actions #22

Updated by OISF Ticketbot 8 months ago

  • Label deleted (Needs backport to 7.0)
Actions #23

Updated by Philippe Antoine 8 months ago

  • Status changed from In Progress to In Review
Actions #24

Updated by Jason Ish 8 months ago

  • Blocks Optimization #7430: dns: parse more than 255 name segments to find end of name added
Actions #25

Updated by Victor Julien 8 months ago

  • Severity changed from MODERATE to HIGH
Actions #27

Updated by Victor Julien 8 months ago

  • Status changed from In Review to Closed
  • Git IDs updated (diff)
Actions #28

Updated by Juliana Fajardini Reichow 19 days ago

  • Private changed from Yes to No