Bug #4542 (Closed)
SIGSEGV in BodyBase64Buffer at output-json-http.c:439
Description
Dear maintainers,
We came across a bug in Suricata (tested versions: 6.0.0 and 6.0.2) that causes a segmentation fault at random times (it can occur once per hour or once per day).
The backtrace:
Thread 2 "W#01-enp2s0" received signal SIGSEGV, Segmentation fault.
[Switching to LWP 32]
0x00005618c12b1b08 in BodyBase64Buffer (js=js@entry=0x7f9af1ece000, key=key@entry=0x5618c15c2998 "http_response_body", body=<optimized out>, body=<optimized out>)
at output-json-http.c:439
439 output-json-http.c: No such file or directory.
(gdb) bt
#0 0x00005618c12b1b08 in BodyBase64Buffer (js=js@entry=0x7f9af1ece000, key=key@entry=0x5618c15c2998 "http_response_body", body=<optimized out>, body=<optimized out>)
at output-json-http.c:439
#1 0x00005618c12b1d02 in EveHttpLogJSONBodyBase64 (js=js@entry=0x7f9af1ece000, f=<optimized out>, tx_id=tx_id@entry=2) at output-json-http.c:454
#2 0x00005618c12a4618 in AlertAddAppLayer (option_flags=<optimized out>, tx_id=2, jb=0x7f9af1ece000, p=0x7f9afbe7b9f0) at output-json-alert.c:458
#3 AlertJson (aft=0x7f9b00a61510, p=0x7f9afbe7b9f0, tv=<optimized out>) at output-json-alert.c:638
#4 0x00005618c12b88d2 in OutputPacketLog (tv=0x7f9b0a274010, p=0x7f9afbe7b9f0, thread_data=<optimized out>) at output-packet.c:116
#5 0x00005618c12a01d4 in OutputLoggerLog (tv=tv@entry=0x7f9b0a274010, p=p@entry=0x7f9afbe7b9f0, thread_data=<optimized out>) at output.c:882
#6 0x00005618c12936bf in FlowWorkerFlowTimeout (tv=tv@entry=0x7f9b0a274010, p=p@entry=0x7f9afbe7b9f0, fw=fw@entry=0x7f9b0a18b0b0, detect_thread=detect_thread@entry=0x7f9b0032c780)
at flow-worker.c:414
#7 0x00005618c129398f in FlowFinish (detect_thread=0x7f9b0032c780, fw=0x7f9b0a18b0b0, f=0x7f9b06b78680, tv=0x7f9b0a274010) at flow-worker.c:157
#8 CheckWorkQueue (tv=tv@entry=0x7f9b0a274010, fw=fw@entry=0x7f9b0a18b0b0, detect_thread=detect_thread@entry=0x7f9b0032c780, counters=counters@entry=0x7f9afc605368,
fq=fq@entry=0x7f9afc605370) at flow-worker.c:183
#9 0x00005618c1293d34 in FlowWorkerProcessInjectedFlows (p=0x7f9afbe7c490, detect_thread=0x7f9b0032c780, fw=0x7f9b0a18b0b0, tv=0x7f9b0a274010) at flow-worker.c:447
#10 FlowWorker (tv=0x7f9b0a274010, p=0x7f9afbe7c490, data=0x7f9b0a18b0b0) at flow-worker.c:570
#11 0x00005618c12e8861 in TmThreadsSlotVarRun (tv=tv@entry=0x7f9b0a274010, p=p@entry=0x7f9afbe7c490, slot=<optimized out>) at tm-threads.c:117
#12 0x00005618c12c8494 in TmThreadsSlotProcessPkt (tv=0x7f9b0a274010, s=<optimized out>, p=0x7f9afbe7c490) at tm-threads.h:192
#13 0x00005618c12c8f1f in AFPReadFromRing (ptv=0x7f9b09f42650) at source-af-packet.c:1011
#14 0x00005618c12ca2ee in ReceiveAFPLoop (tv=0x7f9b0a274010, data=0x7f9b09f42650, slot=<optimized out>) at source-af-packet.c:1571
#15 0x00005618c12e9f97 in TmThreadsSlotPktAcqLoop (td=0x7f9b0a274010) at tm-threads.c:312
#16 0x00007f9b0c70419e in ?? () from /lib/ld-musl-x86_64.so.1
#17 0x0000000000000000 in ?? ()
The full backtrace:
#0 0x00005618c12b1b08 in BodyBase64Buffer (js=js@entry=0x7f9af1ece000, key=key@entry=0x5618c15c2998 "http_response_body", body=<optimized out>, body=<optimized out>)
at output-json-http.c:439
body_data = 0x7f9af02b9fa0 "\330Kix1\021t\260\373\227\035\bR\"HU\033\266\200\351\t\373--H\206k+.\211\246uDO\024cx{a\276Ew;\223\221\231\231yэ~\244\200\347\216\313P\f#|y\373\353\201c%H&\337\335hbH\213+̚Z\363\246f\327\b\n\024\217\336", <incomplete sequence \330>
body_data_len = 102400
len = 136537
body_offset = 0
encoded = '\000' <repeats 32888 times>...
#1 0x00005618c12b1d02 in EveHttpLogJSONBodyBase64 (js=js@entry=0x7f9af1ece000, f=<optimized out>, tx_id=tx_id@entry=2) at output-json-http.c:454
htud = <optimized out>
tx = <optimized out>
htp_state = <optimized out>
#2 0x00005618c12a4618 in AlertAddAppLayer (option_flags=<optimized out>, tx_id=2, jb=0x7f9af1ece000, p=0x7f9afbe7b9f0) at output-json-alert.c:458
proto = <optimized out>
mark = {position = 0, state_index = 0, state = 0}
proto = <optimized out>
mark = {position = <optimized out>, state_index = <optimized out>, state = <optimized out>}
#3 AlertJson (aft=0x7f9b00a61510, p=0x7f9afbe7b9f0, tv=<optimized out>) at output-json-alert.c:638
xff_cfg = 0x7f9b0c26bb90
have_xff_ip = 0
jb = 0x7f9af1ece000
pa = 0x7f9afbe7bba8
addr = {src_ip = "93.184.221.240", '\000' <repeats 31 times>, dst_ip = "10.105.35.12", '\000' <repeats 33 times>, sp = 80, dp = 1243, proto = "TCP", '\000' <repeats 12 times>}
xff_buffer = "\n", '\000' <repeats 15 times>, "\025\000\000\000\233\177\000\000\024\000\000\000\232\177\000\000\200\206\267\006\233\177\000\000\nQ`\374\232\177"
i = 0
payload = <optimized out>
json_output_ctx = 0x7f9b0c26ab10
#4 0x00005618c12b88d2 in OutputPacketLog (tv=0x7f9b0a274010, p=0x7f9afbe7b9f0, thread_data=<optimized out>) at output-packet.c:116
op_thread_data = <optimized out>
logger = 0x7f9b0c2802e0
store = 0x7f9b0096a4f0
#5 0x00005618c12a01d4 in OutputLoggerLog (tv=tv@entry=0x7f9b0a274010, p=p@entry=0x7f9afbe7b9f0, thread_data=<optimized out>) at output.c:882
thread_store = <optimized out>
logger = 0x7f9b0c2782a0
thread_store_node = 0x7f9b0096bfc0
#6 0x00005618c12936bf in FlowWorkerFlowTimeout (tv=tv@entry=0x7f9b0a274010, p=p@entry=0x7f9afbe7b9f0, fw=fw@entry=0x7f9b0a18b0b0, detect_thread=detect_thread@entry=0x7f9b0032c780)
at flow-worker.c:414
No locals.
And the Suricata build info:
/ # suricata --build-info
This is Suricata version 6.0.2 RELEASE
Features: NFQ PCAP_SET_BUFF AF_PACKET HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK PCRE_JIT HAVE_NSS HAVE_LUA HAVE_LUAJIT HAVE_LIBJANSSON TLS TLS_C11 MAGIC RUST
SIMD support: none
Atomic intrinsics: 1 2 4 8 byte(s)
64-bits, Little-endian architecture
GCC version 10.2.1 20201203, C version 201112
compiled with -fstack-protector
compiled with _FORTIFY_SOURCE=2
L1 cache line size (CLS)=64
thread local storage method: _Thread_local
compiled with LibHTP v0.5.37, linked against LibHTP v0.5.37
Suricata Configuration:
AF_PACKET support: yes
eBPF support: no
XDP support: no
PF_RING support: no
NFQueue support: yes
NFLOG support: no
IPFW support: no
Netmap support: no
DAG enabled: no
Napatech enabled: no
WinDivert enabled: no
Unix socket enabled: yes
Detection enabled: yes
Libmagic support: yes
libnss support: yes
libnspr support: yes
libjansson support: yes
hiredis support: yes
hiredis async with libevent: no
Prelude support: no
PCRE jit: yes
LUA support: yes, through luajit
libluajit: yes
GeoIP2 support: yes
Non-bundled htp: yes
Hyperscan support: no
Libnet support: yes
liblz4 support: yes
Rust support: yes
Rust strict mode: no
Rust compiler path: /usr/bin/rustc
Rust compiler version: rustc 1.47.0
Cargo path: /usr/bin/cargo
Cargo version: cargo 1.47.0
Cargo vendor: yes
Python support: yes
Python path: /usr/bin/python3
Python distutils yes
Python yaml yes
Install suricatactl: yes
Install suricatasc: yes
Install suricata-update: yes
Profiling enabled: no
Profiling locks enabled: no
Plugin support (experimental): yes
Development settings:
Coccinelle / spatch: no
Unit tests enabled: no
Debug output enabled: no
Debug validation enabled: no
Generic build parameters:
Installation prefix: /usr
Configuration directory: /etc/suricata/
Log directory: /var/log/suricata/
--prefix /usr
--sysconfdir /etc
--localstatedir /var
--datarootdir /usr/share
Host: x86_64-pc-linux-gnu
Compiler: gcc (exec name) / g++ (real)
GCC Protect enabled: yes
GCC march native enabled: no
GCC Profile enabled: no
Position Independent Executable enabled: yes
CFLAGS -g -O2 -std=c11 -I${srcdir}/../rust/gen -I${srcdir}/../rust/dist
PCAP_CFLAGS -I/usr/include
SECCFLAGS -fstack-protector -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security
Updated by Victor Julien over 3 years ago
We think this may be caused by the musl library using a much smaller default stack size per thread.
https://wiki.musl-libc.org/functional-differences-from-glibc.html has some details on what you might try during compilation:
Since 1.1.21, musl supports increasing the default thread stack size via the PT_GNU_STACK program header, which can be set at link time via -Wl,-z,stack-size=N.
On my Ubuntu glibc system the default thread stack size is 8192k, compared to musl's default of 128k.
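For reference, the difference is easy to observe, and the locals in frame #0 above (len = 136537 for the encode buffer) already exceed musl's 128 KiB default on their own. Below is a minimal sketch, not part of this ticket, that prints a thread's stack size using the non-portable pthread_getattr_np() extension, which both glibc and musl provide; on a stock glibc system it typically reports about 8 MiB, on musl 128 KiB unless the PT_GNU_STACK header was raised at link time.

/* Minimal sketch: report the stack size of a newly created thread so the
 * glibc and musl defaults can be compared directly. Build with -pthread. */
#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>

static void *report_stack(void *arg)
{
    (void)arg;
    pthread_attr_t attr;
    size_t stack_size = 0;

    /* pthread_getattr_np() fills in the attributes of the running thread;
     * it is a non-portable extension available on glibc and musl. */
    if (pthread_getattr_np(pthread_self(), &attr) == 0) {
        pthread_attr_getstacksize(&attr, &stack_size);
        printf("thread stack size: %zu KiB\n", stack_size / 1024);
        pthread_attr_destroy(&attr);
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, report_stack, NULL);
    pthread_join(t, NULL);
    return 0;
}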
Updated by OpenSOC ESIEA over 3 years ago
Victor Julien wrote in #note-1:
We think this may be caused by the musl library using a much smaller default stack size per thread.
https://wiki.musl-libc.org/functional-differences-from-glibc.html has some details on what you might try during compilation:
[...]
On my Ubuntu glibc system the default thread stack size is 8192k, compared to musl's default of 128k.
Thanks for your help; we'll try this and watch over the next few days to see whether the problem persists. We also suspect that this is where the problem comes from.
I will update the situation next week.
Updated by OpenSOC ESIEA over 3 years ago
OpenSOC ESIEA wrote in #note-2:
Victor Julien wrote in #note-1:
We think this may be caused by the musl library using a much smaller default stack size per thread.
https://wiki.musl-libc.org/functional-differences-from-glibc.html has some details on what you might try during compilation:
[...]
On my Ubuntu glibc system the default thread stack size is 8192k, compared to musl's default of 128k.
Thanks for your help; we'll try this and watch over the next few days to see whether the problem persists. We also suspect that this is where the problem comes from.
I will update the situation next week.
We can confirm that the solution works. Thank you for your help.
Updated by Victor Julien over 3 years ago
- Related to Feature #4550: pthreads: set minimum stack size added
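For illustration only (this is not the #4550 patch, and the 512 KiB figure is an arbitrary assumption): a sketch of how worker threads can be created with an explicit stack size so that behaviour no longer depends on the libc default.

/* Illustrative sketch: request an explicit stack size when creating a
 * worker thread instead of relying on the libc default (8 MiB on glibc,
 * 128 KiB on musl). Build with -pthread. */
#include <pthread.h>
#include <limits.h>
#include <stdio.h>

#define WORKER_STACK_SIZE (512 * 1024) /* assumed value for illustration */

static void *worker(void *arg)
{
    (void)arg;
    /* ... per-packet work that may use large stack buffers ... */
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;

    pthread_attr_init(&attr);
    /* Never ask for less than the platform minimum, or
     * pthread_attr_setstacksize() rejects the value. */
    size_t size = WORKER_STACK_SIZE < PTHREAD_STACK_MIN ?
                      PTHREAD_STACK_MIN : WORKER_STACK_SIZE;
    if (pthread_attr_setstacksize(&attr, size) != 0)
        return 1;

    if (pthread_create(&tid, &attr, worker, NULL) != 0)
        return 1;
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}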
Updated by Victor Julien over 3 years ago
- Related to Feature #4551: eve: add direct base64 to json option to json builder added
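The idea behind #4551 is to feed base64 output directly into the JSON builder instead of staging it in a large fixed-size buffer on the thread stack. The sketch below only illustrates the general point of keeping the encoded output off the stack; it is a hypothetical standalone example, and none of the names come from the Suricata code base.

/* Hypothetical sketch: base64-encode a body into a heap buffer sized from
 * the input length, so no large fixed-size array lives on the stack. */
#include <stdlib.h>
#include <string.h>
#include <stdio.h>

static const char b64tab[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* Encode `len` bytes from `in` into a freshly malloc'd NUL-terminated
 * base64 string; the caller frees the result. */
static char *base64_encode_heap(const unsigned char *in, size_t len)
{
    size_t out_len = ((len + 2) / 3) * 4;
    char *out = malloc(out_len + 1);
    if (out == NULL)
        return NULL;

    size_t o = 0;
    for (size_t i = 0; i < len; i += 3) {
        unsigned int v = (unsigned int)in[i] << 16;
        if (i + 1 < len) v |= (unsigned int)in[i + 1] << 8;
        if (i + 2 < len) v |= in[i + 2];

        out[o++] = b64tab[(v >> 18) & 0x3f];
        out[o++] = b64tab[(v >> 12) & 0x3f];
        out[o++] = (i + 1 < len) ? b64tab[(v >> 6) & 0x3f] : '=';
        out[o++] = (i + 2 < len) ? b64tab[v & 0x3f] : '=';
    }
    out[o] = '\0';
    return out;
}

int main(void)
{
    const unsigned char body[] = "example HTTP response body";
    char *enc = base64_encode_heap(body, sizeof(body) - 1);
    if (enc != NULL) {
        printf("\"http_response_body\": \"%s\"\n", enc);
        free(enc);
    }
    return 0;
}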
Updated by Philippe Antoine almost 2 years ago
- Status changed from New to Closed
Solved in the related tickets, as described above.