Bug #1163 (Closed): HTP Segfault

Added by Brad Roether over 10 years ago. Updated over 10 years ago.

Status: Closed
Priority: Normal
Assignee: Victor Julien
Target version: 2.0.1rc1

Description

HTP segfaults at irregular intervals between incidents (cores have been observed within as little as 2 minutes or as long as 18+ hours apart)

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffb743f700 (LWP 23342)]
0x00007ffff7bcdf8c in htp_connp_REQ_CONNECT_WAIT_RESPONSE (connp=0x7ffdebef14d0) at htp_request.c:322
322         if (connp->in_tx->response_progress <= HTP_RESPONSE_LINE) {
(gdb)
(gdb) bt
#0  0x00007ffff7bcdf8c in htp_connp_REQ_CONNECT_WAIT_RESPONSE (connp=0x7ffdebef14d0) at htp_request.c:322
#1  0x00007ffff7bce3f9 in htp_connp_req_data (connp=0x7ffdebef14d0, timestamp=<value optimized out>, data=<value optimized out>, len=<value optimized out>)
    at htp_request.c:851
#2  0x00000000004230b1 in HTPHandleRequestData (f=<value optimized out>, htp_state=0x7fff90e4b130, pstate=0x7fff3043f900,
    input=0x7fffb743c920 "CONNECT tools.google.com:443 HTTP/1.0\r\nHost: tools.google.com\r\nContent-Length: 0\r\nProxy-Connection: Keep-Alive\r\nProxy-Authorization: Negotiate TlRMTVNTUAADAAAAAQABAGIAAAAAAAAAYwAAAAAAAABIAAAAAAAAAEgAA"..., input_len=<value optimized out>, local_data=<value optimized out>) at app-layer-htp.c:720
#3  0x0000000000427d7a in AppLayerParserParse (alp_tctx=<value optimized out>, f=0x7fff5c98baf0, alproto=1, flags=6 '\006', input=<value optimized out>,
    input_len=<value optimized out>) at app-layer-parser.c:818
#4  0x0000000000410219 in AppLayerHandleTCPData (tv=0x1390ed00, ra_ctx=0x7fffb00135f0, p=0x3738460, f=0x7fff5c98baf0, ssn=0x7ffe90debd90, stream=<value optimized out>,
    data=0x7fffb743c920 "CONNECT tools.google.com:443 HTTP/1.0\r\nHost: tools.google.com\r\nContent-Length: 0\r\nProxy-Connection: Keep-Alive\r\nProxy-Authorization: Negotiate TlRMTVNTUAADAAAAAQABAGIAAAAAAAAAYwAAAAAAAABIAAAAAAAAAEgAA"..., data_len=279, flags=6 '\006') at app-layer.c:360
#5  0x0000000000517875 in StreamTcpReassembleAppLayer (tv=0x1390ed00, ra_ctx=0x7fffb00135f0, ssn=0x7ffe90debd90, stream=0x7ffe90debde0, p=0x3738460)
    at stream-tcp-reassemble.c:3199
#6  0x0000000000517d00 in StreamTcpReassembleHandleSegmentUpdateACK (tv=0x1390ed00, ra_ctx=0x7fffb00135f0, ssn=0x7ffe90debd90, stream=0x7ffe90debde0, p=0x3738460)
    at stream-tcp-reassemble.c:3545
#7  0x0000000000519e9a in StreamTcpReassembleHandleSegment (tv=0x1390ed00, ra_ctx=0x7fffb00135f0, ssn=0x7ffe90debd90, stream=0x7ffe90debd98, p=0x3738460,
    pq=<value optimized out>) at stream-tcp-reassemble.c:3573
#8  0x00000000005146e5 in StreamTcpPacket (tv=0x1390ed00, p=0x43734a0, stt=0x7fffb0012f00, pq=0x137ecb20) at stream-tcp.c:4363
#9  0x0000000000515cec in StreamTcp (tv=0x1390ed00, p=0x43734a0, data=0x7fffb0012f00, pq=0x137ecb20, postpq=<value optimized out>) at stream-tcp.c:4485
#10 0x000000000052a4d0 in TmThreadsSlotVarRun (tv=0x1390ed00, p=0x43734a0, slot=<value optimized out>) at tm-threads.c:559
#11 0x000000000050bdef in TmThreadsSlotProcessPkt (tv=0x1390ed00, data=<value optimized out>, slot=<value optimized out>) at tm-threads.h:142
#12 ReceivePfringLoop (tv=0x1390ed00, data=<value optimized out>, slot=<value optimized out>) at source-pfring.c:361
#13 0x000000000052a11e in TmThreadsSlotPktAcqLoop (td=0x1390ed00) at tm-threads.c:703
#14 0x00007ffff51be9d1 in start_thread (arg=0x7fffb743f700) at pthread_create.c:301
#15 0x00007ffff4d07b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

--build-info:

This is Suricata version 2.0 RELEASE
Features: PCAP_SET_BUFF LIBPCAP_VERSION_MAJOR=1 PF_RING HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK PCRE_JIT HAVE_NSS HAVE_LUAJIT HAVE_LIBJANSSON PROFILING
SIMD support: SSE_4_2 SSE_4_1 SSE_3
Atomic intrisics: 1 2 4 8 16 byte(s)
64-bits, Little-endian architecture
GCC version 4.4.7 20120313 (Red Hat 4.4.7-4), C version 199901
compiled with -fstack-protector
compiled with _FORTIFY_SOURCE=2
L1 cache line size (CLS)=64
compiled with LibHTP v0.5.10, linked against LibHTP v0.5.10
Suricata Configuration:
  AF_PACKET support:                       no
  PF_RING support:                         yes
  NFQueue support:                         no
  IPFW support:                            no
  DAG enabled:                             no
  Napatech enabled:                        no
  Unix socket enabled:                     yes
  Detection enabled:                       yes

  libnss support:                          yes
  libnspr support:                         yes
  libjansson support:                      yes
  Prelude support:                         no
  PCRE jit:                                yes
  libluajit:                               yes
  libgeoip:                                yes
  Non-bundled htp:                         no
  Old barnyard2 support:                   no
  CUDA enabled:                            no

  Suricatasc install:                      yes

  Unit tests enabled:                      no
  Debug output enabled:                    no
  Debug validation enabled:                no
  Profiling enabled:                       yes
  Profiling locks enabled:                 no
  Coccinelle / spatch:                     no

Generic build parameters:
  Installation prefix (--prefix):          [ REDACTED ]
  Configuration directory (--sysconfdir):  [ REDACTED ]
  Log directory (--localstatedir) :        [ REDACTED ]

  Host:                                    x86_64-unknown-linux-gnu
  GCC binary:                              gcc
  GCC Protect enabled:                     yes
  GCC march native enabled:                yes
  GCC Profile enabled:                     no

Other configuration items of interest:
-Using PF_RING build 7180 w/ DNA driver on Intel 82599 NIC
-Suricata is using runmode: workers and cluster_flow
-Included IRQ affinity script is being used
-ixgbe parameters: MQ=1,1 RSS=16,16 FdirPballoc=3,3 num_rx_slots=32768 mtu=1500
-pf_ring parameters: transparent_mode=2 quick_mode=1 enable_frag_coherence=1 min_num_slots=65536 enable_tx_capture=0 enable_ip_defrag=0

#1

Updated by Brad Roether over 10 years ago

Additional information, from a different core:

Core was generated by `/usr/src/pf_ring/bin/suricata --pfring -Dvc /usr/src/pf_ring/etc/suricata/suric'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007fa7f7caef8c in htp_connp_REQ_CONNECT_WAIT_RESPONSE (connp=0x7fa7e5858830) at htp_request.c:322
322         if (connp->in_tx->response_progress <= HTP_RESPONSE_LINE) {

(gdb) print connp
$1 = (htp_connp_t *) 0x7fa7e5858830

(gdb) print connp->in_tx
$2 = (htp_tx_t *) 0x0

(gdb) bt
#0  0x00007fa7f7caef8c in htp_connp_REQ_CONNECT_WAIT_RESPONSE (connp=0x7fa7e5858830) at htp_request.c:322
#1  0x00007fa7f7caf3f9 in htp_connp_req_data (connp=0x7fa7e5858830, timestamp=<value optimized out>, data=<value optimized out>, len=<value optimized out>)
    at htp_request.c:851
#2  0x00000000004230b1 in HTPHandleRequestData (f=<value optimized out>, htp_state=0x7fa6c875c610, pstate=0x7fa7e5858a40,
    input=0x7fa7bfdbc920 "GET http://clients5.google.com/complete/search?hl=en-us&q=g&client=ie8&inputencoding=UTF-8&outputencoding=UTF-8 HTTP/1.1\r\nAccept: */*\r\nAccept-Language: en-us\r\nUser-Agent: Mozilla/4.0 (compatible; MSIE"..., input_len=<value optimized out>, local_data=<value optimized out>) at app-layer-htp.c:720
#3  0x0000000000427d7a in AppLayerParserParse (alp_tctx=<value optimized out>, f=0x7fa75d0ca8f0, alproto=1, flags=6 '\006', input=<value optimized out>,
    input_len=<value optimized out>) at app-layer-parser.c:818
#4  0x0000000000410219 in AppLayerHandleTCPData (tv=0x13d9fff0, ra_ctx=0x7fa7b80135f0, p=0xeb2c8f0, f=0x7fa75d0ca8f0, ssn=0x7fa702d945a0, stream=<value optimized out>,
    data=0x7fa7bfdbc920 "GET http://clients5.google.com/complete/search?hl=en-us&q=g&client=ie8&inputencoding=UTF-8&outputencoding=UTF-8 HTTP/1.1\r\nAccept: */*\r\nAccept-Language: en-us\r\nUser-Agent: Mozilla/4.0 (compatible; MSIE"..., data_len=690, flags=6 '\006') at app-layer.c:360
#5  0x0000000000517875 in StreamTcpReassembleAppLayer (tv=0x13d9fff0, ra_ctx=0x7fa7b80135f0, ssn=0x7fa702d945a0, stream=0x7fa702d945f0, p=0xeb2c8f0)
    at stream-tcp-reassemble.c:3199
#6  0x0000000000517d00 in StreamTcpReassembleHandleSegmentUpdateACK (tv=0x13d9fff0, ra_ctx=0x7fa7b80135f0, ssn=0x7fa702d945a0, stream=0x7fa702d945f0, p=0xeb2c8f0)
    at stream-tcp-reassemble.c:3545
#7  0x0000000000519e9a in StreamTcpReassembleHandleSegment (tv=0x13d9fff0, ra_ctx=0x7fa7b80135f0, ssn=0x7fa702d945a0, stream=0x7fa702d945a8, p=0xeb2c8f0,
    pq=<value optimized out>) at stream-tcp-reassemble.c:3573
#8  0x00000000005146e5 in StreamTcpPacket (tv=0x13d9fff0, p=0xeb9f310, stt=0x7fa7b8012f00, pq=0x161274b0) at stream-tcp.c:4363
#9  0x0000000000515cec in StreamTcp (tv=0x13d9fff0, p=0xeb9f310, data=0x7fa7b8012f00, pq=0x161274b0, postpq=<value optimized out>) at stream-tcp.c:4485
#10 0x000000000052a4d0 in TmThreadsSlotVarRun (tv=0x13d9fff0, p=0xeb9f310, slot=<value optimized out>) at tm-threads.c:559
#11 0x000000000050bdef in TmThreadsSlotProcessPkt (tv=0x13d9fff0, data=<value optimized out>, slot=<value optimized out>) at tm-threads.h:142
#12 ReceivePfringLoop (tv=0x13d9fff0, data=<value optimized out>, slot=<value optimized out>) at source-pfring.c:361
#13 0x000000000052a11e in TmThreadsSlotPktAcqLoop (td=0x13d9fff0) at tm-threads.c:703
#14 0x00007fa7f529f9d1 in start_thread (arg=0x7fa7bfdbf700) at pthread_create.c:301
#15 0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

(gdb) print *connp
$3 = {cfg = 0x19886e0, conn = 0x7fa70b158af0, user_data = 0x7fa6c875c610, last_error = 0x0, in_status = HTP_STREAM_DATA, out_status = HTP_STREAM_DATA,
  out_data_other_at_tx_end = 0, in_timestamp = {tv_sec = 1396488357, tv_usec = 0},
  in_current_data = 0x7fa7bfdbc920 "GET http://clients5.google.com/complete/search?hl=en-us&q=g&client=ie8&inputencoding=UTF-8&outputencoding=UTF-8 HTTP/1.1\r\nAccept: */*\r\nAccept-Language: en-us\r\nUser-Agent: Mozilla/4.0 (compatible; MSIE"..., in_current_len = 690, in_current_read_offset = 0, in_current_consume_offset = 0,
  in_current_receiver_offset = 0, in_chunk_count = 2, in_chunk_request_index = 1, in_stream_offset = 330, in_next_byte = 10, in_buf = 0x0, in_buf_size = 0,
  in_header = 0x0, in_tx = 0x0, in_content_length = -1, in_body_data_left = -1, in_chunked_length = 0, in_state = 0x7fa7f7caef80 <htp_connp_REQ_CONNECT_WAIT_RESPONSE>,
  in_state_previous = 0x7fa7f7cafd90 <htp_connp_REQ_CONNECT_CHECK>, in_data_receiver_hook = 0x0, out_next_tx_index = 1, out_timestamp = {tv_sec = 1396488356,
    tv_usec = 0}, out_current_data = 0x7fa7bfdbc8e0 "", out_current_len = 965, out_current_read_offset = 965, out_current_consume_offset = 965,
  out_current_receiver_offset = 299, out_stream_offset = 965, out_next_byte = 10, out_buf = 0x0, out_buf_size = 0, out_header = 0x0, out_tx = 0x0,
  out_content_length = 666, out_body_data_left = 0, out_chunked_length = 0, out_state = 0x7fa7f7cb0b40 <htp_connp_RES_FINALIZE>,
  out_state_previous = 0x7fa7f7cb0b40 <htp_connp_RES_FINALIZE>, out_data_receiver_hook = 0x0, out_decompressor = 0x0, put_file = 0x0}
#2

Updated by Brad Roether over 10 years ago

(gdb) print *((HtpTxUserData *)connp->user_data)
$4 = {request_body_init = 48 '0', response_body_init = 136 '\210', request_body = {first = 0x7fa70b158af0, last = 0x7fa75d0ca8f0, content_len = 1,
    content_len_so_far = 0, body_parsed = 0, body_inspected = 0}, response_body = {first = 0x7e9ba0, last = 0x9, content_len = 373, content_len_so_far = 323,
    body_parsed = 323, body_inspected = 0}, request_uri_normalized = 0x72646c6968633d71,
  request_headers_raw = 0x68413325666f6e65 <Address 0x68413325666f6e65 out of bounds>,
  response_headers_raw = 0x3225413325707474 <Address 0x3225413325707474 out of bounds>, request_headers_raw_len = 1177691462, response_headers_raw_len = 779581303,
  decoder_events = 0x6d6f632e6f706377, boundary = 0x6874616577463225 <Address 0x6874616577463225 out of bounds>, boundary_len = 101 'e', tsflags = 114 'r',
  tcflags = 37 '%', operation = 13382, request_body_type = 48 '0', response_body_type = 45 '-'}
#3

Updated by Anoop Saldanha over 10 years ago

Just wondering if it is the situation below:

The request is still waiting on the response. The response comes in and the response progress moves forward.

Meanwhile Suricata doesn't consider the inspect_ids for both directions while culling the transaction (I recollect us checking both directions, but the code seems to have changed to consider the direction flags). The response progress might have moved on to the next one, and we end up culling the transaction, which libhtp is still parsing.

I think this should be reproducible even with non-CONNECT requests.

#4

Updated by Ivan Ristic over 10 years ago

Anoop Saldanha wrote:

Just wondering if it is the situation below:

The request is still waiting on the response. The response comes in and the response progress moves forward.

Meanwhile Suricata doesn't consider the inspect_ids for both directions while culling the transaction (I recollect us checking both directions, but the code seems to have changed to consider the direction flags). The response progress might have moved on to the next one, and we end up culling the transaction, which libhtp is still parsing.

I think this should be reproducible even with non-CONNECT requests.

Could this be a race condition where htp_connp_destroy_all() is called and another thread attempts to send more data on the same connection?

I see in the Suricata code that you destroy transactions in one of two ways: htp_tx_destroy() and htp_connp_destroy_all(). But the former will not allow you to destroy an incomplete transaction, which leaves the other path. From the crash dump, we know that both in_tx and out_tx are NULL, but both connection states are inconsistent with that.

Here's what might have happened:

1. A CONNECT request comes in; LibHTP parses it and stops to wait for the response. The state is REQ_CONNECT_WAIT_RESPONSE.
2. The CONNECT request is rejected (headers 299 bytes, body 666); LibHTP parses the response, sees that the other side is waiting for it, and returns. The state is RES_FINALIZE, but notice that the previous state on that connection is also RES_FINALIZE. This means the state was already invoked at least once, but couldn't complete.

Speculation:

3. htp_connp_destroy_all() is invoked. While it is executing:
4. The next request comes in, but in_tx and out_tx are already NULL. Some other values are not yet NULL, which is why we don't see a segfault earlier.

The reason I suspect this is that connp->conn is not (yet) NULL.

#5

Updated by Ivan Ristic over 10 years ago

Brad, could we please also get a dump of connp->conn? Thanks.

#6

Updated by Brad Roether over 10 years ago

Per Victor's request in IRC:

(gdb) thread apply all bt
Thread 21 (Thread 0x7fa7b751f700 (LWP 16237)):
#0  0x00007fa7f4ddf343 in __poll (fds=<value optimized out>, nfds=<value optimized out>, timeout=<value optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:87
#1  0x00007fa7f71ac713 in pfring_mod_poll () from /usr/src/pf_ring/lib/libpcap.so.1
#2  0x00007fa7f71b5c1a in ixgbe_there_is_a_packet_to_read () from /usr/src/pf_ring/lib/libpcap.so.1
#3  0x00007fa7f71b5b11 in get_next_ixgbe_packet () from /usr/src/pf_ring/lib/libpcap.so.1
#4  0x00007fa7f71aae7a in pfring_recv () from /usr/src/pf_ring/lib/libpcap.so.1
#5  0x000000000050bd18 in ReceivePfringLoop (tv=0x1482ad00, data=<value optimized out>, slot=<value optimized out>) at source-pfring.c:339
#6  0x000000000052a11e in TmThreadsSlotPktAcqLoop (td=0x1482ad00) at tm-threads.c:703
#7  0x00007fa7f529f9d1 in start_thread (arg=0x7fa7b751f700) at pthread_create.c:301
#8  0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 20 (Thread 0x7fa7f31ec700 (LWP 16230)):
#0  0x00007fa7f4ddf343 in __poll (fds=<value optimized out>, nfds=<value optimized out>, timeout=<value optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:87
#1  0x00007fa7f71ac713 in pfring_mod_poll () from /usr/src/pf_ring/lib/libpcap.so.1
#2  0x00007fa7f71b5c1a in ixgbe_there_is_a_packet_to_read () from /usr/src/pf_ring/lib/libpcap.so.1
#3  0x00007fa7f71b5b11 in get_next_ixgbe_packet () from /usr/src/pf_ring/lib/libpcap.so.1
#4  0x00007fa7f71aae7a in pfring_recv () from /usr/src/pf_ring/lib/libpcap.so.1
#5  0x000000000050bd18 in ReceivePfringLoop (tv=0x1613f200, data=<value optimized out>, slot=<value optimized out>) at source-pfring.c:339
#6  0x000000000052a11e in TmThreadsSlotPktAcqLoop (td=0x1613f200) at tm-threads.c:703
#7  0x00007fa7f529f9d1 in start_thread (arg=0x7fa7f31ec700) at pthread_create.c:301
#8  0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 19 (Thread 0x7fa7ea99f700 (LWP 16231)):
#0  0x00007fa7f4ddf343 in __poll (fds=<value optimized out>, nfds=<value optimized out>, timeout=<value optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:87
#1  0x00007fa7f71ac713 in pfring_mod_poll () from /usr/src/pf_ring/lib/libpcap.so.1
#2  0x00007fa7f71b5c1a in ixgbe_there_is_a_packet_to_read () from /usr/src/pf_ring/lib/libpcap.so.1
#3  0x00007fa7f71b5b11 in get_next_ixgbe_packet () from /usr/src/pf_ring/lib/libpcap.so.1
#4  0x00007fa7f71aae7a in pfring_recv () from /usr/src/pf_ring/lib/libpcap.so.1
#5  0x000000000050bd18 in ReceivePfringLoop (tv=0x14551450, data=<value optimized out>, slot=<value optimized out>) at source-pfring.c:339
#6  0x000000000052a11e in TmThreadsSlotPktAcqLoop (td=0x14551450) at tm-threads.c:703
#7  0x00007fa7f529f9d1 in start_thread (arg=0x7fa7ea99f700) at pthread_create.c:301
#8  0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 18 (Thread 0x7fa7a639f700 (LWP 16239)):
#0  0x00007fa7f4ddf343 in __poll (fds=<value optimized out>, nfds=<value optimized out>, timeout=<value optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:87
#1  0x00007fa7f71ac713 in pfring_mod_poll () from /usr/src/pf_ring/lib/libpcap.so.1
#2  0x00007fa7f71b5c1a in ixgbe_there_is_a_packet_to_read () from /usr/src/pf_ring/lib/libpcap.so.1
#3  0x00007fa7f71b5b11 in get_next_ixgbe_packet () from /usr/src/pf_ring/lib/libpcap.so.1
#4  0x00007fa7f71aae7a in pfring_recv () from /usr/src/pf_ring/lib/libpcap.so.1
#5  0x000000000050bd18 in ReceivePfringLoop (tv=0x15f72ab0, data=<value optimized out>, slot=<value optimized out>) at source-pfring.c:339
#6  0x000000000052a11e in TmThreadsSlotPktAcqLoop (td=0x15f72ab0) at tm-threads.c:703
#7  0x00007fa7f529f9d1 in start_thread (arg=0x7fa7a639f700) at pthread_create.c:301
#8  0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 17 (Thread 0x7fa7c869f700 (LWP 16235)):
#0  0x00007fa7f4ddf343 in __poll (fds=<value optimized out>, nfds=<value optimized out>, timeout=<value optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:87
#1  0x00007fa7f71ac713 in pfring_mod_poll () from /usr/src/pf_ring/lib/libpcap.so.1
#2  0x00007fa7f71b5c1a in ixgbe_there_is_a_packet_to_read () from /usr/src/pf_ring/lib/libpcap.so.1
#3  0x00007fa7f71b5b11 in get_next_ixgbe_packet () from /usr/src/pf_ring/lib/libpcap.so.1
#4  0x00007fa7f71aae7a in pfring_recv () from /usr/src/pf_ring/lib/libpcap.so.1
#5  0x000000000050bd18 in ReceivePfringLoop (tv=0x143c26c0, data=<value optimized out>, slot=<value optimized out>) at source-pfring.c:339
#6  0x000000000052a11e in TmThreadsSlotPktAcqLoop (td=0x143c26c0) at tm-threads.c:703
#7  0x00007fa7f529f9d1 in start_thread (arg=0x7fa7c869f700) at pthread_create.c:301
#8  0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 16 (Thread 0x7fa7d0f5f700 (LWP 16234)):
#0  0x00007fa7f4ddf343 in __poll (fds=<value optimized out>, nfds=<value optimized out>, timeout=<value optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:87
#1  0x00007fa7f71ac713 in pfring_mod_poll () from /usr/src/pf_ring/lib/libpcap.so.1
#2  0x00007fa7f71b5c1a in ixgbe_there_is_a_packet_to_read () from /usr/src/pf_ring/lib/libpcap.so.1
#3  0x00007fa7f71b5b11 in get_next_ixgbe_packet () from /usr/src/pf_ring/lib/libpcap.so.1
#4  0x00007fa7f71aae7a in pfring_recv () from /usr/src/pf_ring/lib/libpcap.so.1
#5  0x000000000050bd18 in ReceivePfringLoop (tv=0x15f6bb70, data=<value optimized out>, slot=<value optimized out>) at source-pfring.c:339
#6  0x000000000052a11e in TmThreadsSlotPktAcqLoop (td=0x15f6bb70) at tm-threads.c:703
#7  0x00007fa7f529f9d1 in start_thread (arg=0x7fa7d0f5f700) at pthread_create.c:301
#8  0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 15 (Thread 0x7fa7e20df700 (LWP 16232)):
#0  PoolGet (p=0x14295ec0) at util-pool.c:285
#1  0x00000000005166e1 in StreamTcpGetSegment (tv=0x143b3020, ra_ctx=0x7fa7dc0135f0, len=<value optimized out>) at stream-tcp-reassemble.c:3768
#2  0x0000000000519cee in StreamTcpReassembleHandleSegmentHandleData (tv=0x143b3020, ra_ctx=0x7fa7dc0135f0, ssn=<value optimized out>, stream=0x7fa73983f7b0, 
    p=0x250d870) at stream-tcp-reassemble.c:1873
#3  0x0000000000519f1d in StreamTcpReassembleHandleSegment (tv=0x143b3020, ra_ctx=0x7fa7dc0135f0, ssn=0x7fa73983f760, stream=0x7fa73983f7b0, p=0x250d870, 
    pq=<value optimized out>) at stream-tcp-reassemble.c:3584
#4  0x000000000051172c in HandleEstablishedPacketToClient (tv=0x143b3020, p=0x250d870, stt=0x7fa7dc012f00, ssn=0x7fa73983f760, pq=<value optimized out>)
    at stream-tcp.c:2091
#5  StreamTcpPacketStateEstablished (tv=0x143b3020, p=0x250d870, stt=0x7fa7dc012f00, ssn=0x7fa73983f760, pq=<value optimized out>) at stream-tcp.c:2337
#6  0x0000000000514b49 in StreamTcpPacket (tv=0x143b3020, p=0x250d870, stt=0x7fa7dc012f00, pq=0x13f23a50) at stream-tcp.c:4243
#7  0x0000000000515cec in StreamTcp (tv=0x143b3020, p=0x250d870, data=0x7fa7dc012f00, pq=0x13f23a50, postpq=<value optimized out>) at stream-tcp.c:4485
#8  0x000000000052a4d0 in TmThreadsSlotVarRun (tv=0x143b3020, p=0x250d870, slot=<value optimized out>) at tm-threads.c:559
#9  0x000000000050bdef in TmThreadsSlotProcessPkt (tv=0x143b3020, data=<value optimized out>, slot=<value optimized out>) at tm-threads.h:142
#10 ReceivePfringLoop (tv=0x143b3020, data=<value optimized out>, slot=<value optimized out>) at source-pfring.c:361
#11 0x000000000052a11e in TmThreadsSlotPktAcqLoop (td=0x143b3020) at tm-threads.c:703
#12 0x00007fa7f529f9d1 in start_thread (arg=0x7fa7e20df700) at pthread_create.c:301
#13 0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 14 (Thread 0x7fa7d981f700 (LWP 16233)):
#0  0x00007fa7f4ddf343 in __poll (fds=<value optimized out>, nfds=<value optimized out>, timeout=<value optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:87
#1  0x00007fa7f71ac713 in pfring_mod_poll () from /usr/src/pf_ring/lib/libpcap.so.1
#2  0x00007fa7f71b5c1a in ixgbe_there_is_a_packet_to_read () from /usr/src/pf_ring/lib/libpcap.so.1
#3  0x00007fa7f71b5b11 in get_next_ixgbe_packet () from /usr/src/pf_ring/lib/libpcap.so.1
#4  0x00007fa7f71aae7a in pfring_recv () from /usr/src/pf_ring/lib/libpcap.so.1
#5  0x000000000050bd18 in ReceivePfringLoop (tv=0x171a7c30, data=<value optimized out>, slot=<value optimized out>) at source-pfring.c:339
#6  0x000000000052a11e in TmThreadsSlotPktAcqLoop (td=0x171a7c30) at tm-threads.c:703
#7  0x00007fa7f529f9d1 in start_thread (arg=0x7fa7d981f700) at pthread_create.c:301
#8  0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 13 (Thread 0x7fa7f80c7840 (LWP 16219)):
#0  0x00007fa7f4daccdd in nanosleep () at ../sysdeps/unix/syscall-template.S:82
#1  0x00007fa7f4de1e54 in usleep (useconds=<value optimized out>) at ../sysdeps/unix/sysv/linux/usleep.c:33
#2  0x0000000000520b57 in main (argc=<value optimized out>, argv=<value optimized out>) at suricata.c:2289

Thread 12 (Thread 0x7fa76925d700 (LWP 16248)):
#0  pthread_cond_timedwait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:239
#1  0x0000000000438468 in SCPerfWakeupThread (arg=0x1443d7b0) at counters.c:408
#2  0x00007fa7f529f9d1 in start_thread (arg=0x7fa76925d700) at pthread_create.c:301
#3  0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 11 (Thread 0x7fa75bfff700 (LWP 16249)):
#0  pthread_cond_timedwait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:239
#1  0x0000000000439f18 in SCPerfMgmtThread (arg=0x1472ad00) at counters.c:340
#2  0x00007fa7f529f9d1 in start_thread (arg=0x7fa75bfff700) at pthread_create.c:301
#3  0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 10 (Thread 0x7fa77b7df700 (LWP 16244)):
#0  0x00007fa7f4ddf343 in __poll (fds=<value optimized out>, nfds=<value optimized out>, timeout=<value optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:87
#1  0x00007fa7f71ac713 in pfring_mod_poll () from /usr/src/pf_ring/lib/libpcap.so.1
#2  0x00007fa7f71b5c1a in ixgbe_there_is_a_packet_to_read () from /usr/src/pf_ring/lib/libpcap.so.1
#3  0x00007fa7f71b5b11 in get_next_ixgbe_packet () from /usr/src/pf_ring/lib/libpcap.so.1
#4  0x00007fa7f71aae7a in pfring_recv () from /usr/src/pf_ring/lib/libpcap.so.1
#5  0x000000000050bd18 in ReceivePfringLoop (tv=0x13d90e20, data=<value optimized out>, slot=<value optimized out>) at source-pfring.c:339
#6  0x000000000052a11e in TmThreadsSlotPktAcqLoop (td=0x13d90e20) at tm-threads.c:703
#7  0x00007fa7f529f9d1 in start_thread (arg=0x7fa77b7df700) at pthread_create.c:301
#8  0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 9 (Thread 0x7fa76a65f700 (LWP 16246)):
#0  0x00007fa7f4de15e3 in select () at ../sysdeps/unix/syscall-template.S:82
#1  0x000000000052fdc6 in UnixMain (this=0x8146c0) at unix-manager.c:508
#2  0x000000000053013a in UnixManagerThread (td=0x146c6120) at unix-manager.c:867
#3  0x00007fa7f529f9d1 in start_thread (arg=0x7fa76a65f700) at pthread_create.c:301
#4  0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 8 (Thread 0x7fa769c5e700 (LWP 16247)):
#0  0x00007fa7f52a1633 in __pthread_mutex_trylock (mutex=0x7fa7f3fe6650) at pthread_mutex_trylock.c:65
#1  0x0000000000444391 in DefragTimeoutHash (ts=0x7fa769c5d500) at defrag-timeout.c:132
#2  0x00000000004ce57c in FlowManagerThread (td=0x15fd14d0) at flow-manager.c:486
#3  0x00007fa7f529f9d1 in start_thread (arg=0x7fa769c5e700) at pthread_create.c:301
#4  0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 7 (Thread 0x7fa783fff700 (LWP 16243)):
#0  0x00007fa7f4ddf343 in __poll (fds=<value optimized out>, nfds=<value optimized out>, timeout=<value optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:87
#1  0x00007fa7f71ac713 in pfring_mod_poll () from /usr/src/pf_ring/lib/libpcap.so.1
#2  0x00007fa7f71b5c1a in ixgbe_there_is_a_packet_to_read () from /usr/src/pf_ring/lib/libpcap.so.1
#3  0x00007fa7f71b5b11 in get_next_ixgbe_packet () from /usr/src/pf_ring/lib/libpcap.so.1
#4  0x00007fa7f71aae7a in pfring_recv () from /usr/src/pf_ring/lib/libpcap.so.1
#5  0x000000000050bd18 in ReceivePfringLoop (tv=0x143cff00, data=<value optimized out>, slot=<value optimized out>) at source-pfring.c:339
#6  0x000000000052a11e in TmThreadsSlotPktAcqLoop (td=0x143cff00) at tm-threads.c:703
#7  0x00007fa7f529f9d1 in start_thread (arg=0x7fa783fff700) at pthread_create.c:301
#8  0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 6 (Thread 0x7fa78c95f700 (LWP 16242)):
#0  0x00000000004cd6c0 in FlowCompare (p=0xeb1a980) at flow-hash.c:394
#1  FlowGetFlowFromHash (p=0xeb1a980) at flow-hash.c:531
#2  0x00000000004cc2b1 in FlowHandlePacket (tv=<value optimized out>, p=0xeb1a980) at flow.c:242
#3  0x000000000043f0c8 in DecodeTCP (tv=0x147fb110, dtv=<value optimized out>, p=0xeb1a980, pkt=<value optimized out>, len=<value optimized out>, 
    pq=<value optimized out>) at decode-tcp.c:206
#4  0x000000000043c82d in DecodeIPV4 (tv=0x147fb110, dtv=0x7fa784011450, p=0xeb1a980, pkt=0xeb1b146 "E", len=<value optimized out>, pq=0x14906a20) at decode-ipv4.c:593
#5  0x000000000043b6e5 in DecodeEthernet (tv=0x147fb110, dtv=0x7fa784011450, p=0xeb1a980, pkt=0xeb1b138 "", len=503, pq=0x14906a20) at decode-ethernet.c:60
#6  0x000000000050a724 in DecodePfring (tv=0x147fb110, p=0xeb1a980, data=0x7fa784011450, pq=0x14906a20, postpq=<value optimized out>) at source-pfring.c:629
#7  0x000000000052a4d0 in TmThreadsSlotVarRun (tv=0x147fb110, p=0xeb1a980, slot=<value optimized out>) at tm-threads.c:559
#8  0x000000000050bdef in TmThreadsSlotProcessPkt (tv=0x147fb110, data=<value optimized out>, slot=<value optimized out>) at tm-threads.h:142
#9  ReceivePfringLoop (tv=0x147fb110, data=<value optimized out>, slot=<value optimized out>) at source-pfring.c:361
#10 0x000000000052a11e in TmThreadsSlotPktAcqLoop (td=0x147fb110) at tm-threads.c:703
#11 0x00007fa7f529f9d1 in start_thread (arg=0x7fa78c95f700) at pthread_create.c:301
#12 0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 5 (Thread 0x7fa772f1f700 (LWP 16245)):
#0  0x00007fa7f4ddf343 in __poll (fds=<value optimized out>, nfds=<value optimized out>, timeout=<value optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:87
#1  0x00007fa7f71ac713 in pfring_mod_poll () from /usr/src/pf_ring/lib/libpcap.so.1
#2  0x00007fa7f71b5c1a in ixgbe_there_is_a_packet_to_read () from /usr/src/pf_ring/lib/libpcap.so.1
#3  0x00007fa7f71b5b11 in get_next_ixgbe_packet () from /usr/src/pf_ring/lib/libpcap.so.1
#4  0x00007fa7f71aae7a in pfring_recv () from /usr/src/pf_ring/lib/libpcap.so.1
#5  0x000000000050bd18 in ReceivePfringLoop (tv=0x13cec3e0, data=<value optimized out>, slot=<value optimized out>) at source-pfring.c:339
#6  0x000000000052a11e in TmThreadsSlotPktAcqLoop (td=0x13cec3e0) at tm-threads.c:703
#7  0x00007fa7f529f9d1 in start_thread (arg=0x7fa772f1f700) at pthread_create.c:301
#8  0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 4 (Thread 0x7fa79521f700 (LWP 16241)):
#0  0x00007fa7f4ddf343 in __poll (fds=<value optimized out>, nfds=<value optimized out>, timeout=<value optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:87
#1  0x00007fa7f71ac713 in pfring_mod_poll () from /usr/src/pf_ring/lib/libpcap.so.1
#2  0x00007fa7f71b5c1a in ixgbe_there_is_a_packet_to_read () from /usr/src/pf_ring/lib/libpcap.so.1
#3  0x00007fa7f71b5b11 in get_next_ixgbe_packet () from /usr/src/pf_ring/lib/libpcap.so.1
#4  0x00007fa7f71aae7a in pfring_recv () from /usr/src/pf_ring/lib/libpcap.so.1
#5  0x000000000050bd18 in ReceivePfringLoop (tv=0x13f3bfb0, data=<value optimized out>, slot=<value optimized out>) at source-pfring.c:339
#6  0x000000000052a11e in TmThreadsSlotPktAcqLoop (td=0x13f3bfb0) at tm-threads.c:703
#7  0x00007fa7f529f9d1 in start_thread (arg=0x7fa79521f700) at pthread_create.c:301
#8  0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 3 (Thread 0x7fa7aec5f700 (LWP 16238)):
#0  0x00007fa7f4ddf343 in __poll (fds=<value optimized out>, nfds=<value optimized out>, timeout=<value optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:87
#1  0x00007fa7f71ac713 in pfring_mod_poll () from /usr/src/pf_ring/lib/libpcap.so.1
#2  0x00007fa7f71b5c1a in ixgbe_there_is_a_packet_to_read () from /usr/src/pf_ring/lib/libpcap.so.1
#3  0x00007fa7f71b5b11 in get_next_ixgbe_packet () from /usr/src/pf_ring/lib/libpcap.so.1
#4  0x00007fa7f71aae7a in pfring_recv () from /usr/src/pf_ring/lib/libpcap.so.1
#5  0x000000000050bd18 in ReceivePfringLoop (tv=0x14825ff0, data=<value optimized out>, slot=<value optimized out>) at source-pfring.c:339
#6  0x000000000052a11e in TmThreadsSlotPktAcqLoop (td=0x14825ff0) at tm-threads.c:703
#7  0x00007fa7f529f9d1 in start_thread (arg=0x7fa7aec5f700) at pthread_create.c:301
#8  0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 2 (Thread 0x7fa79dadf700 (LWP 16240)):
#0  0x00007fa7f4ddf343 in __poll (fds=<value optimized out>, nfds=<value optimized out>, timeout=<value optimized out>) at ../sysdeps/unix/sysv/linux/poll.c:87
#1  0x00007fa7f71ac713 in pfring_mod_poll () from /usr/src/pf_ring/lib/libpcap.so.1
#2  0x00007fa7f71b5c1a in ixgbe_there_is_a_packet_to_read () from /usr/src/pf_ring/lib/libpcap.so.1
#3  0x00007fa7f71b5b11 in get_next_ixgbe_packet () from /usr/src/pf_ring/lib/libpcap.so.1
#4  0x00007fa7f71aae7a in pfring_recv () from /usr/src/pf_ring/lib/libpcap.so.1
#5  0x000000000050bd18 in ReceivePfringLoop (tv=0x15f64d50, data=<value optimized out>, slot=<value optimized out>) at source-pfring.c:339
#6  0x000000000052a11e in TmThreadsSlotPktAcqLoop (td=0x15f64d50) at tm-threads.c:703
#7  0x00007fa7f529f9d1 in start_thread (arg=0x7fa79dadf700) at pthread_create.c:301
#8  0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

Thread 1 (Thread 0x7fa7bfdbf700 (LWP 16236)):
#0  0x00007fa7f7caef8c in htp_connp_REQ_CONNECT_WAIT_RESPONSE (connp=0x7fa7e5858830) at htp_request.c:322
#1  0x00007fa7f7caf3f9 in htp_connp_req_data (connp=0x7fa7e5858830, timestamp=<value optimized out>, data=<value optimized out>, len=<value optimized out>)
    at htp_request.c:851
#2  0x00000000004230b1 in HTPHandleRequestData (f=<value optimized out>, htp_state=0x7fa6c875c610, pstate=0x7fa7e5858a40, 
    input=0x7fa7bfdbc920 "GET http://clients5.google.com/complete/search?hl=en-us&q=g&client=ie8&inputencoding=UTF-8&outputencoding=UTF-8 HTTP/1.1\r\nAccept: */*\r\nAccept-Language: en-us\r\nUser-Agent: Mozilla/4.0 (compatible; MSIE"..., input_len=<value optimized out>, local_data=<value optimized out>) at app-layer-htp.c:720
#3  0x0000000000427d7a in AppLayerParserParse (alp_tctx=<value optimized out>, f=0x7fa75d0ca8f0, alproto=1, flags=6 '\006', input=<value optimized out>, 
    input_len=<value optimized out>) at app-layer-parser.c:818
#4  0x0000000000410219 in AppLayerHandleTCPData (tv=0x13d9fff0, ra_ctx=0x7fa7b80135f0, p=0xeb2c8f0, f=0x7fa75d0ca8f0, ssn=0x7fa702d945a0, stream=<value optimized out>, 
    data=0x7fa7bfdbc920 "GET http://clients5.google.com/complete/search?hl=en-us&q=g&client=ie8&inputencoding=UTF-8&outputencoding=UTF-8 HTTP/1.1\r\nAccept: */*\r\nAccept-Language: en-us\r\nUser-Agent: Mozilla/4.0 (compatible; MSIE"..., data_len=690, flags=6 '\006') at app-layer.c:360
#5  0x0000000000517875 in StreamTcpReassembleAppLayer (tv=0x13d9fff0, ra_ctx=0x7fa7b80135f0, ssn=0x7fa702d945a0, stream=0x7fa702d945f0, p=0xeb2c8f0)
    at stream-tcp-reassemble.c:3199
#6  0x0000000000517d00 in StreamTcpReassembleHandleSegmentUpdateACK (tv=0x13d9fff0, ra_ctx=0x7fa7b80135f0, ssn=0x7fa702d945a0, stream=0x7fa702d945f0, p=0xeb2c8f0)
    at stream-tcp-reassemble.c:3545
#7  0x0000000000519e9a in StreamTcpReassembleHandleSegment (tv=0x13d9fff0, ra_ctx=0x7fa7b80135f0, ssn=0x7fa702d945a0, stream=0x7fa702d945a8, p=0xeb2c8f0, 
    pq=<value optimized out>) at stream-tcp-reassemble.c:3573
#8  0x00000000005146e5 in StreamTcpPacket (tv=0x13d9fff0, p=0xeb9f310, stt=0x7fa7b8012f00, pq=0x161274b0) at stream-tcp.c:4363
#9  0x0000000000515cec in StreamTcp (tv=0x13d9fff0, p=0xeb9f310, data=0x7fa7b8012f00, pq=0x161274b0, postpq=<value optimized out>) at stream-tcp.c:4485
#10 0x000000000052a4d0 in TmThreadsSlotVarRun (tv=0x13d9fff0, p=0xeb9f310, slot=<value optimized out>) at tm-threads.c:559
#11 0x000000000050bdef in TmThreadsSlotProcessPkt (tv=0x13d9fff0, data=<value optimized out>, slot=<value optimized out>) at tm-threads.h:142
#12 ReceivePfringLoop (tv=0x13d9fff0, data=<value optimized out>, slot=<value optimized out>) at source-pfring.c:361
#13 0x000000000052a11e in TmThreadsSlotPktAcqLoop (td=0x13d9fff0) at tm-threads.c:703
#14 0x00007fa7f529f9d1 in start_thread (arg=0x7fa7bfdbf700) at pthread_create.c:301
#15 0x00007fa7f4de8b6d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
#7

Updated by Victor Julien over 10 years ago

The user data dump seems to contain only randomness, which is odd.

The thread backtraces don't show another thread doing something with that HTP state at the exact same time.

#8

Updated by Brad Roether over 10 years ago

Ivan Ristic wrote:

Brad, could we please also get a dump of connp->conn? Thanks.

(gdb) print connp->conn
$1 = (htp_conn_t *) 0x7fa70b158af0
#9

Updated by Brad Roether over 10 years ago

(gdb) print *connp->conn
$2 = {client_addr = 0x0, client_port = 2785, server_addr = 0x0, server_port = 8080, transactions = 0x7fa7e5858dc0, messages = 0x7fa7e5cbed90, flags = 0 '\000',
  open_timestamp = {tv_sec = 1396488355, tv_usec = 984693}, close_timestamp = {tv_sec = 0, tv_usec = 0}, in_data_counter = 1020, out_data_counter = 965}
#10

Updated by Ivan Ristic over 10 years ago

Victor Julien wrote:

The user data dump seems to contain only randomness, which is odd.

The thread backtraces don't show another thread doing something with that HTP state at the exact same time.

In that case, I think Anoop's theory is the next most likely explanation. I've just discovered that a partial transaction cleanup is invoked on CONNECT requests, causing the request to be marked as complete. After the response is seen, invoking htp_tx_destroy() will clear both in_tx and out_tx, leading to the segfault when the next request arrives.
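
To illustrate the suspected sequence, here is a minimal driver against the libhtp 0.5 API (a sketch, not a confirmed reproducer: the payloads are made up, the htp_list_get() lookup on connp->conn->transactions stands in for however the application tracks the tx, and the direct htp_tx_destroy() call stands in for Suricata's tx culling):

#include <htp/htp.h>
#include <string.h>

int main(void) {
    htp_cfg_t *cfg = htp_config_create();
    htp_config_set_server_personality(cfg, HTP_SERVER_IDS);
    htp_connp_t *connp = htp_connp_create(cfg);

    /* 1. CONNECT request; the request parser parks in
       REQ_CONNECT_WAIT_RESPONSE */
    const char *req =
        "CONNECT tools.google.com:443 HTTP/1.0\r\n"
        "Host: tools.google.com\r\n\r\n";
    htp_connp_req_data(connp, NULL, req, strlen(req));

    /* 2. The proxy rejects the CONNECT; the response side finishes */
    const char *res =
        "HTTP/1.0 407 Proxy Authentication Required\r\n"
        "Content-Length: 0\r\n\r\n";
    htp_connp_res_data(connp, NULL, res, strlen(res));

    /* 3. The transaction now reports itself complete, so the
       application destroys it; this clears connp->in_tx/out_tx */
    htp_tx_t *tx = htp_list_get(connp->conn->transactions, 0);
    if (tx != NULL)
        htp_tx_destroy(tx);

    /* 4. More request data arrives while in_state is still
       REQ_CONNECT_WAIT_RESPONSE; connp->in_tx is NULL and
       htp_request.c:322 dereferences it */
    const char *req2 = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n";
    htp_connp_req_data(connp, NULL, req2, strlen(req2));

    htp_connp_destroy_all(connp);
    htp_config_destroy(cfg);
    return 0;
}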

I see two improvements to LibHTP here (neither of which would fix the underlying problem, but they would at least make it fail gracefully): (1) do not mark the request as complete until it is actually complete, and (2) check in_tx and out_tx when new data arrives.
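
A minimal sketch of what improvement (2) could look like at the crash site (illustrative only; the actual 0.5.11 patch may differ):

#include <htp/htp.h>

/* htp_request.c -- sketch of a graceful-failure guard */
htp_status_t htp_connp_REQ_CONNECT_WAIT_RESPONSE(htp_connp_t *connp) {
    /* in_tx can be NULL if the transaction was destroyed while the
       parser was parked in this state; fail gracefully instead of
       segfaulting */
    if (connp->in_tx == NULL)
        return HTP_ERROR;

    /* original check from line 322 */
    if (connp->in_tx->response_progress <= HTP_RESPONSE_LINE)
        return HTP_DATA_OTHER;

    /* ... remainder of the existing state handling ... */
    return HTP_OK;
}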

#11

Updated by Victor Julien over 10 years ago

  • Status changed from New to Assigned
  • Assignee set to Victor Julien
  • Target version set to 2.0.1rc1

We mark a tx for removal when its progress is complete. So we check whether htp_tx_t::progress is HTP_REQUEST_COMPLETE (for our request inspection) or HTP_RESPONSE_COMPLETE (for our response inspection). If the progress is complete, we update the active tx id to move beyond the current tx. When both sides have moved beyond a tx, we remove it by ultimately calling htp_tx_destroy(). So our handling depends on correct progress tracking in htp.
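
Schematically, that handling amounts to something like the following sketch (the names HtpFlowStateSketch, UpdateInspectId, CullCompletedTxs and the inspect_id/tx_freed fields are illustrative, not the actual Suricata structures):

#include <stdint.h>
#include <htp/htp.h>

/* hypothetical per-flow bookkeeping mirroring the description above */
typedef struct {
    htp_conn_t *conn;
    uint64_t inspect_id_ts;  /* request side has moved past txs below this */
    uint64_t inspect_id_tc;  /* response side has moved past txs below this */
    uint64_t tx_freed;       /* txs below this id have been destroyed */
} HtpFlowStateSketch;

/* after parsing in one direction, advance that direction's id once
   htp reports the tx complete for it */
static void UpdateInspectId(HtpFlowStateSketch *s, htp_tx_t *tx,
                            uint64_t tx_id, int to_server) {
    if (to_server && tx->request_progress == HTP_REQUEST_COMPLETE)
        s->inspect_id_ts = tx_id + 1;
    else if (!to_server && tx->response_progress == HTP_RESPONSE_COMPLETE)
        s->inspect_id_tc = tx_id + 1;
}

/* destroy only transactions that BOTH directions have moved beyond;
   correctness depends entirely on htp's progress tracking */
static void CullCompletedTxs(HtpFlowStateSketch *s) {
    uint64_t done = s->inspect_id_ts < s->inspect_id_tc
                        ? s->inspect_id_ts : s->inspect_id_tc;
    while (s->tx_freed < done) {
        htp_tx_t *tx = htp_list_get(s->conn->transactions, s->tx_freed);
        if (tx != NULL)
            htp_tx_destroy(tx);
        s->tx_freed++;
    }
}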

I don't see any other paths leading to us calling htp_tx_destroy(), but I'll give it some more thought as well.

#12

Updated by Victor Julien over 10 years ago

  • Description updated (diff)
#13

Updated by Ivan Ristic over 10 years ago

Victor Julien wrote:

We mark a tx for removal when its progress is complete. So we check whether htp_tx_t::progress is HTP_REQUEST_COMPLETE (for our request inspection) or HTP_RESPONSE_COMPLETE (for our response inspection). If the progress is complete, we update the active tx id to move beyond the current tx. When both sides have moved beyond a tx, we remove it by ultimately calling htp_tx_destroy(). So our handling depends on correct progress tracking in htp.

That makes sense. I will fix it soon (hopefully today) and release 0.5.11.

#14

Updated by Ivan Ristic over 10 years ago

FYI: LibHTP v0.5.11 has been released: https://github.com/ironbee/libhtp/releases/tag/0.5.11

#15

Updated by Victor Julien over 10 years ago

Brad, have you been able to test the new libhtp?

#16

Updated by Brad Roether over 10 years ago

Yes, this issue does appear to be fixed - thank you!

#17

Updated by Victor Julien over 10 years ago

  • Status changed from Assigned to Closed
  • % Done changed from 0 to 100

Great, thanks Brad and thanks Ivan!
