Bug #5406

open

HTTP request and response correlation incorrect

Added by Sachin Desai 3 months ago. Updated 2 months ago.

Status:
New
Priority:
Normal
Assignee:
Target version:
Affected Versions:
Effort:
Difficulty:
Label:

Description

Hi,

On a setup under medium-to-heavy load, we are seeing that the Suricata logs do not correlate HTTP requests with their responses correctly.

  1. The customer is running HTTP/1.1 between the load balancer and the upstream application servers, which implies multiple transactions over persistent (keep-alive) connections.
  2. We are also seeing a large number of log entries with the same flow_id.

Some stats of concern (complete stats attached):

"tcp": {
  "sessions": 1474566,
  "ssn_memcap_drop": 0,
  "pseudo": 0,
  "pseudo_failed": 0,
  "invalid_checksum": 1878,
  "no_flow": 0,
  "syn": 1499057,
  "synack": 1624427,
  "rst": 1021348,
  "midstream_pickups": 601,
  "pkt_on_wrong_thread": 0,
  "segment_memcap_drop": 0,
  "stream_depth_reached": 0,
  "reassembly_gap": 16172,
  "overlap": 244367,
  "overlap_diff_data": 0,
  "insert_data_normal_fail": 0,
  "insert_data_overlap_fail": 0,
  "insert_list_fail": 14865,
  "memuse": 1212656,
  "reassembly_memuse": 6535168
},
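To put these counters in proportion, here is a quick sketch that expresses each anomaly counter as a fraction of total sessions (the values are copied from the stats above; the interpretation thresholds are ours, not official Suricata guidance):

```python
# Sanity-check the tcp counters from suricatastats.json.
# Values are copied from the stats snippet in this report.
tcp = {
    "sessions": 1474566,
    "reassembly_gap": 16172,
    "overlap": 244367,
    "insert_list_fail": 14865,
}

# Express each anomaly counter as a fraction of all TCP sessions.
for key in ("reassembly_gap", "overlap", "insert_list_fail"):
    ratio = tcp[key] / tcp["sessions"]
    print(f"{key}: {ratio:.2%} of sessions")
```

With these numbers, roughly 1% of sessions hit a reassembly gap, which is high enough to plausibly affect a visible share of HTTP transactions.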

A few questions:
1. We are seeing a large number of flows with identical flow_ids (due to persistent connections). Is this expected?
2. There are also large reassembly gaps. Can these cause incorrect request/response correlation?
3. In the case of a reassembly gap, how does Suricata resume processing the next request on the same TCP connection? If it doesn't, does that mean we are losing a lot more data?
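On question 1: over a keep-alive connection, Suricata emits one http event per transaction, and all of them share the flow's flow_id while carrying an increasing tx_id, so identical flow_ids by themselves are expected. A small sketch of how to verify this against eve.json (the sample events below are fabricated for illustration):

```python
import json
from collections import defaultdict

# Fabricated eve.json http events: one persistent connection yields
# one flow_id with several transactions, distinguished by tx_id.
eve_lines = [
    '{"event_type": "http", "flow_id": 111, "tx_id": 0, "http": {"url": "/a"}}',
    '{"event_type": "http", "flow_id": 111, "tx_id": 1, "http": {"url": "/b"}}',
    '{"event_type": "http", "flow_id": 222, "tx_id": 0, "http": {"url": "/c"}}',
]

# Group transaction ids by flow to count transactions per connection.
txs_per_flow = defaultdict(list)
for line in eve_lines:
    ev = json.loads(line)
    if ev["event_type"] == "http":
        txs_per_flow[ev["flow_id"]].append(ev["tx_id"])

for fid, txs in txs_per_flow.items():
    print(f"flow_id {fid}: {len(txs)} transaction(s), tx_ids {txs}")
```

Running the same grouping over the real eve.json would show whether duplicated flow_ids are simply multiple transactions on one connection, or genuinely colliding flows.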

Unfortunately, we do not have access to the setup to collect pcaps, but we will try to collect anything that helps.

Setup: suricata-6.0.4 and libhtp 0.5.39, processing VXLAN-encapsulated traffic.
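For reference, VXLAN decapsulation in Suricata 6.x is controlled from suricata.yaml; a minimal fragment like this (assuming the default VXLAN port 4789) enables it:

```yaml
# suricata.yaml: enable VXLAN decapsulation (Suricata 6.x)
decoder:
  vxlan:
    enabled: true
    ports: 4789
```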

Thanks for the great product!


Files

suricatastats.json (35 KB) suricatastats.json Suricata stats Sachin Desai, 06/24/2022 07:52 AM
test.pcap (1.16 KB) test.pcap Sachin Desai, 06/24/2022 01:50 PM
#1

Updated by Sachin Desai 3 months ago

We were able to reproduce, possibly, one of the scenarios. On a persistent connection, assume there are two transactions. If the response to the first and the request of the second are lost, Suricata ends up correlating them incorrectly.

The counters flag the reassembly gap, but the output log still gets generated, which causes some confusion.

"reassembly_gap": 1,      
"overlap": 2,

Is there a way to stop Suricata from generating logs on such reassembly gaps?

For example, the attached pcap can help reproduce the issue.
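The suspected failure mode can be modeled abstractly: if a parser pairs each response with the oldest unanswered request, and a reassembly gap swallows the first response and the second request, then the first request gets paired with the second response. A toy sketch of that hypothesis (this models the pairing logic conceptually, not Suricata's actual libhtp code):

```python
from collections import deque

def correlate(events):
    """Naively pair each response with the oldest pending request,
    the way an HTTP/1.1 pipeline is normally matched."""
    pending = deque()
    pairs = []
    for kind, name in events:
        if kind == "req":
            pending.append(name)
        elif kind == "resp" and pending:
            pairs.append((pending.popleft(), name))
    return pairs

# Full stream: two transactions on one persistent connection.
full = [("req", "req1"), ("resp", "resp1"), ("req", "req2"), ("resp", "resp2")]
# A reassembly gap swallows resp1 and req2.
gapped = [("req", "req1"), ("resp", "resp2")]

print(correlate(full))    # [('req1', 'resp1'), ('req2', 'resp2')]
print(correlate(gapped))  # [('req1', 'resp2')] -- mis-correlated
```

If this matches what the pcap shows, the logged transaction would carry req1's URL with resp2's status code, which is exactly the confusion described above.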

#2

Updated by Sachin Desai 2 months ago

Was wondering if there are any helpful comments, or whether this is a Suricata limitation we need to live with?

Thanks!
