Bug #1958


Possible confusion or bypass within the stream engine with retransmits.

Added by Andreas Herz about 8 years ago. Updated over 7 years ago.

Status: Closed
Priority: Normal
Assignee: Victor Julien
Target version: 4.0rc2
Affected Versions:
Effort:
Difficulty:
Label:

Description

I got this bug report from my former employer; since it might be a security issue, it's private for now:

We encountered a problem when trying to send an email to
remote.wunderlich-architekten.de. This host seems to be very keen on sending
retransmits - probably a non-standard setting or maybe even a misconfiguration
there - but on the other hand it seems to confuse the stream engine, so I guess
Suricata should be able to deal with it.

With the attached .pcap we could reproduce it with --simulate-ips and -r foo.pcap.

Excerpt from debug log with comments from me:

# SYN
 IPV4 213.179.141.101->178.15.93.126 PROTO: 6 OFFSET: 0 RF: 0 DF: 1 MF: 0 ID: 48631
 ssn 0x7f0cd407a0a0: =~ ssn state is now TCP_SYN_SENT
# SYN ACK
 IPV4 178.15.93.126->213.179.141.101 PROTO: 6 OFFSET: 0 RF: 0 DF: 1 MF: 0 ID: 14956
 ssn 0x7f0cd407a0a0: =~ ssn state is now TCP_SYN_RECV
# Retransmit of SYN ACK (ignored)
 IPV4 178.15.93.126->213.179.141.101 PROTO: 6 OFFSET: 0 RF: 0 DF: 1 MF: 0 ID: 14956
# ACK completing 3whs; corresponds to "state is now TCP_ESTABLISHED" below
 IPV4 213.179.141.101->178.15.93.126 PROTO: 6 OFFSET: 0 RF: 0 DF: 1 MF: 0 ID: 48632
# Retransmit of ACK completing 3whs (ignored) - in the log of the working connection below, this packet shows up later and fixes the state
 IPV4 213.179.141.101->178.15.93.126 PROTO: 6 OFFSET: 0 RF: 0 DF: 1 MF: 0 ID: 48633
# Retransmit of SYN ACK: corresponds to "SYN/ACK packet on state ESTABLISHED" and "state is now reset to TCP_SYN_RECV" messages below
 IPV4 178.15.93.126->213.179.141.101 PROTO: 6 OFFSET: 0 RF: 0 DF: 1 MF: 0 ID: 14956
 ssn 0x7f0cd407a0a0: =~ ssn state is now TCP_ESTABLISHED
 ssn 0x7f0cd407a0a0: SYN/ACK packet on state ESTABLISHED... resent. Likely due server not receiving final ACK in 3whs
 ssn 0x7f0cd407a0a0: =~ ssn state is now reset to TCP_SYN_RECV
# First data packet from server; dropped due to state TCP_SYN_RECV.
 IPV4 178.15.93.126->213.179.141.101 PROTO: 6 OFFSET: 0 RF: 0 DF: 1 MF: 0 ID: 14957
 ssn 0x7f0cd407a0a0: ACK received in the wrong direction
# Repeated again and again
 IPV4 178.15.93.126->213.179.141.101 PROTO: 6 OFFSET: 0 RF: 0 DF: 1 MF: 0 ID: 14957
 ssn 0x7f0cd407a0a0: ACK received in the wrong direction
 ...
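
To make the suspected sequence easier to follow, here is a minimal toy model of the state transitions visible in the log above. It is only an illustration (the function, the flag handling and the messages are simplified assumptions, not Suricata's actual stream engine code): once the retransmitted SYN/ACK arrives in ESTABLISHED, the session falls back to SYN_RECV, and every following packet from the server is rejected because the engine again expects the client's ACK.

#include <stdio.h>

enum tcp_state { SYN_SENT, SYN_RECV, ESTABLISHED };
enum dir { TO_SERVER, TO_CLIENT };

struct ssn { enum tcp_state state; };

static void handle(struct ssn *s, enum dir d, int syn, int ack, int data)
{
    switch (s->state) {
    case SYN_SENT:
        if (d == TO_CLIENT && syn && ack)
            s->state = SYN_RECV;             /* SYN/ACK from the server */
        break;
    case SYN_RECV:
        if (d == TO_SERVER && ack && !syn) {
            s->state = ESTABLISHED;          /* final ACK of the 3whs */
        } else if (d == TO_CLIENT && syn && ack) {
            /* SYN/ACK retransmit: ignored */
        } else if (d == TO_CLIENT) {
            puts("ACK received in the wrong direction (packet not accepted)");
        }
        break;
    case ESTABLISHED:
        if (d == TO_CLIENT && syn && ack) {
            /* retransmitted SYN/ACK: treated as if the final ACK was lost */
            puts("SYN/ACK packet on state ESTABLISHED... state reset to SYN_RECV");
            s->state = SYN_RECV;
        } else if (data) {
            puts("data accepted");
        }
        break;
    }
}

int main(void)
{
    struct ssn s = { SYN_SENT };
    handle(&s, TO_CLIENT, 1, 1, 0);  /* SYN/ACK                 -> SYN_RECV    */
    handle(&s, TO_SERVER, 0, 1, 0);  /* ACK completing the 3whs -> ESTABLISHED */
    handle(&s, TO_CLIENT, 1, 1, 0);  /* SYN/ACK retransmit      -> SYN_RECV    */
    handle(&s, TO_CLIENT, 0, 1, 1);  /* first data from server  -> rejected    */
    return 0;
}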

Every now and then the connection works (at the customer site: 0% of the time,
here maybe 5%). In these cases the retransmit of the ACK packet completing the
3whs shows up after the SYN ACK retransmit has been received:
# SYN
 IPV4 213.179.141.101->178.15.93.126 PROTO: 6 OFFSET: 0 RF: 0 DF: 1 MF: 0 ID: 28585
 ssn 0x7f0cdc07a0a0: =~ ssn state is now TCP_SYN_SENT
# SYN ACK
 IPV4 178.15.93.126->213.179.141.101 PROTO: 6 OFFSET: 0 RF: 0 DF: 1 MF: 0 ID: 15666
 ssn 0x7f0cdc07a0a0: =~ ssn state is now TCP_SYN_RECV
# ACK completing 3whs
 IPV4 213.179.141.101->178.15.93.126 PROTO: 6 OFFSET: 0 RF: 0 DF: 1 MF: 0 ID: 28586
 ssn 0x7f0cdc07a0a0: =~ ssn state is now TCP_ESTABLISHED
# retransmit of SYN ACK
 IPV4 178.15.93.126->213.179.141.101 PROTO: 6 OFFSET: 0 RF: 0 DF: 1 MF: 0 ID: 15666
 ssn 0x7f0cdc07a0a0: SYN/ACK packet on state ESTABLISHED... resent. Likely due server not receiving final ACK in 3whs
 ssn 0x7f0cdc07a0a0: =~ ssn state is now reset to TCP_SYN_RECV
# retransmit of SYN ACK (ignored)
 IPV4 178.15.93.126->213.179.141.101 PROTO: 6 OFFSET: 0 RF: 0 DF: 1 MF: 0 ID: 15666
# retransmit of ACK completing 3whs - sent later than in above example and restoring ESTABLISHED state
 IPV4 213.179.141.101->178.15.93.126 PROTO: 6 OFFSET: 0 RF: 0 DF: 1 MF: 0 ID: 28587
 ssn 0x7f0cdc07a0a0: =~ ssn state is now TCP_ESTABLISHED
# first data packet from server
 IPV4 178.15.93.126->213.179.141.101 PROTO: 6 OFFSET: 0 RF: 0 DF: 1 MF: 0 ID: 15667
 3whs is now confirmed by server
 IPV4 178.15.93.126->213.179.141.101 PROTO: 6 OFFSET: 0 RF: 0 DF: 1 MF: 0 ID: 15667
 IPV4 213.179.141.101->178.15.93.126 PROTO: 6 OFFSET: 0 RF: 0 DF: 1 MF: 0 ID: 28588

Suricata is running on kernel 4.4.26. The following changes don't make a difference - the bug still shows up:
- updating Suricata 3.1.1 to 3.2RC1
- client is the Suricata machine itself, or the client is another machine running kernel 4.4.26
- SACK enabled or disabled
- using NFQUEUE --queue-balance with multiple Suricata queues (q) or just a single queue
- stream engine settings: midstream true or false

Interestingly, the bug does not show up when using a client running an old 2.6.32.71 kernel.

I attached the following files:
- pcap of failure case
- pcap of working case with old client 2.6.32.71 kernel
- suricata debug log of failure case (client is suricata machine)
- suricata debug log of working case (client is suricata machine)
- suricata config
The pcaps unfortunately don't correspond to the debug logs.

The primary question is how dangerous this problem might be for Suricata itself.
If you can use retransmits to confuse the stream engine, that might be an issue.
On the other hand there is "max-synack-queued", which should take care of that or at least might be capable of dealing with it.
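
As a rough illustration of the idea behind such a limit, here is a hypothetical sketch of a max-synack-queued style cap. The structure and function below are assumptions for illustration only, not Suricata's implementation: a SYN/ACK that exactly matches the one already recorded for the session is treated as a plain retransmit, while genuinely different SYN/ACKs are only remembered up to a fixed cap, which bounds how much influence repeated or crafted retransmits can have on the session.

/* Hypothetical sketch of a max-synack-queued style cap - not Suricata's code. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_SYNACK_QUEUED 5            /* assumed cap, named after the config option */

struct synack { uint32_t seq, ack; };

struct ssn {
    struct synack original;                   /* SYN/ACK that set up the session */
    struct synack queued[MAX_SYNACK_QUEUED];  /* differing SYN/ACKs seen later   */
    int nqueued;
};

/* Returns true if this SYN/ACK was queued as a new variant. */
static bool synack_queue(struct ssn *s, uint32_t seq, uint32_t ack)
{
    if (seq == s->original.seq && ack == s->original.ack)
        return false;                         /* exact retransmit: nothing new to track */
    if (s->nqueued >= MAX_SYNACK_QUEUED)
        return false;                         /* cap reached: further variants ignored */
    s->queued[s->nqueued].seq = seq;
    s->queued[s->nqueued].ack = ack;
    s->nqueued++;
    return true;
}

int main(void)
{
    struct ssn s = { .original = { 1000, 2000 }, .nqueued = 0 };
    printf("%d\n", synack_queue(&s, 1000, 2000));   /* 0: plain retransmit       */
    printf("%d\n", synack_queue(&s, 3000, 2000));   /* 1: new variant, queued    */
    return 0;
}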


Files

suricata.tgz (33.7 KB) - Andreas Herz, 11/21/2016 03:03 PM
#1

Updated by Victor Julien about 8 years ago

  • Status changed from New to Assigned
  • Assignee changed from OISF Dev to Victor Julien
  • Target version changed from TBD to 70
#2

Updated by Victor Julien almost 8 years ago

  • Priority changed from Normal to High
#3

Updated by Victor Julien over 7 years ago

  • Status changed from Assigned to Closed
  • Priority changed from High to Normal
  • Target version changed from 70 to 4.0rc2
#4

Updated by Victor Julien over 7 years ago

  • Private changed from Yes to No
