Bug #8278

krb5: TCP parser never advances past the first record in a multi-record segment

Added by Alexey Monastyrskiy 4 days ago. Updated 26 minutes ago.

Status:
In Review
Priority:
Normal
Assignee:
Victor Julien
Target version:
9.0.0-beta1
Affected Versions:
Effort:
Difficulty:
Label:
Protocol, Rust

Description

When a TCP segment contains multiple Kerberos records, the KRB5 TCP parser only processes the first one and silently drops the rest. All KRB5 detection keywords (krb5_msg_type, krb5.cname, etc.) and eve-log events are lost for the dropped records. The root cause is that record_ts (and record_tc) is zeroed out before it is used to advance the buffer pointer. Practical severity is low: while RFC 4120 appears to permit multiple requests per connection, no major KDC implementation we examined (MIT, Heimdal, Windows AD) exercises this — they all close after one response.

Affected Code

rust/src/krb/krb5.rs, functions krb5_parse_request_tcp (lines 525–526) and krb5_parse_response_tcp (lines 583–584).

Confirmed on current main.

Background

RFC 4120 §7.2.2 specifies that Kerberos over TCP uses a 4-byte big-endian length prefix before each message. The parser is supposed to loop through cur_i, reading the length prefix and the message body for each record, until all data is consumed.
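
As an illustration of this wire format, here is a hedged, standalone snippet that frames one message the way §7.2.2 describes; krb_msg is a placeholder body, not real DER:

fn main() {
    // Hypothetical message body; real traffic carries a DER-encoded KRB message.
    let krb_msg = b"placeholder AS-REQ bytes";
    // 4-byte big-endian length prefix, then the message itself.
    let mut record = (krb_msg.len() as u32).to_be_bytes().to_vec();
    record.extend_from_slice(krb_msg);
    // A receiver reads the prefix back to find the record boundary.
    let len = u32::from_be_bytes(record[..4].try_into().unwrap()) as usize;
    assert_eq!(&record[4..4 + len], &krb_msg[..]);
}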

RFC 4120 §7.2.2 also states: "A client MAY send multiple requests before receiving responses." Our reading is that multiple length-prefixed records in a single TCP segment are a valid scenario the parser should handle, though we are not aware of any KDC implementation that actually exercises this.

What Happens

krb5_parse_request_tcp iterates through records in a while !cur_i.is_empty() loop:

if cur_i.len() >= state.record_ts {
    if state.parse(cur_i, flow, Direction::ToServer) < 0 {
        return AppLayerResult::err();
    }
    state.record_ts = 0;                    // <- resets to 0
    cur_i = &cur_i[state.record_ts..];      // <- uses 0, NOT the record length
} else {
    // more fragments required
    state.defrag_buf_ts.extend_from_slice(cur_i);
    return AppLayerResult::ok();
}

After parsing, state.record_ts is reset to 0 (line 525) and then immediately used to slice cur_i (line 526). The slice &cur_i[0..] is the entire buffer, so cur_i never advances. On the next loop iteration state.record_ts == 0, so be_u32 reads the next 4 bytes as a length prefix, but those bytes are the start of the first record's DER body (e.g. 0x6a819f30 for an AS-REQ, roughly 1.8 billion). This garbage length far exceeds cur_i.len(), so the parser falls into the else branch, buffers the remaining data into defrag_buf_ts, and returns ok(), silently dropping all subsequent records.
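
For concreteness, a small standalone snippet reproducing the misread (the byte values are from the AS-REQ example above):

fn main() {
    // First 4 bytes of the AS-REQ's DER body, misread as a big-endian
    // record length on the second loop iteration.
    let der_start: [u8; 4] = [0x6a, 0x81, 0x9f, 0x30];
    let bogus_len = u32::from_be_bytes(der_start);
    assert_eq!(bogus_len, 1_786_879_792); // far larger than any real segment
}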

The identical pattern exists in krb5_parse_response_tcp at lines 583–584 (state.record_tc).

Reproduction

Two attached PCAPs demonstrate the issue. Both contain valid Kerberos messages (parseable in Wireshark). Apply a rule matching TGS-REQ:

alert krb5 any any -> any any (msg:"KRB5 TGS-REQ"; flow:to_server,established; krb5_msg_type:12; sid:1;)

baseline.pcap: AS-REQ and TGS-REQ in separate TCP segments.
Expected: 1 alert (TGS-REQ detected). Actual: 1 alert. ✓

evasion.pcap: AS-REQ + TGS-REQ coalesced in a single TCP segment (AS-REQ first, TGS-REQ second).
Expected: 1 alert (TGS-REQ detected). Actual: 0 alerts — the parser re-reads AS-REQ's DER body as a garbage length, buffers the remaining data, and exits without ever reaching the TGS-REQ.

Fix

Save record_ts / record_tc to a local variable before resetting:

if cur_i.len() >= state.record_ts {
    let record_len = state.record_ts;
    if state.parse(cur_i, flow, Direction::ToServer) < 0 {
        return AppLayerResult::err();
    }
    state.record_ts = 0;
    cur_i = &cur_i[record_len..];
}

Same fix for krb5_parse_response_tcp:

if cur_i.len() >= state.record_tc {
    let record_len = state.record_tc;
    if state.parse(cur_i, flow, Direction::ToClient) < 0 {
        return AppLayerResult::err();
    }
    state.record_tc = 0;
    cur_i = &cur_i[record_len..];
}

With this fix, all records in a multi-record segment are correctly parsed and generate individual transactions.
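
To sanity-check the advance logic in isolation, here is a hedged, self-contained reduction of the loop; split_records is a hypothetical stand-in, not the Suricata function:

fn split_records(mut cur_i: &[u8]) -> Vec<&[u8]> {
    let mut out = Vec::new();
    while cur_i.len() >= 4 {
        let len = u32::from_be_bytes(cur_i[..4].try_into().unwrap()) as usize;
        if cur_i.len() < 4 + len {
            break; // incomplete record: would be buffered for the next segment
        }
        out.push(&cur_i[4..4 + len]);
        cur_i = &cur_i[4 + len..]; // the advance the buggy code skipped
    }
    out
}

fn main() {
    // Build one buffer holding two length-prefixed records, as in evasion.pcap.
    let mut buf = Vec::new();
    for body in [&b"AS-REQ"[..], &b"TGS-REQ"[..]] {
        buf.extend_from_slice(&(body.len() as u32).to_be_bytes());
        buf.extend_from_slice(body);
    }
    // With the fix both records are visited; the buggy version stops after one.
    assert_eq!(split_records(&buf).len(), 2);
}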

Note: the AppLayerResult::incomplete() refactor suggested in #3540 would also eliminate this bug by removing the manual TCP buffering entirely.
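
For reference, a hedged sketch of that shape; AppLayerResult here is a minimal stand-in defined locally so the fragment compiles on its own, not Suricata's real type:

// Minimal stand-in for Suricata's AppLayerResult, just for this sketch.
#[derive(Debug, PartialEq)]
enum AppLayerResult {
    Ok,
    Incomplete { consumed: u32, needed: u32 },
}

fn parse_tcp(input: &[u8]) -> AppLayerResult {
    let mut cur_i = input;
    while !cur_i.is_empty() {
        let consumed = (input.len() - cur_i.len()) as u32;
        if cur_i.len() < 4 {
            // Ask the engine to call back once the 4-byte prefix is complete;
            // no manual defrag buffer needed.
            return AppLayerResult::Incomplete { consumed, needed: 4 };
        }
        let len = u32::from_be_bytes(cur_i[..4].try_into().unwrap()) as usize;
        if cur_i.len() < 4 + len {
            return AppLayerResult::Incomplete { consumed, needed: (4 + len) as u32 };
        }
        // ... parse the record body &cur_i[4..4 + len] here ...
        cur_i = &cur_i[4 + len..];
    }
    AppLayerResult::Ok
}

fn main() {
    // 2 bytes: not even a full length prefix yet.
    assert_eq!(
        parse_tcp(&[0x00, 0x00]),
        AppLayerResult::Incomplete { consumed: 0, needed: 4 }
    );
}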

Impact

  • Detection: krb5_msg_type, krb5.cname, and other KRB5 detection keywords stop matching for any record after the first in a multi-record TCP segment.
  • Logging: KRB5 entries in eve.json are silently dropped for subsequent records. For example, baseline.pcap produces two KRB5 events (AS-REQ + TGS-REQ) while evasion.pcap produces only one (AS-REQ) — the TGS-REQ event is missing entirely.

Practical Severity

This is a low-severity issue in practice. While the bug could theoretically allow detection evasion by hiding a malicious KRB5 message behind a benign first record in a coalesced TCP segment, none of the three major KDC implementations we examined appear to support multiple requests on a single TCP connection — they all close the connection after a single request-response exchange:

  • MIT Kerberos: src/lib/apputils/net-server.c, process_stream_connection_write() — comment at line 1491 says "We should go back to reading" but the code just calls verto_del(ev), closing the connection.
  • Heimdal: kdc/connect.c, handle_tcp() — comment says "this means we don't keep the connection open even where the protocol permits it", then calls clear_descr().
  • Windows AD: Tested live — KDC returns one response and closes.

Since no production KDC generates multi-record TCP segments, the evasion threat is more theoretical than practical. Still, the code should probably match the intent of the while !cur_i.is_empty() loop.

Files

evasion.1.flow.2.segments.pcap (1.09 KB) Alexey Monastyrskiy, 02/11/2026 09:31 PM
evasion.1.flow.1.segment.pcap (976 Bytes) Alexey Monastyrskiy, 02/11/2026 09:31 PM
baseline.2.flows.pcap (6.12 KB) Alexey Monastyrskiy, 02/13/2026 08:41 PM

Subtasks 2 (2 open, 0 closed)

Bug #8287: krb5: TCP parser never advances past the first record in a multi-record segment (8.0.x backport) (Assigned: Victor Julien)
Bug #8288: krb5: TCP parser never advances past the first record in a multi-record segment (7.0.x backport) (Assigned: Victor Julien)
#1

Updated by Victor Julien 2 days ago

Thanks Alexey, at this time I'm unable to reproduce your results. In both main and main-7.0.x I get only a record for the AS-REQ record:

{"timestamp":"2023-11-14T23:13:20.010000+0100","flow_id":2547081746,"pcap_cnt":4,"event_type":"krb5","src_ip":"10.0.0.100","src_port":49152,"dest_ip":"10.0.0.1","dest_port":88,"proto":"TCP","pkt_src":"wire/pcap","krb5":{"msg_type":"KRB_AS_REQ","cname":"testuser","realm":"DEMO.LOCAL","sname":"krbtgt/DEMO.LOCAL","encryption":"<none>","weak_encryption":false}}

I guess this still shows a bug, as I would expect the TGS-REQ message to be logged for both pcaps.

Can you share the configure options and commandline you use to generate the alert?

#2

Updated by Alexey Monastyrskiy 2 days ago

Victor Julien wrote in #note-1:

In both main and main-7.0.x I get only a record for the AS-REQ record:

I'm sorry, you are correct: the "baseline" PCAP I attached to the report contains two Kerberos messages in two separate TCP segments, but this doesn't prevent the bug from triggering.

I have now attached the PCAP with AS and TGS requests issued in two separate flows, and for this one all Kerberos events should be logged correctly.

(Just in case, for reproduction, I used Suricata-8.0.3-1-64bit.msi with no command line options besides "-l", "-S", "-r".)

#3

Updated by Victor Julien 1 day ago

  • Tracker changed from Bug to Security
  • Subject changed from KRB5 TCP parser never advances past the first record in a multi-record segment to krb5: TCP parser never advances past the first record in a multi-record segment
  • Status changed from New to In Progress
  • Assignee set to Victor Julien
  • Target version changed from TBD to 9.0.0-beta1
  • Effort deleted (low)
  • Severity set to LOW
  • Label Needs backport to 7.0, Needs backport to 8.0 added
#4

Updated by OISF Ticketbot 1 day ago

  • Subtask #8287 added
#5

Updated by OISF Ticketbot 1 day ago

  • Label deleted (Needs backport to 8.0)
#6

Updated by OISF Ticketbot 1 day ago

  • Subtask #8288 added
#7

Updated by OISF Ticketbot 1 day ago

  • Label deleted (Needs backport to 7.0)
#8

Updated by Victor Julien 1 day ago

  • Status changed from In Progress to In Review
#9

Updated by Philippe Antoine 26 minutes ago

  • Tracker changed from Security to Bug
  • Severity deleted (LOW)