Support #1432

Reassembling segments with size > 1500

Added by Mateusz Pigulski about 9 years ago. Updated almost 8 years ago.

Status:
Closed
Priority:
Normal
Assignee:
-
Affected Versions:
Label:

Description

Hi experts!
I can observe that Suricata 2.0.3 can't reassemble TCP segments if one of the segments is larger than 1500 bytes; if all segments are smaller than 1500 bytes, reassembly works fine. Could you tell me which parameter I should increase to be able to reassemble these "big" segments?


Files

stats.log.txt (13.2 KB) stats.log.txt Mateusz Pigulski, 03/31/2015 12:22 AM
suricata_yaml.txt (17.8 KB) suricata_yaml.txt Mateusz Pigulski, 04/02/2015 07:15 AM
Actions #1

Updated by Peter Manev about 9 years ago

You can try increasing the MTU size on the interface you are sniffing on.
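Editor's note: a minimal sketch of what this suggestion involves. The arithmetic explains the 1522-byte frames seen later in the thread; the interface name eth2 and the value 1600 are assumptions (any value comfortably above 1522 would do):

```shell
# Why 1500-byte captures truncate here: the largest on-wire frame in a
# VLAN-tagged network with a 1500-byte MTU is 1500 (payload) + 14
# (Ethernet header) + 4 (802.1Q tag) + 4 (FCS) bytes.
echo $((1500 + 14 + 4 + 4))   # prints 1522

# Raising the MTU on a sniffing interface (requires root; eth2 is an
# example name, 1600 an arbitrary value above 1522):
# ip link set dev eth2 mtu 1600
```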

Actions #2

Updated by Mateusz Pigulski about 9 years ago

Peter Manev wrote:

You can try increasing the MTU size on the interface you are sniffing on.

My Suricata is sniffing on 2 NICs, which have:

ifconfig eth3
eth3 Link encap:Ethernet HWaddr 00:11:0A:5E:D7:69
inet6 addr: fe80::211:aff:fe5e:d769/64 Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1530 Metric:1
RX packets:136324466 errors:437101 dropped:0 overruns:0 frame:437101
TX packets:27 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:42492672580 (39.5 GiB) TX bytes:5562 (5.4 KiB)

ifconfig eth2
eth2 Link encap:Ethernet HWaddr 00:11:0A:5E:D7:68
inet6 addr: fe80::211:aff:fe5e:d768/64 Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1530 Metric:1
RX packets:538011567 errors:148237 dropped:0 overruns:0 frame:148237
TX packets:29 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:324235039930 (301.9 GiB) TX bytes:6246 (6.0 KiB)

tcpstat tells me:

tcpstat -i eth3 -l -o "Time:%S\tn=%n\tavg=%a\tstddev=%d\tbps=%b\tMaxPacketSize=%M\n"  5
Time:1427447723 n=1391 avg=310.66 stddev=458.60 bps=691398.40 MaxPacketSize=1522
tcpstat -i eth2 -l -o "Time:%S\tn=%n\tavg=%a\tstddev=%d\tbps=%b\tMaxPacketSize=%M\n"  5
Time:1427447745 n=3769 avg=198.91 stddev=347.45 bps=1199536.00 MaxPacketSize=1522

I am pretty sure that 1522 bytes is the maximum size, because the MTU on my switch is set to 1522 bytes. I am using Suricata with pf_ring.

Do you think I should increase the MTU on the NICs further?

Actions #3

Updated by Victor Julien about 9 years ago

Can you try setting 'default-packet-size' in your yaml to 1530 (or a bit higher if that doesn't work)? There is an issue with MTU detection with pf_ring in our code.
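Editor's note: a hedged sketch of what this yaml change might look like. `default-packet-size` is a real top-level suricata.yaml setting; the value 1530 is simply the number suggested above:

```yaml
# suricata.yaml (top level): per-packet capture buffer size.
# Must be at least the largest frame on the wire; 1530 covers the
# 1522-byte VLAN-tagged frames observed in this thread.
default-packet-size: 1530
```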

Actions #4

Updated by Mateusz Pigulski almost 9 years ago

Victor Julien wrote:

Can you try setting 'default-packet-size' in your yaml to 1530 (or a bit higher if that doesn't work)? There is an issue with MTU detection with pf_ring in our code.

I increased default-packet-size to 1532, but the effect is the same: big packets aren't reassembled.

Actions #5

Updated by Victor Julien almost 9 years ago

Have you made sure NIC offloading is disabled? See Self_Help_Diagrams
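Editor's note: the check referred to here boils down to turning off every offload feature with ethtool. A sketch that prints the commands rather than running them (they require root; eth2/eth3 are the interface names from this thread, and the short feature names are ethtool's standard `-K` keywords):

```shell
# Print the ethtool commands that disable NIC offloading on both
# sniffing interfaces; remove the leading 'echo' to apply them as root.
for nic in eth2 eth3; do
  for feature in rx tx sg tso ufo gso gro lro; do
    echo ethtool -K "$nic" "$feature" off
  done
done
```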

Actions #6

Updated by Mateusz Pigulski almost 9 years ago

Victor Julien wrote:

Have you made sure NIC offloading is disabled? See Self_Help_Diagrams

Yes, I have disabled offloading:

ethtool --show-offload eth2

Offload parameters for eth2:
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp-segmentation-offload: off
udp-fragmentation-offload: off
generic-segmentation-offload: off
generic-receive-offload: off
large-receive-offload: off

ethtool --show-offload eth3

Offload parameters for eth3:
rx-checksumming: off
tx-checksumming: off
scatter-gather: off
tcp-segmentation-offload: off
udp-fragmentation-offload: off
generic-segmentation-offload: off
generic-receive-offload: off
large-receive-offload: off

Actions #7

Updated by Mateusz Pigulski almost 9 years ago

Mateusz Pigulski wrote:

Victor Julien wrote:

Have you made sure NIC offloading is disabled? See Self_Help_Diagrams

Yes, I have disabled offloading (see the ethtool output above).

The problem also exists without pf_ring.

Actions #8

Updated by Peter Manev almost 9 years ago

Can you please post the last recorded run from your stats.log?

Actions #9

Updated by Mateusz Pigulski almost 9 years ago

Peter Manev wrote:

Can you please post the last recorded run from your stats.log?

Please find it attached.

Actions #10

Updated by Peter Manev almost 9 years ago

I missed this:

I am pretty sure that 1522 bytes is the maximum size, because the MTU on my switch is set to 1522 bytes. I am using Suricata with pf_ring.

Are you confident that the MTU on the switch corresponds to the MTU in the traffic?
When and how did you start experiencing this problem?

Actions #11

Updated by Mateusz Pigulski almost 9 years ago

Are you confident that the MTU on the switch corresponds to the MTU in the traffic?

Yes, I am sure the packets in the traffic are not longer than 1522 bytes.

When and how did you start experiencing this problem?

My Suricata is logging XML sent over the network to my provisioning system. In the u2 files I can see events that consist of several parts; in my case, when the first part is longer than 1500 bytes, the other part is not logged. Everything is fine if none of the parts is longer than 1522 bytes.

Actions #12

Updated by Peter Manev almost 9 years ago

Do you see the same issue with 2.0.7 as well?
Can you please send over (privately if you would like) the output of

suricata -c /etc/suricata/suricata.yaml --dump-config

Thank you

Actions #13

Updated by Mateusz Pigulski almost 9 years ago

Peter Manev wrote:

Do you see the same issue with 2.0.7 as well?

I don't know, because I haven't tried upgrading Suricata. Should I?

Can you please send over (privately if you would like) the output of

You can find the yaml file in the attachment.

Actions #14

Updated by Peter Manev almost 9 years ago

Yes, can you please try 2.0.7?

Why are your timeouts so low?

flow-timeouts.tcp = (null)
flow-timeouts.tcp.new = 6
flow-timeouts.tcp.established = 8
flow-timeouts.tcp.closed = 0
flow-timeouts.tcp.emergency-new = 3
flow-timeouts.tcp.emergency-established = 5
flow-timeouts.tcp.emergency-closed = 0

Can you please try with flow-timeouts.tcp.established = 80 at least?

Do you have/use vlans in your mirrored traffic?

Thank you
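Editor's note: a sketch of the suggested change as a suricata.yaml fragment, assuming the usual nested `flow-timeouts:` layout; `established` is raised to the value suggested above and the other numbers are kept from the config dump:

```yaml
# suricata.yaml: flow-timeouts with tcp.established raised to 80,
# the minimum value suggested above; other values as in the dump.
flow-timeouts:
  tcp:
    new: 6
    established: 80
    closed: 0
    emergency-new: 3
    emergency-established: 5
    emergency-closed: 0
```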

Actions #15

Updated by Mateusz Pigulski almost 9 years ago

Hi, I have upgraded Suricata to version 2.0.7, but the effect is the same.
Yes, the traffic is in a VLAN.
The flow-timeouts are low because my machine has little RAM, so I want to use as little RAM as possible.

Actions #16

Updated by Peter Manev almost 9 years ago

Hi,

Can you reproduce that with a pcap - if you can, can you please consider sharing (privately if you would like) the pcap and your yaml?

Thank you

Actions #17

Updated by Mateusz Pigulski almost 9 years ago

Peter Manev wrote:

Hi,

Can you reproduce that with a pcap - if you can, can you please consider sharing (privately if you would like) the pcap and your yaml?

Thank you

Hi, sorry for my absence, but I was extremely busy; in the meantime I found a workaround for the problem.
You wrote that I can reproduce the problem with a pcap. Do you mean that I should read a pcap file with Suricata and then check whether the entire packet is logged in the u2 file?

Actions #18

Updated by Peter Manev almost 9 years ago

I meant: can you reproduce the problem you mention here (TCP segments aren't reassembled if one of the segments is larger than 1500 bytes) with a pcap? That is, do you have a pcap that consistently reproduces the described problem when read/run through Suricata?
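Editor's note: a sketch of the offline check being asked for, printed rather than executed (capture.pcap and the paths are placeholders; `-r` reads a pcap file, `-c` names the config, `-l` sets the log directory):

```shell
# Replay the capture through Suricata with the same yaml, then inspect
# the unified2 output for the complete reassembled event; remove the
# leading 'echo' to actually run it.
echo suricata -c /etc/suricata/suricata.yaml -r capture.pcap -l /tmp/suri-test
```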

Actions #19

Updated by Victor Julien almost 8 years ago

  • Status changed from New to Closed