Bug #3358

bypass_filter AFPBypassCallback Segmentation Fault

Added by Vincent Li almost 3 years ago. Updated almost 3 years ago.

Status:
Assigned
Priority:
Normal
Assignee:
Eric Leblond
Target version:
TBD

Description

I disabled VLAN tracking for bypass_filter:

diff --git a/ebpf/bypass_filter.c b/ebpf/bypass_filter.c
index eda9650ed..613ff2861 100644
--- a/ebpf/bypass_filter.c
+++ b/ebpf/bypass_filter.c
@@ -28,7 +28,7 @@
 #include "bpf_helpers.h" 

 /* vlan tracking: set it to 0 if you don't use VLAN for flow tracking */
-#define VLAN_TRACKING    1
+#define VLAN_TRACKING    0 

 #define LINUX_VERSION_CODE 263682

Suricata af-packet config:

af-packet:
  - interface: enp4s0f0
    threads: auto
    cluster-id: 99
    cluster-type: cluster_qm
    defrag: yes
    use-mmap: yes
    bypass: yes
    ring-size: 200000
    copy-mode: ips
    copy-iface: enp4s0f1
    xdp-mode: soft
    pinned-maps: true
    pinned-maps-name: flow_table_v4
    ebpf-filter-file:  /usr/libexec/suricata/ebpf/bypass_filter.bpf
  - interface: enp4s0f1
    threads: auto
    cluster-id: 100
    cluster-type: cluster_qm
    defrag: yes
    use-mmap: yes
    bypass: yes
    ring-size: 200000
    copy-mode: ips
    copy-iface: enp4s0f0
    xdp-mode: soft
    pinned-maps: true
    pinned-maps-name: flow_table_v4
    ebpf-filter-file:  /usr/libexec/suricata/ebpf/bypass_filter.bpf

Run Suricata as:


# gdb --args suricata  -c /etc/suricata/suricata.yaml --pidfile /var/run/suricata.pid --af-packet -vvv

Then run an iperf test to pass traffic through Suricata:

ns1> ./iperf -c 10.8.8.9
------------------------------------------------------------
Client connecting to 10.8.8.9, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.8.8.8 port 35166 connected with 10.8.8.9 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.3 sec  1.46 MBytes  1.19 Mbits/sec

Then Suricata dumped core:

Thread 23 "W#06-enp4s0f1" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fff937fe700 (LWP 3723)]
0x000000000077ecc8 in AFPBypassCallback (p=0x7fff84268180) at source-af-packet.c:2437

#0  0x000000000077ecc8 in AFPBypassCallback (p=0x7fff84268180) at source-af-packet.c:2437
#1  0x0000000000886567 in EBPFUpdateFlow (f=0x1f3d130, p=0x7fff84268180, data=0x0) at util-ebpf.c:1063
#2  0x000000000077ee8e in AFPBypassCallback (p=0x7fff84268180) at source-af-packet.c:2460
#3  0x0000000000886567 in EBPFUpdateFlow (f=0x1f3d130, p=0x7fff84268180, data=0x0) at util-ebpf.c:1063
#4  0x000000000077ee8e in AFPBypassCallback (p=0x7fff84268180) at source-af-packet.c:2460
#5  0x0000000000886567 in EBPFUpdateFlow (f=0x1f3d130, p=0x7fff84268180, data=0x0) at util-ebpf.c:1063
#6  0x000000000077ee8e in AFPBypassCallback (p=0x7fff84268180) at source-af-packet.c:2460
#7  0x0000000000886567 in EBPFUpdateFlow (f=0x1f3d130, p=0x7fff84268180, data=0x0) at util-ebpf.c:1063
#8  0x000000000077ee8e in AFPBypassCallback (p=0x7fff84268180) at source-af-packet.c:2460
#9  0x0000000000886567 in EBPFUpdateFlow (f=0x1f3d130, p=0x7fff84268180, data=0x0) at util-ebpf.c:1063
#10 0x000000000077ee8e in AFPBypassCallback (p=0x7fff84268180) at source-af-packet.c:2460
#11 0x0000000000886567 in EBPFUpdateFlow (f=0x1f3d130, p=0x7fff84268180, data=0x0) at util-ebpf.c:1063
#12 0x000000000077ee8e in AFPBypassCallback (p=0x7fff84268180) at source-af-packet.c:2460
#13 0x0000000000886567 in EBPFUpdateFlow (f=0x1f3d130, p=0x7fff84268180, data=0x0) at util-ebpf.c:1063
#14 0x000000000077ee8e in AFPBypassCallback (p=0x7fff84268180) at source-af-packet.c:2460
#15 0x0000000000886567 in EBPFUpdateFlow (f=0x1f3d130, p=0x7fff84268180, data=0x0) at util-ebpf.c:1063
#16 0x000000000077ee8e in AFPBypassCallback (p=0x7fff84268180) at source-af-packet.c:2460
#17 0x0000000000886567 in EBPFUpdateFlow (f=0x1f3d130, p=0x7fff84268180, data=0x0) at util-ebpf.c:1063
#18 0x000000000077ee8e in AFPBypassCallback (p=0x7fff84268180) at source-af-packet.c:2460
#19 0x0000000000886567 in EBPFUpdateFlow (f=0x1f3d130, p=0x7fff84268180, data=0x0) at util-ebpf.c:1063
#20 0x000000000077ee8e in AFPBypassCallback (p=0x7fff84268180) at source-af-packet.c:2460
#21 0x0000000000886567 in EBPFUpdateFlow (f=0x1f3d130, p=0x7fff84268180, data=0x0) at util-ebpf.c:1063
#22 0x000000000077ee8e in AFPBypassCallback (p=0x7fff84268180) at source-af-packet.c:2460
#23 0x0000000000886567 in EBPFUpdateFlow (f=0x1f3d130, p=0x7fff84268180, data=0x0) at util-ebpf.c:1063
---Type <return> to continue, or q <return> to quit---
..............CUT.........
---Type <return> to continue, or q <return> to quit---
#528 0x000000000077ee8e in AFPBypassCallback (p=0x7fff84268180) at source-af-packet.c:2460
#529 0x0000000000886567 in EBPFUpdateFlow (f=0x1f3d130, p=0x7fff84268180, data=0x0) at util-ebpf.c:1063
#530 0x00000000006b09cb in BypassedFlowUpdate (f=0x1f3d130, p=0x7fff84268180) at flow-bypass.c:210
#531 0x00000000006accc4 in FlowHandlePacketUpdate (f=0x1f3d130, p=0x7fff84268180) at flow.c:420
#532 0x00000000006be104 in FlowUpdate (tv=0x71c5220, fw=0x7fff8428d620, p=0x7fff84268180) at flow-worker.c:77
#533 0x00000000006bd64d in FlowWorker (tv=0x71c5220, p=0x7fff84268180, data=0x7fff8428d620, preq=0x528fbe0, unused=0x0) at flow-worker.c:213
#534 0x000000000083837a in TmThreadsSlotVarRun (tv=0x71c5220, p=0x7fff84268180, slot=0x528fa60) at tm-threads.c:130
#535 0x0000000000780f44 in TmThreadsSlotProcessPkt (tv=0x71c5220, s=0x528fa60, p=0x7fff84268180) at ./tm-threads.h:162
#536 0x000000000077818e in AFPReadFromRing (ptv=0x7fff84268b20) at source-af-packet.c:1025
#537 0x0000000000774636 in ReceiveAFPLoop (tv=0x71c5220, data=0x7fff84268b20, slot=0x71c5340) at source-af-packet.c:1589
#538 0x0000000000841060 in TmThreadsSlotPktAcqLoop (td=0x71c5220) at tm-threads.c:335
#539 0x00007ffff695f6db in start_thread (arg=0x7fff937fe700) at pthread_create.c:463
#540 0x00007ffff5dc388f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

bpftool map list

# bpftool map list

9: percpu_hash  name flow_table_v4  flags 0x0
    key 16B  value 16B  max_entries 32768  memlock 19660800B
10: percpu_hash  name flow_table_v6  flags 0x0
    key 40B  value 16B  max_entries 32768  memlock 20447232B
12: percpu_hash  name flow_table_v4  flags 0x0
    key 16B  value 16B  max_entries 32768  memlock 19660800B
13: percpu_hash  name flow_table_v6  flags 0x0
    key 40B  value 16B  max_entries 32768  memlock 20447232B

bpftool map dump id 9


key:
08 08 08 0a 09 08 08 0a  22 a5 51 14 01 00 00 00
value (CPU 00): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 01): 0c 00 00 00 00 00 00 00  f8 46 00 00 00 00 00 00
value (CPU 02): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 03): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 04): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 05): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 06): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 07): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 08): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 09): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 10): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 11): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 12): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 13): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 14): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 15): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 16): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 17): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 18): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 19): 10 b0 2c f1 ff 7f 00 00  00 00 00 00 00 00 00 00
value (CPU 20): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 21): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 22): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 23): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 24): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 25): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 26): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 27): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 28): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 29): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 30): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 31): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
key:
09 08 08 0a 08 08 08 0a  51 14 22 a5 01 00 00 00
value (CPU 00): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 01): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 02): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 03): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 04): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 05): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 06): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 07): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 08): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 09): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 10): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 11): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 12): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 13): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 14): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 15): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 16): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 17): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 18): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 19): 10 b0 2c f1 ff 7f 00 00  00 00 00 00 00 00 00 00
value (CPU 20): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 21): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 22): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 23): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 24): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 25): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 26): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 27): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 28): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 29): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 30): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
value (CPU 31): 00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
Found 2 elements
Actions #1

Updated by Vincent Li almost 3 years ago

The crash does not happen if I set bypass: no in the af-packet config:


af-packet:
  - interface: enp4s0f0
    threads: auto
    cluster-id: 99
    cluster-type: cluster_qm
    defrag: yes
    use-mmap: yes
    bypass: no
    ring-size: 200000
    copy-mode: ips
    copy-iface: enp4s0f1
    xdp-mode: soft
    pinned-maps: true
    pinned-maps-name: flow_table_v4
    ebpf-filter-file:  /usr/libexec/suricata/ebpf/bypass_filter.bpf
  - interface: enp4s0f1
    threads: auto
    cluster-id: 100
    cluster-type: cluster_qm
    defrag: yes
    use-mmap: yes
    bypass: no
    ring-size: 200000
    copy-mode: ips
    copy-iface: enp4s0f0
    xdp-mode: soft
    pinned-maps: true
    pinned-maps-name: flow_table_v4
    ebpf-filter-file:  /usr/libexec/suricata/ebpf/bypass_filter.bpf
Actions #2

Updated by Andreas Herz almost 3 years ago

  • Assignee set to OISF Dev
  • Target version set to TBD
Actions #3

Updated by Victor Julien almost 3 years ago

  • Status changed from New to Assigned
  • Assignee changed from OISF Dev to Eric Leblond