Support #3876

Updated by Victor Julien over 4 years ago

Hey Team, 

 I currently have a CentOS 7 box running kernel 3.10.0-1127.el7.x86_64. The box sits inline between a firewall and a switch, so traffic flows internet --> firewall --> Suricata --> switch, and I am trying to take advantage of AF_PACKET mode. 

 Unfortunately, the firewall above Suricata only has 1GbE interfaces. To increase throughput, these interfaces are bonded together via an LACP port channel: one port channel serves the inside (internal hosts) VLAN and the other serves a DMZ VLAN. On the CentOS 7 box that is running Suricata, I have bonded the appropriate interfaces together and set up matching port channels. The CentOS 7 box successfully bonds with the firewall's inside and DMZ port channels and with the switch's inside and DMZ port channels. So in total I have four port channels: two from the CentOS 7 box to the firewall, and two from the CentOS 7 box to the switch. Each port channel has multiple member interfaces. This all works well. 

 My thought is to run Suricata in AF_PACKET mode to bridge the bonds together. The bond names are detailed below: 

 bond_firewall (serves inside vlan with 2 1GbE interfaces) 
 bond_firewall2 (serves dmz vlan with 2 1GbE interfaces) 
 bond_switch (serves inside vlan with 4 10GbE interfaces) 
 bond_switch2 (serves dmz vlan with 4 10GbE interfaces) 
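
 For reference, I verify each bond's mode and LACP transmit hash from /proc/net/bonding/&lt;bond&gt;. A quick sketch; the sample file contents below are illustrative, not actual output from my box:

```shell
# show_bond FILE: print the bonding mode and LACP transmit-hash lines
# from a /proc/net/bonding/<bond>-style file.
show_bond() {
  grep -E 'Bonding Mode|Transmit Hash Policy' "$1"
}

# Sample capture for illustration; on the live box you would point this at
# /proc/net/bonding/bond_firewall, /proc/net/bonding/bond_switch, etc.
cat > /tmp/bond_sample <<'EOF'
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
EOF

show_bond /tmp/bond_sample
```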

 My Suricata config is below: 
 <pre><code class="yaml"> 
 

 max-pending-packets: 1024 

 # Runmode the engine should use. Please check --list-runmodes to get the available 
 # runmodes for each packet acquisition method. Default depends on selected capture 
 # method. 'workers' generally gives best performance. 
 runmode: workers 


 af-packet: 
   - interface: bond_firewall 
     threads: auto 
     defrag: yes 
     cluster-type: cluster_flow 
     cluster-id: 99 
     ring-size: 2000 
     copy-mode: ips 
     copy-iface: bond_switch 
     #buffer-size: 6453555 
     use-mmap: yes 
     tpacket-v3: no 
     #rollover: yes 

   - interface: bond_switch 
     threads: auto 
     defrag: yes 
     cluster-type: cluster_flow 
     cluster-id: 98 
     ring-size: 2000 
     copy-mode: ips 
     copy-iface: bond_firewall 
     #buffer-size: 6453555 
     use-mmap: yes 
     tpacket-v3: no 
     #rollover: yes 

   - interface: bond_firewall2 
     threads: auto 
     defrag: yes 
     cluster-type: cluster_flow 
     cluster-id: 97 
     ring-size: 2000 
     copy-mode: ips 
     copy-iface: bond_switch2 
     #buffer-size: 6453555 
     use-mmap: yes 
     tpacket-v3: no 
     #rollover: yes 

   - interface: bond_switch2 
     threads: auto 
     defrag: yes 
     cluster-type: cluster_flow 
     cluster-id: 96 
     ring-size: 2000 
     copy-mode: ips 
     copy-iface: bond_firewall2 
     #buffer-size: 6453555 
     use-mmap: yes 
     tpacket-v3: no 
     #rollover: yes 

 </code></pre> 
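
 One detail I noticed while rereading the af-packet IPS documentation: the two interfaces peered via copy-mode are expected to run the same number of threads, and `threads: auto` may resolve to different counts on the 1GbE and 10GbE sides since the NICs have different queue counts. A pinned variant for one pair could look like the following (the thread count of 2 is just an illustration, not a recommendation):

```yaml
af-packet:
  - interface: bond_firewall
    threads: 2            # pinned so it matches the peer below
    cluster-type: cluster_flow
    cluster-id: 99
    copy-mode: ips
    copy-iface: bond_switch
  - interface: bond_switch
    threads: 2            # must equal the bond_firewall thread count
    cluster-type: cluster_flow
    cluster-id: 98
    copy-mode: ips
    copy-iface: bond_firewall
```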

 I then start Suricata and it appears to start up OK (see images). 

 However, performance is brutally slow. When downloading a 2.0 GB file from the internet on a host sitting below Suricata, the transfer rate averages 12 KB/s. 

 Just to make sure it wasn't a layer 1 or OS issue, I removed Suricata and used the Linux kernel bridge module to bridge the port channels together; that worked as expected, reaching 10 MB/s for the same file download.  
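
 For reference, the kernel-bridge control test was along these lines. This is a sketch assuming iproute2 on CentOS 7 and the bond names listed earlier, run as root:

```shell
# Bypass Suricata entirely: bridge each firewall-side bond straight to its
# switch-side peer using the kernel bridge module.
ip link add br_inside type bridge
ip link set bond_firewall master br_inside
ip link set bond_switch master br_inside
ip link set br_inside up

ip link add br_dmz type bridge
ip link set bond_firewall2 master br_dmz
ip link set bond_switch2 master br_dmz
ip link set br_dmz up
```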

 Is Suricata able to bind to these port channels? My guess is that Suricata is getting confused by the multiple interfaces that are a part of the bonds. The LACP bonds use a layer2+3 transmit hash; is this a hash that is difficult for Suricata to reconcile with its own internal hashing when it matches a packet to a given flow?  

 Is there any way for me to accomplish what I am trying to do? 

 I really appreciate any insight any of you may have, as this has left me scratching my head. Have you seen other users achieve this in the past, perhaps through different options? 

 Thanks so much! 

 Taylor 
