Bug #3191

When running Suricata in pf_ring ZC mode, Suricata does not try to connect to Redis.

Added by KH NAM over 4 years ago. Updated about 2 years ago.

Status: Closed
Priority: Normal
Assignee: KH NAM
Target version: TBD
Affected Versions:
Effort:
Difficulty:
Label:
Description

OS: CentOS Linux release 7.7.1908
Kernel: 3.10.0-1062.el7.x86_64
Suricata: 4.1.4 RELEASE

1. Run Suricata with pf_ring ZC:
[root@localhost logstash]# PF_RING_FT_CONF=/etc/pf_ring/ft-rules.conf suricata --pfring-int=zc:ens1f0 -c /etc/suricata/suricata.yaml
24/9/2019 -- 17:15:42 - <Notice> - This is Suricata version 4.1.4 RELEASE
24/9/2019 -- 17:15:42 - <Info> - CPUs/cores online: 4
24/9/2019 -- 17:15:42 - <Config> - luajit states preallocated: 128
24/9/2019 -- 17:15:42 - <Config> - 'default' server has 'request-body-minimal-inspect-size' set to 32133 and 'request-body-inspect-window' set to 3959 after randomization.
24/9/2019 -- 17:15:42 - <Config> - 'default' server has 'response-body-minimal-inspect-size' set to 41880 and 'response-body-inspect-window' set to 16890 after randomization.
24/9/2019 -- 17:15:42 - <Config> - SMB stream depth: 0
24/9/2019 -- 17:15:42 - <Config> - Protocol detection and parser disabled for modbus protocol.
24/9/2019 -- 17:15:42 - <Config> - Protocol detection and parser disabled for enip protocol.
24/9/2019 -- 17:15:42 - <Config> - Protocol detection and parser disabled for DNP3.
24/9/2019 -- 17:15:42 - <Config> - allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
24/9/2019 -- 17:15:42 - <Config> - preallocated 1000 hosts of size 136
24/9/2019 -- 17:15:42 - <Config> - host memory usage: 398144 bytes, maximum: 33554432
24/9/2019 -- 17:15:42 - <Info> - Max dump is 0
24/9/2019 -- 17:15:42 - <Info> - Core dump setting attempted is 0
24/9/2019 -- 17:15:42 - <Info> - Core dump size set to 0
24/9/2019 -- 17:15:42 - <Config> - allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
24/9/2019 -- 17:15:42 - <Config> - preallocated 65535 defrag trackers of size 160
24/9/2019 -- 17:15:42 - <Config> - defrag memory usage: 14155616 bytes, maximum: 33554432
24/9/2019 -- 17:15:42 - <Config> - stream "prealloc-sessions": 2048 (per thread)
24/9/2019 -- 17:15:42 - <Config> - stream "memcap": 67108864
24/9/2019 -- 17:15:42 - <Config> - stream "midstream" session pickups: disabled
24/9/2019 -- 17:15:42 - <Config> - stream "async-oneside": disabled
24/9/2019 -- 17:15:42 - <Config> - stream "checksum-validation": enabled
24/9/2019 -- 17:15:42 - <Config> - stream."inline": disabled
24/9/2019 -- 17:15:42 - <Config> - stream "bypass": disabled
24/9/2019 -- 17:15:42 - <Config> - stream "max-synack-queued": 5
24/9/2019 -- 17:15:42 - <Config> - stream.reassembly "memcap": 268435456
24/9/2019 -- 17:15:42 - <Config> - stream.reassembly "depth": 1048576
24/9/2019 -- 17:15:42 - <Config> - stream.reassembly "toserver-chunk-size": 2618
24/9/2019 -- 17:15:42 - <Config> - stream.reassembly "toclient-chunk-size": 2519
24/9/2019 -- 17:15:42 - <Config> - stream.reassembly.raw: enabled
24/9/2019 -- 17:15:42 - <Config> - stream.reassembly "segment-prealloc": 2048
24/9/2019 -- 17:15:42 - <Config> - enabling 'eve-log' module 'alert'
24/9/2019 -- 17:15:42 - <Config> - enabling 'eve-log' module 'http'
24/9/2019 -- 17:15:42 - <Config> - enabling 'eve-log' module 'dns'
24/9/2019 -- 17:15:42 - <Config> - enabling 'eve-log' module 'tls'
24/9/2019 -- 17:15:42 - <Config> - enabling 'eve-log' module 'files'
24/9/2019 -- 17:15:42 - <Config> - forcing magic lookup for logged files
24/9/2019 -- 17:15:42 - <Config> - forcing sha256 calculation for logged or stored files
24/9/2019 -- 17:15:42 - <Config> - enabling 'eve-log' module 'smtp'
24/9/2019 -- 17:15:42 - <Config> - enabling 'eve-log' module 'nfs'
24/9/2019 -- 17:15:42 - <Config> - enabling 'eve-log' module 'smb'
24/9/2019 -- 17:15:42 - <Config> - enabling 'eve-log' module 'tftp'
24/9/2019 -- 17:15:42 - <Config> - enabling 'eve-log' module 'ikev2'
24/9/2019 -- 17:15:42 - <Config> - enabling 'eve-log' module 'krb5'
24/9/2019 -- 17:15:42 - <Config> - enabling 'eve-log' module 'dhcp'
24/9/2019 -- 17:15:42 - <Config> - enabling 'eve-log' module 'ssh'
24/9/2019 -- 17:15:42 - <Config> - enabling 'eve-log' module 'stats'
24/9/2019 -- 17:15:42 - <Warning> - [ERRCODE: SC_WARN_EVE_MISSING_EVENTS(318)] - eve.stats will not display all decoder events correctly. See #2225. Set a prefix in stats.decoder-events-prefix. In 5.0 the prefix will default to 'decoder.event'.
24/9/2019 -- 17:15:42 - <Config> - enabling 'eve-log' module 'flow'
24/9/2019 -- 17:15:42 - <Config> - enabling 'eve-log' module 'netflow'
24/9/2019 -- 17:15:42 - <Info> - stats output device (regular) initialized: stats.log
24/9/2019 -- 17:15:42 - <Config> - Delayed detect disabled
24/9/2019 -- 17:15:42 - <Info> - Running in live mode, activating unix socket
24/9/2019 -- 17:15:42 - <Config> - pattern matchers: MPM: hs, SPM: hs
24/9/2019 -- 17:15:42 - <Config> - grouping: tcp-whitelist (default) 53, 80, 139, 443, 445, 1433, 3306, 3389, 6666, 6667, 8080
24/9/2019 -- 17:15:42 - <Config> - grouping: udp-whitelist (default) 53, 135, 5060
24/9/2019 -- 17:15:42 - <Config> - prefilter engines: MPM
24/9/2019 -- 17:15:42 - <Info> - Loading reputation file: /etc/suricata/rules/scirius-iprep.list
24/9/2019 -- 17:15:42 - <Perf> - host memory usage: 2268688 bytes, maximum: 33554432
24/9/2019 -- 17:15:42 - <Config> - Loading rule file: /etc/suricata/rules/scirius.rules
24/9/2019 -- 17:15:48 - <Info> - 1 rule files processed. 18918 rules successfully loaded, 0 rules failed
24/9/2019 -- 17:15:48 - <Info> - Threshold config parsed: 0 rule(s) found
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for tcp-packet
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for tcp-stream
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for udp-packet
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for other-ip
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_uri
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_request_line
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_client_body
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_response_line
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_header
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_header
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_header_names
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_header_names
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_accept
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_accept_enc
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_accept_lang
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_referer
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_connection
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_content_len
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_content_len
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_content_type
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_content_type
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_protocol
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_protocol
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_start
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_start
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_raw_header
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_raw_header
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_method
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_cookie
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_cookie
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_raw_uri
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_user_agent
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_host
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_raw_host
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_stat_msg
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for http_stat_code
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for dns_query
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for tls_sni
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for tls_cert_issuer
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for tls_cert_subject
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for tls_cert_serial
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for tls_cert_fingerprint
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for ja3_hash
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for ja3_string
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for dce_stub_data
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for dce_stub_data
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for smb_named_pipe
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for smb_share
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for ssh_protocol
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for ssh_protocol
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for ssh_software
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for ssh_software
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for file_data
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for file_data
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for file_data
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for file_data
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for krb5_cname
24/9/2019 -- 17:15:48 - <Perf> - using shared mpm ctx' for krb5_sname
24/9/2019 -- 17:15:48 - <Info> - 18921 signatures processed. 10 are IP-only rules, 5044 are inspecting packet payload, 16091 inspect application layer, 0 are decoder event only
24/9/2019 -- 17:15:48 - <Config> - building signature grouping structure, stage 1: preprocessing rules... complete
24/9/2019 -- 17:15:48 - <Perf> - TCP toserver: 41 port groups, 35 unique SGH's, 6 copies
24/9/2019 -- 17:15:48 - <Perf> - TCP toclient: 21 port groups, 21 unique SGH's, 0 copies
24/9/2019 -- 17:15:48 - <Perf> - UDP toserver: 41 port groups, 35 unique SGH's, 6 copies
24/9/2019 -- 17:15:48 - <Perf> - UDP toclient: 21 port groups, 16 unique SGH's, 5 copies
24/9/2019 -- 17:15:49 - <Perf> - OTHER toserver: 254 proto groups, 3 unique SGH's, 251 copies
24/9/2019 -- 17:15:49 - <Perf> - OTHER toclient: 254 proto groups, 0 unique SGH's, 254 copies
24/9/2019 -- 17:15:55 - <Perf> - Unique rule groups: 110
24/9/2019 -- 17:15:55 - <Perf> - Builtin MPM "toserver TCP packet": 27
24/9/2019 -- 17:15:55 - <Perf> - Builtin MPM "toclient TCP packet": 20
24/9/2019 -- 17:15:55 - <Perf> - Builtin MPM "toserver TCP stream": 27
24/9/2019 -- 17:15:55 - <Perf> - Builtin MPM "toclient TCP stream": 21
24/9/2019 -- 17:15:55 - <Perf> - Builtin MPM "toserver UDP packet": 35
24/9/2019 -- 17:15:55 - <Perf> - Builtin MPM "toclient UDP packet": 15
24/9/2019 -- 17:15:55 - <Perf> - Builtin MPM "other IP packet": 2
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toserver http_uri": 12
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toserver http_request_line": 1
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toserver http_client_body": 5
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toclient http_response_line": 1
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toserver http_header": 6
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toclient http_header": 3
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toserver http_header_names": 1
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toserver http_accept": 1
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toserver http_referer": 1
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toserver http_content_len": 1
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toserver http_content_type": 1
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toclient http_content_type": 1
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toserver http_start": 1
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toserver http_raw_header": 1
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toserver http_method": 3
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toserver http_cookie": 1
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toclient http_cookie": 2
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toserver http_raw_uri": 1
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toserver http_user_agent": 4
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toserver http_host": 2
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toclient http_stat_code": 1
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toserver dns_query": 4
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toserver tls_sni": 2
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toclient tls_cert_issuer": 2
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toclient tls_cert_subject": 2
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toclient tls_cert_serial": 1
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toserver ssh_protocol": 1
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toserver file_data": 1
24/9/2019 -- 17:15:55 - <Perf> - AppLayer MPM "toclient file_data": 5
24/9/2019 -- 17:16:06 - <Info> - ZC interface detected, not setting cluster-id for PF_RING (iface zc:ens1f0)
24/9/2019 -- 17:16:06 - <Info> - ZC interface detected, not setting cluster type for PF_RING (iface zc:ens1f0)
24/9/2019 -- 17:16:06 - <Warning> - [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'zc:ens1f0': No such device (19)
24/9/2019 -- 17:16:06 - <Warning> - [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'zc:ens1f0': No such device (19)
24/9/2019 -- 17:16:06 - <Warning> - [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'zc:ens1f0': No such device (19)
24/9/2019 -- 17:16:06 - <Warning> - [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'zc:ens1f0': No such device (19)
24/9/2019 -- 17:16:06 - <Warning> - [ERRCODE: SC_ERR_SYSCALL(50)] - Failure when trying to get feature via ioctl for 'zc:ens1f0': No such device (19)
24/9/2019 -- 17:16:06 - <Info> - Going to use 1 thread(s)
24/9/2019 -- 17:16:06 - <Perf> - Enabling zero-copy for zc:ens1f0
24/9/2019 -- 17:16:07 - <Info> - ZC interface detected, not adding thread to cluster
24/9/2019 -- 17:16:07 - <Perf> - (W#01-zc:ens1f0) Using PF_RING v.7.5.0, interface zc:ens1f0, cluster-id 1, single-pfring-thread
24/9/2019 -- 17:16:07 - <Info> - RunModeIdsPfringWorkers initialised
24/9/2019 -- 17:16:07 - <Config> - using 1 flow manager threads
24/9/2019 -- 17:16:07 - <Config> - using 1 flow recycler threads
24/9/2019 -- 17:16:07 - <Info> - Running in live mode, activating unix socket
24/9/2019 -- 17:16:07 - <Info> - Using unix socket file '/var/run/suricata/suricata-command.socket'
24/9/2019 -- 17:16:07 - <Notice> - all 1 packet processing threads, 2 management threads initialized, engine started.
24/9/2019 -- 17:16:07 - <Warning> - [ERRCODE: SC_ERR_PF_RING_VLAN(304)] - no VLAN header in the raw packet. See #2355.
24/9/2019 -- 17:18:32 - <Notice> - Signal Received. Stopping engine.
24/9/2019 -- 17:18:32 - <Perf> - 0 new flows, 0 established flows were timed out, 0 flows in closed state
24/9/2019 -- 17:18:32 - <Info> - time elapsed 145.572s
24/9/2019 -- 17:18:32 - <Perf> - 0 flows processed
24/9/2019 -- 17:18:32 - <Perf> - (W#01-zc:ens1f0) Kernel: Packets 49259, dropped 0
24/9/2019 -- 17:18:32 - <Perf> - (W#01-zc:ens1f0) Packets 49259, bytes 44128784
24/9/2019 -- 17:18:32 - <Info> - Alerts: 0
24/9/2019 -- 17:18:32 - <Perf> - ippair memory usage: 414144 bytes, maximum: 16777216
24/9/2019 -- 17:18:32 - <Perf> - host memory usage: 2268688 bytes, maximum: 33554432
24/9/2019 -- 17:18:32 - <Info> - cleaning up signature grouping structure... complete
24/9/2019 -- 17:18:32 - <Notice> - Stats for 'zc:ens1f0': pkts: 49259, drop: 0 (0.00%), invalid chksum: 0
24/9/2019 -- 17:18:32 - <Perf> - Cleaning up Hyperscan global scratch
24/9/2019 -- 17:18:32 - <Perf> - Clearing Hyperscan database cache

- The Redis DB records did not change either.
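
To check whether events are reaching Redis, you can watch the EVE list from redis-cli. This is a minimal sketch; it assumes the eve-log Redis output runs in list mode with the default key 'suricata', so adjust host, port, and key to match the attached suricata.yaml:

redis-cli -h 127.0.0.1 -p 6379 LLEN suricata       # list length should grow while traffic is processed
redis-cli -h 127.0.0.1 -p 6379 LRANGE suricata 0 0 # newest event, assuming list/lpush mode
redis-cli -h 127.0.0.1 -p 6379 MONITOR             # or watch every command Redis receives live

In the ZC run above nothing ever shows up, which matches the missing "Trying to connect to Redis" notice in the log.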

2. Run Suricata without pf_ring (AF_PACKET, same config):
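(The exact command line for this run is not shown; it was presumably the same setup with AF_PACKET capture instead, along the lines of the following assumed invocation:)

suricata --af-packet=ens1f0 -c /etc/suricata/suricata.yaml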
24/9/2019 -- 17:19:26 - <Notice> - This is Suricata version 4.1.4 RELEASE
24/9/2019 -- 17:19:26 - <Info> - CPUs/cores online: 4
24/9/2019 -- 17:19:26 - <Config> - luajit states preallocated: 128
24/9/2019 -- 17:19:26 - <Config> - 'default' server has 'request-body-minimal-inspect-size' set to 32165 and 'request-body-inspect-window' set to 4055 after randomization.
24/9/2019 -- 17:19:26 - <Config> - 'default' server has 'response-body-minimal-inspect-size' set to 39328 and 'response-body-inspect-window' set to 15681 after randomization.
24/9/2019 -- 17:19:26 - <Config> - SMB stream depth: 0
24/9/2019 -- 17:19:26 - <Config> - Protocol detection and parser disabled for modbus protocol.
24/9/2019 -- 17:19:26 - <Config> - Protocol detection and parser disabled for enip protocol.
24/9/2019 -- 17:19:26 - <Config> - Protocol detection and parser disabled for DNP3.
24/9/2019 -- 17:19:26 - <Config> - allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
24/9/2019 -- 17:19:26 - <Config> - preallocated 1000 hosts of size 136
24/9/2019 -- 17:19:26 - <Config> - host memory usage: 398144 bytes, maximum: 33554432
24/9/2019 -- 17:19:26 - <Info> - Max dump is 0
24/9/2019 -- 17:19:26 - <Info> - Core dump setting attempted is 0
24/9/2019 -- 17:19:26 - <Info> - Core dump size set to 0
24/9/2019 -- 17:19:26 - <Config> - allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
24/9/2019 -- 17:19:26 - <Config> - preallocated 65535 defrag trackers of size 160
24/9/2019 -- 17:19:26 - <Config> - defrag memory usage: 14155616 bytes, maximum: 33554432
24/9/2019 -- 17:19:26 - <Config> - stream "prealloc-sessions": 2048 (per thread)
24/9/2019 -- 17:19:26 - <Config> - stream "memcap": 67108864
24/9/2019 -- 17:19:26 - <Config> - stream "midstream" session pickups: disabled
24/9/2019 -- 17:19:26 - <Config> - stream "async-oneside": disabled
24/9/2019 -- 17:19:26 - <Config> - stream "checksum-validation": enabled
24/9/2019 -- 17:19:26 - <Config> - stream."inline": disabled
24/9/2019 -- 17:19:26 - <Config> - stream "bypass": disabled
24/9/2019 -- 17:19:26 - <Config> - stream "max-synack-queued": 5
24/9/2019 -- 17:19:26 - <Config> - stream.reassembly "memcap": 268435456
24/9/2019 -- 17:19:26 - <Config> - stream.reassembly "depth": 1048576
24/9/2019 -- 17:19:26 - <Config> - stream.reassembly "toserver-chunk-size": 2439
24/9/2019 -- 17:19:26 - <Config> - stream.reassembly "toclient-chunk-size": 2492
24/9/2019 -- 17:19:26 - <Config> - stream.reassembly.raw: enabled
24/9/2019 -- 17:19:26 - <Config> - stream.reassembly "segment-prealloc": 2048
24/9/2019 -- 17:19:26 - <Config> - enabling 'eve-log' module 'alert'
24/9/2019 -- 17:19:26 - <Config> - enabling 'eve-log' module 'http'
24/9/2019 -- 17:19:26 - <Config> - enabling 'eve-log' module 'dns'
24/9/2019 -- 17:19:26 - <Config> - enabling 'eve-log' module 'tls'
24/9/2019 -- 17:19:26 - <Config> - enabling 'eve-log' module 'files'
24/9/2019 -- 17:19:26 - <Config> - forcing magic lookup for logged files
24/9/2019 -- 17:19:26 - <Config> - forcing sha256 calculation for logged or stored files
24/9/2019 -- 17:19:26 - <Config> - enabling 'eve-log' module 'smtp'
24/9/2019 -- 17:19:26 - <Config> - enabling 'eve-log' module 'nfs'
24/9/2019 -- 17:19:26 - <Config> - enabling 'eve-log' module 'smb'
24/9/2019 -- 17:19:26 - <Config> - enabling 'eve-log' module 'tftp'
24/9/2019 -- 17:19:26 - <Config> - enabling 'eve-log' module 'ikev2'
24/9/2019 -- 17:19:26 - <Config> - enabling 'eve-log' module 'krb5'
24/9/2019 -- 17:19:26 - <Config> - enabling 'eve-log' module 'dhcp'
24/9/2019 -- 17:19:26 - <Config> - enabling 'eve-log' module 'ssh'
24/9/2019 -- 17:19:26 - <Config> - enabling 'eve-log' module 'stats'
24/9/2019 -- 17:19:26 - <Warning> - [ERRCODE: SC_WARN_EVE_MISSING_EVENTS(318)] - eve.stats will not display all decoder events correctly. See #2225. Set a prefix in stats.decoder-events-prefix. In 5.0 the prefix will default to 'decoder.event'.
24/9/2019 -- 17:19:26 - <Config> - enabling 'eve-log' module 'flow'
24/9/2019 -- 17:19:26 - <Config> - enabling 'eve-log' module 'netflow'
24/9/2019 -- 17:19:26 - <Info> - stats output device (regular) initialized: stats.log
24/9/2019 -- 17:19:26 - <Config> - Delayed detect disabled
24/9/2019 -- 17:19:26 - <Info> - Running in live mode, activating unix socket
24/9/2019 -- 17:19:26 - <Config> - pattern matchers: MPM: hs, SPM: hs
24/9/2019 -- 17:19:26 - <Config> - grouping: tcp-whitelist (default) 53, 80, 139, 443, 445, 1433, 3306, 3389, 6666, 6667, 8080
24/9/2019 -- 17:19:26 - <Config> - grouping: udp-whitelist (default) 53, 135, 5060
24/9/2019 -- 17:19:26 - <Config> - prefilter engines: MPM
24/9/2019 -- 17:19:26 - <Info> - Loading reputation file: /etc/suricata/rules/scirius-iprep.list
24/9/2019 -- 17:19:26 - <Perf> - host memory usage: 2268688 bytes, maximum: 33554432
24/9/2019 -- 17:19:26 - <Config> - Loading rule file: /etc/suricata/rules/scirius.rules
24/9/2019 -- 17:19:33 - <Info> - 1 rule files processed. 18918 rules successfully loaded, 0 rules failed
24/9/2019 -- 17:19:33 - <Info> - Threshold config parsed: 0 rule(s) found
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for tcp-packet
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for tcp-stream
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for udp-packet
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for other-ip
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_uri
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_request_line
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_client_body
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_response_line
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_header
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_header
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_header_names
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_header_names
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_accept
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_accept_enc
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_accept_lang
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_referer
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_connection
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_content_len
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_content_len
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_content_type
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_content_type
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_protocol
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_protocol
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_start
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_start
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_raw_header
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_raw_header
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_method
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_cookie
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_cookie
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_raw_uri
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_user_agent
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_host
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_raw_host
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_stat_msg
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for http_stat_code
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for dns_query
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for tls_sni
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for tls_cert_issuer
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for tls_cert_subject
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for tls_cert_serial
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for tls_cert_fingerprint
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for ja3_hash
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for ja3_string
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for dce_stub_data
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for dce_stub_data
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for smb_named_pipe
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for smb_share
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for ssh_protocol
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for ssh_protocol
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for ssh_software
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for ssh_software
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for file_data
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for file_data
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for file_data
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for file_data
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for krb5_cname
24/9/2019 -- 17:19:33 - <Perf> - using shared mpm ctx' for krb5_sname
24/9/2019 -- 17:19:33 - <Info> - 18921 signatures processed. 10 are IP-only rules, 5044 are inspecting packet payload, 16091 inspect application layer, 0 are decoder event only
24/9/2019 -- 17:19:33 - <Config> - building signature grouping structure, stage 1: preprocessing rules... complete
24/9/2019 -- 17:19:33 - <Perf> - TCP toserver: 41 port groups, 35 unique SGH's, 6 copies
24/9/2019 -- 17:19:33 - <Perf> - TCP toclient: 21 port groups, 21 unique SGH's, 0 copies
24/9/2019 -- 17:19:33 - <Perf> - UDP toserver: 41 port groups, 35 unique SGH's, 6 copies
24/9/2019 -- 17:19:33 - <Perf> - UDP toclient: 21 port groups, 16 unique SGH's, 5 copies
24/9/2019 -- 17:19:33 - <Perf> - OTHER toserver: 254 proto groups, 3 unique SGH's, 251 copies
24/9/2019 -- 17:19:34 - <Perf> - OTHER toclient: 254 proto groups, 0 unique SGH's, 254 copies
24/9/2019 -- 17:19:40 - <Perf> - Unique rule groups: 110
24/9/2019 -- 17:19:40 - <Perf> - Builtin MPM "toserver TCP packet": 27
24/9/2019 -- 17:19:40 - <Perf> - Builtin MPM "toclient TCP packet": 20
24/9/2019 -- 17:19:40 - <Perf> - Builtin MPM "toserver TCP stream": 27
24/9/2019 -- 17:19:40 - <Perf> - Builtin MPM "toclient TCP stream": 21
24/9/2019 -- 17:19:40 - <Perf> - Builtin MPM "toserver UDP packet": 35
24/9/2019 -- 17:19:40 - <Perf> - Builtin MPM "toclient UDP packet": 15
24/9/2019 -- 17:19:40 - <Perf> - Builtin MPM "other IP packet": 2
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toserver http_uri": 12
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toserver http_request_line": 1
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toserver http_client_body": 5
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toclient http_response_line": 1
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toserver http_header": 6
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toclient http_header": 3
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toserver http_header_names": 1
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toserver http_accept": 1
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toserver http_referer": 1
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toserver http_content_len": 1
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toserver http_content_type": 1
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toclient http_content_type": 1
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toserver http_start": 1
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toserver http_raw_header": 1
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toserver http_method": 3
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toserver http_cookie": 1
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toclient http_cookie": 2
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toserver http_raw_uri": 1
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toserver http_user_agent": 4
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toserver http_host": 2
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toclient http_stat_code": 1
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toserver dns_query": 4
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toserver tls_sni": 2
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toclient tls_cert_issuer": 2
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toclient tls_cert_subject": 2
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toclient tls_cert_serial": 1
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toserver ssh_protocol": 1
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toserver file_data": 1
24/9/2019 -- 17:19:40 - <Perf> - AppLayer MPM "toclient file_data": 5
24/9/2019 -- 17:19:51 - <Perf> - 4 cores, so using 4 threads
24/9/2019 -- 17:19:51 - <Perf> - Using 4 AF_PACKET threads for interface ens1f0
24/9/2019 -- 17:19:51 - <Config> - ens1f0: enabling zero copy mode by using data release call
24/9/2019 -- 17:19:51 - <Info> - Going to use 4 thread(s)
24/9/2019 -- 17:19:51 - <Config> - using 1 flow manager threads
24/9/2019 -- 17:19:51 - <Config> - using 1 flow recycler threads
24/9/2019 -- 17:19:51 - <Info> - Running in live mode, activating unix socket
24/9/2019 -- 17:19:51 - <Info> - Using unix socket file '/var/run/suricata/suricata-command.socket'
24/9/2019 -- 17:19:51 - <Notice> - all 4 packet processing threads, 2 management threads initialized, engine started.
24/9/2019 -- 17:19:51 - <Perf> - AF_PACKET RX Ring params: block_size=32768 block_nr=26 frame_size=1584 frame_nr=520
24/9/2019 -- 17:19:51 - <Perf> - AF_PACKET RX Ring params: block_size=32768 block_nr=26 frame_size=1584 frame_nr=520
24/9/2019 -- 17:19:51 - <Perf> - AF_PACKET RX Ring params: block_size=32768 block_nr=26 frame_size=1584 frame_nr=520
24/9/2019 -- 17:19:51 - <Perf> - AF_PACKET RX Ring params: block_size=32768 block_nr=26 frame_size=1584 frame_nr=520
24/9/2019 -- 17:19:51 - <Info> - All AFP capture threads are running.
24/9/2019 -- 17:19:52 - <Notice> - Trying to connect to Redis
24/9/2019 -- 17:19:52 - <Notice> - Connected to Redis.
24/9/2019 -- 17:22:04 - <Notice> - Signal Received. Stopping engine.
24/9/2019 -- 17:22:04 - <Perf> - 0 new flows, 0 established flows were timed out, 0 flows in closed state
24/9/2019 -- 17:22:04 - <Info> - time elapsed 133.340s
24/9/2019 -- 17:22:04 - <Perf> - 302 flows processed
24/9/2019 -- 17:22:04 - <Perf> - (W#01-ens1f0) Kernel: Packets 8525, dropped 0
24/9/2019 -- 17:22:04 - <Perf> - (W#02-ens1f0) Kernel: Packets 4610, dropped 0
24/9/2019 -- 17:22:04 - <Perf> - (W#03-ens1f0) Kernel: Packets 3089, dropped 0
24/9/2019 -- 17:22:04 - <Perf> - (W#04-ens1f0) Kernel: Packets 29637, dropped 0
24/9/2019 -- 17:22:04 - <Info> - Alerts: 0
24/9/2019 -- 17:22:04 - <Info> - QUIT Command sent to redis. Connection will terminate!
24/9/2019 -- 17:22:04 - <Info> - Missing reply from redis, disconnected.
24/9/2019 -- 17:22:04 - <Info> - Disconnecting from redis!
24/9/2019 -- 17:22:04 - <Perf> - ippair memory usage: 414144 bytes, maximum: 16777216
24/9/2019 -- 17:22:05 - <Perf> - host memory usage: 2268688 bytes, maximum: 33554432
24/9/2019 -- 17:22:05 - <Info> - cleaning up signature grouping structure... complete
24/9/2019 -- 17:22:05 - <Notice> - Stats for 'ens1f0': pkts: 45861, drop: 0 (0.00%), invalid chksum: 0
24/9/2019 -- 17:22:05 - <Perf> - Cleaning up Hyperscan global scratch
24/9/2019 -- 17:22:05 - <Perf> - Clearing Hyperscan database cache

- Works fine with AF_PACKET.

I don't know whether this is a pf_ring problem or a Suricata problem.
However, it appears that some of the errors pf_ring raises cause Suricata to skip some of its configured settings (here, the Redis output).
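
For context, the Redis connection is configured under the eve-log output in suricata.yaml. The attached file has the real values; a typical 4.1.x stanza looks roughly like the following sketch (server, port, and event types here are assumptions, not copied from the attachment):

outputs:
  - eve-log:
      enabled: yes
      filetype: redis        # send EVE JSON events to Redis instead of a file
      redis:
        server: 127.0.0.1    # assumed host; see the attached suricata.yaml
        port: 6379
        mode: list           # push each event onto a Redis list
        key: suricata        # default list key
      types:
        - alert
        - http
        - dns

Whatever fails during pf_ring ZC setup seems to happen before this output's Redis connection is initialized, since the "Trying to connect to Redis" notice only appears in the AF_PACKET run.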


Files

suricata.yaml (72.9 KB), added by KH NAM, 09/25/2019 12:12 AM
#1

Updated by Andreas Herz over 4 years ago

  • Status changed from New to Feedback
  • Assignee set to KH NAM
  • Target version set to TBD

Can you provide us with the config you are using?

#2

Updated by KH NAM over 4 years ago

Andreas Herz wrote:

Can you provide us with the config you are using?

Sure.

I attached suricata.yaml.

#3

Updated by Andreas Herz about 2 years ago

  • Status changed from Feedback to Closed

Hi, we're closing this issue since there have been no further responses.
If you think this issue is still relevant, try to test it again with the
most recent version of suricata and reopen the issue. If you want to
improve the bug report please take a look at
https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Reporting_Bugs
