Bug #2358
Inconsistent DNS/flows extracted from pcap
Status: Open
Description
ISSUE DESCRIPTION
Analyzing the same pcap several times over the UNIX socket within a single Suricata run does not yield the same results for DNS events and flows.
Tested with the latest stable release (v4.0.3) and the latest GitHub commit 6f0794c16f6adaa3e8a79553a8fcc81aadeed9c7 (dated 2017/Dec/11).
Platform is Debian Jessie x64
$ uname -a
Linux mad-fra-028 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux
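For reference, the pcap was fed to the engine over the unix socket; the exact client invocation isn't shown in this report, but with the bundled suricatasc client it would look something like the following (the /tmp/fred/output directory is only a placeholder, the pcap path is the one from the log below):
# submit the pcap for processing over the default unix socket
# (/tmp/fred/output is a hypothetical output directory)
suricatasc -c "pcap-file /tmp/fred/domtest.pcap /tmp/fred/output"
# queue the very same pcap again within the same Suricata run
suricatasc -c "pcap-file /tmp/fred/domtest.pcap /tmp/fred/output"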
SURICATA LOG
# suricata -vv --unix-socket -c /usr/local/etc/suricata/test-dns.yaml
13/12/2017 -- 16:47:31 - <Notice> - This is Suricata version 4.0.3 RELEASE
13/12/2017 -- 16:47:31 - <Info> - CPUs/cores online: 40
13/12/2017 -- 16:47:31 - <Config> - 'default' server has 'request-body-minimal-inspect-size' set to 31964 and 'request-body-inspect-window' set to 3993 after randomization.
13/12/2017 -- 16:47:31 - <Config> - 'default' server has 'response-body-minimal-inspect-size' set to 40675 and 'response-body-inspect-window' set to 16299 after randomization.
13/12/2017 -- 16:47:31 - <Config> - DNS request flood protection level: 500
13/12/2017 -- 16:47:31 - <Config> - DNS per flow memcap (state-memcap): 524288
13/12/2017 -- 16:47:31 - <Config> - DNS global memcap: 16777216
13/12/2017 -- 16:47:31 - <Config> - Protocol detection and parser disabled for modbus protocol.
13/12/2017 -- 16:47:31 - <Config> - Protocol detection and parser disabled for enip protocol.
13/12/2017 -- 16:47:31 - <Config> - Protocol detection and parser disabled for DNP3.
13/12/2017 -- 16:47:31 - <Config> - allocated 262144 bytes of memory for the host hash... 4096 buckets of size 64
13/12/2017 -- 16:47:31 - <Config> - preallocated 1000 hosts of size 136
13/12/2017 -- 16:47:31 - <Config> - host memory usage: 398144 bytes, maximum: 33554432
13/12/2017 -- 16:47:31 - <Info> - Max dump is 0
13/12/2017 -- 16:47:31 - <Info> - Core dump setting attempted is 0
13/12/2017 -- 16:47:31 - <Info> - Core dump size set to 0
13/12/2017 -- 16:47:31 - <Config> - Delayed detect disabled
13/12/2017 -- 16:47:31 - <Config> - pattern matchers: MPM: ac, SPM: bm
13/12/2017 -- 16:47:31 - <Config> - grouping: tcp-whitelist (default) 53, 80, 139, 443, 445, 1433, 3306, 3389, 6666, 6667, 8080
13/12/2017 -- 16:47:31 - <Config> - grouping: udp-whitelist (default) 53, 135, 5060
13/12/2017 -- 16:47:31 - <Config> - prefilter engines: MPM
13/12/2017 -- 16:47:31 - <Config> - IP reputation disabled
13/12/2017 -- 16:47:31 - <Config> - Loading rule file: /usr/local/etc/suricata/rules/etpro-all.rules
13/12/2017 -- 16:47:38 - <Info> - 1 rule files processed. 19043 rules successfully loaded, 0 rules failed
13/12/2017 -- 16:47:38 - <Info> - Threshold config parsed: 0 rule(s) found
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for tcp-packet
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for tcp-stream
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for udp-packet
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for other-ip
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_uri
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_request_line
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_client_body
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_response_line
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_header
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_header
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_header_names
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_header_names
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_accept
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_accept_enc
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_accept_lang
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_referer
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_connection
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_content_len
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_content_len
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_content_type
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_content_type
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_protocol
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_protocol
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_start
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_start
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_raw_header
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_raw_header
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_method
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_cookie
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_cookie
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_raw_uri
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_user_agent
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_host
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_raw_host
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_stat_msg
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for http_stat_code
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for dns_query
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for tls_sni
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for tls_cert_issuer
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for tls_cert_subject
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for tls_cert_serial
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for dce_stub_data
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for dce_stub_data
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for ssh_protocol
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for ssh_protocol
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for ssh_software
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for ssh_software
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for file_data
13/12/2017 -- 16:47:38 - <Perf> - using shared mpm ctx' for file_data
13/12/2017 -- 16:47:38 - <Info> - 19044 signatures processed. 437 are IP-only rules, 9917 are inspecting packet payload, 11102 inspect application layer, 0 are decoder event only
13/12/2017 -- 16:47:38 - <Config> - building signature grouping structure, stage 1: preprocessing rules... complete
13/12/2017 -- 16:47:38 - <Perf> - TCP toserver: 41 port groups, 39 unique SGH's, 2 copies
13/12/2017 -- 16:47:38 - <Perf> - TCP toclient: 21 port groups, 18 unique SGH's, 3 copies
13/12/2017 -- 16:47:38 - <Perf> - UDP toserver: 41 port groups, 30 unique SGH's, 11 copies
13/12/2017 -- 16:47:38 - <Perf> - UDP toclient: 21 port groups, 13 unique SGH's, 8 copies
13/12/2017 -- 16:47:38 - <Perf> - OTHER toserver: 254 proto groups, 3 unique SGH's, 251 copies
13/12/2017 -- 16:47:38 - <Perf> - OTHER toclient: 254 proto groups, 0 unique SGH's, 254 copies
13/12/2017 -- 16:47:40 - <Perf> - Unique rule groups: 103
13/12/2017 -- 16:47:40 - <Perf> - Builtin MPM "toserver TCP packet": 32
13/12/2017 -- 16:47:40 - <Perf> - Builtin MPM "toclient TCP packet": 14
13/12/2017 -- 16:47:40 - <Perf> - Builtin MPM "toserver TCP stream": 31
13/12/2017 -- 16:47:40 - <Perf> - Builtin MPM "toclient TCP stream": 15
13/12/2017 -- 16:47:40 - <Perf> - Builtin MPM "toserver UDP packet": 30
13/12/2017 -- 16:47:40 - <Perf> - Builtin MPM "toclient UDP packet": 13
13/12/2017 -- 16:47:40 - <Perf> - Builtin MPM "other IP packet": 3
13/12/2017 -- 16:47:40 - <Perf> - AppLayer MPM "toserver http_uri": 11
13/12/2017 -- 16:47:40 - <Perf> - AppLayer MPM "toserver http_client_body": 4
13/12/2017 -- 16:47:40 - <Perf> - AppLayer MPM "toserver http_header": 5
13/12/2017 -- 16:47:40 - <Perf> - AppLayer MPM "toclient http_header": 3
13/12/2017 -- 16:47:40 - <Perf> - AppLayer MPM "toserver http_raw_header": 1
13/12/2017 -- 16:47:40 - <Perf> - AppLayer MPM "toserver http_method": 1
13/12/2017 -- 16:47:40 - <Perf> - AppLayer MPM "toserver http_cookie": 1
13/12/2017 -- 16:47:40 - <Perf> - AppLayer MPM "toclient http_cookie": 2
13/12/2017 -- 16:47:40 - <Perf> - AppLayer MPM "toserver http_raw_uri": 1
13/12/2017 -- 16:47:40 - <Perf> - AppLayer MPM "toserver http_user_agent": 2
13/12/2017 -- 16:47:40 - <Perf> - AppLayer MPM "toclient http_stat_code": 1
13/12/2017 -- 16:47:40 - <Perf> - AppLayer MPM "toclient file_data": 5
13/12/2017 -- 16:47:42 - <Config> - AutoFP mode using "Hash" flow load balancer
13/12/2017 -- 16:47:42 - <Info> - Using unix socket file '/usr/local/var/run/suricata//suricata.socket'
13/12/2017 -- 16:47:42 - <Notice> - all 0 packet processing threads, 0 management threads initialized, engine started.
13/12/2017 -- 16:48:00 - <Info> - Added file '/tmp/fred/domtest.pcap' to list
13/12/2017 -- 16:48:00 - <Info> - Starting run for '/tmp/fred/domtest.pcap'
13/12/2017 -- 16:48:00 - <Info> - pcap-file.tenant-id not set
13/12/2017 -- 16:48:00 - <Config> - allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
13/12/2017 -- 16:48:00 - <Config> - preallocated 65535 defrag trackers of size 168
13/12/2017 -- 16:48:00 - <Config> - defrag memory usage: 14679896 bytes, maximum: 33554432
13/12/2017 -- 16:48:00 - <Config> - stream "prealloc-sessions": 2048 (per thread)
13/12/2017 -- 16:48:00 - <Config> - stream "memcap": 67108864
13/12/2017 -- 16:48:00 - <Config> - stream "midstream" session pickups: disabled
13/12/2017 -- 16:48:00 - <Config> - stream "async-oneside": disabled
13/12/2017 -- 16:48:00 - <Config> - stream "checksum-validation": enabled
13/12/2017 -- 16:48:00 - <Config> - stream."inline": disabled
13/12/2017 -- 16:48:00 - <Config> - stream "bypass": disabled
13/12/2017 -- 16:48:00 - <Config> - stream "max-synack-queued": 5
13/12/2017 -- 16:48:00 - <Config> - stream.reassembly "memcap": 268435456
13/12/2017 -- 16:48:00 - <Config> - stream.reassembly "depth": 0
13/12/2017 -- 16:48:00 - <Config> - stream.reassembly "toserver-chunk-size": 2669
13/12/2017 -- 16:48:00 - <Config> - stream.reassembly "toclient-chunk-size": 2639
13/12/2017 -- 16:48:00 - <Config> - stream.reassembly.raw: enabled
13/12/2017 -- 16:48:00 - <Config> - stream.reassembly "segment-prealloc": 2048
13/12/2017 -- 16:48:00 - <Info> - eve-log output device (regular) initialized: eve.json
13/12/2017 -- 16:48:00 - <Config> - enabling 'eve-log' module 'dns'
13/12/2017 -- 16:48:00 - <Config> - AutoFP mode using "Hash" flow load balancer
13/12/2017 -- 16:48:00 - <Info> - reading pcap file /tmp/fred/domtest.pcap
13/12/2017 -- 16:48:01 - <Config> - using 1 flow manager threads
13/12/2017 -- 16:48:01 - <Config> - using 1 flow recycler threads
13/12/2017 -- 16:48:01 - <Notice> - all 41 packet processing threads, 2 management threads initialized, engine started.
13/12/2017 -- 16:48:01 - <Info> - pcap file end of file reached (pcap err code 0)
13/12/2017 -- 16:48:02 - <Perf> - 0 new flows, 0 established flows were timed out, 0 flows in closed state
13/12/2017 -- 16:48:02 - <Perf> - 2982 flows processed
13/12/2017 -- 16:48:02 - <Notice> - Pcap-file module read 6691 packets, 676046 bytes
13/12/2017 -- 16:48:02 - <Perf> - AutoFP - Total flow handler queues - 40
13/12/2017 -- 16:48:02 - <Info> - Alerts: 0
13/12/2017 -- 16:48:02 - <Perf> - ippair memory usage: 398144 bytes, maximum: 16777216
13/12/2017 -- 16:50:09 - <Info> - Added file '/tmp/fred/domtest.pcap' to list
13/12/2017 -- 16:50:09 - <Info> - Starting run for '/tmp/fred/domtest.pcap'
13/12/2017 -- 16:50:09 - <Info> - pcap-file.tenant-id not set
13/12/2017 -- 16:50:09 - <Config> - allocated 3670016 bytes of memory for the defrag hash... 65536 buckets of size 56
13/12/2017 -- 16:50:09 - <Config> - preallocated 65535 defrag trackers of size 168
13/12/2017 -- 16:50:09 - <Config> - defrag memory usage: 14679896 bytes, maximum: 33554432
13/12/2017 -- 16:50:09 - <Config> - stream "prealloc-sessions": 2048 (per thread)
13/12/2017 -- 16:50:09 - <Config> - stream "memcap": 67108864
13/12/2017 -- 16:50:09 - <Config> - stream "midstream" session pickups: disabled
13/12/2017 -- 16:50:09 - <Config> - stream "async-oneside": disabled
13/12/2017 -- 16:50:09 - <Config> - stream "checksum-validation": enabled
13/12/2017 -- 16:50:09 - <Config> - stream."inline": disabled
13/12/2017 -- 16:50:09 - <Config> - stream "bypass": disabled
13/12/2017 -- 16:50:09 - <Config> - stream "max-synack-queued": 5
13/12/2017 -- 16:50:09 - <Config> - stream.reassembly "memcap": 268435456
13/12/2017 -- 16:50:09 - <Config> - stream.reassembly "depth": 0
13/12/2017 -- 16:50:09 - <Config> - stream.reassembly "toserver-chunk-size": 2676
13/12/2017 -- 16:50:09 - <Config> - stream.reassembly "toclient-chunk-size": 2606
13/12/2017 -- 16:50:09 - <Config> - stream.reassembly.raw: enabled
13/12/2017 -- 16:50:09 - <Config> - stream.reassembly "segment-prealloc": 2048
13/12/2017 -- 16:50:09 - <Info> - eve-log output device (regular) initialized: eve.json
13/12/2017 -- 16:50:09 - <Config> - enabling 'eve-log' module 'dns'
13/12/2017 -- 16:50:09 - <Config> - AutoFP mode using "Hash" flow load balancer
13/12/2017 -- 16:50:09 - <Info> - reading pcap file /tmp/fred/domtest.pcap
13/12/2017 -- 16:50:09 - <Config> - using 1 flow manager threads
13/12/2017 -- 16:50:09 - <Config> - using 1 flow recycler threads
13/12/2017 -- 16:50:09 - <Notice> - all 41 packet processing threads, 2 management threads initialized, engine started.
13/12/2017 -- 16:50:09 - <Info> - pcap file end of file reached (pcap err code 0)
13/12/2017 -- 16:50:10 - <Perf> - 0 new flows, 0 established flows were timed out, 0 flows in closed state
13/12/2017 -- 16:50:10 - <Perf> - 3007 flows processed
13/12/2017 -- 16:50:10 - <Notice> - Pcap-file module read 6691 packets, 676046 bytes
13/12/2017 -- 16:50:10 - <Perf> - AutoFP - Total flow handler queues - 40
13/12/2017 -- 16:50:10 - <Info> - Alerts: 0
13/12/2017 -- 16:50:10 - <Perf> - ippair memory usage: 398144 bytes, maximum: 16777216
^C13/12/2017 -- 16:57:42 - <Notice> - Signal Received. Stopping engine.
13/12/2017 -- 16:57:42 - <Perf> - host memory usage: 398144 bytes, maximum: 33554432
13/12/2017 -- 16:57:43 - <Info> - cleaning up signature grouping structure... complete
ATTACHED FILES DESCRIPTION
- domtest.pcap: PCAP file. Handle with care, it was generated by malware!
- test-dns.yaml: Suricata configuration file
- eve-1.json: EVE JSON file generated by the first run (the "2982 flows processed" line in the Suricata log above)
- eve-2.json: EVE JSON file generated by the second run (the "3007 flows processed" line in the Suricata log above); a quick way to compare the two files is sketched below
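The following is only a sketch, assuming the default line-delimited EVE JSON output and the v1 DNS schema (dns.type / dns.rrname fields); it extracts and counts the queried names per run and diffs them:
# list each queried name with its occurrence count, per run
jq -r 'select(.event_type == "dns" and .dns.type == "query") | .dns.rrname' eve-1.json | sort | uniq -c > run1.txt
jq -r 'select(.event_type == "dns" and .dns.type == "query") | .dns.rrname' eve-2.json | sort | uniq -c > run2.txt
# show which queries (or their counts) differ between the two runs
diff run1.txt run2.txt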
Updated by Andreas Herz about 7 years ago
- Assignee set to OISF Dev
- Target version set to TBD
Updated by Fanny Dwargee almost 7 years ago
I'm guessing this confirms the bug... right?
Updated by Peter Manev almost 7 years ago
Would it be the same if you read the pcap with "-r -k none" several times?
Updated by Fanny Dwargee over 6 years ago
Peter Manev wrote:
Would it be the same if you read the pcap with "-r -k none" several times?
Sorry for the very long delay...
No. I ran it 6 times and it always gave 2982 flows processed.
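For reference, the one-shot runs looked something like the following (the exact command line isn't in this ticket, so treat the log directory /tmp/fred/output as a hypothetical placeholder):
# read the pcap offline (-r), skip checksum validation (-k none), write output to the -l directory
suricata -c /usr/local/etc/suricata/test-dns.yaml -r /tmp/fred/domtest.pcap -k none -l /tmp/fred/output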
I'll test with the latest 4.1.0-rc1 asap...
Updated by Fanny Dwargee over 6 years ago
- File eve-run1.json eve-run1.json added
- File eve-run2.json eve-run2.json added
Ok, here it goes...
Retested with Suricata v4.1.0-rc1 and the bug persists:
$ sudo /usr/local/src/suricata-4.1.0-rc1/src/suricata -V
This is Suricata version 4.1.0-dev
$ sudo /usr/local/src/suricata-4.1.0-rc1/src/suricata -vv --unix-socket -c /tmp/suricata-flow-count-bug/test-dns.yaml
[...]
[30593] 23/8/2018 -- 12:50:27 - (source-pcap-file.c:241) <Info> (ReceivePcapFileThreadInit) -- Checking file or directory /tmp/suricata-flow-count-bug/domtest.pcap
[30593] 23/8/2018 -- 12:50:27 - (source-pcap-file-directory-helper.c:212) <Info> (PcapDetermineDirectoryOrFile) -- /tmp/suricata-flow-count-bug/domtest.pcap: Plain file, not a directory
[30593] 23/8/2018 -- 12:50:27 - (source-pcap-file.c:249) <Info> (ReceivePcapFileThreadInit) -- Argument /tmp/suricata-flow-count-bug/domtest.pcap was a file
[29725] 23/8/2018 -- 12:50:28 - (flow-manager.c:819) <Config> (FlowManagerThreadSpawn) -- using 1 flow manager threads
[29725] 23/8/2018 -- 12:50:28 - (flow-manager.c:980) <Config> (FlowRecyclerThreadSpawn) -- using 1 flow recycler threads
[29725] 23/8/2018 -- 12:50:28 - (tm-threads.c:2172) <Notice> (TmThreadWaitOnThreadInit) -- all 41 packet processing threads, 2 management threads initialized, engine started.
[29725] 23/8/2018 -- 12:50:28 - (runmode-unix-socket.c:579) <Info> (UnixSocketPcapFilesCheck) -- Starting run for '/tmp/suricata-flow-count-bug/domtest.pcap'
[30593] 23/8/2018 -- 12:50:28 - (source-pcap-file.c:167) <Info> (ReceivePcapFileLoop) -- Starting file run for /tmp/suricata-flow-count-bug/domtest.pcap
[30593] 23/8/2018 -- 12:50:28 - (source-pcap-file-helper.c:149) <Info> (PcapFileDispatch) -- pcap file /tmp/suricata-flow-count-bug/domtest.pcap end of file reached (pcap err code 0)
[30593] 23/8/2018 -- 12:50:28 - (runmode-unix-socket.c:608) <Info> (UnixSocketPcapFile) -- Marking current task as done
[29725] 23/8/2018 -- 12:50:28 - (runmode-unix-socket.c:477) <Info> (UnixSocketPcapFilesCheck) -- Resetting engine state
[30635] 23/8/2018 -- 12:50:28 - (flow-manager.c:798) <Perf> (FlowManager) -- 0 new flows, 0 established flows were timed out, 0 flows in closed state
[30636] 23/8/2018 -- 12:50:29 - (flow-manager.c:949) <Perf> (FlowRecycler) -- 2982 flows processed
[30593] 23/8/2018 -- 12:50:29 - (source-pcap-file.c:383) <Notice> (ReceivePcapFileThreadExitStats) -- Pcap-file module read 1 files, 6691 packets, 676046 bytes
[...]
[30743] 23/8/2018 -- 12:51:42 - (source-pcap-file.c:241) <Info> (ReceivePcapFileThreadInit) -- Checking file or directory /tmp/suricata-flow-count-bug/domtest.pcap
[30743] 23/8/2018 -- 12:51:42 - (source-pcap-file-directory-helper.c:212) <Info> (PcapDetermineDirectoryOrFile) -- /tmp/suricata-flow-count-bug/domtest.pcap: Plain file, not a directory
[30743] 23/8/2018 -- 12:51:42 - (source-pcap-file.c:249) <Info> (ReceivePcapFileThreadInit) -- Argument /tmp/suricata-flow-count-bug/domtest.pcap was a file
[29725] 23/8/2018 -- 12:51:43 - (flow-manager.c:819) <Config> (FlowManagerThreadSpawn) -- using 1 flow manager threads
[29725] 23/8/2018 -- 12:51:43 - (flow-manager.c:980) <Config> (FlowRecyclerThreadSpawn) -- using 1 flow recycler threads
[29725] 23/8/2018 -- 12:51:43 - (tm-threads.c:2172) <Notice> (TmThreadWaitOnThreadInit) -- all 41 packet processing threads, 2 management threads initialized, engine started.
[29725] 23/8/2018 -- 12:51:43 - (runmode-unix-socket.c:579) <Info> (UnixSocketPcapFilesCheck) -- Starting run for '/tmp/suricata-flow-count-bug/domtest.pcap'
[30743] 23/8/2018 -- 12:51:43 - (source-pcap-file.c:167) <Info> (ReceivePcapFileLoop) -- Starting file run for /tmp/suricata-flow-count-bug/domtest.pcap
[30743] 23/8/2018 -- 12:51:43 - (source-pcap-file-helper.c:149) <Info> (PcapFileDispatch) -- pcap file /tmp/suricata-flow-count-bug/domtest.pcap end of file reached (pcap err code 0)
[30743] 23/8/2018 -- 12:51:43 - (runmode-unix-socket.c:608) <Info> (UnixSocketPcapFile) -- Marking current task as done
[29725] 23/8/2018 -- 12:51:43 - (runmode-unix-socket.c:477) <Info> (UnixSocketPcapFilesCheck) -- Resetting engine state
[30784] 23/8/2018 -- 12:51:43 - (flow-manager.c:798) <Perf> (FlowManager) -- 0 new flows, 0 established flows were timed out, 0 flows in closed state
[30785] 23/8/2018 -- 12:51:43 - (flow-manager.c:949) <Perf> (FlowRecycler) -- 3010 flows processed
[30743] 23/8/2018 -- 12:51:43 - (source-pcap-file.c:383) <Notice> (ReceivePcapFileThreadExitStats) -- Pcap-file module read 1 files, 6691 packets, 676046 bytes
ATTACHED FILES
Find attached the EVE json files generated by both runs.
Does this qualify as a bug or not?
Updated by Peter Manev over 6 years ago
I have similar results. Using latest gitmaster - 4.1.0-dev (rev 1f4cd75f).
The processed flow count varies with the unix socket, but not if you read the pcap once with "-r": that always yields 2982 flows and 6691 DNS events, whereas the numbers vary slightly with unix socket processing. I think those should be consistent.
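For anyone reproducing this, the per-run DNS event count can be checked directly from the EVE files; a sketch, assuming the default one-JSON-object-per-line EVE output (here against the attached eve-run1.json / eve-run2.json):
# count DNS events produced by each unix socket run
jq -c 'select(.event_type == "dns")' eve-run1.json | wc -l
jq -c 'select(.event_type == "dns")' eve-run2.json | wc -l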
Updated by Fanny Dwargee over 6 years ago
Peter, do you need anything else from me?
Updated by Peter Manev over 6 years ago
No, I think we have everything we need. Thank you.
Updated by Fanny Dwargee almost 6 years ago
Hi again,
Do you have any info on which release or version this issue might be fixed in, or is it perhaps not acknowledged as a bug?
Thanks in advance
Updated by Peter Manev almost 6 years ago
I just double checked - the issue is still present with 5.0.0-dev (rev 2bd23bc1)