Open Information Security Foundation: Issues
https://redmine.openinfosecfoundation.org/ (retrieved 2021-02-12T17:38:50Z)
Suricata - Support #4327 (Closed): Packet loss and high TCP reassembly gaps with upgrade to 5.x
https://redmine.openinfosecfoundation.org/issues/4327 (2021-02-12, Eric Urban)
<p><strong>Summary</strong><br />We experience periods of packet loss at times when using Suricata 5.0.5 that we do not see in a 4.1.8 instance with the same traffic, hardware (on a separate host), and config. We had a previous case open in <a class="issue tracker-3 status-5 priority-4 priority-default closed" title="Support: Signficant packet loss when using Suricata with Rust enabled (Closed)" href="https://redmine.openinfosecfoundation.org/issues/3320">#3320</a> where setting a stream-depth of 1mb on the SMB parser improved the situation, but we still experience the issue. The stats.tcp.reassembly_gap_delta value is also often much higher on the 5.0.5 version, especially during these times of high dropped packets. Finally, it may not be related or significant, but I have also noticed that stats.tcp.pkt_on_wrong_thread grows slowly on our 5.0.5 version (currently at 42) but has mostly been at 0 for our 4.1.8 instance.</p>
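For reference, the SMB stream-depth workaround from #3320 can be expressed in suricata.yaml roughly as follows (a sketch; the exact placement of the keys should be checked against your own config):

```yaml
app-layer:
  protocols:
    smb:
      enabled: yes
      # Limit SMB stream reassembly depth; the value used in #3320 was 1mb.
      stream-depth: 1mb
```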
<p><strong>Details</strong></p>
<p>The current comparison is not being done on our production sensors but on lab boxes where I can make changes if needed. The 4.1.8 version in this case does not have Rust enabled. I have run the Rust-enabled 4.1.8 version side by side with the 5.0.5 instance, and there are still situations where the 4.1.8 with Rust has no drops but the 5.0.5 version does. However, the 4.1.8 version with Rust does seem to generally have more packet loss.</p>
<p>I will attach stats logs from two separate occasions where significant drops occurred on our 5.0.5 instance but not on the 4.1.8 instance. Note that our packet counters may have rolled over: if you follow the deltas, neither host has had even close to a noticeable percentage of packets dropped long term, save these bursts on the 5.0.5 instance and the occasional drops on both the 4.1.8 and 5.0.5 instances at the same time. Note also that the data is a few weeks old now, as I was pulled away from this issue to work on something else, but I can get more current data if needed.</p>
<p>One example: at 2021-02-01 22:48:16 our 5.0.5 host had 20,501,550 dropped packets while our 4.1.8 host had 0. In the minute surrounding this time, the 5.0.5 host dropped several million packets and the 4.1.8 host dropped none. The strange thing is there also appears to be a burst in the number of packets received on the 5.0.5 host; if you subtract that difference, the packet counts on the two hosts are closer, though the 5.0.5 host still has a much lower number of packets that were not dropped, so the gap is still quite significant. stats.tcp.reassembly_gap_delta peaks at 60,844 on the 5.0.5 instance at 2021-02-01 22:49:56, while the 4.1.8 instance has 0 at this time and in the surrounding period.</p>
<p>Another example: at 2021-02-01 08:28:02 there were 9,885,141 drops on 5.0.5 while 4.1.8 had 0. At the same time the TCP reassembly gap delta was 3,862 on 5.0.5 and under 10 on 4.1.8.</p>
<p>We do have eve logs with deltas enabled on stats, so if you would prefer those logs, let me know. That is what we typically use for comparisons, but to avoid sending over the alert data included in our eve logs, I am including the stats logs and am hoping you have tools to read these.</p>
<p>I can also provide our config outside of Redmine.</p>
<p>Some additional info that applies to both 4.1.8 and 5.0.5 instances:<br />- CentOS Linux release 7.9.2009 (Core) / 3.10.0-1160.11.1.el7.x86_64 #1 SMP Fri Dec 18 16:34:56 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux<br />- 128GB memory<br />- (lscpu info) Model name: Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz, CPU(s): 40<br />- Pcap capture method (using the --pcap command-line option) with workers runmode<br />- Myricom cards:<br />ProductCode Driver Version<br />10G-PCIE2-8C2-2S myri_snf 3.0.23.50919</p>

Suricata - Support #3320 (Closed): Significant packet loss when using Suricata with Rust enabled
https://redmine.openinfosecfoundation.org/issues/3320 (2019-11-05, Eric Urban)
<p><strong>Summary</strong><br />We experience significant packet loss at times with Rust enabled in Suricata. In our environment we have two instances with the same version, same configuration file, same rules loaded, and same traffic, where the one without Rust enabled has little to no packet loss and the one with Rust enabled experiences packet loss. Disabling Rust on the host with packet loss has been shown to correct the issue.</p>
<p><strong>Details</strong></p>
<p>Currently we are running two instances on 4.1.5 side by side with the same configuration, rules loaded, and traffic. In both cases Suricata was compiled with the options "HAVE_PYTHON=/usr/bin/python3 ./configure --with-libpcap=/opt/snf --localstatedir=/var/ --with-libhs-includes=/usr/local/include/hs/ --with-libhs-libraries=/usr/local/lib64/", but one had Rust/Cargo present during compilation and the other didn't. We also have a 5.0.0 instance, where Rust is required and enabled by default, with the same config/rules/traffic, that experiences drops as well. This same behavior was also seen on 4.1.2, where we did a side-by-side comparison of using Rust vs. not using it.</p>
<p>Our current comparison setup is unfortunately being done on hosts with different hardware. However, we did run this comparison on identical hardware back when using 4.1.2 and had the same results: Rust being enabled produced many more drops. I also believe both hosts in our current test setup are more than adequately sized. The Rust-enabled host has 40 cores with 128GB memory and 1 instance of Suricata. The non-Rust host has 88 cores with 256GB memory and 4 instances of Suricata, though only one of the four instances is receiving the traffic mirroring that of our Rust-enabled instance.</p>
<p>The Suricata stats show drops and so do our Myricom stats. It appears there could be a counter issue of some kind because the number of packets during these periods of large drops also increases significantly. When I compared packets received minus packets dropped across these two hosts, the Rust enabled instance still had noticeably fewer total packets in most cases, so it would seem something else is going on. One example difference of the sum of stats.capture.kernel_packets_delta and stats.capture.kernel_drops_delta on Nov 4 over the minute of 11:06 is the Rust instance had 1,440,535 packets vs. 8,237,600 without Rust.</p>
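The per-interval comparison described above (summing kernel_packets_delta and kernel_drops_delta across a window of eve stats records) can be sketched as follows. This is an illustrative helper, not a tool from the ticket; the field paths follow the eve.json stats format and the sample values are synthetic.

```python
# Sum kernel_packets_delta + kernel_drops_delta over a set of eve.json
# stats lines to get the total packets seen (received + dropped) in a window.
import json

def total_seen(stats_lines):
    total = 0
    for line in stats_lines:
        ev = json.loads(line)
        cap = ev.get("stats", {}).get("capture", {})
        total += cap.get("kernel_packets_delta", 0)
        total += cap.get("kernel_drops_delta", 0)
    return total

# Two synthetic stats records standing in for one comparison window:
sample = [
    '{"stats": {"capture": {"kernel_packets_delta": 1000, "kernel_drops_delta": 50}}}',
    '{"stats": {"capture": {"kernel_packets_delta": 900, "kernel_drops_delta": 0}}}',
]
print(total_seen(sample))  # 1950
```

Running this over the same minute on both hosts gives the kind of 1,440,535 vs. 8,237,600 comparison quoted above.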
<p>During the periods of drops, the Rust-enabled instance has fewer alerts. The difference varies quite a bit depending on the time period and which period of drops is analyzed. One example: between 09:00 and 10:00 on November 4, while drops were happening, the Rust instance had 13,601 alerts and the one without Rust had 15,820. Outside of drop periods, for the times I sampled, the Rust host generally has slightly more alerts, but the difference is around 1% or less. I am guessing this small difference during normal operating periods isn't too unusual, since enabling Rust does change the traffic analyzers for some protocols.</p>
<p>I did seek help through the mailing list earlier this year in a thread starting at <a class="external" href="https://lists.openinfosecfoundation.org/pipermail/oisf-users/2019-February/016618.html">https://lists.openinfosecfoundation.org/pipermail/oisf-users/2019-February/016618.html</a>. That thread had some activity over at least a few months, but there was no resolution and it became quite long, so it may be best to avoid looking at it and start from scratch.</p>
<p>Some additional info that applies to both 4.1.5 instances:<br />- CentOS Linux release 7.7.1908 (Core) / 3.10.0-1062.1.2.el7.x86_64 #1 SMP Mon Sep 30 14:19:46 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux<br />- Pcap capture method (using --pcap command-line option) with workers runmode<br />- Myricom cards. <br />ProductCode Driver Version<br />10G-PCIE2-8C2-2S myri_snf 3.0.18.50878<br />- Rust/cargo versions:<br />Rust compiler: rustc 1.38.0<br />Rust cargo: cargo 1.38.0</p>
<p>I will attach stats from the eve logs for both hosts and also Myricom stats logs. Note that the counters ending in __per_second in the Myricom log should not be used as these are not standard. Build-info output is also included. I can provide configuration directly (not through Redmine) if requested.</p>
<p><strong>Steps to reproduce</strong><br />Not known for sure how to reproduce, other than building Suricata with Rust.</p>

Suricata - Bug #3229 (In Progress): Abnormal traffic produces unexpected alerts for traffic that ...
https://redmine.openinfosecfoundation.org/issues/3229 (2019-10-09, Eric Urban)
<p><em>Note that I am opening this as a support ticket, but it is probably a feature request. As best as I can tell, this behavior is not expected but only appears under improper network behavior, so it is likely a feature request to better handle abnormal traffic rather than a support issue. I wanted to bring it to the attention of the Suricata maintainers to get their input in case I am overlooking something, and to leave it up to them whether any action results from this scenario.</em></p>
<p><strong>Summary:</strong><br />Under abnormal network conditions (e.g. where the SYN-ACK is seen by Suricata before the SYN), alerts are sometimes generated by traffic flowing in the direction opposite to the one defined in the rule.</p>
<p><strong>Details:</strong><br />We noticed a number of alerts in our environment where the traffic direction appeared to be reversed, and the source and destination IP addresses were reversed as well. For example SID 2828913 from the Emerging Threats Pro rule set detects Kovter activity. The rule header is "http $HOME_NET any -> $EXTERNAL_NET any" so a match should occur only on packets originating from the addresses in the HOME_NET variable with matching rule content. We observed an external host sending Kovter traffic to one of our hosts yet this rule was triggering. An example alert is as follows:<br /><pre><code class="javascript syntaxhl" data-language="javascript">
<span class="p">{</span>
<span class="dl">"</span><span class="s2">timestamp</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">2019-09-25T14:16:44.037998-0500</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">flow_id</span><span class="dl">"</span><span class="p">:</span> <span class="mi">715291317194162</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">event_type</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">alert</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">src_ip</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">131.212.X.X</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">src_port</span><span class="dl">"</span><span class="p">:</span> <span class="mi">80</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">dest_ip</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">66.216.145.51</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">dest_port</span><span class="dl">"</span><span class="p">:</span> <span class="mi">33070</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">proto</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">TCP</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">tx_id</span><span class="dl">"</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">alert</span><span class="dl">"</span><span class="p">:</span> <span class="p">{</span>
<span class="dl">"</span><span class="s2">action</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">allowed</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">gid</span><span class="dl">"</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">signature_id</span><span class="dl">"</span><span class="p">:</span> <span class="mi">2828913</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">rev</span><span class="dl">"</span><span class="p">:</span> <span class="mi">5</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">signature</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">ETPRO TROJAN WIN32/KOVTER.B Checkin 2 M3</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">category</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">A Network Trojan was Detected</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">severity</span><span class="dl">"</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">metadata</span><span class="dl">"</span><span class="p">:</span> <span class="p">{</span>
<span class="dl">"</span><span class="s2">updated_at</span><span class="dl">"</span><span class="p">:</span> <span class="p">[</span>
<span class="dl">"</span><span class="s2">2018_02_16</span><span class="dl">"</span>
<span class="p">],</span>
<span class="dl">"</span><span class="s2">performance_impact</span><span class="dl">"</span><span class="p">:</span> <span class="p">[</span>
<span class="dl">"</span><span class="s2">Moderate</span><span class="dl">"</span>
<span class="p">],</span>
<span class="dl">"</span><span class="s2">malware_family</span><span class="dl">"</span><span class="p">:</span> <span class="p">[</span>
<span class="dl">"</span><span class="s2">Kovter</span><span class="dl">"</span>
<span class="p">],</span>
<span class="dl">"</span><span class="s2">created_at</span><span class="dl">"</span><span class="p">:</span> <span class="p">[</span>
<span class="dl">"</span><span class="s2">2017_12_15</span><span class="dl">"</span>
<span class="p">],</span>
<span class="dl">"</span><span class="s2">signature_severity</span><span class="dl">"</span><span class="p">:</span> <span class="p">[</span>
<span class="dl">"</span><span class="s2">Major</span><span class="dl">"</span>
<span class="p">],</span>
<span class="dl">"</span><span class="s2">deployment</span><span class="dl">"</span><span class="p">:</span> <span class="p">[</span>
<span class="dl">"</span><span class="s2">Perimeter</span><span class="dl">"</span>
<span class="p">],</span>
<span class="dl">"</span><span class="s2">attack_target</span><span class="dl">"</span><span class="p">:</span> <span class="p">[</span>
<span class="dl">"</span><span class="s2">Client_Endpoint</span><span class="dl">"</span>
<span class="p">],</span>
<span class="dl">"</span><span class="s2">affected_product</span><span class="dl">"</span><span class="p">:</span> <span class="p">[</span>
<span class="dl">"</span><span class="s2">Windows_XP_Vista_7_8_10_Server_32_64_Bit</span><span class="dl">"</span>
<span class="p">],</span>
<span class="dl">"</span><span class="s2">former_category</span><span class="dl">"</span><span class="p">:</span> <span class="p">[</span>
<span class="dl">"</span><span class="s2">TROJAN</span><span class="dl">"</span>
<span class="p">]</span>
<span class="p">}</span>
<span class="p">},</span>
<span class="dl">"</span><span class="s2">http</span><span class="dl">"</span><span class="p">:</span> <span class="p">{</span>
<span class="dl">"</span><span class="s2">hostname</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">131.212.X.X</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">url</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">/</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">http_user_agent</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">http_method</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">POST</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">protocol</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">HTTP/1.1</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">length</span><span class="dl">"</span><span class="p">:</span> <span class="mi">0</span>
<span class="p">},</span>
<span class="dl">"</span><span class="s2">app_proto</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">http</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">flow</span><span class="dl">"</span><span class="p">:</span> <span class="p">{</span>
<span class="dl">"</span><span class="s2">pkts_toserver</span><span class="dl">"</span><span class="p">:</span> <span class="mi">3</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">pkts_toclient</span><span class="dl">"</span><span class="p">:</span> <span class="mi">6</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">bytes_toserver</span><span class="dl">"</span><span class="p">:</span> <span class="mi">684</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">bytes_toclient</span><span class="dl">"</span><span class="p">:</span> <span class="mi">1007</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">start</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">2019-09-25T14:15:43.216498-0500</span><span class="dl">"</span>
<span class="p">},</span>
<span class="dl">"</span><span class="s2">payload_printable</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">POST / HTTP/1.1</span><span class="se">\r\n</span><span class="s2">Content-Type: application/x-www-form-urlencoded</span><span class="se">\r\n</span><span class="s2">User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)</span><span class="se">\r\n</span><span class="s2">Host: 131.212.X.X</span><span class="se">\r\n</span><span class="s2">Content-Length: 428</span><span class="se">\r\n</span><span class="s2">Cache-Control: no-cache</span><span class="se">\r\n\r\n</span><span class="s2">Sq1ckI+eZ1BNFIv60tuHkM3uIRR0DpByJRn7fEu9if/J9uQSyF89o1RXCp0DzJ/Fv81IpKLSqannQtCD2TB/7iML1UYpoKlwAr+oFnzmBxzfYS2N5oVBt7X+q6ddGs8MJqJjI0i/PdY8UCBeiEvv3XSmFZvosbDYXyXGVjiYrNURKVEYC7Kf2rqQda7tiSnEepvND78Ukslpgsz+9F7vgx79L/yZ1SDNhFmNuc8UsxHynEKWymnvZAglqk7exXrIFXNDmqc5if0ZB1JHJauUa4miiW7UirCNl4CARGExeEY9xjjwmLemSWzNXbdIJmruH1Cp5ZzOBWDU2lOLRNbJ/8KbC5ixIbPrAM3f7ouQ24AyYUyz66G4VoAdUAGlsAr5PIwPjIg3XkyLu8sNddmB2VLk/VOtFcD6ObLbxMK1Kp8=</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">stream</span><span class="dl">"</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">packet</span><span class="dl">"</span><span class="p">:</span> <span class="dl">"</span><span class="s2">RQAAKAAAAABABgYug9Qcw0LYkTMAUIEuCehEWy3hS5VQNwoA59IAAA==</span><span class="dl">"</span><span class="p">,</span>
<span class="dl">"</span><span class="s2">packet_info</span><span class="dl">"</span><span class="p">:</span> <span class="p">{</span>
<span class="dl">"</span><span class="s2">linktype</span><span class="dl">"</span><span class="p">:</span> <span class="mi">12</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre><br />Notice in the above alert that the source is 131.212.x.x, which we have in our HOME_NET variable, on port 80. Yet the rule is defined to match on "$HOME_NET any -> $EXTERNAL_NET any" for the packet that must contain this POST. I was able to confirm this traffic was in fact originating from the external source, which seemed likely anyway based on the ports and the Host header of the POST. I have seen other situations where I at first believed the source and destination were reversed, but after looking at the rule definitions those were explainable by the direction defined.</p>
<p>I was able to get a packet capture on one of our Suricata sensors while the above traffic was happening. Using this capture in pcap offline mode allowed me to reproduce the behavior, which required the midstream configuration to be enabled. In the capture there were a total of 149 flows with content matching this rule if direction is ignored. I confirmed this number by removing the threshold from the rule and also cloning the rule with the source and destination reversed in the header. Though the traffic matching this Kovter rule always originated from the external host, 6 packets triggered the HOME_NET -> EXTERNAL_NET rule and 143 triggered the EXTERNAL_NET -> HOME_NET rule. In all 6 cases that triggered the HOME_NET -> EXTERNAL_NET rule, the SYN-ACK occurred before the SYN in the flow (due to an issue with our taps that we plan to look into fixing). For additional investigation I used Scapy to swap only the order of the SYN and SYN-ACK packets, along with the packet times, and this caused the HOME_NET -> EXTERNAL_NET rule to no longer fire. I repeated this with Scapy twice more with similar results. However, when I used Scapy on flows with the correct ordering of SYN before SYN-ACK and swapped them so the SYN-ACK came first, it did not cause the HOME_NET -> EXTERNAL_NET rule to fire. It seems, then, that something else I could not find is also contributing to this behavior. I enabled debug logging to try to figure out what else it could be, but was unable to identify exactly what was happening.</p>
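The Scapy reordering experiment above can be sketched with a small, capture-format-agnostic helper (names and predicates here are illustrative, not from the ticket; with Scapy you would pass packets from rdpcap() and predicates such as `lambda p: TCP in p and p[TCP].flags == "S"`, then write the result with wrpcap()):

```python
# Swap the first SYN and the first SYN-ACK of a capture, exchanging their
# timestamps so that capture times remain monotonically increasing.
def swap_syn_and_synack(pkts, is_syn, is_synack, get_time, set_time):
    syn = next((i for i, p in enumerate(pkts) if is_syn(p)), None)
    sack = next((i for i, p in enumerate(pkts) if is_synack(p)), None)
    if syn is None or sack is None:
        return pkts  # no complete handshake to reorder
    t_syn, t_sack = get_time(pkts[syn]), get_time(pkts[sack])
    set_time(pkts[syn], t_sack)
    set_time(pkts[sack], t_syn)
    pkts[syn], pkts[sack] = pkts[sack], pkts[syn]
    return pkts

# Toy records standing in for packets: a flow where the SYN-ACK was seen first.
flow = [{"t": 1, "f": "SA"}, {"t": 2, "f": "S"}, {"t": 3, "f": "A"}]
fixed = swap_syn_and_synack(
    flow,
    is_syn=lambda p: p["f"] == "S",
    is_synack=lambda p: p["f"] == "SA",
    get_time=lambda p: p["t"],
    set_time=lambda p, t: p.__setitem__("t", t),
)
print(fixed)  # SYN now first at t=1, SYN-ACK second at t=2, ACK unchanged
```

The same helper covers the reverse experiment (taking a correctly ordered flow and putting the SYN-ACK first), since it simply swaps whichever of the two packets comes first.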
<p>I can provide packet captures and a configuration used to reproduce this situation but I would need to do so outside of attaching to this issue so that they are not publicly available. I also have debug logs if you are interested. I will attach to the ticket the suricata --build-info output on the VM I was using to reproduce this.</p>
<p><strong>Steps to reproduce:</strong><br />I was able to reproduce on 4.1.4 and 4.1.5.<br />1. Using a default config:<br /> a. set HOME_NET as appropriate<br /> b. disable vlan.use-for-tracking<br /> c. enable stream.midstream<br /> d. download the ETPRO rule sets to get rule with SID 2828913<br /> e. (optional) remove the threshold from 2828913 and clone the rule to have the reversed direction of EXTERNAL_NET -> HOME_NET<br />2. Run packet capture I will provide (with SYN-ACK before SYN and some other strangeness) through Suricata using "suricata -r <capturefile.pcap>". Specifically I used the following with debug enabled:<br /><pre><code class="shell syntaxhl" data-language="shell"><span class="nv">SC_LOG_LEVEL</span><span class="o">=</span>Debug suricata <span class="nt">-c</span> /etc/suricata/suricata_4_1_4.yaml <span class="nt">-k</span> none <span class="nt">--pidfile</span> suricata.pid <span class="nt">-l</span> logs <span class="nt">-r</span> capturefile.pcap <span class="o">>></span> logs/debug.out
</code></pre><br />3. Check logs to confirm whether or not alerts were generated</p>
<p><strong>Actual Results:</strong><br />The rule is defined to match on traffic originating from HOME_NET to EXTERNAL_NET. Since the traffic is confirmed to have originated from EXTERNAL_NET and per <a class="external" href="https://suricata.readthedocs.io/en/suricata-4.1.5/rules/intro.html#direction">https://suricata.readthedocs.io/en/suricata-4.1.5/rules/intro.html#direction</a>, it is my understanding this should not trigger an alert.</p>
<p><strong>Expected Results:</strong><br />Although the traffic is abnormal, since there is a SYN-ACK and a SYN that follows from the same source and destination in the flow, Suricata could possibly use some heuristics to determine the correct traffic direction.</p>

Suricata - Bug #3004 (Closed): SC_ERR_PCAP_DISPATCH with message "error code -2" upon rule reload...
https://redmine.openinfosecfoundation.org/issues/3004 (2019-05-31, Eric Urban)
<p><strong>Summary</strong><br />When doing rule reloads (using the USR2 signal), about half of the time at the completion of the reload we see the error SC_ERR_PCAP_DISPATCH with error_code 20 and the message "error code -2". I am not sure at this time if we should be concerned about this error or if it can safely be ignored.</p>
<p><strong>Details</strong><br />Upon the completion of rule reloads, about half of the time this produces a sequence of events like the following:<br /><pre>
{"timestamp":"2019-04-26T10:07:54.238852-0500","log_level":"Error","event_type":"engine","engine":{"error_code":20,"error":"SC_ERR_PCAP_DISPATCH","message":"error code -2 "}}
{"timestamp":"2019-04-26T10:07:54.664296-0500","log_level":"Info","event_type":"engine","engine":{"message":"cleaning up signature grouping structure... complete"}}
{"timestamp":"2019-04-26T10:07:54.665821-0500","log_level":"Notice","event_type":"engine","engine":{"message":"rule reload complete"}}
</pre></p>
<p>We upgraded from 4.0.6 to 4.1.3 a while back and that is believed to be when this started happening. I checked the 4.0.6 logs and did not see these messages. Upgrading from 4.1.3 to 4.1.4 did not resolve the issue.</p>
<p>We are using the pcap capture method (Myricom) with workers runmode.</p>
<p>I looked into this issue back when I submitted it to the OISF-USERS mailing list (<a class="external" href="https://lists.openinfosecfoundation.org/pipermail/oisf-users/2019-April/016850.html">https://lists.openinfosecfoundation.org/pipermail/oisf-users/2019-April/016850.html</a>) and have the following observations:<br />It seems this error comes from source-pcap.c line 269 (<a class="external" href="https://github.com/OISF/suricata/blob/7f38ffc8bcfa3bca793eb3be41f112634b48de2a/src/source-pcap.c#L269">https://github.com/OISF/suricata/blob/7f38ffc8bcfa3bca793eb3be41f112634b48de2a/src/source-pcap.c#L269</a>), since we aren't reading from a pcap file in this case and that is the main other place this error is raised.</p>
<p>There is a pcap_dispatch call above this one (line 265) and the conditional on line 267 to enter the trigger for this error checks that the return from pcap_dispatch is < 0. From <a class="external" href="https://linux.die.net/man/3/pcap_dispatch">https://linux.die.net/man/3/pcap_dispatch</a>, "-2 (is returned) if the loop terminated due to a call to pcap_breakloop() before any packets were processed". The PCAP_ERROR_BREAK (-2) code would be handled on line 272 once inside of here. There is a pcap_breakloop() call (line 226) inside PcapCallbackLoop which is called on line 266, but I believe instead this could be the result of the change for 4.1.3 in <a class="external" href="https://github.com/OISF/suricata/commit/bb26e6216e5190d841529c0ecb1292b9a358ed54#diff-2079412a59d37868318fc953aeddef52">https://github.com/OISF/suricata/commit/bb26e6216e5190d841529c0ecb1292b9a358ed54#diff-2079412a59d37868318fc953aeddef52</a> where ReceivePcapBreakLoop was created for PktAcqBreakLoop. So possibly in tm-threads.c at <a class="external" href="https://github.com/OISF/suricata/blob/d6903e70c1b653984ca95f8808755efbc6a9ece4/src/tm-threads.c#L1610">https://github.com/OISF/suricata/blob/d6903e70c1b653984ca95f8808755efbc6a9ece4/src/tm-threads.c#L1610</a>?</p>
<p>If that is how the error occurs, then I am curious if we may be losing a half second (at least) of traffic visibility due to the reconnect on line 277 of source-pcap.c?</p>
<p><strong>Steps to reproduce</strong><br />Unknown at this time, other than possibly needing to use the pcap capture method.</p>

Suricata - Feature #3002 (Closed): Flow and Netflow Not Logging ESP Traffic
https://redmine.openinfosecfoundation.org/issues/3002 (2019-05-31, Eric Urban)
<p><strong>Summary</strong><br />With both flow and netflow enabled in the eve log, I do not see any log entries for ESP traffic (IP protocol 50). If I create a rule to alert on ESP traffic, then there are alerts generated.</p>
<p><strong>Details</strong><br />I tested this running Suricata version 4.1.4 both with live ESP traffic and also in pcap offline mode. Neither produced any flow or netflow log entries in the eve log. The only protocols I found in flow/netflow after letting it run for some time were TCP, UDP, ICMP, IPv6, IPv6-ICMP, and SCTP. To confirm ESP traffic was being processed in some way by Suricata I added the following rule:<br /><pre>
alert ip any any -> any any (msg:"IP Proto 50 (ESP)"; ip_proto:50; classtype:non-standard-protocol; sid:10010002; rev:1;)
</pre><br />With the rule enabled there were alerts generated for the ESP traffic as expected.</p>
<p>I found a capture containing ESP traffic from <a class="external" href="https://wiki.wireshark.org/SampleCaptures#IPsec_-_ESP_Payload_Decryption_and_Authentication_Checking_Examples">https://wiki.wireshark.org/SampleCaptures#IPsec_-_ESP_Payload_Decryption_and_Authentication_Checking_Examples</a> to use for testing. I will attach ipsec_esp_capture_1.tgz from that page for convenience and for historical purposes in case the link or content changes.</p>
<p>I enabled debug and found that when running through a pcap only containing ESP traffic that there were messages like "7/5/2019 -- 11:40:09 - <Debug> - packet 1 has flow? no", which I believe is a message from <a class="external" href="https://github.com/OISF/suricata/blob/16643befe7bebb9736d44f3a02efdf71135a7b84/src/flow-worker.c#L199">https://github.com/OISF/suricata/blob/16643befe7bebb9736d44f3a02efdf71135a7b84/src/flow-worker.c#L199</a>. When I used a capture containing additional packets that were logged in flow/netflow I saw output like "packet x has flow? yes". This is possibly a coincidence but when inspecting output-json-flow.c and output-json-netflow.c I see that both rely on data from the flow object, so I am wondering if this is a potential source of the problem?</p>
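One quick way to confirm which protocols do (and do not) appear in the flow/netflow output is to tally the proto field across eve.json records. This is an illustrative sketch, not a tool from the ticket; the two sample lines are synthetic:

```python
# Count protocols seen in eve.json flow/netflow records; an ESP-logging
# problem shows up as "ESP" being absent from the result.
import json
from collections import Counter

def flow_proto_counts(lines):
    counts = Counter()
    for line in lines:
        ev = json.loads(line)
        if ev.get("event_type") in ("flow", "netflow"):
            counts[ev.get("proto")] += 1
    return counts

sample = [
    '{"event_type": "flow", "proto": "TCP"}',
    '{"event_type": "alert", "proto": "ESP"}',  # alerts fire, but no flow record
]
print(flow_proto_counts(sample))  # TCP counted once; no ESP flow entries
```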
<strong>Steps to reproduce</strong>
<ol>
<li>Using Suricata 4.1.4, enable outputs.eve-log.types.flow and outputs.eve-log.types.netflow in the Suricata configuration file.</li>
<li>Extract the attached ipsec_esp_capture_1.tgz file and get ipsec_esp_capture1.tgz:/t1/capture.pcap.</li>
<li>(Optional) Add the ESP rule from above to confirm that Suricata is reading this traffic as ESP.</li>
<li>Run Suricata in pcap offline mode. The command I used was:<br /><pre>
suricata -vv -c /etc/suricata/suricata.yaml --runmode autofp -k none --pidfile suricata.pid -l logging/ -r esp_capture_filtered.pcap
</pre></li>
<li>Check the log output to note that there are no flow or netflow entries for this traffic.</li>
</ol> Suricata - Optimization #2845 (Closed): Counters for kernel_packets decreases at times without re...https://redmine.openinfosecfoundation.org/issues/28452019-02-22T21:31:18ZEric Urban
<p>We have seen cases in Suricata where the stats.capture.kernel_packets counter decreases while Suricata is running. My understanding is that this is supposed to be a running counter that should not decrease unless Suricata is restarted. This behavior has been observed on 4.0.6 and 4.1.2. I am fairly confident I have seen this on 3.2.2 as well. This decrease would be more expected if the value reset or rolled over from overflow, but I don't believe that is what is happening here.</p>
<p>Below is one example from the logs I am attaching. I have many other logs I can provide if desired.</p>
<pre>
$ jq 'select(.event_type == "stats") | select(.timestamp | startswith("2019-02-22T07:55:")) | .timestamp, .stats.capture' eve.json_stats_only_08-snf3-2019022208
...
"2019-02-22T07:55:36.000327-0600"
{
"kernel_packets": 17308040184,
"kernel_packets_delta": 1039779,
"kernel_drops": 0,
"kernel_drops_delta": 0,
"kernel_ifdrops": 0,
"kernel_ifdrops_delta": 0
}
"2019-02-22T07:55:45.000335-0600"
{
"kernel_packets": 13013890235,
"kernel_packets_delta": -4294149949,
"kernel_drops": 0,
"kernel_drops_delta": 0,
"kernel_ifdrops": 0,
"kernel_ifdrops_delta": 0
}
"2019-02-22T07:55:54.000320-0600"
{
"kernel_packets": 13014866476,
"kernel_packets_delta": 976241,
"kernel_drops": 0,
"kernel_drops_delta": 0,
"kernel_ifdrops": 0,
"kernel_ifdrops_delta": 0
}
</pre>
<p>Corresponding from stats.log:</p>
<pre>
Date: 2/22/2019 -- 07:55:36 (uptime: 2d, 21h 14m 54s)
------------------------------------------------------------------------------------
Counter | TM Name | Value
------------------------------------------------------------------------------------
capture.kernel_packets | Total | 17308040184
------------------------------------------------------------------------------------
Date: 2/22/2019 -- 07:55:45 (uptime: 2d, 21h 15m 03s)
------------------------------------------------------------------------------------
Counter | TM Name | Value
------------------------------------------------------------------------------------
capture.kernel_packets | Total | 13013890235
------------------------------------------------------------------------------------
Date: 2/22/2019 -- 07:55:54 (uptime: 2d, 21h 15m 12s)
------------------------------------------------------------------------------------
Counter | TM Name | Value
------------------------------------------------------------------------------------
capture.kernel_packets | Total | 13014866476
------------------------------------------------------------------------------------
</pre>
<p>Here are more examples from other Suricata instances that don't have logs attached, but I am including for reference:<br /><pre>
"2019-02-22T15:09:00.000327-0600"
{
"kernel_packets": 15681829155,
"kernel_packets_delta": -4294025171,
"kernel_drops": 0,
"kernel_drops_delta": 0,
"kernel_ifdrops": 0,
"kernel_ifdrops_delta": 0
}
"2019-02-22T03:18:51.000325-0600"
{
"kernel_packets": 15980883154,
"kernel_packets_delta": -4293598551,
"kernel_drops": 0,
"kernel_drops_delta": 0,
"kernel_ifdrops": 0,
"kernel_ifdrops_delta": 0
}
"2019-02-19T10:22:00.000363-0600"
{
"kernel_packets": 17749102321,
"kernel_packets_delta": -4294216445,
"kernel_drops": 2227794327,
"kernel_drops_delta": 0,
"kernel_ifdrops": 0,
"kernel_ifdrops_delta": 0
}
"2019-02-19T10:17:40.000327-0600"
{
"kernel_packets": 16791755239,
"kernel_packets_delta": -4294006615,
"kernel_drops": 1280457873,
"kernel_drops_delta": 0,
"kernel_ifdrops": 0,
"kernel_ifdrops_delta": 0
}
"2019-02-19T09:30:35.000346-0600"
{
"kernel_packets": 17342905685,
"kernel_packets_delta": -4294369072,
"kernel_drops": 580833306,
"kernel_drops_delta": 0,
"kernel_ifdrops": 0,
"kernel_ifdrops_delta": 0
}
"2019-02-19T09:25:05.000338-0600"
{
"kernel_packets": 23570036423,
"kernel_packets_delta": -4293688281,
"kernel_drops": 775213362,
"kernel_drops_delta": 0,
"kernel_ifdrops": 0,
"kernel_ifdrops_delta": 0
}
"2019-02-19T08:51:53.000331-0600"
{
"kernel_packets": 12005768232,
"kernel_packets_delta": -4294159125,
"kernel_drops": 4547641950,
"kernel_drops_delta": 0,
"kernel_ifdrops": 0,
"kernel_ifdrops_delta": 0
}
"2019-02-19T08:51:03.000358-0600"
{
"kernel_packets": 22256188092,
"kernel_packets_delta": -4294023378,
"kernel_drops": 722622375,
"kernel_drops_delta": 0,
"kernel_ifdrops": 0,
"kernel_ifdrops_delta": 0
}
</pre></p>
<p>I do not see any messages in the suricata.log file during this time.</p>
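One observation about the numbers themselves (an inference on my part, not a confirmed cause): every negative kernel_packets_delta above is within roughly 2^20 of -2^32, which is the signature a 32-bit wraparound somewhere in the counter path would leave. Adding 2^32 back to each delta yields plausible per-interval packet counts:

```python
# The negative deltas observed in the stats output above. If a 32-bit
# wraparound is involved (an assumption, not a confirmed root cause),
# adding 2**32 back should recover the "true" per-interval count.
WRAP = 2**32

observed_deltas = [
    -4294149949, -4294025171, -4293598551, -4294216445, -4294006615,
    -4294369072, -4293688281, -4294159125, -4294023378,
]

for delta in observed_deltas:
    corrected = delta + WRAP
    print(f"{delta} -> {corrected}")
```

For example, -4294149949 + 2**32 = 817347, which is in the same range as the normal positive kernel_packets_delta readings (around one million per interval) seen elsewhere in the logs.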
<p>Is this behavior expected, and if not, what additional troubleshooting would you like us to perform to assist with this issue?</p> Suricata - Feature #2671 (Closed): Add Log level to suricata.log when using JSON typehttps://redmine.openinfosecfoundation.org/issues/26712018-11-12T17:44:52ZEric Urban
<p>Currently the log level (Info, Warning, Error, etc.) is missing from the suricata.log file when choosing JSON as the type.</p>
<p>Here is an example of the log output in 4.0.5:<br /><pre>
{"timestamp":"2018-11-09T10:43:51.454590-0600","event_type":"engine","engine":{"message":"This is Suricata version 4.0.5 RELEASE"}}
{"timestamp":"2018-11-09T10:43:51.454766-0600","event_type":"engine","engine":{"message":"CPUs\/cores online: 1"}}
{"timestamp":"2018-11-09T10:43:51.459482-0600","event_type":"engine","engine":{"message":"Found an MTU of 1500 for 'eth0'"}}
{"timestamp":"2018-11-09T10:43:51.459548-0600","event_type":"engine","engine":{"message":"Found an MTU of 1500 for 'eth0'"}}
{"timestamp":"2018-11-09T10:43:51.482034-0600","event_type":"engine","engine":{"message":"Running in live mode, activating unix socket"}}
</pre></p>
<p>This request is to add the log level, as this is useful when using logging for alerting purposes.</p>
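As an illustration of the alerting use case, a log_level field would make filtering trivial. A hypothetical sketch, assuming the proposed log_level field and the existing engine.message structure:

```python
import json

def engine_problems(log_lines):
    """Extract engine messages at Warning or Error level from a JSON
    suricata.log (assumes the proposed log_level field exists)."""
    wanted = {"Warning", "Error"}
    return [
        record["engine"]["message"]
        for record in map(json.loads, log_lines)
        if record.get("log_level") in wanted
    ]
```

Without the field, every line has to be matched against message text, which is far more fragile than checking a level.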
<p>An example of the desired output is:<br /><pre>
{"timestamp":"2018-11-09T12:05:27.806528-0600","log_level":"Notice","event_type":"engine","engine":{"message":"This is Suricata version 4.0.5 RELEASE"}}
{"timestamp":"2018-11-09T12:05:27.806976-0600","log_level":"Info","event_type":"engine","engine":{"message":"CPUs\/cores online: 1"}}
{"timestamp":"2018-11-09T12:05:27.812498-0600","log_level":"Info","event_type":"engine","engine":{"message":"Found an MTU of 1500 for 'eth0'"}}
{"timestamp":"2018-11-09T12:05:27.812555-0600","log_level":"Info","event_type":"engine","engine":{"message":"Found an MTU of 1500 for 'eth0'"}}
</pre></p> Suricata - Bug #2656 (Assigned): Alerts not triggered under some conditions on traffic containing...https://redmine.openinfosecfoundation.org/issues/26562018-10-31T21:18:55ZEric Urban
<p><strong>Summary:</strong></p>
<p>We encountered a situation where a single connection contains traffic that has multiple rule matches, yet there are no alerts being generated. This behavior seems to be affected by the part of a connection captured in the packet capture.</p>
<p><strong>Details:</strong></p>
<p>There was an email sent to OISF-users at <a class="external" href="https://lists.openinfosecfoundation.org/pipermail/oisf-users/2018-October/016297.html">https://lists.openinfosecfoundation.org/pipermail/oisf-users/2018-October/016297.html</a>. That has full details of the situation and how it was discovered in our environment. The short of it is that we saw a large difference in the number of alerts between our sensor sets and these alerts were all produced from a single flow of SMB (specifically SMB2) traffic. In our environment we have two sensor sets that are configured to receive identical copies of traffic. The sets are identical in terms of hardware, OS, Suricata version (4.0.5), and Suricata configuration.</p>
<p>I was able to gather two ~500MB packet captures of traffic on the sensor where alerts were missing. When one of these captures is run through Suricata in pcap file mode, all expected alerts are generated. The other capture produces no alerts even though it contains traffic that matches specific SMB rules. If I remove a single frame from the capture where no alerts are generated, then many alerts are generated. This single frame is an SMB Close Response message. If the capture is modified to start at any SMB Close Response message, then no alerts are generated even though there are matches throughout the capture.</p>
<p>Unfortunately, I am not able to provide either of these 500MB captures. The good news is that I was able to recreate a similar situation using publicly available SMB/SMB2 pcaps that have some modifications. The captures trigger alerts on different rules than the ones we observed in our environment, but are still rules that look for SMB traffic (though they are not SMB protocol rules). The captures were found at <a class="external" href="https://github.com/401trg/detections/tree/master/pcaps">https://github.com/401trg/detections/tree/master/pcaps</a> and I will attach these along with the modified files to this issue. The situation was reproducible using an almost vanilla configuration and also was reproducible on Suricata 3.1.</p>
<p>Attached you will find packet captures, the suricata.yaml config file I used to recreate this, and also the --build-info output.</p>
<p><strong>Actual Behavior:</strong></p>
<p>Running a packet capture through Suricata in pcap file mode does not produce any alerts, even though the capture contains traffic that matches loaded rules. When a single frame is removed from the capture, rules are triggered.</p>
<p><strong>Expected Behavior:</strong></p>
<p>The traffic should generate alerts since there are rules being used that match traffic in the captures.</p>
<p><strong>Steps to Reproduce:</strong></p>
<p>On Suricata 4.0.5, use the attached suricata.yaml, custom.rules file, and pcap files.</p>
<p>Run the pcaps through Suricata in pcap file mode using:<br />suricata -c suricata.yaml -r <pcap file></p>
<p>One example of the exact command I ran is:<br />sudo suricata -vv -c suricata.yaml -k none --pidfile pcap.pid -l logging -r pcaps/C_20171220_smb_mimikatz_copy_SMB2_frame_26_NOT_close.pcap</p>
<p>Here are details on the number of alerts triggered by each capture attached. There are 4 groups, A through D. Note that the group A capture that does not start with an SMB Close Response still produced no alerts for some reason, so it appears this is not the only factor affecting this behavior. The file names contain the frame number at which each capture starts, relative to the original capture for that group.</p>
<pre>
alerts,pcap_name
3,A_20171220_smb_metasploit_psexec_pth_download_meterpreter.pcap
0,A_20171220_smb_metasploit_psexec_pth_download_meterpreter_frame_31_close.pcap
0,A_20171220_smb_metasploit_psexec_pth_download_meterpreter_frame_32_NOT_close.pcap
3,B_20171220_smb_mimikatz_copy_to_host.pcap
0,B_20171220_smb_mimikatz_copy_to_host_SMB2_frame_15_close.pcap
2,B_20171220_smb_mimikatz_copy_to_host_SMB2_frame_16_NOT_close.pcap
4,C_20171220_smb_mimikatz_copy.pcap
0,C_20171220_smb_mimikatz_copy_SMB2_frame_25_close.pcap
4,C_20171220_smb_mimikatz_copy_SMB2_frame_26_NOT_close.pcap
2,D_20171220_smb_psexec_mimikatz_ticket_dump.pcap
0,D_20171220_smb_psexec_mimikatz_ticket_dump_SMB2_frame_49_close.pcap
1,D_20171220_smb_psexec_mimikatz_ticket_dump_SMB2_frame_50_NOT_close.pcap
</pre>
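The per-capture counts above were tallied from the eve output of each run. A small helper for doing that by signature id (a sketch; it assumes the standard eve.json alert.signature_id field):

```python
import json
from collections import Counter

def alert_counts_by_sid(eve_lines):
    """Tally event_type:alert records in eve.json by alert.signature_id."""
    counts = Counter()
    for line in eve_lines:
        record = json.loads(line)
        if record.get("event_type") == "alert":
            counts[record["alert"]["signature_id"]] += 1
    return counts
```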
<p>Only the rules with these SIDs from Emerging Threats are triggered for the pcap in each group:</p>
<pre>
group,sid
A,2025719
B,2025701
C,2025701
D,2025701
</pre> Suricata - Documentation #2640 (Closed): http-body and http-body-printable in eve-log require met...https://redmine.openinfosecfoundation.org/issues/26402018-10-11T19:24:04ZEric Urban
<p><ins>Summary</ins><br />In Suricata, enabling outputs.eve-log.types.alert.http-body or .http-body-printable requires that either outputs.eve-log.types.alert.metadata or outputs.eve-log.types.alert.http also be enabled; otherwise no body data appears in the eve-log.</p>
<p>If requiring metadata (or http) to be enabled is intentional, then it should at least be documented in the standard documentation and/or in suricata.yaml next to the config option. Another suggestion would be to nest these options under outputs.eve-log.types.alert.metadata or .http if metadata is required in order for body logging to occur.</p>
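The dependency could also be surfaced at startup. A hypothetical sketch of such a check (the option names are the real ones; the function and the dict-shaped config are illustrative only):

```python
def check_body_logging(alert_cfg):
    """Given the outputs.eve-log.types.alert section as a dict, warn when
    http-body output is requested but neither metadata nor http is enabled,
    since no body is logged in that combination."""
    body = alert_cfg.get("http-body") or alert_cfg.get("http-body-printable")
    carrier = alert_cfg.get("metadata") or alert_cfg.get("http")
    if body and not carrier:
        return ("http-body/http-body-printable have no effect unless "
                "metadata or http is also enabled")
    return None
```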
<ins>Steps to reproduce</ins>
<ol>
<li>Start with the default suricata.yaml config file.</li>
<li>Set outputs.eve-log.types.alert.metadata to no.</li>
<li>Set outputs.eve-log.types.alert.http-body and/or outputs.eve-log.types.alert.http-body-printable to yes.</li>
<li>Generate HTTP traffic that will cause some alert to trigger.</li>
</ol>
<p><ins>Actual results</ins><br />There is no http-body/http-body-printable data in the eve-log. If this is by design, I was not able to find documentation describing it.</p>
<p><ins>Expected results</ins><br />This behavior should at a minimum be documented. It would be more self-documented if the config option was nested under the metadata config option.</p> Suricata - Bug #2482 (Closed): HTTP connect: difference in detection rates between 3.1 and 4.0.xhttps://redmine.openinfosecfoundation.org/issues/24822018-04-11T16:32:53ZEric Urban
<p>Note that this is a request for information. The issue appears to be resolved in 4.1.0-beta1, so the purpose of this case is to confirm that the behavior we are seeing was intentionally fixed in 4.1.0 and, if possible, to identify the corresponding Redmine issue/bug number.</p>
<p>Summary:<br />When comparing Suricata 3.x to 4.0.x, we have been seeing a large difference in the alerts triggered on one specific rule. The 3.x versions (both 3.1 and 3.2.5) have significantly more detections than the 4.0.x (4.0.3 and 4.0.4) versions. I was able to create a packet capture that closely resembles the behavior and we could reproduce the differences easily with the capture.</p>
<p>Details:<br />The following Emerging Threats rule triggers many more alerts in version 3.x than it does in 4.0.x versions:<br />alert http $HOME_NET any -> $EXTERNAL_NET 443 (msg:"ET POLICY HTTP traffic on port 443 (CONNECT)"; flow:to_server,established; content:"CONNECT"; http_method; classtype:bad-unknown; sid:2013933; rev:4; metadata:created_at 2011_11_17, updated_at 2011_11_17;)</p>
<p>In the example capture attached, proxyCONNECT_443.pcap, there are 124 requests that should match this rule above. On the 3.x versions, there are 124 alerts generated. On 4.0.x versions, there are only 38.</p>
<p>There are not any threshold settings in the rule and we don't have any configured in the threshold.config file.</p>
<p>Steps to reproduce:<br />1. Using a vanilla Suricata config, change the HOME_NET to [10.0.0.0/8] and the EXTERNAL_NET to any. Then include a file with the rule above.<br />2. Run the packet capture proxyCONNECT_443.pcap through Suricata using the -r option.</p>
<p>Actual Results:<br />When using either 4.0.3 or 4.0.4, only 38 alerts are generated.</p>
<p>Expected Results:<br />Since there are 124 requests that should match the rule above, we expect 124 alerts. This is the behavior observed on 3.x and 4.1.0.</p>
<p>Summary:<br />Suricata is not always alerting on traffic that matches content in rules.</p>
<p>Details:<br />When testing Suricata on my virtual machine (Virtual Box 5.2.6) running CentOS Linux release 7.4.1708 (Core), I found there were some situations when traffic was not triggering alerts but was still being picked up in the http.log. As best as I can tell, the behavior is influenced by the combination of where the content falls in the packets and whether the content is split between packets, though it cannot reliably be reproduced. For my tests I was using nc (Ncat version 6.40 <a class="external" href="http://nmap.org/ncat">http://nmap.org/ncat</a>) to serve the content on the VM where Suricata is hosted, and then cURL on a separate physical machine to retrieve it from the VM with Suricata. There is NAT'ing and a port forward to the VM running Suricata in this case.</p>
<p>It is possible there is something weird going on in the way the traffic is passed to the VM so that Suricata is behaving as expected, but I wanted to report this issue anyway especially after reading Bug <a class="issue tracker-7 status-5 priority-4 priority-default closed" title="Security: Suricata 3.x.x and 4.x.x do not parse HTTP responses if tcp data was sent before 3-way-handshake ... (Closed)" href="https://redmine.openinfosecfoundation.org/issues/2427">#2427</a>.</p>
<p>I was able to reproduce this on Suricata 3.x versions and also 4.x versions (4.0.3, 4.0.4, 4.1.0-beta1). It is worth noting that the 3.x versions had some differences in what traffic was picked up when compared to 4.x versions. I was also more reliably able to reproduce the behavior on the 3.x versions, meaning when content was in a specific location it always did NOT trigger an alert. For the sake of the issue here I plan to focus on the 4.x versions.</p>
<p>The rules file is attached as custom.rules. I tried to make my rules as simple as possible so that there was not much room for error.</p>
<p>I ran some packet captures on the VM where Suricata is located. These should allow for reliably reproducing the behavior I have seen. The attachment pcaps.tar.gz contains these captures. I tried to name the captures to describe the expected results. There is even one capture where the same payload is served three times yet only one of the three streams triggers an alert. Here are some details on the captures:</p>
<ul>
<li>11114chars_catch-meContent_MISS.pcap</li>
</ul>
<blockquote>
<p>This capture does NOT trigger an alert in my tests but it seems should trigger the "catch-me" rule. Content is split between frames 19 and 20.</p>
</blockquote>
<ul>
<li>11121chars_catch-meContent_HIT.pcap</li>
</ul>
<blockquote>
<p>This capture does trigger an alert. The content "catch-me" is contained in frame 20.</p>
</blockquote>
<ul>
<li>5842chars_hitContent_1and2MISS_3ALERT.pcap</li>
</ul>
<blockquote>
<p>This capture has 3 streams and should match the rule with the "hit" content. The first 2 streams do NOT trigger alerts and the last one does. It is the same payload served in all three cases.</p>
</blockquote>
<ul>
<li>5842chars_hitContent_MISS.pcap</li>
</ul>
<blockquote>
<p>This capture does NOT trigger an alert. The "hit" content is split between frames 11 and 12.</p>
</blockquote>
<p>Steps to reproduce:<br />1. Use vanilla suricata.yaml config. Set HOME_NET to appropriate address and update rule-files section to point to the attached custom.rules file. (Tested with both ac and hs mpm-algos and reproducible in both cases)<br />2. Run Suricata in pcap file mode with attached packet captures using "suricata -r <pcap name>"</p>
<p>Actual Behavior:<br />The traffic in the captures does not always trigger alerts even though the rules seem to be configured in a way that should match the traffic. Every stream is logged in the http.log file.</p>
<p>Expected Behavior:<br />The traffic in the captures should trigger alerts for every stream, since each stream contains content that matches rules in custom.rules.</p>
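The split-content cases can be illustrated in miniature: matching each TCP segment individually misses content that straddles a segment boundary, while matching the reassembled stream finds it. This is a toy sketch of the principle, not of Suricata's actual stream engine:

```python
def match_per_segment(segments, pattern):
    """Naive per-packet inspection: misses content split across segments."""
    return any(pattern in seg for seg in segments)

def match_reassembled(segments, pattern):
    """Stream reassembly: concatenate segments before inspecting."""
    return pattern in b"".join(segments)

# "catch-me" split across two segments, as in the MISS capture above.
segments = [b"...padding...catch-", b"me...more payload..."]
print(match_per_segment(segments, b"catch-me"))   # False
print(match_reassembled(segments, b"catch-me"))   # True
```

If Suricata's content inspection runs in fixed-size chunks of the reassembled stream, a match landing exactly on a chunk boundary could produce the intermittent misses described above, which is why documenting the chunked inspection behavior would help.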