<p>Open Information Security Foundation: Suricata - Bug #2457: Suricata 4.0.4 exits with [ERRCODE: SC_ERR_FATAL(171)] - Re-entered profiling, exiting (<a class="external" href="https://redmine.openinfosecfoundation.org/issues/2457">https://redmine.openinfosecfoundation.org/issues/2457</a>)</p>
<p>Updated by Andreas Herz (oisf@herzandreas.de) on 2018-03-16:</p>
<ul><li><strong>Assignee</strong> set to <i>OISF Dev</i></li><li><strong>Priority</strong> changed from <i>High</i> to <i>Normal</i></li><li><strong>Target version</strong> set to <i>Support</i></li></ul><p>Can you paste the command you use to run Suricata, and ideally the config file as well?</p>
<p>Updated by Manolo Cabezabolo on 2018-06-27 (<a class="external" href="https://redmine.openinfosecfoundation.org/issues/2457?journal_id=9869">journal entry</a>):</p>
<p>The command is <code>/usr/local/bin/suricata -c /etc/suricata/suricata.yaml --pfring -D</code></p>
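<p>For context: the fatal error in the ticket subject ("Re-entered profiling, exiting") is raised by Suricata's profiling code, which is only compiled in when Suricata is built with <code>--enable-profiling</code>. The <code>profiling:</code> section of suricata.yaml is therefore relevant here, but it is not included in the paste below. As a rough sketch only, the stock 4.0 config ships with something like the following (filenames and defaults recalled from the default config, not taken from the reporter's file, so treat the details as assumptions):</p>
<pre><code># Sketch of the stock suricata.yaml profiling section (assumption:
# reconstructed from the 4.0 default config, not the reporter's file).
# Only has any effect when built with ./configure --enable-profiling.
profiling:
  # per-rule profiling
  rules:
    enabled: yes
    filename: rule_perf.log
    append: yes
    sort: avgticks
    limit: 100
  # per-packet profiling
  packets:
    enabled: yes
    filename: packet_stats.log
    append: yes
</code></pre>
<p>If profiling was not intentionally enabled, rebuilding without <code>--enable-profiling</code> may sidestep the error entirely.</p>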
<p>%YAML 1.1<br />---</p>
<ol>
<li>Suricata configuration file. In addition to the comments describing all</li>
<li>options in this file, full documentation can be found at:</li>
<li><a class="external" href="https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Suricatayaml">https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Suricatayaml</a></li>
</ol>
##
<ol>
<li>Step 1: inform Suricata about your network
##</li>
</ol>
<p>vars:
# more specifc is better for alert accuracy and performance<br /> address-groups:<br /> HOME_NET: .......<br /> #HOME_NET: "[192.168.0.0/16,10.0.0.0/8,172.16.0.0/12]" <br /> #HOME_NET: "[192.168.0.0/16]" <br /> #HOME_NET: "[10.0.0.0/8]" <br /> #HOME_NET: "[172.16.0.0/12]" <br /> #HOME_NET: "any" </p>
<pre><code>EXTERNAL_NET: "!$HOME_NET" <br /> #EXTERNAL_NET: "any" </code></pre>
<pre><code>HTTP_SERVERS: "$HOME_NET" <br /> SMTP_SERVERS: "$HOME_NET" <br /> SQL_SERVERS: "$HOME_NET" <br /> DNS_SERVERS: "$HOME_NET" <br /> TELNET_SERVERS: "$HOME_NET" <br /> AIM_SERVERS: "$EXTERNAL_NET" <br /> DNP3_SERVER: "$HOME_NET" <br /> DNP3_CLIENT: "$HOME_NET" <br /> MODBUS_CLIENT: "$HOME_NET" <br /> MODBUS_SERVER: "$HOME_NET" <br /> ENIP_CLIENT: "$HOME_NET" <br /> ENIP_SERVER: "$HOME_NET"</code></pre>
<pre><code>port-groups:</code></pre>
<pre><code>.....</code></pre>
##
<ol>
<li>Step 2: select the rules to enable or disable
##</li>
</ol>
<p>include: rules.yaml</p>
#default-rule-path: /etc/suricata/rules<br />#rule-files:
<ol>
<li>- botcc.rules
# - botcc.portgrouped.rules</li>
<li>- ciarmy.rules</li>
<li>- compromised.rules</li>
<li>- drop.rules</li>
<li>- dshield.rules</li>
<li>- emerging-activex.rules</li>
<li>- emerging-attack_response.rules</li>
<li>- emerging-chat.rules</li>
<li>- emerging-current_events.rules</li>
<li>- emerging-dns.rules</li>
<li>- emerging-dos.rules</li>
<li>- emerging-exploit.rules</li>
<li>- emerging-ftp.rules</li>
<li>- emerging-games.rules</li>
<li>- emerging-icmp_info.rules</li>
<li>- emerging-icmp.rules</li>
<li>- emerging-imap.rules</li>
<li>- emerging-inappropriate.rules</li>
<li>- emerging-info.rules</li>
<li>- emerging-malware.rules</li>
<li>- emerging-misc.rules</li>
<li>- emerging-mobile_malware.rules</li>
<li>- emerging-netbios.rules</li>
<li>- emerging-p2p.rules</li>
<li>- emerging-policy.rules</li>
<li>- emerging-pop3.rules</li>
<li>- emerging-rpc.rules</li>
<li>- emerging-scada.rules</li>
<li>- emerging-scada_special.rules</li>
<li>- emerging-scan.rules</li>
<li>- emerging-shellcode.rules</li>
<li>- emerging-smtp.rules</li>
<li>- emerging-snmp.rules</li>
<li>- emerging-sql.rules</li>
<li>- emerging-telnet.rules</li>
<li>- emerging-tftp.rules</li>
<li>- emerging-trojan.rules</li>
<li>- emerging-user_agents.rules</li>
<li>- emerging-voip.rules</li>
<li>- emerging-web_client.rules</li>
<li>- emerging-web_server.rules</li>
<li>- emerging-web_specific_apps.rules</li>
<li>- emerging-worm.rules</li>
<li>- tor.rules</li>
<li>- decoder-events.rules # available in suricata sources under rules dir</li>
<li>- stream-events.rules # available in suricata sources under rules dir</li>
<li>- http-events.rules # available in suricata sources under rules dir</li>
<li>- smtp-events.rules # available in suricata sources under rules dir</li>
<li>- dns-events.rules # available in suricata sources under rules dir</li>
<li>- tls-events.rules # available in suricata sources under rules dir</li>
<li>- modbus-events.rules # available in suricata sources under rules dir</li>
<li>- app-layer-events.rules # available in suricata sources under rules dir</li>
<li>- dnp3-events.rules # available in suricata sources under rules dir</li>
</ol>
<ol>
<li>classification-file: /etc/suricata/classification.config</li>
<li>reference-config-file: /etc/suricata/reference.config</li>
<li>threshold-file: /etc/suricata/threshold.config</li>
</ol>
##
<ol>
<li>Step 3: select outputs to enable
##</li>
</ol>
<ol>
<li>The default logging directory. Any log or output file will be</li>
<li>placed here if its not specified with a full path name. This can be</li>
<li>overridden with the -l command line parameter.<br />default-log-dir: /var/log/suricata/</li>
</ol>
<ol>
<li>global stats configuration<br />stats:<br /> enabled: yes
# The interval field (in seconds) controls at what interval
# the loggers are invoked.<br /> interval: 8</li>
</ol>
<ol>
<li>Configure the type of alert (and other) logging you would like.<br />outputs:
# a line based alerts log similar to Snort's fast.log<br /> - fast:<br /> enabled: yes<br /> filename: fast.log<br /> append: yes<br /> #filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'
<ol>
<li>Extensible Event Format (nicknamed EVE) event log in JSON format<br /> - eve-log:<br /> enabled: yes<br /> filetype: regular #regular|syslog|unix_dgram|unix_stream|redis<br /> #filename: eve.json<br /> filename: eve-alerts.json<br /> filemode: ...<br /> #rotate-interval: hour<br /> #prefix: "@cee: " # prefix to prepend to each log entry
# the following are valid when type: syslog above<br /> #identity: "suricata" <br /> #facility: local5<br /> #level: Info ## possible levels: Emergency, Alert, Critical,
## Error, Warning, Notice, Info, Debug
# redis:
# 127.0.0.1
# port: ... <a class="issue tracker-2 status-7 priority-4 priority-default" title="Feature: JA4 support for TLS and QUIC (In Review)" href="https://redmine.openinfosecfoundation.org/issues/6379">#6379</a>
# mode: list ## possible values: list (default), channel
# key: suricata-test ## key or channel to use (default to suricata)
# Redis pipelining set up. This will enable to only do a query every
# 'batch-size' events. This should lower the latency induced by network
# connection at the cost of some memory. There is no flushing implemented
# so this setting as to be reserved to high traffic suricata.
# pipelining:
# enabled: yes ## set enable to yes to enable query pipelining
# batch-size: 10 ## number of entry to keep in buffer<br /> types:<br /> - alert:<br /> payload: yes # enable dumping payload in Base64
# payload-buffer-size: 4kb # max size of payload buffer to output in eve-log<br /> payload-printable: no #yes # enable dumping payload in printable (lossy) format<br /> packet: yes # enable dumping of packet (without stream segments)<br /> http: yes # enable dumping of http fields<br /> tls: yes # enable dumping of tls fields<br /> ssh: yes # enable dumping of ssh fields<br /> smtp: yes # enable dumping of smtp fields<br /> dnp3: yes # enable dumping of DNP3 fields <ol>
<li>Enable the logging of tagged packets for rules using the
# "tag" keyword.<br /> tagged-packets: yes</li>
<li>- alert:</li>
<li> payload: yes</li>
<li> payload-printable: no</li>
<li> packet: yes</li>
<li> http: yes</li>
<li> tls: yes</li>
<li> ssh: yes</li>
<li> smtp: yes</li>
<li> dnp3: yes</li>
</ol>
<ol>
<li>HTTP X-Forwarded-For support by adding an extra field or overwriting</li>
<li>the source or destination IP address (depending on flow direction)</li>
<li>with the one reported in the X-Forwarded-For HTTP header. This is</li>
<li>helpful when reviewing alerts for traffic that is being reverse</li>
<li>or forward proxied.<br /> xff:<br /> enabled: yes
# Two operation modes are available, "extra-data" and "overwrite".<br /> mode: extra-data
# Two proxy deployments are supported, "reverse" and "forward". In
# a "reverse" deployment the IP address used is the last one, in a
# "forward" deployment the first IP address is used.<br /> deployment: reverse
# Header name where the actual IP address will be reported, if more
# than one IP address is present, the last IP address will be the
# one taken into consideration.<br /> header: X-Forwarded-For<br /> - eve-log:<br /> enabled: yes<br /> filetype: regular #regular|syslog|unix_dgram|unix_stream|redis<br /> filename: eve-others.json<br /> filemode: 644<br /> #rotate-interval: hour<br /> types:</li>
</ol>
<ol>
<li> tagged-packets: yes
<ol>
<li> xff:</li>
<li> enabled: yes</li>
<li> mode: extra-data</li>
<li> deployment: reverse</li>
<li> header: X-Forwarded-For<br /> - http:<br /> extended: yes # enable this for extended logging information
# custom allows additional http fields to be included in eve-log
# the example below adds three additional fields when uncommented<br /> #custom: [Accept-Encoding, Accept-Language, Authorization]</li>
</ol>
</li>
<li>- dns:</li>
<li> # control logging of queries and answers</li>
<li> # default yes, no to disable</li>
<li> query: yes # enable logging of DNS queries</li>
<li> answer: yes # enable logging of DNS answers</li>
<li> # control which RR types are logged</li>
<li> # all enabled if custom not specified</li>
<li> #custom: [a, aaaa, cname, mx, ns, ptr, txt]<br /> - tls:<br /> extended: yes # enable this for extended logging information</li>
<li>- files:</li>
<li> force-magic: no # force logging magic on all logged files</li>
<li> # force logging of checksums, available hash functions are md5,</li>
<li> # sha1 and sha256</li>
<li> #force-hash: [md5]<br /> #- drop:
# alerts: yes # log alerts that caused drops
# flows: all # start or all: 'start' logs only a single drop
# # per flow direction. All logs each dropped pkt.<br /> - smtp:<br /> extended: yes # enable this for extended logging information
# this includes: bcc, message-id, subject, x_mailer, user-agent
# custom fields logging from the list:
# reply-to, bcc, message-id, subject, x-mailer, user-agent, received,
# x-originating-ip, in-reply-to, references, importance, priority,
# sensitivity, organization, content-md5, date<br /> #custom: [received, x-mailer, x-originating-ip, relays, reply-to, bcc]
# output md5 of fields: body, subject
# for the body you need to set app-layer.protocols.smtp.mime.body-md5
# to yes<br /> #md5: [body, subject]
- ssh<br /> - eve-log:<br /> enabled: yes<br /> filetype: regular #regular|syslog|unix_dgram|unix_stream|redis<br /> filename: eve-stats.json<br /> filemode: 644<br /> #rotate-interval: day<br /> types:<br /> - stats:<br /> totals: yes # stats for all threads merged together<br /> threads: yes # per thread stats<br /> deltas: yes # include delta values
<ol>
<li>bi-directional flows<br /> #- flow</li>
<li>uni-directional flows<br /> #- netflow<br /> #- dnp3</li>
</ol></li>
</ol></li>
</ol>
<ol>
<li>alert output for use with Barnyard2<br /> - unified2-alert:<br /> enabled: yes<br /> filename: unified2.alert
<ol>
<li>File size limit. Can be specified in kb, mb, gb. Just a number</li>
<li>is parsed as bytes.<br /> limit: 128mb</li>
</ol>
<ol>
<li>Sensor ID field of unified2 alerts.<br /> #sensor-id: 0</li>
</ol>
<ol>
<li>Include payload of packets related to alerts. Defaults to true, set to</li>
<li>false if payload is not required.<br /> #payload: yes</li>
</ol>
<ol>
<li>HTTP X-Forwarded-For support by adding the unified2 extra header or</li>
<li>overwriting the source or destination IP address (depending on flow</li>
<li>direction) with the one reported in the X-Forwarded-For HTTP header.</li>
<li>This is helpful when reviewing alerts for traffic that is being reverse</li>
<li>or forward proxied.<br /> xff:<br /> enabled: yes
# Two operation modes are available, "extra-data" and "overwrite". Note
# that in the "overwrite" mode, if the reported IP address in the HTTP
# X-Forwarded-For header is of a different version of the packet
# received, it will fall-back to "extra-data" mode.<br /> mode: extra-data
# Two proxy deployments are supported, "reverse" and "forward". In
# a "reverse" deployment the IP address used is the last one, in a
# "forward" deployment the first IP address is used.<br /> deployment: reverse
# Header name where the actual IP address will be reported, if more
# than one IP address is present, the last IP address will be the
# one taken into consideration.<br /> header: X-Forwarded-For</li>
</ol></li>
</ol>
<ol>
<li>a line based log of HTTP requests (no alerts)<br /> - http-log:<br /> enabled: no<br /> filename: http.log<br /> append: yes<br /> #extended: yes # enable this for extended logging information<br /> #custom: yes # enabled the custom logging format (defined by customformat)<br /> #customformat: "%{%D-%H:%M:%S}t.%z %{X-Forwarded-For}i %H %m %h %u %s %B %a:%p -> %A:%P" <br /> #filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'</li>
</ol>
<ol>
<li>a line based log of TLS handshake parameters (no alerts)<br /> - tls-log:<br /> enabled: no # Log TLS connections.<br /> filename: tls.log # File to store TLS logs.<br /> append: yes<br /> #filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'<br /> #extended: yes # Log extended information like fingerprint</li>
</ol>
<ol>
<li>output module to store certificates chain to disk<br /> - tls-store:<br /> enabled: no<br /> #certs-log-dir: certs # directory to store the certificates files</li>
</ol>
<ol>
<li>a line based log of DNS requests and/or replies (no alerts)<br /> - dns-log:<br /> enabled: no<br /> filename: dns.log<br /> append: yes<br /> filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'</li>
</ol>
<ol>
<li>Packet log... log packets in pcap format. 3 modes of operation: "normal" </li>
<li>"multi" and "sguil".
#</li>
<li>In normal mode a pcap file "filename" is created in the default-log-dir,</li>
<li>or are as specified by "dir".</li>
<li>In multi mode, a file is created per thread. This will perform much</li>
<li>better, but will create multiple files where 'normal' would create one.</li>
<li>In multi mode the filename takes a few special variables:</li>
<li>- %n -- thread number</li>
<li>- %i -- thread id</li>
<li>- %t -- timestamp (secs or secs.usecs based on 'ts-format'</li>
<li>E.g. filename: pcap.%n.%t
#</li>
<li>Note that it's possible to use directories, but the directories are not</li>
<li>created by Suricata. E.g. filename: pcaps/%n/log.%s will log into the</li>
<li>per thread directory.
#</li>
<li>Also note that the limit and max-files settings are enforced per thread.</li>
<li>So the size limit when using 8 threads with 1000mb files and 2000 files</li>
<li>is: 8*1000*2000 ~ 16TiB.
#</li>
<li>In Sguil mode "dir" indicates the base directory. In this base dir the</li>
<li>pcaps are created in th directory structure Sguil expects:
#</li>
<li>$sguil-base-dir/YYYY-MM-DD/$filename.<timestamp>
#</li>
<li>By default all packets are logged except:</li>
<li>- TCP streams beyond stream.reassembly.depth</li>
<li>- encrypted streams after the key exchange
#<br /> - pcap-log:<br /> enabled: no<br /> filename: log.pcap
<ol>
<li>File size limit. Can be specified in kb, mb, gb. Just a number</li>
<li>is parsed as bytes.<br /> limit: 1000mb</li>
</ol>
<ol>
<li>If set to a value will enable ring buffer mode. Will keep Maximum of "max-files" of size "limit" <br /> max-files: 2000</li>
</ol>
<p>mode: normal # normal, multi or sguil.</p>
<ol>
<li>Directory to place pcap files. If not provided the default log</li>
<li>directory will be used. Required for "sguil" mode.<br /> #dir: /nsm_data/</li>
</ol>
<p>#ts-format: usec # sec or usec second format (default) is filename.sec usec is filename.sec.usec<br /> use-stream-depth: no #If set to "yes" packets seen after reaching stream inspection depth are ignored. "no" logs all packets<br /> honor-pass-rules: no # If set to "yes", flows in which a pass rule matched will stopped being logged.</p></li>
</ol>
<ol>
<li>a full alerts log containing much information for signature writers</li>
<li>or for investigating suspected false positives.<br /> - alert-debug:<br /> enabled: no<br /> filename: alert-debug.log<br /> append: yes<br /> #filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'</li>
</ol>
<ol>
<li>alert output to prelude (<a class="external" href="http://www.prelude-technologies.com/">http://www.prelude-technologies.com/</a>) only</li>
<li>available if Suricata has been compiled with --enable-prelude<br /> - alert-prelude:<br /> enabled: no<br /> profile: suricata<br /> log-packet-content: no<br /> log-packet-header: yes</li>
</ol>
<ol>
<li>Stats.log contains data from various counters of the suricata engine.<br /> - stats:<br /> enabled: yes<br /> filename: stats.log<br /> totals: yes # stats for all threads merged together<br /> threads: yes # per thread stats<br /> #null-values: yes # print counters that have value 0</li>
</ol>
<ol>
<li>a line based alerts log similar to fast.log into syslog<br /> - syslog:<br /> enabled: no
# reported identity to syslog. If ommited the program name (usually
# suricata) will be used.<br /> #identity: "suricata" <br /> facility: local5<br /> #level: Info ## possible levels: Emergency, Alert, Critical,
## Error, Warning, Notice, Info, Debug</li>
</ol>
<ol>
<li>a line based information for dropped packets in IPS mode<br /> - drop:<br /> enabled: no<br /> filename: drop.log<br /> append: yes<br /> #filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'</li>
</ol>
<ol>
<li>output module to store extracted files to disk
#</li>
<li>The files are stored to the log-dir in a format "file.<id>" where <id> is</li>
<li>an incrementing number starting at 1. For each file "file.<id>" a meta</li>
<li>file "file.<id>.meta" is created.
#</li>
<li>File extraction depends on a lot of things to be fully done:</li>
<li>- file-store stream-depth. For optimal results, set this to 0 (unlimited)</li>
<li>- http request / response body sizes. Again set to 0 for optimal results.</li>
<li>- rules that contain the "filestore" keyword.<br /> - file-store:<br /> enabled: no # set to yes to enable<br /> log-dir: files # directory to store the files<br /> force-magic: no # force logging magic on all stored files
# force logging of checksums, available hash functions are md5,
# sha1 and sha256<br /> #force-hash: [md5]<br /> force-filestore: no # force storing of all files
# override global stream-depth for sessions in which we want to
# perform file extraction. Set to 0 for unlimited.<br /> #stream-depth: 0<br /> #waldo: file.waldo # waldo file to store the file_id across runs</li>
</ol>
<ol>
<li>output module to log files tracked in a easily parsable json format<br /> - file-log:<br /> enabled: no<br /> filename: files-json.log<br /> append: yes<br /> #filetype: regular # 'regular', 'unix_stream' or 'unix_dgram'
force-magic: no # force logging magic on all logged files
<ol>
<li>force logging of checksums, available hash functions are md5,</li>
<li>sha1 and sha256<br /> #force-hash: [md5]</li>
</ol></li>
</ol>
<ol>
<li>Log TCP data after stream normalization</li>
<li>2 types: file or dir. File logs into a single logfile. Dir creates</li>
<li>2 files per TCP session and stores the raw TCP data into them.</li>
<li>Using 'both' will enable both file and dir modes.
#</li>
<li>Note: limited by stream.depth<br /> - tcp-data:<br /> enabled: no<br /> type: file<br /> filename: tcp-data.log</li>
</ol>
<ol>
<li>Log HTTP body data after normalization, dechunking and unzipping.</li>
<li>2 types: file or dir. File logs into a single logfile. Dir creates</li>
<li>2 files per HTTP session and stores the normalized data into them.</li>
<li>Using 'both' will enable both file and dir modes.
#</li>
<li>Note: limited by the body limit settings<br /> - http-body-data:<br /> enabled: no<br /> type: file<br /> filename: http-data.log</li>
</ol>
<ol>
<li>Lua Output Support - execute lua script to generate alert and event</li>
<li>output.</li>
<li>Documented at:</li>
<li><a class="external" href="https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Lua_Output">https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Lua_Output</a><br /> - lua:<br /> enabled: no<br /> #scripts-dir: /etc/suricata/lua-output/<br /> scripts:
# - script1.lua</li>
</ol></li>
</ol>
<ol>
<li>Logging configuration. This is not about logging IDS alerts/events, but</li>
<li>output about what Suricata is doing, like startup messages, errors, etc.<br />logging:
# The default log level, can be overridden in an output section.
# Note that debug level logging will only be emitted if Suricata was
# compiled with the --enable-debug configure option.
#
# This value is overriden by the SC_LOG_LEVEL env var.<br /> default-log-level: notice
<ol>
<li>The default output format. Optional parameter, should default to</li>
<li>something reasonable if not provided. Can be overriden in an</li>
<li>output section. You can leave this out to get the default.
#</li>
<li>This value is overriden by the SC_LOG_FORMAT env var.<br /> #default-log-format: "[%i] %t - (%f:%l) <%d> (%n) -- "</li>
</ol>
<ol>
<li>A regex to filter output. Can be overridden in an output section.</li>
<li>Defaults to empty (no filter).
#</li>
<li>This value is overriden by the SC_LOG_OP_FILTER env var.<br /> default-output-filter:</li>
</ol>
<ol>
<li>Define your logging outputs. If none are defined, or they are all</li>
<li>disabled you will get the default - console output.<br /> outputs:<br /> - console:<br /> enabled: yes
# type: json<br /> - file:<br /> enabled: yes<br /> level: info<br /> filename: /var/log/suricata/suricata.log
# type: json<br /> - syslog:<br /> enabled: no<br /> facility: local5<br /> format: "[%i] <%d> -- "
# type: json</li>
</ol></li>
</ol>
##
<ol>
<li>Step 4: configure common capture settings
##</li>
<li>See "Advanced Capture Options" below for more options, including NETMAP</li>
<li>and PF_RING.
##</li>
</ol>
<ol>
<li>Linux high speed capture support<br />af-packet:<br /> - interface: p2p1
# Number of receive threads. "auto" uses the number of cores<br /> threads: 1 #auto
# Default clusterid. AF_PACKET will load balance packets based on flow.<br /> cluster-id: 99
# Default AF_PACKET cluster type. AF_PACKET can load balance per flow or per hash.
# This is only supported for Linux kernel > 3.1
# possible value are:
# * cluster_round_robin: round robin load balancing
# * cluster_flow: all packets of a given flow are send to the same socket
# * cluster_cpu: all packets treated in kernel by a CPU are send to the same socket
# * cluster_qm: all packets linked by network card to a RSS queue are sent to the same
# socket. Requires at least Linux 3.14.
# * cluster_random: packets are sent randomly to sockets but with an equipartition.
# Requires at least Linux 3.14.
# * cluster_rollover: kernel rotates between sockets filling each socket before moving
# to the next. Requires at least Linux 3.10.
# Recommended modes are cluster_flow on most boxes and cluster_cpu or cluster_qm on system
# with capture card using RSS (require cpu affinity tuning and system irq tuning)<br /> cluster-type: cluster_flow
# In some fragmentation case, the hash can not be computed. If "defrag" is set
# to yes, the kernel will do the needed defragmentation before sending the packets.<br /> defrag: yes
# After Linux kernel 3.10 it is possible to activate the rollover option: if a socket is
# full then kernel will send the packet on the next socket with room available. This option
# can minimize packet drop and increase the treated bandwidth on single intensive flow.<br /> #rollover: yes
# To use the ring feature of AF_PACKET, set 'use-mmap' to yes<br /> #use-mmap: yes
# Lock memory map to avoid it goes to swap. Be careful that over suscribing could lock
# your system<br /> #mmap-locked: yes
# Use experimental tpacket_v3 capture mode, only active if use-mmap is true<br /> #tpacket-v3: yes
# Ring size will be computed with respect to max_pending_packets and number
# of threads. You can set manually the ring size in number of packets by setting
# the following value. If you are using flow cluster-type and have really network
# intensive single-flow you could want to set the ring-size independently of the number
# of threads:<br /> #ring-size: 2048
# Block size is used by tpacket_v3 only. It should set to a value high enough to contain
# a decent number of packets. Size is in bytes so please consider your MTU. It should be
# a power of 2 and it must be multiple of page size (usually 4096).<br /> #block-size: 32768
# tpacket_v3 block timeout: an open block is passed to userspace if it is not
# filled after block-timeout milliseconds.<br /> #block-timeout: 10
# On busy system, this could help to set it to yes to recover from a packet drop
# phase. This will result in some packets (at max a ring flush) being non treated.<br /> #use-emergency-flush: yes
# recv buffer size, increase value could improve performance
# buffer-size: 32768
# Set to yes to disable promiscuous mode
# disable-promisc: no
# Choose checksum verification mode for the interface. At the moment
# of the capture, some packets may be with an invalid checksum due to
# offloading to the network card of the checksum computation.
# Possible values are:
# - kernel: use indication sent by kernel for each packet (default)
# - yes: checksum validation is forced
# - no: checksum validation is disabled
# - auto: suricata uses a statistical approach to detect when
# checksum off-loading is used.
# Warning: 'checksum-validation' must be set to yes to have any validation<br /> #checksum-checks: kernel
# BPF filter to apply to this interface. The pcap filter syntax apply here.<br /> #bpf-filter: port 80 or udp
# You can use the following variables to activate AF_PACKET tap or IPS mode.
# If copy-mode is set to ips or tap, the traffic coming to the current
# interface will be copied to the copy-iface interface. If 'tap' is set, the
# copy is complete. If 'ips' is set, the packet matching a 'drop' action
# will not be copied.<br /> #copy-mode: ips<br /> #copy-iface: eth1
<p>- interface: p2p2<br /> threads: 1<br /> cluster-id: 99<br /> cluster-type: cluster_flow<br /> defrag: yes</p>
<ol>
<li>Put default values here. These will be used for an interface that is not</li>
<li>in the list above.<br /> - interface: default<br /> #threads: auto<br /> #use-mmap: no<br /> #rollover: yes<br /> #tpacket-v3: yes</li>
</ol></li>
</ol>
<ol>
<li>Cross platform libpcap capture support<br />pcap:<br /> - interface: ...
# On Linux, pcap will try to use mmaped capture and will use buffer-size
# as total of memory used by the ring. So set this to something bigger
# than 1% of your bandwidth.<br /> #buffer-size: 16777216<br /> #bpf-filter: "tcp and port 25"
# Choose checksum verification mode for the interface. At the moment
# of the capture, some packets may be with an invalid checksum due to
# offloading to the network card of the checksum computation.
# Possible values are:
# - yes: checksum validation is forced
# - no: checksum validation is disabled
# - auto: suricata uses a statistical approach to detect when
# checksum off-loading is used. (default)
# Warning: 'checksum-validation' must be set to yes to have any validation<br /> #checksum-checks: auto
# With some accelerator cards using a modified libpcap (like myricom), you
# may want to have the same number of capture threads as the number of capture
# rings. In this case, set up the threads variable to N to start N threads
# listening on the same interface.<br /> #threads: 16
# set to no to disable promiscuous mode:<br /> #promisc: no
# set snaplen, if not set it defaults to MTU if MTU can be known
# via ioctl call and to full capture if not.<br /> #snaplen: 1518
# Put default values here<br /> - interface: default<br /> #checksum-checks: auto</li>
</ol>
<ol>
<li>Settings for reading pcap files<br />pcap-file:
# Possible values are:
# - yes: checksum validation is forced
# - no: checksum validation is disabled
# - auto: suricata uses a statistical approach to detect when
# checksum off-loading is used. (default)
# Warning: 'checksum-validation' must be set to yes to have checksum tested<br /> checksum-checks: auto</li>
</ol>
<ol>
<li>See "Advanced Capture Options" below for more options, including NETMAP</li>
<li>and PF_RING.</li>
</ol>
##
<ol>
<li>Step 5: App Layer Protocol Configuration
##</li>
</ol>
<ol>
<li>Configure the app-layer parsers. The protocols section details each</li>
<li>protocol.
#</li>
<li>The option "enabled" takes 3 values - "yes", "no", "detection-only".</li>
<li>"yes" enables both detection and the parser, "no" disables both, and</li>
<li>"detection-only" enables protocol detection only (parser disabled).<br />app-layer:<br /> protocols:<br /> tls:<br /> enabled: yes<br /> detection-ports:<br /> dp: 443,8443 <ol>
<li>Completely stop processing TLS/SSL session after the handshake
# completed. If bypass is enabled this will also trigger flow
# bypass. If disabled (the default), TLS/SSL session is still
# tracked for Heartbleed and other anomalies.<br /> #no-reassemble: yes<br /> dcerpc:<br /> enabled: yes<br /> ftp:<br /> enabled: yes<br /> ssh:<br /> enabled: yes<br /> smtp:<br /> enabled: yes
# Configure SMTP-MIME Decoder<br /> mime:
# Decode MIME messages from SMTP transactions
# (may be resource intensive)
# This field supercedes all others because it turns the entire
# process on or off<br /> decode-mime: yes</li>
<li>smb2 detection is disabled internally inside the engine.<br /> #smb2:</li>
<li> enabled: yes<br /> dns:
# memcaps. Globally and per flow/state.<br /> #global-memcap: 16mb<br /> #state-memcap: 512kb</li>
</ol>
<ol>
<li>How many unreplied DNS requests are considered a flood.</li>
<li>If the limit is reached, app-layer-event:dns.flooded; will match.<br /> #request-flood: 500</li>
</ol>
tcp:<br /> enabled: yes<br /> detection-ports:<br /> dp: 53<br /> udp:<br /> enabled: yes<br /> detection-ports:<br /> dp: 53<br /> http:<br /> enabled: yes
<ol>
<li>memcap: 64mb</li>
</ol>
<pre><code>    # default-config:           Used when no server-config matches
    #   personality:            List of personalities used by default
    #   request-body-limit:     Limit reassembly of request body for inspection
    #                           by http_client_body &amp; pcre /P option.
    #   response-body-limit:    Limit reassembly of response body for inspection
    #                           by file_data, http_server_body &amp; pcre /Q option.
    #   double-decode-path:     Double decode path section of the URI
    #   double-decode-query:    Double decode query section of the URI
    #   response-body-decompress-layer-limit:
    #                           Limit to how many layers of compression will be
    #                           decompressed. Defaults to 2.
    #
    # server-config:            List of server configurations to use if address matches
    #   address:                List of ip addresses or networks for this block
    #   personality:            List of personalities used by this block
    #   request-body-limit:     Limit reassembly of request body for inspection
    #                           by http_client_body &amp; pcre /P option.
    #   response-body-limit:    Limit reassembly of response body for inspection
    #                           by file_data, http_server_body &amp; pcre /Q option.
    #   double-decode-path:     Double decode path section of the URI
    #   double-decode-query:    Double decode query section of the URI
    #
    #   uri-include-all:        Include all parts of the URI. By default the
    #                           'scheme', username/password, hostname and port
    #                           are excluded. Setting this option to true adds
    #                           all of them to the normalized uri as inspected
    #                           by http_uri, urilen, pcre with /U and the other
    #                           keywords that inspect the normalized uri.
    #                           Note that this does not affect http_raw_uri.
    #                           Also note that including all was the default in
    #                           1.4 and 2.0beta1.
    #
    #   meta-field-limit:       Hard size limit for request and response size
    #                           limits. Applies to request line and headers,
    #                           response line and headers. Does not apply to
    #                           request or response bodies. Default is 18k.
    #                           If this limit is reached an event is raised.
    #
    # Currently Available Personalities:
    #   Minimal, Generic, IDS (default), IIS_4_0, IIS_5_0, IIS_5_1, IIS_6_0,
    #   IIS_7_0, IIS_7_5, Apache_2
    libhtp:
      default-config:
        personality: IDS

        # Can be specified in kb, mb, gb. Just a number indicates
        # it's in bytes.
        request-body-limit: 100kb
        response-body-limit: 100kb

        # inspection limits
        request-body-minimal-inspect-size: 32kb
        request-body-inspect-window: 4kb
        response-body-minimal-inspect-size: 40kb
        response-body-inspect-window: 16kb

        # response body decompression (0 disables)
        response-body-decompress-layer-limit: 2

        # auto will use http-body-inline mode in IPS mode, yes or no set it statically
        http-body-inline: auto

        # Take a random value for inspection sizes around the specified value.
        # This lowers the risk of some evasion techniques but could lead to
        # detection changing between runs. It is set to 'yes' by default.
        #randomize-inspection-sizes: yes
        # If randomize-inspection-sizes is active, the value of the various
        # inspection sizes will be chosen in the [1 - range%, 1 + range%]
        # range.
        # Default value of randomize-inspection-range is 10.
        #randomize-inspection-range: 10

        # decoding
        double-decode-path: no
        double-decode-query: no

      server-config:

        #- apache:
        #    address: [192.168.1.0/24, 127.0.0.0/8, "::1"]
        #    personality: Apache_2
        #    # Can be specified in kb, mb, gb. Just a number indicates
        #    # it's in bytes.
        #    request-body-limit: 4096
        #    response-body-limit: 4096
        #    double-decode-path: no
        #    double-decode-query: no

        #- iis7:
        #    address:
        #    personality: IIS_7_0
        #    # Can be specified in kb, mb, gb. Just a number indicates
        #    # it's in bytes.
        #    request-body-limit: 4096
        #    response-body-limit: 4096
        #    double-decode-path: no
        #    double-decode-query: no
</code></pre>
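<p>Several of the limits above (request-body-limit, memcap and friends) accept kb/mb/gb suffixes, with a bare number meaning bytes. As an illustration only (this is not Suricata's actual parsing code, and the binary 1024 multiples are an assumption stated here), such values resolve to byte counts like this:</p>

```python
def parse_size(value):
    """Resolve a Suricata-style size string ("100kb", "2gb", "4096")
    to a byte count. A bare number is bytes; suffixes are assumed to
    be binary (1024-based) multiples."""
    units = {"kb": 1024, "mb": 1024 ** 2, "gb": 1024 ** 3}
    value = value.strip().lower()
    for suffix, factor in units.items():
        if value.endswith(suffix):
            return int(value[:-len(suffix)]) * factor
    return int(value)  # plain number: already in bytes

print(parse_size("100kb"))  # 102400
```

<p>So the "request-body-limit: 100kb" above corresponds to 102400 bytes of reassembled request body.</p>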
<pre><code>    # Note: the Modbus probe parser is minimalist due to the limited fields
    # available for probing. Only the Modbus message length (greater than the
    # Modbus header length) and the Protocol ID (equal to 0) are checked by
    # the probing parser. It is important to enable the detection port and
    # define the Modbus port to avoid false positives.
    modbus:
      # How many unreplied Modbus requests are considered a flood.
      # If the limit is reached, app-layer-event:modbus.flooded; will match.
      #request-flood: 500

      enabled: no
      detection-ports:
        dp: 502
      # According to MODBUS Messaging on TCP/IP Implementation Guide V1.0b, it
      # is recommended to keep the TCP connection opened with a remote device
      # and not to open and close it for each MODBUS/TCP transaction. In that
      # case, it is important to set the depth of the stream reassembling as
      # unlimited (stream.reassembly.depth: 0)

      # Stream reassembly size for modbus. By default track it completely.
      stream-depth: 0

    # DNP3
    dnp3:
      enabled: no
      detection-ports:
        dp: 20000

    # SCADA EtherNet/IP and CIP protocol support
    enip:
      enabled: no
      detection-ports:
        dp: 44818
        sp: 44818
</code></pre>
</li>
</ol>
<pre><code># Limit for the maximum number of asn1 frames to decode (default 256)
asn1-max-frames: 256

##############################################################################
##
## Advanced settings below
##
##############################################################################

##
## Run Options
##
</code></pre>
<pre><code># Run suricata as user and group.
run-as:
  user: suricata
  group: suricata

# Some logging modules will use this name in events as an identifier. The
# default value is the hostname.
#sensor-name: suricata

# Default pid file.
# Will be used if no --pidfile is given in the command options.
#pid-file: /var/run/suricata.pid

# Daemon working directory
# Suricata will change directory to this one if provided
# Default: "/"
#daemon-directory: "/"

# Suricata core dump configuration. Limits the size of the core dump file to
# approximately max-dump. The actual core dump size will be a multiple of the
# page size. Core dumps that would be larger than max-dump are truncated. On
# Linux, the actual core dump size may be a few pages larger than max-dump.
# Setting max-dump to 0 disables core dumping.
# Setting max-dump to 'unlimited' will give the full core dump file.
# On 32-bit Linux, a max-dump value &gt;= ULONG_MAX may cause the core dump size
# to be 'unlimited'.
coredump:
  max-dump: unlimited
</code></pre>
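<p>As the coredump comment notes, the actual limit is rounded to a multiple of the page size, so the dump may come out a few pages larger than a non-aligned max-dump. That rounding can be sketched as follows (illustrative only; the 4096-byte page size is an assumption, not read from the system):</p>

```python
def effective_core_limit(max_dump, page_size=4096):
    """Round max-dump up to the next multiple of the page size,
    mirroring the 'multiple of the page size' behaviour described
    in the coredump comments above."""
    if max_dump == 0:
        return 0  # core dumping disabled
    return ((max_dump + page_size - 1) // page_size) * page_size

print(effective_core_limit(10000))  # 12288
```

<p>A max-dump of 10000 bytes therefore yields a core file of up to three 4k pages, i.e. 12288 bytes.</p>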
<pre><code># If the suricata box is a router for the sniffed networks, set it to
# 'router'. If it is a pure sniffing setup, set it to 'sniffer-only'.
# If set to auto, the variable is internally switched to 'router' in IPS
# mode and 'sniffer-only' in IDS mode.
# This feature is currently only used by the reject* keywords.
host-mode: auto

# Number of packets preallocated per thread. The default is 1024. A higher
# number will make sure each CPU will be more easily kept busy, but may
# negatively impact caching.
#
# If you are using the CUDA pattern matcher (mpm-algo: ac-cuda), different
# rules apply. In that case try something like 60000 or more. This is
# because the CUDA pattern matcher buffers and scans as many packets as
# possible in parallel.
#max-pending-packets: 1024

# Runmode the engine should use. Please check --list-runmodes to get the
# available runmodes for each packet acquisition method. Defaults to
# "autofp" (auto flow pinned load balancing).
#runmode: autofp
runmode: workers

# Specifies the kind of flow load balancer used by the flow pinned autofp
# mode.
#
# Supported schedulers are:
#
#   round-robin    - Flows assigned to threads in a round robin fashion.
#   active-packets - Flows assigned to threads that have the lowest number
#                    of unprocessed packets (default).
#   hash           - Flow allotted using the address hash. More of a random
#                    technique. Was the default in Suricata 1.2.1 and older.
#
#autofp-scheduler: active-packets

# Preallocated size for each packet. The default is 1514, which is the
# classical size for pcap on ethernet. You should adjust this value to the
# highest packet size (MTU + hardware header) on your system.
#default-packet-size: 1514

# The unix command socket can be used to pass commands to suricata.
# An external tool can then connect to get information from suricata
# or trigger some modifications of the engine. Set enabled to yes
# to activate the feature. In auto mode, the feature will only be
# activated in live capture mode. You can use the filename variable to set
# the file name of the socket.
unix-command:
  enabled: yes
  #filename: custom.socket

# Magic file. The extension .mgc is added to the value here.
#magic-file: /usr/share/file/magic
#magic-file:
</code></pre>
<pre><code>legacy:
  uricontent: enabled

##
## Detection settings
##

# Set the order of alerts based on actions
# The default order is pass, drop, reject, alert
# action-order:
#   - pass
#   - drop
#   - reject
#   - alert

# IP Reputation
#reputation-categories-file: /etc/suricata/iprep/categories.txt
#default-reputation-path: /etc/suricata/iprep
#reputation-files:
# - reputation.list

# When run with the option --engine-analysis, the engine will read each of
# the parameters below, print reports for each of the enabled sections,
# and exit. The reports are printed to a file in the default log dir
# given by the parameter "default-log-dir", with each reporting
# subsection below printing reports in its own report file.
engine-analysis:
  # enables printing reports for fast-pattern for every rule.
  rules-fast-pattern: yes
  # enables printing reports for each rule
  rules: yes

# recursion and match limits for PCRE where supported
pcre:
  match-limit: 3500
  match-limit-recursion: 1500
</code></pre>
<pre><code>##
## Advanced Traffic Tracking and Reconstruction Settings
##

# Host specific policies for defragmentation and TCP stream
# reassembly. The host OS lookup is done using a radix tree, just
# like a routing table, so the most specific entry matches.
host-os-policy:
  # Make the default policy windows.
  windows: [0.0.0.0/0]
  bsd: []
  bsd-right: []
  old-linux: []
  linux: []
  old-solaris: []
  solaris: []
  hpux10: []
  hpux11: []
  irix: []
  macos: []
  vista: []
  windows2k3: []

# Defrag settings:
defrag:
  memcap: 32mb
  hash-size: 65536
  trackers: 65535  # number of defragmented flows to follow
  max-frags: 65535 # number of fragments to keep (higher than trackers)
  prealloc: yes
  timeout: 60

# Enable defrag per host settings
#  host-config:
#
#    - dmz:
#        timeout: 30
#        address: ...
#
#    - lan:
#        timeout: 45
#        address:
</code></pre>
<pre><code># Flow settings:
# By default, the reserved memory (memcap) for flows is 32MB. This is the
# limit for flow allocation inside the engine. You can change this value to
# allow more memory usage for flows.
# The hash-size determines the size of the hash table used to look up flows
# inside the engine; by default the value is 65536.
# At startup, the engine can preallocate a number of flows for better
# performance. The number of flows preallocated is 10000 by default.
# emergency-recovery is the percentage of flows that the engine needs to
# prune before unsetting the emergency state. The emergency state is
# activated when the memcap limit is reached; new flows can still be
# created, but existing ones are pruned with the emergency timeouts
# (defined below).
# If the memcap is reached, the engine will try to prune flows
# with the default timeouts. If it doesn't find a flow to prune, it will
# set the emergency bit and try again with more aggressive timeouts.
# If that doesn't work, it will try to kill the least recently seen flows
# not in use.
# The memcap can be specified in kb, mb, gb. Just a number indicates it's
# in bytes.
flow:
  memcap: 128mb
  hash-size: 65536
  prealloc: 10000
  emergency-recovery: 30
  #managers: 1  # default to one flow manager
  #recyclers: 1 # default to one flow recycler thread

# This option controls the use of vlan ids in the flow (and defrag)
# hashing. Normally this should be enabled, but in some (broken)
# setups where both sides of a flow are not tagged with the same vlan
# tag, we can ignore the vlan ids in the flow hashing.
vlan:
  use-for-tracking: true

# Specific timeouts for flows. Here you can specify the timeouts that
# active flows will wait before transitioning from one state to another,
# per protocol. The value of "new" is the number of seconds to wait, after
# a handshake or stream startup, before the engine frees the data of that
# flow if it doesn't change state to established (usually because we don't
# receive more packets of that flow). The value of "established" is the
# number of seconds the engine will wait before freeing the flow if that
# amount passes without receiving new packets or closing the connection.
# "closed" is the amount of time to wait after a flow is closed (usually
# zero). The "bypassed" timeout controls locally bypassed flows; for these
# flows we don't do any other tracking, and if no packets have been seen
# after this timeout, the flow is discarded.
#
# There's an emergency mode that becomes active under attack circumstances,
# making the engine check flow status faster. These configuration variables
# use the prefix "emergency-" and work similarly to the normal ones.
# Some timeouts don't apply to all protocols, like "closed" for udp and
# icmp.
flow-timeouts:
  default:
    new: 30
    established: 300
    closed: 0
    bypassed: 100
    emergency-new: 10
    emergency-established: 100
    emergency-closed: 0
    emergency-bypassed: 50
  tcp:
    new: 60
    established: 600
    closed: 60
    bypassed: 100
    emergency-new: 5
    emergency-established: 100
    emergency-closed: 10
    emergency-bypassed: 50
  udp:
    new: 30
    established: 300
    bypassed: 100
    emergency-new: 10
    emergency-established: 100
    emergency-bypassed: 50
  icmp:
    new: 30
    established: 300
    bypassed: 100
    emergency-new: 10
    emergency-established: 100
    emergency-bypassed: 50
</code></pre>
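<p>The timeout selection described above can be pictured as: look up the protocol's table (falling back to the default table), and prefix the state with "emergency-" when emergency mode is active. A toy model of that resolution, using a subset of the values above (this is a sketch of the documented behaviour, not engine code):</p>

```python
# A subset of the flow-timeouts values from the configuration above.
FLOW_TIMEOUTS = {
    "default": {"new": 30, "established": 300, "closed": 0, "bypassed": 100},
    "tcp": {"new": 60, "established": 600, "closed": 60, "bypassed": 100,
            "emergency-new": 5, "emergency-established": 100,
            "emergency-closed": 10, "emergency-bypassed": 50},
}

def flow_timeout(proto, state, emergency=False):
    """Pick the timeout (in seconds) for a flow state. In emergency
    mode, the separately configured "emergency-" value is used."""
    table = FLOW_TIMEOUTS.get(proto, FLOW_TIMEOUTS["default"])
    key = ("emergency-" + state) if emergency else state
    return table[key]

print(flow_timeout("tcp", "new"))                  # 60
print(flow_timeout("tcp", "new", emergency=True))  # 5
```

<p>Note how the emergency values are much shorter: under memcap pressure, flows are aged out far more aggressively.</p>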
<pre><code># Stream engine settings. Here the TCP stream tracking and reassembly
# engine is configured.
#
# stream:
#   memcap: 32mb                # Can be specified in kb, mb, gb. Just a
#                               # number indicates it's in bytes.
#   checksum-validation: yes    # To validate the checksum of received
#                               # packets. If csum validation is specified
#                               # as "yes", then packets with an invalid
#                               # csum will not be processed by the engine
#                               # stream/app layer.
#                               # Warning: locally generated traffic can be
#                               # generated without checksum due to hardware
#                               # offload of checksum. You can control the
#                               # handling of checksum on a per-interface
#                               # basis via the 'checksum-checks' option.
#   prealloc-sessions: 2k       # 2k sessions prealloc'd per stream thread
#   midstream: false            # don't allow midstream session pickups
#   async-oneside: false        # don't enable async stream handling
#   inline: no                  # stream inline mode
#   max-synack-queued: 5        # Max different SYN/ACKs to queue
#   bypass: no                  # Bypass packets when stream.depth is reached
#
#   reassembly:
#     memcap: 64mb              # Can be specified in kb, mb, gb. Just a
#                               # number indicates it's in bytes.
#     depth: 1mb                # Can be specified in kb, mb, gb. Just a
#                               # number indicates it's in bytes.
#     toserver-chunk-size: 2560 # inspect raw stream in chunks of at least
#                               # this size. Can be specified in kb, mb,
#                               # gb. Just a number indicates it's in bytes.
#                               # The max acceptable size is 4024 bytes.
#     toclient-chunk-size: 2560 # inspect raw stream in chunks of at least
#                               # this size. Can be specified in kb, mb,
#                               # gb. Just a number indicates it's in bytes.
#                               # The max acceptable size is 4024 bytes.
#     randomize-chunk-size: yes # Take a random value for the chunk size
#                               # around the specified value. This lowers
#                               # the risk of some evasion techniques but
#                               # could lead to detection changing between
#                               # runs. It is set to 'yes' by default.
#     randomize-chunk-range: 10 # If randomize-chunk-size is active, the
#                               # chunk size is a random value between
#                               # (1 - randomize-chunk-range/100)*toserver-chunk-size
#                               # and (1 + randomize-chunk-range/100)*toserver-chunk-size,
#                               # with the same calculation for
#                               # toclient-chunk-size.
#                               # Default value of randomize-chunk-range
#                               # is 10.
#
#     raw: yes                  # 'Raw' reassembly enabled or disabled.
#                               # raw is for content inspection by the
#                               # detection engine.
#
#     chunk-prealloc: 250       # Number of preallocated stream chunks.
#                               # These are used during stream inspection
#                               # (raw).
#     segments:                 # Settings for reassembly segment pool.
#       - size: 4               # Size of the (data) segment for a pool
#         prealloc: 256         # Number of segments to prealloc and keep
#                               # in the pool.
#     zero-copy-size: 128       # This option sets in bytes the value at
#                               # which segment data is passed to the app
#                               # layer API directly. Data sizes equal to
#                               # and higher than the value set are passed
#                               # on directly.
#
stream:
  memcap: 512mb #64mb
  checksum-validation: yes # reject wrong csums
  inline: auto # auto will use inline mode in IPS mode, yes or no set it statically
  reassembly:
    memcap: 2gb #256mb
    depth: 8mb # 1mb # reassemble 1mb into a stream
    toserver-chunk-size: 2560
    toclient-chunk-size: 2560
    randomize-chunk-size: yes
    #randomize-chunk-range: 10
    #raw: yes
    #chunk-prealloc: 250
    #segments:
    #  - size: 4
    #    prealloc: 256
    #  - size: 16
    #    prealloc: 512
    #  - size: 112
    #    prealloc: 512
    #  - size: 248
    #    prealloc: 512
    #  - size: 512
    #    prealloc: 512
    #  - size: 768
    #    prealloc: 1024
    #  # 'from_mtu' means that the size is mtu - 40,
    #  # or 1460 if the mtu couldn't be determined.
    #  - size: from_mtu
    #    prealloc: 1024
    #  - size: 65535
    #    prealloc: 128
    #zero-copy-size: 128
</code></pre>
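<p>With the defaults quoted in the comments above (chunk size 2560, randomize-chunk-range 10), the effective raw-inspection chunk size is drawn from the interval [(1 - 10/100) * 2560, (1 + 10/100) * 2560]. A quick check of those bounds (illustrative arithmetic only, not engine code):</p>

```python
def chunk_size_bounds(chunk_size, range_pct=10):
    """Bounds for the randomized chunk size, per the
    randomize-chunk-range formula in the stream comments above."""
    low = round((1 - range_pct / 100) * chunk_size)
    high = round((1 + range_pct / 100) * chunk_size)
    return low, high

print(chunk_size_bounds(2560))  # (2304, 2816)
```

<p>So each run inspects raw-stream chunks somewhere between 2304 and 2816 bytes, which is what makes the chunk boundaries hard for an attacker to predict.</p>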
<pre><code># Host table:
#
# The host table is used by the tagging and per-host thresholding
# subsystems.
#
host:
  hash-size: 4096
  prealloc: 1000
  memcap: 32mb

# IP Pair table:
#
# Used by xbits 'ippair' tracking.
#
#ippair:
#  hash-size: 4096
#  prealloc: 1000
#  memcap: 32mb

##
## Performance tuning and profiling
##
</code></pre>
<pre><code># The detection engine builds internal groups of signatures. The engine
# allows us to specify the profile to use for them, to manage memory in an
# efficient way while keeping good performance. For the profile keyword you
# can use the words "low", "medium", "high" or "custom". If you use custom,
# make sure to define the values under "custom-values" as suits you.
# Usually you would prefer medium/high/low.
#
# "sgh mpm-context" indicates how the staging should allot mpm contexts for
# the signature groups. "single" indicates the use of a single context for
# all the signature group heads. "full" indicates an mpm-context for each
# group head. "auto" lets the engine decide the distribution of contexts
# based on the information the engine gathers on the patterns from each
# group head.
#
# The option inspection-recursion-limit is used to limit the recursive
# calls in the content inspection code. For certain payload-sig
# combinations, we might end up taking too much time in the content
# inspection code. If the argument specified is 0, the engine uses an
# internally defined default limit. If no value is specified, no limit is
# placed on the recursion.
detect:
  profile: medium
  custom-values:
    toclient-groups: 3
    toserver-groups: 25
  sgh-mpm-context: auto
  inspection-recursion-limit: 3000
  # If set to yes, the loading of signatures will be done after the capture
  # is started. This will limit the downtime in IPS mode.
  #delayed-detect: yes

  prefilter:
    # default prefiltering setting. "mpm" only creates MPM/fast_pattern
    # engines. "auto" also sets up prefilter engines for other keywords.
    # Use --list-keywords=all to see which keywords support prefiltering.
    default: mpm

  # The grouping values above control how many groups are created per
  # direction. Port whitelisting forces that port to get its own group.
  # Very common ports will benefit, as well as ports with many expensive
  # rules.
  grouping:
    #tcp-whitelist: 53, 80, 139, 443, 445, 1433, 3306, 3389, 6666, 6667, 8080
    #udp-whitelist: 53, 135, 5060

  profiling:
    # Log the rules that made it past the prefilter stage, per packet.
    # Default is off. The threshold setting determines how many rules
    # must have made it past pre-filter for that rule to trigger the
    # logging.
    #inspect-logging-threshold: 200
    grouping:
      dump-to-disk: false
      include-rules: false # very verbose
      include-mpm-stats: false
</code></pre>
<pre><code># Select the multi pattern algorithm you want to use for scanning/searching
# in the engine.
#
# The supported algorithms are:
#   "ac"      - Aho-Corasick, default implementation
#   "ac-bs"   - Aho-Corasick, reduced memory implementation
#   "ac-cuda" - Aho-Corasick, CUDA implementation
#   "ac-ks"   - Aho-Corasick, "Ken Steele" variant
#   "hs"      - Hyperscan, available when built with Hyperscan support
#
# The default mpm-algo value of "auto" will use "hs" if Hyperscan is
# available, "ac" otherwise.
#
# The mpm you choose also decides the distribution of mpm contexts for
# signature groups, specified by the conf - "detect.sgh-mpm-context".
# Selecting "ac" as the mpm would require "detect.sgh-mpm-context"
# to be set to "single", because of ac's memory requirements, unless the
# ruleset is small enough to fit in memory, in which case one can
# use "full" with "ac". The rest of the mpms can be run in "full" mode.
#
# There is also a CUDA pattern matcher (only available if Suricata was
# compiled with --enable-cuda): b2g_cuda. Make sure to update your
# max-pending-packets setting above as well if you use b2g_cuda.
mpm-algo: auto

# Select the matching algorithm you want to use for single-pattern
# searches.
#
# Supported algorithms are "bm" (Boyer-Moore) and "hs" (Hyperscan, only
# available if Suricata has been built with Hyperscan support).
#
# The default of "auto" will use "hs" if available, otherwise "bm".
spm-algo: auto
</code></pre>
<pre><code># Suricata is multi-threaded. Here the threading can be influenced.
threading:
  set-cpu-affinity: yes #no
  # Tune cpu affinity of threads. Each family of threads can be bound
  # to specific CPUs.
  #
  # These 2 apply to all runmodes:
  #   management-cpu-set is used for flow timeout handling, counters
  #   worker-cpu-set is used for 'worker' threads
  #
  # Additionally, for autofp these apply:
  #   receive-cpu-set is used for capture threads
  #   verdict-cpu-set is used for IPS verdict threads
  #
  cpu-affinity:
    - management-cpu-set:
        cpu: [ 0 ]  # include only these cpus in affinity settings
    - receive-cpu-set:
        cpu: [ 0 ]  # include only these cpus in affinity settings
    - worker-cpu-set:
        cpu: [ 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54 ] # [ "all" ]
        mode: "exclusive"
        # Explicitly use 3 threads instead of computing the number via the
        # detect-thread-ratio variable:
        # threads: 3
        prio:
          # low: [ 0 ]
          # medium: [ "1-2" ]
          # high: [ 3 ]
          default: "high" # "medium"
    #- verdict-cpu-set:
    #    cpu: [ 0 ]
    #    prio:
    #      default: "high"
  #
  # By default Suricata creates one "detect" thread per available CPU/CPU
  # core. This setting allows controlling that behaviour. A ratio setting
  # of 2 will create 2 detect threads for each CPU/CPU core. So for a dual
  # core CPU this will result in 4 detect threads. If values below 1 are
  # used, fewer threads are created. So on a dual core CPU a setting of
  # 0.5 results in 1 detect thread being created. Regardless of the
  # setting, at minimum 1 detect thread will always be created.
  #
  detect-thread-ratio: 0.5 #1.0
</code></pre>
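<p>The detect-thread-ratio arithmetic described above (2 gives 4 threads on a dual core, 0.5 gives 1, and never fewer than 1) can be checked with a one-liner. This is a sketch of the documented behaviour, not Suricata's source:</p>

```python
def detect_threads(cpu_cores, ratio):
    """Number of detect threads for a given core count and
    detect-thread-ratio, with the documented floor of 1 thread."""
    return max(1, int(cpu_cores * ratio))

print(detect_threads(2, 2.0))  # 4
print(detect_threads(2, 0.5))  # 1
```

<p>With the "detect-thread-ratio: 0.5" set in this config, a 56-core box like the worker-cpu-set above would run 28 detect threads.</p>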
<pre><code># Luajit has a strange memory requirement: its 'states' need to be in the
# first 2G of the process' memory.
#
# 'luajit.states' is used to control how many states are preallocated.
# State use: per detect script: 1 per detect thread. Per output script: 1
# per script.
luajit:
  states: 128
</code></pre>
<pre><code># Profiling settings. Only effective if Suricata has been built with the
# --enable-profiling configure flag.
#
profiling:
  # Run profiling for every xth packet. The default is 1, which means we
  # profile every packet. If set to 1000, one packet is profiled for every
  # 1000 received.
  sample-rate: 100

  # rule profiling
  rules:
    # Profiling can be disabled here, but it will still have a
    # performance impact if compiled in.
    enabled: yes
    filename: rule_perf.log
    append: yes

    # Sort options: ticks, avgticks, checks, matches, maxticks
    sort: avgticks

    # Limit the number of items printed at exit (ignored for json).
    limit: 100

    # output to json
    json: yes

  # per keyword profiling
  keywords:
    enabled: yes
    filename: keyword_perf.log
    append: yes

  # per rulegroup profiling
  rulegroups:
    enabled: yes
    filename: rule_group_perf.log
    append: yes

  # packet profiling
  packets:
    # Profiling can be disabled here, but it will still have a
    # performance impact if compiled in.
    enabled: yes
    filename: packet_stats.log
    append: yes

    # per packet csv output
    csv:
      # Output can be disabled here, but it will still have a
      # performance impact if compiled in.
      enabled: no
      filename: packet_stats.csv

  # profiling of locking. Only available when Suricata was built with
  # --enable-profiling-locks.
  locks:
    enabled: no
    filename: lock_stats.log
    append: yes

  pcap-log:
    enabled: no
    filename: pcaplog_stats.log
    append: yes
</code></pre>
<pre><code>##
## Netfilter integration
##

# When running in NFQ inline mode, it is possible to use a simulated
# non-terminal NFQUEUE verdict.
# This makes it possible to send all needed packets to suricata via a rule
# like:
#   iptables -I FORWARD -m mark ! --mark $MARK/$MASK -j NFQUEUE
# Below that, you can have your standard filtering ruleset. To activate
# this mode, you need to set mode to 'repeat'.
# If you want packets to be sent to another queue after an ACCEPT decision,
# set mode to 'route' and set the next-queue value.
# On linux &gt;= 3.1, you can set batchcount to a value &gt; 1 to improve
# performance by processing several packets before sending a verdict
# (worker runmode only).
# On linux &gt;= 3.6, you can set the fail-open option to yes to have the
# kernel accept the packet if suricata is not able to keep pace.
# The bypass mark and mask can be used to implement NFQ bypass. If the
# bypass mark is set then NFQ bypass is activated. Suricata will set the
# bypass mark/mask on packets of a flow that needs to be bypassed. The
# Netfilter ruleset has to directly accept all packets of a flow once a
# packet has been marked.
nfq:
#  mode: accept
#  repeat-mark: 1
#  repeat-mask: 1
#  bypass-mark: 1
#  bypass-mask: 1
#  route-queue: 2
#  batchcount: 20
#  fail-open: yes
</code></pre>
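<p>In 'repeat' mode, the repeat-mark/repeat-mask pair behaves like any Netfilter mark match: a packet is considered already-verdicted when its mark, masked, equals the configured value, which is exactly what the <code>! --mark $MARK/$MASK</code> rule above tests. A small model of that check (a hypothetical helper mirroring iptables mark/mask semantics, not Suricata code):</p>

```python
def already_verdicted(packet_mark, mark=1, mask=1):
    """True when the packet carries the repeat-mark under repeat-mask,
    i.e. Suricata already issued a verdict for it in 'repeat' mode and
    the iptables rule should skip re-queueing it."""
    return (packet_mark & mask) == (mark & mask)

print(already_verdicted(0x0))  # False: not yet seen, gets queued
print(already_verdicted(0x1))  # True: skips the NFQUEUE rule
```

<p>Only the bits selected by the mask matter, so other firewall marks on the packet are left untouched.</p>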
<pre><code># nflog support
nflog:
  # netlink multicast group
  # (the same as the iptables --nflog-group param)
  # Group 0 is used by the kernel, so you can't use it
  - group: 2
    # netlink buffer size
    buffer-size: 18432
  # put default value here
  - group: default
    # set number of packets to queue inside the kernel
    qthreshold: 1
    # set the delay before flushing packets in the queue inside the kernel
    qtimeout: 100
    # netlink max buffer size
    max-size: 20000

##
## Advanced Capture Options
##

# general settings affecting packet capture
capture:
  # disable NIC offloading. It's restored when Suricata exits.
  # Enabled by default.
  #disable-offloading: false
  #
  # disable checksum validation. Same as setting '-k none' on the
  # commandline.
  #checksum-validation: none
</code></pre>
<pre><code># Netmap support
#
# Netmap operates with the NIC directly in the driver, so you need FreeBSD,
# which has built-in netmap support, or to compile and install the netmap
# module and an appropriate NIC driver on your Linux system.
# To reach maximum throughput, disable all receive-, segmentation- and
# checksum-offloading on the NIC.
# Disabling Tx checksum offloading is *required* for connecting the OS
# endpoint with the NIC endpoint.
# You can find more information at https://github.com/luigirizzo/netmap
#
netmap:
  # To specify an OS endpoint add a plus sign at the end (e.g. "eth0+")
  - interface: eth2
    # Number of receive threads. "auto" uses the number of RSS queues on
    # the interface.
    #threads: auto
    # You can use the following variables to activate netmap tap or IPS
    # mode. If copy-mode is set to ips or tap, the traffic coming to the
    # current interface will be copied to the copy-iface interface. If
    # 'tap' is set, the copy is complete. If 'ips' is set, packets matching
    # a 'drop' action will not be copied.
    # To specify the OS as the copy-iface (so the OS can route packets, or
    # forward to a service running on the same machine) add a plus sign at
    # the end (e.g. "copy-iface: eth0+"). Don't forget to set up a
    # symmetrical eth0+ -&gt; eth0 for return packets. Hardware checksumming
    # must be *off* on the interface if using an OS endpoint (e.g.
    # 'ifconfig eth0 -rxcsum -txcsum -rxcsum6 -txcsum6' for FreeBSD or
    # 'ethtool -K eth0 tx off rx off' for Linux).
    #copy-mode: tap
    #copy-iface: eth3
    # Set to yes to disable promiscuous mode
    # disable-promisc: no
    # Choose checksum verification mode for the interface. At the moment
    # of capture, some packets may have an invalid checksum due to
    # offloading of the checksum computation to the network card.
    # Possible values are:
    #  - yes: checksum validation is forced
    #  - no: checksum validation is disabled
    #  - auto: suricata uses a statistical approach to detect when
    #    checksum off-loading is used.
    # Warning: 'checksum-validation' must be set to yes to have any
    # validation
    #checksum-checks: auto
    # BPF filter to apply to this interface. The pcap filter syntax
    # applies here.
    #bpf-filter: port 80 or udp
  #- interface: eth3
  #  threads: auto
  #  copy-mode: tap
  #  copy-iface: eth2
  # Put default values here
  - interface: default
</code></pre>
<pre><code># PF_RING configuration, for use with native PF_RING support.
# For more info see http://www.ntop.org/products/pf_ring/
pfring:
  - interface: ...
    # Number of receive threads (>1 will enable experimental flow pinned
    # runmode)
    threads: 10

    # Default clusterid. PF_RING will load balance packets based on flow.
    # All threads/processes that will participate need to have the same
    # clusterid.
    cluster-id: 99

  # Second interface
  - interface: ...
    threads: 10
    cluster-id: 93
    cluster-type: cluster_flow

  # Put default values here
  - interface: default
    #threads: 2

    # Default PF_RING cluster type. PF_RING can load balance per flow.
    # Possible values are cluster_flow or cluster_round_robin.
    cluster-type: cluster_flow

    # bpf filter for this interface
    #bpf-filter: tcp

    # Choose checksum verification mode for the interface. At the time of
    # capture, some packets may have an invalid checksum because the checksum
    # computation was offloaded to the network card.
    # Possible values are:
    #  - rxonly: only compute checksum for packets received by network card.
    #  - yes: checksum validation is forced
    #  - no: checksum validation is disabled
    #  - auto: suricata uses a statistical approach to detect when
    #    checksum off-loading is used. (default)
    # Warning: 'checksum-validation' must be set to yes to have any validation
    #checksum-checks: auto</code></pre>
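<p>Since the reporter starts Suricata with <code>--pfring</code>, this is the capture section actually in use. Stripped of comments, the two-interface setup above reduces to the following sketch (the real interface names were redacted as "..." in the paste; eth4/eth5 here are purely hypothetical placeholders):</p>
<pre><code>pfring:
  - interface: eth4        # placeholder; actual name redacted in the report
    threads: 10
    cluster-id: 99         # all threads on one interface must share a cluster-id
    cluster-type: cluster_flow
  - interface: eth5        # placeholder; actual name redacted in the report
    threads: 10
    cluster-id: 93
    cluster-type: cluster_flow
  - interface: default</code></pre>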
<pre><code># For FreeBSD ipfw(8) divert(4) support.
# Please make sure you have ipfw_load="YES" and ipdivert_load="YES"
# in /etc/loader.conf or kldload the appropriate kernel modules.
# Additionally, you need to have an ipfw rule for the engine to see
# the packets from ipfw. For example:
#
#   ipfw add 100 divert 8000 ip from any to any
#
# The 8000 above should be the same number you passed on the command
# line, i.e. -d 8000
#
ipfw:

  # Reinject packets at the specified ipfw rule number. This config
  # option is the ipfw rule number AT WHICH rule processing continues
  # in the ipfw processing system after the engine has finished
  # inspecting the packet for acceptance. If no rule number is specified,
  # accepted packets are reinjected at the divert rule which they entered
  # and IPFW rule processing continues. No check is done to verify
  # that this rule makes sense, so care must be taken to avoid loops in ipfw.
  #
  # The following example tells the engine to reinject packets
  # back into the ipfw firewall AT rule number 5500:
  #
  # ipfw-reinjection-rule-number: 5500</code></pre>
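<p>Putting the ipfw pieces above together, a minimal divert setup might look like the following sketch. The rule number, divert port, and reinjection rule are the example values from the config comments, not values taken from the reporter's system:</p>
<pre><code># Hypothetical sketch: the divert port must match what ipfw sends to,
# e.g. on FreeBSD (as root):  ipfw add 100 divert 8000 ip from any to any
# Suricata is then started with the same port:  suricata -d 8000 -c /etc/suricata/suricata.yaml
ipfw:
  # continue ipfw processing at rule 5500 after inspection
  ipfw-reinjection-rule-number: 5500</code></pre>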
<pre><code>napatech:
    # The Host Buffer Allowance for all streams
    # (-1 = OFF, 1 - 100 = percentage of the host buffer that can be held back)
    hba: -1

    # use_all_streams set to "yes" will query the Napatech service for all configured
    # streams and listen on all of them. When set to "no" the streams config array
    # will be used.
    use-all-streams: yes

    # The streams to listen on
    streams: [1, 2, 3]</code></pre>
<pre><code># Tilera mpipe configuration. For use on Tilera TILE-Gx.
mpipe:

  # Load balancing modes: "static", "dynamic", "sticky", or "round-robin".
  load-balance: dynamic

  # Number of packets in each ingress packet queue. Must be 128, 512, 2048 or 65536
  iqueue-packets: 2048

  # List of interfaces we will listen on.
  inputs:
    - interface: xgbe2
    - interface: xgbe3
    - interface: xgbe4

  # Relative weight of memory for packets of each mPipe buffer size.
  stack:
    size128: 0
    size256: 9
    size512: 0
    size1024: 0
    size1664: 7
    size4096: 0
    size10386: 0
    size16384: 0</code></pre>
<pre><code>##
## Hardware acceleration
##</code></pre>
<pre><code># Cuda configuration.
cuda:
  # The "mpm" profile. If none of these parameters are specified, the engine's
  # internal default values are used, which are the same as the ones specified
  # in the default conf file.
  mpm:
    # The minimum length required to buffer data to the gpu.
    # Anything below this is MPM'ed on the CPU.
    # Can be specified in kb, mb, gb. A bare number indicates bytes.
    # A value of 0 indicates there's no limit.
    data-buffer-size-min-limit: 0
    # The maximum length for data that we would buffer to the gpu.
    # Anything over this is MPM'ed on the CPU.
    # Can be specified in kb, mb, gb. A bare number indicates bytes.
    data-buffer-size-max-limit: 1500
    # The ring buffer size used by the CudaBuffer API to buffer data.
    cudabuffer-buffer-size: 500mb
    # The max chunk size that can be sent to the gpu in a single go.
    gpu-transfer-size: 50mb
    # The timeout limit for batching of packets, in microseconds.
    batching-timeout: 2000
    # The device to use for the mpm. Load balancing across multiple gpus is
    # currently not supported. If you have multiple devices on your system,
    # you can specify which one to use with this option. The default is 0,
    # i.e. the first device cuda sees. To find the device-id associated with
    # the card(s) on the system, run "suricata --list-cuda-cards".
    device-id: 0
    # Number of Cuda streams used for asynchronous processing. All values > 0 are valid.
    # This option requires a device with Compute Capability > 1.0.
    cuda-streams: 2</code></pre>
<pre><code>##
## Include other configs
##</code></pre>
<pre><code># Includes. Files included here will be handled as if they were
# inlined in this configuration file.
#include: include1.yaml
#include: include2.yaml</code></pre> Suricata - Bug #2457: Suricata 4.0.4 exits with [ERRCODE: SC_ERR_FATAL(171)] - Re-entered profiling, exitinghttps://redmine.openinfosecfoundation.org/issues/2457?journal_id=110752019-02-18T23:05:01ZAndreas Herzoisf@herzandreas.de
<ul></ul><p>Do you have the same issue with Suricata 4.1.2/3?</p> Suricata - Bug #2457: Suricata 4.0.4 exits with [ERRCODE: SC_ERR_FATAL(171)] - Re-entered profiling, exitinghttps://redmine.openinfosecfoundation.org/issues/2457?journal_id=126152019-06-15T22:06:42ZAndreas Herzoisf@herzandreas.de
<ul><li><strong>Status</strong> changed from <i>New</i> to <i>Closed</i></li></ul><p>Hi, we're closing this issue since there have been no further responses. If you think this bug is still relevant, try to test it again with the most recent version of Suricata and reopen the issue. If you want to improve the bug report, please take a look at <a class="external" href="https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Reporting_Bugs">https://redmine.openinfosecfoundation.org/projects/suricata/wiki/Reporting_Bugs</a></p>