Optimization #3322
open
As we discussed offline, it would be nice to create some kind of benchmarking framework where we could validate such changes. Pure pcap tests may not always give enough insight. For example, with the Boyer-Moore optimizations, pcap-based tests showed no difference, while I think more micro-level benchmarks would have shown something.
It would be nice if this benchmarking framework handled caches realistically.
Taking the Boyer-Moore optimizations as an example (one less call to alloc), I am not sure a naive benchmark would show much difference: the additional call to alloc would repeatedly grab the same cached memory area, whereas in a real Suricata execution this would not be the case.
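For illustration, here is a minimal sketch of the kind of cache-aware micro-benchmark meant above. This is hypothetical harness code, not part of Suricata: the pollute_cache helper, POLLUTE_SIZE, and ITERATIONS are made-up names. The idea is simply to evict the allocator's blocks from cache between iterations, so the alloc call pays a realistic miss cost instead of hitting the same hot block every time.

```
/* Hypothetical cache-aware micro-benchmark sketch (not Suricata code). */
#define _POSIX_C_SOURCE 199309L
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define ITERATIONS   10000
#define POLLUTE_SIZE (8 * 1024 * 1024) /* larger than most L2/L3 slices */

static uint8_t pollute_buf[POLLUTE_SIZE];

/* Walk a large buffer so previously freed blocks fall out of cache,
 * approximating the cold-cache conditions of a real Suricata run. */
static void pollute_cache(void)
{
    for (size_t i = 0; i < POLLUTE_SIZE; i += 64)
        pollute_buf[i]++;
}

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* One unit of the work being measured: an alloc, a touch, a free. */
static void work(void)
{
    void *p = malloc(4096);
    if (p != NULL) {
        memset(p, 0, 4096);
        free(p);
    }
}

int main(void)
{
    double naive = 0, aware = 0;

    /* Naive variant: tight loop, the allocator keeps returning the
     * same cache-hot block, so the cost looks artificially small. */
    double t0 = now_sec();
    for (int i = 0; i < ITERATIONS; i++)
        work();
    naive = now_sec() - t0;

    /* Cache-aware variant: evict between iterations and time only the
     * work itself, so each allocation pays a realistic miss cost. */
    for (int i = 0; i < ITERATIONS; i++) {
        pollute_cache();
        t0 = now_sec();
        work();
        aware += now_sec() - t0;
    }

    printf("naive: %.4fs  cache-aware: %.4fs\n", naive, aware);
    return 0;
}
```

On a typical machine the cache-aware number should come out noticeably higher, which is exactly the difference a naive loop benchmark would hide.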
- Assignee set to Philippe Antoine
- Target version set to TBD
- Status changed from New to In Review
There are a few other hash functions that might be interesting. Google's CityHash comes to mind, as it's their "general purpose" hash. SpookyHash and xxHash might be options as well; I think the stackexchange page lists them too.
Ultimately, it depends on how much time you want to spend on this versus how happy you are with the current hash, plus what the usage patterns of the hash are and what data is being hashed. ;)
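As a starting point for comparing candidates, here is a minimal sketch that hashes the same key with a classic table hash and with xxHash. It assumes the xxHash library is installed (xxhash.h; XXH64 is part of its public API and links with -lxxhash). The djb2 function stands in for a generic existing table hash, and the flow-key string is a made-up example.

```
/* Sketch: compare a simple table hash with xxHash on one key.
 * Build (assuming libxxhash is installed): cc hashcmp.c -lxxhash */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <xxhash.h>

/* djb2: the kind of simple multiplicative hash many hash tables use. */
static uint32_t djb2(const uint8_t *data, size_t len)
{
    uint32_t h = 5381;
    for (size_t i = 0; i < len; i++)
        h = h * 33 + data[i];
    return h;
}

int main(void)
{
    /* Made-up flow key, just for illustration. */
    const char *key = "192.168.1.1:443->10.0.0.5:52811";
    size_t len = strlen(key);

    printf("djb2:  %08x\n", djb2((const uint8_t *)key, len));
    printf("XXH64: %016llx\n",
           (unsigned long long)XXH64(key, len, 0 /* seed */));
    return 0;
}
```

For throughput comparisons the same calls would go inside a timed loop over representative keys; as noted above, the usage pattern and key sizes matter as much as raw hash speed.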
- Blocked by Bug #4265: QA lab: add possibility to do repeatable replay tests added
- Assignee changed from Philippe Antoine to Community Ticket
- Priority changed from Normal to Low
This does not seem like the most important area to optimize...