Monday, October 7, 2019

Performance with a single large dataset and single rule

Our single large dataset that we loaded in this post has been running for about 4 days, so let's take a look at our stats from stats.json.

tail -n 1 stats.json | jq -c 'select(.event_type=="stats").stats.capture'
{"kernel_packets":56282979747,"kernel_packets_delta":12499348,"kernel_drops":9927231,"kernel_drops_delta":0,"errors":0,"errors_delta":0}

So if we do some quick math:
tail -n 1 stats.json | jq '.stats.capture.kernel_drops / .stats.capture.kernel_packets'
0.00017638069349960352
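Note that jq gives us the raw ratio here, not a percentage. To express it as a percentage we can multiply by 100 in the same filter (a quick sketch, assuming the same stats.json layout as above), which works out to roughly 0.018%:

tail -n 1 stats.json | jq '.stats.capture.kernel_drops / .stats.capture.kernel_packets * 100'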

tail -n 1 stats.json | jq -c 'select(.event_type=="stats")|.stats.detect.engines'
[{"id":0,"last_reload":"2019-10-03T20:09:39.014088+0000","rules_loaded":1,"rules_failed":0}]

tail -n 1 stats.json | jq 'select(.event_type=="stats").stats.uptime'
339487
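Uptime is reported in seconds, so dividing by 86400 is a quick sanity check on the ~4 day figure (again assuming the last line of stats.json is a stats event); 339487 seconds comes out to roughly 3.9 days:

tail -n 1 stats.json | jq '.stats.uptime / 86400'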

So our sensor has processed 56,282,979,747 packets in the last ~4 days and has dropped 9,927,231 packets, which gives us a drop ratio of about 0.000176, or roughly 0.018%.
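For a bit more context on what the sensor is sustaining, we can divide kernel_packets by the uptime with the same sort of one-liner (another sketch against the same stats output), which puts the average rate at roughly 166,000 packets per second over the run:

tail -n 1 stats.json | jq '.stats.capture.kernel_packets / .stats.uptime'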

It should be noted that the only rule loaded is our dataset-based DNS rule, which means that it's time to load the ET Pro rule set and see what performance looks like. :)

