Elasticstack (ELK), Suricata and pfSense Firewall – Part 4: Kibana Visualizations and Dashboards (Pretty Pictures)

In previous parts we configured the Elasticstack (Logstash, Elasticsearch and Kibana) on an Ubuntu server instance, and the Elastic Beats Filebeat log shipper on a pfSense firewall, to ship Suricata IDPS logs to the Elasticstack instance. We also configured the Logstash pipeline to enhance and enrich the data from our Suricata IDPS to enable richer, more advanced visualizations in Kibana.

In this part we will create the visualizations and dashboard to take the data in Elasticsearch and present it in a useful way.

Kibana – Searches, Visualizations and Dashboards

Kibana is the component of the Elasticstack which allows you to explore and visualize the data in an Elasticsearch index. The three main Kibana features we will use in this article are:

  • Searches: The basis of visualizations, refining the data to include only what you want to see. They are a saved version of what you create on the Kibana ‘Discover’ tab, including filters (or search string) and the index pattern. Using a saved search means you can adjust the data provided to any visualizations linked to it.
  • Visualizations: The individual graphs, charts and pretty pictures that aggregate the data and show you, visually, the trends, spikes, dips and locations related to your data set.
  • Dashboards: Simply a collection of visualizations that can be arranged as a single page or view and that report on the same time-series of data at the same time.

For those who are just interested in getting to the pretty pictures, links to the objects which can be imported directly into Kibana are in the last section. The intermediate sections aim to provide a bit more of a narrative of the why and how of what you are getting.

Searches – The basis

For the Visualizations we will create for the Suricata Dashboards we are only interested in records with the following filter criteria:

_type: "SuricataIDPS"
event_type: "alert"

From the Discover tab, add the filters by expanding the above fields and clicking on the + next to the data you want included in the filter. As you do this you will see it appear in the filter section, just under the search box. Once you have added both filter fields, click on ‘Save’ and save the search as something meaningful, e.g. Suricata – Alerts.
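Under the hood, a saved search like this boils down to an Elasticsearch bool query with the two filter clauses. The sketch below is illustrative of what Kibana generates, not a copy of its internal request; the field names match the ones used in this series.

```python
import json

# A sketch of the filter query behind the 'Suricata - Alerts' saved
# search: both clauses must match, and filter clauses are not scored,
# which is exactly what Kibana's filter bar produces.
saved_search_query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"_type": "SuricataIDPS"}},
                {"term": {"event_type": "alert"}},
            ]
        }
    }
}

print(json.dumps(saved_search_query, indent=2))
```

Every visualization that is linked to the saved search inherits these clauses, which is why changing the saved search later adjusts all of its visualizations at once.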

If you are just logging the data that has been gathered using this series of articles then, in reality, this is not required as these filters would match all of the records in the index. However, it may be that in the future you will want to log further information from other processes or turn on further logging types from within the Suricata eve.json list. Setting the filter criteria is good practice to ensure that as your Elasticstack instance grows in what it is logging you do not have strange things occurring in visualizations that you need to go and debug.

Visualizations – The Pictures

This section will walk through the creation of a few of the visualizations, detailing some of the more interesting features. It is expected that the reader will tailor the provided visualizations to their own needs and create additional ones as required.

Visualizations are created from within the ‘Visualize’ section of Kibana. When clicking to add a visualization you can pick from a selection across various types.

Maps – GeoIP data based visualizations

These are possibly the most visually impressive visualizations available in Kibana, and well worth the extra effort of configuring your Logstash pipeline to add GeoIP data to the basic log records shipped by the Elastic Beats log shippers (or any other ingest method).

Firstly we will create a ‘Region Map’ Visualization to display the volume of alerts that are generated from IP addresses in each country:

Click on the [+] in the visualizations screen to create a new visualization and pick ‘Region Map’ from the Maps type.

The visualization needs you to choose a search source. Choose the search you created in the previous step. Once this is done you should be presented with a blank map visualization.

The Map visualization needs a few things to work: firstly, you need to tell it which field the data is coming from; secondly, how that data should be interpreted.

In the Data section pick the field (geoip.country_code2.keyword) which contains the data to be mapped. In this case it is the ISO two-letter country code.

The ‘Size’ field determines the number of records that will be returned based upon the metric and aggregate – think of it as a ‘Top N’. As we probably want all the countries, put in a reasonably large number here. Custom labels are relevant when showing legends and labels; by default they will be the field name. ‘Country’ is a little more human-friendly than geoip.country_code2.keyword, I am sure you’ll agree.


In the options section you now select the vector map of ‘World Countries’ and the join field of ‘Two Letter abbreviation’. This informs the visualization that we are looking at countries and we are passing it ISO 2-letter codes as the key in the data.

The other settings are related to the layout and visual aspects of the visualization.
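The ‘Top N’ behaviour described above corresponds to an Elasticsearch terms aggregation on the country-code field. A sketch of roughly what the Region Map asks Elasticsearch for (the size of 200 is an illustrative choice that comfortably covers all countries):

```python
import json

# Roughly the aggregation behind the Region Map: bucket the alert
# documents by two-letter country code and count each bucket.
region_map_aggs = {
    "size": 0,  # we only want the buckets, not the matching documents
    "aggs": {
        "countries": {
            "terms": {
                "field": "geoip.country_code2.keyword",
                "size": 200,  # the 'Size' / Top N setting discussed above
            }
        }
    },
}

print(json.dumps(region_map_aggs, indent=2))
```

Each returned bucket (country code plus document count) is then joined against the vector map using the join field you select in the options section.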



Click on the [>] button and you should be greeted with a nice colorful representation of the country origin of the ‘bad guys’ causing your Suricata instance to raise and log alerts.


Click on the ‘Save’ option in the top-right of the visualization and give it a meaningful name e.g. Suricata Alerts – Country.

Congratulations – you have just created your first visualization based upon Suricata log data, shipped by Filebeat to Logstash, which filtered the data to add richer fields before inserting it into Elasticsearch, from where it was then extracted by a Kibana search.

Now a Coordinate Map Heatmap visualization to display the same data in a different way:

Create a new Kibana visualization but this time choose ‘Coordinate map’. Pick the saved ‘Suricata – Alerts’ search as for the Region Map above.

In the data screen you will notice that only a single aggregation and field are available. This is because this visualization requires a geo_point data type field. The geoip.location field is the only one of these in the index, and was created by our GeoIP Logstash filter in Part 2.

In the options screen select the ‘Heatmap’ type and then click on the [>] button and you should see a map similar to the one below. Play with the sliders to learn what they do and until you are happy with the visualization based on your dataset.
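For the curious, the heatmap buckets the geo_point values into grid cells with a geohash_grid aggregation, roughly as sketched below. The precision value is illustrative; it corresponds to the granularity slider in Kibana.

```python
import json

# A sketch of the aggregation behind the Coordinate Map heatmap:
# group alert locations into geohash cells and count per cell.
heatmap_aggs = {
    "size": 0,
    "aggs": {
        "grid": {
            "geohash_grid": {
                "field": "geoip.location",  # geo_point field from the GeoIP filter
                "precision": 4,  # higher precision = smaller, more numerous cells
            }
        }
    },
}

print(json.dumps(heatmap_aggs, indent=2))
```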

Click on the ‘Save’ option in the top-right of the visualization and give it a meaningful name e.g. Suricata Alerts – Heatmap.

Vertical bar visualization with further filters

While the map visualizations are great for getting an idea of where in the world things are coming from or going to, Elasticsearch holds time-series data, so you’ll probably want to chart that data in a visualization that shows trends over time. One of those is the ‘Vertical bar’ visualization. This section shows how to create such a visualization and add further filters to refine the data beyond the saved search. This will allow us to create 3 separate visualizations, each showing the count over time of the different severity alerts.

Create a new Kibana visualization of the type ‘Vertical bar’ and pick the saved search as the data source. When you go into the visualization data section the standard Y-Axis is count. This is fine, as we are going to chart the number of records.

The next thing is the X-Axis. As we want to chart over time we need a Date Histogram. This will split the data into time buckets (a count of records in each n minutes). Bucket size is controlled by the ‘Interval’ setting; ‘auto’ means the buckets are dependent upon the span being charted, e.g. a 24-hour chart may have buckets of 30 mins, but a 1-hour chart may have buckets of 30 secs. Pick the @timestamp field as the field to be aggregated.

If you run the visualization now, what you would have charted is the count of records per time bucket over the duration specified. That’s great if you want to know the total volume (based upon the saved search filter criteria), but we want to know just the number of alerts for a given severity. Visualizations can have their own filters applied to allow this. Click on the add filter button (see below), select the alert.severity field and set it to “2”. Run the visualization and you will see only the count per time bucket of sev-2 alerts.
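Put together, the severity-2 chart amounts to the saved-search filters plus the per-visualization severity filter, bucketed by a date histogram. A sketch (the fixed 30-minute interval stands in for Kibana’s ‘auto’ setting):

```python
import json

# Roughly the request behind the 'Suricata Alerts - Sev 2' chart:
# filter to sev-2 alert events, then count documents per time bucket.
sev2_over_time = {
    "size": 0,
    "query": {
        "bool": {
            "filter": [
                {"term": {"event_type": "alert"}},
                {"term": {"alert.severity": 2}},  # the per-visualization filter
            ]
        }
    },
    "aggs": {
        "alerts_over_time": {
            "date_histogram": {
                "field": "@timestamp",
                "interval": "30m",  # Kibana's 'auto' picks this from the time span
            }
        }
    },
}

print(json.dumps(sev2_over_time, indent=2))
```

Swapping the severity value to 1 or 3 gives the queries behind the other two visualizations.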

Click on the ‘Save’ option in the top-right of the visualization and give it a meaningful name e.g. Suricata Alerts – Sev 2. Repeat twice, creating a new visualization but setting the filter for alert.severity to each of “1” and “3”. Alternatively you can change the filter on the existing saved visualization and click save, edit the name and check the box to save as a *new* visualization.

That’s the last visualization I’m going to go through step-by-step. However, there is a lot more that can be done – the best thing to do is learn by playing. In addition, all of the visualizations created for the dashboard are available at the bottom of this post as an extract which can be imported into your Kibana instance.


Dashboards – Putting it all together

Dashboards really are simple. It comes down to adding a set of visualizations onto a single page in a layout that you find useful. All of the hard work has been done in the previous sections.

Click on the dashboard option on the left bar and click on the [+] to add a new dashboard. In the dashboard editing screen you click on ‘Add’ and then add one or more saved visualizations you’d like. Hide the visualizations list if it makes it easier and then drag and resize the visualizations as you wish.

Some points to note: there is a dark and a light theme, which can be chosen under the options for the dashboard. You can further filter the data displayed by adding filters to dashboards, as we did in the ‘Suricata Alerts – Sev 2’ visualization above. Lastly, when you save the dashboard you can choose to save the time-span you have set in the top left along with the dashboard. This means that every time someone opens up the dashboard the time-span will be set as you specify. It doesn’t stop them subsequently changing the time-span on that instance of the dashboard in their session.

Exported Files

Here I provide an export of the searches, visualizations and dashboard used to create the screenshot at the top of the page. Save these files and rename them from .txt to .json.

To import go into the Kibana Management menu and select the Saved Objects section. Click [Import] and then import the searches, visualizations and dashboard json files in order.
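A common cause of a failed import is a file that got mangled in download or renaming. A small sanity check before importing (the filenames are assumptions based on the three downloads above):

```python
import json
from pathlib import Path

def count_saved_objects(path):
    """Parse a Kibana export file and return how many saved objects it holds."""
    data = json.loads(Path(path).read_text())
    return len(data) if isinstance(data, list) else 1

# Check each renamed export file is valid JSON before importing it.
for name in ["searches.json", "visualizations.json", "dashboard.json"]:
    try:
        print(f"{name}: {count_saved_objects(name)} saved object(s)")
    except (FileNotFoundError, json.JSONDecodeError) as err:
        print(f"{name}: problem - {err}")
```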

searches visualizations dashboard


In this series of 4 articles we have created an Elasticstack instance, configured our pfSense firewall to use Elastic Filebeat to ship logs to the Elasticstack instance, filtering them through Logstash to enhance the data, and then created searches, visualizations and a dashboard to display the Suricata alerts in a graphical and pleasant manner.

I hope this series of articles will benefit others and I must acknowledge a number of sources that helped me get through to this stage:

The pfSense community forums – always a wealth of information and ideas.

This site: http://pfelksuricata.3ilson.com/ which got me to a starting point on my first few installations where I was then able to expand and learn myself and refine my personal implementation as I have documented here.

The Elasticstack folks – their documentation is (mostly) understandable and their software is pretty awesome as well.

Where to now?…

I’ve been expanding the data coming from my pfSense instance and into my Elasticstack so I’m planning some further articles at some point in the future. They’ll likely be around:

Additional Suricata data (Such as HTTP and DNS)

The pfBlockerNG pfSense plugin alert data for IP blocklists and the DNS Blocker

The pfSense firewall logs

The first one is pretty straightforward and will just be an expansion on the Logstash filters and Kibana visualizations and dashboards in this series. The last two get into a whole new world of the GROK filter and patterns to take non-JSON log data and parse it into known fields. I may even cover using syslog to take data from other systems where either a log shipping tool is not feasible or is not available.