Elastic Stack (ELK), Suricata and pfSense Firewall – Part 1: Elastic Beats and pfSense configuration

Introduction

This is the first article in a series documenting how to use the Elastic Stack to report on log data from the Suricata IDPS running on the open source pfSense firewall. It covers the installation and configuration of Elastic Filebeat on pfSense to ship logs to a remote Ubuntu server running the Elastic Stack. Installation of the Elastic Stack onto Ubuntu and the configuration of Logstash and Kibana to consume and present the Suricata information will be covered in later parts. This series of articles presumes you have a working pfSense system with the Suricata pfSense package installed, configured and working.

Log Shipping (or why not syslog)

The first challenge to overcome when analyzing logs is getting the logs to somewhere useful. Local storage of logs on a device is fine, but typically you want to consolidate the logs from multiple devices into a central location so you can process them and easily correlate events across devices, and there are multiple ways this can be done. Syslog, for example, is a fairly standard and ubiquitous solution for sending logging messages to a remote system for storage and reporting. The syslog agent on the source device sends formatted messages to the destination server using UDP or TCP port 514. However, sending the messages is a ‘fire-and-forget’ affair: if the destination server is unavailable or a packet gets lost then those events are missing, and they can’t be replayed.

Typically the systems and services of interest write log files on the source device. In this case an alternative to syslog is a ‘log shipping’ solution. This can range from a basic script that copies files to a remote host via a protocol such as TFTP, through to a (typically) lightweight service or application that runs on the source device, scraping the logs from disk and transmitting them to the destination system.

Log shipping solutions tend to be more sophisticated: they can transmit events in real time or on a schedule, replay logs, guarantee that the destination has received the data, retry if the destination is unavailable, and provide flow control so that the downstream host does not become overwhelmed.

Elastic Beats (have you heard of this band?)

Elastic produces a full range of log shippers known as ‘Beats’ which run as lightweight agents on the source devices and transmit data to a destination running either Elasticsearch or Logstash. To ship the logs from Suricata on our pfSense box we will use the Filebeat agent.

Installing Filebeat on pfSense

The pfSense platform is based upon FreeBSD and is able to use native FreeBSD packages in addition to the packages available through the pfSense package system in the web GUI. There are caveats around security and supportability as these packages are not automatically updated, so if this is a potential issue for you then syslog may be your only hope (which I may cover in a later article).

The pfSense documentation site provides all the details regarding the native FreeBSD packages and their installation here: https://doc.pfsense.org/index.php/Installing_FreeBSD_Packages

Determining the current Elastic Filebeat FreeBSD package:

The package to use depends upon the version of FreeBSD running, and this in turn depends upon the version of pfSense: pfSense 2.3.x is based on FreeBSD 10, while pfSense 2.4.x is based upon FreeBSD 11.1.
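If you are unsure which FreeBSD release your pfSense installation is built on, you can confirm it from the shell (option 8 from the console menu); the output will vary with your version:

uname -mrs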

If you are using pfSense 2.3.4:

The current native package list is available here: http://pkg.freebsd.org/freebsd:10:x86:64/latest/All/

If you are using pfSense 2.4 (Released in October 2017):

The current native package list is available here:

http://pkg.freebsd.org/FreeBSD:11:amd64/latest/All/

 

Search through the list to find the beats package (which contains Filebeat). As of writing the available package version is 5.6.3 and the package file is named beats-5.6.3.txz
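If you would rather not scroll through the listing in a browser, something like the following should extract the matching package names from the HTML index directly on the pfSense shell (shown for the FreeBSD 11 repository; adjust the URL for your version. This is purely a convenience, not a required step):

fetch -q -o - 'http://pkg.freebsd.org/FreeBSD:11:amd64/latest/All/' | grep -o 'beats-[0-9.]*txz' | sort -u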

Installing the package:

Use your terminal program of choice to SSH to your pfSense box as your administrative user and pick option 8 to enter the FreeBSD shell.

Install the package using the following command:

pkg add [url to package]

e.g.:
pkg add http://pkg.freebsd.org/freebsd:10:x86:64/latest/All/beats-5.6.3.txz

The package should install under /usr/local

Executable: /usr/local/sbin/filebeat

Config File: /usr/local/etc/filebeat.yml
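To confirm the package installed correctly, and to see exactly which files it placed on disk, you can query the package database (the package name, as installed above, is simply ‘beats’):

pkg info beats
pkg info -l beats | grep filebeat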

 

Configuring Suricata to log to JSON (that’s the way we like it – ah-ha ah-ha)

JSON (JavaScript Object Notation) is a data format which is both human-readable and easy to parse. It uses name/value pairs to describe fields, objects and arrays of data, which makes it ideal for transmitting log data, where the format and the relevant fields will most likely differ between services and systems. By using JSON format logs we avoid most of the effort of configuring the Elastic Stack to parse strings and extract fields, which we would likely need to do if we were using CSV or another format for the log data.
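To give a feel for what Suricata’s EVE output looks like, below is a simplified alert record. In the real eve.json file each record is a single JSON object on a single line, and the field values shown here are purely illustrative:

{
  "timestamp": "2017-11-05T14:21:07.123456+0000",
  "event_type": "alert",
  "src_ip": "198.51.100.23",
  "src_port": 44123,
  "dest_ip": "192.168.1.10",
  "dest_port": 80,
  "proto": "TCP",
  "alert": {
    "action": "allowed",
    "signature_id": 2100498,
    "signature": "GPL ATTACK_RESPONSE id check returned root",
    "category": "Potentially Bad Traffic",
    "severity": 2
  }
}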

It is possible to configure logging at the level of each interface within Suricata. Within the Suricata interfaces screen, click to edit the interface that you want to gather log data for and scroll down to the ‘Logging Settings’ section (pfSense version 2.3.4 shown):

The relevant settings are:

EVE JSON Log: Checked

EVE Output Type: Select ‘FILE’

EVE Logged Info: Check ‘Alerts’ and ‘Suricata will log additional payload data’

The other settings are not relevant for this series of articles.

Once you have configured the above settings don’t forget to click the ‘Save’ button at the bottom of the screen.

The logs can be found in a sub-directory relevant to your interface within:

/var/log/suricata/

The eve.json log is the file we are interested in. Confirm it is receiving data using cat or tail on the file. If the file is not being populated you may have to restart the Suricata service from the pfSense services control panel.
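For example, to watch events as they are written (the wildcard picks up the interface-specific sub-directory, whatever it happens to be named on your system):

tail -f /var/log/suricata/*/eve.json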

Configuring Filebeat to ship logs

The first thing to do is create a directory for Filebeat to place its own logs into. The config file we will create will ensure that Filebeat logs to this location to provide us some useful data for debugging:

mkdir /var/log/filebeat

Then create a filebeat.yml configuration file containing the following at: /usr/local/etc/filebeat.yml

Indentation is meaningful in YAML. Make sure that you use spaces, rather than tab characters, to indent sections (see the ‘YAML tips and gotchas’ page in the Elastic documentation).

#------------------------- File prospectors --------------------------------
filebeat.prospectors:

- input_type: log
  paths:
  - /var/log/suricata/*/eve.json*
  fields_under_root: true
  fields:
    type: "suricataIDPS"
    tags: ["SuricataIDPS","JSON"]

#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["192.168.1.123:5044"]

#---------------------------- filebeat logging -------------------------------

logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat.log
  keepfiles: 7

The actual config file syntax is available on the Elastic website but this config file does the following:

Process all files in subdirectories under /var/log/suricata/ that match the filespec: eve.json*

The files will be text (log) files

For each log entry add the field ‘type’ at the root level with the value ‘suricataIDPS’ – this will be used to determine the processing within Logstash once the data reaches our destination server (see the sketch after this list)

For each log entry add the tags ‘SuricataIDPS’ and ‘JSON’ – these are arbitrary tags added to the records for use in queries in Kibana.

Output the events to the logstash process at 192.168.1.123:5044

Log to the file /var/log/filebeat/filebeat.log and keep no more than 7 log files.
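The ‘type’ field is what will let Logstash route these events to Suricata-specific processing. The Logstash pipeline itself is covered in the next part of this series, so treat the following purely as an illustrative sketch of why the field is added; a filter block on the destination server could match on it like this:

filter {
  # only apply Suricata handling to events tagged by our Filebeat config
  if [type] == "suricataIDPS" {
    json {
      source => "message"
    }
  }
}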

 

Test the config:

/usr/local/sbin/filebeat -c /usr/local/etc/filebeat.yml -configtest

This should indicate whether there are any issues with the config file. Note that the config file is sensitive to tabs in indentation, so if you have used these rather than spaces an error may be thrown and it will not be obvious what the issue is.

Test run:

/usr/local/sbin/filebeat -c /usr/local/etc/filebeat.yml -N

This will run Filebeat and process the Suricata logs. The -N option prevents the events from being sent to the destination server (as we don’t have one yet).

Use tail -f /var/log/filebeat/filebeat.log to see what’s happening.

 

Configure pfSense to start Filebeat on startup

Note: This section has been edited to replace the use of the Shellcmd pfSense package. This is because the commands run during startup by Shellcmd are blocking (even when backgrounded using ‘&’). This causes a number of issues: no further Shellcmd commands run after the first has started and blocked, and some services may fail to start automatically (including Suricata!). So while Shellcmd seemed like the right way to do things, it doesn’t work as expected.

The beats package installer was good enough to create some rc.d startup scripts for filebeat in:

/usr/local/etc/rc.d

Because this is pfSense, and therefore a customised FreeBSD implementation, scripts in this directory need to have the .sh file extension in order to run. Copy the filebeat script:

cp /usr/local/etc/rc.d/filebeat /usr/local/etc/rc.d/filebeat.sh

If you take a look at the script, it instructs that some settings be configured in /etc/rc.conf.

Again, due to the pfSense customisation, this file is overwritten on boot and should not be edited. However, creating an /etc/rc.conf.local file will take care of things for us. Set Filebeat to start and specify the config file as follows:

echo "filebeat_enable=yes" >> /etc/rc.conf.local
echo "filebeat_conf=/usr/local/etc/filebeat.yml" >> /etc/rc.conf.local

This will cause Filebeat to start on boot. Reboot your pfSense firewall and check with ps:

/var/run: ps aux | grep beat
root 15227 0.0 0.0 14400 1948 - Is 10:58PM 0:00.00 daemon: /usr/local/sbin/filebeat[15564] (daemon)
root 15564 0.0 1.2 83692 49664 - S 10:58PM 0:18.47 /usr/local/sbin/filebeat -path.home /var/db/beats/file
root 16999 0.0 0.0 14400 1948 - Is 10:58PM 0:00.00 daemon: /usr/local/sbin/metricbeat[17318] (daemon)
root 17318 0.0 0.5 71184 20352 - S 10:58PM 0:08.25 /usr/local/sbin/metricbeat -path.home /var/db/beats/me

Monitoring Filebeat

This is as simple as issuing the following command from an SSH shell and watching the output:

tail -f /var/log/filebeat/filebeat.log

Re-sending Logs

One of the advantages of Filebeat is that it keeps track of what files and events it has processed and which ones have been sent to and acknowledged by the destination. This is great in production but it is likely that you are going to need to re-send the same logs numerous times during configuration of the solution. To do this you need to delete the Filebeat registry and restart the process. To achieve this perform the following:

Ensure there are no filebeat processes running using ps aux

If any are still running, kill them (kill process_id), otherwise the process may re-create the registry with the current progress (which we don’t want)

Delete the registry file:

rm /usr/local/sbin/data/registry

Restart Filebeat:

/usr/local/etc/rc.d/filebeat.sh stop

/usr/local/etc/rc.d/filebeat.sh start
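Putting the above together, a typical re-send cycle from the pfSense shell looks something like this (the registry path matches the one used earlier in this article; if Filebeat was started with a different -path.home the registry will live under that directory instead):

# stop the service and make sure no filebeat processes remain
/usr/local/etc/rc.d/filebeat.sh stop
ps aux | grep '[f]ilebeat'

# remove the registry so previously shipped events are forgotten
rm /usr/local/sbin/data/registry

# start Filebeat again; it will re-read the logs from the beginning
/usr/local/etc/rc.d/filebeat.sh start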

 

Conclusion

We have now performed all the steps needed to get our source device (the pfSense firewall) to run Filebeat and ship the Suricata logs in JSON format to Logstash running on a remote system. But we don’t yet have a configured remote system to receive these logs. The next article will cover installing the Elastic Stack on an Ubuntu server and configuring Logstash to receive and process the logs into Elasticsearch indexes, ready for Kibana to visualize!