Well, it would be unfair, after talking about how to integrate Fl0wer with ELK, to ignore one of the market leaders, so in this post I'll describe the integration of Fl0wer with Splunk.

Well, it's pretty easy.

My Journey to Splunk

Installing Splunk is even easier: you just need to get two packages for your platform:

  1. Splunk
  2. SplunkForwarder

Splunk will store all the data pushed into it by the SplunkForwarder (and, potentially, by other sources too).

So, the first step is to install the Splunk packages; on my Xubuntu 14.04 laptop it was as easy as:

# dpkg -i splunk-6.6.2-4b804538c686-linux-2.6-amd64.deb

# dpkg -i splunkforwarder-6.6.2-4b804538c686-linux-2.6-amd64.deb

Obviously, if you are already running it, there is no need to install it :-D

Once you have installed the packages, edit your /opt/fl0wer/etc/fl0wer.conf file and set storage_format to CSV or CSVFULL, depending on the level of detail you want. I chose CSVFULL to experiment, but if you want you can use the simpler CSV; it's up to you.
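Just as a sketch of what I mean (check your own fl0wer.conf for the exact syntax, since I'm assuming a simple key/value format here):

```
# /opt/fl0wer/etc/fl0wer.conf (sketch -- syntax may differ in your version)
# CSV     = basic field set
# CSVFULL = every field, more detail, bigger files
storage_format = CSVFULL
```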

The flow of the whole thing is:

Fl0wer -> logs in CSV to /opt/fl0wer/data/netflowdata.csv

SplunkForwarder -> continuously reads (in "tail" mode) from /opt/fl0wer/data/netflowdata.csv, parses the fields and sends the data to the Splunk Indexer

Splunk Indexer -> receives the data from the forwarder, stores it in its indexes and does all the rest.

Now that the packages are installed and Fl0wer is configured and storing data in CSV at your favorite location in the filesystem, let's see how to add Fl0wer as a data source.

Start Splunk with: # /opt/splunk/bin/splunk start

Accept the license and once installation is complete, you will be able to access it at http://localhost:8000

Once the Splunk daemon has started, enable the reception of forwarder messages using a command like:

# /opt/splunk/bin/splunk enable listen 9997

So now we have the Splunk Indexer ready to receive data from the Forwarders.
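Under the hood, the enable listen command just writes a splunktcp input stanza into the indexer's configuration, roughly like this (the exact file location may vary with your setup):

```
# /opt/splunk/etc/system/local/inputs.conf on the Indexer
[splunktcp://9997]
disabled = 0
```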

Let's start our SplunkForwarder with: # /opt/splunkforwarder/bin/splunk start

As we did before, accept the license and once installation is complete, let's make it "register" itself with the Splunk Indexer with the following command:

# /opt/splunkforwarder/bin/splunk add forward-server localhost:9997

Well, we're almost done; we just need to configure the SplunkForwarder to monitor our netflow data and forward it to the Indexer with the following command:

# /opt/splunkforwarder/bin/splunk add monitor /opt/fl0wer/data/netflowdata.csv -index main -sourcetype csv
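If you prefer editing configuration files over the CLI, the add monitor command corresponds to a monitor stanza roughly like the one below (the file the CLI actually writes to may vary between system/local and an app's local directory):

```
# e.g. /opt/splunkforwarder/etc/apps/search/local/inputs.conf
[monitor:///opt/fl0wer/data/netflowdata.csv]
index = main
sourcetype = csv
disabled = 0
```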

Done !

Now we can connect to the Splunk web application at http://localhost:8000 and login with admin/changeme (you will be forced to change the default password).

You should land on the search page and see something like this:

[Screenshot: Splunk Search]


Doh, data is happily flowing into Splunk !
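As a quick sanity check, you can run a simple search over the new data. Something like this counts the events that arrived in the last 15 minutes, grouped by source file (source is a built-in Splunk field, so this works regardless of what columns your CSV contains):

```
index=main sourcetype=csv earliest=-15m
| stats count by source
```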

Now, time for some fancy charts. Splunk has something similar to ELK's Timelion: Visualizations, where you can produce all kinds of charts, real-time plots, reports and whatever you want.
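For example, a timechart is a one-liner in SPL. Note that the field name here (bytes) is just a placeholder: the actual fields depend on the header of the CSV that Fl0wer writes, so adapt it to your data:

```
index=main sourcetype=csv
| timechart span=5m sum(bytes) AS total_bytes
```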

This is something horrible I created half an hour after installing Splunk; I'm sure you can customize it the way you want !

[Screenshot: Example Splunk Dashboard]


Well, that's all for today. I will skip the tutorial on how to import your CSV data into your favorite spreadsheet, because I am sure you are perfectly capable of doing it yourself :-)