
Creating An Ingest Pipeline

Once you begin ingesting raw logs into Kibana with Filebeat, you will notice that all of the data arrives lumped together in the message field. In that form it cannot be used to build meaningful visuals. Luckily, there is a straightforward way to parse the data into fields that are meaningful and measurable.

To do this, you will need to set up an Ingest Pipeline.

 

1. Create the custom patterns that will break the data in your logs out into individual fields.

 

To do this we will use Grok to debug a sample log message from the data set. Grok is a pattern-matching syntax used to parse arbitrary text and structure it. Grok works well for syslog, Apache and other web server logs, MySQL logs, and, in general, any log format written for human consumption.

You can refer to the Grok section in the Vizion.ai forum for more tips on how to create custom Grok patterns.
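
For example, a raw Apache-style access log line with a leading WAF IP might look like this (a made-up sample; your logs will differ):

    10.0.0.5 203.0.113.9 - - [22/Jan/2019:13:17:10 +0000] "GET /index.html HTTP/1.1" 200 4523 "-" "Mozilla/5.0"

A Grok expression along these lines would break it out into individual fields. %{IP} and %{COMBINEDAPACHELOG} are built-in Grok patterns, while the 'waf_ip' field name is our own choice:

    %{IP:waf_ip} %{COMBINEDAPACHELOG}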

 

2. Create the custom pipeline that will use the Grok patterns as a filter for the log data.

To do this we will use the DevTools section of your Vizion.ai Kibana.

First, though, since this specific example will be using geographic location, we will need to create a 'geo_point' mapping that we can send GeoIP data to. For this to work, the index must already exist. It can be created manually through DevTools or automatically when shipping logs into the Elastic Stack.
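
For example, a request like the following in DevTools marks a 'location' field as a geo_point. Here 'filebeat-test' is a placeholder, so substitute your own index name, and note that the exact mapping syntax varies slightly between Elastic Stack versions:

    PUT filebeat-test/_mapping
    {
      "properties": {
        "location": {
          "type": "geo_point"
        }
      }
    }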

 

Now, create the pipeline. Here is a screenshot of a custom pipeline created to handle Apache2 access logs that carry an extra WAF IP address which needs to be filtered out. We have also included a GeoIP processor for the client IP addresses, with its output written to the newly created 'location' field so that we can utilize the global map visualization.

You can see that this pipeline uses multiple 'Processors' that further parse and structure the data. The processors used here are: Grok, GeoIP, Set, and User Agent.
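
Since the exact processors depend on your log format, treat the following as a rough sketch of what such a pipeline could look like in DevTools rather than a drop-in configuration. The Grok pattern is the one from step 1, and the 'clientip' and 'agent' fields it produces come from the built-in %{COMBINEDAPACHELOG} pattern:

    PUT _ingest/pipeline/test
    {
      "description": "Parse Apache2 access logs that carry a leading WAF IP",
      "processors": [
        {
          "grok": {
            "field": "message",
            "patterns": ["%{IP:waf_ip} %{COMBINEDAPACHELOG}"]
          }
        },
        {
          "geoip": {
            "field": "clientip"
          }
        },
        {
          "set": {
            "field": "location",
            "value": "{{geoip.location.lat}},{{geoip.location.lon}}"
          }
        },
        {
          "user_agent": {
            "field": "agent"
          }
        }
      ]
    }

The Set processor copies the coordinates produced by GeoIP into 'location' as a 'lat,lon' string, which is one of the formats the geo_point type accepts.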

For a full list of all available processors and what they do, refer to the ingest processor reference in the Elastic documentation.

 

3. Point the Filebeat instance that is shipping logs to Vizion.ai at the newly created pipeline.

Add the pipeline in the Elasticsearch Output section of the filebeat.yml. This pipeline was named “test” when we created it in the Kibana DevTools.
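
In filebeat.yml this looks something like the following, where the hosts value is a placeholder for your own Vizion.ai Elasticsearch endpoint:

    output.elasticsearch:
      # Placeholder endpoint; use the Elasticsearch URL from your Vizion.ai account
      hosts: ["https://your-vizion-endpoint:9243"]
      # Route every event through the ingest pipeline created in step 2
      pipeline: "test"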

 

4. Refresh the index pattern to pick up the newly created pipeline mappings.

Back in Kibana, click on the 'Management' tab and then 'Index Patterns'. Then click the refresh icon in the top right corner.

You should then see that the total number of fields has increased to include the new mappings.
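
You can also confirm the new mappings from DevTools (again, 'filebeat-test' stands in for your own index name):

    GET filebeat-test/_mapping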

 

5. Check that the incoming logs are being parsed correctly.

In the 'Discover' tab, open one of the incoming logs to see that the fields are being parsed out correctly.
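
Alternatively, you can exercise the pipeline directly from DevTools with the simulate API, which runs a sample document through the processors without indexing it. The message value here is the same made-up sample log line from step 1:

    POST _ingest/pipeline/test/_simulate
    {
      "docs": [
        {
          "_source": {
            "message": "10.0.0.5 203.0.113.9 - - [22/Jan/2019:13:17:10 +0000] \"GET /index.html HTTP/1.1\" 200 4523 \"-\" \"Mozilla/5.0\""
          }
        }
      ]
    }

The response shows the parsed document exactly as it would be indexed, so you can verify fields such as 'clientip' and 'location' at a glance.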

 

With the data correctly parsed, you will now be able to create custom and meaningful visuals. Here is an example of a dashboard built using the pipeline and mappings created above: