filebeat-modules.asciidoc
== Working with {filebeat} Modules

{filebeat} comes packaged with pre-built {filebeat-ref}/filebeat-modules.html[modules] that contain the configurations needed to collect, parse, enrich, and visualize data from various log file formats. Each {filebeat} module consists of one or more filesets that contain ingest node pipelines, {es} templates, {filebeat} input configurations, and {kib} dashboards.

You can use {filebeat} modules with {ls}, but you need to do some extra setup. The simplest approach is to set up and use the ingest pipelines provided by {filebeat}.

=== Use ingest pipelines for parsing

When you use {filebeat} modules with {ls}, you can use the ingest pipelines provided by {filebeat} to parse the data. You need to load the pipelines into {es} and configure {ls} to use them.

To load the ingest pipelines:

On the system where {filebeat} is installed, run the `setup` command with the `--pipelines` option specified to load ingest pipelines for specific modules. For example, the following command loads ingest pipelines for the `system` and `nginx` modules:

[source,shell]
----
filebeat setup --pipelines --modules nginx,system
----

A connection to {es} is required for this setup step because {filebeat} needs to load the ingest pipelines into {es}. If necessary, you can temporarily disable your configured output and enable the {es} output before running the command.
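One way to do this without editing `filebeat.yml` is to override the output settings on the command line with `-E` flags. The host below is a placeholder; substitute your own {es} endpoint and credentials:

```shell
# Temporarily disable the configured Logstash output and enable the
# Elasticsearch output for the duration of the setup command.
# (Placeholder host; replace with your own cluster address.)
filebeat setup --pipelines --modules nginx,system \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["localhost:9200"]'
```

Because `-E` overrides apply only to this invocation, the output configured in `filebeat.yml` is untouched afterward.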

To configure {ls} to use the pipelines:

On the system where {ls} is installed, create a {ls} pipeline configuration that reads from a {ls} input, such as {beats} or Kafka, and sends events to an {es} output. Set the `pipeline` option in the {es} output to `%{[@metadata][pipeline]}` to use the ingest pipelines that you loaded previously.

Here's an example configuration that reads data from the {beats} input and uses {filebeat} ingest pipelines to parse data collected by modules:

[source,ruby]
----
input {
  beats {
    port => 5044
  }
}

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => "https://061ab24010a2482e9d64729fdb0fd93a.us-east-1.aws.found.io:9243"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      pipeline => "%{[@metadata][pipeline]}" <1>
      user => "elastic"
      password => "secret"
    }
  } else {
    elasticsearch {
      hosts => "https://061ab24010a2482e9d64729fdb0fd93a.us-east-1.aws.found.io:9243"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "secret"
    }
  }
}
----
<1> Set the `pipeline` option to `%{[@metadata][pipeline]}`. This setting configures {ls} to select the correct ingest pipeline based on metadata passed in the event.
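Before events start flowing, you can confirm that the pipelines loaded in the setup step actually exist by querying the {es} ingest pipeline API. The host and credentials below are placeholders:

```shell
# List the Filebeat ingest pipelines currently loaded into Elasticsearch.
# (Placeholder host and credentials; wildcard pipeline IDs are supported.)
curl -u elastic:secret "https://localhost:9200/_ingest/pipeline/filebeat-*?pretty"
```

If the response is empty, rerun `filebeat setup --pipelines` against the same cluster that the {ls} output points to; the pipeline names referenced in `[@metadata][pipeline]` must exist in that cluster.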

See the {filebeat} {filebeat-ref}/filebeat-modules-overview.html[Modules] documentation for more information about setting up and running modules.

For a full example, see <<use-filebeat-modules-kafka>>.