Filebeat
Using Filebeat to send logs to PacketAI
Download Filebeat
Linux
Download and extract Filebeat using the commands below (release page: https://www.elastic.co/downloads/past-releases/filebeat-8-4-3):
curl https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.4.3-linux-x86_64.tar.gz -o filebeat.tar.gz
tar -xf filebeat.tar.gz
mv filebeat-8.4.3-linux-x86_64 filebeat
rm filebeat.tar.gz
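Optionally, verify that the binary extracted correctly; a quick sanity check, assuming the archive was extracted into ./filebeat as above:
cd filebeat
./filebeat version   # prints the Filebeat version if the download and extraction succeeded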
Windows
Download Filebeat using the PowerShell commands below:
$ProgressPreference = 'SilentlyContinue'
Invoke-WebRequest -Uri https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.4.3-windows-x86_64.zip -OutFile fbeat.zip
Expand-Archive .\fbeat.zip
mv ./fbeat/filebeat-8.4.3-windows-x86_64/ ./filebeat
rm -r ./fbeat
rm ./fbeat.zip
cd ./filebeat
Configure Filebeat
Edit filebeat.yml and use the Filebeat configuration below. It needs to be customised to your requirements; each section is explained in detail below. The complete Filebeat configuration can be found here.
filebeat.inputs:
# This section is to monitor which files on your machine and their paths.
- type: filestream
  id: wifi-filestream-id
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/wifi.log
  fields:
    appName: wifi
  tail_files: true
- type: filestream
  id: system-filestream-id
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/system.log
  fields:
    appName: system
  tail_files: true
- type: filestream
  id: fsck-filestream-id
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/fsck*.log
  fields:
    appName: fsck
  tail_files: true
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: true
# Disable template, dashboards, index management, don't change these values to true
setup.template.enabled: false
setup.dashboards.enabled: false
setup.ilm.enabled: false
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  allow_older_versions: true
  hosts: ["beats-ingester-logpatterns.packetai.co:443"]
  protocol: https
  path: /elasticsearch/fb
  compression_level: 6
  index: "index"
  headers:
    X-PAI-IID: YOUR_PAI_IID
    X-PAI-TOKEN: YOUR_PAI_TOKEN
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_fields:
      fields:
        clusterName: YOUR_CLUSTER_NAME
      target: fields
filebeat.inputs:
This section lists the log files to monitor on your host. paths takes an array of file paths (glob patterns are supported). Make sure that id is unique, and set enabled to true to start monitoring the log lines. An appName can be assigned to each log file, which is helpful for filtering the logs in PacketAI. Setting tail_files to true prevents sending the logs from the beginning of the log file. type: filestream defines that this is a filestream type of monitoring. filebeat.inputs is an array, so multiple log files can be monitored.
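To check that these glob patterns actually match files on your host before starting Filebeat, a simple listing can help (the paths below are the example paths from the configuration above; adjust them to your own):
ls -l /var/log/wifi.log /var/log/system.log /var/log/fsck*.log   # each pattern should list at least one existing log file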
output.elasticsearch:
hosts: Make sure that your hosts entries are correct. Change hosts: ["beats-ingester-logpatterns.packetai.co:443"] if your PacketAI APIs are different.
compression_level: a value between 0 and 9, where 0 means no compression at all and 9 is the best compression; we suggest using the value 6. Higher compression values mean higher CPU usage and lower network usage.
headers: this sub-section needs to be updated with your PAI_IID and PAI_TOKEN. You can get them in PacketAI after login, under the section Deploy PacketAI / Agent.
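Once the output section is filled in, you can check that the PacketAI ingester is reachable before shipping any logs. A minimal check, assuming it is run from the filebeat directory and the configuration above is saved as filebeat.yml (the exact result depends on how the PacketAI endpoint responds):
./filebeat test output -c filebeat.yml   # attempts to connect to the configured hosts over TLS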
processors:
Replace YOUR_CLUSTER_NAME with an appropriate cluster name. clusterName can be used to manage the retention period on PacketAI.
add_cloud_metadata: ~ is optional; it adds the metadata of the cloud, i.e. region, zone, machine_id, etc.
add_docker_metadata is optional; it adds Docker metadata such as the Docker container name, image name, Docker labels, etc. This extra metadata will increase the index size at PacketAI.
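After replacing YOUR_PAI_IID, YOUR_PAI_TOKEN and YOUR_CLUSTER_NAME, the whole file can be validated before the first start; a quick check, assuming filebeat.yml is in the filebeat directory:
./filebeat test config -c filebeat.yml   # prints Config OK when the YAML parses and the settings are valid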
Installation of Filebeat
Linux:
Filebeat can be started with the command below. To start the Filebeat service automatically on system startup, it needs to be installed as a service.
./filebeat -c filebeat.yml
systemd service file (filebeat.service): Here we are assuming that Filebeat is located at /opt/filebeat; if Filebeat is located somewhere else, please update the filebeat.service file to reflect that.
# filebeat.service
[Unit]
Description=filebeat
After=syslog.target network.target remote-fs.target nss-lookup.target
[Service]
Type=simple
ExecStart=/opt/filebeat/filebeat -c filebeat.yml
Restart=on-failure
WorkingDirectory=/opt/filebeat
[Install]
WantedBy=multi-user.target
Copy the file to the /etc/systemd/system directory, and run the commands below.
systemctl enable filebeat
systemctl start filebeat
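If the service does not come up right away, reloading systemd and inspecting the unit usually shows why; a few standard checks:
systemctl daemon-reload      # make systemd pick up the newly copied unit file
systemctl status filebeat    # confirm the service is active
journalctl -u filebeat -f    # follow the Filebeat logs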
Windows:
To install Filebeat as a Windows service, run the commands below from the filebeat folder.
./install-service-filebeat.ps1
Start-Service filebeat