
Using Fluent Bit to replicate NGINX logs to Azure Storage

Steve Flowers

Updated: Mar 24, 2021




In this post I will introduce you to Fluent Bit and show how to enable the service on an Ubuntu server to forward nginx access logs to an Azure Storage blob.


Why would you want to use Fluent Bit instead of the Microsoft Monitoring Agent or Azure Monitor for containers? Speed. Azure Monitor still suffers from an ingestion delay of 2-5 minutes. Exporting logs to Azure Storage or Event Hubs lets you act on insights from your logs in near real-time.


Fluent Bit is a great service for replicating events or logs to Azure from your web app or containers.



From their site:


"Fluent Bit is an open source and multi-platform Log Processor and Forwarder which allows you to collect data/logs from different sources, unify and send them to multiple destinations. It's fully compatible with Docker and Kubernetes environments."


Aggregate all of your logs in Azure Storage blobs or send them to an Azure Event Hubs Kafka endpoint for exploration and analysis, alerting, or processing.


Pre-requisites:

An Azure subscription and an Azure Storage account

An Ubuntu 18.04 (Bionic) server running nginx


You can either create the VM with a public IP or a private IP depending on your VNET configuration. For this test I used a public IP and allowed SSH traffic through the NSG. This is not recommended for production.
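
If you want to script the test VM with the Azure CLI, a minimal sketch looks like this (the resource group and VM names are placeholders; port 80 is opened so the curl test below works):

az group create --name fluentbit-test --location eastus

az vm create \
    --resource-group fluentbit-test \
    --name nginx-vm \
    --image UbuntuLTS \
    --admin-username azureuser \
    --generate-ssh-keys

# Test only: allow SSH and HTTP through the NSG
az vm open-port --resource-group fluentbit-test --name nginx-vm --port 22 --priority 900
az vm open-port --resource-group fluentbit-test --name nginx-vm --port 80 --priority 901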


Log in to the server and test the nginx web server.

curl http://<my public ip>

HTML will be returned saying "Welcome to nginx!"


Issuing the curl command creates an entry in the nginx access log which we'll use later.
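
For reference, a typical combined-format entry in /var/log/nginx/access.log looks something like this (the values here are illustrative):

127.0.0.1 - - [24/Mar/2021:12:00:00 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.58.0"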


To install Fluent Bit on the server, I followed the official Ubuntu installation walkthrough, which I reproduce below with some additional context and troubleshooting.


To start, add the public key for the Fluent Bit repository:

wget -qO - https://packages.fluentbit.io/fluentbit.key | sudo apt-key add -

The GPG key will be added to your keyring.


You must have sudo or root access to perform the next steps.


Next, we will add the URL of the repository to our APT sources list.

deb https://packages.fluentbit.io/ubuntu/bionic bionic main
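
One way to append that line and refresh the package index (this writes to the standard APT sources file):

echo "deb https://packages.fluentbit.io/ubuntu/bionic bionic main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update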

Finally, install the td-agent-bit package, start it, and check that it is running.

sudo apt-get install td-agent-bit
sudo service td-agent-bit start
service td-agent-bit status

Ok, let's stop the service and explore the configuration files.
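
Assuming the default install location of /etc/td-agent-bit/, stopping the service and listing the config files looks like this:

sudo service td-agent-bit stop
ls /etc/td-agent-bit/
# parsers.conf  plugins.conf  td-agent-bit.conf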


parsers.conf: Defines how to parse fields within a record

plugins.conf: Defines paths to external plugins

td-agent-bit.conf: Configures the service, inputs, and outputs


We'll be editing the td-agent-bit config file for this demo. First we need to add an INPUT; in this case we'll be using the access.log from nginx. The OUTPUT will be configured for an Azure Storage blob container.


Our input looks like this:

[INPUT]
    Name              tail
    Tag               nginxaccess.*
    Path              /var/log/nginx/access.log
    Parser            nginx
    Mem_Buf_Limit     5MB
    Skip_Long_Lines   On
    Refresh_Interval  10

We are using the Tail input plugin, which works just like the GNU tail command. In this case we are monitoring the access.log file. The Path key accepts wildcards if you have rotating logs. Additionally, we are tagging the input with "nginxaccess" and the file name.
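
For example, a hypothetical wildcard path that picks up rotated files as well:

    Path              /var/log/nginx/*.log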


We are leveraging the nginx parser, which means the service will look up how nginx is defined in the parsers.conf file. This can be changed depending on how you want your input formatted.
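
For reference, the nginx parser shipped in parsers.conf looks similar to this (check your installed copy for the exact regex):

[PARSER]
    Name        nginx
    Format      regex
    Regex       ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
    Time_Key    time
    Time_Format %d/%b/%Y:%H:%M:%S %z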



Next we will look at the output:

[OUTPUT]
    name                  azure_blob
    match                 *
    account_name          sanginglogs
    shared_key            <my secret key>
    container_name        logs
    auto_create_container on
    tls                   on

Here we are telling the service to output using the "azure_blob" plugin. We didn't have to add this to plugins.conf as it is included. I am matching all input entries, though we could get more selective based on tags or file names. The shared_key is the primary access key for the storage account. Finally, define the container where you want to land the log files.
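
If you have the Azure CLI handy, one way to retrieve the primary key (the resource group name is a placeholder):

az storage account keys list \
    --account-name sanginglogs \
    --resource-group <my resource group> \
    --query "[0].value" --output tsv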


The output behavior is append: as new records arrive from the input, the output appends them to the file in the blob. It is best to configure log rotation for your nginx service to break the files up based on date/time; a sketch follows below.
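
A minimal logrotate sketch for this; Ubuntu's nginx package ships a similar file at /etc/logrotate.d/nginx:

/var/log/nginx/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        invoke-rc.d nginx rotate >/dev/null 2>&1
    endscript
}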


Finally, start the td-agent-bit service and ensure that it is running. If it is not running, check the syslog. The Fluent Bit config files are indentation-sensitive: extra new lines will cause issues, and indents must be four spaces, not tabs.
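
If the service won't start, the syslog usually points at the offending config line; a quick way to check:

sudo service td-agent-bit start
service td-agent-bit status
sudo grep td-agent-bit /var/log/syslog | tail -n 20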


Run the curl command or visit the site in a browser to generate entries in the access.log file. You will now see those entries in your Azure Storage blob. These events can be ingested with Azure Data Factory or Azure Data Explorer, or referenced as an external table in Azure Data Explorer.
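
To confirm the blobs are landing, one option is the Azure CLI (using the account name and container from the output above):

az storage blob list \
    --account-name sanginglogs \
    --container-name logs \
    --account-key <my secret key> \
    --output table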


In the next blog I will show how to enable Fluent Bit for containerized nginx and ship logs to Azure Event Hubs using the Kafka endpoint.



