This guide explains how to install and configure a Go application that streams HTTP access logs in a custom format directly to a PostgreSQL database in real time.
First, you need to configure your Nginx server to use the custom log format:
- Open your Nginx configuration file (`/etc/nginx/nginx.conf` or `/etc/nginx/conf.d/default.conf`):
sudo nano /etc/nginx/nginx.conf
- Add the custom log format inside the `http` block:
http {
    # ... other configurations ...
    log_format detailed_log '[$time_local] $remote_addr:$remote_port -> $server_addr:$server_port '
                            '$request_method "$request_uri" "$http_referer" '
                            'Status: $status Bytes: $body_bytes_sent '
                            'UA: "$http_user_agent" '
                            'RT: $request_time '
                            'Forwarded IP: $http_x_forwarded_for';
    # ... other configurations ...
}
- Apply this log format to your virtual hosts in the `server` block:
server {
    # ... other configurations ...
    access_log /var/log/nginx/access.log detailed_log;
    # ... other configurations ...
}
- Test and restart Nginx:
sudo nginx -t
sudo systemctl restart nginx
Set up PostgreSQL database and user for storing logs:
# Connect to PostgreSQL as the postgres user
sudo -u postgres psql
-- Create the database and user
CREATE DATABASE logs;
CREATE USER loguser WITH ENCRYPTED PASSWORD 'your_secure_password';
GRANT ALL PRIVILEGES ON DATABASE logs TO loguser;
-- Connect to the logs database
\c logs
-- The application will create the necessary tables automatically
-- Exit psql
\q
(On PostgreSQL 15 or newer you may also need `GRANT ALL ON SCHEMA public TO loguser;` inside the logs database so the application can create its tables.)
- Install Go (1.16 or later):
sudo apt update
sudo apt install golang-go
- Install Git:
sudo apt install git
- Clone the repository:
git clone https://github.com/yourusername/nginx-logs-to-postgres.git
cd nginx-logs-to-postgres
- Initialize Go modules (if not already set up):
go mod init github.com/yourusername/nginx-logs-to-postgres
go mod tidy
- Install the required Go package (a connection sketch using it follows these steps):
go get github.com/lib/pq
- Build the application:
go build -o log-processor
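With the driver installed, the processor talks to PostgreSQL through Go's standard `database/sql` package. The repository's actual connection code may differ; the following is only a minimal sketch, assuming the flag defaults listed further below (localhost, port 5432) and the `loguser` credentials created earlier:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // registers the "postgres" driver with database/sql
)

// openDB builds a lib/pq connection string from the usual parameters and
// verifies connectivity with a Ping. sslmode may need adjusting for your setup.
func openDB(host string, port int, name, user, password string) (*sql.DB, error) {
	dsn := fmt.Sprintf("host=%s port=%d dbname=%s user=%s password=%s sslmode=disable",
		host, port, name, user, password)
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, err
	}
	if err := db.Ping(); err != nil {
		db.Close()
		return nil, err
	}
	return db, nil
}

func main() {
	db, err := openDB("localhost", 5432, "logs", "loguser", "your_secure_password")
	if err != nil {
		log.Fatalf("database connection failed: %v", err)
	}
	defer db.Close()
	log.Println("connected to the logs database")
}
```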
Run the application with the `-createtable` flag to create the required database tables:
./log-processor -log=/var/log/nginx/access.log -dbname=logs -dbuser=loguser -dbpassword='your_secure_password' -createtable
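The tables themselves are created by the application, so the repository is the authoritative source for the schema. Purely as orientation, here is a hedged sketch of the kind of statement the `-createtable` step could issue; the column names are assumptions derived from the `detailed_log` fields, not the project's confirmed schema:

```go
package logprocessor

import "database/sql"

// createHTTPAccessLogs is an illustrative sketch only: the real DDL lives in
// the application. Column names are assumptions mapped from detailed_log.
const createHTTPAccessLogs = `
CREATE TABLE IF NOT EXISTS http_access_logs (
    id              BIGSERIAL PRIMARY KEY,
    timestamp       TIMESTAMPTZ NOT NULL,
    remote_addr     INET,
    remote_port     INTEGER,
    server_addr     INET,
    server_port     INTEGER,
    request_method  TEXT,
    request_uri     TEXT,
    http_referer    TEXT,
    status          INTEGER,
    body_bytes_sent BIGINT,
    http_user_agent TEXT,
    request_time    DOUBLE PRECISION,
    x_forwarded_for TEXT
)`

// createTables runs the DDL; CREATE TABLE IF NOT EXISTS keeps it idempotent.
func createTables(db *sql.DB) error {
	_, err := db.Exec(createHTTPAccessLogs)
	return err
}
```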
Create a systemd service to ensure the application runs continuously:
- Create a service file:
sudo nano /etc/systemd/system/log-processor.service
- Add the following content:
[Unit]
Description=Nginx Logs to PostgreSQL Processor
After=network.target postgresql.service nginx.service
[Service]
Type=simple
User=www-data
Group=www-data
WorkingDirectory=/path/to/nginx-logs-to-postgres
ExecStart=/path/to/nginx-logs-to-postgres/log-processor -log=/var/log/nginx/access.log -dbname=logs -dbuser=loguser -dbpassword='your_secure_password' -interval=1
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
- Enable and start the service:
sudo systemctl daemon-reload
sudo systemctl enable log-processor.service
sudo systemctl start log-processor.service
- Check service status:
sudo systemctl status log-processor.service
To ensure the application handles log rotation properly, configure logrotate:
- Edit the Nginx logrotate configuration:
sudo nano /etc/logrotate.d/nginx
- Ensure it includes:
/var/log/nginx/*.log {
    daily
    missingok
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 www-data adm
    sharedscripts
    postrotate
        [ -s /run/nginx.pid ] && kill -USR1 `cat /run/nginx.pid`
    endscript
}
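With this logrotate policy, the file the processor is tailing is periodically renamed and recreated. The application handles this internally; conceptually, a tailer that remembers a byte offset can detect rotation by noticing a changed inode or a file smaller than the recorded offset, then start again from zero. A rough sketch of that idea (not the project's actual code):

```go
package logprocessor

import (
	"os"
	"syscall"
)

// rotated reports whether the log file at path appears to have been rotated
// since lastInode/lastOffset were recorded: either the inode changed (the file
// was renamed and recreated) or the file is now smaller than the offset that
// has already been processed (truncation).
func rotated(path string, lastInode uint64, lastOffset int64) (bool, error) {
	info, err := os.Stat(path)
	if err != nil {
		if os.IsNotExist(err) {
			// The file can be briefly missing between rotation and recreation.
			return true, nil
		}
		return false, err
	}
	if st, ok := info.Sys().(*syscall.Stat_t); ok && st.Ino != lastInode {
		return true, nil
	}
	return info.Size() < lastOffset, nil
}
```

When rotation is detected, the processor would reopen the file and resume from offset zero; the `-interval` flag described below controls how often such a check runs.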
The log processor supports various command-line parameters:
- `-log`: Path to the log file (required)
- `-dbhost`: PostgreSQL server address (default: localhost)
- `-dbport`: PostgreSQL port number (default: 5432)
- `-dbname`: PostgreSQL database name (default: logs)
- `-dbuser`: PostgreSQL username (default: postgres)
- `-dbpassword`: PostgreSQL password
- `-createtable`: Create database tables if they don't exist
- `-interval`: Log file check interval in seconds (default: 1)
- `-batchsize`: Number of log lines to process in a batch (default: 100; see the sketch after this list)
- `-reset`: Reset processing state and start from the beginning
- `-forcereset`: Force an offset reset in each cycle (for testing)
- `-debug`: Show detailed debug logs
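The `-batchsize` parameter implies that parsed lines are written in groups rather than one `INSERT` per line. A common pattern with `lib/pq` is a single transaction around a prepared statement (or `pq.CopyIn` for larger volumes). The sketch below is illustrative only and reuses the hypothetical column names from the schema sketch above:

```go
package logprocessor

import (
	"database/sql"
	"time"
)

// LogEntry holds a few of the fields parsed from one access-log line
// (trimmed here for brevity).
type LogEntry struct {
	Timestamp  time.Time
	RemoteAddr string
	RequestURI string
	Status     int
}

// insertBatch writes a slice of entries inside one transaction, so either
// the whole batch is committed or none of it is.
func insertBatch(db *sql.DB, entries []LogEntry) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	stmt, err := tx.Prepare(
		`INSERT INTO http_access_logs (timestamp, remote_addr, request_uri, status)
		 VALUES ($1, $2, $3, $4)`)
	if err != nil {
		tx.Rollback()
		return err
	}
	defer stmt.Close()

	for _, e := range entries {
		if _, err := stmt.Exec(e.Timestamp, e.RemoteAddr, e.RequestURI, e.Status); err != nil {
			tx.Rollback()
			return err
		}
	}
	return tx.Commit()
}
```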
- Application not processing new logs:
# Reset the processing state
./log-processor -log=/var/log/nginx/access.log -reset
- Database connection issues:
# Test the database connection
psql -h localhost -U loguser -d logs -W
- Permission issues:
# Ensure the application has read access to the log files
sudo usermod -a -G adm www-data
sudo chmod 640 /var/log/nginx/access.log
- Log format not matching: Check that the log format in Nginx matches the format expected by the parser. You may need to adjust the regular expressions in the `parseLogLine` function (see the sketch after this list).
- Service not starting:
# Check the service logs
sudo journalctl -u log-processor.service
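For reference when adjusting the parser, a regular expression for the `detailed_log` format could look like the sketch below. The repository's actual `parseLogLine` may be structured differently, so treat this as a starting point rather than the project's implementation:

```go
package logprocessor

import (
	"fmt"
	"regexp"
)

// detailedLogPattern is a sketch of a regexp matching the detailed_log format
// defined in nginx.conf above. Adjust it if you change the log format.
var detailedLogPattern = regexp.MustCompile(
	`^\[(?P<time>[^\]]+)\] ` +
		`(?P<remote_addr>\S+):(?P<remote_port>\d+) -> (?P<server_addr>\S+):(?P<server_port>\d+) ` +
		`(?P<method>\S+) "(?P<uri>[^"]*)" "(?P<referer>[^"]*)" ` +
		`Status: (?P<status>\d+) Bytes: (?P<bytes>\d+) ` +
		`UA: "(?P<user_agent>[^"]*)" ` +
		`RT: (?P<request_time>[\d.]+) ` +
		`Forwarded IP: (?P<forwarded_for>.*)$`)

// parseLogLine extracts the named fields from one log line into a map.
func parseLogLine(line string) (map[string]string, error) {
	m := detailedLogPattern.FindStringSubmatch(line)
	if m == nil {
		return nil, fmt.Errorf("line does not match detailed_log format: %q", line)
	}
	fields := make(map[string]string)
	for i, name := range detailedLogPattern.SubexpNames() {
		if i > 0 && name != "" {
			fields[name] = m[i]
		}
	}
	return fields, nil
}
```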
Monitor the application logs:
sudo journalctl -u log-processor.service -f
Query processed logs in PostgreSQL:
psql -U loguser -d logs -c "SELECT COUNT(*) FROM http_access_logs;"
psql -U loguser -d logs -c "SELECT * FROM http_access_logs ORDER BY timestamp DESC LIMIT 10;"
The application creates two tables:
- `http_access_logs`: Stores the processed log entries
- `log_processing_state`: Tracks the processing state (offset, line number, etc.)
You can query these tables for custom reports or monitoring.
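For example, a small Go program can summarize the last hour of traffic per status code; the `status` column name is an assumption carried over from the schema sketch above (the `timestamp` column appears in the queries earlier), so adapt it to the actual table definition:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // registers the "postgres" driver
)

func main() {
	db, err := sql.Open("postgres",
		"host=localhost dbname=logs user=loguser password=your_secure_password sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Requests per status code over the last hour.
	rows, err := db.Query(`
		SELECT status, COUNT(*)
		FROM http_access_logs
		WHERE timestamp > now() - interval '1 hour'
		GROUP BY status
		ORDER BY status`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var status, count int
		if err := rows.Scan(&status, &count); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%d: %d requests\n", status, count)
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```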
To update the application:
cd /path/to/nginx-logs-to-postgres
git pull
go build -o log-processor
sudo systemctl restart log-processor.service
This setup provides a robust solution for streaming Nginx logs to PostgreSQL in real time, with automatic handling of log rotation and error recovery.