This project includes a log ingestor that consumes log data on port 3000, and an easy-to-use web interface to query, search, and filter that data.
Features:
- Scales to consume large volumes of log data, using Apache Kafka as the ingestion buffer.
- Uses MongoDB as the database.
- Full-text search on the 'message' field.
- Nested filters in search.
- Search within a date (timestamp) range.
- Real-time log ingestion and search results.
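The search features above can be combined into a single MongoDB query document. The sketch below is a hypothetical helper (not code from this repo) showing how a full-text search on `message`, nested filters in dot notation, and a timestamp range might be assembled for pymongo's `find`:

```python
def build_query(text=None, filters=None, start=None, end=None):
    """Assemble a MongoDB query dict combining full-text search,
    nested field filters, and a timestamp range."""
    query = {}
    if text:
        # Requires a text index on the 'message' field.
        query["$text"] = {"$search": text}
    if filters:
        # Nested fields use dot notation, e.g. "metadata.parentResourceId".
        query.update(filters)
    if start or end:
        ts = {}
        if start:
            ts["$gte"] = start
        if end:
            ts["$lte"] = end
        query["timestamp"] = ts
    return query

# Example: error logs mentioning "DB" from a given parent resource, within a day.
q = build_query(
    text="DB",
    filters={"level": "error", "metadata.parentResourceId": "server-0987"},
    start="2023-09-15T00:00:00Z",
    end="2023-09-15T23:59:59Z",
)
# Pass to pymongo: collection.find(q)
```

The function name and parameters are illustrative; the repo's own query-building code may differ.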
Built with:
- Apache Kafka
- MongoDB
- Python
- Postman
- WSL (Ubuntu)
This project requires Python 3.8, the Python virtual-environment module (venv), Apache Kafka, and MongoDB to be pre-installed.
Instructions for project setup:
git clone https://github.com/dyte-submissions/november-2023-hiring-sudiptab2100.git
cd november-2023-hiring-sudiptab2100/src
python3 -m venv env
source env/bin/activate
pip install -r requirements.txt
From the src directory, run the following commands:
- Run the Log Ingestor Server: python app.py
- Run the Web Interface Server: python ui.py
New logs can be ingested into the server on port 3000 using the following cURL command:
curl --location 'http://127.0.0.1:3000/' \
--header 'Content-Type: application/json' \
--data '{
"level": "error",
"message": "Failed to connect to DB",
"resourceId": "server-1234",
"timestamp": "2023-09-15T08:00:00Z",
"traceId": "abc-xyz-123",
"spanId": "span-456",
"commit": "5e5342f",
"metadata": {
"parentResourceId": "server-0987"
}
}'
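The same request can also be sent from Python. The sketch below (assuming the ingestor is running locally on port 3000, as above; the `ingest` helper is illustrative, not part of the repo) builds an equivalent payload and posts it with the standard library:

```python
import json
from urllib import request

# Sample log record matching the schema the ingestor expects.
log_record = {
    "level": "error",
    "message": "Failed to connect to DB",
    "resourceId": "server-1234",
    "timestamp": "2023-09-15T08:00:00Z",
    "traceId": "abc-xyz-123",
    "spanId": "span-456",
    "commit": "5e5342f",
    "metadata": {"parentResourceId": "server-0987"},
}

def ingest(record, url="http://127.0.0.1:3000/"):
    """POST a log record as JSON to the ingestor."""
    req = request.Request(
        url,
        data=json.dumps(record).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.status

# ingest(log_record)  # call with the server running
```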
See a demo of the working project: YouTube Video
Distributed under the MIT License. See LICENSE.txt for more information.
Sudipta Basak - @sudipta__basak - [email protected]
Project Link: https://github.com/dyte-submissions/november-2023-hiring-sudiptab2100