Note: Zinc and all its APIs are considered alpha at this stage. Expect breaking changes to API contracts and data formats.
Zinc is a search engine that does full-text indexing. It is a lightweight alternative to Elasticsearch and runs on a fraction of the resources. It uses bluge as the underlying indexing library.
It is simple and easy to operate, as opposed to Elasticsearch, which requires a couple dozen knobs to understand and tune.
It is a drop-in replacement for Elasticsearch if you are just ingesting data using APIs and searching using Kibana (Kibana is not supported with Zinc; Zinc provides its own UI).
Check the video below for a quick demo of Zinc.
Join the Slack channel
While Elasticsearch is a very good product, it is complex, requires lots of resources, and is more than a decade old. I built Zinc to make it easier for folks to use full-text search indexing without a lot of work.
- Provides full text indexing capability
- Single binary for installation and running; binaries are available under releases for multiple platforms.
- Web UI, written in Vue, for querying data
- Compatibility with Elasticsearch APIs for ingestion of data (single record and bulk API)
- Out-of-the-box authentication
- Schema-less - no need to define a schema upfront; different documents in the same index can have different fields.
- Index storage in s3 (experimental)
- Aggregation support
- High Availability
- Distributed reads and writes
- Geospatial search
- Raise an issue if you are looking for something.
Binaries can be downloaded from the releases page for the appropriate platform.
C:\> set ZINC_FIRST_ADMIN_USER=admin
C:\> set ZINC_FIRST_ADMIN_PASSWORD=Complexpass#123
C:\> mkdir data
C:\> zinc.exe
$ brew tap prabhatsharma/tap
$ brew install prabhatsharma/tap/zinc
$ mkdir data
$ ZINC_FIRST_ADMIN_USER=admin ZINC_FIRST_ADMIN_PASSWORD=Complexpass#123 zinc
Now point your browser to http://localhost:4080 and log in.
Binaries can be downloaded from the releases page for the appropriate platform.
Create a data folder to store the data:
$ mkdir data
$ ZINC_FIRST_ADMIN_USER=admin ZINC_FIRST_ADMIN_PASSWORD=Complexpass#123 ./zinc
Now point your browser to http://localhost:4080 and log in.
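The two environment variables and the data folder above can also be captured in a small start script, so the credentials and data path live in one place. This is a minimal sketch, assuming the zinc binary sits in the current directory; `ZINC_DATA_PATH` is the same variable used in the Docker example.

```shell
# Minimal start script sketch: set the first-admin credentials and a
# data directory, then launch the zinc binary from the current folder.
cat > start-zinc.sh <<'EOF'
#!/bin/sh
export ZINC_FIRST_ADMIN_USER=admin
export ZINC_FIRST_ADMIN_PASSWORD=Complexpass#123
export ZINC_DATA_PATH="$PWD/data"
mkdir -p "$ZINC_DATA_PATH"
exec ./zinc
EOF
chmod +x start-zinc.sh
```

Run it with `./start-zinc.sh`; adjust the paths and credentials to your own layout.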
Optional: if you have the AWS CLI installed and get a login error, run the command below:
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
Docker images are available at https://gallery.ecr.aws/prabhat/zinc
$ mkdir data
$ docker run -v /full/path/of/data:/data -e ZINC_DATA_PATH="/data" -p 4080:4080 -e ZINC_FIRST_ADMIN_USER=admin -e ZINC_FIRST_ADMIN_PASSWORD=Complexpass#123 --name zinc public.ecr.aws/prabhat/zinc:latest
Now point your browser to http://localhost:4080 and log in.
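If you prefer not to put the admin password on the command line, the same settings can go in an env file read with Docker's `--env-file` flag. A sketch using the variables from the `docker run` command above:

```shell
# Write the Zinc settings to an env file (plain KEY=value pairs).
cat > zinc.env <<'EOF'
ZINC_FIRST_ADMIN_USER=admin
ZINC_FIRST_ADMIN_PASSWORD=Complexpass#123
ZINC_DATA_PATH=/data
EOF

# Then start the container with the env file instead of repeated -e flags:
# docker run -v /full/path/of/data:/data --env-file zinc.env \
#   -p 4080:4080 --name zinc public.ecr.aws/prabhat/zinc:latest
```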
Create the namespace:
$ kubectl create ns zinc
$ kubectl apply -f k8s/kube-deployment.yaml
$ kubectl -n zinc port-forward svc/z 4080:4080
Now point your browser to http://localhost:4080 and log in.
Update the Helm values located in values.yaml.
Create the namespace:
$ kubectl create ns zinc
Install the chart:
$ helm install zinc helm/zinc -n zinc
Zinc can be exposed with an ingress or via port-forward:
$ kubectl -n zinc port-forward svc/zinc 4080:4080
curl \
-u admin:Complexpass#123 \
-XPUT \
-d '{"author":"Prabhat Sharma"}' \
http://localhost:4080/api/myshinynewindex/document
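Once a document is ingested, it can be queried through Zinc's search API. The request body below is a hedged sketch: the endpoint path and the field names (`search_type`, `term`, `max_results`) follow Zinc's own query format as I understand it, not the Elasticsearch DSL, and may differ in your version, so check the docs for the exact shape.

```shell
# Build a Zinc-style search request body (field names are assumptions;
# verify them against the docs for your Zinc version).
cat > query.json <<'EOF'
{
  "search_type": "match",
  "query": { "term": "Prabhat" },
  "from": 0,
  "max_results": 20
}
EOF

# curl -u admin:Complexpass#123 -XPOST -d @query.json \
#   http://localhost:4080/api/myshinynewindex/_search
```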
The bulk ingestion API follows the same interface as the Elasticsearch bulk API defined in the Elasticsearch documentation.
Here is a sample of how to use it:
curl -L https://github.com/prabhatsharma/zinc/releases/download/v0.1.1/olympics.ndjson.gz -o olympics.ndjson.gz
gzip -d olympics.ndjson.gz
curl http://localhost:4080/api/_bulk -i -u admin:Complexpass#123 --data-binary "@olympics.ndjson"
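For your own data, the bulk payload is newline-delimited JSON in the Elasticsearch `_bulk` format: an action line naming the target index, followed by the document itself, one pair per record. A minimal hand-built sketch (the index name and documents are just examples):

```shell
# Each record contributes two lines: an action line, then the document.
cat > bulk.ndjson <<'EOF'
{ "index" : { "_index" : "myshinynewindex" } }
{ "author" : "Prabhat Sharma", "topic" : "search" }
{ "index" : { "_index" : "myshinynewindex" } }
{ "author" : "Jane Doe", "topic" : "indexing" }
EOF

# Ingest it with the same _bulk endpoint as above:
# curl http://localhost:4080/api/_bulk -i -u admin:Complexpass#123 \
#   --data-binary "@bulk.ndjson"
```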
Data ingestion can also be done using APIs and log forwarders like fluent-bit and syslog-ng. Check the docs for details.
Check docs
Please raise a PR adding your details if you are using Zinc.