kv-db

kv-db is inspired by etcd-raft-example and leveldb, and is built on etcd's raft library. It provides a simple REST API for a key-value store cluster backed by the Raft consensus algorithm.

Getting Started

Building kv-db

go build -o kv-db cmd/main.go

Running single node kv-db

First start a single-member cluster of kv-db:

mkdir data
kv-db --id 1 --cluster http://127.0.0.1:12379 --port 12380

Each kv-db process maintains a single raft instance and a key-value server. The process's comma-separated list of peers (--cluster), its raft ID, given as an index into the peer list (--id), and its HTTP key-value server port (--port) are passed on the command line.
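For reference, these flags could be parsed with the standard flag package roughly as sketched below; the variable names are illustrative and may not match the actual cmd/main.go.

package main

import (
	"flag"
	"strings"
)

func main() {
	// Illustrative only; the real cmd/main.go may define these differently.
	cluster := flag.String("cluster", "http://127.0.0.1:12379", "comma-separated cluster peer URLs")
	id := flag.Int("id", 1, "node ID; 1-based index into the peer list")
	port := flag.Int("port", 12380, "HTTP key-value server port")
	join := flag.Bool("join", false, "join an existing cluster")
	flag.Parse()

	peers := strings.Split(*cluster, ",")
	_, _, _, _ = peers, *id, *port, *join // wiring of raft, store, and HTTP server goes here
}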

Next, store a value ("hello") to a key ("my-key"):

curl -L http://127.0.0.1:12380/my-key -XPUT -d hello

Finally, retrieve the stored key:

curl -L http://127.0.0.1:12380/my-key

Running a local cluster

First install goreman, which manages Procfile-based applications.

The Procfile script will set up a local example cluster. Start it with:

mkdir data
goreman start

This will bring up three kv-db instances.
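The Procfile is expected to look roughly like the following, matching the process names and ports used elsewhere in this README (check the repository for the exact contents):

raftexample1: ./kv-db --id 1 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 12380
raftexample2: ./kv-db --id 2 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 22380
raftexample3: ./kv-db --id 3 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 32380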

Now it's possible to write a key-value pair to any member of the cluster and likewise retrieve it from any member.
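For example, assuming the three members listen on ports 12380, 22380, and 32380, write through one member and read back from another:

curl -L http://127.0.0.1:22380/my-key -XPUT -d hello
curl -L http://127.0.0.1:32380/my-key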

Fault Tolerance

To test cluster recovery, first start a cluster and write a value "foo":

mkdir data
goreman start
curl -L http://127.0.0.1:12380/my-key -XPUT -d foo

Next, remove a node and replace the value with "bar" to check cluster availability:

goreman run stop raftexample2
curl -L http://127.0.0.1:12380/my-key -XPUT -d bar
curl -L http://127.0.0.1:32380/my-key

Finally, bring the node back up and verify it recovers with the updated value "bar":

goreman run start raftexample2
curl -L http://127.0.0.1:22380/my-key

Dynamic cluster reconfiguration

Nodes can be added to or removed from a running cluster using requests to the REST API.

For example, suppose we have a 3-node cluster that was started with the commands:

kv-db --id 1 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 12380
kv-db --id 2 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 22380
kv-db --id 3 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379 --port 32380

A fourth node with ID 4 can be added by issuing a POST:

curl -L http://127.0.0.1:12380/4 -XPOST -d http://127.0.0.1:42379

Then the new node can be started as the others were, using the --join option:

kv-db --id 4 --cluster http://127.0.0.1:12379,http://127.0.0.1:22379,http://127.0.0.1:32379,http://127.0.0.1:42379 --port 42380 --join

The new node should join the cluster and be able to service key/value requests.
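To verify, write a key through the new node's HTTP port (42380 above) and read it back from an existing member:

curl -L http://127.0.0.1:42380/another-key -XPUT -d hello
curl -L http://127.0.0.1:12380/another-key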

We can remove a node using a DELETE request:

curl -L http://127.0.0.1:12380/3 -XDELETE

Node 3 should shut itself down once the cluster has processed this request.

Design

kv-db consists of three components: a raft-backed key-value store, a REST API server, and a raft consensus server based on etcd's raft implementation.

The raft-backed key-value store is a key-value map that holds all committed key-values. The store bridges communication between the raft server and the REST server. Key-value updates are issued through the store to the raft server. The store updates its map once raft reports the updates are committed.
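A minimal sketch of such a store, modeled on etcd's raftexample; the actual types and field names in kv-db may differ:

package main

import (
	"bytes"
	"encoding/gob"
	"log"
	"sync"
)

// kv is the unit of replication: a single key-value update.
type kv struct{ Key, Val string }

// kvstore holds all committed key-values and bridges the REST server and raft.
type kvstore struct {
	mu       sync.RWMutex
	data     map[string]string
	proposeC chan<- string // proposals flow to the raft server through this channel
}

// Propose encodes an update and hands it to raft; the map is only updated
// later, when the entry comes back committed on the commit channel.
func (s *kvstore) Propose(k, v string) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(kv{k, v}); err != nil {
		log.Fatal(err)
	}
	s.proposeC <- buf.String()
}

// Lookup returns the committed value for a key, if any.
func (s *kvstore) Lookup(k string) (string, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.data[k]
	return v, ok
}

// readCommits applies entries that raft reports as committed.
func (s *kvstore) readCommits(commitC <-chan string) {
	for data := range commitC {
		var e kv
		if err := gob.NewDecoder(bytes.NewBufferString(data)).Decode(&e); err != nil {
			log.Fatal(err)
		}
		s.mu.Lock()
		s.data[e.Key] = e.Val
		s.mu.Unlock()
	}
}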

The REST server exposes the current raft consensus by accessing the raft-backed key-value store. A GET command looks up a key in the store and returns the value, if any. A key-value PUT command issues an update proposal to the store.
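Continuing the sketch above (and assuming the standard net/http and io packages), the REST handler can be pictured like this; the real handler in kv-db may differ in detail:

// httpKVAPI serves the key-value REST API on top of the kvstore sketched above.
type httpKVAPI struct {
	store *kvstore
}

// ServeHTTP routes PUT requests to raft proposals and GET requests to store lookups.
func (h *httpKVAPI) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	key := r.RequestURI
	switch r.Method {
	case http.MethodPut:
		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, "failed to read PUT body", http.StatusBadRequest)
			return
		}
		// The proposal is asynchronous: a response is returned optimistically,
		// before raft has committed the entry.
		h.store.Propose(key, string(body))
		w.WriteHeader(http.StatusNoContent)
	case http.MethodGet:
		if v, ok := h.store.Lookup(key); ok {
			w.Write([]byte(v))
			return
		}
		http.Error(w, "key not found", http.StatusNotFound)
	default:
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
	}
}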

The raft server participates in consensus with its cluster peers. When the REST server submits a proposal, the raft server transmits the proposal to its peers. When raft reaches a consensus, the server publishes all committed updates over a commit channel. For kv-db, this commit channel is consumed by the key-value store.
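Putting the three components together, the wiring in main looks roughly like the fragment below; newRaftNode is a placeholder name for whatever raft-node constructor kv-db actually uses, not its real API:

// Proposals flow REST server -> store -> raft; committed entries flow back raft -> store.
proposeC := make(chan string)
commitC := make(chan string)

// The raft server (built on etcd's raft library) consumes proposals and
// publishes committed entries on commitC. newRaftNode is hypothetical here.
go newRaftNode(id, peers, join, proposeC, commitC)

store := &kvstore{data: map[string]string{}, proposeC: proposeC}
go store.readCommits(commitC)

// The REST server answers GET/PUT requests on top of the store.
http.ListenAndServe(fmt.Sprintf(":%d", port), &httpKVAPI{store: store})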
