
KONG: Microservice Management Layer


Kong was created to secure, manage and extend Microservices & APIs. Kong is powered by the battle-tested tech of NGINX and Cassandra, with a focus on scalability, high performance and reliability. Kong runs in production at Mashape, handling billions of requests to over ten thousand APIs.

Core Features

  • CLI: Control your Kong cluster from the command line just like Neo in The Matrix.
  • REST API: Kong can be operated with its RESTful API for maximum flexibility; a sketch of typical calls follows this list.
  • Scalability: Distributed by nature, Kong scales horizontally simply by adding nodes.
  • Performance: Kong handles load with ease by scaling and using NGINX at the core.
  • Plugins: Extendable architecture for adding functionality to Kong and APIs.
    • Logging: Log requests and responses to your system over HTTP, TCP, UDP or to disk.
    • SSL: Set up a specific SSL certificate for an underlying service or API.
    • Monitoring: Live monitoring provides key load and performance server metrics.
    • Authentication: Manage consumer credentials such as query-string and header tokens.
    • Rate-limiting: Block and throttle requests based on IP, authentication or body size.
    • Transformations: Add, remove or manipulate HTTP requests and responses.
    • CORS: Enable cross-origin requests to your APIs that would otherwise be blocked.
    • Anything: Need custom functionality? Extend Kong with your own Lua plugins!
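
For example, registering an upstream API is a single HTTP call against Kong's admin interface. A minimal sketch, assuming the admin API listens on port 8001 and exposes an /apis/ endpoint accepting name, public_dns and target_url fields (true of early releases; check the documentation for your version, and treat the hostnames below as placeholders):

    # Register an upstream API with Kong over its RESTful admin interface.
    $ curl -i -X POST http://localhost:8001/apis/ \
        -d "name=mockbin" \
        -d "public_dns=mockbin.example.com" \
        -d "target_url=https://mockbin.com"

    # List the APIs Kong now knows about.
    $ curl http://localhost:8001/apis/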

Architecture

If you're building for web, mobile or IoT (Internet of Things), you will likely end up needing common functionality that sits on top of your actual software. Kong can help by acting as a gateway for HTTP requests while providing logging, authentication, rate-limiting and more through plugins.
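
In practice, clients stop calling the upstream service directly and call Kong instead. A minimal sketch of that flow, assuming Kong proxies traffic on port 8000 and routes requests by their Host header (matching the public_dns used when the API was registered); both details are assumptions to verify against your version:

    # Send the request to Kong's proxy port; Kong matches the Host header,
    # runs the configured plugins (auth, rate-limiting, logging, ...) and
    # forwards the request to the upstream target_url.
    $ curl -i http://localhost:8000/some/path \
        -H "Host: mockbin.example.com"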

Benchmarks

We set Kong up on AWS and load tested it to get some performance metrics. The setup consisted of three m3.medium EC2 instances: one for Kong, one for Cassandra and a third for an upstream API. After adding the upstream API's target_url into Kong, we load tested from 1 to 2,000 concurrent connections. Complete reproduction instructions are available, and we are currently working towards automating a suite of benchmarks to compare against subsequent releases.
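
The reproduction instructions describe the exact tooling, but the general shape of such a run is simple. A rough sketch using wrk as a stand-in load generator (an assumption, as are the port, the Host header and the placeholder host, which mirror the earlier example):

    # Two minutes of load at 2000 concurrent connections against Kong's proxy port.
    $ wrk -t 8 -c 2000 -d 120s \
        -H "Host: mockbin.example.com" \
        http://<kong-host>:8000/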

Over two minutes, 117,185 requests went through Kong and back with only a single timeout, at an average latency of 10 ms and a rate of 976 requests per second, or about 84,373,200 requests a day.

Development

  1. Download the latest released version of Kong and install it on your development machine. This installs all of the required dependencies.

  2. Clone the repository and make it your working directory.

  3. Run [sudo] make install

     This builds and installs the kong luarock globally.

  4. Delete the /etc/kong folder: [sudo] rm -rf /etc/kong

     This is only necessary if you have previously installed Kong from a package distribution.

  5. Run make dev

     This installs the development dependencies and creates your environment configuration files:

     • kong_TESTS.yml
     • kong_DEVELOPMENT.yml

  6. Run the tests:

     make test-all

  7. Run Kong with the development configuration file:

     $ kong start -c kong_DEVELOPMENT.yml
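
Taken together, the sequence above looks roughly like the following shell session (the repository URL is a placeholder, and sudo may not be needed depending on how LuaRocks is set up on your machine):

    $ git clone <kong-repository-url> kong
    $ cd kong
    $ sudo make install              # build and install the kong luarock globally
    $ sudo rm -rf /etc/kong          # remove leftovers from a packaged install, if any
    $ make dev                       # dev dependencies + kong_TESTS.yml / kong_DEVELOPMENT.yml
    $ make test-all                  # run the unit and integration tests
    $ kong start -c kong_DEVELOPMENT.yml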

Makefile Operations

When developing, use the Makefile for the following operations:

Name       Description
install    Install the Kong luarock globally
dev        Set up your development environment
clean      Clean your development environment
start      Start the DEVELOPMENT environment (kong_DEVELOPMENT.yml)
seed       Seed the DEVELOPMENT environment (kong_DEVELOPMENT.yml)
drop       Drop the DEVELOPMENT environment (kong_DEVELOPMENT.yml)
lint       Lint Lua files in kong/
coverage   Run the unit tests with a coverage report
test       Run the unit tests
test-all   Run all unit and integration tests at once
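
A typical loop while working on a change might use these targets roughly as follows:

    $ make dev        # set up the development environment and config files
    $ make lint       # lint the Lua sources in kong/
    $ make test       # quick unit tests while iterating
    $ make test-all   # full unit + integration run
    $ make start      # run Kong with kong_DEVELOPMENT.yml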

Documentation

Complete & versioned documentation is available at GetKong.org.
