diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml new file mode 100644 index 000000000..8b1ad9a7d --- /dev/null +++ b/.github/ISSUE_TEMPLATE/bug_report.yml @@ -0,0 +1,29 @@ +name: Bug Report +description: Report any issue with the project +labels: ["bug"] +body: + - type: textarea + id: what-happened + attributes: + label: What Happened? + validations: + required: true + - type: textarea + id: expected-behavior + attributes: + label: What Should Have Happened? + validations: + required: false + - type: textarea + id: code-snippet + attributes: + label: Relevant Code Snippet + validations: + required: false + - type: input + id: contact + attributes: + label: Your Twitter/LinkedIn + description: When the bug gets fixed, we'd like to thank you publicly for reporting it. + validations: + required: false diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml index fb20dc3a3..7d3875a8d 100644 --- a/.github/ISSUE_TEMPLATE/config.yml +++ b/.github/ISSUE_TEMPLATE/config.yml @@ -1,8 +1,8 @@ blank_issues_enabled: true contact_links: - - name: Portkey Community Support + - name: Discord for Support & Discussions url: https://discord.com/invite/DD7vgKK299 - about: Please ask and answer questions here. - - name: Portkey Bounty - url: https://discord.com/invite/DD7vgKK299 - about: Please report security vulnerabilities here. 
+ about: Hang out with the community of LLM practitioners and resolve your issues fast + - name: Get on a Call + url: https://calendly.com/rohit-portkey/noam + about: Get a tailored demo for your use cases diff --git a/.github/ISSUE_TEMPLATE/feature_request.yml b/.github/ISSUE_TEMPLATE/feature_request.yml new file mode 100644 index 000000000..8cdf473f2 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/feature_request.yml @@ -0,0 +1,25 @@ +name: Feature Request +description: Suggest a new provider to integrate, new features, or something more +title: "[Feature] " +labels: ["enhancement"] +body: + - type: textarea + id: feature + attributes: + label: What Would You Like to See with the Gateway? + validations: + required: true + - type: textarea + id: context + attributes: + label: Context for your Request + description: Why you want this feature and how it benefits you. + validations: + required: false + - type: input + id: contact + attributes: + label: Your Twitter/LinkedIn + description: If we work on this request, we'd like to thank you publicly for suggesting it. + validations: + required: false diff --git a/README.md b/README.md index a9327e6ed..0865519a7 100644 --- a/README.md +++ b/README.md @@ -1,9 +1,13 @@
-# AI Gateway +# Gateway +```sh +npx @portkey-ai/gateway +``` ### Route to 100+ LLMs with 1 fast & friendly API. + [![License](https://img.shields.io/github/license/Ileriayo/markdown-badges)](./LICENSE) [![Discord](https://img.shields.io/discord/1143393887742861333)](https://portkey.ai/community) [![Twitter](https://img.shields.io/twitter/url/https/twitter/follow/portkeyai?style=social&label=Follow%20%40PortkeyAI)](https://twitter.com/portkeyai) @@ -15,12 +19,17 @@ [Portkey's AI Gateway](https://portkey.ai/features/ai-gateway) is the interface between your app and hosted LLMs. It streamlines API requests to OpenAI, Anthropic, Mistral, Llama2, Anyscale, Google Gemini, and more with a unified API. + ✅  Blazing **fast** (9.9x faster) with a **tiny footprint** (~45kb installed)
✅  **Load balance** across multiple models, providers, and keys
✅  **Fallbacks** make sure your app stays resilient
✅  **Automatic Retries** with exponential backoff, enabled by default
+✅  **Configurable Request Timeouts** to easily handle unresponsive LLM requests
✅  Plug-in middleware as needed
✅  Battle-tested over **100B tokens**
+✅  **Enterprise-ready** for enhanced security, scale, and custom deployments
+ +Enterprise Version: [Read more here](#gateway-enterprise-version)
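Several of the features above are driven by a per-request gateway config. As a hedged sketch only (the field names reflect our reading of the Portkey config format; the keys are placeholders, so verify the exact schema against the gateway docs):

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "provider": "openai", "api_key": "sk-openai-xxx" },
    { "provider": "anthropic", "api_key": "sk-anthropic-xxx" }
  ],
  "retry": { "attempts": 3 }
}
```

A config like this travels with the request (for example in an `x-portkey-config` header), so each call can carry its own routing rules.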

## Getting Started @@ -30,7 +39,6 @@ If you're familiar with Node.js and `npx`, you can run your private AI gateway locally: ```sh npx @portkey-ai/gateway ``` > Your AI Gateway is now running on http://localhost:8787 🚀 -
### Usage Let's try making a **chat completions** call to OpenAI through the AI gateway: @@ -42,9 +50,7 @@ curl '127.0.0.1:8787/v1/chat/completions' \ -d '{"messages": [{"role": "user","content": "Say this is a test."}], "max_tokens": 20, "model": "gpt-4"}' ``` [Full list of supported SDKs](#supported-sdks) - -

## Supported Providers @@ -66,11 +72,11 @@ curl '127.0.0.1:8787/v1/chat/completions' \ | | Ollama | ✅ |✅ | `/chat/completions` | > [View the complete list of 100+ supported models here](https://portkey.ai/docs/welcome/what-is-portkey#ai-providers-supported) -
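Switching between the providers above is a one-header change on the same unified request. A hedged sketch (the `x-portkey-provider` header follows the gateway's usage pattern; the Anthropic model name and key variable here are illustrative assumptions):

```sh
curl '127.0.0.1:8787/v1/chat/completions' \
  -H 'x-portkey-provider: anthropic' \
  -H "Authorization: Bearer $ANTHROPIC_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{"messages": [{"role": "user","content": "Say this is a test."}], "max_tokens": 20, "model": "claude-2.1"}'
```

The request body stays in the unified (OpenAI-compatible) shape; only the provider header and model name change.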
+
## Features
Unified API Signature

@@ -78,37 +84,47 @@ curl '127.0.0.1:8787/v1/chat/completions' \

Fallback

Don't let failures stop you. The Fallback feature lets you specify a prioritized list of LLMs. If the primary LLM fails to respond or returns an error, Portkey automatically falls back to the next LLM in the list, keeping your application robust and reliable.


Automatic Retries

Temporary issues shouldn't mean manual re-runs. The AI Gateway can automatically retry failed requests up to 5 times, applying an exponential backoff strategy that spaces out retry attempts to prevent network overload.

Load Balancing

Distribute load effectively across multiple API keys or providers based on custom weights. This ensures high availability and optimal performance of your generative AI apps, preventing any single LLM from becoming a performance bottleneck.


Request Timeouts

+ Unpredictable response times shouldn't hinder your app's experience. Manage unruly LLMs & latencies by setting up granular request timeouts. This feature allows automatic termination of requests that exceed a specified duration, letting you gracefully handle errors or make another, faster request. +

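Putting load balancing and request timeouts together, a request-level config might look like the following hedged sketch (the `weight` and `request_timeout` fields reflect our reading of the config schema, with the timeout in milliseconds; the keys are placeholders, so confirm field names against the gateway docs):

```json
{
  "strategy": { "mode": "loadbalance" },
  "targets": [
    { "provider": "openai", "api_key": "sk-key-a", "weight": 0.7 },
    { "provider": "openai", "api_key": "sk-key-b", "weight": 0.3 }
  ],
  "request_timeout": 10000
}
```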

@@ -177,11 +193,28 @@ const client = new OpenAI({ | Java | [openai-java](https://github.com/TheoKanning/openai-java) | | Rust | [async-openai](https://docs.rs/async-openai/latest/async_openai/) | | Ruby | [ruby-openai](https://github.com/alexrudall/ruby-openai) | -
## Deploying AI Gateway [See docs](docs/installation-deployments.md) on installing the AI Gateway locally or deploying it on popular locations. +- Deploy to [Cloudflare Workers](https://github.com/Portkey-AI/gateway/blob/main/docs/installation-deployments.md#deploy-to-cloudflare-workers) +- Deploy using [Docker](https://github.com/Portkey-AI/gateway/blob/main/docs/installation-deployments.md#deploy-using-docker) +- Deploy using [Docker Compose](https://github.com/Portkey-AI/gateway/blob/main/docs/installation-deployments.md#deploy-using-docker-compose) +- Deploy to [Zeabur](https://github.com/Portkey-AI/gateway/blob/main/docs/installation-deployments.md#deploy-to-zeabur) +- Run a [Node.js server](https://github.com/Portkey-AI/gateway/blob/main/docs/installation-deployments.md#run-a-nodejs-server) +
+ +## Gateway Enterprise Version +Make your AI app more reliable and forward-compatible, while ensuring complete data security and privacy. + +✅  Secure Key Management - for role-based access control and tracking
+✅  Simple & Semantic Caching - to serve repeat queries faster & save costs
+✅  Access Control & Inbound Rules - to control which IPs and Geos can connect to your deployments
+✅  PII Redaction - to automatically remove sensitive data from your requests to prevent indavertent exposure
+✅  PII Redaction - to automatically remove sensitive data from your requests to prevent inadvertent exposure
+✅  SOC2, ISO, HIPAA, GDPR Compliance - for best security practices
+ +[Schedule a call to discuss enterprise deployments](https://calendly.com/rohit-portkey/noam)
@@ -192,7 +225,7 @@ const client = new OpenAI({ 3. More robust fallback and retry strategies to further improve the reliability of requests. 4. Increased customizability of the unified API signature to cater to more diverse use cases. -[💬 Participate in Roadmap discussions here.](https://github.com/Portkey-AI/gateway/projects/) +[Participate in Roadmap discussions here.](https://github.com/Portkey-AI/gateway/projects/)