⚡️ The open-source observability platform for large language models ⚡️
- Easy to integrate: adding a single line of code is enough to start logging (see the sketch after this list)
- User-friendly dashboard for visualizing logs and metrics
- Segment metrics by prompts, users, and models
- Visualize requests, including conversations and chained prompts
- Understand how latency varies throughout the day and when rate limits are hit
- Identify users with disproportionate usage costs
- Bill users based on their OpenAI usage
- Slice metrics by prompt, fine-tuned model, or API key
Helicone is a proxy server that executes OpenAI completions queries on your behalf and logs useful statistics about each request, such as latency, result, and cost, to a database.
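Because the proxy speaks the same API as OpenAI, integration usually amounts to pointing your client at the proxy URL. Here is a minimal Python sketch; the base URL and the `Helicone-Auth` header name below are illustrative assumptions, so check the Helicone docs for the exact values:

```python
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]
# Route requests through the Helicone proxy instead of api.openai.com
# (the URL below is an assumption for illustration).
openai.api_base = "https://oai.hconeai.com/v1"

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Say hello to Helicone",
    # Per-request headers let the proxy attribute usage to your account;
    # the header name here is an assumption.
    headers={"Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}"},
)
print(response.choices[0].text)
```

Everything else about the request stays the same, which is why the integration is described as a one-line change.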
Our workers are deployed and secured on Cloudflare's edge network, so you get low latency on your requests wherever you are in the world.
You may use our hosted solution as the proxy or self-host workers in your own environment for free!