Palico AI - Tech-Stack for Iterative Development


Developing an LLM application requires continuously trying different combinations of models, prompts, RAG datasets, call chaining, custom code, and more. Palico helps you build a tech-stack designed for the iterative nature of LLM Development.

With Palico you can:

  • ✅  Build any application with complete flexibility (docs)
  • ✅  Preview changes locally with a Playground UI (docs)
  • ✅  Improve performance with Experiments and Evals (docs)
  • ✅  Debug issues with logs and traces (docs)
  • ✅  Integrate with your frontend with ClientSDKs or REST API (docs)
  • ✅  Set up Continuous Integration and Pull-Request Previews (docs)
  • ✅  Manage your application from a control panel (docs)

Tip

⭐️ Star this repo to get release notifications for new features.


⚡ Get started in seconds ⚡

npx palico init <project-name>

Check out our quickstart guide.


🛠️ Building your Application

Build your application with complete flexibility

With Palico, you have complete control over the implementation details of your LLM application. Build any application by creating a Chat function.

import { Chat } from '@palico-ai/app';
import OpenAI from 'openai';

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// 1. implement the Chat type
const handler: Chat = async ({ userMessage }) => {
  // 2. implement your application logic
  const response = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo-0125',
    messages: [{ role: 'user', content: userMessage }],
  });
  return {
    message: response.choices[0].message.content,
  };
};

// 3. export the handler
export default handler;

Learn more about building your application with Palico (docs).

Build complex interactions with powerful primitives

| Feature | Description |
| ------- | ----------- |
| Streaming | Stream messages, data, and intermediate steps to your client app |
| Memory Management | Store conversation state between requests without managing any storage infrastructure |
| Tool Executions | Build agents that can execute tools on both the client side and the server side |
| Feature Flags | Easily swap models, prompts, RAG, and custom logic at runtime (see the sketch below) |
| Monitoring | Debug issues faster with comprehensive logs and traces |
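
Feature flags, for example, ride on the appConfig object that callers attach to each request (see the Client SDK snippet further down). Below is a minimal sketch of a handler that reads appConfig to swap models at runtime; the appConfig.model field is an illustrative convention, not a fixed schema.

import { Chat } from '@palico-ai/app';
import OpenAI from 'openai';

const openai = new OpenAI();

// a minimal sketch: `appConfig.model` is an illustrative field -- your
// app decides what goes into appConfig and how to interpret it
const handler: Chat = async ({ userMessage, appConfig }) => {
  const model = (appConfig?.model as string) ?? 'gpt-3.5-turbo-0125';
  const response = await openai.chat.completions.create({
    model,
    messages: [{ role: 'user', content: userMessage ?? '' }],
  });
  return { message: response.choices[0].message.content ?? '' };
};

export default handler;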

Integrates with your favorite tools and libraries

Since you own the implementation details, you can use Palico with most other external tools and libraries.

| Tools or Libraries | Supported |
| ------------------ | --------- |
| Langchain | ✅ |
| LlamaIndex | ✅ |
| Portkey | ✅ |
| OpenAI | ✅ |
| Anthropic | ✅ |
| Cohere | ✅ |
| Azure | ✅ |
| AWS Bedrock | ✅ |
| GCP Vertex | ✅ |
| Pinecone | ✅ |
| PG Vector | ✅ |
| Chroma | ✅ |
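
Because a handler is plain TypeScript, these drop in directly. Below is a minimal sketch that calls LangChain from inside a Chat handler; the @langchain/openai package and the model name are illustrative choices, not requirements.

import { Chat } from '@palico-ai/app';
import { ChatOpenAI } from '@langchain/openai';

// a minimal sketch: any LangChain model or chain can be invoked from a
// handler; the model name here is illustrative
const model = new ChatOpenAI({ model: 'gpt-4o-mini' });

const handler: Chat = async ({ userMessage }) => {
  const result = await model.invoke(userMessage ?? '');
  return { message: result.content as string };
};

export default handler;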

Learn more from docs.

Locally test your changes in Playground UI

Make a code change and instantly preview it locally in our Playground UI.


🔄 Improve Performance with Experiments

Palico helps you create an iterative loop to systematically improve the performance of your application.


Create Test-Cases

Define test-cases that model the expected behavior of your application.

// the TestDatasetFN type and the evaluation helpers ship with Palico;
// the exact import path is an assumption -- see the Experiments docs
import {
  TestDatasetFN,
  containsEvalMetric,
  levensteinEvalMetric,
  rougeSkipBigramSimilarityEvalMetric,
} from '@palico-ai/app';

const testCases: TestDatasetFN = async () => {
  return [
    {
      input: { // input to your LLM Application
        userMessage: "What is the capital of France?",
      },
      tags: { // tags to help you categorize your test-cases
        intent: "question",
      },
      metrics: [
        // example metrics
        containsEvalMetric({
          substring: "Paris",
        }),
        levensteinEvalMetric({
          expected: "Paris",
        }),
        rougeSkipBigramSimilarityEvalMetric({
          expected: "Paris",
        }),
      ],
    },
  ];
};
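
In this sketch, containsEvalMetric checks that the response contains the expected substring, while levensteinEvalMetric and rougeSkipBigramSimilarityEvalMetric score the response's textual similarity against the expected answer using edit distance and ROUGE skip-bigram overlap, respectively.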

Run Evaluations


Analyze Results


Learn more about experiments.

🚀 Going to Production

You can deploy your Palico app to any cloud provider using Docker.
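
As a minimal sketch, a Dockerfile for a Node-based Palico app might look like the following; the Node version, port, and npm scripts are assumptions to adapt to your project.

# a minimal sketch -- Node version, port, and npm scripts are assumptions
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
EXPOSE 8000
CMD ["npm", "start"]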

Continuous Integration

Set up CI/CD and pull-request previews with Coolify and Palico. Learn more about deployment.

Integrate with Client SDKs

// `client` is an instance of the Palico client SDK; see the Client SDK
// docs for constructing it with your API URL and service key
const response = await client.agent.chat({
  agentName: "chat",
  stream: true,
  userMessage: "What is the weather today?",
  payload: {
    location: "San Francisco",
  },
  appConfig: {
    model: "gpt-3.5-turbo",
    provider: "openai",
  },
});

First-party support for React

import { useChat } from "@palico-ai/react";

const { messages, sendMessage } = useChat({
  agentName: "assistant",
  apiURL: "/api/palico",
});
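
As a hypothetical usage sketch, a component built on useChat might look like the following; the sendMessage signature and the shape of each message are assumptions, so check the Client SDK docs for the exact types.

import { useChat } from '@palico-ai/react';
import { useState } from 'react';

// a hypothetical chat box; the sendMessage signature and message shape
// are assumptions -- consult the Client SDK docs for your version
export function ChatBox() {
  const { messages, sendMessage } = useChat({
    agentName: 'assistant',
    apiURL: '/api/palico',
  });
  const [draft, setDraft] = useState('');

  return (
    <div>
      {messages.map((message, index) => (
        <pre key={index}>{JSON.stringify(message, null, 2)}</pre>
      ))}
      <input value={draft} onChange={(e) => setDraft(e.target.value)} />
      <button
        onClick={() => {
          sendMessage(draft); // signature is an assumption
          setDraft('');
        }}
      >
        Send
      </button>
    </div>
  );
}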

Learn more about Client SDKs.

🤝 Contributing

The easiest way to contribute is to pick an issue with the good first issue tag 💪. Read the contribution guidelines here.

Bug Report? File here | Feature Request? File here

✨ Contributors


↑ Back to Top ↑