
🦜️🔗 Chat LangChain

This repo is an implementation of a locally hosted chatbot specifically focused on question answering over the LangChain documentation. Built with LangChain, FastAPI, and Next.js.

Deployed version: chat.langchain.com

Looking for the JS version? Click here.

The app leverages LangChain's streaming support and async API to update the page in real time for multiple users.
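
For illustration, a minimal sketch of that streaming pattern, assuming the langchain-openai package (the model name and prompt are placeholders, not the app's actual configuration):

import asyncio
from langchain_openai import ChatOpenAI

async def main() -> None:
    # streaming=True requests a token stream from OpenAI; astream then
    # yields chunks as they arrive, so the page can render the answer
    # incrementally for each connected user.
    llm = ChatOpenAI(model="gpt-3.5-turbo", streaming=True)
    async for chunk in llm.astream("What is LangChain?"):
        print(chunk.content, end="", flush=True)

asyncio.run(main())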

✅ Running locally

  1. Install backend dependencies: poetry install.
  2. Set the environment variables required to configure the application:
export OPENAI_API_KEY=
export WEAVIATE_URL=
export WEAVIATE_API_KEY=
export RECORD_MANAGER_DB_URL=

# for tracing
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
export LANGCHAIN_API_KEY=
export LANGCHAIN_PROJECT=
  3. Run python backend/ingest.py to ingest the LangChain docs into the Weaviate vectorstore (this only needs to be done once). You can also use other Document Loaders to ingest your own data; see the sketch after this list.
  4. Start the Python backend with make start.
  5. Install frontend dependencies by running cd ./frontend, then yarn.
  6. Run the frontend with yarn dev.
  7. Open localhost:3000 in your browser.
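
For step 3, here is a minimal sketch of ingesting your own data with a different Document Loader. It assumes the v3 weaviate-client API; the path ./my_docs, the glob pattern, the chunk sizes, and the index name MyDocs are all illustrative, not the repo's actual configuration (which also uses a record manager keyed by RECORD_MANAGER_DB_URL, omitted here):

import os
import weaviate
from langchain_community.document_loaders import DirectoryLoader
from langchain_community.vectorstores import Weaviate
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load local markdown files instead of the LangChain docs.
docs = DirectoryLoader("./my_docs", glob="**/*.md").load()

# Split into overlapping chunks sized for embedding and retrieval.
chunks = RecursiveCharacterTextSplitter(
    chunk_size=4000, chunk_overlap=200
).split_documents(docs)

# Embed the chunks with OpenAI and index them in Weaviate.
client = weaviate.Client(
    url=os.environ["WEAVIATE_URL"],
    auth_client_secret=weaviate.AuthApiKey(api_key=os.environ["WEAVIATE_API_KEY"]),
)
Weaviate.from_documents(
    chunks, OpenAIEmbeddings(), client=client, index_name="MyDocs", by_text=False
)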

📚 Technical description

There are two components: ingestion and question-answering.

Ingestion has the following steps:

  1. Pull HTML from the documentation site as well as the GitHub codebase.
  2. Load the HTML with LangChain's RecursiveUrlLoader and SitemapLoader (see the sketch after this list).
  3. Split documents with LangChain's RecursiveCharacterTextSplitter
  4. Create a vectorstore of embeddings, using LangChain's Weaviate vectorstore wrapper (with OpenAI's embeddings).
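
A minimal sketch of steps 1-3 (in current langchain-community the loader class is spelled RecursiveUrlLoader; the URLs, crawl depth, and chunk sizes here are illustrative):

from bs4 import BeautifulSoup
from langchain_community.document_loaders import RecursiveUrlLoader, SitemapLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Crawl the API reference recursively, and the main docs via the sitemap.
api_docs = RecursiveUrlLoader(
    url="https://api.python.langchain.com/en/latest/",
    max_depth=8,
    extractor=lambda html: BeautifulSoup(html, "lxml").get_text(),
).load()
main_docs = SitemapLoader(
    web_path="https://python.langchain.com/sitemap.xml",
).load()

# Split the pages into overlapping chunks for embedding (step 4 then
# indexes these in Weaviate, as in the sketch under "Running locally").
chunks = RecursiveCharacterTextSplitter(
    chunk_size=4000, chunk_overlap=200
).split_documents(api_docs + main_docs)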

Question-Answering has the following steps:

  1. Given the chat history and new user input, use GPT-3.5 to rephrase the input as a standalone question (the full flow is sketched after this list).
  2. Given that standalone question, look up relevant documents from the vectorstore.
  3. Pass the standalone question and relevant documents to the model to generate and stream the final answer.
  4. Generate a trace URL for the current chat session, as well as an endpoint to collect feedback.
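
A condensed sketch of that flow. The prompts are paraphrased, retriever is assumed to wrap the Weaviate vectorstore built during ingestion, and the trace URL / feedback endpoint step is omitted:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-3.5-turbo", streaming=True)

# Step 1: condense chat history + new input into a standalone question.
condense = ChatPromptTemplate.from_template(
    "Chat history:\n{chat_history}\n\nFollow-up input: {question}\n"
    "Rephrase the follow-up as a standalone question."
) | llm | StrOutputParser()

# Step 3: answer the standalone question from the retrieved context.
answer = ChatPromptTemplate.from_template(
    "Answer using only this context:\n{context}\n\nQuestion: {question}"
) | llm | StrOutputParser()

async def respond(question: str, chat_history: str, retriever):
    standalone = await condense.ainvoke(
        {"chat_history": chat_history, "question": question}
    )
    # Step 2: look up relevant documents in the vectorstore.
    docs = await retriever.ainvoke(standalone)
    context = "\n\n".join(d.page_content for d in docs)
    # Step 3 (cont.): stream the final answer token by token.
    async for token in answer.astream({"context": context, "question": standalone}):
        yield token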

Documentation

Looking to use or modify this Use Case Accelerant for your own needs? We've added a few docs to aid with this:

  • Concepts: A conceptual overview of the different components of Chat LangChain. Goes over features like ingestion, vector stores, query analysis, etc.
  • Modify: A guide on how to modify Chat LangChain for your own needs. Covers the frontend, backend and everything in between.
  • Running Locally: The steps to take to run Chat LangChain 100% locally.
  • LangSmith: A guide on adding robustness to your application using LangSmith. Covers observability, evaluations, and feedback.
  • Production: Documentation on preparing your application for production usage. Explains different security considerations, and more.
  • Deployment: How to deploy your application to production. Covers setting up production databases, deploying the frontend, and more.

