Learn how to engineer your end-to-end LLM ecosystem: training, streaming, and inference pipelines | deploy & automate | work in progress...

mindkhichdi/hands-on-llms

 
 


Hands-on LLMOps

Train and Deploy a Real-Time Financial Advisor

by Paul Iusztin and Pau Labarta Bajo

Table of Contents

  1. Building Blocks
  2. Setup External Services
  3. Install & Usage

1. Building Blocks

Training pipeline

  • Fine-tune Falcon 7B on our own generated Q&A dataset of investing questions and answers based on Alpaca news.
    • A single GPU seems to be sufficient when using Lit-Parrot.
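To give a feel for what a fine-tuning sample looks like, here is a minimal sketch of rendering one generated Q&A record into an instruction-style prompt string. The field names (`about`, `question`, `answer`) and the template are illustrative assumptions, not the repo's actual schema:

```python
# Sketch (not the repo's actual code): turn one generated Q&A sample
# into an instruction-style prompt string for fine-tuning.
# Field names and the template below are assumptions.

PROMPT_TEMPLATE = (
    "### Context:\n{about}\n\n"
    "### Question:\n{question}\n\n"
    "### Answer:\n{answer}"
)

def format_sample(sample: dict) -> str:
    """Render one Q&A training sample into a single prompt string."""
    return PROMPT_TEMPLATE.format(
        about=sample["about"],
        question=sample["question"],
        answer=sample["answer"],
    )

sample = {
    "about": "I am a 30-year-old investor with a moderate risk appetite.",
    "question": "Is it a good time to invest in renewable energy?",
    "answer": "Renewable energy is a long-term growth sector, but ...",
}
prompt = format_sample(sample)
```

During fine-tuning, every record in the dataset would be rendered this way before tokenization.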

Real-time data pipeline

  • Build a real-time feature pipeline that ingests data from Alpaca, computes embeddings, and stores them in a serverless vector DB.
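The per-document step of that pipeline can be sketched as follows. The hash-based "embedding" and the in-memory dict standing in for the vector DB are stubs so the snippet is self-contained; the course uses a real embedding model and a serverless vector DB:

```python
# Sketch of the streaming pipeline's per-document step:
# ingest a news item, compute an embedding, and upsert it into a vector DB.
import hashlib

def embed(text: str, dim: int = 8) -> list[float]:
    # Stand-in embedding: a deterministic hash-based vector,
    # NOT a real sentence embedding.
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]

vector_db: dict[str, dict] = {}  # id -> {"vector": ..., "payload": ...}

def ingest_news_item(item_id: str, headline: str) -> None:
    """Compute the embedding and upsert the document with its metadata."""
    vector_db[item_id] = {
        "vector": embed(headline),
        "payload": {"headline": headline},
    }

ingest_news_item("alpaca-001", "Renewable energy stocks rally on new subsidies")
```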

Inference pipeline

  • A REST API for inference that:
    1. receives a question (e.g., "Is it a good time to invest in renewable energy?"),
    2. finds the most relevant documents in the vector DB (i.e., the context), and
    3. sends a prompt with the question and context to our fine-tuned Falcon model and returns the response.
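The three steps above can be sketched like this, with stubs in place of the real vector DB client and the fine-tuned Falcon model. Function names and signatures are illustrative, not the repo's actual API:

```python
# Sketch of the inference flow: retrieve context, build a prompt, call the LLM.

def find_context(question: str, top_k: int = 3) -> list[str]:
    # Steps 1-2: embed the question and search the vector DB (stubbed here).
    return ["Renewable energy stocks rallied after new subsidies."][:top_k]

def build_prompt(question: str, context: list[str]) -> str:
    # Step 3: combine the retrieved context and the question into one prompt.
    return (
        "### Context:\n" + "\n".join(context)
        + f"\n\n### Question:\n{question}\n\n### Answer:\n"
    )

def answer(question: str) -> str:
    prompt = build_prompt(question, find_context(question))
    # In the real pipeline this prompt is sent to the fine-tuned Falcon model;
    # here we return a placeholder instead of calling a model.
    return f"[LLM response to a {len(prompt)}-char prompt]"

response = answer("Is it a good time to invest in renewable energy?")
```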

2. Setup External Services

Before diving into the modules, you have to set up a couple of additional tools for the course.

2.1. Comet ML

ML platform

Go to Comet ML, create an account, a project, and an API key. Every module's README shows how to configure these credentials.
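Comet's SDK can pick up credentials from environment variables such as the ones below (the values are placeholders; check each module's README for the exact variables it expects):

```shell
# Export your Comet ML credentials (values are placeholders).
export COMET_API_KEY="<your-api-key>"
export COMET_WORKSPACE="<your-workspace-name>"
export COMET_PROJECT_NAME="<your-project-name>"
```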

2.2. Beam

cloud compute

Go to Beam and follow their quick setup/get started tutorial. You must install their CLI and configure your credentials on your local machine.

We had issues locating the Beam CLI from inside the Poetry virtual environment. To fix this, create a symlink using the following command (replace <your-poetry-env-name> with your Poetry env name):

export POETRY_ENV_NAME=<your-poetry-env-name>
ln -s /usr/local/bin/beam ~/.cache/pypoetry/virtualenvs/${POETRY_ENV_NAME}/bin/beam

3. Install & Usage

Every module has its own dependencies and scripts. In a production setup, each module would live in its own repository, but for learning purposes we keep everything in one place.

Thus, check out each module's README to see how to install and use it:

  1. q_and_a_dataset_generator
  2. training_pipeline
  3. streaming_pipeline
  4. inference_pipeline

3.1 Run Notebooks Server

If you want to run a notebook server inside a virtual environment, follow the steps below.

First, expose the virtual environment as a notebook kernel:

python -m ipykernel install --user --name hands-on-llms --display-name "hands-on-llms"

Now run the notebook server:

jupyter notebook notebooks/ --ip 0.0.0.0 --port 8888
