docs: Add Minimum Hardware Requirements sections (iusztinpaul#55)
* docs: Add Minimum Hardware Requirements sections

* docs: Add emojis
iusztinpaul authored Nov 28, 2023
1 parent 10684ac commit 9120430
Showing 1 changed file (README.md) with 19 additions and 0 deletions.
@@ -41,6 +41,13 @@ The **training pipeline** is **deployed** using [Beam](https://docs.beam.cloud/g

-> Found under the `modules/training_pipeline` directory.

#### 💻 Minimum Hardware Requirements
* CPU: 4 Cores
* RAM: 14 GiB
* VRAM: 10 GiB (a CUDA-enabled Nvidia GPU is mandatory)

**Note:** Do not worry if you don't have the minimum hardware requirements. We will show you how to deploy the training pipeline to [Beam's](https://docs.beam.cloud/getting-started/quickstart?utm_source=thepauls&utm_medium=partner&utm_content=github) serverless infrastructure and train the LLM there.
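As a quick sanity check before training locally, a small sketch like the one below (not part of the repository, shown here only for illustration) can report what your machine offers; VRAM is queried via `nvidia-smi` only if that tool is installed:

```python
import os
import shutil
import subprocess


def local_hardware():
    """Best-effort snapshot of CPU cores, RAM, and GPU VRAM (Linux/macOS)."""
    cores = os.cpu_count()
    try:
        # Total physical memory in GiB via POSIX sysconf (Unix-only).
        ram_gib = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 2**30
    except (ValueError, OSError, AttributeError):
        ram_gib = None  # sysconf not available on this platform

    vram_gib = None
    if shutil.which("nvidia-smi"):
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader,nounits"],
            capture_output=True,
            text=True,
        )
        if out.returncode == 0 and out.stdout.strip():
            vram_gib = int(out.stdout.split()[0]) / 1024  # MiB -> GiB

    return {"cores": cores, "ram_gib": ram_gib, "vram_gib": vram_gib}


specs = local_hardware()
print(specs)
```

If `vram_gib` comes back as `None` or below 10 GiB, that is exactly the case the note above covers: deploy to Beam instead of training locally.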

### 1.2. Streaming Real-time Pipeline

Real-time feature pipeline that:
@@ -52,6 +59,11 @@ The **streaming pipeline** is **automatically deployed** on an AWS EC2 machine u

-> Found under the `modules/streaming_pipeline` directory.

#### 💻 Minimum Hardware Requirements
* CPU: 1 Core
* RAM: 2 GiB
* VRAM: not required (no GPU needed)

### 1.3. Inference Pipeline

Inference pipeline that uses [LangChain](https://github.com/langchain-ai/langchain) to create a chain that:
@@ -66,6 +78,13 @@ The **inference pipeline** is **deployed** using [Beam](https://docs.beam.cloud/

-> Found under the `modules/financial_bot` directory.

#### 💻 Minimum Hardware Requirements
* CPU: 4 Cores
* RAM: 14 GiB
* VRAM: 8 GiB (a CUDA-enabled Nvidia GPU is mandatory)

**Note:** Do not worry if you don't have the minimum hardware requirements. We will show you how to deploy the inference pipeline to [Beam's](https://docs.beam.cloud/getting-started/quickstart?utm_source=thepauls&utm_medium=partner&utm_content=github) serverless infrastructure and call the LLM from there.

<br/>

![architecture](media/architecture.png)
