- Before you begin
- Deploy the starter kit GUI
- Use the starter kit GUI
- Customize the starter kit
- Examples, third-party tools, and data sources
You have to set up your environment before you can run the starter kit.
Clone the starter kit repo.
git clone https://github.com/sambanova/ai-starter-kit.git
The next step sets you up to use one of the models available from SambaNova. The setup depends on whether you're a SambaNova customer who uses SambaStudio or whether you want to use the publicly available Sambaverse.
- Create a Sambaverse account at Sambaverse and select your model.
- Get your Sambaverse API key (from the user button).
- In the repo root directory, find the config file `sn-ai-starter-kit/.env` and specify the Sambaverse API key, as in the following example:

  SAMBAVERSE_API_KEY="456789ab-cdef-0123-4567-89abcdef0123"

- In the config file, set the `api` variable to `"sambaverse"`.
To perform this setup, you must be a SambaNova customer with a SambaStudio account.
- Log in to SambaStudio and get your API authorization key. The steps for getting this key are described here.
- Select the LLM you want to use (e.g. Llama 2 70B chat) and deploy an endpoint for inference. See the SambaStudio endpoint documentation.
- Update the `sn-ai-starter-kit/.env` config file in the root repo directory. Here's an example:

  BASE_URL="https://api-stage.sambanova.net"
  PROJECT_ID="12345678-9abc-def0-1234-56789abcdef0"
  ENDPOINT_ID="456789ab-cdef-0123-4567-89abcdef0123"
  API_KEY="89abcdef-0123-4567-89ab-cdef01234567"

- Open the config file and set the variable `api` to `"sambastudio"`.
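In the kit, these `.env` entries are loaded with python-dotenv (listed in requirements.txt). The minimal parser below is only an illustrative stand-in showing how the key=value lines become settings; it is not the kit's actual loading code:

```python
# Simplified stand-in for python-dotenv: turn .env-style lines into a dict.
def parse_env(text: str) -> dict:
    settings = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip().strip('"')
    return settings

example = '''
BASE_URL="https://api-stage.sambanova.net"
PROJECT_ID="12345678-9abc-def0-1234-56789abcdef0"
ENDPOINT_ID="456789ab-cdef-0123-4567-89abcdef0123"
API_KEY="89abcdef-0123-4567-89ab-cdef01234567"
'''
settings = parse_env(example)
```

If a variable comes back empty at runtime, check that the `.env` file sits in the repo root directory as shown above.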
We recommend that you run the starter kit in a virtual environment or use a container.
If you want to use a virtualenv or conda environment:
- Create a virtual environment and install the requirements:

  cd ai-starter-kit/prompt-engineering
  python3 -m venv prompt_engineering_env
  source prompt_engineering_env/bin/activate
  pip install -r requirements.txt

- Run the following command:

  streamlit run streamlit/app.py --browser.gatherUsageStats false
You should see the following user interface:
If you want to use Docker:

- Update the `SAMBASTUDIO_KEY`, `SNAPI`, and `SNSDK` args in the docker-compose.yaml file.
- Run the command:

  docker-compose up --build
You will be prompted to open http://localhost:8501/ in your browser, where you will see the Streamlit page described above.
To use the starter kit, follow these steps:
- Choose the LLM to use from the options available under Model Selection (currently, only Llama2 70B is available). You'll see a description of the architecture, prompting tips, and the metatag format required to optimize the model's performance.
- In Use Case for Sample Prompt, select a template. You have the following choices:
  - General Assistant: Provides comprehensive assistance on a wide range of topics, including answering questions, offering explanations, and giving advice. It's ideal for general knowledge, trivia, educational support, and everyday inquiries.
  - Document Search: Specializes in locating and briefing relevant information from large documents or databases. Useful for research, data analysis, and extracting key points from extensive text sources.
  - Product Selection: Assists in choosing products by comparing features, prices, and reviews. Ideal for shopping decisions, product comparisons, and understanding the pros and cons of different items.
  - Code Generation: Helps in writing, debugging, and explaining code. Useful for software development, learning programming languages, and automating simple tasks through scripting.
  - Summarization: Outputs a summary based on a given context. Essential for condensing large volumes of text.
- In the Prompt field, review and edit the input to the model.
- Click the Send button to submit the prompt. The model will retrieve and display the response.
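Conceptually, selecting a use case and pressing Send amounts to filling a prompt template with your edited input. The sketch below illustrates that flow; the template strings and use-case keys are simplified placeholders, not the kit's actual prompt YAMLs:

```python
# Illustrative only: map a use case to a template and fill in the user's prompt.
TEMPLATES = {
    "General Assistant": "You are a helpful assistant.\n{user_input}",
    "Summarization": "Summarize the following text:\n{user_input}",
}

def build_prompt(use_case: str, user_input: str) -> str:
    """Fill the selected use case's template with the edited prompt text."""
    return TEMPLATES[use_case].format(user_input=user_input)

prompt = build_prompt("Summarization", "Streamlit makes web apps easy.")
```

The real templates live as YAML files in the `prompts` folder and carry the model-specific metatags mentioned under Model Selection.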
You have several options for customizing this starter kit.
You can include more models with the kit. They will then show up in the Model Selection pulldown in the GUI.
If you're using a SambaStudio endpoint, follow these steps:
- Create a SambaStudio endpoint for inference.
- In the `config.json` file, include the model description in the model section.
- Populate key variables from your env file in `streamlit/app.py`.
- Define the method for calling the model. See `call_sambanova_llama2_70b_api` in `streamlit/app.py` for an example.
- Include the new method in the `st.button(send)` section in `streamlit/app.py`.
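A call method typically starts by combining the `.env` values into an endpoint address. The URL path in this sketch is an assumption for illustration only, not the documented SambaStudio route; check `call_sambanova_llama2_70b_api` in `streamlit/app.py` for the real construction:

```python
# Hypothetical sketch: combine .env values into an endpoint address.
# The "/api/predict/nlp/" path segment is assumed, not taken from the docs.
def endpoint_url(base: str, project_id: str, endpoint_id: str) -> str:
    return f"{base}/api/predict/nlp/{project_id}/{endpoint_id}"

url = endpoint_url(
    "https://api-stage.sambanova.net",
    "12345678-9abc-def0-1234-56789abcdef0",
    "456789ab-cdef-0123-4567-89abcdef0123",
)
```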
If you're using a Sambaverse endpoint, follow these steps:
- In the playground, find the model you're interested in.
- Select the three dots and then Show code, and note down the values of `modelName` and `select_expert`.
- Define the method for calling the model. In `streamlit/app.py`, set the values of `sambaverse_model_name` and `select_expert`. See `call_sambaverse_llama2_70b_api` for an example.
- Include the new method in the `st.button(send)` section in `streamlit/app.py`.
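A Sambaverse call method starts from the `modelName` and `select_expert` values noted in the Show code dialog. The payload shape below is a hypothetical sketch of how those values feed a request body, not the actual Sambaverse request format; `call_sambaverse_llama2_70b_api` in `streamlit/app.py` shows the real one:

```python
# Hypothetical sketch: build a request body from the Show code values.
# "your-model-name" and "your-expert" are placeholders for the values
# you copied from the playground.
import json

def build_sambaverse_payload(model_name: str, expert: str, prompt: str) -> str:
    return json.dumps({
        "modelName": model_name,
        "select_expert": expert,
        "prompt": prompt,
    })

payload = build_sambaverse_payload("your-model-name", "your-expert", "Hello")
```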
To change a template:
- Edit the `create_prompt_yamls()` method in `streamlit/app.py`.
- Execute the method to modify the prompt yaml file in the `prompts` folder.
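As a rough illustration of what a `create_prompt_yamls()`-style helper does — writing one template file per use case into the `prompts` folder — here is a sketch under an assumed, simplified file schema (the kit's actual YAML fields may differ):

```python
# Illustrative sketch: write one prompt template file per use case.
# The "template:" field and file-naming scheme are assumptions.
from pathlib import Path
import tempfile

def write_prompt_yaml(folder: Path, use_case: str, template: str) -> Path:
    """Write a minimal one-field prompt yaml for the given use case."""
    path = folder / f"{use_case.lower().replace(' ', '_')}.yaml"
    path.write_text(f"template: |\n  {template}\n")
    return path

folder = Path(tempfile.mkdtemp())  # stand-in for the kit's prompts folder
out = write_prompt_yaml(folder, "General Assistant", "You are a helpful assistant.")
```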
To add a prompt template:
- Follow the instructions in Edit a template.
- Include the template use case in the `use_cases` list of the `config.yaml` file.
For further examples, we encourage you to visit any of the following resources:
All the packages/tools are listed in the requirements.txt file in the project directory. Some of the main packages are listed below:
- streamlit (version 1.25.0)
- langchain (version 1.1.4)
- python-dotenv (version 1.0.0)
- Requests (version 2.31.0)
- sseclient (version 0.0.27)
- streamlit-extras (version 0.3.6)
- pydantic (version 1.10.14)
- pydantic_core (version 2.10.1)