(Demo videos: hero.mp4, blocks.mp4, visualization.mp4, evals.mp4, optimization.mp4)
- Easy to hack: for example, you can add a new workflow node by creating a single Python file.
- JSON configs of workflow graphs, enabling easy sharing and version control.
- Lightweight via minimal dependencies, avoiding bloated LLM frameworks.
You can launch PySpur using pre-built Docker images with the following steps:

1. Clone the repository:

   git clone https://github.com/PySpur-com/pyspur.git
   cd pyspur
2. Create a .env file:

   Create a .env file at the root of the project. You may use .env.example as a starting point:

   cp .env.example .env

   Please go through the .env file and change the configs wherever necessary. If you plan to use third-party model providers, add their API keys to the .env file in this step (see the example .env sketch after these steps).
3. Start the docker services:

   docker compose -f ./docker-compose.prod.yml up --build -d

   This will start a local instance of PySpur that stores spurs in a local SQLite database (or in your own database, if you provided one in the .env file in step 2).
4. Access the portal:

   Go to http://localhost:6080/ in your browser.

   Setup is complete. Click on "New Spur" to create a workflow, or start with one of the stock templates.
5. [Optional] Manage your LLM provider keys from the app:

   Once the PySpur app is running, you can manage your LLM provider keys through the portal:

   - Select the API keys tab
   - Enter your provider's key and click save (the save button appears after you add or modify a key)
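As a reference for step 2, the snippet below sketches what a filled-in .env might look like. Only OLLAMA_BASE_URL is taken from this guide; the provider key name shown is purely illustrative, so check .env.example for the exact variable names your setup expects.

    # Illustrative .env sketch: only OLLAMA_BASE_URL comes from this guide
    # Optional: point PySpur at a local Ollama instance (no trailing slash)
    OLLAMA_BASE_URL=http://host.docker.internal:11434
    # Optional: third-party provider key; the exact variable name may differ, see .env.example
    OPENAI_API_KEY=your-key-here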
The steps for the dev setup are the same as above, except for step 3: there, we launch the app in dev mode instead.

Start the docker services:

docker compose up --build -d

This will start a local instance of PySpur that stores spurs and their runs in a local SQLite file. Note: in some environments you may need to prefix the command with sudo:

sudo docker compose up --build -d
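While developing, you may want to watch the app's output. You can tail the logs of all compose services with a standard Docker Compose command (not PySpur-specific):

    docker compose logs -f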
PySpur can work with local models served using Ollama.

The following steps configure PySpur to work with Ollama running on the same host.

To ensure the Ollama API is reachable from PySpur, we need to start the Ollama service with the environment variable OLLAMA_HOST=0.0.0.0. This allows requests coming from PySpur's Docker bridge network to get through to Ollama. An easy way to do this is to launch the Ollama service with the following command:

OLLAMA_HOST="0.0.0.0" ollama serve
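Before starting PySpur, you can optionally confirm that Ollama is up and listening by querying its model-listing endpoint from the host. This uses Ollama's standard REST API and is not a PySpur command; it should return the models currently available locally:

    curl http://localhost:11434/api/tags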
Next, we need to update the OLLAMA_BASE_URL environment value in the .env file. If your Ollama port is 11434 (the default), then the entry in the .env file should look like this:

OLLAMA_BASE_URL=http://host.docker.internal:11434

(Please make sure there is no trailing slash at the end!)

In PySpur's setup, host.docker.internal refers to the host machine where both PySpur and Ollama are running.
Follow the usual steps to launch the PySpur app, starting with the command:

docker compose -f docker-compose.prod.yml up --build -d

If you wish to do PySpur development with Ollama, run the following command instead:

docker compose -f docker-compose.yml up --build -d
You will then be able to select Ollama models (ollama/llama3.2, ollama/llama3, ...) from the sidebar for LLM nodes.

Please make sure the model you select has been explicitly downloaded in Ollama; that is, you will need to manage these models manually via Ollama. To download a model, simply run ollama pull <model-name>.
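For example, to make ollama/llama3.2 selectable in the sidebar, you would first pull the corresponding model on the host:

    ollama pull llama3.2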
PySpur only works with models that support structured output and JSON mode. Most newer models should be fine, but it is still worth confirming this in the Ollama documentation for the model you wish to use.
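As a quick, PySpur-independent way to check JSON-mode support, you can ask Ollama directly for JSON-formatted output. The request below uses Ollama's /api/generate endpoint with its format parameter; llama3.2 is just an example model name:

    # Ask the model for JSON output; a well-formed JSON response suggests JSON mode works
    curl http://localhost:11434/api/generate -d '{
      "model": "llama3.2",
      "prompt": "Return a JSON object with a single key \"ok\" set to true.",
      "format": "json",
      "stream": false
    }'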
- Canvas
- Async/Batch Execution
- Evals
- Spur API
- Support Ollama
- New Nodes
  - LLM Nodes
  - If-Else
  - Merge Branches
  - Tools
  - Loops
- RAG
- Pipeline optimization via DSPy and related methods
- Templates
- Compile Spurs to Code
- Multimodal support
- Containerization of Code Verifiers
- Leaderboard
- Generate Spurs via AI
Your feedback is massively appreciated. Please tell us which features on this list you would like to see next, or request entirely new ones.