Modular container build system that provides various AI/ML packages for NVIDIA Jetson 🚀🤖
See the [`packages`](packages) directory for the full list, including pre-built container images and CI/CD status for JetPack/L4T.
Using the included tools, you can easily combine packages together for building your own containers. Want to run ROS2 with PyTorch and Transformers? No problem - just do the system setup, and build it on your Jetson like this:

```bash
$ ./build.sh --name=my_container pytorch transformers ros:humble-desktop
```
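After the build completes, the same helper scripts can start the combined container. The tag below is an assumption based on the `--name` flag; check the build output for the exact image tag:

```bash
# Start the custom container built above (tag naming is an assumption;
# autotag resolves it to the image matching your JetPack/L4T version)
$ ./run.sh $(./autotag my_container)
```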
There are shortcuts for running containers too - this will pull or build a compatible `l4t-pytorch` image:

```bash
$ ./run.sh $(./autotag l4t-pytorch)
```
* `run.sh` forwards arguments to `docker run` with some defaults added (like `--runtime nvidia`, mounting a `/data` cache, and detecting devices)
* `autotag` finds a container image that's compatible with your version of JetPack/L4T - either locally, pulled from a registry, or by building it.
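Putting those defaults together, the helper invocation above is roughly equivalent to a plain `docker run` like the following. This is a sketch only - the real script also detects and mounts devices (cameras, display, etc.) dynamically:

```bash
# Approximate expansion of:  ./run.sh $(./autotag l4t-pytorch)
# (a sketch; the actual script adds device mounts that it detects)
sudo docker run --runtime nvidia -it --rm --network=host \
    --volume $(pwd)/data:/data \
    dustynv/l4t-pytorch:r35.4.1   # tag as resolved by autotag for your L4T version
```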
If you look at any package's readme (like `l4t-pytorch`), it will have detailed instructions for running its container.
Check out the tutorials at the Jetson Generative AI Lab!
Refer to the System Setup page for tips about setting up your Docker daemon and memory/storage tuning.
```bash
sudo apt-get update && sudo apt-get install git python3-pip
git clone --depth=1 https://github.com/dusty-nv/jetson-containers
cd jetson-containers
pip3 install -r requirements.txt
./run.sh $(./autotag l4t-pytorch)
```
Or you can manually run a container image of your choice without using the helper scripts above:
```bash
sudo docker run --runtime nvidia -it --rm --network=host dustynv/l4t-pytorch:r35.4.1
```
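When picking a tag manually, the container's L4T version (the `r35.4.1` above) should match the release installed on your Jetson, which you can check like this:

```bash
# Print the L4T release installed on the Jetson (its R{major}.{minor} numbers
# correspond to container tags like r35.4.1)
cat /etc/nv_tegra_release
```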
Looking for the old jetson-containers? See the `legacy` branch.
* Multimodal Voice Chat with LLaVA-1.5 13B on NVIDIA Jetson AGX Orin (container: `local_llm`)
* Interactive Voice Chat with Llama-2-70B on NVIDIA Jetson AGX Orin (container: `local_llm`)
* Realtime Multimodal VectorDB on NVIDIA Jetson (container: `nanodb`)
* NanoOWL - Open Vocabulary Object Detection ViT (container: `nanoowl`)