Stars
Approaching (Almost) Any Machine Learning Problem
Build and run containers leveraging NVIDIA GPUs
Build and run Docker containers leveraging NVIDIA GPUs
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
Fast and memory-efficient exact attention
Large Language Model Text Generation Inference
Inference and training library for high-quality TTS models.
The most powerful and modular diffusion model GUI, API and backend with a graph/nodes interface.
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
Jbuilder: generate JSON objects with a Builder-style DSL
A Guide to Extracting Terms and Definitions
Enterprise-grade and API-first LLM workspace for unstructured documents, including data extraction, redaction, rights management, prompt playground, and more!
Starter App to Build Your Own App to Query Doc Collections with Large Language Models (LLMs) using LlamaIndex, LangChain, OpenAI and more (MIT Licensed)
This repository provides very basic Flask, Streamlit, and Docker examples for the llama_index (formerly gpt_index) package
Code samples from our Python agents tutorial
Corpora for evaluating NLU services/platforms such as Dialogflow, LUIS, Watson, Rasa, etc.
A small build system with a focus on speed
A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch
A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal AI, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
An Application Framework for Java Developers
🦜⛏️ Did you say you like data?