Stars
A minimal PyTorch implementation of probabilistic diffusion models for 2D datasets.
Implementation of Denoising Diffusion Probabilistic Models (DDPM) in PyTorch (a sketch of the forward noising step appears after this list)
Depth Pro: Sharp Monocular Metric Depth in Less Than a Second.
Entropy-Based Sampling and Parallel CoT Decoding
Interpreter for the perfect programming language
A massively parallel, high-level programming language
A massively parallel, optimal functional runtime in Rust
A minimal GPU design in Verilog to learn how GPUs work from the ground up
Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI.
Deep learning accelerator architectures requiring half the multipliers
PyTorch pre-trained model for real-time interest point detection, description, and sparse tracking (https://arxiv.org/abs/1712.07629)
ALIKE: Accurate and Lightweight Keypoint Detection and Descriptor Extraction
Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization (a toy BPE training sketch appears after this list)
High-Quality Resources on GPU Programming/Architecture
A multi-voice TTS system trained with an emphasis on quality
A single notebook for fine-tuning GPT-3.5 Turbo
The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
A natural language interface for computers
A small toy Lisp programming language built in TypeScript
A TypeScript library that implements a nearly identical API to Karpathy's micrograd
A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API
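
For the two diffusion entries above, here is a minimal sketch of the standard DDPM closed-form forward process q(x_t | x_0) with a linear beta schedule, applied to 2D points. It is not code from either repository; the names (`T`, `betas`, `alphas_bar`, `q_sample`) are made up for this example.

```python
# Sketch of the DDPM forward noising step:
# q(x_t | x_0) = N(sqrt(alpha_bar_t) * x_0, (1 - alpha_bar_t) * I)
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # cumulative product of (1 - beta_t)

def q_sample(x0: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    noise = torch.randn_like(x0)
    a_bar = alphas_bar[t].unsqueeze(-1)         # broadcast over the feature dimension
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

x0 = torch.randn(8, 2)                          # a batch of 2D points
t = torch.randint(0, T, (8,))
xt = q_sample(x0, t)
print(xt.shape)  # torch.Size([8, 2])
```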
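For the BPE entry above, here is a toy sketch of the BPE training loop: repeatedly merge the most frequent adjacent byte pair into a new token. It is illustrative only, not the repository's implementation; `train_bpe` and `num_merges` are names invented for this example.

```python
# Toy BPE training: greedily merge the most frequent adjacent pair, num_merges times.
from collections import Counter

def train_bpe(text: str, num_merges: int):
    ids = list(text.encode("utf-8"))           # start from raw UTF-8 bytes (0..255)
    merges = {}                                # (pair) -> new token id
    next_id = 256
    for _ in range(num_merges):
        pairs = Counter(zip(ids, ids[1:]))     # count adjacent pairs
        if not pairs:
            break
        pair = pairs.most_common(1)[0][0]      # most frequent pair
        merges[pair] = next_id
        # replace every occurrence of `pair` with the new token id
        out, i = [], 0
        while i < len(ids):
            if i + 1 < len(ids) and (ids[i], ids[i + 1]) == pair:
                out.append(next_id)
                i += 2
            else:
                out.append(ids[i])
                i += 1
        ids = out
        next_id += 1
    return ids, merges

ids, merges = train_bpe("aaabdaaabac", num_merges=3)
print(ids, merges)
```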
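For the two micrograd entries above, here is a minimal sketch of a scalar-valued autograd engine in the same spirit: each operation records its inputs and a local backward rule, and `backward()` applies the chain rule in reverse topological order. The class and method names are illustrative, not the actual library API.

```python
# Minimal scalar autograd engine sketch (micrograd-style, not the library itself).
class Value:
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # build a topological order, then apply the chain rule node by node
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()

a, b = Value(2.0), Value(-3.0)
c = a * b + a
c.backward()
print(c.data, a.grad, b.grad)  # -4.0 -2.0 2.0
```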