Stars
Implementation of Facebook's FastText model in Torch
A pure Julia implementation of denoising diffusion probabilistic models
Simple, blazing-fast transformer components.
Fork of https://bitbucket.org/omerlevy/hyperwords
rogerallen / llama2.cu
Forked from karpathy/llama2.c. Inference Llama 2 in one file of pure C & one file with CUDA
An introduction to ARM64 assembly on Apple Silicon Macs
A concise set of LaTeX templates that serves a small set of needs - CV, Essays, Articles and Problem Sets
A poor guide to Pollen, that amazing document formatting system in Racket
GitHub repository for the Julia GPU tutorial hosted at https://jenni-westoby.github.io/Julia_GPU_examples/dev/
Julia package for inference and training of Llama-style language models
SpikingJelly is an open-source deep learning framework for Spiking Neural Networks (SNNs), based on PyTorch.
RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". So it's combining the best of RNN and transformer.
RL starter files to immediately train, visualize, and evaluate an agent without writing a single line of code
Emacs highlighting using Ethan Schoonover’s Solarized color scheme
Julia code for the book Reinforcement Learning: An Introduction
Annotated version of the Mamba paper
Implement a ChatGPT-like LLM in PyTorch from scratch, step by step
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
An implementation of Maximum Mean Discrepancy (MMD) as a differentiable loss in PyTorch.