Interactive deep learning book with multi-framework code, math, and discussions. Adopted at 500 universities across 70 countries, including Stanford, MIT, Harvard, and Cambridge.
A programming framework for agentic AI 🤖. PyPI: autogen-agentchat · Discord: https://aka.ms/autogen-discord · Office Hour: https://aka.ms/autogen-officehour
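A hedged sketch of how this framework is typically driven, assuming the autogen-agentchat 0.4-style API and an OPENAI_API_KEY in the environment; exact class paths may differ between releases:

```python
# Minimal AgentChat sketch, assuming the autogen-agentchat >= 0.4 API.
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent(name="assistant", model_client=model_client)
    result = await agent.run(task="Explain agentic AI in one sentence.")
    print(result.messages[-1].content)

asyncio.run(main())
```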
SOTA discrete acoustic codec models with 40 tokens per second for audio language modeling
MiniCPM-o 2.6: A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming on Your Phone
SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer
🤗 smolagents: a barebones library for agents that write Python code to call tools and orchestrate other agents.
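A minimal sketch following the smolagents quickstart (class names such as HfApiModel reflect early releases and may have been renamed since; a Hugging Face token is assumed):

```python
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

# The agent writes and executes short Python snippets that call its tools.
agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=HfApiModel())
answer = agent.run("How many seconds does light take to travel from the Sun to Earth?")
print(answer)
```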
FlashInfer: Kernel Library for LLM Serving
Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls)
Sky-T1: Train your own o1-preview-class reasoning model for under $450
Accessible large language models via k-bit quantization for PyTorch.
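A sketch of the common entry point, the bitsandbytes integration in Hugging Face transformers (the model name is only an example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization with bfloat16 compute, a typical QLoRA-style setup.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-1.3b", quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
```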
Make websites accessible for AI agents
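A sketch along the lines of the browser-use quickstart; it assumes browser-use and langchain-openai are installed and OPENAI_API_KEY is set, and the task string is an example:

```python
import asyncio

from browser_use import Agent
from langchain_openai import ChatOpenAI

async def main() -> None:
    # The agent drives a real browser session to complete the task.
    agent = Agent(
        task="Find the release notes for the latest stable Python version",
        llm=ChatOpenAI(model="gpt-4o"),
    )
    await agent.run()

asyncio.run(main())
```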
Cache-Augmented Generation: A Simple, Efficient Alternative to RAG
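A concept sketch of the idea (not the repository's exact code): encode the knowledge context once, then reuse its KV cache for every question instead of retrieving per query. Cache types vary across transformers versions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

# Build the KV cache for the knowledge context once, up front.
knowledge = "Context: The Eiffel Tower is 330 meters tall.\n"
k_ids = tok(knowledge, return_tensors="pt").input_ids
with torch.no_grad():
    cache = model(k_ids, use_cache=True).past_key_values

# Answer a question by extending the cached prefix; no retrieval step.
# (To serve many queries, copy the cache first, since generate mutates it.)
q_ids = tok("Q: How tall is the Eiffel Tower?\nA:", return_tensors="pt").input_ids
full_ids = torch.cat([k_ids, q_ids], dim=-1)
out = model.generate(full_ids, past_key_values=cache, max_new_tokens=10)
print(tok.decode(out[0, full_ids.shape[-1]:]))
```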
Official repository of the AWS EC2 FPGA Hardware and Software Development Kit
Vitis AI is Xilinx’s development stack for AI inference on Xilinx hardware platforms, including both edge devices and Alveo cards.
OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
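A minimal sketch of the runtime API (the 2023.x-style openvino package; "model.xml" stands in for a converted IR model):

```python
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")             # OpenVINO IR (xml + bin)
compiled = core.compile_model(model, device_name="CPU")

# One inference on dummy data shaped like the model's first input.
x = np.random.rand(*compiled.input(0).shape).astype(np.float32)
y = compiled(x)[compiled.output(0)]
print(y.shape)
```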
Code and other material for the book "Deep Learning and the Game of Go"
NoMIRACL: a multilingual hallucination evaluation dataset for measuring LLM robustness in RAG against first-stage retrieval errors across 18 languages.
A large-scale multilingual dataset for information retrieval, with thorough human annotations across 18 diverse languages.
MTEB: Massive Text Embedding Benchmark
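Running a single task against a sentence-transformers model, per the MTEB README (task and model names are examples):

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
evaluation = MTEB(tasks=["Banking77Classification"])
# Scores are also written as JSON under the output folder.
results = evaluation.run(model, output_folder="results/all-MiniLM-L6-v2")
```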
Finetune Llama 3.3, Mistral, Phi-4, Qwen 2.5 & Gemma LLMs 2-5x faster with 70% less memory
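A sketch of the usual setup, with model name and LoRA hyperparameters illustrative, following the project's quickstart:

```python
from unsloth import FastLanguageModel

# Load a 4-bit base model with Unsloth's patched fast kernels.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
# Attach LoRA adapters; the result can be passed to a trl SFTTrainer.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)
```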
An MNIST-like fashion product database for benchmarking machine learning models.
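One common way to consume the dataset is through torchvision (the repo also ships raw files and its own loader):

```python
from torchvision import datasets, transforms

train = datasets.FashionMNIST(
    root="data", train=True, download=True, transform=transforms.ToTensor()
)
image, label = train[0]
print(image.shape, train.classes[label])  # torch.Size([1, 28, 28]) 'Ankle boot'
```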
Chat with your current directory's files using a local or API LLM.
Boa: The language you didn't know you needed, but here we are.
The simplest, fastest repository for training/finetuning medium-sized GPTs.
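In the spirit of what such a repository trains (this is not its actual code): a single optimization step of a tiny character-level causal transformer in plain PyTorch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
vocab, block, d = 65, 32, 64  # char vocab size, context length, width

class TinyGPT(nn.Module):
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(vocab, d)
        self.pos = nn.Embedding(block, d)
        layer = nn.TransformerEncoderLayer(d, nhead=4, dim_feedforward=4 * d,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d, vocab)

    def forward(self, idx):
        t = idx.shape[1]
        x = self.tok(idx) + self.pos(torch.arange(t, device=idx.device))
        mask = nn.Transformer.generate_square_subsequent_mask(t)  # causal mask
        return self.head(self.blocks(x, mask=mask, is_causal=True))

model = TinyGPT()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
x = torch.randint(vocab, (8, block))   # a batch of random "characters"
y = torch.roll(x, -1, dims=1)          # next-token targets (wraps at the end)
loss = F.cross_entropy(model(x).reshape(-1, vocab), y.reshape(-1))
loss.backward()
opt.step()
print(float(loss))
```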