Stars
GitHub Pages template for academic personal websites, forked from mmistakes/minimal-mistakes
Lightweight Armoury Crate alternative for Asus laptops and ROG Ally. Control tool for ROG Zephyrus G14, G15, G16, M16, Flow X13, Flow X16, TUF, Strix, Scar and other models
Unlock your displays on your Mac! Flexible HiDPI scaling, XDR/HDR extra brightness, virtual screens, DDC control, extra dimming, PIP/streaming, EDID override and lots more!
Tile primitives for speedy kernels
A book about how to write OS kernels in Rust the easy way.
Let's write an OS which can run on RISC-V in Rust from scratch!
Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM.
A generative world for general-purpose robotics & embodied AI learning.
Tensors and Dynamic neural networks in Python with strong GPU acceleration
An Open Source Machine Learning Framework for Everyone
A collection of resources and papers on Diffusion Models
Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"
Curated list of papers and resources focused on 3D Gaussian Splatting, intended to keep pace with the anticipated surge of research in the coming months.
Unsupervised text tokenizer for Neural Network-based text generation.
Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization.
tiktoken is a fast BPE tokeniser for use with OpenAI's models.
A curated collection of open-source Chinese large language models, focusing on smaller models that can be privately deployed at low training cost, covering base models, vertical-domain fine-tuning and applications, datasets, tutorials, and more.
😎 A curated list of awesome GitHub profiles that update in real time
This repository will help you create a more beautiful and appealing GitHub profile, and gives you access to a comprehensive range of tools and tutorials for beautifying your GitHub profile.
🚀🚀 Train a 26M-parameter GPT completely from scratch in just 3 hours!
The simplest implementation of recent Sparse Attention patterns for efficient LLM inference.
Fast and memory-efficient exact attention
Transformer related optimization, including BERT, GPT
A high-throughput and memory-efficient inference and serving engine for LLMs
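
Several of the tokenizer entries above revolve around Byte Pair Encoding. As a rough, self-contained sketch of the core idea (not code from any of those repositories; every name below is illustrative), training reduces to repeatedly merging the most frequent adjacent token pair:

```python
# Minimal BPE training sketch in the spirit of the tokenizer repos above.
# Illustrative only: function names and the toy corpus are assumptions.
from collections import Counter

def most_common_pair(ids):
    """Count adjacent id pairs and return the most frequent one."""
    pairs = Counter(zip(ids, ids[1:]))
    return max(pairs, key=pairs.get) if pairs else None

def merge(ids, pair, new_id):
    """Replace every occurrence of `pair` in `ids` with `new_id`."""
    out, i = [], 0
    while i < len(ids):
        if i + 1 < len(ids) and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

def train_bpe(text, num_merges):
    """Learn `num_merges` merge rules over the UTF-8 bytes of `text`."""
    ids = list(text.encode("utf-8"))   # start from raw bytes (ids 0..255)
    merges = {}
    for step in range(num_merges):
        pair = most_common_pair(ids)
        if pair is None:               # nothing left to merge
            break
        new_id = 256 + step            # each merge mints a new token id
        merges[pair] = new_id
        ids = merge(ids, pair, new_id)
    return merges

merges = train_bpe("low lower lowest low low", num_merges=5)
print(merges)  # learned merge rules, e.g. {(108, 111): 256, (256, 119): 257, ...}
```

Starting from raw bytes means no input is ever out-of-vocabulary; production tokenizers add regex pre-splitting, special tokens, and much faster merge bookkeeping on top of this loop.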