Stars
Distributed training for AI
5 repositories
DeDLOC — Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021)
petals — 🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading. (See the inference sketch after this list.)
FlexGen — Running large language models on a single GPU for throughput-oriented scenarios.
hivemind — Decentralized deep learning in PyTorch. Built to train models on thousands of volunteers across the world. (See the training sketch after this list.)
DeepSpeed — A deep learning optimization library that makes distributed training and inference easy, efficient, and effective. (See the sketch after this list.)
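To make the petals entry concrete, here is a minimal sketch of distributed inference over a Petals swarm, based on the library's documented `AutoDistributedModelForCausalLM` API. The model name, prompt, and generation length are illustrative assumptions; any model served by the swarm could be substituted.

```python
# Minimal Petals inference sketch: the tokenizer and embeddings run locally,
# while transformer blocks are executed by volunteer servers in the swarm.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # assumption: an illustrative swarm-hosted model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A quick test of distributed inference:", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=16)  # each step traverses remote blocks
print(tokenizer.decode(outputs[0]))
```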
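For the hivemind entry, the sketch below follows the library's quickstart pattern: a local optimizer is wrapped in `hivemind.Optimizer`, which averages updates with other peers over a DHT. The `run_id`, batch sizes, and toy model are illustrative assumptions, not part of the repository description above.

```python
# Minimal hivemind collaborative-training sketch.
import torch
import hivemind

# Start a DHT node; other peers would join by passing initial_peers=[...].
dht = hivemind.DHT(start=True)

model = torch.nn.Linear(16, 2)
base_opt = torch.optim.SGD(model.parameters(), lr=0.01)

# Peers accumulate gradients until the swarm reaches target_batch_size,
# then parameters/updates are averaged across the network.
opt = hivemind.Optimizer(
    dht=dht,
    run_id="demo_run",        # assumption: shared identifier for all peers in this run
    batch_size_per_step=4,    # samples each local opt.step() contributes
    target_batch_size=256,    # global batch size that triggers averaging
    optimizer=base_opt,
    use_local_updates=True,
    matchmaking_time=3.0,
    verbose=True,
)

for _ in range(10):
    x = torch.randn(4, 16)
    loss = model(x).pow(2).mean()
    loss.backward()
    opt.step()
    opt.zero_grad()
```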
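And for the DeepSpeed entry, a minimal sketch of wrapping a PyTorch model with `deepspeed.initialize`, which returns an engine that owns the optimizer, gradient accumulation, and distributed communication. The toy model, batch size, and ZeRO stage in `ds_config` are illustrative assumptions; a real run would launch via the `deepspeed` CLI (e.g. `deepspeed train.py`).

```python
# Minimal DeepSpeed training sketch.
import torch
import deepspeed

model = torch.nn.Linear(16, 2)

ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
    "zero_optimization": {"stage": 1},  # partition optimizer states across GPUs
}

engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

for _ in range(10):
    x = torch.randn(4, 16).to(engine.device)
    loss = engine(x).pow(2).mean()
    engine.backward(loss)  # engine handles loss scaling and gradient allreduce
    engine.step()          # optimizer step, plus LR schedule if configured
```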