English | 简体中文
A curated list of DeepSeek resources, including cutting-edge AI models, developer tools, research papers, and community projects. Maintained by the community, for the community.
## Contents

- Official Resources
- Models
- Technical Reports
- Tools and Applications
- Prompts
- Tutorials and Documentation
- Community Projects
- Other Awesome Lists
- How to Contribute
- License
- Acknowledgments
## Official Resources

- 🌐 DeepSeek Website - Official portal for product updates and research
- 💻 DeepSeek GitHub - Official code repositories
- 🤖 DeepSeek Chat - Live demo of conversational AI
- 📱 DeepSeek App - Mobile application for iOS and Android
- 📄 API Documentation - Official API reference
- 💰 API Pricing - Detailed pricing information for API usage
- 📊 Service Status - Real-time service status monitoring
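The API entries above follow DeepSeek's OpenAI-compatible chat-completions format. A minimal request-building sketch in Python, with endpoint and model name as given in the official API documentation; `YOUR_API_KEY` is a placeholder, and the example only assembles the request rather than sending it:

```python
import json

# Endpoint and default model name per the official API documentation;
# replace YOUR_API_KEY with a real key from the DeepSeek platform.
API_URL = "https://api.deepseek.com/chat/completions"
API_KEY = "YOUR_API_KEY"

def build_chat_request(user_message, model="deepseek-chat"):
    """Assemble headers and an OpenAI-compatible chat-completions payload."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "stream": False,
    }
    return headers, json.dumps(payload)

headers, body = build_chat_request("Hello, DeepSeek!")
# Send with any HTTP client, e.g.:
#   requests.post(API_URL, headers=headers, data=body)
```

See the API Documentation and API Pricing links above for the full parameter list and per-token rates.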
## Models

- 🧠 DeepSeek-LLM - 7B/67B parameter general-purpose LLM
- 🚀 DeepSeek-V2 - MoE model with 236B total params (21B activated), 128k context
- ⚡ DeepSeek-V3 - MoE model with 671B total params (37B activated), 128k context
- 🔬 DeepSeek-R1 - RL-enhanced model with 60.9% on MATH, 128k context
- 💻 DeepSeek-Coder: Let the Code Write Itself - Introduces the first version of DeepSeek Coder model
- 🔧 DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence - Presents an open-source MoE code language model
- 🛠️ DeepSeek-MoE - Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models
- 🖼️ DeepSeek-VL - Vision-language model for real-world understanding (1.3B/7B), 4096 context length
- 🎨 Janus - Unified Multimodal Understanding and Generation Models
- 🌌 DreamCraft3D - Official implementation of hierarchical 3D generation with bootstrapped diffusion prior
- 🧮 DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models - Introduces a 7B math specialist model achieving 51.7% on MATH benchmark
- ⚙️ ESFT - Expert Specialized Fine-Tuning
## Community Projects

- 🔗 awesome-deepseek-integration - A curated list of DeepSeek integration resources
- 🛠️ awesome-deepseek-coder - A curated list of open-source projects related to DeepSeek Coder
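The MoE entries above quote both total and activated parameter counts: in a Mixture-of-Experts model only a subset of experts runs per token, so the activated count, not the total, drives per-token compute. A quick sanity check using the figures quoted above:

```python
# Total vs. per-token activated parameters, as quoted in the model list above.
moe_models = {
    "DeepSeek-V2": {"total_b": 236, "activated_b": 21},
    "DeepSeek-V3": {"total_b": 671, "activated_b": 37},
}

def activation_ratio(total_b, activated_b):
    """Fraction of parameters that participate in a single forward pass."""
    return activated_b / total_b

for name, p in moe_models.items():
    ratio = activation_ratio(p["total_b"], p["activated_b"])
    print(f"{name}: {p['activated_b']}B of {p['total_b']}B active (~{ratio:.1%} per token)")
```

Both models activate well under 10% of their weights per token, which is why a 671B-parameter model can serve at the cost of a much smaller dense one.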
## Technical Reports

- 📜 DeepSeek LLM: Scaling Open-Source Language Models with Longtermism - Introduces the training methodology and innovations of DeepSeek base models
- 🚀 DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model - Presents the architecture and training strategies of DeepSeek-V2
- ⚡ DeepSeek-V3 Technical Report - Details the MoE architecture with 671B total parameters
- 🔬 DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning - Introduces reinforcement learning approach for reasoning capabilities
- 💻 DeepSeek-Coder: When the Large Language Model Meets Programming - In-depth exploration of DeepSeek Coder's technical details and performance
- 🔧 DeepSeek-Coder-V2: Breaking the Barrier of Closed-Source Models in Code Intelligence - Presents an open-source MoE code language model
- 🧮 DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models - Introduces a 7B math specialist model achieving 51.7% on MATH benchmark
## Tools and Applications

- 📚 DeepSeek Prompt Library - Collection of curated prompts and examples
- ⚡ API Quickstart - Get started in 5 minutes
- 🤗 Hugging Face Models - Model hub integration
- 🧪 Colab Notebooks - Free experimentation
## Prompts

- 📚 Official Prompt Library - Curated collection of prompts and examples

## Tutorials and Documentation

- 🚀 Quickstart Guide - First steps with DeepSeek
- 📚 Full Documentation - Comprehensive technical reference
- 🤗 Hugging Face Models - Model documentation and demos
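The Hugging Face entries above point to the open-weight checkpoints. A minimal loading sketch with the `transformers` library; the repo ids below are taken from the public `deepseek-ai` Hub organization, but verify the current names on the Hub before use:

```python
# Example model repositories on the Hugging Face Hub; check
# huggingface.co/deepseek-ai for the full, current set of checkpoints.
DEEPSEEK_HUB_REPOS = {
    "chat": "deepseek-ai/deepseek-llm-7b-chat",
    "coder": "deepseek-ai/deepseek-coder-6.7b-instruct",
    "math": "deepseek-ai/deepseek-math-7b-instruct",
}

def load_deepseek(task="chat"):
    """Load a tokenizer/model pair with transformers (downloads weights)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # pip install transformers
    repo = DEEPSEEK_HUB_REPOS[task]
    tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)
    return tokenizer, model

# Usage (requires several GB of disk and, realistically, a GPU):
#   tokenizer, model = load_deepseek("coder")
```

For quick experiments without local hardware, the Colab notebooks listed above offer a free starting point.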
## Other Awesome Lists

- 🌟 Awesome LLM - Large Language Models resources
- 🔓 Awesome Open LLM - Open-source LLMs
- 🌟 Awesome-Chinese-LLM - A curated list of Chinese LLMs, focusing on small-scale, privately deployable models with low training costs
- 🛡️ Awesome LLM Security - Security best practices
## How to Contribute

Your contributions are welcome! Please:
- Read our Contribution Guidelines
- Ensure links are relevant and functional
- Add concise descriptions (50-150 characters)
- Follow the Code of Conduct
## Acknowledgments

Special thanks to:
- DeepSeek engineering team for open-source contributions
- @realShineHuang for maintaining this list
- All contributors (view full list)