Starred repositories
Hangzhou Dianzi University thesis LaTeX template / LaTeX class for bachelor and MPhil theses at Hangzhou Dianzi University. Also available on Gitee: https://gitee.com/myhsia/hduthesis
Apache Arrow is the universal columnar format and multi-language toolbox for fast data interchange and in-memory analytics
🌵 A responsive, clean and simple theme for Hexo.
The world’s fastest framework for building websites.
A cloud-native vector database, storage for next generation AI applications
An industrial-grade C++ implementation of the RAFT consensus algorithm based on brpc, widely used inside Baidu to build highly available distributed systems.
Cinemagoer is a Python package for retrieving and managing data from the IMDb movie database (with which we are not affiliated in any way) about movies, people, characters, and companies.
oneAPI Threading Building Blocks (oneTBB)
Collection of experiments to carve out the differences between two types of relational query processing engines: vectorizing (interpretation-based) engines and compiling engines.
DuckDB is an analytical in-process SQL database management system
C++ implementation of a fast hash map and hash set using hopscotch hashing
A tutorial for building an LSM-Tree storage engine in a week.
LevelDB is a fast key-value storage library written at Google that provides an ordered mapping from string keys to string values.
MIT 6.824: Distributed Systems - course notes, paper summaries, and lab implementation ideas (Spring 2020)
A course to build distributed key-value service based on TiKV model
My solution for MIT 6.5840 (a.k.a. MIT 6.824). No failures across 30,000 test runs.
OpenMMLab Pre-training Toolbox and Benchmark
Code samples for C++ Concurrency in Action
📖 A Chinese translation of C++ Concurrency in Action, Second Edition.
C++11/14/17/20 multithreading, covering operating system principles and concurrent programming techniques.
UNICEF Inventory theme, for use with Hugo static site generator. A simple knowledgebase to share information with others.
Deep Learning from Scratch (『ゼロから作る Deep Learning』, O'Reilly Japan, 2016)
Letta (formerly MemGPT) is a framework for creating LLM services with memory.