Starred repositories
Repository hosting code for "Actions Speak Louder than Words: Trillion-Parameter Sequential Transducers for Generative Recommendations" (https://arxiv.org/abs/2402.17152).
A high-performance framework for training wide-and-deep recommender systems on heterogeneous clusters
Deep Pray (深度祈祷): LEGO for deep learning, making AI easier, faster, and cheaper 👻
The source code for our paper "Scenario-Adaptive Feature Interaction for Click-Through Rate Prediction" (accepted at the KDD 2023 Applied Science Track), which proposes a model for Multi-Scenario/Multi-…
A simple and efficient Mamba implementation in pure PyTorch and MLX.
Simple, minimal implementation of the Mamba SSM in one file of PyTorch.
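For orientation, here is a sequential reference sketch of the discretized recurrence such minimal implementations compute, h_t = Ā_t h_{t-1} + B̄_t x_t, y_t = C_t · h_t, with the diagonal Ā applied elementwise as in Mamba. Function and shape names are illustrative, not taken from either repo:

```python
import torch

def ssm_scan_reference(A_bar, Bx_bar, C):
    """Sequential reference for h_t = A_bar_t * h_{t-1} + Bx_bar_t, y_t = C_t . h_t.
    A_bar, Bx_bar: (batch, length, d_inner, d_state); C: (batch, length, d_state).
    Bx_bar is the precomputed product B_bar_t * x_t; A_bar is diagonal, so the
    state update is elementwise. Names and shapes are illustrative."""
    b, l, d, n = A_bar.shape
    h = A_bar.new_zeros(b, d, n)
    ys = []
    for t in range(l):
        h = A_bar[:, t] * h + Bx_bar[:, t]
        ys.append(torch.einsum("bdn,bn->bd", h, C[:, t]))
    return torch.stack(ys, dim=1)  # (batch, length, d_inner)
```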
PeaBrane / mamba-tiny
Forked from johnma2006/mamba-minimal. Simple, minimal implementation of the Mamba SSM in one PyTorch file, using logcumsumexp (Heisen sequence).
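The "Heisen sequence" in that fork refers to Heinsen's log-cumsum-exp trick for evaluating the linear recurrence x_t = a_t x_{t-1} + b_t in parallel. A minimal sketch, assuming positive a_t and b_t so everything stays in real log space (Mamba's Ā = exp(ΔA) is positive by construction); the function name is mine, not the fork's:

```python
import torch

def heinsen_scan(log_a, log_b):
    """Parallel form of x_t = a_t * x_{t-1} + b_t with x_0 = 0, computed in log
    space via cumsum + logcumsumexp; requires a_t, b_t > 0. Inputs: (..., T)."""
    a_star = torch.cumsum(log_a, dim=-1)               # log prod_{s<=t} a_s
    tail = torch.logcumsumexp(log_b - a_star, dim=-1)  # log sum_s b_s / prod a
    return torch.exp(a_star + tail)

# sanity check against the plain sequential recurrence
a, b = torch.rand(8) + 0.1, torch.rand(8) + 0.1
x, xs = torch.tensor(0.0), []
for t in range(8):
    x = a[t] * x + b[t]
    xs.append(x)
assert torch.allclose(heinsen_scan(a.log(), b.log()), torch.stack(xs), atol=1e-5)
```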
Official PyTorch Implementation of "The Hidden Attention of Mamba Models"
A PyTorch implementation of the Attention Free Transformer, which replaces the self-attention mechanism entirely.
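For reference, the AFT-full operation from the paper pools values with learned pairwise position biases instead of attention scores. A naive O(T²·d) sketch, with tensor names of my choosing rather than this repo's API:

```python
import torch

def aft_full(q, k, v, w):
    """AFT-full (Zhai et al., 2021):
        Y_t = sigmoid(Q_t) * sum_s softmax_s(K_s + w[t, s]) * V_s
    q, k, v: (batch, T, dim); w: (T, T) learned pairwise position biases.
    Naive O(T^2 * d) memory; fine as a reference, not for long sequences."""
    logits = k.unsqueeze(1) + w.unsqueeze(0).unsqueeze(-1)  # (batch, T, T, dim)
    weights = torch.softmax(logits, dim=2)                  # normalize over source s
    return torch.sigmoid(q) * torch.einsum("btsd,bsd->btd", weights, v)

out = aft_full(*(torch.randn(2, 16, 32) for _ in range(3)), torch.randn(16, 16))
```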
A modular, high-performance, and simple Mamba implementation for high-speed applications
[RelKD'24] Mamba4Rec: Towards Efficient Sequential Recommendation with Selective State Space Models
Uncovering Selective State Space Model's Capabilities in Lifelong Sequential Recommendation
Study notes on the main algorithms in recommendation and advertising, covering sequential recommendation, multi-task recommendation, cross-domain recommendation, cold start, and related directions.
A collection of industry practice articles on search, recommendation, advertising, user growth, and more (sources: Zhihu, DataFunTalk, tech WeChat official accounts)
Implementation of the conditionally routed attention from the CoLT5 architecture, in PyTorch
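As a rough illustration of the idea (not this repo's actual API): a CoLT5-style router sends every token through a cheap branch and only the top-k scored tokens through an expensive one, gating the heavy output by the router score so routing stays differentiable. `light` and `heavy` here are placeholders, e.g. a narrow and a wide feed-forward block:

```python
import torch
import torch.nn as nn

class ConditionalRouting(nn.Module):
    """Simplified sketch of CoLT5-style conditional computation: every token
    passes through a cheap light branch, and only the top-k router-scored
    tokens also pass through an expensive heavy branch."""
    def __init__(self, dim, k, light, heavy):
        super().__init__()
        self.router = nn.Linear(dim, 1, bias=False)
        self.k, self.light, self.heavy = k, light, heavy

    def forward(self, x):                        # x: (batch, length, dim)
        scores = self.router(x).squeeze(-1)      # (batch, length)
        topk = scores.topk(self.k, dim=-1).indices
        idx = topk.unsqueeze(-1).expand(-1, -1, x.size(-1))
        picked = torch.gather(x, 1, idx)         # (batch, k, dim)
        # gate heavy output with the router score: keeps routing differentiable
        gate = torch.sigmoid(torch.gather(scores, 1, topk)).unsqueeze(-1)
        heavy_out = self.heavy(picked) * gate
        return self.light(x).scatter_add(1, idx, heavy_out)

# usage with illustrative light/heavy blocks
block = ConditionalRouting(dim=64, k=8,
                           light=nn.Linear(64, 64),
                           heavy=nn.Sequential(nn.Linear(64, 256), nn.GELU(),
                                               nn.Linear(256, 64)))
out = block(torch.randn(2, 32, 64))              # (2, 32, 64)
```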
[CAAI AIR'24] Minimize Quantization Output Error with Bias Compensation
PyTorch implementation of CURL: Neural Network Pruning with Residual-Connections and Limited-Data
Quantization library for PyTorch. Supports low-precision and mixed-precision quantization, with hardware implementation through TVM.
PyTorch implementation of "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference"
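That paper (Jacob et al., 2018) represents real values with the affine map r ≈ S(q − Z). A minimal sketch of the quantize/dequantize pair with per-tensor min/max calibration; the helper names are illustrative, not this repo's API:

```python
import torch

def quantize(x, scale, zero_point, qmin=0, qmax=255):
    """Affine uint8 quantization, q = clamp(round(x / S) + Z, qmin, qmax),
    so that x ~= S * (q - Z) as in Jacob et al., 2018."""
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
    return q.to(torch.uint8)

def dequantize(q, scale, zero_point):
    return scale * (q.to(torch.float32) - zero_point)

# per-tensor min/max calibration (illustrative)
x = torch.randn(4, 4)
scale = (x.max() - x.min()) / 255.0
zero_point = torch.round(-x.min() / scale).clamp(0, 255)
x_hat = dequantize(quantize(x, scale, zero_point), scale, zero_point)
print((x - x_hat).abs().max())  # reconstruction error is at most ~scale / 2
```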
Papers on CNNs, object detection, keypoint detection, semantic segmentation, medical image processing, SLAM, etc.
Prune a model while finetuning or training.
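PyTorch itself ships utilities for exactly this pattern. A minimal sketch using torch.nn.utils.prune (the standard API, not necessarily this repo's own): the mask is applied on every forward pass, so fine-tuning proceeds under the mask until prune.remove folds it into the weights:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# L1-magnitude-prune 30% of each Linear layer's weights; the mask is applied
# on every forward pass, so training/fine-tuning continues under the mask
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)

# ... fine-tune as usual; masked weights stay zero and receive zero gradient ...

# afterwards, fold the masks into the weight tensors permanently
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")
```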