Stars
Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools so that you can focus on what matters.
This repo is the official implementation of "Linear Video Transformer with Feature Fixation".
This repo is the official implementation of "Neural Architecture Search on Efficient Transformers and Beyond".
[CVPR 2023] Official implementation of our paper - Learning Audio-Visual Source Localization via False Negative Aware Contrastive Learning
[ICLR 2023] Official implementation of TNN in our ICLR 2023 paper - Toeplitz Neural Network for Sequence Modeling
[EMNLP 2022] Official implementation of Transnormer in our EMNLP 2022 paper - The Devil in Linear Transformer
Official repository of NeuMan: Neural Human Radiance Field from a Single Video (ECCV 2022)
[ICLR 2022] Official implementation of cosformer-attention in cosFormer: Rethinking Softmax in Attention
[ECCV 2022] & [IJCV 2024] Official implementation of the paper: Audio-Visual Segmentation (with Semantics)
[CVPR 2021] Deep Two-View Structure-from-Motion Revisited
[NeurIPS 2020] Displacement-Invariant Matching Cost Learning for Accurate Optical Flow Estimation
Hierarchical Neural Architecture Search for Deep Stereo Matching (NeurIPS 2020)
A website for personal collection and previewing of LaTeX templates, built with Python/Jinja2.
Noise-Aware Unsupervised Deep Lidar-Stereo Fusion (CVPR 2019)
Unsupervised Deep Epipolar Flow for Stationary or Dynamic Scenes
Tutorials and implementations for "Self-normalizing networks"
Compare SELUs (scaled exponential linear units) with other activations on MNIST, CIFAR10, etc.
Unsupervised single image depth prediction with CNNs
TensorFlow-based neural network library
Implementation of recent Deep Learning papers