- Imperial College London
- London, UK
- https://lsying009.github.io/
Stars
chenzhik / VisualizerX
Forked from luo3300612/Visualizer. Assistant tools for attention visualization in deep learning
[CVPR 2021] Official PyTorch implementation for Transformer Interpretability Beyond Attention Visualization, a novel method to visualize classifications by Transformer based networks.
BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
Assistant tools for attention visualization in deep learning
Spikingformer: Spike-driven Residual Learning for Transformer-based Spiking Neural Network
[CVPR 2024] Unified Multi-Sensor Tracker With One Parameter Set
Paper list for single object tracking (State-of-the-art SOT trackers)
OpenMMLab Video Perception Toolbox. It supports Video Object Detection (VID), Multiple Object Tracking (MOT), Single Object Tracking (SOT), Video Instance Segmentation (VIS) with a unified framework.
Resources for Multiple Object Tracking (MOT)
Advancing Spiking Neural Networks towards Deep Residual Learning
A large-scale benchmark dataset for color-event based visual tracking
Brain-inspired Many-core Architecture Exploration platform
Paper list for SNN based computer vision tasks.
Implementation of "A Hybrid ANN-SNN Architecture for Low-Power and Low-Latency Visual Perception". CVPRW 2024
A programming framework based on PyTorch for hybrid neural networks with automatic quantization
Official PyTorch implementation for video neural representation (NeRV)
Official LaTeX templates employing the Imperial College London brand.
Official implementation of "Implicit Neural Representations with Periodic Activation Functions"
CISTA-Flow network for events-to-video reconstruction
CVPR23, Learning Spatial-Temporal Implicit Neural Representations for Event-Guided Video Super-Resolution
lsying009 / CISTA-EVREAL
Forked from ercanburak/EVREAL. CISTA-EVREAL: EVREAL with the family of CISTA networks for Event-based Video Reconstruction
pld-group / V2E2V
Forked from lsying009/V2E2V. Video-to-events-to-video framework, including CISTA-LSTC/CISTA-TC reconstruction networks
Video-to-events (V2E) generation to create training data for events-to-video (E2V) reconstruction
Video-to-events-to-video framework, including CISTA-LSTC/CISTA-TC reconstruction networks
HyperE2VID: Improving Event-Based Video Reconstruction via Hypernetworks (IEEE Transactions on Image Processing, 2024)
The official implementation of "Secrets of Event-based Optical Flow" (ECCV2022 Oral and IEEE T-PAMI 2024)
Official code for First-Spike (FS) coding of spiking neural networks
Learning Dense and Continuous Optical Flow from an Event Camera (TIP 2022)