wenhao7841/diffusion_model_on_dynamic_context_generation

A paper list on dynamic context generation, including GIF generation, video generation, etc.


paper survey

base model

  • [VDM] Video Diffusion Models

    (2022.04.07) NeurIPS 2022 Google

    [page] [paper] [code]

  • MCVD: Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation

    (2022.05.19) NeurIPS 2022

    [page] [paper] [code]

  • [FDM] Flexible Diffusion Modeling of Long Videos

    (2022.05.23) arXiv

    [page] [paper] [code]

  • CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers

    (2022.05.29) ICLR 2023 THU

    [page] [paper] [code]

  • Make-A-Video: Text-to-Video Generation without Text-Video Data

    (2022.09.29) ICLR 2023 Meta

    [page] [paper] [code]

  • Imagen Video: High Definition Video Generation with Diffusion Models

    (2022.10.05) arXiv Google

    [page] [paper] [code]

  • MagicVideo: Efficient Video Generation With Latent Diffusion Models

    (2022.11.20) arXiv ByteDance

    [page] [paper] [code]

  • LVDM: Latent Video Diffusion Models for High-Fidelity Long Video Generation

    (2022.11.23) arXiv HKUST Tencent

    [page] [paper] [code]

  • [PVDM] Video Probabilistic Diffusion Models in Projected Latent Space

    (2023.02.15) CVPR 2023 Google

    [page] [paper] [code]

  • VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation

    (2023.03.15) CVPR 2023 Alibaba UCAS

    [page] [paper] [code]

video editing

  • Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation

    (2022.12.22) arXiv Tencent(PCG) NUS

    [page] [paper] [code]

  • Shape-aware Text-driven Layered Video Editing

    (2023.01.30) CVPR 2023 University of Maryland

    [page] [paper] [code]

  • Gen-1: Structure and Content-Guided Video Synthesis with Diffusion Models

    (2023.02.06) arXiv Runway

    [page] [paper] [code]

  • Video-P2P: Video Editing with Cross-attention Control

    (2023.03.08) arXiv Adobe CUHK

    [page] [paper] [code]

  • FateZero: Fusing Attentions for Zero-shot Text-based Video Editing

    (2023.03.16) ICCV 2023 Tencent(AI Lab) HKUST

    [page] [paper] [code]

  • Pix2Video: Video Editing Using Image Diffusion

    (2023.03.22) ICCV 2023 Adobe

    [page] [paper] [code]

  • Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators

    (2023.03.23) arXiv Picsart

    [page] [paper] [code]

  • vid2vid-zero: Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models

    (2023.03.30) arXiv BAAI ZJU

    [page] [paper] [code]

  • ControlVideo: Training-free Controllable Text-to-Video Generation

    (2023.05.22) arXiv Huawei HIT

    [page] [paper] [code]

  • ControlVideo: Adding Conditional Control for One Shot Text-to-Video Editing

    (2023.05.26) arXiv Tsinghua

    [page] [paper] [code]

  • Gen-L-Video: Multi-Text to Long Video Generation via Temporal Co-Denoising

    (2023.05.29) arXiv Shanghai AI Lab CUHK(mmLab)

    [page] [paper] [code]

  • Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation

    (2023.06.13) arXiv NTU

    [page] [paper] [code]

  • VidEdit: Zero-shot and Spatially Aware Text-driven Video Editing

    (2023.06.14) arXiv Paris, France

    [page] [paper] [code]

  • TokenFlow: Consistent Diffusion Features for Consistent Video Editing

    (2023.07.19) arXiv Weizmann Institute of Science

    [page] [paper] [code coming]

  • VideoControlNet: A Motion-Guided Video-to-Video Translation Framework by Using Diffusion Model with ControlNet

    (2023.07.26) arXiv Beihang University, University of Hong Kong

    [page] [paper] [code coming]
