mayankgrwl97/awesome-gans


Awesome GANs

Courses

Conference Workshops

Tutorials and Blogs

Training GANs with Limited Data

GAN Architectures (StyleGAN, ProgressiveGAN types)

GAN Editing

GAN Inversion: Inverting Real Faces to Latent Code (Image2StyleGAN types)

Flow based networks (Invertible by design)

Adding encoder to GAN generator (Reconstructions are not good)

  • ECCV 2020: In-Domain GAN Inversion for Real Image Editing
    • Comments: Novel encoder training for GAN inversion. Introduces an adversarial loss for training the encoder, which is trained on real images. Resolves the out-of-domain inversion problem of Image2StyleGAN through encoder-constrained optimization, i.e. minimizing the distance between the encoder-predicted latent code and the optimized latent code. The loss is computed on both the reconstructed image and the predicted latent code, so the resulting latent codes can be used for image editing. Check the application: Semantic Diffusion
  • CVPR 2020: Adversarial Latent Autoencoders
  • NeurIPS 2019: BigBiGAN - Large Scale Adversarial Representation Learning
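The encoder-constrained optimization idea from In-Domain GAN Inversion can be sketched on a toy problem: minimize reconstruction error while also keeping the latent code close to what the encoder predicts for the reconstruction. This is a minimal NumPy sketch with linear stand-ins for the generator and encoder; the matrices A and B, the function name, and all hyperparameters are hypothetical, not from the paper (which uses StyleGAN and a learned CNN encoder).

```python
import numpy as np

rng = np.random.default_rng(0)
d_latent, d_image = 8, 32

# Toy linear stand-ins: generator G(z) = A @ z, encoder E(x) = B @ x.
# B is a deliberately imperfect inverse, mimicking a learned encoder.
A = rng.normal(size=(d_image, d_latent)) / np.sqrt(d_latent)
B = np.linalg.pinv(A) + 0.05 * rng.normal(size=(d_latent, d_image))

def in_domain_invert(x, lam=0.1, lr=0.05, steps=500):
    """Encoder-constrained optimization: start from E(x), then minimize
    ||G(z) - x||^2 + lam * ||E(G(z)) - z||^2 by gradient descent."""
    z = B @ x  # encoder initialization
    M = B @ A - np.eye(d_latent)  # E(G(z)) - z = M @ z in this linear toy
    for _ in range(steps):
        recon_grad = 2 * A.T @ (A @ z - x)
        reg_grad = 2 * lam * M.T @ (M @ z)
        z -= lr * (recon_grad + reg_grad)
    return z

x = A @ rng.normal(size=d_latent)  # an "in-domain" image
z_hat = in_domain_invert(x)
err_encoder = np.linalg.norm(A @ (B @ x) - x)  # encoder prediction alone
err_optimized = np.linalg.norm(A @ z_hat - x)  # after constrained optimization
```

The regularizer keeps the optimized code in the region the encoder maps to, which is what makes the resulting latent code usable for downstream editing rather than just pixel-perfect reconstruction.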

Interpretability

Require supervision in form of off-the-shelf supervised classifiers

  • ICLR 2020: On the "steerability" of generative adversarial networks
    • Comments: Explores the correspondence between latent space trajectories in GANs and simple image transformations. Dataset biases limit the extent of the transformations (e.g. a red firetruck cannot be made blue by moving in the blueness direction of the latent space). Data augmentation and jointly training the walk trajectory with the generator weights improves steerability, resulting in larger transformation effects.
  • CVPR 2020: InterFaceGAN - Interpreting the Latent Space of GANs for Semantic Face Editing
    • Comments: Similar to HiGAN below, they use off-the-shelf image classifiers (like male/female, old/young, smile/no-smile, artifacts/no-artifacts) to find semantic boundaries in the latent space. Check for metrics to measure the disentanglement of faces.
  • CVPRW 2020: HiGAN - Semantic Hierarchy Emerges in Deep Generative Representations for Scene Synthesis
    • Comments: Investigates the causality between latent space vectors and generated image attributes/semantics. For normal GANs, they use off-the-shelf image classifiers (like cloud/no-cloud, lighting/no-lighting) to find semantic boundaries in the latent space. For StyleGAN-like architectures, where stochasticity/randomness is introduced at multiple layers, they find that by perturbing input latent vectors at different layer depths, different semantics are controlled: Layout -> Objects -> Attributes -> Color Schemes
  • ICCV 2019: GANalyze: Toward Visual Definitions of Cognitive Image Properties
    • Comments: Learns a transformation in the latent space (via a Transformer network) to increase the memorability of generated images. Also check MemNet
  • ICLR 2019: GAN Dissection: Visualizing and Understanding Generative Adversarial Networks, Video
    • Comments: A framework for interpreting and labeling the internal units of the generator. Labels are assigned by measuring the correlation between each unit's feature activations and the segmentation mask of the generated image
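The boundary-based editing used by InterFaceGAN and HiGAN above can be illustrated on synthetic data: score latent codes with a classifier, fit a linear boundary, then move codes along the boundary normal. This is a hedged NumPy sketch; InterFaceGAN fits a linear SVM, while here the difference of class means serves as a cheap stand-in for the boundary normal, and the "classifier" is a made-up hidden direction.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 16, 1000

# Hypothetical setup: a hidden direction in latent space controls an
# attribute (e.g. smile), and an off-the-shelf classifier scores it.
true_dir = rng.normal(size=d)
true_dir /= np.linalg.norm(true_dir)

z = rng.normal(size=(n, d))
labels = z @ true_dir > 0  # binary attribute labels from the "classifier"

# Estimate the semantic boundary normal from labeled latent codes
# (difference of class means as a stand-in for a linear SVM's normal).
normal = z[labels].mean(axis=0) - z[~labels].mean(axis=0)
normal /= np.linalg.norm(normal)

def edit(z_code, alpha):
    """Move a latent code along the boundary normal to change the attribute."""
    return z_code + alpha * normal

z0 = rng.normal(size=d)
score_before = z0 @ true_dir
score_after = edit(z0, 5.0) @ true_dir  # attribute score increases
```

The same recipe underlies the supervised methods in this section: the classifier supplies labels, the linear fit supplies an editing direction, and editing is a walk along that direction.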

Unsupervised Attribute Discovery in GANs

Disentanglement of Variation Factors in Generative Models

Image to Image (Pix2Pix and CycleGAN types)

Improving GANs

Other Applications

Quantitative Analysis


About

Latest resources on Generative Adversarial Networks
