ECE 663 Project - Haoming, Oded, Rucha, Angikar
Please refer to the final report PDF for full details.
The project addresses security concerns in generative modeling, focusing on deep image-generative models that use invertible transformations to map the data distribution to a latent distribution whose density can be evaluated exactly, known as Normalizing Flows (NFs). We propose two distinct backdoor attacks in this setting and demonstrate their efficacy through both numerical and heuristic evaluations. In the first attack, a small proportion of the training data is deliberately poisoned by relabeling samples and adding a fixed noise trigger. The second attack manipulates the latent Gaussian distribution so that the generative model produces incorrect outputs when the trigger is present.

The project also explores mitigation strategies against the latent backdoor attack, emphasizing the importance of architectures that are robust to data poisoning. Overall, the work highlights potential vulnerabilities in generative models and offers insights into safeguarding them against adversarial attacks.
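As a rough illustration only (not the exact pipeline from the report), the sketch below shows how the two attacks could be set up in Python. `poison_dataset` injects a fixed noise trigger into a small fraction of training samples and rewrites their labels (attack 1), and `triggered_latent_sample` applies a constant mean shift to the latent Gaussian as a stand-in trigger (attack 2). The function names, poisoning fraction, trigger pattern, and mean shift are all assumptions made for this example.

```python
import numpy as np


def poison_dataset(images, labels, target_label, poison_frac=0.05,
                   trigger_std=0.1, seed=0):
    """Attack 1 (sketch): poison a small fraction of the training set by
    adding one fixed noise pattern (the trigger) and rewriting the label.

    `poison_frac`, `trigger_std`, and the use of Gaussian noise as the
    trigger are illustrative choices, not the report's exact settings.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()

    n_poison = max(1, int(poison_frac * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # One fixed noise pattern shared by every poisoned sample.
    trigger = trigger_std * rng.standard_normal(images.shape[1:])
    images[idx] = np.clip(images[idx] + trigger, 0.0, 1.0)
    labels[idx] = target_label
    return images, labels, trigger


def triggered_latent_sample(shape, shift=3.0, seed=0):
    """Attack 2 (sketch): sample from a shifted latent Gaussian so that the
    flow decodes these latents to incorrect outputs when the backdoor fires.

    A constant mean shift is a hypothetical trigger used here for clarity.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(shape)
    return z + shift


if __name__ == "__main__":
    # Stand-in data: 1000 random 32x32 RGB images with 10 class labels.
    x = np.random.rand(1000, 32, 32, 3)
    y = np.random.randint(0, 10, size=1000)

    x_poisoned, y_poisoned, trigger = poison_dataset(x, y, target_label=7)
    z_triggered = triggered_latent_sample((16, 32 * 32 * 3))
```

In practice the poisoned dataset would be used to train the NF, and the shifted latents would be passed through the trained flow's inverse map to generate the backdoored outputs; see the final report for the actual training and evaluation procedure.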