University of Melbourne
- Melbourne
Stars
Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
ChatGLM3 series: Open Bilingual Chat LLMs | Open-source bilingual dialogue language models
Implementation of Denoising Diffusion Probabilistic Models in PyTorch
arXiv LaTeX Cleaner: Easily clean the LaTeX code of your paper to submit to arXiv
Universal and Transferable Attacks on Aligned Language Models
Lumina-T2X is a unified framework for Text to Any Modality Generation
MambaOut: Do We Really Need Mamba for Vision?
PyTorch implementation of adversarial attacks [torchattacks]
RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track]
A trivial programmatic Llama 3 jailbreak. Sorry Zuck!
[ICML 2024] TrustLLM: Trustworthiness in Large Language Models
The official implementation of "Relay Diffusion: Unifying diffusion process across resolutions for image synthesis" [ICLR 2024 Spotlight]
A new adversarial purification method that uses the forward and reverse processes of diffusion models to remove adversarial perturbations (see the sketch after this list).
The official implementation of our ICLR 2024 paper "AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models".
Repository for the paper (AAAI 2024, Oral): Visual Adversarial Examples Jailbreak Large Language Models
[arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker"
[AAAI'25] Jailbreaking Large Vision-language Models via Typographic Visual Prompts
[NeurIPS 2023] Official code repo: Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability
Code for "On Adaptive Attacks to Adversarial Example Defenses"
From-scratch diffusion model implemented in PyTorch.
Demos and a series of documents for learning diffusion models.
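The adversarial-purification entry above describes its mechanism only in one sentence, so here is a minimal, hedged sketch of the general idea under standard DDPM assumptions: diffuse an adversarial image part-way with the forward process, then run the reverse process back to a clean image so small perturbations are washed out. The noise-prediction network `eps_model`, the linear schedule values, and the stopping step `t_star` are illustrative assumptions, not code taken from any of the listed repositories.

```python
# Sketch of diffusion-based adversarial purification with a pretrained DDPM.
# Assumes `eps_model(x, t)` is a hypothetical noise-prediction network.
import torch

T = 1000                                   # total diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)      # standard linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative products \bar{alpha}_t

@torch.no_grad()
def purify(x_adv: torch.Tensor, eps_model, t_star: int = 100) -> torch.Tensor:
    """Purify x_adv (shape [B, C, H, W], values roughly in [-1, 1])."""
    # Forward process: jump directly to timestep t_star in closed form.
    noise = torch.randn_like(x_adv)
    a_bar = alpha_bars[t_star - 1]
    x_t = a_bar.sqrt() * x_adv + (1.0 - a_bar).sqrt() * noise

    # Reverse process: DDPM ancestral sampling from t_star back down to 1.
    for t in reversed(range(1, t_star + 1)):
        t_idx = t - 1
        t_batch = torch.full((x_t.shape[0],), t_idx, dtype=torch.long)
        eps_hat = eps_model(x_t, t_batch)                  # predicted noise
        coef = betas[t_idx] / (1.0 - alpha_bars[t_idx]).sqrt()
        mean = (x_t - coef * eps_hat) / alphas[t_idx].sqrt()
        if t > 1:
            x_t = mean + betas[t_idx].sqrt() * torch.randn_like(x_t)
        else:
            x_t = mean
    return x_t
```

The choice of `t_star` trades off robustness against fidelity: larger values remove stronger perturbations but also destroy more of the original image content.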