Stars
Qwen2.5-VL is the multimodal large language model series developed by the Qwen team at Alibaba Cloud.
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20 via OpenAI’s APIs.
This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS2024)
Joint distribution optimal transportation for domain adaptation
PyTorch implementation of DAC-Net ("Zhongying Deng, Kaiyang Zhou, Yongxin Yang, Tao Xiang. Domain Attention Consistency for Multi-Source Domain Adaptation. BMVC 2021")
Cross-Architectural Knowledge Distillation: Multi-Scale Geometric Feature Fusion for medical imaging object detection.
Code release for Representation Subspace Distance for Domain Adaptation Regression (ICML 2021)
The source code for "Deep transfer learning for conditional shift in regression"
Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs.
[ICML 2024 Oral] Official repository of the SparseTSF paper: "SparseTSF: Modeling Long-term Time Series Forecasting with 1k Parameters". This work is developed by the Lab of Professor Weiwei Lin (l…
This is the official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturbation" (ICLR2025).
A simple GUI for OneDrive Linux client with multi-account support.
An implementation of SEAL: Safety-Enhanced Aligned LLM fine-tuning via bilevel data selection.
SMART: Submodular Data Mixture Strategy for Efficient and Effective Instruction Tuning
A deep reinforcement learning (DRL) based approach for spatial layout of land use and roads in urban communities. (Nature Computational Science)
A survey on harmful fine-tuning attacks for large language models
[ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs"
[EMNLP 24] Source code for paper 'AdaZeta: Adaptive Zeroth-Order Tensor-Train Adaption for Memory-Efficient Large Language Models Fine-Tuning'
This is the official code for the paper "Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning" (NeurIPS2024)
LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning
Code and documentation to train Stanford's Alpaca models, and generate the data.
Official PyTorch Implementation of "OwLore: Outlier-weighed Layerwise Sampled Low-Rank Projection for Memory-Efficient LLM Fine-tuning" by Pengxiang Li, Lu Yin, Xiaowei Gao, Shiwei Liu
[CVPR 2023] DepGraph: Towards Any Structural Pruning
A plug-and-play library for parameter-efficient-tuning (Delta Tuning)
This project implements Wav2Lip video lip-sync synthesis based on SadTalker. Lip movements are generated by driving a video file with audio, and a configurable enhancement of the facial region is applied to sharpen the synthesized lip (face) area. The DAIN deep-learning frame-interpolation algorithm is then used to add intermediate frames to the generated video, smoothing the lip-motion transitions between frames so the synthesized lips look more fluid, realistic, and natural.