Rutgers University
- https://www.ruixiangtang.net/
Stars
Official repo of Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics
[preprint] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA.
An overview of LLMs for cybersecurity.
TrustAgent: Towards Safe and Trustworthy LLM-based Agents
An up-to-date list of LLM watermarking papers. 🔥🔥🔥
Code for the bark-voicecloning model: training and inference.
🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
Learning to Compress Prompts with Gist Tokens - https://arxiv.org/abs/2304.08467
A curated list of practical guide resources of LLMs (LLMs Tree, Examples, Papers)
We have created a new GitHub repository. Please visit https://github.com/ynchuang/DiscoverPath for the latest updates.
Multimodal Question Answering in the Medical Domain: A Summary of Existing Datasets and Systems
This repo includes ChatGPT prompt curation to use ChatGPT better.
Human ChatGPT Comparison Corpus (HC3), Detectors, and more! 🔥
A codebase that makes differentially private training of transformers easy.
Test code for pytorch_influence_functions
This is a PyTorch reimplementation of Influence Functions from the ICML 2017 best paper: Understanding Black-box Predictions via Influence Functions by Pang Wei Koh and Percy Liang.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Fine-tuning CLIP on the ROCO dataset, which contains image-caption pairs from PubMed articles.
[MICCAI-2022] This is the official implementation of Multi-Modal Masked Autoencoders for Medical Vision-and-Language Pre-Training.
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
A collection of papers on retrieval-based (retrieval-augmented) language models.
Code for obtaining the Curation Corpus abstractive text summarisation dataset