https://orcid.org/0000-0002-7503-6783

Stars
An easy-to-use Python framework to generate adversarial jailbreak prompts.
[EMNLP 2024] Data Advisor: Dynamic Data Curation for Safety Alignment of Large Language Models
PyContinual (An Easy and Extensible Framework for Continual Learning)
An Extensible Continual Learning Framework Focused on Language Models (LMs)
Simple and efficient PyTorch-native transformer text generation in <1000 lines of Python.
Course materials (code, slides, etc.) for the Machine Learning course at Wenzhou University.
Machine Learning Journal for Intermediate to Advanced Topics.
This repository collects awesome surveys, resources, and papers on Lifelong Learning with Large Language Models. (Updated regularly)
[ACL 2024] A codebase for incremental learning with Large Language Models; official released code for "Learn or Recall? Revisiting Incremental Learning with Pre-trained Language Models (ACL 2024)", …
Code implementation for the paper "Enhancing Contrastive Learning with Noise-Guided Attack: Towards Continual Relation Extraction in the Wild".
Paper reproduction of Google's SCoRE (Training Language Models to Self-Correct via Reinforcement Learning).
Awesome papers about generative Information Extraction (IE) using Large Language Models (LLMs)
Source code for Document-level Relation Extraction with Evidence-guided Attention Mechanism.
A NeuroSymbolic AI technique for extracting relations from documents.
A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.).
Our research proposes a novel MoGU framework that improves LLMs' safety while preserving their usability.
[ACL 2024] A Survey of Chain of Thought Reasoning: Advances, Frontiers and Future
A standardized, fair, and reproducible benchmark for evaluating event extraction approaches
A plugin that improves ChatGPT's data security and efficiency, with many innovative features shared for free, such as: auto-refresh, keep-alive, data security, audit opt-out, conversation cloning, unrestricted replies, page cleanup, large-screen display, tracker blocking, and more. It makes the AI experience safe, smooth, and efficient.
Writing AI Conference Papers: A Handbook for Beginners
"A White-Box Guide to Building Large Models": a fully hand-built Tiny-Universe.
Recipes to train reward models for RLHF.
We release our code and data for SEAS in this repository.
A curated list of reinforcement learning with human feedback resources (continually updated)
A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide researchers, practitioners, and enthusiasts with insights i…