China University of Geosciences, Wuhan
Tianjin or Wuhan (P.R. China)
(UTC+08:00)
changwenhan.github.io
https://orcid.org/0000-0003-3350-5171
Lists (4)
CoT Mechanism
I'm looking for code that mimics the mechanism behind GPT o1. Building on this code, we can explore many security and privacy problems in CoT and LLMs.

LLM Fine-Tuning
This list collects code for LLM fine-tuning, especially PEFT fine-tuning.

LLM RLHF Fine-tuning
This list saves code I can use for LLMs in the future.

Machine Unlearning
This list is for "Machine Unlearning" experiments that I may conduct in the future.

Stars
A game-theoretic approach to explain the output of any machine learning model.
Scripts for fine-tuning Meta Llama with composable FSDP & PEFT methods to cover single/multi-node GPUs. Supports default & custom datasets for applications such as summarization and Q&A. Supporting…
Book about interpretable machine learning
A course on aligning smol models.
Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the behavior of Transformer-based language models (like GPT2, B…
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
Official implementation for "Automatic Chain of Thought Prompting in Large Language Models" (stay tuned & more will be updated)
OmniXAI: A Library for eXplainable AI
Awesome Machine Unlearning (A Survey of Machine Unlearning)
PyHessian is a PyTorch library for second-order-based analysis and training of neural networks
Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations
PyTorch Implementation of In-Domain GAN Inversion for StyleGAN2
A new framework to transform any neural networks into an interpretable concept-bottleneck-model (CBM) without needing labeled concept data
Code related to the paper "Machine Unlearning of Features and Labels"
Notebooks and code about generative AI, LLMs, MLOps, NLP, CV, and graph databases
Methods for removing learned data from neural nets and evaluation of those methods
Repo of the paper "On the Robustness of Sparse Counterfactual Explanations to Adverse Perturbations"
Streamlit demo for "A surprisingly effective way to estimate token importances in LLM prompts"
Repository for the paper: Explaining Concept Bottleneck Models with Layer-wise Relevance Propagation