

huawei-lin/README.md

Visit my homepage: https://huaweilin.net/clap

Important

🔥 I am actively looking for research internship opportunities in Large Language Models, Computer Vision, and Machine Learning for the summer of 2025. More research experience and papers (under review) can be found in my CV.

Ongoing Projects

  • Large Language Models (LLMs)
  • Diffusion Models

Open-source AI systems

We launched a team dedicated to building open-source AI systems. For more information, please visit us at DynamonAI.

GitHub Stats

Badges: Huawei Lin's GitHub stats · Top languages · Harlok's WakaTime stats

Pinned

  1. ggerganov/llama.cpp Public

     LLM inference in C/C++

     C++ · 69.2k stars · 9.9k forks

  2. RapidIn Public

     RapidIn: Scalable Influence Estimation for Large Language Models (LLMs). Implementation for the paper "Token-wise Influential Training Data Retrieval for Large Language Models" (accepted at ACL 2024). A toy influence-scoring sketch appears after this list.

     Python · 11 stars · 1 fork

  3. DMin Public

     Implementation for "DMin: Scalable Training Data Influence Estimation for Diffusion Models". Influence functions, influence estimation, and training data attribution for diffusion models (see the influence sketch after this list).

     Python · 1 star

  4. GBDT_unlearning Public

     Implementation for the paper "Machine Unlearning in Gradient Boosting Decision Trees" (accepted at KDD 2023), supporting both training and unlearning.

     C++ · 9 stars

  5. LLMsEasyFinetune Public

     An easy-to-run implementation for finetuning large language models (LLMs) such as LLaMA and Gemma, supporting full-parameter finetuning, LoRA, and QLoRA. A hedged finetuning sketch appears after this list.

     Python · 10 stars · 3 forks

  6. HCP_Dataset_Download_Automatically_Script Public

     A Python script that automatically downloads the Human Connectome Project (HCP) dataset from Amazon S3, including tfMRI, rfMRI, dfMRI, and MEG data. A download sketch follows this list.

     Python · 23 stars · 2 forks
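The influence-estimation repos above (RapidIn and DMin) both score how much a single training example affected a model's behavior on a test example. As a rough illustration only, and emphatically not the papers' actual algorithms (both use far more scalable estimators), the naive baseline is the inner product of the training-loss gradient and the test-loss gradient; `loss_fn` below is a hypothetical helper:

```python
# Toy gradient-dot-product influence score (NOT RapidIn's or DMin's method).
import torch

def flat_grad(model, loss):
    """Flatten d(loss)/d(theta) over all trainable parameters into one vector."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def influence_score(model, loss_fn, train_example, test_example):
    """Naive influence: <grad of train loss, grad of test loss>.
    loss_fn(model, example) -> scalar loss is a hypothetical helper."""
    g_train = flat_grad(model, loss_fn(model, train_example))
    g_test = flat_grad(model, loss_fn(model, test_example))
    return torch.dot(g_train, g_test).item()
```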
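LLMsEasyFinetune's feature list (full-parameter finetuning, LoRA, QLoRA) maps onto the usual Hugging Face stack. The sketch below is not that repo's interface; it is a minimal LoRA finetune assuming `transformers`, `peft`, and `datasets` are installed, with a placeholder checkpoint name, a placeholder `train.txt`, and toy hyperparameters:

```python
# Minimal LoRA finetuning sketch (NOT the LLMsEasyFinetune API).
# Assumes: pip install transformers peft datasets accelerate
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with low-rank adapters; only adapter weights train.
lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # typical for LLaMA-style attention
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Tokenize a plain-text corpus (train.txt is a placeholder file).
dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out",
                           per_device_train_batch_size=1,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the adapter weights
```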
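For the HCP download script, the general pattern is paginated listing plus per-object download with boto3. This is a hedged sketch, not the pinned script itself: the bucket name `hcp-openaccess` and the prefix are assumptions, the environment variable names are placeholders, and you need HCP-issued S3 credentials:

```python
# Hedged sketch of bulk-downloading HCP files from Amazon S3 with boto3
# (NOT the pinned script). Assumes: pip install boto3
import os
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id=os.environ["HCP_ACCESS_KEY"],      # placeholder env vars
    aws_secret_access_key=os.environ["HCP_SECRET_KEY"],
)

bucket = "hcp-openaccess"        # commonly cited HCP bucket name (assumption)
prefix = "HCP_1200/100307/MEG/"  # illustrative subject/modality prefix

# Paginate through every object under the prefix and mirror it locally.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        local_path = os.path.join("hcp_data", key)
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        s3.download_file(bucket, key, local_path)
        print(f"downloaded {key}")
```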