- Mass General Brigham
- Boston, MA
- https://sites.google.com/view/xinsong-du/home
- @xinsongdu
- in/xinsong-du-900736106
Starred repositories
Integrate the DeepSeek API into popular software
The Twitter sentiment corpus created by Sanders Analytics. It consists of 5,513 hand-classified tweets (however, 400 tweets are missing due to the scripts created by the company). Each tweet was classifi…
Text-to-SQL Generation for Question Answering on Electronic Medical Records
ACL2023 - AlignScore, a metric for factual consistency evaluation.
Adding guardrails to large language models.
Magnificent app which corrects your previous console command.
Pyenv plugin to create a jupyter kernel for every installed pyenv version
TensorZero creates a feedback loop for optimizing LLM applications — turning production data into smarter, faster, and cheaper models.
A high-throughput and memory-efficient inference and serving engine for LLMs
Official implementation for ICML24 paper "Irregular Multivariate Time Series Forecasting: A Transformable Patching Graph Neural Networks Approach"
Code for the paper "Larger and more instructable language models become less reliable"
A book for getting started with the Phi family of SLMs. Phi is a family of open-source AI models developed by Microsoft. Phi models are the most capable and cost-effective small langua…
✨✨Latest Advances on Multimodal Large Language Models
Annotation Tool: The extensible Human Oracle Suite of Tools (eHOST)
BARTScore: Evaluating Generated Text as Text Generation
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
🦜🔗 Build context-aware reasoning applications
[IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection
A library for efficient similarity search and clustering of dense vectors.
List of papers on hallucination detection in LLMs.
Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
An annotated implementation of the Transformer paper.