add instruction tuning
mac committed Feb 22, 2023
1 parent 197c803 commit 71f54d4
Showing 3 changed files with 34 additions and 2 deletions.
3 changes: 2 additions & 1 deletion README.md
@@ -64,7 +64,8 @@ If you're interested in the field of LLM, you may find the above list of milesto
 - [Chain-of-Thought](paper_list/chain_of_thougt.md)
 - [In-Context-Learning](paper_list/in_context_learning.md)
 - [RLHF](paper_list/RLHF.md)
-- [Prompt-Tuning](paper_list/prompt_tuning.md)
+- [Prompt-Learning](paper_list/prompt_learning.md)
+- [Instruction-Tuning](paper_list/instruction-tuning.md)
 - [MOE](paper_list/moe.md)
 - [Code-Pretraining](paper_list/code_pretraining.md)
 - [LLM-Evaluation](paper_list/protein_pretraining.md)
30 changes: 30 additions & 0 deletions paper_list/instruction-tuning.md
@@ -0,0 +1,30 @@
# Instruction-Tuning

## Papers

### 2021

- **Cross-task generalization via natural language crowdsourcing instructions** (2021-04) Swaroop Mishra et al. [paper](https://arxiv.org/abs/2104.08773)
- **Adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections** (2021-04) Ruiqi Zhong et al. [paper](https://aclanthology.org/2021.findings-emnlp.244/)
- **CrossFit: A few-shot learning challenge for cross-task generalization in NLP** (2021-04) Qinyuan Ye et al. [paper](https://arxiv.org/abs/2104.08835)

- **Finetuned language models are zero-shot learners** (2021-09) Jason Wei et al. [paper](https://openreview.net/forum?id=gEZrGCozdqR)

> FLAN
- **Multitask prompted training enables zero-shot task generalization** (2021-10) Victor Sanh et al. [paper](https://openreview.net/forum?id=9Vrb9D0WI4)

- **MetaICL: Learning to learn in context** (2021-10) Sewon Min et al. [paper](https://arxiv.org/abs/2110.15943)

### 2022

- **Training language models to follow instructions with human feedback** (2022-03) Long Ouyang et al. [paper](https://arxiv.org/abs/2203.02155)

- **Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks** (2022-04) Yizhong Wang et al. [paper](https://arxiv.org/abs/2204.07705)

- **Scaling Instruction-Finetuned Language Models** (2022-10) Hyung Won Chung et al. [paper](https://arxiv.org/abs/2210.11416)

> Flan-T5 / Flan-PaLM

## Useful Resources

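For a concrete picture of the recipe the papers above share, here is a minimal sketch of supervised instruction tuning, assuming PyTorch and Hugging Face `transformers`. It fine-tunes a causal LM on (instruction, response) pairs, computing the loss only on the response tokens. The `gpt2` checkpoint, the prompt template, and the two toy examples are illustrative stand-ins rather than anything a specific paper prescribes; real runs train over large task mixtures such as Natural Instructions or the FLAN collections.

```python
# Minimal instruction-tuning sketch (assumptions: PyTorch + Hugging Face
# `transformers`; "gpt2" is a stand-in for any causal LM; toy data only).
import torch
from torch.nn.utils.rnn import pad_sequence
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Toy (instruction, response) pairs; real work uses large task collections.
examples = [
    {"instruction": "Translate to French: Good morning.", "response": "Bonjour."},
    {"instruction": "Label the sentiment of 'I loved it.'", "response": "Positive."},
]

def encode(ex):
    """Tokenize one example, masking prompt tokens out of the loss."""
    prompt = f"Instruction: {ex['instruction']}\nResponse:"
    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    target_ids = tokenizer(" " + ex["response"] + tokenizer.eos_token,
                           add_special_tokens=False)["input_ids"]
    input_ids = torch.tensor(prompt_ids + target_ids)
    labels = input_ids.clone()
    labels[: len(prompt_ids)] = -100  # -100 is ignored by the LM loss
    return input_ids, labels

encoded = [encode(ex) for ex in examples]
input_ids = pad_sequence([e[0] for e in encoded], batch_first=True,
                         padding_value=tokenizer.pad_token_id)
labels = pad_sequence([e[1] for e in encoded], batch_first=True,
                      padding_value=-100)
lengths = torch.tensor([len(e[0]) for e in encoded])
attention_mask = (torch.arange(input_ids.size(1))[None, :] < lengths[:, None]).long()

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for _ in range(3):  # a few demo steps; real runs take many more
    loss = model(input_ids=input_ids, attention_mask=attention_mask,
                 labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The prompt masking (the `-100` labels) is what distinguishes this from plain language-model fine-tuning: the model is graded only on how it completes each instruction, not on reproducing the instruction itself.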
2 changes: 2 additions & 1 deletion paper_list/prompt_learning.md
@@ -1,6 +1,7 @@
-# Prompt Tuning
+# Prompt Learning
 
 ## Papers
 - **Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing** (2021-07) Pengfei Liu et al. [paper](https://arxiv.org/abs/2107.13586)
+> A Systematic Survey
 ## Useful Resources
