# Instruction-Tuning

## Papers

### 2021

- **Cross-task generalization via natural language crowdsourcing instructions** (2021-04) Swaroop Mishra et al. [paper](https://arxiv.org/abs/2104.08773)
- **Adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections** (2021-04) Ruiqi Zhong et al. [paper](https://aclanthology.org/2021.findings-emnlp.244/)
- **CrossFit: A few-shot learning challenge for cross-task generalization in NLP** (2021-04) Qinyuan Ye et al. [paper](https://arxiv.org/abs/2104.08835)

- **Finetuned language models are zero-shot learners** (2021-09) Jason Wei et al. [paper](https://openreview.net/forum?id=gEZrGCozdqR)

> FLAN
- **Multitask prompted training enables zero-shot task generalization** (2021-10) Victor Sanh et al. [paper](https://openreview.net/forum?id=9Vrb9D0WI4)

- **MetaICL: Learning to learn in context** (2021-10) Sewon Min et al. [paper](https://arxiv.org/abs/2110.15943)

### 2022

- **Training language models to follow instructions with human feedback** (2022-03) Long Ouyang et al. [paper](https://arxiv.org/abs/2203.02155)

- **Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks** (2022-04) Yizhong Wang et al. [paper](https://arxiv.org/abs/2204.07705)

- **Scaling Instruction-Finetuned Language Models** (2022-10) Hyung Won Chung et al. [paper](https://arxiv.org/pdf/2210.11416.pdf)

> Flan-T5/PaLM
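
The recipe shared by the papers above is supervised fine-tuning on (instruction, response) pairs, with the loss taken over the response tokens. Below is a minimal sketch of that recipe using Hugging Face `transformers`; the model name (`t5-small`), the toy examples, and the learning rate are illustrative assumptions, not settings from any of these papers.

```python
# Minimal instruction-tuning sketch. Assumptions: the model choice, toy data,
# and hyperparameters are illustrative, not taken from the papers above.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "t5-small"  # placeholder; FLAN/T0 fine-tune much larger models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Toy (instruction, response) pairs in the style of FLAN / Super-NaturalInstructions.
train_pairs = [
    ("Translate to German: How are you?", "Wie geht es dir?"),
    ("Classify the sentiment of: 'Great movie!' Options: positive, negative.", "positive"),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model.train()
for instruction, response in train_pairs:
    batch = tokenizer(instruction, return_tensors="pt")
    labels = tokenizer(response, return_tensors="pt").input_ids
    # For seq2seq models, passing `labels` yields cross-entropy loss on the response tokens.
    loss = model(**batch, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Training over a large, diverse set of such tasks, rather than a single one, is what these papers report as the driver of zero-shot generalization to unseen tasks.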

## Useful Resources

# Prompt Learning

## Papers

- **Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing** (2021-07) Pengfei Liu et al. [paper](https://arxiv.org/abs/2107.13586)
> A Systematic Survey
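
Liu et al. organize this area around the "pre-train, prompt, and predict" pattern: a downstream task is recast as a fill-in-the-blank template so a frozen pretrained LM can solve it, and a verbalizer maps the LM's predictions for the blank back to task labels. Below is a minimal sketch of that pattern with a masked LM; the model name, template, and label words are illustrative assumptions.

```python
# Minimal "pre-train, prompt, predict" sketch. Assumptions: the model choice,
# prompt template, and verbalizer words are illustrative placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "bert-base-uncased"  # placeholder masked LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

# The template recasts sentiment classification as slot filling;
# the verbalizer maps label words to task labels.
review = "Great movie, I loved every minute."
prompt = f"{review} Overall, it was {tokenizer.mask_token}."
verbalizer = {"good": "positive", "terrible": "negative"}

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

# Score each label word at the masked position and pick the best-scoring label.
scores = {label: logits[tokenizer.convert_tokens_to_ids(word)].item()
          for word, label in verbalizer.items()}
print(max(scores, key=scores.get))  # e.g. "positive"
```

No parameters are updated here; the design choice the survey highlights is to reshape the task to fit the LM's pretraining objective rather than fine-tune the LM to fit the task.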

## Useful Resources