forked from Shark-NLP/OpenICL

OpenICL is an open-source framework to facilitate research, development, and prototyping of in-context learning.

Overview

OpenICL provides an easy interface for in-context learning, with many state-of-the-art retrieval and inference methods built in to facilitate systematic comparison of LMs and fast research prototyping. Users can easily incorporate different retrieval and inference methods, as well as different prompt instructions into their workflow.

What's New

  • v0.1.8 Support LLaMA and self-consistency
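Self-consistency samples several reasoning chains from the model and takes a majority vote over their final answers. The voting step can be sketched in plain Python; the sampled answers below are hand-written stand-ins, not real model output:

```python
from collections import Counter

def self_consistency_vote(sampled_answers):
    """Return the most frequent final answer among sampled chains."""
    counts = Counter(sampled_answers)
    return counts.most_common(1)[0][0]

# Hypothetical answers extracted from five sampled reasoning chains.
samples = ["42", "41", "42", "42", "17"]
print(self_consistency_vote(samples))  # -> 42
```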

Installation

Note: OpenICL requires Python 3.8+

Using Pip

pip install openicl

Installation for local development:

git clone https://github.com/Shark-NLP/OpenICL
cd OpenICL
pip install -e .

Quick Start

The following example shows how to perform ICL on a sentiment classification dataset. More examples and tutorials can be found in the examples directory.

Step 1: Load and prepare data

from datasets import load_dataset
from openicl import DatasetReader

# Load the dataset from the Hugging Face Hub.
dataset = load_dataset('gpt3mix/sst2')

# Define a DatasetReader, specifying the columns that hold the input and output.
data = DatasetReader(dataset, input_columns=['text'], output_column='label')

Step 2: Define the prompt template (Optional)

from openicl import PromptTemplate
tp_dict = {
    0: "</E>Positive Movie Review: </text>",
    1: "</E>Negative Movie Review: </text>" 
}

template = PromptTemplate(tp_dict, {'text': '</text>'}, ice_token='</E>')

The placeholders </E> and </text> will be replaced by the in-context examples and the test input, respectively. For more detailed information about PromptTemplate (such as string-type templates), please see tutorial1.
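The substitution can be pictured as plain string replacement; the sketch below only illustrates the placeholder semantics and is not OpenICL's actual implementation:

```python
# The label-conditioned templates from the step above.
tp_dict = {
    0: "</E>Positive Movie Review: </text>",
    1: "</E>Negative Movie Review: </text>",
}

def fill(template, ice, text):
    """Replace the in-context-example token and the input token."""
    return template.replace("</E>", ice).replace("</text>", text)

# Hypothetical retrieved in-context example followed by the test input.
ice = "Positive Movie Review: a gorgeous film\n"
prompt = fill(tp_dict[0], ice, "an utter delight")
print(prompt)
```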

Step 3: Initialize the Retriever

from openicl import TopkRetriever
# Define a retriever using the previous `DatasetReader`.
# `ice_num` is the number of in-context examples to retrieve.
retriever = TopkRetriever(data, ice_num=8)

Here we use the popular TopK method to build the retriever.
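TopK retrieval embeds the test input and selects the training examples whose embeddings are most similar to it. A minimal sketch of that ranking step using cosine similarity on toy vectors (the embeddings here stand in for sentence-encoder outputs; OpenICL's retriever handles the encoding itself):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def topk(query_vec, candidate_vecs, k):
    """Indices of the k candidates most similar to the query."""
    ranked = sorted(range(len(candidate_vecs)),
                    key=lambda i: cosine(query_vec, candidate_vecs[i]),
                    reverse=True)
    return ranked[:k]

# Toy 2-d embeddings of four candidate training examples.
candidates = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.5, 0.5]]
query = [1.0, 0.05]
print(topk(query, candidates, k=2))  # -> [0, 1]
```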

Step 4: Initialize the Inferencer

from openicl import PPLInferencer
inferencer = PPLInferencer(model_name='distilgpt2')
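PPL-based inference scores the prompt produced by each candidate label's template with the LM and predicts the label whose prompt has the lowest perplexity. A sketch of just the selection step, with hand-written perplexities in place of real model scores:

```python
def select_label(ppl_per_label):
    """Pick the label whose filled prompt is most probable (lowest PPL)."""
    return min(ppl_per_label, key=ppl_per_label.get)

# Hypothetical perplexities the LM might assign to each label's prompt:
# 0 = positive template, 1 = negative template.
ppls = {0: 12.3, 1: 18.7}
print(select_label(ppls))  # -> 0
```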

Step 5: Inference and scoring

from openicl import AccEvaluator
# The inferencer requires a retriever to collect in-context examples, as well as a template to wrap them up.
predictions = inferencer.inference(retriever, ice_template=template)
# Compute accuracy for the predictions.
score = AccEvaluator().score(predictions=predictions, references=data.references)
print(score)
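Accuracy here is simply the fraction of predictions that match the references; a plain-Python equivalent of the metric (illustrative, not `AccEvaluator`'s internals):

```python
def accuracy(predictions, references):
    """Fraction of predictions equal to their reference labels."""
    assert len(predictions) == len(references)
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

print(accuracy([0, 1, 1, 0], [0, 1, 0, 0]))  # -> 0.75
```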

Docs

(updating...)

OpenICL Documentation

Citation

If you find this repository helpful, feel free to cite our paper:

@article{wu2023openicl,
  title={OpenICL: An Open-Source Framework for In-context Learning},
  author={Zhenyu Wu and Yaoxiang Wang and Jiacheng Ye and Jiangtao Feng and Jingjing Xu and Yu Qiao and Zhiyong Wu},
  journal={arXiv preprint arXiv:2303.02913},
  year={2023}
}
