Commit
Add OpenLM LLM multi-provider (langchain-ai#4993)
OpenLM is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP. It implements the OpenAI Completion class so that it can be used as a drop-in replacement for the OpenAI API. This changeset subclasses BaseOpenAI to keep the added code minimal.

---------

Co-authored-by: Dev 2049 <[email protected]>
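As a quick illustration of the drop-in behavior the message describes, a minimal sketch (assuming `openlm` and `openai` are installed and the API keys from the notebook below are set):

from langchain.llms import OpenLM

# One class routes to different providers based on the model string;
# a plain OpenAI model name goes to the OpenAI API.
llm = OpenLM(model="text-davinci-003", max_tokens=32)
print(llm("Say hello:"))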
Showing 6 changed files with 191 additions and 4 deletions.
@@ -0,0 +1,133 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# OpenLM\n",
    "[OpenLM](https://github.com/r2d4/openlm) is a zero-dependency OpenAI-compatible LLM provider that can call different inference endpoints directly via HTTP.\n",
    "\n",
    "\n",
    "It implements the OpenAI Completion class so that it can be used as a drop-in replacement for the OpenAI API. This changeset utilizes BaseOpenAI for minimal added code.\n",
    "\n",
"This examples goes over how to use LangChain to interact with both OpenAI and HuggingFace. You'll need API keys from both." | ||
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Setup\n",
    "Install dependencies and set API keys."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Uncomment to install openlm and openai if you haven't already\n",
    "\n",
    "# !pip install openlm\n",
    "# !pip install openai"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "from getpass import getpass\n",
    "import os\n",
    "\n",
    "\n",
    "# Check if OPENAI_API_KEY environment variable is set\n",
    "if \"OPENAI_API_KEY\" not in os.environ:\n",
    "    print(\"Enter your OpenAI API key:\")\n",
    "    os.environ[\"OPENAI_API_KEY\"] = getpass()\n",
    "\n",
    "# Check if HF_API_TOKEN environment variable is set\n",
    "if \"HF_API_TOKEN\" not in os.environ:\n",
    "    print(\"Enter your HuggingFace Hub API key:\")\n",
    "    os.environ[\"HF_API_TOKEN\"] = getpass()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Using LangChain with OpenLM\n",
    "\n",
    "Here we're going to call two models in an LLMChain, `text-davinci-003` from OpenAI and `gpt2` on HuggingFace."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "from langchain.llms import OpenLM\n",
    "from langchain import PromptTemplate, LLMChain"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: text-davinci-003\n",
      "Result: France is a country in Europe. The capital of France is Paris.\n",
      "Model: huggingface.co/gpt2\n",
      "Result: Question: What is the capital of France?\n",
      "\n",
      "Answer: Let's think step by step. I am not going to lie, this is a complicated issue, and I don't see any solutions to all this, but it is still far more\n"
     ]
    }
   ],
   "source": [
    "question = \"What is the capital of France?\"\n",
    "template = \"\"\"Question: {question}\n",
    "\n",
    "Answer: Let's think step by step.\"\"\"\n",
    "\n",
    "prompt = PromptTemplate(template=template, input_variables=[\"question\"])\n",
    "\n",
    "for model in [\"text-davinci-003\", \"huggingface.co/gpt2\"]:\n",
    "    llm = OpenLM(model=model)\n",
    "    llm_chain = LLMChain(prompt=prompt, llm=llm)\n",
    "    result = llm_chain.run(question)\n",
    "    print(\"\"\"Model: {}\n",
    "Result: {}\"\"\".format(model, result))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
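The notebook drives everything through LLMChain, but since the wrapper below simply binds `openlm.Completion` as its client, the same multi-provider routing can be sketched against openlm directly. This assumes openlm exposes an OpenAI-style `Completion.create`, which is exactly what the LangChain integration relies on:

import openlm

# Hypothetical direct call, bypassing LangChain; the model string picks
# the provider, here the HuggingFace inference endpoint for gpt2.
completion = openlm.Completion.create(
    model="huggingface.co/gpt2",
    prompt="What is the capital of France?",
    max_tokens=20,
)
print(completion)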
@@ -0,0 +1,26 @@
from typing import Any, Dict

from pydantic import root_validator

from langchain.llms.openai import BaseOpenAI


class OpenLM(BaseOpenAI):
    @property
    def _invocation_params(self) -> Dict[str, Any]:
        return {**{"model": self.model_name}, **super()._invocation_params}

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        try:
            import openlm

            values["client"] = openlm.Completion
        except ImportError:
            raise ValueError(
                "Could not import openlm python package. "
                "Please install it with `pip install openlm`."
            )
        if values["streaming"]:
            raise ValueError("Streaming not supported with openlm")
        return values
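Two details of this class are worth calling out: `_invocation_params` prepends the model string so that BaseOpenAI forwards it to `openlm.Completion.create`, and the root validator rejects `streaming=True` at construction time. A hypothetical session showing both, assuming `openlm` is installed and the environment is configured as in the notebook above:

from langchain.llms.openlm import OpenLM

llm = OpenLM(model_name="huggingface.co/gpt2", max_tokens=20)
print(llm._invocation_params["model"])  # -> "huggingface.co/gpt2"

# Streaming is refused up front; pydantic v1 surfaces the ValueError
# raised in validate_environment as a ValidationError, which is itself
# a ValueError subclass, so this catch works.
try:
    OpenLM(model_name="huggingface.co/gpt2", streaming=True)
except ValueError as err:
    print(err)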
(The remaining changed files are not rendered in this view.)
@@ -0,0 +1,8 @@
from langchain.llms.openlm import OpenLM


def test_openlm_call() -> None:
    """Test valid call to openlm."""
    llm = OpenLM(model_name="dolly-v2-7b", max_tokens=10)
    output = llm(prompt="Say foo:")
    assert isinstance(output, str)
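A companion test for the streaming guard in validate_environment might look like this (a sketch, not part of this commit):

import pytest

from langchain.llms.openlm import OpenLM


def test_openlm_streaming_rejected() -> None:
    """Sketch: streaming=True should be refused by the validator."""
    with pytest.raises(ValueError):
        OpenLM(model_name="dolly-v2-7b", streaming=True)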