move inference api details to models
saa1605 committed Oct 22, 2024
1 parent 1f609e0 commit fcb0daf
Showing 2 changed files with 66 additions and 15 deletions.
17 changes: 2 additions & 15 deletions README.md
@@ -47,25 +47,12 @@ Install the agent_s package and dependencies
pip install -e .
```

-Set your LLM API Keys and other environment variables. You can do this by adding the following lines to your .bashrc (Linux), or .zshrc (MacOS) file. We support OpenAI, Azure OpenAI, Anthropic, and vLLM models.
+Set your LLM API Keys and other environment variables. You can do this by adding the following lines to your .bashrc (Linux), or .zshrc (MacOS) file.

-1. OpenAI
-```
-export OPENAI_API_KEY=<YOUR_API_KEY>
-```
-2. Anthropic
-```
-export ANTHROPIC_API_KEY=<YOUR_API_KEY>
-```
-3. OpenAI on Azure
-```
-export AZURE_OPENAI_API_BASE=<DEPLOYMENT_NAME>
-export AZURE_OPENAI_API_KEY=<YOUR_API_KEY>
-```
-4. vLLM for Local Models
-```
-export vLLM_ENDPOINT_URL=<YOUR_DEPLOYMENT_URL>
-```
+We also support Azure OpenAI, Anthropic, and vLLM inference. For more information refer to [models.md](models.md).

### Setup Retrieval from Web using Perplexica

64 changes: 64 additions & 0 deletions models.md
@@ -0,0 +1,64 @@
We support the following APIs for MLLM inference: OpenAI, Anthropic, Azure OpenAI, and vLLM for local models. To use these APIs, set the corresponding environment variables (a quick sanity-check snippet follows the list):

1. OpenAI

```bash
export OPENAI_API_KEY=<YOUR_API_KEY>
```

2. Anthropic

```bash
export ANTHROPIC_API_KEY=<YOUR_API_KEY>
```

3. OpenAI on Azure

```bash
export AZURE_OPENAI_API_BASE=<DEPLOYMENT_NAME>
export AZURE_OPENAI_API_KEY=<YOUR_API_KEY>
```

4. vLLM for Local Models

```bash
export vLLM_ENDPOINT_URL=<YOUR_DEPLOYMENT_URL>
```
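
After exporting, you can verify that the variables are actually visible to Python before launching the agent. This is a generic check using only the standard library; the variable names match the list above:

```python
import os

# Check whichever provider you plan to use; unset variables print NOT set.
for var in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY",
            "AZURE_OPENAI_API_BASE", "AZURE_OPENAI_API_KEY",
            "vLLM_ENDPOINT_URL"):
    print(f"{var} is {'set' if os.environ.get(var) else 'NOT set'}")
```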

Alternatively, you can pass the API keys directly through the engine_params argument when instantiating the agent.

```python
from agent_s.GraphSearchAgent import GraphSearchAgent

platform_os = "macos"  # the OS the agent controls, e.g. "macos" or "ubuntu"

engine_params = {
    "engine_type": 'anthropic',  # Allowed Values: 'openai', 'anthropic', 'azure_openai', 'vllm'
    "model": 'claude-3-5-sonnet-20240620',  # Allowed Values: Any Vision and Language Model from the supported APIs
    # "api_key": '<YOUR_API_KEY>',  # optionally pass the key here instead of an environment variable (exact field name depends on the engine implementation)
}
agent = GraphSearchAgent(
    engine_params,
    experiment_type='openaci',
    platform=platform_os,
    max_tokens=1500,
    top_p=0.9,
    temperature=0.5,
    action_space="pyautogui",
    observation_type="atree",
    max_trajectory_length=3,
    a11y_tree_max_tokens=10000,
    enable_reflection=True,
)
```
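
The same engine_params structure applies to the other engines. For example, a local vLLM deployment might look like the following sketch (the model name is illustrative, not a requirement):

```python
engine_params = {
    "engine_type": 'vllm',  # uses the vLLM_ENDPOINT_URL environment variable described above
    "model": 'Qwen/Qwen2-VL-7B-Instruct',  # illustrative; any vision-language model served by your vLLM instance
}
```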

To use the underlying multimodal agent (LMMAgent), which wraps LLMs with message-handling functionality, you can use the following snippet:

```python
from agent_s.MultimodalAgent import LMMAgent

engine_params = {
    "engine_type": 'anthropic',  # Allowed Values: 'openai', 'anthropic', 'azure_openai', 'vllm'
    "model": 'claude-3-5-sonnet-20240620',  # Allowed Values: Any Vision and Language Model from the supported APIs
}
agent = LMMAgent(engine_params=engine_params)
```
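
As a rough sketch of how such a message-wrapping agent is typically driven — the method names below are assumptions for illustration, not confirmed API; check agent_s/MultimodalAgent.py for the actual interface:

```python
# Hypothetical usage; the method names are assumptions, not confirmed API.
agent.add_system_prompt("You are a helpful GUI agent.")  # assumed method
agent.add_message("Describe the current screen.")        # assumed method
response = agent.get_response()                          # assumed method
print(response)
```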

The GraphSearchAgent also utilizes this LMMAgent internally.
