Prepare for v0.3.0-rc.1 (#217)
* update changelog

* updated changelog for migration instructions

* added missing entries to changelog

* formatting

* prep for release
brainlid authored Dec 16, 2024
1 parent 94980a3 commit d766e79
Showing 10 changed files with 153 additions and 13 deletions.
120 changes: 120 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,125 @@
# Changelog

## v0.3.0-rc.1 (2024-12-15)

### Breaking Changes
- Change return of LLMChain.run/2 ([#170](https://github.com/brainlid/langchain/pull/170))
- Revamped error handling, including Anthropic's "overload_error" ([#194](https://github.com/brainlid/langchain/pull/194))

#### Change return of LLMChain.run/2 ([#170](https://github.com/brainlid/langchain/pull/170))

##### Why the change

Before this change, an `LLMChain`'s `run` function returned `{:ok, updated_chain, last_message}`.

When an assistant (i.e., the LLM) issues a ToolCall and `run` is in `:until_success` or `:while_needs_response` mode, the `LLMChain` automatically executes the function and returns the result to the LLM as a new Message. This works great!

The problem comes when an application needs to keep track of all the messages exchanged during a `run` operation. That can be done with callbacks, sending and receiving messages, but it's far from ideal. It makes more sense to have that information available directly after the `run` operation completes.
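For context, a minimal sketch of such a tool-using run (assuming a `model` and a `my_tool` function defined elsewhere; `add_tools/2` and `add_message/2` are used here as the chain-building helpers):

```elixir
alias LangChain.Chains.LLMChain
alias LangChain.Message

# a sketch: the chain executes the ToolCalls automatically and keeps
# going until the LLM no longer needs a tool response
{:ok, updated_chain} =
  %{llm: model}
  |> LLMChain.new!()
  |> LLMChain.add_tools([my_tool])
  |> LLMChain.add_message(Message.new_user!("What's the weather in Detroit?"))
  |> LLMChain.run(mode: :while_needs_response)
```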

##### What this change does

This PR changes the return type to `{:ok, updated_chain}`.

The `last_message` is available in `updated_chain.last_message`. This cleans up the return API.

This change also adds `%LLMChain{exchanged_messages: exchanged_messages}`, or `updated_chain.exchanged_messages`: a list of all the messages exchanged between the application and the LLM during the execution of the `run` function.

This breaks the return contract for the `run` function.

##### How to adapt to this change

To adapt, if the application isn't using the `last_message` from `{:ok, updated_chain, _last_message}`, simply delete the third element of the tuple and match on `{:ok, updated_chain}`.

Access to the `last_message` is available on the `updated_chain`.

```elixir
{:ok, updated_chain} =
  %{llm: model}
  |> LLMChain.new!()
  |> LLMChain.run()

last_message = updated_chain.last_message
```

NOTE: the `updated_chain` now includes `updated_chain.exchanged_messages`, which can also be used.
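For example, a sketch of reviewing everything exchanged during a run (assuming each entry is a `LangChain.Message` struct with `role` and `content` fields):

```elixir
{:ok, updated_chain} =
  %{llm: model}
  |> LLMChain.new!()
  |> LLMChain.run(mode: :while_needs_response)

# log every message exchanged during this run, including any
# automatically executed tool calls and their results
Enum.each(updated_chain.exchanged_messages, fn message ->
  IO.puts("#{message.role}: #{inspect(message.content)}")
end)
```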

#### Revamped error handling, including Anthropic's "overload_error" ([#194](https://github.com/brainlid/langchain/pull/194))

**What you need to do:**
Check your application code for how it responds to and handles error responses.

If you want to keep the same previous behavior, the following code change will do that:

```elixir
case LLMChain.run(chain) do
  {:ok, _updated_chain} ->
    :ok

  # return the error for display
  {:error, _updated_chain, %LangChainError{message: reason}} ->
    {:error, reason}
end
```

The return shape changed from:

```elixir
{:error, _updated_chain, reason}
```

to:

```elixir
{:error, _updated_chain, %LangChainError{message: reason}}
```

When possible, a `type` value may be set on the `LangChainError`, making it easier to handle some error types programmatically.
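For example, a sketch of branching on the error `type` (the `"overloaded"` type appears in the library's own tests; treat any other type strings as assumptions to verify against your version):

```elixir
case LLMChain.run(chain) do
  {:ok, updated_chain} ->
    {:ok, updated_chain}

  # "overloaded" is the type set for Anthropic's overload errors
  {:error, _updated_chain, %LangChainError{type: "overloaded"}} ->
    # back off and let the caller retry later
    {:error, :retry_later}

  {:error, _updated_chain, %LangChainError{message: reason}} ->
    {:error, reason}
end
```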

### Features
- Added ability to summarize LLM conversations (#216)
- Implemented initial support for fallbacks (#207)
- Added AWS Bedrock support for ChatAnthropic (#154)
- Added OpenAI's new structured output API (#180)
- Added support for examples to title chain (#191)
- Added tool_choice support for OpenAI and Anthropic (#142)
- Added support for passing safety settings to Google AI (#186)
- Added OpenAI project authentication (#166)

### Fixes
- Fixed specs and examples (#211)
- Fixed content-part encoding and decoding for Google API (#212)
- Fixed ChatOllamaAI streaming response (#162)
- Fixed streaming issue with Azure OpenAI Service (#158, #161)
- Fixed OpenAI stream decode issue (#156)
- Fixed typespec error on Message.new_user/1 (#151)
- Fixed duplicate tool call parameters (#174)

### Improvements
- Added error type support for Azure token rate limit exceeded
- Improved error handling (#194)
- Enhanced function execution failure response
- Added "processed_content" to ToolResult struct (#192)
- Implemented support for strict mode for tools (#173)
- Updated documentation for ChatOpenAI use on Azure
- Updated config documentation for API keys
- Updated README examples

### Azure & Google AI Updates
- Added Azure test for ChatOpenAI usage
- Added support for system instructions for Google AI (#182)
- Handle functions with no parameters for Google AI (#183)
- Handle missing token usage fields for Google AI (#184)
- Handle empty text parts from GoogleAI responses (#181)
- Handle all possible finishReasons for ChatGoogleAI (#188)

### Documentation
- Added LLM Model documentation for tool_choice
- Updated documentation using new functions
- Added custom functions notebook
- Improved documentation formatting (#145)
- Added links to models in the config section
- Updated getting started doc for callbacks

## v0.3.0-rc.0 (2024-06-05)

**Added:**
5 changes: 4 additions & 1 deletion lib/chains/data_extraction_chain.ex
@@ -132,7 +132,10 @@ Passage:
"Caught unexpected exception in DataExtractionChain. Error: #{inspect(exception)}"
)

{:error, LangChainError.exception("Unexpected error in DataExtractionChain. Check logs for details.")}
{:error,
LangChainError.exception(
"Unexpected error in DataExtractionChain. Check logs for details."
)}
end
end

11 changes: 8 additions & 3 deletions lib/chat_models/chat_ollama_ai.ex
@@ -280,7 +280,10 @@ defmodule LangChain.ChatModels.ChatOllamaAI do
def do_api_request(ollama_ai, messages, functions, retry_count \\ 3)

def do_api_request(_ollama_ai, _messages, _functions, 0) do
raise LangChainError.exception(type: "retries_exceeded", message: "Retries exceeded. Connection failed.")
raise LangChainError.exception(
type: "retries_exceeded",
message: "Retries exceeded. Connection failed."
)
end

def do_api_request(
@@ -352,7 +355,8 @@ defmodule LangChain.ChatModels.ChatOllamaAI do
{:error, error}

{:error, %Req.TransportError{reason: :timeout} = err} ->
{:error, LangChainError.exception(type: "timeout", message: "Request timed out", original: err)}
{:error,
LangChainError.exception(type: "timeout", message: "Request timed out", original: err)}

{:error, %Req.TransportError{reason: :closed}} ->
# Force a retry by making a recursive call decrementing the counter
@@ -364,7 +368,8 @@ defmodule LangChain.ChatModels.ChatOllamaAI do
"Unhandled and unexpected response from streamed post call. #{inspect(other)}"
)

{:error, LangChainError.exception(type: "unexpected_response", message: "Unexpected response")}
{:error,
LangChainError.exception(type: "unexpected_response", message: "Unexpected response")}
end
end

8 changes: 6 additions & 2 deletions lib/message.ex
@@ -200,7 +200,10 @@ defmodule LangChain.Message do
end
else
# only a user message can have ContentParts
Logger.error("Invalid message content #{inspect get_field(changeset, :content)} for role #{role}")
Logger.error(
"Invalid message content #{inspect(get_field(changeset, :content))} for role #{role}"
)

add_error(changeset, :content, "is invalid for role #{role}")
end

@@ -348,7 +351,8 @@ defmodule LangChain.Message do
Create a new user message which represents a human message or a message from
the application.
"""
@spec new_user!(content :: String.t() | [ContentPart.t() | PromptTemplate.t()]) :: t() | no_return()
@spec new_user!(content :: String.t() | [ContentPart.t() | PromptTemplate.t()]) ::
t() | no_return()
def new_user!(content) do
case new_user(content) do
{:ok, msg} ->
4 changes: 3 additions & 1 deletion lib/message/tool_call.ex
@@ -228,11 +228,13 @@ defmodule LangChain.Message.ToolCall do
# We want to take whatever we are given here.
defp assign_string_value(changeset, field, attrs) do
# get both possible versions of the arguments.
case Map.get(attrs, field) || Map.get(attrs, to_string(field)) do
case Map.get(attrs, field) || Map.get(attrs, to_string(field)) do
"" ->
changeset

val when is_binary(val) ->
put_change(changeset, field, val)

_ ->
changeset
end
4 changes: 3 additions & 1 deletion mix.exs
@@ -2,7 +2,7 @@ defmodule LangChain.MixProject do
use Mix.Project

@source_url "https://github.com/brainlid/langchain"
@version "0.3.0-rc.0"
@version "0.3.0-rc.1"

def project do
[
@@ -73,6 +73,7 @@ defmodule LangChain.MixProject do
Chains: [
LangChain.Chains.LLMChain,
LangChain.Chains.TextToTitleChain,
LangChain.Chains.SummarizeConversationChain,
LangChain.Chains.DataExtractionChain
],
Messages: [
@@ -109,6 +110,7 @@
],
Utils: [
LangChain.Utils,
LangChain.Utils.BedrockConfig,
LangChain.Utils.ChatTemplates,
LangChain.Utils.ChainResult,
LangChain.Config,
3 changes: 2 additions & 1 deletion test/chains/data_extraction_chain_test.exs
@@ -61,7 +61,8 @@ defmodule LangChain.Chains.DataExtractionChainTest do
|> FunctionParam.to_parameters_schema()

# Model setup - specify the model and seed
{:ok, chat} = ChatOpenAI.new(%{model: "gpt-4o-mini-2024-07-18", temperature: 0, seed: 0, stream: false})
{:ok, chat} =
ChatOpenAI.new(%{model: "gpt-4o-mini-2024-07-18", temperature: 0, seed: 0, stream: false})

# run the chain, chain.run(prompt to extract data from)
data_prompt = """
3 changes: 1 addition & 2 deletions test/chains/llm_chain_test.exs
@@ -1072,8 +1072,7 @@ defmodule LangChain.Chains.LLMChainTest do
test "returns error when receives overloaded from Anthropic" do
# Made NOT LIVE here
expect(ChatAnthropic, :call, fn _model, _prompt, _tools ->
{:error,
LangChainError.exception(type: "overloaded", message: "Overloaded (from test)")}
{:error, LangChainError.exception(type: "overloaded", message: "Overloaded (from test)")}
end)

model = ChatAnthropic.new!(%{stream: true, model: @anthropic_test_model})
6 changes: 4 additions & 2 deletions test/chat_models/chat_mistral_ai_test.exs
@@ -163,7 +163,8 @@ defmodule LangChain.ChatModels.ChatMistralAITest do
}
}

assert {:error, %LangChainError{} = error} = ChatMistralAI.do_process_response(model, response)
assert {:error, %LangChainError{} = error} =
ChatMistralAI.do_process_response(model, response)

assert error.type == nil
assert error.message == "Invalid request"
@@ -172,7 +173,8 @@
test "handles Jason.DecodeError", %{model: model} do
response = {:error, %Jason.DecodeError{}}

assert {:error, %LangChainError{} = error} = ChatMistralAI.do_process_response(model, response)
assert {:error, %LangChainError{} = error} =
ChatMistralAI.do_process_response(model, response)

assert error.type == "invalid_json"
assert "Received invalid JSON:" <> _ = error.message
2 changes: 2 additions & 0 deletions test/message/tool_call_test.exs
@@ -69,6 +69,7 @@ defmodule LangChain.Message.ToolCallTest do

test "casts spaces in arguments as spaces" do
one_space = " "

assert {:ok, %ToolCall{} = msg} =
ToolCall.new(%{
"status" => :incomplete,
@@ -83,6 +84,7 @@

# Multiple spaces
four_spaces = " "

assert {:ok, %ToolCall{} = msg} =
ToolCall.new(%{
"status" => :incomplete,
