Enable tool support for ollama #164

Open

alappe wants to merge 1 commit into main from ollama_support_tools

Conversation

@alappe (Contributor) commented Jul 26, 2024

Adds basic support for tools with ollama v0.3, tested with the hairbrush example:

alias LangChain.Function
alias LangChain.Message
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOllamaAI


# map of data we want to be passed as `context` to the function when
# executed.
custom_context = %{
  "user_id" => 123,
  "hairbrush" => "drawer",
  "dog" => "backyard",
  "sandwich" => "kitchen"
}

# a custom Elixir function made available to the LLM
custom_fn =
  Function.new!(%{
    name: "custom",
    description: "Returns the location of the requested element or item.",
    parameters_schema: %{
      type: "object",
      properties: %{
        thing: %{
          type: "string",
          description: "The thing whose location is being requested."
        }
      },
      required: ["thing"]
    },
    function: fn %{"thing" => thing} = _arguments, context ->
      # our context is a pretend item/location map
      {:ok, context[thing]}
    end
  })

# create and run the chain
{:ok, updated_chain, %Message{} = message} =
  LLMChain.new!(%{
    llm: ChatOllamaAI.new!(%{model: "llama3.1", verbose: true}),
    custom_context: custom_context,
    verbose: true
  })
  |> LLMChain.add_tools(custom_fn)
  |> LLMChain.add_message(Message.new_user!("Where is the hairbrush located?"))
  |> LLMChain.run(mode: :while_needs_response)

# print the LLM's answer
IO.puts(message.content)
#=> "The hairbrush is located in the drawer."
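
For orientation, when the model decides to call the tool, ollama's chat endpoint returns the tool call inside the assistant message. Decoded from JSON into an Elixir map, the response looks roughly like this (a sketch; the exact keys should be verified against the ollama v0.3 chat API):

# Rough shape of an ollama v0.3 tool-call response (assumed keys):
%{
  "model" => "llama3.1",
  "message" => %{
    "role" => "assistant",
    "content" => "",
    "tool_calls" => [
      %{
        "function" => %{
          "name" => "custom",
          "arguments" => %{"thing" => "hairbrush"}
        }
      }
    ]
  },
  "done" => true
}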

@brainlid (Owner) left a comment

Just reviewing the code, it looks good!

Can you add some tests? I'd like to see tests for the following:

  • for_api
  • get_parameters
  • call using Mimic to avoid doing a live call
  • do_process_response

Having tests really helps the maintainability of the project since I'm not easily able to test all the different chat models.

Feel free to ask questions if you need help with any of that. Thanks!
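
For reference, here is a rough sketch of what the Mimic-based call test could look like. It assumes ChatOllamaAI sends its HTTP request through Req.post/2 and that Mimic.copy(Req) runs in test_helper.exs; the response-body keys and the {:ok, [message]} return shape are likewise assumptions to check against the actual implementation:

defmodule LangChain.ChatModels.ChatOllamaAITest do
  use ExUnit.Case, async: false
  use Mimic

  alias LangChain.ChatModels.ChatOllamaAI
  alias LangChain.Message

  test "call/3 returns an assistant message without a live request" do
    # Stub the HTTP layer (assumes the model calls Req.post/2 and that
    # Mimic.copy(Req) was done in test_helper.exs).
    expect(Req, :post, fn _url, _opts ->
      {:ok,
       %Req.Response{
         status: 200,
         body: %{
           "message" => %{
             "role" => "assistant",
             "content" => "The hairbrush is located in the drawer."
           },
           "done" => true
         }
       }}
    end)

    model = ChatOllamaAI.new!(%{model: "llama3.1"})
    user = Message.new_user!("Where is the hairbrush located?")

    # Assumed return shape -- adjust to whatever call/3 actually returns.
    assert {:ok, [%Message{role: :assistant} = msg]} =
             ChatOllamaAI.call(model, [user], [])

    assert msg.content =~ "drawer"
  end
end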

@alappe force-pushed the ollama_support_tools branch from 1792a81 to daedf50 on August 5, 2024 at 09:44
@alappe (Contributor, Author) commented Aug 5, 2024

I added several tests to the best of my knowledge (I'm new to langchain). Please have a look…

@alappe force-pushed the ollama_support_tools branch from daedf50 to 85f00b9 on August 5, 2024 at 13:05
@brainlid (Owner) left a comment

Thank you for adding tests! Sorry I was slow to respond.

I added some specific requests. The one example I gave about assert [%{"function" => _} | _] = data.tools applies to all the other tests as well.

  alias LangChain.{
    ChatModels.ChatOllamaAI,
    Function,
    FunctionParam
  }
@brainlid (Owner) left a comment

Let's flatten this out into explicit aliases.

  alias LangChain.ChatModels.ChatOllamaAI
  alias LangChain.Function
  alias LangChain.FunctionParam


data = ChatOllamaAI.for_api(ollama_ai, [], [fun])

assert [%{"function" => _} | _] = data.tools
@brainlid (Owner) left a comment

One of the main goals of the test is to verify that the data is formatted the way the targeted LLM expects, not just that it's a map with a "function" key. Also, because of the test setup, you can safely make assumptions about the expected data. For example...

assert [%{} = result_data] = data.tools

# now create a result_data assertion

We want to test that the LangChain.Function data structure is structured how the supported Ollama server expects it to be. What does it expect things to be called? Anything different or unusual?
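
For illustration, a stricter version of that assertion might look like the following. It assumes ollama v0.3 expects the OpenAI-style type/function tool layout with string keys; the field names should be double-checked against the ollama chat API docs:

assert [%{} = result_data] = data.tools

# Assumed ollama tool shape -- verify the field names against the v0.3 API.
assert result_data["type"] == "function"

function = result_data["function"]
assert function["name"] == "custom"
assert function["description"] == "Returns the location of the requested element or item."
assert %{"type" => "object", "required" => ["thing"]} = function["parameters"]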
