Disable strong cache in integration tests #96

Closed. Wants to merge 85 commits into main from no_strong_cache.
85 commits:
4673b7b
bump version
whimo Jul 22, 2024
51b678b
add images in docs
whimo Jul 22, 2024
d4663c5
Support crewai tools (#63)
BespalovSergey Jul 25, 2024
769cf79
bump version
whimo Jul 25, 2024
b766d89
temp disable indeterministic test
whimo Jul 25, 2024
a5542c5
Generalize common graph store tests (#68)
whimo Jul 31, 2024
77b55be
Update README.md
whimo Aug 14, 2024
8646784
Replicate image api (#69)
ZmeiGorynych Aug 23, 2024
ad47d3d
Add langchain-openai dependency
whimo Aug 25, 2024
080faad
Supply prompt prefix as a list of messages (#70)
whimo Aug 25, 2024
176c0a0
bump version
whimo Aug 25, 2024
d8b731e
minor tweaks (#67)
ZmeiGorynych Aug 26, 2024
b781cc1
Relax Python version upper constraint
whimo Aug 28, 2024
a0b4065
Remove optional dependencies from Poetry resolver (#72)
whimo Aug 29, 2024
423ef8e
Support custom callbacks in Langchain agents + streaming demo (#73)
whimo Aug 30, 2024
bb43aad
bump version
whimo Aug 30, 2024
169c6e1
Fix typo in example
whimo Aug 30, 2024
09d1b0d
Add compatibility checks for `config` parameter in MotleyTool's Llama…
whimo Sep 2, 2024
e59f830
Support various LLM providers (Ollama, Groq, Together...) + docs (#75)
whimo Sep 6, 2024
a758348
bump version
whimo Sep 6, 2024
8b85a9a
Unify output handlers and regular tools, improve exception management…
whimo Sep 11, 2024
78ee38a
Fix link in README
whimo Sep 16, 2024
babf140
Support format strings in prompt_prefix (#77)
ViStefan Sep 17, 2024
ec699d3
populate __version__ (#78)
ViStefan Sep 17, 2024
c332e0e
Fix CrewAI delegation (#79)
whimo Sep 19, 2024
713ca46
Remove explicit setuptools dependency (#80)
whimo Sep 20, 2024
191771b
bump version
whimo Sep 20, 2024
e51e1fc
Fixes for asynchronous crew execution (#81)
whimo Sep 23, 2024
679de70
disable strong cache in tests
ViStefan Sep 26, 2024
8bd6a99
secrets for integration tests
ViStefan Sep 26, 2024
daaafa0
Event-driven orchestration demo with Faust (#85)
whimo Sep 27, 2024
bac5843
Update Event-driven orchestration for AI systems.ipynb
ZmeiGorynych Sep 27, 2024
246c683
Update Event-driven orchestration for AI systems.ipynb
ZmeiGorynych Sep 27, 2024
260ff53
Support custom LLMs in research agent (#86)
whimo Sep 27, 2024
068e396
Upgrade Langchain to v0.3 (#82)
whimo Sep 30, 2024
010a05e
Retry mechanism in MotleyTool (#88)
whimo Sep 30, 2024
3fc4970
Merge branch 'main' into no_strong_cache
whimo Sep 30, 2024
20a25f6
Fix Autogen example
whimo Sep 30, 2024
db2c11d
disable results writing in integration tests
whimo Sep 30, 2024
439c3e1
minor fix
whimo Sep 30, 2024
22a5fc1
bump version
whimo Oct 1, 2024
1fc9bfd
use cache in tests
ViStefan Oct 2, 2024
13482eb
disallow parallel test execution
ViStefan Oct 2, 2024
1e14d6d
bump version
whimo Oct 1, 2024
0de4899
disable strong cache in tests
ViStefan Sep 26, 2024
e0bcb0a
use cache in tests
ViStefan Oct 2, 2024
f5888c6
install external dependencies in examples, retry duckduckgo ratelimits
ViStefan Oct 10, 2024
a4d8e8b
install external dependencies in examples, retry duckduckgo ratelimits
ViStefan Oct 10, 2024
7f99f9b
integration tests workflow concurrency
ViStefan Oct 10, 2024
716e601
remove unneded concurrency mapping
ViStefan Oct 10, 2024
aaff71a
RetryConfig import for AutoGen example
ViStefan Oct 10, 2024
6ede710
wip: skip using autogen test
ViStefan Oct 10, 2024
eb42d67
increase number of retries for duckduckgo
ViStefan Oct 14, 2024
78117f5
Support agent app & async tools (#89)
whimo Oct 14, 2024
8576f40
restore keys for test cache
ViStefan Oct 14, 2024
8a61ed1
bump duckduckgo-search version
ViStefan Oct 15, 2024
e54d301
wip: disable blog_with_images_test
ViStefan Oct 15, 2024
acb5843
Update README.md
whimo Oct 16, 2024
252f0e8
Customer support app & event driven workflows docs (#92)
whimo Oct 16, 2024
4b4f046
bump version
whimo Oct 17, 2024
0f49c45
Fixes for async tools & doc updates (#93)
whimo Oct 17, 2024
66fd10c
Update README.md
whimo Oct 17, 2024
a21ebd9
Update Multi-step research agent.ipynb with local embeddings example …
iSevenDays Oct 21, 2024
2696464
Fix research agent notebook
whimo Oct 21, 2024
080d861
Support AzureOpenAI LLMs & fix research agent defaulting to OpenAI (#94)
whimo Oct 25, 2024
1a29759
bump version
whimo Oct 25, 2024
5234da5
Mention Azure OpenAI in docs
whimo Oct 25, 2024
1942847
skip blog with images test on windows workers
ViStefan Oct 29, 2024
a9ca9b8
Merge remote-tracking branch 'origin/main' into no_strong_cache
ViStefan Oct 30, 2024
3b8c776
Merge branch 'main' into no_strong_cache
ViStefan Oct 30, 2024
dc6b5a2
disable blog with images test
ViStefan Oct 30, 2024
ccd16d5
raise duckduckgo version
ViStefan Oct 30, 2024
8657206
Remove version constraint for duckduckgo_search in examples
whimo Nov 3, 2024
3578e75
Reusable workflow for integration tests
ViStefan Nov 26, 2024
8981e98
Job dependencies in integration tests
ViStefan Nov 26, 2024
f964cec
Sane naming for test groups
ViStefan Nov 26, 2024
11fac3a
Github action booleans magic
ViStefan Nov 26, 2024
8e845d2
Naming test and boolean magic
ViStefan Nov 26, 2024
3cfef05
File renaming was a bad idea
ViStefan Nov 26, 2024
456178f
Test against upper version of python
ViStefan Nov 26, 2024
9ffc867
Pass secrets to separate workflow
ViStefan Dec 2, 2024
a781915
Enable test cache on windows runners
ViStefan Dec 2, 2024
583b675
Bump ddg-search version
ViStefan Dec 2, 2024
ff091ad
Output log in tests
ViStefan Dec 13, 2024
f05da2c
Merge remote-tracking branch 'refs/remotes/origin/main' into no_stron…
ViStefan Dec 13, 2024
Changes from 1 commit:
Fixes for asynchronous crew execution (#81)
* Create new Lunary queue if running in a separate thread

* Agent call_as_tool explicit input schema

* Proper async invocation for Langchain and LlamaIndex agents
whimo authored Sep 23, 2024
commit e51e1fc9409a9f2b3565c4446adc7bc6402a6a39
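
Together, the three changes above give Langchain- and LlamaIndex-backed agents a working async path. As a rough sketch of what this enables (illustrative only; the agent object and the {"prompt": ...} input key are assumptions based on this diff, not a documented API):

import asyncio

async def run_concurrently(agent):
    # Both invocations share the event loop; blocking prompt preparation
    # is offloaded to worker threads inside ainvoke (see langchain.py below).
    return await asyncio.gather(
        agent.ainvoke({"prompt": "Summarize the failing integration tests"}),
        agent.ainvoke({"prompt": "Propose a fix for the cache settings"}),
    )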
5 changes: 2 additions & 3 deletions motleycrew/agents/crewai/crewai.py
@@ -8,8 +8,7 @@
 
 from motleycrew.agents.crewai import CrewAIAgentWithConfig
 from motleycrew.agents.parent import MotleyAgentParent
-from motleycrew.common import MotleyAgentFactory
-from motleycrew.common import MotleySupportedTool
+from motleycrew.common import MotleyAgentFactory, MotleySupportedTool
 from motleycrew.common.utils import ensure_module_is_installed
 from motleycrew.tools import MotleyTool
 from motleycrew.tracking import add_default_callbacks_to_langchain_config
@@ -92,7 +91,7 @@ def invoke(
         config: Optional[RunnableConfig] = None,
         **kwargs: Any,
     ) -> Any:
-        prompt = self.prepare_for_invocation(input=input)
+        prompt = self._prepare_for_invocation(input=input)
 
         langchain_tools = [tool.to_langchain_tool() for tool in self.tools.values()]
         config = add_default_callbacks_to_langchain_config(config)
47 changes: 35 additions & 12 deletions motleycrew/agents/langchain/langchain.py
@@ -1,18 +1,21 @@
 from __future__ import annotations
 
+import asyncio
 from typing import Any, Optional, Sequence
 
 from langchain.agents import AgentExecutor
 from langchain_core.chat_history import InMemoryChatMessageHistory
+from langchain_core.prompts.chat import ChatPromptTemplate
 from langchain_core.runnables import RunnableConfig
 from langchain_core.runnables.config import merge_configs
-from langchain_core.runnables.history import RunnableWithMessageHistory, GetSessionHistoryCallable
-from langchain_core.prompts.chat import ChatPromptTemplate
+from langchain_core.runnables.history import (
+    GetSessionHistoryCallable,
+    RunnableWithMessageHistory,
+)
 
 from motleycrew.agents.mixins import LangchainOutputHandlingAgentMixin
 from motleycrew.agents.parent import MotleyAgentParent
-from motleycrew.common import MotleyAgentFactory
-from motleycrew.common import MotleySupportedTool, logger
+from motleycrew.common import MotleyAgentFactory, MotleySupportedTool, logger
 from motleycrew.tracking import add_default_callbacks_to_langchain_config
@@ -146,26 +149,46 @@ def materialize(self):
             history_messages_key="chat_history",
         )
 
-    def invoke(
-        self,
-        input: dict,
-        config: Optional[RunnableConfig] = None,
-        **kwargs: Any,
-    ) -> Any:
+    def _prepare_config(self, config: RunnableConfig) -> RunnableConfig:
         config = merge_configs(self.runnable_config, config)
-        prompt = self.prepare_for_invocation(input=input, prompt_as_messages=self.input_as_messages)
-
         config = add_default_callbacks_to_langchain_config(config)
         if self.get_session_history_callable:
             config["configurable"] = config.get("configurable") or {}
             config["configurable"]["session_id"] = (
                 config["configurable"].get("session_id") or "default"
             )
+        return config
+
+    def invoke(
+        self,
+        input: dict,
+        config: Optional[RunnableConfig] = None,
+        **kwargs: Any,
+    ) -> Any:
+        config = self._prepare_config(config)
+        prompt = self._prepare_for_invocation(
+            input=input, prompt_as_messages=self.input_as_messages
+        )
 
         output = self.agent.invoke({"input": prompt}, config, **kwargs)
         output = output.get("output")
         return output
 
+    async def ainvoke(
+        self,
+        input: dict,
+        config: Optional[RunnableConfig] = None,
+        **kwargs: Any,
+    ) -> Any:
+        config = self._prepare_config(config)
+        prompt = await asyncio.to_thread(
+            self._prepare_for_invocation, input=input, prompt_as_messages=self.input_as_messages
+        )
+
+        output = await self.agent.ainvoke({"input": prompt}, config, **kwargs)
+        output = output.get("output")
+        return output
+
     @staticmethod
     def from_agent(
         agent: AgentExecutor,
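
The asyncio.to_thread call in ainvoke above keeps the event loop responsive while _prepare_for_invocation runs, since that helper can materialize the agent, which is blocking, synchronous work. A minimal standalone sketch of the same pattern (the helper name here is a stand-in, not motleycrew code):

import asyncio
import time

def prepare_prompt(task: str) -> str:
    # Stand-in for a blocking helper such as _prepare_for_invocation
    time.sleep(0.5)
    return f"Prompt for: {task}"

async def main():
    # Other coroutines can keep running while the helper
    # executes in a worker thread.
    prompt = await asyncio.to_thread(prepare_prompt, "integration tests")
    print(prompt)

asyncio.run(main())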
23 changes: 19 additions & 4 deletions motleycrew/agents/llama_index/llama_index.py
@@ -1,13 +1,13 @@
 from __future__ import annotations
 
+import asyncio
 import uuid
 from typing import Any, Optional, Sequence
 
 try:
     from llama_index.core.agent import AgentRunner
-    from llama_index.core.chat_engine.types import ChatResponseMode
     from llama_index.core.agent.types import TaskStep, TaskStepOutput
-    from llama_index.core.chat_engine.types import AgentChatResponse
+    from llama_index.core.chat_engine.types import AgentChatResponse, ChatResponseMode
 except ImportError:
     AgentRunner = None
     ChatResponseMode = None
@@ -18,7 +18,7 @@
 from langchain_core.runnables import RunnableConfig
 
 from motleycrew.agents.parent import MotleyAgentParent
-from motleycrew.common import MotleySupportedTool, MotleyAgentFactory, AuxPrompts
+from motleycrew.common import AuxPrompts, MotleyAgentFactory, MotleySupportedTool
 from motleycrew.common.utils import ensure_module_is_installed
 from motleycrew.tools import DirectOutput
 
@@ -154,7 +154,7 @@ def invoke(
         config: Optional[RunnableConfig] = None,
         **kwargs: Any,
     ) -> Any:
-        prompt = self.prepare_for_invocation(input=input)
+        prompt = self._prepare_for_invocation(input=input)
 
         output = self.agent.chat(prompt)
 
@@ -163,6 +163,21 @@
 
         return output.response
 
+    async def ainvoke(
+        self,
+        input: dict,
+        config: Optional[RunnableConfig] = None,
+        **kwargs: Any,
+    ) -> Any:
+        prompt = await asyncio.to_thread(self._prepare_for_invocation, input=input)
+
+        output = await self.agent.achat(prompt)
+
+        if self.direct_output is not None:
+            return self.direct_output.output
+
+        return output.response
+
     @staticmethod
     def from_agent(
         agent: AgentRunner,
14 changes: 9 additions & 5 deletions motleycrew/agents/parent.py
@@ -5,8 +5,9 @@
 
 from langchain_core.messages import BaseMessage
 from langchain_core.prompts.chat import ChatPromptTemplate, HumanMessage, SystemMessage
+from langchain_core.pydantic_v1 import BaseModel, Field
 from langchain_core.runnables import RunnableConfig
-from langchain_core.tools import Tool
+from langchain_core.tools import StructuredTool
 
 from motleycrew.agents.abstract_parent import MotleyAgentAbstractParent
 from motleycrew.common import MotleyAgentFactory, MotleySupportedTool, logger
@@ -174,7 +175,7 @@ def materialize(self):
 
         self._agent = self.agent_factory(tools=self.tools)
 
-    def prepare_for_invocation(self, input: dict, prompt_as_messages: bool = False) -> str:
+    def _prepare_for_invocation(self, input: dict, prompt_as_messages: bool = False) -> str:
         """Prepare the agent for invocation by materializing it and composing the prompt.
 
         Should be called in the beginning of the agent's invoke method.
@@ -225,8 +226,10 @@ def as_tool(self, **kwargs) -> MotleyTool:
         if not getattr(self, "name", None) or not getattr(self, "description", None):
             raise ValueError("Agent must have a name and description to be called as a tool")
 
-        def call_as_tool(self, *args, **kwargs):
-            # TODO: this thing is hacky, we should have a better way to pass structured input
+        class CallAsToolInput(BaseModel):
+            input: str = Field(..., description="Input to the tool")
+
+        def call_as_tool(*args, **kwargs):
             if args:
                 return self.invoke({"prompt": args[0]})
             if len(kwargs) == 1:
@@ -235,12 +238,13 @@ def call_as_tool(self, *args, **kwargs):
 
         # To be specialized if we expect structured input
         return MotleyTool.from_langchain_tool(
-            Tool(
+            StructuredTool(
                 name=self.name.replace(
                     " ", "_"
                 ).lower(),  # OpenAI doesn't accept spaces in function names
                 description=self.description,
                 func=call_as_tool,
+                args_schema=CallAsToolInput,
            ),
             **kwargs,
         )
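
Replacing Tool with StructuredTool plus an explicit args_schema means tool input is validated against a Pydantic model before call_as_tool runs, instead of arriving as a bare positional string. A hedged sketch of the resulting behavior, using only the Langchain pieces visible in this diff (the agent delegation is stubbed out):

from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.tools import StructuredTool

class CallAsToolInput(BaseModel):
    input: str = Field(..., description="Input to the tool")

def call_as_tool(input: str) -> str:
    return f"agent received: {input}"  # placeholder for self.invoke(...)

tool = StructuredTool(
    name="research_agent",  # OpenAI doesn't accept spaces in function names
    description="Delegate a question to the research agent",
    func=call_as_tool,
    args_schema=CallAsToolInput,
)

print(tool.invoke({"input": "What changed in PR #81?"}))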
6 changes: 3 additions & 3 deletions motleycrew/crew/crew.py
@@ -1,10 +1,10 @@
 import asyncio
 import threading
 import time
-from typing import Collection, Generator, Optional, Any
+from typing import Any, Collection, Generator, Optional
 
 from motleycrew.agents.parent import MotleyAgentParent
-from motleycrew.common import logger, AsyncBackend, Defaults
+from motleycrew.common import AsyncBackend, Defaults, logger
 from motleycrew.crew.crew_threads import TaskUnitThreadPool
 from motleycrew.storage import MotleyGraphStore
 from motleycrew.storage.graph_store_utils import init_graph_store
@@ -112,7 +112,7 @@ def _prepare_next_unit_for_dispatch(
         Agent, task, unit to be dispatched.
         """
         available_tasks = self.get_available_tasks()
-        logger.info("Available tasks: %s", available_tasks)
+        logger.debug("Available tasks: %s", available_tasks)
 
         for task in available_tasks:
             if not task.allow_async_units and task in running_sync_tasks:
23 changes: 15 additions & 8 deletions motleycrew/tracking/callbacks.py
@@ -2,27 +2,28 @@
 The module contains callback handlers for sending data to the Lunary service
 """
 
-from typing import List, Dict, Optional, Any, Union
 import traceback
+from typing import Any, Dict, List, Optional, Union
 
 try:
+    from llama_index.core.base.llms.types import ChatMessage
     from llama_index.core.callbacks.base_handler import BaseCallbackHandler
     from llama_index.core.callbacks.schema import CBEventType, EventPayload
-    from llama_index.core.base.llms.types import ChatMessage
 except ImportError:
     BaseCallbackHandler = object
     CBEventType = None
     ChatMessage = None
 
 try:
-    from lunary import track_event, event_queue_ctx
+    from lunary import EventQueue, event_queue_ctx, track_event
 except ImportError:
     track_event = None
     event_queue_ctx = None
+    EventQueue = None
 
-from motleycrew.common.enums import LunaryRunType, LunaryEventName
-from motleycrew.common.utils import ensure_module_is_installed
 from motleycrew.common import logger
+from motleycrew.common.enums import LunaryEventName, LunaryRunType
+from motleycrew.common.utils import ensure_module_is_installed
 
 
 def event_delegate_decorator(f):
@@ -33,6 +34,7 @@ def event_delegate_decorator(f):
     Args:
         f (callable):
     """
+
     def wrapper(self, *args, **kwargs):
         ensure_module_is_installed("llama_index")
         run_type = "start" if "start" in f.__name__ else "end"
@@ -122,7 +124,12 @@ def __init__(
         self._track_event = track_event
         self._event_run_type_ids = []
 
-        self.queue = event_queue_ctx.get()
+        try:
+            self.queue = event_queue_ctx.get()
+        except (
+            LookupError
+        ):  # happens when running in a separate thread, so we can't use the global queue
+            self.queue = EventQueue()
 
     def _get_initial_track_event_params(
         self, run_type: LunaryRunType, event_name: LunaryEventName, run_id: str = None
@@ -312,7 +319,7 @@ def _on_agent_step_end(
 
         if payload:
             response = payload.get(EventPayload.RESPONSE)
-            output = response.response
+            output = response.response
         else:
             output = ""
         params["output"] = output
@@ -400,7 +407,7 @@ def on_event_end(
             payload (dict): related event data
             event_id (str): event id (uuid)
             **kwargs:
-        """
+        """
         return
 
     @staticmethod
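
The LookupError fallback in __init__ works because lunary's event_queue_ctx behaves like a contextvars.ContextVar: a value set in the main thread is not visible from a new thread, so .get() without a default raises. A standalone sketch of that mechanism using only the standard library (no Lunary required):

import contextvars
import threading

queue_ctx = contextvars.ContextVar("queue_ctx")  # no default value
queue_ctx.set("main-thread-queue")

def worker():
    try:
        queue = queue_ctx.get()
    except LookupError:
        # The main thread's value is invisible here, mirroring
        # the fallback to a fresh EventQueue above.
        queue = "new-queue-for-this-thread"
    print(queue)

threading.Thread(target=worker).start()  # prints: new-queue-for-this-thread
print(queue_ctx.get())                   # prints: main-thread-queue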