langgraph_agent_toolkit.client package

class langgraph_agent_toolkit.client.AgentClient(base_url='http://0.0.0.0', agent=None, timeout=None, get_info=True, verify=False)[source]

Bases: object

Client for interacting with the agent service.

Initialize the client.

Parameters:
  • base_url (str) – The base URL of the agent service.

  • agent (str) – The name of the default agent to use.

  • timeout (float, optional) – The timeout for requests.

  • get_info (bool, optional) – Whether to fetch agent information on init. Default: True

  • verify (bool, optional) – Whether to verify the agent information. Default: False

__init__(base_url='http://0.0.0.0', agent=None, timeout=None, get_info=True, verify=False)[source]

Initialize the client.

Parameters:
  • base_url (str) – The base URL of the agent service.

  • agent (str) – The name of the default agent to use.

  • timeout (float, optional) – The timeout for requests.

  • get_info (bool, optional) – Whether to fetch agent information on init. Default: True

  • verify (bool, optional) – Whether to verify the agent information. Default: False

Return type:

None

async aadd_messages(messages, thread_id=None, user_id=None)[source]

Add messages to chat history asynchronously.

Parameters:
  • messages (list[dict[str, str]] | list[MessageInput]) – Messages to add

  • thread_id (str, optional) – Thread ID for identifying a conversation

  • user_id (str, optional) – User ID for identifying the user

Return type:

AddMessagesResponse

async aclear_history(thread_id=None, user_id=None)[source]

Clear chat history asynchronously.

Parameters:
  • thread_id (str, optional) – Thread ID for identifying a conversation

  • user_id (str, optional) – User ID for identifying the user

Return type:

ClearHistoryResponse

async acreate_feedback(run_id, key, score, kwargs={}, user_id=None)[source]

Create a feedback record for a run asynchronously.

Parameters:
  • run_id (str) – The ID of the run to provide feedback for

  • key (str) – The key for the feedback

  • score (float) – The score for the feedback

  • kwargs (dict[str, Any], optional) – Additional metadata for the feedback

  • user_id (str, optional) – User ID for identifying the user

Return type:

None

add_messages(messages, thread_id=None, user_id=None)[source]

Add messages to chat history.

Parameters:
  • messages (list[dict[str, str]] | list[MessageInput]) – Messages to add

  • thread_id (str, optional) – Thread ID for identifying a conversation

  • user_id (str, optional) – User ID for identifying the user

Return type:

AddMessagesResponse
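The plain-dict form of messages can be sketched as follows; the "role"/"content" key names are an assumption about the MessageInput shape (not confirmed by this reference), and `client` is a hypothetical AgentClient instance:

```python
# Sketch of the list[dict[str, str]] message form accepted by add_messages.
# The "role"/"content" keys are an assumed MessageInput shape.
messages = [
    {"role": "user", "content": "What can you do?"},
    {"role": "assistant", "content": "I can answer questions."},
]

# Every entry is a flat str -> str mapping, matching list[dict[str, str]].
assert all(
    isinstance(k, str) and isinstance(v, str)
    for m in messages
    for k, v in m.items()
)

# A real call would be:
# client.add_messages(messages, thread_id="thread-123")
```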

async aget_history(thread_id, user_id=None)[source]

Get chat history asynchronously.

Parameters:
  • thread_id (str) – Thread ID for identifying a conversation

  • user_id (str, optional) – User ID for identifying the user

Return type:

ChatHistory

async ainvoke(input, model_name=None, model_provider=None, model_config_key=None, thread_id=None, user_id=None, agent_config=None, recursion_limit=None)[source]

Invoke the agent asynchronously. Only the final message is returned.

Parameters:
  • input (Dict[str, Any]) – The input to send to the agent

  • model_name (str, optional) – LLM model to use for the agent

  • model_provider (str, optional) – LLM model provider to use for the agent

  • model_config_key (str, optional) – Key for predefined model configuration

  • thread_id (str, optional) – Thread ID for continuing a conversation

  • user_id (str, optional) – User ID for identifying the user

  • agent_config (dict[str, Any], optional) – Additional configuration to pass through to the agent

  • recursion_limit (int, optional) – Recursion limit for the agent

Returns:

The response from the agent

Return type:

ChatMessage

async astream(input, model_name=None, model_provider=None, model_config_key=None, thread_id=None, user_id=None, agent_config=None, recursion_limit=None, stream_tokens=True)[source]

Stream the agent’s response asynchronously.

Each intermediate message produced by the agent is yielded as a ChatMessage. If stream_tokens is True (the default), content tokens from streaming models are also yielded as they are generated.

Parameters:
  • input (Dict[str, Any]) – The input to send to the agent

  • model_name (str, optional) – LLM model to use for the agent

  • model_provider (str, optional) – LLM model provider to use for the agent

  • model_config_key (str, optional) – Key for predefined model configuration

  • thread_id (str, optional) – Thread ID for continuing a conversation

  • user_id (str, optional) – User ID for identifying the user

  • agent_config (dict[str, Any], optional) – Additional configuration to pass through to the agent

  • recursion_limit (int, optional) – Recursion limit for the agent

  • stream_tokens (bool, optional) – Stream tokens as they are generated. Default: True

Returns:

The response from the agent

Return type:

AsyncGenerator[ChatMessage | str, None]
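The AsyncGenerator[ChatMessage | str, None] return type implies a consumption loop that separates raw token strings from complete messages. A minimal sketch, with a stubbed stream standing in for a live service (the stub's yielded values are illustrative, and a ChatMessage is represented as a plain dict):

```python
import asyncio
from typing import AsyncGenerator, Dict, Union

# ChatMessage stands in as a plain dict for illustration only.
Chunk = Union[Dict[str, str], str]

async def astream_stub() -> AsyncGenerator[Chunk, None]:
    """Stand-in for AgentClient.astream with stream_tokens=True."""
    for token in ["Hel", "lo", "!"]:
        yield token  # raw content tokens as the model generates them
    yield {"type": "ai", "content": "Hello!"}  # the assembled final message

async def collect() -> str:
    tokens = []
    final = None
    async for chunk in astream_stub():
        if isinstance(chunk, str):
            tokens.append(chunk)  # incremental token, e.g. for live display
        else:
            final = chunk  # a complete ChatMessage
    assert final is not None and "".join(tokens) == final["content"]
    return final["content"]

print(asyncio.run(collect()))  # -> Hello!
```

The isinstance(chunk, str) check is how a caller distinguishes token chunks from ChatMessage items under this return type.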

clear_history(thread_id=None, user_id=None)[source]

Clear chat history.

Parameters:
  • thread_id (str, optional) – Thread ID for identifying a conversation

  • user_id (str, optional) – User ID for identifying the user

Return type:

ClearHistoryResponse

create_feedback(run_id, key, score, kwargs={}, user_id=None)[source]

Create a feedback record for a run.

Parameters:
  • run_id (str) – The ID of the run to provide feedback for

  • key (str) – The key for the feedback

  • score (float) – The score for the feedback

  • kwargs (dict[str, Any], optional) – Additional metadata for the feedback

  • user_id (str, optional) – User ID for identifying the user

Return type:

FeedbackResponse
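The parameters map to a small record; the sketch below only restates the documented parameters, with a placeholder run_id, and `client` is a hypothetical AgentClient instance:

```python
# Sketch of arguments for create_feedback; run_id is a placeholder value.
feedback_args = {
    "run_id": "run-abc123",           # ID of the run being scored
    "key": "human-rating",            # feedback key
    "score": 0.9,                     # float score, e.g. a 0..1 rating
    "kwargs": {"comment": "helpful"}, # additional metadata
}

assert isinstance(feedback_args["score"], float)
# A real call would be: client.create_feedback(**feedback_args)
```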

get_history(thread_id, user_id=None)[source]

Get chat history.

Parameters:
  • thread_id (str) – Thread ID for identifying a conversation

  • user_id (str, optional) – User ID for identifying the user

Return type:

ChatHistory

invoke(input, model_name=None, model_provider=None, model_config_key=None, thread_id=None, user_id=None, agent_config=None, recursion_limit=None)[source]

Invoke the agent synchronously. Only the final message is returned.

Parameters:
  • input (Dict[str, Any]) – The input to send to the agent

  • model_name (str, optional) – LLM model to use for the agent

  • model_provider (str, optional) – LLM model provider to use for the agent

  • model_config_key (str, optional) – Key for predefined model configuration

  • thread_id (str, optional) – Thread ID for continuing a conversation

  • user_id (str, optional) – User ID for identifying the user

  • agent_config (dict[str, Any], optional) – Additional configuration to pass through to the agent

  • recursion_limit (int, optional) – Recursion limit for the agent

Returns:

The response from the agent

Return type:

ChatMessage
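The call and return shape can be sketched with a stand-in that mirrors invoke's documented signature; the echo behavior below is a stub, not what the real service does, and a ChatMessage is represented as a plain dict:

```python
from typing import Any, Dict, Optional

def invoke_stub(
    input: Dict[str, Any],
    model_name: Optional[str] = None,
    thread_id: Optional[str] = None,
    user_id: Optional[str] = None,
    recursion_limit: Optional[int] = None,
) -> Dict[str, Any]:
    """Stand-in for AgentClient.invoke: returns only the final message."""
    # A real implementation would POST `input` to the agent service and
    # parse the response into a ChatMessage; here we echo the input back
    # to show the single-final-message contract.
    return {"type": "ai", "content": f"echo: {input.get('message', '')}"}

final = invoke_stub({"message": "hello"}, thread_id="thread-123")
print(final["content"])  # -> echo: hello
```

Unlike stream, a single ChatMessage comes back, so there is nothing to iterate: the caller reads the final message directly.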

retrieve_info()[source]

Retrieve agent information from the service.

Return type:

None

stream(input, model_name=None, model_provider=None, model_config_key=None, thread_id=None, user_id=None, agent_config=None, recursion_limit=None, stream_tokens=True)[source]

Stream the agent’s response synchronously.

Each intermediate message produced by the agent is yielded as a ChatMessage. If stream_tokens is True (the default), content tokens from streaming models are also yielded as they are generated.

Parameters:
  • input (Dict[str, Any]) – The input to send to the agent

  • model_name (str, optional) – LLM model to use for the agent

  • model_provider (str, optional) – LLM model provider to use for the agent

  • model_config_key (str, optional) – Key for predefined model configuration

  • thread_id (str, optional) – Thread ID for continuing a conversation

  • user_id (str, optional) – User ID for identifying the user

  • agent_config (dict[str, Any], optional) – Additional configuration to pass through to the agent

  • recursion_limit (int, optional) – Recursion limit for the agent

  • stream_tokens (bool, optional) – Stream tokens as they are generated. Default: True

Returns:

The response from the agent

Return type:

Generator[ChatMessage | str, None, None]
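The Generator[ChatMessage | str, None, None] return type suggests the following consumption pattern; the stub below replaces a live service call, its yielded values are illustrative, and a ChatMessage is represented as a plain dict:

```python
from typing import Dict, Generator, Union

# ChatMessage stands in as a plain dict for illustration only.
Chunk = Union[Dict[str, str], str]

def stream_stub(stream_tokens: bool = True) -> Generator[Chunk, None, None]:
    """Stand-in for AgentClient.stream."""
    if stream_tokens:
        for token in ["Hi", " there"]:
            yield token  # raw content tokens
    yield {"type": "ai", "content": "Hi there"}  # the final ChatMessage

tokens = []
final = None
for chunk in stream_stub():
    if isinstance(chunk, str):
        tokens.append(chunk)  # incremental token
    else:
        final = chunk  # a complete ChatMessage

print("".join(tokens))   # -> Hi there
print(final["content"])  # -> Hi there
```

With stream_tokens=False, only ChatMessage items would arrive, so the str branch of the loop simply never fires.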

update_agent(agent, verify=True)[source]

Update the default agent used by the client.

Parameters:
  • agent (str) – The name of the agent to use

  • verify (bool, optional) – Whether to verify the agent information. Default: True

Return type:

None

exception langgraph_agent_toolkit.client.AgentClientError[source]

Bases: Exception

__init__(*args, **kwargs)
add_note()

Exception.add_note(note) – add a note to the exception

args
with_traceback()

Exception.with_traceback(tb) – set self.__traceback__ to tb and return self.

Submodules