- async langgraph_agent_toolkit.service.routes.info(request)[source]
- Parameters:
request (Request)
- Return type:
- async langgraph_agent_toolkit.service.routes.invoke(user_input, agent_id=None, request=None)[source]
Invoke an agent with user input to retrieve a final response.
If agent_id is not provided, the default agent is used. Use thread_id to persist and continue a multi-turn conversation. The run_id kwarg is also attached to messages for recording feedback.
- Parameters:
user_input (UserInput)
agent_id (str | None)
request (Request)
- Return type:
- async langgraph_agent_toolkit.service.routes.stream(user_input, agent_id=None, request=None)[source]
Stream an agent’s response to a user input, including intermediate messages and tokens.
If agent_id is not provided, the default agent is used. Use thread_id to persist and continue a multi-turn conversation. The run_id kwarg is also attached to all messages for recording feedback.
Set stream_tokens=false to receive intermediate messages without token-by-token streaming.
- Parameters:
user_input (StreamInput)
agent_id (str | None)
request (Request)
- Return type:
- async langgraph_agent_toolkit.service.routes.feedback(feedback, agent_id=None, request=None)[source]
Record feedback for a run to the configured observability platform.
The feedback is routed to the appropriate platform based on the agent’s configuration.
- Parameters:
feedback (Feedback)
agent_id (str | None)
request (Request)
- Return type:
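A minimal sketch of a feedback body, tying back to the run_id attached by invoke/stream. The field names (`run_id`, `key`, `score`) are assumptions modeled on common feedback schemas, not confirmed by this reference.

```python
def build_feedback_payload(run_id: str, key: str, score: float) -> dict:
    """JSON body for the feedback route (field names are assumptions).

    run_id is the identifier that invoke/stream attach to messages.
    """
    return {"run_id": run_id, "key": key, "score": score}

fb = build_feedback_payload("run-42", "human-rating", 0.9)
```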
- async langgraph_agent_toolkit.service.routes.history(input=Depends(), agent_id=None, request=None)[source]
Get chat history.
- Parameters:
input (ChatHistoryInput)
agent_id (str | None)
request (Request)
- Return type:
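Because the ChatHistoryInput here is injected with Depends(), FastAPI reads its fields from query parameters rather than a JSON body. The sketch below builds such a URL; the `/history` path and `thread_id` field name are assumptions.

```python
from urllib.parse import urlencode

def history_url(base: str, thread_id: str) -> str:
    """Build a query-parameter URL for the history route.

    Path and field name are assumptions; Depends() on a model means
    its fields arrive as query parameters, not a JSON body.
    """
    return f"{base}/history?{urlencode({'thread_id': thread_id})}"

url = history_url("http://localhost:8080", "thread-1")
```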
- async langgraph_agent_toolkit.service.routes.clear_history(input, agent_id=None, request=None)[source]
Clear chat history.
- Parameters:
input (ClearHistoryInput)
agent_id (str | None)
request (Request)
- Return type:
- async langgraph_agent_toolkit.service.routes.add_messages(input, agent_id=None, request=None)[source]
Add messages to the end of chat history.
- Parameters:
input (AddMessagesInput)
agent_id (str | None)
request (Request)
- Return type:
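A hedged sketch of an add_messages body, appending a human/AI exchange to a thread. The field names (`thread_id`, `messages`, `type`, `content`) are assumptions modeled on common LangChain message shapes, not confirmed by this reference.

```python
def build_add_messages_payload(thread_id: str, messages: list[dict]) -> dict:
    """JSON body for the add_messages route (field names are assumptions)."""
    return {"thread_id": thread_id, "messages": messages}

payload = build_add_messages_payload(
    "thread-1",
    [
        {"type": "human", "content": "Hi"},
        {"type": "ai", "content": "Hello! How can I help?"},
    ],
)
```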
- async langgraph_agent_toolkit.service.routes.health_check()[source]
Health check endpoint.
- Return type:
- async langgraph_agent_toolkit.service.routes.liveness_probe()[source]
Liveness probe for Kubernetes.
This probe indicates whether the process is alive and should be restarted if it fails. It performs only minimal checks, simply confirming that the process can respond.
- Return type:
- async langgraph_agent_toolkit.service.routes.readiness_probe(request)[source]
Readiness probe for Kubernetes.
This probe indicates if the service is ready to accept traffic. It checks that agents have been initialized successfully. Kubernetes will not route traffic to the pod until this returns 200.
- Parameters:
request (Request)
- async langgraph_agent_toolkit.service.routes.startup_probe(request)[source]
Startup probe for Kubernetes.
This probe indicates whether the application has finished its initialization. It is designed for slow-starting containers and allows more time than the liveness probe. The startup probe is checked before the liveness and readiness probes are activated.
- Parameters:
request (Request)
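The division of labor among the three probes can be sketched as plain Python: liveness answers whenever the process runs, while readiness and startup stay at 503 until agent initialization completes. This is an illustrative model of the behavior described above, not the toolkit's actual handler code.

```python
class ProbeState:
    """Plain-Python sketch of the state behind the three probes;
    the real routes wrap similar checks in FastAPI handlers."""

    def __init__(self) -> None:
        self.agents_ready = False  # flipped once agent initialization completes

    def liveness(self) -> int:
        # Alive if the process can answer at all; minimal check.
        return 200

    def readiness(self) -> int:
        # Kubernetes withholds traffic until this returns 200.
        return 200 if self.agents_ready else 503

    def startup(self) -> int:
        # Checked first; liveness/readiness activate only after it passes.
        return 200 if self.agents_ready else 503

state = ProbeState()
before = state.readiness()  # 503 while agents are still loading
state.agents_ready = True
after = state.readiness()   # 200 once initialized
```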