async langgraph_agent_toolkit.service.routes.info(request)[source]

Return metadata about the service.

Parameters:

request (Request)

Return type:

ServiceMetadata

async langgraph_agent_toolkit.service.routes.invoke(user_input, agent_id=None, request=None)[source]

Invoke an agent with user input to retrieve a final response.

If agent_id is not provided, the default agent will be used. Use thread_id to persist and continue a multi-turn conversation. A run_id kwarg is also attached to messages for recording feedback.

Parameters:

user_input

agent_id (optional)

request (optional)

Return type:

ChatMessage

async langgraph_agent_toolkit.service.routes.stream(user_input, agent_id=None, request=None)[source]

Stream an agent’s response to a user input, including intermediate messages and tokens.

If agent_id is not provided, the default agent will be used. Use thread_id to persist and continue a multi-turn conversation. A run_id kwarg is also attached to all messages for recording feedback.

Set stream_tokens=false to receive intermediate messages without token-by-token streaming.

Parameters:

user_input

agent_id (optional)

request (optional)

Return type:

StreamingResponse
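Since the route returns a StreamingResponse, a client consumes it incrementally. The sketch below assumes a Server-Sent-Events-style wire format (`data: <json>` lines with a `[DONE]` sentinel), which is a common convention but an assumption here; verify against the actual stream.

```python
import json


def parse_sse_lines(lines):
    """Parse SSE-style ``data:`` lines into message dicts.

    Assumes the stream emits ``data: <json>`` lines terminated by a
    ``data: [DONE]`` sentinel; this framing is an illustrative assumption.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        yield json.loads(data)


# Example wire data a client might receive with stream_tokens enabled:
sample = [
    'data: {"type": "token", "content": "Hel"}',
    'data: {"type": "token", "content": "lo"}',
    'data: [DONE]',
]
events = list(parse_sse_lines(sample))
print(events)
```

With stream_tokens=false, a client using this parser would see only intermediate messages rather than individual token events.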

async langgraph_agent_toolkit.service.routes.feedback(feedback, agent_id=None, request=None)[source]

Record feedback for a run to the configured observability platform.

This routes the feedback to the appropriate platform based on the agent’s configuration.

Parameters:

feedback

agent_id (optional)

request (optional)

Return type:

FeedbackResponse

async langgraph_agent_toolkit.service.routes.history(input=Depends(), agent_id=None, request=None)[source]

Get chat history.

Parameters:

input

agent_id (optional)

request (optional)

Return type:

ChatHistory

async langgraph_agent_toolkit.service.routes.clear_history(input, agent_id=None, request=None)[source]

Clear chat history.

Parameters:

input

agent_id (optional)

request (optional)

Return type:

ClearHistoryResponse

async langgraph_agent_toolkit.service.routes.add_messages(input, agent_id=None, request=None)[source]

Add messages to the end of chat history.

Parameters:

input

agent_id (optional)

request (optional)

Return type:

AddMessagesResponse
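A sketch of the shape an add_messages body might take: a thread identifier plus an ordered list of chat messages to append. The field names (thread_id, messages, type, content) are assumptions for illustration, not the toolkit's confirmed schema.

```python
def build_add_messages(thread_id: str, messages: list[dict]) -> dict:
    """Build a hypothetical add_messages body.

    Each message is assumed to carry at least a ``type`` (e.g. "human"
    or "ai") and ``content``; these names are illustrative assumptions.
    """
    for m in messages:
        # Reject incomplete messages before sending anything to the service.
        assert {"type", "content"} <= m.keys(), "incomplete message"
    return {"thread_id": thread_id, "messages": messages}


body = build_add_messages(
    "thread-1",
    [{"type": "human", "content": "Hi"}, {"type": "ai", "content": "Hello!"}],
)
print(body)
```

Messages are appended in list order to the end of the existing history for that thread.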

async langgraph_agent_toolkit.service.routes.redirect_to_docs()[source]

Redirect to the API documentation.

Return type:

RedirectResponse

async langgraph_agent_toolkit.service.routes.health_check()[source]

Health check endpoint.

Return type:

HealthCheck

async langgraph_agent_toolkit.service.routes.liveness_probe()[source]

Liveness probe for Kubernetes.

This probe indicates whether the process is alive and should be restarted if it fails. It performs minimal checks, confirming only that the process can respond.

Return type:

LivenessResponse

async langgraph_agent_toolkit.service.routes.readiness_probe(request)[source]

Readiness probe for Kubernetes.

This probe indicates if the service is ready to accept traffic. It checks that agents have been initialized successfully. Kubernetes will not route traffic to the pod until this returns 200.

Parameters:

request (Request)

async langgraph_agent_toolkit.service.routes.startup_probe(request)[source]

Startup probe for Kubernetes.

This probe indicates if the application has finished its initialization. It’s designed for slow-starting containers and allows more time than liveness probe. The startup probe is checked before liveness/readiness probes are activated.

Parameters:

request (Request)
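The three probe routes above map directly onto Kubernetes probe configuration. The sketch below is illustrative wiring only: the HTTP paths and port are assumptions and must match how the service's routes are actually mounted.

```yaml
# Illustrative Kubernetes probe wiring for the liveness, readiness, and
# startup endpoints above. Paths and port are assumptions.
livenessProbe:
  httpGet:
    path: /health/liveness
    port: 8080
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health/readiness
    port: 8080
  periodSeconds: 10
startupProbe:
  httpGet:
    path: /health/startup
    port: 8080
  failureThreshold: 30   # allow slow-starting containers extra time
  periodSeconds: 5
```

As documented above, the startup probe gates the other two: liveness and readiness checks begin only after startup succeeds, and traffic is routed to the pod only once readiness returns 200.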

async langgraph_agent_toolkit.service.routes.db_health_check(request)[source]

Database pool health check endpoint.

Parameters:

request (Request)

Return type:

DatabaseHealthResponse