# LangGraph Agent Toolkit Documentation
A comprehensive toolkit for building, deploying, and managing AI agents using LangGraph, FastAPI, and Streamlit. It provides a production-ready framework for creating conversational AI agents with features like multi-provider LLM support, streaming responses, observability, and memory management.
## What is langgraph-agent-toolkit?
The langgraph-agent-toolkit is a full-featured framework for developing and deploying AI agent services. It is built on the foundation of:

- **LangGraph** for agent creation with advanced flows and human-in-the-loop capabilities
- **FastAPI** for robust, high-performance API services with streaming support
- **Streamlit** for intuitive user interfaces
Key components include:

- Data structures and settings built with Pydantic
- Multi-provider LLM support
- Comprehensive memory management and persistence using PostgreSQL/SQLite
- Advanced observability tooling via Langfuse and LangSmith
- A modular architecture allowing customization while maintaining a consistent application structure
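To make the persistence idea concrete, here is a minimal sketch of checkpoint-style conversation memory using the standard-library `sqlite3` module. The class name, table name, and schema are illustrative assumptions, not the toolkit's actual implementation:

```python
import sqlite3


class ConversationMemory:
    """Illustrative sketch of per-thread message persistence (not the toolkit's schema)."""

    def __init__(self, path=":memory:"):
        # ":memory:" keeps the sketch self-contained; a real deployment
        # would point at a PostgreSQL or SQLite file instead.
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS messages ("
            "thread_id TEXT, role TEXT, content TEXT)"
        )

    def append(self, thread_id, role, content):
        # Each turn is appended so the full history can be replayed later.
        with self.conn:
            self.conn.execute(
                "INSERT INTO messages (thread_id, role, content) VALUES (?, ?, ?)",
                (thread_id, role, content),
            )

    def history(self, thread_id):
        # Rows come back in insertion order via SQLite's implicit rowid.
        return self.conn.execute(
            "SELECT role, content FROM messages WHERE thread_id = ? ORDER BY rowid",
            (thread_id,),
        ).fetchall()


memory = ConversationMemory()
memory.append("thread-1", "user", "Hello")
memory.append("thread-1", "assistant", "Hi! How can I help?")
print(memory.history("thread-1"))
```

Keying history by a thread identifier is what lets a stateless HTTP service resume a conversation across requests.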
Whether you’re building a simple chatbot or a complex multi-agent system, this toolkit provides the infrastructure to develop, test, and deploy your LangGraph-based agents with confidence.
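The graph-based flow that LangGraph enables can be illustrated with a toy sketch in plain Python (this mimics the pattern only; it is not LangGraph's actual API): each node transforms a shared state dict and returns the name of the next node to run.

```python
# Toy state-graph runner illustrating the node/edge pattern (not LangGraph's API).
def call_model(state):
    # A real node would invoke an LLM here; we just echo the input.
    state["draft"] = f"Echo: {state['input']}"
    return "review"


def review(state):
    # A human-in-the-loop node could pause for approval at this point.
    state["output"] = state["draft"].upper()
    return None  # None signals the end of the graph


NODES = {"call_model": call_model, "review": review}


def run_graph(state, entry="call_model"):
    node = entry
    while node is not None:
        node = NODES[node](state)
    return state


result = run_graph({"input": "hello"})
print(result["output"])  # ECHO: HELLO
```

Because each node returns the next node's name, branching, loops, and interrupts fall out naturally from ordinary control flow; LangGraph provides this structure with typed state, checkpointing, and streaming built in.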
## Architecture

## Quickstart
Create a `.env` file based on `.env.example`.
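A minimal `.env` might look like the following; the variable names below are illustrative assumptions only, so copy the actual keys from `.env.example`:

```
# Illustrative only -- use the keys listed in .env.example
OPENAI_API_KEY=sk-...
```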
### Option 1: Run with Python from source

```sh
# Install dependencies
pip install uv
uv sync --frozen
source .venv/bin/activate

# Start the service
python langgraph_agent_toolkit/run_service.py

# In another terminal
source .venv/bin/activate
streamlit run langgraph_agent_toolkit/streamlit_app.py
```
### Option 2: Run with Python from the PyPI repository

```sh
pip install langgraph-agent-toolkit
```
### Option 3: Run with Docker

```sh
docker compose watch
```