langgraph_agent_toolkit.core.models package
- class langgraph_agent_toolkit.core.models.ChatOpenAIPatched(*args, name=None, cache=None, verbose=<factory>, callbacks=None, tags=None, metadata=None, custom_get_token_ids=None, rate_limiter=None, disable_streaming=False, output_version=<factory>, profile=None, client=None, async_client=None, root_client=None, root_async_client=None, model='gpt-3.5-turbo', temperature=None, model_kwargs=<factory>, api_key=<factory>, base_url=None, organization=None, openai_proxy=<factory>, timeout=None, stream_usage=None, max_retries=None, presence_penalty=None, frequency_penalty=None, seed=None, logprobs=None, top_logprobs=None, logit_bias=None, streaming=False, n=None, top_p=None, max_completion_tokens=None, reasoning_effort=None, reasoning=None, verbosity=None, tiktoken_model_name=None, default_headers=None, default_query=None, http_client=None, http_async_client=None, stop_sequences=None, extra_body=None, include_response_headers=False, disabled_params=None, include=None, service_tier=None, store=None, truncation=None, use_previous_response_id=False, use_responses_api=None)[source]
Bases: ChatOpenAI
- Parameters:
args (Any)
name (str | None)
cache (BaseCache | bool | None)
verbose (bool)
callbacks (list[BaseCallbackHandler] | BaseCallbackManager | None)
rate_limiter (BaseRateLimiter | None)
output_version (str | None)
profile (ModelProfile | None)
client (Any)
async_client (Any)
root_client (Any)
root_async_client (Any)
model (str)
temperature (float | None)
api_key (SecretStr | None | Callable[[], str] | Callable[[], Awaitable[str]])
base_url (str | None)
organization (str | None)
openai_proxy (str | None)
stream_usage (bool | None)
max_retries (int | None)
presence_penalty (float | None)
frequency_penalty (float | None)
seed (int | None)
logprobs (bool | None)
top_logprobs (int | None)
streaming (bool)
n (int | None)
top_p (float | None)
max_completion_tokens (int | None)
reasoning_effort (str | None)
verbosity (str | None)
tiktoken_model_name (str | None)
http_client (Any | None)
http_async_client (Any | None)
include_response_headers (bool)
service_tier (str | None)
store (bool | None)
truncation (str | None)
use_previous_response_id (bool)
use_responses_api (bool | None)
- async abatch(inputs, config=None, *, return_exceptions=False, **kwargs)[source]
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses must override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.
- Parameters:
inputs (list[Input]) – A list of inputs to the Runnable.
config (RunnableConfig | list[RunnableConfig] | None) –
A config to use when invoking the Runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys.
Please refer to RunnableConfig for more details.
return_exceptions (bool) – Whether to return exceptions instead of raising them.
**kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.
- Returns:
A list of outputs from the Runnable.
- Return type:
list[Output]
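For illustration, a minimal sketch of batching several prompts concurrently (the model name is an assumption, and an OPENAI_API_KEY environment variable is assumed to be set):
```python
import asyncio

from langgraph_agent_toolkit.core.models import ChatOpenAIPatched

async def main() -> None:
    model = ChatOpenAIPatched(model="gpt-4o-mini")  # hypothetical model name
    # abatch fans the prompts out with asyncio.gather under the hood
    results = await model.abatch(
        ["What is 2 + 2?", "Name a prime number.", "Spell 'cat'."],
        config={"max_concurrency": 2},  # cap the number of parallel requests
    )
    for message in results:
        print(message.content)

asyncio.run(main())
```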
- async abatch_as_completed(inputs, config=None, *, return_exceptions=False, **kwargs)[source]
Run ainvoke in parallel on a list of inputs.
Yields results as they complete.
- Parameters:
inputs (Sequence[Input]) – A list of inputs to the Runnable.
config (RunnableConfig | Sequence[RunnableConfig] | None) –
A config to use when invoking the Runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys.
Please refer to RunnableConfig for more details.
return_exceptions (bool) – Whether to return exceptions instead of raising them.
**kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.
- Yields:
A tuple of the index of the input and the output from the Runnable.
- Return type:
AsyncIterator[tuple[int, Output | Exception]]
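A sketch of consuming results in completion order rather than input order (model name assumed, OPENAI_API_KEY assumed set):
```python
import asyncio

from langgraph_agent_toolkit.core.models import ChatOpenAIPatched

async def main() -> None:
    model = ChatOpenAIPatched(model="gpt-4o-mini")  # hypothetical model name
    prompts = ["Define recursion.", "Define iteration."]
    # Results arrive as they finish; the yielded index maps back to the input
    async for index, output in model.abatch_as_completed(
        prompts, return_exceptions=True
    ):
        if isinstance(output, Exception):
            print(f"prompt {index} failed: {output}")
        else:
            print(f"prompt {index}: {output.content}")

asyncio.run(main())
```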
- async agenerate(messages, stop=None, callbacks=None, *, tags=None, metadata=None, run_name=None, run_id=None, **kwargs)[source]
Asynchronously pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
- take advantage of batched calls,
- get more output from the model than just the top generated value,
- build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
- Parameters:
messages (list[list[BaseMessage]]) – List of list of messages.
stop (list[str] | None) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks (Callbacks) –
Callbacks to pass through.
Used for executing additional functionality, such as logging or streaming, throughout generation.
run_name (str | None) – The name of the run.
run_id (uuid.UUID | None) – The ID of the run.
**kwargs (Any) –
Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
- Returns:
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
- Return type:
LLMResult
- async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)[source]
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
- take advantage of batched calls,
- get more output from the model than just the top generated value,
- build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
- Parameters:
prompts (list[PromptValue]) –
List of PromptValue objects.
A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessage objects for chat models).
stop (list[str] | None) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks (list[BaseCallbackHandler] | BaseCallbackManager | None) –
Callbacks to pass through.
Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs (Any) –
Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
- Returns:
An LLMResult, which contains a list of candidate Generation objects for each input prompt and additional model provider-specific output.
- Return type:
LLMResult
- async ainvoke(input, config=None, *, stop=None, **kwargs)[source]
Transform a single input into an output.
- Parameters:
input (LanguageModelInput) – The input to the Runnable.
config (RunnableConfig | None) –
A config to use when invoking the Runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys.
Please refer to RunnableConfig for more details.
stop (list[str] | None) – Stop words to use when generating.
kwargs (Any) – Additional keyword arguments to pass to the Runnable.
- Returns:
The output of the Runnable.
- Return type:
AIMessage
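A minimal async invocation sketch, including the stop parameter (model name assumed, OPENAI_API_KEY assumed set):
```python
import asyncio

from langgraph_agent_toolkit.core.models import ChatOpenAIPatched

async def main() -> None:
    model = ChatOpenAIPatched(model="gpt-4o-mini")  # hypothetical model name
    # stop cuts generation at the first occurrence of any listed substring
    reply = await model.ainvoke("Count from one to five.", stop=["four"])
    print(reply.content)  # an AIMessage whose text ends before "four"

asyncio.run(main())
```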
- as_tool(args_schema=None, *, name=None, description=None, arg_types=None)[source]
Create a BaseTool from a Runnable.
as_tool will instantiate a BaseTool with a name, description, and args_schema from a Runnable. Where possible, schemas are inferred from runnable.get_input_schema.
Alternatively (e.g., if the Runnable takes a dict as input and the specific dict keys are not typed), the schema can be specified directly with args_schema.
You can also pass arg_types to just specify the required arguments and their types.
- Parameters:
args_schema (type[BaseModel] | None) – The schema for the tool.
name (str | None) – The name of the tool.
description (str | None) – The description of the tool.
arg_types (dict[str, type] | None) – A dictionary of argument names to types.
- Returns:
A BaseTool instance.
- Return type:
BaseTool
!!! example “TypedDict input”
!!! example “dict input, specifying schema via args_schema”
```python
from typing import Any

from pydantic import BaseModel, Field
from langchain_core.runnables import RunnableLambda

def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))

class FSchema(BaseModel):
    """Apply a function to an integer and list of integers."""

    a: int = Field(..., description="Integer")
    b: list[int] = Field(..., description="List of ints")

runnable = RunnableLambda(f)
as_tool = runnable.as_tool(FSchema)
as_tool.invoke({"a": 3, "b": [1, 2]})
```
!!! example “dict input, specifying schema via arg_types”
!!! example “str input”
- assign(**kwargs)[source]
Assigns new fields to the dict output of this Runnable.
```python
from operator import itemgetter

from langchain_core.language_models.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable

prompt = (
    SystemMessagePromptTemplate.from_template("You are a nice assistant.")
    + "{question}"
)
model = FakeStreamingListLLM(responses=["foo-lish"])

chain: Runnable = prompt | model | {"str": StrOutputParser()}

chain_with_assign = chain.assign(hello=itemgetter("str") | model)

print(chain_with_assign.input_schema.model_json_schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
#  {'question': {'title': 'Question', 'type': 'string'}}}
print(chain_with_assign.output_schema.model_json_schema())
# {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties':
#  {'str': {'title': 'Str', 'type': 'string'},
#   'hello': {'title': 'Hello', 'type': 'string'}}}
```
- Parameters:
**kwargs (Runnable[dict[str, Any], Any] | Callable[[dict[str, Any]], Any] | Mapping[str, Runnable[dict[str, Any], Any] | Callable[[dict[str, Any]], Any]]) – A mapping of keys to Runnable or Runnable-like objects that will be invoked with the entire output dict of this Runnable.
- Returns:
A new Runnable.
- Return type:
RunnableSerializable[Any, Any]
- async astream(input, config=None, *, stop=None, **kwargs)[source]
Default implementation of astream, which calls ainvoke.
Subclasses must override this method if they support streaming output.
- Parameters:
input (LanguageModelInput) – The input to the Runnable.
config (RunnableConfig | None) – The config to use for the Runnable.
stop (list[str] | None) – Stop words to use when generating.
kwargs (Any) – Additional keyword arguments to pass to the Runnable.
- Yields:
The output of the Runnable.
- Return type:
AsyncIterator[AIMessageChunk]
- async astream_events(input, config=None, *, version='v2', include_names=None, include_types=None, include_tags=None, exclude_names=None, exclude_types=None, exclude_tags=None, **kwargs)[source]
Generate a stream of events.
Use this to create an iterator over StreamEvent objects that provide real-time information about the progress of the Runnable, including StreamEvent data from intermediate results.
A StreamEvent is a dictionary with the following schema:
- event: Event names are of the format: on_[runnable_type]_(start|stream|end).
- name: The name of the Runnable that generated the event.
- run_id: Randomly generated ID associated with the given execution of the Runnable that emitted the event. A child Runnable that gets invoked as part of the execution of a parent Runnable is assigned its own unique ID.
- parent_ids: The IDs of the parent runnables that generated the event. The root Runnable will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for the v2 version of the API. The v1 version of the API will return an empty list.
- tags: The tags of the Runnable that generated the event.
- metadata: The metadata of the Runnable that generated the event.
- data: The data associated with the event. The contents of this field depend on the type of event. See the table below for more details.
Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.
- !!! note
This reference table is for the v2 version of the schema.
| event | name | chunk | input | output |
| --- | --- | --- | --- | --- |
| on_chat_model_start | '[model name]' | | {"messages": [[SystemMessage, HumanMessage]]} | |
| on_chat_model_stream | '[model name]' | AIMessageChunk(content="hello") | | |
| on_chat_model_end | '[model name]' | | {"messages": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content="hello world") |
| on_llm_start | '[model name]' | | {'input': 'hello'} | |
| on_llm_stream | '[model name]' | 'Hello' | | |
| on_llm_end | '[model name]' | | 'Hello human!' | |
| on_chain_start | 'format_docs' | | | |
| on_chain_stream | 'format_docs' | 'hello world!, goodbye world!' | | |
| on_chain_end | 'format_docs' | | [Document(...)] | 'hello world!, goodbye world!' |
| on_tool_start | 'some_tool' | | {"x": 1, "y": "2"} | |
| on_tool_end | 'some_tool' | | | {"x": 1, "y": "2"} |
| on_retriever_start | '[retriever name]' | | {"query": "hello"} | |
| on_retriever_end | '[retriever name]' | | {"query": "hello"} | [Document(...), ..] |
| on_prompt_start | '[template_name]' | | {"question": "hello"} | |
| on_prompt_end | '[template_name]' | | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, ...]) |

In addition to the standard events, users can also dispatch custom events (see example below).
Custom events will only be surfaced in the v2 version of the API!
A custom event has the following format:
| Attribute | Type | Description |
| --- | --- | --- |
| name | str | A user defined name for the event. |
| data | Any | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |

Here are declarations associated with the standard events shown above:
format_docs:
```python
def format_docs(docs: list[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])

format_docs = RunnableLambda(format_docs)
```
some_tool:
```python
@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}
```
prompt:
```python
template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are Cat Agent 007"),
        ("human", "{question}"),
    ]
).with_config({"run_name": "my_template", "tags": ["my_template"]})
```
!!! example
```python
from langchain_core.runnables import RunnableLambda

async def reverse(s: str) -> str:
    return s[::-1]

chain = RunnableLambda(func=reverse)

events = [event async for event in chain.astream_events("hello", version="v2")]

# Will produce the following events
# (run_id, and parent_ids has been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]
```
```python title="Dispatch custom event"
import asyncio

from langchain_core.callbacks.manager import adispatch_custom_event
from langchain_core.runnables import RunnableLambda, RunnableConfig

async def slow_thing(some_input: str, config: RunnableConfig) -> str:
    """Do something that takes a long time."""
    await asyncio.sleep(1)  # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 1 of 3"},
        config=config,  # Must be included for python < 3.10
    )
    await asyncio.sleep(1)  # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 2 of 3"},
        config=config,  # Must be included for python < 3.10
    )
    await asyncio.sleep(1)  # Placeholder for some slow operation
    return "Done"

slow_thing = RunnableLambda(slow_thing)

async for event in slow_thing.astream_events("some_input", version="v2"):
    print(event)
```
- Parameters:
input (Any) – The input to the Runnable.
config (RunnableConfig | None) – The config to use for the Runnable.
version (Literal['v1', 'v2']) –
The version of the schema to use, either 'v2' or 'v1'.
Users should use 'v2'. 'v1' is for backwards compatibility and will be deprecated in 0.4.0.
No default will be assigned until the API is stabilized. Custom events will only be surfaced in 'v2'.
include_names (Sequence[str] | None) – Only include events from Runnable objects with matching names.
include_types (Sequence[str] | None) – Only include events from Runnable objects with matching types.
include_tags (Sequence[str] | None) – Only include events from Runnable objects with matching tags.
exclude_names (Sequence[str] | None) – Exclude events from Runnable objects with matching names.
exclude_types (Sequence[str] | None) – Exclude events from Runnable objects with matching types.
exclude_tags (Sequence[str] | None) – Exclude events from Runnable objects with matching tags.
**kwargs (Any) –
Additional keyword arguments to pass to the Runnable.
These will be passed to astream_log as this implementation of astream_events is built on top of astream_log.
- Yields:
An async stream of StreamEvent.
- Raises:
NotImplementedError – If the version is not ‘v1’ or ‘v2’.
- Return type:
AsyncIterator[StreamEvent]
- async astream_log(input, config=None, *, diff=True, with_streamed_output_list=True, include_names=None, include_types=None, include_tags=None, exclude_names=None, exclude_types=None, exclude_tags=None, **kwargs)[source]
Stream all output from a Runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of Jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run.
The Jsonpatch ops can be applied in order to construct state.
- Parameters:
input (Any) – The input to the Runnable.
config (RunnableConfig | None) – The config to use for the Runnable.
diff (bool) – Whether to yield diffs between each step or the current state.
with_streamed_output_list (bool) – Whether to yield the streamed_output list.
include_names (Sequence[str] | None) – Only include logs with these names.
include_types (Sequence[str] | None) – Only include logs with these types.
include_tags (Sequence[str] | None) – Only include logs with these tags.
exclude_names (Sequence[str] | None) – Exclude logs with these names.
exclude_types (Sequence[str] | None) – Exclude logs with these types.
exclude_tags (Sequence[str] | None) – Exclude logs with these tags.
**kwargs (Any) – Additional keyword arguments to pass to the Runnable.
- Yields:
A RunLogPatch or RunLog object.
- Return type:
AsyncIterator[RunLogPatch] | AsyncIterator[RunLog]
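A sketch of streaming the run log as Jsonpatch diffs (model name assumed, OPENAI_API_KEY assumed set):
```python
import asyncio

from langgraph_agent_toolkit.core.models import ChatOpenAIPatched

async def main() -> None:
    model = ChatOpenAIPatched(model="gpt-4o-mini")  # hypothetical model name
    # With the default diff=True, each item is a RunLogPatch of Jsonpatch ops
    async for patch in model.astream_log("Tell me a one-line joke."):
        print(patch.ops)

asyncio.run(main())
```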
- async atransform(input, config=None, **kwargs)[source]
Transform inputs to outputs.
Default implementation of atransform, which buffers input and calls astream.
Subclasses must override this method if they can start producing output while input is still being generated.
- Parameters:
input (AsyncIterator[Input]) – An async iterator of inputs to the Runnable.
config (RunnableConfig | None) – The config to use for the Runnable.
**kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.
- Yields:
The output of the Runnable.
- Return type:
AsyncIterator[Output]
- batch(inputs, config=None, *, return_exceptions=False, **kwargs)[source]
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses must override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.
- Parameters:
inputs (list[Input]) – A list of inputs to the Runnable.
config (RunnableConfig | list[RunnableConfig] | None) –
A config to use when invoking the Runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys.
Please refer to RunnableConfig for more details.
return_exceptions (bool) – Whether to return exceptions instead of raising them.
**kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.
- Returns:
A list of outputs from the Runnable.
- Return type:
list[Output]
- batch_as_completed(inputs, config=None, *, return_exceptions=False, **kwargs)[source]
Run invoke in parallel on a list of inputs.
Yields results as they complete.
- Parameters:
inputs (Sequence[Input]) – A list of inputs to the Runnable.
config (RunnableConfig | Sequence[RunnableConfig] | None) –
A config to use when invoking the Runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys.
Please refer to RunnableConfig for more details.
return_exceptions (bool) – Whether to return exceptions instead of raising them.
**kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.
- Yields:
Tuples of the index of the input and the output from the Runnable.
- Return type:
Iterator[tuple[int, Output | Exception]]
- bind(**kwargs)[source]
Bind arguments to a Runnable, returning a new Runnable.
Useful when a Runnable in a chain requires an argument that is not in the output of the previous Runnable or included in the user input.
- Parameters:
**kwargs (Any) – The arguments to bind to the Runnable.
- Returns:
A new Runnable with the arguments bound.
- Return type:
Runnable[Input, Output]
Example
```python
from langchain_ollama import ChatOllama
from langchain_core.output_parsers import StrOutputParser

model = ChatOllama(model="llama3.1")

# Without bind
chain = model | StrOutputParser()
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'

# With bind
chain = model.bind(stop=["three"]) | StrOutputParser()
chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'
```
- bind_tools(tools, *, tool_choice=None, strict=None, parallel_tool_calls=None, response_format=None, **kwargs)[source]
Bind tool-like objects to this chat model.
Assumes model is compatible with OpenAI tool-calling API.
- Parameters:
tools (Sequence[dict[str, Any] | type | Callable | BaseTool]) –
A list of tool definitions to bind to this chat model.
Supports any tool definition handled by [convert_to_openai_tool][langchain_core.utils.function_calling.convert_to_openai_tool].
tool_choice (dict | str | bool | None) –
Which tool to require the model to call. Options are:
- a str of the form '<<tool_name>>': calls the <<tool_name>> tool.
- 'auto': automatically selects a tool (including no tool).
- 'none': does not call a tool.
- 'any' or 'required' or True: forces at least one tool to be called.
- a dict of the form {"type": "function", "function": {"name": <<tool_name>>}}: calls the <<tool_name>> tool.
- False or None: no effect; default OpenAI behavior.
strict (bool | None) – If True, model output is guaranteed to exactly match the JSON Schema provided in the tool definition. The input schema will also be validated according to the [supported schemas](https://platform.openai.com/docs/guides/structured-outputs/supported-schemas?api-mode=responses#supported-schemas). If False, input schema will not be validated and model output will not be validated. If None, strict argument will not be passed to the model.
parallel_tool_calls (bool | None) – Set to False to disable parallel tool use. Defaults to None (no specification, which allows parallel tool use).
response_format (dict[str, Any] | type[_BM] | type | None) – Optional schema to format model response. If provided and the model does not call a tool, the model will generate a [structured response](https://platform.openai.com/docs/guides/structured-outputs).
kwargs (Any) – Any additional parameters are passed directly to bind.
- Return type:
Runnable[PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]], AIMessage]
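A sketch of binding a single tool and forcing a tool call; the tool body and model name are illustrative assumptions (OPENAI_API_KEY assumed set):
```python
from langchain_core.tools import tool

from langgraph_agent_toolkit.core.models import ChatOpenAIPatched

@tool
def get_weather(city: str) -> str:
    """Look up the current weather for a city."""
    return f"Sunny in {city}"  # stub implementation for illustration

model = ChatOpenAIPatched(model="gpt-4o-mini")  # hypothetical model name
# tool_choice="required" forces at least one tool call; strict=True makes the
# arguments conform exactly to the tool's JSON Schema
model_with_tools = model.bind_tools(
    [get_weather], tool_choice="required", strict=True
)

message = model_with_tools.invoke("What's the weather in Paris?")
print(message.tool_calls)
# e.g. [{'name': 'get_weather', 'args': {'city': 'Paris'}, ...}]
```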
- classmethod build_extra(values)[source]
Build extra kwargs from additional params that were passed in.
- config_schema(*, include=None)[source]
The type of config this Runnable accepts specified as a Pydantic model.
To mark a field as configurable, see the configurable_fields and configurable_alternatives methods.
- configurable_alternatives(which, *, default_key='default', prefix_keys=False, **kwargs)[source]
Configure alternatives for Runnable objects that can be set at runtime.
- Parameters:
which (ConfigurableField) – The ConfigurableField instance that will be used to select the alternative.
default_key (str) – The default key to use if no alternative is selected.
prefix_keys (bool) – Whether to prefix the keys with the ConfigurableField id.
**kwargs (Runnable[Input, Output] | Callable[[], Runnable[Input, Output]]) – A dictionary of keys to Runnable instances or callables that return Runnable instances.
- Returns:
A new Runnable with the alternatives configured.
- Return type:
RunnableSerializable
!!! example
```python
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-sonnet-4-5-20250929"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI(),
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(configurable={"llm": "openai"})
    .invoke("which organization created you?")
    .content
)
```
- configurable_fields(**kwargs)[source]
Configure particular Runnable fields at runtime.
- Parameters:
**kwargs (ConfigurableField | ConfigurableFieldSingleOption | ConfigurableFieldMultiOption) – A dictionary of ConfigurableField instances to configure.
- Raises:
ValueError – If a configuration key is not found in the Runnable.
- Returns:
A new Runnable with the fields configured.
- Return type:
RunnableSerializable
!!! example
```python
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print("max_tokens_20: ", model.invoke("tell me something about chess").content)

# max_tokens = 200
print(
    "max_tokens_200: ",
    model.with_config(configurable={"output_token_number": 200})
    .invoke("tell me something about chess")
    .content,
)
```
- copy(*, include=None, exclude=None, update=None, deep=False)[source]
Returns a copy of the model.
- !!! warning "Deprecated"
This method is now deprecated; use model_copy instead.
If you need include or exclude, use:
```python
data = self.model_dump(include=include, exclude=exclude, round_trip=True)
data = {**data, **(update or {})}
copied = self.model_validate(data)
```
- Parameters:
include (AbstractSetIntStr | MappingIntStrAny | None) – Optional set or mapping specifying which fields to include in the copied model.
exclude (AbstractSetIntStr | MappingIntStrAny | None) – Optional set or mapping specifying which fields to exclude in the copied model.
update (Dict[str, Any] | None) – Optional dictionary of field-value pairs to override field values in the copied model.
deep (bool) – If True, the values of fields that are Pydantic models will be deep-copied.
- Returns:
A copy of the model with included, excluded and updated fields as specified.
- Return type:
Self
- generate(messages, stop=None, callbacks=None, *, tags=None, metadata=None, run_name=None, run_id=None, **kwargs)[source]
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
- take advantage of batched calls,
- get more output from the model than just the top generated value,
- build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
- Parameters:
messages (list[list[BaseMessage]]) – List of list of messages.
stop (list[str] | None) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks (Callbacks) –
Callbacks to pass through.
Used for executing additional functionality, such as logging or streaming, throughout generation.
run_name (str | None) – The name of the run.
run_id (uuid.UUID | None) – The ID of the run.
**kwargs (Any) –
Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
- Returns:
An LLMResult, which contains a list of candidate Generations for each input prompt and additional model provider-specific output.
- Return type:
LLMResult
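A sketch of requesting multiple candidate generations for one prompt (model name assumed, OPENAI_API_KEY assumed set):
```python
from langchain_core.messages import HumanMessage, SystemMessage

from langgraph_agent_toolkit.core.models import ChatOpenAIPatched

model = ChatOpenAIPatched(model="gpt-4o-mini", n=2)  # hypothetical model name
result = model.generate(
    [[SystemMessage(content="You are terse."), HumanMessage(content="Define entropy.")]]
)
# One list of candidate generations per input prompt (n=2 candidates here)
for generation in result.generations[0]:
    print(generation.text)
print(result.llm_output)  # provider-specific output, e.g. token usage
```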
- generate_prompt(prompts, stop=None, callbacks=None, **kwargs)[source]
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
- take advantage of batched calls,
- get more output from the model than just the top generated value,
- build chains that are agnostic to the underlying language model type (e.g., pure text completion models vs chat models).
- Parameters:
prompts (list[PromptValue]) –
List of PromptValue objects.
A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessage objects for chat models).
stop (list[str] | None) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks (list[BaseCallbackHandler] | BaseCallbackManager | None) –
Callbacks to pass through.
Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs (Any) –
Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
- Returns:
An LLMResult, which contains a list of candidate Generation objects for each input prompt and additional model provider-specific output.
- Return type:
LLMResult
- get_config_jsonschema(*, include=None)[source]
Get a JSON schema that represents the config of the Runnable.
- Parameters:
include (Sequence[str] | None) – A list of fields to include in the config schema.
- Returns:
A JSON schema that represents the config of the Runnable.
- Return type:
dict[str, Any]
!!! version-added "Added in langchain-core 0.3.0"
- get_graph(config=None)[source]
Return a graph representation of this Runnable.
- Parameters:
config (RunnableConfig | None)
- Return type:
Graph
- get_input_jsonschema(config=None)[source]
Get a JSON schema that represents the input to the Runnable.
- Parameters:
config (RunnableConfig | None) – A config to use when generating the schema.
- Returns:
A JSON schema that represents the input to the Runnable.
- Return type:
dict[str, Any]
Example
```python
from langchain_core.runnables import RunnableLambda

def add_one(x: int) -> int:
    return x + 1

runnable = RunnableLambda(add_one)

print(runnable.get_input_jsonschema())
```
!!! version-added “Added in langchain-core 0.3.0”
- get_input_schema(config=None)[source]
Get a Pydantic model that can be used to validate input to the Runnable.
Runnable objects that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the Runnable is invoked with.
This method allows you to get an input schema for a specific configuration.
- get_num_tokens(text)[source]
Get the number of tokens present in the text.
Useful for checking if an input fits in a model’s context window.
This should be overridden by model-specific implementations to provide accurate token counts via model-specific tokenizers.
- get_num_tokens_from_messages(messages, tools=None)[source]
Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.
- !!! warning
You must have the pillow package installed to count image tokens when the image is specified as a base64 string, and you must have both pillow and httpx installed when the image is specified as a URL. If these aren't installed, image inputs will be ignored in token counting.
[OpenAI reference](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb).
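A sketch of counting tokens locally before sending a request; no API call is made, but the tiktoken package must be installed, and the model name is an assumption:
```python
from langchain_core.messages import HumanMessage, SystemMessage

from langgraph_agent_toolkit.core.models import ChatOpenAIPatched

model = ChatOpenAIPatched(model="gpt-4o-mini")  # hypothetical model name
# Plain-text count via the model's tiktoken encoding
print(model.get_num_tokens("How many tokens is this sentence?"))
# Message-aware count that includes per-message formatting overhead
print(
    model.get_num_tokens_from_messages(
        [SystemMessage(content="You are terse."), HumanMessage(content="Hi!")]
    )
)
```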
- get_output_jsonschema(config=None)[source]
Get a JSON schema that represents the output of the Runnable.
- Parameters:
config (RunnableConfig | None) – A config to use when generating the schema.
- Returns:
A JSON schema that represents the output of the Runnable.
- Return type:
dict[str, Any]
Example
```python
from langchain_core.runnables import RunnableLambda

def add_one(x: int) -> int:
    return x + 1

runnable = RunnableLambda(add_one)

print(runnable.get_output_jsonschema())
```
!!! version-added “Added in langchain-core 0.3.0”
- get_output_schema(config=None)[source]
Get a Pydantic model that can be used to validate output of the Runnable.
Runnable objects that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the Runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
- get_prompts(config=None)[source]
Return a list of prompts used by this Runnable.
- Parameters:
config (RunnableConfig | None)
- Return type:
list[BasePromptTemplate]
- property input_schema: type[BaseModel]
The type of input this Runnable accepts specified as a Pydantic model.
- invoke(input, config=None, *, stop=None, **kwargs)[source]
Transform a single input into an output.
- Parameters:
input (LanguageModelInput) – The input to the Runnable.
config (RunnableConfig | None) –
A config to use when invoking the Runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys.
Please refer to RunnableConfig for more details.
stop (list[str] | None) – Stop words to use when generating.
kwargs (Any) – Additional keyword arguments to pass to the Runnable.
- Returns:
The output of the Runnable.
- Return type:
AIMessage
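A minimal synchronous invocation sketch (model name assumed, OPENAI_API_KEY assumed set):
```python
from langgraph_agent_toolkit.core.models import ChatOpenAIPatched

model = ChatOpenAIPatched(model="gpt-4o-mini", temperature=0)  # hypothetical model name
reply = model.invoke("Reply with exactly one word: hello")
print(reply.content)            # the generated text
print(reply.response_metadata)  # provider metadata, e.g. finish_reason
```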
- classmethod is_lc_serializable()[source]
Return whether this model can be serialized by LangChain.
- Return type:
bool
- json(*, include=None, exclude=None, by_alias=False, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=PydanticUndefined, models_as_dict=PydanticUndefined, **dumps_kwargs)[source]
- Parameters:
include (set[int] | set[str] | Mapping[int, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | Mapping[str, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | None)
exclude (set[int] | set[str] | Mapping[int, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | Mapping[str, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | None)
by_alias (bool)
exclude_unset (bool)
exclude_defaults (bool)
exclude_none (bool)
models_as_dict (bool)
dumps_kwargs (Any)
- Return type:
str
- classmethod lc_id()[source]
Return a unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path to the object.
For example, for the class langchain.llms.openai.OpenAI, the id is ["langchain", "llms", "openai", "OpenAI"].
- map()[source]
Return a new Runnable that maps a list of inputs to a list of outputs.
Calls invoke with each input.
- Returns:
A new Runnable that maps a list of inputs to a list of outputs.
- Return type:
Runnable[list[Input], list[Output]]
Example
```python
from langchain_core.runnables import RunnableLambda

def _lambda(x: int) -> int:
    return x + 1

runnable = RunnableLambda(_lambda)
print(runnable.map().invoke([1, 2, 3]))  # [2, 3, 4]
```
- model_computed_fields = {}
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'ignore', 'populate_by_name': True, 'protected_namespaces': (), 'validate_by_alias': True, 'validate_by_name': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- classmethod model_construct(_fields_set=None, **values)[source]
Creates a new instance of the Model class with validated data.
Creates a new model setting __dict__ and __pydantic_fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed.
- !!! note
model_construct() generally respects the model_config.extra setting on the provided model. That is, if model_config.extra == 'allow', then all extra passed values are added to the model instance's __dict__ and __pydantic_extra__ fields. If model_config.extra == 'ignore' (the default), then all extra passed values are ignored. Because no validation is performed with a call to model_construct(), having model_config.extra == 'forbid' does not result in an error if extra values are passed, but they will be ignored.
- Parameters:
_fields_set (set[str] | None) – A set of field names that were originally explicitly set during instantiation. If provided, this is directly used for the [model_fields_set][pydantic.BaseModel.model_fields_set] attribute. Otherwise, the field names from the values argument will be used.
values (Any) – Trusted or pre-validated data dictionary.
- Returns:
A new instance of the Model class with validated data.
- Return type:
Self
- model_copy(*, update=None, deep=False)[source]
- !!! abstract “Usage Documentation”
[model_copy](../concepts/models.md#model-copy)
Returns a copy of the model.
- !!! note
The underlying instance’s [__dict__][object.__dict__] attribute is copied. This might have unexpected side effects if you store anything in it, on top of the model fields (e.g. the value of [cached properties][functools.cached_property]).
- model_dump(*, mode='python', include=None, exclude=None, context=None, by_alias=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, exclude_computed_fields=False, round_trip=False, warnings=True, fallback=None, serialize_as_any=False)[source]
- !!! abstract “Usage Documentation”
[model_dump](../concepts/serialization.md#python-mode)
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
- Parameters:
mode (Literal['json', 'python'] | str) – The mode in which to_python should run. If mode is ‘json’, the output will only contain JSON serializable types. If mode is ‘python’, the output may contain non-JSON-serializable Python objects.
include (set[int] | set[str] | Mapping[int, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | Mapping[str, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | None) – A set of fields to include in the output.
exclude (set[int] | set[str] | Mapping[int, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | Mapping[str, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | None) – A set of fields to exclude from the output.
context (Any | None) – Additional context to pass to the serializer.
by_alias (bool | None) – Whether to use the field’s alias in the dictionary key if defined.
exclude_unset (bool) – Whether to exclude fields that have not been explicitly set.
exclude_defaults (bool) – Whether to exclude fields that are set to their default value.
exclude_none (bool) – Whether to exclude fields that have a value of None.
exclude_computed_fields (bool) – Whether to exclude computed fields. While this can be useful for round-tripping, it is usually recommended to use the dedicated round_trip parameter instead.
round_trip (bool) – If True, dumped values should be valid as input for non-idempotent types such as Json[T].
warnings (bool | Literal['none', 'warn', 'error']) – How to handle serialization errors. False/”none” ignores them, True/”warn” logs errors, “error” raises a [PydanticSerializationError][pydantic_core.PydanticSerializationError].
fallback (Callable[[Any], Any] | None) – A function to call when an unknown value is encountered. If not provided, a [PydanticSerializationError][pydantic_core.PydanticSerializationError] error is raised.
serialize_as_any (bool) – Whether to serialize fields with duck-typing serialization behavior.
- Returns:
A dictionary representation of the model.
- Return type:
dict[str, Any]
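A sketch of dumping only explicitly set fields (OPENAI_API_KEY assumed set so construction validates):
```python
from langgraph_agent_toolkit.core.models import ChatOpenAIPatched

model = ChatOpenAIPatched(model="gpt-4o-mini", temperature=0.2)
# Only fields set at construction time are included
print(model.model_dump(exclude_unset=True))
# JSON mode coerces values to JSON-serializable types
print(model.model_dump(mode="json", include={"temperature"}))
```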
- model_dump_json(*, indent=None, ensure_ascii=False, include=None, exclude=None, context=None, by_alias=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, exclude_computed_fields=False, round_trip=False, warnings=True, fallback=None, serialize_as_any=False)[source]
- !!! abstract “Usage Documentation”
[model_dump_json](../concepts/serialization.md#json-mode)
Generates a JSON representation of the model using Pydantic’s to_json method.
- Parameters:
indent (int | None) – Indentation to use in the JSON output. If None is passed, the output will be compact.
ensure_ascii (bool) – If True, the output is guaranteed to have all incoming non-ASCII characters escaped. If False (the default), these characters will be output as-is.
include (set[int] | set[str] | Mapping[int, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | Mapping[str, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | None) – Field(s) to include in the JSON output.
exclude (set[int] | set[str] | Mapping[int, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | Mapping[str, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | None) – Field(s) to exclude from the JSON output.
context (Any | None) – Additional context to pass to the serializer.
by_alias (bool | None) – Whether to serialize using field aliases.
exclude_unset (bool) – Whether to exclude fields that have not been explicitly set.
exclude_defaults (bool) – Whether to exclude fields that are set to their default value.
exclude_none (bool) – Whether to exclude fields that have a value of None.
exclude_computed_fields (bool) – Whether to exclude computed fields. While this can be useful for round-tripping, it is usually recommended to use the dedicated round_trip parameter instead.
round_trip (bool) – If True, dumped values should be valid as input for non-idempotent types such as Json[T].
warnings (bool | Literal['none', 'warn', 'error']) – How to handle serialization errors. False/”none” ignores them, True/”warn” logs errors, “error” raises a [PydanticSerializationError][pydantic_core.PydanticSerializationError].
fallback (Callable[[Any], Any] | None) – A function to call when an unknown value is encountered. If not provided, a [PydanticSerializationError][pydantic_core.PydanticSerializationError] error is raised.
serialize_as_any (bool) – Whether to serialize fields with duck-typing serialization behavior.
- Returns:
A JSON string representation of the model.
- Return type:
str
- property model_extra: dict[str, Any] | None
Get extra fields set during validation.
- Returns:
A dictionary of extra fields, or None if config.extra is not set to “allow”.
- model_fields = {'async_client': FieldInfo(annotation=Any, required=False, default=None, exclude=True), 'cache': FieldInfo(annotation=Union[BaseCache, bool, NoneType], required=False, default=None, exclude=True), 'callbacks': FieldInfo(annotation=Union[list[BaseCallbackHandler], BaseCallbackManager, NoneType], required=False, default=None, exclude=True), 'client': FieldInfo(annotation=Any, required=False, default=None, exclude=True), 'custom_get_token_ids': FieldInfo(annotation=Union[Callable[list, list[int]], NoneType], required=False, default=None, exclude=True), 'default_headers': FieldInfo(annotation=Union[Mapping[str, str], NoneType], required=False, default=None), 'default_query': FieldInfo(annotation=Union[Mapping[str, object], NoneType], required=False, default=None), 'disable_streaming': FieldInfo(annotation=Union[bool, Literal['tool_calling']], required=False, default=False), 'disabled_params': FieldInfo(annotation=Union[dict[str, Any], NoneType], required=False, default=None), 'extra_body': FieldInfo(annotation=Union[Mapping[str, Any], NoneType], required=False, default=None), 'frequency_penalty': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'http_async_client': FieldInfo(annotation=Union[Any, NoneType], required=False, default=None, exclude=True), 'http_client': FieldInfo(annotation=Union[Any, NoneType], required=False, default=None, exclude=True), 'include': FieldInfo(annotation=Union[list[str], NoneType], required=False, default=None), 'include_response_headers': FieldInfo(annotation=bool, required=False, default=False), 'logit_bias': FieldInfo(annotation=Union[dict[int, int], NoneType], required=False, default=None), 'logprobs': FieldInfo(annotation=Union[bool, NoneType], required=False, default=None), 'max_retries': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'max_tokens': FieldInfo(annotation=Union[int, NoneType], required=False, default=None, alias='max_completion_tokens', alias_priority=2), 'metadata': FieldInfo(annotation=Union[dict[str, Any], NoneType], required=False, default=None, exclude=True), 'model_kwargs': FieldInfo(annotation=dict[str, Any], required=False, default_factory=dict), 'model_name': FieldInfo(annotation=str, required=False, default='gpt-3.5-turbo', alias='model', alias_priority=2), 'n': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'name': FieldInfo(annotation=Union[str, NoneType], required=False, default=None), 'openai_api_base': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, alias='base_url', alias_priority=2), 'openai_api_key': FieldInfo(annotation=Union[SecretStr, NoneType, Callable[list, str], Callable[list, Awaitable[str]]], required=False, default_factory=get_secret_from_env, alias='api_key', alias_priority=2), 'openai_organization': FieldInfo(annotation=Union[str, NoneType], required=False, default=None, alias='organization', alias_priority=2), 'openai_proxy': FieldInfo(annotation=Union[str, NoneType], required=False, default_factory=get_from_env_fn), 'output_version': FieldInfo(annotation=Union[str, NoneType], required=False, default_factory=get_from_env_fn), 'presence_penalty': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'profile': FieldInfo(annotation=Union[ModelProfile, NoneType], required=False, default=None, exclude=True), 'rate_limiter': FieldInfo(annotation=Union[BaseRateLimiter, NoneType], required=False, default=None, exclude=True), 'reasoning': FieldInfo(annotation=Union[dict[str, Any], 
NoneType], required=False, default=None), 'reasoning_effort': FieldInfo(annotation=Union[str, NoneType], required=False, default=None), 'request_timeout': FieldInfo(annotation=Union[float, tuple[float, float], Any, NoneType], required=False, default=None, alias='timeout', alias_priority=2), 'root_async_client': FieldInfo(annotation=Any, required=False, default=None, exclude=True), 'root_client': FieldInfo(annotation=Any, required=False, default=None, exclude=True), 'seed': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'service_tier': FieldInfo(annotation=Union[str, NoneType], required=False, default=None), 'stop': FieldInfo(annotation=Union[list[str], str, NoneType], required=False, default=None, alias='stop_sequences', alias_priority=2), 'store': FieldInfo(annotation=Union[bool, NoneType], required=False, default=None), 'stream_usage': FieldInfo(annotation=Union[bool, NoneType], required=False, default=None), 'streaming': FieldInfo(annotation=bool, required=False, default=False), 'tags': FieldInfo(annotation=Union[list[str], NoneType], required=False, default=None, exclude=True), 'temperature': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'tiktoken_model_name': FieldInfo(annotation=Union[str, NoneType], required=False, default=None), 'top_logprobs': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'truncation': FieldInfo(annotation=Union[str, NoneType], required=False, default=None), 'use_previous_response_id': FieldInfo(annotation=bool, required=False, default=False), 'use_responses_api': FieldInfo(annotation=Union[bool, NoneType], required=False, default=None), 'verbose': FieldInfo(annotation=bool, required=False, default_factory=_get_verbosity, exclude=True, repr=False), 'verbosity': FieldInfo(annotation=Union[str, NoneType], required=False, default=None)}
- property model_fields_set: set[str]
Returns the set of fields that have been explicitly set on this model instance.
- Returns:
A set of strings representing the fields that have been set, i.e. that were not filled from defaults.
- classmethod model_json_schema(by_alias=True, ref_template=DEFAULT_REF_TEMPLATE, schema_generator=GenerateJsonSchema, mode='validation', *, union_format='any_of')[source]
Generates a JSON schema for a model class.
- Parameters:
by_alias (bool) – Whether to use attribute aliases or not.
ref_template (str) – The reference template.
union_format (Literal['any_of', 'primitive_type_array']) –
The format to use when combining schemas from unions together. Can be one of:
- 'any_of': Use the [anyOf](https://json-schema.org/understanding-json-schema/reference/combining#anyOf) keyword to combine schemas (the default).
- 'primitive_type_array': Use the [type](https://json-schema.org/understanding-json-schema/reference/type) keyword as an array of strings, containing each type of the combination. If any of the schemas is not a primitive type (string, boolean, null, integer or number) or contains constraints/metadata, falls back to any_of.
schema_generator (type[GenerateJsonSchema]) – To override the logic used to generate the JSON schema, as a subclass of GenerateJsonSchema with your desired modifications
mode (Literal['validation', 'serialization']) – The mode in which to generate the schema.
- Returns:
The JSON schema for the given model class.
- Return type:
dict[str, Any]
- classmethod model_parametrized_name(params)[source]
Compute the class name for parametrizations of generic classes.
This method can be overridden to achieve a custom naming scheme for generic BaseModels.
- Parameters:
params (tuple[type[Any], ...]) – Tuple of types of the class. Given a generic class Model with 2 type variables and a concrete model Model[str, int], the value (str, int) would be passed to params.
- Returns:
String representing the new class where params are passed to cls as type variables.
- Raises:
TypeError – Raised when trying to generate concrete names for non-generic models.
- Return type:
str
- model_post_init(context, /)[source]
Override this method to perform additional initialization after __init__ and model_construct. This is useful if you want to do some validation that requires the entire model to be initialized.
- Parameters:
context (Any)
- Return type:
None
- classmethod model_rebuild(*, force=False, raise_errors=True, _parent_namespace_depth=2, _types_namespace=None)[source]
Try to rebuild the pydantic-core schema for the model.
This may be necessary when one of the annotations is a ForwardRef which could not be resolved during the initial attempt to build the schema, and automatic rebuilding fails.
- Parameters:
force (bool) – Whether to force the rebuilding of the model schema, defaults to False.
raise_errors (bool) – Whether to raise errors, defaults to True.
_parent_namespace_depth (int) – The depth level of the parent namespace, defaults to 2.
_types_namespace (MappingNamespace | None) – The types namespace, defaults to None.
- Returns:
Returns None if the schema is already “complete” and rebuilding was not required. If rebuilding _was_ required, returns True if rebuilding was successful, otherwise False.
- Return type:
bool | None
- classmethod model_validate(obj, *, strict=None, extra=None, from_attributes=None, context=None, by_alias=None, by_name=None)[source]
Validate a pydantic model instance.
- Parameters:
obj (Any) – The object to validate.
strict (bool | None) – Whether to enforce types strictly.
extra (Literal['allow', 'ignore', 'forbid'] | None) – Whether to ignore, allow, or forbid extra data during model validation. See the [extra configuration value][pydantic.ConfigDict.extra] for details.
from_attributes (bool | None) – Whether to extract data from object attributes.
context (Any | None) – Additional context to pass to the validator.
by_alias (bool | None) – Whether to use the field’s alias when validating against the provided input data.
by_name (bool | None) – Whether to use the field’s name when validating against the provided input data.
- Raises:
ValidationError – If the object could not be validated.
- Returns:
The validated model instance.
- Return type:
Self
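A sketch of validating a plain config dict into a model instance; field aliases such as model (for model_name) are honored (OPENAI_API_KEY assumed set):
```python
from langgraph_agent_toolkit.core.models import ChatOpenAIPatched

data = {"model": "gpt-4o-mini", "temperature": 0.5}
model = ChatOpenAIPatched.model_validate(data, by_alias=True)
print(model.model_name, model.temperature)  # gpt-4o-mini 0.5
```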
- classmethod model_validate_json(json_data, *, strict=None, extra=None, context=None, by_alias=None, by_name=None)[source]
- !!! abstract “Usage Documentation”
[JSON Parsing](../concepts/json.md#json-parsing)
Validate the given JSON data against the Pydantic model.
- Parameters:
json_data (str | bytes | bytearray) – The JSON data to validate.
strict (bool | None) – Whether to enforce types strictly.
extra (Literal['allow', 'ignore', 'forbid'] | None) – Whether to ignore, allow, or forbid extra data during model validation. See the [extra configuration value][pydantic.ConfigDict.extra] for details.
context (Any | None) – Extra variables to pass to the validator.
by_alias (bool | None) – Whether to use the field’s alias when validating against the provided input data.
by_name (bool | None) – Whether to use the field’s name when validating against the provided input data.
- Returns:
The validated Pydantic model.
- Raises:
ValidationError – If json_data is not a JSON string or the object could not be validated.
- Return type:
Self
- classmethod model_validate_strings(obj, *, strict=None, extra=None, context=None, by_alias=None, by_name=None)[source]
Validate the given object with string data against the Pydantic model.
- Parameters:
obj (Any) – The object containing string data to validate.
strict (bool | None) – Whether to enforce types strictly.
extra (Literal['allow', 'ignore', 'forbid'] | None) – Whether to ignore, allow, or forbid extra data during model validation. See the [extra configuration value][pydantic.ConfigDict.extra] for details.
context (Any | None) – Extra variables to pass to the validator.
by_alias (bool | None) – Whether to use the field’s alias when validating against the provided input data.
by_name (bool | None) – Whether to use the field’s name when validating against the provided input data.
- Returns:
The validated Pydantic model.
- Return type:
Self
- property output_schema: type[BaseModel]
Output schema.
The type of output this Runnable produces specified as a Pydantic model.
- classmethod parse_file(path, *, content_type=None, encoding='utf8', proto=None, allow_pickle=False)[source]
- classmethod parse_raw(b, *, content_type=None, encoding='utf8', proto=None, allow_pickle=False)[source]
- pick(keys)[source]
Pick keys from the output dict of this Runnable.
!!! example “Pick a single key”
```python
import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
chain = RunnableMap(str=as_str, json=as_json)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}

json_only_chain = chain.pick("json")
json_only_chain.invoke("[1, 2, 3]")
# -> [1, 2, 3]
```
!!! example “Pick a list of keys”
```python
import json
from typing import Any

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)

def as_bytes(x: Any) -> bytes:
    return bytes(x, "utf-8")

chain = RunnableMap(
    str=as_str,
    json=as_json,
    bytes=RunnableLambda(as_bytes),
)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

json_and_bytes_chain = chain.pick(["json", "bytes"])
json_and_bytes_chain.invoke("[1, 2, 3]")
# -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}
```
- pipe(*others, name=None)[source]
Pipe Runnable objects.
Compose this Runnable with Runnable-like objects to make a RunnableSequence.
Equivalent to RunnableSequence(self, *others) or self | others[0] | …
Example
```python
from langchain_core.runnables import RunnableLambda

def add_one(x: int) -> int:
    return x + 1

def mul_two(x: int) -> int:
    return x * 2

runnable_1 = RunnableLambda(add_one)
runnable_2 = RunnableLambda(mul_two)
sequence = runnable_1.pipe(runnable_2)
# Or equivalently:
# sequence = runnable_1 | runnable_2
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)
sequence.invoke(1)
await sequence.ainvoke(1)
# -> 4

sequence.batch([1, 2, 3])
await sequence.abatch([1, 2, 3])
# -> [4, 6, 8]
```
- classmethod schema_json(*, by_alias=True, ref_template=DEFAULT_REF_TEMPLATE, **dumps_kwargs)[source]
- classmethod set_verbose(verbose)[source]
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
- stream(input, config=None, *, stop=None, **kwargs)[source]
Default implementation of stream, which calls invoke.
Subclasses must override this method if they support streaming output.
- Parameters:
input (LanguageModelInput) – The input to the Runnable.
config (RunnableConfig | None) – A config to use when invoking the Runnable.
stop (list[str] | None) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs (Any) – Additional keyword arguments to pass to the Runnable.
- Yields:
The output of the Runnable.
- Return type:
Iterator[AIMessageChunk]
- to_json()[source]
Serialize the Runnable to JSON.
- Returns:
A JSON-serializable representation of the Runnable.
- Return type:
SerializedConstructor | SerializedNotImplemented
- to_json_not_implemented()[source]
Serialize a “not implemented” object.
- Returns:
SerializedNotImplemented.
- Return type:
SerializedNotImplemented
- transform(input, config=None, **kwargs)[source]
Transform inputs to outputs.
Default implementation of transform, which buffers input and calls stream.
Subclasses must override this method if they can start producing output while input is still being generated.
- validate_environment()[source]
Validate that the API key and python package exist in the environment.
- Return type:
Self
- classmethod validate_temperature(values)[source]
Validate the temperature parameter for different models.
gpt-5 models (excluding gpt-5-chat) only allow temperature=1 or unset (defaults to 1).
- with_alisteners(*, on_start=None, on_end=None, on_error=None)[source]
Bind async lifecycle listeners to a Runnable.
Returns a new Runnable.
The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.
- Parameters:
on_start (AsyncListener | None) – Called asynchronously before the Runnable starts running, with the Run object.
on_end (AsyncListener | None) – Called asynchronously after the Runnable finishes running, with the Run object.
on_error (AsyncListener | None) – Called asynchronously if the Runnable throws an error, with the Run object.
- Returns:
A new Runnable with the listeners bound.
- Return type:
Runnable[Input, Output]
Example
```python
import asyncio
import time
from datetime import datetime, timezone

from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run


def format_t(timestamp: float) -> str:
    return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()


async def test_runnable(time_to_sleep: int):
    print(f"Runnable[{time_to_sleep}s]: starts at {format_t(time.time())}")
    await asyncio.sleep(time_to_sleep)
    print(f"Runnable[{time_to_sleep}s]: ends at {format_t(time.time())}")


async def fn_start(run_obj: Run):
    print(f"on start callback starts at {format_t(time.time())}")
    await asyncio.sleep(3)
    print(f"on start callback ends at {format_t(time.time())}")


async def fn_end(run_obj: Run):
    print(f"on end callback starts at {format_t(time.time())}")
    await asyncio.sleep(2)
    print(f"on end callback ends at {format_t(time.time())}")


runnable = RunnableLambda(test_runnable).with_alisteners(
    on_start=fn_start, on_end=fn_end
)


async def concurrent_runs():
    await asyncio.gather(runnable.ainvoke(2), runnable.ainvoke(3))


asyncio.run(concurrent_runs())
# Result:
# on start callback starts at 2025-03-01T07:05:22.875378+00:00
# on start callback starts at 2025-03-01T07:05:22.875495+00:00
# on start callback ends at 2025-03-01T07:05:25.878862+00:00
# on start callback ends at 2025-03-01T07:05:25.878947+00:00
# Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
# Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
# Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
# on end callback starts at 2025-03-01T07:05:27.882360+00:00
# Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
# on end callback starts at 2025-03-01T07:05:28.882428+00:00
# on end callback ends at 2025-03-01T07:05:29.883893+00:00
# on end callback ends at 2025-03-01T07:05:30.884831+00:00
```
- with_config(config=None, **kwargs)[source]
Bind config to a Runnable, returning a new Runnable.
- Parameters:
config (RunnableConfig | None) – The config to bind to the Runnable.
**kwargs (Any) – Additional keyword arguments to pass to the Runnable.
- Returns:
A new Runnable with the config bound.
- Return type:
Runnable[Input, Output]
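For example, a minimal sketch binding a run name and tags (the values here are arbitrary):
```python
from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: x + 1)
configured = runnable.with_config({"run_name": "add_one", "tags": ["demo"]})
configured.invoke(1)  # -> 2; the run is traced under the bound name and tags
```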
- with_fallbacks(fallbacks, *, exceptions_to_handle=(Exception,), exception_key=None)[source]
Add fallbacks to a Runnable, returning a new Runnable.
The new Runnable will try the original Runnable, and then each fallback in order, upon failures.
- Parameters:
fallbacks (Sequence[Runnable[Input, Output]]) – A sequence of runnables to try if the original Runnable fails.
exceptions_to_handle (tuple[type[BaseException], ...]) – A tuple of exception types to handle.
exception_key (str | None) –
If string is specified then handled exceptions will be passed to fallbacks as part of the input under the specified key.
If None, exceptions will not be passed to fallbacks.
If used, the base Runnable and its fallbacks must accept a dictionary as input.
- Returns:
A new Runnable that will try the original Runnable, and then each fallback in order, upon failures.
- Return type:
RunnableWithFallbacksT[Input, Output]
Example
```python
from typing import Iterator

from langchain_core.runnables import RunnableGenerator


def _generate_immediate_error(input: Iterator) -> Iterator[str]:
    raise ValueError()
    yield ""


def _generate(input: Iterator) -> Iterator[str]:
    yield from "foo bar"


runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
    [RunnableGenerator(_generate)]
)
print("".join(runnable.stream({})))  # foo bar
```
- with_listeners(*, on_start=None, on_end=None, on_error=None)[source]
Bind lifecycle listeners to a Runnable, returning a new Runnable.
The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.
- Parameters:
on_start (Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None) – Called before the Runnable starts running, with the Run object.
on_end (Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None) – Called after the Runnable finishes running, with the Run object.
on_error (Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None) – Called if the Runnable throws an error, with the Run object.
- Returns:
A new Runnable with the listeners bound.
- Return type:
Runnable[Input, Output]
Example
```python
import time

from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run


def test_runnable(time_to_sleep: int):
    time.sleep(time_to_sleep)


def fn_start(run_obj: Run):
    print("start_time:", run_obj.start_time)


def fn_end(run_obj: Run):
    print("end_time:", run_obj.end_time)


chain = RunnableLambda(test_runnable).with_listeners(
    on_start=fn_start, on_end=fn_end
)
chain.invoke(2)
```
- with_retry(*, retry_if_exception_type=(Exception,), wait_exponential_jitter=True, exponential_jitter_params=None, stop_after_attempt=3)[source]
Create a new Runnable that retries the original Runnable on exceptions.
- Parameters:
retry_if_exception_type (tuple[type[BaseException], ...]) – A tuple of exception types to retry on.
wait_exponential_jitter (bool) – Whether to add jitter to the wait time between retries.
stop_after_attempt (int) – The maximum number of attempts to make before giving up.
exponential_jitter_params (ExponentialJitterParams | None) – Parameters for tenacity.wait_exponential_jitter. Namely: initial, max, exp_base, and jitter (all float values).
- Returns:
A new Runnable that retries the original Runnable on exceptions.
- Return type:
Runnable[Input, Output]
Example
```python
from langchain_core.runnables import RunnableLambda

count = 0


def _lambda(x: int) -> None:
    global count
    count = count + 1
    if x == 1:
        raise ValueError("x is 1")
    else:
        pass


runnable = RunnableLambda(_lambda)

try:
    runnable.with_retry(
        stop_after_attempt=2,
        retry_if_exception_type=(ValueError,),
    ).invoke(1)
except ValueError:
    pass
```
- with_structured_output(schema=None, *, method='json_schema', include_raw=False, strict=None, tools=None, **kwargs)[source]
Model wrapper that returns outputs formatted to match the given schema.
- Parameters:
schema (dict[str, Any] | type[_BM] | type | None) –
The output schema. Can be passed in as:
an OpenAI function/tool schema,
a JSON Schema,
a TypedDict class,
or a Pydantic class.
If schema is a Pydantic class then the model output will be a Pydantic instance of that class, and the model-generated fields will be validated by the Pydantic class. Otherwise the model output will be a dict and will not be validated.
See langchain_core.utils.function_calling.convert_to_openai_tool for more on how to properly specify types and descriptions of schema fields when specifying a Pydantic or TypedDict class.
method (Literal['function_calling', 'json_mode', 'json_schema']) –
The method for steering model generation, one of:
- ’json_schema’:
Uses OpenAI’s [Structured Output API](https://platform.openai.com/docs/guides/structured-outputs). See the docs for [supported models](https://platform.openai.com/docs/guides/structured-outputs#supported-models).
- ’function_calling’:
Uses OpenAI’s [tool-calling API](https://platform.openai.com/docs/guides/function-calling) (formerly called function calling).
- ’json_mode’:
Uses OpenAI’s [JSON mode](https://platform.openai.com/docs/guides/structured-outputs#json-mode). Note that if using JSON mode then you must include instructions for formatting the output into the desired schema into the model call.
Learn more about the [differences between methods](https://platform.openai.com/docs/guides/structured-outputs#function-calling-vs-response-format).
include_raw (bool) –
If False then only the parsed structured output is returned.
If an error occurs during model output parsing it will be raised.
If True then both the raw model response (a BaseMessage) and the parsed model response will be returned.
If an error occurs during output parsing it will be caught and returned as well.
The final output is always a dict with keys ‘raw’, ‘parsed’, and ‘parsing_error’.
strict (bool | None) –
- True:
Model output is guaranteed to exactly match the schema. The input schema will also be validated according to the [supported schemas](https://platform.openai.com/docs/guides/structured-outputs#supported-schemas).
- False:
Input schema will not be validated and model output will not be validated.
- None:
strict argument will not be passed to the model.
If schema is specified via TypedDict or JSON schema, strict is not enabled by default. Pass strict=True to enable it.
- !!! note
strict can only be non-null if method is ‘json_schema’ or ‘function_calling’.
tools (list | None) –
A list of tool-like objects to bind to the chat model. Requires that:
method is ‘json_schema’ (default).
strict=True
include_raw=True
If a model elects to call a tool, the resulting AIMessage in ‘raw’ will include tool calls.
??? example
```python
from langchain.chat_models import init_chat_model
from pydantic import BaseModel


class ResponseSchema(BaseModel):
    response: str


def get_weather(location: str) -> str:
    """Get weather at a location."""
    pass


model = init_chat_model("openai:gpt-4o-mini")

structured_model = model.with_structured_output(
    ResponseSchema,
    tools=[get_weather],
    strict=True,
    include_raw=True,
)

structured_model.invoke("What's the weather in Boston?")
# {
#     "raw": AIMessage(content="", tool_calls=[...], ...),
#     "parsing_error": None,
#     "parsed": None,
# }
```
kwargs (Any) – Additional keyword args are passed through to the model.
- Returns:
- A Runnable that takes same inputs as a
langchain_core.language_models.chat.BaseChatModel. If include_raw is False and schema is a Pydantic class, Runnable outputs an instance of schema (i.e., a Pydantic object). Otherwise, if include_raw is False then Runnable outputs a dict.
If include_raw is True, then Runnable outputs a dict with keys:
’raw’: BaseMessage
- ’parsed’: None if there was a parsing error, otherwise the type
depends on the schema as described above.
’parsing_error’: BaseException | None
- Return type:
Runnable[PromptValue | str | Sequence[BaseMessage | list[str] | tuple[str, str] | str | dict[str, Any]], dict | _BM]
!!! warning “Behavior changed in langchain-openai 0.3.0”
method default changed from “function_calling” to “json_schema”.
!!! warning “Behavior changed in langchain-openai 0.3.12”
Support for tools added.
!!! warning “Behavior changed in langchain-openai 0.3.21”
Pass kwargs through to the model.
??? note “Example: schema=Pydantic class, method=’json_schema’, include_raw=False, strict=True”
Note, OpenAI has a number of restrictions on what types of schemas can be provided if strict = True. When using Pydantic, our model cannot specify any Field metadata (like min/max constraints) and fields cannot have default values.
See [all constraints](https://platform.openai.com/docs/guides/structured-outputs#supported-schemas).
```python
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field


class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str | None = Field(
        default=..., description="A justification for the answer."
    )


model = ChatOpenAI(model="...", temperature=0)
structured_model = model.with_structured_output(AnswerWithJustification)

structured_model.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)
# -> AnswerWithJustification(
#     answer="They weigh the same",
#     justification="Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.",
# )
```
??? note “Example: schema=Pydantic class, method=’function_calling’, include_raw=False, strict=False”
```python
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field


class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str | None = Field(
        default=..., description="A justification for the answer."
    )


model = ChatOpenAI(model="...", temperature=0)
structured_model = model.with_structured_output(
    AnswerWithJustification, method="function_calling"
)

structured_model.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)
# -> AnswerWithJustification(
#     answer="They weigh the same",
#     justification="Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.",
# )
```
??? note “Example: schema=Pydantic class, method=’json_schema’, include_raw=True”
```python
from langchain_openai import ChatOpenAI
from pydantic import BaseModel


class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str


model = ChatOpenAI(model="...", temperature=0)
structured_model = model.with_structured_output(
    AnswerWithJustification, include_raw=True
)

structured_model.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)
# -> {
#     "raw": AIMessage(
#         content="",
#         additional_kwargs={
#             "tool_calls": [
#                 {
#                     "id": "call_Ao02pnFYXD6GN1yzc0uXPsvF",
#                     "function": {
#                         "arguments": '{"answer":"They weigh the same.","justification":"Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ."}',
#                         "name": "AnswerWithJustification",
#                     },
#                     "type": "function",
#                 }
#             ]
#         },
#     ),
#     "parsed": AnswerWithJustification(
#         answer="They weigh the same.",
#         justification="Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.",
#     ),
#     "parsing_error": None,
# }
```
??? note “Example: schema=TypedDict class, method=’json_schema’, include_raw=False, strict=False”
```python
from typing_extensions import Annotated, TypedDict

from langchain_openai import ChatOpenAI


class AnswerWithJustification(TypedDict):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: Annotated[
        str | None, None, "A justification for the answer."
    ]


model = ChatOpenAI(model="...", temperature=0)
structured_model = model.with_structured_output(AnswerWithJustification)

structured_model.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)
# -> {
#     "answer": "They weigh the same",
#     "justification": "Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume and density of the two substances differ.",
# }
```
??? note “Example: schema=OpenAI function schema, method=’json_schema’, include_raw=False”
```python
from langchain_openai import ChatOpenAI

oai_schema = {
    "name": "AnswerWithJustification",
    "description": "An answer to the user question along with justification for the answer.",
    "parameters": {
        "type": "object",
        "properties": {
            "answer": {"type": "string"},
            "justification": {
                "description": "A justification for the answer.",
                "type": "string",
            },
        },
        "required": ["answer"],
    },
}

model = ChatOpenAI(model="...", temperature=0)
structured_model = model.with_structured_output(oai_schema)

structured_model.invoke(
    "What weighs more a pound of bricks or a pound of feathers"
)
# -> {
#     "answer": "They weigh the same",
#     "justification": "Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume and density of the two substances differ.",
# }
```
??? note “Example: schema=Pydantic class, method=’json_mode’, include_raw=True”
```python
from langchain_openai import ChatOpenAI
from pydantic import BaseModel


class AnswerWithJustification(BaseModel):
    answer: str
    justification: str


model = ChatOpenAI(model="...", temperature=0)
structured_model = model.with_structured_output(
    AnswerWithJustification, method="json_mode", include_raw=True
)

structured_model.invoke(
    "Answer the following question. "
    "Make sure to return a JSON blob with keys 'answer' and 'justification'.\n\n"
    "What's heavier a pound of bricks or a pound of feathers?"
)
# -> {
#     "raw": AIMessage(
#         content='{\n    "answer": "They are both the same weight.",\n    "justification": "Both a pound of bricks and a pound of feathers weigh one pound. The difference lies in the volume and density of the materials, not the weight." \n}'
#     ),
#     "parsed": AnswerWithJustification(
#         answer="They are both the same weight.",
#         justification="Both a pound of bricks and a pound of feathers weigh one pound. The difference lies in the volume and density of the materials, not the weight.",
#     ),
#     "parsing_error": None,
# }
```
??? note “Example: schema=None, method=’json_mode’, include_raw=True”
```python
structured_model = model.with_structured_output(
    method="json_mode", include_raw=True
)

structured_model.invoke(
    "Answer the following question. "
    "Make sure to return a JSON blob with keys 'answer' and 'justification'.\n\n"
    "What's heavier a pound of bricks or a pound of feathers?"
)
# -> {
#     "raw": AIMessage(
#         content='{\n    "answer": "They are both the same weight.",\n    "justification": "Both a pound of bricks and a pound of feathers weigh one pound. The difference lies in the volume and density of the materials, not the weight." \n}'
#     ),
#     "parsed": {
#         "answer": "They are both the same weight.",
#         "justification": "Both a pound of bricks and a pound of feathers weigh one pound. The difference lies in the volume and density of the materials, not the weight.",
#     },
#     "parsing_error": None,
# }
```
- with_types(*, input_type=None, output_type=None)[source]
Bind input and output types to a Runnable, returning a new Runnable.
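For example, a minimal sketch (the types here are illustrative):
```python
from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: str(x))
typed = runnable.with_types(input_type=int, output_type=str)
typed.invoke(1)  # -> "1"; schemas now reflect int input and str output
```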
- client: Any
- async_client: Any
- root_client: Any
- root_async_client: Any
- model_kwargs: dict[str, Any]
Holds any model parameters valid for create call not explicitly specified.
- openai_api_key: SecretStr | None | Callable[[], str] | Callable[[], Awaitable[str]]
API key to use.
Can be inferred from the OPENAI_API_KEY environment variable, or specified as a string, or sync or async callable that returns a string.
??? example “Specify with environment variable”
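A minimal sketch (the key value is a placeholder):
```python
import os

os.environ["OPENAI_API_KEY"] = "..."

from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-5-nano")  # key is read from the environment
```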
??? example “Specify with a string”
- ??? example “Specify with a sync callable”
```python
from langchain_openai import ChatOpenAI


def get_api_key() -> str:
    # Custom logic to retrieve API key
    return "..."


model = ChatOpenAI(model="gpt-5-nano", api_key=get_api_key)
```
- ??? example “Specify with an async callable”
```python
from langchain_openai import ChatOpenAI


async def get_api_key() -> str:
    # Custom async logic to retrieve API key
    return "..."


model = ChatOpenAI(model="gpt-5-nano", api_key=get_api_key)
```
- openai_api_base: str | None
Base URL path for API requests, leave blank if not using a proxy or service emulator.
- request_timeout: float | tuple[float, float] | Any | None
Timeout for requests to the OpenAI completion API. Can be a float, a (connect, read) tuple of floats, an httpx.Timeout, or None.
- stream_usage: bool | None
Whether to include usage metadata in streaming output. If enabled, an additional message chunk will be generated during the stream including usage metadata.
This parameter is enabled unless openai_api_base is set or the model is initialized with a custom client, as many chat completions APIs do not support streaming token usage.
!!! version-added “Added in langchain-openai 0.3.9”
!!! warning “Behavior changed in langchain-openai 0.3.35”
Enabled for default base URL and client.
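For example, a minimal sketch (the model name is illustrative); the chunk carrying usage metadata arrives at the end of the stream:
```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-5-nano", stream_usage=True)
for chunk in model.stream("Hello"):
    if chunk.usage_metadata:  # present on the final usage chunk
        print(chunk.usage_metadata)
```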
- top_logprobs: int | None
Number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
- logit_bias: dict[int, int] | None
Modify the likelihood of specified tokens appearing in the completion.
- reasoning_effort: str | None
Constrains effort on reasoning for reasoning models. For use with the Chat Completions API.
Reasoning models only.
Currently supported values are ‘minimal’, ‘low’, ‘medium’, and ‘high’. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
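For example, a minimal sketch (the model name is illustrative):
```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-5-nano", reasoning_effort="low")
```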
- reasoning: dict[str, Any] | None
Reasoning parameters for reasoning models. For use with the Responses API.
```python
{
    "effort": "medium",  # Can be "low", "medium", or "high"
    "summary": "auto",  # Can be "auto", "concise", or "detailed"
}
```
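A minimal sketch passing these parameters at construction (the model name is illustrative):
```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    model="gpt-5-nano",
    use_responses_api=True,
    reasoning={"effort": "medium", "summary": "auto"},
)
```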
!!! version-added “Added in langchain-openai 0.3.24”
- verbosity: str | None
Controls the verbosity level of responses for reasoning models. For use with the Responses API.
Currently supported values are ‘low’, ‘medium’, and ‘high’.
!!! version-added “Added in langchain-openai 0.3.28”
- tiktoken_model_name: str | None
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit.
By default, when set to None, this will be the same as the model name. However, there are some cases where you may want to use this class with a model name not supported by tiktoken. This can include using Azure, or one of the many model providers that expose an OpenAI-like API but with different models. In those cases, to avoid erroring when tiktoken is called, you can specify a model name to use here.
- http_client: Any | None
Optional httpx.Client.
Only used for sync invocations. Must specify http_async_client as well if you’d like a custom client for async invocations.
- http_async_client: Any | None
Optional httpx.AsyncClient.
Only used for async invocations. Must specify http_client as well if you’d like a custom client for sync invocations.
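For example, a minimal sketch routing both sync and async traffic through a proxy (assumes httpx >= 0.26 for the proxy keyword; the URL is a placeholder):
```python
import httpx
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    model="gpt-5-nano",
    http_client=httpx.Client(proxy="http://localhost:8080"),
    http_async_client=httpx.AsyncClient(proxy="http://localhost:8080"),
)
```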
- extra_body: Mapping[str, Any] | None
Optional additional JSON properties to include in the request parameters when making requests to OpenAI compatible APIs, such as vLLM, LM Studio, or other providers.
This is the recommended way to pass custom parameters that are specific to your OpenAI-compatible API provider but not part of the standard OpenAI API.
Examples
[LM Studio](https://lmstudio.ai/) TTL parameter: extra_body={“ttl”: 300}
- [vLLM](https://github.com/vllm-project/vllm) custom parameters:
extra_body={“use_beam_search”: True}
Any other provider-specific parameters
- !!! warning
Do not use model_kwargs for custom parameters that are not part of the standard OpenAI API, as this will cause errors when making API calls. Use extra_body instead.
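For example, a minimal sketch for an OpenAI-compatible server (the base URL and ttl value are illustrative):
```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    base_url="http://localhost:1234/v1",
    api_key="not-needed",  # many local servers ignore the key
    model="...",
    extra_body={"ttl": 300},
)
```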
- include_response_headers: bool
Whether to include response headers in the output message response_metadata.
- disabled_params: dict[str, Any] | None
Parameters of the OpenAI client or chat.completions endpoint that should be disabled for the given model.
Should be specified as {“param”: None | [‘val1’, ‘val2’]} where the key is the parameter and the value is either None, meaning that parameter should never be used, or it’s a list of disabled values for the parameter.
For example, older models may not support the ‘parallel_tool_calls’ parameter at all, in which case disabled_params={“parallel_tool_calls”: None} can be passed in.
If a parameter is disabled then it will not be used by default in any methods, e.g. in with_structured_output. However this does not prevent a user from directly passing in the parameter during invocation.
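For example, a minimal sketch of the case described above:
```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    model="...",  # an older model without parallel tool call support
    disabled_params={"parallel_tool_calls": None},
)
```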
- include: list[str] | None
Additional fields to include in generations from Responses API.
Supported values:
‘file_search_call.results’
‘message.input_image.image_url’
‘computer_call_output.output.image_url’
‘reasoning.encrypted_content’
‘code_interpreter_call.outputs’
!!! version-added “Added in langchain-openai 0.3.24”
- service_tier: str | None
Latency tier for request.
Options are ‘auto’, ‘default’, or ‘flex’.
Relevant for users of OpenAI’s scale tier service.
- store: bool | None
If True, OpenAI may store response data for future use.
Defaults to True for the Responses API and False for the Chat Completions API.
!!! version-added “Added in langchain-openai 0.3.24”
- truncation: str | None
Truncation strategy (Responses API).
Can be ‘auto’ or ‘disabled’ (default).
If ‘auto’, model may drop input items from the middle of the message sequence to fit the context window.
!!! version-added “Added in langchain-openai 0.3.24”
- use_previous_response_id: bool
If True, always pass previous_response_id using the ID of the most recent response. Responses API only.
Input messages up to the most recent response will be dropped from request payloads.
For example, the following two are equivalent:
```python
model = ChatOpenAI(
    model="...",
    use_previous_response_id=True,
)
model.invoke(
    [
        HumanMessage("Hello"),
        AIMessage("Hi there!", response_metadata={"id": "resp_123"}),
        HumanMessage("How are you?"),
    ]
)
```

```python
model = ChatOpenAI(model="...", use_responses_api=True)
model.invoke([HumanMessage("How are you?")], previous_response_id="resp_123")
```

!!! version-added "Added in langchain-openai 0.3.26"
- use_responses_api: bool | None
Whether to use the Responses API instead of the Chat API.
If not specified then will be inferred based on invocation params.
!!! version-added “Added in langchain-openai 0.3.9”
- output_version: str | None
Version of AIMessage output format to use.
This field is used to roll-out new output formats for chat model AIMessage responses in a backwards-compatible way.
Supported values:
‘v0’: AIMessage format as of langchain-openai 0.3.x.
- ‘responses/v1’: Formats Responses API output items into AIMessage content blocks
(Responses API only)
‘v1’: v1 of LangChain cross-provider standard.
!!! warning “Behavior changed in langchain-openai 1.0.0”
Default updated to “responses/v1”.
- rate_limiter: BaseRateLimiter | None
An optional rate limiter to use for limiting the number of requests.
- disable_streaming: bool | Literal['tool_calling']
Whether to disable streaming for this model.
If streaming is bypassed, then stream/astream/astream_events will defer to invoke/ainvoke.
If True, will always bypass streaming case.
If 'tool_calling', will bypass streaming only when the model is called with a tools keyword argument. In other words, LangChain will automatically switch to non-streaming behavior (invoke) only when the tools argument is provided. This offers the best of both worlds.
If False (default), will always use streaming case if available.
The main reason for this flag is that code might be written using stream and a user may want to swap out a given model for another model whose implementation does not properly support streaming.
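For example, a minimal sketch (the model name is illustrative):
```python
from langchain_openai import ChatOpenAI

# Stream normally, but fall back to invoke whenever tools are bound
model = ChatOpenAI(model="gpt-5-nano", disable_streaming="tool_calling")
```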
- profile: ModelProfile | None
Profile detailing model capabilities.
- !!! warning “Beta feature”
This is a beta feature. The format of model profiles is subject to change.
If not specified, automatically loaded from the provider package on initialization if data is available.
Example profile data includes context window sizes, supported modalities, or support for tool calling, structured output, and other features.
!!! version-added “Added in langchain-core 1.1.0”
- cache: BaseCache | bool | None
Whether to cache the response.
If True, will use the global cache.
If False, will not use a cache.
If None, will use the global cache if it’s set, otherwise no cache.
If instance of BaseCache, will use the provided cache.
Caching is not currently supported for streaming methods of models.
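For example, a minimal sketch enabling the global in-memory cache and opting this model in:
```python
from langchain_core.caches import InMemoryCache
from langchain_core.globals import set_llm_cache
from langchain_openai import ChatOpenAI

set_llm_cache(InMemoryCache())
model = ChatOpenAI(model="gpt-5-nano", cache=True)
```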
- callbacks: Callbacks
Callbacks to add to the run trace.
- class langgraph_agent_toolkit.core.models.FakeToolModel(responses)[source][source]
Bases: FakeListChatModel
A fake model that returns a fixed response for testing purposes.
- Parameters:
responses (List)
- async abatch(inputs, config=None, *, return_exceptions=False, **kwargs)[source]
Default implementation runs ainvoke in parallel using asyncio.gather.
The default implementation of batch works well for IO bound runnables.
Subclasses must override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.
- Parameters:
inputs (list[Input]) – A list of inputs to the Runnable.
config (RunnableConfig | list[RunnableConfig] | None) –
A config to use when invoking the Runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys.
Please refer to RunnableConfig for more details.
return_exceptions (bool) – Whether to return exceptions instead of raising them.
**kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.
- Returns:
A list of outputs from the Runnable.
- Return type:
list[Output]
- async abatch_as_completed(inputs, config=None, *, return_exceptions=False, **kwargs)[source]
Run ainvoke in parallel on a list of inputs.
Yields results as they complete.
- Parameters:
inputs (Sequence[Input]) – A list of inputs to the Runnable.
config (RunnableConfig | Sequence[RunnableConfig] | None) –
A config to use when invoking the Runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys.
Please refer to RunnableConfig for more details.
return_exceptions (bool) – Whether to return exceptions instead of raising them.
**kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.
- Yields:
A tuple of the index of the input and the output from the Runnable.
- Return type:
AsyncIterator[tuple[int, Output | Exception]]
- async agenerate(messages, stop=None, callbacks=None, *, tags=None, metadata=None, run_name=None, run_id=None, **kwargs)[source]
Asynchronously pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
Take advantage of batched calls,
Need more output from the model than just the top generated value,
- Are building chains that are agnostic to the underlying language model
type (e.g., pure text completion models vs chat models).
- Parameters:
messages (list[list[BaseMessage]]) – List of list of messages.
stop (list[str] | None) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks (Callbacks) –
Callbacks to pass through.
Used for executing additional functionality, such as logging or streaming, throughout generation.
run_name (str | None) – The name of the run.
run_id (uuid.UUID | None) – The ID of the run.
**kwargs (Any) –
Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
- Returns:
- An LLMResult, which contains a list of candidate Generations for each
input prompt and additional model provider-specific output.
- Return type:
LLMResult
- async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)[source]
Asynchronously pass a sequence of prompts and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
Take advantage of batched calls,
Need more output from the model than just the top generated value,
- Are building chains that are agnostic to the underlying language model
type (e.g., pure text completion models vs chat models).
- Parameters:
prompts (list[PromptValue]) –
List of PromptValue objects.
A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessage objects for chat models).
stop (list[str] | None) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks (list[BaseCallbackHandler] | BaseCallbackManager | None) –
Callbacks to pass through.
Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs (Any) –
Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
- Returns:
- An LLMResult, which contains a list of candidate Generation objects for
each input prompt and additional model provider-specific output.
- Return type:
LLMResult
- async ainvoke(input, config=None, *, stop=None, **kwargs)[source]
Transform a single input into an output.
- Parameters:
input (LanguageModelInput) – The input to the Runnable.
config (RunnableConfig | None) –
A config to use when invoking the Runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys.
Please refer to RunnableConfig for more details.
kwargs (Any)
- Returns:
The output of the Runnable.
- Return type:
AIMessage
- as_tool(args_schema=None, *, name=None, description=None, arg_types=None)[source]
Create a BaseTool from a Runnable.
as_tool will instantiate a BaseTool with a name, description, and args_schema from a Runnable. Where possible, schemas are inferred from runnable.get_input_schema.
Alternatively (e.g., if the Runnable takes a dict as input and the specific dict keys are not typed), the schema can be specified directly with args_schema.
You can also pass arg_types to just specify the required arguments and their types.
- Parameters:
args_schema (type[BaseModel] | None) – The schema for the tool.
name (str | None) – The name of the tool.
description (str | None) – The description of the tool.
arg_types (dict[str, type] | None) – A dictionary of argument names to types.
- Returns:
A BaseTool instance.
- Return type:
BaseTool
!!! example “TypedDict input”
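A minimal sketch of the pattern (names are illustrative):
```python
from typing_extensions import TypedDict

from langchain_core.runnables import RunnableLambda


class Args(TypedDict):
    a: int
    b: list[int]


def f(x: Args) -> str:
    return str(x["a"] * max(x["b"]))


runnable = RunnableLambda(f)
as_tool = runnable.as_tool()
as_tool.invoke({"a": 3, "b": [1, 2]})  # -> "6"
```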
!!! example “dict input, specifying schema via args_schema”
```python
from typing import Any

from pydantic import BaseModel, Field

from langchain_core.runnables import RunnableLambda


def f(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))


class FSchema(BaseModel):
    """Apply a function to an integer and list of integers."""

    a: int = Field(..., description="Integer")
    b: list[int] = Field(..., description="List of ints")


runnable = RunnableLambda(f)
as_tool = runnable.as_tool(FSchema)
as_tool.invoke({"a": 3, "b": [1, 2]})
```
!!! example “dict input, specifying schema via arg_types”
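A minimal sketch of the pattern (names are illustrative):
```python
from typing import Any

from langchain_core.runnables import RunnableLambda


def g(x: dict[str, Any]) -> str:
    return str(x["a"] * max(x["b"]))


runnable = RunnableLambda(g)
as_tool = runnable.as_tool(arg_types={"a": int, "b": list[int]})
as_tool.invoke({"a": 3, "b": [1, 2]})  # -> "6"
```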
!!! example “str input”
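A minimal sketch of the pattern (names are illustrative):
```python
from langchain_core.runnables import RunnableLambda


def f(x: str) -> str:
    return x + "a"


def g(x: str) -> str:
    return x + "z"


runnable = RunnableLambda(f) | g
as_tool = runnable.as_tool()
as_tool.invoke("b")  # -> "baz"
```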
- assign(**kwargs)[source]
Assigns new fields to the dict output of this Runnable.
```python
from operator import itemgetter

from langchain_core.language_models.fake import FakeStreamingListLLM
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import SystemMessagePromptTemplate
from langchain_core.runnables import Runnable

prompt = (
    SystemMessagePromptTemplate.from_template("You are a nice assistant.")
    + "{question}"
)
model = FakeStreamingListLLM(responses=["foo-lish"])

chain: Runnable = prompt | model | {"str": StrOutputParser()}

chain_with_assign = chain.assign(hello=itemgetter("str") | model)

print(chain_with_assign.input_schema.model_json_schema())
# {'title': 'PromptInput', 'type': 'object', 'properties':
#  {'question': {'title': 'Question', 'type': 'string'}}}
print(chain_with_assign.output_schema.model_json_schema())
# {'title': 'RunnableSequenceOutput', 'type': 'object', 'properties':
#  {'str': {'title': 'Str', 'type': 'string'},
#   'hello': {'title': 'Hello', 'type': 'string'}}}
```
- Parameters:
**kwargs (Runnable[dict[str, Any], Any] | Callable[[dict[str, Any]], Any] | Mapping[str, Runnable[dict[str, Any], Any] | Callable[[dict[str, Any]], Any]]) – A mapping of keys to Runnable or Runnable-like objects that will be invoked with the entire output dict of this Runnable.
- Returns:
A new Runnable.
- Return type:
RunnableSerializable[Any, Any]
- async astream(input, config=None, *, stop=None, **kwargs)[source]
Default implementation of astream, which calls ainvoke.
Subclasses must override this method if they support streaming output.
- Parameters:
input (LanguageModelInput) – The input to the Runnable.
config (RunnableConfig | None) – A config to use when invoking the Runnable.
stop (list[str] | None) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
**kwargs (Any) – Additional keyword arguments to pass to the Runnable.
- Yields:
The output of the Runnable.
- Return type:
AsyncIterator[AIMessageChunk]
- async astream_events(input, config=None, *, version='v2', include_names=None, include_types=None, include_tags=None, exclude_names=None, exclude_types=None, exclude_tags=None, **kwargs)[source]
Generate a stream of events.
Use to create an iterator over StreamEvent that provide real-time information about the progress of the Runnable, including StreamEvent from intermediate results.
A StreamEvent is a dictionary with the following schema:
- event: Event names are of the format:
on_[runnable_type]_(start|stream|end).
name: The name of the Runnable that generated the event.
- run_id: Randomly generated ID associated with the given execution of the
Runnable that emitted the event. A child Runnable that gets invoked as part of the execution of a parent Runnable is assigned its own unique ID.
- parent_ids: The IDs of the parent runnables that generated the event. The
root Runnable will have an empty list. The order of the parent IDs is from the root to the immediate parent. Only available for v2 version of the API. The v1 version of the API will return an empty list.
tags: The tags of the Runnable that generated the event.
metadata: The metadata of the Runnable that generated the event.
- data: The data associated with the event. The contents of this field
depend on the type of event. See the table below for more details.
Below is a table that illustrates some events that might be emitted by various chains. Metadata fields have been omitted from the table for brevity. Chain definitions have been included after the table.
- !!! note
This reference table is for the v2 version of the schema.
| event | name | chunk | input | output |
| ----- | ---- | ----- | ----- | ------ |
| on_chat_model_start | '[model name]' | | {"messages": [[SystemMessage, HumanMessage]]} | |
| on_chat_model_stream | '[model name]' | AIMessageChunk(content="hello") | | |
| on_chat_model_end | '[model name]' | | {"messages": [[SystemMessage, HumanMessage]]} | AIMessageChunk(content="hello world") |
| on_llm_start | '[model name]' | | {'input': 'hello'} | |
| on_llm_stream | '[model name]' | 'Hello' | | |
| on_llm_end | '[model name]' | | 'Hello human!' | |
| on_chain_start | 'format_docs' | | | |
| on_chain_stream | 'format_docs' | 'hello world!, goodbye world!' | | |
| on_chain_end | 'format_docs' | | [Document(...)] | 'hello world!, goodbye world!' |
| on_tool_start | 'some_tool' | | {"x": 1, "y": "2"} | |
| on_tool_end | 'some_tool' | | | {"x": 1, "y": "2"} |
| on_retriever_start | '[retriever name]' | | {"query": "hello"} | |
| on_retriever_end | '[retriever name]' | | {"query": "hello"} | [Document(...), ...] |
| on_prompt_start | '[template_name]' | | {"question": "hello"} | |
| on_prompt_end | '[template_name]' | | {"question": "hello"} | ChatPromptValue(messages: [SystemMessage, ...]) |

In addition to the standard events, users can also dispatch custom events (see example below).
Custom events will only be surfaced in the v2 version of the API!
A custom event has following format:
| Attribute | Type | Description |
| --------- | ---- | ----------- |
| name | str | A user defined name for the event. |
| data | Any | The data associated with the event. This can be anything, though we suggest making it JSON serializable. |

Here are declarations associated with the standard events shown above:
format_docs:
```python
def format_docs(docs: list[Document]) -> str:
    '''Format the docs.'''
    return ", ".join([doc.page_content for doc in docs])


format_docs = RunnableLambda(format_docs)
```
some_tool:
```python
@tool
def some_tool(x: int, y: str) -> dict:
    '''Some_tool.'''
    return {"x": x, "y": y}
```
prompt:
```python
template = ChatPromptTemplate.from_messages(
    [
        ("system", "You are Cat Agent 007"),
        ("human", "{question}"),
    ]
).with_config({"run_name": "my_template", "tags": ["my_template"]})
```
!!! example
```python
from langchain_core.runnables import RunnableLambda


async def reverse(s: str) -> str:
    return s[::-1]


chain = RunnableLambda(func=reverse)

events = [
    event async for event in chain.astream_events("hello", version="v2")
]

# Will produce the following events
# (run_id and parent_ids have been omitted for brevity):
[
    {
        "data": {"input": "hello"},
        "event": "on_chain_start",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"chunk": "olleh"},
        "event": "on_chain_stream",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
    {
        "data": {"output": "olleh"},
        "event": "on_chain_end",
        "metadata": {},
        "name": "reverse",
        "tags": [],
    },
]
```
```python title="Dispatch custom event"
import asyncio

from langchain_core.callbacks.manager import (
    adispatch_custom_event,
)
from langchain_core.runnables import RunnableLambda, RunnableConfig


async def slow_thing(some_input: str, config: RunnableConfig) -> str:
    """Do something that takes a long time."""
    await asyncio.sleep(1)  # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 1 of 3"},
        config=config,  # Must be included for python < 3.10
    )
    await asyncio.sleep(1)  # Placeholder for some slow operation
    await adispatch_custom_event(
        "progress_event",
        {"message": "Finished step 2 of 3"},
        config=config,  # Must be included for python < 3.10
    )
    await asyncio.sleep(1)  # Placeholder for some slow operation
    return "Done"


slow_thing = RunnableLambda(slow_thing)

async for event in slow_thing.astream_events("some_input", version="v2"):
    print(event)
```
- Parameters:
input (Any) – The input to the Runnable.
config (RunnableConfig | None) – The config to use for the Runnable.
version (Literal['v1', 'v2']) –
The version of the schema to use, either 'v2' or 'v1'.
Users should use 'v2'. 'v1' is for backwards compatibility and will be deprecated in 0.4.0.
No default will be assigned until the API is stabilized.
Custom events will only be surfaced in 'v2'.
include_names (Sequence[str] | None) – Only include events from Runnable objects with matching names.
include_types (Sequence[str] | None) – Only include events from Runnable objects with matching types.
include_tags (Sequence[str] | None) – Only include events from Runnable objects with matching tags.
exclude_names (Sequence[str] | None) – Exclude events from Runnable objects with matching names.
exclude_types (Sequence[str] | None) – Exclude events from Runnable objects with matching types.
exclude_tags (Sequence[str] | None) – Exclude events from Runnable objects with matching tags.
**kwargs (Any) –
Additional keyword arguments to pass to the Runnable.
These will be passed to astream_log as this implementation of astream_events is built on top of astream_log.
- Yields:
An async stream of StreamEvent.
- Raises:
NotImplementedError – If the version is not ‘v1’ or ‘v2’.
- Return type:
AsyncIterator[StreamEvent]
- async astream_log(input, config=None, *, diff=True, with_streamed_output_list=True, include_names=None, include_types=None, include_tags=None, exclude_names=None, exclude_types=None, exclude_tags=None, **kwargs)[source]
Stream all output from a Runnable, as reported to the callback system.
This includes all inner runs of LLMs, Retrievers, Tools, etc.
Output is streamed as Log objects, which include a list of Jsonpatch ops that describe how the state of the run has changed in each step, and the final state of the run.
The Jsonpatch ops can be applied in order to construct state.
- Parameters:
input (Any) – The input to the Runnable.
config (RunnableConfig | None) – The config to use for the Runnable.
diff (bool) – Whether to yield diffs between each step or the current state.
with_streamed_output_list (bool) – Whether to yield the streamed_output list.
include_names (Sequence[str] | None) – Only include logs with these names.
include_types (Sequence[str] | None) – Only include logs with these types.
include_tags (Sequence[str] | None) – Only include logs with these tags.
exclude_names (Sequence[str] | None) – Exclude logs with these names.
exclude_types (Sequence[str] | None) – Exclude logs with these types.
exclude_tags (Sequence[str] | None) – Exclude logs with these tags.
**kwargs (Any) – Additional keyword arguments to pass to the Runnable.
- Yields:
A RunLogPatch or RunLog object.
- Return type:
AsyncIterator[RunLogPatch] | AsyncIterator[RunLog]
- async atransform(input, config=None, **kwargs)[source]
Transform inputs to outputs.
Default implementation of atransform, which buffers input and calls astream.
Subclasses must override this method if they can start producing output while input is still being generated.
- Parameters:
input (AsyncIterator[Input]) – An async iterator of inputs to the Runnable.
config (RunnableConfig | None) – The config to use for the Runnable.
**kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.
- Yields:
The output of the Runnable.
- Return type:
AsyncIterator[Output]
- batch(inputs, config=None, *, return_exceptions=False, **kwargs)[source]
Default implementation runs invoke in parallel using a thread pool executor.
The default implementation of batch works well for IO bound runnables.
Subclasses must override this method if they can batch more efficiently; e.g., if the underlying Runnable uses an API which supports a batch mode.
- Parameters:
inputs (list[Input]) – A list of inputs to the Runnable.
config (RunnableConfig | list[RunnableConfig] | None) –
A config to use when invoking the Runnable. The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys.
Please refer to RunnableConfig for more details.
return_exceptions (bool) – Whether to return exceptions instead of raising them.
**kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.
- Returns:
A list of outputs from the Runnable.
- Return type:
list[Output]
- batch_as_completed(inputs, config=None, *, return_exceptions=False, **kwargs)[source]
Run invoke in parallel on a list of inputs.
Yields results as they complete.
- Parameters:
inputs (Sequence[Input]) – A list of inputs to the Runnable.
config (RunnableConfig | Sequence[RunnableConfig] | None) –
A config to use when invoking the Runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys.
Please refer to RunnableConfig for more details.
return_exceptions (bool) – Whether to return exceptions instead of raising them.
**kwargs (Any | None) – Additional keyword arguments to pass to the Runnable.
- Yields:
Tuples of the index of the input and the output from the Runnable.
- Return type:
Iterator[tuple[int, Output | Exception]]
- bind(**kwargs)[source]
Bind arguments to a Runnable, returning a new Runnable.
Useful when a Runnable in a chain requires an argument that is not in the output of the previous Runnable or included in the user input.
- Parameters:
**kwargs (Any) – The arguments to bind to the Runnable.
- Returns:
A new Runnable with the arguments bound.
- Return type:
Runnable[Input, Output]
Example
```python
from langchain_core.output_parsers import StrOutputParser
from langchain_ollama import ChatOllama

model = ChatOllama(model="llama3.1")

# Without bind
chain = model | StrOutputParser()

chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two three four five.'

# With bind
chain = model.bind(stop=["three"]) | StrOutputParser()

chain.invoke("Repeat quoted words exactly: 'One two three four five.'")
# Output is 'One two'
```
- bind_tools(tools, *, tool_choice=None, **kwargs)[source][source]
Bind tools to the model.
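A minimal sketch (the tool is hypothetical; assumes FakeToolModel simply returns its canned responses while recording the bound tools):
```python
from langchain_core.tools import tool

from langgraph_agent_toolkit.core.models import FakeToolModel


@tool
def get_weather(location: str) -> str:
    """Get weather at a location."""
    return f"Sunny in {location}"


model = FakeToolModel(responses=["Hello!"])
model_with_tools = model.bind_tools([get_weather])
model_with_tools.invoke("What's the weather in Boston?")
```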
- config_schema(*, include=None)[source]
The type of config this Runnable accepts specified as a Pydantic model.
To mark a field as configurable, see the configurable_fields and configurable_alternatives methods.
- configurable_alternatives(which, *, default_key='default', prefix_keys=False, **kwargs)[source]
Configure alternatives for Runnable objects that can be set at runtime.
- Parameters:
which (ConfigurableField) – The ConfigurableField instance that will be used to select the alternative.
default_key (str) – The default key to use if no alternative is selected.
prefix_keys (bool) – Whether to prefix the keys with the ConfigurableField id.
**kwargs (Runnable[Input, Output] | Callable[[], Runnable[Input, Output]]) – A dictionary of keys to Runnable instances or callables that return Runnable instances.
- Returns:
A new Runnable with the alternatives configured.
- Return type:
RunnableSerializable
!!! example
```python
from langchain_anthropic import ChatAnthropic
from langchain_core.runnables.utils import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatAnthropic(
    model_name="claude-sonnet-4-5-20250929"
).configurable_alternatives(
    ConfigurableField(id="llm"),
    default_key="anthropic",
    openai=ChatOpenAI(),
)

# uses the default model ChatAnthropic
print(model.invoke("which organization created you?").content)

# uses ChatOpenAI
print(
    model.with_config(configurable={"llm": "openai"})
    .invoke("which organization created you?")
    .content
)
```
- configurable_fields(**kwargs)[source]
Configure particular Runnable fields at runtime.
- Parameters:
**kwargs (ConfigurableField | ConfigurableFieldSingleOption | ConfigurableFieldMultiOption) – A dictionary of ConfigurableField instances to configure.
- Raises:
ValueError – If a configuration key is not found in the Runnable.
- Returns:
A new Runnable with the fields configured.
- Return type:
RunnableSerializable
!!! example
```python
from langchain_core.runnables import ConfigurableField
from langchain_openai import ChatOpenAI

model = ChatOpenAI(max_tokens=20).configurable_fields(
    max_tokens=ConfigurableField(
        id="output_token_number",
        name="Max tokens in the output",
        description="The maximum number of tokens in the output",
    )
)

# max_tokens = 20
print(
    "max_tokens_20: ",
    model.invoke("tell me something about chess").content,
)

# max_tokens = 200
print(
    "max_tokens_200: ",
    model.with_config(configurable={"output_token_number": 200})
    .invoke("tell me something about chess")
    .content,
)
```
- copy(*, include=None, exclude=None, update=None, deep=False)[source]
Returns a copy of the model.
- !!! warning “Deprecated”
This method is now deprecated; use model_copy instead.
If you need include or exclude, use:
```python
data = self.model_dump(include=include, exclude=exclude, round_trip=True)
data = {**data, **(update or {})}
copied = self.model_validate(data)
```
- Parameters:
include (AbstractSetIntStr | MappingIntStrAny | None) – Optional set or mapping specifying which fields to include in the copied model.
exclude (AbstractSetIntStr | MappingIntStrAny | None) – Optional set or mapping specifying which fields to exclude in the copied model.
update (Dict[str, Any] | None) – Optional dictionary of field-value pairs to override field values in the copied model.
deep (bool) – If True, the values of fields that are Pydantic models will be deep-copied.
- Returns:
A copy of the model with included, excluded and updated fields as specified.
- Return type:
Self
- generate(messages, stop=None, callbacks=None, *, tags=None, metadata=None, run_name=None, run_id=None, **kwargs)[source]
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
Take advantage of batched calls,
Need more output from the model than just the top generated value,
- Are building chains that are agnostic to the underlying language model
type (e.g., pure text completion models vs chat models).
- Parameters:
messages (list[list[BaseMessage]]) – List of list of messages.
stop (list[str] | None) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks (Callbacks) –
Callbacks to pass through.
Used for executing additional functionality, such as logging or streaming, throughout generation.
run_name (str | None) – The name of the run.
run_id (uuid.UUID | None) – The ID of the run.
**kwargs (Any) –
Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
- Returns:
- An LLMResult, which contains a list of candidate Generations for each
input prompt and additional model provider-specific output.
- Return type:
LLMResult
- generate_prompt(prompts, stop=None, callbacks=None, **kwargs)[source]
Pass a sequence of prompts to the model and return model generations.
This method should make use of batched calls for models that expose a batched API.
Use this method when you want to:
Take advantage of batched calls,
Need more output from the model than just the top generated value,
- Are building chains that are agnostic to the underlying language model
type (e.g., pure text completion models vs chat models).
- Parameters:
prompts (list[PromptValue]) –
List of PromptValue objects.
A PromptValue is an object that can be converted to match the format of any language model (string for pure text generation models and BaseMessage objects for chat models).
stop (list[str] | None) – Stop words to use when generating. Model output is cut off at the first occurrence of any of these substrings.
callbacks (list[BaseCallbackHandler] | BaseCallbackManager | None) –
Callbacks to pass through.
Used for executing additional functionality, such as logging or streaming, throughout generation.
**kwargs (Any) –
Arbitrary additional keyword arguments.
These are usually passed to the model provider API call.
- Returns:
- An LLMResult, which contains a list of candidate Generation objects for
each input prompt and additional model provider-specific output.
- Return type:
LLMResult
- get_config_jsonschema(*, include=None)[source]
Get a JSON schema that represents the config of the Runnable.
- Parameters:
include (Sequence[str] | None) – A list of fields to include in the config schema.
- Returns:
A JSON schema that represents the config of the Runnable.
- Return type:
dict[str, Any]
!!! version-added “Added in langchain-core 0.3.0”
- get_graph(config=None)[source]
Return a graph representation of this Runnable.
- Parameters:
config (RunnableConfig | None)
- Return type:
Graph
- get_input_jsonschema(config=None)[source]
Get a JSON schema that represents the input to the Runnable.
- Parameters:
config (RunnableConfig | None) – A config to use when generating the schema.
- Returns:
A JSON schema that represents the input to the Runnable.
- Return type:
dict[str, Any]
Example
```python
from langchain_core.runnables import RunnableLambda


def add_one(x: int) -> int:
    return x + 1


runnable = RunnableLambda(add_one)

print(runnable.get_input_jsonschema())
```
!!! version-added “Added in langchain-core 0.3.0”
- get_input_schema(config=None)[source]
Get a Pydantic model that can be used to validate input to the Runnable.
Runnable objects that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the Runnable is invoked with.
This method allows to get an input schema for a specific configuration.
- classmethod get_lc_namespace()[source]
Get the namespace of the LangChain object.
For example, if the class is langchain.llms.openai.OpenAI, then the namespace is [“langchain”, “llms”, “openai”]
- get_num_tokens(text)[source]
Get the number of tokens present in the text.
Useful for checking if an input fits in a model’s context window.
This should be overridden by model-specific implementations to provide accurate token counts via model-specific tokenizers.
- get_num_tokens_from_messages(messages, tools=None)[source]
Get the number of tokens in the messages.
Useful for checking if an input fits in a model’s context window.
This should be overridden by model-specific implementations to provide accurate token counts via model-specific tokenizers.
!!! note
- The base implementation of get_num_tokens_from_messages ignores tool schemas.
- The base implementation of get_num_tokens_from_messages adds additional prefixes to messages to represent user roles, which will add to the overall token count. Model-specific implementations may choose to handle this differently.
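Example (a sketch for checking prompt size before a call; assumes the tiktoken package is installed and the model name is illustrative):
```python
from langchain_core.messages import HumanMessage, SystemMessage

from langgraph_agent_toolkit.core.models import ChatOpenAIPatched

model = ChatOpenAIPatched(model="gpt-4o-mini")  # illustrative model name
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Summarize the plot of Hamlet in one sentence."),
]
# The count includes per-message role prefixes; tool schemas are ignored
# by the base implementation.
print(model.get_num_tokens_from_messages(messages))
```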
- get_output_jsonschema(config=None)[source]
Get a JSON schema that represents the output of the Runnable.
- Parameters:
config (RunnableConfig | None) – A config to use when generating the schema.
- Returns:
A JSON schema that represents the output of the Runnable.
- Return type:
dict[str, Any]
Example
```python
from langchain_core.runnables import RunnableLambda


def add_one(x: int) -> int:
    return x + 1


runnable = RunnableLambda(add_one)
print(runnable.get_output_jsonschema())
```
!!! version-added “Added in langchain-core 0.3.0”
- get_output_schema(config=None)[source]
Get a Pydantic model that can be used to validate the output of the Runnable.
Runnable objects that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the Runnable is invoked with.
This method allows you to get an output schema for a specific configuration.
- get_prompts(config=None)[source]
Return a list of prompts used by this Runnable.
- Parameters:
config (RunnableConfig | None)
- Return type:
list[BasePromptTemplate]
- property input_schema: type[BaseModel]
The type of input this Runnable accepts specified as a Pydantic model.
- invoke(input, config=None, *, stop=None, **kwargs)[source]
Transform a single input into an output.
- Parameters:
input (LanguageModelInput) – The input to the Runnable.
config (RunnableConfig | None) –
A config to use when invoking the Runnable.
The config supports standard keys like ‘tags’, ‘metadata’ for tracing purposes, ‘max_concurrency’ for controlling how much work to do in parallel, and other keys.
Please refer to RunnableConfig for more details.
kwargs (Any)
- Returns:
The output of the Runnable.
- Return type:
AIMessage
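Example (a minimal sketch; the model name is illustrative and an OPENAI_API_KEY is assumed to be set in the environment):
```python
from langgraph_agent_toolkit.core.models import ChatOpenAIPatched

model = ChatOpenAIPatched(model="gpt-4o-mini", temperature=0)  # illustrative model name
msg = model.invoke("Name one prime number greater than 10.", stop=["\n"])
print(msg.content)  # the text content of the returned AIMessage
```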
- classmethod is_lc_serializable()[source]
Is this class serializable?
By design, even if a class inherits from Serializable, it is not serializable by default. This is to prevent accidental serialization of objects that should not be serialized.
- Returns:
Whether the class is serializable. Default is False.
- Return type:
bool
- json(*, include=None, exclude=None, by_alias=False, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=PydanticUndefined, models_as_dict=PydanticUndefined, **dumps_kwargs)[source]
- Parameters:
include (set[int] | set[str] | Mapping[int, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | Mapping[str, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | None)
exclude (set[int] | set[str] | Mapping[int, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | Mapping[str, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | None)
by_alias (bool)
exclude_unset (bool)
exclude_defaults (bool)
exclude_none (bool)
models_as_dict (bool)
dumps_kwargs (Any)
- Return type:
str
- property lc_attributes: dict
List of attribute names that should be included in the serialized kwargs.
These attributes must be accepted by the constructor.
Default is an empty dictionary.
- classmethod lc_id()[source]
Return a unique identifier for this class for serialization purposes.
The unique identifier is a list of strings that describes the path to the object.
For example, for the class langchain.llms.openai.OpenAI, the id is [“langchain”, “llms”, “openai”, “OpenAI”].
- property lc_secrets: dict[str, str]
A map of constructor argument names to secret ids.
For example, {“openai_api_key”: “OPENAI_API_KEY”}
- map()[source]
Return a new Runnable that maps a list of inputs to a list of outputs.
Calls invoke with each input.
- Returns:
A new Runnable that maps a list of inputs to a list of outputs.
- Return type:
Runnable[list[Input], list[Output]]
Example
```python
from langchain_core.runnables import RunnableLambda


def _lambda(x: int) -> int:
    return x + 1


runnable = RunnableLambda(_lambda)
print(runnable.map().invoke([1, 2, 3]))  # [2, 3, 4]
```
- model_computed_fields = {}
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'ignore', 'protected_namespaces': ()}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- classmethod model_construct(_fields_set=None, **values)[source]
Creates a new instance of the Model class with validated data.
Creates a new model setting __dict__ and __pydantic_fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed.
- !!! note
model_construct() generally respects the model_config.extra setting on the provided model. That is, if model_config.extra == ‘allow’, then all extra passed values are added to the model instance’s __dict__ and __pydantic_extra__ fields. If model_config.extra == ‘ignore’ (the default), then all extra passed values are ignored. Because no validation is performed with a call to model_construct(), having model_config.extra == ‘forbid’ does not result in an error if extra values are passed, but they will be ignored.
- Parameters:
_fields_set (set[str] | None) – A set of field names that were originally explicitly set during instantiation. If provided, this is directly used for the [model_fields_set][pydantic.BaseModel.model_fields_set] attribute. Otherwise, the field names from the values argument will be used.
values (Any) – Trusted or pre-validated data dictionary.
- Returns:
A new instance of the Model class with validated data.
- Return type:
Self
- model_copy(*, update=None, deep=False)[source]
- !!! abstract “Usage Documentation”
[model_copy](../concepts/models.md#model-copy)
Returns a copy of the model.
- !!! note
The underlying instance’s [__dict__][object.__dict__] attribute is copied. This might have unexpected side effects if you store anything in it, on top of the model fields (e.g. the value of [cached properties][functools.cached_property]).
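Example (a sketch; the model name and update value are illustrative):
```python
from langgraph_agent_toolkit.core.models import ChatOpenAIPatched

model = ChatOpenAIPatched(model="gpt-4o-mini")  # illustrative model name
# Values passed via update are applied without validation.
copied_model = model.model_copy(update={"temperature": 0.2})
print(copied_model.temperature)  # 0.2
```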
- model_dump(*, mode='python', include=None, exclude=None, context=None, by_alias=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, exclude_computed_fields=False, round_trip=False, warnings=True, fallback=None, serialize_as_any=False)[source]
- !!! abstract “Usage Documentation”
[model_dump](../concepts/serialization.md#python-mode)
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
- Parameters:
mode (Literal['json', 'python'] | str) – The mode in which to_python should run. If mode is ‘json’, the output will only contain JSON serializable types. If mode is ‘python’, the output may contain non-JSON-serializable Python objects.
include (set[int] | set[str] | Mapping[int, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | Mapping[str, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | None) – A set of fields to include in the output.
exclude (set[int] | set[str] | Mapping[int, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | Mapping[str, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | None) – A set of fields to exclude from the output.
context (Any | None) – Additional context to pass to the serializer.
by_alias (bool | None) – Whether to use the field’s alias in the dictionary key if defined.
exclude_unset (bool) – Whether to exclude fields that have not been explicitly set.
exclude_defaults (bool) – Whether to exclude fields that are set to their default value.
exclude_none (bool) – Whether to exclude fields that have a value of None.
exclude_computed_fields (bool) – Whether to exclude computed fields. While this can be useful for round-tripping, it is usually recommended to use the dedicated round_trip parameter instead.
round_trip (bool) – If True, dumped values should be valid as input for non-idempotent types such as Json[T].
warnings (bool | Literal['none', 'warn', 'error']) – How to handle serialization errors. False/”none” ignores them, True/”warn” logs errors, “error” raises a [PydanticSerializationError][pydantic_core.PydanticSerializationError].
fallback (Callable[[Any], Any] | None) – A function to call when an unknown value is encountered. If not provided, a [PydanticSerializationError][pydantic_core.PydanticSerializationError] error is raised.
serialize_as_any (bool) – Whether to serialize fields with duck-typing serialization behavior.
- Returns:
A dictionary representation of the model.
- Return type:
dict[str, Any]
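Example (a sketch; the model name is illustrative):
```python
from langgraph_agent_toolkit.core.models import ChatOpenAIPatched

model = ChatOpenAIPatched(model="gpt-4o-mini")  # illustrative model name
# In 'python' mode secret fields stay wrapped in SecretStr, so the API key
# is not exposed in the dumped dictionary.
print(model.model_dump(exclude_none=True, exclude_unset=True))
```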
- model_dump_json(*, indent=None, ensure_ascii=False, include=None, exclude=None, context=None, by_alias=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, exclude_computed_fields=False, round_trip=False, warnings=True, fallback=None, serialize_as_any=False)[source]
- !!! abstract “Usage Documentation”
[model_dump_json](../concepts/serialization.md#json-mode)
Generates a JSON representation of the model using Pydantic’s to_json method.
- Parameters:
indent (int | None) – Indentation to use in the JSON output. If None is passed, the output will be compact.
ensure_ascii (bool) – If True, the output is guaranteed to have all incoming non-ASCII characters escaped. If False (the default), these characters will be output as-is.
include (set[int] | set[str] | Mapping[int, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | Mapping[str, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | None) – Field(s) to include in the JSON output.
exclude (set[int] | set[str] | Mapping[int, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | Mapping[str, set[int] | set[str] | Mapping[int, IncEx | bool] | Mapping[str, IncEx | bool] | bool] | None) – Field(s) to exclude from the JSON output.
context (Any | None) – Additional context to pass to the serializer.
by_alias (bool | None) – Whether to serialize using field aliases.
exclude_unset (bool) – Whether to exclude fields that have not been explicitly set.
exclude_defaults (bool) – Whether to exclude fields that are set to their default value.
exclude_none (bool) – Whether to exclude fields that have a value of None.
exclude_computed_fields (bool) – Whether to exclude computed fields. While this can be useful for round-tripping, it is usually recommended to use the dedicated round_trip parameter instead.
round_trip (bool) – If True, dumped values should be valid as input for non-idempotent types such as Json[T].
warnings (bool | Literal['none', 'warn', 'error']) – How to handle serialization errors. False/”none” ignores them, True/”warn” logs errors, “error” raises a [PydanticSerializationError][pydantic_core.PydanticSerializationError].
fallback (Callable[[Any], Any] | None) – A function to call when an unknown value is encountered. If not provided, a [PydanticSerializationError][pydantic_core.PydanticSerializationError] error is raised.
serialize_as_any (bool) – Whether to serialize fields with duck-typing serialization behavior.
- Returns:
A JSON string representation of the model.
- Return type:
str
- property model_extra: dict[str, Any] | None
Get extra fields set during validation.
- Returns:
A dictionary of extra fields, or None if config.extra is not set to “allow”.
- model_fields = {'cache': FieldInfo(annotation=Union[BaseCache, bool, NoneType], required=False, default=None, exclude=True), 'callbacks': FieldInfo(annotation=Union[list[BaseCallbackHandler], BaseCallbackManager, NoneType], required=False, default=None, exclude=True), 'custom_get_token_ids': FieldInfo(annotation=Union[Callable[list, list[int]], NoneType], required=False, default=None, exclude=True), 'disable_streaming': FieldInfo(annotation=Union[bool, Literal['tool_calling']], required=False, default=False), 'i': FieldInfo(annotation=int, required=False, default=0), 'metadata': FieldInfo(annotation=Union[dict[str, Any], NoneType], required=False, default=None, exclude=True), 'name': FieldInfo(annotation=Union[str, NoneType], required=False, default=None), 'output_version': FieldInfo(annotation=Union[str, NoneType], required=False, default_factory=get_from_env_fn), 'profile': FieldInfo(annotation=Union[ModelProfile, NoneType], required=False, default=None, exclude=True), 'rate_limiter': FieldInfo(annotation=Union[BaseRateLimiter, NoneType], required=False, default=None, exclude=True), 'responses': FieldInfo(annotation=List, required=True), 'sleep': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'tags': FieldInfo(annotation=Union[list[str], NoneType], required=False, default=None, exclude=True), 'verbose': FieldInfo(annotation=bool, required=False, default_factory=_get_verbosity, exclude=True, repr=False)}
- property model_fields_set: set[str]
Returns the set of fields that have been explicitly set on this model instance.
- Returns:
- A set of strings representing the fields that have been set,
i.e. that were not filled from defaults.
- classmethod model_json_schema(by_alias=True, ref_template=DEFAULT_REF_TEMPLATE, schema_generator=GenerateJsonSchema, mode='validation', *, union_format='any_of')[source]
Generates a JSON schema for a model class.
- Parameters:
by_alias (bool) – Whether to use attribute aliases or not.
ref_template (str) – The reference template.
union_format (Literal['any_of', 'primitive_type_array']) –
The format to use when combining schemas from unions together. Can be one of:
‘any_of’: Use the [anyOf](https://json-schema.org/understanding-json-schema/reference/combining#anyOf) keyword to combine schemas (the default).
‘primitive_type_array’: Use the [type](https://json-schema.org/understanding-json-schema/reference/type) keyword as an array of strings, containing each type of the combination. If any of the schemas is not a primitive type (string, boolean, null, integer or number) or contains constraints/metadata, falls back to any_of.
schema_generator (type[GenerateJsonSchema]) – To override the logic used to generate the JSON schema, as a subclass of GenerateJsonSchema with your desired modifications
mode (Literal['validation', 'serialization']) – The mode in which to generate the schema.
- Returns:
The JSON schema for the given model class.
- Return type:
dict[str, Any]
- classmethod model_parametrized_name(params)[source]
Compute the class name for parametrizations of generic classes.
This method can be overridden to achieve a custom naming scheme for generic BaseModels.
- Parameters:
params (tuple[type[Any], ...]) – Tuple of types of the class. Given a generic class Model with 2 type variables and a concrete model Model[str, int], the value (str, int) would be passed to params.
- Returns:
String representing the new class where params are passed to cls as type variables.
- Raises:
TypeError – Raised when trying to generate concrete names for non-generic models.
- Return type:
str
- model_post_init(context, /)[source]
Override this method to perform additional initialization after __init__ and model_construct. This is useful if you want to do some validation that requires the entire model to be initialized.
- Parameters:
context (Any)
- Return type:
None
- classmethod model_rebuild(*, force=False, raise_errors=True, _parent_namespace_depth=2, _types_namespace=None)[source]
Try to rebuild the pydantic-core schema for the model.
This may be necessary when one of the annotations is a ForwardRef which could not be resolved during the initial attempt to build the schema, and automatic rebuilding fails.
- Parameters:
force (bool) – Whether to force the rebuilding of the model schema, defaults to False.
raise_errors (bool) – Whether to raise errors, defaults to True.
_parent_namespace_depth (int) – The depth level of the parent namespace, defaults to 2.
_types_namespace (MappingNamespace | None) – The types namespace, defaults to None.
- Returns:
Returns None if the schema is already “complete” and rebuilding was not required. If rebuilding _was_ required, returns True if rebuilding was successful, otherwise False.
- Return type:
bool | None
- classmethod model_validate(obj, *, strict=None, extra=None, from_attributes=None, context=None, by_alias=None, by_name=None)[source]
Validate a pydantic model instance.
- Parameters:
obj (Any) – The object to validate.
strict (bool | None) – Whether to enforce types strictly.
extra (Literal['allow', 'ignore', 'forbid'] | None) – Whether to ignore, allow, or forbid extra data during model validation. See the [extra configuration value][pydantic.ConfigDict.extra] for details.
from_attributes (bool | None) – Whether to extract data from object attributes.
context (Any | None) – Additional context to pass to the validator.
by_alias (bool | None) – Whether to use the field’s alias when validating against the provided input data.
by_name (bool | None) – Whether to use the field’s name when validating against the provided input data.
- Raises:
ValidationError – If the object could not be validated.
- Returns:
The validated model instance.
- Return type:
Self
- classmethod model_validate_json(json_data, *, strict=None, extra=None, context=None, by_alias=None, by_name=None)[source]
- !!! abstract “Usage Documentation”
[JSON Parsing](../concepts/json.md#json-parsing)
Validate the given JSON data against the Pydantic model.
- Parameters:
json_data (str | bytes | bytearray) – The JSON data to validate.
strict (bool | None) – Whether to enforce types strictly.
extra (Literal['allow', 'ignore', 'forbid'] | None) – Whether to ignore, allow, or forbid extra data during model validation. See the [extra configuration value][pydantic.ConfigDict.extra] for details.
context (Any | None) – Extra variables to pass to the validator.
by_alias (bool | None) – Whether to use the field’s alias when validating against the provided input data.
by_name (bool | None) – Whether to use the field’s name when validating against the provided input data.
- Returns:
The validated Pydantic model.
- Raises:
ValidationError – If json_data is not a JSON string or the object could not be validated.
- Return type:
Self
- classmethod model_validate_strings(obj, *, strict=None, extra=None, context=None, by_alias=None, by_name=None)[source]
Validate the given object with string data against the Pydantic model.
- Parameters:
obj (Any) – The object containing string data to validate.
strict (bool | None) – Whether to enforce types strictly.
extra (Literal['allow', 'ignore', 'forbid'] | None) – Whether to ignore, allow, or forbid extra data during model validation. See the [extra configuration value][pydantic.ConfigDict.extra] for details.
context (Any | None) – Extra variables to pass to the validator.
by_alias (bool | None) – Whether to use the field’s alias when validating against the provided input data.
by_name (bool | None) – Whether to use the field’s name when validating against the provided input data.
- Returns:
The validated Pydantic model.
- Return type:
Self
- property output_schema: type[BaseModel]
Output schema.
The type of output this Runnable produces specified as a Pydantic model.
- classmethod parse_file(path, *, content_type=None, encoding='utf8', proto=None, allow_pickle=False)[source]
- classmethod parse_raw(b, *, content_type=None, encoding='utf8', proto=None, allow_pickle=False)[source]
- pick(keys)[source]
Pick keys from the output dict of this Runnable.
!!! example “Pick a single key”
```python
import json

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)
chain = RunnableMap(str=as_str, json=as_json)

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3]}

json_only_chain = chain.pick("json")
json_only_chain.invoke("[1, 2, 3]")
# -> [1, 2, 3]
```
!!! example “Pick a list of keys”
```python
import json
from typing import Any

from langchain_core.runnables import RunnableLambda, RunnableMap

as_str = RunnableLambda(str)
as_json = RunnableLambda(json.loads)


def as_bytes(x: Any) -> bytes:
    return bytes(x, "utf-8")


chain = RunnableMap(str=as_str, json=as_json, bytes=RunnableLambda(as_bytes))

chain.invoke("[1, 2, 3]")
# -> {"str": "[1, 2, 3]", "json": [1, 2, 3], "bytes": b"[1, 2, 3]"}

json_and_bytes_chain = chain.pick(["json", "bytes"])
json_and_bytes_chain.invoke("[1, 2, 3]")
# -> {"json": [1, 2, 3], "bytes": b"[1, 2, 3]"}
```
- pipe(*others, name=None)[source]
Pipe Runnable objects.
Compose this Runnable with Runnable-like objects to make a RunnableSequence.
Equivalent to RunnableSequence(self, *others) or self | others[0] | …
Example
```python
from langchain_core.runnables import RunnableLambda


def add_one(x: int) -> int:
    return x + 1


def mul_two(x: int) -> int:
    return x * 2


runnable_1 = RunnableLambda(add_one)
runnable_2 = RunnableLambda(mul_two)
sequence = runnable_1.pipe(runnable_2)
# Or equivalently:
# sequence = runnable_1 | runnable_2
# sequence = RunnableSequence(first=runnable_1, last=runnable_2)
sequence.invoke(1)
await sequence.ainvoke(1)
# -> 4

sequence.batch([1, 2, 3])
await sequence.abatch([1, 2, 3])
# -> [4, 6, 8]
```
- classmethod schema_json(*, by_alias=True, ref_template=DEFAULT_REF_TEMPLATE, **dumps_kwargs)[source]
- classmethod set_verbose(verbose)[source]
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
- stream(input, config=None, *, stop=None, **kwargs)[source]
Default implementation of stream, which calls invoke.
Subclasses must override this method if they support streaming output.
- Parameters:
input (LanguageModelInput) – The input to the Runnable.
config (RunnableConfig | None) – A config to use when invoking the Runnable.
stop (list[str] | None) – Stop words to use when generating.
kwargs (Any) – Additional keyword arguments, usually passed to the model provider API call.
- Yields:
The output of the Runnable.
- Return type:
Iterator[AIMessageChunk]
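Example (a minimal sketch; the model name is illustrative and an OPENAI_API_KEY is assumed to be set in the environment):
```python
from langgraph_agent_toolkit.core.models import ChatOpenAIPatched

model = ChatOpenAIPatched(model="gpt-4o-mini")  # illustrative model name
for chunk in model.stream("Write a haiku about autumn."):
    # Each chunk is an AIMessageChunk carrying a partial completion.
    print(chunk.content, end="", flush=True)
```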
- to_json()[source]
Serialize the Runnable to JSON.
- Returns:
A JSON-serializable representation of the Runnable.
- Return type:
SerializedConstructor | SerializedNotImplemented
- to_json_not_implemented()[source]
Serialize a “not implemented” object.
- Returns:
SerializedNotImplemented.
- Return type:
SerializedNotImplemented
- transform(input, config=None, **kwargs)[source]
Transform inputs to outputs.
Default implementation of transform, which buffers input and calls stream.
Subclasses must override this method if they can start producing output while input is still being generated.
- with_alisteners(*, on_start=None, on_end=None, on_error=None)[source]
Bind async lifecycle listeners to a Runnable.
Returns a new Runnable.
The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.
- Parameters:
on_start (AsyncListener | None) – Called asynchronously before the Runnable starts running, with the Run object.
on_end (AsyncListener | None) – Called asynchronously after the Runnable finishes running, with the Run object.
on_error (AsyncListener | None) – Called asynchronously if the Runnable throws an error, with the Run object.
- Returns:
A new Runnable with the listeners bound.
- Return type:
Runnable[Input, Output]
Example
```python
import asyncio
import time
from datetime import datetime, timezone

from langchain_core.runnables import Runnable, RunnableLambda


def format_t(timestamp: float) -> str:
    return datetime.fromtimestamp(timestamp, tz=timezone.utc).isoformat()


async def test_runnable(time_to_sleep: int):
    print(f"Runnable[{time_to_sleep}s]: starts at {format_t(time.time())}")
    await asyncio.sleep(time_to_sleep)
    print(f"Runnable[{time_to_sleep}s]: ends at {format_t(time.time())}")


async def fn_start(run_obj: Runnable):
    print(f"on start callback starts at {format_t(time.time())}")
    await asyncio.sleep(3)
    print(f"on start callback ends at {format_t(time.time())}")


async def fn_end(run_obj: Runnable):
    print(f"on end callback starts at {format_t(time.time())}")
    await asyncio.sleep(2)
    print(f"on end callback ends at {format_t(time.time())}")


runnable = RunnableLambda(test_runnable).with_alisteners(
    on_start=fn_start, on_end=fn_end
)


async def concurrent_runs():
    await asyncio.gather(runnable.ainvoke(2), runnable.ainvoke(3))


asyncio.run(concurrent_runs())
# Result:
# on start callback starts at 2025-03-01T07:05:22.875378+00:00
# on start callback starts at 2025-03-01T07:05:22.875495+00:00
# on start callback ends at 2025-03-01T07:05:25.878862+00:00
# on start callback ends at 2025-03-01T07:05:25.878947+00:00
# Runnable[2s]: starts at 2025-03-01T07:05:25.879392+00:00
# Runnable[3s]: starts at 2025-03-01T07:05:25.879804+00:00
# Runnable[2s]: ends at 2025-03-01T07:05:27.881998+00:00
# on end callback starts at 2025-03-01T07:05:27.882360+00:00
# Runnable[3s]: ends at 2025-03-01T07:05:28.881737+00:00
# on end callback starts at 2025-03-01T07:05:28.882428+00:00
# on end callback ends at 2025-03-01T07:05:29.883893+00:00
# on end callback ends at 2025-03-01T07:05:30.884831+00:00
```
- with_config(config=None, **kwargs)[source]
Bind config to a Runnable, returning a new Runnable.
- Parameters:
config (RunnableConfig | None) – The config to bind to the Runnable.
**kwargs (Any) – Additional keyword arguments to pass to the Runnable.
- Returns:
A new Runnable with the config bound.
- Return type:
Runnable[Input, Output]
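Example (a sketch; the model name, tags, and metadata values are illustrative):
```python
from langgraph_agent_toolkit.core.models import ChatOpenAIPatched

model = ChatOpenAIPatched(model="gpt-4o-mini")  # illustrative model name
# Keyword arguments are merged into the bound RunnableConfig.
tagged_model = model.with_config(tags=["production"], metadata={"team": "agents"})
tagged_model.invoke("ping")  # runs with the bound tags and metadata
```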
- with_fallbacks(fallbacks, *, exceptions_to_handle=(Exception,), exception_key=None)[source]
Add fallbacks to a Runnable, returning a new Runnable.
The new Runnable will try the original Runnable, and then each fallback in order, upon failures.
- Parameters:
fallbacks (Sequence[Runnable[Input, Output]]) – A sequence of runnables to try if the original Runnable fails.
exceptions_to_handle (tuple[type[BaseException], ...]) – A tuple of exception types to handle.
exception_key (str | None) –
If string is specified then handled exceptions will be passed to fallbacks as part of the input under the specified key.
If None, exceptions will not be passed to fallbacks.
If used, the base Runnable and its fallbacks must accept a dictionary as input.
- Returns:
- A new Runnable that will try the original Runnable, and then each
Fallback in order, upon failures.
- Return type:
RunnableWithFallbacksT[Input, Output]
Example
```python
from typing import Iterator

from langchain_core.runnables import RunnableGenerator


def _generate_immediate_error(input: Iterator) -> Iterator[str]:
    raise ValueError()
    yield ""


def _generate(input: Iterator) -> Iterator[str]:
    yield from "foo bar"


runnable = RunnableGenerator(_generate_immediate_error).with_fallbacks(
    [RunnableGenerator(_generate)]
)
print("".join(runnable.stream({})))  # foo bar
```
- with_listeners(*, on_start=None, on_end=None, on_error=None)[source]
Bind lifecycle listeners to a Runnable, returning a new Runnable.
The Run object contains information about the run, including its id, type, input, output, error, start_time, end_time, and any tags or metadata added to the run.
- Parameters:
on_start (Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None) – Called before the Runnable starts running, with the Run object.
on_end (Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None) – Called after the Runnable finishes running, with the Run object.
on_error (Callable[[Run], None] | Callable[[Run, RunnableConfig], None] | None) – Called if the Runnable throws an error, with the Run object.
- Returns:
A new Runnable with the listeners bound.
- Return type:
Runnable[Input, Output]
Example
```python
import time

from langchain_core.runnables import RunnableLambda
from langchain_core.tracers.schemas import Run


def test_runnable(time_to_sleep: int):
    time.sleep(time_to_sleep)


def fn_start(run_obj: Run):
    print("start_time:", run_obj.start_time)


def fn_end(run_obj: Run):
    print("end_time:", run_obj.end_time)


chain = RunnableLambda(test_runnable).with_listeners(on_start=fn_start, on_end=fn_end)
chain.invoke(2)
```
- with_retry(*, retry_if_exception_type=(Exception,), wait_exponential_jitter=True, exponential_jitter_params=None, stop_after_attempt=3)[source]
Create a new Runnable that retries the original Runnable on exceptions.
- Parameters:
retry_if_exception_type (tuple[type[BaseException], ...]) – A tuple of exception types to retry on.
wait_exponential_jitter (bool) – Whether to add jitter to the wait time between retries.
stop_after_attempt (int) – The maximum number of attempts to make before giving up.
exponential_jitter_params (ExponentialJitterParams | None) – Parameters for tenacity.wait_exponential_jitter. Namely: initial, max, exp_base, and jitter (all float values).
- Returns:
A new Runnable that retries the original Runnable on exceptions.
- Return type:
Runnable[Input, Output]
Example
```python
from langchain_core.runnables import RunnableLambda

count = 0


def _lambda(x: int) -> None:
    global count
    count = count + 1
    if x == 1:
        raise ValueError("x is 1")
    else:
        pass


runnable = RunnableLambda(_lambda)
try:
    runnable.with_retry(
        stop_after_attempt=2,
        retry_if_exception_type=(ValueError,),
    ).invoke(1)
except ValueError:
    pass
```
- with_structured_output(schema, *, include_raw=False, **kwargs)[source]
Model wrapper that returns outputs formatted to match the given schema.
- Parameters:
schema (dict | type) –
The output schema. Can be passed in as:
An OpenAI function/tool schema,
A JSON Schema,
A TypedDict class,
Or a Pydantic class.
If schema is a Pydantic class then the model output will be a Pydantic instance of that class, and the model-generated fields will be validated by the Pydantic class. Otherwise the model output will be a dict and will not be validated.
See langchain_core.utils.function_calling.convert_to_openai_tool for more on how to properly specify types and descriptions of schema fields when specifying a Pydantic or TypedDict class.
include_raw (bool) –
If False then only the parsed structured output is returned.
If an error occurs during model output parsing it will be raised.
If True then both the raw model response (a BaseMessage) and the parsed model response will be returned.
If an error occurs during output parsing it will be caught and returned as well.
The final output is always a dict with keys ‘raw’, ‘parsed’, and ‘parsing_error’.
kwargs (Any)
- Raises:
ValueError – If there are any unsupported kwargs.
NotImplementedError – If the model does not implement with_structured_output().
- Returns:
- A Runnable that takes same inputs as a
langchain_core.language_models.chat.BaseChatModel. If include_raw is False and schema is a Pydantic class, Runnable outputs an instance of schema (i.e., a Pydantic object). Otherwise, if include_raw is False then Runnable outputs a dict.
If include_raw is True, then Runnable outputs a dict with keys:
’raw’: BaseMessage
- ’parsed’: None if there was a parsing error, otherwise the type
depends on the schema as described above.
’parsing_error’: BaseException | None
- Return type:
Runnable[LanguageModelInput, Dict | BaseModel]
???+ example “Pydantic schema (include_raw=False)”
```python
from pydantic import BaseModel


class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str


model = ChatModel(model="model-name", temperature=0)
structured_model = model.with_structured_output(AnswerWithJustification)

structured_model.invoke("What weighs more a pound of bricks or a pound of feathers")
# -> AnswerWithJustification(
#     answer='They weigh the same',
#     justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'
# )
```
??? example “Pydantic schema (include_raw=True)”
```python
from pydantic import BaseModel


class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str


model = ChatModel(model="model-name", temperature=0)
structured_model = model.with_structured_output(AnswerWithJustification, include_raw=True)

structured_model.invoke("What weighs more a pound of bricks or a pound of feathers")
# -> {
#     'raw': AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Ao02pnFYXD6GN1yzc0uXPsvF', 'function': {'arguments': '{"answer":"They weigh the same.","justification":"Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ."}', 'name': 'AnswerWithJustification'}, 'type': 'function'}]}),
#     'parsed': AnswerWithJustification(answer='They weigh the same.', justification='Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume or density of the objects may differ.'),
#     'parsing_error': None
# }
```
??? example “Dictionary schema (include_raw=False)”
```python
from pydantic import BaseModel

from langchain_core.utils.function_calling import convert_to_openai_tool


class AnswerWithJustification(BaseModel):
    '''An answer to the user question along with justification for the answer.'''

    answer: str
    justification: str


dict_schema = convert_to_openai_tool(AnswerWithJustification)
model = ChatModel(model="model-name", temperature=0)
structured_model = model.with_structured_output(dict_schema)

structured_model.invoke("What weighs more a pound of bricks or a pound of feathers")
# -> {
#     'answer': 'They weigh the same',
#     'justification': 'Both a pound of bricks and a pound of feathers weigh one pound. The weight is the same, but the volume and density of the two substances differ.'
# }
```
!!! warning “Behavior changed in langchain-core 0.2.26”
Added support for TypedDict class.
- with_types(*, input_type=None, output_type=None)[source]
Bind input and output types to a Runnable, returning a new Runnable.
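Example (a sketch using RunnableLambda, whose input/output types cannot otherwise be inferred from a lambda):
```python
from langchain_core.runnables import RunnableLambda

runnable = RunnableLambda(lambda x: x + 1).with_types(input_type=int, output_type=int)
print(runnable.get_input_jsonschema())  # reflects the declared int input type
```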
- responses: List
- rate_limiter: BaseRateLimiter | None
An optional rate limiter to use for limiting the number of requests.
- disable_streaming: bool | Literal['tool_calling']
Whether to disable streaming for this model.
If streaming is bypassed, then stream/astream/astream_events will defer to invoke/ainvoke.
If True, will always bypass streaming case.
If ‘tool_calling’, will bypass the streaming case only when the model is called with a tools keyword argument. In other words, LangChain will automatically switch to non-streaming behavior (invoke) only when the tools argument is provided. This offers the best of both worlds.
If False (default), will always use the streaming case if available.
The main reason for this flag is that code might be written using stream and a user may want to swap out a given model for another model whose implementation does not properly support streaming.
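A configuration sketch (the model name is illustrative):
```python
from langgraph_agent_toolkit.core.models import ChatOpenAIPatched

# Stream normally, but fall back to invoke whenever tools are bound.
model = ChatOpenAIPatched(model="gpt-4o-mini", disable_streaming="tool_calling")
```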
- output_version: str | None
Version of AIMessage output format to store in message content.
AIMessage.content_blocks will lazily parse the contents of content into a standard format. This flag can be used to additionally store the standard format in message content, e.g., for serialization purposes.
Supported values:
‘v0’: provider-specific format in content (can lazily parse with content_blocks)
‘v1’: standardized format in content (consistent with content_blocks)
Partner packages (e.g., [langchain-openai](https://pypi.org/project/langchain-openai)) can also use this field to roll out new content formats in a backward-compatible way.
!!! version-added “Added in langchain-core 1.0.0”
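A configuration sketch (the model name is illustrative):
```python
from langgraph_agent_toolkit.core.models import ChatOpenAIPatched

# Store the standardized content-block format directly in message content.
model = ChatOpenAIPatched(model="gpt-4o-mini", output_version="v1")
```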
- profile: ModelProfile | None
Profile detailing model capabilities.
- !!! warning “Beta feature”
This is a beta feature. The format of model profiles is subject to change.
If not specified, automatically loaded from the provider package on initialization if data is available.
Example profile data includes context window sizes, supported modalities, or support for tool calling, structured output, and other features.
!!! version-added “Added in langchain-core 1.1.0”
- cache: BaseCache | bool | None
Whether to cache the response.
If True, will use the global cache.
If False, will not use a cache.
If None, will use the global cache if it’s set, otherwise no cache.
If instance of BaseCache, will use the provided cache.
Caching is not currently supported for streaming methods of models.
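A caching sketch using the in-memory cache from langchain-core (the model name and prompt are illustrative):
```python
from langchain_core.caches import InMemoryCache

from langgraph_agent_toolkit.core.models import ChatOpenAIPatched

model = ChatOpenAIPatched(model="gpt-4o-mini", cache=InMemoryCache())
model.invoke("What is 2 + 2?")  # first call hits the provider API
model.invoke("What is 2 + 2?")  # identical second call is served from the cache
```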
- callbacks: Callbacks
Callbacks to add to the run trace.
- class langgraph_agent_toolkit.core.models.EmbeddingModelFactory[source][source]
Bases:
object
Factory for creating embedding model instances.
- static create(model_provider, model_name=None, model_parameter_values=None, **kwargs)[source][source]
Create and return an embedding model instance.
- Parameters:
model_provider (ModelProvider) – The model provider to use. This should be one of the supported model providers.
model_name (str | None) – The name of the model to use. If not provided, an error will be raised.
model_parameter_values (Tuple[Tuple[str, Any], ...] | None) – The values for the model parameters as a tuple of (key, value) pairs. If not provided, empty dict will be used.
**kwargs (Any) – Additional keyword arguments to pass to the model.
- Returns:
An instance of the requested embedding model
- Raises:
ValueError – If the requested model is not supported or model_name is not provided
- Return type:
Embeddings
Examples
>>> model = EmbeddingModelFactory.create(
...     model_provider=ModelProvider.OPENAI,
...     model_name="text-embedding-3-small",
...     openai_api_key="sk-..."
... )
- classmethod get_model_from_config(config, **override_params)[source][source]
Create an embedding model from a configuration dictionary.
- Parameters:
config (dict) – Configuration dictionary with keys such as “provider”, “name”, and any model parameters (see the example below).
**override_params (Any) – Parameter values that override those in the config.
- Returns:
An Embeddings instance
- Return type:
Embeddings
Example
>>> config = {"provider": "openai", "name": "text-embedding-3-small", "api_key": "sk-..."}
>>> model = EmbeddingModelFactory.get_model_from_config(config)
- class langgraph_agent_toolkit.core.models.CompletionModelFactory[source][source]
Bases:
object
Factory for creating model instances.
- static create(model_provider, model_name=None, configurable_fields=None, config_prefix=None, model_parameter_values=None, **kwargs)[source][source]
Create and return a model instance.
- Parameters:
model_provider (ModelProvider) – The model provider to use. This should be one of the supported model providers.
model_name (str | None) – The name of the model to use. If not provided, the default model name will be used.
configurable_fields (Literal['any'] | list[str] | tuple[str, ...] | None) – The fields that are configurable. If not provided, the default fields will be used.
config_prefix (str | None) – The prefix to use for the configuration. If not provided, the default prefix will be used.
model_parameter_values (Tuple[Tuple[str, Any], ...] | None) – The values for the model parameters as a tuple of (key, value) pairs. If not provided, the default values will be used.
**kwargs (Any) – Additional keyword arguments to pass to the model.
- Returns:
An instance of the requested model
- Raises:
ValueError – If the requested model is not supported
- Return type:
FakeToolModel | _ConfigurableModel | BaseChatModel
- classmethod get_model_from_config(config, **override_params)[source][source]
Create a model from a configuration dictionary.
- Parameters:
config (dict) – Configuration dictionary with keys such as “provider”, “name”, and any model parameters (see the example below).
**override_params (Any) – Parameter values that override those in the config.
- Returns:
A BaseChatModel instance
- Return type:
BaseChatModel
Example
>>> config = {"provider": "openai", "name": "gpt-4", "temperature": 0.7}
>>> model = CompletionModelFactory.get_model_from_config(config)