Constructor args

Optional fields: ChatCohereInput
- The name of the model to use.
- Whether or not to include token usage when streaming. This will include an extra chunk at the end of the stream with eventType: "stream-end" and the token usage in usage_metadata.
- Whether or not to stream the response.
- What sampling temperature to use, between 0.0 and 2.0. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

Optional kwargs: Partial<CallOptions>
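For reference, a minimal sketch of passing these fields to the constructor. The property names used here (model, temperature, streaming, streamUsage) and the model id are assumptions inferred from the descriptions above; check ChatCohereInput for the exact names in your version.

```typescript
import { ChatCohere } from "@langchain/cohere";

// Assumed field names, matching the descriptions above.
const llm = new ChatCohere({
  model: "command-r-plus", // the name of the model to use
  temperature: 0.2,        // lower values give more focused, deterministic output
  streaming: true,         // stream the response
  streamUsage: true,       // emit a final "stream-end" chunk with usage_metadata
});
```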
Integration for Cohere chat models.

Setup: Install @langchain/cohere and set an environment variable called COHERE_API_KEY.

Runtime args
Runtime args can be passed as the second argument to any of the base runnable methods (.invoke, .stream, .batch, etc.). They can also be passed via .bind, or as the second argument to .bindTools, as shown in the examples below.
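A brief sketch of both approaches, assuming stop sequences are among the supported call options:

```typescript
import { ChatCohere } from "@langchain/cohere";

const llm = new ChatCohere({ model: "command-r-plus" });

// Per call: runtime args as the second argument to .invoke
const perCall = await llm.invoke("Tell me a joke", { stop: ["\n\n"] });

// Ahead of time: bind the args once, then use the bound runnable as usual
const boundLlm = llm.bind({ stop: ["\n\n"] });
const bound = await boundLlm.invoke("Tell me a joke");
```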
Examples
Instantiate
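A hedged instantiation sketch; the model id "command-r-plus" is an example choice, and the client reads COHERE_API_KEY from the environment by default:

```typescript
// npm install @langchain/cohere
import { ChatCohere } from "@langchain/cohere";

const llm = new ChatCohere({
  model: "command-r-plus", // any Cohere chat model id
  temperature: 0,
  // The API key is picked up from process.env.COHERE_API_KEY by default.
});
```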
Invoking
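A minimal invocation sketch, passing messages as (role, content) tuples:

```typescript
import { ChatCohere } from "@langchain/cohere";

const llm = new ChatCohere({ model: "command-r-plus" });

const aiMsg = await llm.invoke([
  ["system", "You are a helpful assistant that translates English to French."],
  ["human", "I love programming."],
]);
console.log(aiMsg.content);
```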
Streaming Chunks
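A streaming sketch; enabling token usage here assumes the streamUsage field described above, which adds a final "stream-end" chunk carrying usage_metadata:

```typescript
import { ChatCohere } from "@langchain/cohere";

const llm = new ChatCohere({ model: "command-r-plus", streamUsage: true });

for await (const chunk of await llm.stream("Why is the sky blue?")) {
  // Each chunk is an AIMessageChunk; print the incremental content.
  console.log(chunk.content);
}
```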
Aggregate Streamed Chunks
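A sketch of aggregating the streamed chunks into one message, assuming the concat helper from a recent @langchain/core:

```typescript
import { ChatCohere } from "@langchain/cohere";
import { AIMessageChunk } from "@langchain/core/messages";
import { concat } from "@langchain/core/utils/stream";

const llm = new ChatCohere({ model: "command-r-plus" });

let full: AIMessageChunk | undefined;
for await (const chunk of await llm.stream("Why is the sky blue?")) {
  full = full === undefined ? chunk : concat(full, chunk);
}
console.log(full?.content);
```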
Bind tools
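A tool-calling sketch; the get_weather tool is a hypothetical example, and the tool() helper assumes a recent @langchain/core:

```typescript
import { ChatCohere } from "@langchain/cohere";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const llm = new ChatCohere({ model: "command-r-plus" });

// Hypothetical tool; the zod schema is what the model sees when deciding to call it.
const getWeather = tool(async ({ location }) => `It is sunny in ${location}.`, {
  name: "get_weather",
  description: "Get the current weather for a given location.",
  schema: z.object({ location: z.string().describe("The city to look up") }),
});

const llmWithTools = llm.bindTools([getWeather]);
const aiMsg = await llmWithTools.invoke("What is the weather in Paris?");
console.log(aiMsg.tool_calls);
```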
Structured Output
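A structured-output sketch using a zod schema; the Joke schema and its name are illustrative:

```typescript
import { ChatCohere } from "@langchain/cohere";
import { z } from "zod";

const llm = new ChatCohere({ model: "command-r-plus" });

const Joke = z.object({
  setup: z.string().describe("The setup of the joke"),
  punchline: z.string().describe("The punchline of the joke"),
});

const structuredLlm = llm.withStructuredOutput(Joke, { name: "joke" });
const joke = await structuredLlm.invoke("Tell me a joke about cats");
console.log(joke.setup, "-", joke.punchline);
```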
Response Metadata
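A sketch of inspecting provider-specific metadata on the returned message:

```typescript
import { ChatCohere } from "@langchain/cohere";

const llm = new ChatCohere({ model: "command-r-plus" });

const aiMsg = await llm.invoke("Hello");
// Cohere-specific response details (e.g. token counts, finish reason).
console.log(aiMsg.response_metadata);
```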