Requirements#
You need a LangGraph Cloud API server. You can start a server locally via LangGraph Studio or use LangSmith for a hosted version.
The state of the graph you are using must have a messages key containing a list of LangChain-like messages.
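For reference, a minimal sketch (with illustrative field names; the exact message shape depends on your graph) of the state the runtime expects. In a LangGraph.js graph, the prebuilt MessagesAnnotation state provides this shape.

```typescript
// Illustrative sketch only: the runtime expects a top-level `messages`
// key holding LangChain-style messages. The fields below are a simplified
// assumption of that shape, not the SDK's full message type.
type LangChainLikeMessage = {
  id?: string;
  type: "human" | "ai" | "tool" | "system";
  content: string;
};

type RequiredGraphState = {
  messages: LangChainLikeMessage[];
};

// A state value satisfying the requirement:
const exampleState: RequiredGraphState = {
  messages: [{ type: "human", content: "Hello" }],
};
```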
New project from template#
### Create a new project based on the LangGraph assistant-ui template

npx create-assistant-ui@latest -t langgraph my-app
Create a .env.local file in your project with the following variables:
# LANGCHAIN_API_KEY=your_api_key # for production
# LANGGRAPH_API_URL=your_api_url # for production
NEXT_PUBLIC_LANGGRAPH_API_URL=your_api_url # for development (no api key required)
NEXT_PUBLIC_LANGGRAPH_ASSISTANT_ID=your_graph_id
Installation in existing React project#
Install dependencies#
<InstallCommand npm={["@assistant-ui/react", "@assistant-ui/react-langgraph", "@langchain/langgraph-sdk"]} />
Set up a proxy backend endpoint (optional, for production)#
This example forwards every request to the LangGraph server directly from the browser. For production use cases, you should limit the API calls to the subset of endpoints that you need and perform authorization checks.

import { NextRequest, NextResponse } from "next/server";
function getCorsHeaders() {
return {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "GET, POST, PUT, PATCH, DELETE, OPTIONS",
"Access-Control-Allow-Headers": "*",
};
}
async function handleRequest(req: NextRequest, method: string) {
try {
const path = req.nextUrl.pathname.replace(/^\/?api\//, "");
const url = new URL(req.url);
const searchParams = new URLSearchParams(url.search);
searchParams.delete("_path");
searchParams.delete("nxtP_path");
const queryString = searchParams.toString()
? `?${searchParams.toString()}`
: "";
const options: RequestInit = {
method,
headers: {
"x-api-key": process.env["LANGCHAIN_API_KEY"] || "",
},
};
if (["POST", "PUT", "PATCH"].includes(method)) {
options.body = await req.text();
}
const res = await fetch(
`${process.env["LANGGRAPH_API_URL"]}/${path}${queryString}`,
options,
);
const headers = new Headers(res.headers);
headers.delete("content-encoding");
headers.delete("content-length");
headers.delete("transfer-encoding");
const corsHeaders = getCorsHeaders();
for (const [key, value] of Object.entries(corsHeaders)) {
headers.set(key, value);
}
return new NextResponse(res.body, {
status: res.status,
statusText: res.statusText,
headers,
});
} catch (e: unknown) {
if (e instanceof Error) {
const typedError = e as Error & { status?: number };
return NextResponse.json(
{ error: typedError.message },
{ status: typedError.status ?? 500 },
);
}
return NextResponse.json({ error: "Unknown error" }, { status: 500 });
}
}
export const GET = (req: NextRequest) => handleRequest(req, "GET");
export const POST = (req: NextRequest) => handleRequest(req, "POST");
export const PUT = (req: NextRequest) => handleRequest(req, "PUT");
export const PATCH = (req: NextRequest) => handleRequest(req, "PATCH");
export const DELETE = (req: NextRequest) => handleRequest(req, "DELETE");
export const OPTIONS = () =>
new NextResponse(null, {
status: 204,
headers: getCorsHeaders(),
});
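As a starting point for the endpoint restriction recommended above, here is a hedged sketch of an allowlist check. The path patterns are assumptions based on the endpoints this integration typically calls; verify them against the requests your app actually makes before deploying.

```typescript
// Hypothetical allowlist sketch: only forward the LangGraph endpoints this
// integration needs. The patterns below are illustrative assumptions, not
// a complete or verified policy.
const ALLOWED_PATHS: RegExp[] = [
  /^threads$/, // create a thread
  /^threads\/[^/]+\/state$/, // read thread state
  /^threads\/[^/]+\/history$/, // checkpoint history (for editing)
  /^threads\/[^/]+\/runs\/stream$/, // stream a run
];

export function isAllowedPath(path: string): boolean {
  return ALLOWED_PATHS.some((pattern) => pattern.test(path));
}
```

In handleRequest, you could return a 403 response when isAllowedPath(path) is false, and layer your own session or token check on top before forwarding the request.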
Set up helper functions#
// @filename: /lib/chatApi.ts
// ---cut---
import { Client, type ThreadState } from "@langchain/langgraph-sdk";
import { LangChainMessage, LangGraphCommand } from "@assistant-ui/react-langgraph";
const createClient = () => {
const apiUrl = process.env["NEXT_PUBLIC_LANGGRAPH_API_URL"] || "/api";
return new Client({
apiUrl,
});
};
export const createThread = async () => {
const client = createClient();
return client.threads.create();
};
export const getThreadState = async (
threadId: string,
): Promise<ThreadState<{ messages: LangChainMessage[] }>> => {
const client = createClient();
return client.threads.getState(threadId);
};
export const sendMessage = async (params: {
threadId: string;
messages?: LangChainMessage[];
command?: LangGraphCommand;
}) => {
const client = createClient();
return client.runs.stream(
params.threadId,
process.env["NEXT_PUBLIC_LANGGRAPH_ASSISTANT_ID"]!,
{
input: params.messages?.length
? { messages: params.messages }
: null,
command: params.command,
streamMode: ["messages", "updates"],
},
);
};
Define a MyAssistant component#
// @filename: /components/MyAssistant.tsx
// @include: chatApi
// ---cut---
"use client";
import { Thread } from "@/components/assistant-ui/thread";
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { useLangGraphRuntime } from "@assistant-ui/react-langgraph";
import { createThread, getThreadState, sendMessage } from "@/lib/chatApi";
export function MyAssistant() {
const runtime = useLangGraphRuntime({
stream: async function* (messages, { initialize, command }) {
const { externalId } = await initialize();
if (!externalId) throw new Error("Thread not found");
const generator = await sendMessage({
threadId: externalId,
messages,
command,
});
yield* generator;
},
create: async () => {
const { thread_id } = await createThread();
return { externalId: thread_id };
},
load: async (externalId) => {
const state = await getThreadState(externalId);
return {
messages: state.values.messages,
interrupts: state.tasks[0]?.interrupts,
};
},
});
return (
<AssistantRuntimeProvider runtime={runtime}>
<Thread />
</AssistantRuntimeProvider>
);
}
Use the MyAssistant component#
// @include: MyAssistant
// @filename: /app/page.tsx
// ---cut---
import { MyAssistant } from "@/components/MyAssistant";
export default function Home() {
return (
<main className="h-dvh">
<MyAssistant />
</main>
);
}
Set up environment variables#
Create a .env.local file in your project with the following variables:
# LANGCHAIN_API_KEY=your_api_key # for production
# LANGGRAPH_API_URL=your_api_url # for production
NEXT_PUBLIC_LANGGRAPH_API_URL=your_api_url # for development (no api key required)
NEXT_PUBLIC_LANGGRAPH_ASSISTANT_ID=your_graph_id
Set up UI components#
Follow the UI Components guide to set up the UI components.
Advanced APIs#
Message Accumulator#
The LangGraphMessageAccumulator lets you accumulate message chunks arriving from the server, replicating the messages state on the client side.
import {
LangGraphMessageAccumulator,
appendLangChainChunk,
} from "@assistant-ui/react-langgraph";
const accumulator = new LangGraphMessageAccumulator({
appendMessage: appendLangChainChunk,
});
// Add new chunks from the server
if (event.event === "messages/partial") accumulator.addMessages(event.data);
Message Conversion#
Use convertLangChainMessages to transform LangChain messages to assistant-ui format:
import { convertLangChainMessages } from "@assistant-ui/react-langgraph";
const threadMessage = convertLangChainMessages(langChainMessage);
Event Handlers#
You can listen to streaming events by passing eventHandlers to useLangGraphRuntime:
const runtime = useLangGraphRuntime({
stream: async (messages, { initialize, ...config }) => { /* ... */ },
eventHandlers: {
onMessageChunk: (chunk, metadata) => {
// Fired for each chunk in messages-tuple mode
// metadata contains langgraph_step, langgraph_node, ls_model_name, etc.
},
onValues: (values) => {
// Fired when a "values" event is received
},
onUpdates: (updates) => {
// Fired when an "updates" event is received
},
onMetadata: (metadata) => { /* thread metadata */ },
onInfo: (info) => { /* informational messages */ },
onError: (error) => { /* stream errors */ },
onCustomEvent: (type, data) => { /* custom events */ },
},
});
Message Metadata#
When using streamMode: "messages-tuple", each chunk includes metadata from the LangGraph server. Access accumulated metadata per message with the useLangGraphMessageMetadata hook:
import { useLangGraphMessageMetadata } from "@assistant-ui/react-langgraph";
function MyComponent() {
const metadata = useLangGraphMessageMetadata();
// Map<string, LangGraphTupleMetadata> keyed by message ID
}
Thread Management#
Basic Thread Support#
The useLangGraphRuntime hook includes built-in thread management capabilities:
const runtime = useLangGraphRuntime({
stream: async (messages, { initialize, ...config }) => {
// initialize() creates or loads a thread and returns its IDs
const { remoteId, externalId } = await initialize();
// Use externalId (your backend's thread ID) for API calls
return sendMessage({ threadId: externalId, messages, config });
},
create: async () => {
// Called when creating a new thread
const { thread_id } = await createThread();
return { externalId: thread_id };
},
load: async (externalId) => {
// Called when loading an existing thread
const state = await getThreadState(externalId);
return {
messages: state.values.messages,
interrupts: state.tasks[0]?.interrupts,
};
},
});
Cloud Persistence#
For persistent thread history across sessions, integrate with assistant-cloud:
const runtime = useLangGraphRuntime({
cloud: new AssistantCloud({
baseUrl: process.env.NEXT_PUBLIC_ASSISTANT_BASE_URL,
anonymous: true,
}),
// ... stream, create, load functions
});
See the Cloud Persistence guide for detailed setup instructions.
Message Editing & Regeneration#
LangGraph uses server-side checkpoints for state management. To support message editing (branching) and regeneration, you need to provide a getCheckpointId callback that resolves the appropriate checkpoint for server-side forking.
const runtime = useLangGraphRuntime({
stream: async (messages, { initialize, ...config }) => {
const { externalId } = await initialize();
if (!externalId) throw new Error("Thread not found");
return sendMessage({ threadId: externalId, messages, config });
},
create: async () => {
const { thread_id } = await createThread();
return { externalId: thread_id };
},
load: async (externalId) => {
const state = await getThreadState(externalId);
return {
messages: state.values.messages,
interrupts: state.tasks[0]?.interrupts,
};
},
getCheckpointId: async (threadId, parentMessages) => {
const client = createClient();
// Get the thread state history and find the checkpoint
// that matches the parent messages by exact message ID sequence.
// If IDs are missing, return null and skip edit/reload for safety.
const history = await client.threads.getHistory(threadId);
for (const state of history) {
const stateMessages = state.values.messages;
if (!stateMessages || stateMessages.length !== parentMessages.length) {
continue;
}
const hasStableIds =
parentMessages.every((message) => typeof message.id === "string") &&
stateMessages.every((message) => typeof message.id === "string");
if (!hasStableIds) {
continue;
}
const isMatch = parentMessages.every(
(message, index) => message.id === stateMessages[index]?.id,
);
if (isMatch) {
return state.checkpoint.checkpoint_id ?? null;
}
}
return null;
},
});
When getCheckpointId is provided:
- Edit buttons appear on user messages, allowing users to edit and resend from that point
- Regenerate buttons appear on assistant messages, allowing users to regenerate the response
The resolved checkpointId is passed to your stream callback via config.checkpointId. Your sendMessage helper should map it to the LangGraph SDK's checkpoint_id parameter (see the helper function in the setup section above).
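As a sketch of that mapping, the hypothetical buildStreamPayload helper below shows where the resolved ID would flow into the stream payload. The checkpoint option name on the SDK side is an assumption; verify the exact field against your @langchain/langgraph-sdk version.

```typescript
// Hypothetical helper showing how the checkpoint ID resolved by
// getCheckpointId could flow from the runtime's config into the payload
// passed to client.runs.stream. The `checkpointId` field name is an
// assumption — check your SDK version for the exact spelling.
type StreamParams = {
  messages?: unknown[];
  checkpointId?: string | null;
};

export function buildStreamPayload(params: StreamParams) {
  return {
    input: params.messages?.length ? { messages: params.messages } : null,
    streamMode: ["messages", "updates"],
    // Forward the checkpoint only when the runtime resolved one
    ...(params.checkpointId ? { checkpointId: params.checkpointId } : {}),
  };
}
```

The result would be passed as the third argument to client.runs.stream, extending the sendMessage helper from the setup section.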
Interrupt Persistence#
LangGraph supports interrupting the execution flow to request user input or handle specific interactions. These interrupts can be persisted and restored when switching between threads:
- Make sure your thread state type includes the interrupts field
- Return the interrupts from the load function along with the messages
- The runtime will automatically restore the interrupt state when switching threads
This feature is particularly useful for applications that require user approval flows, multi-step forms, or any other interactive elements that might span multiple thread switches.