
LangGraph

LangGraph is a library focused on agent orchestration. It allows you to create workflows with conditionals and parallel flows, request human approval/input, and persist state across requests.

Here is an example of how this works:

import { StateSchema, MessagesValue, GraphNode, StateGraph, START, END } from "@langchain/langgraph";

const State = new StateSchema({ messages: MessagesValue });

const mockLlm: GraphNode<typeof State> = (state) => {
  // Return a partial state update; LangGraph appends these messages to the history.
  return { messages: [{ role: "ai", content: "hello world!" }] };
};

const graph = new StateGraph(State)
  .addNode("mock_llm", mockLlm)
  .addEdge(START, "mock_llm")
  .addEdge("mock_llm", END)
  .compile();

await graph.invoke({ messages: [{ role: "user", content: "hi bot!" }] });

Let's break this down into more manageable parts.

State

The state is responsible for storing the information that will be used in your workflows:

const State = new StateSchema({ messages: MessagesValue });

For conversational agents it is common to store the messages so the agent can "remember" what was said earlier in the conversation.
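
Because the messages live in the state, each node's reply is appended to the history. A minimal sketch of reading it back from the first example (assuming the role/content shape shown above survives LangGraph's message normalization):

const result = await graph.invoke({ messages: [{ role: "user", content: "hi bot!" }] });

// The returned state holds the whole history: the user message plus the mock reply.
console.log(result.messages.map((message) => message.content)); // ["hi bot!", "hello world!"]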

Graph

The StateGraph class is a builder that creates the definition of the workflow. Its constructor receives the State definition, and after that you can start defining the steps.

const graph = new StateGraph(State)
  .addNode("mock_llm", mockLlm)
  .addEdge(START, "mock_llm")
  .addEdge("mock_llm", END)
  .compile();

addNode - A method that takes the name of the step and the function that will be executed once the workflow reaches it.

addEdge - A method that connects the nodes you defined, determining how the application will run. In this case the application starts, calls mock_llm, and then ends the flow.

Multiple Edges

The initial example shows a simple linear path, but LangGraph is built to handle more complex cases.

const graph = new StateGraph(State)
  .addNode("greet_and_ask_name", greetUserAndAskName)
  .addNode("create_user_account", createUserAccount)
  .addNode("farewell_and_redirect", farewellAndRedirect)
  .addEdge(START, "greet_and_ask_name")
  .addEdge("greet_and_ask_name", "create_user_account")
  .addEdge("create_user_account", "farewell_and_redirect")
  .addEdge("farewell_and_redirect", END)
  .compile();

In the case above I am breaking a sign-up workflow into smaller steps: first ask the user's name, then create their account, then say farewell. It could be done in just one step, but breaking the workflow into smaller steps makes it easier to debug.
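
The node functions referenced here have the same shape as mockLlm from the first example: they receive the state and return a partial update. A hypothetical sketch of greetUserAndAskName:

const greetUserAndAskName: GraphNode<typeof State> = (state) => {
  // Append an AI greeting; LangGraph merges this update into the state.
  return { messages: [{ role: "ai", content: "Hello! What is your name?" }] };
};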

Conditional Edges

As we know, it is quite uncommon for an application to have a purely linear flow, so there is a type of edge that allows you to create conditionals. You provide a routing function that inspects the state and returns a value, plus a mapping from those values to node names:

const checkUserAccountExistence = (state) => state.account ? "farewell_and_redirect" : "create_user_account";

const graph = new StateGraph(State)
  .addNode("greet_and_ask_name", greetUserAndAskName)
  .addNode("create_user_account", createUserAccount)
  .addNode("fetch_user_data", fetchUserData)
  .addNode("farewell_and_redirect", farewellAndRedirect)
  .addEdge(START, "greet_and_ask_name")
  .addEdge("greet_and_ask_name", "fetch_user_data")
  .addConditionalEdges("fetch_user_data", checkUserAccountExistence, {
    "farewell_and_redirect": "farewell_and_redirect",
    "create_user_account": "create_user_account"
  })
  .addEdge("create_user_account", "farewell_and_redirect") // new users still get the farewell step
  .addEdge("farewell_and_redirect", END)
  .compile();

Complex workflows

Just these two building blocks allow you to create conditionals, loops, parallelization, routing, and any other complex workflow you will ever need:

(Image: langchain_workflow.png)
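
Parallelization, for instance, falls out of plain edges: several edges leaving one node fan the flow out, and several edges arriving at one node join it back. A hypothetical sketch (the plan/fetch/merge nodes are made up for illustration):

const graph = new StateGraph(State)
  .addNode("plan", plan)
  .addNode("fetch_profile", fetchProfile)
  .addNode("fetch_orders", fetchOrders)
  .addNode("merge", mergeResults)
  .addEdge(START, "plan")
  // Two edges out of "plan": both branches run in parallel.
  .addEdge("plan", "fetch_profile")
  .addEdge("plan", "fetch_orders")
  // Two edges into "merge": the branches join back together there.
  .addEdge("fetch_profile", "merge")
  .addEdge("fetch_orders", "merge")
  .addEdge("merge", END)
  .compile();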

Tools

Tools extend what agents can do in a more autonomous way: you define which tools the agent has, and it calls them when it judges necessary.

Tools are defined using the tool function, which receives as its first argument the function to be executed, and as its second the tool's definition (name, description, and input schema). The definition is not only important for documentation; it also helps the LLM know how and when to call the tool.

import * as z from "zod";
import { tool, createAgent } from "langchain";
import { ChatOpenAI } from "@langchain/openai";

const searchDatabase = tool(
  ({ query, limit }) => `Found ${limit} results for '${query}'`,
  {
    name: "search_database",
    description: "Search the customer database for records matching the query.",
    schema: z.object({
      query: z.string().describe("Search terms to look for"),
      limit: z.number().describe("Maximum number of results to return"),
    }),
  }
);

// Using the tool
const agent = createAgent({
  model: new ChatOpenAI({ model: "gpt-4.1" }),
  tools: [searchDatabase],
});

const result = await agent.invoke({
  messages: [{ role: "user", content: "Search the customer database for 'John Doe', up to 5 results" }],
});
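
The result contains the accumulated state, so the agent's final answer is the last message (assuming the same messages shape as in the graph examples):

// The last message in the returned state is the agent's final answer.
console.log(result.messages.at(-1)?.content);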

Memory

To make your agents more useful you often need them to retain information from previous messages. LangChain allows you to do so by passing a checkpointer to your agent, which automatically stores the state of previous interactions.

import { createAgent } from "langchain";
import { MemorySaver } from "@langchain/langgraph";

const checkpointer = new MemorySaver(); // in-memory storage

const agent = createAgent({
  model: "claude-sonnet-4-5-20250929",
  tools: [],
  checkpointer,
});

await agent.invoke(
  { messages: [{ role: "user", content: "hi! i am Bob" }] },
  { configurable: { thread_id: "1" } } // the thread ID identifies the conversation
);
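
Invoking the agent again with the same thread_id continues that conversation, so the checkpointed history lets it recall earlier turns:

// Same thread_id: the agent sees the previous messages and can answer from them.
await agent.invoke(
  { messages: [{ role: "user", content: "what is my name?" }] },
  { configurable: { thread_id: "1" } }
);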

MemorySaver keeps the state in memory, but you can persist it by using a checkpointer connected to a database, like the example below:

import { PostgresSaver } from "@langchain/langgraph-checkpoint-postgres";

const DB_URI = "postgresql://postgres:postgres@localhost:5442/postgres?sslmode=disable";
const checkpointer = PostgresSaver.fromConnString(DB_URI);
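
One detail worth noting: the Postgres checkpointer needs its tables created before its first use, which the package exposes as a setup() method:

// Run once at startup: creates the checkpoint tables if they don't exist yet.
await checkpointer.setup();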