The "LangChain vs LangGraph" question confuses people because both ship from the same team at LangChain. They're not competitors — they're tools for different jobs. LangChain is the broad LLM application framework (chains, retrievers, document loaders, agents); LangGraph is the lower-level state-graph framework specifically for building agentic systems with explicit control flow. Picking the right one for your project is a design decision more than a brand decision.
We see both used heavily across Respan's customer base — usually in the same codebase. This article is a side-by-side comparison drawn from running production apps that mix them.
TL;DR — when to pick each
| Pick LangChain if... | Pick LangGraph if... |
|---|---|
| You're prototyping an LLM app and want batteries included | You're building an agent with explicit control flow and state |
| Your flow is roughly linear: input → retrieval → LLM → output | Your flow has loops, branches, human-in-the-loop, or persistent state |
| You want the broad ecosystem (loaders, vectorstores, retrievers) | You want fine-grained control over agent execution |
| Speed of building beats fine control | Production stability and debuggability beat speed of building |
| You don't need to checkpoint or replay agent runs | You need to checkpoint runs and replay from any state |
In production, mature teams use both: LangChain primitives for retrieval and document handling, LangGraph for the agent orchestration layer.
What each is
LangChain is the broad framework for building LLM applications. The original library (Python and TypeScript) provides:
- Chains — composable sequences of LLM calls and transformations
- Retrievers — RAG-first document retrieval
- Vectorstores — wrappers around 30+ vector DB backends
- Document loaders — read from PDFs, web, databases, etc.
- Agents (legacy AgentExecutor style)
- Memory primitives
- Callbacks for tracing and observability
LangGraph is the framework for building agentic systems with explicit control flow. It models the agent as a directed graph of nodes (computations) and edges (transitions). State is a first-class concept — every step reads and writes shared state. Key features:
- State graph definition (nodes, edges, conditional routing)
- Persistent state with checkpoints (replay any step)
- Human-in-the-loop interrupts
- Streaming, tool calling, multi-agent orchestration
- Production-grade execution semantics (retries, timeouts, partial failures)
LangGraph is lower-level than LangChain agents. You write more code but you have explicit control over agent behavior — which matters when the agent is in production.
When LangChain is the right tool
- Quick prototype. You want to demo an LLM-powered workflow this week. LangChain's primitives let you wire up retrieval + LLM + output formatting in 30 lines.
- Linear workflows. Input → retrieve → LLM call → format → output. The chain abstraction maps naturally onto this shape.
- RAG systems. LangChain's retriever ecosystem is mature — many vectorstore wrappers, hybrid retrieval, re-rankers, evaluation utilities.
- Document-processing pipelines. Document loaders + text splitters + chunking strategies are well-developed.
- Standardized integrations. Need a wrapper for a specific LLM provider, vector store, or tool? LangChain probably has one.
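To make the "linear RAG shape" concrete, here is a minimal stdlib sketch of the pipeline LangChain targets: retrieve, assemble a prompt, then (in a real app) call an LLM. This is not LangChain's API; the keyword-overlap retriever and all names are illustrative stand-ins for a real vectorstore and prompt template.

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Naive keyword-overlap scoring, standing in for a vectorstore retriever."""
    terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble retrieved context into a prompt for the LLM call."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "LangGraph models agents as state graphs",
    "LangChain provides retrievers and loaders",
    "Bananas are yellow",
]
query = "What does LangChain provide?"
prompt = build_prompt(query, retrieve(query, docs))
```

In a LangChain app, `retrieve` and `build_prompt` map onto a retriever and a prompt template, and the whole thing composes into a single chain.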
When LangGraph is the right tool
- Production agent with multi-step behavior. Anything where the agent needs to make decisions, branch, loop, retry, or wait for human input.
- Stateful conversation or workflow. When state needs to persist across turns or restart-after-failure semantics matter.
- Multi-agent systems. Coordinator agents with sub-agents, hand-off patterns, parallel execution.
- Auditable / replayable runs. Checkpointing means you can replay any agent step from any prior state for debugging or compliance.
- Long-running agents. Background agents that run for minutes or hours. The state graph model is more reliable than a chain over long executions.
Architectural difference
LangChain (chain mental model):
Input → step 1 → step 2 → step 3 → Output
You compose steps in sequence. Branching exists but is awkward. State is implicit, passed step-to-step.
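The chain mental model can be sketched as plain function composition — an illustrative sketch, not LangChain's Runnable API:

```python
from functools import reduce

def chain(*steps):
    """Compose steps left-to-right; state is implicit in the value passed along."""
    return lambda x: reduce(lambda acc, step: step(acc), steps, x)

pipeline = chain(
    str.strip,                      # step 1: normalize input
    str.lower,                      # step 2: transform
    lambda s: f"summary of: {s}",   # step 3: stand-in for an LLM call
)

result = pipeline("  Quarterly Report  ")
```

Note there is no place to hang a loop, a branch, or a checkpoint — the value just flows left to right, which is exactly the limitation the graph model addresses.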
LangGraph (graph mental model):
```
        ┌→ retrieve ─┐
input ──┤            ├→ llm → tool? → loop or finish
        └→ direct ───┘
        ↑ human-in-the-loop possible at any node
```
You define nodes (functions) and edges (conditional transitions). State is explicit and persistent. The graph can have cycles, conditional routing, parallel execution.
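The graph mental model can be sketched with a tiny stdlib executor: named nodes, router functions as conditional edges, explicit shared state, and a cycle (a retry loop). This is illustrative only — LangGraph's real `StateGraph` API differs — but the execution shape is the same.

```python
def run_graph(nodes, routers, state, entry, max_steps=20):
    """Run nodes until a router returns 'finish'; state is explicit throughout."""
    current = entry
    while current != "finish" and max_steps > 0:
        state = nodes[current](state)        # each node reads and writes shared state
        current = routers[current](state)    # conditional edge: route based on state
        max_steps -= 1
    return state

nodes = {
    "llm": lambda s: {**s, "attempts": s["attempts"] + 1},
    "tool": lambda s: {**s, "ok": s["attempts"] >= 2},   # tool "succeeds" on try 2
}
routers = {
    "llm": lambda s: "tool",
    "tool": lambda s: "finish" if s["ok"] else "llm",    # cycle back on failure
}

final = run_graph(nodes, routers, {"attempts": 0, "ok": False}, entry="llm")
```

The loop back from `tool` to `llm` is the part a linear chain cannot express without escaping the abstraction.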
For simple flows, the chain mental model is faster to build. For complex flows, the graph mental model is the only one that scales without becoming spaghetti.
Migration path
Many teams start in LangChain and migrate to LangGraph as agents grow more complex. The team at LangChain has explicitly acknowledged this — LangChain agents (the old AgentExecutor API) are now positioned as legacy with LangGraph as the production-recommended replacement for agentic patterns.
You don't have to migrate wholesale. Common pattern:
- Keep using LangChain for retrieval and document handling
- Replace AgentExecutor flows with LangGraph state graphs
- Keep LangChain's LLM wrappers, vectorstores, and memory primitives
This works because LangGraph integrates naturally with LangChain primitives — they're the same ecosystem.
Observability and debugging
Both frameworks integrate with LangSmith (LangChain's observability product) and with Respan via OpenTelemetry. LangGraph's checkpoint feature is especially valuable for debugging — you can replay any agent run from any prior state, which is impossible with a stateless chain.
For production agent debugging, LangGraph + a tracing platform is meaningfully better than LangChain AgentExecutor + tracing. The persistent state means you can see exactly what the agent thought at each step and why it took the path it did.
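The checkpoint-and-replay idea reduces to: snapshot state after every step, so a run can be resumed or replayed from any point. A minimal sketch — LangGraph's actual checkpointer API differs, and all names here are illustrative:

```python
import copy

def run_with_checkpoints(steps, state):
    """Execute steps in order, saving a state snapshot after each one."""
    checkpoints = [copy.deepcopy(state)]           # checkpoint 0 = initial state
    for step in steps:
        state = step(state)
        checkpoints.append(copy.deepcopy(state))
    return state, checkpoints

def replay_from(steps, checkpoints, index):
    """Re-run the remaining steps starting from a saved checkpoint."""
    state = copy.deepcopy(checkpoints[index])
    for step in steps[index:]:
        state = step(state)
    return state

steps = [
    lambda s: {**s, "plan": "retrieve docs"},
    lambda s: {**s, "docs": 3},
    lambda s: {**s, "answer": f"used {s['docs']} docs"},
]
final, cps = run_with_checkpoints(steps, {})
replayed = replay_from(steps, cps, 1)              # replay from after step 1
```

This is what makes a failed production run debuggable: inspect the checkpoint where things went wrong, then replay forward from it.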
Performance
For simple linear chains, LangChain carries slightly less framework overhead. For stateful agent workloads, LangGraph is comparable or faster, since its execution model is built for exactly that shape.
The performance gap rarely matters in production — LLM call latency dominates everything by orders of magnitude. Pick based on workflow fit, not framework speed.
Frank's take — how I actually choose
Default to LangGraph for any new agent work in 2026. The team at LangChain has been signaling this for over a year — AgentExecutor is legacy. LangGraph is the production-recommended path forward.
Use LangChain primitives for the supporting infrastructure. Retrievers, document loaders, vectorstore wrappers, LLM clients — these are well-tested and broadly useful. The whole LangChain ecosystem is still valuable; it's just that the agent layer specifically has moved to LangGraph.
Don't over-engineer simple flows. If your task is "ask LLM for a summary, return it" — you don't need LangChain or LangGraph. Just call the API directly. Frameworks are for managing complexity that already exists, not for adding ceremony to simple work.
Use LangGraph specifically when you need checkpoints. The ability to replay an agent from any prior state is a real production capability that pays off the first time you have to debug a failed run.
The middle path is real. Production stacks should treat LangGraph as the agent-orchestration layer and LangChain as a primitive library. Add observability via tracing regardless of framework — both work with OpenTelemetry now.
How to decide for your project
Quick decision framework:
- Is your workflow linear (no loops, no branches, no state)? → LangChain (or no framework)
- Are you building a production agent? → LangGraph
- Do you need replayable / checkpointed execution? → LangGraph
- Are you doing primarily RAG? → LangChain (retrievers and loaders)
- Are you composing multiple agents? → LangGraph
- Are you prototyping a demo this week? → LangChain (faster to build)
FAQ
Is LangGraph replacing LangChain?
Not entirely — LangGraph is replacing LangChain's agent layer (the legacy AgentExecutor). The rest of LangChain (retrievers, document loaders, LLM wrappers, vectorstore integrations) is still actively maintained and broadly useful.
Can I use LangChain primitives inside LangGraph?
Yes — that's the recommended pattern. Use LangChain's retrievers, loaders, and LLM clients inside LangGraph nodes. They're the same ecosystem.
Which is more popular?
LangChain has the broader install base and broader community. LangGraph is growing fast in production-agent contexts. Both ship through langchain.com.
Does LangGraph work with non-Anthropic / non-OpenAI models?
Yes — LangGraph is model-agnostic. It works with whatever LLM client you wire into your nodes.
Does LangGraph require LangSmith?
No. LangSmith is a separate product (observability). LangGraph works with any tracing backend, including Respan via OpenTelemetry.
Should I migrate my LangChain agent to LangGraph?
If you have a production agent on AgentExecutor, plan the migration — it's the strategic direction. If you have simple chain workflows, no rush.
Is LangChain dead?
No. The narrative that "LangChain is dead" comes from the legacy agent API being superseded. The broader framework is actively maintained.
Which has better documentation?
LangChain's docs are broader (more topics covered). LangGraph's docs are deeper (more focused). Both have improved meaningfully through 2025-2026.