Respan enables tracing of LangGraph workflows, automatically recording every event in chronological order during graph execution. Use the @task and @workflow decorators to capture the full execution tree.
The integration wraps standard LangGraph functions with Respan decorators, enabling automatic tracing without modifying core graph logic. Each node, edge, and LLM call appears as a span in the trace.
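Conceptually, decorators like these wrap each function and record a span (name, duration, nesting) around every call. A toy sketch of the idea in plain Python follows; the `traced` decorator and `SPANS` list here are illustrative stand-ins, not the keywordsai_tracing implementation:

```python
import functools
import time

SPANS = []  # toy span store; the real SDK exports to the Respan platform

def traced(name):
    """Toy span decorator: records the span name and call duration."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                SPANS.append((name, time.perf_counter() - start))
        return wrapper
    return decorator

@traced("chatbot_response")
def chatbot(state):
    # Stand-in for an LLM call on the graph state.
    return {"messages": state["messages"] + ["hi"]}

result = chatbot({"messages": ["hello"]})
```

After the call, `SPANS` holds one `("chatbot_response", duration)` entry; the real decorators do the same kind of wrapping but emit OpenTelemetry-style spans instead of appending to a list.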
Import KeywordsAITelemetry and the @task and @workflow decorators from keywordsai_tracing, and initialize telemetry once with your application name.
Wrap each LangGraph node function with @task and your main execution entry point with @workflow. The decorators capture the graph events and send them to the Respan platform.
```python
from typing import Annotated, TypedDict

from langchain_openai import ChatOpenAI  # example model; any LangChain chat model works
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

from keywordsai_tracing import KeywordsAITelemetry, task, workflow

# Initialize telemetry once per process.
telemetry = KeywordsAITelemetry(app_name="my-graph")

llm = ChatOpenAI(model="gpt-4o-mini")

class State(TypedDict):
    messages: Annotated[list, add_messages]

@task(name="chatbot_response")
def chatbot(state: State):
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

graph = StateGraph(State)
graph.add_node("chatbot", chatbot)
graph.add_edge(START, "chatbot")
graph.add_edge("chatbot", END)

@workflow(name="chat_session")
def main():
    app = graph.compile()
    app.invoke({"messages": [("user", "Hello!")]})

if __name__ == "__main__":
    main()
```