LangGraph’s pattern is interrupt/resume rather than Temporal’s signal-based approach. Inside a node, await_human() calls LangGraph’s interrupt(...), which raises and parks the graph. The DRIVER (the code running the graph) catches our shaped interrupt, posts the task to the awaithumans server, polls until terminal, and resumes the graph with the human’s response.
┌──────────────────────────────┐ HTTP POST /api/tasks ┌──────────────────────┐
│ refund_agent.py │ ──────────────────────►│ awaithumans server │
│ - graph: triage → review │ │ │
│ → process_refund │ │ │
│ - drive_human_loop(graph,…) │ long-poll status │ — human reviews ──► │
│ │ ──────────────────────►│ — completes task ──► │
│ ◄── interrupt / resume ──── │ ◄──────────────────────│ │
└──────────────────────────────┘ response payload └──────────────────────┘
Single-process. Unlike the Temporal example, no separate worker or web server.
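Mechanically, the interrupt/resume handshake has the shape of a Python generator: the node yields (parks) a task, the driver obtains the human's answer, and sends it back in. A stdlib-only sketch of that shape, purely for intuition (real LangGraph re-executes the node from the top on resume rather than suspending it, as covered under "Re-execution semantics" below):

```python
def review_node():
    # Park: hand a task description to the driver and wait for the reply.
    decision = yield {"task": "Approve $250 refund for cus_demo?"}
    return {"approved": decision["approved"]}

def drive(node):
    gen = node()
    task = next(gen)          # node parks; driver receives the task
    # In the real flow this is a POST to the awaithumans server plus a
    # long-poll; here we fake the human's response inline.
    response = {"approved": True}
    try:
        gen.send(response)    # resume the node with the response
    except StopIteration as done:
        return done.value     # the node's final state update
```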
Install
pip install "awaithumans[langgraph]"
Node side
from typing import TypedDict
from pydantic import BaseModel
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver
from awaithumans.adapters.langgraph import await_human
class State(TypedDict):
customer_id: str
amount_usd: int
approved: bool
class RefundPayload(BaseModel):
customer_id: str
amount_usd: int
class RefundDecision(BaseModel):
approved: bool
notes: str | None = None
def review_node(state: State) -> dict:
decision = await_human(
task=f"Approve ${state['amount_usd']} refund for {state['customer_id']}?",
payload_schema=RefundPayload,
payload=RefundPayload(
customer_id=state["customer_id"],
amount_usd=state["amount_usd"],
),
response_schema=RefundDecision,
timeout_seconds=15 * 60,
)
return {"approved": decision.approved}
builder = StateGraph(State)
builder.add_node("review", review_node)
builder.add_edge(START, "review")
builder.add_edge("review", END)
graph = builder.compile(checkpointer=MemorySaver())
await_human() is synchronous — matches LangGraph’s node API. The driver loop handles the actual blocking.
Driver side
import asyncio
import os
from awaithumans.adapters.langgraph import drive_human_loop
# `graph` is the compiled StateGraph from the "Node side" snippet above.
async def main():
config = {"configurable": {"thread_id": "wf-1"}}
final_state = await drive_human_loop(
graph,
input_state={"customer_id": "cus_demo", "amount_usd": 250, "approved": False},
config=config,
server_url="http://localhost:3001",
api_key=os.environ.get("AWAITHUMANS_ADMIN_API_TOKEN"),
)
print(final_state.values)
asyncio.run(main())
drive_human_loop:
- Streams the graph forward
- Catches our shaped interrupt (anything with the magic awaithumans key)
- POSTs the task to the awaithumans server
- Long-polls until terminal
- Resumes the graph with Command(resume=response)
- Returns the graph’s final state
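The "long-polls until terminal" step reduces to a small loop. A hedged sketch with the status fetch injected as a callable (the real driver issues HTTP requests to the awaithumans server; the terminal-status set below mirrors the error-contract table in this page):

```python
import time

# Terminal statuses, per the error contract documented below.
TERMINAL = {"completed", "timed_out", "cancelled", "verification_exhausted"}

def poll_until_terminal(fetch_status, interval_s: float = 2.0) -> str:
    """Call fetch_status() until it reports a terminal status, then return it."""
    while True:
        status = fetch_status()
        if status in TERMINAL:
            return status
        time.sleep(interval_s)
```

In the real driver, `fetch_status` would be an HTTP GET against the task's status endpoint rather than a local callable.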
Other interrupts (operator confirmations, branching decisions) flow through unchanged — the driver pattern-matches on the awaithumans key, doesn’t grab everything.
Re-execution semantics
LangGraph re-executes the entire node on resume. Any work BEFORE await_human(...) runs twice. Move expensive or non-idempotent work to a separate node downstream:
# Pretend this is your real money-moving call.
def process_refund(customer_id: str, amount_usd: int) -> str:
... # returns a refund id
# ❌ DON'T do this — process_refund runs on every node re-execution.
def review_node_bad(state):
decision = await_human(
task="Approve refund?",
payload_schema=RefundPayload,
payload=RefundPayload(
customer_id=state["customer_id"], amount_usd=state["amount_usd"]
),
response_schema=RefundDecision,
timeout_seconds=15 * 60,
)
refund_id = process_refund(state["customer_id"], state["amount_usd"])
return {"refund_id": refund_id, "approved": decision.approved}
# ✅ DO this — split human review and the side-effect into two nodes.
def review_node(state):
decision = await_human(
task="Approve refund?",
payload_schema=RefundPayload,
payload=RefundPayload(
customer_id=state["customer_id"], amount_usd=state["amount_usd"]
),
response_schema=RefundDecision,
timeout_seconds=15 * 60,
)
return {"approved": decision.approved}
def process_refund_node(state):
if not state["approved"]:
return {"refund_id": None}
return {"refund_id": process_refund(state["customer_id"], state["amount_usd"])}
Error contract
The driver maps polling status to typed exceptions:
| Status | Exception |
|---|---|
| completed | (return validated response to node) |
| timed_out | TaskTimeoutError |
| cancelled | TaskCancelledError |
| verification_exhausted | VerificationExhaustedError |
Catch them where you call drive_human_loop to recover.
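As a sketch, that mapping amounts to a small dispatch table. The exception classes below are local stand-ins defined for illustration; in real code you would import the actual types from the awaithumans package:

```python
# Local stand-ins for the exception types exported by awaithumans.
class TaskTimeoutError(Exception): pass
class TaskCancelledError(Exception): pass
class VerificationExhaustedError(Exception): pass

_STATUS_TO_ERROR = {
    "timed_out": TaskTimeoutError,
    "cancelled": TaskCancelledError,
    "verification_exhausted": VerificationExhaustedError,
}

def resolve(status: str, response=None):
    """Return the validated response on success; raise the typed error otherwise."""
    if status == "completed":
        return response
    raise _STATUS_TO_ERROR[status](f"task ended with status {status!r}")
```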
Why this works under failure
- Driver process dies during the await — LangGraph’s checkpointer (e.g. SQLite, Postgres, Redis) persists graph state. Re-running the script with the same thread_id resumes from the parked node. The deterministic idempotency_key (default: langgraph:{sha256(task,payload)}) means the awaithumans server returns the existing task.
- awaithumans server restarts — tasks are persisted; on restart the dashboard reconnects and the polling driver resumes.
- Human times out — drive_human_loop raises TaskTimeoutError. Catch it, retry with a different reviewer, or fail closed.
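The stated default key shape, langgraph:{sha256(task,payload)}, can be sketched with the stdlib. The exact payload canonicalization awaithumans uses is an assumption here; sorted-key JSON is one deterministic choice:

```python
import hashlib
import json

def default_idempotency_key(task: str, payload: dict) -> str:
    # Assumed canonicalization: sorted-key JSON over task + payload.
    blob = json.dumps({"task": task, "payload": payload}, sort_keys=True)
    return "langgraph:" + hashlib.sha256(blob.encode("utf-8")).hexdigest()
```

Because the key is derived purely from the inputs, re-running the driver after a crash recomputes the same key, so the server can return the existing task instead of creating a duplicate.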
End-to-end example
Two runnable examples in the repo, same flow in each language:
| Example | Language | Entry point |
|---|---|---|
| examples/langgraph-py/ | Python | app.py (FastAPI graph host) + kickoff.py |
| examples/langgraph-ts/ | TypeScript | app.ts + kickoff.ts |
Both runnable on a laptop in three terminal windows alongside awaithumans dev. See the per-example README for the run commands.
Cross-language
The TypeScript adapter at awaithumans/langgraph produces the same wire format. A TS driver can resume a graph paused under Python and vice versa.
Common gotchas
- No checkpointer = no interrupts. LangGraph requires a checkpointer to support interrupt(...). Production graphs should use a durable backend (SQLite / Postgres / Redis), not MemorySaver.
- Side effects before await_human run twice on resume. Move them after, or wrap them in idempotency.
- Multiple await_human calls in one node. Each interrupts independently; LangGraph routes resume values by call order. Pass distinct idempotency_key= values if the (task, payload) tuples might collide.
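"Wrap in idempotency" can be as simple as caching the side effect's result by key. A minimal in-memory sketch (production code would persist the cache durably, e.g. in the same database as the checkpointer, so it survives process restarts):

```python
# In-memory result cache; a real implementation would use durable storage.
_results: dict[str, str] = {}

def idempotent(key: str, side_effect):
    """Run side_effect() at most once per key; replay the stored result after."""
    if key not in _results:
        _results[key] = side_effect()
    return _results[key]
```

A node re-executed on resume can then call something like `idempotent(f"refund:{customer_id}", ...)` and the refund fires only once.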
Where to next
- Webhooks (callback_url) — wire format and signature scheme for the callback your driver receives
- Testing — patterns for testing graph nodes that call await_human
- Temporal adapter — the same pattern with signals instead of interrupt/resume