Human-on-the-Loop Systems

Everyone’s building agents. Most are building them wrong.
The hottest demos show agents that operate in complete isolation—autonomous research bots, code generators, email responders that never ask for help. It's impressive until you realize that these systems have a shelf life measured in minutes before they do something spectacularly wrong.
Production systems need humans—not just to kick off or mop up, but on the loop when stakes are high or context is weird.
Two complementary patterns are emerging: AG‑UI for real‑time collaboration; HumanLayer for safety‑critical approvals.
Inner Loop, Outer Loop, Humans Between
When we talk about agent architectures, we usually focus on the planning and orchestration layers. The agent gets a request, breaks it down, calls some tools, synthesizes results. Classic OODA loop stuff: Observe, Orient, Decide, Act.
But that's the inner loop—the rapid cycle of AI reasoning and tool execution.
The outer loop is where autonomous agents live long-term. They wake up periodically, check their goals, plan their day, execute multi-step workflows, sleep, repeat. Think of an agent that manages your calendar, monitors competitor pricing, or coordinates your team's standups.
Between these loops—and sometimes within them—you need humans. Not because the AI isn't smart enough, but because certain decisions require human judgment, approval, or context that can't be encoded in a prompt.
This is where “human on the loop” matters.
AG‑UI: Shared Workspaces
AG-UI, developed by the CopilotKit team, tackles the real-time collaboration problem. It's a protocol that standardizes how agents and humans share a workspace—like Cursor for your business logic.
How it works:
Instead of agents running in isolation and dumping results, AG-UI creates a streaming interface where humans can observe agent reasoning, provide input, and co-edit outputs. The protocol defines 16 event types that flow between agents and UIs:
```typescript
// Agent emits events as it works
{
  "type": "text-delta",
  "value": "I'm analyzing your Q4 sales data..."
}

{
  "type": "tool-call",
  "tool": "database",
  "input": "SELECT revenue FROM sales WHERE quarter = 'Q4'"
}

{
  "type": "state-update",
  "diff": { "activeChart": "revenue-breakdown" }
}
```
Humans can steer in real‑time: “Use Q3 for comparison,” “Focus on enterprise,” etc. The agent adapts without losing context.
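To make the flow concrete, here's a minimal sketch of a client consuming such an event stream. The event names match the examples above, but the dispatcher and in-memory state are illustrative, not part of the AG-UI spec; real clients stream these events over a transport like SSE or WebSocket.

```python
# Illustrative only: a tiny dispatcher for AG-UI-style events.
# A real client would receive these over a streaming transport;
# here we process a static list to show the shape of the loop.
state = {}          # shared workspace state, patched by state-update events
transcript = []     # streamed text the human watches in real time

def handle_event(event: dict) -> None:
    if event["type"] == "text-delta":
        transcript.append(event["value"])       # render agent reasoning as it streams
    elif event["type"] == "tool-call":
        print(f"agent calling {event['tool']}: {event['input']}")
    elif event["type"] == "state-update":
        state.update(event["diff"])             # co-edited state stays in sync

events = [
    {"type": "text-delta", "value": "I'm analyzing your Q4 sales data..."},
    {"type": "state-update", "diff": {"activeChart": "revenue-breakdown"}},
]
for e in events:
    handle_event(e)

print(state)  # {'activeChart': 'revenue-breakdown'}
```

The key property is that the same event vocabulary works regardless of which agent framework emits it; the UI only needs to understand the protocol.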
This feels like vibe coding for workflows: humans transmit intent; agents handle execution.
Integrations exist for LangGraph, CrewAI, Mastra, etc. The interesting bit: the collaboration layer becomes framework‑agnostic.
HumanLayer: Safety Rails
HumanLayer attacks a different angle: ensuring human oversight for high-stakes function calls. It's less about real-time collaboration and more about deterministic approval workflows.
Core insight: never let agents call sensitive functions without approval. Sending emails, touching billing, changing prod—humans must gate it.
HumanLayer provides these gates through decorators:
```python
@hl.require_approval()
def send_customer_email(email: str, subject: str, body: str):
    """Send an email to a customer"""
    return email_service.send(email, subject, body)

@hl.human_as_tool()
def get_creative_direction(campaign_brief: str) -> str:
    """Ask human for creative input on campaign"""
    # This blocks until human responds
    pass
```
When the agent hits a decorated function, execution pauses. A human gets pinged via Slack, email, or CLI. They can approve, deny, or provide feedback. The agent continues based on their response.
This creates breakpoints in agency: moments where human judgment overrides automation.
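To show the mechanics of such a breakpoint, here's a toy approval gate. This is not HumanLayer's implementation, just a sketch of the pattern: the wrapped function cannot run until a human channel returns a decision, and a denial can carry feedback back to the agent.

```python
import functools

# Toy approval gate (illustrative, not HumanLayer's API): the wrapped
# function does not execute until ask_human returns a decision.
def require_approval(ask_human):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            decision = ask_human(f"Approve {fn.__name__}{args}?")
            if decision != "approve":
                # Denial with feedback flows back to the agent loop
                return {"status": "denied", "feedback": decision}
            return fn(*args, **kwargs)
        return wrapper
    return decorator

# In production ask_human would ping Slack/email/CLI and block;
# here we stub it to deny with a suggestion instead.
@require_approval(ask_human=lambda prompt: "use the enterprise template")
def send_customer_email(email, subject, body):
    return {"status": "sent", "to": email}

result = send_customer_email("a@example.com", "Q4 review", "Hi...")
print(result)  # {'status': 'denied', 'feedback': 'use the enterprise template'}
```

The design choice worth noting: the gate lives at the function boundary, so the agent's planning code never needs to know whether a human was consulted.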
Protocols for Production
What makes both approaches powerful is that they're protocol-first. AG-UI defines a wire format for human-agent collaboration. HumanLayer standardizes approval workflows across any agent framework.
This matters because the agent ecosystem is fragmented. LangChain, CrewAI, AutoGen, home-grown scripts—everyone speaks a slightly different dialect. Without protocols, every human-in-the-loop integration becomes custom plumbing.

AG-UI complements MCP and A2A to form a complete agent protocol stack—tools, agent-to-agent communication, and human collaboration.
Protocols buy composability. AG‑UI frontends work with any agent; HumanLayer approvals work regardless of LLM/framework.
And they're complementary. AG-UI handles collaborative workflows where humans and agents work together. HumanLayer handles approval workflows where humans provide oversight. Use them together for systems that are both collaborative and safe.
The Collaboration Layer
Looking at modern agent architectures, we typically see:
- Safety Layer: Request validation and policy enforcement
- Planning Layer: Task decomposition and orchestration
- Integration Layer: Tool calling and external services
- Collaboration Layer: Human-agent interaction
That last layer is the missing piece. The best systems live in the middle of the autonomy spectrum: agents that augment humans rather than replace them.
This maps to the three generations of AI agents that HumanLayer identifies:
- Gen 1: Chat interfaces (human-initiated, single response)
- Gen 2: Agentic assistants (human-initiated, multi-step workflows)
- Gen 3: Autonomous agents (agent-initiated, ongoing goals)
Gen 3 agents need collaboration protocols. They can't just run in isolation—they need ways to surface decisions, request input, and maintain human alignment over time.
When Agents Should Ask for Help
The trick is knowing when to involve humans. Too much oversight kills velocity. Too little creates risk.
HumanLayer suggests a framework based on stakes:
- Low stakes: Read access to public data (let the agent run)
- Medium stakes: Read access to private data (maybe require approval)
- High stakes: Write access or communication on your behalf (definitely require approval)
AG‑UI fits medium‑stakes collaboration: analysis, creation, synthesis.
For high‑stakes ops, use HumanLayer’s explicit gates.
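One way to encode that stakes framework is a simple policy table that routes each tool call to a mode: let it run, collaborate via a shared workspace, or require explicit approval. The tiers mirror the list above; the tool names are hypothetical.

```python
from enum import Enum

class Stakes(Enum):
    LOW = "run"            # read public data: let the agent run
    MEDIUM = "collaborate" # read private data: surface in an AG-UI-style workspace
    HIGH = "approve"       # writes or outbound comms: hard approval gate

# Hypothetical policy table mapping tool names to stakes tiers.
POLICY = {
    "search_public_docs": Stakes.LOW,
    "query_customer_db": Stakes.MEDIUM,
    "send_customer_email": Stakes.HIGH,
}

def route(tool_name: str) -> str:
    # Default unknown tools to the strictest tier: fail closed.
    return POLICY.get(tool_name, Stakes.HIGH).value

print(route("search_public_docs"))   # run
print(route("delete_all_records"))   # approve
```

Failing closed on unknown tools is the important bit: new capabilities start gated and get downgraded deliberately, not by omission.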
The Scaffolded Stigmergy Angle
This connects to scaffolded stigmergy in interesting ways. Agents leave traces in their work—code patterns, decision histories, approval requests. Humans respond to these traces, creating feedback loops that improve future agent behavior.
The collaboration becomes emergent. Agents learn which requests get approved quickly and which trigger long discussions. Humans develop intuition for when to let agents run versus when to provide guidance.
Over time, the system develops its own patterns of collaboration—a shared language between human and agent that emerges from repeated interaction.
Beyond the Inner Loop
Most agent frameworks optimize for the inner loop—faster reasoning, better tool calling, smarter planning. That's table stakes now.
The differentiation is happening in the collaboration layer. Systems that seamlessly blend human and agent capabilities will outperform purely autonomous ones, especially in domains where stakes are high and context matters.
AG-UI and HumanLayer represent the early protocols for this collaboration layer. Expect more frameworks to emerge as teams realize that the most capable agents aren't the most autonomous ones—they're the ones that know when to ask for help.
So What / Try This Next
If you're building agentic systems:
- Map your function calls by stakes: Which operations need approval? Which benefit from collaboration?
- Add HumanLayer decorators to high-stakes functions—start with anything that sends messages or modifies data
- Experiment with AG-UI for collaborative workflows—data analysis, content creation, strategic planning
- Monitor the patterns: Which requests get approved? What triggers human intervention?
- Iterate on the collaboration: Refine when agents should involve humans versus running autonomously
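As a sketch of the monitoring step, you could log every gated call and tally outcomes per function. The log format here is assumed, not a HumanLayer API, but the analysis is the point: gates approved nearly 100% of the time are candidates to downgrade, while frequently denied gates are doing real safety work.

```python
from collections import Counter

# Hypothetical approval log: (function_name, decision) pairs
# collected from your approval channel over some period.
approval_log = [
    ("send_customer_email", "approved"),
    ("send_customer_email", "approved"),
    ("update_billing", "denied"),
    ("send_customer_email", "approved"),
]

def approval_rates(log):
    totals, approvals = Counter(), Counter()
    for fn, decision in log:
        totals[fn] += 1
        if decision == "approved":
            approvals[fn] += 1
    return {fn: approvals[fn] / totals[fn] for fn in totals}

rates = approval_rates(approval_log)
print(rates)  # {'send_customer_email': 1.0, 'update_billing': 0.0}
```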
Goal isn’t bureaucracy. It’s capability + trust: agents that know their limits and involve humans when it matters.
The future of agentic systems isn't full autonomy. It's intelligent collaboration between humans and agents, with protocols that make that collaboration seamless.
If you have thoughts or feedback on these ideas, I welcome your perspective.