Your agents are working away, but why can't your users see them working?
PwC's AI Agent Survey found that 79% of executives say their company already uses AI agents. But trust fractures under pressure, as only 20% trust agents with financial transactions, only 22% trust them to interact with employees autonomously, and 28% rank lack of trust as a top-three barrier to getting further value. The more consequential the task, the more users need to see what's happening.
The gap between what agents can do and what users will let them do is becoming an interface problem.
The Invisible Agent Problem
Most agentic frameworks are headless by design. They orchestrate conversations, manage tool calls, and coordinate multi-agent workflows — all behind the scenes. But there's no standard way to surface that orchestration to the people who are supposed to benefit from it.
Every team that needs agents in front of users ends up building custom UI integration: fragile, tightly coupled, and non-reusable. That's a bottleneck for the current project, and for every project after it.
The cost isn't just engineering time; it's adoption velocity. When agent behavior is opaque, trust doesn't develop, oversight becomes guesswork, and projects stall between pilot and production. KPMG found that 60% of organizations restrict agent access to sensitive data without human oversight. The quality of the human-agent interface, in other words, directly determines how much value agents can deliver.
Human-in-the-Loop — Part of the Architecture
HITL is often framed as a transitional need, a stopgap until agents become reliable and capable enough. That framing understates the permanent role humans play in consequential work: HITL is an architectural requirement for production systems. The question is not whether to include it, but how good the experience around it is.
Most implementations today are rudimentary: a blocking modal with "approve" and "reject" buttons. Good HITL requires richer patterns: intent preview before execution, confidence signals that surface uncertainty, explainable rationale linked to user preferences, and escalation pathways for ambiguous situations.
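To make the first two patterns concrete, here is a minimal sketch of an "intent preview" payload gated by a confidence signal. The function name, field names, and threshold are illustrative assumptions, not a fixed AG2 or AG-UI schema.

```python
def intent_preview(action: str, args: dict, confidence: float,
                   rationale: str, threshold: float = 0.8) -> dict:
    """Build a pre-execution preview; low confidence escalates to approval.

    All field names here are hypothetical, chosen for illustration only.
    """
    return {
        "action": action,
        "args": args,
        "confidence": confidence,
        "rationale": rationale,
        # Below the threshold, the agent must ask before acting.
        "needs_approval": confidence < threshold,
    }

# A low-confidence proposal gets routed to the user for sign-off.
preview = intent_preview(
    "archive_tickets", {"older_than_days": 90},
    confidence=0.64, rationale="Backlog contains many stale tickets",
)
print(preview["needs_approval"])  # True
```

The useful property is that confidence is surfaced to the user as data, not buried in the agent's internals, so the frontend can render it however it likes.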
HITL should also be applied on a progressive scale. Production systems need an autonomy spectrum:
- Observe & Suggest — the agent offers recommendations
- Plan & Propose — the agent creates plans, the user approves
- Act with Confirmation — the agent executes, the user confirms each step
- Act Autonomously — the agent operates within defined guardrails
The Autonomy Spectrum: from supervised to autonomous
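One way to make the spectrum operational is a per-action autonomy policy. The sketch below is a hypothetical mapping (the action names and policy table are invented for illustration) showing how riskier actions, echoing PwC's numbers, get lower autonomy levels:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """The four levels from the spectrum above, in increasing autonomy."""
    OBSERVE_AND_SUGGEST = 1
    PLAN_AND_PROPOSE = 2
    ACT_WITH_CONFIRMATION = 3
    ACT_AUTONOMOUSLY = 4

# Hypothetical policy table: higher-stakes actions get less autonomy.
POLICY = {
    "summarize_report": AutonomyLevel.ACT_AUTONOMOUSLY,
    "draft_email": AutonomyLevel.ACT_WITH_CONFIRMATION,
    "reorganize_files": AutonomyLevel.PLAN_AND_PROPOSE,
    "transfer_funds": AutonomyLevel.OBSERVE_AND_SUGGEST,
}

def requires_human(action: str) -> bool:
    """Anything below full autonomy needs some human involvement.

    Unknown actions fail safe to the most supervised level.
    """
    level = POLICY.get(action, AutonomyLevel.OBSERVE_AND_SUGGEST)
    return level < AutonomyLevel.ACT_AUTONOMOUSLY

print(requires_human("transfer_funds"))    # True
print(requires_human("summarize_report"))  # False
```

Defaulting unknown actions to the most supervised level is the key design choice: autonomy is something an action earns, not something it starts with.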
PwC's data supports this: trust sits at 38% for data analysis but drops to 20% for financial transactions. The autonomy level should match the stakes. Frameworks that support only fully autonomous or fully manual operation miss how enterprise adoption actually works: gradually, as confidence builds.
A Standard Protocol for Agent UX
This is the problem AG-UI sets out to solve. AG-UI (the Agent-User Interaction Protocol) is an open, event-based standard that defines how agent backends communicate with frontend applications. Rather than each team inventing custom plumbing, agents emit standardized events that any frontend can consume.
It fits into the protocol stack emerging across the agentic ecosystem:
AG2 and the Agentic Protocol Stack — AG-UI, MCP, A2A, and OpenTelemetry
AG-UI uses Server-Sent Events over HTTP, streaming structured JSON events covering the full agent lifecycle: run management, token-by-token text streaming, tool call execution, and state synchronization. It supports bidirectional control — users can steer execution, and agents can surface checkpoints requiring human input.
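On the wire, that looks like SSE `data:` lines carrying JSON events. The event type names below (`RUN_STARTED`, `TEXT_MESSAGE_CONTENT`, `RUN_FINISHED`) follow the AG-UI spec's published types, but the payload fields are simplified assumptions; consult the spec for the full schema. This sketch shows a frontend-side consumer reassembling streamed text:

```python
import json

def consume(sse_lines):
    """Collect streamed message text from an AG-UI-style event stream.

    Only a handful of event types are handled; a real client would
    also process tool calls, state snapshots, and errors.
    """
    text = []
    for line in sse_lines:
        if not line.startswith("data:"):
            continue  # skip blank keep-alives and SSE comments
        event = json.loads(line[len("data:"):].strip())
        if event["type"] == "TEXT_MESSAGE_CONTENT":
            text.append(event["delta"])  # token-by-token streaming
        elif event["type"] == "RUN_FINISHED":
            break
    return "".join(text)

stream = [
    'data: {"type": "RUN_STARTED", "runId": "run-1"}',
    'data: {"type": "TEXT_MESSAGE_CONTENT", "delta": "Hello, "}',
    'data: {"type": "TEXT_MESSAGE_CONTENT", "delta": "world."}',
    'data: {"type": "RUN_FINISHED", "runId": "run-1"}',
]
print(consume(stream))  # prints "Hello, world."
```

Because every event is typed JSON, any frontend that can read an SSE stream can render the agent's lifecycle without knowing anything about the backend framework.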
The protocol was developed by CopilotKit, and we've worked closely with their team on AG2's integration. It's open, vendor-neutral, and designed to work with any frontend.
Integration
AG2's AG-UI integration needs only a few lines of code to go from agent to interactive endpoint. Lifecycle events signal when runs start or finish. Text streaming delivers tokens in real time. Tool events show what's being called and returned. State snapshots keep the frontend synchronized, particularly important in multi-agent workflows where users need to see which agent is active.
HITL is native to the protocol, so when an agent needs user input, it surfaces through the same event stream.
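Conceptually, a checkpoint is just another event in the stream: the agent pauses, the user answers out of band, and the run resumes. The sketch below illustrates that shape with a generator; the `HUMAN_INPUT_REQUESTED` event name and all payload fields are invented for illustration and are not the exact AG-UI schema.

```python
from queue import Queue

def run_with_checkpoint(approvals: Queue):
    """Yield events for a run that pauses at a human checkpoint.

    `approvals` stands in for the out-of-band channel a frontend
    would use to answer the checkpoint (names are hypothetical).
    """
    yield {"type": "RUN_STARTED"}
    yield {"type": "HUMAN_INPUT_REQUESTED",
           "prompt": "Approve sending 3 emails?"}
    approved = approvals.get()  # block until the user responds
    if approved:
        yield {"type": "TOOL_CALL_START", "toolName": "send_emails"}
        yield {"type": "TOOL_CALL_END"}
    yield {"type": "RUN_FINISHED", "approved": approved}

# The user approves, so the tool call appears in the stream.
answers = Queue()
answers.put(True)
for event in run_with_checkpoint(answers):
    print(event["type"])
```

Because the checkpoint travels through the same stream as every other event, the frontend needs no separate approval API: it renders the prompt, sends the answer back, and keeps consuming events.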
As with OpenTelemetry for observability, we chose an open standard; AG-UI empowers AG2 developers to integrate their own UIs just as easily.
What's Next
Today's integration supports single-agent UI natively. We're continuing our collaboration with CopilotKit to build out richer multi-agent UI support — giving users direct visibility into multi-agent coordination as it happens.
Get started by:
- Diving into our technical blog post
- Exploring our AG-UI documentation
- Reading CopilotKit's companion post
- Trying the AG-UI Dojo
- Running the multi-agent Feedback Factory
Big thanks to the CopilotKit team!
