Google has introduced A2UI, an open-source protocol that lets AI agents generate dynamic, secure, context-sensitive user interfaces.
Google recently announced the public launch of A2UI (Agent to UI), an open-source project designed to enable AI agents to generate rich and interactive user interfaces. Released on December 15, 2025, in version 0.8, this protocol aims to standardize how agents interact with applications to create contextually appropriate graphical interfaces.
The Challenge: The Limitations of Textual Interfaces
Until now, interactions with AI agents have primarily relied on text exchanges. Google highlights this limitation with a practical example: booking a restaurant table through a chatbot involves multiple tedious back-and-forths (date, time, number of people, availability, etc.).
With A2UI, the agent is expected to directly generate a graphical form featuring a date selector, a time dropdown list, and a submit button. Instead of typing messages, the user fills out the form, making the experience smoother and more intuitive.
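A declarative description of such a reservation form might look roughly like the sketch below. The component and property names here are purely illustrative, not the actual A2UI schema; the point is that the agent ships structured JSON data, never executable code.

```python
import json

# Illustrative sketch only: "component", "children", etc. are hypothetical
# field names, not the real A2UI message format.
reservation_form = {
    "component": "Form",
    "children": [
        {"component": "DatePicker", "id": "date", "label": "Date"},
        {"component": "Dropdown", "id": "time", "label": "Time",
         "options": ["18:00", "18:30", "19:00", "19:30"]},
        {"component": "NumberInput", "id": "party_size", "label": "Guests"},
        {"component": "Button", "id": "submit", "label": "Book table"},
    ],
}

# The agent serializes the description and sends it to the client,
# which decides how (and whether) to render each component.
payload = json.dumps(reservation_form)
print(len(json.loads(payload)["children"]))  # prints 4
```

The user then interacts with the rendered widgets instead of typing each answer into the chat.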
How A2UI Operates
A2UI is based on a declarative JSON format that agents send to client applications. Unlike traditional approaches that transmitted HTML or JavaScript through iframes, A2UI does not send executable code but rather a structured description of the interface.
The client application maintains a “catalog” of pre-approved UI components (buttons, text fields, cards, charts, etc.). When an agent generates a response using A2UI, it can only request the rendering of components from this trusted catalog. This ensures that the client retains complete control over style, security, and accessibility.
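The catalog check described above can be sketched as a simple recursive validation on the client side. The catalog contents and the `component`/`children` keys are assumptions for illustration, not the real A2UI API.

```python
# Hypothetical client-side guard: only components from a pre-approved
# catalog are ever rendered; anything else is rejected outright.
TRUSTED_CATALOG = {"Form", "Button", "TextField", "DatePicker",
                   "Dropdown", "Card", "Chart"}

def validate(node: dict) -> bool:
    """Accept a UI tree only if every node names a trusted component."""
    if node.get("component") not in TRUSTED_CATALOG:
        return False
    return all(validate(child) for child in node.get("children", []))

safe = {"component": "Card", "children": [{"component": "Button"}]}
unsafe = {"component": "Card", "children": [{"component": "RawHTML"}]}
print(validate(safe), validate(unsafe))  # prints: True False
```

Because the agent can only name components, never define them, the client keeps full authority over how each widget looks and behaves.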
The format is designed to be easily generated by language models. Rather than having to create the complete interface at once, the agent can build it gradually, component by component, and modify it as the conversation with the user progresses.
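Incremental construction could work along the lines of the following sketch, where the agent streams small updates that add or replace components keyed by an `id`. Again, the update shape is a hypothetical illustration, not the protocol's actual wire format.

```python
# Hypothetical incremental-update logic: each update inserts a new
# component or replaces an existing one with the same "id".
def apply_update(ui: dict, update: dict) -> None:
    children = ui.setdefault("children", [])
    for i, child in enumerate(children):
        if child["id"] == update["id"]:
            children[i] = update  # replace the existing component
            return
    children.append(update)      # otherwise append a new one

form = {"component": "Form", "children": []}
apply_update(form, {"id": "date", "component": "DatePicker"})
apply_update(form, {"id": "date", "component": "DatePicker",
                    "label": "Pick a date"})
print(len(form["children"]))  # prints 1: one component, now updated
```

This keeps each model output small and lets the interface evolve turn by turn instead of being regenerated wholesale.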
A Universal Format Adaptable Across All Platforms
A2UI is designed to function everywhere. Essentially, an agent can generate a single interface description, and it will display correctly on any platform (website, iOS or Android mobile app, desktop software).
This universality addresses a significant challenge. In multi-agent systems, agents are not always hosted in the same location. An agent might run on a remote server, belong to another organization, or collaborate with other agents to perform a complex task.
The issue is that these remote agents cannot directly modify your application’s interface. They must send instructions. A2UI provides them with a common language to describe the desired interface, which the client application then displays using its own graphical components. Thus, the interface generated by the agent naturally resembles your application, without any visual discontinuity.
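On the client side, that "common language" amounts to a mapping from shared component names to platform-native widgets. The sketch below shows the idea for one hypothetical client; a Flutter, web, or Android client would keep its own mapping, which is why the same description renders natively everywhere.

```python
# Hypothetical renderer: each client maps the shared component names to
# its own native widgets. Names and markup here are illustrative only.
WIDGETS = {
    "Button": lambda n: f"<native-button label='{n.get('label', '')}'>",
    "TextField": lambda n: f"<native-textfield id='{n.get('id', '')}'>",
}

def render(node: dict) -> str:
    factory = WIDGETS.get(node["component"])
    if factory is None:
        raise ValueError(f"unknown component: {node['component']}")
    return factory(node)

print(render({"component": "Button", "label": "Book"}))
```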
A Complementary Tool for Existing Solutions
Google does not present A2UI as a replacement for current tools but as an additional component in the AI agent ecosystem. The protocol focuses on a specific issue: enabling different agents to generate interfaces that work across all platforms.
Specifically, A2UI can be used to create contextually appropriate reservation forms, generate customized dashboards based on requested data, or compose interfaces for approving enterprise workflows. The agent adapts the interface in real-time according to user needs, without requiring a developer to code every possible variation.
Google illustrates the approach with a demo: “In this example, a user uploads a photo, and a remote agent uses Gemini to analyze it and create a custom form tailored to the client’s specific landscaping needs.”
A2UI also differs from approaches built on the Model Context Protocol (MCP), which display agent-generated interfaces in isolated windows (iframes). With A2UI, the interface integrates directly into the application, preserving its native appearance and accessibility features. In principle, users see no difference between elements generated by the agent and those of the application itself.
How to Test A2UI
The project is available on GitHub under the Apache 2.0 license. Developers can access documentation at a2ui.org and test out provided examples, including a restaurant search agent. Client libraries are already available for Flutter, Web Components, and Angular, with more frameworks planned through community contributions.
Google encourages developers to participate in the protocol’s evolution and to share their feedback on this actively developing version 0.8.

Jordan Park writes in-depth reviews and editorial opinion pieces for Touch Reviews. With a background in UI/UX design, Jordan offers a unique perspective on device usability and user experience across smartphones, tablets, and mobile software.