
The Future of Webapps


There’s a quiet assumption baked into every webapp ever built: that all users need roughly the same interface. You design screens, draw wireframes, agree on a layout, and ship it. Power users ignore 80% of it. Beginners drown in the other 20%. And when two employees have genuinely different workflows, you build two apps—or one bloated app that tries to please everyone.

What if the UI wasn’t pre-designed at all?

The Problem with Today’s Frontends

Consider a realistic scenario at a mid-sized company:

Employee A starts their morning by pulling up a client record, checking contact details from the CRM, and reviewing recent invoices from the billing service. That’s their loop: client data → billing history.

Employee B does something different. They pull the same client record, but then they pivot—they need to see related entities: partners, subsidiaries, linked accounts. Their workflow is about relationships, not invoices.

Today, both employees navigate three separate frontends. The CRM, the billing tool, the relationship graph. They context-switch constantly. They’re trained on all three. And the apps themselves are full of features neither of them uses.

This isn’t a UX failure. It’s a structural one. One-size-fits-all UIs are a product of one-size-fits-all development. You build what you can predict, and you ship it to everyone.

A New Protocol Stack

The technical pieces for a different approach are falling into place.

MCP (Model Context Protocol) established a standard for connecting AI agents to external tools and data sources. An agent can call your CRM, your billing API, your graph database—through a common interface. This was the first piece.
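The value of that common interface is easiest to see in code. The sketch below is illustrative only, not the real MCP SDK: the tool names (`crm.getClient`, `billing.listInvoices`) and the `Tool` shape are invented to show the core idea, which is that every backend service is described the same way, so the agent only ever learns one calling convention.

```typescript
// Hypothetical sketch of an MCP-style tool registry. Names and shapes are
// invented for illustration; the real protocol defines its own schemas.
type ToolResult = { content: unknown };

interface Tool {
  name: string;
  description: string; // the agent reads this to decide when to call the tool
  call(args: Record<string, unknown>): ToolResult;
}

const tools: Tool[] = [
  {
    name: "crm.getClient",
    description: "Fetch a client record by name",
    call: (args) => ({ content: { name: args.name, id: "c-1" } }),
  },
  {
    name: "billing.listInvoices",
    description: "List recent invoices for a client id",
    call: (args) => ({ content: [{ id: "inv-9", clientId: args.clientId }] }),
  },
];

// One calling convention, regardless of which backend sits behind the tool.
function callTool(name: string, args: Record<string, unknown>): ToolResult {
  const tool = tools.find((t) => t.name === name);
  if (!tool) throw new Error(`unknown tool: ${name}`);
  return tool.call(args);
}
```

Adding a third service (say, the relationship graph) means registering one more entry in the list, not building a new integration path for the agent.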

WebMCP extends this to the browser. Instead of agents running in server-side shells, they can operate directly in web environments, interacting with page content and browser APIs.

But the most significant pieces are AG-UI and A2UI—protocols specifically designed for agents to communicate with UI layers. Not just to request data, but to construct interfaces. An agent doesn’t just answer “here are the client’s invoices”—it renders a table, attaches filters, links to the billing record. The UI becomes a side effect of the agent’s reasoning.
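To make "the UI as a side effect of reasoning" concrete: instead of returning prose, the agent returns a render tree. The component-spec shape below is invented for illustration, since AG-UI and A2UI message formats are still settling.

```typescript
// Invented component spec, sketching the general idea behind AG-UI/A2UI:
// the agent's answer is structured UI, not a paragraph of text.
type UIComponent =
  | { kind: "table"; columns: string[]; rows: string[][] }
  | { kind: "link"; label: string; href: string };

// "Here are the client's invoices" becomes: a table, plus a link
// back to the billing record.
function answerInvoicesQuery(
  invoices: { id: string; total: number }[],
): UIComponent[] {
  return [
    {
      kind: "table",
      columns: ["Invoice", "Total"],
      rows: invoices.map((i) => [i.id, i.total.toFixed(2)]),
    },
    { kind: "link", label: "Open in billing", href: "/billing/acme" },
  ];
}
```

The frontend's job shrinks to interpreting this tree and rendering it with whatever component library it ships.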

Warning

AG-UI and A2UI are emerging protocols still gaining adoption. The patterns described here are directional—the ecosystem is moving this way, but the tooling is early.

The Vision: UIs That Compose Themselves

Imagine both employees from the example above working in the same app. The “app” is just a chat interface backed by agents with access to all their company’s services.

Employee A types: “Show me the details for Acme Corp and their last three invoices.”

The agent pulls from the CRM and the billing service. It composes a dashboard: client card on the left, invoice timeline on the right. Exactly what’s needed. Nothing extra.

Employee B types: “Pull up Acme Corp—I want to see who’s connected to them.”

Different intent. Different UI. Same underlying infrastructure.

There’s no pre-defined screen for either of these interactions. The agent interprets the intent, queries the right services via MCP, and uses AG-UI/A2UI to render the appropriate interface on the fly. Two employees, two workflows, one app.
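The routing step, "interpret the intent, pick a layout," can be sketched in a few lines. In a real system an LLM does the interpreting; here a toy keyword match stands in for it, and the layout names are invented.

```typescript
// Toy intent router: the same entity ("Acme Corp") yields different
// layouts depending on what the user asks. A real agent would use an
// LLM for this; keyword matching is a stand-in.
type Layout = "client+invoices" | "relationship-graph";

function composeLayout(utterance: string): Layout {
  const wantsRelations = /connected|related|partners|subsidiar/i.test(utterance);
  return wantsRelations ? "relationship-graph" : "client+invoices";
}
```

Employee A's and Employee B's requests flow through the same function and come out as different interfaces, which is the whole point: one app, many workflows.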

The frontend becomes a rendering surface, not an application in itself.

The Real Trade-off: Blank Page Syndrome

This sounds compelling, but there’s a genuine problem: blank page syndrome.

Give users a chat box and infinite possibility, and many of them freeze. They don’t know what to ask. They don’t know what the system can do. The power user thrives; the average user is lost.

Today’s frontends, for all their rigidity, do something important: they communicate affordances. Buttons tell you what you can click. Navigation tells you what exists. A well-designed UI is a map of the system’s capabilities.

An agent-composed UI has no map. The user has to know what to ask before they know what’s possible.

Tip

This is the same failure mode as early search engines before Google figured out that “I’m feeling lucky” wasn’t enough—users needed curated entry points, not just a query box.

The Sweet Spot: Pre-Defined Shell, Fluid Interior

The answer isn’t full fluidity—it’s a hybrid.

The shell stays. Navigation, primary workflows, role-based starting points—these are pre-defined. They communicate what the system offers and give users somewhere to land.

The interior composes itself. Once a user is in context—viewing a client, investigating an anomaly, preparing a report—that’s where the agent takes over. The widgets, tables, and panels relevant to this specific task are assembled on the fly.

Think of it as the difference between a city’s street grid (fixed, navigable) and the conversations that happen inside buildings (fluid, contextual). You still need streets. But what happens inside doesn’t have to be pre-architected.

Frontend Paradigms

| Aspect | Today's Webapp | Hybrid Agent UI |
| --- | --- | --- |
| Layout | Pre-designed by developers | Shell fixed, interior agent-composed |
| Workflow | Same for all users | Adapts to individual intent |
| Onboarding | Learn the UI | Learn what to ask |
| Context switching | Multiple apps | One surface, multiple agents |
| Discoverability | Navigation menus | Suggestions + shell entry points |
| Backend access | Dedicated frontend per service | Agents via MCP across all services |

What This Means for Developers

If this trajectory holds, the job of a frontend developer shifts.

Less time designing pixel-perfect layouts for every state and edge case. More time designing the grammar of what agents can render: what components exist, what data they accept, what interactions they support. You’re building a component library that agents use as a vocabulary.
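A "vocabulary" for agents is, in practice, a set of component schemas the agent can validate a composition against before rendering. The schema format below is invented for illustration; real systems would likely lean on something like JSON Schema.

```typescript
// Invented sketch of a component vocabulary: each entry declares what
// props it accepts, so an agent-proposed composition can be checked
// before it reaches the screen.
interface ComponentSpec {
  name: string;
  props: Record<string, "string" | "number" | "string[]">;
}

const vocabulary: ComponentSpec[] = [
  { name: "ClientCard", props: { clientName: "string", accountId: "string" } },
  { name: "InvoiceTimeline", props: { invoiceIds: "string[]" } },
];

function validate(name: string, props: Record<string, unknown>): boolean {
  const spec = vocabulary.find((c) => c.name === name);
  if (!spec) return false; // agent asked for a component we don't ship
  return Object.entries(spec.props).every(([key, type]) => {
    const value = props[key];
    if (type === "string[]") {
      return Array.isArray(value) && value.every((v) => typeof v === "string");
    }
    return typeof value === type;
  });
}
```

Designing this layer well is the new frontend craft: the components are the nouns, the validation rules are the grammar, and the agent writes the sentences.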

The backend becomes more important, not less. APIs need to be well-documented, consistently structured, and MCP-compatible. Agents are only as good as the services they can reach.

And security surfaces change. Instead of users clicking through a fixed UI with known paths, agents are dynamically composing requests. Permissions, rate limiting, and audit trails need to think in terms of agent intent, not just endpoint access.
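One way to think about intent-level authorization: since agents compose requests dynamically, permissions key on (role, tool) pairs and every call leaves an audit entry, rather than trusting that a fixed UI only exposes safe paths. The roles, tool names, and policy shape below are all invented for illustration; this is a sketch, not a policy engine.

```typescript
// Hedged sketch of intent-aware authorization. Roles and tool names
// are invented; a real system would use a proper policy engine.
type Role = "sales" | "finance";

const allowed: Record<Role, string[]> = {
  sales: ["crm.getClient", "graph.relatedEntities"],
  finance: ["crm.getClient", "billing.listInvoices"],
};

const auditLog: { role: Role; tool: string; permitted: boolean }[] = [];

function authorize(role: Role, tool: string): boolean {
  const permitted = allowed[role].includes(tool);
  auditLog.push({ role, tool, permitted }); // every agent call is recorded
  return permitted;
}
```

The audit trail matters as much as the allow-list: when the UI is composed on the fly, the log of what the agent asked for is the only stable record of what actually happened.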

Final Thoughts

We’re not at the endpoint yet. The protocols are young, the tooling is immature, and most enterprise software is still built the way it was in 2010. But the direction is legible.

The future webapp isn’t a traditional SPA with a React component tree. It’s a chat interface with a rendering surface, backed by agents that understand your data and your intent. The frontend becomes a consequence of the conversation, not a pre-condition for it.

The blank page problem is real, and the answer is probably a decade of UX learning we haven’t done yet. But the underlying shift—from designed interfaces to composed interfaces—feels inevitable.

Figma might stay. But the mockup-to-code pipeline is going to look very different.


What’s your take on agent-composed UIs? Have you seen AG-UI or A2UI in the wild yet? The comments are open—or find me on LinkedIn.
