Most products haven’t caught up to that reality yet. They’re treating AI as something to bolt onto existing experiences rather than a reason to rethink the entire interaction model. But there’s a difference between augmenting existing tools with AI and building something AI-native from the ground up.
Augmentation vs AI-native flows
Current AI tools mostly use the augmentation pattern: a sidebar in your word processor, a panel in your design tool, an assistant floating beside your spreadsheet. And for many use cases, this is exactly right.
What would AI-native look like? Not a blank page with an assistant waiting on the side. Something different:
- Starting from rough sketches or prompts instead of a blank page
- Showing multiple writing directions simultaneously, not a single linear document
- Making the editing process visible and collaborative between human and AI
- Introducing version control at the block level
These aren’t incremental improvements. They’re fundamentally different interaction models. The sidebar pattern serves its purpose for augmentation. But AI-native thinking opens up entirely new possibilities.
The patterns emerging
Just as we discovered responsive design, fluid type-scales, and fluid spacing with mobile, we’re now discovering the underlying patterns that make agentic experiences useful and valuable. The foundations are still being laid, which means we get to question and experiment while building on previous research.
Some of the patterns taking shape:
Interaction modes: When should AI be full-screen and immersive (ChatGPT, Claude)? When should it be a sidebar assistant (Copilot in Office)? When should it be embedded and contextual (inline suggestions)? When should it be invisible and proactive (autocomplete)?
I built Write Brief with AI at Automattic using the embedded contextual pattern. AI suggestions appear inline as users write, providing help exactly when it’s needed without disrupting their flow. What I learned: in this use case, users want control over when AI activates. They don’t want suggestions appearing constantly, but they also don’t want to have to invoke them explicitly. Finding that balance required iteration. Each mode serves different user intents and workflow needs.
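One way to think about that balance is as an explicit trigger policy. This is a minimal sketch, not the actual Write Brief with AI implementation: it requests a suggestion only when the user has paused at a natural boundary, so help arrives without constant interruption or explicit invocation. All names and thresholds here are illustrative assumptions.

```typescript
// Illustrative trigger policy for inline AI suggestions.
interface TriggerState {
  lastKeystrokeAt: number; // epoch ms of the most recent keystroke
  textSoFar: string;       // content of the block being written
}

const IDLE_THRESHOLD_MS = 800; // assumed pause length that counts as "done typing"

// Suggest only when the user has paused AND just finished a sentence,
// so suggestions feel timely without being constant.
function shouldSuggest(state: TriggerState, now: number): boolean {
  const idleLongEnough = now - state.lastKeystrokeAt >= IDLE_THRESHOLD_MS;
  const atSentenceBoundary = /[.!?]\s*$/.test(state.textSoFar);
  return idleLongEnough && atSentenceBoundary;
}
```

The interesting design work is in the two conditions: each one encodes a judgment about when help is welcome, and tuning them is exactly the iteration described above.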
Context management: How does the LLM gain context so results don’t waste cycles converging on the user’s intent? Do we provide explicit context windows? Allow file uploads? Maintain conversation history?
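Each of those questions is a design decision with a concrete mechanism behind it. As one sketch of the conversation-history option: keep the system prompt plus as many recent turns as fit a token budget. The token counting below is a rough word-count stand-in, assumed for illustration; a real product would use the model's tokenizer.

```typescript
// Sketch: trim conversation history to a token budget, newest turns first.
interface Turn {
  role: "system" | "user" | "assistant";
  content: string;
}

// Crude stand-in for real tokenization (assumption for the sketch).
const approxTokens = (text: string): number => text.split(/\s+/).length;

function trimHistory(turns: Turn[], budget: number): Turn[] {
  const [system, ...rest] = turns; // always preserve the system prompt
  const kept: Turn[] = [];
  let used = approxTokens(system.content);
  // Walk backwards so the most recent turns survive the cut.
  for (let i = rest.length - 1; i >= 0; i--) {
    const cost = approxTokens(rest[i].content);
    if (used + cost > budget) break;
    kept.unshift(rest[i]);
    used += cost;
  }
  return [system, ...kept];
}
```

Even this tiny sketch embeds an interaction-design stance: recency beats completeness, and the system prompt is never sacrificed. Different products will make different calls.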
Confidence calibration: Unlike traditional UI where a button always does the same thing, LLM outputs are probabilistic. The same prompt can yield different responses. This means we need new patterns for helping users embrace variability rather than fight it. How do we build trust when the system can’t guarantee outputs? How do we provide control when the system is fundamentally non-deterministic?
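One pattern for embracing that variability rather than hiding it: sample the same prompt several times and let the user choose among the directions. The sketch below assumes a hypothetical `generate` function standing in for any non-deterministic model call.

```typescript
// Sketch: surface variability as a feature by showing several samples
// of the same prompt. `generate` is a hypothetical model call.
function sampleDirections(
  generate: (prompt: string) => string,
  prompt: string,
  n: number
): string[] {
  // Each call may differ because sampling is probabilistic; presenting
  // the spread gives users control without pretending determinism.
  return Array.from({ length: n }, () => generate(prompt));
}
```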
Tool orchestration: What agentic capabilities could make an experience smoother? Should the AI automatically search the web, or ask permission? Access your calendar, or wait for you to share it? These aren’t just technical questions. They’re interaction design decisions about agency and control.
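Those agency-and-control decisions can be made explicit rather than left implicit in the implementation. A minimal sketch, with illustrative tool names and policies that are assumptions, not any product's actual configuration:

```typescript
// Sketch: tool orchestration as a designed permission policy.
type Policy = "auto" | "ask" | "never";

const toolPolicies: Record<string, Policy> = {
  web_search: "auto",   // low-stakes, read-only: run without asking
  calendar_read: "ask", // personal data: request permission each time
  send_email: "never",  // side effects on others: require manual action
};

// What may the agent do before invoking this tool?
function gateTool(tool: string): Policy {
  return toolPolicies[tool] ?? "ask"; // unknown tools default to asking
}
```

The point is that "search automatically or ask first" becomes a reviewable table rather than behavior scattered through code, which is exactly what makes it an interaction design decision.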
These are just a few patterns. Many more are still being defined. That’s the opportunity.
Why this matters now
The change we are seeing isn’t just about AI helping us work faster. It’s that the interfaces can become dynamic. They can adjust in real-time to different users, contexts, and needs. That’s a fundamentally different design problem than creating one fixed experience everyone sees.
We’re designing systems that learn, adapt, and respond probabilistically. Systems that can be confident and wrong. Systems that need to build long-term relationships with users, not just complete discrete tasks. IBM’s design for AI framework approaches this through relationship development stages, treating the progression from first impressions to full partnership as a design problem.
The new UX
On the surface, generative AI appears as a simple chat box: an HTML textarea component we’ve designed hundreds of times. But beneath this familiar surface lie entirely new considerations, designed by humans to cultivate the relationship between human and machine intelligence.
This is the new UX.
The patterns are still emerging. Let’s shape them together.
- Cover image created with Midjourney. Prompt: Morphogenesis pattern formation, cellular structures emerging and taking shape, kinfolk magazine aesthetic, minimal clean design, muted earth tones, beige and soft gray palette, organic geometric forms, subtle texture, editorial photography style, natural light, soft shadows, tactile quality, scandinavian minimalism, high contrast, lots of negative space, sophisticated and refined, 35mm film grain --ar 16:9 --sref 3123024162 --sw 30
