AI is reshaping the very foundations of how humans and machines interact. Traditional UX patterns (buttons, menus, forms) no longer capture the full spectrum of possibilities. To design meaningfully in this new era, we need to understand the new paradigms: the high-level models that define what kind of relationship a user has with an AI system / and how they come together with the old paradigms to form new surfaces + interactions + experiences.
So far I have identified six interaction paradigms, and this is one way of visualising them together:
conversational interfaces
Natural language becomes the interface: a back-and-forth between people and machines. Users interact with AI through chat, voice, or multimodal dialogue, engaging in multi-turn conversations where context and memory matter. Conversational UIs lower the barrier to entry / people can simply ask for what they need. But designing them requires solving for ambiguity, grounding, tone, and trust.
The paradigm spans from simple chatbots to sophisticated assistants like ChatGPT, Claude, or Alexa, where the boundary between a “tool” and a “partner” blurs / this is what makes it a foundational piece of many agentic experiences as well.
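The defining mechanic here, multi-turn state where context accumulates across the dialogue, can be pictured with a toy sketch (all names below are hypothetical, not any vendor’s API):

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    """Minimal multi-turn dialogue state; context and memory are the point."""
    history: list = field(default_factory=list)

    def add(self, role, text):
        self.history.append({"role": role, "content": text})

    def context(self, max_turns=10):
        # A production system would summarize or retrieve older turns
        # instead of simply truncating them.
        return self.history[-max_turns:]

chat = Conversation()
chat.add("user", "Plan a 3-day trip to Lisbon")
chat.add("assistant", "Here is a draft itinerary...")
chat.add("user", "Make day 2 more food-focused")  # only makes sense given earlier turns
```

The third message is the whole argument for this paradigm: it is meaningless without the accumulated context, which is exactly what a stateless button or form cannot carry.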
agentic + assistive experiences
Here, AI is not just reactive but takes initiative. Users delegate goals instead of step-by-step tasks, and the system executes multi-step workflows / sometimes autonomously. This covers copilots that sit inside apps (e.g., GitHub Copilot), full-fledged agents that perform tasks across tools, and background assistants that monitor for triggers. The challenge: visibility, interruptibility, and trust. Users must understand what the agent is doing and retain control without micromanaging.
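One way to picture that contract (a deliberately naive sketch, not any real agent framework) is an execution loop where every step is surfaced and the user can interrupt before it runs:

```python
def run_agent(goal, steps, approve):
    """Execute a delegated goal step by step.

    `approve(step)` keeps the user in the loop: returning False
    interrupts the agent before that step runs (interruptibility),
    and the returned log makes every action inspectable (visibility).
    """
    log = [("goal", goal)]
    for step in steps:
        if not approve(step):
            log.append(("interrupted", step))
            break
        log.append(("done", step))
    return log

# Delegate a goal, but stop before anything irreversible.
trace = run_agent(
    "book a flight",
    ["search flights", "pick cheapest", "pay with saved card"],
    approve=lambda step: "pay" not in step,
)
```

The design question is where `approve` lives in the UI: a confirmation dialog micromanages, silence removes control, and most real agent UX sits somewhere between.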
co-creation + generative experiences
This paradigm frames AI not as a static tool but as a creative partner / one that generates, edits, and refines artifacts alongside the human. The defining quality is the iterative workflow: users issue prompts, receive outputs, and then loop through cycles of modification, variation, and editing. Prompting is central here / but not just as a one-shot command. Prompts become conversational levers for shaping outcomes: users explore ideas, set directions, and progressively refine the result. Importantly, it’s not limited to text — editing artifacts (rewriting, adjusting visuals, remixing outputs) is a core pattern. Whether it’s regenerating a paragraph, tweaking an image, or applying style variations, users and AI move together in a loop of creation and critique.
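The loop of creation and critique can be sketched as follows (a stand-in generator replaces the model call; every name here is illustrative):

```python
def cocreate(prompt, generate, refinements):
    """Iterative co-creation: generate a draft, then loop through
    user-directed refinements instead of one-shot prompting."""
    versions = [generate(prompt)]
    for feedback in refinements:
        # Each refinement edits the latest artifact rather than starting over.
        versions.append(generate(f"{versions[-1]} | revise: {feedback}"))
    return versions

# Stand-in for a model call, just to make the loop runnable.
fake_model = lambda p: p + " [edited]"
history = cocreate(
    "a tagline for a coffee shop",
    fake_model,
    ["warmer tone", "shorter"],
)
```

Keeping `versions` as a list rather than a single artifact matters for the UX: variation, comparison, and rollback are first-class moves in co-creation.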
command-based interactions powering AI
Command-based AI extends the familiar GUI paradigm - menus, sliders, forms, and buttons - but augments it with intelligence. Here, the experience is deterministic and tool-like: you give a clear command, the system executes it. Unlike co-creation flows, the focus is not on open-ended exploration but on precision, efficiency, and control. Commands might take the form of a structured input (“Summarize this document”), a filter, or a quick action (“Remove background” in an image editor).
The paradigm shines in productivity and professional tools - like Figma AI, Photoshop’s Generative Fill, or Excel’s AI-powered formulas - where users know what they want, and AI simply helps them get there faster. The design challenge is to make AI enhancements feel like natural extensions of existing interfaces, not disruptive replacements. It sits closer to conventional UX, with AI acting as a hidden layer of optimization rather than a creative collaborator.
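Structurally, this is closer to a command registry than to a dialogue: each control maps to one deterministic action, with the intelligence hidden inside the handler. A minimal sketch (hypothetical commands, with string stand-ins for model calls):

```python
# Each UI control maps to one named, deterministic command; the "AI"
# lives inside the handler, not in the interaction model.
COMMANDS = {
    "summarize": lambda doc: doc[:40] + "...",           # stand-in for a model call
    "remove_background": lambda img: f"{img} (bg removed)",
}

def execute(command, payload):
    """Clear command in, predictable result out; no open-ended dialogue."""
    handler = COMMANDS.get(command)
    if handler is None:
        raise ValueError(f"unknown command: {command!r}")
    return handler(payload)
```

The contrast with co-creation is visible in the shape of the code: there is no loop, no history, no refinement, just a request and a result.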
ambient + contextual interfaces