Design in the Age of Agents
How AI is reshaping the design profession — both how designers work and what they design for

Two transformations are underway in product design. The first is about process: AI has changed how designers work, what tools they use, and what the job requires. The second is about environment: the products designers build now exist in a world where AI agents can read, modify, and route around them.
Most coverage of AI and design focuses on the first transformation. This piece covers both, because the second is the one most designers and product managers have not yet reckoned with.
Process: the job is changing
The consensus description of AI's effect on design, "it makes you faster," misses what is shifting. Henry Modisett, Head of Design at Perplexity, said it plainly: "Traditional PRD > Design > Engineering > Ship process no longer works. The speed of AI capability development outpaces this pipeline."
Speed is a byproduct of AI adoption; the deeper change is structural.
From craftsperson to conductor
Soleio Cuervo, design investor and host of First of a Kind, frames the role shift this way: "Don't think about AI replacing your job — think about increasing your surface area of impact. The next big shift in design is orchestration. Designers will spend less time pushing pixels and more time guiding intelligent systems."
Several leading teams have made this shift concretely. At Cash App, over 90% of designers commit code. At Anthropic, the product design lead on the Claude Code team started shipping pull requests to production tools after adopting AI. At Anthropic Labs, Mike Krieger, co-founder of Instagram, describes his team's designers as writing "almost as much code as the engineers." The team structure has become a designer with conviction paired with a systems engineer: a co-founder dynamic, not a handoff chain.
This is not a demand that every designer become a full-stack engineer. It is a statement that the barrier to touching code, prototyping in real software, and shipping changes is lower, and AI-native teams are pushing everyone toward that edge.
David Kossnick, Head of Product AI at Figma, observed the same blurring: "Role blending lets you keep the team small, even if you're at a big company. AI tools bleed the stark lines in skillsets between different functions. Designers can code. PMs can prototype. Engineers can design. A designer wrote the first system prompt for Figma Make."1
Prototype to productize
Modisett's replacement for the traditional waterfall: Strategic conversation → Get anything working → Prune possibilities → Design → Ship → Observe. The key inversion: make something real first, then develop conviction about what to build.
Kossnick put numbers on it: "We're at the point now where it almost takes as much time to get to a prototype as it does to write a good PRD. Prototypes are the new PRD."2
Ryo Lu, Head of Design at Cursor, built a personal operating system in an afternoon of what he calls vibe coding: a time-traveling browser, a music player, a game. He built a functioning Cursor prototype inside Cursor to test the product's own features. His reason: "It's really hard to prototype these things in Figma if you want them to feel real. And the easier route right now is just to code."3
At Figma, the initial concept for Figma Make emerged from a hackathon prototype that "barely worked end-to-end. Failure rate was high. When it did work, it was incredible." That prototype became one of the company's flagship products. The pattern is consistent across teams: get something real in front of people before writing the spec, not after.
Centaurs, cyborgs, and the 20%
Harvard Business School research identified two patterns of human-AI collaboration that work. Centaur mode: human does the strategic and judgment work, AI does a well-scoped execution task. Cyborg mode: human and AI work back and forth; the final output is a collaboration where you cannot cleanly separate who did what.
Most design workflows involve both. Centaur mode fits well-defined tasks: copy editing, component state generation, code translation. Cyborg mode fits exploration and synthesis, where "good" is hard to pre-specify. The danger in either mode is falling asleep at the wheel. The HBS research found this is where quality degrades; AI reaches a plausible-looking result fast, and the last 20% of polish gets skipped.
Amelia Wattenberger, developer and researcher, described the changed rhythm: "What my rhythm used to be: think → type → see → think → type → see... What it is now: prompt → wait → read → evaluate → prompt → wait." This is not a sign you are doing it wrong; it is a sign the tools have not yet caught up to the way designers want to think.
Taste as the irreducible skill
Every AI-native design team (Figma, Perplexity, Anthropic, Cash App) has human taste as a gate on AI output. No tool automates this. Jenny Wen, design lead at Anthropic and former director of design at Figma, said it directly: "In a world where you can start to make anything with AI, what really matters is your ability to choose and curate what you make."4
Krieger identified precisely where AI falls short: "The models today are good at adding features. They are not necessarily good about figuring out what to cut out of the product." Taste is the cut function. The execution layer is getting cheaper. The judgment layer is not.
Tools: what collapsed
The traditional design tool stack (Figma for mocking, InVision for prototyping, UserTesting for validation) is not gone. But its categorical boundaries have dissolved. A PM prototypes in v0. A designer ships pull requests from Cursor. A product engineer builds a working application in an afternoon. The line between "design tool" and "development tool" no longer holds.
Prompt-to-UI
v0 (Vercel), Bolt.new, and Lovable generate working interfaces from natural language. v0 has evolved from a component generator into a full development environment with Git integration, Figma import, and in-browser code editing. Lovable's Visual Edits mode lets you click any rendered element and modify spacing, colors, and structure directly.
Jenny Wen observed what this does to product processes: "A PM can get to a working prototype faster than you can write this perfect problem statement or write a brainstorm. They do this without doing any research, making any personas." The design process is not broken because teams got lazy. It changed because tools changed the economics of making things.5
Design-to-code bridges
Figma Make and the Figma MCP Server bring design context directly into coding tools. The MCP server means design system tokens stop being static reference documents and become live context that AI agents read when generating code. Cursor's Visual Editor (launched December 2025) lets you click on rendered elements and describe changes in natural language, with agents running in parallel and results appearing in seconds.
The most conceptually interesting entrant is Paper, built by TK Kong (former Ramp designer). His framing captures the conductor model: "I am in charge of the agents as the commander." Paper generates HTML/CSS on a visual canvas; you edit on the canvas, then push designs back to a coding agent for production implementation. It was a breakout vendor in Ramp's design tooling reports in early 2026.
Context files as infrastructure
The tools are a moving target. Six months from now, half of any specific tool review will need updating. What will not change: the context file.
CLAUDE.md, Cursor rules, Windsurf cascades, design tokens fed via MCP: the act of writing down what "good" means for your product is now load-bearing infrastructure, not documentation. It is the constraint layer that makes AI output consistent rather than generic.
Megan Joy, product design lead on the Claude Code team, set up a personal CLAUDE.md that reads: "I am a product designer. I don't have coding experience. Give me more detailed explanations. Break down changes smaller." She shipped a real permission structure feature, completing 70% with Claude Code before handing to an engineer for the final 30%. Her verdict: "This might be the new way of building things."6
If you are a design team without a context file, you are doing the equivalent of onboarding a new engineer without documentation and hoping for consistency.
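What such a context file contains varies by team; as a hedged illustration (the file paths, rules, and wording below are invented, not taken from any team named above), a design-team context file might look like:

```markdown
# CLAUDE.md — design context (illustrative example)

## Voice
- Plain language in UI copy. No marketing superlatives.

## Components
- Use tokens from `tokens/semantic.json`; never hard-code hex values.
- One primary action per view.
- Prefer existing components over new ones; flag any new component for review.

## When generating UI
- Ship empty, loading, and error states with every screen.
- Explain structural changes before making them; keep diffs small.
```

The point is not the specific rules but that they are written down where the agent reads them on every run, which is what makes output consistent rather than generic.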
Designing for non-determinism
When AI is the product, not just the tool, different design challenges apply. Henry Modisett, who has built AI products at Perplexity, identified several that do not exist in deterministic software.7
Non-determinism in specs. Traditional design specs describe every state. AI products cannot be fully speced because the AI can produce outputs you did not anticipate. Designers are no longer writing exhaustive state catalogs; they are specifying constraints and defaults, then building systems to evaluate what the AI produces.
Speed as a UX variable. AI outputs are often slow. Modisett: "Speed is the most important facet of user experience, but many AI products work slowly." The design problem includes managing waiting, uncertainty, and progressive disclosure of results, a distinct skill from traditional interaction design.
Discoverability without navigation. Andrew Sims, writing in the Signal Path newsletter, framed the question: "If navigation helped users find features, what happens when features find you?" AI-native products are dissolving traditional navigation models. Features appear in context. The sidebar you would navigate to becomes a tool that surfaces when needed.
The user needs that navigation served do not go away: orientation ("where am I"), possibility ("what can I do here"), intent ("this is what I want"), continuity ("I was in the middle of something"). New patterns are replacing traditional navigation: persistent containers like threads, workspaces, and collections; intent signals like toolbars, expert agents, and soft-start nudges. The challenge is providing this orientation without the structural scaffolding users have relied on for twenty-five years.
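The "specify constraints and defaults, then evaluate what the AI produces" approach described above can be sketched as a small output check. This is a minimal sketch; the specific constraints (length cap, banned phrases, required fields) are hypothetical, not drawn from any product named in this piece:

```python
# Sketch of an output evaluator for a non-deterministic AI feature.
# Instead of an exhaustive state catalog, the spec is a set of checks
# that every generated output must pass before it reaches the user.

BANNED_PHRASES = ["as an ai language model", "i cannot"]  # illustrative
MAX_CHARS = 600                                           # illustrative

def evaluate(output: str, required_fields: list[str]) -> list[str]:
    """Return a list of constraint violations; an empty list means the output passes."""
    problems = []
    if len(output) > MAX_CHARS:
        problems.append(f"too long: {len(output)} > {MAX_CHARS} chars")
    lowered = output.lower()
    for phrase in BANNED_PHRASES:
        if phrase in lowered:
            problems.append(f"banned phrase: {phrase!r}")
    for field in required_fields:
        if field not in output:
            problems.append(f"missing required field: {field!r}")
    return problems

print(evaluate("Summary: all good.", ["Summary:"]))  # → []
print(evaluate("I cannot help with that.", ["Summary:"]))
```

Real eval systems are richer (model-graded rubrics, regression suites over logged outputs), but the shape is the same: the designer specifies what "acceptable" means, and the system checks every output against it.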
Products in an agent world
Here is where the design challenge expands beyond "how do I work faster." Products now exist in an environment where other AI systems can read, modify, interact with, and route around them. This is not a future scenario.
Users modifying your product
Max Drake, a designer, used Claude Code to add a BPM graph and column to her own Spotify desktop app. Not a plugin. Not a supported extension point. A direct modification of the running application's interface, authored with a coding agent in a few hours. Electron apps run Chromium. A coding agent that can read and write JavaScript can modify them. The user needs no permission from the company.8
Power users have always modified software: Greasemonkey scripts, browser extensions, jailbreaking. What Drake did generalizes. What changed is the skill floor. Previously, modification required programming knowledge. Now it requires the ability to describe what you want. The population of people who can modify your product just expanded by orders of magnitude.
Design implication: your shipped UI is the starting point, not the final surface. Products that make modification easy (clear extension points, semantic component structures, documented APIs) will be preferred over locked-down alternatives, because the modification will happen regardless.
AI agents as product intermediaries
Users increasingly do not interact with your product. Their AI assistant does.
AI agents are making phone calls to pharmacies, handling insurance verification, navigating returns and subscription cancellations, all on behalf of users who never directly contact the company. Google Duplex has done restaurant bookings this way since 2018. Anthropic launched Claude computer use on March 23, 2026: Claude can open apps, click, scroll, type, navigate web browsers, and manage files. When it receives a task, it first checks for a direct integration; if none exists, it falls back to controlling the computer the way a human would.9
This creates a structural consequence that product teams need to reckon with: the only way to stop an agent from accessing your product is to also stop humans from accessing it. If a human can log in, an agent can log in with those credentials. If a human can navigate a website, computer use can navigate it. CAPTCHAs are failing. hCaptcha has shifted to behavioral analysis because LLM-powered agents pass visual puzzles reliably.
Some companies will respond by adding friction. Others will embrace agent access by publishing MCP servers, documenting APIs, and creating agent-friendly endpoints. The companies in the second group will be the ones personal AI assistants prefer to route through: faster, more reliable, better structured data. Being agent-friendly is becoming the 2026 equivalent of being mobile-friendly in 2012.
MCP as the new front door
Model Context Protocol, introduced by Anthropic in November 2024, gives AI agents structured access to services. As of early 2026: 5,800+ available MCP servers, 97 million monthly SDK downloads. Adopted by OpenAI (including the ChatGPT desktop app), Google DeepMind, and Microsoft. Donated to the Agentic AI Foundation under the Linux Foundation in December 2025.10
Every MCP server is a new way for an agent to interact with a product without seeing its interface. Figma's MCP server lets agents read and write to the canvas. GitHub's API lets agents manage repositories. Datadog's API lets agents pull telemetry, then build an entirely new interface on top of it, bypassing Datadog's designed UI entirely.
That last example is worth sitting with. Datadog built dashboards, alerts, and visualizations over years of design investment. None of that matters to a user whose agent pulls raw data via API and generates a bespoke dashboard tailored to their specific metrics. The designed interface competes with a generated interface that is, by definition, more personalized to that user.
The design question this forces: if an agent bypasses your GUI and pulls data from your API, what is the "product experience"? Whatever that data layer looks like. Schema consistency, documentation quality, and endpoint reliability become design problems, not just engineering problems.
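One concrete way to treat the data layer as a design surface is to declare the schema and test every response against it, so agent consumers always get predictable field names and types. A minimal sketch, with field names invented for illustration:

```python
# Sketch: validate API responses against a declared schema. If agents build
# interfaces on top of this data, a renamed field is a breaking UI change.

SCHEMA = {
    "metric": str,    # e.g. "p95_latency" (hypothetical field names)
    "unit": str,      # e.g. "ms"
    "points": list,   # list of [timestamp, value] pairs
}

def conforms(response: dict) -> bool:
    """True if the response has exactly the declared fields with the declared types."""
    if set(response) != set(SCHEMA):
        return False
    return all(isinstance(response[key], typ) for key, typ in SCHEMA.items())

good = {"metric": "p95_latency", "unit": "ms", "points": [[1700000000, 212.4]]}
bad = {"metricName": "p95_latency", "unit": "ms", "points": []}  # renamed field

print(conforms(good), conforms(bad))  # → True False
```

For a human-facing GUI, a renamed API field is invisible; for an agent-generated dashboard, it is the equivalent of moving the navigation without telling anyone. That is why schema stability is a design commitment, not an implementation detail.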
SaaS replacement as real-time signal
On March 27, 2026, this phenomenon was trending on X. Zach Lloyd, founder of Warp, posted a thread detailing how his company stopped buying SaaS. They replaced a hosted documentation platform with Markdown + Astro, migrating 266 pages with agents in hours and saving $10K+ per year. They replaced community monitoring subscriptions with an agent watching X and Reddit. They replaced data analytics dashboards with agent skills and a BigQuery CLI. They replaced recruiting software workflows with agent-driven automation.11
The reactions captured the collective mood. One reply read: "Software as a Service becomes Software as a Servant."
The products being replaced are not bad. They are products whose value lived primarily in the interface layer, and the interface layer is now commodity.
Five design shifts
Pull these threads together and five implications emerge for product design:
Design for two audiences. Products now have human users and agent users. Humans need discoverability, delight, and trust signals. Agents need structured data, clear endpoints, and predictable behavior. Both matter, and they require different design thinking.
Your GUI is a starting point, not a mandate. Users with agents will modify, augment, and replace parts of your interface. Semantic component structures, clean data layers, and explicit extension points become a form of user respect.
Quality at the data layer. If agents bypass your GUI and pull from your API, the "product experience" is whatever that data looks like. Consistent schemas and good documentation are design decisions.
Design for agent intermediation. When an AI assistant navigates your product on a user's behalf, what should that experience be? Should your product detect agent access and serve a simplified interface? Should it offer agent-native endpoints alongside the GUI? Companies will need positions on this before those positions are decided for them.
Move the moat. If your product's value is in its interface, agents can replicate that interface on top of your data. If the value is in data, network effects, or integrations, agents amplify it by making it more accessible to users who never interact with your designed surface. The sustainable advantage shifts from "best UI" to "best data" and "best connections."
What to do
For individual designers: Build fluency with one agentic coding tool (Claude Code, Cursor, or v0) by making something real, not hypothetical. Then have your AI agent attempt to use your own product. Where does it need computer use because there is no API? Where would it get stuck? This is a new form of usability testing: agent usability testing. The gaps it exposes are design problems.
For product teams: Expose a connection layer. If you have an API, document it comprehensively. If you do not, build one or publish an MCP server. Design for composability: clean data schemas, webhook support, structured outputs. Your product will increasingly be one component in a user's agent-orchestrated workflow, not a destination. That requires a different architecture of value.
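What "expose a connection layer" can mean in practice: publish a machine-readable description of your surface that an agent can fetch before deciding how to interact (an MCP server is the fuller version of this). A framework-free sketch; the service name, paths, and webhook names below are hypothetical:

```python
import json

# Sketch: a capability listing served at a well-known path so agents can
# discover endpoints, webhooks, and docs without scraping the GUI.
# All names and paths here are invented for illustration.

CAPABILITIES = {
    "service": "example-analytics",
    "endpoints": [
        {"path": "/v1/metrics", "method": "GET", "returns": "application/json"},
        {"path": "/v1/export", "method": "POST", "accepts": "application/json"},
    ],
    "webhooks": ["metric.threshold_crossed"],
    "docs": "/v1/openapi.json",
}

def describe() -> str:
    """Structured self-description an agent can parse instead of guessing."""
    return json.dumps(CAPABILITIES, indent=2)

print(describe())
```

An agent that can fetch this chooses the documented endpoint over computer-use clicking every time, which is exactly the routing preference the section above describes.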
On the question of taste: Dustin Senos, Head of Design at The Browser Company, used a "novelty budget" when designing Dia: keep most things familiar (bookmarks, tabs), spend novelty only on the core AI pitch. Keep the runway for trust. The principle extends to how you work with AI. Kossnick's golden rule for AI-native design eval: "Are we moving in the right direction?" That is the core question. Everything else is instrumentation.
Jenny Wen set the stakes: "If you can one-shot prompt something, what does that mean for designers? Your work has to be better than this for you to be valuable."
That is the bar.
Footnotes
1. David Kossnick, Head of Product AI at Figma. First Round Review, 2025.
2. David Kossnick. First Round Review, 2025; Peter Yang, Behind the Craft podcast.
3. Ryo Lu, Head of Design at Cursor. "Our Designer Built an Operating System with Cursor," Cursor blog; Creator Economy podcast with Lenny Rachitsky.
4. Jenny Wen, design lead at Anthropic, former director of design at Figma. Hatch Conference, 2026. "Why Designers Can No Longer Trust the Design Process."
5. Jenny Wen. Hatch Conference, 2026.
6. Megan Joy, product design lead, Claude Code team. Figma Live: "Shipping Designs with AI at Anthropic," 2026.
7. Henry Modisett. "Designing Perplexity," Luke Wroblewski notes from the AI Speaker Series.
8. Max Drake (@max__drake), March 2026. Enabled by the agent-browser Electron skill released by Chris Tate (@ctatedev, Vercel), a single command that gives coding agents control of desktop Electron apps.
9. Claude computer use launch, March 23, 2026. CNBC, SiliconANGLE, Dataconomy, March 2026.
10. MCP scale and adoption: Wikipedia, Thoughtworks Technology Radar, Pento ("A Year of MCP"), 2025-2026. Donated to the Agentic AI Foundation under the Linux Foundation in December 2025, co-founded by Anthropic, Block, and OpenAI.
11. Zach Lloyd (@zachlloydtweets), March 27, 2026. Thread reached 53K views and 280 bookmarks within 5 hours. @signulll reply: "the future of saas in one interaction" (932 likes, 50K views).