
The industry reached a definitive crossroads this week. On one hand, we saw Figma honored at the 2025 Enterprise Awards for an IPO that proved design tools are the new backbone of the AI-native economy. On the other, the courtroom battle between Amazon and Perplexity over "Comet" agents making unauthorized purchases has exposed a massive, unaddressed gap in our field. We’ve spent the last decade perfecting how humans interact with interfaces. Now, we have to design how interfaces interact with the world on our behalf. As a Senior Product Designer, I’ve watched our mandate shift. We are no longer just building tools that wait for a click; we are building systems that act on intent. We have officially entered the era of Delegated Authority, where the most important "feature" isn't the speed of the transaction, but the integrity of the guardrails surrounding it.
When your tools start making decisions without you
I had one of those weeks where everything clicked into place, but not in a good way.
Figma’s getting awards for its IPO, huge validation that design tools are now critical infrastructure, not just nice-to-haves. But then there’s this whole mess with Amazon suing Perplexity because their shopping agent apparently went rogue and started buying stuff without proper authorization.
These stories shouldn’t be happening at the same time. But they are, and that tells me something big is breaking.
Look, we’ve spent years, years, obsessing over how people interact with buttons and screens. Where to put the navigation. Whether that CTA should be blue or green. How many clicks to checkout? All that stuff still matters, but it’s not the main game anymore.
The main game is this: how do we design systems that do things for us when we’re not watching?
I’ve been a Senior Product Designer long enough to recognize when the fundamentals shift. And they’re shifting now. We used to build tools that sat there waiting for input. Now we’re building things that interpret what you probably want and just… go do it. That’s Delegated Authority, and honestly, most of us aren’t ready for what it means.
2024 was all about chat interfaces. Everyone was scrambling to add a chatbot. 2025? It’s agents all the way down.
Everything changed while we were sleeping.
The Amazon-Perplexity lawsuit dropped. OpenAI and Stripe announced a new protocol for commerce agents. Figma’s numbers are prompting uncomfortable questions, like “if AI can design interfaces, why do we need interface designers?”
I’ve been thinking about this a lot.
Our value isn’t in pushing pixels anymore, let’s be real, AI can do that now. Our value is in understanding why certain actions should be allowed and others shouldn’t. We’re not designing screens. We’re designing permission systems.
Which is weird, right? That’s not what I signed up for in design school. But here we are.
The button is dying
Traditional UX was simple. Person clicks thing. Computer does thing. Next.
Agents don’t work like that. They’re out there right now comparing insurance rates, booking travel, and executing trades. No buttons involved. And when you give something autonomy without boundaries, well, the Amazon lawsuit shows you precisely what happens. Lawyers get involved.
So we’ve got this new job now: building the guardrails.
I spent most of my career trying to remove friction from user flows. Every extra click was the enemy. But with agents? Friction is suddenly valuable. You want moments where the system stops and goes, “Hey, I’m about to spend $500 of your money, that cool?”
I call these Trust Checkpoints, though I’m sure someone will come up with a better name.
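To make the idea concrete, here’s a minimal sketch of what a Trust Checkpoint might look like in code. Everything here is illustrative: the $100 threshold, the `Purchase` type, and the `approve` callback are assumptions, not a real agent framework’s API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Purchase:
    vendor: str
    amount_usd: float

# Hypothetical threshold: anything over $100 pauses for human approval.
APPROVAL_THRESHOLD_USD = 100.0

def execute_purchase(purchase: Purchase,
                     approve: Callable[[str], bool]) -> str:
    """Run a purchase, pausing at a Trust Checkpoint above the threshold.

    `approve` is whatever surfaces the question to the human
    (a modal, a push notification) and returns their answer.
    """
    if purchase.amount_usd > APPROVAL_THRESHOLD_USD:
        prompt = (f"I'm about to spend ${purchase.amount_usd:.2f} "
                  f"at {purchase.vendor}. That cool?")
        if not approve(prompt):
            return "cancelled"
    return "completed"
```

The point isn’t the threshold logic, which is trivial; it’s that the pause is a designed moment in the flow, not an error state.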
Three things I wish I’d known earlier
After working through this problem on a few different projects and watching some spectacular failures, here’s what I think matters:
Make it obvious what’s happening
The spinning loader icon needs to die. It tells you nothing.
Instead: “I’m checking prices across three vendors. TechSupply is the cheapest, but their shipping times are garbage. Acme Corp costs more, but they’ve never missed a deadline with us. Still working on the third option…”
See the difference? You know what’s happening. You know why it’s taking time. You can interrupt if the reasoning is off.
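One way to build that is to have the agent emit structured status events, each pairing an action with its reasoning, instead of a single opaque loading state. A rough sketch, with made-up vendors and a made-up `Status` type:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Status:
    step: str       # what the agent is doing right now
    reasoning: str  # why, in plain language the user can challenge

def compare_vendors(report: Callable[[Status], None]) -> str:
    """Toy price-comparison run that narrates each step as it happens,
    so the human can interrupt if the reasoning looks wrong."""
    report(Status("checking TechSupply",
                  "cheapest list price, but shipping times are garbage"))
    report(Status("checking Acme Corp",
                  "costs more, but they've never missed a deadline"))
    report(Status("checking third vendor",
                  "still waiting on their quote"))
    return "Acme Corp"  # a recommendation, not a fait accompli
```

The UI decides how to render each `Status`; the design decision is that reasoning travels with the action instead of being hidden behind a spinner.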
Let people take it back
You know what makes people trust new technology? Knowing they can undo it.
If an agent does something in the real world (buys something, deploys code, sends an email), there needs to be a rollback period. Not forever, but long enough that if you realize you made a mistake, you can fix it before damage is done.
This isn’t about whether the AI is reliable. It’s about basic human psychology. People try new things when failure isn’t permanent.
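Mechanically, a rollback period can be as simple as holding the action in a pending state until a window expires. This is a sketch under assumptions, not any shipping system: the five-minute default and the `ReversibleAction` class are invented for illustration.

```python
import time

class ReversibleAction:
    """Hold a real-world action in a pending state for `undo_window`
    seconds so the human can cancel it before it commits."""

    def __init__(self, description: str, undo_window: float = 300.0):
        self.description = description
        self.undo_window = undo_window
        self.created_at = time.monotonic()
        self.state = "pending"

    def undo(self) -> bool:
        # Undo only works while still pending and inside the window.
        elapsed = time.monotonic() - self.created_at
        if self.state == "pending" and elapsed < self.undo_window:
            self.state = "undone"
            return True
        return False

    def commit(self) -> bool:
        # A scheduler would call this once the window has closed.
        if self.state == "pending":
            self.state = "committed"
            return True
        return False
```

The window length is itself a design decision: long enough to catch a mistake, short enough that the action still feels immediate.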
Think like a manager, not a user
This mental model helped me a lot: you’re not using these agents, you’re managing them.
Would you give a new employee full access to company accounts on day one? No. You’d set limits. “You can research and draft the proposal, but I approve before it goes out.” Same deal with agents.
The products that succeed will make it easy to set these boundaries. Visual, intuitive, flexible. Not buried in settings somewhere.
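Under the hood, “easy to set boundaries” could be as plain as a declarative permission table the UI reads and writes. The action names and the three-way outcome (`proceed` / `ask_human` / `blocked`) are assumptions made for this sketch:

```python
# Hypothetical grants, modeled on onboarding a new hire: the agent can
# research and draft freely, but sending anything needs sign-off, and
# spending money is off the table entirely.
AGENT_PERMISSIONS = {
    "research_vendors": {"allowed": True,  "needs_approval": False},
    "draft_proposal":   {"allowed": True,  "needs_approval": False},
    "send_proposal":    {"allowed": True,  "needs_approval": True},
    "spend_money":      {"allowed": False, "needs_approval": True},
}

def check(action: str) -> str:
    """Map an agent's intended action to one of three outcomes.
    Unknown actions are blocked by default (deny-by-default)."""
    rule = AGENT_PERMISSIONS.get(action)
    if rule is None or not rule["allowed"]:
        return "blocked"
    return "ask_human" if rule["needs_approval"] else "proceed"
```

A settings screen that renders this table as toggles is the visual, flexible boundary-setting the paragraph above describes; deny-by-default means a new capability never slips through unreviewed.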
Why your boss should care
I keep hearing executives say they’re skeptical about AI ROI. Can’t blame them, there’ve been a lot of expensive disappointments. Hallucinations. Unreliable outputs. Hype that didn’t deliver.
But here’s the thing: if you design proper guardrails into the system, you turn design from a cost center into risk management. You’re not just making things look professional. You’re making them safe to deploy at scale.
That’s where seniority actually means something. Junior designers make things pretty. Senior designers make things safe.
Where this is all heading
The craft of making pixels look good? That’s getting commoditized fast. Anyone can spin up something decent-looking now.
The hard problems are all about logic, ethics, and trust. How much autonomy do we give these systems? What happens when they conflict with each other? Who’s responsible when something goes wrong?
While tech companies fight over who gets to own the agent economy, designers have a simpler job: make sure humans stay in control. Or at least feel like they are.
Because the worst possible outcome isn’t agents that don’t work. It’s agents that work great but make people feel powerless.
That’s the design challenge we’re facing. And I don’t think we can afford to get it wrong.