9 min read

AI Is a Great Starting Point. It’s a Terrible Finishing Line.

What AI Actually Does in a Creative Process, and What It Will Never Do

AI Design Tools, Creative Process, Product Design, Design Workflow, Visual Ideation, Figma AI, Midjourney, Design Systems, Senior Designer, AI Augmented Design

Everyone is asking whether AI replaces designers. Nobody is asking the more useful question: at what exact point in the process does it help, and where does it quietly make your work worse?

There’s a conversation happening in design circles right now that keeps circling around the wrong question. People ask: Is AI good or bad for creative work? Is it replacing designers? Should we be using it?

These are the wrong questions. The right question is: what is AI actually useful for, and at what stage of the process?

I’ve been working with AI tools long enough to have a pretty honest answer. And it’s more nuanced than either the enthusiasts or the skeptics want it to be.

What AI Is Genuinely Good At Creatively

Let me start where AI actually earns its keep, because it does.

AI is excellent at generating starting points. Ask it for ten layout directions for a dashboard, and it will give you ten genuinely varied approaches. Some will be useless. A few will spark something you wouldn’t have arrived at alone. That’s valuable. Not because the output is good, but because the range of it accelerates your own thinking.

It’s also good for scenarios. Give AI a brief and ask it to describe how ten different types of users might experience this product, and it will surface perspectives and edge cases you hadn’t considered. That’s useful research fuel, even if none of the specific scenarios are exactly right.

Its grasp of SaaS product structure is actually pretty strong. It understands how dashboards should be organized, where navigation typically lives, how data tables should behave, and what a reasonable onboarding flow looks like. It has absorbed an enormous amount of good product design thinking, and it applies it well when the problem is structural rather than creative.

For example, when I recently asked AI to generate a sample dashboard layout for a SaaS analytics tool, it instantly suggested a clear top nav with account controls, a left-hand panel for primary navigation, and a main content area with modular chart widgets and summary statistics. For onboarding, it mapped out a multi-step welcome flow introducing features in context, with clear progress indicators and skip options. These are not revolutionary, but they’re effective, and they show how quickly AI can bring you to a solid, practical starting point.

It understands PRDs. Give it a product requirements document and ask it to suggest interface implications, and it responds with real coherence. It helps map features to flows, identify gaps in a spec, and pressure-test whether a proposed structure actually serves the stated goals.

All of this is genuinely useful. None of it is where the design actually lives.

Where AI Falls Apart Creatively

Here’s what I’ve observed after using these tools long enough to form real opinions.

AI has a default visual aesthetic, and it will drift toward it whenever you give it creative latitude. Ask any UI generation tool for a “modern design” and watch what comes back. Purple gradients. Glassmorphism. Softly glowing cards on dark backgrounds. A specific visual language that has absorbed the design internet and averaged it into something that looks familiar but belongs to no brand in particular.

It’s not ugly. But it’s not distinctive either. It’s the visual equivalent of a stock photo: technically competent, deeply generic.

The gradient problem is real. AI reaches for gradients the way a junior designer sometimes does, to make something feel more premium or more interesting when the underlying structure hasn’t earned that treatment yet. The gradient becomes decoration hiding a weak idea, and AI applies it confidently without any awareness that this is what it’s doing.

Truly innovative visual ideas, the kind that make you stop and look twice, the kind that feel like they came from a person with a genuine perspective on the world, I have not consistently seen AI produce these unprompted. It can recombine. It can extrapolate from the references you give it. But it doesn’t start from nothing and arrive somewhere genuinely new. That capacity still sits with humans.

As one analysis of AI in creative work noted: “Curated taste, research-informed contextual understanding, critical thinking, and careful judgment” are what AI cannot automate. That list includes the instinct that produces a genuinely surprising creative idea. The AI doesn’t have that. You do.

What the Tools Are Actually For

So, which specific tools do I use and at which stage? Here’s the honest breakdown.

Claude is my primary thinking partner. Research synthesis, brief writing, content strategy, storyboarding, UX logic, and pressure-testing design decisions. More recently, it has also become my animator for UI presentations and demos. When I’m working through a complex information architecture problem or trying to articulate a value proposition for a client, Claude moves fast and thinks clearly. It’s also where I do scenario building and persona work. Strong on structure and reasoning, not where I go for final visual output.

Midjourney is where I go for visual ideation, but only after I have a direction. I use it to generate mood references, explore visual tones, and produce raw material for my designs, such as objects or header images. The keyword is raw. Nothing that comes out of these tools is ever final or on-brand without significant direction from me first.

Figma AI features are useful for generating components and for rapid variation. Useful in the middle of the process when you’re working through a design system and need to generate states and variants quickly. Figma Make, however, still needs clear instructions, or finished designs in Figma, before it outputs a genuinely appealing UI. Even then, it sometimes struggles to convert your designs into Make faithfully.

Lovable and similar vibe-coding tools are genuinely impressive for prototyping speed, for quickly getting a working structure in front of a client or stakeholder. The output tends toward the same generic defaults as any AI tool, but for early validation, Lovable produces the most on-brand and most creative visuals and UIs in my experience. I’m sure this will change over time as other platforms mature.

The pattern across all of them: useful for speed and volume, not for taste and originality. The more direction you bring in, the better the output. The less direction, the more you get the default. Over time, I’ve found that a few simple prompt-writing strategies make a big difference: be as specific as possible about the style or end goal you want, reference concrete visual examples, and outline any constraints up front. For example, if you want a certain mood or brand alignment, include those details at the start of your prompt. Iterating by refining your prompt rather than accepting the first output consistently leads to more interesting results.
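That prompt discipline can be sketched as code: treat the prompt as a structured brief rather than a loose sentence. Everything below, the function name, the fields, the example references, is my own illustration of the idea, not any tool’s actual API.

```python
def build_prompt(subject, style_refs=(), constraints=(), mood=None):
    """Assemble a specific, directional prompt from explicit ingredients.

    This is a plain string builder for illustration only -- the structure
    (subject, mood, references, constraints) is the point, not the syntax.
    """
    parts = [subject]
    if mood:
        parts.append(f"mood: {mood}")
    if style_refs:
        parts.append("in the style of " + ", ".join(style_refs))
    for c in constraints:
        parts.append(f"constraint: {c}")
    return "; ".join(parts)


# A vague prompt carries no direction, so the tool falls back on its defaults.
vague = build_prompt("modern dashboard design")

# An opinionated prompt states references and constraints up front.
specific = build_prompt(
    "dashboard for a SaaS analytics tool",
    style_refs=["Swiss grid posters", "mid-90s transit signage"],
    constraints=["no gradients", "single accent color", "dense data tables"],
    mood="precise but warm",
)
```

Iterating then means editing one field at a time, tightening a constraint or swapping a reference, rather than regenerating from scratch and hoping.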

Real Projects, Real Workflow

What does this actually look like on a live client project?

On a recent crypto trading platform, I used Claude to synthesize competitive research and user interview findings into a clear priority framework before opening Figma. That took two hours instead of most of a day. The synthesis wasn’t the final document, but it gave me a clear skeleton I could refine. Then I designed the initial layout and component hierarchy myself, from scratch, based on that research. Once I had a direction I was confident in, I used Figma AI to generate component variations at speed.

On a brand identity project, I sketched the logo concept by hand first. Had a clear directional idea. Then I used Midjourney with very specific prompts built around that concept to generate texture references, color mood boards, and typographic pairings I might not have considered. The AI gave me material. The creative direction was already mine, and the final assembly and vectorization were also done manually.

On an EdTech platform spanning a web portal and a real-time 3D game environment in Unreal Engine, I used Claude to draft the design system documentation and component specifications. Pages of it, quickly and accurately. That freed up significantly more of my time for the actual visual work, the parts where the specific creative decisions about how something should feel had to come from me and couldn’t come from anywhere else.

In every case, the pattern is the same. AI handles the heavy lifting in areas where volume and accuracy matter. I handle the areas where taste and judgment are the actual product.

Has AI ever produced something that felt genuinely creative, on-brand, or unexpected in a good way?

Honestly, yes, occasionally, and always in the same conditions: when I gave it very specific, constrained, opinionated prompts rather than open-ended ones.

There was a moment while working on a brand exploration when I gave Midjourney an unusually specific prompt, referencing a particular era of poster design, a specific color temperature, and an abstract concept of the brand’s tension between precision and warmth. What came back had a quality I hadn’t expected. Not because AI was being creative. Because the constraint was tight enough that it had to work within a specific territory rather than defaulting to the average.

The lesson from that is the same as the lesson everywhere else: the more specific and directional your input, the better and more interesting the output. Vague prompts produce vague results. Opinionated prompts occasionally produce something worth keeping.

But I want to be precise about what “worth keeping” means in that context. It wasn’t the final work. It wasn’t on-brand. It was raw visual material with one quality I found interesting, from which I extracted a direction I then developed properly. The AI produced a spark. I built the fire.

The Workflow That Actually Works

Here’s what I’ve landed on after a lot of trial and error.

The instinct most people have is to brief AI first, get output, and refine from there. This sounds efficient. In practice, it anchors you to AI’s defaults before you’ve developed your own perspective on the problem. You end up iterating on something generic rather than pushing toward something genuinely yours.

The approach that produces better work is the reverse.

Think first. Sketch first. Get your own creative instincts out before the AI can define the starting point. Draw the rough idea on paper if you need to. Make the weird, unpolished version of the concept you actually have in your head. Let yourself be specific and directional before the tool gets involved.

Then bring in AI. Not to generate a direction, but to manipulate, extend, and pressure-test the direction you already have. Give it your sketch. Give it your reference images. Give it your copy direction and your color instincts. Ask it to push your idea further, to show you variations that stay true to the concept you brought to it, to challenge your assumptions from inside the creative direction you’ve already set.

This produces dramatically better output than starting with a blank prompt because you’re not asking AI to be creative on your behalf. You’re asking it to be useful in the service of your creativity.

When AI has a strong direction from you, it amplifies. When it has no direction, it defaults. And the default is never where you want to end up.

How I Talk About This With Clients

How do you explain AI’s role to clients or stakeholders who either don’t understand it or have opinions about it that aren’t based in reality?

Honestly, I’ve found that most clients don’t care about the tools. They care about the outcome and the timeline. What they want to know is whether the work will be good and whether it will take too long.

When it comes up, I frame it this way: AI helps me explore more directions faster and handle the structural and documentation work that used to take disproportionate time. It means I can get to the interesting creative decisions earlier, spend more time on them, and deliver more considered work in the same timeframe. The judgment, the taste, and the accountability are all still mine.

What I never say is that AI designed it. It didn’t. It helped me work faster. That’s a true and useful distinction, and most clients understand it when you explain it that clearly.

The occasional client who has anxiety about AI in the process usually just wants to know their brand isn’t being fed into a public training dataset. That’s a legitimate concern and worth addressing directly: the tools I use for client work are either privacy-respecting or I’m working within their own platforms.

As a rule, I check whether a tool supports data opt-outs, whether client assets are stored locally or in isolated instances, and I review the provider’s privacy documentation to confirm there’s no risk of training data exposure. If I’m unsure, I ask the vendor specifically about how training data is handled and whether content is retained or used to improve future models. The creative direction and final output are always mine and theirs. That usually resolves it.

Where This Is Going

The question designers ask most often, and honestly, the one I find most interesting, is: how do AI’s creative capabilities evolve from here?

My honest read is that the structural and logical capabilities will continue to improve faster than the genuinely creative ones. The gap between what AI can do with a clear brief and a clear structure and what it can do when asked to be original will probably persist longer than optimists think.

The reason is that originality in design isn’t just a function of pattern recombination. It comes from understanding a specific cultural moment, a specific brand context, a specific user, and making a decision that speaks to all of those simultaneously in a way that feels both surprising and right. That requires the kind of embedded contextual understanding that AI builds slowly and humans have automatically.

Where I do think AI will improve quickly is the collaboration layer. Better tools for giving AI your existing creative direction and having it maintain fidelity to that direction at scale. Better ability to say “this is our brand, these are our design principles, these are the constraints” and have AI produce work that stays within those boundaries coherently rather than drifting toward its defaults.

That version of AI, one that works reliably within a defined creative system rather than generating one from scratch, would be genuinely transformative for how design teams work. We’re not quite there yet. But it’s closer than it was a year ago.

For now, the creative instinct still comes from you. The AI helps you get there faster once you know where you’re going. That balance will shift over time. But the part that matters, knowing where you’re going, will stay with the designer for longer than most people currently expect.
