The Design Problems AI Creates

AI · April 29, 2026 · 5 min read

Most articles about AI and design focus on what AI fixes. This one is about what it breaks. Not because AI is bad — it isn’t — but because the problems it creates are real, underexamined, and increasingly showing up in products that were built with good intentions and the wrong assumptions.


AI generates output that looks correct faster than any human could review it. This is useful when the output is mostly correct. It’s dangerous when it isn’t.

The specific failure mode is plausible nonsense: content that has the structure and surface of something meaningful but is wrong in ways that aren’t immediately visible. A persona that reads as though it were research-backed but was synthesised from no actual user data. A user flow that is technically coherent but based on a misunderstanding of what users are actually trying to do. Microcopy that sounds warm and clear but uses phrasing that means something different to the actual users.

These failures are harder to catch than obvious errors because they look right. The artifact is professionally formatted, internally consistent, and syntactically correct. The problem is that it was generated from a model trained on general patterns, not from knowledge of this product, these users, and this context.

The risk increases with seniority. Junior designers tend to treat AI output with appropriate scepticism. Senior designers, under deadline pressure, can slip into using AI output as a shortcut for the thinking they’d normally do — and the output looks credible enough to pass review.

The plausible nonsense failure mode

| Artifact | Looks like | Actually missing |
| --- | --- | --- |
| Persona | Research-backed profile with themes, quotes, and goals | Synthesised from general patterns — no actual user data |
| User flow | Technically coherent, all states accounted for | Based on a misreading of what users are trying to do |
| Microcopy | Warm, clear, appropriately brief | Phrasing that means something different to actual users |

When most designers in most teams are prompting the same models with similar prompts, the output converges. Products start to look like each other — not because teams are copying, but because they’re all drawing from the same well of probabilistic recombination.

The distinctive visual voice, the unusual interaction pattern, the design decision that reflects a specific deep understanding of a specific user group — these require the kind of deliberate departure from convention that AI, which learns from convention, cannot reliably generate.

This isn’t a reason to avoid AI. It’s a reason to be more deliberate about where human judgment intervenes. If you’re prompting for a “professional dashboard for analytics users,” you’ll get back the centre of the distribution of what that looks like. If you’re prompting for a direction and then departing from it based on actual knowledge of your specific users, you’ll get something more defensible and more distinctive.


AI can synthesise research fast. Thirty interviews become a summary in two hours. That summary is useful. It is not a substitute for reading the interviews yourself.

The thing you lose when you delegate synthesis entirely to AI is the texture — the specific phrase a user repeated three times that doesn’t fit any theme but clearly matters. The tension between what two users said that the AI smoothed over. The moment in an interview where someone contradicted themselves in a way that’s more revealing than their explicit answer.

Teams that consistently delegate synthesis to AI build a progressively thinner understanding of their users. The speed gain compounds into a knowledge debt that shows up later, when a product decision is made on the basis of a theme that was a statistical artefact rather than a genuine signal.

What AI research synthesis captures vs. loses

AI captures:

- Themes and patterns: what comes up most frequently across interviews
- Frequency signals: how often something is mentioned
- Surface structure: categories that organise the data
- Summary narrative: a coherent account of what people said

AI loses:

- Specific repeated phrases: the exact words a user kept coming back to
- Unresolved tensions: contradictions between users that don’t fit a theme
- Self-contradiction: what someone said vs. what they revealed they meant
- Outlier signals: the edge case that doesn’t theme but clearly matters

None of this argues against using AI. It argues for using it with the same critical eye you’d apply to any other source of design input — which is to say, not treating the confidence of the output as evidence of its quality.

The problems AI creates are solvable. They require the same thing that most design problems require: asking the hard question before accepting the plausible answer.

AI generates confidence faster than it generates understanding. Know the difference.
