There’s a version of this conversation that goes: prompting is just writing better instructions. Anyone can do it. It’s not really a skill.
This is wrong, and it’s wrong in a way that matters.
Prompting is the act of translating intent into language precise enough for a system with no context, no taste, and no knowledge of your constraints to produce something useful. That translation — between what you mean and what a model can act on — is exactly the same cognitive work as writing a good brief, defining a design problem, or specifying interaction behavior clearly enough that an engineer can build it without a meeting.
The designers who are good at prompting are, almost without exception, the designers who were already good at specifying problems precisely. The skill isn’t new. The surface it runs on is.
The single most important thing is context. AI models have no knowledge of your product, your users, your constraints, or what you tried last week that didn’t work. Every prompt that produces weak output is weak because the model was missing context that felt obvious to you.
A prompt that says “write error copy for an empty state” will produce generic output. A prompt that says “write error copy for an analytics dashboard used by compliance analysts at enterprise companies, where the empty state occurs because a server failed to return data — the tone should be direct and professional, not reassuring or playful, the copy should acknowledge what happened, rule out user error, and give a specific next action” will produce something usable.
The difference isn’t creativity. It’s specificity. And specificity is a design skill.
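One way to make that specificity habitual is to treat the context as named fields rather than freeform text. The sketch below assumes nothing beyond the standard library; the field names (`product`, `audience`, `cause`, `tone`, `must_do`) are illustrative, not a standard schema — the point is that every field you'd otherwise leave implicit becomes a required argument.

```python
def build_prompt(task, *, product, audience, cause, tone, must_do):
    """Assemble a context-rich prompt from explicit design constraints.

    Each keyword argument is mandatory, so a vague prompt fails loudly
    at construction time instead of producing generic output later.
    """
    constraints = "\n".join(f"- {item}" for item in must_do)
    return (
        f"Task: {task}\n"
        f"Product: {product}\n"
        f"Audience: {audience}\n"
        f"Situation: {cause}\n"
        f"Tone: {tone}\n"
        f"The copy must:\n{constraints}"
    )

# The empty-state example from above, expressed as explicit fields:
prompt = build_prompt(
    "Write error copy for an empty state",
    product="an analytics dashboard",
    audience="compliance analysts at enterprise companies",
    cause="a server failed to return data",
    tone="direct and professional, not reassuring or playful",
    must_do=[
        "acknowledge what happened",
        "rule out user error",
        "give a specific next action",
    ],
)
print(prompt)
```

Whether or not you ever write this as code, the discipline is the same: if you can't fill in a field, you've found the context the model is missing.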
Good prompting is iterative — not because AI output is bad, but because you rarely know exactly what you want until you see what you don’t want. The first output defines the problem more precisely. The second prompt is more informed. By the third, you’re refining rather than redirecting.
The designers who struggle with AI tools tend to treat prompting as a one-shot task: write the prompt, get the output, judge it as success or failure. The designers who get value out of them treat it as a conversation. Start with a broad task, review the output, and adjust one variable at a time. This experimental mindset echoes the design process — sketch, test, iterate.
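"Adjust one variable at a time" can be made concrete by keeping the prompt as named parts and changing exactly one part per round, so you can attribute any change in the output to a single cause. In this sketch, `generate` is a placeholder for whatever model call you actually use — it is not a real API.

```python
def generate(prompt):
    # Placeholder for a real model call; echoes the prompt so the
    # sketch stays self-contained and runnable.
    return f"[model output for: {prompt}]"

def render(parts):
    """Flatten the named prompt parts into a single prompt string."""
    return "; ".join(f"{k}: {v}" for k, v in parts.items())

# Round 1: a deliberately broad starting prompt.
prompt_parts = {
    "task": "write error copy for an empty state",
    "tone": "reassuring",
    "audience": "general users",
}
history = [generate(render(prompt_parts))]

# Round 2: the tone felt wrong, so change ONLY the tone and re-run.
# Any difference in the output is now attributable to that one change.
prompt_parts["tone"] = "direct and professional"
history.append(generate(render(prompt_parts)))
```

Keeping a `history` of outputs alongside the single change that produced each one is the prompting equivalent of annotating design iterations: it turns "the third try was better" into "the third try was better because of the tone constraint."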
Here’s the part that matters for how you develop this skill: the thinking doesn’t go away. It moves earlier.
When you design without AI, the thinking happens during the making. You discover what you mean by trying things, seeing what’s wrong, and adjusting. The artifact and the thinking co-evolve.
When you design with AI, the thinking has to happen before the making. If you don’t know what you want before you prompt, the output will show you that immediately. The model has no intuition to fall back on. It needs your intent, expressed precisely, before it can be useful.
This is why prompting correlates with seniority. Senior designers have more clearly articulated mental models of what good looks like — which means they can specify their intent more precisely — which means their prompts produce better output. It’s not magic. It’s the skill they’ve been building for years applied to a new surface.
The practical upshot: if you want to get better at prompting, get better at specifying design problems. Write clearer briefs. Define constraints more precisely. Name the things you care about before you open a tool. The prompting follows.
The quality of your output is a direct measure of the clarity of your thinking.