Everything You Learned About Product Development is Backwards for AI
Find a problem. Build a solution.
This is the sacred cow of product development. Every business school, every startup accelerator, every innovation consultant preaches it. Start with customer pain. Validate the problem exists. Only then build something.
It makes perfect sense. Except when it doesn’t.
When OpenAI released ChatGPT in late 2022, they violated this principle spectacularly. They didn’t launch a product that solved a clear problem. They released a fascinating technology and said: “Here. Play with it.”
And we did.
I still remember when one of my employees showed it to me. He had written an entire story in seconds. We were completely fascinated—not because we needed AI-generated stories, but because of what it implied. What else could this thing do?
Within weeks, we were chaining tools together. MidJourney for images (bad by today’s standards, but magical then). A lip-sync app. Text-to-speech. We created talking pictures telling their own stories. It was amazing—far from a final product, but we built it by playing around.
We still didn’t have a solution for an existing problem. But we had created so much.
This wasn’t traditional product development. This was something else entirely.
The Inversion
What happened with ChatGPT inverted everything we’re taught about innovation. Instead of problem-first, solution-second, we got:
Solution first. Problems emerge through play.
This isn’t how it’s supposed to work. And yet it did. Spectacularly.
The pattern I’m seeing is what I call Solution-First Innovation:
1. Encounter - Meet the technology with curiosity, not requirements
2. Experiment - Play without pressure, chain things together
3. Emerge - Problems reveal themselves through use
4. Extract - Pull out the actual value created
5. Execute - Now build the “proper” solution
This flips the traditional sequence (Research → Problem → Solution → Build) on its head. And for AI, I’m convinced it’s the only approach that works.
The Language Test Tutor
Here’s a story that captures this perfectly.
A woman was preparing for a language diploma exam. She needed practice tests to prepare, but she ran out of sample materials. So she did what any resourceful person would do - she fed existing tests and requirements into an LLM and asked it to generate new practice tests.
Simple problem. Simple solution. But here’s where it gets interesting.
She started using the AI to grade her answers too. She’d compare its ratings to the official guidelines. And she discovered something surprising: the AI graded with the exact same accuracy as a human examiner. Same scores. Same feedback quality.
She had set out to generate practice tests. What she accidentally built was a personal language tutor that could both test and grade her work.
She found a problem (test-prep scarcity) through experimentation, not market research. And in the process, she discovered a much bigger problem she could solve: the entire test preparation and grading workflow.
She passed her exam and moved on. But here's my provocation: she could have productized this. A language-learning AI that generates personalized tests and provides human-level grading. At the same time, she could have started to take over the testing space by providing a solution for language test centers. By the way, the tests she used for herself were handwritten.
If you’re reading this, you might steal this idea and build it right now.
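For the technically curious, here is a minimal sketch of that loop, assuming the OpenAI Python SDK. The model name, prompts, helper names, and input files are placeholders I made up for illustration, not the setup she actually used.

```python
# Minimal sketch of the "accidental tutor" loop: generate a practice test,
# then grade the learner's answers against the official rubric.
# Assumes the OpenAI Python SDK; model, prompts, and file names are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4o"   # placeholder; any capable chat model works


def generate_practice_test(sample_tests: str, requirements: str) -> str:
    """Produce a new practice test in the style of the provided samples."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "You create language exam practice tests."},
            {"role": "user", "content": (
                f"Exam requirements:\n{requirements}\n\n"
                f"Sample tests:\n{sample_tests}\n\n"
                "Generate one new practice test in the same format and difficulty."
            )},
        ],
    )
    return response.choices[0].message.content


def grade_answers(test: str, answers: str, rubric: str) -> str:
    """Score the learner's answers strictly against the official grading rubric."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "You grade language exams strictly by the rubric."},
            {"role": "user", "content": (
                f"Grading rubric:\n{rubric}\n\nTest:\n{test}\n\n"
                f"Learner answers:\n{answers}\n\n"
                "Return a score per section and brief feedback, as an examiner would."
            )},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    test = generate_practice_test(open("sample_tests.txt").read(),
                                  open("requirements.txt").read())
    print(test)
    # ... learner writes answers, saves them to my_answers.txt ...
    print(grade_answers(test, open("my_answers.txt").read(),
                        open("official_rubric.txt").read()))
```

That's the whole trick: two prompts and a rubric. The hard part isn't the code; it's noticing, mid-experiment, that the grading step is the real product.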
What This Means for Your Company
I work with companies at the beginning of their AI journey. The most common thing I hear is: “We know we need to do something with AI, but we don’t know where to start.”
My response often surprises them: Stop looking for the perfect use case. Start playing.
The companies I see winning with AI aren’t the ones with the best strategy documents. They’re the ones where people have permission to experiment. Where someone can spend an afternoon chaining tools together without a mandatory business case tied to it.
What I can tell you from years of tinkering: the problems will find you. The more you create and play, the more you'll discover what AI is actually a solution for, and what it isn't.
But this requires something most organizations struggle with: giving people permission to play without knowing the outcome. That’s not how business usually works. We want ROI projections. Use case validation. Analysis phases.
I still believe all three matter in the business world, but I like to deliver on them fast, precisely to create space for experimentation.
Because the most valuable AI applications I’ve seen weren’t discovered through analysis. They were stumbled upon by someone who was curious enough to experiment and lucky enough to work somewhere that let them.
The Real Challenge
The technical part of AI adoption has become shockingly simple. I’ve taken companies from zero to a working AI pilot in five days. The tools are ready. The capabilities are there.
The hard part isn’t technology. It’s culture.
It’s getting leaders comfortable with “try things and see what happens” as a legitimate strategy. It’s creating space for experimentation without requiring justification. It’s accepting that the best AI use cases in your company probably haven’t been discovered yet—and won’t be discovered by consultants running workshops, but by employees playing around.
The language test story didn’t come from a strategy session. It came from someone who ran out of practice materials and thought, “I wonder if...”
That “I wonder if...” is worth more than a hundred use case workshops.
The Question
So here’s what I’m thinking about:
Are you waiting for the perfect problem before you let your people play with AI? Are you demanding business cases before experimentation? Are you running analysis phases when you should be running experiments?
The companies winning at AI aren’t the ones with the best strategies. They’re the ones experimenting fastest. They’re the ones where someone can discover an accidental tutor while just trying to pass a language test.
Traditional product development says: find a problem, build a solution.
AI adoption says: find some curiosity, and let the problems find you.
What have you discovered by playing that you never intended to build?
Damian helps mid-sized companies adopt AI with a human-centered approach—from zero to working pilots in days, not months. If you’re stuck waiting for the perfect use case, let’s talk about what experimentation could look like for your team.
