The Perspective Problem
The bias in your AI and the bias in your team have the same root cause.
I grew up half-Japanese in Switzerland with an adopted sister from Cameroon. That combination taught me something about perspective that no book ever could.
My sister experienced the kind of racism most people picture when they hear the word. People looking down. Assumptions about intelligence. Doors that stayed closed. The negative kind.
I got the other version. The kind nobody talks about. People looking up. Assumptions about discipline, precision, cultural sophistication. “Oh, Japan!” The positive kind.
Both are the same thing. Both reduce a person to a category. And both come from the same place: a system that only knows how to see people through a narrow lens.
I keep thinking about that word. Lens. Because the same problem that shaped my childhood is now running through every AI system your company deploys.
The Coded Gaze
In 2018, MIT researcher Joy Buolamwini ran a simple experiment. She pointed three commercial facial analysis systems at a diverse, balanced set of faces and measured how often each one misclassified gender.
The results should have stopped the industry cold.
For lighter-skinned men, the error rate was 0.8%. For darker-skinned women, it climbed to 34.7%. The worst system got more than one in three of those faces wrong.
Buolamwini had discovered this problem years earlier, in the most direct way possible. She was a graduate student at the MIT Media Lab, working with facial recognition software that couldn’t detect her face. She had to put on a white mask for the system to see her.
She called this the coded gaze: the embedded perspective in AI systems that reflects the worldview of whoever built them.
The training data told the story. One widely used face recognition dataset was estimated to be 77% male and 83% white. Systems built on data like that work beautifully. For people who look like the people who built them.
When Blind Spots Get Real
This isn’t abstract. Robert Williams was arrested in Detroit in 2020 after a facial recognition system misidentified him as a shoplifting suspect. He was handcuffed in front of his daughters. Porcha Woodruff, eight months pregnant, was arrested for a carjacking in 2023 based on the same technology. Nijeer Parks spent ten days in jail in New Jersey before the case fell apart.
All three are Black. All three were innocent. All three were failed by systems trained on data that didn’t adequately represent them.
And facial recognition is just the visible edge. Recruiting tools that filter out candidates with “foreign-sounding” names. Credit scoring systems that penalize zip codes as proxies for race. Healthcare algorithms that systematically underestimate pain in Black patients. The coded gaze runs through every AI application that was trained on data reflecting a narrow slice of human experience.
Last week I wrote about how “hallucination” is [the most effective marketing term in AI history](https://mundaine.substack.com/). A nice word for a product failure. Coded bias is a different kind of failure. Not a random glitch. A systematic blind spot built into the foundation.
The Bridge: Same Problem, Different System
Now, here’s where it gets interesting.
Companies hear about coded bias and think: “We need to audit our AI tools.” Good instinct. But they’re only solving half the problem.
Because there’s another system in your company that suffers from the same blind spot. A system that also defaults to familiar patterns, rewards what it already recognizes, and systematically filters out perspectives it wasn’t designed to see.
Your hiring process.
Specifically, how you build your AI team.
The bias coded into your AI tools and the bias coded into your AI team have the same root cause: homogeneous perspectives producing blind spots that nobody in the room can see. Because everyone in the room sees the same way.
The Inexperience Advantage
When companies look for help with AI strategy, they almost always reach for the same type of person. Industry veteran. Deep domain expertise. Someone who’s “done this before.”
It feels safe. It feels smart. And it often leads to the same conventional thinking that created the blind spot in the first place.
Research from Harvard backs this up. Lars Bo Jeppesen and Karim Lakhani studied over 12,000 scientists solving problems through open innovation challenges. Their finding was counterintuitive: the further a solver’s expertise was from the problem’s domain, the more likely they were to find a winning solution. Outsiders outperformed insiders. Consistently.
Why? Because insiders know “how it’s always been done.” They’ve rehearsed the same patterns thousands of times. They carry assumptions so deep they don’t even recognize them as assumptions. An outsider doesn’t have that baggage. They need things to make sense from the ground up. They ask the questions that everyone else stopped asking years ago.
We celebrate first-principles thinking. We praise design thinking. But then we go hire the person with twenty years of industry experience to lead our AI transformation. And we wonder why we end up with the same approaches everyone else has.
A Harvard Business Review analysis put numbers to this. In experiments conducted in Texas and Singapore, participants on diverse teams were 58% more likely to price stocks correctly than those on homogeneous teams. Homogeneous groups weren’t just less innovative. They made more factual errors. They were worse at processing information. The similarity that felt like alignment was actually a blind spot.
Two Audits, One Principle
So where does this leave you?
With two audits to run. Not one.
Audit your AI tools. Whose perspective does the training data carry? What edge cases is the system failing on? Buolamwini’s Gender Shades study forced IBM, Microsoft, and Amazon to revisit their facial recognition systems. Your company may not build facial recognition, but every AI tool you deploy carries someone’s assumptions. Who’s testing those assumptions before they touch your customers?
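If you want to make that audit concrete, here’s a minimal sketch of a disaggregated error check, in the spirit of the Gender Shades methodology. The data, column names, and subgroup labels below are illustrative assumptions, not output from any real tool; plug in your own model’s predictions and the demographic labels that matter for your users.

```python
import pandas as pd

# Illustrative audit data: one row per model prediction, with a
# demographic label attached. Values are made up for this sketch;
# substitute your own predictions and subgroup labels.
results = pd.DataFrame({
    "subgroup": ["lighter_male", "lighter_male", "lighter_male",
                 "darker_female", "darker_female", "darker_female"],
    "correct":  [True, True, True, False, True, False],
})

# A single aggregate number hides subgroup failures. Disaggregate it.
overall_error = 1 - results["correct"].mean()
per_group_error = 1 - results.groupby("subgroup")["correct"].mean()

print(f"Overall error rate: {overall_error:.1%}")
for group, error in per_group_error.items():
    print(f"{group}: {error:.1%}")
```

The point isn’t the code. It’s the habit: never accept one aggregate accuracy number. Break performance out by every group your system has to serve, and treat the worst subgroup’s number as the real one.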
Audit your AI team. Who’s in the room when you make AI decisions? If everyone at the table has the same background, the same industry experience, the same mental models, you’re running a homogeneous team on a problem that requires diverse thinking. You need the person who asks “why are we doing it this way?” Not because they’re difficult. Because they genuinely don’t know. And that not-knowing is where breakthroughs live.
The principle underneath both is simple: perspective diversity is a debugging tool. The more perspectives you bring to a system, the more edge cases you catch. The more blind spots you surface. Whether the system is an algorithm or a leadership team.
What This Means in Practice
This isn’t activism. For me, it’s lived experience. Growing up between two kinds of racism taught me that the problem is never just the negative bias or the positive bias. The problem is the narrow lens. Any narrow lens.
And the solution isn’t awareness. Awareness doesn’t debug code. Action does.
When I run a Sprint with a client, one of the things we stress-test is perspective. Not just “does this AI tool work?” but “does it work for everyone it needs to serve?” And when I do Executive Sparring with leaders, part of the value is that I bring an outside perspective to their inside problem. Not because I know their industry better than they do. Because I don’t. And that’s the point.
The companies that will get AI right aren’t the ones with the biggest budgets or the most advanced tools. They’re the ones willing to look at their tools and their teams through a wider lens.
Ask yourself two questions:
1. Whose perspective is missing from your AI systems?
2. Whose perspective is missing from the room where you decide?
If the answer to both is “I don’t know,” you’ve just found your most important blind spot.
Damian Nomura helps mid-sized companies adopt AI through a human-centered approach. His 5-Day Sprint gets teams from stuck to pilot, and Executive Sparring brings outside perspective to inside challenges. Swiss Ambassador for the Responsible AI Governance Network.
Follow for weekly essays on AI adoption that’s Simple. Clear. Applicable.

