<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[mundaine - Damian Nomura]]></title><description><![CDATA[A space that cuts the weeds on your AI adoption journey. Based on years of practical experience, asking the questions you should ask today. If you want to avoid finding yourself in the pitfalls of the mundane, this is the space to stick around.]]></description><link>https://www.fresh.mundaine.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!ws_a!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F17f2238c-78ca-446f-9201-87a899c5014e_320x320.png</url><title>mundaine - Damian Nomura</title><link>https://www.fresh.mundaine.ai</link></image><generator>Substack</generator><lastBuildDate>Sat, 02 May 2026 22:22:08 GMT</lastBuildDate><atom:link href="https://www.fresh.mundaine.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Damian Nomura]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[mundaine@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[mundaine@substack.com]]></itunes:email><itunes:name><![CDATA[Damian Nomura]]></itunes:name></itunes:owner><itunes:author><![CDATA[Damian Nomura]]></itunes:author><googleplay:owner><![CDATA[mundaine@substack.com]]></googleplay:owner><googleplay:email><![CDATA[mundaine@substack.com]]></googleplay:email><googleplay:author><![CDATA[Damian Nomura]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[How to Find the One Bottleneck That's Capping Your Growth]]></title><description><![CDATA[Most scaling founders know something is broken. 
Very few can name exactly what.]]></description><link>https://www.fresh.mundaine.ai/p/how-to-find-the-one-bottleneck-thats</link><guid isPermaLink="false">https://www.fresh.mundaine.ai/p/how-to-find-the-one-bottleneck-thats</guid><dc:creator><![CDATA[Damian Nomura]]></dc:creator><pubDate>Tue, 24 Mar 2026 07:19:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Eie0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F031a9a9a-c30f-40d9-9ab4-5a6388ae9240_2048x2048.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Eie0!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F031a9a9a-c30f-40d9-9ab4-5a6388ae9240_2048x2048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Eie0!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F031a9a9a-c30f-40d9-9ab4-5a6388ae9240_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!Eie0!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F031a9a9a-c30f-40d9-9ab4-5a6388ae9240_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!Eie0!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F031a9a9a-c30f-40d9-9ab4-5a6388ae9240_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!Eie0!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F031a9a9a-c30f-40d9-9ab4-5a6388ae9240_2048x2048.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Eie0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F031a9a9a-c30f-40d9-9ab4-5a6388ae9240_2048x2048.png" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/031a9a9a-c30f-40d9-9ab4-5a6388ae9240_2048x2048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2297725,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.fresh.mundaine.ai/i/191954037?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F031a9a9a-c30f-40d9-9ab4-5a6388ae9240_2048x2048.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Eie0!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F031a9a9a-c30f-40d9-9ab4-5a6388ae9240_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!Eie0!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F031a9a9a-c30f-40d9-9ab4-5a6388ae9240_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!Eie0!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F031a9a9a-c30f-40d9-9ab4-5a6388ae9240_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!Eie0!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F031a9a9a-c30f-40d9-9ab4-5a6388ae9240_2048x2048.png 
1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p></p><p>Last week I wrote about the scaling wall. The pattern where a company hits product-market fit, doubles the team, and then watches operational overhead eat everything.</p><p>The response surprised me. Not because people disagreed. Because the most common reply was some version of: &#8220;OK, I get it. But where do I actually start?&#8221;</p><p>Fair question. Naming the problem is step one. 
But if you can&#8217;t point to the specific constraint that&#8217;s capping your team, you&#8217;re just agreeing with a diagnosis you can&#8217;t act on.</p><p>So here&#8217;s the diagnostic I run in the first two hours of every engagement. You can do it yourself, Monday morning, with a whiteboard and your leadership team.</p><div><hr></div><h2>The Three-Question Diagnostic</h2><p>I&#8217;ve tried complicated frameworks. Weighted matrices. Priority scoring systems. None of them work as well as three direct questions.</p><p><strong>Question 1: Where is your most expensive person spending their cheapest time?</strong></p><p>McKinsey <a href="https://fortune.com/2023/03/14/mckinsey-middle-managers-talent-development/">research</a> found that managers spend just 23% of their time on strategy. Another 31% goes to individual-contributor work. Nearly a full day each week disappears into administrative tasks.</p><p>Now think about your CTO. Your COO. Your head of ops. What are they actually doing all day?</p><p>Not what their job description says. What they&#8217;re actually doing.</p><p>In my experience, the answer usually involves some combination of: manually pulling reports, coordinating across tools that don&#8217;t talk to each other, onboarding new hires into processes that only exist in someone&#8217;s head, and putting out fires that started because nobody automated the thing that catches them early.</p><p>That&#8217;s your CHF 180k-a-year person doing CHF 60k-a-year work. Not because they&#8217;re bad at their job. Because the systems around them force it.</p><p><strong>Question 2: What would break if one person went on vacation for two weeks?</strong></p><p>This is the fastest way to find single points of failure.</p><p>If the answer is &#8220;nothing, we&#8217;d be fine,&#8221; congratulations. You don&#8217;t have the problem I&#8217;m describing. But in most 15-35 person companies, at least one critical process lives entirely in one person&#8217;s head. 
The weekly client report. The deployment pipeline. The invoice reconciliation.</p><p>That&#8217;s not a people problem. That&#8217;s a systems problem wearing a people costume. And it tells you exactly where your fragility sits.</p><p><strong>Question 3: What are you doing manually that you&#8217;ve been meaning to automate for six months?</strong></p><p>Every scaling founder has a list. The spreadsheet that should be a dashboard. The email sequence that should be triggered automatically. The data entry that someone does every Friday afternoon.</p><p>The reason it hasn&#8217;t been automated isn&#8217;t that it&#8217;s technically hard. It&#8217;s that nobody has the bandwidth to fix it, because everyone&#8217;s bandwidth is consumed by the manual work itself.</p><p>That&#8217;s the loop. And breaking it is usually simpler than you&#8217;d expect.</p><div><hr></div><h2>How to Read the Answers</h2><p>Run these three questions with your leadership team. Write down every answer. You&#8217;ll probably end up with 8-12 items across all three.</p><p>Now sort them by one criterion only: <strong>which one costs the most senior attention per week?</strong></p><p>Not the most annoying. Not the most technically interesting. The one that eats the most hours from the people whose time matters most.</p><p>That&#8217;s your bottleneck. That&#8217;s where you start.</p><p>Not three things. Not a roadmap. One constraint.</p><div><hr></div><h2>Why One, Not Three</h2><p>The instinct is to fix everything at once. Build a &#8220;digital transformation roadmap.&#8221; Hire a consultant to map all the processes. Spend two months planning.</p><p>I&#8217;ve watched this play out dozens of times. The roadmap gets built. It sits in a slide deck. 
Six months later, the same fires are burning.</p><p>One bottleneck, fixed completely, does something a roadmap never does: it creates belief.</p><p>When the team sees their Friday afternoon data entry disappear, or the CTO gets four hours back because the deployment pipeline doesn&#8217;t need hand-holding anymore, something shifts. The next fix becomes easier to justify. Not because of ROI calculations. Because everyone saw it work.</p><p>The <a href="https://mercury.com/blog/startup-economics-report-2025">Mercury Startup Economics Report 2025</a> found that 69% of companies with significant AI adoption increased their use of external specialists in the past year. Not because they couldn&#8217;t build internally. Because they learned that fixing the first constraint fast creates momentum that internal teams can then carry forward.</p><div><hr></div><h2>What Happens After the First Fix</h2><p>This is the part that matters more than the diagnostic.</p><p>Once the bottleneck is solved, the question becomes: can your team solve the next one without outside help?</p><p>The best engagements I&#8217;ve run follow a pattern:</p><p>&#8594; <strong>Week 1</strong>: Fix the constraint. Working software, running in your environment.</p><p>&#8594; <strong>Week 3</strong>: Hand over. Train the team to extend and maintain what was built.</p><p>&#8594; <strong>Month 3</strong>: Advisory. Available for strategic questions, not daily operations.</p><p>&#8594; <strong>Month 6+</strong>: Independence. Your team picks up the next bottleneck themselves.</p><p>Every other option on the market has a structural incentive to keep you dependent. Agencies bill monthly. Consultancies extend engagements. Internal hires become permanent cost.</p><p>This model has a structural incentive to leave. And that&#8217;s exactly the point.</p><div><hr></div><h2>Try It Monday</h2><p>Block 90 minutes with your co-founder or leadership team. Ask the three questions. Write down everything. 
Sort by senior attention cost.</p><p>You&#8217;ll walk out with one clear constraint. Not a strategy deck. Not a hiring plan. One thing you can fix, probably faster than you think.</p><p>The scaling wall is real. But it&#8217;s not abstract. It&#8217;s specific. And specific problems have specific solutions.</p><p>Start there.</p><div><hr></div><p><em>Damian Nomura helps scaling startups close the capability gap between where they are and where they need to go. His Build Week format takes the biggest operational bottleneck and turns it into working software in 5 days. Simple. Clear. Applicable.</em></p>]]></content:encoded></item><item><title><![CDATA[The Scaling Wall Nobody Talks About]]></title><description><![CDATA[Your team is fine. Your systems broke six months ago.]]></description><link>https://www.fresh.mundaine.ai/p/the-scaling-wall-nobody-talks-about</link><guid isPermaLink="false">https://www.fresh.mundaine.ai/p/the-scaling-wall-nobody-talks-about</guid><dc:creator><![CDATA[Damian Nomura]]></dc:creator><pubDate>Sun, 15 Mar 2026 17:17:47 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!u87U!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b782521-ec76-41bf-aa7c-b43e45187a55_2048x2048.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!u87U!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b782521-ec76-41bf-aa7c-b43e45187a55_2048x2048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!u87U!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b782521-ec76-41bf-aa7c-b43e45187a55_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!u87U!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b782521-ec76-41bf-aa7c-b43e45187a55_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!u87U!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b782521-ec76-41bf-aa7c-b43e45187a55_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!u87U!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b782521-ec76-41bf-aa7c-b43e45187a55_2048x2048.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!u87U!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b782521-ec76-41bf-aa7c-b43e45187a55_2048x2048.png" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2b782521-ec76-41bf-aa7c-b43e45187a55_2048x2048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1999458,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.fresh.mundaine.ai/i/191040835?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b782521-ec76-41bf-aa7c-b43e45187a55_2048x2048.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" 
class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!u87U!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b782521-ec76-41bf-aa7c-b43e45187a55_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!u87U!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b782521-ec76-41bf-aa7c-b43e45187a55_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!u87U!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b782521-ec76-41bf-aa7c-b43e45187a55_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!u87U!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2b782521-ec76-41bf-aa7c-b43e45187a55_2048x2048.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>The company hits product-market fit. Revenue grows. The team doubles from 10 to 25. Then something breaks.</p><p>Not the product. Not the market. The machine around the product.</p><p>Operations starts eating everything. The CTO stops building and starts firefighting. Every new hire adds coordination overhead that nobody budgeted for. And the CEO sits in a room with a growing team, thinking: &#8220;We know where we need to go. We can&#8217;t get there with what we have.&#8221;</p><p>I hear some version of this in almost every discovery call.</p><h2><strong>The Wall</strong></h2><p>I call it the scaling wall. It shows up somewhere between 15 and 35 people.</p><p>What worked at 10 stops working. The informal processes, the quick Slack messages, the &#8220;everyone just knows what to do&#8221; culture. All of it collapses under its own weight.</p><p>And the instinct is to hire. Stretched? Hire. Drowning in admin? Hire. CTO can&#8217;t keep up? Hire a second one.</p><p>Premature scaling contributes to <a href="https://s3.amazonaws.com/startupcompass-public/StartupGenomeReport2_Why_Startups_Fail_v2.pdf">74% of high-growth startup failures</a>. Not because the market wasn&#8217;t there. Because the internal systems couldn&#8217;t support the growth.</p><p>Hiring creates coordination overhead, more meetings, and pulls people further from product. <a href="https://www.worklytics.co/resources/software-engineering-productivity-benchmarks-2025-good-scores">Worklytics&#8217; 2025 engineering benchmarks</a> found that the median engineering team already spends 28% of its workday in meetings. 
At the 25th percentile, that number hits 35%. Coding time drops to 22% of the day.</p><p>Your most expensive people, doing their most valuable work, less than a quarter of the time.</p><h2><strong>Where the Gap Opens</strong></h2><p>Teams are strong. The gap between where the company is and where it needs to go just can&#8217;t be closed by adding headcount alone.</p><p>&#8220;Operations is eating us alive.&#8221;</p><p>I hear this exact phrase. Not my words. Theirs.</p><p>The CEO feels it as growth slowing down despite hiring. The CTO feels it as context-switching between operational fires and product roadmap. The team feels it as everyone doing three jobs, most of which aren&#8217;t the job they were hired for.</p><p><a href="https://aijourn.com/techreviewer-co-research-highlights-it-hiring-paradox-72-report-strong-talent-pool-yet-half-cant-find-right-skills/">Techreviewer&#8217;s 2026 research</a> backs this up: 72% of companies rate their talent pool as strong, yet 53.7% still can&#8217;t find candidates with the skills they actually need. The talent is there. The fit rarely is.</p><p>The capability gap runs deeper than hiring can reach.</p><h2><strong>Breaking Through</strong></h2><p>The companies I&#8217;ve seen get past the wall share a common move: they name the constraint before deploying resources.</p><p>Not &#8220;we need more people.&#8221; Instead: &#8220;Our CTO spends 40% of their time on non-product firefighting. That&#8217;s the constraint.&#8221;</p><p>Naming it changes everything. You stop solving &#8220;growth&#8221; in the abstract. You solve one specific bottleneck that&#8217;s capping your team&#8217;s ability to focus on product.</p><p>Then they fix systems before hiring. Every operational bottleneck you automate is a hire you don&#8217;t need to make. And unlike a hire, automation reduces coordination overhead instead of adding to it.</p><p>The math is straightforward. 
If your senior team spends 60% of their time on non-product work, that&#8217;s 60% of a CHF 180k salary going to tasks that don&#8217;t move the company forward. Fix the top three time sinks and you&#8217;ve freed up more capacity than a new hire would bring, without the onboarding curve.</p><p>And they start small. No 12-month roadmap. No strategy deck that pulls people off product for two weeks to read. One bottleneck. One solution. Working software by the end of the week.</p><p>One solved problem creates belief. Belief creates the next solved problem.</p><h2><strong>Why This Stays Stuck</strong></h2><p>None of this is technically hard.</p><p>The hard part is cultural. The CEO who believes hiring is the answer because hiring has always been the answer. The CTO who can&#8217;t let go of the firefighting because if they don&#8217;t do it, who will. The team so deep in &#8220;keeping the engine running&#8221; that they can&#8217;t imagine a world where the engine runs itself.</p><p>The scaling wall looks like a growth problem. The bigger issue is systems. And systems can be redesigned.</p><h2><strong>One Question to Start</strong></h2><p>If you&#8217;re running a 15-35 person company and you recognize this pattern, start here: where is your CTO spending their time?</p><p>If the answer is &#8220;operational fires,&#8221; that&#8217;s your constraint. Name it. Then look at the top three tasks eating your team&#8217;s product time. Those are your targets. And ask whether you can fix one of them in a week. The answer is almost always yes.</p><p>The capability gap is real. It also closes faster than most founders expect, once they stop adding headcount and start fixing what&#8217;s underneath.</p><p>The best teams I work with don&#8217;t need more people. They need their people back on product.</p>]]></content:encoded></item><item><title><![CDATA[The Echo Chamber Gap]]></title><description><![CDATA[Your company doesn't have an AI adoption problem. 
It has a confusion problem.]]></description><link>https://www.fresh.mundaine.ai/p/the-echo-chamber-gap</link><guid isPermaLink="false">https://www.fresh.mundaine.ai/p/the-echo-chamber-gap</guid><dc:creator><![CDATA[Damian Nomura]]></dc:creator><pubDate>Tue, 24 Feb 2026 08:44:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ws_a!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F17f2238c-78ca-446f-9201-87a899c5014e_320x320.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>I keep hearing the same conversation play out. Executives and tech leads are buzzing about AI. They attend conferences, trade case studies, share demos in Slack channels. They&#8217;re believers.</p><p>Then there&#8217;s the other 90%.</p><p>The frontline. The people who actually run the business day-to-day. Their reaction to &#8220;AI adoption&#8221; ranges from a polite nod to silent dread. &#8220;Not another change initiative. We just finished the last digital transformation. Can I just do my job?&#8221;</p><p>Both sides think they&#8217;re right. Both sides are. And neither side is talking about the same thing.</p><h2>The confusion nobody names</h2><p>BCG&#8217;s 2025 <a href="https://www.bcg.com/publications/2025/ai-at-work-momentum-builds-but-gaps-remain">AI at Work report</a> found that 78% of managers use generative AI regularly. Frontline workers? 51%. And that number hasn&#8217;t moved in two years.</p><p>That gap isn&#8217;t about training budgets or tool access. It&#8217;s about a fundamental confusion baked into how we talk about AI.</p><p>When leaders say &#8220;AI adoption,&#8221; they usually mean one thing: deploying AI-powered features. Chatbots. Prediction engines. Recommendation systems. AI as the product. AI as the solution.</p><p>But there&#8217;s a whole other side of AI adoption that most companies aren&#8217;t even naming. Using AI to build better things faster. 
Not AI as the thing your team interacts with. AI as the way you create what your team interacts with.</p><p>A better onboarding flow. A cleaner data pipeline. A dashboard that actually answers the question someone had. None of these are &#8220;AI solutions.&#8221; But all of them can be built in days instead of months when you use AI as a development tool.</p><p>The distinction matters. Because only one of these requires your entire team to change how they work.</p><h2>Two kinds of AI adoption</h2><p>Let me make this concrete.</p><p><strong>AI as solution</strong>: You deploy a chatbot that handles customer inquiries. Your team needs to learn how to manage it, train it, handle escalations. The end user interacts with AI directly. This is what most people picture when they hear &#8220;AI adoption.&#8221;</p><p><strong>AI as development tool</strong>: You use AI to build a custom scheduling system that eliminates the three-hour weekly coordination nightmare. The end user sees a clean interface. They never touch AI. They just get their time back.</p><p>Both are real AI adoption. Both create value. But they require completely different change management approaches.</p><p>The first one demands that the 90% learn something new, trust something unfamiliar, and change their daily habits. No wonder there&#8217;s resistance.</p><p>The second one? The 90% just gets better tools. Faster. Their workflow improves without them having to become AI-literate. The &#8220;what&#8217;s in it for me?&#8221; gets answered before anyone even asks the question.</p><h2>Why the echo chamber persists</h2><p>The 10% who are excited about AI tend to be excited about AI-as-solution. It&#8217;s the flashy stuff. The demos. The future-is-here moments. And they talk to each other constantly. Executives validate each other. Tech leads showcase possibilities. They&#8217;re in an echo chamber of excitement.</p><p>Meanwhile, the 90% hears &#8220;AI adoption&#8221; and braces for impact. 
Because to them, it means learning new tools, changing processes, and wondering if they&#8217;re being replaced. Their skepticism isn&#8217;t irrational. It&#8217;s information.</p><p><a href="https://www.gartner.com/en/newsroom/press-releases/2024-07-29-gartner-predicts-30-percent-of-generative-ai-projects-will-be-abandoned-after-proof-of-concept-by-end-of-2025">Gartner predicted that at least 30% of generative AI projects would be abandoned after proof of concept by end of 2025</a>, citing unclear business value and escalating costs. And that number only covers the projects that got started. It doesn&#8217;t count the ones that died in the echo chamber before anyone built anything.</p><p>The pattern repeats. Excitement at the top. Resistance at the bottom. Paralysis in the middle.</p><h2>The bridge is in the reframe</h2><p>When you stop asking &#8220;how do we get everyone to use AI?&#8221; and start asking &#8220;what problems are we actually solving?&#8221;, something shifts.</p><p>Some of those problems will be best solved by AI-powered features. Great. Build them. But a surprising number will be best solved by solutions that are built with AI but don&#8217;t require anyone to interact with AI at all.</p><p>BCG&#8217;s report landed on the same conclusion: &#8220;Real value is generated when businesses reshape their workflows end-to-end,&#8221; not when they simply introduce AI tools into existing ways of working.</p><p>This is the reframe that bridges the echo chamber gap. You stop selling AI to the 90%. You start solving their actual problems. Sometimes AI is the solution. Sometimes AI is just how you build the solution. The team doesn&#8217;t need to know or care which one it is. They just need things to work better.</p><h2>What this looks like in practice</h2><p>The companies getting this right don&#8217;t start with an AI strategy document. They start with a problem. One bottleneck. One process that eats time.</p><p>Then they build. 
Fast.</p><p>Day one, they expand the team&#8217;s understanding of what&#8217;s possible. Not a pitch about AI. A demonstration of what it can do for THEIR specific pain points. This is where eyes open. Where frontline workers say &#8220;wait, that thing I spend three hours on every week... you could fix that?&#8221;</p><p>Days two through five, they build the fix. Some outputs are AI-powered. Some are just built faster with AI. The team leaves with working software, not a slide deck about AI strategy.</p><p>The gap between the 10% and the 90% closes not through training or mandates. It closes through proof. Through someone&#8217;s daily friction disappearing. Through getting their time back.</p><h2>The question to bring to your next leadership meeting</h2><p>Stop debating whether your company should &#8220;adopt AI.&#8221; That question is too vague to be useful.</p><p>Ask instead: what are the three biggest time-wasters for our frontline teams right now? Then ask: for each one, is the answer an AI-powered feature, or is it a solution that could be built faster with AI?</p><p>That distinction changes the conversation. It moves from &#8220;how do we get people to use AI&#8221; to &#8220;how do we solve problems faster.&#8221; One creates resistance. The other creates results.</p><p>The echo chamber gap doesn&#8217;t close with more excitement from the top. It closes when the 90% starts seeing their problems disappear.</p><p>And sometimes the best AI adoption happens when nobody even realizes AI was involved.</p><p></p><p>Damian Nomura helps scaling startups close the capability gap in a week. No slide decks. Working software. <a href="https://www.linkedin.com/in/damian-nomura">Follow for more</a>.</p>]]></content:encoded></item><item><title><![CDATA[The Builder’s Paradox]]></title><description><![CDATA[AI gave you superpowers. 
Nobody gave you the safety manual.]]></description><link>https://www.fresh.mundaine.ai/p/the-builders-paradox</link><guid isPermaLink="false">https://www.fresh.mundaine.ai/p/the-builders-paradox</guid><dc:creator><![CDATA[Damian Nomura]]></dc:creator><pubDate>Tue, 17 Feb 2026 07:04:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!uPQC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f441c65-5377-4f97-b5c9-73effeb29b50_2048x2048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!uPQC!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f441c65-5377-4f97-b5c9-73effeb29b50_2048x2048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!uPQC!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f441c65-5377-4f97-b5c9-73effeb29b50_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!uPQC!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f441c65-5377-4f97-b5c9-73effeb29b50_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!uPQC!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f441c65-5377-4f97-b5c9-73effeb29b50_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!uPQC!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f441c65-5377-4f97-b5c9-73effeb29b50_2048x2048.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!uPQC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f441c65-5377-4f97-b5c9-73effeb29b50_2048x2048.png" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6f441c65-5377-4f97-b5c9-73effeb29b50_2048x2048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2877238,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.fresh.mundaine.ai/i/188228428?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f441c65-5377-4f97-b5c9-73effeb29b50_2048x2048.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!uPQC!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f441c65-5377-4f97-b5c9-73effeb29b50_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!uPQC!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f441c65-5377-4f97-b5c9-73effeb29b50_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!uPQC!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f441c65-5377-4f97-b5c9-73effeb29b50_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!uPQC!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6f441c65-5377-4f97-b5c9-73effeb29b50_2048x2048.png 
1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>Over the past year, I&#8217;ve run multiple hackathons where non-technical teams build real software in hours. An automated monitoring system that proactively proposes services to future clients. Fully automated video avatars that reach out to prospects with personalized messages in the prospect&#8217;s language. An applicant screening tool.</p><p> An automated client qualification system. A customized content creation engine. 
All built by people who had never written a line of code in their lives.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fresh.mundaine.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">mundaine - Damian Nomura is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Every time, the same thing happens. The energy in the room is electric. And then it hits me.</p><p>Not one team asks about data protection. Not one asks what happens if their video avatar says something wrong, or if their screening tool discriminates, or who&#8217;s liable when the monitoring system flags a false positive. They don&#8217;t skip these questions on purpose. They don&#8217;t know the questions exist.</p><p>This is the paradox nobody&#8217;s talking about. The same AI tools that give small companies and non-technical builders unprecedented power also hand them unprecedented responsibility. And responsibility without knowledge is a dangerous combination.</p><h3><strong>The Power Is Real</strong></h3><p>Let me be clear: the competitive shift is happening, and it&#8217;s massive.</p><p>I built a client portal two weeks ago. Full authentication, database, user management. Two days. 
That used to require a pre-project budget, a dev team, and several weeks of scoping before a single line of code got written.</p><p>A colleague shared data from CJS Agency, the company behind GoDaddy&#8217;s website. They cut 50% of their workforce. Same revenue. They shifted their entire business model from one-time project fees to revenue-share and equity deals with builders. The agency model itself is being disrupted.</p><p>And it&#8217;s not just agencies. Small companies now hold structural advantages that enterprise can&#8217;t match. No legacy systems to maintain. No approval hierarchies to navigate. No multi-culture disasters from forced acquisitions. Pure agility. One subscription. Four parallel sessions. The output of a team.</p><p>You don&#8217;t fight the big ones. You just provide real value at a fraction of their budget. They&#8217;ll struggle on their own.</p><p>68% of U.S. small businesses now use AI regularly, <a href="https://colorwhistle.com/artificial-intelligence-statistics-for-small-business/">up from 48% just a year ago</a>. This isn&#8217;t hype. It&#8217;s a structural inversion. Small is becoming the advantage.</p><h3><strong>But Power Without Knowledge Is a Problem</strong></h3><p>And this is where it gets uncomfortable.</p><p>Professional development teams have entire functions dedicated to what non-technical builders skip. Security reviewers who check for vulnerabilities before deployment. Compliance officers who ensure GDPR and data protection requirements are met. Legal counsel who assess liability exposure. QA engineers who test edge cases. These roles exist because decades of software failures taught us they&#8217;re necessary.</p><p>Non-technical builders skip the entire curriculum. Not because they don&#8217;t care. Because they don&#8217;t know it exists. If you&#8217;ve never worked in software, you don&#8217;t know it needs security review. 
The same way someone who&#8217;s never built a house doesn&#8217;t know about load-bearing walls. You can&#8217;t check for something you&#8217;ve never heard of.</p><p>Veracode&#8217;s <a href="https://www.veracode.com/blog/genai-code-security-report/">2025 GenAI Code Security Report</a> tested over 100 large language models across 80 coding tasks. The finding: AI-generated code failed security tests in 45% of cases. Nearly half the time, the code contained vulnerabilities from the OWASP Top 10, the industry&#8217;s standard list of critical security flaws.</p><p>And here&#8217;s what should concern you: the models got better at writing functional code. They did not get better at writing secure code. Speed improved. Safety didn&#8217;t.</p><h3><strong>The Liability Chain Nobody Talks About</strong></h3><p>Right now, there&#8217;s a gap in accountability that most builders don&#8217;t even see.</p><p>The AI providers have their disclaimer: &#8220;AI can make mistakes. Verify the output.&#8221; That language exists for a reason. It shifts liability from the platform to whoever deploys the code.</p><p>The builder says: &#8220;I didn&#8217;t know.&#8221; Genuine ignorance. Not malice, not negligence in the traditional sense. They simply weren&#8217;t aware that their coaching bot could give harmful advice, that their health tracker wasn&#8217;t encrypting user data, or that their financial tool was storing credentials in plain text.</p><p>The user? They just got harmed.</p><p>So who pays?</p><p>Right now, often nobody. The legal frameworks haven&#8217;t caught up. But they will. The Colorado AI Act, <a href="https://boyerlawfirm.com/blog/ai-compliance-legal-risks-startups-2026/">effective in 2026</a>, already imposes a duty of reasonable care on deployers of high-risk AI systems. The EU AI Act is applying similar principles. The regulatory machinery is warming up.</p><p>The first serious incident involving a vibe-coded app will accelerate everything. 
A health app that gives dangerous advice. A financial tool that exposes personal data. A coaching bot that drives someone to harm. When that happens, regulation won&#8217;t just target the app. It could stifle the entire builder movement. The same democratization that makes this moment so exciting could get locked down because a few people built fast without building responsibly.</p><h3><strong>The Safety Manual That Should Exist</strong></h3><p>So what do you actually need to know before shipping?</p><p>Not four hundred things. Four things.</p><p><strong>1. Does your app handle personal data?</strong> If yes, you&#8217;re likely subject to GDPR (or your local equivalent). That means consent, encryption, the right to deletion, and a data processing record. Most vibe-coded apps handle personal data. Most builders never check.</p><p><strong>2. What happens when your app is wrong?</strong> If your app gives advice, makes recommendations, or processes anything related to health, finance, or legal matters, you need to think about the consequences of bad output. Not &#8220;what if it glitches&#8221; but &#8220;what if someone acts on wrong information.&#8221; This isn&#8217;t hypothetical. It&#8217;s happening.</p><p><strong>3. Who can access what?</strong> Access control is the thing non-technical builders get wrong most often. OWASP created an entire <a href="https://owasp.org/www-project-top-10-low-code-no-code-security-risks/">Top 10 security risk list specifically for low-code/no-code platforms</a>, and excessive permissions and account impersonation sit near the top. If everyone who uses your app can see everyone else&#8217;s data, you have a problem.</p><p><strong>4. Have you tested it like someone who wants to break it?</strong> A few weeks ago, I asked Claude to pen-test my client portal. Unprompted, it offered to check for vulnerabilities. It found issues I wouldn&#8217;t have thought to look for. The AI that helped me build it also helped me secure it. 
Most builders never take this step. They ship and move on.</p><p>That&#8217;s just enough. Everything else is noise. For now.</p><h3><strong>Build Fast AND Build Responsibly</strong></h3><p>The safeguard isn&#8217;t slowing down. I&#8217;m not arguing for less building. I&#8217;m arguing for informed building.</p><p>The good news: the same AI tools that create the risk can help manage it. You can ask Claude or ChatGPT to review your code for GDPR compliance. You can run security scans in natural language. You can ask &#8220;what regulations apply to an app that handles health data in the EU?&#8221; and get a reasonable starting point.</p><p>But you have to know to ask. That&#8217;s the gap.</p><p>When I build automation systems for clients through my Done-for-You work, this is baked in. Security review, compliance checks, proper data handling. Not because the client asked for it. Because they shouldn&#8217;t have to ask. That&#8217;s what professional building looks like. The safeguard is part of the delivery, not an afterthought.</p><p>For those building themselves, my Sprint program teaches teams to move fast and build responsibly. Speed and safety aren&#8217;t opposites. They&#8217;re partners. The teams that learn both will outlast the ones that only learned speed.</p><h3><strong>The Question That Matters</strong></h3><p>We&#8217;re at a remarkable moment. Small companies can compete with giants. Non-technical founders can build real products. Solo consultants can ship what used to require entire engineering departments.</p><p>This power is real. And it&#8217;s not going away.</p><p>But the builders who will thrive long-term aren&#8217;t the ones who ship fastest. They&#8217;re the ones who know what to check before they ship. The ones who build the house and understand which walls are load-bearing.</p><p>AI gave you superpowers. 
The safety manual is your responsibility.</p><p>The question is whether you&#8217;ll read it before or after something goes wrong.</p><p></p><p><em>Damian Nomura helps companies adopt AI through a human-centered approach. His Done-for-You Automation builds systems with security and compliance baked in, and his 5-Day Sprint teaches teams to build fast and responsibly. Swiss Ambassador for the Responsible AI Governance Network.</em></p><p><em>Follow for weekly essays on AI adoption that&#8217;s Simple. Clear. Applicable.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fresh.mundaine.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">mundaine - Damian Nomura is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The Perspective Problem]]></title><description><![CDATA[The bias in your AI and the bias in your team have the same root cause.]]></description><link>https://www.fresh.mundaine.ai/p/the-perspective-problem</link><guid isPermaLink="false">https://www.fresh.mundaine.ai/p/the-perspective-problem</guid><dc:creator><![CDATA[Damian Nomura]]></dc:creator><pubDate>Sun, 08 Feb 2026 22:53:10 GMT</pubDate><enclosure 
url="https://substackcdn.com/image/fetch/$s_!qlyl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5a653df-815e-4b96-8c96-e68053c4fc07_2048x2048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qlyl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5a653df-815e-4b96-8c96-e68053c4fc07_2048x2048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qlyl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5a653df-815e-4b96-8c96-e68053c4fc07_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!qlyl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5a653df-815e-4b96-8c96-e68053c4fc07_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!qlyl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5a653df-815e-4b96-8c96-e68053c4fc07_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!qlyl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5a653df-815e-4b96-8c96-e68053c4fc07_2048x2048.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qlyl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5a653df-815e-4b96-8c96-e68053c4fc07_2048x2048.png" width="1456" height="1456" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f5a653df-815e-4b96-8c96-e68053c4fc07_2048x2048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2041851,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.fresh.mundaine.ai/i/187334591?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5a653df-815e-4b96-8c96-e68053c4fc07_2048x2048.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qlyl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5a653df-815e-4b96-8c96-e68053c4fc07_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!qlyl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5a653df-815e-4b96-8c96-e68053c4fc07_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!qlyl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5a653df-815e-4b96-8c96-e68053c4fc07_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!qlyl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff5a653df-815e-4b96-8c96-e68053c4fc07_2048x2048.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p></p><p>I grew up half-Japanese in Switzerland with an adopted sister from Cameroon. That combination taught me something about perspective that no book ever could.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fresh.mundaine.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">mundaine - Damian Nomura is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>My sister experienced the kind of racism most people picture when they hear the word. People looking down. Assumptions about intelligence. Doors that stayed closed. The negative kind.</p><p>I got the other version. The kind nobody talks about. People looking up. Assumptions about discipline, precision, cultural sophistication. &#8220;Oh, Japan!&#8221; The positive kind.</p><p>Both are the same thing. Both reduce a person to a category. And both come from the same place: a system that only knows how to see people through a narrow lens.</p><p>I keep thinking about that word. Lens. Because the same problem that shaped my childhood is now running through every AI system your company deploys.</p><p></p><h3><strong>The Coded Gaze</strong></h3><p>In 2018, MIT researcher Joy Buolamwini ran a simple experiment. She pointed three commercial facial recognition systems at a diverse set of faces and measured how often they got it wrong.</p><p>The results should have stopped the industry cold.</p><p>For lighter-skinned men, the error rate was 0.8%. For darker-skinned women, it climbed to 34.7%. One of the systems was essentially flipping a coin.</p><p>Buolamwini had discovered this problem years earlier, in the most direct way possible. She was a graduate student at the MIT Media Lab, working with facial recognition software that couldn&#8217;t detect her face. 
She had to put on a white mask for the system to see her.</p><p>She called this the <strong>coded gaze</strong>: the embedded perspective in AI systems that reflects the worldview of whoever built them.</p><p>The training data told the story. One major company&#8217;s face recognition dataset was 77% male and 83% white. The system worked beautifully. For people who looked like the people who built it.</p><h3><strong>When Blind Spots Get Real</strong></h3><p>This isn&#8217;t abstract. Robert Williams was arrested in Detroit in 2020 after a facial recognition system misidentified him as a shoplifting suspect. He was handcuffed in front of his daughters. Porcha Woodruff, eight months pregnant, was arrested for a carjacking in 2023 based on the same technology. Nijeer Parks spent ten days in jail in New Jersey before the case fell apart.</p><p>All three are Black. All three were innocent. All three were failed by systems trained on data that didn&#8217;t adequately represent them.</p><p>And facial recognition is just the visible edge. Recruiting tools that filter out candidates with &#8220;foreign-sounding&#8221; names. Credit scoring systems that penalize zip codes as proxies for race. Healthcare algorithms that systematically underestimate pain in Black patients. The coded gaze runs through every AI application that was trained on data reflecting a narrow slice of human experience.</p><p>Last week I wrote about how &#8220;hallucination&#8221; is <a href="https://mundaine.substack.com/">the most effective marketing term in AI history</a>. A nice word for a product failure. Coded bias is a different kind of failure. Not a random glitch. A systematic blind spot built into the foundation.</p><p></p><h3><strong>The Bridge: Same Problem, Different System</strong></h3><p>Now, here&#8217;s where it gets interesting.</p><p>Companies hear about coded bias and think: &#8220;We need to audit our AI tools.&#8221; Good instinct. 
But they&#8217;re only solving half the problem.</p><p>Because there&#8217;s another system in your company that suffers from the same blind spot. A system that also defaults to familiar patterns, rewards what it already recognizes, and systematically filters out perspectives it wasn&#8217;t designed to see.</p><p>Your hiring process.</p><p>Specifically, how you build your AI team.</p><p>The bias coded into your AI tools and the bias coded into your AI team have the same root cause: homogeneous perspectives producing blind spots that nobody in the room can see. Because everyone in the room sees the same way.</p><p></p><h3><strong>The Inexperience Advantage</strong></h3><p>When companies look for help with AI strategy, they almost always reach for the same type of person. Industry veteran. Deep domain expertise. Someone who&#8217;s &#8220;done this before.&#8221;</p><p>It feels safe. It feels smart. And it often leads to the same conventional thinking that created the blind spot in the first place.</p><p>Research from Harvard backs this up. Lars Bo Jeppesen and Karim Lakhani <a href="https://pubsonline.informs.org/doi/10.1287/orsc.1090.0491">studied over 12,000 scientists</a> solving problems through open innovation challenges. Their finding was counterintuitive: the further a solver&#8217;s expertise was from the problem&#8217;s domain, the more likely they were to find a winning solution. Outsiders outperformed insiders. Consistently.</p><p>Why? Because insiders know &#8220;how it&#8217;s always been done.&#8221; They&#8217;ve exercised the same patterns thousands of times. They carry assumptions so deep they don&#8217;t even recognize them as assumptions. An outsider doesn&#8217;t have that baggage. They need things to make sense from the ground up. They ask the questions that everyone else stopped asking years ago.</p><p>We celebrate first principles thinking. We praise design thinking. 
But then we go hire the person with twenty years of industry experience to lead our AI transformation. And we wonder why we end up with the same approaches everyone else has.</p><p>A Harvard Business Review analysis put numbers to this. In experiments conducted in Texas and Singapore, participants on diverse teams were <a href="https://hbr.org/2016/11/why-diverse-teams-are-smarter">58% more likely to price stocks correctly</a> than those on homogeneous teams. Homogeneous groups weren&#8217;t just less innovative. They made more factual errors. They were worse at processing information. The similarity that felt like alignment was actually a blind spot.</p><p></p><h3><strong>Two Audits, One Principle</strong></h3><p>So where does this leave you?</p><p>With two audits to run. Not one.</p><p><strong>Audit your AI tools.</strong> Whose perspective does the training data carry? What edge cases is the system failing on? Buolamwini&#8217;s Gender Shades study forced IBM, Microsoft, and Amazon to revisit their facial recognition systems. Your company may not build facial recognition, but every AI tool you deploy carries someone&#8217;s assumptions. Who&#8217;s testing those assumptions before they touch your customers?</p><p><strong>Audit your AI team.</strong> Who&#8217;s in the room when you make AI decisions? If everyone at the table has the same background, the same industry experience, the same mental models, you&#8217;re running a homogeneous team on a problem that requires diverse thinking. You need the person who asks &#8220;why are we doing it this way?&#8221; Not because they&#8217;re difficult. Because they genuinely don&#8217;t know. And that not-knowing is where breakthroughs live.</p><p>The principle underneath both is simple: <strong>perspective diversity is a debugging tool. </strong>The more perspectives you bring to a system, the more edge cases you catch. The more blind spots you surface. 
Whether the system is an algorithm or a leadership team.</p><p></p><h3><strong>What This Means in Practice</strong></h3><p>This isn&#8217;t activism. For me, it&#8217;s lived experience. Growing up between two kinds of racism taught me that the problem is never just the negative bias or the positive bias. The problem is the narrow lens. Any narrow lens.</p><p>And the solution isn&#8217;t awareness. Awareness doesn&#8217;t debug code. Action does.</p><p>When I run a Sprint with a client, one of the things we stress-test is perspective. Not just &#8220;does this AI tool work?&#8221; but &#8220;does it work for everyone it needs to serve?&#8221; And when I do Executive Sparring with leaders, part of the value is that I bring an outside perspective to their inside problem. Not because I know their industry better than they do. Because I don&#8217;t. And that&#8217;s the point.</p><p>The companies that will get AI right aren&#8217;t the ones with the biggest budgets or the most advanced tools. They&#8217;re the ones willing to look at their tools and their teams through a wider lens.</p><p>Ask yourself two questions:</p><p><strong>1. Whose perspective is missing from your AI systems?</strong></p><p><strong>2. Whose perspective is missing from the room where you decide?</strong></p><p>If the answer to both is &#8220;I don&#8217;t know,&#8221; you&#8217;ve just found your most important blind spot.</p><p></p><p><em>Damian Nomura helps mid-sized companies adopt AI through a human-centered approach. His 5-Day Sprint gets teams from stuck to pilot, and Executive Sparring brings outside perspective to inside challenges. Swiss Ambassador for the Responsible AI Governance Network.</em></p><p><em>Follow for weekly essays on AI adoption that&#8217;s Simple. Clear. 
Applicable.</em></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fresh.mundaine.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">mundaine - Damian Nomura is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[“Hallucination” Is the Best Marketing Term in AI History]]></title><description><![CDATA[A product presents false facts as truth. Convincingly. 
And we gave it a cute name.]]></description><link>https://www.fresh.mundaine.ai/p/hallucination-is-the-best-marketing</link><guid isPermaLink="false">https://www.fresh.mundaine.ai/p/hallucination-is-the-best-marketing</guid><dc:creator><![CDATA[Damian Nomura]]></dc:creator><pubDate>Mon, 02 Feb 2026 22:46:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!50RS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b806c0b-c073-4283-8cfd-5ae1e35e5b4d_2048x2048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!50RS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b806c0b-c073-4283-8cfd-5ae1e35e5b4d_2048x2048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!50RS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b806c0b-c073-4283-8cfd-5ae1e35e5b4d_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!50RS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b806c0b-c073-4283-8cfd-5ae1e35e5b4d_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!50RS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b806c0b-c073-4283-8cfd-5ae1e35e5b4d_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!50RS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b806c0b-c073-4283-8cfd-5ae1e35e5b4d_2048x2048.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!50RS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b806c0b-c073-4283-8cfd-5ae1e35e5b4d_2048x2048.png" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6b806c0b-c073-4283-8cfd-5ae1e35e5b4d_2048x2048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3017487,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.fresh.mundaine.ai/i/186673481?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b806c0b-c073-4283-8cfd-5ae1e35e5b4d_2048x2048.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!50RS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b806c0b-c073-4283-8cfd-5ae1e35e5b4d_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!50RS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b806c0b-c073-4283-8cfd-5ae1e35e5b4d_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!50RS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b806c0b-c073-4283-8cfd-5ae1e35e5b4d_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!50RS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6b806c0b-c073-4283-8cfd-5ae1e35e5b4d_2048x2048.png 
1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>I&#8217;ve been thinking about language. Not the kind machines generate, but the kind we use to describe them.</p><p>Somewhere along the way, the AI industry settled on &#8220;hallucination&#8221; to describe what happens when a language model presents false information as fact. 
And I think that word choice might be the single most effective piece of marketing in AI history.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fresh.mundaine.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">mundaine - Damian Nomura is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Think about it. &#8220;Hallucination&#8221; borrows from human experience. It softens the blow. It makes a fundamental system limitation sound like a relatable, almost endearing quirk. &#8220;Oh, the AI hallucinated again.&#8221; As if it had a bad dream. As if it&#8217;s just a minor thing that happens sometimes.</p><p>A product is generating false facts and presenting them as truth. Confidently. Fluently. With no indication that anything is wrong.</p><p>Call it what it is. A product failure.</p><h3><strong>The Terminology Does Real Work</strong></h3><p>This isn&#8217;t just semantics. The word &#8220;hallucination&#8221; does three specific things that benefit AI companies and hurt everyone else.</p><p><strong>It humanizes the machine.</strong> When we say a system &#8220;hallucinates,&#8221; we unconsciously attribute human qualities to it. We forgive it, the way we&#8217;d forgive a friend who misremembers a detail. 
This is software producing unreliable output, not a person having a momentary lapse.</p><p><strong>It softens the severity.</strong> A hallucination sounds temporary and accidental. A product showing false facts as truth sounds like a defect you&#8217;d return the product for. The framing matters. One makes you shrug. The other makes you demand accountability.</p><p><strong>It obscures responsibility.</strong> The word creates a strange gray zone around fault. But when a product delivers incorrect outputs presented as fact, accountability becomes much clearer. The manufacturer has a reliability problem.</p><p>Researchers at Oxford University, writing in <em>Nature</em>, prefer a more precise term: <strong>confabulation</strong>. They define confabulations as &#8220;arbitrary and incorrect generations&#8221; produced by language models. No human qualities implied. No cute spin. Just a technical description of what actually happens.</p><h3><strong>Why This Happens (And Why It Won&#8217;t Fully Stop)</strong></h3><p>To understand why this matters for your business, you need to understand one thing about generative AI that most people overlook.</p><p><strong>These systems are probabilistic, not deterministic.</strong></p><p>Traditional software is deterministic. Same input, same output. Every time. That&#8217;s what makes it reliable. It&#8217;s why your accounting software always calculates the same total from the same numbers.</p><p>Generative AI works differently. It predicts the next most likely word based on probability distributions learned from training data. Same question, asked twice, can produce different answers. Not because the system is broken. Because that&#8217;s how it&#8217;s designed.</p><p>This is where it gets important for business leaders.</p><p>Causality is binary. In the real world, something either happened or it didn&#8217;t. A legal precedent either exists or it doesn&#8217;t. A financial figure is either correct or it isn&#8217;t. 
Facts are true or false.</p><p>But LLMs don&#8217;t verify truth. They estimate probability. The gap between probabilistic generation and binary truth is exactly where &#8220;hallucinations&#8221; live. They&#8217;re not bugs. They&#8217;re the natural output of a system that was never designed to determine what&#8217;s true. Only what&#8217;s statistically plausible.</p><p>A 2025 survey in <em>Frontiers in AI</em> puts it plainly: &#8220;Hallucination is an inherent byproduct of language modeling that prioritizes syntactic and semantic plausibility over factual accuracy.&#8221;</p><p>Read that again. Inherent. Not occasional. Not fixable with the next update. Inherent.</p><h3><strong>The Confidence Problem</strong></h3><p>If the system at least signaled uncertainty, you could build around it. But it does the opposite.</p><p>Research from 2025 shows that AI models are significantly more likely to use confident language when generating incorrect information than when providing accurate answers. Phrases like &#8220;definitely,&#8221; &#8220;certainly,&#8221; &#8220;without doubt&#8221; appear more often in false outputs than in correct ones. An OpenAI research paper confirmed the root cause: LLMs are trained in ways that reward confident answers over honest uncertainty, which means the system learns to bluff rather than say &#8220;I don&#8217;t know.&#8221;</p><p>The system is most confident when it&#8217;s most wrong.</p><p>For business leaders, this inverts the natural trust signal. In every other context, confidence correlates with reliability. With generative AI, it can correlate with error. If your team is using AI outputs without verification layers, they&#8217;re more likely to trust the responses that are least trustworthy.</p><h3><strong>Real-World Damage, Not Theoretical Risk</strong></h3><p>This isn&#8217;t hypothetical. The consequences are already here.</p><p>A Stanford HAI study tested AI tools specifically built for legal research. Tools marketed as reliable. 
Tools using Retrieval-Augmented Generation to reduce errors. The results: Lexis+ AI produced incorrect information more than 17% of the time. Westlaw&#8217;s AI-Assisted Research hallucinated over 34% of the time. General-purpose chatbots? Between 58% and 82% on legal queries.</p><p>These aren&#8217;t fringe tools. These are the industry standard for legal research.</p><p>As of late 2025, researchers tracked over 120 court cases worldwide involving AI-generated hallucinations. 91 in the US alone. 128 lawyers implicated. Sanctions ranging from $100 to $31,100.</p><p>The judicial message is consistent: if you submit machine-generated fiction under your name, it&#8217;s still your filing. The machine doesn&#8217;t face consequences. You do.</p><h3><strong>An Important Distinction</strong></h3><p>Not all AI works this way. This is specific to generative AI, the large language models powering ChatGPT, Claude, Gemini, and most of the business tools you&#8217;re adopting right now.</p><p>Predictive AI, classification systems, rule-based automation: these operate deterministically or with well-bounded probabilistic ranges. They&#8217;re different tools for different purposes.</p><p>The problem isn&#8217;t AI broadly. It&#8217;s deploying one specific type of AI while treating it like another. It&#8217;s expecting deterministic reliability from a probabilistic system because we never stopped to understand the difference.</p><h3><strong>What This Means for You</strong></h3><p>If you&#8217;re deploying generative AI in your business, three questions matter more than any feature comparison.</p><p><strong>1. Where are you trusting outputs without verification?</strong></p><p>Map every workflow where AI-generated content reaches a customer, a partner, or a decision without a human checking it. Those are your risk points. Not because AI is bad. Because probabilistic systems require verification by design.</p><p><strong>2. 
Does your team understand what they&#8217;re using?</strong></p><p>Not at a technical level. At a conceptual level. Do they know that the system estimates probable answers rather than looking up correct ones? That distinction changes behavior. People who understand it verify. People who don&#8217;t trust blindly.</p><p><strong>3. Are you building on the right foundation?</strong></p><p>Some use cases are excellent fits for generative AI. Drafting, brainstorming, summarizing, exploring ideas. Others need deterministic reliability. Reporting facts, citing sources, making claims. Matching the right type of AI to the right type of task is the difference between value and liability.</p><h3><strong>Rename the Problem, See It Clearly</strong></h3><p>Language shapes understanding. And &#8220;hallucination&#8221; has shaped ours in exactly the wrong direction.</p><p>It made a product limitation sound like a personality trait. It turned accountability into ambiguity. It let companies ship probabilistic systems into deterministic workflows without anyone asking the obvious question: should we trust output that the system itself can&#8217;t verify?</p><p>The next time someone tells you their AI &#8220;hallucinated,&#8221; try this reframe:</p><p>&#8220;The product generated false information and presented it as truth.&#8221;</p><p>See how different that feels? See how it changes the conversation?</p><p>That&#8217;s the point. The technology isn&#8217;t the problem. The technology is doing exactly what it was designed to do. The problem is the language that keeps us from seeing it clearly.</p><p></p><p><em>I help leaders understand what AI actually is before they decide what to do with it. If your team is building on AI without understanding its foundations, that gap will cost you. <a href="http://www.mundaine.ai">Mundaine</a> offers Sprint workshops and Executive Sparring to close it. </em></p><p><em>Simple. Clear. 
Applicable.</em></p><p></p><h3><strong>Sources</strong></h3><p>1. Farquhar et al. (2024). &#8220;Detecting hallucinations in large language models using semantic entropy.&#8221; <em>Nature</em>, Vol. 630. Oxford University. <a href="https://www.nature.com/articles/s41586-024-07421-0">nature.com</a></p><p>2. &#8220;Survey and analysis of hallucinations in large language models.&#8221; <em>Frontiers in AI</em> (2025). Japan Advanced Institute of Science and Technology. <a href="https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1622292/full">frontiersin.org</a></p><p>3. Magesh, Surani, Dahl et al. (2024). &#8220;AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries.&#8221; Stanford HAI / RegLab. <a href="https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries">hai.stanford.edu</a></p><p>4. Kalai &amp; Nachum (2025). &#8220;Why Language Models Hallucinate.&#8221; OpenAI. <a href="https://openai.com/index/why-language-models-hallucinate/">openai.com</a></p><p>5. &#8220;AI Hallucination Statistics 2026.&#8221; All About AI. <a href="https://www.allaboutai.com/resources/ai-statistics/ai-hallucinations/">allaboutai.com</a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fresh.mundaine.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">mundaine - Damian Nomura is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Your AI Consultant Is Making You Dumber]]></title><description><![CDATA[Your AI consultant is making you dumber.mundaine - Damian Nomura is a reader-supported publication.]]></description><link>https://www.fresh.mundaine.ai/p/your-ai-consultant-is-making-you</link><guid isPermaLink="false">https://www.fresh.mundaine.ai/p/your-ai-consultant-is-making-you</guid><dc:creator><![CDATA[Damian Nomura]]></dc:creator><pubDate>Sun, 25 Jan 2026 10:57:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9Bi4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b6b1b23-485a-478c-994b-5f79fec99fe3_2048x2048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9Bi4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b6b1b23-485a-478c-994b-5f79fec99fe3_2048x2048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9Bi4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b6b1b23-485a-478c-994b-5f79fec99fe3_2048x2048.png 424w, 
https://substackcdn.com/image/fetch/$s_!9Bi4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b6b1b23-485a-478c-994b-5f79fec99fe3_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!9Bi4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b6b1b23-485a-478c-994b-5f79fec99fe3_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!9Bi4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b6b1b23-485a-478c-994b-5f79fec99fe3_2048x2048.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9Bi4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b6b1b23-485a-478c-994b-5f79fec99fe3_2048x2048.png" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0b6b1b23-485a-478c-994b-5f79fec99fe3_2048x2048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2844989,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.fresh.mundaine.ai/i/185715199?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b6b1b23-485a-478c-994b-5f79fec99fe3_2048x2048.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!9Bi4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b6b1b23-485a-478c-994b-5f79fec99fe3_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!9Bi4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b6b1b23-485a-478c-994b-5f79fec99fe3_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!9Bi4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b6b1b23-485a-478c-994b-5f79fec99fe3_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!9Bi4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0b6b1b23-485a-478c-994b-5f79fec99fe3_2048x2048.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p></p><p>Your AI consultant is making you dumber.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fresh.mundaine.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">mundaine - Damian Nomura is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Not on purpose. They&#8217;re probably quite smart. They&#8217;ve helped you save 40 hours a month. Your team generates reports in minutes instead of hours. The efficiency gains look great on the quarterly deck.</p><p>But here&#8217;s what they never asked you: &#8220;What will you do with those 40 hours?&#8221;</p><p>And that question is worth more than every hour you saved.</p><h2><strong>The Celebration Trap</strong></h2><p>I see this pattern constantly. Company implements AI. Time savings appear. Everyone celebrates. Then those saved hours quietly fill with more meetings, more email, more busywork. Six months later, nobody can point to what actually changed.</p><p>The efficiency gains evaporated. 
They got absorbed back into the noise.</p><p>This is the celebration trap. You measure inputs (time saved) instead of outputs (value created). You congratulate yourself for running faster without asking where you&#8217;re running to.</p><p>BCG surveyed over 1,000 executives across 59 countries. 74% struggle to achieve and scale value from AI. ServiceNow&#8217;s AI Maturity Index found that fewer than 1% of organizations even crack the midway point on their maturity scale. The average score dropped 9 points year-over-year.</p><p>Companies aren&#8217;t failing because they can&#8217;t save time. They&#8217;re failing because they don&#8217;t know what to do with it.</p><h2><strong>The Question Nobody Asks</strong></h2><p>Here&#8217;s the uncomfortable truth about most AI consulting engagements.</p><p>They focus on the wrong half of the equation.</p><p>The hard part isn&#8217;t finding where AI can save time. That&#8217;s the easy part. Any decent consultant can audit your processes and find automation opportunities. The hard part is the question that comes next:</p><h3><strong>&#8220;What will we do with the freed capacity?&#8221;</strong></h3><p>This is the leverage question. And almost nobody asks it.</p><p>I sat with a client last year who&#8217;d implemented AI across their customer service operations. They were saving 40 hours per month. I asked what they were doing with those hours.</p><p>Silence.</p><p>They hadn&#8217;t thought about it. The hours just... got absorbed. More tickets. More meetings. More of the same work that used to fill the day.</p><p>That&#8217;s not ROI. That&#8217;s just redistribution.</p><h3><strong>Savings vs. Leverage</strong></h3><p>Let me make this concrete.</p><p>Saving 10 hours a week on report generation is not valuable by itself. It&#8217;s only valuable if you convert those hours into something that moves the business forward.</p><p>The ROI of AI isn&#8217;t in the savings. 
It&#8217;s in what the savings enable.</p><p>Four conversion paths:</p><p><strong>Growth</strong>: Those 10 hours become more client outreach. More sales conversations. More pipeline = revenue leverage.</p><p><strong>Quality</strong>: Those 10 hours become deeper work on complex cases. Better outcomes. Fewer errors = excellence leverage.</p><p><strong>Expansion</strong>: Those 10 hours become time to launch something new. A product. A service. A market = opportunity leverage.</p><p><strong>Retention</strong>: Those 10 hours mean your best people stop working nights and weekends. They stay = human leverage.</p><p>Each path converts the same saved time into completely different outcomes. But you have to choose one. Deliberately. Before you start.</p><h2><strong>Why This Gets Missed</strong></h2><p>There&#8217;s a reason consultants don&#8217;t push the leverage question.</p><p>It&#8217;s hard.</p><p>Identifying efficiency gains is technical work. You audit processes, find bottlenecks, implement automation. It&#8217;s measurable. It&#8217;s defensible. It fits neatly into a project scope.</p><p>The leverage question is strategic work. It requires understanding the company&#8217;s direction, its constraints, its competitive position. It means asking executives uncomfortable questions about priorities. It means connecting AI initiatives to business outcomes that might be two or three steps removed from the automation itself.</p><p>Most consultants stay in their lane. They deliver the efficiency gains, declare victory, and move on.</p><p>Which is why 74% of companies struggle to see value.</p><h2><strong>The Maturity Illusion</strong></h2><p>Let&#8217;s have a look at what the data tells us.</p><p>90% of C-suite executives believe their people are ready to use AI effectively. Only 70% of employees agree. 
And when you dig deeper, only 8% of executives have sufficient AI literacy to actually guide these initiatives.</p><p>Organizations spend three times more of their AI budget on technology than on people.</p><p>This creates a dangerous illusion. Leadership thinks they&#8217;re further along than they are. They see the tools deployed. They see the time savings. They assume value is being created.</p><p>But value doesn&#8217;t come from deployment. Value comes from leverage.</p><p>The companies winning with AI aren&#8217;t the ones saving the most time. They&#8217;re the ones converting saved time into strategic advantage.</p><h2><strong>Connecting to Corporate Strategy</strong></h2><p>The leverage question forces alignment.</p><p>When you ask &#8220;What will we do with the freed capacity?&#8221; you&#8217;re really asking: &#8220;What does this company need most right now?&#8221;</p><p>Is it growth? Then saved hours become customer acquisition.</p><p>Is it profitability? Then saved hours become efficiency that drops to the bottom line.</p><p>Is it innovation? Then saved hours become R&amp;D bandwidth.</p><p>Is it talent retention? Then saved hours become better work-life balance.</p><p>The answer depends on your strategy. And if your AI initiatives aren&#8217;t connected to your strategy, you&#8217;re just automating in circles.</p><p>This is why the first conversation shouldn&#8217;t be about AI capabilities. It should be about business priorities. What does the company need? Where is the competitive pressure? What would change the game?</p><p>Then work backward to AI.</p><h2><strong>Before You Start Your Next AI Initiative</strong></h2><p>Ask the leverage question first. Not after. Before.</p><p>Before you scope the project. Before you hire the consultant. Before you deploy the tool.</p><p>&#8220;If this works, what will we do with the freed capacity?&#8221;</p><p>Write down the answer. Make it specific. 
&#8220;We will use 15 reclaimed hours per week to increase outbound sales calls by 30%.&#8221; Or: &#8220;We will reallocate two FTEs to the product development team.&#8221; Or: &#8220;We will reduce average weekly hours from 50 to 45 without cutting output.&#8221;</p><p>If you can&#8217;t answer clearly, you&#8217;re not ready to start.</p><p>The time savings will evaporate. The efficiency gains will get absorbed. And six months from now, you&#8217;ll be in that 74% wondering why AI didn&#8217;t deliver the value everyone promised.</p><h2><strong>The Real Test</strong></h2><p>Here&#8217;s how you know if your AI adoption is working.</p><p>Not: &#8220;How many hours did we save?&#8221;</p><p>But: &#8220;What did we build with those hours?&#8221;</p><p>The winners aren&#8217;t the ones who saved the most time. They&#8217;re the ones who converted it into something that matters.</p><p>Speed without direction is just motion. Efficiency without leverage is just busyness with better tools.</p><p>Your AI consultant gave you hours back. The question is whether you&#8217;re investing those hours or wasting them.</p><p>That&#8217;s on you. Not them.</p><p></p><p><em>Ready to connect AI initiatives to actual business outcomes?</em></p><p><em>Let&#8217;s talk about a Sprint week or Executive Sparring session where we don&#8217;t just find efficiency gains. We build the leverage strategy that converts them.</em></p><p><em>Simple. Clear. Applicable.</em></p><p></p><p><strong>Sources</strong></p><p>1. BCG (October 2024). &#8220;AI Adoption in 2024: 74% of Companies Struggle to Achieve and Scale Value.&#8221; Survey of 1,000+ executives across 59 countries. <a href="https://www.bcg.com/press/24october2024-ai-adoption-in-2024-74-of-companies-struggle-to-achieve-and-scale-value">BCG Press Release</a></p><p>2. ServiceNow &amp; Oxford Economics (2025). &#8220;Enterprise AI Maturity Index 2025.&#8221; Survey of 4,500 executives. Fewer than 1% scored over 50 on the 100-point maturity scale. 
Average scores declined 9 points year-over-year. <a href="https://www.servicenow.com/workflow/hyperautomation-low-code/enterprise-ai-maturity-index-2025.html">ServiceNow</a></p><p>3. Accenture (2025). Study cited in Fortune showing 90% of C-suite believe workers are AI-ready while only 70% of employees agree. Organizations spend 3x more AI budget on technology than people. <a href="https://fortune.com/2025/01/17/c-suite-leaders-believe-workers-ready-to-use-ai-employee-training-skills-gap/">Fortune</a></p><p>4. MIT Sloan Management Review (2024). Research finding only 8% of board-level executives have substantial AI literacy despite widespread confidence claims.</p><p>5. Faros AI (2025). &#8220;The AI Productivity Paradox.&#8221; 75%+ of developers use AI coding tools, but most organizations see no measurable performance gains. <a href="https://www.faros.ai/blog/ai-software-engineering">Faros AI</a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fresh.mundaine.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">mundaine - Damian Nomura is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[74% of Companies Fail at AI. 
Most Never Even Started Right.]]></title><description><![CDATA[74% of companies struggle to achieve and scale value from AI.]]></description><link>https://www.fresh.mundaine.ai/p/74-of-companies-fail-at-ai-most-never</link><guid isPermaLink="false">https://www.fresh.mundaine.ai/p/74-of-companies-fail-at-ai-most-never</guid><dc:creator><![CDATA[Damian Nomura]]></dc:creator><pubDate>Fri, 23 Jan 2026 14:27:22 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ZfbL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075dc327-3939-4217-b355-c59c6808251a_2048x2048.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ZfbL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075dc327-3939-4217-b355-c59c6808251a_2048x2048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ZfbL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075dc327-3939-4217-b355-c59c6808251a_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!ZfbL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075dc327-3939-4217-b355-c59c6808251a_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!ZfbL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075dc327-3939-4217-b355-c59c6808251a_2048x2048.png 1272w, 
https://substackcdn.com/image/fetch/$s_!ZfbL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075dc327-3939-4217-b355-c59c6808251a_2048x2048.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ZfbL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075dc327-3939-4217-b355-c59c6808251a_2048x2048.png" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/075dc327-3939-4217-b355-c59c6808251a_2048x2048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2966461,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.fresh.mundaine.ai/i/185540707?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075dc327-3939-4217-b355-c59c6808251a_2048x2048.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ZfbL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075dc327-3939-4217-b355-c59c6808251a_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!ZfbL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075dc327-3939-4217-b355-c59c6808251a_2048x2048.png 848w, 
https://substackcdn.com/image/fetch/$s_!ZfbL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075dc327-3939-4217-b355-c59c6808251a_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!ZfbL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F075dc327-3939-4217-b355-c59c6808251a_2048x2048.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p></p><p>74% of companies struggle to achieve and scale value from AI.</p><div class="subscription-widget-wrap-editor" 
data-attrs="{&quot;url&quot;:&quot;https://www.fresh.mundaine.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">mundaine - Damian Nomura is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>That&#8217;s not my number. That&#8217;s BCG surveying 1,000 executives across 59 countries.</p><p>And here&#8217;s the uncomfortable part: most of these companies aren&#8217;t failing because they&#8217;re doing AI wrong. They&#8217;re failing because they never actually started. They&#8217;re still &#8220;getting ready.&#8221; Still planning. Still in committee meetings about their AI strategy.</p><p>The world moved on.</p><h2><strong>The Mental Model That&#8217;s Killing You</strong></h2><p>This is a pattern I keep seeing repeat. Executives approach AI with the same mental model they used for IT projects in 2010.</p><p>Big upfront planning. Lengthy requirements phases. 12-month timelines before anything ships. Massive teams. Careful, slow, expensive. Or at least, that&#8217;s the mindset they bring to it.</p><p>That model made sense when technology was expensive to change. When getting it wrong meant costly rework. And a solution seldom went live without rework anyway. When deployment meant physical servers and permanent decisions.</p><p>But that&#8217;s not how it works anymore.</p><p>The technology has changed. Dramatically. 
The question is whether your mindset has caught up.</p><h2><strong>A Conversation in Spain</strong></h2><p>I&#8217;m in Spain this week, catching up with a friend who runs a startup.</p><p>In June last year, he was nowhere near AI. Not even on his radar. Then 2025 happened. A brutal year for him. Market pressure. A sophisticated cyber attack that nearly took his company down. The kind of year that forces you to rethink everything.</p><p>Yesterday, he told me he&#8217;s deploying two AI solutions tomorrow. Not &#8220;exploring.&#8221; Not &#8220;piloting.&#8221; Deploying. Into production. And he&#8217;s already planning the next experiments.</p><p>Then he asked me something that surprised me.</p><p>&#8220;You know, I struggle with having an overview over everything in AI. Can you help me?&#8221;</p><p>My answer: &#8220;I could, but you would lose focus. The way you&#8217;re doing it is exactly how it works. Just keep going like that. The only thing to be aware of is to keep evaluating the results and iterate where needed.&#8221;</p><p>He doesn&#8217;t need a comprehensive AI strategy document. He doesn&#8217;t need to understand the full landscape. He needs to keep doing what he&#8217;s doing: picking problems, building solutions, testing them, and iterating.</p><p>That&#8217;s the speed lane. And he&#8217;s on it.</p><h2><strong>Why IT Projects Used to Be Slow</strong></h2><p>Traditional IT projects were slow for legitimate reasons:</p><p><strong>Waterfall methodology</strong> - You planned everything upfront because changes were expensive. Once you started building, deviations meant rework, delays, and budget overruns.</p><p><strong>Fear of getting it wrong</strong> - Deploying meant committing. Rolling back was painful, sometimes impossible. Better to plan for 6 months than ship something broken.</p><p><strong>Infrastructure constraints</strong> - Physical servers. Long procurement cycles. 
Dependencies that took weeks to provision.</p><p><strong>Limited iteration capability</strong> - Testing required dedicated environments. Feedback loops measured in sprints, not hours.</p><p>These weren&#8217;t irrational fears. They were responses to real constraints.</p><p>But those constraints are mostly gone.</p><h2><strong>What Changed Everything</strong></h2><p>Three shifts collapsed the timeline:</p><p><strong>1. AI-assisted development</strong></p><p>GitHub&#8217;s 2024 data shows 92% of developers now use AI coding tools. Stack Overflow reports 76% save at least 30% of their time on routine programming tasks.</p><p>AI doesn&#8217;t just write code faster. It eliminates the bottleneck of translating ideas into implementation. You describe what you want. It builds the first version. You iterate.</p><p><strong>2. Modular systems</strong></p><p>Modern architecture means components snap together. APIs connect services. You&#8217;re not building from scratch every time. You&#8217;re assembling.</p><p>What took months of custom development now takes days of configuration and connection.</p><p><strong>3. Iteration as default</strong></p><p>Cloud deployment means you ship, test with real users, and improve. The feedback loop collapsed from months to hours. You don&#8217;t need to be right the first time. You need to be fast enough to learn.</p><p>McKinsey reports a 55% reduction in development time through AI-assisted coding. Y Combinator startups are delivering MVPs in 6 weeks that used to take 6 months.</p><h2><strong>What My Friend Actually Did</strong></h2><p>He didn&#8217;t hire an AI consultant to map out his strategy. He didn&#8217;t spend three months building an AI roadmap. He didn&#8217;t wait until he &#8220;understood AI&#8221; before starting.</p><p>He picked a problem. Built something. Tested it. Learned. Moved to the next one.</p><p>No lengthy requirements document. No architecture review committee. No 3-month planning phase.</p><p>Start. Build. 
Test. Ship. Learn.</p><p>If something breaks, fix it. Today. Not next quarter.</p><p>That&#8217;s not reckless. That&#8217;s the new standard.</p><h2><strong>The Competitive Math</strong></h2><p>Here&#8217;s the math that should worry you.</p><p>If your competitor ships and tests 4 ideas per month while you ship one per quarter, they&#8217;re learning 12 times faster than you are.</p><p>Every week you spend &#8220;getting ready,&#8221; they&#8217;re shipping, testing, and iterating. Every month you spend in planning committees, they&#8217;ve built and discarded 5 approaches that didn&#8217;t work and found 3 that did.</p><p>Speed isn&#8217;t about being sloppy. Speed is about compressing the learning cycle. The companies that win aren&#8217;t the ones with the best strategy document. They&#8217;re the ones who figure out what works before everyone else does.</p><p>And the only way to figure out what works is to try things. Fast.</p><h2><strong>The Mindset Shift</strong></h2><p>Speed is possible now. But only if you update your mental operating system.</p><p>Stop treating AI like a 2-year IT project. Treat it like a series of small experiments. Ship something in a week. Learn what works. Build on it.</p><p>Stop waiting for perfect requirements. You can&#8217;t know the perfect requirements until you&#8217;ve seen what&#8217;s possible. Build something rough. Show it to users. Let reality refine your understanding.</p><p>Stop protecting your timeline. The timeline is the enemy. Every day you spend planning is a day your competitors spend learning.</p><p>My friend in Spain figured this out the hard way. After one of the toughest years of his business life, he stopped waiting and started building. Tomorrow, two AI solutions go live.</p><p><em>Speed is possible. But only if you know what to speed toward.</em></p><p>If you&#8217;ve been waiting to &#8220;get ready&#8221; for AI, consider this your wake-up call.</p><p>Speed isn&#8217;t sloppy. 
It&#8217;s the new standard. And the longer you wait to adopt it, the harder the gap becomes to close.</p><p></p><p><em>Ready to move faster?</em></p><p><em>Reach out about a Sprint week where we build working prototypes for your business.</em></p><p><em>Simple. Clear. Applicable.</em></p><p></p><h2><strong>Sources</strong></h2><p>1. BCG (October 2024). &#8220;AI Adoption in 2024: 74% of Companies Struggle to Achieve and Scale Value.&#8221; Survey of 1,000 executives across 59 countries. <a href="https://www.bcg.com/press/24october2024-ai-adoption-in-2024-74-of-companies-struggle-to-achieve-and-scale-value">BCG Press Release</a></p><p>2. GitHub (2024). Developer survey showing 92% of developers use AI coding tools. Via <a href="https://intersog.com/blog/strategy/ai-driven-software-development-accelerating-innovation-in-2025/">Intersog</a></p><p>3. Stack Overflow (2024). Survey reporting 76% of developers save at least 30% of time with AI assistants. Via <a href="https://intersog.com/blog/strategy/ai-driven-software-development-accelerating-innovation-in-2025/">Intersog</a></p><p>4. McKinsey (2024). Research showing 55% reduction in development time through AI-assisted coding. Via <a href="https://www.advancio.com/from-6-months-to-6-weeks-how-ai-is-speeding-up-software-development/">Advancio</a></p><p>5. Y Combinator (2024). Data on startups delivering MVPs in 6 weeks versus traditional 6-month timelines. Via <a href="https://www.advancio.com/from-6-months-to-6-weeks-how-ai-is-speeding-up-software-development/">Advancio</a></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fresh.mundaine.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">mundaine - Damian Nomura is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Everyone&#8217;s talking about AI agents. Almost no one should be building them yet.]]></title><description><![CDATA[Everyone&#8217;s talking about AI agents.]]></description><link>https://www.fresh.mundaine.ai/p/everyones-talking-about-ai-agents</link><guid isPermaLink="false">https://www.fresh.mundaine.ai/p/everyones-talking-about-ai-agents</guid><dc:creator><![CDATA[Damian Nomura]]></dc:creator><pubDate>Sun, 11 Jan 2026 23:21:28 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!mQ25!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7822f532-cc93-4adf-bae8-54ac93eecb19_2048x2048.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mQ25!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7822f532-cc93-4adf-bae8-54ac93eecb19_2048x2048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mQ25!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7822f532-cc93-4adf-bae8-54ac93eecb19_2048x2048.png 424w, 
https://substackcdn.com/image/fetch/$s_!mQ25!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7822f532-cc93-4adf-bae8-54ac93eecb19_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!mQ25!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7822f532-cc93-4adf-bae8-54ac93eecb19_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!mQ25!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7822f532-cc93-4adf-bae8-54ac93eecb19_2048x2048.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mQ25!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7822f532-cc93-4adf-bae8-54ac93eecb19_2048x2048.png" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7822f532-cc93-4adf-bae8-54ac93eecb19_2048x2048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:646338,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.fresh.mundaine.ai/i/184259689?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7822f532-cc93-4adf-bae8-54ac93eecb19_2048x2048.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!mQ25!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7822f532-cc93-4adf-bae8-54ac93eecb19_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!mQ25!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7822f532-cc93-4adf-bae8-54ac93eecb19_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!mQ25!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7822f532-cc93-4adf-bae8-54ac93eecb19_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!mQ25!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7822f532-cc93-4adf-bae8-54ac93eecb19_2048x2048.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p></p><p>Everyone&#8217;s talking about AI agents.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fresh.mundaine.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">mundaine - Damian Nomura is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Autonomous systems. Multi-step reasoning. The future of work.</p><p>And I get it. The demos are compelling. The vision is exciting.</p><p>But here&#8217;s what nobody&#8217;s saying: if you can&#8217;t prove value from a simple AI workflow in 7 days, building an agent isn&#8217;t going to save you.</p><p>It&#8217;s going to bury you.</p><p></p><h2><strong>The Experimentation Trap</strong></h2><p>I&#8217;ve watched this pattern repeat across dozens of companies:</p><p>A team gets excited about AI. They launch a &#8220;pilot.&#8221; Six months later, they still can&#8217;t tell you if it worked.</p><p>Not because the AI failed. Because they never defined what success looked like.</p><p>They weren&#8217;t running experiments. 
They were just trying things.</p><p>And here&#8217;s the uncomfortable truth: <strong>trying things isn&#8217;t the same as proving things.</strong></p><p>Experimentation is a scientific word. Most companies aren&#8217;t using it scientifically.</p><p></p><h2><strong>What the Data Actually Shows</strong></h2><p>MIT just released a study that should terrify every executive investing in AI:</p><p><strong>Only 5% of AI pilots achieve rapid revenue acceleration.</strong></p><p>Think about that. Ninety-five percent are stalling out. Not because the technology doesn&#8217;t work, but because companies can&#8217;t connect the activity to the outcome.</p><p>It gets worse:</p><p>- <strong>46% of AI proof-of-concepts get scrapped before production</strong> (S&amp;P Global Market Intelligence, 2025)</p><p>- <strong>30% of GenAI projects will be abandoned after POC by end of 2025</strong> (Gartner)</p><p>- Companies that do make it to production? It takes them an average of <strong>8 months</strong> to get there (Gartner)</p><p>This isn&#8217;t a technology problem. It&#8217;s a methodology problem.</p><p>Companies are launching pilots in &#8220;safe sandboxes&#8221; with no clear path to deployment. The tech works in isolation, but when it&#8217;s time to go live, they hit a wall: integration, compliance, training, measurement.</p><p>Gartner calls it &#8220;pilot paralysis.&#8221;</p><p>I call it what happens when you skip the groundwork.</p><p></p><h2><strong>Experimentation vs. Trying Things</strong></h2><p>Here&#8217;s the difference:</p><p><strong>Trying things</strong> looks like this:</p><p>- &#8220;Let&#8217;s give the sales team access to ChatGPT and see what happens.&#8221;</p><p>- &#8220;We&#8217;ll pilot this AI assistant for 3 months and evaluate.&#8221;</p><p>- &#8220;Let&#8217;s experiment with AI-generated content.&#8221;</p><p><strong>Proving things</strong> looks like this:</p><p>- &#8220;We believe AI-assisted email drafts will reduce response time by 40%. 
We&#8217;ll measure current baseline (2.5 hours average), test for 7 days, and evaluate against our decision criteria: if response time drops below 1.5 hours, we deploy company-wide.&#8221;</p><p>See the difference?</p><p>One is hope. The other is science.</p><p>And science requires structure.</p><p></p><h2><strong>The 5 Questions That Turn Random Trying Into Real Experimentation</strong></h2><p>If you can&#8217;t answer these five questions before you start, you&#8217;re not experimenting&#8212;you&#8217;re just burning time:</p><p><strong>1. HYPOTHESIS: What do you believe will improve?</strong></p><p>Not &#8220;let&#8217;s see what happens.&#8221; What specifically do you think will get better?</p><p>- Faster report creation?</p><p>- Fewer customer support emails?</p><p>- Higher proposal win rates?</p><p>Name it. Make it concrete.</p><p><strong>2. METRIC: How will you measure it?</strong></p><p>&#8220;Productivity&#8221; isn&#8217;t a metric. &#8220;Time to first draft&#8221; is.</p><p>If you can&#8217;t measure it, you can&#8217;t prove it. And if you can&#8217;t prove it, you can&#8217;t scale it.</p><p><strong>3. BASELINE: What&#8217;s the current state?</strong></p><p>You can&#8217;t know if something improved if you don&#8217;t know where you started.</p><p>How long does the task take now? How many errors occur? What&#8217;s the current cost?</p><p>Measure it. Before you change anything.</p><p><strong>4. TIMEFRAME: How long will you test?</strong></p><p>Here&#8217;s the thing: <strong>7 days beats 7 months.</strong></p><p>Most AI experiments don&#8217;t need months. They need days. A focused week with clear measurement tells you everything you need to know.</p><p>If it doesn&#8217;t show value in 7 days, extending it to 90 won&#8217;t save it.</p><p><strong>5. DECISION CRITERIA: What makes this a success?</strong></p><p>This is where most pilots die. Nobody decides upfront what &#8220;good enough to deploy&#8221; actually means.</p><p>- 20% time savings? 
40%? 60%?</p><p>- 90% accuracy? 95%?</p><p>- Positive user feedback from 70% of testers?</p><p>Set the threshold. Before you start. Then stick to it.</p><p></p><h2><strong>The PhD Trap: Why Overthinking Kills Momentum</strong></h2><p>I&#8217;ve seen teams spend 3 months designing the &#8220;perfect experiment.&#8221;</p><p>They debate variables. They map dependencies. They build elaborate measurement systems.</p><p>And by the time they&#8217;re ready to start, the business has moved on.</p><p>Here&#8217;s what they miss: <strong>you don&#8217;t need a perfect experiment. You need a clear one.</strong></p><p>The 5 questions above? You can answer them in 30 minutes.</p><p>Then you run it. For a week. And you know.</p><p>Speed without sloppiness. That&#8217;s the balance.</p><p>Don&#8217;t overthink the experiment design. Just make sure you <em>have</em> one.</p><p></p><h2><strong>Why 7 Days Beats 7 Months</strong></h2><p>Every company I work with asks: &#8220;How long should we pilot this?&#8221;</p><p>My answer: <strong>As short as possible while still being valid.</strong></p><p>For most AI workflows, that&#8217;s 7 days. Maybe 14 if you need statistical significance.</p><p>And this is why short timeframes work:</p><p><strong>1. Urgency forces clarity</strong></p><p>When you only have a week, you can&#8217;t afford vague goals. You have to define success upfront.</p><p><strong>2. Feedback loops are faster</strong></p><p>Problems surface immediately. You fix them in real-time instead of discovering them in month 5.</p><p><strong>3. The opportunity cost is lower</strong></p><p>If it doesn&#8217;t work, you&#8217;ve lost a week&#8212;not a quarter.</p><p><strong>4. Momentum doesn&#8217;t die</strong></p><p>Three-month pilots lose steam. People forget why they started. Priorities shift.</p><p>A week? 
Everyone stays focused.</p><p></p><h2><strong>The Real Cost of Skipping This</strong></h2><p>Let&#8217;s do the math:</p><p>- Average time from POC to production: <strong>8 months</strong> (for projects that make it)</p><p>- Average percentage that make it: <strong>54%</strong> (industry average)</p><p>- Cost of a failed AI pilot: <strong>$4M - $20M</strong> (Gartner)</p><p>Now imagine a different path:</p><p>- Week 1: Run structured 7-day experiment with 5-question framework</p><p>- Week 2: If it works, deploy. If it doesn&#8217;t, kill it or iterate.</p><p>- Week 3: Start the next one.</p><p>By the time a traditional pilot finishes <em>planning</em>, you&#8217;ve already run 12 experiments and deployed the 3 that worked.</p><p>That&#8217;s the power of structured speed.</p><p></p><h2><strong>What This Means for AI Agents (And Why You&#8217;re Not Ready)</strong></h2><p>Back to where we started: AI agents.</p><p>Agents are powerful. Autonomous. Multi-step. Exciting.</p><p>They&#8217;re also the hardest form of AI to deploy successfully.</p><p>If you can&#8217;t structure a simple experiment around a single AI task, like &#8220;draft email replies&#8221; or &#8220;summarize meeting notes&#8221;, you have no business building an agent.</p><p>Because agents require:</p><p>- Clear success criteria (you don&#8217;t have those yet)</p><p>- Reliable measurement systems (you haven&#8217;t built them)</p><p>- Failure recovery processes (you haven&#8217;t tested them)</p><p>- User trust (you haven&#8217;t earned it)</p><p>All of that comes from the groundwork: small, structured, proven experiments.</p><p><strong>Master the basics. Then build the agents.</strong></p><p>Not the other way around.</p><p></p><h2><strong>The Quick Win Protocol: From Random Trying to Structured Proving</strong></h2><p>I built a framework for this. It&#8217;s called the <strong>AI Quick Win Protocol</strong>.</p><p>One page. Five questions. 
The exact structure you need to turn random experimentation into provable value.</p><p>Here&#8217;s what it includes:</p><p><strong>The 5 Core Questions:</strong></p><p>1. Hypothesis (what will improve?)</p><p>2. Metric (how will you measure?)</p><p>3. Baseline (what&#8217;s the current state?)</p><p>4. Timeframe (how long will you test?)</p><p>5. Decision criteria (what defines success?)</p><p><strong>Plus the Leverage Question:</strong></p><p>&#8220;What will we do with the freed capacity?&#8221;</p><p>Because here&#8217;s the thing: AI doesn&#8217;t just save time. It creates space.</p><p>If you don&#8217;t plan what to do with that space, it&#8217;ll just get filled with more tasks.</p><p>And you&#8217;ll be back where you started&#8212;busier than ever, wondering why AI didn&#8217;t help.</p><p></p><h2><strong>How to Get It</strong></h2><p><strong>Comment &#8220;QUICKWIN&#8221; and I&#8217;ll send you the framework.</strong></p><p>Use it this week. Pick one AI workflow. Answer the 5 questions. Run the experiment.</p><p>Seven days from now, you&#8217;ll know if it works.</p><p>And if it does, you&#8217;ll have proof. Not hope. Not hype.</p><p>Proof.</p><p>That&#8217;s how you move from pilot paralysis to deployment velocity.</p><p>That&#8217;s how you end up in the 5%, not the 95%.</p><p></p><h2><strong>One More Thing: Know Your Baseline</strong></h2><p>Structured experimentation starts with knowing your baseline.</p><p>Not just for individual tasks&#8212;for your entire organization&#8217;s AI readiness.</p><p>Where are you actually starting from? What&#8217;s mature? 
What&#8217;s missing?</p><p>I teamed up with the <strong>AI Maturity Index</strong> to build a <a href="https://app.ai-maturity-index.com/join-chat/IjTRuL3V3WPsUnKZ">maturity check</a> that answers exactly that.</p><p>It&#8217;s a 10-minute assessment that gives you a clear picture of where you are&#8212;and where the gaps are.</p><p>Because you can&#8217;t improve what you can&#8217;t measure.</p><p>And you can&#8217;t measure what you haven&#8217;t baselined.</p><p></p><p><em>Damian Nomura helps companies adopt AI through consulting, advisory, and hands-on implementation. His approach is human-centered, focusing on fast value creation while building sustainable leadership practices. If your AI pilots keep stalling&#8212;or you&#8217;re not sure where to start&#8212;let&#8217;s talk.</em></p><p><strong>Simple. Clear. Applicable.</strong></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fresh.mundaine.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">mundaine - Damian Nomura is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[You've already wasted 6 months on AI.]]></title><description><![CDATA[Here's why...]]></description><link>https://www.fresh.mundaine.ai/p/youve-already-wasted-6-months-on</link><guid isPermaLink="false">https://www.fresh.mundaine.ai/p/youve-already-wasted-6-months-on</guid><dc:creator><![CDATA[Damian Nomura]]></dc:creator><pubDate>Sun, 04 Jan 2026 16:18:16 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!6jAS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7de737a1-9d7a-4bc6-a055-5756e2048cae_2048x2048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!6jAS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7de737a1-9d7a-4bc6-a055-5756e2048cae_2048x2048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!6jAS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7de737a1-9d7a-4bc6-a055-5756e2048cae_2048x2048.png 424w, 
https://substackcdn.com/image/fetch/$s_!6jAS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7de737a1-9d7a-4bc6-a055-5756e2048cae_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!6jAS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7de737a1-9d7a-4bc6-a055-5756e2048cae_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!6jAS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7de737a1-9d7a-4bc6-a055-5756e2048cae_2048x2048.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!6jAS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7de737a1-9d7a-4bc6-a055-5756e2048cae_2048x2048.png" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7de737a1-9d7a-4bc6-a055-5756e2048cae_2048x2048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:626722,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.fresh.mundaine.ai/i/183453115?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7de737a1-9d7a-4bc6-a055-5756e2048cae_2048x2048.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!6jAS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7de737a1-9d7a-4bc6-a055-5756e2048cae_2048x2048.png 
424w, https://substackcdn.com/image/fetch/$s_!6jAS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7de737a1-9d7a-4bc6-a055-5756e2048cae_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!6jAS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7de737a1-9d7a-4bc6-a055-5756e2048cae_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!6jAS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7de737a1-9d7a-4bc6-a055-5756e2048cae_2048x2048.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
</line><line x1=&quot;3&quot;">
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>You hired consultants. Read whitepapers. Attended webinars. Built a task force.</p><p>Six months later, you&#8217;re no closer to actually moving.</p><p>You&#8217;re not alone. 74% of companies struggle to achieve and scale value from AI. And in 2025, 42% abandoned most of their AI initiatives&#8212;up from just 17% the year before.</p><p>The problem isn&#8217;t lack of effort.</p><p>It&#8217;s that you&#8217;re trying to understand everything before you do anything.</p><p>Let me explain.</p><h2><strong>The Expertise Trap</strong></h2><p>Most executives think they (or someone in the company) need to become AI experts before they can make good decisions. So they go deep. Really deep. Either they themselves or a designated person.</p><p>They hire consultants who teach them about transformer architectures. They learn the difference between supervised and unsupervised learning. They study use cases from seventeen different industries.</p><p>And then they get lost.</p><p>Because here&#8217;s the thing: <strong>You don&#8217;t need expertise. 
You need just enough to decide competently.</strong></p><p>There&#8217;s a massive difference.</p><h2><strong>What &#8220;Just Enough&#8221; Actually Means</strong></h2><p>You need to know four things. Not four hundred.</p><p><strong>1. The basics of what AI actually is</strong></p><p>Not the technical details. The decision-making fundamentals:</p><ul><li><p>What&#8217;s automation? (Rules-based, repeatable tasks)</p></li><li><p>What&#8217;s AI? (Pattern recognition, prediction)</p></li><li><p>What&#8217;s agentic AI? (AI that can take actions based on goals)</p></li><li><p>What&#8217;s an AI agent? (Autonomous systems that make decisions and act)</p></li></ul><p>That&#8217;s it. One to two hours of clear explanation beats three months of technical deep-dives.</p><p><strong>2. What AI can actually do for your business</strong></p><p>Not theoretical possibilities. Real capabilities:</p><ul><li><p>Where can it save time without sacrificing quality?</p></li><li><p>Where can it surface insights you&#8217;re currently missing?</p></li><li><p>Where can it handle volume you can&#8217;t scale manually?</p></li></ul><p>You don&#8217;t need to understand how it works. You need to know what it does.</p><p><strong>3. Where to start without betting the company</strong></p><p>Not a comprehensive roadmap. A first experiment:</p><ul><li><p>One workflow that&#8217;s painful today</p></li><li><p>One metric you can measure</p></li><li><p>One week to see if it actually helps</p></li></ul><p>Speed beats perfection. Always.</p><p><strong>4. How to tell if it&#8217;s working</strong></p><p>Not ROI projections for 2027. Clear success criteria:</p><ul><li><p>Did the thing we hoped would improve actually improve?</p></li><li><p>By how much?</p></li><li><p>What do we do next based on what we learned?</p></li></ul><p>That&#8217;s just enough. Everything else is noise. 
For now.</p><h2><strong>The Data Overwhelm Problem</strong></h2><p>Here&#8217;s the uncomfortable truth: 72% of business leaders admit that the sheer volume of data prevents them from making ANY decision.</p><p>Not just AI decisions. Any decision.</p><p>And what do consultants do when executives are drowning in information? They add more information.</p><p>More frameworks. More case studies. More technical specifications. More vendor comparisons.</p><p>It&#8217;s well-intentioned. But it&#8217;s paralyzing.</p><p>Because while you&#8217;re learning, your competitors are experimenting. While you&#8217;re planning the perfect strategy, they&#8217;re running messy pilots and learning what actually works.</p><h2><strong>The Confidence Gap</strong></h2><p>Here&#8217;s where it gets dangerous.</p><p>90% of C-suite executives say they&#8217;re confident making AI decisions.</p><p>Only 8% actually possess substantial knowledge of AI technologies.</p><p>That&#8217;s an 82-point gap between confidence and competence.</p><p>You know what that gap is filled with? Consultants selling certainty. Vendors selling sophistication. And executives making decisions based on buzzwords instead of understanding.</p><p>The ones who succeed aren&#8217;t the ones who know the most. They&#8217;re the ones who know just enough to start, and learn everything else by doing.</p><h2><strong>What Success Actually Looks Like</strong></h2><p>You don&#8217;t need to understand transformer architectures to deploy a customer service chatbot.</p><p>You don&#8217;t need a PhD in machine learning to automate your reporting workflows.</p><p>You don&#8217;t need to master prompt engineering to use AI for market research.</p><p>You need to know:</p><ul><li><p>What problem you&#8217;re solving</p></li><li><p>How you&#8217;ll measure success</p></li><li><p>What you&#8217;ll do if it works (or doesn&#8217;t)</p></li></ul><p>That&#8217;s competent decision-making. 
And it&#8217;s enough.</p><p>The executives who are winning at AI adoption aren&#8217;t the ones with the most knowledge. They&#8217;re the ones with the clearest thinking.</p><p>They ask better questions:</p><ul><li><p>&#8220;What&#8217;s the smallest experiment we can run this week?&#8221;</p></li><li><p>&#8220;How will we know if this is actually helping?&#8221;</p></li><li><p>&#8220;What will we do with the time we save?&#8221;</p></li></ul><p>Not:</p><ul><li><p>&#8220;What&#8217;s the optimal architecture for our use case?&#8221;</p></li><li><p>&#8220;How does this compare to seventeen other solutions?&#8221;</p></li><li><p>&#8220;What if we&#8217;re missing something?&#8221;</p></li></ul><h2><strong>The Golden Cut</strong></h2><p>There is a simple, fast way to do AI right.</p><p>It&#8217;s not about going wide and deep before you start. It&#8217;s about going narrow and shallow, learning fast, and building from there.</p><p><strong>Step 1: Know where you stand</strong></p><p>Before anything else, benchmark yourself. Not against theoretical best practices. Against real companies doing real work.</p><p>Where are you actually strong? Where are you actually weak? What&#8217;s the gap between where you are and where you need to be?</p><p>The AI Maturity Index and mundaine have teamed up. Take the assessment for executives. Fifteen minutes. Free. Anonymous. Benchmarked against thousands of executives worldwide. Available in English and German.</p><p><a href="https://app.ai-maturity-index.com/join-chat/IjTRuL3V3WPsUnKZ">Take the assessment &#8594;</a></p><p><strong>Step 2: Pick one painful workflow</strong></p><p>Not the most strategic. Not the highest ROI. The most painful.</p><p>The thing your team complains about. The bottleneck that slows everything down. The task that makes people groan.</p><p>That&#8217;s your starting point.</p><p><strong>Step 3: Run a 7-day experiment</strong></p><p>Not a 6-month pilot. 
Seven days.</p><ul><li><p>What do you believe will improve?</p></li><li><p>How will you measure it?</p></li><li><p>What does success look like?</p></li></ul><p>One week. Real work. Real measurement.</p><p><strong>Step 4: Decide and move</strong></p><p>Success? Scale it.</p><p>Failure? Pivot or kill it.</p><p>Unclear? Run it one more week with better metrics.</p><p>But decide. And move.</p><h2><strong>The Real Cost of Waiting</strong></h2><p>Every day you spend &#8220;getting ready&#8221; for AI is a day you&#8217;re not learning what actually works in your business.</p><p>Your competitors aren&#8217;t waiting. The 26% who aren&#8217;t struggling to scale AI value? They started before they were ready. They learned by doing. They built competence through action, not study.</p><p>You don&#8217;t need to know everything.</p><p>You need to know just enough to start. And you need to start now.</p><p>Before you hire another high-profile consultant. Before you read another whitepaper. Before you build another task force.</p><p>Benchmark where you stand. Pick one experiment. Run it for seven days. Learn. Decide. Move.</p><p>That&#8217;s the golden cut.</p><p>Simple. Clear. Applicable.</p><p><strong>Ready to stop learning and start doing?</strong></p><p>Take the AI Maturity Index assessment. See where you actually stand. 
Then run your first experiment this week.</p><p><a href="https://app.ai-maturity-index.com/join-chat/IjTRuL3V3WPsUnKZ">Benchmark yourself now &#8594;</a></p><h3><strong>Sources</strong></h3><ul><li><p><a href="https://www.bcg.com/press/24october2024-ai-adoption-in-2024-74-of-companies-struggle-to-achieve-and-scale-value">BCG Press Release (Oct 2024)</a> - 74% of companies struggle to scale AI value</p></li><li><p><a href="https://www.baytechconsulting.com/blog/ai-investment-pullback-strategy-2025">BayTech Consulting</a> - 42% abandoned AI initiatives in 2025</p></li><li><p><a href="https://www.fastcompany.com/91338197/for-ceos-ai-tech-literacy-is-no-longer-optional-ceos-ai-literacy">Fast Company/MIT Sloan</a> - 90% confident, 8% knowledgeable</p></li><li><p><a href="https://fortune.com/2023/04/19/ai-data-decision-making-business-leaders-research-oracle/">Oracle/Fortune</a> - 72% paralyzed by data volume</p></li></ul>]]></content:encoded></item><item><title><![CDATA[The Agent Trap: Why Even AI Experts Can't Answer "What Would I Use Agents For?"]]></title><description><![CDATA[The Agent Trap: Why Even AI Experts Can&#8217;t Answer &#8220;What Would I Use Agents For?&#8221;]]></description><link>https://www.fresh.mundaine.ai/p/the-agent-trap-why-even-ai-experts</link><guid isPermaLink="false">https://www.fresh.mundaine.ai/p/the-agent-trap-why-even-ai-experts</guid><dc:creator><![CDATA[Damian Nomura]]></dc:creator><pubDate>Mon, 29 Dec 2025 06:00:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!8-1i!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2944cd01-7c49-415c-8010-5bd68d36a0c6_2048x2048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8-1i!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2944cd01-7c49-415c-8010-5bd68d36a0c6_2048x2048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8-1i!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2944cd01-7c49-415c-8010-5bd68d36a0c6_2048x2048.png 424w, 
https://substackcdn.com/image/fetch/$s_!8-1i!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2944cd01-7c49-415c-8010-5bd68d36a0c6_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!8-1i!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2944cd01-7c49-415c-8010-5bd68d36a0c6_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!8-1i!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2944cd01-7c49-415c-8010-5bd68d36a0c6_2048x2048.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8-1i!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2944cd01-7c49-415c-8010-5bd68d36a0c6_2048x2048.png" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2944cd01-7c49-415c-8010-5bd68d36a0c6_2048x2048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6372347,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.fresh.mundaine.ai/i/182811610?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2944cd01-7c49-415c-8010-5bd68d36a0c6_2048x2048.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!8-1i!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2944cd01-7c49-415c-8010-5bd68d36a0c6_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!8-1i!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2944cd01-7c49-415c-8010-5bd68d36a0c6_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!8-1i!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2944cd01-7c49-415c-8010-5bd68d36a0c6_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!8-1i!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2944cd01-7c49-415c-8010-5bd68d36a0c6_2048x2048.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h2>The Agent Trap: Why Even AI Experts Can&#8217;t Answer &#8220;What Would I Use Agents For?&#8221;</h2><p>If even the person who built the world&#8217;s most advanced AI agent platform sees it misused constantly, what chance does a budget-setting executive have?</p><p></p><p>I just got off a call with Philip Alm. He&#8217;s the co-founder and CEO of Incredible.one[1], a Swedish AI company that went straight to #1 on Product Hunt with what might be the most advanced AI agent platform I&#8217;ve ever seen. Winner of Sweden&#8217;s largest innovation award. Backed by serious investors. The real deal.</p><p>And yet, even with early access to his cutting-edge platform, I found myself sitting there asking: <em>What would I actually use agents for?</em></p><p>This isn&#8217;t false modesty. I bring eight years of AI experience. I&#8217;ve built agentic systems, automated workflows, consulted on AI strategy for companies of all sizes. If anyone should know what to use agents for, it&#8217;s me.</p><p>But I paused. And that pause matters.</p><p></p><h3>The 99% Problem</h3><p>Here&#8217;s what Philip told me that stuck: <strong>&#8220;Ninety-nine percent of the use cases people build aren&#8217;t actually agent use cases.&#8221;</strong></p><p>Read that again.</p><p>The builder of one of the world&#8217;s most advanced agent platforms, the person with the clearest view of how people actually use his technology, says almost everyone is using it wrong. Sure, you <em>can</em> solve these problems with an agent. But should you? It&#8217;s like using a sledgehammer for a thumbtack.</p><p>Don&#8217;t get me wrong. 
This isn&#8217;t a criticism of his users. It&#8217;s an observation about where we are in this adoption curve. People hear &#8220;agents&#8221; and think it&#8217;s the future. So they try to force everything into that box.</p><p>The result? Overcomplicated solutions to simple problems.</p><p>The research backs this up. Gartner predicts[2] that over 40% of agentic AI projects will be canceled by 2027 due to escalating costs, unclear business value, or inadequate risk controls. McKinsey&#8217;s latest findings[3] are even more sobering: some companies are already &#8220;retrenching&#8221;&#8212;rehiring people where agents have failed. Their conclusion? &#8220;Agents aren&#8217;t always the answer&#8212;in some contexts traditional automation is still the smarter choice.&#8221;</p><p>Meanwhile, vendors are making things worse. Gartner calls it &#8220;agent washing&#8221;: the rebranding of chatbots and RPA tools as &#8220;agents&#8221; without any real agentic capabilities. Of the thousands of vendors claiming to sell agents, Gartner estimates only about 130 are legitimate.</p><p></p><h3>What Most People Miss: The Distinction</h3><p>Let me clarify how I distinguish agents, agentic systems, and automation.</p><p><strong>An agent</strong>, by classical AI definitions, possesses four core characteristics: autonomy (operating independently without constant human oversight), perception (gathering information from its environment), rational decision-making (analyzing options and choosing actions that maximize goal achievement), and goal-directed behavior (proactively pursuing objectives, not just reacting to prompts). Advanced agents add learning and adaptability&#8212;continuously improving through experience. The highest-tier agents even set their own goals and create tools as needed.</p><p>In practice, when we talk about true agents, we&#8217;re talking about specialized roles, sub-agents, supervisors, hierarchy. 
We&#8217;re talking about systems that decide for <em>themselves</em> whether to go route A or B&#8212;not based on business logic I implemented, but based on judgment. We&#8217;re talking about actually replacing parts of people&#8217;s work.</p><p><strong>An agentic system</strong> follows decision trees I built. It looks autonomous, but I defined the paths. It&#8217;s still automation at its core.</p><p><strong>Automation</strong> is deterministic. Same input, same output. Every time.</p><p><em><strong>(Side note: If you&#8217;ve encountered &#8220;agents&#8221; in Microsoft Copilot, you&#8217;ve met something that sits somewhere between true agents and agentic systems&#8212;which makes this even more confusing. For many, Copilot will be their first touchpoint with something called an &#8220;agent.&#8221; No wonder the lines are blurry.)</strong></em></p><p>Most &#8220;agent use cases&#8221; I see are really agentic systems. Many agentic systems would work better as simple automation. And some automation could be a spreadsheet.</p><p>The tool should match the problem. But we&#8217;ve got the shiny new hammer, so everything looks like a nail.</p><p></p><h3>The Hype We Skipped</h3><p>Here&#8217;s something we don&#8217;t talk about enough: <strong>we completely skipped the automation hype.</strong></p><p>I was working in automation when that wave should have peaked. But it never did. It was too complicated for management to understand. The speed to value was too slow. So we jumped past it.</p><p>All those consultancy waves&#8212;big data, cloud, digitization&#8212;all had their moment. Automation got skipped. 
Now we&#8217;re jumping to AI and agents without the groundwork:</p><p>- When do you need exact, deterministic results?</p><p>- When do you need less exact but more human, non-deterministic results?</p><p>- When is AI even needed at all?</p><p>Without answers to these questions, companies are making million-dollar decisions based on vibes and vendor pitches.</p><p></p><h3>The Magnifier Problem</h3><p>AI is a magnifier. It makes everything faster, louder, bigger.</p><p>Feed it chaos, and it amplifies chaos. Feed it broken processes, and it breaks them faster. Feed it messy data, and it produces messy outputs at scale.</p><p>Agents amplify even faster.</p><p>Before asking for agents, you need to understand what you&#8217;re actually trying to solve. You need the groundwork. Most companies don&#8217;t have it. They&#8217;re trying to run before they can walk.</p><p>Philip&#8217;s experience confirms what I see in my day-to-day work: the technology isn&#8217;t the bottleneck. Understanding is.</p><p></p><h3>The Management Upskilling Gap</h3><p>If I bring eight years of AI experience and still have to pause at &#8220;what would I use agents for?&#8221;&#8212;how should decision-makers investing millions know? How should managers allocating resources know? How should workers wondering about their futures know?</p><p>They can&#8217;t. Not without upskilling first.</p><p>McKinsey&#8217;s State of AI 2025 report[4] puts numbers to this gap: 88% of organizations now report using AI, but only 6% are seeing real financial impact. Just 1% believe their AI adoption has reached maturity. Everyone&#8217;s doing something. Almost no one&#8217;s doing it well.</p><p>The earlier leadership teams build this understanding, the further ahead they are. But most are still struggling to differentiate between automation and AI. Between agentic and truly autonomous. 
Between what sounds impressive in a demo and what actually creates value.</p><p>Philip said it well in an interview: the key is &#8220;the ability to distinguish between technology demonstrations and genuine value.&#8221;</p><p>That&#8217;s the gap. And it&#8217;s widening every day.</p><p></p><h3>What This Means for You</h3><p>If you&#8217;re planning AI investments for the year ahead, start with these questions:</p><p><strong>Before reaching for agents, ask:</strong></p><p>- What problem am I actually solving?</p><p>- Does this need judgment, or does it need consistency?</p><p>- Have I automated the basics first?</p><p>- Would a simpler solution work?</p><p><strong>Before approving the budget, ask:</strong></p><p>- Can my team distinguish automation from AI from agents?</p><p>- Do we have the groundwork (clean processes, good data) to benefit from AI?</p><p>- Are we solving a real problem or chasing a demo?</p><p>The unsexy truth: most companies need automation, not agents. They need to fix their processes before they amplify them. They need management that understands the distinction before they invest in the technology.</p><p></p><h3>The Question That Matters</h3><p>I walked away from my conversation with Philip with a deeper appreciation for where we actually are. Not where the hype says we are. Where we actually are.</p><p>The agent era is coming. Philip and his team are building the infrastructure for it. But we&#8217;re not there yet. Not for most use cases. Not for most companies.</p><p>The question isn&#8217;t &#8220;what agent should I build?&#8221; The question is: &#8220;do I even need one?&#8221;</p><p>For ninety-nine percent of you reading this, the honest answer is probably no. Not yet.</p><p>Start with the groundwork. Understand the distinctions. Build the basics. 
Then, when you&#8217;re actually ready, the agents will be waiting.</p><p></p><h3>References</h3><p>[1] Incredible.one - https://www.incredible.one</p><p>[2] Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by 2027 - https://www.gartner.com/en/newsroom/press-releases/2025-06-25-gartner-predicts-over-40-percent-of-agentic-ai-projects-will-be-canceled-by-end-of-2027</p><p>[3] McKinsey: One Year of Agentic AI: Six Lessons from the People Doing the Work - https://www.mckinsey.com/capabilities/quantumblack/our-insights/one-year-of-agentic-ai-six-lessons-from-the-people-doing-the-work</p><p>[4] McKinsey: The State of AI 2025 - https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai</p><p></p><p><em><strong>If you&#8217;re a leader trying to figure out where AI actually fits in your business, my <a href="https://mundaine.ai">AI Adoption Sprint</a> takes you from confused to confident in one week. We do the groundwork together&#8212;so you know exactly what you need before you invest in what you don&#8217;t.</strong></em></p><p><em><strong>And if you&#8217;d rather have someone build the right solution for you (often automation, not agents), reach out about our Done-for-You automation service (https://mundaine.ai). 
We&#8217;ll build what you actually need in 1-2 weeks.</strong></em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.fresh.mundaine.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.fresh.mundaine.ai/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[The Four Eras of Software Acquisition]]></title><description><![CDATA[Why 2025 Is the Year Small Companies Stop Buying and Start Building]]></description><link>https://www.fresh.mundaine.ai/p/the-four-eras-of-software-acquisition</link><guid isPermaLink="false">https://www.fresh.mundaine.ai/p/the-four-eras-of-software-acquisition</guid><dc:creator><![CDATA[Damian Nomura]]></dc:creator><pubDate>Mon, 22 Dec 2025 08:14:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!jKHZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd10353-eb01-4aa4-a961-cef9f2f9395e_2528x1696.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!jKHZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd10353-eb01-4aa4-a961-cef9f2f9395e_2528x1696.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!jKHZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd10353-eb01-4aa4-a961-cef9f2f9395e_2528x1696.png 424w, 
https://substackcdn.com/image/fetch/$s_!jKHZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd10353-eb01-4aa4-a961-cef9f2f9395e_2528x1696.png 848w, https://substackcdn.com/image/fetch/$s_!jKHZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd10353-eb01-4aa4-a961-cef9f2f9395e_2528x1696.png 1272w, https://substackcdn.com/image/fetch/$s_!jKHZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd10353-eb01-4aa4-a961-cef9f2f9395e_2528x1696.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!jKHZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd10353-eb01-4aa4-a961-cef9f2f9395e_2528x1696.png" width="1456" height="977" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5bd10353-eb01-4aa4-a961-cef9f2f9395e_2528x1696.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:977,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6105905,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.fresh.mundaine.ai/i/182305126?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd10353-eb01-4aa4-a961-cef9f2f9395e_2528x1696.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!jKHZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd10353-eb01-4aa4-a961-cef9f2f9395e_2528x1696.png 424w, https://substackcdn.com/image/fetch/$s_!jKHZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd10353-eb01-4aa4-a961-cef9f2f9395e_2528x1696.png 848w, https://substackcdn.com/image/fetch/$s_!jKHZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd10353-eb01-4aa4-a961-cef9f2f9395e_2528x1696.png 1272w, https://substackcdn.com/image/fetch/$s_!jKHZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5bd10353-eb01-4aa4-a961-cef9f2f9395e_2528x1696.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>88% of enterprises are experimenting with AI, but 70% of those projects never move past the pilot stage.</p><p>Here&#8217;s what nobody&#8217;s saying: the barrier isn&#8217;t technology. It&#8217;s organizational weight. The same resources that helped large companies win for decades (big IT teams, governance frameworks, structured processes) are now anchoring them to the ocean floor while smaller companies sail past.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fresh.mundaine.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">mundaine - Damian Nomura is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>We&#8217;re witnessing an inversion of scale advantage. And most leaders haven&#8217;t realized it yet.</p><p>Let me explain.</p><h2>The Four Eras</h2><p>For thirty years, enterprise software followed a predictable pattern. You identified a need. You bought a massive platform. You used maybe 10% of its features. You paid for all of it.</p><p>This was the <strong>Big Systems Era</strong>. SAP. 
Oracle. Salesforce at enterprise scale. High cost, low utilization, but it made sense. Building custom software required massive capital and specialized teams.</p><p>Then came the <strong>Specialized SaaS Era</strong>. Instead of one monolithic platform, you bought focused tools. Project management here. CRM there. Analytics somewhere else. Less waste than before, but still paying for features you&#8217;d never touch. Still configuring workflows to fit the software instead of the other way around.</p><p>We&#8217;re currently living in the <strong>Composable Era</strong>. I call it the messy middle. You assemble best-of-breed tools and orchestrate them through integrations and APIs. It works. Kind of. Until you&#8217;re managing seventeen subscriptions, three integration platforms, and a full-time person just keeping the pipes flowing.</p><p>But look closer at what&#8217;s emerging beneath the surface.</p><p>We&#8217;re entering the <strong>On-Demand Creation Era</strong>. Not someday. Now.</p><p>You can already see this in the mainstream. Google just launched Disco, an experimental AI tool for the browser, powered by Gemini 3. Open a few tabs while researching a trip to Japan. Disco analyzes what you&#8217;re looking at and offers to build you a custom planning tool. Within a minute, it assembles a browser-based app with a map of Japan annotated with attractions, an itinerary builder, and links to all your sources.</p><p>No coding. No buying software. You describe what you need, and it builds it.</p><p>Google calls this feature &#8220;GenTabs.&#8221; You prompt the system with what you&#8217;re trying to accomplish, and it generates a custom interface with the information and tools you need. Studying a complex subject? It suggests building a visualization app. Comparing recipes? It offers to create a meal planner. The underlying AI handles all the logic and code generation.</p><p>This isn&#8217;t a developer tool hidden in some technical preview. 
It&#8217;s a consumer product from the world&#8217;s largest search company. The signal is clear: the era of describing what you need and having it built is no longer coming. It&#8217;s here.</p><h2>What Changed</h2><p>So what does the shift look like? We&#8217;re moving from buying functionality to <em><strong>describing</strong></em> functionality and having it built.</p><p>A team of three can now have custom software created for their exact workflow in days, not months. The economics have inverted. What used to require a $200,000 development project and six months of vendor management can now happen in a week with one technical person and AI assistance.</p><p>I&#8217;ve seen this firsthand. I have built an entire web application (front end, back end, deployment pipeline) in under a week. No external developers. No massive budget. Just describing what we needed and iterating with AI until it worked.</p><p>The research backs this up. According to analysis from Menlo Ventures, startups now hold 71% market share in product and engineering tools, including code generation, beating enterprise incumbents who had every structural advantage. Why? They shipped faster. They experimented more. They weren&#8217;t slowed down by governance committees.</p><p>Businesses implementing custom solutions report an average ROI of 55% over five years, compared to 42% for SaaS implementations, according to Gartner data.</p><p>The math is changing.</p><h2>The Paradox Nobody Talks About</h2><p>Here&#8217;s where it gets interesting: <strong>small companies now have the advantage</strong>.</p><p>I know. It sounds backwards.</p><p>Large companies have more resources. Bigger budgets. Established IT departments. Access to the best tools and platforms. They should be winning this race.</p><p>But they&#8217;re not.</p><p>Research from McKinsey found that while 88% of enterprises are using or experimenting with AI, only 33% have deployed it across their organizations. 
The gap between experimentation and deployment reveals a leadership challenge, not a technology one.</p><p>When you&#8217;re a company of thousands, every AI initiative needs governance frameworks. Cross-functional committees. Risk assessments. Compliance reviews. Security audits. Change management processes.</p><p>When you&#8217;re a team of eight, you just... build it.</p><p>Academic research on AI adoption in smaller firms points to the same conclusion: agility and willingness to take risks give startups a distinct advantage with disruptive technologies. While large firms leverage substantial R&amp;D resources for incremental innovation, smaller firms lead with technologies like AI that reward speed over scale.</p><p>The same attributes that made large companies successful in the Big Systems Era (formal processes, structured decision-making, coordinated rollouts) are now liabilities. Your advantage doesn&#8217;t come from the tools themselves. It comes from your ability to move.</p><h2>What This Actually Looks Like</h2><p>Let&#8217;s get concrete.</p><p>A small recruiting firm needs a candidate tracking system. The SaaS options are either too simple or too complex. Too expensive or too limited. Nothing quite fits.</p><p>In 2020, they would have compromised. Bought the closest fit. Configured their workflows to match the software.</p><p>In 2025, they describe their exact process to an AI-assisted developer. Custom candidate pipeline. Automated follow-ups specific to their approach. Integration with their existing tools. Built in two weeks. Owned completely. Modified whenever they need.</p><p>A boutique consulting firm wants a proposal generation system that pulls from their IP library, adapts to client contexts, and maintains their voice. No SaaS tool does this.</p><p>They don&#8217;t need to find one anymore. They build it. Custom. Exact. Theirs.</p><p>The difference is speed of execution. Small companies decide on Monday and deploy on Friday. 
Large companies form committees.</p><h2>The Trap Large Companies Are In</h2><p>Here&#8217;s the paradox large companies face: organizations with comprehensive governance models struggle to move fast. Organizations without them can&#8217;t scale safely. Either way, they lose ground to smaller competitors who don&#8217;t face this tradeoff.</p><p>Large enterprises also face integration nightmares with legacy systems. Nearly 60% cite integrating with existing infrastructure as their primary barrier to AI adoption. Their data sits in incompatible formats across siloed departments. Custom development requires coordinating multiple teams, navigating bureaucracy, securing budget approvals.</p><p>Small companies don&#8217;t have legacy systems holding them back. They&#8217;re not coordinating seventeen stakeholders. They&#8217;re not managing vendor relationships or negotiating enterprise contracts. They&#8217;re building what they need and shipping it.</p><p>The research is clear: organizations without structured governance experience faster initial deployment but struggle to scale. Organizations with comprehensive governance move slowly but systematically. Both approaches have costs.</p><p>But in an era where technology capabilities double every eighteen months, speed matters more than perfection.</p><h2><strong>Now, what does that mean for leaders?</strong></h2><p>If you&#8217;re running a small company, you&#8217;re sitting on an advantage you might not realize you have.</p><p>The keys to the future are already in your hands. The only question is whether you recognize it.</p><p>You don&#8217;t need a massive IT budget. You don&#8217;t need governance frameworks designed for thousand-person organizations. You don&#8217;t need to compromise your workflows to fit off-the-shelf software.</p><p>You need three things:</p><p><strong>1. A willingness to experiment.</strong> Not pilot programs and steering committees. 
Actual experimentation where someone on your team describes a problem and builds a solution. Fast feedback loops. Quick failures. Rapid iteration.</p><p><strong>2. One technically capable person.</strong> Not a full development team. One person who can work with AI tools to create custom functionality. This is becoming a core business skill, not a specialized technical role.</p><p><strong>3. Permission to move faster than feels comfortable.</strong> The protective instinct to &#8220;wait until we have it figured out&#8221; is the same instinct that lets larger, slower competitors catch up. In 2025, building beats buying in speed, fit, and cost.</p><p>According to Salesforce&#8217;s SMB Trends Report, growing small and mid-sized businesses are the primary drivers of AI adoption. 83% are already experimenting. 78% plan to increase investments. The gap is widening between AI-native small companies and those waiting for the &#8220;right time.&#8221;</p><h2>The Choice Point</h2><p>We&#8217;re at an inflection point.</p><p>For the first time in modern business history, being small is a software advantage. You can build exactly what you need, faster than enterprises can evaluate what to buy.</p><p>But this window won&#8217;t stay open forever. As more small companies realize this advantage and act on it, competitive pressure increases. The companies moving now are building capabilities. The ones waiting are falling behind.</p><p>Large companies will eventually solve their governance challenges and integration problems. They&#8217;ll figure out how to move faster. But that takes time.</p><p>That&#8217;s your window.</p><p>You can keep shopping for SaaS tools that almost fit. Keep paying for features you&#8217;ll never use. Keep configuring your business to match software designed for someone else.</p><p>Or you can start describing what you actually need and building it.</p><p>The technology is here. The economics work. 
The only barrier is recognizing where you stand.</p><h2>What To Do Next</h2><p>If you&#8217;re a small company leader reading this, here&#8217;s your action plan:</p><p><strong>This week:</strong> Identify one workflow where you&#8217;re compromising because existing software doesn&#8217;t quite fit. Not your biggest problem. Just one clear example.</p><p><strong>Next week:</strong> Describe exactly how that workflow should work in your business. Not how the software makes you do it. How you would design it.</p><p><strong>The week after:</strong> Find one technical person, internal or external, who can work with AI tools to build a prototype. Give them your description. Set a two-week timeline.</p><p>Don&#8217;t aim for perfect. Aim for working. You can iterate from there.</p><p>The companies that figure this out first will have an advantage measured in years, not months. They&#8217;ll have systems built exactly for their needs. Software that evolves with their business instead of constraining it. Competitive moats built on custom capabilities, not purchased platforms.</p><p>This is the shift. The era of buying software is ending. The era of building exactly what you need is beginning.</p><p>The question isn&#8217;t whether this is happening. It&#8217;s whether you&#8217;ll be early or late.</p><p>---</p><p><em>If this resonates but you&#8217;re not sure where to start, that&#8217;s exactly what I help companies navigate. I run 5-day AI Adoption Sprints where teams go from &#8220;we should probably do something with AI&#8221; to deployed, working solutions. We don&#8217;t just talk about possibilities. We build them.</em></p><p><em>And for leaders who see the vision but need execution support, I offer Done-for-You Automation where we build your custom solutions in 1-2 weeks. You describe the need. We deliver the working system.</em></p><p><em>*2025 is the year to stop buying software and start building it. 
Let&#8217;s make sure you&#8217;re not still shopping while your competitors are shipping.*</em></p><p>---</p><p><strong>Research Sources:</strong></p><p>- [Google Blog: Disco and GenTabs](https://blog.google/technology/google-labs/gentabs-gemini-3/)</p><p>- [TechCrunch: Google Debuts Disco](https://techcrunch.com/2025/12/11/google-debuts-disco-a-gemini-powered-tool-for-making-web-apps-from-browser-tabs/)</p><p>- [McKinsey: The State of AI in 2025](https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai)</p><p>- [Menlo Ventures: 2025 State of Generative AI in the Enterprise](https://menlovc.com/perspective/2025-the-state-of-generative-ai-in-the-enterprise/)</p><p>- [Salesforce: SMB Trends Report](https://www.salesforce.com/resources/research-reports/smb-trends/)</p><p>- [Netguru: SaaS vs Custom Software Guide (Gartner data)](https://www.netguru.com/blog/saas-vs-custom-software)</p><p>- [Deloitte: AI Adoption Challenges and Trends](https://www.deloitte.com/us/en/services/consulting/blogs/ai-adoption-challenges-ai-trends.html)</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fresh.mundaine.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">mundaine - Damian Nomura is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The AI productivity lie]]></title><description><![CDATA[> 96% of executives expect AI to boost productivity.]]></description><link>https://www.fresh.mundaine.ai/p/the-ai-productivity-lie</link><guid isPermaLink="false">https://www.fresh.mundaine.ai/p/the-ai-productivity-lie</guid><dc:creator><![CDATA[Damian Nomura]]></dc:creator><pubDate>Sun, 14 Dec 2025 23:33:13 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6b952ea0-0a5a-40fc-8b73-3e23f8ec70c2_2048x2048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&gt; 96% of executives expect AI to boost productivity. 77% of workers say it&#8217;s making things worse. Something doesn&#8217;t add up.</p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fresh.mundaine.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">mundaine - Damian Nomura is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>I&#8217;ve been thinking about a strange experience I had recently.</p><p>I was working with three terminal windows open, running AI agents in parallel. Task one running. Task two in progress. Task three queued up. No waiting, no idle time. Just pure output. I built more in a single afternoon than I used to build in a week.</p><p>And by evening, I was completely wiped out.</p><p>Not tired in the normal &#8220;long day&#8221; sense. Something deeper. The kind of exhaustion that makes you question whether the speed was worth it.</p><p>I thought I was the only one feeling this way. Turns out, I&#8217;m not even close.</p><p></p><p><strong>The Numbers That Should Worry You</strong></p><p>A recent Upwork study surveyed 2,500 workers across the US, UK, Australia, and Canada. The findings are striking:</p><p><strong>96% of C-suite executives</strong> expect AI tools to increase their company&#8217;s productivity.</p><p><strong>77% of employees</strong> say AI has actually <em>decreased</em> their productivity.</p><p>That&#8217;s not a gap. That&#8217;s a chasm.</p><p>And here&#8217;s the part that really got my attention: workers who use AI frequently are <strong>30% more likely to report burnout</strong> than those who never use it. 45% of frequent AI users are burned out, compared to 35% of non-users.</p><p>The tools that were supposed to give us our time back are somehow taking more of it.</p><p></p><p><strong>What&#8217;s Really Happening</strong></p><p>The core insight isn&#8217;t that AI doesn&#8217;t work. 
It&#8217;s that we&#8217;ve been deploying it without addressing what actually matters.</p><p><strong>Nearly half of employees using AI don&#8217;t know how to achieve the productivity gains their employers expect.</strong> We handed people powerful tools and assumed they&#8217;d figure it out. They didn&#8217;t. Not because they&#8217;re incapable, but because nobody taught them how.</p><p>This is a skills gap masquerading as a technology problem.</p><p>Microsoft Research found something even more revealing: the more confident workers are in AI, the less critical thinking they apply. They trust the output, skip the verification, and end up with what researchers are now calling &#8220;workslop&#8221;: AI-generated work that looks productive but lacks the substance to actually move things forward.</p><p>41% of workers have encountered workslop. Think: AI-generated reports that sound impressive but say nothing actionable. Each instance costs about two hours to fix.</p><p>So much for the productivity gains.</p><p></p><p><strong>The Efficiency Trap</strong></p><p>Wharton researchers identified a pattern they call &#8220;the efficiency trap.&#8221;</p><p>Here&#8217;s how it works: Every productivity improvement becomes the new baseline. You complete a project in three days instead of five? Great. Now three days is the expectation. The time you saved doesn&#8217;t become free time. It becomes time for more work.</p><p>Workers report feeling:</p><p><strong>&#8220;Simultaneously more productive and more overwhelmed.&#8221;</strong></p><p>I recognize this feeling. That afternoon with three terminals running? I got an enormous amount done. And the next day, my brain expected that pace to continue. Anything less felt like slacking.</p><p>This isn&#8217;t a bug. It&#8217;s how we&#8217;ve designed our relationship with productivity tools. We optimize for output and assume the humans will adapt.</p><p>They are adapting. 
They&#8217;re burning out.</p><p></p><p><strong>We&#8217;ve Seen This Before</strong></p><p>In 1987, Nobel Prize-winning economist Robert Solow made an observation that became famous: &#8220;You can see the computer age everywhere but in the productivity statistics.&#8221;</p><p>Companies were spending 25% more on technology year over year. Productivity was actually declining.</p><p>Sound familiar?</p><p>We&#8217;re living through the same pattern with AI. Massive investment. Underwhelming results. Not because the technology doesn&#8217;t work, but because meaningful gains require more than just tools.</p><p>The NBER puts it clearly: &#8220;Meaningful productivity gains from transformative technologies require extensive complementary investments in new processes, business models, and human capital.&#8221;</p><p>Human capital. That&#8217;s the part we keep skipping.</p><p></p><p><strong>The Burden Shift Nobody Talks About</strong></p><p>There&#8217;s another dynamic I keep seeing in my work with companies.</p><p>AI makes it easy to generate output. Documents, code, proposals, analysis. The person using the AI saves time. But someone still has to review that output. Verify it. Fix the errors. Edit out the verbose parts.</p><p>Research on developer teams found exactly this: &#8220;While authors may save time by pasting in AI outputs, reviewers inherit the burden of checking quality, correcting errors, and editing down verbosity.&#8221;</p><p>The work didn&#8217;t disappear. It shifted. And it shifted to the people whose time is often already stretched thin.</p><p>If you&#8217;re a leader rolling out AI tools, ask yourself: who&#8217;s doing the quality control? Because if no one is, you&#8217;re not getting productivity gains. You&#8217;re getting &#8220;workslop&#8221; at scale.</p><p></p><p><strong>What This Means for Leaders</strong></p><p>This isn&#8217;t a technology problem. 
It&#8217;s a leadership problem.</p><p>The companies I work with that are actually seeing results from AI aren&#8217;t the ones with the best tools. They&#8217;re the ones that took time to answer three questions:</p><p><strong>1. Who needs to learn what?</strong></p><p>Not everyone needs the same skills. Some people need to learn how to prompt effectively. Others need to learn how to evaluate AI output critically. Some need both. Generic &#8220;AI training&#8221; doesn&#8217;t cut it.</p><p><strong>2. What are the new expectations, really?</strong></p><p>If AI makes certain tasks faster, does that mean more tasks? Or does it mean different work? Your team is guessing at the answer. Make it explicit.</p><p><strong>3. Who owns quality control?</strong></p><p>When output gets easier to produce, the bottleneck shifts to review. If you haven&#8217;t adjusted for this, you&#8217;ve just moved the burnout from one person to another.</p><p>These aren&#8217;t easy questions to answer alone. And they&#8217;re exactly the kind of questions I work through with executives who are navigating this transition.</p><p></p><p><strong>The Takeaway</strong></p><p>The AI productivity paradox isn&#8217;t about the technology being overhyped.</p><p>It&#8217;s about the gap between what we expect AI to do for us and what we&#8217;ve actually prepared people to do with it.</p><p>96% of executives believe AI will boost productivity. But belief isn&#8217;t a strategy.</p><p>The leaders who will actually capture those gains are the ones who recognize that tools alone don&#8217;t change outcomes. Skills do. Expectations do. The way you communicate why this matters and what it means for each person on your team does.</p><p>That&#8217;s the work that makes the difference. 
And right now, most companies aren&#8217;t doing it.</p><p>If you&#8217;re navigating this right now and want a thought partner, I&#8217;m always happy to talk.</p><p></p><p><strong>Sources</strong></p><ul><li>Upwork Research Institute: <a href="https://www.upwork.com/research/ai-enhanced-work-models">From Burnout to Balance (2024)</a></li><li>Wharton: <a href="https://knowledge.wharton.upenn.edu/article/the-ai-efficiency-trap-when-productivity-tools-create-perpetual-pressure/">The AI Efficiency Trap</a></li><li>Microsoft Research: <a href="https://www.microsoft.com/en-us/research/publication/the-impact-of-generative-ai-on-critical-thinking-self-reported-reductions-in-cognitive-effort-and-confidence-effects-from-a-survey-of-knowledge-workers/">AI and Critical Thinking (CHI 2025)</a></li><li>Harvard Business Review: <a href="https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity">AI-Generated Workslop (2025)</a></li><li>NBER: <a href="https://www.nber.org/papers/w24001">AI and the Modern Productivity Paradox</a></li></ul><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fresh.mundaine.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">mundaine - Damian Nomura is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Why Your AI Pilots Won’t Scale]]></title><description><![CDATA[You&#8217;ve built the use cases. Your AI champions are excited. 
The executive team believes. So why isn&#8217;t anything landing?]]></description><link>https://www.fresh.mundaine.ai/p/why-your-ai-pilots-wont-scale</link><guid isPermaLink="false">https://www.fresh.mundaine.ai/p/why-your-ai-pilots-wont-scale</guid><dc:creator><![CDATA[Damian Nomura]]></dc:creator><pubDate>Mon, 08 Dec 2025 07:01:11 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0606f1de-95d7-4778-9aff-b7f8e657afe9_2048x2048.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There&#8217;s a pattern I keep seeing in companies that have started their AI journey.</p><p>They&#8217;ve done the pilots. They&#8217;ve proven the concept. The AI champions are building use cases at speed. Ten a week sometimes. The executive team is bought in. Everyone at the top knows the value.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fresh.mundaine.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">mundaine - Damian Nomura is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p> And yet nothing seems to stick.</p><p> If this sounds familiar, you might be about to hit a wall you don&#8217;t see coming. Because there are two kinds of &#8220;stuck&#8221; in AI adoption. 
And most people confuse them.</p><p>  </p><h2>The Two Phases of Stuck</h2><h3>Phase 1: The Easy Kind</h3><p>This is where daily business gets in the way. AI initiatives get deprioritized. Everything slows down. Other fires demand attention. The roadmap stretches.</p><p>But be aware: this is still the easy part.</p><p>In this phase, one motivated AI champion can build ten use cases a week. With ease. The technology works. The value is clear. It&#8217;s just about carving out time and focus.</p><p> If you&#8217;re stuck here, we can accelerate through it quickly. It&#8217;s mostly a matter of commitment and structure.</p><h3>Phase 2: The Real Wall</h3><p>But there&#8217;s another kind of stuck. And it&#8217;s the one that catches leaders off guard.</p><p>This is where your use cases exist but don&#8217;t touch the ground.</p><p>The executive team knows the value. You might even have a team of AI champions who are enthusiastic, building constantly, experimenting, tinkering. They&#8217;re eager to take this to the business.</p><p>But the layers beneath stall.</p><p>Middle management feels squeezed. Pressure from above to make AI happen. Resistance from below that doesn&#8217;t want to change how things work. And the people who actually need to use these tools? They&#8217;re not moving.</p><p> </p><h2>The Echo Chamber Problem</h2><p>Here&#8217;s what I&#8217;ve noticed in these situations: the people driving AI adoption often don&#8217;t realize they&#8217;re in an echo chamber.</p><p>You believe in this. Your AI champions believe in this. The executives who sponsored the initiative believe in this. You&#8217;ve been building and experimenting together. You&#8217;ve seen what&#8217;s possible. You&#8217;re speaking the same language.</p><p>And then you try to roll out to the broader organization.</p><p>That&#8217;s when you hit the outside world.</p><p>The people on the receiving end haven&#8217;t been on this journey with you. They haven&#8217;t seen the experiments. 
They haven&#8217;t felt the wins. They&#8217;ve been doing their jobs the way they&#8217;ve always done them.</p><p>And now someone is asking them to change.</p><h2>The Question Nobody&#8217;s Answering</h2><p>The question that&#8217;s missing in most AI rollouts is simple: &#8220;What&#8217;s in it for me?&#8221;</p><p>Not &#8220;what&#8217;s in it for the company.&#8221; Not &#8220;what&#8217;s in it for productivity.&#8221; Not &#8220;what&#8217;s in it for the bottom line.&#8221;</p><p>What&#8217;s in it for the person who has to actually change how they work?</p><p>When that question gets answered, people roll along intrinsically. They want to adopt because they see personal value. They&#8217;re not being pushed. They&#8217;re pulling.</p><p>When that question stays unanswered, you get compliance at best. And quiet resistance at worst. The rollout stalls. The use cases sit unused. The AI champions get frustrated. The executives wonder why things aren&#8217;t moving.</p><h2>Why This Happens</h2><h3>The Curse of Enthusiasm</h3><p>The people driving AI adoption are usually excited about the technology. They&#8217;ve seen what it can do. They believe in the potential. This enthusiasm is what got the initiatives started in the first place.</p><p>But enthusiasm doesn&#8217;t translate automatically. What feels obviously valuable to someone who&#8217;s been experimenting for months isn&#8217;t obvious to someone who just sees another tool they need to learn.</p><h3>The Communication Gap</h3><p>Most AI rollouts fail at communication, not technology.</p><p>The message that lands with the executive team is not the message that lands with middle management. And neither is the message that lands with the people doing the work.</p><p>Executives care about strategic value and competitive positioning. Middle managers care about not disrupting what&#8217;s already working and not creating more problems to manage. 
Individual contributors care about whether this makes their day better or worse.</p><p>Same initiative. Three completely different conversations needed.</p><h3>The Change Fatigue Factor</h3><p>Your AI initiative isn&#8217;t happening in isolation. People have been through system changes, process changes, restructurings. They&#8217;ve heard &#8220;this will make things better&#8221; before.</p><p>If your message sounds like every other change initiative they&#8217;ve lived through, they&#8217;ll treat it the same way: nod along and wait for it to blow over.</p><p> </p><h2>What Actually Works</h2><h3>Start with the Individual</h3><p>Before you can scale AI adoption, you need to be able to answer one question for every role that will be affected: how does this make their specific work better?</p><p>Not &#8220;better for productivity.&#8221; Better for them.</p><p>Less tedious work. Fewer interruptions. More time for the parts of their job they actually like. Less stress in specific situations they deal with regularly.</p><p>And even beyond that: what can we give them that they can carry into their private lives? This is where you have an opportunity to build true identification. True intrinsic motivation.</p><p>If you can&#8217;t articulate this for a specific role, you&#8217;re not ready to roll out to that role.</p><h3>Make It Concrete</h3><p>Abstract benefits don&#8217;t drive adoption. Concrete examples do.</p><p>&#8220;AI can help with customer service&#8221; means nothing. &#8220;When a customer asks about X, instead of searching through three systems, you can ask this tool and get the answer in 10 seconds&#8221; means something.</p><p>Get specific. Use real scenarios. Show the before and after in terms people can immediately recognize.</p><h3>Let People See Themselves</h3><p>The most effective AI rollouts I&#8217;ve seen let people discover value for themselves.</p><p>Not a presentation. 
Not a training session where someone demonstrates capabilities. An opportunity to play with a tool in the context of their own work and find their own wins. Bonus: This is where you collect feedback for further improvement.</p><p>When someone discovers for themselves that this thing can help them, they become advocates. When someone is told this thing will help them, they become skeptics.</p><p>  </p><h2>What This Means for You</h2><p>If you&#8217;re leading AI adoption in your organization, ask yourself honestly: are you stuck in Phase 1 or Phase 2?</p><p>If it&#8217;s Phase 1, you need the right approach, focus and commitment. The solutions are tactical.</p><p>If it&#8217;s Phase 2, you have a communication challenge. And that&#8217;s harder to solve, because it requires understanding perspectives very different from your own.</p><p>The technology isn&#8217;t the bottleneck. The message is.</p><p>This is exactly what I mostly work on with leaders in my Executive Sparring program. Not the AI strategy. Not the use case development. The communication. How to craft messages that land differently with different audiences. How to answer &#8220;what&#8217;s in it for me?&#8221; in ways that create intrinsic buy-in rather than forced compliance.</p><p>Because once people genuinely see the value for themselves, they don&#8217;t need to be pushed. They pull.</p><p></p><h2>The Takeaway</h2><p>Your AI pilots succeeded because you and your champions believed. Scaling requires something different: helping others believe too.</p><p>That doesn&#8217;t happen through better presentations or more training sessions. 
It happens through answering the question nobody&#8217;s asking out loud but everyone&#8217;s thinking: &#8220;What&#8217;s in it for me?&#8221;</p><p>Get that answer right, and the wall becomes a door.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.fresh.mundaine.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">mundaine - Damian Nomura is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Everything You Learned About Product Development is Backwards for AI]]></title><description><![CDATA[Find a problem.]]></description><link>https://www.fresh.mundaine.ai/p/everything-you-learned-about-product</link><guid isPermaLink="false">https://www.fresh.mundaine.ai/p/everything-you-learned-about-product</guid><dc:creator><![CDATA[Damian Nomura]]></dc:creator><pubDate>Mon, 01 Dec 2025 06:01:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ws_a!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F17f2238c-78ca-446f-9201-87a899c5014e_320x320.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Find a problem. Build a solution.</p><p>This is the sacred cow of product development. Every business school, every startup accelerator, every innovation consultant preaches it. Start with customer pain. 
Validate the problem exists. Only then build something.</p><p>It makes perfect sense. Except when it doesn&#8217;t.</p><p>When OpenAI released ChatGPT in late 2022, they violated this principle spectacularly. They didn&#8217;t launch a product that solved a clear problem. They released a fascinating technology and said: &#8220;Here. Play with it.&#8221;</p><p>And we did.</p><p></p><p>I still remember when one of my employees showed it to me. He had written an entire story in seconds. We were completely fascinated&#8212;not because we needed AI-generated stories, but because of what it implied. What else could this thing do?</p><p>Within weeks, we were chaining tools together. MidJourney for images (bad by today&#8217;s standards, but magical then). A lip-sync app. Text-to-speech. We created talking pictures telling their own stories. It was amazing&#8212;far from a final product, but we built it by playing around.</p><p>We still didn&#8217;t have a solution for an existing problem. But we had created so much.</p><p>This wasn&#8217;t traditional product development. This was something else entirely.</p><p></p><p><strong>The Inversion</strong></p><p>What happened with ChatGPT inverted everything we&#8217;re taught about innovation. Instead of problem-first, solution-second, we got:</p><p><strong>Solution first. Problems emerge through play.</strong></p><p>This isn&#8217;t how it&#8217;s supposed to work. And yet it did. Spectacularly.</p><p>The pattern I&#8217;m seeing is what I call <strong>Solution-First Innovation</strong>:</p><p>1. <strong>Encounter</strong> - Meet the technology with curiosity, not requirements</p><p>2. <strong>Experiment</strong> - Play without pressure, chain things together</p><p>3. <strong>Emerge</strong> - Problems reveal themselves through use</p><p>4. <strong>Extract</strong> - Pull out the actual value created</p><p>5. 
<strong>Execute</strong> - Now build the &#8220;proper&#8221; solution</p><p>This flips the traditional sequence (Research &#8594; Problem &#8594; Solution &#8594; Build) on its head. And for AI, I&#8217;m convinced it&#8217;s the only approach that works.</p><p></p><p><strong>The Language Test Tutor</strong></p><p>Here&#8217;s a story that captures this perfectly.</p><p>A woman was preparing for a language diploma exam. She needed practice tests to prepare, but she ran out of sample materials. So she did what any resourceful person would do: she fed existing tests and requirements into an LLM and asked it to generate new practice tests.</p><p>Simple problem. Simple solution. But here&#8217;s where it gets interesting.</p><p>She started using the AI to grade her answers too. She&#8217;d compare its ratings to the official guidelines. And she discovered something surprising: the AI graded with the exact same accuracy as a human examiner. Same scores. Same feedback quality.</p><p>She had set out to generate practice tests. What she accidentally built was a personal language tutor that could both test and grade her work.</p><p>She found a problem (test-prep scarcity) through experimentation, not market research. And in the process, discovered a much bigger problem she could solve: the entire test preparation and grading workflow.</p><p>She passed her exam and moved on. But here&#8217;s my provocation: she could have productized this right then. A language learning AI that generates personalized tests and provides human-level grading. And at the same time, she could have started to take over the testing space by providing a solution for language test centers. By the way, the tests she used for herself were hand-written.</p><p>If you&#8217;re reading this, you might steal this idea and build it right now.</p><p></p><p><strong>What This Means for Your Company</strong></p><p>I work with companies at the beginning of their AI journey. 
The most common thing I hear is: &#8220;We know we need to do something with AI, but we don&#8217;t know where to start.&#8221;</p><p>My response often surprises them: <strong>Stop looking for the perfect use case. Start playing.</strong></p><p>The companies I see winning with AI aren&#8217;t the ones with the best strategy documents. They&#8217;re the ones where people have permission to experiment. Where someone can spend an afternoon chaining tools together without a mandatory business case tied to it.</p><p>What I can tell you from years of tinkering: the problems will find you. The more you create and play, the more you&#8217;ll discover what AI is actually a solution for. And what not.</p><p>But this requires something most organizations struggle with: giving people permission to play without knowing the outcome. That&#8217;s not how business usually works. We want ROI projections. Use case validation. Analysis phases.</p><p>While I still believe that those three are important in the business world, I like to deliver on them fast, to create space for experimentation.</p><p>Because the most valuable AI applications I&#8217;ve seen weren&#8217;t discovered through analysis. They were stumbled upon by someone who was curious enough to experiment and lucky enough to work somewhere that let them.</p><p></p><p><strong> The Real Challenge</strong></p><p>The technical part of AI adoption has become shockingly simple. I&#8217;ve taken companies from zero to a working AI pilot in five days. The tools are ready. The capabilities are there.</p><p>The hard part isn&#8217;t technology. It&#8217;s culture.</p><p>It&#8217;s getting leaders comfortable with &#8220;try things and see what happens&#8221; as a legitimate strategy. It&#8217;s creating space for experimentation without requiring justification. 
It&#8217;s accepting that the best AI use cases in your company probably haven&#8217;t been discovered yet&#8212;and won&#8217;t be discovered by consultants running workshops, but by employees playing around.</p><p>The language test story didn&#8217;t come from a strategy session. It came from someone who ran out of practice materials and thought, &#8220;I wonder if...&#8221;</p><p>That &#8220;I wonder if...&#8221; is worth more than a hundred use case workshops.</p><p></p><p><strong>The Question</strong></p><p>So here&#8217;s what I&#8217;m thinking about:</p><p>Are you waiting for the perfect problem before you let your people play with AI? Are you demanding business cases before experimentation? Are you running analysis phases when you should be running experiments?</p><p>The companies winning at AI aren&#8217;t the ones with the best strategies. They&#8217;re the ones experimenting fastest. They&#8217;re the ones where someone can discover an accidental tutor while just trying to pass a language test.</p><p>Traditional product development says: find a problem, build a solution.</p><p>AI adoption says: find some curiosity, and let the problems find you.</p><p><strong>What have you discovered by playing that you never intended to build?</strong></p><p></p><p><em>Damian helps mid-sized companies adopt AI with a human-centered approach&#8212;from zero to working pilots in days, not months. If you&#8217;re stuck waiting for the perfect use case, let&#8217;s talk about what experimentation could look like for your team.</em></p>]]></content:encoded></item></channel></rss>