<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Product Thinking w/ Surya ]]></title><description><![CDATA[Deep dives and commentary on AI strategy, product thinking, and leadership for PMs navigating transformation in enterprise environments]]></description><link>https://blog.suryas.org</link><image><url>https://substackcdn.com/image/fetch/$s_!LOxS!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd73e65d-dbce-4664-9e74-5ca963688619_1021x1021.png</url><title>Product Thinking w/ Surya </title><link>https://blog.suryas.org</link></image><generator>Substack</generator><lastBuildDate>Wed, 29 Apr 2026 11:27:41 GMT</lastBuildDate><atom:link href="https://blog.suryas.org/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Surya Suravarapu]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[surya@suryas.org]]></webMaster><itunes:owner><itunes:email><![CDATA[surya@suryas.org]]></itunes:email><itunes:name><![CDATA[Surya Suravarapu]]></itunes:name></itunes:owner><itunes:author><![CDATA[Surya Suravarapu]]></itunes:author><googleplay:owner><![CDATA[surya@suryas.org]]></googleplay:owner><googleplay:email><![CDATA[surya@suryas.org]]></googleplay:email><googleplay:author><![CDATA[Surya Suravarapu]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Judgment Isn’t Talent. It’s Practice.]]></title><description><![CDATA[The friction that built product judgment is disappearing. 
Four practices to keep it deliberately.]]></description><link>https://blog.suryas.org/p/judgment-isnt-talent-its-practice</link><guid isPermaLink="false">https://blog.suryas.org/p/judgment-isnt-talent-its-practice</guid><dc:creator><![CDATA[Surya Suravarapu]]></dc:creator><pubDate>Fri, 03 Apr 2026 17:21:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!m47K!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133f9f3b-f392-4b6a-b6af-cfce6637265b_2816x1536.png" length="0" type="image/png"/><content:encoded><![CDATA[<blockquote><p><strong>AI eliminated the reps that used to build judgment as a side effect. If you want to develop judgment now, you have to seek it deliberately.</strong></p><ul><li><p>The traditional ladder (write specs, ship V1s, debug production) built judgment accidentally. That ladder got pulled up.</p></li><li><p>Not all friction is bad. AI removes bureaucratic friction. The friction that built judgment was reality friction. Keep it.</p></li><li><p>The people treating AI as a productivity tool are getting faster. The people treating judgment as a practice are getting more valuable. 
The gap is widening.</p></li></ul></blockquote><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!m47K!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133f9f3b-f392-4b6a-b6af-cfce6637265b_2816x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!m47K!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133f9f3b-f392-4b6a-b6af-cfce6637265b_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!m47K!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133f9f3b-f392-4b6a-b6af-cfce6637265b_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!m47K!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133f9f3b-f392-4b6a-b6af-cfce6637265b_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!m47K!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133f9f3b-f392-4b6a-b6af-cfce6637265b_2816x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!m47K!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133f9f3b-f392-4b6a-b6af-cfce6637265b_2816x1536.png" width="1456" height="794" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/133f9f3b-f392-4b6a-b6af-cfce6637265b_2816x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:8146815,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.suryas.org/i/193090987?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133f9f3b-f392-4b6a-b6af-cfce6637265b_2816x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!m47K!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133f9f3b-f392-4b6a-b6af-cfce6637265b_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!m47K!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133f9f3b-f392-4b6a-b6af-cfce6637265b_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!m47K!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133f9f3b-f392-4b6a-b6af-cfce6637265b_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!m47K!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F133f9f3b-f392-4b6a-b6af-cfce6637265b_2816x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2>You Took the Diagnostic</h2><p><a href="https://blog.suryas.org/p/ai-took-the-artifacts-whats-left">In the last post</a>, I asked you two questions. What would you have contributed if AI had produced every artifact for your last feature in two hours? And when did you last sit with a real user?</p><p>If you loved your answers, this post isn&#8217;t for you.</p><p>If you didn&#8217;t, keep reading. Nearly every high performer I&#8217;ve talked to since that post is feeling some version of the same thing. Strong track records, promoted fast, trusted with hard problems. The version I keep hearing: &#8220;I feel like my strengths aren&#8217;t strengths anymore.&#8221;</p><p>These aren&#8217;t people who lack skill. They&#8217;re people whose skills were validated by an artifact-production model that is now compressing. 
The fear isn&#8217;t incompetence. It&#8217;s relevance.</p><p>Your strengths aren&#8217;t obsolete. The system that rewarded them changed. What follows is how you rebuild on different ground.</p><div><hr></div><h2>Three Readers, Three Problems</h2><p>Not everyone reading this is in the same position.</p><h3>If You&#8217;re Senior or Executive-Level</h3><p>You probably have more judgment than you realize. Thousands of decisions, hundreds of customer interactions, dozens of product failures. Your problem isn&#8217;t a judgment deficit. It&#8217;s that you&#8217;ve never explicitly named what you do as judgment.</p><p>Stop apologizing for the fact that you think for a living. The shift is naming your judgment, leaning into it deliberately, and dropping the reflex to prove value through artifact production.</p><p>A distinction worth being honest about. Twenty years of varied decisions across different contexts, with real feedback loops, builds genuine depth.</p><p>Twenty years of the same call in the same domain builds pattern-matching that works until the frame breaks. AI is breaking the frame. If your experience has been narrow, the mid-career section below may be a better fit.</p><p>For those with genuine depth: your next move is naming what you already do. What are the three things you always push back on in reviews? What patterns make you uneasy before you can articulate why? Name them. The &#8220;Naming Your Judgment&#8221; section later in this post will show you how.</p><h3>If You&#8217;re Early in Your Career</h3><p>Here&#8217;s the good news first. For the first time, a PM can build a working prototype that demonstrates their judgment in action. You don&#8217;t need permission, a team, or a quarter of engineering time. You can show your thinking in a working product, not a slide deck.</p><p>But the portfolio isn&#8217;t the prototype. It&#8217;s the judgment trail behind it. Why this problem. Why this solution. 
What was killed along the way.</p><p>Now the honest part. The traditional path that built judgment for the people above you is disappearing. Junior PMs used to write dozens of specs before developing intuition about what makes a good one. Junior engineers used to debug production incidents until they could smell architectural risk. Junior designers used to iterate through dozens of mocks before developing taste.</p><p>Those reps are compressing or vanishing. AI writes the first draft. AI suggests the architecture. AI generates the variations. The reps that built judgment as a side effect of doing the work are being removed from the work itself.</p><p>You don&#8217;t build muscles by watching workout videos. You build them by doing the reps. If you have a job, seek out the hardest judgment calls on your team and volunteer for them. If you don&#8217;t, pick a real problem and build a real solution. Not a case study. A working product.</p><h3>If You&#8217;re Mid-Career</h3><p>You have some judgment and some artifact skill. Your daily work is a mix of both. The shift isn&#8217;t dramatic. It&#8217;s a gradual reallocation: less time polishing documents, more time in the activities that sharpen judgment.</p><p>The danger is that this reallocation feels unproductive. You&#8217;ll feel like you&#8217;re doing less. You are doing less, of the thing that&#8217;s compressing in value. The question is whether you&#8217;re filling that time with the thing that&#8217;s appreciating.</p><p>Here&#8217;s a filter. Writing a strategy doc is one task, but the 10x impact lives in the thesis and the bet, not in the formatting or the appendix.</p><blockquote><p><strong>The 10x question:</strong> For any piece of your work, ask: if I were 10x better at this specific part, would it produce 10x the outcome? Where the ceiling is low, automate ruthlessly. 
Where it&#8217;s high, protect and invest.</p></blockquote><div><hr></div><h2>The Friction You Actually Need</h2><p>Most people celebrate AI&#8217;s speed without noticing what the speed removed.</p><p><strong>Bureaucratic friction</strong> (formatting requirements, status update meetings, elaborate quarterly planning ceremonies, manual reporting, copy-pasting between tools) doesn&#8217;t improve the quality of anyone&#8217;s thinking. AI is excellent at removing it, and good riddance.</p><p><strong>Reality friction</strong> improves the quality of your thinking. It&#8217;s the resistance you encounter when your beliefs meet evidence. A customer who contradicts your thesis. An assumption you test and find wrong. Data that challenges what you believed.</p><p>This friction doesn&#8217;t slow work for the sake of slowing it. It slows work at exactly the moments where speed would substitute polish for understanding.</p><p>When AI removes all friction from your workflow, it removes both kinds. A PM who iterates a product doc through AI feedback ten times in an afternoon will produce a polished document. 
But if none of those iterations involved a customer, an uncomfortable assumption, or evidence that challenged the thesis, the friction removed was the kind they needed most.</p><p>The new reps are deliberate reality friction. Practices you choose to keep in a world that&#8217;s making everything frictionless.</p><div><hr></div><h2>The New Reps</h2><p>These aren&#8217;t productivity hacks. They&#8217;re the equivalent of a doctor&#8217;s clinical rotations. Four practices that build judgment when the old path through artifact production is gone.</p><h3>Proximity Reps</h3><p>The PM who knows their customer deeply generates prototypes grounded in reality. The PM who builds based on how they feel about the problem generates prototypes grounded in projection. AI makes this gap catastrophically wider. The disconnected PM goes from assumption to polished-looking-but-wrong prototype in two hours. Polish masquerades as insight.</p><p>The practice: one customer conversation per week. Not a survey. Not a dashboard. A conversation where you can ask follow-up questions and notice what they don&#8217;t say. Watching users use your product, not reading summaries about how they use it.</p><p>After each one, write one sentence about what surprised you. If nothing surprised you, your sample isn&#8217;t varied enough.</p><p>When we were building an enterprise API platform, I assumed the decision was which API gateway to license. The safe path was obvious: pick the vendor backed by the Gartner quadrant and move on. Nobody would have questioned it.</p><p>Then I talked to the developers who would actually use it and the business stakeholders who would sell through it. The surprise: nobody cared about the gateway. Gateways are commodities. What developers wanted was the experience around the gateway: discovery, exploration, try-before-you-buy, quick-start guides, monetization hooks, support. 
We built an end-to-end platform shaped by those conversations.</p><p>I walked in with a vendor selection question. I walked out building a different product. That reframe didn&#8217;t come from a dashboard or a competitive analysis. It came from sitting across from the people who would use the thing.</p><h3>Decision Reps</h3><p>Judgment develops through making calls with incomplete information and tracking whether they were right.</p><p>AI makes it easier to avoid this. You can always generate one more analysis, one more scenario, one more option set. The tool becomes the avoidance mechanism. But the decision itself is the rep. Not the safe decision that everyone agrees with. The judgment call where reasonable people would disagree, where you&#8217;re taking a position based on your read of the situation.</p><p>The practice: when you make a non-obvious call, write down what you believe and why before the outcome is known.</p><p>Revisit in 30 days. The goal isn&#8217;t a batting average. It&#8217;s calibration: learning which signals you overweight and which you miss.</p><h3>Kill Reps</h3><p>The most undervalued judgment skill is knowing when to stop.</p><p>Before AI, killing a project was painful because building was expensive. You&#8217;d invested a quarter of engineering time. Sunk cost made killing feel wasteful. Now building is cheap, which should make killing easier. But it doesn&#8217;t. 
You can build a prototype in an afternoon, and the prototype looks good, so the emotional case for continuing gets stronger even when the strategic case doesn&#8217;t.</p><p>The practice: pick one thing on the roadmap and ask, &#8220;If we had never started this, would we start it today?&#8221; Not &#8220;does the prototype work&#8221; but &#8220;would customers choose this over doing nothing?&#8221; If the answer is no, ask why it&#8217;s still alive.</p><p>Early in the cloud migration wave, we partnered with a middleware vendor who pitched a compelling story to our executives: a seamless bridge from legacy to cloud without rearchitecting. The thesis made sense on paper. We started the work on one of our largest, most complex product areas.</p><p>A few months in, the signal was clear. The &#8220;seamless bridge&#8221; was adding a layer of abstraction that would age poorly. The cloud-native path was harder, but it was where the industry was going. Every month we spent on the intermediary was a month we weren&#8217;t building the real thing.</p><p>We killed the partnership and redirected the resources to go cloud-native. The decision wasn&#8217;t popular. The original pitch still looked good in slide decks. But the thesis underneath it had expired. The hardest part of killing it wasn&#8217;t the sunk cost. It was that the original logic was still defensible. It just wasn&#8217;t current.</p><h3>Wrongness Reps</h3><p>This is the uncomfortable one. Deliberately reviewing where your judgment failed.</p><p>Not a retro designed to make everyone feel okay. A private, honest examination of the calls you got wrong. The feature you championed that customers ignored. The assumption you carried too long. The competitor signal you dismissed.</p><p>The practice: monthly, pick one judgment call that didn&#8217;t land. Reconstruct your reasoning at the time. Not what you couldn&#8217;t have known, but what was available and you didn&#8217;t weight correctly. 
This is the rep that separates people who have ten years of judgment from people who have one year of judgment repeated ten times.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ZVeK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafb7178e-bcdb-4218-ba72-8b95bebd1281_845x291.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ZVeK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafb7178e-bcdb-4218-ba72-8b95bebd1281_845x291.png 424w, https://substackcdn.com/image/fetch/$s_!ZVeK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafb7178e-bcdb-4218-ba72-8b95bebd1281_845x291.png 848w, https://substackcdn.com/image/fetch/$s_!ZVeK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafb7178e-bcdb-4218-ba72-8b95bebd1281_845x291.png 1272w, https://substackcdn.com/image/fetch/$s_!ZVeK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafb7178e-bcdb-4218-ba72-8b95bebd1281_845x291.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ZVeK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafb7178e-bcdb-4218-ba72-8b95bebd1281_845x291.png" width="845" height="291" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/afb7178e-bcdb-4218-ba72-8b95bebd1281_845x291.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:291,&quot;width&quot;:845,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:48583,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.suryas.org/i/193090987?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafb7178e-bcdb-4218-ba72-8b95bebd1281_845x291.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ZVeK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafb7178e-bcdb-4218-ba72-8b95bebd1281_845x291.png 424w, https://substackcdn.com/image/fetch/$s_!ZVeK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafb7178e-bcdb-4218-ba72-8b95bebd1281_845x291.png 848w, https://substackcdn.com/image/fetch/$s_!ZVeK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafb7178e-bcdb-4218-ba72-8b95bebd1281_845x291.png 1272w, https://substackcdn.com/image/fetch/$s_!ZVeK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fafb7178e-bcdb-4218-ba72-8b95bebd1281_845x291.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>A note on AI scaffolds. Review tools, rubrics, and frameworks can accelerate the early stages of all four reps. Use them when you&#8217;re learning the territory. But recognize when the scaffold is doing the cognitive work for you.</p><p>A PM who prepares for a hard review by asking &#8220;what will my leader challenge?&#8221; is practicing the skill of finding their own blind spots. An AI pre-review that answers that question in advance shifts preparation from &#8220;find my blind spots&#8221; to &#8220;fill in the blanks.&#8221; The output looks identical. The learning is categorically different. The scaffold should come down before it becomes load-bearing.</p><div><hr></div><h2>Naming Your Judgment</h2><p>If you&#8217;re experienced, the four reps above will sharpen skills you already have. 
But there&#8217;s a practice specific to your level: making your judgment explicit.</p><p>Most experienced leaders have never named their judgment model. They review a product doc and have a reaction. They push back in a meeting. They redirect a team. But ask them &#8220;what are your three recurring questions when you evaluate a product decision?&#8221; and they struggle to articulate it.</p><p>What they think is intuition is actually a set of structured heuristics they&#8217;ve never made explicit.</p><p>Here&#8217;s what the exercise looks like. A product leader sits down to answer one question: &#8220;What do I always push back on in reviews?&#8221; They start listing:</p><ul><li><p>Has the PM talked to customers directly?</p></li><li><p>Is the hypothesis named and falsifiable?</p></li><li><p>Is the pre-mortem honest or performative?</p></li><li><p>Is opportunity cost acknowledged?</p></li><li><p>What&#8217;s the strongest case against this?</p></li></ul><p>They end up with eleven dimensions. Not invented from theory. Extracted from years of their own review conversations.</p><p>Then they go further. They add gates: before the review even starts, has the PM talked to customers directly? Were they surprised by anything they heard? If not, the review doesn&#8217;t proceed. Not because the rubric says so, but because the leader knows from experience that a doc built without customer contact isn&#8217;t worth reviewing at the detail level.</p><p>This is the <strong>portable layer</strong> of judgment: your repeatable questions, standard frameworks, and known failure modes. The stuff you can name and encode.</p><p>The <strong>live layer</strong> is different: recognizing when your own framework doesn&#8217;t apply. You can only develop it through direct exposure (that&#8217;s what proximity reps and wrongness reps are for). But the better you articulate the portable layer, the more bandwidth you free for the live layer. 
You stop spending energy on the patterned decisions and start noticing where the patterns break.</p><div><hr></div><h2>Stepping Back</h2><p>The AI discourse is dominated by tools, techniques, and capabilities. Learn this framework. Master this prompt pattern. Use this tool. Technical proficiency is a real advantage, and it&#8217;s foolish to ignore it.</p><p>But the deeper game isn&#8217;t tools. It&#8217;s the quality of thinking you bring to what the tools produce. The tools will keep getting better. The quality of thinking will only improve through the specific practices in this post.</p><p>The irony is that the people most drawn to AI productivity tools are often the people who most need to slow down and do the judgment reps. The tools feel productive. The reps feel slow.</p><div class="pullquote"><blockquote><p>Tools without reps produce polished artifacts built on weak foundations. Reps without tools produce strong judgment that AI can then amplify 100x.</p></blockquote></div><p>The order matters.</p><p>Treat judgment development as continuous. Not something you did on the way up and left behind. A permanent practice, like a surgeon who continues clinical rounds regardless of seniority.</p><div><hr></div><h2>In Practice</h2><p>Pick the rep that maps to your biggest gap:</p><ul><li><p><strong>If you can&#8217;t remember your last customer conversation:</strong> Proximity reps. Start Monday.</p></li><li><p><strong>If you make safe decisions that everyone agrees with:</strong> Decision reps. Start logging the calls that scare you.</p></li><li><p><strong>If you keep features alive too long:</strong> Kill reps. Ask &#8220;should this exist?&#8221; before &#8220;is this good?&#8221;</p></li><li><p><strong>If you haven&#8217;t examined a recent failure honestly:</strong> Wrongness reps. Pick one. Reconstruct.</p></li><li><p><strong>If you can&#8217;t articulate what you review for:</strong> Name three of your recurring questions. 
That&#8217;s where the &#8220;Naming Your Judgment&#8221; work begins.</p></li></ul><p>Not all five. One. The one where your answer is weakest. That&#8217;s where judgment grows.</p><p><em>Next in this series: how customer diagnosis and solution evaluation work in practice when AI is doing the building. That&#8217;s where the judgment reps meet the daily work.</em></p><p></p>]]></content:encoded></item><item><title><![CDATA[AI Took the Artifacts. What's Left Is Judgment.]]></title><description><![CDATA[AI produces specs, code, and designs in minutes. The skill that remains is judgment. Here's what that means and how 
to know if you have it.]]></description><link>https://blog.suryas.org/p/ai-took-the-artifacts-whats-left</link><guid isPermaLink="false">https://blog.suryas.org/p/ai-took-the-artifacts-whats-left</guid><dc:creator><![CDATA[Surya Suravarapu]]></dc:creator><pubDate>Fri, 27 Mar 2026 12:53:26 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!yW3N!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff092702f-d2d7-4a67-b1cd-15867295c3f7_2752x1536.png" length="0" type="image/png"/><content:encoded><![CDATA[<blockquote><p><strong>AI collapsed the production layer of knowledge work. The existential question isn&#8217;t whether AI will take your job. It&#8217;s whether you were ever providing judgment or just producing artifacts.</strong></p><ul><li><p>Specs, code, designs, prototypes: all producible in minutes now. If your value was the artifact, you have a problem.</p></li><li><p>Judgment (deciding what to build, evaluating whether it&#8217;s right, knowing when to kill it) is the durable skill. It applies across PM, engineering, and design.</p></li><li><p>Speed without judgment is dangerous. 
The faster you can build, the more judgment matters, not less.</p></li></ul></blockquote><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yW3N!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff092702f-d2d7-4a67-b1cd-15867295c3f7_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yW3N!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff092702f-d2d7-4a67-b1cd-15867295c3f7_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!yW3N!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff092702f-d2d7-4a67-b1cd-15867295c3f7_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!yW3N!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff092702f-d2d7-4a67-b1cd-15867295c3f7_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!yW3N!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff092702f-d2d7-4a67-b1cd-15867295c3f7_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yW3N!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff092702f-d2d7-4a67-b1cd-15867295c3f7_2752x1536.png" width="1456" height="813" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f092702f-d2d7-4a67-b1cd-15867295c3f7_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:7390869,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.suryas.org/i/192260383?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff092702f-d2d7-4a67-b1cd-15867295c3f7_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yW3N!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff092702f-d2d7-4a67-b1cd-15867295c3f7_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!yW3N!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff092702f-d2d7-4a67-b1cd-15867295c3f7_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!yW3N!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff092702f-d2d7-4a67-b1cd-15867295c3f7_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!yW3N!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff092702f-d2d7-4a67-b1cd-15867295c3f7_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><div><hr></div><h2>The Question Everyone Is Asking</h2><p>A few weeks ago, a senior executive I know well pulled me aside after an alumni reunion. Twenty years of experience. Strong track record. Led large teams through multiple product cycles. He asked me, quietly: &#8220;With everything AI can do now, do they still need someone like me?&#8221;</p><p>He&#8217;s not alone. I&#8217;ve heard versions of this from PMs, engineers, engineering managers, and designers. It shows up in three flavors. &#8220;I don&#8217;t know where to start with AI.&#8221; &#8220;I&#8217;m not technical enough to survive this.&#8221; &#8220;My entire role is at stake.&#8221;</p><p>Different words, same fear underneath. And it&#8217;s not just hallway anxiety. It&#8217;s a boardroom strategy.</p><p>The people building AI are saying it plainly.
Anthropic&#8217;s CEO Dario Amodei <a href="https://www.cnbc.com/2026/01/27/dario-amodei-warns-ai-cause-unusually-painful-disruption-jobs.html">wrote in January 2026</a> that AI will &#8220;disrupt 50% of entry-level white-collar jobs over 1 to 5 years&#8221; and called the coming disruption &#8220;unusually painful.&#8221; Mark Zuckerberg <a href="https://www.itpro.com/software/development/a-sign-of-things-to-come-in-software-development-mark-zuckerberg-says-ai-will-be-doing-the-work-of-mid-level-engineers-this-year-and-hes-not-the-only-big-tech-exec-predicting-the-end-of-the-profession">told investors</a> that AI will do the work of mid-level engineers.</p><p>And the enterprise results are already here. Klarna&#8217;s AI agent <a href="https://openai.com/index/klarna/">handles the equivalent work of 850 human agents</a>, and the company has seen a roughly 50% workforce reduction through attrition since 2022. <a href="https://fortune.com/2025/09/02/salesforce-ceo-billionaire-marc-benioff-ai-agents-jobs-layoffs-customer-service-sales/">Salesforce cut its customer support headcount</a> from 9,000 to 5,000. <a href="https://www.cnbc.com/2025/12/21/ai-job-cuts-amazon-microsoft-and-more-cite-ai-for-2025-layoffs.html">Block cut 40% of its workforce</a>, with Jack Dorsey explicitly attributing it to AI enabling smaller teams.</p><p>These are not pilots. These are not experiments. This is the new operating reality at companies with tens of thousands of employees. The question that the executive asked me is the question the entire industry is asking. 
The rest of this piece is my answer.</p><div><hr></div><h2>The Misdiagnosis</h2><p>I know a PM who spent a weekend taking a prompt engineering course. An engineer who doubled down on LeetCode prep. Multiple designers who started learning to code. All smart, driven people. All responding to the same instinct: the threat feels technical, so the solution must be technical too. Learnable. Controllable.</p><p>But the threat isn&#8217;t technical. The threat is that your value was defined by the artifacts you produced.</p><p>For decades, specs, code, mockups, and test plans <em>were</em> the job. You produced them, stakeholders consumed them, and the quality of the artifact stood in as a proxy for the quality of your thinking. AI now produces those artifacts. Fast, cheap, and increasingly well.</p><p>If the artifact was your value, then yes, you have a problem. But the artifact was never the real value. It was a container for something else.</p><p>That something else is what this piece is about. Not the organizational shift (I&#8217;ve <a href="https://blog.suryas.org/">written about that</a>). This one is about you. What&#8217;s left when the artifacts disappear?</p><div><hr></div><h2>What Judgment Actually Is</h2><p>An AI tool generates a prototype in two hours.
Two people review it. One says, &#8220;Looks great, let&#8217;s show stakeholders.&#8221; The other says, &#8220;This solves the wrong problem. The user doesn&#8217;t need a dashboard; they need an alert that fires before the problem happens.&#8221;</p><p>Same prototype. Same tool. Same speed. The difference is judgment: the ability to distinguish between <em>correct</em> and <em>right</em>.</p><p>Correct means it works as specified. Right means it solves a problem worth solving, for a person who actually has that problem, in a way that fits how they work.</p><p>What remains when production collapses is this distinction, applied dozens of times a day. Deciding what to build and, harder, what not to build. Evaluating whether the output is right for the problem. Knowing when to kill something that isn&#8217;t working, even when the team is excited about it.</p><p>Shaping iteration toward an outcome that matters, not just toward something that looks finished.</p><p>This isn&#8217;t one skill. It&#8217;s knowing your customer deeply enough to spot the wrong problem before anyone builds it. Understanding your business model well enough to kill a feature that users love but economics don&#8217;t support. Reading a stakeholder&#8217;s objection and knowing whether it&#8217;s political or substantive. The instinct that says &#8220;this will break at scale&#8221; before anyone runs a load test.</p><p>None of these are AI skills. They&#8217;re accumulated through proximity, reps, and paying attention. AI doesn&#8217;t replace any of them. It just makes them the only ones that matter.</p><p>The value is shifting from production to judgment, and from judgment to the speed of the judgment loop. How fast can you go from insight to prototype to validated learning? 
That loop speed is the new measure of effectiveness.</p><p>Klarna&#8217;s CEO, Sebastian Siemiatkowski, <a href="https://time.com/charter/7378651/what-klarna-learned-from-its-ambitious-ai-rollout/">put it bluntly</a>: during the first two years of their AI strategy, they focused on hiring engineers. Then it &#8220;switched, actually. It&#8217;s almost the opposite.&#8221; The business knowledge of non-coders, people who can use AI to build, but also know <em>what</em> dashboards or features are needed, has become more valuable.</p><p>The engineers? &#8220;They&#8217;re like, &#8216;okay, I coded this feature. What do I do next?&#8217;&#8221; That&#8217;s the artifact-producer without judgment, described by a CEO who&#8217;s lived through the transition.</p><div><hr></div><h2>The Same Shift, Every Role</h2><p>Siemiatkowski was describing engineers. But the pattern is identical across every knowledge work role. Wherever the artifact was the identity, the same reckoning is happening.</p><p>Developers who define their value as &#8220;I write clean code&#8221; are in the same position as PMs who define their value as &#8220;I write good specs.&#8221; The artifact is being commoditized.</p><p>The junior dev valued for cranking out CRUD endpoints? Fully automated now. The mid-level dev valued for React expertise and well-structured components? AI does that reliably. Even the senior dev who prides themselves on elegant architecture gets nervous when Claude Code scaffolds a whole system in minutes.</p><p>But here&#8217;s where it gets interesting.
The developer who looks at AI-generated code and says, &#8220;This won&#8217;t survive 10x load,&#8221; or &#8220;this data model becomes a nightmare when we add multi-tenancy,&#8221; or &#8220;this is solving the wrong problem at the wrong layer.&#8221; That person is more valuable than before, not less.</p><p>AI produces plausible code at incredible speed, and the cost of bad judgment is now multiplied by the speed of production.</p><p>The PM who can go from customer insight to a validated prototype in a day. The developer who can evaluate, reshape, and stress-test AI output faster than the AI produces it. Both are operating as editors of AI output, not producers of artifacts.</p><p>The role is different. The judgment layer is the same.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!neEj!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F001551de-a409-49fa-88bf-b8239f1dd283_896x172.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!neEj!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F001551de-a409-49fa-88bf-b8239f1dd283_896x172.png 424w, https://substackcdn.com/image/fetch/$s_!neEj!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F001551de-a409-49fa-88bf-b8239f1dd283_896x172.png 848w, https://substackcdn.com/image/fetch/$s_!neEj!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F001551de-a409-49fa-88bf-b8239f1dd283_896x172.png 1272w, 
https://substackcdn.com/image/fetch/$s_!neEj!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F001551de-a409-49fa-88bf-b8239f1dd283_896x172.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!neEj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F001551de-a409-49fa-88bf-b8239f1dd283_896x172.png" width="896" height="172" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/001551de-a409-49fa-88bf-b8239f1dd283_896x172.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:172,&quot;width&quot;:896,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:44178,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.suryas.org/i/192260383?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F001551de-a409-49fa-88bf-b8239f1dd283_896x172.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!neEj!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F001551de-a409-49fa-88bf-b8239f1dd283_896x172.png 424w, https://substackcdn.com/image/fetch/$s_!neEj!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F001551de-a409-49fa-88bf-b8239f1dd283_896x172.png 848w, https://substackcdn.com/image/fetch/$s_!neEj!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F001551de-a409-49fa-88bf-b8239f1dd283_896x172.png 1272w, 
https://substackcdn.com/image/fetch/$s_!neEj!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F001551de-a409-49fa-88bf-b8239f1dd283_896x172.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><div><hr></div><h2>The Uncomfortable Truth</h2><p>Production was slow enough to hide weak judgment for decades.</p><p>When a spec took two weeks to write, nobody asked whether the PM had genuine insight or was just organizing inputs from stakeholder meetings. The slowness looked like rigor.</p><p>When code took a full sprint to ship, nobody asked whether the engineer understood the problem deeply or just implemented the ticket as written. The effort looked like value.</p><p>When a design went through three rounds of review, nobody asked whether the designer was making real decisions or just iterating toward consensus. The process looked like craft.</p><p>Think about how many sprint retrospectives you&#8217;ve sat through where the team debated velocity, story points, and estimation accuracy, but never once asked: &#8220;Was this the right thing to build?&#8221; The process machinery was so consuming that judgment never came up. The speed of the cycle was the measure of success, not the quality of the decisions that fed it.</p><p>AI compresses the timeline and makes the gap visible. The PM who never had a strong point of view now has nothing to hide behind. The engineer who never questioned requirements is exposed when AI can implement any requirement instantly. The designer who relied on iteration-toward-consensus discovers that AI converges on &#8220;acceptable&#8221; in minutes, and that was apparently all they were doing too.</p><p>The cream that rises to the top is the people who remain indispensable. Their judgment was always the real value. They also happened to produce the artifacts.
AI just made this truth undeniable.</p><p>And the economics confirm it: Klarna&#8217;s workforce shrank by 50%, but <a href="https://time.com/charter/7378651/what-klarna-learned-from-its-ambitious-ai-rollout/">revenue per employee went from $300,000 to $1.3 million</a>. The people who stayed got more leverage and more compensation. As Siemiatkowski put it: &#8220;My employees know that they&#8217;re driving efficiencies, but they are also participating in getting the benefit of that.&#8221;</p><p>If AI writes the code, what is the developer? If AI writes the spec, what is the PM?</p><p>The answer is the same in both cases: you&#8217;re the person who knows what to build and whether it&#8217;s right. The ones who thrive are the ones who were always doing that. The ones who were primarily valued for the mechanical act of production are the ones in trouble.</p><div><hr></div><h2>The False Confidence Trap</h2><p>Speed without judgment isn&#8217;t just useless. It&#8217;s actively dangerous.</p><p>Picture this: a PM uses AI to build a prototype in two hours. It looks production-ready. It has real interactions, plausible data, and smooth flows. The PM shows it in a stakeholder review.</p><p>Excitement builds. A team forms around it. A roadmap shifts. Resources redirect.</p><p>The prototype solved the wrong problem. But nobody questioned it because it looked so real. Polish masqueraded as insight.</p><p>The reasonable objection: &#8220;If I can build something in two days instead of two months, what&#8217;s wrong with getting it wrong? I&#8217;ll throw it away and start over.&#8221; For personal projects, this is genuinely true. The throwaway cost is trivial.</p><p>In an organization, nothing is truly throwaway. The artifact was cheap. The organizational momentum it created is expensive to reverse. Decisions crystallized around that prototype. A VP mentioned it in a board update.</p><p>The cost isn&#8217;t the code. 
It&#8217;s the conviction that builds around a convincing wrong thing.</p><p>And even individually, not all throwaway cycles are equal. The PM who builds and discards five prototypes grounded in deep customer knowledge is triangulating toward the right answer. The PM who builds and discards five prototypes grounded in vibes is just busy. One is iterating toward insight. The other is doing random walks.</p><p>Klarna learned this at the company scale. Their AI chatbot replaced the equivalent of 850 human agents. The efficiency metrics looked like a proof of concept. Then service quality declined, and the company had to rethink.</p><p>CEO Siemiatkowski now says they&#8217;re <a href="https://time.com/charter/7378651/what-klarna-learned-from-its-ambitious-ai-rollout/">reversing course on full AI support</a>: &#8220;We think it&#8217;s going to be like the future to offer human support. It&#8217;s going to be like the VIP treatment.&#8221; The initial metrics masked a judgment gap. Knowing which interactions need a human and which don&#8217;t is itself a judgment call, and they got it wrong by optimizing for speed alone.</p><div><hr></div><h2>What Fuels Judgment</h2><p>If judgment is what remains, what fuels it? The answer is unglamorous: customer proximity.</p><p>Not quarterly research rituals. Not <em>persona</em> documents. Not reading a summary of someone else&#8217;s user interviews. Actual proximity.</p><p>Knowing your user&#8217;s workflow, their frustrations, their workarounds, the emotion they feel when something breaks.</p><p>The PM who knows their customer deeply generates prototypes grounded in reality. The PM who builds based on how they feel about the problem generates prototypes grounded in projection. AI makes this gap wider and the consequences steeper.</p><p>The informed PM goes from insight to validated prototype in a day. The disconnected PM goes from assumption to polished-but-wrong prototype in two hours. 
The speed of production amplifies whatever customer understanding you bring to it, strong or weak.</p><p>The existential fear (&#8220;AI will replace me&#8221;) asks the wrong question. What AI actually exposes is whether you were ever providing judgment value, or just organizing the process. Customer proximity is what separates the two.</p><div class="pullquote"><blockquote><p><em>One is iterating toward insight. The other is doing random walks.</em></p></blockquote></div><div><hr></div><h2>The Existence Proof</h2><p>My wife trades a specific swing trading strategy. Not day trading. Not position trading. Swing trading, with its own universe of techniques, edges, and traps. She has her own rules, her own stock universe, her own journaling discipline for every position she enters and exits.</p><p>There are dozens of commercial trading apps. Some do scanning well. Some handle journaling. None does all of it integrated into a single workflow that matches how she actually thinks about markets. So I built one for her, using AI as the execution layer. Real money is at stake, which means precision matters.</p><p>The application pulls real-time quotes, runs market scans, calculates position sizing and risk parameters, tracks opportunities through her specific pipeline, and journals every entry and exit. I&#8217;m not reviewing code line by line. The judgment that mattered was knowing her domain deeply enough to specify what &#8220;right&#8221; looked like.</p><p>A quarter of the way in, I hit a wall. I&#8217;d started with a rapid prototyping framework that worked for the initial build. But the app needed real-time streaming for live price tickers and direct front-end integration with multiple APIs. The framework&#8217;s architecture wasn&#8217;t built for that interaction pattern.</p><p>Could a more experienced developer have made it work? Maybe. But the point isn&#8217;t the technical choice.
The point is that I recognized the mismatch because I understood the <em>need</em>, not because I understood the framework&#8217;s internals.</p><p>I redirected to a stack built for the interaction model the app required: proper streaming, direct API calls, and a front-end architecture that matched how a trader actually uses the tool. That decision was the need driving the tech, not the other way around. Someone without domain knowledge wouldn&#8217;t have known the current behavior was wrong. The judgment to redirect early, a quarter of the way in instead of all the way through, came from understanding what the user actually required.</p><p>If I&#8217;d been wrong about her workflow, if I&#8217;d been building from assumptions instead of proximity, AI would have helped me build the wrong thing beautifully. Twice. The tool is neutral. Judgment is the variable.</p><div><hr></div><h2>Stepping Back: What It Means for You</h2><p>Remember the executive from the beginning of this piece? Twenty years of experience, strong track record, asking quietly: &#8220;Do they still need someone like me?&#8221;</p><p>Here&#8217;s my honest answer to him, and to you: it depends on what &#8220;someone like you&#8221; actually means.</p><p>If it means the person who coordinated the team, organized the process, and delivered the artifacts on time, then the ground is shifting under your feet. That&#8217;s real. Pretending otherwise helps no one.</p><p>Roles will compress. The disruption is already happening, and it will accelerate.</p><p>But if &#8220;someone like you&#8221; means the person who knows the customer deeply enough to spot the wrong problem before it&#8217;s built, who has the judgment to redirect a team when the polished prototype is solving the wrong thing, who understands the domain well enough to know what &#8220;right&#8221; looks like before anyone writes a line of code, then you&#8217;re more valuable than you were a year ago.
Not less.</p><p>The path forward isn&#8217;t &#8220;learn to code&#8221; or &#8220;learn to prompt.&#8221; Those are technical solutions to a judgment problem.</p><h3>The Judgment Diagnostic</h3><p>Here&#8217;s something concrete to do this week.</p><p>Pick the last feature or project you shipped. Ask yourself one question: if AI had produced every artifact (the spec, the code, the design, the test plan) in two hours, what would <em>you</em> have contributed that the AI couldn&#8217;t?</p><p>If your honest answer is &#8220;I coordinated the team&#8221; or &#8220;I wrote the requirements clearly&#8221; or &#8220;I implemented it cleanly,&#8221; that&#8217;s the artifact layer. That&#8217;s what&#8217;s compressing.</p><p>If your answer is &#8220;I knew this was the right problem because I&#8217;ve watched 30 customers struggle with it&#8221; or &#8220;I caught a flaw the team missed because I understood the second-order effects&#8221; or &#8220;I killed a direction that looked promising because the unit economics didn&#8217;t work,&#8221; that&#8217;s judgment. That&#8217;s the layer that becomes more valuable, not less.</p><p>Be honest about which answer is yours. Not which answer you aspire to. Which one is true today.</p><p>Then ask: when was the last time you were with a real user? Not reading a research summary. Not reviewing analytics. Actually watching someone use your product or describe their problem in their own words.</p><p>If you can&#8217;t remember, that&#8217;s your starting point. Not learning to prompt. Getting closer to customers. That&#8217;s where judgment comes from, and judgment is what remains.</p><p><em>If you did this exercise and didn&#8217;t love the answer, the next post is for you. Judgment isn&#8217;t fixed at birth. It can be cultivated deliberately. But the path isn&#8217;t what most people expect. 
More on that soon.</em></p><div><hr></div><p>In case you missed it, my earlier post on the topic: </p><div class="digest-post-embed" data-attrs="{&quot;nodeId&quot;:&quot;46b6a297-cfda-4fe9-9aa8-9eb9b9cfca33&quot;,&quot;caption&quot;:&quot;The cost of building software is collapsing. Your operating model didn&#8217;t notice.&quot;,&quot;cta&quot;:&quot;Read full story&quot;,&quot;showBylines&quot;:true,&quot;size&quot;:&quot;lg&quot;,&quot;isEditorNode&quot;:true,&quot;title&quot;:&quot;The Operating Model for When Building Is Free&quot;,&quot;publishedBylines&quot;:[{&quot;id&quot;:3493934,&quot;name&quot;:&quot;Surya Suravarapu&quot;,&quot;bio&quot;:&quot;Senior Director of Product @ Optum (ex-Change Healthcare, McKesson). Practical insights on AI transformation, product strategy, and leadership in the enterprise.&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cf993e51-d8c3-4bc8-961a-6035223bdf41_1792x1792.png&quot;,&quot;is_guest&quot;:false,&quot;bestseller_tier&quot;:null}],&quot;post_date&quot;:&quot;2026-02-27T19:48:27.936Z&quot;,&quot;cover_image&quot;:&quot;https://substackcdn.com/image/fetch/$s_!1qLd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9eacdd1-1bf6-47db-9a20-57e363a7a7ea_3168x1344.png&quot;,&quot;cover_image_alt&quot;:null,&quot;canonical_url&quot;:&quot;https://blog.suryas.org/p/the-operating-model-for-when-building&quot;,&quot;section_name&quot;:null,&quot;video_upload_id&quot;:null,&quot;id&quot;:189390135,&quot;type&quot;:&quot;newsletter&quot;,&quot;reaction_count&quot;:0,&quot;comment_count&quot;:0,&quot;publication_id&quot;:6040265,&quot;publication_name&quot;:&quot;Product Thinking w/ Surya 
&quot;,&quot;publication_logo_url&quot;:&quot;https://substackcdn.com/image/fetch/$s_!LOxS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd73e65d-dbce-4664-9e74-5ca963688619_1021x1021.png&quot;,&quot;belowTheFold&quot;:true,&quot;youtube_url&quot;:null,&quot;show_links&quot;:null,&quot;feed_url&quot;:null}"></div><div><hr></div><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.suryas.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>Subscribe for free to receive new posts.</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The Operating Model for When Building Is Free]]></title><description><![CDATA[AI is changing how we build software. Next, it changes how we organize around it.]]></description><link>https://blog.suryas.org/p/the-operating-model-for-when-building</link><guid isPermaLink="false">https://blog.suryas.org/p/the-operating-model-for-when-building</guid><dc:creator><![CDATA[Surya Suravarapu]]></dc:creator><pubDate>Fri, 27 Feb 2026 19:48:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!1qLd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9eacdd1-1bf6-47db-9a20-57e363a7a7ea_3168x1344.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>The cost of building software is collapsing. 
Your operating model didn&#8217;t notice.</strong></p><ul><li><p>The separation of Product, Engineering, and Design was an artifact of expensive building. That constraint is gone.</p></li><li><p>The new competitive unit isn&#8217;t a cross-functional team. It&#8217;s a domain expert with judgment and AI leverage.</p></li><li><p>Moats have shifted from what you build to how deeply you understand who you build it for.</p></li></ul><div><hr></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1qLd!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9eacdd1-1bf6-47db-9a20-57e363a7a7ea_3168x1344.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1qLd!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9eacdd1-1bf6-47db-9a20-57e363a7a7ea_3168x1344.png 424w, https://substackcdn.com/image/fetch/$s_!1qLd!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9eacdd1-1bf6-47db-9a20-57e363a7a7ea_3168x1344.png 848w, https://substackcdn.com/image/fetch/$s_!1qLd!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9eacdd1-1bf6-47db-9a20-57e363a7a7ea_3168x1344.png 1272w, https://substackcdn.com/image/fetch/$s_!1qLd!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9eacdd1-1bf6-47db-9a20-57e363a7a7ea_3168x1344.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!1qLd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9eacdd1-1bf6-47db-9a20-57e363a7a7ea_3168x1344.png" width="1456" height="618" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f9eacdd1-1bf6-47db-9a20-57e363a7a7ea_3168x1344.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:618,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:9004501,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.suryas.org/i/189390135?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9eacdd1-1bf6-47db-9a20-57e363a7a7ea_3168x1344.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1qLd!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9eacdd1-1bf6-47db-9a20-57e363a7a7ea_3168x1344.png 424w, https://substackcdn.com/image/fetch/$s_!1qLd!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9eacdd1-1bf6-47db-9a20-57e363a7a7ea_3168x1344.png 848w, https://substackcdn.com/image/fetch/$s_!1qLd!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9eacdd1-1bf6-47db-9a20-57e363a7a7ea_3168x1344.png 1272w, https://substackcdn.com/image/fetch/$s_!1qLd!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9eacdd1-1bf6-47db-9a20-57e363a7a7ea_3168x1344.png 1456w" 
sizes="100vw" fetchpriority="high"></picture><div></div></div></a></figure></div><h2>Three Builds, One Pattern</h2><p>My wife trades a specific swing trading strategy. Her own rules. Her own universe of stocks. Her own journaling discipline for every position she enters and exits.</p><p>There are commercial trading apps. Dozens, in fact. Some do scanning well. Some handle journaling. Some are strong on position management.</p><p>None does all of it the way she needs it, integrated into a single workflow that matches how she actually thinks about markets.
So she&#8217;s always stitching tools together, compromising on one dimension to get another.</p><p>So I built one for her. Not a toy. Real money is at stake, which means precision matters.</p><p>The application pulls realtime quotes, runs market scans, calculates position sizing and risk parameters, tracks opportunities through her specific pipeline, and journals every entry and exit. Under the hood, it integrates with multiple data providers and systems. When your P&amp;L depends on the accuracy of what&#8217;s on screen, &#8220;close enough&#8221; isn&#8217;t an option.</p><p>I&#8217;m not an engineer. I used to be, a long time ago. That background helps at the margins: I know the terminology, and I have intuition for what an AI assistant means when it discusses system architecture.</p><p>But I&#8217;m not reviewing code or debugging implementations. I built this because I understood her domain, I knew what &#8220;right&#8221; looked like for her workflow, and the distance between &#8220;I want this&#8221; and &#8220;this exists&#8221; has collapsed to a conversation with an AI coding tool. The skill that mattered was product judgment, not engineering expertise. And the biggest barrier for most people isn&#8217;t technical skill; it&#8217;s the mental block that says &#8220;I can&#8217;t build software.&#8221;</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.suryas.org/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading Product Thinking w/ Surya! 
Subscribe for free to receive new posts.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Then I built a workout tracker for the family. An iOS app that does exactly what we need. A few more iterations and our Strava subscription cancels. Not because Strava is bad. Because its value proposition was building what I couldn&#8217;t build myself, and that constraint no longer holds.</p><p>Then, in a professional context (that I can&#8217;t talk freely yet): enterprise-grade software, built in compressed timeframes that would have been a full quarter&#8217;s roadmap a year ago. Not a prototype. Not a toy. Production-ready, with feature upgrades beyond the existing solution.</p><p>Three different scales: personal, consumer, enterprise. The complexity is genuinely different. A personal tool answers to one user. Enterprise software answers to SLAs, compliance requirements, and thousands of concurrent users.</p><p>What you need around each is different: the trading app needed my judgment alone; the enterprise project needed a small team of domain experts. But the pattern is the same. In none of these was I acting as an engineer. I was acting as a domain expert with AI leverage, deciding what to build, evaluating whether it was good, and directing an AI to handle the implementation.</p><p>The interesting question isn&#8217;t that I could do this. It&#8217;s what it means for the way we organize teams, companies, and entire industries around building software.</p><div><hr></div><h2>This Isn&#8217;t Just Me</h2><p>A reasonable objection: &#8220;Sure, you built some personal tools. But is AI-generated code actually production-grade?&#8221;</p><p>The data from the last twelve months says yes. 
Decisively.</p><p>The head of Claude Code at Anthropic <a href="https://fortune.com/2026/01/29/100-percent-of-code-at-anthropic-and-openai-is-now-ai-written-boris-cherny-roon/">hasn&#8217;t opened an IDE in months</a>. He shipped 259 pull requests and 497 commits in a single month, all written entirely by AI. Anthropic reports that 70 to 90 percent of their code company-wide is now AI-generated. At OpenAI, <a href="https://openai.com/index/codex-now-generally-available/">nearly all engineers now use Codex</a>, merging 70 percent more pull requests weekly.</p><p>Spotify&#8217;s co-CEO told investors on their February 2026 earnings call that the company&#8217;s best developers <a href="https://techcrunch.com/2026/02/12/spotify-says-its-best-developers-havent-written-a-line-of-code-since-december-thanks-to-ai/">&#8220;have not written a single line of code since December.&#8221;</a> Engineers use an internal system integrating Claude Code with Slack: direct a bug fix from your phone on the morning commute, receive a working build before you arrive at the office.</p><p>But here&#8217;s the distinction that matters: not all AI coding is equal. Early studies showed AI-assisted code introducing more bugs and higher churn. Those studies measured developers using basic code completion: autocomplete on steroids.</p><p>The real shift happened with agentic tools. Systems like Claude Code and Codex don&#8217;t just suggest the next line of code. They understand entire repositories, navigate across files, run tests, interpret failures, and iterate autonomously. The gap between prompting a model and directing an agent is the gap between dictating to a typist and collaborating with an engineer who works at machine speed.</p><p><a href="https://epoch.ai/benchmarks/swe-bench-verified">SWE-bench</a>, the industry standard for measuring AI&#8217;s ability to resolve real software engineering tasks, went from a 2 percent resolution rate in early 2024 to over 80 percent by late 2025. 
A 40x improvement in under two years. At the frontier companies, the shift has already happened. For the next tier, it&#8217;s happening now. The direction is not in question.</p><p>The scale shows up in the commit logs. Claude Code alone now accounts for <a href="https://www.lennysnewsletter.com/p/head-of-claude-code-what-happens">4 percent of all public GitHub commits</a>, with SemiAnalysis projecting 20 percent by end of 2026. That only counts commits explicitly attributed to the tool, and it&#8217;s only one tool. Factor in unattributed usage and competitors like Codex, and the real share of AI-authored code is already considerably larger.</p><p>This matters because agentic AI changes what the human needs to do. You&#8217;re no longer writing code or even reviewing code line by line. You&#8217;re setting direction, evaluating output, managing context, and deciding when to reframe a problem the AI is stuck on.</p><p>Sitting back and asking the agent to do everything produces useless output packaged beautifully. The builder has to be an active orchestrator, learning and refining tactics with every feedback loop. That skill compounds.</p><p>This piece isn&#8217;t about pasting code from a chatbot. It&#8217;s about domain expertise and active orchestration producing production-grade software. That capability is real, it&#8217;s here, and it changes the economics of everything downstream.</p><div><hr></div><h2>Why the Roles Blurred</h2><p>The separation of Product Management, Engineering, and Design into distinct functions was never a law of nature. It was an economic response to a constraint: building software was expensive.</p><p>To be clear: the underlying skills are real and valuable. Understanding users deeply is a craft. Visual and interaction design is a craft. Systems architecture is a craft. 
None of that goes away.</p><p>What was an artifact of cost was the <em>organizational separation</em> of these crafts into distinct roles connected by handoffs.</p><p><strong>Product Management</strong> existed to reduce waste. When building the wrong thing cost six months of engineering time, someone needed to figure out what to build before the meter started running. The entire discipline of product discovery (user research, prioritization frameworks, roadmap negotiations) was a risk mitigation strategy against the high cost of building.</p><p><strong>Design</strong> existed to get it right before implementation. When a redesign meant throwing away a full sprint of engineering work, you needed high-fidelity mockups and usability testing upfront. Iteration was too expensive to do in code.</p><p><strong>Engineering</strong> existed as a distinct, specialized discipline because writing code was the scarce skill. Translating business requirements into working software required years of training and hard-won experience.</p><p>Each function represented genuine expertise. Together, they created an organizational structure optimized for a world where building was the bottleneck.</p><p>That world is gone for the companies paying attention. For the rest, it&#8217;s going. The timeline is the only question.</p><p>When building costs approach zero, when you can prototype in hours, iterate in minutes, and ship in days, the rationale for separating these functions into handoff-based pipelines dissolves. You don&#8217;t need a separate design phase when redesign costs nothing. The entire waterfall-in-disguise that most &#8220;agile&#8221; teams actually run becomes overhead without a purpose.</p><p>The spec doesn&#8217;t die, but it transforms. Writing is thinking, and the discipline of articulating what you&#8217;re building and why remains valuable. But the spec as a <em>handoff document</em> between PM and engineering loses its reason to exist. 
It becomes a living collaboration artifact between the builder and their AI tools: a place where agents surface gaps in your thinking, suggest directions you haven&#8217;t considered, and keep context updated as the product evolves.</p><h3>The Abstraction Layer Shift</h3><p>What happened is a steady climb in the layer of abstraction at which builders operate.</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!5aEh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cd0efb4-a249-4e2d-add4-c32bd0f0201c_709x185.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!5aEh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cd0efb4-a249-4e2d-add4-c32bd0f0201c_709x185.png 424w, https://substackcdn.com/image/fetch/$s_!5aEh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cd0efb4-a249-4e2d-add4-c32bd0f0201c_709x185.png 848w, https://substackcdn.com/image/fetch/$s_!5aEh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cd0efb4-a249-4e2d-add4-c32bd0f0201c_709x185.png 1272w, https://substackcdn.com/image/fetch/$s_!5aEh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cd0efb4-a249-4e2d-add4-c32bd0f0201c_709x185.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!5aEh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cd0efb4-a249-4e2d-add4-c32bd0f0201c_709x185.png" width="709" height="185" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6cd0efb4-a249-4e2d-add4-c32bd0f0201c_709x185.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:185,&quot;width&quot;:709,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:31166,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.suryas.org/i/189390135?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cd0efb4-a249-4e2d-add4-c32bd0f0201c_709x185.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!5aEh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cd0efb4-a249-4e2d-add4-c32bd0f0201c_709x185.png 424w, https://substackcdn.com/image/fetch/$s_!5aEh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cd0efb4-a249-4e2d-add4-c32bd0f0201c_709x185.png 848w, https://substackcdn.com/image/fetch/$s_!5aEh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cd0efb4-a249-4e2d-add4-c32bd0f0201c_709x185.png 1272w, https://substackcdn.com/image/fetch/$s_!5aEh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6cd0efb4-a249-4e2d-add4-c32bd0f0201c_709x185.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p>At each step, the barrier to entry dropped and the relevant skill shifted. When the abstraction layer was code, you needed an engineer. When it was frameworks, you needed a full-stack developer. 
When it was APIs, you needed a product engineer who could wire services together.</p><p>Now the abstraction layer is intent. The builder works in natural language. The AI handles implementation. The bottleneck is no longer &#8220;can you build it?&#8221; It&#8217;s &#8220;do you know what to build, and can you tell whether it&#8217;s good?&#8221;</p><div><hr></div><h2>The New Builder</h2><p>The old hiring question was: &#8220;Can this person build it?&#8221;</p><p>The new hiring question is: &#8220;Does this person know what&#8217;s worth building?&#8221;</p><p>Think of it as a Venn diagram. On one side, the PM who develops technical comfort and leans into AI as a building tool. On the other, the engineer who builds genuine product sense and judgment about what&#8217;s worth creating. The new builder lives in the overlap.</p><p>The people who struggle are on the edges: PMs who only coordinated without developing product instinct, engineers who only wrote code without building judgment about what to build. Both roles, as traditionally defined, are under pressure. The intersection is where the leverage lives.</p><p>When code is commodity, five things define that intersection:</p><p><strong>1. Discernment.</strong> Knowing what good looks like. And, harder, knowing what &#8220;good enough&#8221; looks like. The ability to evaluate AI-generated output and decide whether it meets the bar. This is the quality of your internal evaluation function, and it compounds with experience.</p><p><strong>2. Judgment.</strong> What to build. What to kill. When to ship. When to stop iterating. These are product decisions, but they&#8217;re no longer the exclusive domain of product managers. Anyone building with AI is making these calls continuously, in real time, as part of the building process itself.</p><p><strong>3. Domain depth.</strong> Understanding the problem space better than a model can infer from its training data. My wife&#8217;s trading rules aren&#8217;t in any dataset. 
The specific compliance requirements of a regulated industry aren&#8217;t captured in a general-purpose model. Domain expertise is the input that makes AI output actually useful.</p><p><strong>4. Systems thinking.</strong> Not writing the code, but understanding how the pieces interact. Where the failure modes live. What breaks at 10x scale. What the second-order effects of a design decision are. This is engineering judgment without the engineering implementation. It becomes more valuable, not less, when AI handles the implementation.</p><p><strong>5. Distribution instinct.</strong> When building is free, the only true waste is building something nobody uses. Distribution isn&#8217;t marketing. It&#8217;s the instinct to put something in front of real users fast, read their reaction, and iterate. The builder who ships to 5 real users on day one learns more than the builder who perfects in isolation for a month. Distribution is part of the building process now, not a phase that comes after.</p><p>Think of it like a film director. They don&#8217;t operate the camera. They don&#8217;t edit the footage frame by frame. But they know exactly what the shot needs to look like, why this scene matters to the story, and when to call cut. The quality of the film depends on their vision, not their technical execution.</p><p>The new builder is a director, not a camera operator. And like a director, they have checks and balances: build a feature in one session, then open a fresh session and ask it to review what was built as an independent consultant across architecture, coding standards, tests, and spec. Agents checking agents. More rigorous than most human code review, if you set it up deliberately.</p><p>But this raises an uncomfortable question. If a smaller subset of people with domain knowledge, discernment, and AI leverage can build production software in hours, what exactly is the 40-person product organization doing? Not all 40 are doing coordination work. 
But more of them than anyone wants to admit.</p><div><hr></div><h2>The Agile Industrial Complex</h2><p>Here&#8217;s the honest answer: most of what a large product organization does is coordination.</p><p>Standups to sync blockers. Planning poker to estimate specialist effort. QA handoffs.</p><p>Backlog grooming sessions that are really prioritization theater. Status updates. The weekly sync about the other weekly sync.</p><p>These aren&#8217;t make-work. They&#8217;re genuine solutions to a genuine problem: when building requires many specialists working in concert, you need mechanisms to keep them aligned.</p><p>Not all rituals are equal, though. Retrospectives (learning from what happened) and architecture reviews (systems thinking across a complex product) remain valuable even in small teams. What becomes overhead is the <em>coordination</em> machinery: the rituals that exist to synchronize handoffs between specialists.</p><p>I call this the agile industrial complex. Not to be dismissive of the people who operate within it, but to name what it is: an organizational structure purpose-built for a constraint that is rapidly disappearing.</p><p>If one person can hold the full context of what needs to be built, the coordination cost drops to zero. No handoff from PM to engineering. No translation from design mockup to code. No standup to sync on who&#8217;s blocked. The entire overhead vanishes. Not because it was wasteful, but because the problem it solved no longer exists at the same scale.</p><h3>What Dies, What Survives</h3><p><strong>What dies</strong> is the coordination layer. Scrum masters. Program managers. Release managers. Managers whose primary function is orchestrating handoffs between specialists.</p><p>An important distinction: managers whose value is <em>orchestration</em> go. Managers whose value is <em>technical mentorship, architecture guidance, or strategic prioritization</em> transform into the domain experts the new model needs. 
They don&#8217;t disappear; they become more leveraged.</p><p><strong>What survives, and becomes more valuable, is domain expertise.</strong> This deserves more attention than the things that die, because this is where the opportunity lives.</p><p>When a product needs to scale, you still need someone who deeply understands operational reliability. When it operates in a regulated industry, you still need someone who knows compliance inside and out. Security, infrastructure, data governance: these require genuine expertise that AI can augment but not replace. These experts, wielding AI, become dramatically more productive than the teams they replace.</p><p>The critical distinction is this: you need the <strong>expert</strong>, not the <strong>team</strong>.</p><table><thead><tr><th>Capability</th><th>Old Model</th><th>New Model</th></tr></thead><tbody><tr><td>Building features</td><td>8-person squad (PM, 4 engineers, designer, QA, EM)</td><td>1 to 2 domain experts with AI leverage</td></tr><tr><td>Scale and reliability</td><td>Ops team of 6</td><td>Ops expert + automated infrastructure</td></tr><tr><td>Security and compliance</td><td>Security team + audit cycles</td><td>Security expert directing AI-powered tooling</td></tr><tr><td>Coordination</td><td>Managers, scrum masters, program managers</td><td>Radically lighter. Organic alignment replaces managed coordination.</td></tr></tbody></table><p>This is not the argument that one person can do everything. At enterprise scale, you still need teams. But the difference is <em>why the team exists</em>. In the old model, the team existed to coordinate the act of building. In the new model, 3 to 5 domain experts collaborate because their domains intersect, not because the building process demands it. The team&#8217;s value is combined judgment, not coordinated labor.</p><p>The expert who knows <em>what needs to be true</em> in security, in compliance, in operational resilience becomes more leveraged than ever. The coordinator who existed to manage the handoffs between those experts and the engineers who built things?
That&#8217;s the role the new model no longer requires.</p><div><hr></div><h2>Two Games, Two Moats</h2><p>The AI era splits software into two fundamentally different games. The rules are different. The moats are different. Conflating them is a strategic error.</p><h3>Game 1: Build for Myself</h3><p>I built a trading application and a workout tracker with zero intention to monetize either. This category is growing fast. For friends-and-family scale, you just deploy and start using it. No distribution required. No go-to-market strategy. No fundraising.</p><p>The impact on SaaS is death by a thousand cuts. Every customer who builds their own tool is a cancelled subscription. Not a competitor. Something worse. A customer who simply doesn&#8217;t need you anymore. They didn&#8217;t switch to an alternative. They just... left.</p><p>The Strava example is instructive. What was Strava&#8217;s moat against me building my own tracker? Feature parity? I&#8217;ll match it in weeks. My data? It lives on my phone; they don&#8217;t own it. Their brand? Irrelevant to a user who can build exactly what they need.</p><p>What might survive is Strava&#8217;s social graph: the community, the leaderboards, the network effects. But the core product (tracking workouts, analyzing performance) is pure commodity now. The moat was never the features. It was the assumption that users couldn&#8217;t build those features themselves.</p><h3>Game 2: Build for Others</h3><p>If features are increasingly cloneable, what actually defends a company?</p><p>The answer depends on complexity. A simple workflow tool or dashboard can be replicated in days. A compliance platform with jurisdiction-specific logic across 50 countries takes longer. But even for complex B2B software, the timeline to functional parity has compressed dramatically. What used to take a competitor years now takes months. What took months takes weeks. 
Feature accumulation is no longer a durable advantage.</p><p>What remains is harder to copy and slower to build:</p><ul><li><p><strong>Depth of customer understanding.</strong> Knowing their problems before they can articulate them, because you&#8217;ve spent years immersed in their context.</p></li><li><p><strong>Relationship capital.</strong> Earned trust, demonstrated reliability, and switching costs built on consistent delivery.</p></li><li><p><strong>Proprietary data.</strong> Years of accumulated customer interactions, workflow patterns, and edge cases embedded in the product. A new entrant can clone your features but not your data. Feed that data into AI-powered systems and it becomes a compounding advantage that widens over time.</p></li><li><p><strong>Helping customers win.</strong> The value isn&#8217;t your product; it&#8217;s what your product enables them to achieve. How effectively are you making your customers successful?</p></li></ul><p>The moat is not your product. The moat is how deeply you understand what your customer needs to succeed, and how effectively you help them get there. This is a knowledge moat, not a technology moat. The moment &#8220;build it myself&#8221; becomes easier than &#8220;this vendor truly understands my needs,&#8221; the subscription cancels.</p><div><hr></div><h2>In Practice: The Incumbent&#8217;s Playbook</h2><p>For established companies, the org chart is now the biggest liability. It was designed for a world where building was expensive and coordination was the primary challenge.</p><p>A caveat: none of this happens overnight. No innovation is embraced equally across an ecosystem. There&#8217;s a bell curve of adoption, and the agile industrial complex has institutional defenders: SAFe training programs, Scrum Alliance certifications, an entire consulting industry built on the current model. It won&#8217;t go quietly.</p><p>But the organizations that resist this shift won&#8217;t just fail to thrive. 
They&#8217;ll fail to survive. The pace of AI acceleration is breathtaking. What felt like a 5-year transition window in early 2025 looks more like 18 months from where we stand now.</p><p>The prudent path for large enterprises is the lab model: make a small bet, prove the new operating model with a single team, measure the results, and expand gradually. Not a company-wide reorg announced at an all-hands. A quiet experiment that generates undeniable evidence.</p><p>Here are four moves that matter.</p><p><strong>1. Collapse functional silos into outcome teams.</strong></p><p>This is the structural move that everything else depends on. Not a &#8220;squads&#8221; rebrand with the same handoffs under a new name. Actual teams of 3 to 5 people with end-to-end ownership of a customer outcome. No PM-to-Engineering-to-Design pipeline.</p><p>Each person is a domain-expert builder with AI leverage who can take something from concept to production. The team exists to learn together, not to coordinate the building process. Start with one team. Give them a real customer outcome to own. Measure what happens. That&#8217;s your lab model in action.</p><p><strong>2. Weaponize your data advantage.</strong></p><p>Your accumulated customer data is the moat that strengthens with AI rather than weakening. Feed it into fine-tuned models. Train systems that get smarter with every customer interaction. Make your institutional knowledge a compounding advantage, not a static archive.</p><p><strong>3. Invest in distribution, divest from differentiation.</strong></p><p>Features are commoditized. Accept it. The assets that matter now are brand, trust, sales relationships, compliance certifications, and enterprise procurement approvals. These are slow to build, hard to fake, and genuinely defensible. Double down on them instead of trying to out-feature a startup that can ship faster.</p><p><strong>4. Accept a smaller, more leveraged organization.</strong></p><p>Same revenue. Fewer people. 
More leverage per person. This is the hardest move because it&#8217;s the most human one. It means acknowledging that roles people have built careers around may no longer be necessary.</p><p>But here&#8217;s what the leaner organization enables: faster decisions, shorter feedback loops, domain experts who are closer to customers and closer to the product. The companies that get this right don&#8217;t just cut costs. They move faster, learn faster, and build things that fit their customers better than the bloated competitor ever could. The constraint was never the number of people. It was the coordination overhead those people required.</p><div><hr></div><h2>The Startup&#8217;s Dilemma</h2><p>Startups face the inverse problem. Starting has never been easier. Surviving has never been harder.</p><table><thead><tr><th>Era</th><th>Hard Part</th><th>Easy Part</th></tr></thead><tbody><tr><td>Pre-AI</td><td>Building the product</td><td>Finding a market (if the product was good)</td></tr><tr><td>Post-AI</td><td>Defending the product</td><td>Building the product</td></tr></tbody></table><p>The opportunity is real. A 2-person team can go from idea to production in days, at near-zero cost. You can test 10 hypotheses in the time it used to take to test one. The cost of failure approaches zero, which means you can afford more swings at bat. For founders with domain expertise and AI leverage, this is the best time in history to start a company.</p><p>But everything that makes it easy for you makes it easy for everyone else. If you can build it in a weekend, so can 500 other people. Every niche is suddenly crowded. Every feature you ship can be cloned before your launch post finishes trending.</p><p>So the startup game has shifted. Building is no longer the hard part. The hard parts are learning and distributing.</p><p><strong>Speed of learning, not building.</strong> The winning team isn&#8217;t the one that codes fastest. It&#8217;s the one that goes from &#8220;shipped&#8221; to &#8220;understood why it failed&#8221; to &#8220;shipped the fix&#8221; fastest.
Learning loops, not build cycles, are the competitive advantage.</p><p><strong>Integration depth.</strong> Shallow tools get replaced overnight. Deep embedding in customer workflows, the kind where ripping out your product would mean rearchitecting their process, creates switching costs that survive the AI era.</p><p><strong>Distribution as a first-order capability.</strong> In a crowded niche, the product that wins isn&#8217;t necessarily the best one. It&#8217;s the one people find first and trust most. Community, brand, go-to-market: these matter more when every competitor can match your features in a week.</p><div><hr></div><h2>Stepping Back: The Human Question</h2><p>Everything above is an operating model framework. It&#8217;s clean and strategic and makes sense on a whiteboard. But operating models are made of people.</p><p>A lot of those people are about to find that the skills they invested years in (coordination, process management, translating between specialists) are the skills this new model no longer needs. The problem is structural, not personal.</p><p>There&#8217;s a counterargument worth engaging with: if building gets cheaper, demand for software explodes, and we need <em>more</em> people, not fewer. The Jevons Paradox applied to code.</p><p>I think that&#8217;s directionally right and strategically misleading. The freed capacity absolutely does create net new innovation. Humanity will tackle problems in the next two years that were logistically impossible last year.</p><p>But the &#8220;more demand creates more jobs&#8221; framing misses a critical variable: individual agency. The demand is for domain experts who can wield AI, not for the coordination roles the old model required.</p><p>The transition isn&#8217;t automatic. It takes real effort, real time, and the willingness to step outside a comfort zone that the market rewarded for years. This could be zero-sum for specific people who don&#8217;t make the shift. 
That&#8217;s the honest part.</p><p>So here&#8217;s the guidance, without the optimistic spin:</p><p><strong>Pick a domain. Go deep.</strong> Become the person who knows <em>what needs to be true</em> in a specific area: compliance, security, customer success, an industry vertical, a technical discipline. Then learn to wield AI as your leverage to make those things true.</p><p>The path isn&#8217;t starting over. Consider the program manager who spent 3 years coordinating between the compliance team and engineering. They already understand the compliance domain better than they realize. The shift is recognizing which domain knowledge you&#8217;ve already accumulated and deepening it deliberately, rather than maintaining the generalist coordination skills that the market valued five years ago.</p><p>The market right now is flooded with generalists and starved for domain experts who can wield AI. The people who hear this early and redirect their trajectory will find themselves disproportionately valuable. The window is open. It won&#8217;t stay open indefinitely.</p><div><hr></div><h2>The Moat Is Clarity</h2><p>The transition is real and it&#8217;s moving fast. Some of it is painful. But here&#8217;s what makes it exciting: for the first time, the distance between understanding a problem and solving it has collapsed to nearly zero. A domain expert with AI leverage can build in days what used to take teams months. A 2-person startup can test ten hypotheses before a traditional company finishes its planning cycle. Problems that were too niche, too expensive, too logistically complex to tackle are suddenly within reach.</p><p>More software will be built in the next two years than in the previous twenty. Most of it will be built by people who would never have called themselves builders before.</p><p>The operating model for when building is free isn&#8217;t a theory. It&#8217;s already here. The people who see it clearly aren&#8217;t waiting for permission.
They&#8217;re already building.</p>]]></content:encoded></item><item><title><![CDATA[Building to Think in the Full-Stack Builder Era]]></title><description><![CDATA[AI empowers product managers to 'Build to Think.' Discover how full-stack builders use AI and strategic infrastructure to overcome latency and drive innovation.]]></description><link>https://blog.suryas.org/p/building-to-think-in-the-full-stack-builder-era</link><guid isPermaLink="false">https://blog.suryas.org/p/building-to-think-in-the-full-stack-builder-era</guid><dc:creator><![CDATA[Surya Suravarapu]]></dc:creator><pubDate>Thu, 18 Dec 2025 13:22:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ymqA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3acd993a-278e-48ea-9161-d1816f446278_2752x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ymqA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3acd993a-278e-48ea-9161-d1816f446278_2752x1536.png"
data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!ymqA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3acd993a-278e-48ea-9161-d1816f446278_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!ymqA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3acd993a-278e-48ea-9161-d1816f446278_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!ymqA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3acd993a-278e-48ea-9161-d1816f446278_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!ymqA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3acd993a-278e-48ea-9161-d1816f446278_2752x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!ymqA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3acd993a-278e-48ea-9161-d1816f446278_2752x1536.png" width="1456" height="813" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3acd993a-278e-48ea-9161-d1816f446278_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5050766,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.suryas.org/i/181255481?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3acd993a-278e-48ea-9161-d1816f446278_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!ymqA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3acd993a-278e-48ea-9161-d1816f446278_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!ymqA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3acd993a-278e-48ea-9161-d1816f446278_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!ymqA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3acd993a-278e-48ea-9161-d1816f446278_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!ymqA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3acd993a-278e-48ea-9161-d1816f446278_2752x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The debate over the future of product management has been polarized. On one side, the &#8220;PM is Dead&#8221; crowd argues that AI automates the coordination layer, making the role obsolete. On the other, the &#8220;Tech-PM&#8221; advocates insist the role is simply shifting closer to the code.</p><p>Recently, LinkedIn picked a side. In a <a href="https://www.youtube.com/watch?v=R-zCfLQD_84">conversation with Lenny Rachitsky</a>, LinkedIn&#8217;s CPO detailed how they restructured their Associate Product Manager program (traditional product management training) and replaced it with the &#8220;Associate Product Builder&#8221; program. Their new mandate? <strong>Full-Stack Builders</strong>, while acknowledging that the specialists have a vital role to play.
Not mutually exclusive, but there&#8217;s a trend building.</p><p>This isn&#8217;t just another tech company tweaking job titles; it is a signal that the market has moved. The era of the &#8220;Product Trio&#8221; (where a PM writes requirements, a designer draws mocks, and an engineer writes code) is buckling under its own inefficiency. In its place, we are seeing the rise of the <strong>Augmented Architect</strong>: a single operator who leverages AI to vertically integrate strategy, design, and execution.</p><p>This shift validates a harder truth: the administrative layer of product management is dying. The traffic cop who manages handoffs is being automated away.
What survives is the <strong>Builder</strong>: someone who uses AI not just to speed up, but to reclaim the full creative stack.</p><div><hr></div><p><strong>Here&#8217;s what&#8217;s ahead:</strong></p><ul><li><p>Why the Product Trio model collapsed under its own latency</p></li><li><p>How LinkedIn built infrastructure to enable Full-Stack Builders at scale</p></li><li><p>What &#8220;Building to Think&#8221; means and why it&#8217;s not the &#8220;jack of all trades&#8221; trap</p></li><li><p>The three investments required to make this work in your organization</p></li></ul><div><hr></div><h2>Why the trio is breaking down</h2><p>The &#8220;Product Trio&#8221; (PM, Design, Engineering) wasn&#8217;t a mistake; it was a necessity. The complexity of the stack demanded hyper-specialization. One person couldn&#8217;t master strategy, sophisticated UI systems, and distributed backend architecture simultaneously. Splitting the brain was the only way to scale.</p><p>For a decade or so, it worked. But it came with a hidden tax: Latency.</p><p>We built a supply chain of talent. The PM defines the &#8216;Why,&#8217; Design defines &#8216;What&#8217; it looks like, and Engineering defines &#8216;How&#8217; to build it. Every handoff creates a delay, and every delay creates a leak. Context is lost when a strategy deck becomes a wireframe. Nuance evaporates when a wireframe becomes a Jira ticket.</p><p>To patch these leaks, we built an entire ecosystem of coordination. Far removed from the original Agile Manifesto, we invented the <strong>Agile Industrial Complex</strong>: a bloat of standups, retro rituals, and endless syncs.</p><p>We even split the product role in two, creating an artificial divide between the Product Manager (Strategy) and the Product Owner (Backlog). Now, we didn&#8217;t just have handoffs between functions; we had handoffs within the brain of the product leader.</p><p>This created a management nightmare.
Entire career paths in Program Management emerged just to coordinate the coordinators.</p><p>Today, many Product Managers spend 70% of their week servicing this complex. They aren&#8217;t building products; they are managing the friction of the process.</p><p>The tragedy is what gets displaced. When a PM is drowning in ticket grooming and alignment meetings, the first thing to go is the customer. Ask a modern product team, &#8220;How many customers did you talk to this month?&#8221; and don&#8217;t be shocked when the answer is single digits, or zero. The system designed to build products has cannibalized the time needed to understand who they are for.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!-npY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919c94af-164d-43d4-8ed3-29dadadf0664_2752x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-npY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919c94af-164d-43d4-8ed3-29dadadf0664_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!-npY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919c94af-164d-43d4-8ed3-29dadadf0664_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!-npY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919c94af-164d-43d4-8ed3-29dadadf0664_2752x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!-npY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919c94af-164d-43d4-8ed3-29dadadf0664_2752x1536.png 
1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-npY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919c94af-164d-43d4-8ed3-29dadadf0664_2752x1536.png" width="1456" height="813" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/919c94af-164d-43d4-8ed3-29dadadf0664_2752x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:813,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6177862,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.suryas.org/i/181255481?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919c94af-164d-43d4-8ed3-29dadadf0664_2752x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-npY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919c94af-164d-43d4-8ed3-29dadadf0664_2752x1536.png 424w, https://substackcdn.com/image/fetch/$s_!-npY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919c94af-164d-43d4-8ed3-29dadadf0664_2752x1536.png 848w, https://substackcdn.com/image/fetch/$s_!-npY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919c94af-164d-43d4-8ed3-29dadadf0664_2752x1536.png 1272w, 
https://substackcdn.com/image/fetch/$s_!-npY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F919c94af-164d-43d4-8ed3-29dadadf0664_2752x1536.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p>This latency tax was acceptable when the alternative was not building at all.
But today, when AI can generate a functional prototype in 60 seconds, the calculus has shifted.</p><div><hr></div><h2>Building to think</h2><p>When some industry voices warn against the &#8220;Full-Stack PM,&#8221; their argument is familiar: &#8220;Don&#8217;t be a jack of all trades and master of none.&#8221; They fear that asking a PM to code or design will dilute their focus on the customer.</p><p>This fear is rooted in an obsolete definition of &#8220;building.&#8221;</p><p>The goal of the Augmented Architect isn&#8217;t to replace the Senior React Engineer or the Principal Product Designer. You aren&#8217;t building production code to save money on headcount. You are <strong>Building to Think</strong>.</p><p>We need to distinguish between <strong>mastery</strong> and <strong>fluency</strong>.</p><ul><li><p><strong>Mastery</strong> is your deep, irreplaceable expertise. For a PM, this is Customer Insight, Evidence, Strategy, and Inspiration.</p></li><li><p><strong>Fluency</strong> is the ability to use a tool well enough to unblock yourself.</p></li></ul><p>The Augmented Architect uses AI to <strong>gain technical and design fluency,</strong> <em>specifically to reinforce</em> their <strong>product mastery</strong>.</p><ul><li><p><strong>Customer Insight (Mastery):</strong> Instead of writing a spec and waiting two weeks for a mock, you use AI to vibe-code a functional prototype in one hour. You put it in a user&#8217;s hands immediately for feedback. Your technical fluency didn&#8217;t distract you from the customer; it got you to the customer <em>faster</em>.</p></li><li><p><strong>Evidence Mindset (Mastery):</strong> Instead of waiting for a data analyst to prioritize your ticket, you use an AI agent to query the warehouse directly. Your data fluency didn&#8217;t replace the analyst; it allowed you to validate your hypothesis instantly.</p></li></ul><p>LinkedIn explicitly codified this (more on this below). 
They found that when execution is automated, the &#8220;human&#8221; responsibilities don&#8217;t disappear: they become the <em>only</em> things that matter. They define these as the <strong>Five Traits</strong>: Vision, Empathy, Communication, Creativity, and Judgment.</p><p>Notice what isn&#8217;t on that list: &#8220;Ticket Writing,&#8221; &#8220;Backlog Grooming,&#8221; or &#8220;Status Reporting.&#8221;</p><p>The &#8220;Jack of All Trades&#8221; critique misses the point. The Augmented Architect doesn&#8217;t do <em>more</em> work. They use AI to automate low-leverage coordination work, so they can spend <em>more</em> time on the high-leverage traits of Vision and Judgment. They aren&#8217;t diluting their role; they are distilling it.</p><p>Critics like to invoke the Swiss Army Knife analogy: you wouldn&#8217;t use one to build a house. They&#8217;re right, but they are missing the core point of this proposal. The Full-Stack Builder doesn&#8217;t use the Swiss Army Knife to frame walls and install plumbing. They use it to sketch the blueprint, test if the foundation makes sense, and validate whether the house should be built at all. </p><p>That&#8217;s the 0 to 0.5 phase. Once validated, they bring in the specialists with proper tools to take it from 0.5 to 1.0. The knife isn&#8217;t replacing the hammer. It&#8217;s accelerating the decision about whether to pick up the hammer in the first place.</p><div><hr></div><h2>How LinkedIn built the infrastructure for full-stack builders</h2><p>The latency trap of the Agile Industrial Complex was not a secret. But for years, the solution felt out of reach. How could a single individual navigate strategy, design, and complex engineering? LinkedIn&#8217;s ambitious &#8220;Full-Stack Builder&#8221; program doesn&#8217;t just offer an answer; it redefines the question.</p><p>But here&#8217;s the critical reality: this transformation doesn&#8217;t happen with ChatGPT and good intentions. 
It requires systematic organizational investment. Individual builders can move from 0 to 0.5 instantly with today&#8217;s AI tools. Scaling that capability across an organization demands platform thinking, shared infrastructure, and cultural rewiring.</p><p>This isn&#8217;t about simply asking PMs (or engineers, designers, and even researchers, the program is role-agnostic) to take on more work. It&#8217;s about providing the <strong>infrastructure of autonomy</strong> that empowers an individual to vertically integrate the talent stack, collapsing the traditional handoffs and accelerating value creation. LinkedIn&#8217;s pioneering effort rests on three pillars:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!k1qq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54df2380-3b7b-4d5a-9549-eb51eea1440c_2048x2048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!k1qq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54df2380-3b7b-4d5a-9549-eb51eea1440c_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!k1qq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54df2380-3b7b-4d5a-9549-eb51eea1440c_2048x2048.png 848w, https://substackcdn.com/image/fetch/$s_!k1qq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54df2380-3b7b-4d5a-9549-eb51eea1440c_2048x2048.png 1272w, 
https://substackcdn.com/image/fetch/$s_!k1qq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54df2380-3b7b-4d5a-9549-eb51eea1440c_2048x2048.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!k1qq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54df2380-3b7b-4d5a-9549-eb51eea1440c_2048x2048.png" width="1456" height="1456" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/54df2380-3b7b-4d5a-9549-eb51eea1440c_2048x2048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6573995,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.suryas.org/i/181255481?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54df2380-3b7b-4d5a-9549-eb51eea1440c_2048x2048.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!k1qq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54df2380-3b7b-4d5a-9549-eb51eea1440c_2048x2048.png 424w, https://substackcdn.com/image/fetch/$s_!k1qq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54df2380-3b7b-4d5a-9549-eb51eea1440c_2048x2048.png 848w, 
https://substackcdn.com/image/fetch/$s_!k1qq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54df2380-3b7b-4d5a-9549-eb51eea1440c_2048x2048.png 1272w, https://substackcdn.com/image/fetch/$s_!k1qq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F54df2380-3b7b-4d5a-9549-eb51eea1440c_2048x2048.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p><strong>Platform: Rearchitecting for AI Fluency</strong></p><p>LinkedIn understood that off-the-shelf AI tools wouldn&#8217;t
suffice. To enable a &#8220;Full-Stack Builder&#8221; model, they had to re-architect their core platform to allow AI agents to &#8220;reason&#8221; and build directly. This involved:</p><ul><li><p><strong>Composable UI:</strong> Building server-side UI components that AI could manipulate and assemble seamlessly.</p></li><li><p><strong>Design Systems:</strong> Adjusting internal design systems to ensure AI agents (like Figma plugins or internal tools) could export code directly compatible with LinkedIn&#8217;s repositories.</p></li></ul><p>This foundational investment meant builders weren&#8217;t just prompting generic AI; they were interacting with a system designed for deep integration.</p><p><strong>Tools: The Custom Agent Ecosystem</strong></p><p>The true force multiplier lies in LinkedIn&#8217;s suite of specialized internal AI agents. Trained on LinkedIn&#8217;s &#8220;golden examples&#8221; and historical data, these agents automate the process complexity that once bogged down product teams. A Trust Agent identifies vulnerabilities and harm vectors in product specs before development. A Growth Agent critiques feature ideas by analyzing LinkedIn&#8217;s history of growth funnels and A/B tests. 
Similar agents handle research, QA, and maintenance tasks autonomously.</p><p>These agents act as an invisible, always-on &#8220;squad&#8221; for the builder, handling tasks that previously required multiple specialists and endless coordination.</p><p><strong>Culture: Incentivizing the New Builder</strong></p><p>To ensure adoption, LinkedIn didn&#8217;t just build the tools; they reshaped their culture and incentives:</p><ul><li><p><strong>Performance Reviews:</strong> AI fluency and agency are now explicit components of performance evaluations.</p></li><li><p><strong>Pod Model:</strong> Teams are reorganized into small, cross-functional &#8220;pods&#8221; to foster rapid execution.</p></li><li><p><strong>Pilot Exclusivity:</strong> A strategic rollout created internal demand and &#8220;FOMO,&#8221; accelerating adoption among top talent.</p></li></ul><p>LinkedIn&#8217;s results are compelling: top performers are saving hours per week, shipping higher quality products, and even transitioning between traditionally siloed roles with unprecedented ease. A user researcher, for instance, transitioned to a Growth PM role by leveraging these tools.</p><p><strong>A Key Distinction:</strong> LinkedIn&#8217;s model may push builders closer to 0.9 or even 1.0 on the execution spectrum, relying less on specialist handoffs than the framework I&#8217;m advocating. Their heavy infrastructure investment (custom agents, re-architected platforms, curated training data) enables this. But most organizations won&#8217;t have LinkedIn&#8217;s resources or appetite for that level of platform buildout. The 0 to 0.5 model I&#8217;m proposing is more pragmatic: use AI for rapid validation, then strategically partner with specialists to scale. <em>This preserves craft while accelerating learning.</em></p><p><strong>What This Means:</strong> The &#8220;Full-Stack Builder&#8221; isn&#8217;t a mythical unicorn, and it&#8217;s not about individual heroics. 
You don&#8217;t need LinkedIn&#8217;s exact stack, but you cannot skip the investment. Individual builders gain the 0 to 0.5 capability instantly with AI. Scaling that across your organization requires platform thinking: shared agents, composable systems, and dismantling the coordination overhead that still suffocates most teams. </p><p>The shift is both personal and organizational. One without the other hits a ceiling fast.</p><div><hr></div><h2>What it takes to make this work at scale</h2><p>The possibilities are here, right now. A PM can build a functional prototype in an afternoon. A designer can generate code. An engineer with product sense can ship a feature end-to-end. This isn&#8217;t speculative; it&#8217;s happening today.</p><p>However, what separates the individual win from organizational transformation is <strong>infrastructure</strong>.</p><p>Let&#8217;s address a cynical reading of this trend head-on: critics see the &#8220;Full-Stack Builder&#8221; as corporate code for &#8220;do more with less.&#8221; They argue it&#8217;s a cost-cutting play dressed up as innovation, a way to squeeze three roles into one headcount. If that&#8217;s what your organization is doing, the critics are right to be skeptical.</p><p>But that&#8217;s not what we&#8217;re describing. The model I&#8217;m advocating requires <em>more</em> investment, not less. Investment in training, in custom tooling, in platform capabilities, in cultural change. Organizations that treat this as a headcount reduction strategy will fail. The ones that treat it as a capability investment will pull ahead.</p><p>The Full-Stack Builder isn&#8217;t limited to Product Managers. Engineers with product sense, designers who can expand their scope, researchers who can prototype their hypotheses: any role can adopt this model.
The role is less important than the mindset: taking an idea from 0 to 0.5 rapidly, validating it with real users, then bringing in specialists to scale it to 1.0.</p><p>Making this work at scale requires investment in three areas:</p><p><strong>People: Building AI Fluency</strong></p><p>Your team needs training, not in &#8220;prompt engineering tips,&#8221; but in the mindset shift of Building to Think. This means:</p><ul><li><p>Understanding when to build for validation vs. when to hand off for production.</p></li><li><p>Developing computational fluency and judgment about when AI helps vs. when human expertise is non-negotiable.</p></li></ul><p>Many organizations are already investing here. The question is whether your training is tactical (how to use ChatGPT) or strategic (how to rewire your operating model).</p><p><strong>Platform: Shared Cross-Functional Capabilities</strong></p><p>This is where most organizations fail. They give everyone AI tools and wonder why the gains don&#8217;t scale. Without shared infrastructure, every team reinvents the wheel. Some do it well. Most do it poorly. The result is fragmentation and technical debt.</p><p>The investment required:</p><p><strong>Composable systems:</strong> Design systems, UI libraries, and APIs that AI can manipulate.</p><p><strong>Curated training data:</strong> datasets that teach AI what good looks like.</p><p>You need platform thinking. Without it, individual builders hit a ceiling, and the organization never captures the compound value.</p><p><strong>Process: Tearing Down the Agile Industrial Complex</strong></p><p>You cannot layer the new way of working on top of the old ceremony. If your builders are shipping prototypes in a day but still spending the majority of their week on standups, retros, and grooming sessions, you haven&#8217;t changed anything. 
You&#8217;ve just added AI to a broken system.</p><p>This requires hard organizational choices:</p><ul><li><p>Eliminating coordination theater: meetings that exist to manage handoffs you&#8217;re trying to collapse.</p></li><li><p>Reorganizing into small, autonomous pods rather than large functional hierarchies.</p></li><li><p>Changing incentives: rewarding speed of learning and commercial impact, not story points and ticket throughput.</p></li></ul><h3>The timeline question</h3><p>The question isn&#8217;t whether this transformation takes 6 months or 18. It&#8217;s whether you&#8217;re starting today. Individual builders can adopt the mindset immediately with tools that already exist. Organizations need to invest in infrastructure, and that takes intention, budget, and leadership conviction. But the urgency is real: if your competitors are building this capability while you&#8217;re debating it, the gap compounds fast.</p><div><hr></div><h2>What the new operating model looks like</h2><p>What does this distillation of the product role actually look like in the day-to-day? It&#8217;s a dramatic departure from the &#8220;traffic cop&#8221; model. Imagine two PMs in 2026, both working on similar problems:</p><p><strong>The Traffic Cop PM:</strong> Their week is a blur of meetings. They manage dependencies, chase updates, groom backlogs, refine tickets, and present status. Their primary output is documentation and coordination. When asked about customer conversations, they sigh and promise to fit them in &#8220;next week.&#8221;</p><p><strong>The Augmented Architect:</strong> Their week is characterized by rapid customer iteration. They build functional prototypes with AI in hours, test them with users directly, analyze patterns from session data, and present validated insights with working demos to their engineering partners. They stop at 0.5 and bring in specialists to scale to 1.0. 
Their time goes to deep work and strategic planning, not status meetings.</p><div><hr></div><h2>The widening gap</h2><p>Two groups are emerging: <strong>Augmented Architects</strong> who use AI to validate faster, and <strong>Traffic Cops</strong> who treat AI as optional while clinging to coordination rituals.</p><p>The choice isn&#8217;t whether AI impacts product management; it&#8217;s whether you&#8217;re adopting the builder mindset today. Your organization may lack custom agents or re-architected platforms. Start with individual capabilities available now. Build prototypes. Validate faster. Demonstrate value.</p><p>But understand: your organization must invest in shared infrastructure to scale this, or individual builders hit a ceiling. The mindset shift is personal and immediate. The infrastructure shift takes sustained commitment. Both are necessary.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TjjP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43787933-7259-465d-8272-6d51b44fe895_2816x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TjjP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43787933-7259-465d-8272-6d51b44fe895_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!TjjP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43787933-7259-465d-8272-6d51b44fe895_2816x1536.png 848w, https://substackcdn.com/image/fetch/$s_!TjjP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43787933-7259-465d-8272-6d51b44fe895_2816x1536.png 1272w, 
https://substackcdn.com/image/fetch/$s_!TjjP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43787933-7259-465d-8272-6d51b44fe895_2816x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TjjP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43787933-7259-465d-8272-6d51b44fe895_2816x1536.png" width="1456" height="794" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/43787933-7259-465d-8272-6d51b44fe895_2816x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:794,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:6521472,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://blog.suryas.org/i/181255481?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43787933-7259-465d-8272-6d51b44fe895_2816x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TjjP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43787933-7259-465d-8272-6d51b44fe895_2816x1536.png 424w, https://substackcdn.com/image/fetch/$s_!TjjP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43787933-7259-465d-8272-6d51b44fe895_2816x1536.png 848w, 
https://substackcdn.com/image/fetch/$s_!TjjP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43787933-7259-465d-8272-6d51b44fe895_2816x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!TjjP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F43787933-7259-465d-8272-6d51b44fe895_2816x1536.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Those who start now, even with imperfect infrastructure, position themselves to accelerate when organizations invest. 
Those who wait will find themselves 12 to 18 months behind. If you&#8217;re coordinating handoffs while the competition is shipping, the gap won&#8217;t close.</p><div><hr></div><h2>The era of the architect</h2><p>The Product Trio model, while effective for a time, has succumbed to the latency and inefficiency of its own specialized handoffs. The rise of AI has exposed the &#8220;Agile Industrial Complex&#8221; for what it became: a system that prioritizes process over customer insight, trapping product managers in a cycle of coordination rather than creation.</p><p>But this isn&#8217;t a eulogy for Product Management. It&#8217;s a call for its evolution. LinkedIn&#8217;s bold move to replace its APM program with the &#8220;Associate Product Builder&#8221; isn&#8217;t an anomaly; it&#8217;s a leading indicator. It illuminates a future where the most valuable product professionals are <strong>Augmented Architects</strong>: individuals who leverage AI to vertically integrate the talent stack.</p><p>They are not generalists. They are masters of the core PM competencies (Customer Insight, Evidence, Strategy, and Inspiration) who use <strong>Fluency</strong> in adjacent domains (enabled by AI) to reinforce and accelerate their <strong>Mastery</strong>. They are &#8220;Building to Think,&#8221; reducing the latency of validation, and owning the full creative and commercial arc of a product.</p><p>The era of the Augmented Architect isn&#8217;t coming. It&#8217;s here. The question isn&#8217;t whether you understand this shift. It&#8217;s whether you&#8217;re shipping differently this quarter than last. 
The choice is stark: embrace this evolution and reclaim your role as a direct creator of value, or remain a traffic cop in a system that no longer requires one.</p>]]></content:encoded></item><item><title><![CDATA[Your platform is either a tax or a multiplier]]></title><description><![CDATA[Internal product managers often believe a dangerous myth.]]></description><link>https://blog.suryas.org/p/platform-tax-vs-multiplier</link><guid isPermaLink="false">https://blog.suryas.org/p/platform-tax-vs-multiplier</guid><dc:creator><![CDATA[Surya Suravarapu]]></dc:creator><pubDate>Wed, 26 Nov 2025 00:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!rOrX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F729d23ac-88c0-4a60-a97c-a0a4774c7159_1715x954.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rOrX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F729d23ac-88c0-4a60-a97c-a0a4774c7159_1715x954.png" 
data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rOrX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F729d23ac-88c0-4a60-a97c-a0a4774c7159_1715x954.png 424w, https://substackcdn.com/image/fetch/$s_!rOrX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F729d23ac-88c0-4a60-a97c-a0a4774c7159_1715x954.png 848w, https://substackcdn.com/image/fetch/$s_!rOrX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F729d23ac-88c0-4a60-a97c-a0a4774c7159_1715x954.png 1272w, https://substackcdn.com/image/fetch/$s_!rOrX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F729d23ac-88c0-4a60-a97c-a0a4774c7159_1715x954.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rOrX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F729d23ac-88c0-4a60-a97c-a0a4774c7159_1715x954.png" width="1456" height="810" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/729d23ac-88c0-4a60-a97c-a0a4774c7159_1715x954.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:810,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2973395,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://suryaps.substack.com/i/180290347?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F729d23ac-88c0-4a60-a97c-a0a4774c7159_1715x954.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rOrX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F729d23ac-88c0-4a60-a97c-a0a4774c7159_1715x954.png 424w, https://substackcdn.com/image/fetch/$s_!rOrX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F729d23ac-88c0-4a60-a97c-a0a4774c7159_1715x954.png 848w, https://substackcdn.com/image/fetch/$s_!rOrX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F729d23ac-88c0-4a60-a97c-a0a4774c7159_1715x954.png 1272w, https://substackcdn.com/image/fetch/$s_!rOrX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F729d23ac-88c0-4a60-a97c-a0a4774c7159_1715x954.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>Internal product managers often believe a dangerous myth. They think they don&#8217;t have to worry about churn.</p><p>Since the company mandates the use of their API gateway, design system, or data warehouse, they assume their user base is guaranteed. They have a captive audience.</p><p>But in platform product management, users don&#8217;t churn. They rot.</p><p>When users are forced to use a tool they hate, they engage in malicious compliance. They do the bare minimum. They build &#8220;shadow IT&#8221; workarounds.</p><p>They complain to leadership until your budget gets cut. 
Every internal platform falls into one of two buckets: it is either a <em>Tax</em> or a <em>Multiplier</em>.</p><p>If you don&#8217;t know which one you are, you are probably a tax.</p><h2>The platform tax</h2><p>A tax is something you pay because you have to. In the context of a platform, it is a tool that slows users down in exchange for organizational compliance.</p><p>Your platform is a tax if it adds friction (&#8220;I have to file a ticket and wait three days just to get an API key&#8221;), demands extra effort (&#8220;I have to rewrite my code to fit your rigid schema&#8221;), or delivers abstract value (the user feels the pain, the company gets the gain).</p><p>When you operate as a tax, your relationship with users is adversarial. You are the police. They are the citizens trying to avoid a ticket.</p><h2>The multiplier effect</h2><p>A multiplier gives users leverage. It abstracts away the boring, hard, or dangerous parts of development so product teams can skip infrastructure setup and go straight to feature development.</p><p>Your platform is a multiplier if it removes friction (&#8220;I dropped my code in, and the platform handled auth, logging, and scaling automatically&#8221;), accelerates velocity (&#8220;This saved me two weeks of integration work&#8221;), and delivers immediate value (the user feels the gain directly in their sprint velocity).</p><p>When you operate as a multiplier, your relationship with users is a partnership. You are the pit crew. They are the driver.</p><h2>The mandate trap</h2><p>The biggest enemy of the multiplier mindset is the corporate mandate.</p><p>When a CTO says, &#8220;Everyone must use Platform X,&#8221; the platform team stops selling. They stop treating their users like customers. They start treating them like subordinates.</p><p>This is where the product rot begins.</p><p>A &#8220;tax&#8221; team leans on the mandate to explain away their bad UX. 
They say things like, &#8220;They have to use it because of security compliance.&#8221;</p><p>A &#8220;multiplier&#8221; team ignores the mandate. They build a product so good that teams would voluntarily choose it even if they were allowed to use AWS directly.</p><h2>Shifting from tax to multiplier</h2><p>If you suspect you are building a tax, the shift starts with reframing your value proposition. You cannot just be a gatekeeper. Your role shifts to that of a <a href="https://blog.suryas.org/architect-gardner-orchestrator/">gardener</a>.</p><p><strong>Sell the &#8220;boring&#8221; work.</strong> Don&#8217;t sell compliance. Sell the fact that you handle the tasks nobody wants to do. &#8220;You must use our library for security compliance&#8221; is a tax pitch. &#8220;You never have to worry about patching a security vulnerability again&#8221; is a multiplier pitch.</p><p><strong>Measure time-to-hello-world.</strong> Tax teams measure uptime. Multiplier teams measure how fast a new user can ship value. If it takes three days to get onboarded, you are a tax. If it takes five minutes, you are a multiplier.</p><p><strong>Compete with the open market.</strong> Assume your users can leave. If they could use Vercel, Stripe, or Auth0 instead of your internal tool, would they? If the answer is yes, find out why. That gap is your roadmap.</p><h2>The litmus test</h2><p>There is one simple question to determine where you stand.</p><p>If the executive mandate were removed tomorrow, how many teams would keep using your platform?</p><p>If the answer is &#8220;nobody,&#8221; you are a tax. You are living on borrowed time.</p><p>If the answer is &#8220;most of them,&#8221; you are a multiplier. You are building leverage.</p><p>The choice is clear: gates or ramps. 
Most teams are still building gates.</p>]]></content:encoded></item><item><title><![CDATA[Customer satisfaction is a hierarchy, not a metric]]></title><description><![CDATA[We have all been in that strategy meeting.]]></description><link>https://blog.suryas.org/p/customer-satisfaction-hierarchy</link><guid isPermaLink="false">https://blog.suryas.org/p/customer-satisfaction-hierarchy</guid><dc:creator><![CDATA[Surya Suravarapu]]></dc:creator><pubDate>Tue, 25 Nov 2025 00:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!D9XQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b57e9d0-fd37-496b-8f4d-83a34e65e060_800x900.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>We have all been in that strategy meeting. The dashboard is green. Uptime is 99.9%, support ticket volume is down, roadmap is on schedule.</p><p>And yet, customers are <a href="https://blog.suryas.org/retention-starts-at-onboarding/">churning</a>.</p><p>The problem isn&#8217;t the data. It&#8217;s the definition. We treat &#8220;customer satisfaction&#8221; as a single bucket. We dump everything into it: bug fixes, new features, polite support emails, brand colors. If the bucket is full, we assume we are winning.</p><p>But satisfaction isn&#8217;t a bucket. 
It&#8217;s a hierarchy.</p><p>To fix retention, the question isn&#8217;t &#8220;Are they satisfied?&#8221; It&#8217;s &#8220;Which layer of the hierarchy is leaking?&#8221;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!D9XQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b57e9d0-fd37-496b-8f4d-83a34e65e060_800x900.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!D9XQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b57e9d0-fd37-496b-8f4d-83a34e65e060_800x900.png 424w, https://substackcdn.com/image/fetch/$s_!D9XQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b57e9d0-fd37-496b-8f4d-83a34e65e060_800x900.png 848w, https://substackcdn.com/image/fetch/$s_!D9XQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b57e9d0-fd37-496b-8f4d-83a34e65e060_800x900.png 1272w, https://substackcdn.com/image/fetch/$s_!D9XQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b57e9d0-fd37-496b-8f4d-83a34e65e060_800x900.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!D9XQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b57e9d0-fd37-496b-8f4d-83a34e65e060_800x900.png" width="800" height="900" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8b57e9d0-fd37-496b-8f4d-83a34e65e060_800x900.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:900,&quot;width&quot;:800,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:80145,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://suryaps.substack.com/i/180290348?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b57e9d0-fd37-496b-8f4d-83a34e65e060_800x900.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!D9XQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b57e9d0-fd37-496b-8f4d-83a34e65e060_800x900.png 424w, https://substackcdn.com/image/fetch/$s_!D9XQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b57e9d0-fd37-496b-8f4d-83a34e65e060_800x900.png 848w, https://substackcdn.com/image/fetch/$s_!D9XQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b57e9d0-fd37-496b-8f4d-83a34e65e060_800x900.png 1272w, https://substackcdn.com/image/fetch/$s_!D9XQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8b57e9d0-fd37-496b-8f4d-83a34e65e060_800x900.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>Level 1: The removal of friction (hygiene)</h2><p>This is the basement. The basic expectations: the login works, the page loads, the data saves, the bill is accurate.</p><p>The defining characteristic of this layer is asymmetry. You get zero credit for getting it right. Nobody writes a thank-you note because the &#8220;Reset Password&#8221; button worked.</p><p>But if you get it wrong? You lose everything.</p><p>When you fail at this level, the customer emotion is <em>anger</em>. Angry customers submit tickets. They tweet. They churn loudly.</p><p>Many product teams confuse fixing friction with &#8220;delivering value.&#8221; They spend quarters squashing bugs and refactoring code, thinking they are improving the product. They aren&#8217;t. They are simply stopping the bleeding.</p><p>This is the floor. 
You cannot build a strategy here, but you can certainly die here.</p><h2>Level 2: The delivery of outcome (utility)</h2><p>This is the ground floor. This is why the customer hired your product in the first place.</p><p>This layer isn&#8217;t about whether the software works. It&#8217;s about whether the customer works. Did they get the result?</p><p>A tax software user doesn&#8217;t want a delightful interface. They want their taxes filed without an audit. A ride-share user doesn&#8217;t want a chatty driver. They want to be at the airport by 5:00 PM.</p><p>When you fail at this level, the customer emotion isn&#8217;t anger. It is <em>indifference</em>.</p><p>This is the most dangerous emotion in business. An angry customer submits a ticket. An indifferent customer just leaves.</p><p>They don&#8217;t complain because you aren&#8217;t worth the energy. You didn&#8217;t solve their problem, so they drifted to a competitor who would. They churn silently, without warning, without noise in your support channels.</p><p>This is where many product roadmaps fail. Teams ship features users requested, check them off the list, and wonder why retention doesn&#8217;t move. The features worked. They just didn&#8217;t <a href="https://blog.suryas.org/feature-factory-problem-ai-amplifies/">matter</a>.</p><p>The customer got the output. They didn&#8217;t get the <a href="https://blog.suryas.org/outcomes-over-outputs-for-real/">outcome</a>.</p><p><strong>What this means in practice:</strong> Sometimes, delivering the outcome requires adding friction. You might force a user through a complex setup wizard because that is the only way to ensure they get value later. You might require a compliance step that slows onboarding.</p><p>A focused product team prioritizes the outcome over the friction. They recognize that a frustrated user who succeeds will stay. A delighted user who fails will not.</p><p>This is the &#8220;success vs. satisfaction&#8221; trap. 
Optimizing for satisfaction at this layer often means optimizing for the wrong thing. The metric that matters isn&#8217;t &#8220;Did they smile?&#8221; It&#8217;s &#8220;Did they get the job done?&#8221;</p><h3>The silent churn problem</h3><p>Most analytics dashboards are built to catch Level 1 failures. Support tickets spike. Error rates climb. Alerts fire.</p><p>But Level 2 failures are invisible. Users log in. They click around. They complete workflows. From a product instrumentation perspective, everything looks fine.</p><p>Then they stop renewing.</p><p>The gap between &#8220;using the product&#8221; and &#8220;getting the outcome&#8221; is where most SaaS companies lose customers. The product works. It just doesn&#8217;t work for them.</p><p>This is why engagement metrics can mislead. High DAU (daily active users) with high churn means users are trying and failing. They are putting in effort and not getting results.</p><p>The diagnostic question isn&#8217;t &#8220;Are they using it?&#8221; It&#8217;s &#8220;Are they succeeding with it?&#8221;</p><h2>Level 3: Emotional resonance (connection)</h2><p>This is the penthouse. This is how the product makes the user feel. Do they feel smart? Do they feel secure? Do they feel like they are part of a club?</p><p>When you win here, the result is <em>loyalty</em>. Customers who reach this level don&#8217;t just renew. They advocate. They recruit their colleagues. They defend you in comparison threads.</p><p>Think of Notion users who evangelize the tool unsolicited. Or Linear users who badge their workflows with pride. These products deliver outcomes (Level 2), but they also create identity (Level 3).</p><p>The mistake companies make is trying to decorate the penthouse while the basement is flooded. They invest in &#8220;delighters&#8221; (fun animations, gamification, swag, personalized emails) while their core loop is broken.</p><p>You cannot delight a user whose login just timed out. 
You cannot build community with a user who isn&#8217;t getting results.</p><p>Resonance is a multiplier. But anything multiplied by zero is still zero.</p><h2>Diagnosing your product</h2><p>When you look at your backlog or your churn data, use this hierarchy to diagnose the root cause.</p><p><strong>High volume of support tickets?</strong> You have a friction problem. The product is breaking promises.</p><p>Stop building new features. Fix the foundation.</p><p><strong>High churn, low noise?</strong> You have an outcome problem. Customers are leaving silently because you aren&#8217;t solving the core job.</p><p>No amount of UI polish will fix this. You need to rethink the value proposition. Talk to churned users. Ask what they were trying to accomplish and why they left.</p><p>Often, the answer isn&#8217;t &#8220;your product is bad.&#8221; It&#8217;s &#8220;your product wasn&#8217;t for me.&#8221; That is a positioning problem, not a product problem.</p><p><strong>High retention, low growth or advocacy?</strong> You have a resonance problem. People use you because they have to, but they don&#8217;t love you.</p><p>This is where you invest in brand, community, and delighters. But only after the first two layers are solid. Resonance compounds success. It doesn&#8217;t create it.</p><h2>The hierarchy in practice</h2><p>Here is a real pattern I&#8217;ve seen across B2B SaaS companies:</p><p>Year 1: The product barely works. Every customer interaction is firefighting. Support volume is high. Churn is high. The team fixates on stability.</p><p>Year 2: Stability improves. Churn drops slightly. The team celebrates and shifts to &#8220;customer delight&#8221; initiatives. They add gamification, personalized dashboards, a community forum.</p><p>Year 3: Churn creeps back up. But this time, it is quiet. No tickets, no complaints. Customers just don&#8217;t renew. The team is confused because they have been investing in satisfaction.</p><p>What happened? 
They fixed Level 1 (friction) but never validated Level 2 (outcome). Customers could use the product. They just weren&#8217;t getting results from it.</p><p>The team optimized for delight before they earned it.</p><p>This is the trap of treating satisfaction as a single metric. Green scores can hide red outcomes.</p><h2>Where teams get stuck</h2><p>The hardest transition is from Level 1 to Level 2. It requires a shift in how you measure success.</p><p>At Level 1, you measure <em>usage</em>: uptime, error rates, support tickets. At Level 2, you measure <em>results</em>: customer outcomes, workflow completion, ROI.</p><p>Most product teams are better at the former. It is easier to instrument &#8220;Did the button work?&#8221; than &#8220;Did the customer achieve their goal?&#8221;</p><p>But the companies that break through are the ones that obsess over outcomes. They build customer success into the product. They instrument not just clicks, but progress toward the job.</p><p>They stop asking &#8220;Are they using it?&#8221; and start asking &#8220;Are they winning with it?&#8221;</p><h2>The takeaway</h2><p>Most teams optimize for smiles before they have earned them. They invest in delight while customers are still struggling to get basic outcomes.</p><p>The hierarchy matters. Friction first. Outcome second. Resonance only after you have delivered both.</p><p>The question isn&#8217;t whether your customers are satisfied. It&#8217;s which layer you are losing them.</p><p>If they are angry, you have a reliability problem. If they are indifferent, you have a value problem. If they are satisfied but not loyal, you have a resonance problem.</p><p>Fix the right layer. 
The rest will follow.</p>]]></content:encoded></item><item><title><![CDATA[Gemini 3 Proves the NVIDIA Tax Is Optional]]></title><description><![CDATA[Google&#8217;s Gemini 3 landed last week with impressive reviews: frontier-class performance that beats OpenAI and Anthropic except for its agentic code.]]></description><link>https://blog.suryas.org/p/gemini-3-nvidia-tax-optional</link><guid isPermaLink="false">https://blog.suryas.org/p/gemini-3-nvidia-tax-optional</guid><dc:creator><![CDATA[Surya Suravarapu]]></dc:creator><pubDate>Mon, 24 Nov 2025 00:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!dic-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91c481dc-eefd-4bad-840e-93b082c46253_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dic-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91c481dc-eefd-4bad-840e-93b082c46253_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dic-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91c481dc-eefd-4bad-840e-93b082c46253_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!dic-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91c481dc-eefd-4bad-840e-93b082c46253_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!dic-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91c481dc-eefd-4bad-840e-93b082c46253_1024x608.png 1272w, 
https://substackcdn.com/image/fetch/$s_!dic-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91c481dc-eefd-4bad-840e-93b082c46253_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dic-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91c481dc-eefd-4bad-840e-93b082c46253_1024x608.png" width="1024" height="608" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/91c481dc-eefd-4bad-840e-93b082c46253_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dic-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91c481dc-eefd-4bad-840e-93b082c46253_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!dic-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91c481dc-eefd-4bad-840e-93b082c46253_1024x608.png 848w, https://substackcdn.com/image/fetch/$s_!dic-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91c481dc-eefd-4bad-840e-93b082c46253_1024x608.png 1272w, 
https://substackcdn.com/image/fetch/$s_!dic-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91c481dc-eefd-4bad-840e-93b082c46253_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">Gemini 3 &#8212; Google vs. NVDA</figcaption></figure></div><p>Google&#8217;s Gemini 3 landed last week with impressive reviews: <a href="https://blog.google/products/gemini/gemini-3/">frontier-class performance</a> that beats OpenAI and Anthropic except in agentic coding.
Conventional wisdom said Google was lagging behind OpenAI, which remains true in adoption. But on capability, they have a real chance to catch up. The tech press is focused on benchmarks and capabilities.</p><p>They&#8217;re missing the real headline: Google trained these models entirely on TPUs. Zero NVIDIA dependency.</p><p>While competitors debate model quality, Google just decoupled its cost structure from the entire market. OpenAI and Microsoft pay massive margins to Jensen Huang. Google is paying cost-plus to its own hardware division.</p><h2>It&#8217;s the margin war</h2><p>In a commodity market, the low-cost provider eventually wins. AI inference is racing toward commoditization. Google&#8217;s vertical integration is the ultimate cheat code.</p><p>Consider the math. NVIDIA-dependent companies must price APIs to cover energy, cloud operations, and NVIDIA&#8217;s 75% gross margins. Google prices APIs to cover energy and silicon fabrication costs.</p><p>This allows Google to trigger a race to the bottom on pricing that no NVIDIA-dependent competitor can survive. They can effectively subsidize the model indefinitely to protect Search.</p><p>Competitors, meanwhile, burn capital paying premium rents for compute. This is &#8220;compute sovereignty&#8221;: scaling capacity without third-party permission.</p><h2>Amazon proves it&#8217;s not just Google</h2><p>Google isn&#8217;t alone. Amazon is running the same playbook with AWS Trainium chips.</p><p>Anthropic is training Claude 4.x on 500,000 AWS Trainium2 chips, with plans to scale to one million by year-end. AWS claims 30-40% better price-performance than NVIDIA equivalents. That&#8217;s structural cost advantage, not incremental improvement.</p><p>Amazon&#8217;s approach mirrors Google&#8217;s vertical integration but with a different business model. 
Where Google optimizes for its own models, AWS sells compute sovereignty as a service.</p><h2>The software moat is deeper than the hardware</h2><p>Cheap chips don&#8217;t matter if nobody knows how to program them. This is the &#8220;Island Problem&#8221;: great infrastructure that developers won&#8217;t adopt. It remains the single biggest risk to Google&#8217;s strategy.</p><p>NVIDIA is not just a hardware company. They are a software platform disguised as a chip manufacturer. The CUDA ecosystem is the deepest moat in tech.</p><p>Every researcher coming out of Stanford or MIT learns PyTorch on CUDA. Every major open-source library is optimized for CUDA first. Moving a production workload from NVIDIA to TPU isn&#8217;t just a &#8220;recompile.&#8221; It often requires rewriting code in JAX or dealing with XLA compiler friction.</p><p>If Google wins on efficiency but loses on developer mindshare, they end up with the best internal infrastructure that nobody else wants to use.</p><h2>The model is the only leverage left</h2><p>On the flip side, if Google cannot break the CUDA stranglehold, its TPUs remain a private island. In this scenario, their only path to victory is abstracting the hardware away entirely.</p><p>They must force the market to consume Inference APIs, not raw compute.</p><p>If Gemini 3 is sufficiently powerful (and sufficiently cheap), developers won&#8217;t care what silicon it runs on. They will never touch the metal. They will just hit the API endpoint. Google&#8217;s strategy relies on turning the TPU from a developer hurdle into a hidden margin engine.</p><p>The industry is splitting into two distinct camps.</p><p>OpenAI, Meta (training Llama 4 on H100s), and most startups rent NVIDIA capacity. They optimize for speed-to-market and developer familiarity but remain locked into NVIDIA&#8217;s margin structure and the CUDA ecosystem.</p><p>Google with TPUs and Amazon with Trainium optimize for unit economics. 
They&#8217;re betting that price-performance eventually trumps developer familiarity, and that vertical integration becomes the only sustainable path in a commoditizing market.</p><p>The question isn&#8217;t which group is right. It&#8217;s which advantage compounds faster: ecosystem lock-in or cost structure.</p>]]></content:encoded></item><item><title><![CDATA[Dead Time Is Story Time]]></title><description><![CDATA[I stumbled on David Maister&#8217;s 1985 research on waiting psychology while thinking about loading screens.]]></description><link>https://blog.suryas.org/p/dead-time-is-story-time</link><guid isPermaLink="false">https://blog.suryas.org/p/dead-time-is-story-time</guid><dc:creator><![CDATA[Surya Suravarapu]]></dc:creator><pubDate>Sun, 23 Nov 2025 00:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!pWT9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13cb1afb-969f-432e-a1a3-c8b900c4d63f_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pWT9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13cb1afb-969f-432e-a1a3-c8b900c4d63f_1024x608.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pWT9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13cb1afb-969f-432e-a1a3-c8b900c4d63f_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!pWT9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13cb1afb-969f-432e-a1a3-c8b900c4d63f_1024x608.png 848w, 
https://substackcdn.com/image/fetch/$s_!pWT9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13cb1afb-969f-432e-a1a3-c8b900c4d63f_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!pWT9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13cb1afb-969f-432e-a1a3-c8b900c4d63f_1024x608.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pWT9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13cb1afb-969f-432e-a1a3-c8b900c4d63f_1024x608.png" width="1024" height="608" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/13cb1afb-969f-432e-a1a3-c8b900c4d63f_1024x608.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:608,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pWT9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13cb1afb-969f-432e-a1a3-c8b900c4d63f_1024x608.png 424w, https://substackcdn.com/image/fetch/$s_!pWT9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13cb1afb-969f-432e-a1a3-c8b900c4d63f_1024x608.png 848w, 
https://substackcdn.com/image/fetch/$s_!pWT9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13cb1afb-969f-432e-a1a3-c8b900c4d63f_1024x608.png 1272w, https://substackcdn.com/image/fetch/$s_!pWT9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13cb1afb-969f-432e-a1a3-c8b900c4d63f_1024x608.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a><figcaption class="image-caption">While your user waits on a long-running process, use the opportunity to tell a story: educating 
your users about capabilities</figcaption></figure></div><p>I stumbled on <a href="https://davidmaister.com/articles/the-psychology-of-waiting-lines/">David Maister&#8217;s 1985 research on waiting psychology</a> while thinking about loading screens. His foundational principle: occupied time feels shorter than unoccupied time. But he uncovered something deeper. Anxiety, uncertainty, and unexplained delays make waits feel exponentially longer.</p><p>That got me thinking. Most product teams treat waiting as friction to eliminate. Load faster. Reduce checkout steps. Skip the queue.</p><p>But you can&#8217;t eliminate all dead time. Users will wait during loading, onboarding, checkout, processing. The question isn&#8217;t whether they wait, it&#8217;s what you do with that captive attention.</p><h2>Look at how the best products handle waiting</h2><p>As I explored this more, I started noticing patterns everywhere.</p><p>Look at how Disney handles theme park queues. They don&#8217;t just move lines faster. They build interactive games into waiting areas. Characters parade through crowds. Space Mountain&#8217;s twisting paths hide the full queue length, reducing anxiety about what&#8217;s ahead.</p><p>Consider how Asana onboards new users. They don&#8217;t overwhelm people with Gantt charts and dependencies upfront. They start with basic project creation, then gradually reveal advanced features as users gain confidence. The onboarding wait becomes capability building.</p><p>The pattern: people tolerate waits when they understand why they&#8217;re waiting, see progress toward completion, and have something meaningful to occupy the time.</p><h2>What separates high performers from the rest</h2><p>The best SaaS products get dramatically higher onboarding completion than industry averages. 
They use progressive disclosure: revealing one valuable feature at a time, teaching as users engage, turning configuration into discovery.</p><p>E-commerce brands fighting cart abandonment don&#8217;t just optimize checkout speed. They explain shipping timelines transparently, show order progress in real-time, and use recovery emails that provide value (product recommendations, size guidance) alongside the nudge to complete purchase.</p><p>These companies understand that dead time isn&#8217;t just infrastructure to optimize. It&#8217;s experience to design.</p><h2>Why this matters more than you think</h2><p>Page speed matters. Even small delays tank conversions. Amazon, Google, and others have documented how milliseconds of slowdown directly impact revenue.</p><p>But speed alone doesn&#8217;t solve this. Pinterest discovered something interesting: they reduced perceived wait times dramatically without dramatically changing actual load speeds. Better skeleton screens and progress indicators made the experience feel faster. The result was measurable increases in both search traffic and sign-ups.</p><p>The insight: perception matters as much as performance.</p><h2>Where is your dead time?</h2><p>Look for moments where users have no choice but to wait:</p><ul><li><p>Onboarding flows with account setup and data imports</p></li><li><p>Checkout processes with payment verification</p></li><li><p>Loading states during search or complex calculations</p></li><li><p>Processing delays for reports, exports, or deployments</p></li><li><p>Queue systems for high-demand access</p></li></ul><p>Each one is captive attention. Each one is an opportunity to educate, build confidence, reduce anxiety, or deepen engagement.</p><p>The question isn&#8217;t whether your product has dead time. 
It&#8217;s whether you&#8217;re wasting it on spinners, or turning it into story time.</p>]]></content:encoded></item><item><title><![CDATA[The Feature Factory Problem AI Amplifies]]></title><description><![CDATA[Your team just shipped three features this week.]]></description><link>https://blog.suryas.org/p/feature-factory-problem-ai-amplifies</link><guid isPermaLink="false">https://blog.suryas.org/p/feature-factory-problem-ai-amplifies</guid><dc:creator><![CDATA[Surya Suravarapu]]></dc:creator><pubDate>Sat, 22 Nov 2025 00:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LOxS!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd73e65d-dbce-4664-9e74-5ca963688619_1021x1021.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Your team just shipped three features this week. Last quarter, that would have taken a month. AI tools turned your engineers into feature factories. Your designers generate variants in minutes. Your PMs prototype without waiting for engineering resources.</p><p>Everyone&#8217;s celebrating velocity. Who&#8217;s checking if you&#8217;re solving the right problems?</p><h2>Creation velocity isn&#8217;t validation velocity</h2><p>Recent research shows contradictory results on AI&#8217;s impact. Some studies report significant productivity gains, others find developers actually slow down when using AI tools. The pattern emerging is clear: AI accelerates code generation, but delivery stability often decreases and quality concerns rise.</p><p>You can ship faster. But are you learning faster?</p><p>The real gap isn&#8217;t speed. It&#8217;s the mismatch between how quickly you can build and how quickly you can validate what to build. <a href="https://blog.suryas.org/ai-commoditizes-entry-level-amplifies-senior/">As I explored previously</a>, AI amplifies senior strategic judgment while commoditizing tactical execution. 
Without that senior judgment directing what to build, teams become feature factories.</p><p>Most teams obsess over <em>who</em> builds features (PMs? Engineers? Designers?) and <em>how fast</em> they ship. Almost nobody asks whether those features matter to customers.</p><p>The warning signs aren&#8217;t dramatic. In B2B, users don&#8217;t immediately churn. Instead, adoption stays low. Features exist but go unused. Customer success teams field more &#8220;why did you build this?&#8221; questions. Renewal conversations get harder six months later.</p><p>Traditional product-market fit frameworks assume you&#8217;re testing hypotheses at a measured pace. You build, you measure, you learn, you iterate. AI tools break that rhythm. You can generate and ship features faster than your instrumentation can tell you if the previous batch worked.</p><h2>The path-of-least-resistance problem</h2><p>Teams pick AI use cases based on what&#8217;s readily available, not what aligns with strategic goals or delivers meaningful customer impact.</p><p>You have a new AI tool that generates UI components. So you build features that need UI components, whether or not those features solve real user problems. The tool shapes the roadmap instead of customer needs shaping tool selection.</p><p>Amazon&#8217;s principle of <em>working backwards</em> matters more now, not less. Start with customer needs, then figure out what to build and how to build it. Most teams do the opposite: start with what AI tools can do easily, then justify it with assumptions about customer value.</p><p>Research on AI project failures consistently points to the same issue: too much attention on technical capabilities, not enough on actual market needs. Most AI projects fail not because the technology doesn&#8217;t work but because teams built the wrong thing quickly.</p><h2>Speed as tactic, not strategy</h2><p>The companies navigating this well treat speed as a tactic, not a strategy. 
They use AI to accelerate <em>after</em> they&#8217;ve validated direction, not to spray features and hope something sticks.</p><p>This requires different metrics. Not just &#8220;features shipped per sprint&#8221; but &#8220;validated learning per sprint.&#8221; Not just &#8220;time to market&#8221; but &#8220;time to meaningful user engagement.&#8221; Not just &#8220;productivity gains&#8221; but &#8220;<a href="https://blog.suryas.org/outcomes-over-outputs-for-real/">impact per unit of effort</a>.&#8221;</p><p>Product managers still balance the same three dimensions: user needs, business viability, and technical feasibility. What changes is the first pillar. Understanding user needs now means distinguishing between what users actually need and what AI makes easy to build.</p><p>AI can help with validation too. Synthesize user interviews at scale. Analyze support tickets for patterns. Enable rapid prototype testing. But the craft of knowing <em>what</em> to validate and interpreting results still requires human judgment. That&#8217;s where the gap lives.</p><h2>The test that matters</h2><p>Ask what you learned this week, not what you shipped.</p><p>If the answer is &#8220;we shipped five features,&#8221; you&#8217;re measuring activity. If the answer is &#8220;we validated that users struggle with X and confirmed that approach Y doesn&#8217;t solve it,&#8221; you&#8217;re measuring learning. <a href="https://blog.suryas.org/fast-teams-learn-faster/">Fast teams don&#8217;t ship more, they learn faster</a>.</p><p>AI tools are incredible for accelerating learning. Generate multiple variations, test them quickly, and see what works. But only if you&#8217;re testing the right problem.</p><p>Speed is a compounding advantage when paired with direction. Speed without direction just gets you lost faster.</p><p>Even when you&#8217;re building the right features at the right pace, there&#8217;s another bottleneck most teams ignore. AI can generate code quickly. 
Someone still has to ensure it&#8217;s production-ready.</p>]]></content:encoded></item><item><title><![CDATA[AI Commoditizes Entry-Level Work While Amplifying Senior Value]]></title><description><![CDATA[Everyone&#8217;s asking the wrong question about AI and product teams.]]></description><link>https://blog.suryas.org/p/ai-commoditizes-entry-level-amplifies-senior</link><guid isPermaLink="false">https://blog.suryas.org/p/ai-commoditizes-entry-level-amplifies-senior</guid><dc:creator><![CDATA[Surya Suravarapu]]></dc:creator><pubDate>Fri, 21 Nov 2025 00:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LOxS!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd73e65d-dbce-4664-9e74-5ca963688619_1021x1021.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Everyone&#8217;s asking the wrong question about AI and product teams.</p><p>The debate splits into two camps: one believes product managers will code their way to replacing engineers, the other thinks engineers will own strategy and eliminate PMs. Both narratives miss what&#8217;s actually happening. AI isn&#8217;t replacing entire functions. It&#8217;s splitting each function into three tiers, and only one of them is shrinking.</p><h2>The same pattern across three functions</h2><p>Look at what&#8217;s happening to engineers first. Employment for software developers aged 22-25 has <a href="https://digitaleconomy.stanford.edu/publications/canaries-in-the-coal-mine/">declined nearly 20%</a> from its peak in late 2022, according to Stanford&#8217;s Digital Economy Lab. Computer engineering graduates now face a <a href="https://www.thecollegefix.com/computer-engineering-grads-face-double-the-unemployment-rate-of-art-history-majors/">7.5% unemployment rate</a>, one of the highest across majors.</p><p>What&#8217;s getting automated? Boilerplate code. Unit tests. API maintenance. 
The tasks companies used to assign to junior developers to help them learn.</p><p>Now look at designers. Many studios used to hire junior designers specifically for wireframing. Those roles are shrinking. The <a href="https://www.stateofaidesign.com/">State of AI in Design Report</a> found that 89% of designers improved their workflow with AI this year, but the improvements concentrate on production work: variant generation, copy filling, visual polish.</p><p>Product managers see the same split. <a href="https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/how-generative-ai-could-accelerate-software-product-time-to-market">McKinsey found</a> that generative AI improved PM productivity by 40%, but the gains came from automating tactical work: user stories, performance reports, backlog maintenance, PRD drafting. The strategic work, the judgment calls, the stakeholder dynamics? Still human.</p><h2>Three tiers emerging</h2><p>The pattern is clear across all three functions. Entry-level tactical work gets commoditized. Senior strategic work gets amplified. Mid-level roles transform into something new.</p><p><strong>Tier 1: Commoditized</strong> Junior engineers who write boilerplate code. Junior designers who create wireframes. Junior PMs who draft user stories. AI tools handle these tasks faster and often as well as someone in their first year on the job.</p><p>Marc Benioff <a href="https://sfstandard.com/2025/02/27/salesforce-marcbenioff-layoffs-tech-agents/">announced</a> Salesforce will hire &#8220;no new engineers&#8221; in 2025, citing AI-driven productivity gains. Anthropic CEO Dario Amodei warned that AI could wipe out half of entry-level white-collar jobs. 
But AWS CEO Matt Garman <a href="https://www.entrepreneur.com/business-news/amazon-web-services-ceo-stop-replacing-workers-with-ai/496087">pushed back</a>, calling the idea of replacing junior developers with AI &#8220;one of the dumbest things I&#8217;ve ever heard.&#8221;</p><p>The debate reveals the tension: AI can do the tasks, but those tasks serve a purpose beyond just getting work done. They&#8217;re how people learn.</p><p><strong>Tier 2: Transformed</strong> Mid-level roles aren&#8217;t disappearing. They&#8217;re morphing into hybrid positions that require both AI literacy and deep domain expertise. These are the people who know enough to guide AI tools effectively and spot when AI outputs miss the mark.</p><p>The new job descriptions reflect this: &#8220;AI Orchestrators,&#8221; &#8220;Product Engineers,&#8221; &#8220;Full-Stack PMs.&#8221; People who move fluidly between strategy and execution, using AI as a force multiplier rather than a replacement.</p><p><strong>Tier 3: Amplified</strong> Senior roles gain value. System architecture. Product vision. Strategic design decisions. User empathy. These capabilities matter more, not less, in an AI-enabled world.</p><p>Here&#8217;s why: the more skilled you are at your craft, the better results you get with AI. A senior engineer can spot architectural flaws in AI-generated code that a junior might miss. A seasoned designer knows when AI-generated variants sacrifice usability for aesthetics. An experienced PM can tell when AI-drafted requirements miss the strategic context.</p><h2>The hollowed-out career ladder</h2><p>This creates a systemic problem nobody wants to talk about: if entry-level roles commoditize but senior expertise remains valuable, how does anyone become senior?</p><p>You need plenty of seniors at the top. AI tools handle grunt work at the bottom. 
But there are very few juniors in the middle learning the craft.</p><p>This threatens the long-term talent pipeline across product, engineering, and design. Companies benefit from AI productivity gains today while quietly eroding their ability to develop senior talent tomorrow.</p><p>The skills that make someone valuable at Tier 3 develop through years of doing Tier 1 work. You learn system architecture by first building components. You develop product sense by first writing user stories and seeing what ships versus what sits in the backlog. You understand good design by first creating wireframes and getting feedback.</p><p>Remove the learning ground, and you cut off the path to expertise. The question of whether <a href="https://blog.suryas.org/curiosity-beats-tenure-in-age-of-ai/">curiosity beats tenure</a> in this environment isn&#8217;t academic: it determines whether companies can develop the senior talent they&#8217;ll need.</p><h2>What this means in practice</h2><p>The conversation shouldn&#8217;t be &#8220;will AI replace PMs or engineers or designers?&#8221; It should be: &#8220;How do we structure learning when tactical work gets automated?&#8221;</p><p>Some companies respond by raising the bar for entry-level roles. Junior positions now require AI literacy plus traditional fundamentals. That solves the immediate hiring problem but makes it harder for people to break into the field.</p><p>Other companies may be experimenting with new mentorship models: pairing early-career people directly with senior staff and using AI tools to accelerate their learning rather than replace their contributions.</p><p>The companies getting this right recognize that AI doesn&#8217;t just change how work gets done. It changes how expertise develops. They&#8217;re designing career paths that acknowledge both the productivity gains from AI and the human judgment that AI amplifies but cannot replace.</p><p>But commoditization is only part of the story. 
The bigger question is what teams are actually building with all this newfound speed. Shipping 3x faster sounds great. Whether that speed translates to customer value is a different question, one I&#8217;ll tackle next.</p><p><strong>Read next:</strong> <a href="https://blog.suryas.org/feature-factory-problem-ai-amplifies/">The Feature Factory Problem AI Amplifies</a></p>]]></content:encoded></item><item><title><![CDATA[Why AI Agents Fail Today Despite the Hype]]></title><description><![CDATA[The agentic AI hype promises autonomous decision-makers that replace employees or dramatically boost efficiency.]]></description><link>https://blog.suryas.org/p/why-ai-agents-fail-today-despite-the-hype</link><guid isPermaLink="false">https://blog.suryas.org/p/why-ai-agents-fail-today-despite-the-hype</guid><dc:creator><![CDATA[Surya Suravarapu]]></dc:creator><pubDate>Thu, 20 Nov 2025 00:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LOxS!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd73e65d-dbce-4664-9e74-5ca963688619_1021x1021.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The agentic AI hype promises autonomous decision-makers that replace employees or dramatically boost efficiency. The reality, according to Maria Sukhareva (Principal AI Expert at Siemens) in <a href="https://www.llmwatch.com/p/why-ai-agents-disappoint">&#8220;Why AI Agents Disappoint,&#8221;</a> is that general-purpose AI agents don&#8217;t work for most real-world business use cases.</p><p>The <a href="https://arxiv.org/abs/2307.13854">WebArena benchmark</a> proves the gap quantitatively. 
Researchers created realistic web environments (e-commerce sites, forums, development platforms) and asked GPT-4-based agents to complete end-to-end tasks like &#8220;Find the cheapest phone case and email me the link.&#8221; Success rate: 14.41%.</p><h2>The architecture is brittle</h2><p>Current agents follow a <a href="https://arxiv.org/abs/2210.03629">ReAct cycle</a> (Plan, Act, Observe, Repeat) as I <a href="https://blog.suryas.org/thinking-through-agentic-loops/">explored previously</a>. This looks like reasoning, but it&#8217;s sequential token prediction choosing from pre-defined tools. When one step fails, the entire chain collapses.</p><p>The problem isn&#8217;t the architecture alone. It&#8217;s what happens when reality deviates from the plan.</p><p>If an agent struggles through a task and eventually succeeds, it retains zero memory of that success. Next session, it repeats the same mistakes. There&#8217;s no mechanism to update its knowledge base or behavior from experience. (Research like <a href="https://blog.suryas.org/early-experience-agent-training/">Meta&#8217;s Early Experience approach</a> explores ways agents could learn from their own rollouts, but these methods aren&#8217;t yet production-ready.)</p><p>Error propagation compounds this. A single wrong click or a misread file in step two ruins the entire workflow. Attempted fixes like self-reflection or multi-agent debate act as band-aids. They sometimes amplify false reasoning rather than correct it.</p><h2>Why coding works</h2><p>AI-assisted coding (GitHub Copilot, Claude Code) is the exception. Agents genuinely deliver value here.</p><p>The environment is constrained. The IDE provides clear boundaries. The data is primarily text and code. Feedback loops are immediate. 
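</p><p>The loop itself is worth seeing concretely. A minimal sketch of the Plan-Act-Observe cycle as it runs in a coding environment; <code>propose_fix</code> and <code>run_tests</code> are hypothetical stand-ins for the model call and the test runner, and only the loop structure is the point:</p>

```python
# Hedged sketch of a ReAct-style loop in a coding environment.
# propose_fix and run_tests are hypothetical stand-ins for the LLM call
# and the execution environment; only the loop structure is the point.

def react_loop(task, propose_fix, run_tests, max_steps=5):
    """Plan -> Act -> Observe -> Repeat until tests pass or the budget runs out."""
    observation = None
    for _ in range(max_steps):
        action = propose_fix(task, observation)  # Plan + Act: one model call
        observation = run_tests(action)          # Observe: immediate, unambiguous feedback
        if observation["passed"]:
            return action
    return None  # the chain collapsed; there is no recovery mechanism
```

<p>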
You run the code, see if it works, and adjust.</p><p>This reveals the requirements for agents to succeed elsewhere: constrained environments, homogeneous data types, and fast feedback loops.</p><p>Real-world business tasks fail all three tests. A financial audit requires juggling emails, database logs, PDF invoices, and regulatory texts simultaneously. When a document is missing or unreadable, agents hallucinate results rather than problem-solve like humans do.</p><p>Business workflows assume human common sense. Nobody emails a contact ten times in one minute. Nobody needs explicit instructions not to delete the production database. Agents require rigid IF&#8230;THEN rules for everything. They can&#8217;t handle dynamic obstacles (a file moved to a new database, a contact out of office) without being explicitly programmed for each scenario.</p><h2>We&#8217;re a decade away from autonomous?</h2><p>The market hype suggests we&#8217;re approaching the &#8220;Observer&#8221; level, where machines work fully autonomously. Sukhareva argues we&#8217;re actually at the &#8220;Collaborator&#8221; level where humans guide machines. Citing Andrej Karpathy, she estimates it will take at least ten years to fix these fundamental cognitive issues.</p><p>The gap isn&#8217;t just technical. It&#8217;s architectural. Current agents lack the cognitive structure for proactive learning, robust error recovery, and multimodal reasoning.</p><p>Companies investing in &#8220;agentic AI&#8221; based on hype videos and demos should understand what they&#8217;re actually buying: brittle sequential machines that work in constrained environments with immediate feedback. Not autonomous decision-makers.</p><p>The coding success proves agents can work when the environment matches their capabilities. The question for product teams isn&#8217;t whether to use agents. 
It&#8217;s whether your use case looks more like an IDE or like a financial audit.</p>]]></content:encoded></item><item><title><![CDATA[Infrastructure Redundancy Stops Before the CDN]]></title><description><![CDATA[Azure, AWS, and Cloudflare all experienced significant outages in recent weeks.]]></description><link>https://blog.suryas.org/p/infrastructure-redundancy-cdn</link><guid isPermaLink="false">https://blog.suryas.org/p/infrastructure-redundancy-cdn</guid><dc:creator><![CDATA[Surya Suravarapu]]></dc:creator><pubDate>Wed, 19 Nov 2025 00:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LOxS!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd73e65d-dbce-4664-9e74-5ca963688619_1021x1021.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Azure, AWS, and Cloudflare all experienced significant outages in recent weeks. Different providers, same story: configuration changes triggering cascading failures across infrastructure that&#8217;s supposed to be resilient.</p><p>The interesting part isn&#8217;t that infrastructure fails. It&#8217;s what gets exposed about the gap between architected resilience and actual resilience.</p><h2>The multi-cloud gap</h2><p>Companies might use AWS for one application and Azure for another, but any given application typically runs on a single cloud. There&#8217;s redundancy within that provider (multiple regions, availability zones), but the provider itself is treated as permanent infrastructure.</p><p>Then Cloudflare goes down and everything stops.</p><p>The pattern shows up consistently: sophisticated redundancy for compute, single-provider dependency for CDNs, DNS, and edge infrastructure.
Like installing a backup generator but leaving your electrical panel connected to a single grid.</p><h2>Configuration as failure mode</h2><p>All three outages share the same root cause pattern: configuration changes, not hardware failures or attacks.</p><p>Azure&#8217;s outage started with a networking configuration that created inconsistent state. AWS&#8217;s disruption began when two automated systems tried to update the same database simultaneously. Cloudflare&#8217;s global failure came from a database permissions change that corrupted the Bot Management system.</p><p>Infrastructure complexity creates failure modes that are hard to predict. Routine configuration changes can trigger cascading failures across regions or global networks.</p><p>This shifts the threat model. Traditional redundancy focuses on external threats: datacenter failures, provider outages, hardware degradation. But when configuration complexity is the primary failure mode, redundancy alone doesn&#8217;t solve it. You need loose coupling so failures don&#8217;t cascade.</p><h2>The CDN blindspot</h2><p>Multi-CDN strategies exist. Load balancing across providers, health checks, automated failover: these are solved technical problems. CloudFront, <a href="http://Bunny.net">Bunny.net</a>, Akamai, Azure CDN all offer alternatives.</p><p>What&#8217;s less common is treating CDN infrastructure with the same redundancy thinking applied to compute. When Cloudflare went down, companies with sophisticated multi-cloud architectures went offline just as completely as companies running on a single EC2 instance.</p><p>The gap shows up in infrastructure assumptions. Most organizations cluster around accidental multi-cloud. Different teams chose different providers over time, creating redundancy architectures that exist on paper but haven&#8217;t been tested under actual failure conditions.</p><p>What changes this is intentionality. 
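</p><p>The mechanics really are the solved part. A health-check-and-failover policy of the kind mentioned above reduces to a few lines; the provider names and the <code>is_healthy</code> probe here are hypothetical, and production setups do this at the DNS or load-balancer layer:</p>

```python
# Hedged sketch of multi-CDN failover: serve from the first healthy provider
# in priority order. Provider names and the is_healthy probe are hypothetical.

PRIORITY = ["primary-cdn", "secondary-cdn", "origin-direct"]  # hypothetical names

def pick_cdn(providers, is_healthy):
    """Return the first provider that passes its health check."""
    for provider in providers:
        if is_healthy(provider):
            return provider
    # All providers down: surface a distinct error so the caller can
    # degrade gracefully instead of failing catastrophically.
    raise RuntimeError("no healthy CDN provider")
```

<p>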
Some organizations have made explicit decisions about where redundancy matters and where it doesn&#8217;t. They&#8217;ve calculated the cost of downtime for different parts of their product and architected accordingly.</p><p>They&#8217;ve also made a harder decision: accepting that some level of downtime is inevitable and building products that degrade gracefully rather than fail catastrophically.</p><p>As infrastructure complexity increases, new failure modes emerge faster than old ones get solved. The organizations that navigate this aren&#8217;t the ones with maximum redundancy. They&#8217;re the ones who&#8217;ve thought clearly about what they&#8217;re optimizing for and built systems that fail gracefully.</p>]]></content:encoded></item><item><title><![CDATA[World Models Teach AI to See]]></title><description><![CDATA[On Lenny&#8217;s recent podcast, Fei-Fei Li called LLMs &#8220;wordsmiths in the dark&#8221;: eloquent but ungrounded in physical reality.]]></description><link>https://blog.suryas.org/p/world-models-learning-to-see</link><guid isPermaLink="false">https://blog.suryas.org/p/world-models-learning-to-see</guid><dc:creator><![CDATA[Surya Suravarapu]]></dc:creator><pubDate>Tue, 18 Nov 2025 00:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LOxS!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd73e65d-dbce-4664-9e74-5ca963688619_1021x1021.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>On <a href="https://www.lennysnewsletter.com/p/the-godmother-of-ai">Lenny&#8217;s recent podcast</a>, Fei-Fei Li called LLMs &#8220;wordsmiths in the dark&#8221;: eloquent but ungrounded in physical reality. 
The phrase resonated because it captures exactly what language models can&#8217;t do: understand space, navigate environments, predict physics, or reason about the 3D world we inhabit.</p><p>I&#8217;ve been following world models with growing curiosity. The contrast with LLMs is stark. Where language models learn statistical patterns from text, world models learn by watching: absorbing spatial relationships, temporal dynamics, and cause-effect from video and sensory data. They&#8217;re designed to answer the question LLMs fundamentally can&#8217;t: what happens next in physical space?</p><h2>What&#8217;s happening now</h2><p>There is a clear acceleration in 2024-2025. Google&#8217;s Genie 2 generates playable 3D worlds from a single image. NVIDIA&#8217;s Cosmos trained on 20 million hours of real-world footage, creating physics-aware simulations that companies like Uber and XPENG are deploying.</p><p>Meta&#8217;s V-JEPA 2 learns 5-6x more efficiently by predicting abstract representations rather than raw pixels.</p><p>Fei-Fei Li&#8217;s World Labs just launched Marble, the first commercial world model product. The technology she&#8217;s building toward: <em>spatial intelligence</em>, AI that understands the physical world the way humans do.</p><p>In a <a href="https://illuminem.com/illuminemvoices/hes-been-right-about-ai-for-40-years-now-he-thinks-everyone-is-wrong">recent WSJ profile</a>, Yann LeCun (Meta&#8217;s Chief AI Scientist) is telling PhD candidates to focus on world models instead of LLMs. His prediction: world models could replace the LLM paradigm within 3-5 years.</p><h2>What this could unlock</h2><p>Autonomous vehicles are the obvious application, but I&#8217;m watching a broader pattern. Robotics companies use world models as virtual simulators, training robots in generated scenarios before deploying to reality. Industrial automation benefits from synthetic data generation for rare edge cases.</p><p>The shift runs deeper. 
LLMs process language, world models process reality. One understands how to describe gravity, the other understands falling.</p><h2>Where this seems to be headed</h2><p>This feels like 2018-era LLMs: early, expensive, limited to well-funded teams. Genie 2 generates 10-60 seconds of stable video. Cosmos requires massive GPU clusters for training. The sim-to-real gap remains a real challenge: small simulation differences cause real-world failures in safety-critical systems.</p><p>But the trajectory is visible. Google formed a new team for world simulation models. NVIDIA is making Cosmos open-source to accelerate the robotics community.</p><p>For most companies, there&#8217;s no tangible bet to make yet. This technology isn&#8217;t accessible enough for broad experimentation. But it&#8217;s worth following closely.</p><p>World models feel like they&#8217;re approaching their ChatGPT moment. GPT-3 existed for years before ChatGPT made it accessible enough to spark the LLM application wave. When world models hit that inflection point, the teams that have been tracking the space will know where to tinker first.</p><p>LLMs taught AI to speak. 
World models are teaching it to see.</p>]]></content:encoded></item><item><title><![CDATA[AI Agents Multiply Work and Eliminate Jobs Simultaneously]]></title><description><![CDATA[Traditional automation follows a script.]]></description><link>https://blog.suryas.org/p/ai-agents-multiply-work-eliminate-jobs</link><guid isPermaLink="false">https://blog.suryas.org/p/ai-agents-multiply-work-eliminate-jobs</guid><dc:creator><![CDATA[Surya Suravarapu]]></dc:creator><pubDate>Mon, 17 Nov 2025 00:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LOxS!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd73e65d-dbce-4664-9e74-5ca963688619_1021x1021.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Traditional automation follows a script. You map the steps, define the logic, and the system executes. If-then-else at scale.</p><p>AI agents are different. They have decision-making authority. You give them a goal, and they figure out the path, making choices on the fly based on context. That shift from scripted execution to delegated judgment changes what happens to your workload.</p><h2>What the data shows</h2><p>A <a href="https://www.faros.ai/blog/ai-software-engineering">recent study from Faros AI</a> analyzed over 10,000 developers across 1,255 teams to understand what happens when AI adoption runs high. The productivity story looks clear at first: teams completed 21% more tasks and merged 98% more pull requests.</p><p>But the same data revealed the downstream effects. PR review time increased 91%. Bug rates went up 9%. The agents didn&#8217;t just speed up the work developers were already doing. They revealed new work that hadn&#8217;t existed before.</p><p>Someone has to review what the agent produced. Someone has to validate the decisions it made. Someone has to integrate its output with the existing codebase.
The cognitive load didn&#8217;t disappear: it <a href="https://blog.suryas.org/from-executor-to-manager-ai-work-shift/">moved downstream and multiplied</a>.</p><h2>Two different reads of the same pattern</h2><p>One interpretation: this is Jevons Paradox for knowledge work. When you make something more efficient, consumption increases rather than decreases. The efficiency gains are real, but they&#8217;re not reducing the total work in the system. They&#8217;re expanding what&#8217;s possible, which creates new categories of work that didn&#8217;t exist before. Agent management. Agent training. Quality control for autonomous decisions.</p><p>The other interpretation: <a href="https://fortune.com/2025/05/28/anthropic-ceo-warning-ai-job-loss/">Anthropic CEO Dario Amodei warned</a> that AI could eliminate roughly 50% of all entry-level white-collar jobs within the next one to five years. His logic centers on a shift from <em>augmentation</em> (AI helps people do jobs) to <em>automation</em> (AI does the job). If agents can handle the execution work, you don&#8217;t need as many people doing it. The efficiency doesn&#8217;t create more work. It reallocates the dollars to different problems.</p><h2>The core tension</h2><p>Both patterns are showing up simultaneously. The Faros data demonstrates work multiplication downstream. The Anthropic warning points to headcount reduction upstream, particularly at entry-level roles where tasks are more structured and agent-friendly.</p><p>It&#8217;s too early to tell which dynamic dominates, or whether they operate in parallel across different types of work. But the pattern is clear enough to plan for. If you&#8217;re deploying agents expecting simple headcount reduction, you might be underestimating the new work they create. 
If you&#8217;re assuming efficiency always expands the team, you might be overestimating the number of people you&#8217;ll need to manage what agents produce.</p><h2>The shifting baseline</h2><p>Here&#8217;s what complicates both interpretations: the definition of &#8220;entry-level&#8221; is moving. What we consider entry-level today might be three notches higher in eighteen months. College graduates entering the workforce with AI fluency might start at what we&#8217;d call mid-level today, because the baseline expectations have shifted.</p><p>The agents aren&#8217;t just changing how much work gets done or who does it. They&#8217;re changing what counts as foundational capability. If that&#8217;s true, <a href="https://blog.suryas.org/level-up-or-left-behind/">continuous leveling up</a> isn&#8217;t optional. It&#8217;s the only defense available. The landscape is changing too fast for static skillsets to hold value.</p><p>What new work will your agents reveal that you can&#8217;t see yet? And what work will disappear faster than you expect?</p>]]></content:encoded></item><item><title><![CDATA[Context Engineering Turns AI Agents From Goldfish Into Assistants]]></title><description><![CDATA[Your AI agent is brilliant.]]></description><link>https://blog.suryas.org/p/context-engineering-sessions-memory</link><guid isPermaLink="false">https://blog.suryas.org/p/context-engineering-sessions-memory</guid><dc:creator><![CDATA[Surya Suravarapu]]></dc:creator><pubDate>Sun, 16 Nov 2025 00:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!ctnO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92ab67f4-cc55-4191-9c6c-890c213fb0f5_486x565.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Your AI agent is brilliant. It can write code, analyze documents, and answer complex questions with remarkable sophistication.</p><p>It is also a goldfish. 
Every conversation starts from scratch. Every user is a stranger. Every context is new.</p><p>Google just released a <a href="https://www.kaggle.com/whitepaper-context-engineering-sessions-and-memory">whitepaper on context engineering</a> that tackles this fundamental problem. The paper introduces a systematic framework for making LLM agents stateful using two core primitives: Sessions and Memory.</p><p>The framework formalizes the architectural patterns that separate toy demos from production AI systems.</p><h2>The statelessness problem</h2><p>LLMs are fundamentally stateless. Outside their training data, their awareness is confined to the immediate context window of a single API call.</p><p>You can craft the perfect prompt, tune every parameter, and still end up with an agent that forgets the user&#8217;s name between conversations. The model doesn&#8217;t remember. It doesn&#8217;t learn. It processes each turn in isolation.</p><p>Context Engineering is the discipline of dynamically assembling and managing all information within that context window to make agents stateful and intelligent. It is prompt engineering evolved: shifting from crafting static instructions to constructing the entire state-aware payload for every turn.</p><p>The business impact is direct. Stateless agents can&#8217;t personalize. They can&#8217;t maintain coherent multi-turn workflows. 
They can&#8217;t reduce repetitive questions or remember user preferences.</p><div class="captioned-image-container"><figure><img src="https://substackcdn.com/image/fetch/$s_!ctnO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92ab67f4-cc55-4191-9c6c-890c213fb0f5_486x565.png" width="486" height="565" alt="" loading="lazy"></figure></div><h2>Sessions: The temporary workbench</h2><p>A Session is the container for a single, continuous conversation. Think of it as the workbench where the agent does its immediate work.</p><p>Every Session contains two parts. First, the chronological event log (user inputs, agent responses, tool outputs). Second, the temporary working memory or state (like items in a shopping cart or the current step in a workflow).</p><p>The core operational challenge is managing growing conversation history. Long context creates four production problems: exceeding the model&#8217;s context window limit, escalating API costs (charged per token), increasing latency, and degrading model performance (&#8220;context rot&#8221;).</p><p>This is where compaction strategies become critical. Simple approaches truncate old messages after a token limit.
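</p><p>A truncation policy like that fits in a few lines. A minimal sketch, with token counts approximated by whitespace splitting for illustration (real systems use the model&#8217;s tokenizer):</p>

```python
# Hedged sketch of simple context compaction: keep only the most recent
# messages that fit within a token budget. len(msg.split()) is a crude
# stand-in for a real tokenizer.

def truncate_history(messages, max_tokens):
    """Drop the oldest messages once the history exceeds max_tokens."""
    kept, used = [], 0
    for msg in reversed(messages):   # walk newest -> oldest
        cost = len(msg.split())
        if used + cost > max_tokens:
            break                    # everything older than this is dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))      # restore chronological order
```

<p>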
Sophisticated systems use recursive summarization, where an LLM periodically condenses older conversation segments into compact summaries.</p><p>Here&#8217;s the trade-off in practice. A customer support agent handling 50 turns would send thousands of tokens per request without compaction. With recursive summarization (triggered every 20 turns), the system replaces verbose dialogue with a summary: &#8220;User confirmed Order 456 had missing item, requested refund.&#8221; Context preserved, costs and latency slashed.</p><h2>Memory: The long-term filing cabinet</h2><p>If Sessions are the temporary desk, Memory is the meticulously organized filing cabinet. This is where the personalization value lives.</p><p>Memory captures and consolidates key information across multiple sessions. It transforms agents from chatbots that reset every conversation into assistants that remember your preferences, context, and history.</p><p>The architecture is typically a combination of vector databases (for semantic similarity and unstructured facts) and knowledge graphs (for structured relationships and reasoning). But the real sophistication is in how memories get created and maintained.</p><p>Memory generation is an LLM-driven ETL pipeline. Extract meaningful content from conversations. Transform and consolidate it by handling conflicts and duplicates. Load the refined knowledge into persistent storage.</p><p>The consolidation stage is where most systems fail. Without it, memory becomes a noisy, contradictory log. With proper consolidation, the system compares new insights against existing memories, decides whether to update, create, or delete entries, and actively prunes stale information.</p><h2>The critical distinction: Memory vs RAG</h2><p>Product teams often conflate Memory with Retrieval-Augmented Generation (RAG), but they serve fundamentally different roles.</p><p>RAG injects static, factual knowledge from external sources (PDFs, wikis, documentation). 
It makes the agent an expert on facts. The data is typically shared across all users and read-only.</p><p>Memory curates dynamic, user-specific context derived from conversation. It makes the agent an expert on the user. The data must be highly isolated per user to prevent leaks.</p><p>Think of RAG as the research librarian providing universal knowledge. Memory is the personal assistant who knows your preferences, history, and context. Both are essential, but they operate at different layers of the system.</p><p>Here&#8217;s the strategic framework at a glance:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!7PuF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcda1527-c2d2-40f8-810b-878a1fdb8173_824x226.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!7PuF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcda1527-c2d2-40f8-810b-878a1fdb8173_824x226.png 424w, https://substackcdn.com/image/fetch/$s_!7PuF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcda1527-c2d2-40f8-810b-878a1fdb8173_824x226.png 848w, https://substackcdn.com/image/fetch/$s_!7PuF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcda1527-c2d2-40f8-810b-878a1fdb8173_824x226.png 1272w, https://substackcdn.com/image/fetch/$s_!7PuF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcda1527-c2d2-40f8-810b-878a1fdb8173_824x226.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!7PuF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcda1527-c2d2-40f8-810b-878a1fdb8173_824x226.png" width="824" height="226" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fcda1527-c2d2-40f8-810b-878a1fdb8173_824x226.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:226,&quot;width&quot;:824,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:38332,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://suryaps.substack.com/i/180290360?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcda1527-c2d2-40f8-810b-878a1fdb8173_824x226.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!7PuF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcda1527-c2d2-40f8-810b-878a1fdb8173_824x226.png 424w, https://substackcdn.com/image/fetch/$s_!7PuF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcda1527-c2d2-40f8-810b-878a1fdb8173_824x226.png 848w, https://substackcdn.com/image/fetch/$s_!7PuF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcda1527-c2d2-40f8-810b-878a1fdb8173_824x226.png 1272w, https://substackcdn.com/image/fetch/$s_!7PuF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffcda1527-c2d2-40f8-810b-878a1fdb8173_824x226.png 1456w" sizes="100vw" 
loading="lazy"></picture><div></div></div></a></figure></div><h2>Production challenges you can&#8217;t ignore</h2><p>The whitepaper highlights three critical production risks that context engineering must address.</p><p><strong>Latency and blocking UX.</strong> Memory generation requires expensive LLM calls and database writes. If this runs synchronously (blocking the user response), the experience becomes unacceptably slow. The solution is to handle memory operations asynchronously as background processes after responding to the user.</p><p><strong>Data isolation and privacy.</strong> Sessions and Memory must enforce strict per-user isolation. A user must never be able to access another&#8217;s conversation data or memories. PII should be redacted from session data before persistence to mitigate breach risks.</p><p><strong>Memory poisoning.</strong> Malicious users can attempt to corrupt the knowledge base by feeding false information. Safeguards like validation, sanitization, and trust scoring (memory provenance) must be employed before committing data to long-term memory.</p><p>These aren&#8217;t edge cases. They&#8217;re the difference between a demo that works in development and a system that scales in production.</p><h2>Memory provenance: The trust layer</h2><p>Not all memories are equally reliable. Some come from explicit user statements (&#8220;I prefer aisle seats&#8221;). Others are inferred from implicit behavior or bootstrapped from external systems like CRMs.</p><p>Memory provenance is the detailed record of a memory&#8217;s origin and history. Each memory carries metadata about its source, confidence score, and how that confidence changes over time (increasing with corroboration, decaying with age).</p><p>During consolidation, when new information conflicts with existing memories, provenance establishes a hierarchy of trust. A fact from a high-trust CRM system might override casual user dialogue. 
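One way to picture that hierarchy in code (the source labels and trust weights below are invented for illustration; they are not from the whitepaper):</p>

```python
# Hypothetical sketch of provenance-aware conflict resolution during memory
# consolidation. Source labels and trust weights are invented for illustration.
from dataclasses import dataclass

SOURCE_TRUST = {"crm": 0.9, "explicit_user": 0.8, "inferred": 0.4}

@dataclass
class MemoryEntry:
    key: str            # what the memory is about, e.g. "seat_preference"
    value: str
    source: str         # provenance: where the memory came from
    confidence: float   # 0.0-1.0; corroboration raises it, age decays it

def evidence(m: MemoryEntry) -> float:
    """Weight raw confidence by how much we trust the memory's source."""
    return m.confidence * SOURCE_TRUST.get(m.source, 0.1)

def resolve_conflict(existing: MemoryEntry, incoming: MemoryEntry) -> MemoryEntry:
    """On a conflict over the same key, keep the better-evidenced memory."""
    return incoming if evidence(incoming) > evidence(existing) else existing
```

<p>Here a CRM-sourced fact at 0.6 confidence outweighs an inferred preference at 0.9, matching the trust hierarchy above.</p><p>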
At inference time, the confidence scores are injected into the prompt, allowing the LLM to weigh evidence and make nuanced decisions.</p><p>This is the difference between a memory system that amplifies errors and one that becomes more accurate over time.</p><h2>Procedural memory: Learning how, not just what</h2><p>Most memory systems focus on declarative memory (facts and events). The whitepaper emphasizes procedural memory: the agent&#8217;s knowledge of skills and workflows.</p><p>Procedural memory captures the correct sequence of tool calls, the optimal strategy for recurring tasks, or the playbook for handling specific scenarios. It is extracted from successful interactions and distilled into reusable patterns.</p><p>The value is online adaptation. Instead of the slow, expensive process of fine-tuning model weights offline, procedural memory provides fast adaptation by injecting the right plan into the context via in-context learning.</p><p>For product teams, this means agents can learn and improve their workflows without requiring model retraining. That&#8217;s a significant operational advantage.</p><h2>What this means in practice</h2><p>For teams building stateful AI agents, the whitepaper provides a clear architectural roadmap.</p><p>Session management starts with conversation history persistence and compaction strategies. Simple token-based truncation handles basic cases. More sophisticated systems use recursive summarization to preserve context while controlling costs. Storage must be robust, retrieval fast, and per-user isolation strict.</p><p>Memory systems layer in gradually. Declarative memory (user preferences, key facts) provides the foundation. Asynchronous memory generation prevents blocking latency. Consolidation logic handles conflicts and prunes stale data.</p><p>Provenance tracking establishes trust and enables conflict resolution.</p><p>The architectural choices matter. 
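As one illustration, the asynchronous memory generation described above can be sketched with a background worker (hypothetical names; a real system would swap the placeholders for an LLM extraction call and a database write):</p>

```python
# Hypothetical sketch: answer the user first, persist memories off the
# request path. Placeholders stand in for the LLM extractor and the DB write.
import queue
import threading

memory_queue: queue.Queue = queue.Queue()
memory_store: list[str] = []

def memory_worker() -> None:
    """Background consumer: drain transcripts and persist extracted memories."""
    while True:
        transcript = memory_queue.get()
        if transcript is None:                  # shutdown sentinel
            break
        # Placeholder for LLM extraction + database write.
        memory_store.append(f"summary of: {transcript}")
        memory_queue.task_done()

def handle_turn(user_msg: str) -> str:
    """Request path: reply immediately, enqueue memory work without blocking."""
    reply = f"echo: {user_msg}"                 # placeholder model response
    memory_queue.put(user_msg)                  # fire-and-forget
    return reply

threading.Thread(target=memory_worker, daemon=True).start()
```

<p>The user-visible reply returns immediately; the consolidation work lands in the store moments later.</p><p>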
Teams treating context engineering as foundational infrastructure get personalized, reliable agents. Those treating it as an afterthought face escalating costs, latency issues, and degraded user trust.</p><h2>The shift that matters</h2><p>Context Engineering represents the maturation of AI agent development. It moves the focus from crafting clever prompts to building robust systems that manage state, persist knowledge, and adapt over time.</p><p>The primitives are clear: Sessions for immediate coherence, Memory for long-term personalization. The challenges are well-defined: managing context length, ensuring data isolation, building trust through provenance.</p><p>Google&#8217;s whitepaper formalizes what production AI teams have been learning through experience. Not every use case requires stateful agents. But for applications where personalization, workflow continuity, or multi-turn context matters, this framework provides the architectural foundation.</p><p>The distinction between Sessions and Memory, the emphasis on consolidation and provenance, the recognition of procedural memory as distinct from declarative facts: these concepts clarify the design space and highlight the trade-offs that matter in production.</p>]]></content:encoded></item><item><title><![CDATA[Goal Clarity Without Strategy Clarity Is Just Noise]]></title><description><![CDATA[The dynamic is shifting.]]></description><link>https://blog.suryas.org/p/goal-clarity-without-strategy</link><guid isPermaLink="false">https://blog.suryas.org/p/goal-clarity-without-strategy</guid><dc:creator><![CDATA[Surya Suravarapu]]></dc:creator><pubDate>Sat, 15 Nov 2025 00:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LOxS!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd73e65d-dbce-4664-9e74-5ca963688619_1021x1021.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The dynamic is shifting. 
AI tools let startups go from idea to credible prototype in weeks, not quarters. Technical execution gaps are narrowing. For enterprises, this changes the calculus.</p><p>The advantages used to be resources, data, distribution, and customer relationships. Those still matter. But only if you can deploy them before the market moves.</p><h2>The real enterprise problem isn&#8217;t speed</h2><p>It&#8217;s coordination.</p><p>Everyone knows the goal. &#8220;AI transformation.&#8221; &#8220;Double growth.&#8221; &#8220;Modernize the platform.&#8221; Leadership repeats it constantly. Town halls, all-hands, strategic decks.</p><p>But goal clarity without strategy clarity is paralysis.</p><p>Teams know the destination but have no shared map. So they make local decisions that feel rational but don&#8217;t compound. Five teams each moving at a reasonable pace, solving adjacent problems in isolation. Velocity looks fine locally. Strategic progress is zero.</p><p>The turf wars amplify this. &#8220;That&#8217;s my lane.&#8221; &#8220;No, it&#8217;s mine.&#8221; Some of this is healthy. You need clear ownership. But it becomes extreme when there&#8217;s no strategy to adjudicate scope conflicts.</p><h2>The coordination paradox</h2><p>Here&#8217;s the tension: You need input from multiple teams to form a coherent strategy. You need to identify who&#8217;s already doing adjacent work, who controls critical capabilities, and who has context that would change the plan.</p><p>But decision-by-committee is doomed. If everyone needs to agree, you ship nothing.</p><p>The resolution isn&#8217;t eliminating coordination. It&#8217;s designing coordination for a 10x faster cycle time.</p><p>Include for input. Decide with authority. Move with speed.</p><p>Time-box strategy formulation to weeks, not months. Three weeks from &#8220;we need a strategy&#8221; to &#8220;teams are executing,&#8221; not three quarters. 
Separate the input phase (broad consultation) from the decision phase (narrow authority). Default to leveraging existing capabilities unless there&#8217;s a specific blocker.</p><p>Pre-negotiate escalation paths so turf conflicts get resolved in 48 hours, not 48 email threads.</p><h2>Why this matters more now</h2><p>Because the execution gap is narrowing. If a startup can prototype in 4 weeks and your enterprise takes 14 months to coordinate, your advantages evaporate.</p><p>Data moats, distribution, brand trust, and enterprise relationships only matter if you deploy them before competitors establish alternatives.</p><p>The question isn&#8217;t whether you&#8217;re moving fast in absolute terms. It&#8217;s whether you&#8217;re moving fast enough relative to how quickly the market is learning.</p><h2>What&#8217;s missing from your coordination system?</h2><p>If you&#8217;re in an established company trying to move with purpose:</p><p>Is it the actual decision-making structure? Who has authority at each level, and is that explicit?</p><p>Is it the incentive alignment? How do you get teams to cooperate instead of compete for scope?</p><p>Is it the measurement system? How do you know if you&#8217;re actually moving faster, or just feeling busy?</p><p>Is it the cultural shift? 
From &#8220;coordination equals consensus&#8221; to &#8220;coordination equals speed&#8221;?</p><p>The answers determine whether your resources compound into leverage or fragment into theater.</p>]]></content:encoded></item><item><title><![CDATA[Fast Teams Don't Ship More, They Learn Faster]]></title><description><![CDATA[Two teams both ship every week.]]></description><link>https://blog.suryas.org/p/fast-teams-learn-faster</link><guid isPermaLink="false">https://blog.suryas.org/p/fast-teams-learn-faster</guid><dc:creator><![CDATA[Surya Suravarapu]]></dc:creator><pubDate>Fri, 14 Nov 2025 00:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LOxS!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd73e65d-dbce-4664-9e74-5ca963688619_1021x1021.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Two teams both ship every week. One is learning. The other is just busy.</p><p>The difference isn&#8217;t work ethic or talent. It&#8217;s what they optimize for. Slow teams measure velocity by features shipped. Fast teams measure it by hypotheses validated. One counts outputs. The other measures learning rate.</p><h2>The Learning Rate Problem</h2><p>Shipping is easy. Learning is hard. Most teams can release code weekly but take months to figure out if it worked.</p><p>They ship a feature, watch some dashboards, have a few meetings, and eventually form an opinion. By the time they know what happened, the context has shifted and the team has moved on.</p><p>Fast teams collapse that loop. They don&#8217;t ship faster because they cut corners. They ship faster because feedback arrives in hours, not weeks.</p><p>Each deployment answers a specific question. The instrumentation was built before the feature. The rollback is one click. The metrics update in real time.</p><p>By most accounts, Amazon&#8217;s two-pizza teams work this way. 
Each team owns metrics, deployment, and learning. They don&#8217;t wait for data teams to build dashboards or ask permission to roll back. The loop from &#8220;we think this will work&#8221; to &#8220;here&#8217;s what actually happened&#8221; runs in days, sometimes hours.</p><h2>What Fast Actually Means</h2><p>Fast isn&#8217;t about more features. It&#8217;s about more learning cycles in the same time period. A team that ships one feature and validates it in a week is faster than a team that ships three features and validates them in a month.</p><p>The constraint isn&#8217;t coding speed. It&#8217;s learning infrastructure. Can you deploy without friction? Can you measure what matters automatically? Can you see results without waiting for someone else? Can you kill a feature as easily as you launched it?</p><p>Reportedly, Stripe optimized for this early. Every experiment had clear success metrics defined upfront. Results populated dashboards automatically.</p><p>Teams could see within 48 hours whether their hypothesis held. That learning rate compounded. More cycles meant more validated insights. More insights meant better decisions. Better decisions meant sustainable velocity.</p><h2>The Real Metric</h2><p>Your velocity metric shouldn&#8217;t count story points or features shipped. It should measure time from hypothesis to validated learning. How many days from &#8220;we believe X&#8221; to &#8220;we now know Y&#8221;?</p><p>This is the same principle behind measuring <a href="https://blog.suryas.org/why-okrs-matter/">outcomes rather than outputs</a>. The question isn&#8217;t what you built. It&#8217;s what you learned.</p><p>If that number is more than two weeks, you don&#8217;t have a shipping problem. You have a learning problem. 
And no amount of faster coding will fix it.</p><p>What&#8217;s slowing down your learning loops right now?</p>]]></content:encoded></item><item><title><![CDATA[Why Retention Starts at Onboarding, Not Growth]]></title><description><![CDATA[Most products lose 80% of users within 30 days.]]></description><link>https://blog.suryas.org/p/retention-starts-at-onboarding</link><guid isPermaLink="false">https://blog.suryas.org/p/retention-starts-at-onboarding</guid><dc:creator><![CDATA[Surya Suravarapu]]></dc:creator><pubDate>Thu, 13 Nov 2025 00:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LOxS!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd73e65d-dbce-4664-9e74-5ca963688619_1021x1021.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most products lose 80% of users within 30 days. Teams see this happening and hand the problem to growth. They add email campaigns, push notifications, re-engagement hooks.</p><p>None of it moves the number because the retention problem wasn&#8217;t created in month six. It was locked in during week one.</p><p>This isn&#8217;t about better onboarding flows or slicker tutorials. It&#8217;s about product decisions made before launch that determine whether users stay or leave months later. By the time your growth team measures retention, your product team already decided it.</p><h2>Time-to-Value Determines Everything</h2><p>Users don&#8217;t leave because they forgot about your product. They leave because they never experienced its core value. The gap between signup and first meaningful outcome is where retention dies.</p><p>Consider Slack versus most enterprise tools. Slack delivers value in the first conversation. You invite a teammate, send a message, get a reply. That loop completes in minutes.</p><p>Most B2B products make you wait weeks: configure settings, integrate systems, import data, train your team. 
By the time value might arrive, the user already decided you&#8217;re not worth it.</p><p>The best products collapse time-to-value ruthlessly. Figma lets you design in the browser with zero setup. Stripe processes your first test payment in minutes. Linear creates your first issue before you&#8217;ve read the docs.</p><p>Each optimized for the moment a user thinks &#8220;this actually works.&#8221;</p><h2>Complexity Curves Kill Quietly</h2><p>Every feature you add increases the burden on new users. The complexity that delights power users in month twelve crushes new users in week one. This tradeoff is unavoidable, but most teams get it backwards. They design for the expert and hope beginners will figure it out.</p><p>Notion is the cautionary tale. Infinitely flexible, incredibly powerful, and overwhelming to 90% of new users who just wanted a place to write notes. The product&#8217;s strength became its retention weakness.</p><p>Compare that to Linear, which hides advanced features behind progressive disclosure. New users see a clean issue tracker. Power users discover shortcuts, automations, and integrations as they need them.</p><p>The complexity curve should match the value curve. Early experience should be simple with obvious wins. Advanced capability should reveal itself gradually as users build competence and need more leverage.</p><h2>Habit Formation, Not Feature Adoption</h2><p>Retention isn&#8217;t about using all your features. It&#8217;s about embedding one habit that brings users back without thinking. The products with the best retention aren&#8217;t the most feature-rich. They&#8217;re the ones that become part of your daily rhythm.</p><p>GitHub doesn&#8217;t retain engineers because of Actions or Projects. It retains them because checking pull requests becomes a morning ritual. Superhuman doesn&#8217;t retain users through keyboard shortcuts.</p><p>It retains them by making inbox zero feel achievable daily. 
The habit is the moat.</p><p>Your onboarding should optimize for one thing: get the user to repeat the core action enough times that it becomes automatic. Three times is a trial. Seven times is a pattern. Thirty times is a habit.</p><h2>The Real Metric</h2><p>The metric that predicts retention isn&#8217;t MAU or feature adoption. It&#8217;s how many days until a new user completes the core loop three times. If that number exceeds seven (factor in your domain complexity), you have a retention problem that no growth campaign can fix.</p><p>The window to build retention is narrow. What product decision are you making today that will determine whether users are still here six months from now?</p>]]></content:encoded></item><item><title><![CDATA[Why AI Platforms Are Testing User-Paid Sharing]]></title><description><![CDATA[Most platforms face a brutal tradeoff when enabling sharing.]]></description><link>https://blog.suryas.org/p/ai-platforms-testing-user-paid-sharing</link><guid isPermaLink="false">https://blog.suryas.org/p/ai-platforms-testing-user-paid-sharing</guid><dc:creator><![CDATA[Surya Suravarapu]]></dc:creator><pubDate>Wed, 12 Nov 2025 00:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LOxS!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd73e65d-dbce-4664-9e74-5ca963688619_1021x1021.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Most platforms face a brutal tradeoff when enabling sharing. Charge creators for hosting and you limit adoption. Charge end-users at the point of distribution and you create friction. Subsidize usage yourself and the costs don&#8217;t scale.</p><p>Each path blocks something you need: viral growth, sustainable economics, or both.</p><p>For years, platforms have picked their poison. SaaS tools charge creators monthly fees, killing casual sharing. 
Consumer apps eat infrastructure costs to drive growth, then scramble to monetize. Marketplaces take cuts that creators resent.</p><p>None of these models naturally align creator incentives with platform growth.</p><p>Anthropic&#8217;s <a href="https://claude.com/blog/claude-powered-artifacts">Artifacts feature</a> tests a fourth path. When you build and share an interactive app in Claude, you pay nothing for distribution (no hosting fees, no infrastructure costs, no matter how many people use it). Instead, anyone who uses your shared artifact authenticates with their own Claude account, and their usage counts against their subscription.</p><p>The cost doesn&#8217;t disappear. It just shifts to whoever&#8217;s getting the value.</p><h2>How the Model Works</h2><p>Artifacts let you build interactive applications directly inside Claude. React-based UIs powered by Claude&#8217;s API. You can create data analysis tools, games with adaptive AI, educational apps, writing assistants, or multi-step agent workflows.</p><p>Once you&#8217;ve built something, sharing is a single click. No deployment pipeline. No server configuration. No domain setup.</p><p>Here&#8217;s where the economics diverge from traditional platforms. Users must authenticate with their Claude account to interact with shared artifacts. That authentication isn&#8217;t just for access control. It determines who pays.</p><p>Every API call your shared app makes runs against the end user&#8217;s Claude subscription, not yours. If you&#8217;re on the free tier and share a tool that goes viral, you still pay nothing.</p><p>The platform handles scaling, hosting, and infrastructure. Users burn their own credits.</p><p>This creates unusual incentives. As a creator, you have zero reason to limit distribution. More users cost you nothing.</p><p>As a platform, every shared artifact that gains traction becomes a potential acquisition channel. 
New users must sign up to try it, and power usage drives upgrade decisions.</p><p>The current constraints reveal the roadmap. Artifacts can&#8217;t make external API calls yet. No persistent storage. Text-based completion API only.</p><p>These aren&#8217;t permanent limitations. They&#8217;re guardrails on a beta feature. Each constraint will likely fall as Anthropic validates the model.</p><h2>The Hypothetical Flywheel</h2><p>If this model works, the growth dynamics look different from traditional platform plays. Anthropic is betting on a self-reinforcing loop: zero-cost sharing drives more artifacts into the wild. Shared artifacts require authentication, converting casual users into registered accounts.</p><p>Those users engage with AI tools, generating usage signals and burning through free-tier credits. Some percentage hit their limits and upgrade to paid subscriptions.</p><p>The early adoption signal is real. Users have created over 500 million artifacts since launch. Community infrastructure emerged organically: artifact galleries, GitHub collections cataloging shared tools, diverse use cases from productivity apps to educational games.</p><p>But we don&#8217;t have the data that would prove the flywheel actually spins. What percentage of those 500 million artifacts get published versus staying private? Do shared artifacts meaningfully drive new signups, or are people mostly sharing within existing Claude users?</p><p>When someone discovers Claude through a shared artifact, do they convert to paid tiers at different rates than other acquisition channels? How much of Claude&#8217;s overall growth (18.9 million monthly active users, enterprise market share jumping from 18% to 29%) is attributable to Artifacts versus other features or marketing?</p><p>Those are the questions that determine whether this is a clever distribution hack or a fundamental shift in platform economics. Anthropic hasn&#8217;t published those metrics. 
Maybe they&#8217;re still figuring it out themselves.</p><h2>What This Reveals About Platform Strategy</h2><p>The model matters whether or not it works for Anthropic. It shows that AI platforms are actively experimenting with distribution models that don&#8217;t map to traditional SaaS or consumer app playbooks. The assumption (and it&#8217;s just an assumption for now) is that small, useful utilities can become repeatable acquisition channels if you make sharing frictionless enough.</p><p>This isn&#8217;t just Anthropic. OpenAI tested similar mechanics with custom GPTs and Canvas sharing. The specifics differ, but the pattern is consistent: make it trivial to create and share AI-powered tools, require authentication to use them, and see if community-driven distribution can compete with paid acquisition channels.</p><p>The unproven bet underlying all of this: that casual sharing actually creates viral growth at meaningful scale. Consumer social products proved that sharing photos and messages could drive exponential user curves. But those were inherently social activities. Sharing a YAML-to-JSON converter or a flashcard generator is utility-driven, not social.</p><p>Does utility sharing have the same viral coefficient? Or does it top out at small, engaged communities that never break into mainstream adoption?</p><p>If it works, if AI platforms can turn every creator into a distribution channel, the competitive dynamics shift. Platforms would compete not just on model capabilities or pricing, but on how easily you can build, share, and remix community creations. The platform with the lowest friction for turning ideas into shareable tools wins distribution mindshare. That&#8217;s a different game than the current race for benchmark scores and enterprise deals.</p><h2>The Question That Matters</h2><p>This is a strategic model worth understanding, not a proven playbook. 
Anthropic made a bet that eliminating distribution costs for creators would unlock a new growth engine. The early adoption numbers suggest people like building with Artifacts. Whether that translates to sustainable platform growth (new users, engagement, conversion) remains unproven.</p><p>The question isn&#8217;t just &#8220;does this work for Anthropic?&#8221; It&#8217;s &#8220;can small, shareable utilities become a repeatable acquisition channel for AI platforms?&#8221; If the answer is yes, we&#8217;ll see every major platform racing to reduce sharing friction. If it&#8217;s no, Artifacts becomes a power user feature that doesn&#8217;t move growth needles (still valuable, just not transformational).</p><p>For now, it&#8217;s an experiment. But one that reveals where platform thinking is headed: away from traditional SaaS unit economics and toward models where distribution cost approaches zero, user acquisition happens through utility sharing, and the platform captures value by sitting between creators and consumers. 
Whether that future arrives depends on data we don&#8217;t have yet.</p>]]></content:encoded></item><item><title><![CDATA[The Tool Spectrum is Collapsing]]></title><description><![CDATA[Marty Cagan&#8217;s recent piece on prototyping tools draws a clean line: build-to-learn tools on one side, build-to-earn tools on the other.]]></description><link>https://blog.suryas.org/p/the-tool-spectrum-is-collapsing</link><guid isPermaLink="false">https://blog.suryas.org/p/the-tool-spectrum-is-collapsing</guid><dc:creator><![CDATA[Surya Suravarapu]]></dc:creator><pubDate>Tue, 11 Nov 2025 00:00:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!LOxS!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fdd73e65d-dbce-4664-9e74-5ca963688619_1021x1021.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><a href="https://www.svpg.com/prototypes-vs-products/">Marty Cagan&#8217;s recent piece</a> on prototyping tools draws a clean line: <em>build-to-learn</em> tools on one side, <em>build-to-earn</em> tools on the other. He&#8217;s right about the hype problem: product managers confusing high-fidelity prototypes with production-ready systems. But the binary he describes is already dissolving.</p><p>The categorization reflects tool architecture. Lovable and Bolt for prototyping; Claude Code and Cursor for production. UI-first tools abstract complexity and accelerate visual validation. Terminal-based tools expose code and configuration, giving engineers control over reliability, observability, and scale.</p><p>But that architectural difference doesn&#8217;t lock tools into single purposes anymore.</p><h2>What&#8217;s Shifting</h2><p>Claude Code sits in both camps. Non-technical product managers can use it to generate working prototypes that run on live data and simulate complex business logic. 
Then they hand that same generated code to engineering, who refactor what&#8217;s useful and discard what isn&#8217;t. But they continue building in the same environment with the same tool.</p><p>This isn&#8217;t theoretical. It&#8217;s happening now with Claude Code, Codex CLI, and tools like Droid Factory. The learning curve exists. Terminal interfaces intimidate at first, but the investment pays off in continuity. One tool, two phases, no translation layer.</p><h2>Why the Convergence Matters</h2><p>When prototyping and production share infrastructure, handoffs get cleaner. Product managers generate testable hypotheses in code, not static mockups. Engineers inherit working logic they can evaluate and extend, not wireframes they interpret from scratch. The gap between discovery and delivery narrows.</p><p>This doesn&#8217;t erase Cagan&#8217;s core warning: prototypes still aren&#8217;t products. <a href="https://blog.suryas.org/build-buy-or-ai-build">Business complexity, runtime demands, and operational constraints</a> (reliability, telemetry, fault tolerance, compliance) remain non-negotiable for commercial-grade systems. Prototyping sophistication doesn&#8217;t eliminate that work.</p><p>But it does change the question. It&#8217;s not &#8220;Can this prototype become a product?&#8221; It&#8217;s &#8220;How much of this prototype&#8217;s logic survives into production, and how quickly can we validate the rest?&#8221;</p><h2>The Implications</h2><p><strong>For product managers:</strong> Learning terminal-based tools is now a leverage move, not a technical detour. If you can <a href="https://blog.suryas.org/the-pm-as-builder-era">prototype in the same environment engineering uses</a> for delivery, you reduce interpretation overhead and accelerate feedback loops.</p><p><strong>For engineering teams:</strong> Code generated during discovery becomes a starting point, not a distraction. 
You&#8217;re evaluating real logic against real constraints, not translating concepts across tool boundaries.</p><p><strong>For organizations:</strong> The &#8220;build-to-learn versus build-to-earn&#8221; framing still holds. Separating discovery from delivery remains essential. But the tooling gap that once reinforced that separation is closing. That&#8217;s a workflow shift, not a conceptual collapse.</p><h2>Open Questions</h2><p>Can these converged tools handle the full complexity Cagan describes (thousands of use cases, enterprise-grade reliability, zero-downtime deployments) within the next three years? Unknown. Spec-driven development is picking up, so I am optimistic.</p><p>If tools serve both discovery and delivery well enough to accelerate learning and reduce handoff friction, that&#8217;s sufficient. Perfect continuity isn&#8217;t the goal. Better sequencing is.</p><p>The hype Cagan warns against (thinking prototypes are products) still deserves the warning. But the tools enabling that confusion are also solving a different problem: making the path from prototype to product less lossy. That&#8217;s not hype. That&#8217;s progress.</p>]]></content:encoded></item></channel></rss>