AI Commoditization and Three Strategic Paths

AI makes your differentiator table stakes. Your competitive advantage is evaporating. The junior employee with AI tools matches the senior expert's output. The expertise that took years to build becomes a commodity.

What do you do when your moat becomes a puddle?

The framework

Three strategic paths exist. Each works. Each requires different capabilities.

Race to the top: Invest in capabilities AI can't commoditize. This works when you can build compounding advantages, such as proprietary data moats, network effects, brand recognition, relationships, and trust.

Race to the bottom: Compete on distribution and cost now that product differentiation has vanished. This works only if you have distribution dominance, cost structure advantage, or execution speed that competitors can't match.

Race to the adjacent: Pivot to a new value proposition as the old one evaporates. This requires knowing what to abandon—giving away what you used to charge for, charging for what used to be free.

Evidence from the field

Radiology demonstrates all three paths. AI diagnostic tools now match human radiologists in detecting lung nodules, diabetic retinopathy, and certain cancers. A junior resident with AI assistance achieves diagnostic accuracy that previously required decades of experience.

Leading radiology groups race to the top. They pivot to interpretive expertise and clinical integration. AI spots the anomaly. Senior radiologists provide contextualized diagnoses, coordinate with oncology teams, and make judgment calls on ambiguous cases.

Teleradiology companies race to the bottom. They compete on speed and cost, using AI to handle routine reads at scale.

Some radiologists race to the adjacent. They become AI quality validators or clinical AI implementation consultants, helping hospitals integrate these systems.

Legal research follows the same pattern. AI tools review documents, find case law, and draft motions in minutes instead of days. Elite firms reposition associates as strategic counselors. Legal process outsourcing companies compete on cost. Some lawyers become legal operations specialists.

Content marketing shows the fastest commoditization. AI produces SEO-optimized blog posts in seconds. Premium agencies focus on brand voice and strategic narrative. Content mills race to the bottom on pricing. Smart agencies pivot to content strategy, using AI to execute but selling the strategic layer.

The timing question

In radiology, AI took five years to reach parity; in legal research, two years; in content marketing, 18 months. Specialized knowledge that can be learned from examples commoditizes the fastest. The speed varies, but the outcome doesn't.

Why recognition fails for the incumbents

Companies see what they want to see. Internal view: "Our experts are still better than AI." Market reality: "Good enough just got cheap enough."

There's a visibility lag between capability parity and revenue impact. By the time revenue drops, competitors have already rebuilt elsewhere.

Winners monitor capability, not revenue. They ask when AI plus a junior person will match their senior person. They move before being forced to move.

The decision criteria

Racing to the top requires investing ahead of commoditization. You need advantages AI can't replicate and the resources to build them before differentiation disappears.

Racing to the bottom requires structural advantages that most companies lack. The margins are brutal, and the advantage is temporary. Few win this race.

Racing to the adjacent requires the hardest skill—abandoning your core competency while it still has value. You pivot before the market forces you to pivot.

What this demands

Product leaders who thrive will choose deliberately. They'll recognize commoditization early. They'll resist denial. They'll rebuild differentiation before the market forces them to.

The expertise you spent years building can become a commodity overnight. The advantage that defines your company can evaporate in months.

The question isn't if AI will commoditize your advantage. The question is when and what you'll do about it.

Companies that see it coming and move first survive. Companies that wait for proof don't.

From Project to Product Thinking

We shipped on time.

Every dependency cleared, every stakeholder satisfied. The dashboards lit green.

And then—nothing. Usage flatlined. The “big release” landed quietly, with customers politely ignoring it.

This was several moons ago, early in my career. That was the moment it clicked for me: we had delivered perfectly, but we hadn’t delivered value.

Scope, schedule, and cost were all managed flawlessly. But none of that mattered if the product didn’t change user behavior. That’s when I started seeing the gap between project and product thinking.

Scope. Schedule. Cost.

Every project manager knows this triangle. Deliver the scope on time, within budget. It’s the foundation of reliable execution, the hallmark of a well-run team.

But when you move into product management, those same instincts, the ones that make you dependable, can quietly hold you back.

Product work operates in the same triangle, but the balance shifts.

Scope becomes your center, where you define what and why to build.

Schedule becomes a lever, helping you decide when and how fast to learn.

Cost remains shared, the common constraint that keeps everyone honest.

In other words: projects deliver output; products deliver outcomes.

That single distinction changes how you think, prioritize, and lead.

1. From Execution Agent to Outcome Owner

Project management trains you to deliver what’s asked — clearly, predictably, and efficiently.

When stakeholders request a new feature, you line up the resources, manage dependencies, and drive it to completion. Success means shipping on time and avoiding surprises.

In product management, that same request sounds different.

A product-minded PM responds not with “yes” or “no,” but with curiosity.

“That’s great input. Can we unpack what problem this solves?”

“Who benefits most if we build this?”

“How would we know it worked?”

This isn’t stalling. It’s reframing.

You’re shifting the conversation from delivery to impact — from what they want to why it matters.

That small behavioral change signals a big mindset shift: [...]

It's not a Search Problem, it's a Distribution Problem

Yesterday, I discussed how Atlas builds OpenAI's interface-to-platform flywheel through continuity of context. Today, an extension of that argument: the market-structure implications of one company controlling all three layers of distribution.

The conversation about AI search misses the real shift.

This isn't about which tool delivers better results. It's about who controls the starting point for online activity and whether that control consolidates or fragments.

Google has held that position for two decades. Chrome captures 65% of browser share. Google Search handles 90%+ of queries. Those aren't just metrics—they're compounding distribution advantages. Defaults, data, and intent all flow through one system.

Atlas challenges that directly by positioning ChatGPT as the default interface for all web activity. Perplexity offers itself as an alternative to search, not a supplement. Google's own AI Overviews synthesize answers at the top of search results, reducing click-through rates.

The shift from ad-based search to answer-based models is fundamentally a redistribution of power: who routes users, and on what terms.

Distribution as a three-layer stack

Distribution isn't one thing. It's a stack, and each layer compounds.

Intent capture is the moment someone decides what they want. Whoever owns that moment has leverage. Google captured it through defaults in Chrome and Safari. ChatGPT is capturing it by being the interface users open when they want answers. Where users start determines everything downstream.

Routing is where the platform decides what happens next. Traditional search routed you to a list of links. You chose. Answer engines route you to synthesized responses. You stop there, or the engine surfaces a source if you want more. The choice narrows. Routing power determines which sites get traffic, which businesses get customers, and which publishers get discovered.

Monetization follows routing. Google makes $70+ billion per quarter in ad revenue because it routes users to advertiser sites. If ChatGPT becomes the router, it captures that leverage—or chooses a different model. Subscriptions, affiliate fees, sponsored answers—all options, none proven at scale.

The shift isn't just technological. It's economic and structural.

Why consolidation is likely

The early competition looks chaotic—Google, OpenAI, Perplexity, and Microsoft all launching answer-based tools. But the dynamics favor consolidation.

Network effects compound quickly: more users generate more data, which improves answers, which attracts more users.

Defaults dominate behavior. Atlas bets that making ChatGPT the default browser interface shifts intent capture at scale. Google bets that AI Overviews keep users inside Search. Defaults are sticky; whoever wins that position controls routing for years.

Scale determines viability. Running AI models at query scale is expensive. Only a few platforms can afford it. The middle ground is unstable.

Bots compound the trend: by 2030, bot-driven searches could outnumber human queries many times over. If that happens, the market structure compresses. One or two platforms control intent capture, routing, and monetization. The web becomes less distributed, more platform-dependent.

Strategic implications

For Google: you're cannibalizing your core business to stay relevant. The test is whether you can transition the revenue model before ad decline forces cuts.

For OpenAI and Perplexity: you have better technology but no distribution moat (yet). The test is whether you can capture defaults before users settle into new habits.

For publishers: you lose traffic either way. The test is whether you can create content that answer engines can't synthesize (original reporting, proprietary data, investigative depth) or rethink distribution entirely.

This shift concentrates power. Fewer platforms controlling more of the routing layer raises questions about neutrality, privacy, and economic sustainability that no one has answered yet.

The better question isn't whether this shift happens. It's whether it creates a healthier internet or a more concentrated one.

From Interface to Platform: What OpenAI’s Atlas Browser Might Really Signal

Everyone saw this one coming: OpenAI's Atlas browser.

Rumors of an OpenAI browser had been circulating for months, alongside steady hints in partnerships, SDK updates, and app integrations. When Atlas finally arrived, it didn’t feel like a shock. The surprise isn’t that OpenAI built a browser. It’s why they built one, and what that might unlock.

Because Atlas isn’t just another entry in the growing list of “AI browsers.” It’s the latest move in OpenAI’s deliberate pattern of turning its conversational interface into a broader platform.

The Broader Pattern

If you zoom out, OpenAI’s product roadmap has followed a recognizable logic.

  • ChatGPT became the universal interface. A single surface for reasoning, creation, and task execution.
  • ChatGPT apps opened a pathway for developers and brands, with early integrations from Shopify and Etsy focusing on commerce.
  • Sora extended this interface into media creation and social content.
  • Now, Atlas stretches that reach into the open web. A place where context and intent can travel with the user.

Each move connects back to the same core idea: continuity of context.

The goal isn’t to dominate every use case, but to let the same reasoning layer follow you across them. That continuity, more than any individual feature, might be what defines OpenAI’s long-term strategy.

Atlas in Context

AI browsers aren’t new.

Perplexity’s Comet introduced agentic browsing, where the browser could take actions on behalf of the user. Google and Microsoft both added AI layers to Chrome and Edge. Browsers like Arc and Dia have been experimenting with AI companions for more than a year.

So what makes Atlas different?

It’s not that it introduces a new category. It’s that it connects the categories OpenAI already owns — the chat interface, your persistent context, and the web itself. [...]

Claude Skills Might Be Anthropic’s Most Exciting Update Yet

I just tested Claude Skills, and it’s awesome. Anthropic is on a roll here with disruptive innovation. If you haven’t tried it yet, here’s a quick rundown of why it matters, and why it could reshape how we work with AI assistants.

What Claude Skills are

Skills are like snap-on capabilities for Claude. Instead of rewriting prompts or uploading instructions every time, you can package reusable logic, scripts, and guidelines into a small folder. Claude automatically detects when a skill is relevant and loads it as needed.

You could build a “content brief” skill that enforces your tone, structure, and formatting. Or a “status-report” skill that summarizes updates the same way every week. It’s lightweight, modular, and instantly reusable across chats and teams.
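As a sketch, a skill is just a folder containing a `SKILL.md` file: YAML frontmatter with a name and a description (which Claude uses to decide when the skill is relevant), followed by the instructions themselves. The frontmatter fields follow Anthropic's documented format; the specific skill name and rules below are illustrative, not an official template:

```markdown
---
name: content-brief
description: Use when drafting or reviewing a content brief. Enforces our
  tone, structure, and formatting standards.
---

# Content Brief

1. Open with a one-sentence statement of the target audience.
2. Keep the tone direct and plain; define any jargon on first use.
3. Structure the brief as problem, proposed angle, key points, call to action.
4. End with one measurable success criterion for the piece.
```

Drop the folder into your workspace once, and every future chat can invoke it without re-pasting the instructions.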

The brilliance is in the simplicity: Claude now behaves less like a memory-loss chatbot and more like a real system that remembers how your team works.

How it differs from MCP

The Model Context Protocol (MCP) enables Claude to connect to external tools and data sources, such as your CRM, documents, or APIs. It’s a connectivity layer.

Skills, by contrast, define what to do once that data is available. They’re workflow logic, not integrations. MCP gives Claude access; Skills give Claude purpose.

In practice, they complement each other: MCP fetches data, Skills tell Claude how to process and present it.

Seeing it through a product manager’s lens

As a product manager, I see Skills as a new layer of knowledge infrastructure. You can encode how your team thinks (tone, workflow, review criteria) directly into the assistant. It’s like turning your company’s playbook into callable micro-apps.

For many teams, that’s a bigger unlock than simple data connectivity. It moves AI from “help me write this” to “run this process the way we do it.” That’s the kind of leverage PMs and operations teams have been chasing for years.

Why some say it’s a bigger deal than MCP

Maybe it’s because Skills feel immediately useful. You don’t need engineering support or API keys to get value. Anyone can create a folder, drop in a Markdown file with clear instructions, and watch Claude follow it perfectly.

MCP, on the other hand, is powerful but technical. It shines when you want deep integrations, but most people start with workflows, not systems integration. Skills meet users where they already are.

One important thing I picked up: Skills are efficient with context. Only a small set of skill metadata stays in the context window; the full skill is loaded on demand when it becomes relevant.

Closing thought

Anthropic is quietly redefining how assistants evolve. From reactive tools to configurable, modular workers. Whether Skills outshine MCP in the long run doesn’t matter much right now. What matters is that we’re watching AI systems become more adaptable and more personal.

It’s exciting to be living in this era.

AI Isn’t Replacing Human Help, It’s Redefining It

“AI will be the greatest source of empowerment for all.” - OpenAI’s Fidji Simo

It’s a bold vision, one that suggests anyone, anywhere, could find help for whatever they need: a business idea, a mental block, or even emotional support. The promise sounds inspiring, but it also sparks a familiar unease.

We’ve heard versions of this before. Every major wave of technology begins with the same fear: that something deeply human will be lost. When banks introduced computers in India in the 1980s, employees protested, convinced the machines would take their jobs. Instead, computers transformed banking itself—from manual transactions to personalized service, from paperwork to advice.

AI is now walking that same path. It’s not just automating tasks; it’s entering spaces that feel personal, even emotional. A chatbot that listens, a coach that motivates, a tutor that adapts. It all feels both powerful and unsettling.

But history suggests fear isn’t the end of the story; it’s the beginning of reinvention. Each time technology takes over the routine parts of work, humans move upward toward creativity, empathy, and meaning. The same evolution can happen with AI, if we design it to amplify what people do best rather than replace it.

In the 1990s, journalists feared the internet would kill newspapers. Teachers worried online learning would replace classrooms. Retailers braced for the death of stores. Each sector eventually found a new balance: media discovered digital storytelling, education unlocked access for millions, and retail became more personal through data and design. The pattern is clear. Automation removes the repetitive, not the relational.

That same logic applies to AI-powered help. Chatbots can handle information, logistics, and even first responses, but they can’t replace what builds trust: shared experience, empathy, and connection. The real opportunity lies in collaboration. A teacher using AI to mentor more students, not fewer. A therapist extending care through digital tools. A manager using AI insights to give more meaningful feedback. These aren’t examples of replacement; they’re examples of amplification.

To get there, though, we have to design intentionally. The goal isn’t to make AI “feel” human but to make it work with humans so that technology handles the functional, and people handle the emotional. That means building systems that encourage dialogue, not dependence; empowerment, not isolation. The question isn’t whether AI can help us, but whether we’ll use it to strengthen how we help each other.

Every disruption looks like a threat until it becomes an upgrade. Computers didn’t erase bank tellers; they freed them to serve customers better. The internet didn’t kill journalism; it expanded its reach. AI won’t erase human help either. It’s our cue to reinvent it, to create a new kind of help that blends intelligence, empathy, and design in ways we’ve never managed before.

From 50 to 100: The Human Edge in an AI-Accelerated Product World

AI has changed the pace of product development. What once took months now takes weeks. We can ship prototypes in days, test them with users, and iterate instantly. The acceleration is real.

But speed creates a new tension. If AI can take us from 50 to 90 in quality and execution, what does it take to reach 100? That final stretch, the space between something that works and something that resonates, is where human judgment still defines the outcome.

This is not a call to slow down. It’s a call to lead differently with sharper tools, deeper technical fluency, and a stronger sense of meaning.

The New Acceleration Curve

AI has already reshaped how product, design, and engineering teams operate.

A single product manager can now synthesize user feedback, create market maps, and draft a PRD in an afternoon. Designers can test ten layout variations before lunch. Developers can generate, refactor, and deploy code faster than ever.

We've already moved past the old "50 to 80" threshold. AI is taking us to 90 or even 95 on execution. But what happens next is where skill divides from craft.

The challenge is not in how fast you can move, but how well you can decide where to move and why.

The 90-to-100 Zone

That final zone, the space between competence and resonance, is still entirely human.

Here’s what lives there:

  • Emotional timing: sensing when the market is ready for something new, not just when it's technically possible.
  • Cultural fit: designing with an understanding of what feels authentic to your audience.
  • Intuitive usability: creating flows that feel effortless, not just efficient.
  • Storytelling coherence: connecting functionality to a purpose that matters.
  • Trust and ethics: deciding how to balance speed with responsibility.

AI can simulate logic, but it can't feel context. It can optimize outcomes, but not meaning. The 90-to-100 leap happens when human empathy meets intelligent execution. [...]

The New Rhythm of Product, Design, and Engineering

The lines between product, design, and engineering have always been fluid, but AI-assisted development is making that overlap more productive than ever.

Today, product managers can spin up interactive prototypes in hours, not weeks. What used to require multiple handoffs between PMs, UX designers, and developers can now start as a shared experiment. This shift isn’t about replacing roles. It’s about accelerating discovery.

Prototyping as a Discovery Tool

There’s growing tension in some teams: product managers worry that by creating prototypes, they’re stepping into design territory. But that view misses the point.

Prototyping isn’t about ownership. It’s about speeding up learning.

With AI tools like Figma’s Autoflow, Claude Code, or Codex CLI, a PM can create three variations of a user flow, test them internally or with users, and get feedback by the end of the day. That’s compressing a discovery timeline that used to take weeks. The goal is to scope faster, validate assumptions earlier, and give design and engineering a clearer picture of what matters.

The Evolving Role of Design

UX designers remain essential in this process. Their strength lies in thinking through the experience end-to-end: not just how it looks, but how it feels, behaves, and supports user intent.

AI can generate an interface, but only designers can ensure it’s intuitive, ethical, and emotionally resonant. They turn quick AI prototypes into experiences that actually work for real humans.

In this new workflow, designers spend less time redrawing ideas from product documents and more time improving and aligning the actual user experience.

Engineering as the Quality Layer

The same applies to engineering. AI-generated prototypes often include working front-end code. It’s not production-ready—but it’s a jumpstart.

Engineers now begin with something tangible. They can focus on crafting scalable, secure, and industry-grade solutions instead of building from scratch. Senior engineers ensure performance, stability, and architecture quality—translating rapid ideas into reliable systems that deliver business value.

This Era Needs You

This AI era doesn’t replace human creativity. It amplifies it.

It needs product managers who obsess over customer needs, business value, and measurable impact, who use AI to move faster but stay anchored in purpose.

It needs designers who shape technology into experiences people love to use, who question, refine, and humanize what AI produces.

It needs engineers who bring it all to life, who ensure that what’s imagined in hours becomes something durable, secure, and scalable in production.

The tools are powerful. The opportunity is massive.

This era needs the real you.

If you're not already on it, what are you waiting for?