What a Gigawatt of AI Really Means

Sam Altman wrote yesterday about a future of abundant intelligence, imagining a world where we add a gigawatt of new AI infrastructure every week. This week alone, OpenAI announced partnerships with Nvidia and Oracle, and Altman teased more partnerships and details to come:

"If AI stays on the trajectory that we think it will, then amazing things will be possible. Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer. Or with 10 gigawatts of compute, AI can figure out how to provide customized tutoring to every student on earth. If we are limited by compute, we’ll have to choose which one to prioritize; no one wants to make that choice, so let’s go build."

It’s a striking way to frame the scale of what’s coming: not in terms of chips or dollars, but in raw power capacity. But what does a gigawatt actually mean in everyday terms—and what does it unlock for AI?

Breaking Down a Gigawatt

A gigawatt is one billion watts of continuous power. Numbers that large can feel abstract, so let’s put it in context:

  • Homes: One gigawatt of continuous power can supply over 750,000 U.S. homes.
  • Light bulbs: It could keep roughly 100 million LED bulbs shining.
  • Laptops: Enough to charge around 10 million laptops at once.

Now scale that up. Ten gigawatts, operating year-round, would supply around 7.5 million American homes for a full year.
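The comparisons above are easy to sanity-check with back-of-envelope arithmetic. A quick sketch, assuming an average U.S. home uses roughly 10,800 kWh per year, a 10 W LED bulb, and a 100 W laptop charger (all ballpark figures, not from the original post):

```python
# Back-of-envelope check of the gigawatt comparisons.
GW = 1_000_000_000  # one gigawatt, in watts

# Average U.S. home: ~10,800 kWh/year -> average continuous draw in watts
home_kwh_per_year = 10_800
home_avg_watts = home_kwh_per_year * 1000 / (365 * 24)  # ~1,230 W

homes_per_gw = GW / home_avg_watts   # roughly 800,000 homes
bulbs_per_gw = GW / 10               # 10 W LED bulbs -> 100 million
laptops_per_gw = GW / 100            # 100 W chargers -> 10 million

print(f"{homes_per_gw:,.0f} homes, {bulbs_per_gw:,.0f} bulbs, "
      f"{laptops_per_gw:,.0f} laptops")
```

The exact home count depends on the consumption figure you assume, which is why "over 750,000" is a fair hedge.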

Why Power Matters for AI

Today’s largest AI training runs already consume tens to hundreds of megawatts. Moving to gigawatt-scale AI means building infrastructure that rivals the energy footprint of whole cities. If Sam Altman’s vision of “a gigawatt a week” became reality, we would be adding the equivalent of a new metropolitan power system every seven days, all dedicated to intelligence.

This reframing matters because it shifts the conversation. The bottleneck for AI progress is no longer just better algorithms or more efficient chips. It’s also about energy, grid capacity, and the geopolitics of resource allocation.

What Becomes Possible

At gigawatt scale, AI systems could:

  • Accelerate medical breakthroughs, from drug discovery to personalized cancer treatments.
  • Deliver universal, personalized tutoring at negligible cost.
  • Simulate entire economies or ecosystems in real time to improve policy and planning.

The real leap is not any single application, but the sheer abundance of compute that enables exploration across thousands of domains simultaneously.

For product managers and technologists, the key insight is that the next breakthroughs may hinge less on clever model tweaks and more on infrastructure scaling. Questions worth asking now:

  • Who controls the supply of energy and compute at this scale?
  • Where are the bottlenecks in data center construction and energy delivery?
  • What new products become feasible when compute is no longer scarce but abundant?

Closing Thought

One gigawatt may sound like an engineering abstraction. But in human terms, it’s millions of homes powered, or tens of millions of devices running. Thinking of AI in those terms forces us to recognize the scale of what’s coming: intelligence as an energy industry, not just a software industry.

From Product-Led to Product-Agentic Growth in B2B

Picture this: A procurement manager signs up for your B2B marketplace.

Within 30 minutes, your product has analyzed their company's spending patterns, identified $2M in potential savings, and pre-vetted 15 suppliers that match their compliance requirements. It drafted three RFPs based on their historical templates and scheduled demos with the right stakeholders.

The procurement manager didn't do any of this. The product did.

This isn't just good onboarding. It's not even personalization in the traditional sense. This is autonomous value creation—your product literally doing the work for your users.

And it's solving the exact problems that have made B2B software resistant to traditional product-led growth.

Why Traditional PLG Breaks in B2B

Let's be honest: Product-led Growth (PLG) in B2B has always been awkward.

  • The consumer SaaS playbook says make it simple, let users self-serve, and watch it spread virally through the organization. But B2B doesn't work that way.
  • You've got multiple stakeholders who want different things. The end user wants ease of use. IT wants security and control. Procurement wants cost savings. Executives want strategic value. Good luck building one self-serve flow that makes everyone happy.
  • Then there's the integration nightmare. Your user loves your product, but it needs to connect to their ERP, CRM, and that custom system they built in 2015. Suddenly, your "quick win" becomes a three-month implementation project.
  • Don't forget compliance gates. That excited user who signed up? They can't actually buy anything without security review, legal approval, and procurement sign-off. Your beautiful PLG funnel just hit a brick wall.
  • And implementation cycles? While consumer apps onboard users in minutes, B2B software often takes weeks or months to show real value. By then, your champion has moved on to fighting other fires.

These aren't bugs in B2B; they're features of the landscape. They exist because businesses have complex needs, real risks, and multiple people involved in decisions. [...]

When Work Becomes the Practice

A colleague and inspiring leader, Puneet Maheshwari, recently wrote something about work and meaning that stopped me in my tracks. He talked about growing up around people who never had the luxury of romanticizing "meaning" in work. For them, work was survival and dignity. Nothing more, nothing less.

His insight? The question isn't whether work is a means to an end, but which ends make the means worth it.

When Time Disappears

For me, meaningful work has always had a clear signal. Time becomes irrelevant. Three hours feel like ten minutes. The outside world fades.

But waiting for these magical moments is a trap. The real work happens when you show up consistently, especially on uninspired Tuesdays and exhausting Fridays. Meaning isn't something you find. It's something that emerges from practice.

The craft isn't in the inspiration. It's in the repetition.

The Distance Problem

Most of us work several layers removed from actual human impact. We push pixels that change metrics that supposedly improve someone's day somewhere. But who? Where? How?

The most powerful example from Puneet's post was a nurse calming a frightened family. She could see the impact immediately. Fear becomes relief. Anxiety becomes calm.

As product managers, we need to be translators. Every user story should have a human story behind it. Every sprint should serve someone specific. Not a persona. Not a segment. An actual person whose Tuesday gets a little easier because of our work.

When they become "users" instead of humans, we've already lost the thread.

How Organizations Kill Meaning

We all know purpose drives great work. Then we systematically bury it.

We celebrate outputs over outcomes. We ship features nobody requested. We maintain processes that would make Kafka weep. The modern workplace isn't just inefficient. It actively fights against meaning. (Read: Outcomes over Outputs for Real)

The real emotional labor isn't the work itself. It's holding onto purpose when every system seems designed to strip it away. It's maintaining enthusiasm during your 47th stakeholder meeting about a feature that solves no real problem.

The Courage of No

Every feature you reject protects the features that matter. Every meeting you decline creates space for real work. Every process you eliminate reduces the distance between effort and impact.

This isn't about being difficult. It's about being a guardian. You don't need permission to protect what matters. You just need to decide that protecting meaning is part of your job.

Even when nobody's watching. Especially then.

The Self-Deception Trap

The easiest person to fool is yourself.

Every product manager thinks their feature will change the world. Most barely change a dashboard. The challenge is maintaining healthy skepticism while keeping your team inspired.

Hold these two truths: This might not matter as much as we think, and we're still going to craft it like it does. That tension is uncomfortable. It should be. Discomfort is your compass pointing toward actual impact.

The Monday Morning Test

Here's the question I ask myself every Monday: Can I name the specific person whose life gets better because of this week's work?

If the answer is a vague "our users" or "the business," I'm not doing product management. I'm running a feature factory.

Work will always be partly transactional. We have bills, we need healthcare, and we have responsibilities. That's reality.

But within those constraints, we get to choose which transactions matter. We can create pockets of meaning even in broken systems. We can refuse to let the process kill the purpose.

Starting Small

Pick one thing this week. Just one feature, one decision, one problem. Trace it all the way to a real human impact. Find the person whose day gets easier. Learn their name. Understand their frustration.

Share that story with your team.

Then do it again next week.

This is how meaning works, not as a grand revelation but as a practice. Not waiting for organizations to suddenly care, but creating small spaces where purpose survives.

My colleague ended his piece by refusing cynicism, calling it risk aversion masquerading as wisdom.

He's right. The riskiest thing we can do is stop believing our work can matter. Even in small ways. Even when nobody notices. Even when the systems fight against it.

That's the craft. That's the belonging. That's the service.

That's the work worth setting an alarm for.

The Massive AI Opportunity Hiding on Your Home Screen

Right now, stop reading and look at your phone's home screen.

Count how many apps are built specifically for AI—not regular apps that added AI features, but products designed from the ground up for the AI era. ChatGPT probably makes the list. Maybe a few others.

But for most of us, the answer is surprisingly close to zero.

This observation comes from Andrew Chen's recent piece on how AI will change startup building, where he introduces what he calls the "Home Screen Test." It's a deceptively simple way to measure how far we've actually come in the supposed "golden age of AI." (That's just one takeaway; his full article is worth reading for the broader paradigm shifts.)

The results reveal something startling: despite all the AI hype, we're still in the very early innings of building products that truly leverage AI's potential.

The paradox is real. We're living through what everyone calls the AI revolution, yet it's virtually invisible in the place where we spend most of our digital time—our phone's home screen.

This isn't a sign that AI is overhyped. It's a massive signal that the biggest opportunities in product management are still wide open.

What This Test Actually Reveals

The Home Screen Test exposes a weakness in how most companies are approaching AI product development. We're stuck in "AI feature mode" instead of "AI-native mode."

Think about the difference between mobile websites and mobile-native apps in the early smartphone era:

  • Old approach: Take a newspaper → put it online = website, or take existing websites → make them work on mobile = mobile-friendly sites
  • Breakthrough approach: Build something totally new for mobile = Instagram, Uber, Snapchat

We're seeing the same pattern with AI today: [...]

Platform vs Product: The AI Era Convergence

“In technology, whoever controls the platform controls the narrative,” as several strategic analysts have observed. The rise of AI is testing that maxim in new ways. A single large language model can be both the underlying platform that developers build on and the end-user product millions adopt directly. For companies in the AI era, the question is no longer whether to be a platform or a product, but how to navigate being both at once.

The Blurred Line in Practice

Consider OpenAI. The company provides an API that powers thousands of applications, making it a platform. At the same time, it operates ChatGPT, one of the most widely used consumer products in the world, built on that very same infrastructure. Anthropic follows a similar pattern, offering Claude as a developer-facing API while also positioning Claude Code as an integrated product experience for developers.

These examples highlight the duality at the heart of AI strategy. Platforms attract developers and extend reach. Products capture direct users and create faster feedback loops. AI companies are increasingly straddling both roles out of necessity.

Commoditization Pressure

The urgency comes from commoditization. Core LLMs are now accessible from multiple providers, including OpenAI, Anthropic, Cohere, and open-source projects such as Meta’s LLaMA. When the underlying models are interchangeable, differentiation shifts elsewhere. Companies must either:

  • Own the product experience, turning the model into a daily workflow or consumer habit.
  • Own the platform ecosystem, building a stickier developer environment of integrations, tooling, and distribution channels.

The danger lies in being stuck between the two—neither a beloved product nor a thriving platform, but a utility with no defensible edge.

Historical Parallels

This tension is not new. Microsoft built Windows as a platform, but also created Office as a product to drive adoption and revenue. Apple took the same route, pairing iOS with a suite of native apps to showcase the experience. In both cases, the platform and product reinforced each other.

What’s different in AI is the cycle time. With models updating every few months and user adoption moving at internet speed, companies must navigate platform–product strategy in real time. Decisions that once took years in the PC or mobile eras now compress into quarters.

Future Speculation

One plausible future is the rise of platform-products—hybrids where the line between app and API vanishes. ChatGPT plugins already move in this direction, turning a consumer-facing product into a platform for third-party developers. Claude and Perplexity are experimenting with integrations that extend their utility beyond the core chat interface.

This suggests a future where every AI product is also a developer surface, and every platform doubles as an end-user tool. The analogy might be an “AI-native app store,” but one that lives inside the product itself rather than as a separate layer.

Takeaways for Product Managers

For product managers and technologists, three lessons stand out:

  1. Positioning matters. Clarity on whether you are targeting builders or end users is critical, even if you serve both.
  2. Feedback loops create moats. Products generate user data that strengthens the platform layer. Platforms enable broader adoption that can feed product improvements.
  3. Convergence is the default. In AI, expect most companies to operate simultaneously as platforms and products. The winners will balance these roles without diluting either.

Conclusion

AI is erasing the boundary between platforms and products. The same model can be an API, an app, or both, depending on context. Historical playbooks offer clues, but the pace of change is faster and the stakes higher. The companies that succeed will be those that embrace convergence, creating ecosystems and experiences that reinforce each other.

Atlassian's Browser Move

Atlassian, the company behind Jira and Confluence, is spending $610 million to acquire The Browser Company, the maker of Arc and the newer AI-forward browser, Dia.

That sounds strange at first. Atlassian makes collaboration software, not browsers. Chrome and Edge dominate the market. Why on earth would they want to own a browser?

But once you look closer, it starts to make sense.

The browser as a starting point

Brian Balfour puts it well in his article: The new entry point: Why Atlassian Acquired The Browser Company. His argument is simple: the browser is the front door to work.

If Atlassian controls that front door, it can make sure people land in Jira, Confluence, and the rest of its tools. It’s not just about browsing anymore; it’s about shaping the flow of work.

There’s also the AI angle. Assistants like ChatGPT and Claude are quickly becoming the first stop for many tasks. If that shift continues, Atlassian risks being sidelined. A browser designed around work and agents is their way to stay relevant.

I actually started using Dia when it came out. It had some neat ideas — contextual awareness, shortcuts into work tasks — but it didn’t quite stick for me.

These days, my default browser outside of work is Comet, from Perplexity. It’s AI-driven in a different way: it handles research, summarization, filling forms, and everyday browsing tasks really smoothly. It feels less like a “work tool” and more like a personal assistant.

A risky but bold move

Back to Atlassian: $610 million is a serious bet. Browsers are hard to build and maintain. Getting people to switch is harder still, and that's before factoring in enterprise-grade security and compliance requirements.

But if they can make Dia the fastest, smartest way to do daily work — not just another browser with AI bolted on — it could pay off in a big way. If they can’t, well, it may go down as an expensive experiment.

Either way, it’s a sign of how quickly the ground is shifting. Even a company as established as Atlassian sees the browser itself as up for grabs.

Why it’s worth watching

The idea of a work-first browser is compelling. If Atlassian pulls it off, Dia could be the surface where tasks, documents, and agents all converge — a true “browser for doing.”

But if it doesn’t hit the mark, the deal could end up being remembered as an expensive distraction. Either way, it’s another sign that the browser, once seen as a solved problem, is becoming one of the most interesting battlegrounds in the AI era.

When to Trust Intuition vs. Metrics

This is a follow-up to an earlier post on the limits of metrics.

Product managers often wrestle with a familiar question: Should I trust the numbers, or should I trust my instincts? The truth is, both matter — but their weight changes depending on where your product is in its lifecycle. Intuition plays a bigger role early, while metrics take over later. Knowing when to lean on which can be the difference between chasing noise and driving real impact.

Early-Stage Decisions: The Signal Is Too Weak

In the early stages of a product, metrics are either nonexistent or misleading. Low adoption rates, scattered usage, and noisy data can make even promising ideas appear to be failures. If you rely only on metrics at this stage, you’ll abandon good ideas too quickly.

This is where intuition matters most. Intuition, in a product context, isn’t guesswork. It’s pattern recognition shaped by exposure to customers, markets, and adjacent products. It helps teams imagine what might work before there’s hard evidence.

  • Consumer example: Airbnb’s early signups looked underwhelming, and the data suggested limited demand. Intuition about the emotional side of travel — specifically, belonging and community — drove the founders to push forward until the model clicked.
  • B2B example: An internal workflow API may show little usage at first, but interviews with developers might reveal that it saves hours of manual integration work. Metrics would have labeled it a failure, but intuition from those conversations signals hidden demand.

Late-Stage Decisions: Metrics as Guardrails

Once a product gains traction and scale, the role of intuition shifts. With a larger user base and more activity, you now have reliable signals. Metrics become the way to validate decisions, expose bottlenecks, and optimize flows.

At this stage, intuition without evidence is risky. Small missteps compound at scale, eroding trust and performance.

  • Example: Amazon famously tests everything in its retail experience. With millions of users, even a tiny change in checkout flow can move the revenue needle significantly. Metrics provide the guardrails to experiment safely and refine relentlessly.
  • Enterprise example: A CRM with tens of thousands of active users must rely on usage data to decide whether to streamline certain workflows or add new ones. Metrics on adoption, completion rates, and error frequency are more trustworthy than opinion at this stage.

The Overlap Zone

The best product managers don’t treat intuition and metrics as binary choices. They know how to use them together:

  • Intuition to frame the right hypotheses.
  • Metrics to validate and refine those hypotheses.

For example, a company exploring AI copilots might start with intuition about where users feel workflow pain. Perhaps sales reps spend too much time writing follow-up emails. That intuition guides the prototype. But once launched, adoption metrics — how many reps use the copilot daily, how often they edit its drafts — determine where to double down.
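The two metrics named above are simple to compute once usage events are logged. A toy sketch, with entirely hypothetical reps and events, showing how multi-day adoption and draft edit rate might be derived:

```python
from collections import defaultdict

# Hypothetical usage events for an email copilot: (rep, day, edited_draft)
events = [
    ("alice", "mon", False), ("alice", "tue", True),
    ("bob",   "mon", True),  ("bob",   "tue", True),
    ("carol", "mon", False),
]

reps = {rep for rep, _, _ in events}
days_active = defaultdict(set)
for rep, day, _ in events:
    days_active[rep].add(day)

# Share of reps who used the copilot on more than one day
multi_day = sum(1 for rep in reps if len(days_active[rep]) > 1) / len(reps)

# Edit rate: how often reps rewrite the copilot's draft before sending
edit_rate = sum(edited for _, _, edited in events) / len(events)

print(f"multi-day adoption: {multi_day:.0%}, edit rate: {edit_rate:.0%}")
```

A high edit rate here would be the signal that drafts miss the mark, which is exactly the feedback loop that tells you where to double down.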

Takeaway for PMs

Ask yourself: Are we in exploration or optimization?

  • If exploration, lean on intuition and user empathy. Don’t kill ideas just because early metrics look weak.
  • If optimization, lean on metrics. At scale, even small gains have a big business impact.

Intuition and metrics aren’t opposites. They are tools for different moments.

Conclusion

Intuition is a compass, metrics are a map. Early on, you need the compass to know where to head. Later, you need the map to navigate precisely. The best PMs know how to switch between the two — and when to use them together.

AEO is the New SEO?

Most product and marketing teams already know SEO. Search engine optimization has been the backbone of digital visibility for decades. But a new acronym is creeping into conversations: AEO, or Answer Engine Optimization.

I’m still digging into it, but here’s what I’ve learned so far—and why it matters.

From Search Engines to Answer Engines

SEO is about ranking high in search engine results. When a buyer types a question into Google, the goal is to appear in the top results so they click through to your page.

AEO shifts the game. Instead of search engines returning a list of links, answer engines like ChatGPT, Google’s AI Overviews, or voice assistants like Siri generate direct answers. The challenge is not just being visible, but being included in the AI’s answer itself.

One strategist put it simply: SEO makes you discoverable, AEO makes you quotable.

Why It Matters

For both consumers and businesses, the shift is significant.

On the consumer side (B2C), people increasingly expect quick, direct answers. If someone asks ChatGPT, “What’s the best smart thermostat?” they’re unlikely to scroll through ten blue links. They want a clear recommendation, ideally with trustworthy sources. Brands that structure their content to be cited directly stand a better chance of being the chosen answer.

On the business side (B2B), traffic quality often matters more than volume. Research from Lenny’s Newsletter found that traffic coming from ChatGPT or similar tools converts at much higher rates—about six times better than Google search traffic. That makes sense: someone asking an AI assistant about “the best tools for enterprise compliance” is already deep into problem-solving mode.

For startups especially, AEO may even be a faster path to visibility than SEO. Traditional SEO is slow and often favors big brands with domain authority. Answer engines, on the other hand, reward clarity, originality, and relevance.

What Works in AEO

I’m noticing a few tactics that keep coming up:

  • Concise, answer-ready content: Lead with the definition or solution, then expand.
  • Structured data and FAQs: Schema markup, how-to guides, and help center content are easier for AI to parse.
  • Fresh, authoritative sources: AI favors content that looks recent, trustworthy, and not over-optimized.
  • Presence in communities: Reddit, forums, and even YouTube transcripts are often cited by answer engines.
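To make the "structured data and FAQs" tactic concrete, here is a minimal sketch of schema.org FAQ markup built as a Python dict. The `FAQPage`, `Question`, and `Answer` types are real schema.org vocabulary; the question and answer text are placeholders:

```python
import json

# Minimal FAQPage structured data (schema.org JSON-LD), the kind of
# markup that helps answer engines parse your content. Placeholder text.
faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is Answer Engine Optimization (AEO)?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "AEO is the practice of structuring content so that "
                        "AI answer engines can cite it directly.",
            },
        }
    ],
}

# The resulting JSON is embedded on the page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(faq_page, indent=2))
```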

The Road Ahead

This feels early. Measurement is messy—there’s no clear equivalent to SEO dashboards yet. And no one knows exactly how these models choose sources. But the shift is happening. According to Amsive, one in ten U.S. internet users now begins searches with generative AI, and AI Overviews already appear in 16% of Google desktop searches.

For now, my takeaway is simple: SEO is still table stakes. But it’s worth experimenting with AEO—structuring content around the kinds of questions an AI might be asked, and making sure your product is the one it recommends.