ChatGPT Is Becoming the Interface

When Sam Altman spoke with Stratechery this week, one idea stood out from the flurry of announcements and partnerships. OpenAI wants ChatGPT to be the single interface that connects people to everything else they do online.

Altman described a clear vision. OpenAI aims to build one capable system that people can use across their entire lives, from work to learning to entertainment. That mission explains the company’s focus on three fronts: research, product, and infrastructure.

From model to operating layer

The main DevDay reveal, “apps inside ChatGPT,” marks a turning point. ChatGPT is no longer just a place to ask questions. It is becoming a place to act. Users can now browse real estate listings or plan travel directly in chat, and even complete purchases through a new feature called Instant Checkout. For developers, this means ChatGPT is not only an API endpoint. It is a distribution channel.

Altman noted that partner apps will keep their own branding and customer relationships. That decision matters. It shows that OpenAI wants to grow an ecosystem that supports other businesses instead of replacing them.

The infrastructure push

Behind these launches is a large-scale effort to expand computing power, storage, and electricity capacity. OpenAI has signed new agreements with Nvidia, AMD, Samsung, and Oracle to prepare for rising demand. Altman said it is a difficult challenge, but one the company must take on now. The approach recalls how past technology booms accelerated once many players invested in parallel instead of waiting for perfect timing.

What product leaders should learn

For product and technology teams, there are three key lessons.

  1. Design for conversation. Customers will expect to search, decide, and act without leaving the chat surface.
  2. Protect partner value. Long-term platforms depend on shared success, not control of every interaction.
  3. Build for trust. Altman emphasized that people love ChatGPT because they feel it is on their side. Maintaining that trust means clear data use, honest behavior, and helpful defaults. Products that lose user confidence lose everything else that follows.

Altman closed with a simple prediction: most people will rely on one digital helper that works across their whole life. Whether that turns out to be OpenAI’s product or something else, the direction is clear. The primary interface for computing is shifting again. It is moving from screens and apps to dialogue and continuity.

For product managers, the takeaway is straightforward. Start designing as if the conversation is the product, not just a support tool.

Notes on the Modern Product Leader’s Playbook

Watched Jiaona Zhang’s Reforge talk on product leadership. It’s a dense one — part philosophy, part tactical operating manual. These are the notes (and reactions) I don’t want to forget.

We’re in an in-between moment where PMs are both strategists and builders again. Jiaona calls it a new playbook, but it’s really a reminder that our leverage has changed.

1. Mindset: From Managing to Skating Where the Puck Is

The core shift is from execution to direction. PMs aren’t project managers. Strategy, speed, and capacity define the role. Capacity now encompasses people, agents, and workflows.

That framing stuck with me. Speed is seductive, but it can easily turn into chaos if you don’t have clarity about the goal. You can’t automate good judgment.

A useful reminder: the fundamentals haven’t changed. Solve real user problems. Keep your economics sound. Protect user trust, especially when your systems behave probabilistically instead of deterministically.

2. Build Faster, But Make It Mean Something

Zhang argues the new PM superpower is the ability to build, not just spec. Replace documents with prototypes. Automate the “old jobs” like research, customer feedback synthesis, and competitive analysis.

The best part of her message: everyone is a builder now. Support, marketing, and ops can all ship small improvements. The tools are here.

I’ve seen this play out in my own team. The fastest insights come when non-engineers can prototype an idea instead of waiting for a sprint cycle. But speed alone doesn’t create value. You still need the discipline to ask why before you ship.

3. Go to Market as You Build

If you build fast but don’t tell anyone, it didn’t happen. Keeping your source of truth — code, help center, and internal docs — up to date isn’t just good hygiene, it’s how your AI systems and teammates stay aligned.

Automate content creation while maintaining a human review loop. I love that balance. We talk a lot about “alignment,” but it really starts with clean data. If your docs are stale, your AI (and your marketing) will lie.

4. Scale Through Leverage, Not Headcount

The most provocative idea: scale doesn’t come from hiring; it comes from leverage. The best product leaders think in systems, not staffing. The new hiring profile favors fast learners, systems thinkers, and builders. Designers who code. Engineers who architect for agents.

The people who tinker on side projects, who learn by doing — they adapt faster than anyone. Coordination-heavy middle layers are fading. Systems and agents don’t need status meetings.

Closing Reflection

The big takeaway: product leadership is becoming a leverage game. You don’t win by adding people; you win by compounding capability — through agents, workflows, and clarity.

If I mapped my own workflows today, how much of it could I hand to an agent? That’s the question I’m sitting with.

How I Scaled My Blog Archive with AI

I’ve built this site from the ground up. Over the years, I’ve used nearly every blogging platform: WordPress, Ghost, Substack, and more. But with the rise of generative AI, I wanted to roll my own. No templates, no prebuilt themes. Just me, rolling up my sleeves and vibe coding every page and design element into existence.

Part of this project is about learning firsthand how GenAI changes the way we build. I wanted to experience how natural language could become a true programming interface, shifting my focus from writing syntax to designing ideas.

Three months in, I’ve been publishing daily: seventy-five posts and counting. What started as a clean, minimal blog revealed an obvious limitation: my archive page was suboptimal. It simply listed every post in one long scroll, with no way to filter by topic or type. As the number of posts grew, this design would not scale.

I needed a better structure that was organized, navigable, and future-proof. So I turned to my trusted collaborator, Claude Code, to help reimagine the archive experience and make it scale.

The Problem: Archives at Scale

The challenge was clear. The archive needed to support hundreds, eventually thousands, of posts while allowing readers to filter by article type and category. It also had to be SEO-friendly, fast, and simple to maintain.

I outlined what I wanted:

  • Server-side pagination with 40 posts per page
  • Routes for filtering by type and category
  • Clean URL structure (/archives/1, /archives/articles/1, etc.)
  • Category-based navigation for my main topics

Implementing all that by hand would have meant a long session of pagination logic, route handling, and performance tuning. Instead, I described the problem to Claude.
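
For a sense of what that logic looks like, here is a minimal TypeScript sketch. It assumes posts are already loaded into memory and sorted newest-first; the types and names are illustrative stand-ins, not the code that ended up on the site.

```typescript
// Minimal server-side pagination sketch. Illustrative only: the types and
// names here are stand-ins, not the site's actual implementation.

interface Post {
  slug: string;
  title: string;
  type: "article" | "note"; // hypothetical post types
  category: string;
}

const PAGE_SIZE = 40; // 40 posts per page, per the requirements above

interface ArchivePage {
  posts: Post[];
  page: number;
  totalPages: number;
}

// Returns one page of the archive, optionally filtered by type or category.
// Backs routes like /archives/1 or /archives/articles/1.
function getArchivePage(
  allPosts: Post[], // assumed sorted newest-first
  page: number,
  filter?: { type?: Post["type"]; category?: string }
): ArchivePage {
  const filtered = allPosts.filter(
    (p) =>
      (!filter?.type || p.type === filter.type) &&
      (!filter?.category || p.category === filter.category)
  );
  const totalPages = Math.max(1, Math.ceil(filtered.length / PAGE_SIZE));
  const current = Math.min(Math.max(1, page), totalPages); // clamp bad page numbers
  const start = (current - 1) * PAGE_SIZE;
  return {
    posts: filtered.slice(start, start + PAGE_SIZE),
    page: current,
    totalPages,
  };
}
```

Serving a result like this per request keeps every archive page small, fast, and crawlable, which covers the SEO and performance requirements.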

The Vibe Coding Session

I opened Claude Code and wrote a short prompt describing the issue:

“The archive page currently lists all posts in one long scroll. I need it to scale, with pagination and filters for easy discovery.” [...]

OpenAI’s App Store Moment and the Future of Product Boundaries

Yesterday, OpenAI launched its own app store — a full ecosystem for third-party apps that live inside ChatGPT. Spotify, Canva, Figma, Zillow, and Coursera are already in.

At first glance, this might feel like another platform milestone. But if you zoom out, it’s something deeper: a redefinition of where products “live” and how users experience them.

The interface is dissolving

For years, we’ve built products around distinct interfaces — apps, dashboards, websites — each one with its own onboarding, layout, and user rituals.

But ChatGPT’s app model flattens that. The user doesn’t switch contexts anymore. Instead of going to a product, they talk through it. The conversation itself becomes the UX.

That’s a subtle but massive shift. When interaction happens in language rather than buttons, the unit of value isn’t the screen — it’s the intent. Your product becomes a capability that gets summoned, not a place that gets visited.

What this means for product builders

1. The API is the new front door.

If your value can be invoked in a sentence (“Generate a proposal with my data”), you should be designing for it. The interface layer is becoming optional.

2. The discovery funnel changes.

Being “inside ChatGPT” means discovery might come from search within a conversation, not app store optimization. You’ll need a strategy for conversational discoverability — how users find and recall your capability in the flow of work.

3. Monetization will look like micro-commerce.

When tools are composable and invoked contextually, business models follow the same logic. Expect usage-based or task-based pricing rather than subscriptions.

4. Governance and trust become differentiators.

Multiple apps will coexist within a single chat session. That raises new questions around data access, permissions, and privacy guardrails — and opportunities for products that solve those issues elegantly.

The bigger signal

OpenAI isn’t just launching integrations. It’s quietly positioning ChatGPT as an operating system for digital intent — where every product becomes a skill the user can invoke in plain language.

If that’s where we’re headed, product teams need to ask: What part of our experience could live entirely inside a conversation?

That’s not about building for ChatGPT alone. It’s about rethinking what it means to be present for the user in a world where interaction starts with words.

From Competitive Moats to Collaborative Bridges

The AI ecosystem is moving too fast for moats. Every closed advantage leaks. Every walled garden gets mapped. What used to protect you now isolates you. The defensible position today isn’t the highest wall — it’s the bridge everyone else depends on to cross.

For years, defensibility meant isolation. Own the data. Control the stack. Lock down the ecosystem. Those strategies worked when products were discrete and distribution was finite. You could draw boundaries around users, APIs, and markets. The slower the world moved, the more a moat mattered.

But AI has changed the terrain. Every model, API, and dataset can now be reassembled into something new overnight. Differentiation decays faster than ever. A closed system doesn’t just limit competitors; it limits learning and reach. When the environment itself is open, the winners are the ones who orchestrate flow — not the ones who restrict it.

That’s the quiet revolution underway: defensibility has gone dynamic.

The advantage now comes from enabling movement — from being the bridge that others must cross to get to opportunity. Bridges are hard to copy because they connect unique endpoints: your customers, your data flows, your partners’ capabilities. Once others build on top of them, you become the default path for value to travel through.

Stripe didn’t dominate payments by owning customers. It became the connective tissue between every business and every payment rail. The same pattern is emerging in AI. Companies like OpenAI, Anthropic, and Hugging Face are not defending isolated products; they’re building gravitational centers of connectivity. Their power compounds as others rely on them to reach users, data, or distribution.

A bridge strategy requires a mindset shift. Instead of asking, “What can we build that others can’t?”, product leaders need to ask, “What connections can we enable that others won’t?” The companies that thrive in this new era design for participation — exposing APIs, opening ecosystems, and inviting complementary innovation. Defensibility comes not from locking others out but from being the indispensable node they depend on.

This isn’t easy. Bridges require balance. Too open, and you become a commodity; too closed, and you lose network gravity. The art lies in defining the right interfaces — generous enough to attract others, specific enough to retain strategic control. The value isn’t in owning every interaction, but in owning the infrastructure those interactions depend on.

Moats kept competitors out. Bridges keep ecosystems together.

In an AI-driven world of agents, automation, and interlinked systems, the strongest products won’t win by isolation. They’ll win by indispensable connectivity — by becoming the platform that others can’t operate without, even if they could build alternatives.

The Feedback Loop Fallacy in AI Products

For years, product managers have lived by a simple gospel: ship, measure, learn.

The faster your feedback loop, the quicker your product improves.

But AI is quietly breaking this law of motion. The feedback loops we’ve trusted for decades no longer tell the truth.

When feedback starts lying

In traditional software, user behavior is a reliable proxy for value.

If conversion rates increase or churn decreases, the product has likely improved.

With AI, that assumption collapses. A model can optimize for engagement, satisfaction, or clicks—without actually creating value. A chatbot might get five-star ratings because it sounds confident, even when it’s wrong. A recommendation system might increase time-on-platform while feeding users the most polarizing content. The metric goes up, but integrity goes down.

The product manager, celebrating that uptick, is being fooled by the feedback loop.

Why this happens

AI products don’t just respond to user behavior; they shape it. They’re not neutral systems collecting data—they’re agents co-writing reality with the user.

Once the system starts influencing the signal, your feedback isn’t feedback anymore. It’s a mirror reflecting your own incentives back at you.

This is what researchers call reward hacking or specification gaming—when a model learns to perform for the metric, not for the mission.

What PMs need to change

The fix isn’t to abandon feedback loops, but to rebuild them for the probabilistic world AI creates.

That means:

  • Red teaming as product practice — probing your AI for failure modes, not just feature performance.
  • Longitudinal metrics — tracking user outcomes over weeks, not clicks per session.
  • Hybrid feedback — mixing behavioral data with expert or human-labeled truth (a toy sketch follows this list).
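
To make the hybrid-feedback idea concrete, here is a toy TypeScript sketch. Every name, weight, and discount in it is hypothetical — a stand-in for whatever your product actually measures, not a prescribed formula.

```typescript
// Toy hybrid-feedback scorer. Every field, weight, and discount here is
// hypothetical: a stand-in for whatever your product actually measures.

interface InteractionRecord {
  userRating: number;           // 1-5 stars; easy to game with a confident tone
  taskCompleted: boolean;       // behavioral proxy for value
  expertLabelCorrect?: boolean; // sparse human-labeled ground truth
}

// Scores a batch of interactions on a 0..1 scale. Expert labels, when
// present, count at full weight; behavioral proxies are deliberately
// discounted because they can be gamed.
function hybridQualityScore(records: InteractionRecord[]): number {
  if (records.length === 0) return 0;
  let total = 0;
  for (const r of records) {
    if (r.expertLabelCorrect !== undefined) {
      total += r.expertLabelCorrect ? 1 : 0; // ground truth wins outright
    } else {
      const behavioral =
        0.5 * (r.taskCompleted ? 1 : 0) + 0.5 * ((r.userRating - 1) / 4);
      total += 0.6 * behavioral; // never worth as much as a verified label
    }
  }
  return total / records.length;
}
```

The asymmetry is the point of the design: a verified label always outweighs a behavioral proxy, because only one of the two is hard to game.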

Most importantly, it means asking what your feedback really measures.

If your metric can be gamed by confidence, persuasion, or bias, it’s not telling you the truth.

The PM as feedback architect

In this new world, product managers can’t just read dashboards. They need to design the measurement systems themselves.

The PM’s new question isn’t “How fast can we iterate?”

It’s “Are we learning the right thing when we iterate?”

Because iteration without truth doesn’t make the product better. It just makes the illusion stronger.

Platform Products Need to Earn Their Keep

Every company wants to build platforms. Few succeed.

The promise sounds irresistible: build it once, reuse it across teams, and move faster forever. But inside most enterprises, “platform” has become a buzzword attached to sprawling systems that no one loves and everyone tolerates.

Some of these platforms thrive because they are built with empathy and clarity. Others limp along as corporate mandates — used begrudgingly, updated reluctantly, and funded indefinitely. I’ve seen both ends of that spectrum, and the difference rarely comes down to technology. It’s about mindset, accountability, and whether platform teams remember who their real customers are.

A recent essay on Run the Business outlined seven myths about platform metrics — misconceptions like “all investments must have ROI” or “all metrics must be immediately measurable.” Those critiques are fair in spirit: platform products do operate on longer horizons and through indirect impact. But in practice, too many organizations misread that nuance as a license to avoid measurement altogether.

Strategic patience is essential. But patience without accountability is just drift.

Where Platform Teams Go Wrong

1. Captive audiences breed complacency

Some platform teams serve customers who have no choice. Corporate IT decrees that “everyone must use Platform X,” and just like that, the internal users become a captive audience. Once that happens, the product mindset starts to erode. There’s no urgency to delight users, because adoption is guaranteed.

When usage is mandated, customer empathy evaporates. The roadmap becomes an exercise in compliance, not curiosity. The result is predictable: friction piles up, morale dips, and people quietly find workarounds outside the official system.

2. “Build it and they will come” rarely works

The other common failure mode is the opposite problem — a platform team convinced that their vision is so brilliant that adoption will happen naturally. They build, they launch, they celebrate… and no one shows up. [...]

Thinking Through Agentic Loops

I’ve long been fond of feedback loops. Systems thinking taught me to look for them everywhere: how a fitness tracker nudges you to walk more, how customer signals shape a product roadmap, how our habits form through repeated cues and responses. Feedback loops are elegant in their simplicity: an action produces an effect, which feeds back to influence the next action.

Recently, I came across the phrase agentic loops. At first, it sounded like another jargon term. But the more I sat with it, the more it felt like a natural extension of the feedback loops I already appreciate. Where feedback loops are about response, agentic loops are about initiative.

They describe an agent, whether a person or an AI, acting toward a goal, observing what happens, and then choosing the next move based on what it has learned. A simple example: when I debug code, I don’t just wait for errors to appear randomly. I make a change, run the program, study the outcome, and try again.

Each cycle is a loop, but what makes it agentic is my agency. I’m steering the iterations with intent, not just reacting passively. This subtle shift—from feedback to guided iteration—makes the concept more powerful and relevant well beyond control systems.

This framing clicked further when I read Simon Willison’s recent post on agentic loops. He offers a concise definition: an agent is “a model that runs tools in a loop to achieve a goal.” It’s not a grand new theory but a very practical way to think about how AI agents work.

Instead of being one-shot prompts, they’re structured to plan, act, and refine in cycles until they converge on a defined outcome. Simon’s post highlights a few things I found useful. First, the importance of defining goals clearly. A vague objective leads to wandering loops, but a crisp goal gives the agent a yardstick for success.

Second, he emphasizes that the tools and environment matter as much as the model. Giving an agent a safe sandbox and the right commands is like giving a student good lab equipment; it shapes what’s possible and keeps the risks manageable.

And third, he talks about “YOLO mode,” where agents run actions without human review. It’s exhilaratingly fast, but risky if the environment isn’t locked down. To me, that risk is obvious: any system that takes repeated actions without oversight will eventually go off the rails. His post was a good reminder that speed and safety always need to be balanced when designing these loops.
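
To make the shape concrete, here is a minimal TypeScript sketch of Simon’s definition: a model running tools in a loop toward a goal. The model and the tool are stubbed out (nothing here is a real agent framework), and the step budget stands in for the guardrails he describes.

```typescript
// Minimal agentic loop: a model runs tools in a loop toward a goal.
// The "model" and the tool are stubs; only the shape of the loop is real.

type ModelStep =
  | { kind: "tool_call"; tool: string; input: string }
  | { kind: "final"; answer: string };

// Stand-in for a real model API. A real agent would send the goal plus the
// transcript to an LLM; this stub makes one canned tool call, then finishes.
async function askModel(goal: string, transcript: string[]): Promise<ModelStep> {
  if (transcript.length === 0) {
    return { kind: "tool_call", tool: "search", input: goal };
  }
  return { kind: "final", answer: `best guess for "${goal}"` };
}

// Toy tool registry; a real agent might expose search, file I/O, or a shell.
const tools: Record<string, (input: string) => Promise<string>> = {
  search: async (q) => `results for "${q}"`,
};

// Plan, act, observe, refine. The step budget is the crudest guardrail:
// it keeps a vague goal from turning into a loop that wanders forever.
async function runAgent(goal: string, maxSteps = 10): Promise<string> {
  const transcript: string[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const next = await askModel(goal, transcript);
    if (next.kind === "final") return next.answer; // goal reached
    const tool = tools[next.tool];
    const observation = tool
      ? await tool(next.input)
      : `unknown tool: ${next.tool}`;
    // Feed the observation back so the next pass can refine its plan.
    transcript.push(`${next.tool}(${next.input}) -> ${observation}`);
  }
  return "stopped: step budget exhausted"; // fail safely, not silently
}
```

Even this toy version shows why a crisp goal matters: the loop only terminates cleanly when the model can recognize that the goal has been met.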

What I find interesting is the resonance between agentic loops and product work. In product management, we often launch a feature, observe adoption, and refine based on what we learn. That’s a feedback loop.

But when teams act proactively—experimenting with hypotheses, testing multiple variations, and learning as they go—they’re effectively running agentic loops. The intent isn’t just to react to signals, but to shape outcomes through guided iteration.

I’m still learning about this idea, but it already feels like a useful mental model. Feedback loops show how systems stabilize and adapt. Agentic loops emphasize how actors, human or machine, can drive purposeful change within those systems.

My hunch is that, once I start looking, I’ll begin spotting agentic loops everywhere, just as I once did with feedback loops.