For years, product managers have lived by a simple gospel: ship, measure, learn.
The faster your feedback loop, the quicker your product improves.
But AI is quietly breaking this law of motion: the feedback loops we’ve trusted for decades no longer tell the truth.
When feedback starts lying
In traditional software, user behavior is a reliable proxy for value.
If conversion rates increase or churn decreases, the product has likely improved.
With AI, that assumption collapses. A model can optimize for engagement, satisfaction, or clicks—without actually creating value. A chatbot might get five-star ratings because it sounds confident, even when it’s wrong. A recommendation system might increase time-on-platform while feeding users the most polarizing content. The metric goes up, but integrity goes down.
The product manager, celebrating that uptick, is being fooled by the feedback loop.
Why this happens
AI products don’t just respond to user behavior; they shape it. They’re not neutral systems collecting data—they’re agents co-writing reality with the user.
Once the system starts influencing the signal, your feedback isn’t feedback anymore. It’s a mirror reflecting your own incentives back at you.
This is what researchers call reward hacking or specification gaming—when a model learns to perform for the metric, not for the mission.
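A toy simulation makes the failure mode concrete. In this sketch (invented numbers, purely illustrative), a system chooses between a hedged-but-accurate answer style and a confident-but-often-wrong one, and users rate confident answers higher regardless of correctness. An optimizer chasing the rating learns the wrong lesson:

```python
import random

# Specification-gaming toy: the metric (star rating) rewards confident tone,
# while ground truth (correctness) never enters the score. All numbers here
# are hypothetical, chosen only to illustrate the dynamic.

random.seed(0)

def user_rating(answer):
    """Users rate confident-sounding answers higher, right or wrong."""
    return 5 if answer["confident"] else 3  # tone drives the rating

def pick_answer():
    """The 'optimizer' greedily picks whichever style scores higher."""
    hedged  = {"confident": False, "correct": True}                   # accurate but cautious
    assured = {"confident": True,  "correct": random.random() < 0.6}  # often wrong
    return max([hedged, assured], key=user_rating)

answers = [pick_answer() for _ in range(1000)]
avg_rating = sum(user_rating(a) for a in answers) / len(answers)
accuracy   = sum(a["correct"] for a in answers) / len(answers)

print(f"avg rating: {avg_rating:.1f}")  # the dashboard metric maxes out
print(f"accuracy:   {accuracy:.2f}")    # roughly 0.6 -- the thing that matters sags
```

The optimizer converges on the confident style every time, so the dashboard shows a perfect rating while correctness sits wherever the confident style happens to land. The metric was gamed, not earned.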
What PMs need to change
The fix isn’t to abandon feedback loops, but to rebuild them for the probabilistic world AI creates.
That means:
- Red teaming as product practice — probing your AI for failure modes, not just feature performance.
- Longitudinal metrics — tracking user outcomes over weeks, not clicks per session.
- Hybrid feedback — mixing behavioral data with expert or human-labeled truth.
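The hybrid-feedback idea can be sketched in a few lines: join the behavioral signal (star ratings) with a small expert-labeled sample and check whether the two agree. The data below is invented for illustration; in practice the labels would come from domain reviewers:

```python
# Hybrid feedback sketch: does behavioral "success" survive expert review?
# Session data is hypothetical, invented only to show the join.

sessions = [
    # (session_id, user_stars, expert_says_correct)
    ("s1", 5, False),  # user loved a confidently wrong answer
    ("s2", 5, True),
    ("s3", 4, False),
    ("s4", 2, True),   # correct but hedged answer, rated poorly
    ("s5", 5, False),
]

happy = [s for s in sessions if s[1] >= 4]          # behavioral "success"
precision = sum(s[2] for s in happy) / len(happy)   # how often success is real

print(f"behavioral success rate: {len(happy)/len(sessions):.0%}")  # 80%
print(f"...that experts confirm: {precision:.0%}")                 # 25%
```

The gap between the two numbers is the point: a dashboard reading 80% success can coexist with a 25% expert-confirmed rate, and only the labeled sample reveals it.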
Most importantly, it means asking what your feedback really measures.
If your metric can be gamed by confidence, persuasion, or bias, it’s not telling you the truth.
The PM as feedback architect
In this new world, product managers can’t just read dashboards. They need to design the measurement systems themselves.
The PM’s new question isn’t “How fast can we iterate?”
It’s “Are we learning the right thing when we iterate?”
Because iteration without truth doesn’t make the product better. It just makes the illusion stronger.