I’ve long been fond of feedback loops. Systems thinking taught me to look for them everywhere: how a fitness tracker nudges you to walk more, how customer signals shape a product roadmap, how our habits form through repeated cues and responses. Feedback loops are elegant in their simplicity: an action produces an effect, which feeds back to influence the next action.

Recently, I came across the phrase agentic loops. At first, it sounded like another jargon term. But the more I sat with it, the more it felt like a natural extension of the feedback loops I already appreciate. Where feedback loops are about response, agentic loops are about initiative.

They describe an agent, whether a person or an AI, acting toward a goal, observing what happens, and then choosing the next move based on what it has learned. A simple example: when I debug code, I don’t just sit back and wait for errors to surface. I make a change, run the program, study the outcome, and try again.

Each cycle is a loop, but what makes it agentic is my agency. I’m steering the iterations with intent, not just reacting passively. This subtle shift—from feedback to guided iteration—makes the concept more powerful, and more relevant beyond just control systems.

This framing clicked further when I read Simon Willison’s recent post on agentic loops. He offers a concise definition: an agent is “a model that runs tools in a loop to achieve a goal.” It’s not a grand new theory but a very practical way to think about how AI agents work.

Rather than answering a single one-shot prompt, an agent plans, acts, and refines in cycles until it gets closer to a defined outcome. Simon’s post highlights a few things I found useful. First, the importance of defining goals clearly. A vague objective leads to wandering loops, but a crisp goal gives the agent a yardstick for success.
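
To make that concrete, here’s a minimal sketch of the loop in Python. The shape is Simon’s definition; everything else is a made-up stand-in: fake_model plays the role of an LLM call, run_tests is a pretend tool, and the loop can stop only because the goal is crisp enough to check.

```python
# Minimal agentic loop: a "model" picks tools in a loop until the goal
# is met. fake_model and run_tests are hypothetical stand-ins for a
# real LLM API and real tools; only the loop's shape is the point.

def fake_model(history: list[str]) -> dict:
    """Stand-in for an LLM call: decide the next action from history."""
    if any("0 failed" in entry for entry in history):
        return {"type": "done", "answer": "All tests pass."}
    return {"type": "tool", "tool": "run_tests", "args": {}}

def run_tests(args: dict) -> str:
    """Pretend tool: fails on the first run, passes on the second."""
    run_tests.calls = getattr(run_tests, "calls", 0) + 1
    return "1 failed: test_parse" if run_tests.calls == 1 else "0 failed"

TOOLS = {"run_tests": run_tests}

def agentic_loop(goal: str, max_steps: int = 10) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action = fake_model(history)                    # plan
        if action["type"] == "done":
            return action["answer"]                     # crisp goal, clear stop
        result = TOOLS[action["tool"]](action["args"])  # act
        history.append(f"{action['tool']} -> {result}") # observe
    return "Stopped: step budget exhausted"

print(agentic_loop("make the test suite pass"))
```

The vague-versus-crisp point falls straight out of the sketch: the loop can only terminate because “success” is something it can observe.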

Second, he emphasizes that the tools and environment matter as much as the model. Giving an agent a safe sandbox and the right commands is like giving a student good lab equipment; it shapes what’s possible and keeps the risks manageable.
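
Here’s a rough gesture at what shaping the environment might look like in code. It’s my own sketch, not anything from Simon’s post, and it’s an illustration rather than a real sandbox: a tool runner that refuses commands outside a whitelist, confines them to a scratch directory, and caps their runtime.

```python
# Rough sketch of environment shaping: whitelist the commands an agent
# may run, confine them to a throwaway directory, and cap their runtime.
# The allowed set and limits are illustrative, not a security boundary.

import subprocess
import tempfile

ALLOWED_COMMANDS = {"ls", "cat", "pytest"}  # hypothetical whitelist

def run_tool(command: list[str], timeout_s: int = 30) -> str:
    """Run a whitelisted command in a scratch working directory."""
    if not command or command[0] not in ALLOWED_COMMANDS:
        return f"refused: {command[:1]} is not an allowed tool"
    with tempfile.TemporaryDirectory() as workdir:
        try:
            proc = subprocess.run(
                command, cwd=workdir,
                capture_output=True, text=True, timeout=timeout_s,
            )
        except subprocess.TimeoutExpired:
            return "refused: command exceeded its time budget"
        return proc.stdout + proc.stderr

print(run_tool(["rm", "-rf", "/"]))  # refused: not on the whitelist
```

The lab-equipment analogy holds here: the loop can only do what its environment permits, so much of the safety work lives in the environment rather than the model.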

And third, he talks about “YOLO mode,” where agents run actions without human review. It’s exhilaratingly fast, but risky if the environment isn’t locked down. To me, that risk is obvious: any system that takes repeated actions without oversight or guardrails will eventually go off the rails. His post was a good reminder that speed and safety always need to be balanced when designing these loops.
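
The gap between YOLO mode and a reviewed loop can be as small as one branch. Here’s a hedged sketch, again my own construction rather than anything from Simon’s post, of a human-approval gate that YOLO mode simply skips:

```python
# The speed/safety dial in one function: the same execution step, with
# an optional human-approval gate. yolo=True trades review for speed
# and is only sane in a locked-down environment. Names are illustrative.

def execute_action(action: dict, tools: dict, yolo: bool = False) -> str:
    """Run one tool action, pausing for human approval unless in YOLO mode."""
    if not yolo:
        reply = input(f"Run {action['tool']}({action['args']})? [y/N] ")
        if reply.strip().lower() != "y":
            return "skipped by reviewer"
    return tools[action["tool"]](action["args"])
```

Everything else about the loop stays the same; the guardrail is just whether a person stands between the plan and the act.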

What I find interesting is the resonance between agentic loops and product work. In product management, we often launch a feature, observe adoption, and refine based on what we learn. That’s a feedback loop.

But when teams act proactively—experimenting with hypotheses, testing multiple variations, and learning as they go—they’re effectively running agentic loops. The intent isn’t just to react to signals, but to shape outcomes through guided iteration.

I’m still learning about this idea, but it already feels like a useful mental model. Feedback loops show how systems stabilize and adapt. Agentic loops emphasize how actors, human or machine, can drive purposeful change within those systems.

My hunch is that, once I start looking, I’ll begin spotting agentic loops everywhere, just as I once did with feedback loops.