Great article, Hugo! I won’t hide that I didn’t agree with everything I saw in it, and had to re-read it several times. After all, the workflow patterns for avoiding an agent… are actually part of an agent. You see, I come from the “old” AI agents school (Peter Norvig, Stuart Russell et al.) that assumes an internal loop with the actuators, workflow, reasoning, and sensors all embedded, with continuous evaluation throughout.
Then this struck me: “When people say ‘agent,’ they mean that last step: the LLM output controls the workflow. Most people skip straight to letting the LLM control the workflow without realizing that simpler patterns often work better.”
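The distinction in that quote can be sketched in a few lines. This is a toy illustration, not anyone’s real system: `call_llm` is a hypothetical stub standing in for any model client, and the step names are made up.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call; returns canned answers."""
    if "route" in prompt:
        return "summarize"  # the model "decides" which step runs next
    return f"handled: {prompt}"

# Workflow: the *program* fixes the order of steps (deterministic chain).
def workflow(doc: str) -> str:
    cleaned = call_llm(f"clean {doc}")
    return call_llm(f"summarize {cleaned}")

# Agent: the *LLM's output* controls which step runs next.
def agent(doc: str) -> str:
    steps = {
        "summarize": lambda d: call_llm(f"summarize {d}"),
        "translate": lambda d: call_llm(f"translate {d}"),
    }
    choice = call_llm(f"route {doc}")  # model output drives control flow
    return steps[choice](doc)
```

In the workflow, the branching lives in your code and is testable; in the agent, it lives in the model’s output, which is exactly the step the article says most people jump to prematurely.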
Ah, ok. If “most people”:
- can’t differentiate deterministic tasks from non-deterministic tasks
- have no clue that in truly dynamic systems (including a dynamic “workflow”), reaching equilibrium means spawning millions of tasks/sub-agents in the hope of actionable convergence, which is subject to energy conservation
- don’t realize that LLMs are nowhere close to energy efficient,
then we are dealing with basic illiteracy these days. Btw, Anthropic isn’t helping either by mindlessly promoting swarms of agents (hierarchical or not) as a hammer for every problem.
Thanks for contributing to Decoding ML with this fantastic article, Hugo!
It was so great to work with you on this, Paul, and I'm looking forward to collaborating more in the future!
Thanks for the thought-provoking post! 🙏