Every vendor in the AI space is calling their product an agent right now. Customer service software, no-code automation builders, copilots that suggest text in a form field. The word has been stretched so far it barely means anything.
That matters if you are trying to make a real decision about what to build. If you think you are buying an agent but you are actually buying a chatbot with a nicer interface, you will be disappointed and out several thousand dollars per month.
Here is a working definition, and more importantly, the practical test for whether it applies to your situation.
What Makes Something an Agent
An agent runs a loop. It perceives something in its environment, reasons about what that input means and what should happen next, takes an action, then observes the result and continues. The loop is the thing. Without it, you have a tool, not an agent.
A chatbot waits for a human to ask it something. It answers. The loop ends. A workflow automation fires when a trigger happens and executes a fixed sequence. There is no reasoning step. An agent, by contrast, holds a goal and figures out how to reach it given whatever inputs it encounters. The path is not predetermined.
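In code terms, the loop looks something like the sketch below. Every name in it (perceive, reason, act, Decision) is a placeholder for whatever your environment and model actually provide, not any particular framework's API.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Decision:
    action: str | None   # next action to take, or None when the goal is met
    result: Any = None   # final answer, filled in once the goal is met

def run_agent(
    goal: str,
    perceive: Callable[[], Any],                   # read the environment
    reason: Callable[[str, Any, list], Decision],  # decide what the input means
    act: Callable[[str], Any],                     # execute an action, return the outcome
    max_steps: int = 20,                           # bound the loop so it cannot spin forever
) -> Any:
    history: list = []
    for _ in range(max_steps):
        observation = perceive()                       # 1. perceive
        decision = reason(goal, observation, history)  # 2. reason
        if decision.action is None:
            return decision.result                     # goal reached: the loop ends
        outcome = act(decision.action)                 # 3. act
        history.append((observation, decision.action, outcome))  # 4. observe, continue
    raise RuntimeError("step budget exhausted; escalate to a human")
```

The structural point is the exit conditions: the loop ends when the agent judges the goal met, and a bounded step budget forces a handoff to a human instead of letting it spin forever.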
This distinction is not academic. A chatbot embedded on your website can answer questions about your services. An agent can read an incoming client intake form, cross-reference it against your case database, determine whether the matter falls within your practice areas, flag it as high or low priority based on documented criteria, draft a response, and route it to the right attorney, all without a human touching it.
What Agents Are Actually Good At
Agents perform well in a specific category of work: repetitive decisions that have clear rules but variable inputs. The rules do not change. The inputs do. And the volume is high enough that doing it manually is expensive.
Law firm intake triage is a clean example. Every firm has criteria: practice areas they take, jurisdictions they cover, matter types they avoid, client conflicts they need to screen for. Those criteria are stable. But every inquiry that comes in is different. An agent can apply the same logic to every single inquiry, instantly, without fatigue. A junior associate doing the same job burns hours per week on work that requires diligence, not expertise.
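To make "clear rules, variable inputs" concrete, documented intake criteria might look something like this in code. The practice areas, field names, and thresholds are invented for illustration; no real firm's standards are implied.

```python
# Hypothetical intake-triage rules. The criteria, field names, and
# thresholds here are illustrative, not any real firm's standards.

PRACTICE_AREAS = {"employment", "commercial litigation", "ip"}
JURISDICTIONS = {"NY", "NJ", "CT"}
DECLINED_MATTERS = {"criminal defense", "family law"}

def triage(inquiry: dict) -> str:
    """Apply the same documented criteria to every inquiry, every time."""
    if inquiry["matter_type"] in DECLINED_MATTERS:
        return "decline"                   # matter types the firm avoids
    if inquiry["practice_area"] not in PRACTICE_AREAS:
        return "decline"                   # outside the firm's practice areas
    if inquiry["jurisdiction"] not in JURISDICTIONS:
        return "refer-out"                 # outside covered jurisdictions
    if inquiry.get("conflict_hit"):
        return "escalate"                  # conflicts always get human review
    # Stable rules, variable inputs: priority comes from documented criteria.
    return "high-priority" if inquiry.get("estimated_value", 0) > 100_000 else "standard"
```

The logic is deliberately boring. That is the point: stable rules, applied identically to every inquiry.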
Prior authorization review in healthcare works the same way. Payers publish criteria for what they will cover. Clinical notes document what the patient needs. Matching those two things is a decision problem with clear rules and variable inputs. Most of those decisions do not require a physician to make them. They require someone to read, compare, and route. That is agent work.
Claims routing in insurance is a third example. When a claim comes in, someone has to classify it by type, assess completeness, check for fraud signals, assign it to the right adjuster, and set a priority. The criteria for all of that exist. The claims are different every time. An agent handles it at scale.
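In code, the claims version has the same shape. The claim types, fraud signals, and priority threshold below are placeholders, not any carrier's actual criteria.

```python
# Hypothetical claims-routing logic. Claim types, fraud signals, and the
# priority threshold are placeholders for a carrier's real criteria.

FRAUD_SIGNALS = {"duplicate_claim", "mismatched_dates", "prior_fraud_flag"}
QUEUES = {"auto": "auto-adjusters", "property": "property-adjusters"}

def route_claim(claim: dict) -> dict:
    # Assess completeness first: incomplete claims go back for information.
    missing = [f for f in ("policy_id", "incident_date", "type", "description")
               if not claim.get(f)]
    if missing:
        return {"queue": "incomplete", "missing": missing}
    # Fraud signals route to special investigation, not an adjuster.
    if FRAUD_SIGNALS & set(claim.get("signals", [])):
        return {"queue": "siu-review"}
    # Classify by type, assign a queue, and set priority from fixed criteria.
    return {
        "queue": QUEUES.get(claim["type"], "general-adjusters"),
        "priority": "high" if claim.get("amount", 0) > 50_000 else "normal",
    }
```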
What Agents Are Bad At
Agents fail when the rules are not clear, when the situation is genuinely novel, or when the stakes of a wrong decision are high enough that a human needs to own the outcome.
Creative judgment is outside an agent's lane. Positioning a litigation strategy, structuring a complex deal, deciding whether a patient presentation warrants a diagnostic workup outside the documented guidelines: these all require human expertise. An agent that tries to handle them will either refuse or hallucinate confidence. Neither is acceptable.
Novel situations are also a problem. An agent operates within the rules it has been given. When something genuinely outside those rules appears, the right behavior is to escalate to a human. Agents without clear escalation paths will make things up or fail silently. Good agent design includes knowing what the agent should not decide.
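One way to make "knowing what the agent should not decide" concrete: gate every decision on whether a documented rule actually fired and on the model's own confidence. The sketch below is an assumption about how to structure that gate, not a standard, and the threshold is arbitrary.

```python
# Sketch of an escalation gate. The confidence threshold and the
# AgentDecision fields are illustrative choices, not an established standard.

from dataclasses import dataclass

@dataclass
class AgentDecision:
    label: str                 # what the agent wants to decide
    confidence: float          # the model's own estimate, 0 to 1
    matched_rule: str | None   # which documented rule fired, if any

def finalize(decision: AgentDecision, threshold: float = 0.85) -> str:
    # Novel input: no documented rule covers it, so a human owns the call.
    if decision.matched_rule is None:
        return "escalate: no documented rule covers this input"
    # Low confidence: the agent should ask, not guess.
    if decision.confidence < threshold:
        return "escalate: below confidence threshold"
    return decision.label      # clear rule, high confidence: safe to decide
```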
The honest framing is that agents are not replacing human judgment. They are clearing the volume of work that does not actually require it, so that humans can focus on the work that does.
How to Tell If Your Problem Is Agent-Ready
Three questions will get you most of the way there.
First: can you write down the rules? If you cannot document the criteria your team uses to make a decision, you cannot build an agent to make it. If your senior people make decisions based on intuition they cannot articulate, you have a knowledge capture problem before you have an agent problem.
Second: is the volume high enough to justify the build? An agent makes sense when you have hundreds or thousands of decisions happening every month that follow the same logic. If the volume is low, a well-structured checklist and a competent person are probably the right answer. A quick way to pressure-test this is sketched after the third question below.
Third: what happens when it gets it wrong? Every system has an error rate. If a wrong decision results in a missed court filing or a patient receiving incorrect care, the agent needs tight escalation logic and human review for anything it is not certain about. If a wrong decision means a prospect gets routed to the wrong sales rep, the consequences are low enough that you can tune the system over time. Know the failure mode before you start.
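The volume question is the easiest to pressure-test with simple arithmetic. The numbers below are placeholders, not benchmarks; substitute your own volume, handling time, and labor cost.

```python
# Break-even sketch with placeholder numbers only. Substitute your real
# volume, handling time, and loaded labor cost before drawing conclusions.

decisions_per_month = 1_500    # hypothetical inquiry volume
minutes_per_decision = 6       # hypothetical manual handling time
loaded_hourly_cost = 90        # hypothetical fully loaded cost, dollars/hour

manual_cost = decisions_per_month * minutes_per_decision / 60 * loaded_hourly_cost
print(f"manual cost: ${manual_cost:,.0f}/month")   # $13,500/month with these inputs

# If the agent's run cost plus amortized build cost sits well under this
# number, the volume clears the bar. If not, the checklist and the person win.
```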
Most of the businesses we work with have at least one workflow that is clearly agent-ready once they see the framing. The problem is usually that no one has looked at the work through that lens before. It is not complicated work. It is repetitive, rules-based, high-volume, and expensive to do manually. That is exactly what agents are built for.