Operations · October 18, 2025 · 6 min read

The 5 metrics that tell you if your AI support agent is actually working

Most AI support dashboards are vanity metrics. Here are the five numbers that actually correlate with retention, revenue, and whether you should trust your agent with more traffic.

The Signorian team
Founders

Every AI support tool ships with a dashboard. Most of those dashboards are bad. They show conversation counts, response times, and some made-up "AI score" that's designed to make you feel good about the purchase.

Here are the five metrics that actually tell you if the agent is doing its job.

1. Resolution without escalation

Out of every 100 conversations, how many ended without the user asking for a human or being routed to one? That's your true deflection rate. Anything else — "deflection by default," "AI engagement" — is a vanity wrapper.
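The calculation is simple if your logs record whether each conversation was handed off. A minimal sketch, assuming a hypothetical log where `escalated` is true for any conversation that was routed to a human or where the user asked for one:

```python
# Hypothetical conversation records; "escalated" covers both
# user-requested and auto-routed handoffs.
conversations = [
    {"id": 1, "escalated": False},
    {"id": 2, "escalated": True},
    {"id": 3, "escalated": False},
    {"id": 4, "escalated": False},
]

def true_deflection_rate(convos):
    """Share of conversations resolved without any human handoff."""
    resolved = sum(1 for c in convos if not c["escalated"])
    return resolved / len(convos)

print(f"{true_deflection_rate(conversations):.0%}")  # prints 75%
```

The key design choice is counting escalations at the conversation level, not the message level, so one conversation with three handoff attempts still counts as a single failure.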

2. Thumbs-down-to-answer ratio

If you've wired up feedback buttons (you should), what percent of answers get a thumbs down? Anything over 8% means your docs are thin or the agent is hallucinating. Either way, fix it before scaling.
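As a sketch, assuming your feedback widget logs a simple up/down vote per answered conversation, the 8% threshold check is one line:

```python
def thumbs_down_ratio(votes):
    """votes: list of 'up' / 'down' feedback on answered conversations."""
    downs = sum(1 for v in votes if v == "down")
    return downs / len(votes)

# Illustrative week: 45 thumbs-up, 5 thumbs-down
votes = ["up"] * 45 + ["down"] * 5

ratio = thumbs_down_ratio(votes)
if ratio > 0.08:
    print(f"{ratio:.0%} thumbs-down: check docs coverage before scaling")
```

Note the denominator is answers that received feedback, not all answers; with low feedback volume, treat the ratio as a trend line rather than a precise number.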

3. Time-to-human on escalation

When the agent hands off, how long until a human actually responds? The moment you tell a user "a human will follow up" is a promise. Break that promise once and the user never trusts the agent again. Two hours is the practical upper bound; 30 minutes is good; five minutes is excellent.
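A minimal sketch of the check, assuming you can extract per-handoff delays (time from the handoff message to the first human reply) from your helpdesk:

```python
from datetime import timedelta
from statistics import median

# Hypothetical handoff delays for one review week
delays = [timedelta(minutes=m) for m in (4, 12, 25, 45, 90)]

med = median(delays)
worst = max(delays)

# Any handoff past the 2-hour practical upper bound is a broken promise
breaches = [d for d in delays if d > timedelta(hours=2)]

print(f"median {med}, worst {worst}, breaches {len(breaches)}")
```

Track the worst case alongside the median: an average of 20 minutes hides the one user who waited all afternoon.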

4. Repeat-question rate

Of the users who escalated to a human, how many came back with the same question within 30 days? If that number is above 10%, your handoff isn't resolving the underlying issue — it's just moving it. That's often a sign you need to add that question to the agent's scope.
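One way to sketch the measurement, assuming a hypothetical escalation log of (user, topic, date) tuples where the topic label comes from your ticket tagging:

```python
from datetime import date

def repeat_rate(log, window_days=30):
    """Share of escalations where the same user raised the same topic
    again within the window."""
    repeats = 0
    for i, (user, topic, day) in enumerate(log):
        later = [(u, t, d) for u, t, d in log[i + 1:]
                 if u == user and t == topic
                 and 0 <= (d - day).days <= window_days]
        if later:
            repeats += 1
    return repeats / len(log)

escalations = [
    ("alice", "billing", date(2025, 9, 1)),
    ("alice", "billing", date(2025, 9, 20)),  # repeat within 30 days
    ("bob", "sso", date(2025, 9, 5)),
]

print(f"{repeat_rate(escalations):.0%}")  # prints 33%
```

The hardest part in practice is deciding when two tickets are "the same question"; a coarse topic tag is usually good enough to spot the trend.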

5. Source-citation accuracy

Spot-check 20 random answers a week. Did the agent cite the correct source? Was the cited source actually the best match? This catches silent drift — the kind of degradation you don't notice until customers complain.
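The sample itself should be genuinely random, not "the last 20 answers" — recent answers overweight whatever topic spiked this week. A sketch, assuming a hypothetical answer log keyed by answer id and cited doc:

```python
import random

# Hypothetical answer log: each entry pairs an answer with the doc it cited
answer_log = [{"id": i, "cited_doc": f"doc-{i % 7}"} for i in range(500)]

def weekly_spot_check(log, k=20, seed=None):
    """Draw k answers uniformly at random for manual citation review."""
    rng = random.Random(seed)  # fixed seed makes the draw reproducible
    return rng.sample(log, k)

sample = weekly_spot_check(answer_log, seed=42)
print(len(sample))  # prints 20
```

For each sampled answer, the review question is twofold, as above: was the cited source correct, and was it the best available match?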

How often to review

Weekly. Monthly is too slow — a drift in thumbs-down rate between weeks 1 and 4 will have compounded by month 2. Daily is too noisy. A 30-minute weekly review, same slot every week, is the right cadence for a team of any size.

Want to actually ship this?

Signorian deploys a docs-grounded AI support agent in under an hour. Free on 100 conversations/month. Founder pricing for the first 500 teams.

Claim founder pricing