Insights on AI, Customer Success & SaaS Leadership | Iliyana Stareva

Why AI Adoption Fails Inside Customer Success Teams (And It's Not What You Think)

Written by Iliyana Stareva | 07-Apr-2026 09:21:32

Every Customer Success leader I know is under pressure to adopt AI. The tools are being deployed. The dashboards are being built. The announcements are being made.

And yet — adoption stalls. CSMs quietly ignore the signals. Workarounds emerge. People revert to instinct over insight. This isn't a technology problem. It's a human one.

After years of building and running Customer Success and Customer Health operations across three major SaaS organisations, I've seen this pattern repeat. AI gets introduced. Behaviour doesn't change. Leadership assumes the team needs more training. What they actually need is something much harder to fix.

The real reasons AI fails in CS teams

1. Fear — but not the fear people talk about

Yes, there's fear of job replacement. That's real and worth acknowledging. But the deeper fear is more subtle: the fear of being wrong.

When a CSM acts on a human instinct and it fails, it's a judgment call. When they act on an AI signal and it fails, it feels like they were just following a machine. The loss of personal ownership over decisions is deeply uncomfortable.

Most CSMs would rather trust their own read — even when it's incomplete — than stake their credibility on a signal they don't fully understand.

2. Incentives point the wrong way

If a CSM's performance is measured on renewals saved and QBRs completed, there's no structural reason to engage with AI signals that require more upfront work and longer timelines to show impact.

Early intervention sounds good in theory. But if the reward system only recognises late-stage outcomes, early action will always feel optional.

3. Shadow AI vs official tools

Here's what no one talks about openly: CSMs are already using AI. They're using ChatGPT to draft emails. They're using it to summarise calls. They're building their own prompts in the background.

Shadow AI is everywhere in CS. But it's disconnected from the systems, data, and signals that would actually make it powerful. It's being used for personal productivity — not strategic decision-making.

The gap between what's being officially deployed and what's actually being used tells you something important: the official tools aren't solving problems people feel urgently enough to care about.

4. Signals without context don't create trust

"Your account has a risk score of 67."

What does that mean? What should I do? Why is it 67 and not 42? How confident is this?

When AI surfaces a signal without explanation, CSMs have no way to evaluate it. They can't connect it to what they already know about the customer. They can't defend the action they'd need to take internally.

So they don't act. Not because they're resistant. Because the signal gives them nothing to work with.
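To make the contrast concrete, here is a minimal sketch of the difference between a bare score and a signal a CSM could actually evaluate. Every field name here is hypothetical, invented for illustration, not the schema of any real CS platform:

```python
# Hypothetical illustration: a bare risk score vs. a signal with context.
# All field names are assumptions, not a real product's data model.

bare_signal = {"account": "Acme Corp", "risk_score": 67}

contextual_signal = {
    "account": "Acme Corp",
    "risk_score": 67,
    "confidence": 0.72,                 # how sure the model is
    "top_drivers": [                    # why it's 67 and not 42
        ("weekly active users", "-34% over 60 days"),
        ("escalated support tickets", "+3 this quarter"),
    ],
    "suggested_action": "Book an executive check-in within 14 days",
}

def is_actionable(signal: dict) -> bool:
    """A CSM can evaluate a signal only if it says why it fired and what to do next."""
    return "top_drivers" in signal and "suggested_action" in signal

print(is_actionable(bare_signal))        # False
print(is_actionable(contextual_signal))  # True
```

The point of the sketch is that the second payload gives the CSM something to defend internally: the drivers connect to what they already know about the account, and the suggested action is a starting point rather than a riddle.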

What this tells us about org design

The instinct is to solve AI adoption with more training, better UI, or stronger mandates from leadership. None of that addresses the root.

AI fails in CS teams because it's being introduced as a tool on top of a broken operating model — not as a redesign of how the work gets done.

The signals exist. The technology exists. What's missing is the organisational infrastructure to make acting on AI signals feel safe, natural, and rewarded.

That means:

  1. Explainability — CSMs need to understand why a signal is firing, not just what it says.

  2. Ownership — The signal needs a clear owner. Someone accountable for deciding what happens next.

  3. Action — The signal needs to come with a suggested next step. A recommended playbook, not just a flag. CSMs shouldn't have to figure out what to do — the system should make the obvious move obvious.

  4. Incentives — Early intervention needs to be measured and recognised, not just late-stage rescue.

  5. Feedback loops — When a CSM acts on a signal and it works, they need to see that. When they override it, that data needs to feed back into the model.
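The five requirements above are really a data-model question as much as a cultural one. A minimal sketch, assuming nothing about any particular vendor's schema (all names below are hypothetical), might tie them together like this:

```python
# Hypothetical sketch: a signal record carrying explainability, ownership,
# and a recommended action, plus an outcome record that closes the feedback loop.
from dataclasses import dataclass

@dataclass
class HealthSignal:
    account: str
    score: int
    drivers: list[str]    # 1. Explainability: why the signal fired
    owner: str            # 2. Ownership: who decides what happens next
    playbook: str         # 3. Action: the recommended next step

@dataclass
class SignalOutcome:
    signal: HealthSignal
    acted: bool                  # did the owner follow the playbook?
    override_reason: str = ""    # 5. Feedback: overrides flow back to the model
    early_intervention: bool = False  # 4. Incentives: credit action taken early

def feedback_record(outcome: SignalOutcome) -> dict:
    """Flatten an outcome into a row the model team can learn from."""
    return {
        "account": outcome.signal.account,
        "score": outcome.signal.score,
        "acted": outcome.acted,
        "override_reason": outcome.override_reason,
        "early_intervention": outcome.early_intervention,
    }
```

The design choice worth noticing: the override reason is a first-class field, not an afterthought. If CSMs routinely override a signal for the same reason, that pattern is exactly what the model needs to see.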

Without these five things, AI in CS will continue to produce dashboards that look impressive and behaviours that haven't changed.

The question to ask

Most CS leaders are asking: how do we get the team to use AI?

The better question is: what would have to be true about our operating model for acting on AI signals to feel like the obvious thing to do?

That shift — from adoption problem to design problem — is what separates organisations that extract value from AI from those that simply deploy it.

 

This post is part of a series on AI in Customer Success. Next: How to get CSMs to actually trust AI signals — explainability, ownership, and why signals without action always fail.