Selling AI to People Who Don't Understand AI

The hardest sale in AI is not to a skeptic. Skeptics can be converted with evidence. The hardest sale is to a buyer who is enthusiastic about AI in the abstract and completely unable to evaluate AI products in the specific. They want to buy. They don't know what they're buying. If you don't solve that problem for them, they will either not buy or buy wrong and churn.

I've worked on GTM for AI companies selling into government and large enterprise. These buyers have been told by leadership that AI is a priority. They have budget. They have a mandate. They are sitting across the table genuinely trying to figure out if what you're selling is real. The question underneath every question: can I trust this?

That trust question doesn't get answered by technical capability. It gets answered by whether your story maps onto a world they recognize.

Your product uses a large language model fine-tuned on domain-specific data, with a retrieval-augmented generation pipeline that maintains citation fidelity and reduces hallucination rates. That description is accurate. It is meaningless to your buyer. Not because they're unintelligent. Because that sentence doesn't answer their actual question: what is different about my world after I use this?

The translation has to be specific. "Your team will be more efficient" is not a translation. It's a claim. A translation sounds like: "Your contracting officers currently spend two hours reviewing each procurement document for compliance. Our system flags issues automatically, which means each officer processes three times as many contracts per week. At your current volume, that's the equivalent of four additional FTEs without adding headcount."

That paragraph contains one technical concept ("flags automatically") and it's self-explanatory. Everything else is the buyer's world: contracting officers, procurement documents, FTE equivalents. When buyers hear their world reflected back accurately, they lean in.

Getting to that specificity requires something most AI companies don't invest in early enough: buyer research. Not market research. Buyer research. Market research tells you the size of the opportunity. Buyer research tells you how the specific human who signs the check thinks about their job, what keeps them up at night, and what success looks like from their chair.

But there's a dimension to this problem that's easy to miss, and I think it's the real differentiator: risk translation. Enterprise and government buyers aren't just buying a product. They're taking on professional risk. If the AI system fails (wrong outputs, a compliance violation, a team made to look bad), that's their career on the line.

The how-we-handle-failure conversation is as important as the what-we-can-do conversation. What happens when the system is wrong? How will your team know? What's the fallback? Who's accountable? Technical founders skip this because they're confident in the product. Buyers aren't buying confidence. They're buying a managed-risk path to a better outcome.

This is why the best enterprise AI salespeople don't sell features. They sell safety. "Here's how this works when it works. Here's what happens when it doesn't. Here's how you'll always know which one you're getting." That framing converts more than any demo.

The AI companies that win commercially will be the ones that understand translation as a core competency, not a communication problem. It's not about explaining the product better. It's about building a function that lives in the buyer's world and carries the signal back.