Study of nearly 2,000 senior hiring leaders finds 53% now prioritize AI fluency over domain expertise, but a critical gap between definitions and measurement is producing confident wrong hires on both sides of the Atlantic
TestGorilla, the leading skills-based hiring platform, today released The State of Hiring for AI Fluency, revealing a fundamental shift in talent evaluation: AI fluency has overtaken domain expertise as the top hiring priority, with 53% of hiring managers now preferring candidates with strong AI fluency over deep subject-matter experts.
But ambition is outpacing reality. Although 72% of UK and 71% of US organizations have formally defined AI fluency, and nearly all list it as a hiring requirement, 59% across both markets still made a bad AI hire in the past year — a candidate who spoke the language fluently in the interview but couldn’t apply it on the job.
“Organizations are no longer just looking for subject matter experts; they are looking for AI-augmented performers who can use emerging technology to 10x their output,” says Wouter Durville, CEO of TestGorilla. “But a candidate can learn the vocabulary (‘agentic workflows,’ ‘RAG,’ ‘prompt chaining’) in a single weekend. They can describe a workflow convincingly without ever having built one.”
The Infrastructure Paradox
TestGorilla’s research identifies an “Infrastructure Paradox”: companies are investing in AI hiring frameworks built on the same broken proxies that have failed recruiters for decades. The report flags three critical issues:
- The Awareness Trap: 37% of organizations set their minimum bar at tool awareness — simply knowing a tool exists.
- The Subjectivity Trap: 19% leave AI assessment entirely to individual hiring manager discretion. Without a shared rubric, fluency becomes a vibe-check that rewards the best storyteller, not the best hire.
- Confidence vs. Competence: Interviews are designed to observe communication, not execution. Candidates can speak fluently about AI workflows without ever auditing an output or redesigning one.
A bad AI hire can cost more than leaving the role vacant, once lost output, failed projects, and rehiring costs are counted.
A Transatlantic Divide
The data exposes a sharp split: 33% of US organizations report frequent AI-driven errors, compared to just 13% in the UK. UK employers are also less likely to set the bar at mere tool awareness (29% vs. 45% in the US), suggesting stronger internal alignment on what AI fluency actually requires.
The conclusion is the same on both sides: subjective evaluation is no longer fit for purpose. Objective, skills-based assessment is the only reliable path to verifying AI competence.

