By Natalia Taft
In financial services M&A, AI has moved from the product roadmap into the valuation model. And when it can’t be audited, due diligence turns into a liability conversation.
Natalia Taft, a compliance and regulatory governance executive with over 25 years of experience, shares what she's seen happen when it can't be audited.
I have seen deals slow down because the AI inside the target was essentially a black box. Nothing documented. Nobody in the room was really able to say how the model worked if a regulator asked. The price on the term sheet didn’t always change immediately, but other things did: escrows got bigger, risk discounts started appearing in places they hadn’t before. Nobody called it an “AI governance problem” at the time. But that’s exactly what it was.
The real question buyers are asking now isn't just "Does it work?" It's "Can we defend it if regulators look at it?" Those are very different questions. A system can work operationally and still be fragile from a governance perspective. In regulated areas like credit, trading, AML, or payments, that kind of fragility quickly turns into valuation pressure: future remediation costs, post-deal regulatory scrutiny, or operational risk the buyer didn't expect.
What used to be a fairly straightforward technology review now often feels closer to a mini regulatory exam. I have seen diligence teams bring in model risk specialists, data governance experts, even AI ethics advisers alongside the lawyers. They want to know what the model actually optimizes for, how it was trained, how bias is monitored, who owns oversight, and what happens when the model fails. If those questions can’t be answered clearly, confidence in the asset starts to wobble.
And that’s the shift we are seeing now. AI isn’t just a technology discussion anymore. In regulated industries it has become a governance question, and governance questions have a direct impact on valuation.
Before considering model sophistication or technical architecture, ask yourself: what is the system optimizing for? Is it revenue, conversion, fraud reduction, liquidity, or maybe speed? If the target can't articulate that in plain language and show how it aligns with risk appetite and regulatory obligations, that's a red flag. Performance metrics are secondary because integrity starts with intent. You cannot outsource responsibility to code. And here is what people often get wrong: the engineers knowing what the system does is not the same as the compliance function knowing what it's been authorized to do.
Strong AI governance signals institutional maturity. That lowers perceived regulatory risk, which directly impacts valuation multiples. And nobody talks loudly enough about the fact that post-acquisition AI remediation budgets often exceed the original AI build budgets. Integration exposes undocumented assumptions, poor data lineage, and unclear ownership. Things that worked fine in the seller's environment stop working the moment someone tries to understand them from the outside. By the time a buyer finds this, the deal is closed, so the problem is theirs.
Regulators have been clear about where they stand. SR 11-7 calls for lifecycle validation and independent oversight of models used in material decisions. NYDFS Part 504 demands the same rigor for transaction monitoring and sanctions filtering. The EU AI Act puts documentation and explainability obligations on high-risk systems, with enforcement timelines already running. What this means in practice is that governance gaps that used to be treated as technical debt are now showing up as transaction risk, and buyers know it.
For the CEOs who think this is a post-merger problem
Acquiring an AI-heavy business without evaluating its governance framework is no different than acquiring a bank without reviewing its credit portfolio. Governance built after scale is expensive. If you wait until diligence to formalize AI oversight, buyers will price in the remediation cost, or walk. I’ve seen it happen both ways.
The firms that hold valuation in that room are the ones where someone can pull up the model inventory, say who owns each system, and show you the last time something got flagged and what happened next. Build it before you need to prove it.
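To make that concrete, here is a minimal sketch of what one entry in such an inventory might capture, written as a Python record. The field names and the example system are hypothetical, not drawn from any regulation or specific firm; the point is the level of specificity a diligence team expects to see on demand.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelRecord:
    """One illustrative model-inventory entry: the artifact a diligence
    team asks to see. Field names are a sketch, not a standard."""
    model_id: str
    name: str
    business_purpose: str        # what the system optimizes for, in plain language
    owner: str                   # a named accountable person, not a team alias
    regulatory_scope: list[str]  # e.g. SR 11-7, 23 NYCRR 504, EU AI Act
    risk_tier: str               # e.g. "high" for credit, AML, sanctions use cases
    last_validated: date         # most recent independent validation
    open_findings: list[str] = field(default_factory=list)  # flags and what happened next

# Hypothetical entry for an AML transaction-monitoring model.
aml_monitor = ModelRecord(
    model_id="MDL-017",
    name="Transaction monitoring scorer",
    business_purpose="Rank transactions for AML review; optimizes recall on known typologies",
    owner="Head of Financial Crime Compliance",
    regulatory_scope=["SR 11-7", "23 NYCRR 504"],
    risk_tier="high",
    last_validated=date(2024, 11, 3),
    open_findings=["Q3 threshold drift flagged; model retuned and revalidated"],
)

print(f"{aml_monitor.model_id}: owned by {aml_monitor.owner}, "
      f"last validated {aml_monitor.last_validated}")
```

An inventory like this is cheap to maintain and expensive to reconstruct after the fact, which is the whole argument for building it before diligence starts.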