Building Resilience in the Age of AI

With AI deployments continuing to rise across industries, operational resilience has emerged as a key differentiator between those scaling the tech successfully and those who risk falling behind. 

That’s according to a report from San Francisco-based IT incident management vendor PagerDuty, which surveyed 1,000 IT business leaders on how AI uptake is affecting revenue.

The answer, it turns out, has less to do with which AI tools a company deploys and more to do with how seriously it takes the infrastructure around them.

Findings

The report revealed a clear correlation between companies’ investment in AI resilience and their ability to achieve measurable returns.

According to the findings, nearly three-quarters of the companies surveyed increased their operational resilience over the past year, with a similar share actively growing their resilience budgets.

At the national level, the link between resilience and revenue is stark. In the U.K. and Ireland, more than three-quarters of revenue-growing companies are increasing resilience budgets, compared with fewer than half of those with flat or declining revenues, according to the report.

To put it plainly, organizations that invest in resilience are pulling ahead in ROI, while those that don’t are quickly falling behind.

Failure to implement good governance and resilience has a multi-layered impact. In addition to financial loss (the report found that more than half of U.K. and Irish companies face losses of $300,000 or more per hour during major IT incidents), reputational damage can follow a company beyond the incident itself.

The scale of technological innovation has also given rise to another concern: that of implementing adequate governance structures.

The Governance Gap

Speaking at a recent roundtable, PagerDuty CIO Eric Johnson, SVP of product development David Williams and VP of engineering Joao Freitas argued that while companies feel constant pressure to integrate AI and innovate, governance structures have not kept pace with these deployments.

“There is this insane race to deploy AI as quickly as possible, without the appropriate guardrails or orchestration tools being in place,” Williams said. “That’s the challenge.”

AI agents, he argued, should be managed the same way as human staff, with appropriate permissions, oversight and accountability. 

“You should be treating them like you would if you were building a team of people,” he said. “Make sure you hire people appropriate for the job, coordinate the work, divide it among them, monitor their performance, weed out the ones that are not performing well and give them more responsibility over time. If we’re not thinking about technology the same way, it can be catastrophic.”

“You can codify all these things for agents,” he added. “You can build infrastructure, orchestrate agents, provide guardrails, provide observability — all these things that prevent reliability from becoming a problem. But that hasn’t been done yet.”

Freitas pointed to the speed of AI-enabled development as another pressure point. “We had projects that previously would take six months that we can now do in 10 days,” he said. 

While the efficiency gains are significant, so are the risks.

“When we apply AI, we need to think about how we continue to be reliable, how we continue to be compliant, especially at the enterprise level,” Freitas added.

Questions About the End of the Employee

Beyond infrastructure, AI’s rapid expansion is raising a more human question about what happens to the workforce.

The early narrative that AI would render junior roles obsolete is already being revised, and Johnson stressed that the human-in-the-loop remains essential. 

“If you put an agent on your support site and it gives a customer the wrong information, then you are liable for that response,” he said. “It’s about augmenting staff, not always replacing them.”

On the question of entry-level jobs specifically, Johnson said the initial panic was misplaced. What will change, he argued, is the nature of those roles. 

With younger generations growing up alongside AI, he said, these employees will be able to provide a level of tech expertise that was previously neither possible nor necessary.

“By the time they enter the workforce, we’re going to have these AI natives that are really good at understanding how to manipulate AI to get things done at a scale we can’t even imagine right now,” Johnson added.

Williams struck a similar note, but with a warning: capturing the best of both generations requires ensuring that foundational knowledge isn’t lost in the transition.

“Understanding the fundamentals is important,” he said. “The question is how do you make sure the education is there: across secondary school, universities, as well as on the job? We need people to understand the fundamentals and be held accountable for checking the results when they use these tools. If you have those two things in place, I think we’re in good shape. If you skip either one, you’re in trouble.”
