AI Safety From a Hardware Perspective

ORLANDO — Hong Kong-based multinational Lenovo designs, manufactures and sells the servers, laptops and desktop PCs that people around the world are increasingly using to build and deploy personal AI agents.

Beyond supply chain and chip shortage worries, the vendor has to address the safety and security problems associated with the personal agent phenomenon triggered by the sudden emergence of OpenClaw, an open source personal agent framework, around the start of the year.

“If you look at it just straight from a security and defense standpoint, let’s say a company like Lenovo, to us, agents and chatbots are endpoints that need to be defended just like the physical device,” said Christopher Campbell, director of AI governance and global products and services security leader at Lenovo, on the latest episode of Targeting AI. The podcast was recorded onsite at the Gartner Data & Analytics Summit 2026.

“The other side of that is … sometimes you can get differing results if there’s not consistency, let’s say, between a local model and a cloud model,” Campbell continued. “From our perspective, one thing that we’re doing, and this is internally for development purposes and readiness purposes, is looking at how we can make agents … be more developer focused so that developers … are armed with the information they need.”

Part of Lenovo’s approach to the fast-growing personal agent market is to develop a responsible AI process to govern how agents are created and deployed on personal devices, Campbell said.

As a hardware company, Lenovo wants to meet legal, ethical and compliance obligations in how it uses personal chatbots to select AI models and applications, he added. 

Meanwhile, Lenovo is using agents internally and needs to operate within those same guardrails.

“All of our different business groups are either using or developing personal chatbots … or external chatbots for customer support,” Campbell said. “And from a governance standpoint, we’re getting a better handle on that now. All of those projects still have to go through responsible AI review.”

Campbell also spoke more broadly about AI safety and governance, particularly in light of recent incidents in which AI has been blamed, at least in part, for users dying by suicide after protracted interactions with large language models.

“A lot of organizations and other people I talk to are dealing with issues of trying to understand that human impact of AI. And [for] myself and my team, [that’s something that’s] been first and foremost on our minds is the human impact and safety of AI,” he said. “You have to continue to adapt to the EU AI Act and other regulations. But I think industry-wide, we’re getting to a turning point where people have to be looking at how AI actually affects human safety.”
