Riding the popularity of OpenClaw, Nvidia is offering enterprises an OpenClaw moment of their own, with added security and governance, while pushing to be recognized as an inference provider and not just the unquestioned leader in the AI training market.
At its GTC developer conference in San Jose on Monday, the AI hardware and software giant introduced the Nvidia NemoClaw stack for the OpenClaw agent platform. The stack lets users install Nvidia Nemotron models and the new Nvidia OpenShell runtime platform with a single command, according to Nvidia. OpenShell is an open source, secure environment for deploying personal AI agents, providing a safety and governance layer between the AI agent and its compute infrastructure.
The NemoClaw platform shows just how seriously Nvidia takes OpenClaw's popularity. The open source agentic AI framework, now under the OpenAI umbrella, has attracted an estimated two million-plus users worldwide since its introduction in November.
With OpenClaw, users can create and manage AI agents that execute real-world tasks and act as personal assistants. Despite its popularity, however, serious security concerns have emerged. The platform stores sensitive API keys and session tokens in unencrypted files that can be stolen if the host system is compromised. Moreover, researchers have found instances in which OpenClaw agents lacked password protection. These security holes mean that while OpenClaw is pushing AI agents toward what Nvidia CEO Jensen Huang calls “the ChatGPT moment,” it is not ready for the enterprise. But the technology is too important to ignore, Huang said.
OpenClaw Strategy
“For the CEOs, the question is what’s your OpenClaw strategy?” Huang asked during his GTC keynote on Monday. “Just as we need to all have a Linux strategy, we all need to have an HTTP/HTML strategy, which started the internet; we all needed to have a Kubernetes strategy, which made it possible for mobile cloud to happen. Every company in the world today needs to have an OpenClaw strategy.”
Nvidia’s new focus on agentic AI and OpenClaw is significant because it shows that the AI vendor is not leaving the heavy lifting of building tools solely to its partners; instead, it wants to help enterprises build those tools as well, said Brendan Burke, an analyst at Futurum Group.
“The rapid recent development of agentic models such as Nemotron shows that the company can become a standalone agent orchestrator, along with providing hardware,” Burke said.
Nvidia is also looking to replicate the effect of its two-decade-old CUDA (compute unified device architecture) platform, said Benjamin Lee, a professor at Penn Engineering at the University of Pennsylvania. Lee noted that CUDA was readily adopted for high-performance computing on Nvidia GPUs and drove further development of AI chips. OpenClaw, he said, is more than an AI model solving complex mathematical problems: it addresses real-world tasks, with models applying their training to situations they have not seen before.
“With OpenClaw … they want to bring it up a level to look at these generative AI agents,” Lee said. “They’re hoping to extend what they did with CUDA and that flywheel effect to a bigger level of computation, because it’s not just individual matrix operations, it’s actually the inference and the agents.”
Focus on Inference
Lee added that, in addition to focusing on agentic AI, Nvidia is homing in much more on AI inference, especially with its introduction, also on Monday, of six new chips in the Nvidia Vera Rubin line to power the next stage of agentic AI. The new chips are designed to operate together as one AI supercomputer powering every phase of AI, from pretraining and post-training to test-time scaling and agentic inference, Nvidia said.
The emphasis on AI inference, not only with these new chips but also with the Nvidia Vera Rubin DSX AI Factory design — a new guide for building codesigned AI infrastructure for data centers, in which hardware and software are developed together — illustrates Nvidia’s shift from training, Lee said.
“On the inference side, what they’re recognizing is that there are lots of different models of different sizes and different capabilities, and they want a platform for users to be able to experiment with different types of models quickly, deploy them and see if it meets their needs,” he said.
The inference push also lets Nvidia promote the idea that its Vera Rubin chips and AI factory concept will lower the overall cost of the tokens enterprises consume, said Jack Gold, president at J. Gold Associates.
“Inference is a cost-sensitive compute structure, just like cloud hosting is,” Gold said. “Promoting a message that even though our systems are expensive, we can enable you to generate lots more tokens, and hence more revenue, is a critical message for them going forward.” This is particularly important given competition from hyperscalers such as Google, AWS, and Microsoft, as well as independent chipmakers such as Cerebras, he added.
In addition to NemoClaw and AI chips, Nvidia expanded its Nemotron family with what the vendor calls “omni-understanding” models for various applications, including vision, voice, and safety. For example, Nemotron 3 Ultra powers AI-native applications such as coding assistants and search; Nemotron 3 Omni uses audio, vision, and language understanding to help agents gain insights from videos and documents; and Nemotron 3 VoiceChat enables the AI system to support real-time conversations.