OpenAI Launches Daybreak, a New Initiative to Challenge Glasswing


As more AI vendors seek to shape how their technology affects cybersecurity, OpenAI on Tuesday introduced a program to help organizations identify, patch, and validate software vulnerabilities in their code.

OpenAI Daybreak combines the intelligence of OpenAI’s GPT-5.5 models with Codex security features to automate workflows such as threat modeling and remediation.

The initiative comes as OpenAI and rival Anthropic compete on an almost monthly basis for the cybersecurity market, releasing new large language models (LLMs) such as Anthropic’s Mythos and OpenAI’s GPT-5.5-Cyber. Daybreak appears to be OpenAI’s answer to Anthropic’s much-publicized security-focused Project Glasswing.

Daybreak also addresses a common enterprise concern: many organizations fear that AI models will uncover vulnerabilities they cannot fix. That worry has grown following recent reports that a threat actor used AI to develop a zero-day exploit, a class of attack that targets a flaw before defenders have had any time to patch it.

“Security is under the spotlight,” said Gal Malachi, co-founder and CTO of Terra Security.

Projects such as Daybreak are important and beneficial to the cybersecurity community, he said.

“What OpenAI did is a good step forward because they’re not just giving you a bigger brain, they also give you a harness around it that lets you actually orchestrate and handle vulnerabilities,” he added.

A Lot More Needed

However, OpenAI’s initiative does not fully address the current threats and vulnerabilities facing cybersecurity professionals, Malachi said.

“It will help with something that LLMs are familiar with,” he said. He added that the focus of both Daybreak and Mythos is on code because code is currently the most common application for generative AI.

However, “a lot of things happen until code reaches production,” Malachi continued. Production refers to the stage at which software is deployed and in active use; preproduction is the earlier phase in which developers build, write, and test the application’s code.

“Preproduction is one thing, and yes, you can see some vulnerabilities or potential vulnerabilities in the code, but still, good LLMs produce a lot of false positives,” he said. “We also need to understand how systems run in production; perhaps there are a lot of things that you don’t see from the code.”

Given the significant risks in production, it is difficult to pinpoint where a threat originates when an LLM is involved, because the model’s output is generated in real time and cannot be fully audited from the source code alone. Demand is therefore growing in the cybersecurity community for solutions that go beyond code-based tools.

“The industry is still learning and trying to understand how to code with it,” Malachi said. He advised enterprises to approach initiatives like Daybreak, and models from AI labs such as Anthropic and OpenAI, with caution, and to ensure they have the right tools and guardrails in place.

 
