Class Action Lawsuit Filed Over Meta AI Glasses Privacy Claims


Meta is now facing a lawsuit over its AI glasses.

The class action suit, filed on March 4 in San Francisco on behalf of users, comes just days after European regulators raised privacy concerns about the product.

Both the U.K. data regulator and members of the European Parliament have expressed alarm that sub-contracted workers in Kenya employed to review footage to train Meta’s AI models have been exposed to private images and videos recorded by AI glasses users.

An investigation by Swedish newspapers revealed that the exposed footage extended to sex, toilet visits and other intimate moments.

The suit, filed by law firm Clarkson Law in federal court, centers on the claim that deception is at the heart of Meta’s product.

According to a statement accompanying the lawsuit: “The new AI economy runs on personal data, and Meta’s business is no exception. Behind [its] marketing and privacy guarantees lies a data pipeline that is deeply invasive of its users’ privacy.”


“Meta made privacy the centerpiece of its marketing campaign because it knew consumers would never buy these glasses if they knew the truth,” said Yana Hart, a partner at the Malibu-based law firm.

The action names two plaintiffs, Gina Bartone of New Jersey and Mateo Canu of California, who purchased AI glasses after seeing Meta’s marketing campaigns claiming the devices were “designed for privacy.” Neither saw any disclaimer or qualifier contradicting that claim.

But as Ryan Clarkson pointed out, these two buyers represent only a tiny fraction of Meta AI glasses users, with seven million pairs sold in 2025 alone.

While Meta has yet to comment specifically on the lawsuit, it issued a statement to several outlets, including Courthouse News. It said: “Ray-Ban Meta glasses help you use AI, hands-free, to answer questions about the world around you. Unless users choose to share media they’ve captured with Meta or others, that media stays on the user’s device.”

“When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people’s experience, as many other companies do. We take steps to filter this data to protect people’s privacy and to help prevent identifying information from being reviewed,” the statement continued.

Workers in Kenya have said this filtering does not always work.

