Class Action Lawsuit Filed Over Meta AI Glasses Privacy Claims


Meta is now facing a lawsuit over its AI glasses.

The class action suit, filed in San Francisco on March 4 on behalf of users, comes just days after European regulators raised privacy concerns about the product.

Both the U.K. data regulator and members of the European Parliament have expressed alarm that sub-contracted workers in Kenya employed to review footage to train Meta’s AI models have been exposed to private images and videos recorded by AI glasses users.

An investigation by Swedish newspapers revealed that these included recordings of sex, toilet visits and other intimate moments.

The suit, filed by law firm Clarkson Law in federal court, centers on the claim that deception is at the heart of Meta’s product.

According to a statement accompanying the lawsuit: “The new AI economy runs on personal data, and Meta’s business is no exception. Behind [its] marketing and privacy guarantees lies a data pipeline that is deeply invasive of its users’ privacy.”


“Meta made privacy the centerpiece of its marketing campaign because it knew consumers would never buy these glasses if they knew the truth,” said Yana Hart, a partner at the Malibu-based law firm.

The action names two plaintiffs, Gina Bartone of New Jersey and Mateo Canu of California, who purchased AI glasses after seeing Meta’s marketing campaigns claiming the devices were “designed for privacy.” Neither saw any disclaimer or qualifier contradicting that claim.

But as Ryan Clarkson pointed out, these two buyers represent only a tiny fraction of Meta AI glasses users: seven million pairs were sold in 2025 alone.

While Meta has yet to comment specifically on the lawsuit, it issued a statement to several outlets, including Courthouse News. It said: “Ray-Ban Meta glasses help you use AI, hands-free, to answer questions about the world around you. Unless users choose to share media they’ve captured with Meta or others, that media stays on the user’s device.”

“When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people’s experience, as many other companies do. We take steps to filter this data to protect people’s privacy and to help prevent identifying information from being reviewed,” the statement continued.

Workers in Kenya have said this filtering does not always work.
