Meta Faces Lawsuit Over AI Glasses Privacy Breach After Investigation Reveals Sensitive Footage Review
By admin | Mar 05, 2026 | 4 min read
Meta is confronting a fresh legal challenge over the privacy features of its AI smart glasses. The suit follows an investigation by Swedish newspapers revealing that employees at a subcontractor in Kenya have been reviewing customer footage, including sensitive material such as nudity, sexual activity, and people using toilets. While Meta asserted that it blurs faces in images, reports indicated this feature does not always work reliably. The situation has also drawn the attention of the U.K.'s Information Commissioner's Office, which has opened its own investigation.
In the United States, a new lawsuit has been filed against the company. The plaintiffs, Gina Bartone from New Jersey and Mateo Canu from California, are represented by the Clarkson Law Firm and allege that Meta breached privacy laws and engaged in deceptive advertising. The complaint highlights that the smart glasses are promoted with claims like “designed for privacy, controlled by you” and “built for your privacy.” The plaintiffs argue these statements would not lead a reasonable customer to believe their intimate footage could be viewed by overseas workers, and they state they saw no disclaimers contradicting these privacy assurances.
The legal action charges both Meta and its manufacturing partner, Luxottica of America, with violating consumer protection laws. Meta has declined to comment on the ongoing litigation. The Clarkson Law Firm, known for previous major cases against companies like Apple, Google, and OpenAI, emphasizes the scale of the issue, noting that over seven million people purchased Meta's smart glasses in 2025. According to the firm, that means millions of customers' footage can enter a human review pipeline with no option to opt out.
In response to inquiries, Meta explained to the BBC that when users share content with Meta AI, contractors may review the information to enhance the user experience, a practice outlined in its privacy policy. The company referenced its Supplemental Meta Platforms Terms of Service, though it did not specify the exact location of this disclosure. The BBC found a mention of human review within Meta’s U.K. AI terms, while the U.S. version states, “In some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review may be automated or manual (human).”

The lawsuit focuses heavily on the marketing of the glasses, citing advertisements that emphasized privacy benefits, described privacy settings, and promoted an "added layer of security." One ad explicitly stated, "You're in control of your data and content," explaining that owners decide what to share. The case emerges amid growing public concern over "luxury surveillance" devices such as smart glasses and always-listening AI pendants, a concern that has even spurred developers to create apps that detect nearby smart glasses.
While Meta has not commented on the new lawsuit directly, spokesperson Christopher Sgro provided a statement on the broader issue: “Ray-Ban Meta glasses help you use AI, hands-free, to answer questions about the world around you. Unless users choose to share media they’ve captured with Meta or others, that media stays on the user’s device. When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people’s experience, as many other companies do. We take steps to filter this data to protect people’s privacy and to help prevent identifying information from being reviewed.”
This article was updated following its initial publication to include Meta’s statement.