Reports indicate that human moderators have reviewed sensitive, private videos recorded by users of Ray-Ban Meta smart glasses. The UK’s Information Commissioner’s Office (ICO) is now seeking clarification from Meta regarding data protection and transparency.
Key Details of the Investigation
Investigations by Swedish media outlets reveal that third-party contractors in Nairobi, Kenya, have analyzed recorded content to train Meta’s AI systems. According to workers involved in data annotation, the reviewed material includes:
- Private moments (such as individuals in bathrooms or changing rooms).
- Intimate interactions.
- Transcriptions of real-world conversations.
Why Is This Happening?
Meta uses these reviews to help its AI models better interpret real-world scenes and to improve the assistant’s responses. While this process is mentioned in Meta’s terms of service, the sensitive nature of the reviewed content has sparked significant privacy concerns.
Meta’s Position and User Control
Meta states that only content shared voluntarily with Meta AI is subject to analysis. The company also emphasizes that users can:
- Manage data settings within the app.
- Delete recordings and interaction history.
- Opt out of certain data-sharing features.
The ICO continues to investigate whether Meta is meeting legal standards for transparency and user privacy in the UK.