A digital rights organization in Kenya has called on the country’s data protection authority to examine whether footage captured through Meta’s smart glasses is being used in ways that violate privacy laws.

The group, The Oversight Lab, submitted a formal request to the Office of the Data Protection Commissioner (ODPC), asking regulators to review how photos and videos recorded by Ray-Ban Meta smart glasses are collected, processed, and potentially used to train artificial intelligence systems.

The complaint adds new scrutiny to the global data infrastructure that supports AI development, particularly the growing reliance on contract workers in countries like Kenya to analyse and label digital content used in machine learning models.

Questions Around Consent and Privacy

At the centre of the complaint is the issue of consent. According to documents reviewed, the Oversight Lab is asking regulators to determine whether individuals captured in footage recorded by the smart glasses were aware that their images, voices, or activities could be used to develop AI technologies.

The group is also seeking clarification on whether the devices could enable users to record others without their knowledge, potentially in both public and private settings.

These concerns fall under Kenya’s Data Protection Act, which requires organisations collecting personal information to obtain clear consent and ensure responsible handling of sensitive data.

Nairobi’s Expanding Role in the AI Data Economy

The complaint also draws attention to Nairobi’s growing position within the global artificial intelligence ecosystem.

In recent years, Kenya has emerged as a major hub for data annotation work. Thousands of contractors review and label images, videos, and text so machine-learning systems can learn to identify objects, behaviours, and environments more accurately.

The Oversight Lab’s request follows an investigation conducted by Swedish newspapers Göteborgs-Posten and Svenska Dagbladet. Their reporting indicated that Kenyan contractors working for the outsourcing company Sama have been tasked with reviewing footage captured by Meta’s smart glasses.

According to the investigation, recordings collected from users of the glasses around the world are routed to annotation teams who categorise scenes and identify objects to improve the performance of Meta’s AI systems.

Sensitive Material Among Reviewed Footage

The complaint claims that some of the content provided to data labellers may contain highly sensitive imagery.

Examples cited include recordings showing people in bathrooms, intimate interactions, visible bank card details, and individuals viewing explicit content. Workers reviewing this material are responsible for tagging and classifying what appears in the footage so AI systems can better understand the environments captured by the glasses.

While this process is common in AI training pipelines, critics argue that first-person recordings present unique privacy challenges because individuals appearing in the footage may not realise they are being recorded.

How Ray-Ban Meta Smart Glasses Capture Data

Ray-Ban Meta smart glasses were developed through a partnership between Meta and eyewear manufacturer EssilorLuxottica. The wearable devices include built-in cameras and microphones that allow users to take photos, record videos, and interact with an AI assistant.

Some features rely on cloud-based services operated by Meta, where captured content may be processed to power AI functionality.

Wearable devices like these represent a growing category in consumer technology. Companies across the industry see AI-powered wearables as a potential successor to traditional smartphone interactions.

Concerns About International Data Transfers

The Oversight Lab has also asked regulators to investigate whether data collected through the smart glasses is transferred across borders before being processed by annotation teams in Kenya.

The organisation wants authorities to determine whether the companies involved conducted a formal data protection impact assessment before handling the material. It is also asking whether individuals recorded by the glasses were informed that their data might be processed in another country.

“We are deeply concerned by the development of harmful technology through the exploitation of vulnerable communities,” Mercy Mutemi, executive director of The Oversight Lab, said in a statement accompanying the complaint.

Ongoing Debate About Labour Conditions

The filing also highlights broader concerns surrounding the working conditions of people involved in content moderation and AI training tasks.

In previous legal disputes, contractors working for companies linked to Meta’s moderation operations in Kenya accused employers of exposing them to disturbing material without adequate safeguards or support.

These controversies have intensified discussions about the ethics of outsourcing large volumes of AI training and content moderation work to developing economies.

What Regulators May Examine Next

The Oversight Lab has asked the Office of the Data Protection Commissioner to complete its review within 90 days and determine whether the companies involved complied with Kenyan data protection laws.

The case could place renewed attention on Kenya’s role in the global AI supply chain. As demand for training data continues to rise, the country’s workforce has become an important part of the infrastructure that supports the development of machine-learning technologies.

The investigation, if pursued, may raise broader questions about privacy, labour standards, and how technology companies collect and process the massive datasets required to build modern AI systems.