Meta Ray-Ban users’ most private moments captured on smart glasses may not be staying between them and the AI, a report has said. An investigation by two Swedish newspapers has revealed that a hidden workforce of thousands in Kenya is manually reviewing and labeling video feeds to “train” Meta’s next generation of artificial intelligence (AI). The content they are reviewing includes people using the toilet, undressing and engaging in intimate acts.

According to reports by Swedish newspapers Göteborgs-Posten and Svenska Dagbladet, footage recorded by Meta’s Ray-Ban glasses is quietly being watched by workers at Sama, a technology contractor based in Nairobi, as part of a massive data-labeling operation. There, “data annotators” spend ten-hour shifts watching real-world footage to teach AI how to recognise objects, people, and environments.
What workers say about the footage
What the workers, known as data annotators, at the data labelling company say is deeply troubling. The glasses, which come fitted with built-in cameras and microphones, record video that is sent to employees at Sama. These workers are paid to watch, label, and categorise the clips — a standard preprocessing step used to train AI models to better understand the world around them.

But the investigation by Swedish newspapers Göteborgs-Posten and Svenska Dagbladet revealed that the footage being reviewed is far from ordinary. Workers at Sama claim they have been shown clips that appear to include people using the toilet, undressing and engaging in intimate acts — all apparently recorded without the subjects’ knowledge.

“We see everything — from living rooms to naked bodies. Meta has that kind of content in its databases. People can record themselves in the wrong way and not even know what they’re recording. These are real people like you and me,” one worker was quoted as saying. The workers reportedly also say the people captured in the footage appear entirely unaware that their most private moments are being recorded, transmitted and reviewed by strangers on the other side of the world.

“Someone might have been walking around with the glasses, or happened to be wearing them, and then your partner might be in the bathroom, or he or she might have just come out naked,” another worker told the reporters. Several described clips they said could trigger “huge scandals” if they were ever leaked.

“It’s so extremely sensitive. That’s why we have cameras everywhere, and you’re not allowed to bring your own phones or any gadgets that can record,” one employee was quoted as saying.
A job with no room for questions
According to the workers, the situation is troubling because they receive no explanation for the material they are shown. They do not know whose footage they are watching, why specific clips have been selected, or what will appear on their screens next.

“When you watch these videos, it feels that way. But because it’s a job, you have to do it. You understand that it’s someone’s private life you’re looking at, but at the same time you’re just expected to do the job. You’re not supposed to question. If you start asking questions, you’re gone,” a worker said.

The workers also handle transcription work — checking whether the AI assistant in the glasses has correctly answered users’ questions. “It can be about any topic. We see chats where someone talks about crime or protests. It’s not just greetings, it can be very dark things too,” one annotator was quoted as saying.
What Meta has to say
The company has responded to the newspapers, with Meta’s spokesperson in London, Joyce Omope, saying that captured media is stored on the glasses until it is imported to the phone using the Meta AI mobile app. The company said the media is temporarily stored in a cache as part of the Meta AI mobile app. The cache automatically clears itself shortly after importing, and users can also clear it manually in Device settings. Moreover, when live AI is being used, the company processes that media according to the Meta AI Terms of Service and Privacy Policy. The company also said that voice recordings and voice queries made to Meta AI are used to improve and personalise the user experience and to develop and improve Meta Products.