Meta has not confirmed whether it trains its AI using photos taken with smart glasses.


Meta’s second-generation, AI-powered Ray-Bans feature a discreet camera on the front, capable of taking photos not only when requested but also when triggered by specific keywords like “look.” This means the smart glasses collect a significant number of images, both intentionally and incidentally captured. Yet the company has not committed to keeping those images private.

When asked if Meta plans to train its AI models on images taken by Ray-Ban users, similar to how it uses images from public social media accounts, the company did not provide a clear answer.

“We’re not publicly discussing that,” said Anuj Kumar, a senior director of AI wearables at Meta, during a video interview with TechCrunch on Monday.

“That’s not something we typically share externally,” added Meta spokesperson Mimi Huggins, who was also present during the interview. When TechCrunch sought clarification on whether Meta is training AI on these images, Huggins responded, “we’re not saying either way.”

This ambiguity raises concerns, especially with the introduction of a new AI feature for the Ray-Ban Meta that is designed to capture numerous passive photos. TechCrunch reported last week that Meta plans to launch a real-time video feature for the smart glasses. When activated by specific keywords, the glasses will stream a series of images (effectively live video) into a multimodal AI model, allowing it to answer questions about the user’s surroundings in a low-latency, natural manner.

This could result in a substantial number of images being taken, often without the user’s conscious awareness. For instance, if a user asks the smart glasses to scan their closet to assist in outfit selection, the glasses may take numerous photos of the room and its contents, uploading them to an AI model in the cloud.

What happens to those images afterward remains unclear, as Meta has not provided any information.

Wearing the Ray-Ban Meta glasses essentially means having a camera on your face. As seen with Google Glass, this isn’t something that everyone is comfortable with. You might expect a company in this space to reassure users that “All your photos and videos from your face cameras will be completely private and isolated to your face camera.” But that’s not what Meta is doing.

Meta has already stated that it trains its AI models on every American’s public Instagram and Facebook posts, claiming that all of this data is “publicly available.” This broad definition of public data allows the company and other tech firms to use various types of content for AI training, which raises questions about what constitutes public versus private data.

However, it’s reasonable to argue that the visual information captured through smart glasses shouldn’t be considered “publicly available.” While there’s no definitive confirmation that Meta is training AI models on footage from the Ray-Ban Meta cameras, the company hasn’t ruled it out either.

In contrast, other AI providers have more explicit policies regarding user data. Anthropic, for example, states that it does not train on customer inputs or outputs from its AI models by default, and OpenAI says it does not train on inputs or outputs sent through its API.

We have reached out to Meta for further clarification and will update this story if they respond.

Blog | 2024-10-01 19:28:11