Meta is launching a new version of its Ray-Ban glasses with an embedded artificial intelligence assistant, a move aimed at pushing wearable technology forward. The glasses can process audio and visual prompts and respond with text or spoken answers to what the wearer is doing.
Among the headline features is "Look and Ask," which lets the wearer snap a picture and ask questions about it instantly, speeding up tasks such as language translation and making interaction with the surrounding environment more natural.
Meta has announced an early access program for its upcoming AI-integrated smart glasses, giving users a first look at a host of new features, though the program also raises privacy concerns. Meta AI, the company's proprietary multimodal assistant, will be built into the second generation of Meta Ray-Bans.
The wake phrase "Hey Meta" lets wearers control features and get real-time information about what they are seeing. In the process, however, the company gathers an extensive amount of personal information, and its policies leave room for interpretation as to how that data is used.
Currently in beta, the glasses ship with an artificial intelligence assistant that can process video and audio prompts and respond with text or audio. The company plans to launch an early access trial program shortly. In an Instagram reel, Zuckerberg demonstrated the glasses suggesting clothes and translating text, illustrating how useful they can be in everyday life.
Privacy advocates, however, are raising concerns about the risks of such advanced technology, since all images captured by the glasses are stored by Meta, ostensibly to train the AI systems that power them.
The extent and use of the data Meta collects have drawn significant scrutiny, adding to long-standing questions about the company's privacy policies.
Meta states that it collects 'essential' data needed to keep the device working, such as battery life and connectivity, and that users can opt in to share additional data to help develop new features.
The company's privacy policy, however, remains ambiguous about the types of data it collects to identify policy violations and misuse.
Meta's first-generation glasses included safety features such as a visible camera light and a recording switch, but despite these measures, sales and engagement fell short of expectations.
Beyond advancing the field of AI, Meta's new enhancements aim to rebuild public trust amid ongoing privacy concerns while pursuing a technological breakthrough.
Meta has confirmed that its latest Ray-Ban glasses will include a built-in AI assistant offering features such as real-time photo queries and language translation, despite the controversy surrounding its privacy practices.
Despite these advances, trust remains one of wearable technology's biggest challenges. The first version of Meta's smart glasses shipped with several safety features, including a light that signals when the camera is in use and an on/off switch for recording, intended to make the glasses safer to use.
Even so, sales fell short, coming in about 20% below target. And with only around 10% of the glasses still in active use 18 months after launch, Meta did not get the sustained engagement it might have hoped for, even from customers who did buy them.
With its new AI features, Meta is clearly hoping to change those numbers. With privacy concerns still looming large, it remains to be seen whether the tech giant can convince users to trust it with their personal data.