Whatever my experience with various AI chatbots, there is one thing they all have in common: they are all confident liars. The Ray-Ban Meta smart glasses have built-in AI image recognition, so I thought I’d ask them some questions about my everyday hobbies. Instead, the glasses failed to connect with any book, character, print, or toy I showed them. The only time I’ve experienced a similar level of disconnect was during a typically awkward conversation with my dad.
At last week’s Meta Connect conference, CEO Mark Zuckerberg unveiled a series of updates to the Ray-Ban glasses. On Wednesday, Zuckerberg said the glasses would gain reminders (those with the latest update should be able to ask, “Where did I park my car?”), QR code scanning, and features like the ability to quickly reply to friends on WhatsApp or Messenger. More updates are coming soon and should add live translation and live video scanning, a feature that would let the AI comment on what you see in real time.
But I don’t trust artificial intelligence to accurately describe the decor of my room, let alone the items in the supermarket. Meta gave me a new pair of Headliner glasses with Transitions lenses, and I can definitely say they look much better than my old, yellowed sunglasses. The photos they take are mediocre compared to my iPhone’s, but I don’t have many complaints about the built-in audio. They won’t match headphones or high-quality earbuds, but they easily beat most laptop speakers when I browse my playlists on Apple Music. I think they’re a solid choice for personal audio while lounging at the beach.
Message and music integration are great, but I wanted to see how well this AI wearable works where other devices have failed miserably. I wore the Ray-Ban Meta glasses around my apartment and asked them about my pile of tabletop RPGs, the prints on my walls, my comic book figurines, and my trove of Warhammer 40K novels. It was like having a conversation with a brick wall, or with my dad, someone who has no interest in fantasy or science fiction and only pretends to engage. Unlike my dad, who at least still tries, Meta’s glasses are terrible at hiding how little they care.
Is there no nerdy information in Meta’s AI training data?
I pointed the glasses at a metal print of a scene from the 2019 RPG Disco Elysium. The AI’s best guess was “Borderlands.” For some reason, it decided the loyal detective Kim Kitsuragi was grandstanding, and that Harry Du Bois, aka “Tequila Sunset,” was “one of the Vault Hunters.” I asked it to identify my gaming setup. It looked at the PlayStation 5 on my shelf and told me with absolute certainty that it was a PlayStation 4.
I tried some memorabilia, both esoteric and otherwise. It examined an action figure from Brian K. Vaughan and Fiona Staples’ Saga comics and told me the character was Doctor Strange. My statue from Sin City was, according to Meta, the Hulk. Just like my parents, the glasses seemed to think any nerdy figure must be a character from a Marvel movie. Looking at the prints hanging on my wall, two artistic depictions of Samus Aran in and out of her armor from the Metroid series, Meta told me they looked like Iron Man.
Even when the glasses behave correctly, the AI is rarely specific or accurate. It confidently read the titles of several indie RPG rulebooks off my shelf. Nonetheless, in the most dad-like way possible, it insisted these RPGs were related to Games Workshop’s Warhammer miniature wargames. Yes, Dad, I play Warhammer 40,000. No, Dad, these books have nothing to do with it.
But hey, the device knows who Luigi is. Nintendo’s influence clearly extends beyond the confines of my little nerd bubble. Still, you’d think the AI could tell the difference between a Pokémon and a Korok from Zelda.
While the reminders sound useful, Meta’s Ray-Bans still fall short on privacy
Meta’s glasses are short on detail and long on guesswork. Yes, it’s fun to watch the glasses whiff on nerdy trivia, but they’re also unreliable for more basic tasks. They looked at a bottle of pomegranate molasses in my cupboard and told me it was soy sauce. Remember Google’s first foray into on-device AI and the lie about the Webb telescope? Meta’s AI model for the Ray-Ban glasses will lie to your face about what’s right in front of your own eyes.
The answers it does get right are often brief and largely unhelpful. It can give a basic synopsis of a novel by an author like Dan Abnett (it at least knows who he is). You can ask the AI for more about his bibliography, but when I asked how many books he had written for Games Workshop’s Black Library, it told me, “More than 25, but the exact number is unknown.” That number is eminently quantifiable. You can follow the Wikipedia link the AI provides and count for yourself; the figure is closer to 50.
We haven’t yet seen Meta’s multimodal Llama 3.2 model on the glasses. Meta’s AI says it still runs on Llama 3.1 70B, but the LLM may simply not be suited to everyday queries. The glasses cannot access location data (which is probably for the best), so the wearable AI couldn’t tell me where the nearest bubble tea shop is near Union Square. There are two within a three-block radius.
Despite running the latest update, I’ve had no luck accessing QR code scanning or the new reminders feature. Reminders seem like a better use for the glasses, but know that if you take a photo of your license plate and ask the glasses to analyze it, Meta will see it, too. The Zuckerberg-led social giant told TechCrunch this week that it will train its AI on any photo you take with the glasses.
The AI model is intentionally limited in other ways to protect privacy, though not necessarily yours. Meta’s AI won’t describe any face or person it sees. You can still snap a photo of anyone you like by discreetly pressing the capture button, but the AI will refuse to identify them or comment on their appearance.
Despite Meta’s efforts, the Ray-Bans still carry serious privacy implications. A group of college students hacked their Ray-Ban glasses to add facial recognition. The modified glasses could even pull additional information from the internet, including names, phone numbers, emails, and even more sensitive details. The group posted a video on Twitter last week showing just how effective the glasses are.
This is not what the Ray-Ban Metas are designed for. A Meta spokesperson pointed out to 404 Media that, technically, facial recognition software can run on any camera, not just the ones found on the Ray-Bans. At the same time, Meta has gone to great lengths to make its smart glasses’ cameras as inconspicuous as possible. Meta markets the Ray-Bans to influencers who want to post photos of themselves on Instagram. For now, the AI doesn’t offer viewers much more than interesting clips for Reels.
These name-brand designer glasses aren’t necessarily built for the crowd that goes to New York Comic Con and asks their glasses which character that anime fan is cosplaying. In their current state, I wouldn’t use the AI capabilities for anything more than a party trick.