Meta begins testing a GPT-4V rival multimodal AI in smart glasses




More news from Meta Platforms today, parent company of Facebook, Instagram, WhatsApp and Oculus VR (among others): hot on the heels of its release of a new voice cloning AI called Audiobox, the company announced that this week it is beginning a small U.S. trial of a new multimodal AI designed to run on its Ray-Ban Meta smart glasses, made in partnership with the signature eyewear company Ray-Ban.

The new Meta multimodal AI is set to launch publicly in 2024, according to a video post on Instagram by longtime Facebook, now Meta, chief technology officer Andrew Bosworth (aka "Boz").

"Next year, we're going to launch a multimodal version of the AI assistant that takes advantage of the camera on the glasses in order to give you information not just about a question you've asked it, but also about the world around you," Boz said. "And I'm so excited to share that starting this week, we're going to be testing that multimodal AI in beta via an early access program here in the U.S."

Boz did not say in his post how to participate in the program.

The glasses, the latest version of which was unveiled at Meta's annual Connect conference in Palo Alto back in September, start at $299, and current models already ship with a built-in AI assistant onboard. But that assistant is fairly limited: it cannot intelligently respond to video or photos, much less a live view of what the wearer is seeing (despite the glasses having built-in cameras).


Instead, the existing assistant was designed to be controlled purely by voice, with the wearer speaking to it much as they would to a voice assistant like Amazon's Alexa or Apple's Siri.

Boz showcased one of the new capabilities of the multimodal version in his Instagram post, including a video clip of himself wearing the glasses and looking at a lighted piece of wall art depicting the state of California in an office. Interestingly, he also appeared to be holding a smartphone, suggesting the AI may require a smartphone paired with the glasses to work.

A screen showing the apparent user interface (UI) of the new Meta multimodal AI showed it successfully answering Boz's prompt "Look and tell me what you see," identifying the art as a "wooden sculpture," which it called "beautiful."

Video showing Meta's multimodal AI in beta. Credit: @boztank on Instagram.

The move is perhaps to be expected given Meta's wholesale embrace of AI across its products and platforms, and its promotion of open source AI through its signature LLM, Llama 2. But it is interesting to see its first attempt at a multimodal AI arrive not as an open source model on the web, but through a device.

Generative AI's move into the hardware category has been slow so far, with only a few smaller startups, including Humane with its "Ai Pin" running OpenAI's GPT-4V, making the first attempts at dedicated AI devices.

Meanwhile, OpenAI has pursued the route of offering GPT-4V, its own multimodal AI (the "V" stands for "vision"), through its ChatGPT app for iOS and Android, though access to the model also requires a ChatGPT Plus ($20 per month) or Enterprise subscription (variable pricing).


The move also calls to mind Google's ill-fated trials of Google Glass, an early smart glasses prototype from the 2010s that was derided for its fashion sense (or lack thereof) and its conspicuous early-adopter userbase (spawning the term "Glassholes"), as well as its limited practical use cases, despite heavy hype prior to launch.

Will Meta's new multimodal AI for Ray-Ban Meta smart glasses be able to avoid the Glasshole trap? Has enough time passed, and have sensibilities around strapping a camera to one's face changed enough, for a product of this nature to succeed?


