
The Apple Vision Pro can now be controlled by your mind alone

Mark has ALS but can use the Vision Pro via Synchron's Stentrode. Synchron

The Apple Vision Pro is already incredibly easy to use, largely thanks to its lack of controllers. You just look at a control and tap your index finger to your thumb to select.

But hand gestures aren’t always easy or possible for the millions of people worldwide who have paralysis of the upper limbs. Synchron recently announced a spatial computing breakthrough that lets users of its Stentrode BCI (brain-computer interface) implant control an Apple Vision Pro.


A demo of the technology was posted in a YouTube video featuring a person named Mark, who has amyotrophic lateral sclerosis (ALS), which prevents him from moving his hands. But thanks to the BCI, he can now use a mixed-reality headset. Since the Vision Pro uses eye-tracking to move the cursor, Synchron’s BCI needs only to detect the intention to make small hand gestures.

Synchron Brain Computer Interface + ChatGPT | Powered by OpenAI

Mark played Solitaire, browsed and watched videos on Apple TV, and sent text messages, selecting words with Synchron’s BCI and Vision Pro software.

This advancement follows an equally impressive demonstration of Mark using a text chat interface that speeds up interaction with the help of ChatGPT. Synchron posted a YouTube video that shows how it works.

Synchron was pioneering BCI work long before Neuralink. The Stentrode was the first BCI implanted in humans, with clinical trials beginning in 2022, and Synchron relies on well-established techniques like stents and endovascular surgery. Neuralink is impressive too, allowing the equivalent of mouse control via its BCI, but it requires opening the skull.

BCI hardware tends to get the most attention, but software and integration with external devices are essential to creating a more comfortable and functional experience, as this new example proves.

Availability of this technology remains quite limited, with clinical trials still in progress for both companies. That means it could be several years before thought control of computers becomes widespread. While news of breakthroughs like this is exciting, years of thorough testing are still needed to ensure safety.

Alan Truly
Alan Truly is a Writer at Digital Trends, covering computers, laptops, hardware, software, and accessories that stand out as…