HoloLens virtual touchscreen is the futuristic tech we’ve been waiting for

Researchers at Microsoft Research have developed a way to give HoloLens users a virtual touchscreen through a system called MRTouch. MRTouch gives users of Microsoft's mixed reality headset an additional way to interact, complementing the existing gesture, voice, and controller inputs, according to a Microsoft Research video demoing the prototype.

Although Microsoft Research hasn't announced plans to bring MRTouch to market or to let third-party developers use its multi-touch interactions at this time, the good news is that it works on an unmodified Microsoft HoloLens headset. All users need to do is swipe their fingers across a flat surface, such as a wall or tabletop, to create a virtual touchscreen. This virtual touch area can display content, and you can interact with it using multi-touch gestures, much as you would on a tablet.

One application for MRTouch and HoloLens is the ability to launch apps that rely on touch inputs, like a browser. This allows you to use a mixed reality device as your day-to-day computer, Microsoft said. In the photos app, Microsoft showed that you can pan and zoom using your fingers, just as you would on the Windows 10 Photos app on your touchscreen-enabled laptop or convertible. MRTouch even edges ahead of traditional touchscreens if you need to render realistic 3D content. Here, touch input could be combined with in-air gestures for specific interactions in three-dimensional space.
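As a rough illustration of the pan-and-zoom interaction described above, a pinch-to-zoom factor can be derived from the distance between two tracked fingertips across frames. This is a minimal sketch under my own assumptions, not Microsoft's implementation; the function name and pixel coordinates are hypothetical.

```python
import math

def pinch_zoom_scale(prev_touches, curr_touches):
    """Return the zoom factor implied by a two-finger pinch gesture.

    Each argument is a pair of (x, y) fingertip positions on the
    virtual touch surface. Values > 1.0 mean zoom in, < 1.0 zoom out.
    """
    def spread(touches):
        (x1, y1), (x2, y2) = touches
        return math.hypot(x2 - x1, y2 - y1)

    prev = spread(prev_touches)
    curr = spread(curr_touches)
    if prev == 0:
        return 1.0  # degenerate gesture; leave zoom unchanged
    return curr / prev

# Fingers move apart from 100 px to 150 px, implying a 1.5x zoom.
print(pinch_zoom_scale([(0, 0), (100, 0)], [(-25, 0), (125, 0)]))
```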

MRTouch essentially allows HoloLens owners to overlay a touchscreen that’s capable of displaying and rendering content on any flat surface.

MRTouch works by using the short-throw depth camera on the HoloLens headset for finger tracking. A reflectivity map is also captured at 25 frames per second, and the data is fed to the finger-tracking engine, Microsoft said. Microsoft's study tested for accuracy, and the results show that touch was detected 97.5 percent of the time with a mean distance error of 5.4 mm. According to Microsoft, the accuracy of MRTouch is very competitive with the capacitive touchscreens used on many modern laptops, tablets, and smartphones today.
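The basic idea of depth-based touch detection can be sketched as follows: a fingertip counts as "touching" when the depth camera sees it within a small distance of the flat surface it hovers over. This is a simplified stand-in for MRTouch's tracker; the real system fuses depth and reflectivity data at 25 fps, and the 10 mm threshold below is an illustrative assumption, not Microsoft's actual value.

```python
def is_touching(finger_depth_mm, surface_depth_mm, threshold_mm=10.0):
    """Classify a fingertip as touching a flat surface.

    finger_depth_mm: distance from camera to the fingertip.
    surface_depth_mm: distance from camera to the surface plane
    along the same ray. The fingertip is 'touching' when the gap
    between the two is within the threshold.
    """
    return abs(surface_depth_mm - finger_depth_mm) <= threshold_mm

# Fingertip 5 mm above a surface 1 m away: counts as a touch.
print(is_touching(995.0, 1000.0))   # touching
print(is_touching(960.0, 1000.0))   # hovering, not touching
```

A per-frame loop would run this check for each tracked fingertip and emit touch-down/touch-up events on state changes, much like a capacitive touch controller does.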

Chuong Nguyen