Logitech Bridge lets you use your keyboard inside a VR experience

Over the past few years, one of the biggest problems virtual reality specialists like HTC and Oculus have attempted to tackle is what sort of input device works well alongside a headset. Various controllers have been developed with gaming in mind, but now Logitech is working on a way to take your keyboard into a virtual setting.

With a headset covering their eyes, even the most confident touch typist is going to have some issues using – or even finding – their keyboard. Thanks to the Logitech Bridge, though, that will no longer be a problem.

Bridge works with the Vive Tracker to render an accurately scaled, modeled, and tracked keyboard into a virtual environment, according to a report from Road to VR. It’s set to be built into SteamVR, to make it easy for developers to utilize the technology in their games and other experiences.

The Vive’s front-facing camera will be employed to track the user’s hand movements. This makes it possible to show an outline of their hands over the keyboard to give them a sense of their relative position.

Logitech is holding off on a wide-scale rollout of Bridge for the time being. From now until November 16, interested parties can apply to be among the first wave of developers to receive a beta version of the software development kit. Only 50 copies will be made available at this stage, priced at $150 each, but Logitech plans to expand the program if it proves popular.

This kind of technology has the potential to expand upon the possibilities of VR. While most early content designed for headsets like the Vive has been game-like, there’s untapped potential for the more practical applications of a virtual environment.

For instance, imagine the ultimate distraction-free writing environment: a digital representation of your document that fills your entire field of view, with nothing but a virtual keyboard sitting in front of you to attract your gaze.

Flashy, fun software was an obvious priority when VR developers were trying to get people to invest in expensive hardware, but now that prices are dropping, we’re sure to see a wider range of experiences and applications – and easy-to-implement keyboard support will help foster that sort of content.

Brad Jones
Former Digital Trends Contributor
Brad is an English-born writer currently splitting his time between Edinburgh and Pennsylvania. You can find him on Twitter…