
Like the internet? Google wants to attach it to your face


At I/O this year, Google proclaimed that “2018 will mark the true kickoff point” for the immersive web, which brings virtual and augmented reality experiences to a browser near you. To help developers create VR and AR content for the web, Google unveiled its new WebXR API. Parts of the API are still in development, but Google is already letting developers experiment with it today.

The immersive web is a collection of new and upcoming technologies that prepare the web for the full spectrum of immersive computing, said Brandon Jones of Google’s Chrome team. “More generally, what we think of as the immersive web is anything that gives the web a sense of depth, volume, scale, or place.”


There are two main technologies right now that power the immersive web — virtual reality and augmented reality. Jones described VR as tech that takes you anywhere, whereas AR is tech that brings anything to you.


To bring VR to the web, Google is replacing last year’s WebVR standard with its new WebXR protocol. In addition to supporting the deprecated WebVR specification, WebXR adds AR support, better forward compatibility, a cleaner and more consistent user experience, and further optimizations. Those optimizations let VR headsets display twice as many pixels (now up to 4 million) at the same frame rate, remedying some of WebVR’s limitations, Jones noted.
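
For a sense of what this looks like in code, here is a minimal sketch of requesting an immersive VR session with the WebXR Device API as it exists today; the draft behind Chrome 67’s flag used somewhat different method names, so treat this as illustrative rather than the exact 2018 API. It assumes WebXR type definitions (for example the @types/webxr package), an already XR-compatible WebGL context, and a hypothetical enterVR helper. The framebufferScaleFactor knob shown here is one place where the extra pixel budget surfaces in the current API, though not necessarily the mechanism Jones was describing.

```ts
// Sketch only: start an immersive VR session and size its framebuffer.
// Assumes @types/webxr definitions and an XR-compatible WebGLRenderingContext.
async function enterVR(gl: WebGLRenderingContext): Promise<XRSession | null> {
  const xr = navigator.xr;
  if (!xr || !(await xr.isSessionSupported("immersive-vr"))) return null;

  const session = await xr.requestSession("immersive-vr");

  // Request the headset's full native resolution for the session's framebuffer.
  const scale = XRWebGLLayer.getNativeFramebufferScaleFactor(session);
  session.updateRenderState({
    baseLayer: new XRWebGLLayer(session, gl, { framebufferScaleFactor: scale }),
  });

  return session;
}
```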

With AR in the browser, WebXR leverages the device’s camera to place virtual objects in real-life environments, similar to Snapchat’s filters. In a demo, Chrome team product manager John Pallett showed how a virtual statue can be placed on a surface on stage, letting the viewer judge the statue’s real-world scale as well as view details and information about the object.
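
As a rough sketch, this is how that kind of surface placement is expressed with the WebXR hit-test module, which was standardized after this demo; the build shown at I/O used an experimental API, so the shape below is illustrative. The @types/webxr definitions and the placeStatue() renderer call are assumptions, not anything from the talk.

```ts
// Sketch only: cast a ray from the camera into the real world each frame and
// place a virtual object on the first detected surface.
async function placeOnSurface(): Promise<void> {
  const xr = navigator.xr;
  if (!xr || !(await xr.isSessionSupported("immersive-ar"))) return;

  const session = await xr.requestSession("immersive-ar", {
    requiredFeatures: ["hit-test"],
  });

  const viewerSpace = await session.requestReferenceSpace("viewer");
  const localSpace = await session.requestReferenceSpace("local");

  // Hit testing is an optional module, so it may be absent on some browsers.
  const hitTestSource = await session.requestHitTestSource?.({ space: viewerSpace });
  if (!hitTestSource) return;

  session.requestAnimationFrame(function onFrame(_time, frame) {
    const hits = frame.getHitTestResults(hitTestSource);
    const pose = hits.length > 0 ? hits[0].getPose(localSpace) : undefined;
    if (pose) {
      // placeStatue(pose.transform) would pin the statue to the detected
      // surface so the viewer can judge its real-world size.
    }
    session.requestAnimationFrame(onFrame);
  });
}
```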

Developers can get started today by enabling the WebXR flag in the Chrome 67 beta. With the flag enabled, VR content can be displayed in three ways. First, it can be shown through a mobile headset such as Daydream or Cardboard. Second, it works on desktop VR systems like the HTC Vive and Oculus Rift. And lastly, for users who don’t have a headset, VR can be displayed right in the browser through Magic Window, which tracks the view based on your phone’s sensors.
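
The Magic Window path in particular maps onto what the current spec calls an “inline” session: the scene is drawn into a normal canvas on the page and, where the browser grants sensor tracking, the view follows the phone’s orientation. The sketch below shows the idea under the same assumptions as before (@types/webxr types); startMagicWindow and drawScene() are illustrative names, not anything Google shipped.

```ts
// Sketch only: render a "Magic Window" view into the page's own canvas,
// tracking the phone's orientation when the "local" feature is granted.
async function startMagicWindow(gl: WebGLRenderingContext): Promise<void> {
  const xr = navigator.xr;
  if (!xr) return;

  const session = await xr.requestSession("inline", {
    optionalFeatures: ["local"],
  });
  session.updateRenderState({ baseLayer: new XRWebGLLayer(session, gl) });

  // Fall back to the untracked "viewer" space if sensor tracking was denied.
  const space = await session
    .requestReferenceSpace("local")
    .catch(() => session.requestReferenceSpace("viewer"));

  session.requestAnimationFrame(function onFrame(_time, frame) {
    const pose = frame.getViewerPose(space);
    if (pose) {
      // drawScene(pose) would redraw the canvas from the sensed viewpoint.
    }
    session.requestAnimationFrame(onFrame);
  });
}
```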

To expand the reach of the immersive web, developers can let users switch between a VR headset and Magic Window with a single click. Additionally, a JavaScript polyfill lets WebXR be emulated on older WebVR-compatible browsers as well as on browsers without native support, such as Mobile Safari.
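
One way to wire that up, as a sketch, is the open-source webxr-polyfill package maintained by the Immersive Web group; the exact polyfill Google referenced may differ, and since the package ships without type definitions, a TypeScript project may need a small local module declaration for the import.

```ts
// Sketch only: load the JavaScript polyfill so WebXR content still runs on
// browsers without native support (e.g. Mobile Safari) or on older
// WebVR-only browsers. Bundler setup is assumed.
import WebXRPolyfill from "webxr-polyfill";

// Installing the polyfill patches in navigator.xr where it is missing, backed
// by WebVR devices or device-orientation sensors when available.
const polyfill = new WebXRPolyfill();
```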

The tools for handling AR are still being developed in the W3C community, of which Google is a member, Pallett said. The beauty of WebXR is progressive fallback: if a user tries to view AR content on a desktop without a camera, for example, developers can fall back to Magic Window and present the AR experience as a VR one.
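
That progressive fallback could look something like the sketch below: try camera-based AR first and, if the device can’t provide it, reuse the same scene in a Magic Window-style inline session. The function name and the @types/webxr definitions are assumptions for illustration.

```ts
// Sketch only: prefer true AR, otherwise downgrade to an inline VR-style view.
async function startSceneSession(): Promise<XRSession | null> {
  const xr = navigator.xr;
  if (!xr) return null; // No WebXR at all; serve a flat 2D page instead.

  if (await xr.isSessionSupported("immersive-ar")) {
    return xr.requestSession("immersive-ar"); // Camera pass-through AR.
  }

  // Same content, VR-style presentation: no camera feed, just the 3D scene.
  return xr.requestSession("inline", { optionalFeatures: ["local"] });
}
```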
