
Facebook’s new image stabilization tech makes shaky 360-degree videos smoother

Facebook is investing more resources into 360-degree video, a format that is gaining popularity on its platform as more users gain access to 360 cameras. The company’s last count revealed that 250,000 360-video clips had been uploaded to the social network since September.

Today, Facebook announced that it is testing newly built image stabilization technology specifically designed to make 360 videos smoother. Facebook claims its tech is unique in combining standard 2D stabilization algorithms, 3D techniques, and a new “deformed-rotation” motion model into a hybrid stabilization architecture. The company plans to eventually roll the system out on its social network and the Oculus VR platform.


For everyday users, this means 360 videos that capture motion will soon be processed to remove the shaky footage that detracts from the format’s immersive experience.

“As [360-video] cameras become more prevalent, the range and volume of 360 content are also expanding,” states Johannes Kopf, research scientist at Facebook. “It’s not always easy to keep the camera steady and avoid shaking, particularly when filming motion (like a mountain bike ride or a walking tour) with a handheld camera.”

Facebook states that the tech also improves efficiency for 360-degree video, delivering a 10-20 percent reduction in bit rate at the same video quality. The system can stabilize footage in less than 22 milliseconds per frame on a standard machine, meaning videos are smoothed out faster than they play at normal speed. If you want an in-depth guide to how the tech works, you can read about it in Kopf’s dedicated blog post.

Facebook is also testing a hyperlapse algorithm as an extension of its main stabilization system. The tool will allow users to speed up lengthy 360 videos (such as a long bike ride). It does this by changing the timing of the video frame timestamps to balance out the camera velocity. Facebook hopes to make the hyperlapse option available to all users in future versions of the tech.
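The retiming idea behind the hyperlapse tool can be illustrated with a minimal sketch. Facebook has not published this code; the function below is a hypothetical illustration that assumes you already have per-frame timestamps and a per-interval estimate of camera motion. It reassigns timestamps so that cumulative camera motion advances linearly in output time, which is one straightforward way to "balance out the camera velocity" as the article describes.

```python
def retime_frames(timestamps, motion, speedup=4.0):
    """Assign new frame timestamps so apparent camera velocity is constant.

    timestamps: original frame times in seconds (hypothetical input)
    motion: per-interval camera displacement estimates,
            len(motion) == len(timestamps) - 1 (hypothetical input)
    speedup: overall playback acceleration factor
    """
    total_motion = sum(motion)
    # Compress the total duration by the requested speedup factor.
    duration = (timestamps[-1] - timestamps[0]) / speedup
    new_times = [0.0]
    cum = 0.0
    for m in motion:
        cum += m
        # Each frame lands where its share of cumulative motion dictates,
        # so fast camera segments get more output time and slow ones less.
        new_times.append(duration * cum / total_motion)
    return new_times
```

In this toy model, intervals where the camera moved quickly are stretched and slow intervals are compressed, so playback speed feels uniform even though the input capture was uneven.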

Saqib Shah
Former Digital Trends Contributor