The new Google Pixel 3 and Pixel 3 XL smartphones take low-light images, high-resolution photos, and well-timed shots, but these major photo features aren't realized solely by the cameras packed inside. Instead, Google is tackling tasks typically left to larger cameras with computing power, specifically machine learning, rather than with lenses and high-resolution sensors.
Like the Pixel 2, the Pixel 3 integrates a special chip designed just for photos, the Pixel Visual Core, along with a dual-pixel sensor that enables dual-lens effects with a single lens. And like the original Pixel phone, the Pixel 3 shoots and merges multiple images without a delay using HDR+. As with the first two generations of Google smartphones, Google isn't done leveraging artificial intelligence and computational photography to take better photos.
Smartphone cameras have either a slight zoom using two lenses or digital zoom, and all digital zooms produce poor results by cropping the photo. You just can't fit a big zoom lens inside a small smartphone body.
Super Res Zoom revamps an existing idea and reworks the concept to solve a new problem: that crappy digital zoom. Super Res Zoom takes a burst of photos. Small movements in your hands mean each photo is taken from a slightly different position. By stitching those slightly offset photos together, the camera can reconstruct detail that a single cropped frame would miss.
Perhaps what’s even more intriguing is that the feature doesn’t appear to require a tripod, since it actually needs those small movements in your hands. Panasonic, Olympus, and Pentax cameras have similar modes using pixel shift, but they are designed to create a higher resolution final file, not as an artificial zoom, and tripods are recommended.
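The burst-and-merge idea behind Super Res Zoom can be illustrated with a toy sketch. To be clear, this is not Google's pipeline (which involves robust alignment and edge-aware merging); it's a minimal illustration assuming the sub-pixel shifts between frames are already known:

```python
import numpy as np

def super_res_merge(frames, shifts, scale=2):
    """Toy multi-frame super-resolution: place each low-res frame's
    pixels onto a finer grid according to its sub-pixel shift, then
    average overlapping samples. Illustration only; the shifts are
    assumed known, whereas a real pipeline must estimate them."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale), dtype=np.float64)
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Map each low-res pixel onto the high-res grid, offset by
        # this frame's (rounded) sub-pixel shift.
        ys = (np.arange(h) * scale + int(round(dy * scale))) % (h * scale)
        xs = (np.arange(w) * scale + int(round(dx * scale))) % (w * scale)
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    cnt[cnt == 0] = 1  # leave unobserved grid cells at zero
    return acc / cnt
```

With four frames shifted by half a pixel in each direction, every cell of the doubled-resolution grid receives a real sample, which is exactly why hand shake helps rather than hurts here.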
Speaking of cringe-worthy photos, Google's Liza Ma says that the Pixel 3's new low-light mode, called Night Sight, is so good you'll never use the flash. Like Super Res Zoom, the feature is powered by machine learning. Night Sight doesn't rely on the usual hardware solutions for better low-light shots, such as a larger sensor or a brighter aperture; instead, machine learning re-colors the photo to create brighter, more vivid colors without firing the flash.
Google didn't go into much detail about how machine learning brightens the photos, but says the A.I. recolors the image for a brighter shot without the flash. We'll have to wait to see just how well that recoloring works; the feature isn't launching until next month via a software update.
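Since Google hasn't detailed Night Sight's learned recoloring, the most we can sketch honestly is the non-ML baseline it builds on: averaging a burst to suppress noise, then lifting the result with gain and a gamma-style curve. This is an assumption-laden illustration, not Google's method:

```python
import numpy as np

def brighten_burst(frames, gain=4.0):
    """Toy low-light sketch: average a burst of dark frames to cut
    random noise, then apply gain and a gamma-like curve to lift
    shadows. Night Sight's learned white balance / recoloring is
    not reproduced here."""
    avg = np.mean(np.stack(frames).astype(np.float64), axis=0)
    lifted = np.clip(avg * gain, 0.0, 255.0) / 255.0
    # Gamma-style tone curve brightens midtones more than highlights.
    return (np.power(lifted, 1 / 2.2) * 255.0).astype(np.uint8)
```

Averaging N frames reduces random sensor noise by roughly the square root of N, which is why burst capture is the common starting point for computational low-light modes.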
The Top Shot feature inside the Pixel 3 camera is designed to catch those well-timed shots automatically.
Top Shot takes a fast burst of photos. The A.I. then recommends the best frame, such as one where everyone's eyes are open and no one is mid-blink.
Google says the alternate shots are still also captured in high resolution, so you can pick a different frame if you disagree with the A.I.'s choice.
The idea of using A.I. to choose your best shots is nothing new — Adobe announced a beta tool for Lightroom to do just that a year ago. But what the Pixel 3 does differently is make that recommendation on the device, right as you shoot, rather than after the fact.
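Picking the best frame from a burst can be sketched without any machine learning at all. Google's Top Shot scores frames with a learned model (open eyes, smiles, and so on); the stand-in below uses a classic sharpness proxy instead, purely to show the select-by-score structure:

```python
import numpy as np

def sharpness(frame):
    """Variance of a discrete Laplacian — a common non-ML sharpness
    proxy. Top Shot's real scoring is a learned model, not this."""
    lap = (-4 * frame
           + np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
           + np.roll(frame, 1, 1) + np.roll(frame, -1, 1))
    return lap.var()

def top_shot_index(frames):
    """Return the index of the highest-scoring frame in a burst."""
    scores = [sharpness(f.astype(np.float64)) for f in frames]
    return int(np.argmax(scores))
```

Swapping the scoring function is the whole game: replace `sharpness` with a neural network's "good moment" score and the selection logic is unchanged.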
While the biggest new features are powered by A.I., the Pixel 3's camera itself gets a few updates, too.
The camera keeps a single lens at the back, yet manages to continue the impressive portrait mode from earlier models using dual-pixel technology instead of dual lenses. That portrait mode is getting a boost, Google says: the Pixel 3 lets you adjust the background blur, and even change the subject in focus, after the shot is taken.
The camera's dual-pixel autofocus can also now track subjects — a feature that's been around for some time on advanced cameras but is a nice addition to see on a smartphone.
Video can be shot at up to 4K at 30 fps, or 1080p at 120 fps.
Google may have made some claims that are no big deal for DSLR fans, like tracking autofocus, but pit the Pixel 3's camera against other smartphones and those A.I. features could give the Pixel 3 an edge.
Of course, we will be putting these features to the test when they become available, so stay tuned for our full reviews of both products.