When it was first announced, one of Google’s major selling points for the Pixel was how good the camera is. Early on, pretty much everyone agrees that it’s a fantastic camera and might legitimately be one of the best we’ve seen on a smartphone. But what exactly makes it stand out in a sea of competition and already-good cameras?
Google thinks that HDR+ is the deciding factor here. A combination of software and hardware lets the phone essentially take photos continuously while the camera app is open, then stitch them together to create a perfect image, rather than trying to capture a perfect image in a single shot.
Google’s computational photography team is headed by Marc Levoy, who also worked on 360-degree video for virtual reality projects and burst mode on Google Glass. He explains that Google’s improved HDR mode is only possible thanks to the Snapdragon 821, which has the bandwidth to capture all of that information without serious shutter lag. The camera starts snapping photos as soon as it’s opened, capturing images quickly while you’re lining up your shot. When you press the button, the Pixel notes when the picture was supposed to be taken, grabs the images it has already captured around that moment, and pieces them together into a significantly improved picture. Best of all, it does this without you ever being aware any of it is happening.
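The capture scheme described above is often called zero shutter lag. A minimal sketch of the idea, assuming a simple ring buffer (the class and buffer size here are hypothetical illustrations, not Google's actual implementation):

```python
from collections import deque

class ZslCamera:
    """Toy zero-shutter-lag camera: frames are buffered continuously,
    and pressing the shutter selects frames already captured."""

    def __init__(self, buffer_size=3):  # assumed burst length for illustration
        # deque with maxlen drops the oldest frame automatically
        self.ring = deque(maxlen=buffer_size)

    def on_frame(self, timestamp, frame):
        # Called for every frame the sensor produces while the app is open
        self.ring.append((timestamp, frame))

    def on_shutter(self, press_time):
        # Return buffered frames nearest the shutter press; a real pipeline
        # would align and merge these into one output image
        return sorted(self.ring, key=lambda tf: abs(tf[0] - press_time))

cam = ZslCamera(buffer_size=3)
for t in range(5):
    cam.on_frame(t, f"frame-{t}")
burst = cam.on_shutter(press_time=4)
print([f for _, f in burst])  # the three most recent frames, nearest first
```

The key point is that by the time you press the button, the frames you want already exist; the "shutter" is just a timestamp used to pick them out of the buffer.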
Levoy’s team also focused hard on making sure there wasn’t any ghosting or other artifacts in images, given the way the Pixel handles captured photos. This was partly achieved by underexposing each individual frame, then computationally merging them into an accurate shot after the fact. It has obviously paid off, since the Pixel does well in low light and keeps colors accurate.
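The underexpose-then-merge idea can be illustrated with a toy example (my own sketch, not Google's pipeline): each frame captures less light to protect highlights, and averaging the burst cancels out much of the per-frame sensor noise before exposure is boosted back up in software.

```python
# Toy model of one pixel across a burst of underexposed frames.
TRUE_SIGNAL = 100.0   # "true" scene brightness at this pixel (assumed value)
UNDEREXPOSE = 0.5     # each frame captures half the exposure

# Deterministic stand-in for random sensor noise, one value per frame
noise = [6.0, -4.0, 3.0, -5.0, 2.0, -1.0, 4.0, -3.0]
frames = [TRUE_SIGNAL * UNDEREXPOSE + n for n in noise]

# Boosting a single frame back up amplifies its noise...
single = frames[0] / UNDEREXPOSE

# ...while averaging the burst first largely cancels the noise
merged = sum(frames) / len(frames) / UNDEREXPOSE

print(f"one frame: {single:.1f}, merged: {merged:.1f}, true: {TRUE_SIGNAL}")
# one frame: 112.0, merged: 100.5, true: 100.0
```

Averaging N frames reduces random noise by roughly the square root of N, which is why a merged burst of short, dark exposures can come out cleaner than one long one.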
source: The Verge