
Last week Google announced new algorithms for stitching the panoramas used in Google Street View, designed to minimize some of the artifacts currently visible in Street View imagery. Google says the problems users can spot today result from things like mis-calibration of the cameras on its multi-camera rig (called a rosette), timing differences between adjacent cameras, and parallax.
Google explains that it uses two steps to produce new, smoother images free of the overlap seams visible in current imagery. The first is a new “Optical Flow” algorithm applied to pairs of overlapping images: the images are downsampled and then brought into alignment with each other. The second step is “Global Optimization,” in which Google builds a “spline-based flow field” so that only the portions of overlapping images that need to be brought into alignment are warped, without disturbing the rest of each image. Google notes this is necessary because the overlapping portions of adjacent images are frequently only a small fraction of the larger images.
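The coarse-to-fine flavor of the first step (downsample, align, then refine) can be illustrated with a toy sketch. This is not Google's implementation; it stands in a simple sum-of-squared-differences search for their optical flow, and all function names and parameters here are hypothetical.

```python
import numpy as np

def downsample(img, factor=2):
    """Average-pool a grayscale image by an integer factor (one coarse pyramid level)."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def best_shift(a, b, max_shift):
    """Horizontal shift of b that best aligns it with a, by sum of squared differences."""
    scores = {}
    for s in range(-max_shift, max_shift + 1):
        scores[s] = np.mean((a - np.roll(b, s, axis=1)) ** 2)
    return min(scores, key=scores.get)

def coarse_to_fine_shift(a, b, factor=2, max_shift=4):
    """Estimate the alignment at the downsampled level, then refine at full resolution."""
    coarse = best_shift(downsample(a, factor), downsample(b, factor), max_shift) * factor
    fine = best_shift(a, np.roll(b, coarse, axis=1), 2)  # small refinement window
    return coarse + fine
```

Searching at the coarse level keeps the search window small while still recovering large offsets, which is the point of downsampling before alignment.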

Google notes that its work is similar to previous research in the field, citing the solutions developed by Shum and Szeliski to “deghost” panoramas. However, Google says it has implemented original solutions of its own, such as the use of “dense, smooth correspondences” and a nonlinear optimization. Google hopes the result is a smoother, better-looking panorama that does not introduce new visual artifacts of its own.
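The idea behind a smooth, localized flow field can be sketched in one dimension: sparse correction offsets at a few control points are interpolated into a per-column warp that is nonzero only where the overlap needs fixing. This toy uses piecewise-linear interpolation (`np.interp`) standing in for the spline Google describes; the column positions and offsets are made up for illustration.

```python
import numpy as np

def flow_field(width, control_x, control_dx):
    """Interpolate sparse per-column offsets into a smooth flow field across an image.

    Linear interpolation is an assumption here; Google describes a spline-based
    flow field, which would interpolate the same control points more smoothly.
    """
    return np.interp(np.arange(width), control_x, control_dx)

# Correct only the overlap region (columns ~80-100); elsewhere the flow stays zero,
# so the rest of the image is left untouched.
field = flow_field(120, control_x=[0, 80, 90, 100, 119],
                        control_dx=[0.0, 0.0, 3.0, 0.0, 0.0])
```

Because the control offsets are zero outside the overlap, the warp falls off to nothing away from the seam, which matches the stated goal of fixing misalignment without disturbing the rest of the image.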

Google notes that the new algorithms and related code have been added to the Street View pipeline. Not only is this technology being used for newly captured Street View images, Google is also running the routine over existing imagery to restitch older panoramas.

source: Google