Despite all the angst Google Glass has induced over the photo and video capabilities that dominate early adopter use cases, the device's real potential to change how people use technology will likely emerge as other kinds of apps start to roll out. Augmented reality is one such use with a lot of room for future development. The first steps toward that goal were recently released by developers Brandyn White and Andrew Miller as part of OpenGlass, a project building an open-source library for Google Glass. Their new demonstration shows how augmented reality could be used to surface more information about the real-world environments all around us.
White and Miller's demonstration did not take full advantage of the open-source library they are working on. Instead, to show that this technology can be deployed today, they stuck with Google's Mirror API, despite its limitations, and routed the captured images through the Picarus visual analysis service. The main downside of this approach is that the annotated information is not delivered in real time.
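The flow described above can be sketched roughly as follows. This is a simplified illustration, not OpenGlass's actual code: `annotate_image` stands in for the whole Picarus round trip, and the annotation text is a made-up example. The timeline endpoint and the `text`/`notification` fields, however, are part of the real Mirror API, which delivers results to Glass as timeline cards.

```python
import json

# Real Mirror API endpoint for inserting timeline items (cards) onto a Glass device.
MIRROR_TIMELINE_ENDPOINT = "https://www.googleapis.com/mirror/v1/timeline"

def annotate_image(image_bytes):
    """Stand-in for the annotation round trip (e.g., through Picarus).

    In the demonstrated pipeline this step is asynchronous, which is why
    results arrive minutes later rather than in real time.
    """
    # Hypothetical annotation result for illustration only.
    return "Washington Monument: approx. 555 ft tall"

def build_timeline_item(annotation):
    """Package an annotation as a Mirror API timeline item (a JSON resource)."""
    return {
        "text": annotation,
        "notification": {"level": "DEFAULT"},  # chime the device when the card arrives
    }

# Capture (stubbed) -> annotate -> build the card that would be POSTed
# with an authorized request to MIRROR_TIMELINE_ENDPOINT.
item = build_timeline_item(annotate_image(b"...image bytes..."))
payload = json.dumps(item)
```

The key design point is visible even in this sketch: because the annotation happens server-side and comes back as a pushed timeline card, latency is bounded by the round trip rather than by on-device processing.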
Looking forward, the team is working on implementing these capabilities natively with the Glass SDK. That would let developers take advantage of all of the device's sensors, including any added to future hardware, and using those sensors would enable real-time tracking of the user's field of vision. Building on the Glass SDK also means the system could generate its results on the fly. For instance, one example in the video is an attempt to determine the height of the Washington Monument. Rather than waiting several minutes for annotators to return an answer, users would see the information overlaid on their field of vision in a matter of seconds.
You can check out the demo video below. The examples are admittedly clunky and simplistic, but they represent a first step toward using Google Glass for augmented reality.