There is no doubt that the explosive growth of smartphones and social media over the last few years has fueled people’s desire to take and share photographs. But just as grandpa’s slide show makes one grit their teeth in anticipation of the ensuing boredom, the flood of photos in our feeds has the same potential. One tool gaining popularity as a way to keep things interesting is “style transfer”: applying the style of a painting or artist, say Monet or Van Gogh, to a photograph in order to give it a new, unique look. A new report from Google indicates the company has been applying deep neural networks to the concept to demonstrate how computers can “learn,” and in the process may have positioned itself to take on Facebook in the commercial realm.
The ability to apply a style to a photograph is not a new technology or concept; it has been around for about 15 years. Improvements have been made during that time, but two limitations have persisted. First, the tools could apply only a single style at a time, and second, the process still took a considerable amount of time and often required uploading an image to a service where the style was actually applied.
Google started looking into how they could apply multiple styles to a single image. This involved training a machine learning system to capture and understand the distinctive qualities of several sample images from individual artists, as well as across genres, all within a single style transfer network.
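To get a feel for how one network can represent many styles, here is a minimal sketch of the general idea: a shared network body produces feature maps, and each style contributes only a small set of per-channel scale and shift parameters used to re-normalize those features. All names, shapes, and numbers below are hypothetical illustrations, not Google’s actual code.

```python
import numpy as np

# Hypothetical setup: one shared network, many tiny per-style parameter sets.
NUM_STYLES = 16   # the article mentions the tool supports up to 16 styles
CHANNELS = 64     # feature channels in one layer of the shared network

rng = np.random.default_rng(0)
# Each style owns only a scale and a shift per channel -- cheap to store,
# so adding a new style barely grows the model.
style_scale = rng.normal(1.0, 0.1, size=(NUM_STYLES, CHANNELS))
style_shift = rng.normal(0.0, 0.1, size=(NUM_STYLES, CHANNELS))

def stylize_features(features, style_id):
    """Normalize a feature map, then re-scale/shift it with one style's params."""
    mean = features.mean(axis=(0, 1), keepdims=True)
    std = features.std(axis=(0, 1), keepdims=True) + 1e-5
    normalized = (features - mean) / std
    return normalized * style_scale[style_id] + style_shift[style_id]

features = rng.normal(size=(32, 32, CHANNELS))  # an H x W x C feature map
out = stylize_features(features, style_id=3)
print(out.shape)  # -> (32, 32, 64)
```

The point of this design is that switching styles means swapping a handful of numbers, not loading a whole new network, which is what makes a single multi-style network practical.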
In addition to the extensive style training, Google also designed the system to be light on resources. This means styles can effectively be applied in real time, and even to video rather than just still images.
The end result is a tool that could let users create their own unique looks by combining existing styles: Google’s system can mix up to 16 different styles at varying strengths.
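One natural way to mix styles at varying strengths is to take a weighted average of each style’s parameters before applying them. The sketch below illustrates that idea under the same hypothetical per-style scale/shift scheme as above; it is an illustration of the concept, not Google’s implementation.

```python
import numpy as np

# Hypothetical per-style parameters, as in the earlier sketch.
NUM_STYLES, CHANNELS = 16, 64
rng = np.random.default_rng(1)
style_scale = rng.normal(1.0, 0.1, size=(NUM_STYLES, CHANNELS))
style_shift = rng.normal(0.0, 0.1, size=(NUM_STYLES, CHANNELS))

def blend_styles(weights):
    """Return one blended (scale, shift) pair from per-style weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize strengths so they sum to 1
    return w @ style_scale, w @ style_shift

# Example: 70% of style 0, 30% of style 5, all other styles off.
weights = np.zeros(NUM_STYLES)
weights[0], weights[5] = 0.7, 0.3
scale, shift = blend_styles(weights)
print(scale.shape)  # -> (64,)
```

Because the blend happens in the tiny parameter space rather than on the images themselves, sliding the strength of each style up or down is cheap enough to do interactively.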
Although Google says the primary purpose of the tool was to study the machine learning capabilities of deep neural networks, there is clearly a potential commercial application. This past summer the Prisma app made applying artistic styles to images wildly popular, and Facebook has been moving into the same territory. Google will now be in a position to compete against Facebook for users in this market.
Below are a couple of videos you may want to check out. The first is a demo of the tool, apparently called Picabo, showing the end result of Google’s research into machine-learning-powered style transfer. The second is a video from Nat & Lo in which they explain what style transfer is and how it uses deep neural networks to produce its results.