Future-proof your content creation with machine learning

Machine Learning (ML) is impacting the VFX industry, among others, in a number of ways. More recently, one of the areas in which it has started to come to fruition is content creation.

The recent progress of deep learning techniques allows hours of manual, laborious content creation work to be done in minutes. This isn't just true for the VFX industry; it also applies to gaming, Virtual Reality (VR), advertising and even retail.

In this article we take a deep dive into some of the areas in VFX content creation in which ML is having the biggest impact, and how it’s transforming the way we make content—from image style transfer and 3D terrain modeling to awe-inspiring animation. 

Creating art with tech

As humans, we have perfected the art of creating unique visual content, whether through paintings and art, or through more technical mediums like film and VR. It seems only natural, then, that we have begun to experiment with ML techniques such as neural networks to create images.

One technique that is having a big impact is style transfer. The algorithm combines two images, one supplying the content and the other the style (usually a painting), and uses convolutional neural networks to generate a new output image. So if you've ever wanted to be part of a Van Gogh painting, now you can: simply apply your favorite style to your next selfie.
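To make this concrete, the technique commonly boils down to two loss terms: a content loss that compares CNN feature maps directly, and a style loss that compares Gram matrices (correlations between feature channels). The sketch below is a simplified illustration, assuming the feature maps have already been extracted from a convolutional network; the function names are ours, not from any particular library.

```python
import numpy as np

def gram_matrix(features):
    """Style representation: correlations between feature channels.

    features: array of shape (channels, height, width) from one CNN layer.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    # Normalized channel-by-channel correlation matrix, shape (c, c).
    return f @ f.T / (c * h * w)

def content_loss(gen_features, content_features):
    """Mean squared error between raw feature maps preserves content."""
    return np.mean((gen_features - content_features) ** 2)

def style_loss(gen_features, style_features):
    """Mean squared error between Gram matrices matches style."""
    return np.mean((gram_matrix(gen_features) - gram_matrix(style_features)) ** 2)
```

In a full implementation, the output image is optimized iteratively to lower both losses at once, which is why the process can take a while per image.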

This concept can produce some intriguing results, especially as the algorithm allows the relative weight of the style and content to be adjusted. Creators can also supply multiple style images, giving the final output more depth and layering; the results have been known to be mistaken for original artwork.
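The weighting described above is typically just a weighted sum of the individual loss terms. Here is a minimal sketch, assuming the per-image loss values have already been computed; the defaults and names are illustrative, not from any specific library.

```python
def total_loss(content_loss, style_losses, content_weight=1.0, style_weights=None):
    """Blend one content loss with one or more style losses.

    Raising content_weight preserves more of the original photo;
    raising a style weight pushes the output toward that painting.
    """
    if style_weights is None:
        # Default: weight every style image equally.
        style_weights = [1.0] * len(style_losses)
    return content_weight * content_loss + sum(
        w * s for w, s in zip(style_weights, style_losses))
```

For example, halving the content weight while doubling one style weight shifts the output noticeably toward that style image.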

While a lot of the advancements around style transfer relate to static images, progress has also been made in applying the algorithm to film. But this doesn't come without complications. When the algorithm was first tested on video, the results were less than successful and showed inconsistencies: the style would slip in and out on a frame-by-frame basis, present in one frame and gone the next.

However, thanks to innovations in machine learning, a research team in Germany managed to develop a method that removes the discontinuities seen when applying style transfer to video. The method penalizes deviations it detects between consecutive frames while following the motion in the video. This produces a temporally consistent result that preserves the appearance of the original input.
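The frame-to-frame penalty described above can be sketched very simply: warp the previous stylized frame forward along the motion (optical flow), then penalize how much the current stylized frame deviates from it, ignoring regions where the flow is unreliable. This is our simplified illustration of that idea, not the research team's actual implementation; the warping and mask are assumed to come from an optical flow step not shown here.

```python
import numpy as np

def temporal_loss(stylized_t, warped_prev, reliability_mask):
    """Penalize deviation from the motion-compensated previous frame.

    stylized_t:       current stylized frame, shape (H, W, 3)
    warped_prev:      previous stylized frame warped by optical flow
    reliability_mask: shape (H, W); 1 where the flow is trustworthy,
                      0 at occlusions and motion boundaries.
    """
    diff = (stylized_t - warped_prev) ** 2
    # Mask out unreliable pixels, then average over the frame.
    return np.sum(reliability_mask[..., None] * diff) / reliability_mask.size
```

Minimizing this term alongside the usual content and style losses is what keeps the style from flickering between frames.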

Whilst a step forward, this method does have its drawbacks: it is a long and tedious process, with style transfer taking several minutes per frame. While there may still be a way to go in developing the method, style transfer as a whole presents groundbreaking possibilities for the VFX industry, particularly the ability for machines to understand what an image is and the details that go into making it. This could save artists huge amounts of time when creating many different assets for a scene, like a street of houses, and further alleviate the time spent on repetitive work.
