Audra Coulombe is the Marketing Manager for The Molecule, a VFX, Motion Graphics, and VR company located in New York and Los Angeles.
The world of virtual reality is growing at an incredible speed. Everyone is working hard to make the best camera, the best viewing device, the best new story — and for many, it’s an exciting time. Fans of scripted storytelling will be especially interested to hear that production recently wrapped on VR’s first scripted drama series, Invisible. Created by 30 Ninjas and director Doug Liman (The Bourne Identity), the series tells the story of a powerful family that has the ability to become invisible. The Molecule had the pleasure of providing the titles and VFX for the series, supported by Jaunt, Samsung, Lexus, and Condé Nast.
The trailer for the series has been available since early October, but now you can watch the series in full, on any VR-capable device, on Samsung’s or Jaunt’s website. At around five minutes each, the initial episodes take about as long to watch in total as a standard 30-minute TV show.
At The Molecule, we’ve been developing our own methods for making VR compositing more efficient and intuitive. As studios all over the world are developing their own VR tools, we want to share what we’ve learned about creating precise VFX for this new frontier.
Before you get to post-production, it’s important to know what footage you’ll be working with. We shot on many different cameras for this series, including a few VR cameras from Jaunt, rigs with several GoPros attached, and a Sony A7S.
Cameras used in the production of Invisible
When choosing a camera, you should always think ahead to the post-production phase. The Jaunt VR camera and the GoPro rigs have different implications for stitching in post. The Jaunt cameras we used have the advantage of genlock (which helps to ensure synchronized timing between cameras) and exposure lock. Jaunt also provides its own cloud-based auto-stitcher. The camera is pretty large, however, and like most auto-stitchers, theirs is not quite perfect yet.
GoPro has the major advantage of being small and lightweight, and you don’t need very many of them to record 360-degree video. However, they use auto-exposure and auto white balance, have no genlock feature, and require you to manually stitch together footage from each camera in post.
For now we’re going to skip the specifics of footage stitching and get right to creating VFX for VR. (Quick disclaimer: because we composite in The Foundry’s Nuke at our studios, our tips are written with that software in mind. The idea behind the following trick, however, can be applied in other compositing software if you’re not familiar with Nuke.)
In our studio, we developed a set of steps that make compositing in VR space much easier for VFX artists. We created a node in Nuke that transforms the warped, lat-long image into something that a viewer would normally see in the VR headset. This makes a huge difference, because now the artist can work from a more traditional perspective.
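The Molecule’s node is proprietary, but the math behind this kind of conversion is standard projection geometry. Here is a minimal NumPy sketch of the core mapping: for each pixel of the headset-style (rectilinear) view, find the longitude and latitude to sample from the lat-long image. The function name, defaults, and the simplification of a camera with no yaw or pitch are our own assumptions for illustration, not The Molecule’s implementation.

```python
import numpy as np

def persp_to_latlong(u, v, fov_deg=90.0, width=1024):
    """Map a pixel offset (u, v), measured from the centre of a
    rectilinear (headset-style) view, to longitude/latitude in the
    lat-long frame. The virtual camera looks straight down +z;
    yaw/pitch rotations are omitted to keep the sketch short."""
    # Focal length in pixels for the chosen horizontal field of view.
    f = (width / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
    d = np.array([u, v, f], dtype=float)
    d /= np.linalg.norm(d)                 # unit view ray for this pixel
    lon = np.arctan2(d[0], d[2])           # left-right angle
    lat = np.arcsin(d[1])                  # up-down angle
    return lon, lat
```

Running this over every output pixel (and converting each lon/lat pair to pixel coordinates in the equirectangular frame) produces the undistorted view the artist works in.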
Working in the stretched-out lat-long image would make it difficult to create precise effects because of the distortion. Michael Clarke, the artist behind the development of this method, explains that because of this shift in perspective, “any visual effects artist can get in there and start working with it.”
Say, for instance, that you wanted to paint out the camera rig at the bottom of the following image:
If you switch the view to the headset perspective, you can isolate the rig and paint it out like you normally would.
Then, you could isolate that shape and convert it back to the lat-long view.
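The trip back is the same projection run in reverse: take each painted lat-long pixel’s direction and project it into the rectilinear view to find which patch pixel to read. Again, this is our own illustrative NumPy sketch of the standard math, not the studio’s actual node.

```python
import numpy as np

def latlong_to_persp(lon, lat, fov_deg=90.0, width=1024):
    """Project a longitude/latitude direction back into pixel offsets
    (u, v) of the rectilinear view (camera looking down +z, no yaw or
    pitch). Returns None when the direction is behind the camera and
    therefore not visible in this view."""
    f = (width / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
    # Unit direction vector for this lat-long sample.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    if z <= 0:
        return None                        # behind the virtual camera
    return f * x / z, f * y / z            # perspective divide
```

Only the region covered by the painted patch needs this reverse lookup, so the round trip adds little cost to the comp.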
And then… ta-da!
This method can be used for all kinds of common visual-effects tasks, including rotoscoping, painting, and tracking.
“It’s Just a Comp”
For many artists, working in this new perspective can feel intimidating and frustrating. It definitely doesn’t have to be that way, though. As Clarke reminds his artists, “It’s just a normal comp when you work in this view.”
Although we developed our tools in-house, you can develop your own method of converting lat-long images to a view that looks more familiar. It’ll look more like the kind of image you’re used to working with, and you can keep on making stellar effects in your regular workflow.
Do you have any questions or suggestions on creating VFX in VR space? Leave them in the comments!