Peter Falk ACS shoots week of tests at a new LED virtual studio in Melbourne called ‘Dreamscreen’ – by Peter Falk ACS

Since Disney’s The Mandalorian burst onto screens with great fanfare, filmmakers around the world have been familiarising themselves with the notion of virtual studios and accustoming themselves to the new terminology that has followed. I had the opportunity to experience it first hand late last year, when a company called Dreamscreen, led by director Clayton Jacobson, was formed to create a virtual studio in Melbourne.
A five-day shoot was planned to produce a demo reel for the company. The LED Volume to be built comprised a 17.5m wide by 4.5m high wall with a five-degree curve, made up of 306 LED panels, with a matching LED ceiling and two 4m by 3m floating LED walls. There is a head-spinning amount of technical and promotional material about virtual studios or ‘Volumes’ on the internet, but I can assure readers that the approach from a cinematographer's point-of-view is a familiar one.
The Dreamscreen project was to be a short reel comprising various scenarios filmed with virtual environments. The first step was to find those environments. In other words, a location survey. It's just that in this case it was a virtual one. The virtual environments could be computer generated, could be footage filmed on a real location, or could even be still photographs.
For the Dreamscreen project, we primarily used computer generated environments initially designed for computer game creators, available for purchase through the Unreal Engine Marketplace. These environments are purchased as kits that can be configured to the user’s design. A kit might include buildings, landscapes, trees, vehicles, props, and so on.
What is interesting from a cinematographer's perspective is that not only are the components configurable in multiple ways, in most cases so are the lighting and the light sources. For example, the sun's position could be moved across a sunset skyscraper view at my whim to create whatever mood or effect I wanted, with the shadows and the exposure changing as the sun was repositioned around the sky. I don't have the experience with Unreal Engine to do that myself, but the director and I talked through the image while a third person made the adjustments as we discussed them.

From there the environments were sent to one of Dreamscreen’s partners, visual effects house Method Studios, where the talented artists enhanced them to photo-real quality suitable for presentation on high resolution screens.
The fact that the cinematographer can contribute to the choice and the look of the virtual environments is one of the key benefits of an LED Volume over green or blue screen. When the VFX work is done in post, the cinematographer is often no longer closely involved and the resulting CGI content might not be consistent in style with the rest of the production. Here, the cinematographer can offer input as part of the pre-production process and maintain control of the look.
Once in the virtual studio and preparing to film, if any of the set-up tasks seemed new and a little daunting, I didn't need to worry: the company's team was made up of experts in their fields, and none of the pre-production fell to me alone.
An early task was to get the colour temperature of the LED panels as close to 5600K as possible, so that there was a common base for any supplementary lighting. I did this in a very low-tech way: having the LED panels emit a white light, taking readings with the camera's in-built white balance tool, and then adjusting the colour of the ‘white’ LEDs until they registered as 5600K.
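For those curious how that trial-and-error balancing might look if written down as a loop, here is a rough Python sketch; the read_camera_white_balance and set_panel_white helpers are hypothetical stand-ins for the camera's white balance read-out and the LED processor's colour controls, and the numbers are illustrative rather than anything we used on the day.

```python
# A rough sketch of the balancing loop described above. The two helper
# functions are hypothetical stand-ins: one returns the colour temperature the
# camera's white balance tool reads, the other sets the panels' white point.

TARGET_K = 5600      # common base for supplementary lighting
TOLERANCE_K = 50     # assumed acceptable error in kelvin

def balance_panels(read_camera_white_balance, set_panel_white, start_k=6500):
    """Nudge the panel white point until the camera reads roughly 5600K."""
    panel_k = start_k
    for _ in range(20):                      # cap the number of iterations
        set_panel_white(panel_k)             # panels emit plain white light
        error = read_camera_white_balance() - TARGET_K
        if abs(error) <= TOLERANCE_K:
            return panel_k                   # close enough to 5600K
        panel_k -= error                     # step the panel white the other way
    return panel_k
```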
When filming in a large LED Volume, the entire virtual environment is not usually shown at full resolution, as that requires massive amounts of computer processing power. Instead, only what the camera sees in the frame is at full resolution and with correct parallax. The term for this area on the LED wall is the ‘frustum’. The frustum is just slightly larger than the lens’ field-of-view, to allow for any minute lag in the movement of the virtual environment in relation to the camera movement.
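As a rough illustration of the geometry, the sketch below estimates how wide the frustum needs to be on the wall under a simple pinhole-lens model; the sensor width, focal length, camera-to-wall distance and ten per cent overscan margin are illustrative figures, not Dreamscreen's settings.

```python
import math

def frustum_width_on_wall(sensor_width_mm, focal_length_mm,
                          camera_to_wall_m, overscan=0.10):
    """Width of wall the frustum must cover: the lens field-of-view plus padding."""
    h_fov = 2 * math.atan(sensor_width_mm / (2 * focal_length_mm))   # horizontal FOV
    fov_width_m = 2 * camera_to_wall_m * math.tan(h_fov / 2)         # wall covered by the frame
    return fov_width_m * (1 + overscan)   # pad for any lag between tracking and render

# e.g. a 36mm-wide sensor on a 35mm lens, six metres from the wall:
print(round(frustum_width_on_wall(36, 35, 6.0), 2), "m")   # ≈ 6.79 m
```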

To pair the frustum with the field-of-view, the camera was tracked by twenty-four tracking cameras placed at points above the set. These tracking cameras sent my camera's position on the set back to the software controlling the frustum. To accomplish this, a rig dotted with reflective spheres was attached to the camera above the lens, and some additional reflective spheres were attached to the camera body. These did not impede my operating or the rigging of the camera.
Also sent back to the computer was the lens focus information. To determine the frustum and the nature of the image within it, the software needed to be told the focal length, the iris and the focus point of the lens. Traditional lens encoders cannot be read by the Unreal Engine software, so we used a Glassmark Lens Encoder, designed specifically for this purpose. If a background object in the virtual environment was 100ft away from the foreground subject in the real world, and the foreground subject that the camera was focused on was 20ft away from the LED wall, then the camera lens would capture the focus drop-off for the first 20ft and the computer software would create the focus drop-off effect for the remaining 80ft. The data sent to the computer from the encoder is just a number, not an actual focus measurement, so each lens had to be ‘mapped’ so that the computer knew what number related to what distance for that lens.
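To make the ‘mapping’ idea concrete, here is a small sketch of how a per-lens table might turn raw encoder counts into focus distances; the counts and distances are invented for illustration, not taken from the Glassmark encoder or any real lens.

```python
# Hypothetical per-lens calibration marks: (raw encoder count, focus distance in feet).
calibration = [
    (120, 2), (400, 5), (900, 10), (1600, 20), (2600, 50), (4000, 1000),
]

def encoder_to_distance(count):
    """Linearly interpolate focus distance between the nearest calibration marks."""
    for (c0, d0), (c1, d1) in zip(calibration, calibration[1:]):
        if c0 <= count <= c1:
            t = (count - c0) / (c1 - c0)
            return d0 + t * (d1 - d0)
    raise ValueError("encoder count outside the mapped range")

# In the example above, the lens handles the first 20ft of focus drop-off
# (subject to wall) and the software fakes the remaining 80ft of virtual depth.
print(round(encoder_to_distance(1200), 1), "ft")   # ≈ 14.3 ft
```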
The total resolution of the curved wall was 5712 x 1512 pixels. Framing for 2.39:1, the greatest number of pixels that could possibly be in frame at one time was 3614 x 1512. I worked on the theory that filming with a camera of greater resolution than that would avoid moiré on any shot that showed the whole height of the curved wall. Our principal camera was the Sony Venice at 6K, paired with Cooke S7/i Full Frame prime lenses, generously provided by VA Hire. We also filmed with a Phantom VEO 4K paired with Cooke Mini S4 primes on a Bolt motion control rig, and with a Sony F55 at 4K paired with Sony CineAlta primes. The camera and all the computer systems were genlocked via an external sync generator.
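The arithmetic behind that reasoning is simple enough to check; the short sketch below just restates the numbers from the paragraph above.

```python
# The wall is 5712 x 1512 pixels; framing for 2.39:1, the widest slice of the
# full-height wall that can be in frame at one time is 1512 x 2.39 pixels across.
WALL_W, WALL_H = 5712, 1512
ASPECT = 2.39

max_in_frame_width = round(WALL_H * ASPECT)
print(max_in_frame_width, "x", WALL_H)   # 3614 x 1512

# A sensor roughly 6K photosites wide therefore oversamples the wall comfortably,
# which was the logic for shooting at a higher resolution than the LEDs.
```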
In lighting the set, I approached my job as I would in any filming situation; I assessed the light that was there, some of which I had chosen in pre-production as part of the virtual environment choices, and then chose what I would use of that and what I would augment or cut. The difference was that sometimes augmenting or cutting was with virtual tools and other times it was with traditional filmmaking tools.
With most of the scenarios, the main source of light was the virtual environment. In some instances, it was the only source. When I added supplementary lighting, it was to create a harder light, add some gentle fill, or mimic some aspect of the virtual environment. It was also a joy to watch the reflective surfaces of the real subjects interact with the virtual environment. No more green spill!

Using the ambient light from the virtual environments was appropriate for the Dreamscreen reel, especially since I had been involved in choosing them, but there would be times when a cinematographer might want to override the lighting in the virtual environments. Just because the virtual environment has a green hue doesn't mean that your foreground subject has to be green.
Similarly, whilst the virtual environment creates ambient light for the Volume, one might still choose to sculpt the subject with film lighting or even to just add extra fill, negative fill or an eyelight. And if the virtual environment is dark, it might not provide any ambient light for the Volume at all. In other words, your approach to lighting the subject would be similar to what you would do on a real location. Accordingly, access for lamps around the Volume is a consideration.
Any supplementary lighting had to be flagged off the LED walls because spill would lighten the black levels of the LED panels. We also found that we rarely used the ceiling LED panels closest to the walls because they too affected the black levels of the LED walls.
Many of the demo reels of virtual studios are similar: a person filmed in mid-shot in front of an out-of-focus background. Clayton Jacobson was resolute about proving that such restrictions wouldn't be imposed on a director in a real filming situation. For the Dreamscreen reel, foreground sets were built that blended into the virtual environment and allowed head-to-toe filming of the actors. These sets were built on large rostrums with wheels, so that we could change from one scenario to another quickly. We filmed in four different environments per day.
A fascinating and new phenomenon for me was the use of real-time human avatars. The movements of any CGI human characters in the virtual environments were controlled in real time via performers wearing motion capture suits just off set. The team from Tracklab made this work flawlessly.
The obvious attractions of virtual studios include lighting and reflections that interact live with the subject and the props, the ability to travel to far-flung or extreme locations without leaving the studio, to film in a wide range of locations in one day, for the cast to see what they are interacting with, and for the cinematographer to regain a degree of control over the look of visual effects.
Filming in a virtual studio is an exciting proposition for cinematographers and filmmakers in general and it will get more enticing as the technology adapts to the growing demand.
Peter Falk ACS has had a long career working as a cinematographer, with credits ranging across some of Australia’s most popular drama productions and award-winning music videos.