Movie Studios Embrace The Cloud For Next-Generation Production Workflows

A consortium of the five largest motion picture studios in the U.S. is developing the next generation of production and post-production workflows, with the cloud at their core, to save time and money and to let the best and brightest production teams work from anywhere in the world while collaborating and sharing files as if they were in the same room. Indeed, by the end of 2030, entertainment productions will be made in very different ways, with today’s workflow steps dramatically rearranged or even inverted.

The think tank, Motion Picture Laboratories, Inc. (MovieLabs), is a nonprofit technology research lab jointly run by Paramount Pictures, Sony Pictures Entertainment, Universal Studios, Walt Disney Pictures and Television and Warner Bros. Entertainment. It is based in San Francisco, Calif., where members have been working together since 2005 to evaluate new technologies and help the industry develop next-generation content experiences for consumers, reduce costs, enhance security and improve workflows through advanced technologies.

2030 Vision Project

In the summer of 2019 the group released a whitepaper, as part of its 2030 Vision Project, entitled “The Evolution of Media Creation,” that lays out 10 fundamental principles (use cases) pointing to ways of making production workflows less technical, and more automated, so storytellers can bring their vision to the big screen while the speed and efficiency of the moviemaking process improves. The plan is to have MovieLabs members use a highly secure private storage repository within the AWS Cloud and offload mundane or repetitive production tasks to AI-enhanced tools, automation, bots and processes.
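
As a concrete, and purely hypothetical, illustration of the kind of repetitive task such automation could take off a production team’s plate, the Python sketch below polls a cloud bucket for newly arrived camera originals and transcodes lightweight editorial proxies with ffmpeg. The bucket name, prefixes and file conventions are invented for this example; the whitepaper does not prescribe any particular tooling.

```python
import subprocess
import boto3

s3 = boto3.client("s3")
BUCKET = "example-studio-ingest"  # hypothetical bucket name

def make_proxy(src: str, dst: str) -> None:
    """Transcode a camera original into a lightweight editorial proxy."""
    subprocess.run(
        ["ffmpeg", "-i", src,
         "-vf", "scale=1920:-2",           # downscale to HD width
         "-c:v", "libx264", "-crf", "23",  # modest-bitrate H.264
         dst],
        check=True,
    )

def proxy_new_footage(prefix: str = "ocf/") -> None:
    """One polling pass of a chore no human needs to babysit."""
    listing = s3.list_objects_v2(Bucket=BUCKET, Prefix=prefix)
    for obj in listing.get("Contents", []):
        key = obj["Key"]
        if not key.endswith(".mxf"):
            continue
        local = "/tmp/" + key.rsplit("/", 1)[-1]
        s3.download_file(BUCKET, key, local)
        proxy = local.replace(".mxf", "_proxy.mp4")
        make_proxy(local, proxy)
        # Publish proxies alongside, but separate from, the originals.
        s3.upload_file(
            proxy, BUCKET,
            key.replace(prefix, "proxies/", 1).replace(".mxf", "_proxy.mp4"),
        )
```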

The 56-page document starts by stating: “Creatives are constantly challenged to achieve more with less time and stretched budgets. Within our 2030 Vision, we have included ways to use emerging technologies to provide more ‘time’ to filmmakers to make the creative choices they want to make and to be able to iterate more times to reach their ambitions for every production.

“That does not mean every technology is applicable to every title, and filmmakers will independently decide what to take from this paper and how to apply it to their individual productions. HDR is a similar use case: a new technique and workflow that filmmakers can choose to use to create dynamic realism or deliberately dial back if they want a muted or flat look. Not every filmmaker will want to utilize every new technology we describe here, but they will have more options and tools from which to choose.”

Several panels at this year’s HPA Technical Retreat were dedicated to virtualized production in the cloud and the 2030 Project.

Of course, the use of the cloud for file storage and production team collaboration is not new, and the group hopes that many of these trends might be delivered earlier than 2030.

A Foundation For The Future

“By laying out this comprehensive view of the future, we hope to avoid the fragmented and piecemeal approach to the digital migration that largely resulted in a change of storage medium but no major improvements or efficiency gains in workflow,” the paper says. “And by approaching innovations like cloud-based workflows in a systematic and intentional way, the industry can enable changes that will improve the creative process for all participants.”

These new ways of working include photo-realistic, in-camera VFX images that are shot on a soundstage and immediately transferred to the cloud, directly from the camera, at a transfer rate of about 800 Mbps. The files are then stored on S3 storage arrays and made available to specific production team members via password-protected interfaces. Epic Games’ Unreal Engine is used for real-time processing on set, along with Arch Platform Technologies workstations running SaaS platform software that enables collaborative workflows for film & TV, virtual production, visual effects and post-production.
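
To make the camera-to-cloud step concrete, here is a minimal sketch in Python using AWS’s boto3 SDK: it uploads a camera original to an S3 bucket with server-side encryption and returns a time-limited presigned URL for a specific collaborator. The bucket, key scheme and expiry are hypothetical choices for illustration, and presigned URLs are just one of several ways to implement the access control described above.

```python
import boto3

# Hypothetical bucket and key scheme, for illustration only; real
# camera-to-cloud services use their own ingest endpoints and naming.
BUCKET = "example-production-dailies"
s3 = boto3.client("s3")

def ingest_camera_file(local_path: str, shoot_day: str, filename: str) -> str:
    """Upload a camera original to S3 and return a time-limited link."""
    key = f"ocf/{shoot_day}/{filename}"

    # Server-side encryption keeps the asset protected at rest.
    s3.upload_file(
        local_path, BUCKET, key,
        ExtraArgs={"ServerSideEncryption": "AES256"},
    )

    # A presigned URL grants one collaborator temporary, scoped access.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=3600,  # link expires after one hour
    )

# Example: push a take to the cloud and hand editorial a link.
url = ingest_camera_file("A001C003.mxf", "day-014", "A001C003.mxf")
print("Share with editorial:", url)
```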

In fact, all assets, from the first script to every captured file, every computer-generated asset and all associated metadata, are stored in the cloud immediately upon creation.

Unified linking, one of the MovieLabs whitepaper’s 10 basic principles, maintains the relationship between assets. For example, a camera frame file can be linked to the metadata description of the frame, and the metadata description would be linked back to the camera frame file.
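
One minimal way to picture unified linking in practice: give every asset a stable identifier and record relationships in both directions, so the frame resolves to its metadata and the metadata resolves back to the frame, wherever each file physically lives. The Python sketch below is a hypothetical structure for illustration, not the MovieLabs specification; it derives identifiers from content hashes so links stay valid even if files move.

```python
import hashlib
from dataclasses import dataclass, field

def asset_id(payload: bytes) -> str:
    """Derive a stable identifier from the asset's own content."""
    return hashlib.sha256(payload).hexdigest()[:16]

@dataclass
class Asset:
    uid: str
    kind: str                                     # e.g. "camera_frame", "metadata"
    links: set[str] = field(default_factory=set)  # uids of related assets

class AssetGraph:
    """A registry of assets plus the links that relate them."""
    def __init__(self) -> None:
        self.assets: dict[str, Asset] = {}

    def add(self, payload: bytes, kind: str) -> Asset:
        asset = Asset(asset_id(payload), kind)
        self.assets[asset.uid] = asset
        return asset

    def link(self, a: Asset, b: Asset) -> None:
        # Links are symmetric: frame -> metadata and metadata -> frame.
        a.links.add(b.uid)
        b.links.add(a.uid)

graph = AssetGraph()
frame = graph.add(b"<raw sensor data>", "camera_frame")
meta = graph.add(b'{"lens": "50mm", "t_stop": 2.0}', "metadata")
graph.link(frame, meta)

# Either asset now resolves to the other, regardless of where the
# underlying files are physically stored.
assert meta.uid in frame.links and frame.uid in meta.links
```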

“This new way of working is saving time, money and allowing teams to be assembled from anywhere in the world,” said Erik Weaver, who runs the Adaptive and Virtual Production department at the Entertainment Technology Center at USC. He also helped peer review the original MovieLabs document prior to release and has been a strong proponent of using the cloud. “Then editors, colorists and others can use those files freely. Also, in the era of COVID it protects people while allowing the work that needs to be done to continue.”

Weaver has been testing various virtual production workflows in live production environments and sound stages in Los Angeles. Among other work, he is creating artificial worlds, or scenic backgrounds (much like 3D matte paintings), through a virtual art department (VAD, which bridges the gap between production design and VFX) in Unreal Engine, then playing them back on huge LED screens on a sound stage, where animated or live-action characters are inserted into these computer-generated environments. It’s all part of an overarching strategy that employs key technologies and workflows to enable artists and designers to quickly implement 3D visual content.

High-Quality VFX At Less Cost

This same technique is being used to produce “The Mandalorian” TV series for the Disney Plus OTT service. The production team designed and built a large LED video wall, created by ILM especially for the show, consisting of 1,326 LED screens with a 2.84 mm pixel pitch. The wall, known as “The Volume” and driven by ILM’s StageCraft technology, is 20 feet high, 75 feet across and wraps 270 degrees around the stage. This is an evolved version of earlier methods of displaying live images and graphics behind the cast using traditional virtual set technology.
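
A back-of-the-envelope calculation using only the figures above (and reading the 75 feet as the diameter of the curved wall) gives a sense of the display’s scale; treat the output as a rough estimate rather than an official spec.

```python
import math

# Figures quoted above; we read the 75 ft as the diameter of the arc.
height_ft, diameter_ft, arc_deg, pitch_mm = 20, 75, 270, 2.84
FT_TO_M = 0.3048

arc_m = math.pi * diameter_ft * FT_TO_M * (arc_deg / 360)  # ~53.9 m of screen
height_m = height_ft * FT_TO_M                             # ~6.1 m tall

px_wide = arc_m / (pitch_mm / 1000)     # ~19,000 pixels around the arc
px_tall = height_m / (pitch_mm / 1000)  # ~2,150 pixels top to bottom

print(f"~{px_wide:,.0f} x {px_tall:,.0f} px, "
      f"~{px_wide * px_tall / 1e6:.0f} megapixels in total")
# Roughly 41 megapixels, on the order of five 4K UHD frames.
```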

Some say it’s the largest virtual reality filmmaking stage ever made, and it’s redefining the film industry. Production and post-production costs are significantly reduced, and actors can immerse themselves more fully in their characters, helping them better execute their roles. Yet another advantage of using LED walls is that sets are built ahead of time, so during the shoot only minor adjustments and polishing are required. This greatly reduces the time spent shooting scenes.

The Vision Is Well Within Reach

Back to the MovieLabs 2030 Vision Project: the goal of the whitepaper is to describe a vision for the future of media creation that enables new content experiences limited only by the imagination. The ultimate improvements may result from a variety of means, including industry best practices and standards and/or innovation at individual companies.

To streamline workflow documentation, MovieLabs has created a visual language: a set of formal shapes for Tasks, Participants, Assets and Infrastructure, plus shapes representing generic “things” that are sometimes required in documents but fall outside those formal categories.

They envision principal photography where directors are freed from the constraints of limited sets and a select number of cameras, working instead on a stage filled with hundreds of small cameras and sensors capturing entire light fields, scenes and performances simultaneously from every angle. These “volumetric capture stages” have emerged in the past few years for immersive media projects and experimentation, but have not yet become commonplace in studio productions. That’s slowly changing.

With the eight-year horizon in sight, the group is looking at a variety of innovations in defining new types of ontologies (explicit definitions in machine-to-machine form that can be used to organize and connect data from multiple sources, such as lenses and cameras) and taxonomies (the practice and science of categorization or classification). And the old adage “we can fix it in post” could soon change to “we can fix it before we shoot.”
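
As a toy example of what such an ontology might provide (a hypothetical structure, not any published MovieLabs schema), explicit, typed definitions let tools from different vendors exchange lens and camera records without ambiguity:

```python
from dataclasses import dataclass
from enum import Enum

# A controlled vocabulary: machines agree on units before exchanging data.
class FocalUnit(Enum):
    MILLIMETERS = "mm"

@dataclass(frozen=True)
class Lens:
    manufacturer: str
    model: str
    focal_length: float
    focal_unit: FocalUnit
    t_stop: float

@dataclass(frozen=True)
class CameraSetup:
    camera_id: str          # resolves to the camera's own asset record
    lens: Lens
    iso: int
    shutter_angle_deg: float

# A record one vendor's tool writes and another's can read unambiguously.
setup = CameraSetup(
    camera_id="cam-A",
    lens=Lens("ExampleCo", "Prime 50", 50.0, FocalUnit.MILLIMETERS, 2.0),
    iso=800,
    shutter_angle_deg=180.0,
)
```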

MovieLabs is now hard at work implementing workflow improvements, but can’t say when that work will be complete. The year 2030 is the symbolic goal, but everyone involved wants to deploy these efficiencies as soon as possible. However, each MovieLabs member company will follow its own timeline in getting to a complete cloud-based production infrastructure. In addition, each company involved will make its own completely independent business decisions about specific technology deployments, and each will “unilaterally determine how it plans to innovate and, most importantly, with whom it plans to do business.”

“Every studio is using some aspects of the cloud in different components of their workflows today, but the 2030 Vision Project is about coming up with the key components that will empower editorial decisions when making a movie. This is a road map for the future. Except these are not hard standards because in the cloud world everything is software and constantly in a state of flux. We’re talking about an API that is soft, dynamic and ever changing.”

Most agree that it won’t take until 2030 for the studios to realize their vision of virtualized production. MovieLabs released the paper in 2019 to show what could theoretically be accomplished in a decade, and the migration is clearly taking less time than that. Many, like Weaver, think the industry will achieve many of these goals within three years.
