Broadcast And The Metaverse - Part 1

From the earliest public radio transmissions in the 1920s to the 4K television of today, broadcasters have continually worked to deliver and improve the immersive experience. Television broadcasting has gone from black and white to color, then HD and 4K, with massive improvements in audio, each building on previous technologies to encourage viewer engagement and bring the audience closer to the event. The last hundred years of technological advances in television have taught broadcasters that viewers always want more.


This article was first published as part of Essential Guide: Broadcast And The Metaverse.

Now, a new generation of ultra-high-power parallel compute and storage resources is forming the infrastructure needed to create the Metaverse, which promises to deliver the ultimate immersive viewer experience. The Metaverse is rapidly gaining adoption by bringing into existence a 3D overlay of the Internet, creating a natural human interface that will provide an experience nearly indistinguishable from the real world.

The Metaverse end game may seem like a futuristic utopia; however, large parts of the enabling technology are in active use today. By adopting this technology, broadcasters will be empowered to develop the immersive experience their viewers continually demand, driving greater viewing figures and potential growth.

Using today's Metaverse technology will bring viewers closer together for sports, concerts, and public events, not only creating a greatly improved immersive experience but also empowering community engagement. This technology is significantly more than just enhanced VFX: ML, 3D animation, selective viewports, and avatars, to name but a few, have come together to allow broadcasters to massively improve their live programs and deliver the experience their viewers are demanding.

Virtual Worlds

One of the benefits of digitizing video tape into on-prem and cloud storage is that the content becomes much easier to process for AI applications. The data is parsed using convolutional neural networks that learn aspects of the image so its contents can be classified, such as trees, cars, and highways. This in turn provides the opportunity for image synthesis through AI technologies, including generative adversarial networks (GANs).
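As a simple illustration of this classification step, the sketch below uses a pretrained convolutional network to label frames pulled from a digitized archive. It is a minimal sketch assuming a Python environment with PyTorch and torchvision installed; the frame path and the way results are used are hypothetical, not part of any specific broadcast workflow.

```python
# Minimal sketch: classifying digitized archive frames with a pretrained CNN.
# Assumes PyTorch + torchvision are installed; the frame path is hypothetical.
import torch
from PIL import Image
from torchvision import models, transforms

# Load a pretrained classifier and its matching preprocessing pipeline.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

def classify_frame(path: str, top_k: int = 3):
    """Return the top-k class labels and confidences for one frame."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)          # shape: (1, 3, H, W)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    conf, idx = probs.topk(top_k)
    return [(labels[int(i)], float(c)) for i, c in zip(idx, conf)]

# Example: tag a frame from the archive so it can be indexed for later synthesis.
print(classify_frame("archive/frame_000123.png"))
```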

In traditional broadcast workflows using green screen or LED wall technology, the designer has had to build the background images using recorded video or digitally generated images. This is incredibly expensive and time consuming as either a crew has to go out and record the GVs, which will need to be edited, or a graphic designer has to painstakingly create a background sequence frame by frame.

Contrast this with an AI method using virtual world technology consisting of high-end GPUs, servers, and storage. Instead of creating a complete background, the graphic designer might create a viewport layered mask. For example, a scene with a car driving down the road might have one mask that outlines the road, another mask for the cars, another for the trees, and another for the buildings. The graphic designer would then parse the mask sequence through an AI image synthesizer that replaces the masks with the images it has created. This is far more than just replacing the masks with pictures of cars or trees, as the images it synthesizes are based on a library of vector mapped images that exist in 3D. Consequently, the images are represented as vector maps and only become visible at the point of image rendering, that is, when they are turned into a 2D image.
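To make the layered-mask idea concrete, the sketch below builds a single-frame mask in which each region (road, car, tree, building) is encoded as an integer class ID, the form that semantic image synthesis models (such as GAN-based synthesizers) typically expect as input. It is a minimal sketch assuming NumPy; the region coordinates and the `synthesize_frame` call are hypothetical stand-ins for the designer's drawn masks and whichever AI synthesizer is actually used.

```python
# Minimal sketch of a viewport layered mask for one frame.
# Each pixel carries an integer class ID rather than a colour, which is the
# usual input format for semantic image synthesis (e.g. GAN-based) models.
import numpy as np

HEIGHT, WIDTH = 1080, 1920
CLASSES = {"background": 0, "road": 1, "car": 2, "tree": 3, "building": 4}

mask = np.zeros((HEIGHT, WIDTH), dtype=np.uint8)

# Rough rectangular regions stand in for the designer's drawn mask layers.
mask[700:1080, :] = CLASSES["road"]              # road across the lower third
mask[760:900, 800:1200] = CLASSES["car"]         # a car on the road
mask[300:700, 0:300] = CLASSES["tree"]           # trees on the left
mask[100:700, 1400:1920] = CLASSES["building"]   # a building on the right

# Hypothetical synthesizer call: the AI model replaces each labelled region
# with imagery drawn from its learned library of 3D, vector mapped assets.
# frame = synthesize_frame(mask)   # would return a rendered 2D image
print("mask shape:", mask.shape, "classes present:", np.unique(mask))
```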

Due to the vector mapped nature of this technology, real-time viewport orientation can be applied. This allows the designer to move the visible rendered image anywhere within the scene. It is further enhanced because the vector representation includes depth, so the viewport can be moved into and out of the image as well as around it, all in real time. Not only does this provide incredible 2D rendered images for the home viewer, but it also has the potential to allow the viewer to enter the scene, especially when using headsets, thus greatly improving the immersive viewer experience.
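The sketch below illustrates the underlying idea of rendering a 2D view from vector (3D) scene data: a simple pinhole-camera projection in which moving the viewport position changes the image that is produced. It is a minimal sketch assuming NumPy; the scene points, camera positions, and focal length are hypothetical and stand in for a full real-time renderer.

```python
# Minimal sketch: projecting 3D vector scene points into a 2D viewport.
# Moving the camera (viewport) changes which pixels the same scene maps to,
# which is the essence of rendering the scene for any chosen viewpoint.
import numpy as np

def project_points(points_3d, camera_pos, focal_length=1000.0,
                   image_size=(1920, 1080)):
    """Project 3D points (N, 3) into 2D pixel coordinates for a camera
    looking down the +z axis from camera_pos (a deliberately simple setup)."""
    relative = points_3d - camera_pos              # scene relative to camera
    relative = relative[relative[:, 2] > 0]        # keep points ahead of camera
    cx, cy = image_size[0] / 2, image_size[1] / 2
    x = focal_length * relative[:, 0] / relative[:, 2] + cx
    y = focal_length * relative[:, 1] / relative[:, 2] + cy
    return np.stack([x, y], axis=1)

# A few hypothetical scene points (e.g. corners of a building, in metres).
scene = np.array([[0.0, 0.0, 20.0], [5.0, 0.0, 20.0], [5.0, 3.0, 25.0]])

# The same scene rendered from two different viewport positions.
print(project_points(scene, camera_pos=np.array([0.0, 1.0, 0.0])))
print(project_points(scene, camera_pos=np.array([2.0, 1.0, 5.0])))
```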

Figure 1 – Vector representation abstracts the image data from the display so that the scene is not dependent on the display (as is the case with raster images). The process of rendering the data will provide an image that can be displayed from the correct viewpoint and optimized for the target display.


Broadcasting Future Virtual Worlds

Although the technology to help broadcasters create virtualized LED walls may be available now, the future offers many more interesting opportunities when considering how the metaverse is predicted to progress, especially in improving the immersive experience.

The metaverse can be thought of as adding a 3D layer to the 2D internet, which in turn will allow viewers to enter the virtualized world that has been created. In broadcasting terms this could be a sports stadium, where viewers swap seats and move from their homes into the stadium. Viewers then cease to be passive observers and instead become immersed in the game itself, allowing them to explore and experience the event almost firsthand.

One of the key challenges for broadcasters is this: how do they take advantage of the future potential of the immersion that the metaverse promises?

In the same way that Google Maps has vehicles traversing roads around the world to provide street views, will broadcasters do the same with stadiums and venues? This would allow them to build a massive library of images that can be used to form the virtualized worlds. Viewers could even choose their own seat in the stadium, go “backstage” to see how the game is being planned and the team strategies being adopted, and even join their friends to increase their enjoyment.

Once the historical limitations of thinking in a single viewport linear-2D environment have been removed, the true potential of moving and exploring within an interactive 3D virtual world becomes clear. And with this comes massive revenue opportunities for broadcasters.
