The Sponsor's Perspective: Unchaining Time

What is real time? While that question doesn’t normally come up at the dinner table, asking it of a group of broadcast engineers can draw out all kinds of responses, from philosophical debates around global atomic clocks to technical dissertations on lines, frames, and permissible nanoseconds of processing delay.


This article was first published as part of Essential Guide: Delivering Timing For Live Cloud Productions.

One of the reasons there are so many opinions on the topic is that time is a human construct we use for sequencing events. Real time describes a human sense of time that seems immediate.

The perception of real time – what is happening in a specific moment – is heavily influenced by a person's environment at the moment they perceive it. Therefore, the definition of real time can vary from person to person.

Before we start getting all metaphysical, let's narrow the discussion. In live media production, when we talk about working in real time, we are really asking two separate questions.

  1. Is there a noticeable difference between when I perceive something happening and when I can act on it? This is relative latency. A system that feels live to the operator must respond within about 240 milliseconds from the time the operator sees the cue to the time they see the result of the action they have taken.
  2. Measured against a 24-hour clock, how many seconds does it take to sequence the different processing steps applied to a frame of video before it is pushed to the viewing audience? This is absolute latency. The expectation for absolute latency varies widely by producer but is usually less than 30 seconds.

The reason to break this into two separate questions is that, as long as all the processing steps involving relative latency can be properly sequenced within the expected absolute latency, it doesn't matter how many there are or when they occur. The system operators will take their actions in what feels like real time, and the audience will have a live viewing experience.
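To make the two budgets concrete, here is a minimal sketch in Python. The step names and durations are invented for illustration, not measurements from any real production chain: operator-facing steps are checked against the roughly 240 ms relative budget, while the full chain is summed against the absolute budget.

```python
RELATIVE_LATENCY_BUDGET_MS = 240       # operator action-to-result threshold
ABSOLUTE_LATENCY_BUDGET_S = 30         # glass-to-glass expectation

# (step name, processing time in ms, operator feedback time in ms; 0 = no operator)
pipeline = [
    ("ingest",             80,   0),
    ("switcher",          120, 180),   # operator sees the result 180 ms after acting
    ("graphics",          150, 200),
    ("audio mix",          90, 160),
    ("encode/packaging", 4000,   0),
]

def feels_live(steps):
    """Every operator-facing step must respond within the relative budget."""
    return all(fb <= RELATIVE_LATENCY_BUDGET_MS for _, _, fb in steps if fb)

def fits_absolute_budget(steps):
    """The whole sequenced chain must land within the absolute budget."""
    total_ms = sum(proc for _, proc, _ in steps)
    return total_ms / 1000 <= ABSOLUTE_LATENCY_BUDGET_S

print("Operators feel real time:", feels_live(pipeline))
print("Audience still experiences it as live:", fits_absolute_budget(pipeline))
```

As long as both checks pass, the number and placement of the intermediate steps is immaterial, which is exactly the point of separating the two questions.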

New technology can align contributions from multiple contributors.

To see how this works, let's look at AMPP, Grass Valley's Agile Media Processing Platform. In AMPP, every video frame is timestamped as it enters the system. Because transport times vary as frames speed across networks to different members of the production team, AMPP also tracks the local time of each operator. This allows the creative decisions made by an operator, and their associated processing time, to be tracked relative to that operator's time. The result of the operator's work is then timestamped with whatever offset best synchronizes it across the production chain.
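A simple sketch of the timestamping idea follows; the class and function names here are hypothetical illustrations, not AMPP interfaces. A frame carries the time it entered the system, and an operator-specific offset maps that time onto the operator's local timeline so their decisions can later be placed back on the shared production timeline.

```python
from dataclasses import dataclass
import time

@dataclass
class StampedFrame:
    frame_id: int
    capture_ts_ms: float   # timestamp applied as the frame enters the system

def stamp(frame_id: int) -> StampedFrame:
    """Timestamp a frame on ingest (hypothetical helper, not an AMPP call)."""
    return StampedFrame(frame_id, time.monotonic() * 1000)

def to_operator_timeline(frame: StampedFrame, operator_offset_ms: float) -> float:
    """Express the frame's time on an operator's local timeline so the
    operator's decisions can be mapped back onto the shared timeline."""
    return frame.capture_ts_ms + operator_offset_ms

# An operator whose feed arrives 120 ms behind the source timeline:
frame = stamp(frame_id=1)
local_ts = to_operator_timeline(frame, operator_offset_ms=120.0)
```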

With AMPP managing these timing offsets, the operator experiences the phase-aligned environment they are used to. The order and local timing of the decisions are maintained. When all operator actions are sequenced, the total environment is time-shifted relative to the source and thus maintains the program’s continuity.

Following this design strategy, any live production task can be carried out in what feels like real time and assembled in a linear fashion to create programming that exceeds audience expectations. Even with complicated production tasks, total execution time is a few seconds. Compare this with today's traditional live broadcasts, which, in the best of circumstances, still take as much as 50 seconds to reach final emission delivery to the home.

Unchaining individual operator workstations from external time is possible because AMPP operates faster than real time, using technologies that did not exist when traditional frames-per-second timing was implemented. Frame syncs that were once used to introduce a few frames of delay are replaced by memory buffers, which can hold frames until they are needed for the sequence.
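A rough sketch of that buffering idea, assuming a simple timestamp-indexed store (illustrative only, and not a description of AMPP's internals): frames are parked in memory and released only when the sequencer asks for the frame due at a given play-out time.

```python
from collections import OrderedDict

class FrameBuffer:
    """Timestamp-indexed buffer standing in for a traditional frame sync."""

    def __init__(self, depth: int):
        self.depth = depth            # how many frames of delay we can absorb
        self.frames = OrderedDict()   # timestamp_ms -> frame payload

    def push(self, timestamp_ms: int, frame) -> None:
        self.frames[timestamp_ms] = frame
        while len(self.frames) > self.depth:   # drop the oldest on overflow
            self.frames.popitem(last=False)

    def pop_due(self, play_ts_ms: int):
        """Release the newest frame whose timestamp has come due."""
        due = [ts for ts in self.frames if ts <= play_ts_ms]
        return self.frames.pop(max(due)) if due else None
```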

AMPP's internal frame management allows unique offsets for each operator by adjusting the buffer depth to match the timing offset required for each essence. Alternatively, AMPP can force groups of operators to be synchronized if that timing is critical to their workflow. In either case, the operator perceives the system as responding to them in real time.
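The two policies might look something like the following sketch. The operator names and offsets are invented, and the 50 fps frame rate is simply an assumption used to convert a millisecond offset into a buffer depth.

```python
def buffer_depth_frames(offset_ms: float, frame_rate: float = 50.0) -> int:
    """Convert a required timing offset into a whole number of buffered frames."""
    return round(offset_ms / (1000.0 / frame_rate))

operators = {"switcher": 40.0, "graphics": 120.0, "audio": 80.0}   # offsets in ms

# Independent offsets: each operator gets a buffer sized for their own path.
per_operator = {name: buffer_depth_frames(ms) for name, ms in operators.items()}

# Group-synchronized: everyone is padded to the slowest path so they stay in phase.
group_offset = max(operators.values())
group_sync = {name: buffer_depth_frames(group_offset) for name in operators}
```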

Dennis Breckenridge, CEO of Elevate Broadcast Pte Ltd, described their experience with AMPP this way:

“With our virtual product we went whole hog. We had no backup plan. We counted on AMPP fully to work and we pushed the boundaries.

“We had contribution from many different countries: Australia, Singapore, the Philippines, Indonesia, and Thailand. Our producer was in Singapore. The director and TD with the switcher were side by side in Sydney, Australia. The main cameras were all in green screen studios with virtual sets but we also had live Zoom feeds and other complications.

“We told the production team: ‘You can’t come to Singapore because of the pandemic. You can stay there and we’re still gonna make everything that you’re used to: Karrera panel, multiviews, comms… All these type of things we’re gonna make magically work for you and you’ll produce a major broadcast for Asia!’ It took a little time to build their confidence and acceptance of that possibility.

Chuck Meyer (left) and Chris Merrill (right).

“Once all the comms and everything came together, the concerns from the production team went away. We managed all the delays through the system. Once that happened, they forgot about the technology and they just moved on with their production. That was the end of it. They felt like they were just in two different control spaces within the same facility. They didn’t think about the fact that they were on different continents.”

AMPP manages both relative and absolute latency in a way that makes the difference invisible to the operator and audience, erasing the barriers that were previously very apparent in remote production.
