Esports - A New Prescription For Broadcasting: Part 3 - Considering The Cloud
Esports is demonstrating how agile mindsets can provide flexible and scalable solutions within relatively short timescales. But as more software solutions become viable, esports is taking advantage of the cloud and its offerings.
One of the advantages of esports is that much of its transmission takes place over the internet. Twitch and YouTube are just two of the platforms providing streaming services that allow virtually anybody to set up their own channel.
It seems logical, therefore, to keep as much processing in the cloud as possible, because the business model adopted by the public cloud provider defines where the costs apply. Some service providers discount their compute resource but charge for data ingress and egress; others keep ingress charges low and bump up egress costs.
High Speed Data Availability
Service providers often build links between their data centers to speed up distribution, resulting in high-speed data highways between the processing resources distributed around the world. These are a natural fit for esports production companies as they use facilities already available within IT, without having to employ custom distribution circuits.
Although latency and bandwidth are not generally guaranteed, the esports community takes a more pragmatic approach to these challenges: packet loss is assumed, and methods for working with it are adopted. TCP is a protocol that guarantees delivery, but at the expense of latency. Round trip times can become excessive if a particularly lossy network is being used or a switch is dropping packets due to egress congestion.
SRT and RIST are alternatives to TCP-based protocols, as they have been developed to deal with challenges specific to broadcast television. But in the IT world, protocols derived from ARQ (Automatic Repeat reQuest) are often used to maintain reliable delivery.
ARQ Challenges
ARQ describes several different packet-loss detection and retransmission strategies that seek to improve data throughput in IP networks. TCP is a form of ARQ and uses the Go-Back-N strategy to guarantee delivery. In essence, the receiver sends a message back to the sender telling it either to re-send any lost packets or to send the next window of packets. But if a receiver is not aware it has missed a window of packets, because it didn't know they had been sent, then the sender will time out waiting for a response from the receiver. This greatly affects latency and data throughput.
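To make the go-back behavior concrete, the sketch below simulates a toy Go-Back-N sender and receiver in Rust. The window size, packet count and the single simulated loss are illustrative assumptions, not a real TCP stack; the aim is simply to show that one lost packet forces the sender to go back and resend packets the receiver had already been sent, which is where the extra round trips, and therefore latency, come from.

```rust
// A toy Go-Back-N sender/receiver pair. The receiver only accepts packets in
// order, so one loss means every later packet in the window is discarded and
// has to be resent once the sender times out. That resend cycle is the
// latency cost described above. Window size, packet count and the simulated
// loss are illustrative assumptions.

const WINDOW: usize = 4;
const TOTAL: usize = 10;

fn main() {
    // Simulated lossy link: packet 5 is dropped on its first transmission only.
    let mut already_dropped = false;
    let mut link = |seq: usize| -> bool {
        if seq == 5 && !already_dropped {
            already_dropped = true;
            return false; // lost in the network
        }
        true
    };

    let mut base = 0usize;          // oldest unacknowledged sequence number
    let mut next_expected = 0usize; // receiver only accepts this sequence next
    let mut transmissions = 0usize; // every send, including the go-back resends

    while base < TOTAL {
        let end = (base + WINDOW).min(TOTAL);
        // Send (or, after a timeout, resend) the whole window from `base`.
        for seq in base..end {
            transmissions += 1;
            if link(seq) && seq == next_expected {
                next_expected += 1; // in-order: accepted and cumulatively ACKed
            }
            // Out-of-order or lost packets are simply not acknowledged.
        }
        base = next_expected; // slide the window to the highest cumulative ACK
    }

    println!("{TOTAL} packets delivered using {transmissions} transmissions");
}
```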
SRT and RIST add forward error correction and their own ARQ strategies to improve the efficiency of streaming video and audio. TCP has been established for over thirty years and carries a lot of backwards-compatibility baggage, so it has proved difficult to evolve the protocol for specialized applications such as streaming video and audio.
By effectively starting again, protocols such as SRT and RIST have been able to address many of the challenges seen with TCP and provide lower-latency, higher-throughput links.
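The forward error correction side of that trade-off can be illustrated with a deliberately simple XOR parity sketch. One parity packet protects a small group of data packets, so a single loss within the group can be rebuilt at the receiver without waiting a round trip for a retransmission. The real FEC schemes used by SRT and RIST are considerably more sophisticated, so treat this purely as an illustration of the principle.

```rust
// A deliberately simple XOR parity sketch. One parity packet protects a
// group of equal-length data packets, so a single loss in the group can be
// rebuilt at the receiver without a retransmission round trip.
// Real SRT/RIST FEC schemes are far more sophisticated than this.

fn xor_into(acc: &mut [u8], packet: &[u8]) {
    for (a, b) in acc.iter_mut().zip(packet) {
        *a ^= *b;
    }
}

fn main() {
    // A group of four equal-length payloads plus one parity packet.
    let group: Vec<Vec<u8>> = vec![
        b"video payload 0".to_vec(),
        b"video payload 1".to_vec(),
        b"video payload 2".to_vec(),
        b"video payload 3".to_vec(),
    ];
    let mut parity = vec![0u8; group[0].len()];
    for packet in &group {
        xor_into(&mut parity, packet);
    }

    // Pretend packet 2 was lost in transit. XOR the parity with the three
    // survivors and the missing payload falls straight out.
    let mut recovered = parity.clone();
    for (i, packet) in group.iter().enumerate() {
        if i != 2 {
            xor_into(&mut recovered, packet);
        }
    }
    println!("recovered: {}", String::from_utf8_lossy(&recovered));
}
```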
It’s fair to say that SRT and RIST will never achieve the low latencies of ST 2110, but the quality of network required to make ST 2110 reliable enough is orders of magnitude higher than that of a network using TCP, SRT, or RIST. There is a simple trade-off between latency and cost.
Latency has attracted a lot of bad publicity in recent years, especially in the OTT domain where 30 seconds of delay is not uncommon. However, some latency is inevitable and a physical certainty. We shouldn’t just be asking how to make latency as low as possible, but instead how much is acceptable for the application we’re working with. As esports networks are showing, some latency is acceptable and to be expected.
Acceptable Latency
As we deliver more to cloud services, the key is to understand how much latency is acceptable. We don’t just have to contend with network delays, but also with processing delays and the influence buffers have within the overall design. Buffers are inevitable, and converting between synchronous and asynchronous systems demands their use.
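One way to make that question concrete is to tally a simple end-to-end latency budget. The stage names and figures in the sketch below are illustrative assumptions rather than measurements, but they show how quickly buffers and processing stages add up, and why the budget should be set per service rather than minimised everywhere at any cost.

```rust
// A back-of-the-envelope latency budget. The stage names and figures are
// illustrative assumptions, not measurements; the point is that buffers and
// processing stages add up and should be budgeted per service.

fn main() {
    // (stage, contribution in milliseconds) — assumed example values.
    let stages = [
        ("capture and encode", 35.0_f64),
        ("contribution network", 60.0),
        ("cloud processing and switching", 80.0),
        ("packaging and CDN buffering", 4_000.0),
        ("player buffer", 6_000.0),
    ];

    let total_ms: f64 = stages.iter().map(|(_, ms)| ms).sum();

    for (stage, ms) in &stages {
        println!("{stage:<32} {ms:>8.0} ms");
    }
    println!("{:-<44}", "");
    println!("{:<32} {:>8.1} s end to end", "total", total_ms / 1000.0);
}
```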
Esports engineers wouldn’t think in terms of keeping latency low for all services; instead, they would adopt a method of prioritization. It might be perfectly acceptable for the home viewer to have a thirty-second delay on their OTT feed, just as long as it is consistent. In the studio, the latencies would have to be tightened up to a few hundred milliseconds.
Reducing video latency to a few milliseconds is going to put incredible strain on the network and will probably increase its cost through the procurement of more switches and interconnections. But what benefits will this extra cost and complexity achieve? Would somebody operating a production switcher notice the difference between a cut delay of 20ms and 200ms? They probably would, but would it make a massive difference to this type of production? Probably not. Humans adapt, and as long as the latency is consistent, those operating the equipment will adapt with it.
Combining Formats
This is where esports thinking is potentially offering broadcasters a lifeline. IP offers us the opportunity to mix and match our solutions as we’re no longer tied to the static SDI and AES transport systems. Some video feeds may not benefit from a 20ms delay, so why try to impose it on them?
Keeping signal processing in the cloud is a dream come true for anybody who has ever worked in a technology field, as the massive amount of compute and storage resource available is breathtaking. It might not be infinite, but it would be very difficult for any broadcaster to use more than is available to them.
Controlling processes such as vision switching and audio mixing may at first appear to be a challenging job. One solution is to use the traditional broadcast control panels found in facilities throughout the world. The actual processing of the video and audio streams was separated from the physical control interface years ago, so this is easily achievable.
Remoting Control
Production switchers and sound consoles often have a network link to the signal processing engine, giving the potential to connect them to cloud services. This is indeed possible, but an alternative is to write software for a specific job. Production switchers and sound consoles look complicated, and they are, because they’re designed to do a multitude of jobs. If instead a human interface were written in software to perform a specific task for that production, the whole design would be much less complex and easier to use.
Writing custom software to achieve this kind of automation is much easier with modern languages such as Rust and the web-style frameworks built on them. These frameworks hide much of the complexity from the developer, reducing the need for repetitive programming. This allows the developer to concentrate on solving the challenges of the production as opposed to getting bogged down in the technology.
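As a flavor of how small such a single-purpose interface can be, the sketch below sends one ‘cut’ command to a hypothetical cloud switching engine over a plain TCP connection. The address and the line-based command format are assumptions made for illustration, not any vendor’s protocol; a real deployment would sit behind authentication and a proper API.

```rust
// A sketch of a deliberately single-purpose control surface: it knows how to
// do exactly one thing for this production — cut between sources on a cloud
// vision mixer. The address and the line-based "CUT <bus> <source>" command
// are assumptions for illustration, not any real vendor's protocol.

use std::io::{BufRead, BufReader, Write};
use std::net::TcpStream;

fn send_cut(engine_addr: &str, bus: &str, source: &str) -> std::io::Result<String> {
    let mut stream = TcpStream::connect(engine_addr)?;
    // One command per line keeps the protocol trivially easy to debug.
    writeln!(stream, "CUT {bus} {source}")?;
    let mut reply = String::new();
    BufReader::new(stream).read_line(&mut reply)?;
    Ok(reply.trim().to_string())
}

fn main() -> std::io::Result<()> {
    // Hypothetical cloud switching engine reachable over the production VPN.
    let reply = send_cut("10.0.0.10:9000", "PGM", "CAM2")?;
    println!("engine replied: {reply}");
    Ok(())
}
```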
Esports technologists and engineers are looking at production from the perspective of IT. By applying many of the principles of reducing complexity, they’re providing easy-to-operate, customized systems that allow production teams to get on with making compelling programs and ultimately enhance the immersive viewing experience.