Cloud Native - It’s Not All Or Nothing
Cloud native processing has become a real opportunity for broadcasters in recent years as latencies, processing speeds, and storage capacities have not only met broadcast requirements but surpassed them.
Start-ups without the baggage of legacy systems, workflows, or archives have the opportunity to completely rethink their workflows and build truly cloud native systems. However, the vast majority of broadcasters considering migration to the cloud will have decades' worth of legacy systems, workflows, and archive material to think about. For them, true cloud native adoption is impossible, but they do have the option of building hybrid models that combine existing on-prem workflows with cloud native systems.
The power of cloud native relies on separating software processes from the underlying hardware through software abstraction. Not only does this provide more flexibility, but it also reduces the risk of lock-in to any one public cloud vendor.
Central to this abstraction are APIs, as they provide a generic wrapper for the underlying systems that process the video, audio, and metadata. Using facilities such as microservices and containers, system designers abstract away the low-level functionality behind the API so they don't need to get bogged down in the detail of how each process is applied.
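As an illustration, a generic API wrapper for a processing step might look like the following minimal sketch. All names, the job schema, and the dispatch table are hypothetical, not any vendor's actual API; the point is that the caller describes *what* to do while the microservice behind the wrapper decides *how*:

```python
from dataclasses import dataclass

# Hypothetical, generic job schema: the caller describes the operation
# and the essence location, never the implementation behind it.
@dataclass
class ProcessRequest:
    operation: str   # e.g. "proc-amp", "standards-convert"
    source_url: str  # where the essence lives (object store, etc.)
    params: dict     # operation-specific settings

@dataclass
class ProcessResult:
    status: str
    output_url: str

def proc_amp(req: ProcessRequest) -> ProcessResult:
    # Stand-in for a containerized proc-amp microservice; the real work
    # (gain, black level, chroma adjustments) is hidden behind the API.
    return ProcessResult("ok", req.source_url + ".processed")

# Dispatch table abstracts away which service implements each operation;
# a new container can be registered without changing any caller.
OPERATIONS = {"proc-amp": proc_amp}

def submit(req: ProcessRequest) -> ProcessResult:
    return OPERATIONS[req.operation](req)
```

A workflow then only ever calls `submit()`, so replacing the service that fulfils an operation never disturbs the callers.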
The broadcast industry has matured to the extent that we can assume a proc-amp, standards converter, or even a production switcher will just work, so we don't need to spend hundreds of hours re-inventing the wheel. Instead of re-designing a proc-amp, why not use one of the library instances available from a multitude of vendors? Forward-thinking vendors already provide pay-as-you-go models for their applications, and some even offer a try-before-you-buy model that lets system designers test the APIs and the application in their workflows before committing to the design.
API abstraction also helps with future-proofing a design because the point of demarcation is well defined. In software terms, swapping out a standards converter from vendor A for one from vendor B is a relatively straightforward task. Admittedly, this relies on the broadcaster's system designers providing abstract interfaces within their software design so that workflow dependencies can be easily established.
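The vendor-swap idea can be sketched with an abstract interface. The class and vendor names below are invented for illustration; only the pattern matters, namely that the workflow depends on the interface, never on a concrete implementation:

```python
from abc import ABC, abstractmethod

class StandardsConverter(ABC):
    """Abstract demarcation point: workflows depend on this interface,
    never on a specific vendor's implementation."""
    @abstractmethod
    def convert(self, clip: str, target_format: str) -> str: ...

# Hypothetical vendor implementations behind the same interface.
class VendorAConverter(StandardsConverter):
    def convert(self, clip, target_format):
        return f"{clip} converted to {target_format} by vendor A"

class VendorBConverter(StandardsConverter):
    def convert(self, clip, target_format):
        return f"{clip} converted to {target_format} by vendor B"

def run_workflow(converter: StandardsConverter) -> str:
    # The workflow only sees the abstract interface, so swapping
    # vendor A for vendor B is a one-line change at construction time.
    return converter.convert("clip.mxf", "1080i50")
```

Changing `run_workflow(VendorAConverter())` to `run_workflow(VendorBConverter())` is the entire migration from the workflow's point of view.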
Speed is another area where cloud native solutions help broadcasters. It's entirely possible to create a proof of concept in a matter of hours, not weeks or months. With traditional broadcast systems, hardware procurement and installation were always the blocker; with datacenters already in place, building workflows from known software and libraries becomes much easier.
Broadcasters with existing workflows need to consider how they interface with their current hardware. Although the cloud native textbook tells us to throw away existing workflows and find more efficient ways of working, this is often simply not practical. If an on-prem hardware solution exists that cannot be replicated in the cloud, then a server will need to be installed alongside it to act as a proxy controller. And because the signals are probably SDI or AES, they will need to be converted into a file or stream at some point before being sent to the cloud.
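As one possible shape for that SDI-to-file conversion step, the sketch below builds an ffmpeg command line for capturing from an SDI card into a mezzanine file. It assumes an ffmpeg build with DeckLink support; the device name, codecs, and the subsequent upload step are all facility-specific assumptions, not a prescription:

```python
def sdi_capture_command(device: str, output_path: str) -> list[str]:
    # Sketch only: assumes ffmpeg compiled with DeckLink support.
    # Codec choices are illustrative mezzanine settings.
    return [
        "ffmpeg",
        "-f", "decklink",     # SDI capture via a DeckLink card
        "-i", device,
        "-c:v", "prores",     # intermediate video codec for transfer
        "-c:a", "pcm_s24le",  # keep AES audio as uncompressed PCM
        output_path,
    ]

cmd = sdi_capture_command("DeckLink Duo (1)", "capture.mov")
# The resulting file (or a growing stream of it) can then be pushed
# to object storage for the cloud side of the hybrid workflow.
```

The proxy-controller server would run a command along these lines, then hand the output to whatever transfer mechanism the cloud workflow expects.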
These challenges might seem daunting, but keeping an open mind usually reveals that they are rarely insurmountable and that practical solutions can be found.
All this leads to the hybrid model: just because we can move to cloud native doesn't mean we have to. In twenty years, most broadcasters may well have moved to a cloud native model, but in the meantime we must work with the hybrid approach.
That said, one challenge is to avoid merely replicating existing workflows in the cloud so that they become a copy of the on-prem design. Doing so completely misses what it means to use public cloud computing, where the whole point is to achieve scalability through software abstraction.
One example of a workflow that can be optimized and scaled in the cloud is batch processing. Standards conversion, video color and level processing, and audio loudness adjustment are all examples. It's very tempting to just move a file from A to B, process it, and then move it to C, but is this the most efficient way of working? Does the process need high CPU, large disk capacity, or fast I/O access? Knowing this allows the system designer to choose the appropriate resource for the task at hand.
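That matching of job to resource can be sketched as a simple lookup. The job categories, profiles, and instance family names below are invented for illustration; real cloud providers have their own instance taxonomies:

```python
# Hypothetical resource profiles for common batch processes; the
# classification of each job type is illustrative only.
JOB_PROFILES = {
    "standards-conversion": "cpu",  # heavy per-frame computation
    "loudness-adjustment":  "cpu",
    "file-copy":            "io",   # dominated by disk/network I/O
}

# Illustrative instance families; real names vary by cloud provider.
INSTANCE_FOR_PROFILE = {
    "cpu": "compute-optimized",
    "io":  "storage-optimized",
}

def pick_instance(job_type: str) -> str:
    # Matching the resource shape of the job to the resource shape of
    # the instance is what makes a cloud batch farm cost-effective.
    profile = JOB_PROFILES.get(job_type, "cpu")
    return INSTANCE_FOR_PROFILE[profile]
```

Running a standards conversion on a storage-optimized machine, or a file copy on a compute-optimized one, wastes the very elasticity the cloud is supposed to provide.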
This leads on to the concept of Agile development. DevOps engineers look at the world differently from traditional broadcast engineers. Because they can build systems quickly, they adopt thought processes that embrace rapid change, designing and building systems that can adapt quickly to changing business demands. Silo working practices are frowned upon, to the extent that collaboration is assumed and expected, which is why open source is so popular in the DevOps community.
Cloud native may be the utopian dream, but the harsh reality is that most broadcasters have so many legacy systems with on-prem hardware dependencies that it is almost impossible to move to the cloud in one leap. Instead, a hybrid approach is adopted, but employing software abstraction to deliver scalability must be at the core of any cloud integration.