Building Software Defined Infrastructure: What Is Software Defined Infrastructure?
We begin our new series by asking a simple question: what is Software Defined Infrastructure and why do we need it?
One of the challenges broadcasters face is that switching their mindset from the synchronous nature of video and audio connectivity using baseband SDI and AES to the asynchronous systems of IP and Ethernet is far from trivial. A great deal of unnecessary complexity has resulted, so it's worth taking stock of where we are and where we need to be.
Taking a traditional broadcast A-B workflow and pushing it into a COTS-type infrastructure is both futile and extremely costly. The fundamental change that IP brings, quite apart from the cost-of-ownership benefits of IT systems, is that the whole premise on which IT compute, storage, and networking systems are designed is almost diametrically opposite to that of traditional broadcast systems.
Back in the 1930s, when television was an experiment, pioneers were pushing the boundaries of the electronics of the day to meet the very specific demands of the time-invariant sampling system that makes television work. There are no moving pictures in television, just a series of still images played back quickly to give the illusion of motion. This method led to synchronous distribution with embedded synchronizing signals that were needed to frequency- and phase-lock the home television to the video cameras in the broadcast station. As electronics were so expensive at the time, it was deemed necessary to keep the complexity in the television station rather than move it to the home viewing device. Broadcast systems therefore became synchronous and are based on circuit-switched networks.
Asynchronous Event Driven
IT systems were born out of the need to process data, and those designs formed the basis of the programmable computers of today. Fundamentally, these computers operated asynchronously and were never designed to function synchronously. They can be thought of as event driven systems: they would embark on a calculation when programmed to do so, and for the rest of the time they just sat there waiting for the next program to be loaded and executed.
Broadcast infrastructures have synchronous processing at their core, so unlike asynchronous computer systems they continue processing video and audio even when there is nothing there. What is the point of processing a signal that is just a continuous feed of video-black, or an audio signal that is silent? Synchronous systems process data that simply isn't needed; in other words, synchronous media infrastructures are highly wasteful of expensive electronic resources. IT data processing, being largely asynchronous, only sends, receives, and processes relevant data, and such systems can be thought of as event driven.
On the one hand we have a traditional broadcast system that processes all the data in a baseband SDI or AES circuit even when it isn't needed; on the other, we have the much more efficient asynchronous IT compute systems based on event driven processing, which only process data when it's needed.
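The contrast can be sketched in a few lines of code. The hypothetical Python illustration below (not taken from any broadcast product) compares a synchronous loop that processes a frame on every tick of a fixed clock, whether or not the content is anything more than video-black, with an event driven loop that only does work when a frame actually arrives:

```python
import queue
import time

FRAME_PERIOD = 0.040  # one 25fps frame period, 40 ms

def synchronous_loop(get_frame, process):
    """Clocked like SDI: process() runs every frame period,
    even when the input is continuous video-black or silence."""
    while True:
        frame = get_frame()       # always returns a frame, even black
        process(frame)            # resource is consumed regardless of content
        time.sleep(FRAME_PERIOD)  # wait for the next tick of the clock

def event_driven_loop(events: queue.Queue, process):
    """Event driven: process() only runs when a frame actually arrives;
    otherwise the loop blocks, consuming no processing resource."""
    while True:
        process(events.get())     # .get() sleeps until an event occurs
```

In the synchronous loop the processing cost is fixed by the clock; in the event driven loop it is proportional to the data that actually arrives.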
Improving Efficiency With Scaling
If we are considering just one video or audio distribution circuit, it could easily be argued that the difference between synchronous and event driven is largely irrelevant. But as we scale to hundreds and potentially thousands of media streams, the distinction becomes huge and cannot be ignored. This is what is meant by changing mindset from synchronous broadcast workflows to asynchronous, event driven COTS infrastructures.
Figure 1 – Synchronous processing tends to be blocking, as process-B needs to complete before process-A (or any other process) can continue. With asynchronous processing, blocking is less likely to occur, so multiple operations can continue, improving efficiency.
A further differentiation soon appears when we look at how we can leverage event driven systems to improve efficiency. Events, by their very definition, are time constrained; they only occur for a specific length of time. And if an event is time limited, then unrelated events can occur consecutively and independently. Therefore, we can have many different systems all queuing to use the same resource. If the queue isn't too long, and the waiting time is deterministic, then we have greatly improved the efficiency of the resource.
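As a minimal sketch of that queuing idea, assuming nothing beyond Python's standard library, the example below has several unrelated workflows submitting time-limited events to one shared worker; while the queue stays short, each event's waiting time remains predictable:

```python
import queue
import threading

jobs: queue.Queue = queue.Queue()  # many workflows, one shared resource

def shared_resource():
    """A single worker serving time-limited events from many sources."""
    while True:
        source, payload = jobs.get()
        # ... perform the time-limited work for this event ...
        jobs.task_done()

threading.Thread(target=shared_resource, daemon=True).start()

# Unrelated workflows queue independently for the same resource.
jobs.put(("camera-1", b"frame"))
jobs.put(("camera-2", b"frame"))
jobs.join()  # waiting time stays deterministic while the queue is short
```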
In a synchronous system an entire resource, such as a transcoder, would need to be allocated to a specific workflow even if it wasn't being used. Yes, it is possible to move the transcoder to another workflow, but this would need manual intervention requiring a reconfiguration of the transcoding parameters and a rerouting of the SDI matrix. Again, it is possible to achieve this using a form of automation, but it soon starts to get very messy. Also, the transcoder's throughput is limited by the input and output capability of the SDI connection.
Resource Division
If we start to look at the transcoding operation from an asynchronous, event driven perspective, then everything takes on a new viewpoint. Assuming an event is a frame of video, there is no reason why the transcoder cannot treat a video frame independently of a frame from a separate source and process each accordingly. Using IP addressing, we can differentiate video frames by their source and destination IP addresses; the transcoder can use these to load the respective parameter file and process the video frame accordingly. This becomes even more apparent as we increase the processing capability of the transcoder, since it could easily have the capacity to process ten video frames from different sources within 20 milliseconds. The transcoder server will still be processing the video frames sequentially, but it will give the appearance of processing them concurrently.
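A minimal sketch of that mechanism, with entirely hypothetical addresses and parameter sets, might key the transcode profile on the frame's (source IP, destination IP) pair so that frames from different streams can be interleaved through the same server:

```python
# Hypothetical illustration: select transcode parameters per frame
# using the (source IP, destination IP) pair carried with the stream.
PROFILES = {
    ("192.168.10.1", "192.168.20.5"): {"codec": "h264", "bitrate": 8_000_000},
    ("192.168.10.2", "192.168.20.5"): {"codec": "hevc", "bitrate": 4_000_000},
}

def transcode_frame(src_ip: str, dst_ip: str, frame: bytes) -> bytes:
    """Frames from different sources share one transcoder; each frame
    loads its own parameter set, so processing is sequential per frame
    but appears concurrent across streams."""
    params = PROFILES[(src_ip, dst_ip)]  # load the respective parameter file
    # ... apply params to this frame, independently of any other stream ...
    return frame
```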
Expanding this thought further, we start to move into the world of datacenters, as the transcoder resource is just a computer server. If we treat a server as just a processing resource, then it's only one more step to abstract away the video and audio processing functionality from the underlying resource. And by doing this, we have laid bare the true power of IP, as the resource and the functionality have been separated from each other.
Transitioning To Datacenters
Datacenters take on many guises, but essentially they can be thought of as a highly configurable compute, storage and networking resource. By abstracting away the functionality, we have the option of changing how the underlying hardware operates independently of the actual workflows. As new resource becomes available, we can swap it in without affecting the workflow. We can also scale it up and down to meet the demands of the business, as broadcast requirements have peaks and troughs throughout their operation.
IP is the enabling technology within the datacenter as it is the common method of data exchange between servers and storage devices. Ethernet may well be used as the underlying transport, but IP is the base level of information that is sent to and from IT devices. IP is transport agnostic and so can be carried over many different types of transport, such as Ethernet or fiber. Protocols automatically take care of any packet encapsulation, so the layer-3 IP packet doesn't know or care what type of transport it is being sent over.
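This separation is visible in everyday socket code. In the short sketch below (the address and port are hypothetical), the application deals only with layer-3 information, an IP address and a port; the operating system handles all the layer-2 encapsulation, so the same program runs unchanged over copper Ethernet, fiber, or anything else:

```python
import socket

# The application sees only layer-3: an IP address and a port. Whether
# the packet leaves over copper Ethernet, fiber, or a tunnel is handled
# entirely by the lower layers of the network stack.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"one frame's worth of payload", ("203.0.113.10", 5004))
```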
This further opens the door to flexibility in datacenter design. As IP packets don't know whether they are in a datacenter within the broadcast facility or at a cloud provider somewhere else in the world, the potential to integrate these two different facilities now exists.
Cloud computing is a term often used to describe public cloud services such as AWS or Azure, but it has a much broader meaning: it applies to any datacenter that has adopted a cloud architecture. This includes virtualization as well as microservices, and can be either on- or off-prem, public or private. On-prem means the server farm and associated storage and networks physically exist on a broadcaster's site; off-prem means the infrastructure exists away from that site. These are not to be mixed up with public and private, as a broadcaster could employ an off- or on-prem private cloud. Generally speaking, the public cloud service providers only supply off-prem cloud services. All this will be discussed in much more detail in a later article.
Moving To Software Defined Infrastructure
Having abstracted and separated the hardware from the functionality, we need some method of controlling and monitoring the infrastructure, preferably an automated one, and this is achieved through Software Defined Infrastructure (SDI). Unfortunately, the IT industry paid little attention to the fact that the broadcast industry was already using SDI as an acronym when it adopted SDI for Software Defined Infrastructure. As confusing as it is, the IT version of the acronym is here to stay. An SDI provides the ultimate in flexibility, as we have complete control over how the functionality of the workflow and the hardware connect and work together.
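One way to picture that control layer is declaratively. The sketch below is purely hypothetical, not any vendor's API: the desired workflow is described as data, and a controller reconciles it against whatever hardware is currently available, so the workflow description never changes when the hardware does:

```python
# Hypothetical sketch of Software Defined Infrastructure: the workflow
# is declared as data, and a controller maps it onto available hardware.
desired_state = {
    "workflow": "evening-news",
    "functions": [
        {"name": "ingest",    "streams": 4},
        {"name": "transcode", "profile": "hevc-1080p50"},
        {"name": "playout",   "redundancy": "1+1"},
    ],
}

def reconcile(desired: dict, available_servers: list) -> dict:
    """Assign each declared function to a free server. Because the
    functionality is abstracted from the hardware, swapping or scaling
    the servers never changes the workflow description itself."""
    return {fn["name"]: server
            for fn, server in zip(desired["functions"], available_servers)}

placement = reconcile(desired_state, ["srv-a", "srv-b", "srv-c"])
```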
Although the concept of Software Defined Infrastructure is relatively new to IT, broadcasters have had some experience of this through automation, albeit heavily restricted to synchronous baseband SDI/AES infrastructures.
As we progress through the rest of the articles in this series, we will dig deeper into SDI and how it is set to revolutionize broadcast television over the coming years.