Virtualising Your Playout Operations: A Reality Check

So you think you want to virtualise your playout? Good: there are many benefits to be gained from deploying a channel in the cloud. For centralcast hubs, service providers, MCOs (Multiple Channel Operators), sports broadcasters, and corporates, virtualised playout can deliver an affordable option to deploy or contract IP-based channels instantly, without the burden of racks of complicated hardware and weeks or months of setup and provisioning.

However, before you sign the order for a virtualised solution, there are several important factors to consider. Let me offer some guidance as you make these plans.

Adding a virtualised infrastructure into a broadcast facility adds an extra layer of complexity and specific new requirements into the mix. Don’t underestimate the level of in-house expertise you will need access to in order to implement a full-scale virtualised platform. Make no mistake, you will need to understand every nut and bolt of your virtual environment.

Moving a broadcast playout project to a virtualised environment involves processes that may be unfamiliar to some broadcast engineers. Expect the design and implementation to be a collaborative process, more about software than hardware.

So before you embark upon a lengthy and potentially costly Proof of Concept (POC) or virtualised implementation, here are our tips and guidelines to help you set expectations and be clear about what you want to achieve.

The success of the project will depend as much on your infrastructure as it does on the applications you run on it

Whilst a vendor might be willing to supply a small-scale virtualised system for a POC, a full-scale production environment is a different matter. Application providers will most likely not be willing to take control and design responsibility for your infrastructure, although they should be skilled and experienced enough to help you tune the system and diagnose performance issues, for example.

Where does the knowledge lie about your virtualised infrastructure?

Is it in house, in the hands of a third party, or with a contractor? Before you start the project, make sure you understand who owns each part of the environment. In the more traditional setup you own the playout device and the vendor takes full responsibility for how that device performs, which benchmarks it complies with, and so on. With a virtualised solution, however, the vendor is simply the software provider, meaning that you, or your nominated representative, are responsible for the overall performance of the virtualised platform and networks. You will need to build relationships with an entirely different set of suppliers, and it helps to do so early in the process.

What is your POC aiming to prove?

Do you aim to establish that an application will simply run on a virtualised platform and that it meets some set of arbitrary playout requirements? If you plan to start small and simple, do you aim to test your simplest use-cases or the most complex? All parties should agree on what the expected outcomes of the POC will be.

The application vendor will certainly be able to provide a configuration for you to demo their software, but perhaps it’s more relevant for your POC to focus on establishing whether your organisation is able to design, build, manage and maintain a scalable virtualised infrastructure, and whether that system will meet the on-going needs of your business.

In-depth discussions with your application partner will help prevent future conflicts. Be sure to establish firm goals for any proof of concept project, and make sure the vendor understands what is required.

Choose your partners wisely

In any virtualised integrated channel playout project, you are entering into a long-term relationship with a single supplier. Previous business models would have involved relationships with multiple vendors: a video server vendor, a graphics supplier, a switching and routing specialist, an encoding and mux supplier, and so on. However, the ‘function collapse’ that integrated channel technology delivers means that this multi-vendor integration is often hidden from the end user. More often than not, integrated playout providers build third-party software into their solution for specific functionality such as subtitling, graphics and audio processing. Be sure your vendor is one you are confident in, and be aware that your integrated channel vendor is almost certainly reliant on components from other suppliers, and may be dependent on those companies to fix any issues. Does your chosen integrated channel provider have an architecture modular enough to replace components if a third-party provider cannot fix a specific issue?

Not all hypervisors are created equal

You will need to choose which virtualised platform you will use; there are many options for both private and public clouds. Once you have selected the platform, you will need to understand how you are going to measure the performance of your virtual environment. The playout software may perform differently with different hypervisors (the software that creates, runs and monitors virtual machines). What metrics will you use? How will you know you can reproduce what you saw in the POC when you deploy the architecture for real? Does your test environment scale?

How close are you to the edge?

It’s vital to note that you won’t just need to measure the behaviour of the playout software application; you also need to monitor the behaviour of the entire infrastructure. Simply verifying that video and audio are playing does not give you the full picture.

Check that the playout software vendor will give you access to the raw data that shows how the application is really performing on the virtualised platform. A multitude of parameters can be measured, but which ones matter for overall system performance? For example, the processor sleep/wake times under certain hypervisors may not be good enough for real-time playout. Latencies and behaviour will vary depending on the hypervisor you test.
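
To make that concrete, the short Python sketch below (our own illustration, not a vendor tool) measures how late a process wakes from a 40 ms sleep, i.e. one frame period at 25 fps. Running the same script on bare metal and on each candidate hypervisor gives you one directly comparable metric; the interval, sample count and percentile thresholds are all illustrative assumptions.

    # Minimal sketch: measure timer wake-up jitter over a 40 ms (25 fps) frame
    # interval. Run it on bare metal and on each candidate hypervisor and
    # compare the results. All figures are illustrative, not vendor benchmarks.
    import statistics
    import time

    FRAME_INTERVAL = 0.040   # seconds per frame at 25 fps
    SAMPLES = 2500           # roughly 100 seconds of measurement

    overshoots_ms = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        time.sleep(FRAME_INTERVAL)
        woke = time.perf_counter()
        # How late did we wake relative to the requested interval?
        overshoots_ms.append((woke - start - FRAME_INTERVAL) * 1000.0)

    overshoots_ms.sort()
    print(f"median overshoot         : {statistics.median(overshoots_ms):.3f} ms")
    print(f"99th percentile overshoot: {overshoots_ms[int(0.99 * SAMPLES)]:.3f} ms")
    print(f"worst-case overshoot     : {overshoots_ms[-1]:.3f} ms")

If the worst-case numbers jump by an order of magnitude when you move from bare metal to a loaded hypervisor, that is exactly the kind of raw data you want your vendor to help you interpret.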

Over-provisioning

It is tempting to gear your resources to cater for any worst-case scenario. Certainly with the very dynamic schedules that can exist in playout, we need to be sure that there are sufficient resources in the virtual machine to handle the most complex playout scenarios. However, you should also check the effect that over-provisioning has on the performance of your hypervisors. In certain cases, latencies will increase dramatically, rendering the architecture unsuitable for playout. Over-provisioning can have an adverse effect!

Engineers often over-provisioned traditional broadcast systems to ensure “they always had enough...” With a virtualised solution, additional capacity can be spun up and down as the load demands.

One of the reasons legacy broadcast systems were so expensive is that there was so much over-provisioning throughout the system – in order to leverage some of the advantages of a virtualised platform, it is important to get the provisioning correct.
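
If your hosts run KVM, one simple sanity check is to compare the vCPUs handed out to running VMs with the physical CPUs underneath them. The sketch below assumes the libvirt Python bindings and a local qemu connection, and the 1.0 ratio threshold is purely illustrative; the level of overcommit you can tolerate is something the POC should establish.

    # Minimal sketch, assuming a KVM host and the libvirt Python bindings:
    # compare vCPUs allocated to running VMs against physical CPUs on the host,
    # to spot over-provisioning before it shows up as playout latency.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")

    host_cpus = conn.getInfo()[2]      # active physical CPUs on the host
    vcpus_allocated = sum(
        dom.info()[3]                  # vCPU count for each running domain
        for dom in conn.listAllDomains()
        if dom.isActive()
    )

    ratio = vcpus_allocated / host_cpus
    print(f"{vcpus_allocated} vCPUs on {host_cpus} physical CPUs (ratio {ratio:.2f})")
    if ratio > 1.0:
        print("CPU is overcommitted - verify real-time playout latencies carefully")

    conn.close()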

Buying a bare metal box with a certain amount of RAM and a number of CPU cores will give you reasonably predictable performance under given circumstances, but when you put your application onto a hypervisor, you are adding a whole new layer of software between you and the hardware, a layer with a potentially huge number of ‘tweakables’. Expect the unexpected! And don’t forget to check that your chosen hypervisor supports the disk drives and storage you want to use with your COTS hardware. If you need to change your hypervisor, will your hardware still be supported?

Bear in mind that there is a difference between over-provisioning for safety and for future expansion. It’s better to add the capacity when you need to scale, and don’t forget that costs will be coming down and performance improving over time.

How much do you trust your network?

Don’t underestimate how critical your network infrastructure is. The transport streams your playout infrastructure generates will pass through the enterprise network switches, so they can saturate network bandwidth and affect on-air performance, even though your playout software application may be running on a completely separate network.
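
A back-of-envelope calculation is often enough to start the conversation with your network team. The figures below are entirely illustrative; substitute your own channel count, bitrates and uplink capacity.

    # Rough check, with illustrative numbers only: will the aggregate
    # transport-stream load fit within a switch uplink while leaving
    # headroom for the enterprise traffic sharing the same path?
    CHANNELS = 24
    TS_BITRATE_MBPS = 12.0          # assumed bitrate per channel output
    REDUNDANT_LEGS = 2              # main + backup output per channel
    UPLINK_CAPACITY_MBPS = 10_000   # 10 GbE uplink
    SAFETY_HEADROOM = 0.5           # keep utilisation below 50% for bursts

    ts_load = CHANNELS * TS_BITRATE_MBPS * REDUNDANT_LEGS
    utilisation = ts_load / UPLINK_CAPACITY_MBPS
    print(f"TS load: {ts_load:.0f} Mb/s, uplink utilisation {utilisation:.1%}")
    if utilisation > SAFETY_HEADROOM:
        print("Consider a dedicated media network or a larger uplink")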

Have you properly benchmarked?

For a virtualisation POC to be successful, it’s important for you to characterise your channels. Tell the vendor everything that the channels need, so that the correct functionality can be incorporated and an understanding can be gained of how that will affect the host environment. Decide whether you are going to use GPU or CPU-based encoding. Customers generally find that CPU encoding is more versatile – but requires more expensive CPUs. True 3D graphics almost always require GPUs. Establish what file formats, audio tracks, graphics and so on need to be built in. What bitrate will you use to deliver the final transport streams? Will you use MPEG2 or H.264? What is the end destination for the transport streams? What is the receiving equipment?

All of these factors and more will have an effect on your host environment, so share the information at the outset.
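
A lightweight way to do that is a structured channel profile that the application vendor, the infrastructure team and the POC test plan can all work from. The fields and values below are invented for illustration, not a vendor schema.

    # Illustrative channel profile; field names and values are examples only.
    channel_profile = {
        "name": "Sports-1 HD",
        "video": {"codec": "H.264", "resolution": "1080i50"},
        "encoding": "CPU",                 # or "GPU", e.g. for true 3D graphics
        "audio_tracks": 4,                 # stereo pairs, surround, commentary
        "graphics": ["logo", "lower thirds", "full-screen promos"],
        "subtitles": "DVB + OP-47",
        "source_formats": ["XDCAM HD 50", "AVC-Intra 100"],
        "output": {
            "container": "MPEG-2 TS over UDP/IP",
            "total_bitrate_mbps": 12.0,
            "destination": "uplink provider mux",
            "receiver": "professional IRD at the headend",
        },
    }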

Signal and network monitoring is always key to ensuring QoE in delivery systems. When considering virtualisation, think about who will handle that task, and where.

Monitoring and failure scenarios

The range of monitoring options available in an SDI environment usually far exceeds what is available in the IP domain. Bear in mind that diagnostics can be harder for IP, so you’ll need to investigate what tools are at your disposal, and whether you have staff who can interpret the results.

Operational monitoring is also critically important, especially in public cloud scenarios. As well as monitoring latencies and deciding how and where your operators will monitor the playout, you need to account for any control latencies that will be added. Your playout automation may need to send control commands that take the operator’s monitoring latency into account.
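
As a simple worked example with assumed figures: if the operator’s confidence monitoring runs 1.8 seconds behind the true output and the control path back to the playout VM adds another 0.3 seconds, any manual intervention effectively lands about 2.1 seconds after the picture the operator was reacting to.

    # Worked example with assumed figures, not measured values: how late does a
    # manual command land relative to the picture the operator reacted to?
    MONITORING_LATENCY_S = 1.8   # return feed from the cloud to the operator
    COMMAND_LATENCY_S = 0.3      # control path back up to the playout VM

    def command_offset_s() -> float:
        """Delay between the picture an operator reacts to and the moment a
        command issued in response actually reaches the playout engine."""
        return MONITORING_LATENCY_S + COMMAND_LATENCY_S

    print(f"Manual commands land {command_offset_s():.1f} s late relative to "
          "the picture the operator reacted to")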

Failure scenarios and failover contingencies need to be considered. Who or what will be switching IP streams? If your VM fails, you may lose the transport stream altogether. Can your downstream distribution cope with no stream at all, not even a null stream? Where are your IP streams going? Can you test them?
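
If the streams leave the facility as UDP multicast, even a very simple probe can tell you when a stream has vanished entirely. The sketch below makes that assumption, and the group address, port and timeout are placeholders for your own values.

    # Minimal sketch, assuming the channel is delivered as UDP multicast:
    # raise an alarm if no transport-stream packets arrive for a while.
    import socket
    import struct

    GROUP, PORT = "239.1.1.1", 5000   # placeholder multicast group and port
    TIMEOUT_S = 0.5                   # silence longer than this counts as loss

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    sock.settimeout(TIMEOUT_S)

    while True:
        try:
            sock.recv(2048)           # we only care that packets keep arriving
        except socket.timeout:
            print(f"No TS packets for {TIMEOUT_S}s - trigger the downstream "
                  "failover or alarm here")
            break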

Beware the noisy neighbour

Multiple virtual machines on the same physical host can impact each other, to the detriment of your playout. You may find the playout application is sharing a hypervisor with your MAM system or an email server in the Data Centre, and these neighbours may not be predictably busy. Furthermore, they can ‘move in’ and ‘move out’ at unexpected times: it is entirely possible that someone could launch another four VMs on your box without warning. You will need to closely manage permissions for deploying channels.

To avoid the software application being affected by something else running on the physical hardware, the onus is on you to put monitoring in place, and potentially to ensure that playout resources are isolated for real-time playback.
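
On a Linux guest, one place that contention shows up is the CPU ‘steal’ counter, which records time the hypervisor gave this VM’s vCPUs to someone else. The sketch below reads it from /proc/stat; the sampling period and the 2% alert threshold are illustrative assumptions, not recommendations.

    # Minimal sketch for a Linux guest: watch CPU steal time in /proc/stat.
    # A rising steal percentage is a classic noisy-neighbour symptom.
    import time

    def cpu_times():
        with open("/proc/stat") as f:
            values = [int(v) for v in f.readline().split()[1:]]
        steal = values[7] if len(values) > 7 else 0   # 8th field is steal
        return steal, sum(values)

    prev_steal, prev_total = cpu_times()
    while True:
        time.sleep(5)
        steal, total = cpu_times()
        steal_pct = 100.0 * (steal - prev_steal) / max(total - prev_total, 1)
        print(f"steal time over last 5 s: {steal_pct:.2f}%")
        if steal_pct > 2.0:
            print("Possible noisy neighbour - correlate with playout alarms")
        prev_steal, prev_total = steal, total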

Do you really need to virtualise? What is the benefit?

Virtualisation does come with an overhead. You need to be certain of what it is you are trying to achieve. Ultimately a virtualised playout infrastructure will deliver flexibility and agility, but the path to those benefits may not be short, straightforward or inexpensive.

The level of skill and knowledge required in-house should not be underestimated. Taking the more common non-virtualised route gives you access to the vendor’s expertise: they will provide proven hardware, together with benchmarking statistics, and a level of simplicity, predictability and performance that is difficult to achieve in your own virtualised environment.

However, a virtualised playout environment does enable you to isolate the application from the generic technology in the Data Centre. Drives and CPUs can, in theory, be upgraded by non-broadcast specialists. Service personnel will have fewer ‘boxes’ to maintain, as the same technology will most likely be in use across multiple platforms. Virtualisation makes it simple to clone and redeploy applications within a common environment across the whole facility, making expanding a system or migrating to new-generation hardware a much simpler exercise.

For a broadcaster, it’s not necessarily about being cheaper or getting more out of the resources; it’s about flexibility, portability and ease of maintenance.
