Broadcasting from a Public Cloud? – A Look at Security and DR

As OTT consumption continues to rise, broadcasters and content distributors see the cloud as a way to respond to an insatiable demand for new channels. Likewise, the cloud seems to be a good solution to disaster recovery (DR). Cynics, however, are quick to note that the cloud, especially a public cloud, is unproven and unprepared both in terms of security and the QoS required by broadcasters.

In the past, tremendous sums of money have been spent to keep a channel on the air 24/7 because advertising revenues are contingent on viewers. It’s no surprise then, with today’s fickle viewing habits, that a few seconds of dead air over cable, or an endlessly buffering OTT feed, is all it takes for someone to change channels.

For this reason, early cloud proponents have leaned towards a “private cloud” approach, a DIY datacenter where they gain economies of scale to some degree, but maintain the level of control and security they feel is paramount to success. Many vendors are all too happy to sell “private cloud” solutions, because they represent a fairly sizable up-front investment and many vendors don’t have completely virtualized software to offer. That preference, however, should not be mistaken for evidence of an inherent security flaw in public clouds.

Are public clouds secure enough for broadcasters?

If we really look for a moment at the state of security in public clouds such as AWS or Google versus a private “on-premises” datacenter, do we honestly think the public cloud is less secure? Not long ago it would have been unthinkable for a reporter to keep his or her sources' contact information on a public system; yet most broadcast organizations are now quite happy to have moved their email into the cloud via Microsoft 365 or Gmail.

Do you really think you can build a more secure cloud than AWS?

Likewise, critical sales CRM data is routinely stored in a cloud system like Salesforce and no one loses any sleep. What if a salesperson knows that a fiercely competitive vendor uses the same Salesforce cloud for all their data? Zero concern.

Banks have been using cloud infrastructure for some time. Yes, we are a conservative industry, but we need to ask ourselves a question: with the thousands of security personnel employed by public cloud vendors, how can our private cloud be remotely as “secure”?

It is time to accept that public clouds are every bit as secure as, and in many cases much more secure than, anything broadcasters and content distributors can assemble themselves. In terms of media corruption or susceptibility to hacking, the public cloud vendors have invested more resources than most of us can even begin to imagine. Can things go wrong? Of course, but that’s why there is DR.

Making DR work in the cloud

Being a conservative group, broadcasters need guarantees and service level agreements (SLAs) that assure them a cloud-based infrastructure will be there when needed. They also want to save money on something they hope will never be used.

If a content provider already has both primary and backup playout servers on premises, then the DR system will only come online if the building is on fire or something similarly catastrophic happens. The cloud is an expensive place to be running a channel that no one is watching.

It is particularly expensive to run a standby channel that is kept perfectly in sync with master playout, with content continually uploaded. The beauty of virtualization is the ability to provision a channel in minutes, or better yet, bring up a dormant channel in seconds.

In a DR situation, is it really necessary for the initial cutover to be completely in sync with master playout? If not, one can save a lot of money. If the media is already stored in the cloud, ready to play the moment DR playback is required, a replacement channel can be dynamically launched in seconds. Simultaneously, an automated transfer of programs can be started that resynchronizes the content in a matter of minutes.
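As a minimal sketch of what such a cutover might look like, assuming the dormant DR channel runs as a stopped EC2 instance and media sits in S3 buckets (the instance ID, bucket names and prefix below are hypothetical placeholders, not a specific vendor's workflow), the failover could be little more than this:

```python
# Hypothetical DR cutover sketch: start a dormant playout instance, then
# resynchronize any media the primary has received since the last sync.
# Instance ID, bucket names and prefix are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2")
s3 = boto3.client("s3")

DR_PLAYOUT_INSTANCE = "i-0abc123example"   # dormant playout server in the cloud
PRIMARY_BUCKET = "primary-playout-media"   # bucket the master playout writes to
DR_BUCKET = "dr-playout-media"             # bucket the DR channel plays from


def cut_over_to_dr(prefix: str = "media/") -> None:
    # 1. Bring the dormant channel up in seconds.
    ec2.start_instances(InstanceIds=[DR_PLAYOUT_INSTANCE])

    # 2. Kick off the resync: copy anything the DR bucket is missing.
    existing = {
        obj["Key"]
        for page in s3.get_paginator("list_objects_v2").paginate(
            Bucket=DR_BUCKET, Prefix=prefix
        )
        for obj in page.get("Contents", [])
    }
    for page in s3.get_paginator("list_objects_v2").paginate(
        Bucket=PRIMARY_BUCKET, Prefix=prefix
    ):
        for obj in page.get("Contents", []):
            if obj["Key"] not in existing:
                s3.copy(
                    {"Bucket": PRIMARY_BUCKET, "Key": obj["Key"]},
                    DR_BUCKET,
                    obj["Key"],
                )


if __name__ == "__main__":
    cut_over_to_dr()
```

The point is less the specific calls than the shape of the operation: one step to wake the channel, one automated step to close the content gap.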

Data disasters come in many forms. Because they are unpredictable, an off-site DR plan may be the best insurance against losing precious airtime and an audience. Image courtesy: Drbcpt.

The public cloud is a perfect host for DR, because little capital expense is required and the bulk of the operating expense is incurred only when needed. Not keeping the DR channel physically running avoids the more costly egress (download) charges in the public cloud; ingress (upload), by contrast, is either free or very cheap. Like a life raft, a DR channel need only be deployed when required, and costs virtually nothing to maintain.
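To make the life-raft economics concrete, here is a back-of-the-envelope comparison of a dormant channel against one kept running around the clock. The rates are illustrative assumptions for the sake of the arithmetic, not any vendor's actual pricing:

```python
# Back-of-the-envelope DR cost sketch. All rates are illustrative
# assumptions, not real cloud pricing.
STORAGE_PER_GB_MONTH = 0.02   # keeping media parked in the cloud
COMPUTE_PER_HOUR = 1.50       # a playout instance, billed only while it runs
EGRESS_PER_GB = 0.09          # delivery charges, only while the DR feed is live


def monthly_dr_cost(media_gb: float, live_hours: float, egress_gb: float) -> float:
    """Monthly cost of a DR channel that only runs when called upon."""
    return (
        media_gb * STORAGE_PER_GB_MONTH
        + live_hours * COMPUTE_PER_HOUR
        + egress_gb * EGRESS_PER_GB
    )


# A dormant channel: media parked, zero live hours, zero egress.
print(monthly_dr_cost(media_gb=2000, live_hours=0, egress_gb=0))      # 40.0
# The same channel running 24/7 "just in case".
print(monthly_dr_cost(media_gb=2000, live_hours=720, egress_gb=500))  # 1165.0
```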

Not the right solution for every channel

Even so, not all channels are good candidates for cloud deployment. Premium channels and channels with a substantial live element are probably best left to state-of-the-art on-premises servers. There is also the question of where the content physically resides, and the geo-political ramifications and policies that must be considered.

However, OTT pop-up channels for events, new experimental channels, and of course DR, can be a natural fit and carry little risk. Many cable companies are seeking OTT solutions to expand their content to a millennial audience who prefer alternative platforms. A public cloud deployment with a SaaS business model could be just the answer.

From an operations standpoint, whether an organization is still making use of legacy hardware, deploying channels in a box, or looking for cost-effective DR, it’s important that staff can control all channels in the same way. Whether a channel is playing out of the cloud or locally, the user interface, workflow and experience should be the same. A common interface reduces errors and minimizes training for staff. It also makes provisioning a new channel with the same branding elements as simple as copy and paste, or duplicate. This hybrid approach to on-the-ground and in-the-cloud playout is a key factor in reducing costs across the board, as well as future-proofing workflows for whatever playout needs may arise.
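As an illustration of how lightweight that copy-and-paste provisioning can be when a channel is just software and configuration, consider the sketch below. The configuration schema is hypothetical, invented for the example rather than taken from any particular playout product:

```python
# Hypothetical channel-template duplication: clone an existing channel's
# configuration, keep the branding, change only what must differ.
import copy
import json

master_channel = {
    "name": "Main Channel",
    "playout": {"location": "on-premises", "resolution": "1080i50"},
    "branding": {"logo": "station_logo.png", "lower_thirds": "house_style.xml"},
    "schedule_source": "traffic-system-feed-01",
}


def duplicate_channel(template: dict, name: str, location: str) -> dict:
    """Copy-and-paste provisioning: same branding, new name and playout location."""
    channel = copy.deepcopy(template)
    channel["name"] = name
    channel["playout"]["location"] = location
    return channel


dr_channel = duplicate_channel(master_channel, "Main Channel DR", "public-cloud")
print(json.dumps(dr_channel, indent=2))
```

Whether the target is a cloud DR instance or a local server, the operator sees the same template and the same duplication step.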

So, before you decide the only answer to adding disaster recovery, or launching that new OTT service, is to build a room of on-site servers, think public cloud. A software-based infrastructure can be easier to launch and maintain while being less expensive. At the same time, a public cloud solution can offer additional operational versatility while providing a more secure environment than a DIY solution.

Ian Cockett, Technical Director (CTO) of Pebble
