Broadcasters Become More Software Driven to Compete in Multiscreen Era

Broadcasters are reverting to being engineering driven after some years of operating as little more than content houses, but this time the focus is more on software than on infrastructure. That conclusion emerged from the European Broadcasting Union’s (EBU) fourth annual software engineering conference, Devcon, which started in 2013 in recognition that the industry was becoming more IT focused.

This year more than ever before it was clear that broadcasting has become more aligned with enterprise IT and is now helping to shape the evolution of distributed computing. In broadcasting, as in other sectors, there is a growing clamor for IT architectures that support microservices and continuous deployment, where apps and features can evolve constantly and be deployed at short notice. This requires some form of software container insulating applications from the surrounding IT infrastructure, including the operating system, underlying hardware platform and network.

At the EBU’s Devcon it was therefore not surprising to witness a strong focus on the Docker software container platform, along with the now closely related container orchestration platform Kubernetes, which originated at Google. It is true that broadcasters on the whole have remained aloof from the technical debates that have been raging within the software development community over the merits of Docker in particular. That is wise, given it is now clear that this is where the field is heading and that teething problems will be resolved over time. The mood at Devcon was that Docker is coming and will add significant value to applications and services, particularly on the streaming and OTT fronts.
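
To give a flavor of what deploying containerized services at short notice can look like in practice, here is a minimal sketch using the official Kubernetes Python client. The service name, image and replica count are illustrative assumptions, not anything demonstrated at Devcon.

```python
# A minimal sketch using the Kubernetes Python client ("pip install kubernetes").
# The deployment name, image and replica count are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()   # use local kubeconfig credentials
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="playout-api"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes keeps three copies of the container running
        selector=client.V1LabelSelector(match_labels={"app": "playout-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "playout-api"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="playout-api",
                    image="registry.example.com/playout-api:1.4.2",  # hypothetical image
                    ports=[client.V1ContainerPort(container_port=8080)],
                )
            ]),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```

Rolling out a new version later is then little more than changing the image tag and reapplying the deployment, which is what makes frequent, low-risk releases practical.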

In essence Devcon represents the latest chapter in the long story of virtualization and distributed computing that has been running almost half a century since IBM introduced the concept for its mainframes, separating applications from the hardware they run on to introduce a degree of software portability.

Docker emerged in 2013 as an open source project motivated by the desire to take virtualization a step further by avoiding the need to run a guest operating system. The aim was to make virtualization lighter in terms of resources and also to make apps easier to install from the command line, rather like in the mobile world. Before Docker, virtualization was usually associated with a layer of software called the hypervisor running on top of a given server’s host operating system, essentially presenting the hardware as a clean slate for deployment of a guest operating system. This provided the necessary separation between application software and hardware for distributed services to be run on commodity platforms, to reduce costs and make best use of available resources. But it meant the virtual machines built on commodity hardware comprised not just the application software but also an entire guest operating system along with other supporting software tools, often consuming tens of GB of storage per server, while also retarding performance.

The Docker architecture avoids the need for a guest operating system.

Docker reduces storage requirements by replacing the hypervisor with a new layer called the Docker Engine, which runs the containers holding the application software and its dependencies, all sharing the same host operating system kernel. This reduces the need for RAM as well as disk storage, with the net result of speeding up execution.
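
As a rough illustration of how light this is in practice, the sketch below uses the Docker SDK for Python to start a container straight on the host kernel. The image and command are placeholders; any small image would behave similarly.

```python
# A minimal sketch using the Docker SDK for Python ("pip install docker").
# The image and command are placeholders chosen for illustration.
import docker

client = docker.from_env()   # connect to the local Docker Engine

# There is no guest operating system to boot: the process starts almost
# immediately and shares the host operating system kernel.
output = client.containers.run(
    "alpine:3.19",                       # a base image of a few MB, not tens of GB
    ["echo", "hello from a container"],
    remove=True,                         # discard the container when it exits
)
print(output.decode().strip())
```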

At least that is the theory, but inevitably the Docker Engine is itself a complex piece of software and larger than the hypervisor it replaces. That offsets some of the benefits in reduced overhead achieved by cutting out the guest operating system. There have also been complaints that Docker can hardly be called open when it only works on servers running either a major Linux distribution – admittedly open source – or Microsoft Windows.

Security is another bone of contention. Advocates argue that the Docker Engine strengthens security because software containers isolate applications from one another and from the underlying infrastructure, while providing an added layer of protection for the application. But critics point out that Docker presents a new attack surface that needs to be addressed, while amplifying the potential impact of any vulnerabilities in the host operating system kernel. There is no longer the protection provided by the guest operating system and hypervisor, which places more responsibility for security on the host operating system.

But broadcasters should let these issues play out within the Docker community. The bigger picture is that the platform has gained almost universal support from key players such as Google and Microsoft, as well as the wider open source community.

What is true, though, is that realizing the dreams of virtualization and distributed computing is an ongoing challenge which, having already taken 50 years, is not about to be solved at a stroke. The Docker chapter is unlikely to be the last in the saga.

Such sentiments were to an extent in evidence at the EBU’s Devcon, with recognition that Docker is not a panacea for all the pains of software deployment in the microservices era. Broadcasters, like all enterprises, will continue to require highly skilled developers, and Docker does not avoid the need for well-designed software. In fact, microservices in general increase the requirement for software built for scalability, and for skills in software testing, given increased exposure to bugs that might previously have had a more local impact.
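
One way to picture that extra testing burden is an integration test that spins the service up in a throwaway container and probes it from the outside, as in the hedged sketch below; the image name, port and health endpoint are hypothetical placeholders.

```python
# A hedged sketch of container-level integration testing with the Docker SDK
# for Python. The image, port and /health endpoint are hypothetical.
import time
import urllib.request

import docker

client = docker.from_env()
container = client.containers.run(
    "example/metadata-service:latest",   # hypothetical service image
    detach=True,
    ports={"8080/tcp": 8080},            # expose the service on the host
)
try:
    time.sleep(2)  # crude wait for the service to start listening
    with urllib.request.urlopen("http://localhost:8080/health") as resp:
        assert resp.status == 200, "health check failed"
    print("service responded to its health check")
finally:
    container.stop()
    container.remove()
```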

The mood of optimism tempered by the challenges was captured at Devcon by Viktor Farcic, a member of the so-called Docker Captains group acting as technical evangelists for the platform. “It is not just about lighter virtual machines,” said Farcic. “It is a completely new way of thinking about how to ship applications in terms of network, storage and computation.” Farcic led a 'show and tell' workshop demonstrating how to build, test and deploy services with Docker.
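
For readers who were not in the room, the sketch below gives a rough idea of the build-and-run half of that workflow when driven from the Docker SDK for Python; the build path and tag are placeholders rather than material from the workshop.

```python
# A hedged sketch of building and running an image with the Docker SDK for Python.
# The build path and tag are placeholders, not material from the workshop.
import docker

client = docker.from_env()

# Build an image from a Dockerfile in the current directory and stream the log.
image, build_logs = client.images.build(path=".", tag="demo/streaming-service:dev")
for chunk in build_logs:
    if "stream" in chunk:
        print(chunk["stream"], end="")

# Run the freshly built image and print whatever its default command writes.
output = client.containers.run(image.id, remove=True)
print(output.decode())
```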
