Software IP Enabling Storytelling - Part 1

Television may be a niche industry, but it remains one of the most powerful storytelling mediums in existence. Whether reporting news events, delivering educational seminars or reviewing products, television still outperforms all other mediums in its ability to communicate with mass audiences.

One of the major impacts of software infrastructure for broadcasters is its flexibility. But when we speak of flexibility it’s often easy to get bogged down in the technical detail of virtualization, networks and monitoring. Flexibility in terms of ease of use is much more relevant for production teams.

Virtualized systems can now meet the demanding data throughput requirements of video and audio, which in turn makes high quality broadcast systems available to a much broader group of production teams. Virtualization has also lowered the barrier to entry, making the cost of accessing this technology lower than it has ever been.

Keeping systems simple is key for anybody working in a broadcast environment and this is even more critical for production teams. Software provides the opportunity to remove the operational complexity and even abstract core components, so production teams do not need to be concerned with how to configure complex workflows. Instead, they can spin up predefined configurations and adjust them accordingly, as the sketch below illustrates.
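
To make that concrete, a predefined configuration can be thought of as a template that an operator instantiates and tweaks, without ever touching the infrastructure beneath it. The sketch below is purely illustrative Python: the template fields and the spin_up() helper are hypothetical, not drawn from any specific product.

```python
# Hypothetical sketch: spinning up a predefined production workflow.
# The template fields and spin_up() helper are illustrative only.
import copy

INTERVIEW_TEMPLATE = {
    "name": "two-camera-interview",
    "video_format": "1080p50",
    "inputs": ["cam-1", "cam-2"],
    "mixer_channels": 4,
    "record": True,
}

def spin_up(template, **overrides):
    """Clone a predefined configuration and apply the operator's tweaks."""
    config = copy.deepcopy(template)
    config.update(overrides)
    # A real system would now hand this to an orchestrator that
    # provisions the virtualized resources behind the scenes.
    return config

# The operator adjusts only what differs from the preset.
show = spin_up(INTERVIEW_TEMPLATE, video_format="2160p50", mixer_channels=8)
print(show["name"], show["video_format"])
```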

Ease of software configuration moves the focus to empowering production teams to concentrate on the story they want to tell and not have to think too much about the technology they’re using to do it. After all, when writing a script, how many producers think of the ASCII codes and keyboard serial interface to the operating system?

The same should be true of operational broadcast infrastructures. For production teams, the focus should be on the functionality and the problem the system solves, not necessarily how it does it.

This level of operational flexibility requires top-down thinking as opposed to the bottom-up analysis engineers and technologists often consider first. We shouldn’t be thinking of how we can fit the production teams around the technology, but instead how the technology can best meet the needs of the users.

The top-down thinking approach is a variation of the agile methodology that is now prevalent in modern technology circles. Agile encapsulates the whole concept of change. In fact, agile thrives on change. And to meet the needs of change we must develop flexible systems that can adapt quickly.

The combination of facilitating change and making systems easy to use is manifested in the idea of flexibility. And with software we can deliver this flexibility and remove the deep technical operation to make broadcast systems easy to operate and access.

From an engineering and technologist’s perspective, and to make the most of software-based infrastructures, we really need to know what’s going on under the hood of the facility. Spinning up software apps and deleting them when finished may sound all well and good, but what does this really mean in terms of making a system work effectively? How does software improve flexibility and what does this mean for production teams?

Television is at a turning point in its evolution. We've moved from systems totally dependent on highly specialized custom hardware, which made infrastructures static and rigid, to systems based on COTS hardware and flexible software that deliver dynamic and scalable solutions. Not just for the engineers, but for the users too.

Although dynamic and scalable systems will make broadcast infrastructures more adaptable for the future, the real benefits are the operational flexibility and ease of use they give production teams, allowing them to focus on making programs and telling their story.

To truly understand the benefits of IP, software and cloud, it helps to take a higher-level view of the problem we’re trying to solve, the challenges we face, and the people this new technology is helping.

Broadcasting has traditionally been a technology-led business. The line speeds, frequencies and data transfer rates needed to make television work have pushed the limits of hardware capability for a good eighty years. But more recently, advances in industries such as telecoms and finance have driven a massive progression in hardware capability, to the point where real-time processing in software has become not only possible, but a necessity.

With every technology advance we have to look at the benefits the end viewer has gained. SDI helped improve the quality of pictures, HDR and WCG delivered vibrant and colorful images, and surround sound greatly improved the immersive experience. The question is, what are software and COTS really doing for the viewer? What benefit do they gain? In other words, why are we moving to IP and software processing?

Lengthy Hardware Development
Hardware development cycles can easily take six months for relatively straightforward designs and years for complex high-speed signal processing. Analogue video pushed technology to its limits, requiring custom designs seen in no other industry. Every device, from cameras to monitors and video recorders, required specialist design taking years to perfect. Digital improved development times slightly, but even here progress could be painfully slow. The bottleneck was clearly the hardware design.

Television is a relatively small industry compared to the multi-trillion-dollar telecommunications, finance and medical sectors. If finance had needed to record 270Mbit/s data streams back in the 1980s, then D1 video recorders would have turned out very differently. These industries have attracted massive R&D investment to meet the growing demands of their clients and users. The great news for broadcasters is that we can ride on the back of this innovation and use the IP infrastructures and components these industries have provided to our advantage.
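
As an aside, that 270Mbit/s figure falls straight out of the ITU-R BT.601 studio sampling structure carried by the standard-definition serial digital interface: 13.5 million luma samples per second plus two 6.75MHz chroma channels gives 27 million words per second, at 10 bits per word. A quick sanity check:

```python
# Where the 270 Mbit/s standard-definition rate comes from
# (ITU-R BT.601 4:2:2 sampling, 10-bit words).
luma_rate = 13.5e6           # Y samples per second
chroma_rate = 2 * 6.75e6     # Cb + Cr samples per second
bits_per_word = 10

total_bits = (luma_rate + chroma_rate) * bits_per_word
print(f"{total_bits / 1e6:.0f} Mbit/s")  # -> 270 Mbit/s
```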

Optimizing Systems
Software designs often provide solutions in much shorter timescales than is achievable with hardware. But we must not forget that many of the broadcast products designed over the past thirty years have had software at the heart of their application. Custom hardware often demanded custom operating systems, with application code tuned tightly to both. As embedded versions of Windows and Linux became available, vendors had the opportunity to write more generic code.

One of the disadvantages of the custom style of development is that the code is difficult to design and even more difficult to maintain. A deep understanding of the underlying hardware is necessary to fine-tune the application and achieve the best results within the limitations of the available hardware. This further constrains hardware redesign, as not only do development engineers have to work through massive architectural changes, but the operating system has to be adjusted and the application code tweaked yet again.

In the ideal system we want to abstract the software away from the hardware, so that a hardware change doesn't force us to rewrite large parts of the application code or push the application software engineers through new learning cycles.

Benefitting From Industry
COTS servers and modern operating systems are facilitating this method of working. Although the hardware is designed and built by many different vendors, generic interfaces are provided to allow the operating system to communicate with the input/output devices. For example, Microsoft Windows provides the WDF (Windows Driver Framework). This encapsulates a software interface to the NDIS (Network Driver Interface Specification), which in turn communicates with the low-level registers and memory within the media access layer of the NIC (network interface card).

Through the WDF developers have a generic interface and comprehensive library that allows them to send and receive IP traffic. Although the transfer of IP packets may sound relatively easy, and at a packet level it is, the devil is always in the detail.
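
Crucially, most application developers never touch WDF or NDIS directly; they work at the socket layer above it. As a minimal sketch of just how much the stack hides (standard-library Python rather than the kernel-mode C interfaces described above, with an arbitrary loopback port), sending a UDP datagram takes a handful of lines:

```python
# Minimal sketch: everything below the socket API -- driver
# framework, NDIS, NIC registers, DMA, interrupts -- is the
# operating system's problem, not the application's.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP socket
sock.sendto(b"hello from the application layer", ("127.0.0.1", 5004))
sock.close()
```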

A NIC in itself generally doesn't implement protocols such as UDP, TCP and RTP. Instead, these are provided further up the stack: UDP and TCP by the operating system, and RTP usually by the application itself.

Leveraging Operating Systems
Protocols vary in complexity. UDP is relatively straightforward as it wraps the data and adds header information such as port numbers to facilitate application sub-addressing within a server. However, if multiple software services sending and receiving UDP datagrams within a server all need access to the NIC, then there must be some form of scheduling, direction and arbitration. Otherwise, the UDP datagrams will get mixed up, the services will not receive the correct data, and chaos will soon ensue. Delivering datagrams reliably from the NIC to the correct service, and vice versa, is the responsibility of the operating system.
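
A short sketch makes the point (standard-library Python with arbitrary, illustrative port numbers): two services in the same server bind to different UDP ports, and the operating system demultiplexes each incoming datagram to the correct socket based purely on its destination port.

```python
# Sketch: the OS demultiplexes UDP datagrams to services by port.
# The port numbers here are arbitrary, illustrative choices.
import socket

video_service = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
video_service.bind(("127.0.0.1", 5004))  # "video" service

audio_service = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
audio_service.bind(("127.0.0.1", 5006))  # "audio" service

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"video frame", ("127.0.0.1", 5004))
sender.sendto(b"audio block", ("127.0.0.1", 5006))

# Each service receives only the datagrams addressed to its port;
# the operating system does the scheduling and arbitration.
print(video_service.recvfrom(2048))  # (b'video frame', ...)
print(audio_service.recvfrom(2048))  # (b'audio block', ...)

for s in (video_service, audio_service, sender):
    s.close()
```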

Fig 1 – Multiple applications negotiate through the operating system to gain access to the NIC (Network Interface Controller). The operating system acts as an arbiter and scheduler to maintain order within the server and keep the IP data packets coherent and error free for the higher-level applications. This results in an abstraction of the lower-level hardware control away from the higher-level service applications.

These challenges are further compounded when we consider TCP, as the protocol is designed to solve two problems: guaranteed delivery and congestion control. Not only must the operating system now guarantee that the IP packets reach the correct software services running within the server, but it must also provide a higher level of control that assures the IP packets are delivered to the software service in order, and with 100% validity. Furthermore, the TCP function within the operating system must comply with the fair-use conventions of the internet to make sure packet flooding doesn't take place and congestion limits are observed, all while trying to optimize data throughput.
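
From the application's point of view, all of that machinery collapses into a simple, ordered byte stream. A minimal sketch (standard-library Python over the loopback interface, arbitrary port) shows how little of TCP's complexity the software service ever sees:

```python
# Sketch: TCP's retransmission, ordering and congestion control are
# invisible to the application, which just sees an ordered byte stream.
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 6000))  # arbitrary illustrative port
srv.listen(1)

def receiver():
    conn, _ = srv.accept()
    data = b""
    while chunk := conn.recv(4096):  # bytes arrive in order, intact
        data += chunk
    print("received:", data)         # -> b'one two three'
    conn.close()

t = threading.Thread(target=receiver)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 6000))
for part in (b"one ", b"two ", b"three"):
    cli.sendall(part)  # the OS handles segmentation, ACKs and retransmits
cli.close()
t.join()
srv.close()
```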

In just this one function alone we can see that the operating system is providing a multitude of complex independent services that must work reliably and efficiently to guarantee the best IP connectivity and data transfer possible. Even the most dedicated vendors would find this an almost impossible task to achieve if they had to write the operating system from the ground up. Luckily, Microsoft Windows and Linux operating systems, to name just a few, have done all this difficult work for us.

Demanding Security
We should also consider the thorny issue of security. It's almost impossible to keep any network-connected device completely isolated from the internet. Whether executing software updates or exchanging files, the modern world requires us to access the internet. But as soon as we connect to the world wide web, we put ourselves at risk.

Operating systems are adept at dealing with security. Enough engineers and security experts work on keeping the principal operating systems secure that we can be confident they are as safe as they reasonably can be. That said, maintaining secure infrastructures is a company-wide responsibility acting over many tiers, but operating systems play a key part in stopping malicious actors from accessing our data.

As just demonstrated, COTS servers and operating systems go a long way to providing reliable and secure infrastructures to work with. Developers do not have to worry whether a TCP stream has been reliably received or sent, or whether the correct services in the server are receiving the correct data. This is further enhanced when we start focusing more on the bigger picture of television.
