Building Software Defined Infrastructure: Virtualization Vs Microservices

How virtualization and microservices differ, and the workflows where each would be used or avoided in terms of reliability, flexibility and security.

In the previous article in this series, we introduced the idea of how hardware compute, storage, and networking resources can be abstracted away from workflow functionality to provide Software Defined Infrastructure (SDI). In this article, we consider how and when microservices are used, how they excel when compared to virtualization, and how they contribute to Software Defined Infrastructure architectures.

Virtualization and microservices are quite different. Virtualization is software running on a host server that provides multiple operating system environments, giving the impression that several servers are running on one physical machine. Each OS environment operates independently and is logically separated from the others, providing the same level of security as stand-alone servers. Multiple OS environments can also be operated over clusters of physical host servers to provide an element of resilience: if a physical host within the cluster fails, its VMs can be moved to other physical machines.

Microservices consist of small independent programs that communicate with each other using APIs. The microservice host software architecture provides the platform to deploy the applications, delivers the administration structure that allows each of the microservice applications to communicate, and facilitates data-plane connectivity for each workflow. Each microservice provides a specific function, such as transcoding, and can operate independently of all other microservice applications and instances in the infrastructure.
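As a concrete illustration of the idea, the sketch below shows a single-function service exposing its capability through a small HTTP API, using only Python's standard library. The endpoints, port number and JSON fields are illustrative assumptions, not any specific vendor's interface.

```python
# A minimal sketch of a single-function microservice. The "/status" and
# "/transcode" endpoints, port and JSON fields are illustrative assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class TranscodeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            # Report the service's single responsibility so a management
            # layer can discover what this instance does.
            body = json.dumps({"service": "transcoder", "state": "idle"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def do_POST(self):
        if self.path == "/transcode":
            length = int(self.headers.get("Content-Length", 0))
            job = json.loads(self.rfile.read(length) or b"{}")
            # In a real service the job would be queued and processed; here we
            # simply acknowledge the requested parameters.
            body = json.dumps({"accepted": True, "job": job}).encode()
            self.send_response(202)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TranscodeHandler).serve_forever()
```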

Although it may be unfair to generalize, it’s not unreasonable to say that if a software application can only operate within a virtualized environment, then it was probably not designed from the ground up to take full advantage of cloud computing and software defined infrastructure. This may be seen as a broad-brush observation of broadcast applications running in cloud infrastructures, but it is what is meant by the term “lift-and-shift”: a software application that runs happily on a stand-alone server but now needs to operate in a datacenter. Creating a VM is probably the best way of facilitating lift-and-shift software, and to fully appreciate why, it is worth digging a little deeper into how microservices operate.

Microservice Innovation

Transitioning to IP is more than just converting the signal flows from baseband SDI/AES to ST2022 or ST2110. When done properly, it encapsulates a whole new methodology of working that delivers high levels of resilience, flexibility and scalability, but to achieve this, software architectures must be designed from the ground up using infrastructures such as microservices. It is certainly possible to operate workflows using VMs in cloud and datacenter operations, but to fully embrace dynamic systems that scale to meet viewer demand, or re-route entire parts of the infrastructure should part of the system fail, VM lift-and-shift solutions fall short of the ideal.

Microservices embrace the Unix way of thinking, in which the tools that provide many of the core system functions, such as file listing and search, are built as small stand-alone programs that can be pipelined to build concatenated functionality as required. The whole premise of this type of operation is that programs are designed to be small and specialize in a specific operation. For example, the “ls” command, used from the command line prompt, lists files in directories. If you want to search for specific files within the directory, you can send the output of “ls” to the “grep” command using the Unix piping method. Although it’s possible to add a search function to “ls” similar to “grep”, the design philosophy of Unix, and Unix-type operating systems, means that each function focuses on its core use-case. This way it doesn’t become bloated or need updates for seemingly unrelated features that have crept in over time.
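The same idea can be expressed in a few lines of code. The sketch below mimics the “ls | grep” pipeline with two deliberately small functions, one that only lists and one that only filters; the directory path and pattern are illustrative.

```python
# A small illustration of the "ls | grep" idea: each function does one job,
# and the pipeline chains them together.
import os

def list_files(path):
    """Equivalent of 'ls': just list directory entries, nothing more."""
    return os.listdir(path)

def match(entries, pattern):
    """Equivalent of 'grep': just filter, knowing nothing about directories."""
    return [e for e in entries if pattern in e]

if __name__ == "__main__":
    # Pipe the output of one small tool into the next, rather than building
    # a single bloated tool that both lists and searches.
    print(match(list_files("."), ".txt"))
```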

Focusing On Expertise

Microservices operate in a similar manner to the Unix philosophy; that is, software applications are designed to be relatively small and operationally focused on a specific function. For example, a transcoder may only convert an ST2110-20 uncompressed video stream to a format such as H.265. If the video was originally encapsulated in an ST2110-10 transport stream, then a separate microservice would be used to extract the video, audio, and metadata streams so that they could be sent to the transcoding microservice. Therefore, two microservice applications would be required: one for the essence extraction, and another for the transcoding.
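To make the division of labour concrete, the sketch below shows a workflow step calling two hypothetical services in turn over their APIs: first the extraction service, then the transcoder. The hostnames, endpoint paths and JSON fields are assumptions for illustration only.

```python
# A sketch of chaining two single-purpose microservices: one extracts the
# essences from the incoming stream, the next transcodes the video essence.
# The hostnames, endpoints and JSON fields are illustrative assumptions.
import json
import urllib.request

def call(url, payload):
    """POST a JSON job to a microservice and return its JSON reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Step 1: ask the extraction service to split video, audio and metadata.
essences = call("http://extractor.example/extract",
                {"source": "st2110://studio-a/camera-1"})

# Step 2: pass only the video essence on to the transcoding service.
result = call("http://transcoder.example/transcode",
              {"video": essences["video"], "codec": "h265"})
print(result)
```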

The concept is completely scalable as it is not limited by a fixed resource. If more transcoders are required, more instances of the application can be instantiated quickly across multiple servers. But the genius of this methodology is that vendors and service providers can design solutions that are specific to their expertise and skill set. It further opens the possibility for broadcasters to mix and match microservices to build complex and custom workflows.
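A simple sketch of that demand-based scaling decision is shown below: compare the jobs waiting with the capacity of the running instances and start more if needed. The capacity figure and the start_instance() placeholder are assumptions for illustration.

```python
# A sketch of demand-based scaling: compare the jobs waiting with the
# capacity of the running instances and decide how many more to start.
import math

JOBS_PER_INSTANCE = 4   # assumed capacity of one transcoder instance

def instances_needed(queued_jobs, running_instances):
    required = math.ceil(queued_jobs / JOBS_PER_INSTANCE)
    return max(0, required - running_instances)

def start_instance(n):
    # Placeholder: in practice the orchestrator would launch new containers.
    print(f"starting {n} additional transcoder instance(s)")

extra = instances_needed(queued_jobs=22, running_instances=3)
if extra:
    start_instance(extra)
```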

Figure 1 – VMs allocate an OS per instance whereas microservices use containers with a common operating system. This makes microservices much more efficient as the application doesn’t have the overhead of initialising the operating system each time it is initiated but still has a common management control and interface system via the container engine.


Coordinating Workflow Design

Microservices may operate in isolation, but they certainly form part of a much greater workflow solution that needs to be coordinated, and this is where Software Defined Infrastructure excels.

As broadcast infrastructures and workflows increase in complexity, it’s becoming virtually impossible for one engineer to understand the intricacies of every part of the infrastructure. Quite often, this level of detail is not required on a day-to-day basis, and so automation helps deal with this complexity by simplifying the repetitive detail that has little day-to-day relevance to the broadcast engineer.

Using drag-and-drop GUI techniques, the engineer can pull in different functions and connect them to each other. Adding monitoring functions to relevant nodes further enhances the automation and can provide alarms when something goes wrong, such as a video feed disappearing.
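A monitoring node can be as simple as a timeout check on the incoming feed. The sketch below raises an alarm if no packets have been seen for a couple of seconds; the threshold and feed name are illustrative assumptions.

```python
# A sketch of a monitoring function attached to a workflow node: if the feed
# has gone quiet for longer than the threshold, an alarm message is returned.
import time

ALARM_TIMEOUT_S = 2.0   # assumed silence threshold before raising an alarm

def monitor(feed_name, last_packet_time, now=None):
    """Return an alarm message if the feed has disappeared, otherwise None."""
    now = now if now is not None else time.time()
    if now - last_packet_time > ALARM_TIMEOUT_S:
        return f"ALARM: feed '{feed_name}' has disappeared"
    return None

# Example: the last packet was seen 5 seconds ago, so the alarm fires.
print(monitor("studio-a/camera-1", time.time() - 5.0))
```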

All the configuration capabilities of microservices revolve around the concept of APIs, and this is one of their greatest strengths. A significant challenge with broadcast television has been our near obsession with maintaining backwards compatibility. There is good reason for this: viewers using existing televisions needed to watch new transmissions when new formats were released, such as the move from black-and-white to color, or from the 4:3 aspect ratio to 16:9. This meant new specifications had to be defined before the new formats could be used, which in turn slowed the advancement and adoption of new formats.

APIs help remove some of the need to constantly create new specifications as they are largely self-defining and self-documenting. As long as a vendor provides an adequate API, a management layer of software can easily communicate with their services and provide the necessary configuration information. Parameters such as a video stream’s source and destination can be easily and dynamically defined, so that if the source or destination changes, a simple update through the API is all that is required.
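The sketch below shows what such an update might look like: a single call to a service’s configuration API changes the stream’s source and destination while everything else keeps running. The URL, endpoint path and field names are assumptions for illustration.

```python
# A sketch of reconfiguring a running microservice through its API: the
# source and destination of a stream are just parameters that can be updated
# on the fly. The URL and field names are illustrative assumptions.
import json
import urllib.request

def update_route(service_url, source, destination):
    """Send the new source/destination to the service's configuration API."""
    payload = json.dumps({"source": source, "destination": destination}).encode()
    req = urllib.request.Request(
        service_url + "/config/route",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# If the source moves, a single API update is all that is required.
update_route("http://transcoder.example",
             source="st2110://studio-b/camera-3",
             destination="rtp://playout-1:5004")
```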

Orchestrating Software

Being able to dynamically allocate and configure software applications is a major bonus for broadcasters as systems can be quickly designed and configured, but to truly excel at this, orchestration software is also employed to act as a higher-order overview of the entire infrastructure. This need for orchestration is also evident when we look at where a microservice application physically resides, because the simple answer is that it can be anywhere: on- or off-prem, in a public or private datacenter, or in the cloud.

The orchestration software must not only keep track of how the microservices are connected and configured within the workflow, but must also know where each one physically resides.
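The sketch below shows the bare minimum of that bookkeeping: a registry of service instances, their physical locations, and the links between them. The instance names, locations and endpoints are illustrative; a real orchestrator would discover and maintain this information dynamically.

```python
# A minimal sketch of orchestrator bookkeeping: which services exist, where
# they physically run, and how they are connected in the workflow.
from dataclasses import dataclass, field

@dataclass
class ServiceInstance:
    name: str
    function: str        # e.g. "extractor", "transcoder"
    location: str        # e.g. "on-prem-rack-4", "cloud-eu-west-1"
    endpoint: str        # where its API can be reached

@dataclass
class Workflow:
    instances: dict = field(default_factory=dict)
    links: list = field(default_factory=list)   # (upstream, downstream) pairs

    def register(self, inst: ServiceInstance):
        self.instances[inst.name] = inst

    def connect(self, upstream: str, downstream: str):
        self.links.append((upstream, downstream))

    def placement(self):
        """Report where every service in the workflow physically resides."""
        return {name: inst.location for name, inst in self.instances.items()}

wf = Workflow()
wf.register(ServiceInstance("extract-1", "extractor", "on-prem-rack-4",
                            "http://10.0.4.21:8080"))
wf.register(ServiceInstance("transcode-1", "transcoder", "cloud-eu-west-1",
                            "http://transcoder.example:8080"))
wf.connect("extract-1", "transcode-1")
print(wf.placement())
```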

There is then a further challenge of costing. Somehow the microservice must be paid for, and this can be billed by the hour, by the volume of data processed, or by whatever model the vendor has designed.
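Two of the simplest billing models are sketched below, by runtime hours and by data volume; the rates are purely illustrative.

```python
# A sketch of usage-based billing for a microservice, either by runtime hours
# or by data volume. The rates are purely illustrative assumptions.
def bill_by_hour(hours_run, rate_per_hour=1.50):
    return round(hours_run * rate_per_hour, 2)

def bill_by_volume(gigabytes_processed, rate_per_gb=0.02):
    return round(gigabytes_processed * rate_per_gb, 2)

print(bill_by_hour(36.5))        # e.g. a transcoder that ran for 36.5 hours
print(bill_by_volume(12_000))    # e.g. 12 TB of video processed
```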

One of the key aspects of configuring the software is to understand the signal flow, and using the GUI drag-and-drop does make this straightforward. But what isn’t immediately obvious is how the video and audio data is moved from one microservice application to the next. This is a complex subject, but it has largely been solved by the technology adopted in the high-speed datacenters used for AI compute clusters and networks. How this operates is covered in later articles, but to understand how we shift large volumes of data between services, we need to stop thinking synchronously and start looking at video, audio, and the related metadata through an asynchronous lens.
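As a purely conceptual taste of that asynchronous lens, the sketch below hands frames from one stage to the next through a queue: the producer pushes frames as they arrive and the consumer pulls them whenever it is ready, with no shared video clock. The stage names and timings are illustrative; the real transport mechanisms are covered in later articles.

```python
# A conceptual sketch of asynchronous hand-off between services: frames are
# queued as they arrive and consumed when the next stage is ready, rather
# than in lock-step with a synchronous video clock.
import asyncio

async def producer(queue):
    for frame_number in range(5):
        await queue.put(f"frame-{frame_number}")   # e.g. essence extractor output
        await asyncio.sleep(0.02)                  # frames arrive at their own rate

async def consumer(queue):
    for _ in range(5):
        frame = await queue.get()                  # e.g. transcoder pulls when ready
        print("processing", frame)

async def main():
    queue = asyncio.Queue()
    await asyncio.gather(producer(queue), consumer(queue))

asyncio.run(main())
```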

Microservice Benefits

Just from this brief introduction we can see that microservice architectures lend themselves perfectly to highly dynamic, scalable and configurable workflows. Adding signal monitoring and alarm services further enhances reliability, and resilience improves again when we start integrating on- and off-prem resources across public and private datacenters.

VMs simply do not have this flexibility, scalability or resilience, and as we progress through the series we will explore how these microservice-based software defined infrastructures and their related networks deliver highly configurable and resilient systems that meet the very specific needs of today’s broadcasters.

