Scalable Dynamic Software For Broadcasters: Part 3 - Microservices, Containers And Orchestration
Microservices provide a mechanism to allow broadcast facilities to scale their resource to meet viewer demand. But the true power of microservices is released when we look at containers and orchestration, a management system that empowers scalability, flexibility, and resilience.
Although microservices and containers provide a technical solution for scaling functionality up and down to meet viewer demand, they also facilitate continuous delivery. This is the ability to deliver all changes within the software, including bug fixes, feature releases, and configuration changes, into the production environment in the fastest and safest way possible. And by safest, we mean the least disruption (preferably none) to the viewer.
It's important to remember that in a truly continuous delivery environment, developers do not expect to be restricted in how often they can deploy their code, which could be several times a day. Compared to the monolithic software of the past, where a code release was a major exercise, microservices working within the continuous delivery environment have removed the high stress and risk associated with software releases. Instead of treating a software release as an exception and a risk to the business, it is now considered low risk and part of the daily operation.
Containers
Microservices can operate on their own, but in doing so they are difficult to deploy and manage. In effect, they just become a collection of loosely coupled small programs distributed across the compute resource. Containers group microservice applications together to provide an isolated operating environment that shares the operating system kernel of the host server.
The microservice deployment within the container consists of just the installation instructions, dependent libraries, and code, thus negating the need to deploy a full-blown operating system every time a microservice is started or stopped. This makes containers a lightweight alternative to virtual machines: the significant overhead of starting and stopping a VM is bypassed when using containerized microservices.
Containers also deliver independence, scalability, and lifecycle automation, and can be thought of as a management component within the orchestration system that helps microservices work together and deliver truly scalable and resilient distributed software infrastructures.
Independence allows small teams of developers to work on specific microservices without having to involve large teams. This facilitates agile working so that features can be released more quickly, and testing is more efficient and reliable.
Having the option of operating microservice applications on any platform, whether local or remote, makes a software infrastructure incredibly flexible and gives broadcasters many options, not only for meeting peak demand but also for delivering efficient and cost-effective systems.
Lifecycle automation facilitates continuous delivery pipelines so that individual software components can be added, removed, updated, and maintained as required. This would be almost impossible with a monolithic system as the software functionality cannot be split into individual components.
To recap, microservices provide independent components, so that each component can work without reference to others while communicating through well-defined APIs to maintain consistent control and data exchange. Also, components can be developed and tested individually without having to recompile the whole application, so functions can be built safely. And the whole system is decentralized so that microservice components can be run from on-prem datacenters as well as public cloud services. Due to the APIs, communication channels, and object storage, components do not “care” where they operate from; therefore, a broadcaster can use any combination of on-prem and off-prem hardware resource.
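To make this concrete, here is a minimal sketch of what one such component might look like: a hypothetical "transcode-status" service written in plain Python using only the standard library. It exposes a single, well-defined HTTP endpoint and has no knowledge of the hardware, cluster, or other services it may later sit alongside. The service name, endpoint path, and port are illustrative assumptions, not taken from any real product.

```python
# Minimal sketch of a single-purpose microservice (hypothetical example).
# It exposes one well-defined HTTP endpoint and knows nothing about the
# hardware or cluster it will eventually run on.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class TranscodeStatusHandler(BaseHTTPRequestHandler):
    """Answers a single question over its API: is this service healthy?"""

    def do_GET(self):
        if self.path == "/healthz":
            body = json.dumps({"service": "transcode-status", "healthy": True}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # The port is an illustrative assumption; in a container it would be
    # surfaced through the Pod specification rather than being fixed policy.
    HTTPServer(("0.0.0.0", 8080), TranscodeStatusHandler).serve_forever()
```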
A container itself can be thought of in a similar light to a physical container. It provides a mechanism to move microservices around datacenters by grouping them. By moving a microservice into a container, we are effectively putting it into its own environment, independent of the underlying hardware it is operating on. And because containers abstract the microservice components from the underlying hardware, a container can be moved onto any server, cloud, or virtual machine.
Figure 1 - The cluster forms the highest level of the system and contains the nodes. Each node is a physical or virtual machine which contains the pods. The pods are abstractions that contain the individual microservices and allocate the node resources as required.
Orchestration
The containers do not exist in isolation but need a higher-level management system that distributes, schedules, enables and disables them, and this process is often referred to as orchestration. It’s important to remember that microservices can exist without containers, which can in turn exist independently of orchestration, but it is the combination of all three of these fundamental components working together that provides the power of microservice architectures.
When referring to microservices, we are really embracing the whole orchestration ecosystem. Microservices working in isolation are just small programs, but combined with containers and orchestration, form a hugely scalable architecture that facilitates software deployment and operation over on-prem and off-prem datacenters, as well as the public cloud.
There are many container orchestration systems available including Kubernetes, OpenShift, and Nomad, and all have their own methods of operation but share the concept of deploying and managing containers.
Hierarchy
At its highest level, an orchestration system consists of a cluster, which is itself an abstraction of the whole orchestration system. The cluster embraces the nodes and the control plane, and in the case of a Kubernetes-type orchestration system, each of these nodes consists of one or more pods, and it is the pods that manage the containers and hence the microservices.
This might all seem like a lot of abstraction and overhead, but it is what facilitates and empowers full scalability, flexibility, and resilience. The control plane manages the cluster, including scheduling the applications that provide the actual functionality, scaling those applications, and deploying software updates.
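To make the control plane's scaling role concrete, here is a minimal sketch, assuming the official Kubernetes Python client, a valid kubeconfig, and a hypothetical deployment called "playout-service". It simply asks the control plane for a new replica count, for example ahead of peak viewing; the control plane then does the work of starting or removing the extra instances.

```python
# Hedged sketch: asking the Kubernetes control plane to scale a workload.
# The deployment name, namespace, and replica count are illustrative assumptions.
from kubernetes import client, config

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Patch the desired replica count; the control plane schedules the change."""
    config.load_kube_config()  # read credentials from the local kubeconfig
    apps = client.AppsV1Api()
    apps.patch_namespaced_deployment(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

if __name__ == "__main__":
    # Scale a hypothetical playout service up ahead of peak viewing demand.
    scale_deployment("playout-service", "production", replicas=5)
```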
The node is either a physical computer or a VM and provides the worker machine for the cluster. This means that the node can exist on a physical machine, a localized VM in a datacenter, or a public cloud, and the orchestration layer through the control plane links the node processes together so that they can be either physically dispersed or decentralized if required.
In the case of Kubernetes, the nodes encapsulate Pods. These are another abstraction that manages one or more containers and allows them to share resources. As the container is a resource-independent abstraction, at some point the microservice applications it hosts must access the hardware, and this is achieved through Pods.
Each Pod groups one or more containers, allows them to share storage, is allocated its own IP address, and contains the information needed to run each container. This includes which container image version to use and which specific ports are required.
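As an illustration of the Pod level of this hierarchy, the sketch below builds a simple Pod specification with the official Kubernetes Python client and submits it to the cluster. The image name and tag, the labels, the namespace, and the port are hypothetical placeholders rather than a prescription for any particular broadcast workload.

```python
# Hedged sketch of a Pod specification using the Kubernetes Python client.
# The image, tag, labels, namespace, and port are hypothetical placeholders.
from kubernetes import client, config

def build_pod() -> client.V1Pod:
    container = client.V1Container(
        name="transcode-status",
        image="registry.example.com/transcode-status:1.4.2",  # specific image version
        ports=[client.V1ContainerPort(container_port=8080)],  # specific port required
    )
    return client.V1Pod(
        api_version="v1",
        kind="Pod",
        metadata=client.V1ObjectMeta(
            name="transcode-status",
            labels={"app": "transcode-status"},
        ),
        spec=client.V1PodSpec(containers=[container]),
    )

if __name__ == "__main__":
    config.load_kube_config()  # authenticate against the cluster
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=build_pod())
```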
Resilience
It is possible to run an entire cluster on a single machine or VM, but this would be highly risky: should the server fail, the whole architecture fails, and it would be time-consuming and costly to rebuild. Instead, a minimum deployment would operate over three nodes (virtualized or physical servers), consisting of one node for the control plane, one node for the system database, and a third node backing up the other two. Therefore, if one VM instance or physical server dies, one of the others will recover the microservice architecture.
Another aspect of resilience is that we should assume failures will happen. This assumption informs strategies for testing as well as recovery. It's only when recovery from a failure can be achieved that a system is completely resilient, and the compact and contained nature in which microservices operate lends itself to this methodology. Instead of fearing failures we should really be embracing them and using them to learn how to recover. The old attitudes to A-B failover just don't cut it in the highly complex world of software infrastructures.
In essence, to build a truly resilient system, microservice architectures should be designed with failure in mind. Only when we know what happens when things don't go according to plan can we devise an effective counter-measure. It is the combination of independent microservices and a containerized, orchestrated infrastructure that makes these systems truly resilient.
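As one hedged illustration of designing with failure in mind, the sketch below (again using the Kubernetes Python client, with a hypothetical image, names, and timings) gives the container a liveness probe so the orchestration layer restarts it automatically if its health endpoint stops responding, and wraps it in a Deployment running several replicas so the loss of a single node does not remove the service.

```python
# Hedged sketch: designing for failure rather than hoping it never happens.
# If /healthz stops answering, Kubernetes restarts the container; if a node
# dies, the Deployment controller reschedules replicas elsewhere.
# All names, the image, and the timings are illustrative assumptions.
from kubernetes import client

def resilient_container() -> client.V1Container:
    return client.V1Container(
        name="transcode-status",
        image="registry.example.com/transcode-status:1.4.2",
        ports=[client.V1ContainerPort(container_port=8080)],
        liveness_probe=client.V1Probe(
            http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
            initial_delay_seconds=5,   # give the service time to start
            period_seconds=10,         # probe every ten seconds
            failure_threshold=3,       # three consecutive misses trigger a restart
        ),
    )

def resilient_deployment(replicas: int = 3) -> client.V1Deployment:
    """Run several replicas so losing one node does not remove the service."""
    labels = {"app": "transcode-status"}
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="transcode-status"),
        spec=client.V1DeploymentSpec(
            replicas=replicas,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[resilient_container()]),
            ),
        ),
    )
```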