Empowering Cloud Through Microservices - Part 1
Simply moving workflows and software applications to virtualized infrastructures – whether public or on-prem – will not leverage the power of cloud. Monolithic software programs and static workflows conspire against broadcasters striving to achieve the flexibility, scalability, and resilience that cloud systems promise.
Broadcast infrastructures demand high availability and resilience, and scalability has always been an aspiration, but the peaky nature of program production has rendered true scalability virtually impossible in traditional broadcast infrastructures.
If you build your infrastructure to handle the largest events and the busiest days with full redundancy, it sits idle most of the time. Cloud and virtualization not only deliver high availability and resilience but also massive flexibility and scalability. However, this is only possible if the infrastructure is built with cloud and virtualization in mind from the beginning.
Microservices both present a new method of designing and building software-based infrastructures and encourage a new way of thinking. A microservice isn’t necessarily owned, but instead leased for the duration of the program or service it serves. This reflects a major change in how we think about making programs.
Instead of having to procure and find funds for capital expenditure that justifies the spend for years to come, we now have the opportunity to build entire broadcast infrastructures using pay-as-you-go methodologies.
Improving Reliability
Software is traditionally built using a monolithic design, which means that a huge software release is provided for a single application or workflow. It is almost impossible to completely test the software prior to release due to the millions of combinations of inputs and outputs that are available. This can result in each release introducing bugs that have unintended consequences and unpredictable outcomes. Software engineers have tried to improve on this situation by introducing functional libraries and object-oriented code. The idea was that code could be reused, and that code which had been in service for some time could reasonably be assumed to be free of major bugs.
Embedded systems such as proc-amps and standards converters have used monolithic code for some time. Reliability is easier to achieve as the input and output data is better understood and easier to replicate due to working in a closed environment. Furthermore, vendors working in closed environments are often writing software for custom hardware, so they have much better control of how the product behaves.
The challenge we now face with modern broadcast workflows is that they operate on open architectures, that is, we use COTS hardware with their associated operating systems. This has both decreased capital expenditure and greatly improved flexibility through the application of software functionality, but in doing so, has massively increased the potential for complexity. Furthermore, monolithic designs are not only difficult to maintain and upgrade, they do not lend themselves well to scalability. One reason for this is that monolithic architectures cannot easily duplicate themselves and coordinate user requests across multiple instances of the same application. COTS hardware has limited processing capacity, and as requirements increase, additional hardware needs to be purchased, integrated, and configured. This is a process that can take weeks or even months. Simply running the software on a remote server only moves the problem elsewhere. A monolithic piece of software still needs to be installed, integrated, and configured regardless of whose server it is running on.
Microservices both solve the challenges of monolithic software and build on the advantages of COTS-type infrastructures. One of the reasons COTS is so powerful is that hardware is much more readily available than it is with traditional closed broadcast systems.
It is worth remembering that the type of customized infrastructures broadcasters need is at the top end of the technology scale, in other words, it’s relatively expensive. But the high-end servers, switches, and storage broadcasters require are the same type of technology that other industries such as finance and medical are also using, so it is much easier to procure and support.
We can also ride on the crest of the wave of innovation that these other industries provide, and microservices are just one result of these advances.
Fig 1 – Microservices are stateless, allowing the API Gateway to schedule jobs within the workflow as requested. The user doesn’t know, or need to know, where the microservice physically resides, only that it is a service available to them.
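Statelessness is what makes this scheduling possible: because a microservice holds no job state between requests, every instance is interchangeable and the gateway can route a request anywhere. The following Python sketch illustrates the idea only – the service, job fields, and round-robin gateway are all invented for illustration, not taken from any real product:

```python
import itertools

def normalize_loudness(job: dict) -> dict:
    """A stateless 'microservice': the output depends only on the request payload."""
    gain = job["target_lufs"] - job["measured_lufs"]
    return {"job_id": job["job_id"], "gain_db": round(gain, 1)}

class ApiGateway:
    """Round-robin scheduler over a pool of identical, interchangeable instances."""
    def __init__(self, instances):
        self._pool = itertools.cycle(instances)

    def submit(self, job: dict) -> dict:
        # Because instances keep no state, any one of them gives the same answer.
        return next(self._pool)(job)

# Three 'instances' of the same stateless service behind one gateway.
gateway = ApiGateway([normalize_loudness] * 3)
job = {"job_id": "promo-42", "target_lufs": -23.0, "measured_lufs": -18.5}
print(gateway.submit(job))  # {'job_id': 'promo-42', 'gain_db': -4.5}
```

The key property is that `submit` can hand the job to any instance – or to a freshly started one – without any session affinity, which is what lets the workflow scale out and back in on demand.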
Less Is More
One of the original design philosophies of UNIX architects Ken Thompson and Dennis Ritchie was to keep the code reusable and modular. Consequently, UNIX has a host of commands that allow the output of one program to be piped into the input of another program. By keeping the operating system programs relatively small, they became much easier to maintain and support.
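That piping idea can be mimicked in a few lines of Python: each stage is a small function with one well-defined job, and stages are chained so that one's output becomes the next's input. This is a toy illustration of the philosophy, not UNIX itself – the stage names are invented:

```python
from functools import reduce

def pipe(*stages):
    """Chain small single-purpose functions, UNIX-pipe style."""
    return lambda data: reduce(lambda acc, stage: stage(acc), stages, data)

# Three tiny 'programs', each doing one thing well.
split_lines = lambda text: text.splitlines()
keep_errors = lambda lines: [l for l in lines if "ERROR" in l]  # like grep
count       = lambda lines: len(lines)                          # like wc -l

count_errors = pipe(split_lines, keep_errors, count)
log = "INFO boot\nERROR disk\nINFO ok\nERROR net"
print(count_errors(log))  # 2
```

Each stage can be tested, replaced, or reused on its own – exactly the property that keeps small programs (and, by extension, microservices) easy to maintain.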
Microservices follow this same proven design philosophy. By keeping functionality well defined, with constrained input and output data values, they are much easier to maintain and support. Other benefits include improved security and scalability.
In the same way that a house consists of thousands of bricks, all working together to make a huge building many times the size of one brick, microservices combine to deliver highly flexible, scalable, and resilient workflows that are much greater than the sum of the parts.
Key to understanding the microservice workflow is to appreciate the philosophy of building systems consisting of smaller parts. Unlike a brick house, we can pull the whole microservice workflow apart, delete it when it’s not needed, and then reconstruct it again in a matter of minutes through the appropriate management software. Similar to the UNIX example above, microservices can be tested individually and can be easily installed, upgraded, or rolled back without impacting other microservices. This means that a small bug in one microservice will not bring down the whole system, improving the reliability of the system even when going through regular updates.
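Because each service sits behind a stable interface, upgrading one service is simply a matter of swapping its implementation while the rest of the workflow is untouched, and rolling back is the reverse swap. A minimal sketch of that idea in Python – the registry, service names, and version behavior are all hypothetical:

```python
# Service registry: name -> implementation. Callers look services up by name,
# so swapping one implementation never touches its neighbours.
def scale_v1(frame):
    return {"frame": frame, "scaler": "v1"}

def scale_v2(frame):  # a new release of just this one service
    return {"frame": frame, "scaler": "v2"}

registry = {
    "scaler": scale_v1,
    "encoder": lambda frame: {"frame": frame, "codec": "h264"},
}

registry["scaler"] = scale_v2   # upgrade one service in isolation
print(registry["scaler"](1))    # {'frame': 1, 'scaler': 'v2'}

registry["scaler"] = scale_v1   # instant rollback; the encoder is unaffected
print(registry["encoder"](1))   # {'frame': 1, 'codec': 'h264'}
```

In a real deployment the "registry" would be a service-discovery layer and the swap a rolling update, but the principle is the same: a fault or bad release in one service is contained to that service.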
Broadcast facilities have worked with the modular mindset since the first television broadcasts, but these systems have always been fairly static. We have been able to design some flexibility and scalability into the infrastructures through assignable matrices and pluggable jackfields, but the reality is that we’ve been saddled with having to design for peak demand. No matter how flexible we try to make an infrastructure, the static nature of single-functionality equipment has been a limiting factor. However, COTS infrastructures combined with manageable microservices are delivering untold levels of flexibility and scalability.