Scalable Dynamic Software For Broadcasters: Part 9 - Integrating On- & Off-prem Datacenters & Cloud
Flexibility and scalability are at the core of microservice architectures. To truly deliver on this, broadcasters will want to leverage the power of on-prem and off-prem hybrid solutions.
When considering on-prem and off-prem datacenters, we must look a little closer at how they're being used to determine their classification. Although an off-prem infrastructure can often be thought of as a cloud system, that is not always the case.
A service provider can supply anything from bare racks with power, network connectivity, air conditioning, and building security, where the broadcaster installs their own servers and network, to serverless systems where the broadcaster only needs to manage the microservice deployment with little regard for the underlying hardware.
A similar scenario exists with on-prem datacenters. The broadcaster may operate their system as a cloud-type infrastructure with virtualization and scalability, but they are still responsible for installation and maintenance of the entire facility. With on-prem, the broadcaster is responsible for the network connectivity, power, air conditioning, security, and fire suppression systems. Although these are all achievable for most broadcasters, as they're used to working in 24/7 mission-critical environments, the complexity of on-prem systems should not be underestimated.
That said, an on-prem system generally provides much more control for a broadcaster than the off-prem equivalent, but it will always have limited capacity. No matter how much the broadcaster tries to future-proof their facility, capacity remains finite; indeed, one of the compelling reasons for moving to IP is that broadcasters do not have to predict viewer requirements ten years ahead. With this in mind, many broadcasters are finding the concept of the off-prem and on-prem hybrid model compelling.
Hybrid Infrastructures
Combining on-prem and off-prem systems seems like the perfect solution. Using on-prem means the broadcaster has more control over their infrastructure and can significantly reduce cloud egress and ingress costs, while at the same time moving data quickly to and from local storage. During times of peak demand, which are inevitable for any broadcaster, they can divert some of their workflow traffic to the off-prem facility.
Off-prem cloud systems excel when they are scaling up and down as the number of viewers increases and decreases. Where they don't do so well is when the workflow is static. This doesn't mean that a broadcaster cannot run a static workflow entirely in a cloud infrastructure, but it requires a little more thought about the overall structure of the technology solution. There are many costs associated with running the technical side of a broadcast infrastructure, and these might include the procurement of an on-prem datacenter, or they may not. It all depends on the individual requirements of the broadcaster.
The great news is that hybrid on-prem and off-prem infrastructures provide broadcasters with a multitude of options, and it’s for the broadcaster to determine the best route for themselves.
Load Balancing Principles
If we assume the broadcaster has chosen a hybrid infrastructure approach where the static part of the workflow resides on-prem and they have the option of scaling into the off-prem datacenter when needed, then how is this achieved? It's all well and good declaring that the infrastructure must scale, but what does this mean in real terms?
There are two problems to solve: the off-prem infrastructure must deliver new resources, and the workflow traffic must be diverted to them. Scaling the infrastructure is an intrinsic property of the orchestration system, but diverting the traffic is achieved with load balancing.
Load balancing is the method of distributing client requests across multiple servers providing the same service. And as microservices are stateless and their functionality is abstracted away from the underlying hardware, they lend themselves perfectly to hybrid on-prem and off-prem operation.
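As a minimal sketch of this principle, the following Python distributes jobs across a fixed list of identical, stateless microservice instances in round-robin order. The endpoint addresses and the RoundRobinBalancer class are purely illustrative; a production system would discover instances from the orchestration layer rather than hard-code them.

```python
import itertools

# Hypothetical endpoints for three instances of the same stateless
# microservice; in practice these would be discovered from the
# orchestration system rather than hard-coded.
ENDPOINTS = ["10.0.1.10:8080", "10.0.1.11:8080", "10.20.5.14:8080"]

class RoundRobinBalancer:
    """Distributes jobs across identical stateless microservices."""

    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def dispatch(self, job):
        # Because the microservice holds no state between jobs, any
        # instance can service any job, so a simple rotation suffices.
        endpoint = next(self._cycle)
        print(f"Sending job {job} to {endpoint}")
        return endpoint

balancer = RoundRobinBalancer(ENDPOINTS)
for job_id in range(5):
    balancer.dispatch(job_id)
```

Real load balancers use more sophisticated strategies (least connections, weighted distribution, health checks), but statelessness is what makes any of them viable: no job depends on which instance handled the previous one.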
With a container and microservice architecture such as Kubernetes, the node maps to a server, which may be virtualized or physical. The node runs the pods, which in turn manage the containers and individual microservice applications. Consequently, part of the workflow planning is determining which microservices lend themselves well to operating in an off-prem environment. This is particularly important when the broadcaster considers where the media assets are stored, as costs could easily soar if they are continuously transferred between the off-prem and on-prem facilities. It's inevitable that some transfer will take place, but the ingress and egress must be kept to a minimum, so storage optimization is critical.
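To illustrate the storage consideration, here is a sketch of a placement rule that prefers a node in the same facility as the media asset, so the media never crosses the on-prem/off-prem boundary unnecessarily. The asset catalogue, node list, and place_job function are hypothetical.

```python
# Hypothetical catalogue mapping media assets to the facility where
# they are stored, and nodes to the facility where they run.
ASSET_LOCATION = {"promo.mxf": "on-prem", "match.mxf": "off-prem"}
NODES = [
    {"name": "node-a", "facility": "on-prem"},
    {"name": "node-b", "facility": "off-prem"},
]

def place_job(asset):
    """Prefer a node co-located with the asset to avoid egress costs."""
    facility = ASSET_LOCATION[asset]
    local = [n for n in NODES if n["facility"] == facility]
    # Fall back to any node, accepting the transfer cost, only if no
    # local capacity exists.
    chosen = (local or NODES)[0]
    print(f"{asset}: schedule on {chosen['name']} ({chosen['facility']})")
    return chosen

place_job("promo.mxf")
place_job("match.mxf")
```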
Figure 1 shows how the workflow traffic is spread between multiple microservices providing the same service. The stateless nature of the microservice means that as jobs are created in the workflow, they can be sent to the load balancer, which in turn decides which microservice to send each job to by specifying the microservice's IP address.
In the case of the ingest workflow, where the received file must be converted to the broadcaster's mezzanine format, the transcoders will probably reside within the same pod design. Transcoders are CPU (and sometimes GPU) intensive and require large amounts of local memory. The pod design will be tuned to provide these resources from the node.
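As an illustration of how such tuning is expressed, the following sketch uses the official Kubernetes Python client to declare CPU and memory requests and limits for a hypothetical transcoder container. The image name and the figures are assumptions, not recommendations.

```python
from kubernetes import client

# A sketch of a transcoder container declaring its CPU and memory
# needs from the node. Requests are the guaranteed share the
# scheduler reserves; limits are the hard ceiling.
transcoder = client.V1Container(
    name="mezzanine-transcoder",
    image="registry.example.com/transcoder:1.0",  # hypothetical image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "8", "memory": "16Gi"},
        limits={"cpu": "16", "memory": "32Gi"},
        # A GPU would be requested as an extended resource in limits,
        # e.g. {"nvidia.com/gpu": "1"}, if the node exposes one.
    ),
)
```

Because the requests tell the scheduler what a transcoder pod needs, Kubernetes will only place it on a node, on-prem or off-prem, that can actually supply those resources.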
This pod model can reside on any node, and the node can reside on any server, whether on-prem, off-prem, virtualized, or physical. Having this level of flexibility allows the transcoder node to exist in the broadcaster's on-prem datacenter or the third-party provider's off-prem datacenter.
Figure 1 – The load balancer acts as an interface to the user, so they do not need to know which node or pod the microservice is running on. If there is capacity within the on-prem datacenter, then more nodes can be created and added to the load balancer.
Off-Prem Load Balancing
There are many third-party off-prem suppliers who provide serverless computing that facilitates microservice architectures. The term serverless is somewhat misleading as servers are still being used; it's just that the provider is delivering a service-based solution instead of a server solution. This leaves the broadcaster to focus on the applications and not get bogged down with configuring hardware. Serverless computing is also known as Function as a Service (FaaS), and it is typically built on containerized architectures such as Kubernetes.
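As a hedged example of the FaaS model, the sketch below follows the handler convention popularized by AWS Lambda: the provider spins servers up and down invisibly, and the broadcaster supplies only the function. The event layout and the submit_transcode helper are hypothetical.

```python
# A serverless function in the AWS Lambda handler style: the platform
# provisions and scales the servers, and the broadcaster supplies only
# this function. Event layout and submit_transcode are hypothetical.
def handler(event, context):
    # The event might describe a newly ingested file that must be
    # converted to the broadcaster's mezzanine format.
    source_file = event["file"]
    job_id = submit_transcode(source_file, profile="mezzanine")
    return {"status": "queued", "job": job_id}

def submit_transcode(source_file, profile):
    # Placeholder: in a real deployment this would enqueue work for
    # the transcoding microservices behind the load balancer.
    print(f"Queueing {source_file} for {profile} transcode")
    return "job-0001"

print(handler({"file": "match.mxf"}, None))
```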
Figure 2 – Extending figure 1, a node in an off-prem datacenter is added to the load balancer. As the load balancer is effectively routing IP packets, it doesn't matter whether the node is on-prem or off-prem. Care must be taken when determining where the storage is allocated, otherwise there may well be excessive ingress and egress.
If a broadcaster is using a bare-bones off-prem rack system, they must not only provision the physical servers but also decide how they are going to facilitate the containerized architectures.
How the containers are provisioned within the off-prem datacenter depends to a large extent on how quickly the broadcaster will need the scaled resource, and the cost is proportional to the speed with which the resource becomes available. If a number of servers are kept on standby in the public cloud with a specific containerized deployment, then their availability will be in the order of milliseconds. But if the servers need to be created and spun up with a specific containerized deployment, the time to availability can stretch to five or ten minutes.
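This trade-off can be sketched as a simple decision rule, shown below in Python. The availability figures reflect the discussion above (milliseconds for a warm standby pool, minutes for a cold spin-up); the pool sizes and function names are illustrative.

```python
# Warm standby nodes answer in milliseconds but cost money while idle;
# cold provisioning is cheap while idle but can take minutes.
WARM_POOL_SIZE = 2          # pre-provisioned off-prem nodes
COLD_START_SECONDS = 600    # worst-case spin-up for a new node

def acquire_node(warm_pool):
    if warm_pool:
        node = warm_pool.pop()
        print(f"Using warm node {node}: available in milliseconds")
        return node
    print(f"Provisioning cold node: up to {COLD_START_SECONDS}s wait")
    return provision_new_node()

def provision_new_node():
    # Placeholder for the provider-specific API call that creates a
    # server and deploys the container images onto it.
    return "node-cold-1"

pool = [f"node-warm-{i}" for i in range(WARM_POOL_SIZE)]
for _ in range(3):
    acquire_node(pool)
```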
The stateless nature of microservices is what allows this scaling. Furthermore, the broadcaster can scale across multiple different third-party vendors. This not only reduces their risk, but also allows them to choose the most cost-effective service provider.
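A sketch of such a choice: given hypothetical prices and available capacity per provider, pick the cheapest one that can absorb the demand. All figures and names here are invented for illustration.

```python
# Hypothetical per-minute transcode prices from three providers; the
# stateless microservice can be deployed to whichever is cheapest.
PRICES = {"vendor-a": 0.12, "vendor-b": 0.09, "on-prem": 0.05}

def cheapest_with_capacity(demand_minutes, capacity):
    # capacity maps provider -> minutes available; choose the lowest
    # price among providers that can absorb the whole demand.
    viable = {p: cost for p, cost in PRICES.items()
              if capacity.get(p, 0) >= demand_minutes}
    return min(viable, key=viable.get) if viable else None

print(cheapest_with_capacity(
    500, {"vendor-a": 1000, "vendor-b": 300, "on-prem": 200}))
```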
Containers and microservices not only provide scalable resources for broadcasters, but they can achieve this across multiple vendors as well as the broadcaster's own on-prem facility.