Cloud Microservice Workflow Design - Part 2
In the second article of this cloud microservices workflow series, we take a deeper look at how microservices are integrated into workflows using RESTful APIs, and review storage deployment and accessibility “on your terms”.
Dealing With Peak Load
The microservice vendor installs advanced monitoring that anticipates server load and can schedule new jobs or spin up additional servers as required, often before they are needed. This all happens in the background, so users do not need to concern themselves with load balancing or server utilization; they only need to access the microservice through an API.
Interaction with microservices uses REST (Representational State Transfer) for control and monitoring. REST is a software architectural style that uses HTTP to transfer data over the internet, and four methods (GET, POST, PUT and DELETE) provide the basic primitives for sending data to, and requesting data from, the microservice servers. Because the internet relies on the client-server model, and the server hosts the microservice, the client computer must initiate any data transfer.
REST interactions are typically stateless: the microservice retains no data beyond the function it is executing. For example, when a microservice has completed a transcoding job, no data is retained, leaving it free to service another client with complete anonymity. This also supports high levels of security, as only small sections of the media file are read into memory at a time, so the whole asset is never held on the cloud vendor’s hardware or in the microservice provider’s software.
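As a minimal sketch of this client-initiated pattern, the snippet below drives a hypothetical transcoding microservice with the four basic HTTP methods. The endpoint, payload fields and authentication scheme are illustrative assumptions, not a specific vendor’s API.

```python
import requests

# Hypothetical endpoint for a transcoding microservice; real vendors
# publish their own URLs, payloads and authentication schemes.
BASE_URL = "https://transcode.example.com/v1/jobs"
AUTH = {"Authorization": "Bearer <api-token>"}  # placeholder credential

# POST: submit a new transcode job.
job = requests.post(
    BASE_URL,
    json={"source": "s3://bucket/master.mxf", "profile": "hd-h264"},
    headers=AUTH,
    timeout=30,
).json()

# GET: poll the job status; the stateless worker keeps nothing once it reports completion.
status = requests.get(f"{BASE_URL}/{job['id']}", headers=AUTH, timeout=30).json()

# PUT: amend the job, for example raising its priority while it is still queued.
requests.put(f"{BASE_URL}/{job['id']}", json={"priority": "high"}, headers=AUTH, timeout=30)

# DELETE: cancel the job if it is no longer required.
requests.delete(f"{BASE_URL}/{job['id']}", headers=AUTH, timeout=30)
```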
RESTful Integration
The API embraces REST methodologies and is a standard operational interface for Agile developers working with internet client-server technologies. This makes integration into workflows straightforward, as the interfaces are well defined and predictable. Broadcasters do not need to be concerned with how a microservice provides a service, such as color correction or aspect ratio conversion, only that the service is available as a callable software function that meets the technical requirement.
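One way this “callable software function” idea can look in practice is a thin wrapper around the service’s REST interface, so the workflow depends only on the function signature and never on the vendor’s implementation. The endpoint, field names and job states below are hypothetical.

```python
import time
import requests

def convert_aspect_ratio(asset_url: str, target: str = "16:9") -> str:
    """Call a hypothetical aspect-ratio-conversion microservice and return
    the URL of the converted asset. The workflow only sees this interface,
    not how the vendor implements the conversion."""
    base = "https://convert.example.com/v1/jobs"  # illustrative endpoint
    job = requests.post(base, json={"source": asset_url, "aspect": target}, timeout=30).json()

    # Poll until the service reports the job has finished.
    while True:
        state = requests.get(f"{base}/{job['id']}", timeout=30).json()
        if state["status"] == "complete":
            return state["output_url"]
        if state["status"] == "failed":
            raise RuntimeError(state.get("error", "conversion failed"))
        time.sleep(5)

# The workflow treats the service as just another function call:
# output = convert_aspect_ratio("s3://bucket/program.mxf", "16:9")
```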
Transferring media files from on-prem to the public cloud is inefficient and potentially very costly. To alleviate this, broadcasters are storing more of their media in the cloud for convenience and reliability, and keeping the processing in the cloud alongside it leads to greater efficiencies. Most public cloud providers supply ultra-high-speed links between their datacenter regions for high-speed, low-latency data transfer; these offer far greater bandwidth than is typically available for transfers to the cloud from the broadcaster’s premises.
Microservices often use a queueing method to determine how many processing jobs have been requested by the broadcaster’s applications. When the queue reaches a threshold, more microservice instances are created to respond dynamically to the increase in workload. All of this happens automatically under the control of the microservice provider’s monitoring software, without any intervention from the broadcaster.
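A highly simplified sketch of that queue-driven scaling logic is shown below. In reality this runs inside the vendor’s monitoring software; the thresholds, capacity figures and launch mechanism here are assumptions for illustration only.

```python
import time

QUEUE_THRESHOLD = 20      # jobs waiting before extra capacity is added (assumed value)
JOBS_PER_INSTANCE = 10    # rough capacity of one microservice instance (assumed value)

def scale_workers(queued_jobs: int, running_instances: int) -> int:
    """Return how many additional instances are needed to cover the backlog."""
    if queued_jobs <= QUEUE_THRESHOLD:
        return 0
    desired = -(-queued_jobs // JOBS_PER_INSTANCE)   # ceiling division
    return max(desired - running_instances, 0)

def control_loop(get_queue_depth, get_instance_count, launch_instance):
    """Poll the job queue and launch instances whenever the backlog grows."""
    while True:
        extra = scale_workers(get_queue_depth(), get_instance_count())
        for _ in range(extra):
            launch_instance()
        time.sleep(30)
```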
This leads to the concept of bring-your-own-storage. Cloud vendors often provide high levels of access security for data assets with granular access control. For example, using the AWS IAM (Identity and Access Management) system for S3 storage, broadcasters can manage who has access to specific media assets, where and when. For microservices this is a major benefit, as IAM creates highly secure keys under the broadcaster’s control with a user-defined lifetime.
These keys are then used to access the media asset with read-write-delete control, further allowing the broadcaster to fine-tune access; for example, they may grant only read access to a transcoding microservice for the duration of the job. As the microservice only loads segments of the media asset into its own memory, and never the file in its entirety, security is maintained and the asset is prevented from falling into the wrong hands.
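One way AWS tooling supports this kind of time-limited, read-only grant is a presigned S3 URL, sketched below with the boto3 SDK. The bucket and key names are purely illustrative, and a real deployment might instead use temporary IAM role credentials scoped by policy.

```python
import boto3

# The broadcaster's own IAM identity governs what this client may grant.
s3 = boto3.client("s3")

# Generate a time-limited, read-only URL for a single media asset.
presigned_url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "broadcaster-media-assets", "Key": "masters/program-001.mxf"},
    ExpiresIn=3600,  # the grant lapses after one hour, e.g. the expected job length
)

# Hand only this URL to the transcoding microservice: it can read the asset
# for the job's duration but cannot write, delete or list anything else.
print(presigned_url)
```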
Storage Deployment
Providing “access on your terms” not only delivers high levels of security to keep valuable media assets safe, but also maintains high levels of efficiency by keeping the storage in close proximity to the processing microservices.
Keeping media assets in the cloud and providing secure, granular access to third-party vendors enables “bring your own storage”. Broadcasters will have their own media asset management system, which may well be a complex hybrid on-prem and cloud solution. As microservices can easily be created in cloud regions around the world, the microservice function can be moved to the locality of the storage housing the media asset, greatly improving efficiency and keeping costs optimized. This is an automated service provided by the microservice vendor, not something the broadcaster will see or be involved with.
Software development cycles are much faster than those of equivalent hardware solutions, and this helps facilitate the provision of cutting-edge technology. As cloud infrastructure processing and distribution speeds now exceed the requirements of broadcast television, software-only services are the natural progression.
Embracing Innovation
A great deal of research is being conducted in image processing that has a direct impact on broadcast television. The processing algorithms available are improving at an incredible pace. Vendors are taking advantage of this innovation, providing microservices at the bleeding edge of color science, image compression and format conversion.
As media asset files are processed on computer servers and distributed over IP, the services provided are no longer constrained by the limitations of SDI or AES. Video can easily be distributed and processed using 16-bit data samples so that even higher quality thresholds can be achieved. Broadcasters benefit greatly from this, as much of the color science used in the medical and display industries uses 16-bit data samples. Noise is reduced, color rendition is improved, and clarity is greatly enhanced.
Advances are not limited to video but also embrace audio, including loudness normalization and sound processing. High-precision audio processing with greater bit depths and sample rates is also possible, leading to greater sound clarity and depth.
Efficient Microservice Workflows
Research in video, image and audio processing is far from standing still, especially when we look at what is occurring in other industries. A microservice-based approach to processing takes this responsibility away from the broadcaster, leaving them free to concentrate on optimizing their own workflows.
Every broadcast facility is unique. Due to localization and community requirements, workflows vary enormously, leading many broadcasters to build their own development teams to design workflow solutions that optimize their operations and maintain an immersive environment for their community of viewers.
The move to cloud computing and its associated microservices improves security, video and audio quality, and program delivery. Combined with “bring your own storage”, optimized APIs and an incredible range of processing solutions, custom workflow design has never been easier for broadcasters.