The Sponsor’s Perspective: The Answer Is: “Yes We Can!”
Flexible architecture opens new business possibilities.
Elevate Broadcast Pte Ltd. was one of the early adopters of Grass Valley’s agile media production and distribution platform, AMPP. Since adopting AMPP, they have used it regularly for a wide variety of projects, from simple signal transport and monitoring to full live remote productions in the cloud. Elevate Broadcast has found that transitioning to a modular, microservices-style architecture has increased their ability to respond quickly to the changing needs of their customers.
Dennis Breckenridge, CEO of Elevate Broadcast, explains: “One of the challenges in many of the IP environments is that you end up with all these gateway devices that are plumbed together to create a workflow. We can make that work for some situations, but it doesn’t have the same flexibility. You can’t say today I need eight inputs and tomorrow they need to be outputs. Or you can have a video switcher for this show but use those same resources to shuffle audio for the next show. AMPP doesn’t force you to make these decisions that you then have to live with.”
While many of the projects using AMPP do have elements of cloud operations in them, Breckenridge was quick to point out that the flexibility that AMPP provides makes it just as useful in an on-prem environment.
“Last year we built out a big production center. In that case, AMPP is connected to a SMPTE ST 2110 world. The nodes sit in our data center. Then we use it for all kinds of things.
“We use AMPP extensively for format conversion from 1080i to 1080p productions, to do contribution to AI engines for editing or other processing. We use AMPP if we need to post process any signals, for example to multiplex or shuffle audio and then convert the feed to SRT or RIST. Rather than going out and buying converters and all those types of edge devices, we just feed the 2110 signal into AMPP, and it provides what we need.”
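The audio shuffle Breckenridge mentions is, at its core, channel remapping on interleaved audio. AMPP’s internals are proprietary, but the operation itself can be sketched in a few lines of Python; the function name and data layout below are illustrative assumptions, not AMPP code:

```python
# Illustrative sketch of audio channel shuffling (remapping) on interleaved
# audio frames, the kind of post-processing described above. This is a
# hypothetical stand-in, not AMPP's implementation.

def shuffle_channels(frames, channel_map):
    """Reorder the channels of interleaved audio.

    frames      -- list of tuples, one tuple of samples per audio frame
    channel_map -- output channel i takes input channel channel_map[i]
    """
    return [tuple(frame[src] for src in channel_map) for frame in frames]

# Example: swap a stereo pair and drop an unused third channel.
frames = [(0.1, 0.2, 0.3), (0.4, 0.5, 0.6)]
print(shuffle_channels(frames, [1, 0]))  # [(0.2, 0.1), (0.5, 0.4)]
```

In a real chain this remap would sit between the ST 2110 de-embed and the SRT or RIST encode stage the article describes.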
GV AMPP Architecture
AMPP is a cloud-first microservices architecture that consists of a Grass Valley operated multi-tenant control plane – which is provided as SaaS – as well as a private customer video processing data plane that can either be in the cloud or on the ground. This enables extremely flexible workflows that have all the advantages of the cloud while recognizing that for some use cases, processing video at the edge makes more economic sense.
Cloud First
AMPP takes advantage of the native services available in public cloud platforms. It consists of a set of microservices distributed across many physical computers in multiple availability zones. Such architectures are normally described as highly available because the work is spread across many microservices, any one of which can fail without impacting the overall performance of the system. This provides better performance and reliability than a traditional “lift and shift” approach to the cloud, which means taking monolithic software and simply running it on a dedicated VM in the cloud.
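The high-availability behaviour described above can be sketched as a dispatcher that routes work only to healthy service replicas, so a single failure does not stop the system. This is a minimal illustration under assumed names, not any actual AMPP scheduling logic:

```python
# Sketch of failover across service replicas: work is assigned only to
# healthy instances, so one failed replica does not halt the system.
# Purely illustrative; the names and structure are hypothetical.

def dispatch(jobs, replicas, is_healthy):
    """Assign each job to the next healthy replica, round-robin."""
    healthy = [r for r in replicas if is_healthy(r)]
    if not healthy:
        raise RuntimeError("no healthy replicas available")
    return {job: healthy[i % len(healthy)] for i, job in enumerate(jobs)}

# Replica "b" has failed; the jobs still complete on the survivors.
assignment = dispatch(["j1", "j2", "j3"], ["a", "b", "c"], lambda r: r != "b")
print(assignment)  # {'j1': 'a', 'j2': 'c', 'j3': 'a'}
```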
Microservices And Kubernetes
A single AMPP platform control plane is distributed across clusters of compute in different availability zones. AMPP operates on platforms distributed around the world so that customers access a platform that is local to them. Managing many microservices distributed over multiple data centers requires a management layer that handles the lifecycle of stopping and starting all the individual services and managing the resources they have available. Grass Valley uses Kubernetes to manage the control plane. Kubernetes groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.
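As a generic (non-AMPP) illustration of how Kubernetes groups containers into logical units and manages their lifecycle, a minimal Deployment manifest keeps a fixed number of replicas of a containerized service running and restarts them on failure. The names and image below are placeholders, not actual AMPP components:

```yaml
# Generic Kubernetes Deployment: three replicas of a containerized
# microservice, grouped and managed as one logical unit. All names and
# the image reference are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-microservice
spec:
  replicas: 3            # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: example-microservice
  template:
    metadata:
      labels:
        app: example-microservice
    spec:
      containers:
        - name: service
          image: registry.example.com/example-service:1.0
          ports:
            - containerPort: 8080
```

Spreading such Deployments across clusters in different availability zones is what gives the control plane the resilience described above.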
The AMPP Data Plane
Real-time video processing happens on the AMPP data plane. The data plane infrastructure may run on-premises or on public cloud virtual machines, and is private to an individual customer account.
Many individual AMPP applications can be deployed on a single compute node. These apps can be stopped and started individually as needed, but they all share access to a common pool of uncompressed 10-bit YUV video flows, so multiple apps can interact with the same frames of video efficiently, without incurring significant latency.
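The zero-copy frame sharing described above can be sketched with Python’s standard `multiprocessing.shared_memory` module: one “producer app” writes a frame into a shared buffer, and any number of “consumer apps” attach to the same buffer by name rather than copying the data. This is a token-sized illustration, not AMPP’s actual flow mechanism:

```python
# Sketch of shared uncompressed video flows: a producer writes a frame into
# shared memory and a consumer attaches to the same buffer without copying.
# Illustrative only; the buffer size is a token stand-in for a real frame.
from multiprocessing import shared_memory

FRAME_BYTES = 16  # stand-in for a real 10-bit YUV frame footprint

# "Producer app" writes a frame into a shared buffer.
shm = shared_memory.SharedMemory(create=True, size=FRAME_BYTES)
shm.buf[:4] = bytes([1, 2, 3, 4])

# "Consumer app" attaches to the same flow by name -- no copy of the frame.
view = shared_memory.SharedMemory(name=shm.name)
payload = bytes(view.buf[:4])
print(payload)  # b'\x01\x02\x03\x04'

view.close()
shm.close()
shm.unlink()
```

A real system would add synchronisation so consumers never read a half-written frame, but the copy-free access pattern is the same.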
Within the same data plane, you can run many copies of the same app, each with its own specific configuration. These are called workloads and can be managed from a central application called the Resource Manager. The advantage of this approach is that different productions can have their own workloads, which can be stopped and started as a block while preserving all their individual show setup.
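The workload model above can be sketched as a small manager that groups app instances per production, starts and stops each group as a block, and keeps the per-show configuration across restarts. All class and method names here are invented for illustration and are not AMPP’s API:

```python
# Hypothetical sketch of the workload model: many copies of an app, each
# with its own configuration, grouped per production and started/stopped
# as a block while the show setup is preserved. Not AMPP's actual API.

class Workload:
    def __init__(self, app, config):
        self.app = app
        self.config = config      # per-show setup survives stop/start
        self.running = False

class ResourceManager:
    def __init__(self):
        self.productions = {}     # production name -> list of workloads

    def add(self, production, app, config):
        self.productions.setdefault(production, []).append(Workload(app, config))

    def start(self, production):
        for w in self.productions[production]:
            w.running = True

    def stop(self, production):
        for w in self.productions[production]:
            w.running = False

rm = ResourceManager()
rm.add("evening-news", "switcher", {"inputs": 8})
rm.add("evening-news", "multiviewer", {"layout": "3x3"})
rm.start("evening-news")          # the whole show comes up as a block...
rm.stop("evening-news")           # ...and goes down again, config intact
print(rm.productions["evening-news"][0].config)  # {'inputs': 8}
```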
Ian Fletcher (left) and Chris Merrill (right).
A New Way Of Working
While it is common to begin experimenting with AMPP as a one-to-one replacement for a specific hardware-based workflow, its true power lies in its ability to rapidly provide whatever workflows are required at any given moment.
“The beauty of AMPP,” said Breckenridge, “is that it is a toolbox that can be applied in so many ways. Before AMPP we had to build up a stock of converters and changeovers, clean quiet switches, and routing panels – a whole warehouse of purpose-built kit. Now we can be much more dynamic. We can add more inputs and outputs, we can scale the network, we can manage all different types of flows, and then bring all of that into whatever production environment we need: SDI, IP, cloud, hybrid… It really doesn’t matter.”