The Sponsor's Perspective: Why Settle For A Or B?

AMPP offers many different configurations for high-availability cloud systems, empowering broadcasters to choose the infrastructure that best meets their demands and further improves the viewing experience.


This article was first published as part of Essential Guide: Delivering High Availability Cloud.

Deconstructing The Monolith

Grass Valley’s AMPP, the Agile Media Processing Platform, is built on a modern architecture that provides more options for high availability than a traditional media production environment.

Unlike a monolithic system, which requires a full duplicate copy of the system to achieve higher availability, AMPP gives users options tailored to specific needs.

With AMPP we’ve deconstructed the monolithic model into separate components:

  • Inputs
  • Outputs
  • Pixel processing
  • Backend database 
  • Application logic
  • User interface

These component groups are loosely coupled. As a result, system owners can choose where to run them: on-prem, at the edge of the cloud, cloud-based, or any combination of the above.

High Availability

What does a “loosely coupled” system mean? How does that affect high availability? Let’s break that down a little further.

Backend Services

AMPP’s platform services can be run from an on-prem equipment room, but a better model is to run them in the cloud across multiple availability zones, such as multiple data centers within AWS. AWS offers much better uptime than a single equipment room: its data centers are staffed 24/7 by people whose only job is to keep those machines running.

The platform services run many copies of the AMPP microservices, and AMPP users can choose any released version of the software they prefer. Because every copy of a particular microservice operates in the same way, if one node in a cluster hiccups, another node takes over without any perceptible difference to the AMPP user. The Grass Valley DevOps team continually monitors those clusters to ensure that the latest security patches are installed and the system is always running optimally.
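The cluster failover described above can be sketched generically: identical copies of a microservice run in separate availability zones, and requests move to a healthy node when one fails. This is an illustration of the pattern only, not AMPP's actual implementation; the class and node names are invented:

```python
class MicroserviceCluster:
    """Route each request to the first healthy node; because every copy
    of the microservice behaves identically, any node can serve any request."""

    def __init__(self, nodes):
        self.nodes = list(nodes)            # e.g. one node per availability zone
        self.healthy = {n: True for n in self.nodes}

    def mark_down(self, node):
        self.healthy[node] = False          # health check failed: stop routing here

    def handle(self, request):
        for node in self.nodes:
            if self.healthy[node]:
                return f"{node} served {request}"
        raise RuntimeError("no healthy nodes in any availability zone")

cluster = MicroserviceCluster(["az-1", "az-2", "az-3"])
print(cluster.handle("encode-job"))   # az-1 served encode-job
cluster.mark_down("az-1")             # simulate a node hiccup
print(cluster.handle("encode-job"))   # az-2 served encode-job
```

From the user's point of view, the second request is indistinguishable from the first; only the serving node changed.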

Pixel Processing

For additional high availability, you can deploy pixel processing for the same workflow in multiple places. For example, you could have two separate EC2 instances, or one EC2 instance and one compute node on the ground. This allows multiple copies of an app to respond to the same instructions.
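That active-active idea can be sketched as dispatching the same instruction to every redundant processor and simply skipping any copy that fails. This is a hypothetical illustration of the pattern; the function and processor names are invented and not part of AMPP:

```python
def dispatch(instruction, processors):
    """Send one instruction to every redundant pixel processor."""
    results = {}
    for name, proc in processors.items():
        try:
            results[name] = proc(instruction)
        except RuntimeError:
            continue                        # a failed copy is simply skipped
    return results

def cloud_ec2(instr):
    return f"ec2 processed {instr}"

def edge_node(instr):
    raise RuntimeError("node offline")      # simulate one copy failing

out = dispatch("apply-lower-third", {"ec2": cloud_ec2, "edge": edge_node})
print(out)   # {'ec2': 'ec2 processed apply-lower-third'}
```

Because both copies receive the same instructions, losing either one leaves a usable output.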

For many pixel processing applications, edge computing is the better model. Edge computing puts processing services at the edge of the network instead of in a data center. Proximity to end users better achieves client objectives such as dense geographical distribution, context awareness, latency reduction, and backbone bandwidth savings.

Application Logic

AMPP is built from a common set of microservices, used in different combinations: some applications use several microservices, while others use only one.

By using edge computing, the components of AMPP can be located according to production needs. For the fastest response times, an app normally runs on the same edge node where the pixel processing is done.

Inputs And Outputs

Inputs and outputs for AMPP can be hardware or software connections. They physically connect to a compute node at the edge and upload or stream to an IP address. Both hardware and software connections can easily be duplicated for redundancy.

User Interface

Because AMPP’s user interface is HTML5, any device with an internet connection and authorized security credentials can reach the system, regardless of where the processing and apps are located. Each operator’s production tasks are independent of where the sources and processing reside, so multiple users can see and interact with the same application at the same time.

Choosing The Best Options

AMPP can run as a standalone on-prem platform, but creating an isolated system generally negates the business objectives for adopting a new platform.

Because of AMPP’s flexible architecture, there are multiple ways to achieve different levels of high availability. Here are some things to consider when designing the right system for the application.

Where Does Your Content Reside?

If all your video flows are already on-prem, that is an argument for running the system on-prem. Several customers integrate AMPP into their larger SMPTE ST 2110 production studio; because all the camera and graphics sources are already ST 2110, it doesn’t make sense for them to go to the cloud and back. For these customers, the advantage of AMPP is the ability to dynamically provision workloads: the same space can be used for multiple smaller productions or a single large production, or flipped from live production to master control for pop-up channels, depending on the needs of the moment.

Other AMPP customers regularly need to bring in sources from remote locations. For them, the ability to send those sources to the cloud from the remote site and have them immediately available to the production team saves both time and money, which argues for locating the rest of the production system in the cloud.

Does Your Workflow Need To Stay Uncompressed?

AMPP supports many excellent compression formats which provide no noticeable difference in picture quality for most applications. Compression makes it easier and less expensive to transport signals over distributed networks. The production team can use these signals without conversion to a standard production format, thus saving time and avoiding multiple format conversions.

For applications where compressing a signal is not an acceptable option, AMPP uses uncompressed video (10-bit YUV) when exchanging flows between microservices on the same node. Because transporting uncompressed video to the cloud consumes large amounts of expensive bandwidth, it may be best in this case to run the system on-prem.

Do You Need The Resultant Video Flows Back On-prem?

Egress charges are one of the more expensive parts of cloud processing. If you are storing content or using a cloud CDN, it is generally most cost-effective to move the content to the cloud once and leave it there. For many AMPP customers, the ability to monitor live content, create localized versions, and play it out without bringing the content back to a terrestrial system is a highly efficient production model.

How Often Do You Use The Workflow?

As with most technology, the costs of cloud processing continue to decline. But if you are considering AMPP for workflows that operate continuously in an existing environment, operating on-prem may make the most commercial sense.

If you are building a 24/7 operation in a greenfield environment, a cloud-based operation may be more cost-effective. You could buy hardware servers for less than a VPC, but then you need to provide power, cooling, and maintenance, keep everything updated with the latest patches, and change out the hardware every few years. Once you add up the total cost of the operation, it may cost more to run your own data center.

For short-duration events, such as pop-up channels or a championship tournament where there is a burst of activity and then the system lies dormant, a cloud system can cost less than purchasing hardware for that activity.

Conclusion

It’s time to rethink our approach to highly available media production systems. Running duplicate main and backup systems is more expensive and difficult to operate than today’s solutions. AMPP’s architecture, particularly when installed as a cloud-enabled edge network, can achieve better total uptime than local engine rooms while providing far more flexibility in your production workflows.


GV Hub Local Discovery

It’s rare for major cloud providers to have an outage, but the occasional blip is possible. AMPP Hub maintains high availability during these moments.

When running an edge network with AMPP Hub, all the on-prem computers connect to AMPP through AMPP Hub. Under normal circumstances, AMPP Hub acts as a load balancer, managing system traffic so that only the messages that need the cloud go there; the rest of the traffic stays local. For example, a button press on a switcher doesn’t need to go to the cloud and back; it goes directly to the application running on the edge device.
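The routing idea reduces to classifying each message and sending only cloud-bound types upstream while everything else stays on the edge. The sketch below illustrates that pattern only; the message types and function are invented and not the AMPP Hub API:

```python
# Messages that genuinely need the cloud platform (hypothetical examples):
CLOUD_BOUND = {"add_workload", "upload_clip", "license_check"}

def route(message):
    """Return which path a message takes through the edge hub."""
    if message["type"] in CLOUD_BOUND:
        return "cloud"        # control-plane changes go up to the platform
    return "local"            # e.g. a switcher button press stays on the edge

print(route({"type": "switcher_button"}))   # local
print(route({"type": "add_workload"}))      # cloud
```

Keeping the hot path local is what makes operator actions feel instantaneous even when the platform services live in a distant data center.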

If the internet connection is briefly broken, AMPP Hub ensures all local traffic continues without interrupting the operator. Some aspects of the system, such as adding a new workload or loading new clips to storage, might be paused, but the basic production functions continue to keep the show on air until the connection is restored.
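One common way to implement that behavior is to handle local messages immediately while queueing cloud-bound requests until the link returns. The sketch below illustrates the pattern under that assumption; the class and message names are invented and this is not AMPP Hub's actual code:

```python
from collections import deque

class EdgeHub:
    """Keep local production running during an internet blip."""

    def __init__(self):
        self.online = True
        self.pending = deque()            # cloud requests held during an outage
        self.log = []                     # everything that has been delivered

    def send(self, message, cloud_bound=False):
        if cloud_bound and not self.online:
            self.pending.append(message)  # pause, rather than fail, cloud operations
            return "queued"
        self.log.append(message)
        return "delivered"

    def reconnect(self):
        self.online = True
        while self.pending:               # flush queued work once the link is back
            self.log.append(self.pending.popleft())

hub = EdgeHub()
hub.online = False                        # simulate losing the internet link
print(hub.send("cut to camera 2"))                      # delivered (local traffic continues)
print(hub.send("add workload", cloud_bound=True))       # queued (paused until reconnect)
hub.reconnect()                           # the paused operation now completes
```

The operator never sees the outage: local commands are delivered throughout, and the paused cloud work simply catches up afterwards.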

