Cloud Workflow Intelligent Optimization - Part 2

In the last article in this series, we looked at how optimizing workflows improves reliability and enhances agility and responsiveness. In this article, we investigate advanced monitoring systems that improve data analysis and aid optimization.


Advanced Monitoring For Optimization

Virtualization allows us to monitor processes individually so we can allocate them to the most appropriate server clusters. Public cloud providers allow broadcasters to provision servers with differing resources, so a build-your-own philosophy can be adopted.

There’s little point in allocating a server with 100 CPUs and 200 threads to a file transfer program, but this is exactly what we want to do for the transcoder, or any other video-intensive processing.
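As a simple illustration of this build-your-own approach, the minimal Python sketch below maps each workflow process to a server profile sized for its needs rather than to a one-size-fits-all machine. The workload names and resource figures are hypothetical and only stand in for whatever a real provisioning system would use.

```python
# A minimal sketch: match each workflow process to a resource profile
# that suits it, instead of giving every process the same large server.

WORKLOAD_PROFILES = {
    # Lightweight tasks get small, cheap allocations...
    "file_transfer": {"vcpus": 4, "memory_gb": 8},
    # ...while video-intensive tasks get the large compute allocations.
    "transcode": {"vcpus": 100, "memory_gb": 384},
}

def provision(workload: str) -> dict:
    """Return the resource allocation to request for a given workload type."""
    try:
        return WORKLOAD_PROFILES[workload]
    except KeyError:
        raise ValueError(f"No profile defined for workload '{workload}'")

if __name__ == "__main__":
    print(provision("file_transfer"))  # {'vcpus': 4, 'memory_gb': 8}
    print(provision("transcode"))      # {'vcpus': 100, 'memory_gb': 384}
```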

To truly see the benefits of private and public cloud, engineers building broadcast workflows and infrastructures must shift their mindset from static infrastructures to dynamic and scalable systems. Engineers often keep a room of “just in case” boxes and hardware that can spring into action should the need ever arise. Equipment that has been decommissioned, or that may “come in useful” one day, is locked away, resulting in a “cannot throw away” mindset. Cloud computing is the opposite of this. Metaphorically, we throw away the room of equipment that might come in useful one day, because with cloud we can build anything we need when we need it.

The cyclical nature of agile software development leads to a constant review of the system resulting in highly efficient and well optimized workflows.

Delete and throw-away is the mantra of cloud. We do not physically throw away the hardware, but we do delete the virtual machines when not needed. Microservices take this to an extreme: their functions lie dormant as a software image, are loaded into memory and executed only when needed, and are deleted after use. Instead of physical boxes, we now have licensing keys.
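As a rough sketch of this load-run-delete pattern, a dormant service image can be pulled up for a single job and discarded as soon as it exits. This assumes Docker is available and uses a hypothetical image name purely for illustration.

```python
import subprocess

def run_ephemeral_job(image: str, *args: str) -> int:
    """Run a dormant service image for one job, then discard the container.

    The --rm flag removes the container as soon as the process exits, so
    nothing is left running or consuming resources between jobs.
    """
    cmd = ["docker", "run", "--rm", image, *args]
    return subprocess.run(cmd, check=False).returncode

if __name__ == "__main__":
    # Hypothetical image name for illustration only.
    exit_code = run_ephemeral_job("example/transcode-job:latest", "--input", "clip.mxf")
    print(f"Job finished with exit code {exit_code}")
```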

New Mindsets Improve Workflows

Adopting the dynamic and scalable mindset is key to integrating and operating private and public cloud systems. There is no place for storing things in case they come in useful one day, because doing so removes flexibility and creates waste. The agile software mindset relies on regularly deploying code, often in two-week cycles, so that any bugs that do arise are relatively minor and easy to fix or roll back. Furthermore, the agile mindset creates a culture where new functionality is continuously delivered without requiring a hardware upgrade cycle.

Broadcasters have been moving to software for years and have often relied on monolithic programs, probably without even realizing it. This is risky, as operating-system version incompatibilities and hardware inconsistencies conspire against software developers looking to build generic software. With all the different versions of hardware available under the x86 umbrella, it proved incredibly difficult for vendors to deliver consistent code across devices from many server manufacturers. In contrast, the regular deployment of smaller, flexible software services leads to incredibly stable workflows.

Playout systems regularly run on servers, and upgrading the software has traditionally been a complex and risky task. Even with backup systems, the number of single points of failure in single-server solutions is vast and problematic. However, code running on virtualized servers, and hence cloud systems, sees a far more consistent hardware and operating-system environment. It’s simply much easier to test and validate the code as there are fewer variations to account for: the virtualized environment is much better contained and predictable.

Analytical Data

Another feature of cloud systems is built-in monitoring and metrics. Agile developers build massive amounts of analytics into their code to help with debugging and with understanding the dynamics and interactions of the whole system. Modern code development relies on building smaller and more manageable programs that interact through the exchange of control and notification messages. As each program works in isolation, understanding its state and the status of its messages is critical when scaling and interacting with other programs.
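A minimal sketch of the kind of instrumentation this implies is shown below: each small program emits a structured log event alongside every message it handles, so its status can still be followed when it is scaled out. The service name, message fields and metric names are all hypothetical.

```python
import json
import logging
import time

# Structured (JSON) logging so every message exchange is also an analytics event.
logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("transcode-service")  # hypothetical service name

def handle_message(message: dict) -> dict:
    """Process one control message and log a metrics event describing it."""
    started = time.monotonic()
    result = {"job_id": message["job_id"], "status": "completed"}
    log.info(json.dumps({
        "event": "message_handled",
        "service": "transcode-service",
        "job_id": message["job_id"],
        "duration_ms": round((time.monotonic() - started) * 1000, 2),
    }))
    return result

if __name__ == "__main__":
    handle_message({"job_id": "job-0042", "action": "transcode"})
```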

APIs provide this method of message interaction as well as a convenient access point to the data and control of the programs in general. A well-defined and documented set of interfaces delivers a convenient gateway for developers integrating the services into their workflows. Although APIs develop over time and have features added to them, they do so while maintaining rigorous backwards compatibility. If the vendor releases a new API version, any additions will not affect the integrator’s existing solution.
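The sketch below illustrates the additive, backwards-compatible pattern this describes: a new optional field is added to a response without removing or renaming anything an existing integration already relies on. The type and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class JobStatusV1:
    """Original response shape that existing integrations depend on."""
    job_id: str
    state: str          # e.g. "queued", "running", "completed"

@dataclass
class JobStatusV2(JobStatusV1):
    """Newer response: adds a field but never removes or renames V1 fields,
    so code written against V1 keeps working unchanged."""
    progress_percent: Optional[float] = None  # new, optional addition

def legacy_integration(status: JobStatusV1) -> str:
    # An integrator's existing code only reads V1 fields...
    return f"{status.job_id}: {status.state}"

if __name__ == "__main__":
    # ...and is unaffected when the vendor starts returning V2 objects.
    newer = JobStatusV2(job_id="job-0042", state="running", progress_percent=37.5)
    print(legacy_integration(newer))
```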

Vendors providing APIs can rigorously test them in well-controlled environments so that any inconsistencies are easily detected before deployment. The API philosophy also prevents integrators from accessing the process in a way that is inconsistent with its reliable operation. And even if they manage to do so, the vendor’s developers will have a mass of analytical data logging access to the API, leading to a rapid fix and a new deployment.

Services running in virtualized environments cannot assume that the programs they are communicating with are on the same server, or even in the same datacenter. This becomes even more important when we start mixing workflows that reside on public and private cloud systems. If a broadcaster’s private cloud workflow needs to scale, then they should allow themselves the option of scaling to the public cloud.
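A small sketch of what this assumption-free approach can look like: service endpoints are resolved from configuration at run time, so a peer can live on the same host, in a private datacenter, or in the public cloud without the calling code changing. The environment variable name and default URL are hypothetical.

```python
import os
from urllib.request import urlopen

def peer_endpoint(service: str) -> str:
    """Resolve a peer service's address from configuration rather than
    assuming it runs on the same server or in the same datacenter."""
    env_var = f"{service.upper()}_URL"          # e.g. TRANSCODER_URL
    return os.environ.get(env_var, f"http://{service}.internal.example:8080")

def check_peer(service: str) -> int:
    """Call the peer's health endpoint, wherever it happens to be deployed."""
    with urlopen(peer_endpoint(service) + "/health", timeout=5) as response:
        return response.status

if __name__ == "__main__":
    # Point TRANSCODER_URL at a public cloud instance and nothing else changes.
    print(peer_endpoint("transcoder"))
```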

Developers also need to understand where any bottlenecks are occurring within their solutions. One consequence of cloud systems is that we are limited in the amount of control we have over individual servers, and paradoxically, this is one of virtualization’s greatest strengths. To be able to effectively manage and operate any system, we must be able to monitor it, and this is exactly what the agile development philosophy has inadvertently delivered for us.
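As a rough sketch of how that monitoring exposes bottlenecks, each workflow stage can be timed and the slowest one reported. The stage names and sleep calls are stand-ins for real workflow steps.

```python
import time
from contextlib import contextmanager

timings: dict[str, float] = {}

@contextmanager
def timed(stage: str):
    """Record how long a workflow stage takes so bottlenecks stand out."""
    start = time.monotonic()
    try:
        yield
    finally:
        timings[stage] = time.monotonic() - start

if __name__ == "__main__":
    with timed("ingest"):
        time.sleep(0.1)     # stand-in for real work
    with timed("transcode"):
        time.sleep(0.5)     # stand-in for the slow, video-intensive stage
    bottleneck = max(timings, key=timings.get)
    print(f"Slowest stage: {bottleneck} ({timings[bottleneck]:.2f}s)")
```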

Long Term Data Availability

The log files and analytical data gathered by developers monitor everything from key presses on portals to the number of processes running and the resources they are consuming. All this data is logged and primarily used for diagnostics, but it is also available through dashboards and exportable data files. IT teams have been using this type of data for as long as we’ve had virtualization to maintain the reliability and performance of their systems.

Using this metadata, broadcasters can see how their workflows are performing. Bottlenecks, over-capacity, and under-capacity can be easily spotted and rectified. Processes that are not being used can be deleted, often automatically using scalable control systems, and if a workflow is looking under strain, new resources can be allocated to relieve the load and increase throughput.
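A minimal sketch of that kind of automatic decision follows: a workflow's measured utilization is compared against simple bounds and its allocated resources are grown, shrunk, or released entirely when idle. The thresholds and metric names are hypothetical.

```python
def scaling_decision(utilization: float, queue_depth: int) -> str:
    """Decide what to do with a workflow's resources based on its metrics.

    utilization: fraction of allocated compute currently in use (0.0 - 1.0)
    queue_depth: number of jobs waiting for this workflow
    """
    if utilization == 0.0 and queue_depth == 0:
        return "delete"       # nothing running, nothing queued: release it all
    if utilization > 0.8 or queue_depth > 10:
        return "scale_up"     # workflow under strain: allocate new resources
    if utilization < 0.2 and queue_depth == 0:
        return "scale_down"   # over-provisioned: shrink the allocation
    return "hold"

if __name__ == "__main__":
    print(scaling_decision(utilization=0.92, queue_depth=25))  # scale_up
    print(scaling_decision(utilization=0.0, queue_depth=0))    # delete
```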

Integrating workflows into cloud systems gradually provides broadcasters with the opportunity to identify existing workflows that are no longer productive or needed. Over many years, broadcasters often bolt on systems and processes to support new formats or meet specific user requirements. As the systems improve and evolve, old processes are forgotten but live on, because nobody can quite remember what they do and everyone is fearful of switching them off. However, as these processes now consume costed resources, the opportunity to question their relevance is clear.

Costing Each Job

Extrapolating this thought process allows costing to be applied on a job-by-job basis. The available metrics provide enough information to show how long a job took to run through the workflow, the processes it went through and the resources it used. Evaluating its actual cost then becomes a matter of joining the dots between the analytical metric files and the cloud service costings. Even if the broadcaster’s cloud is on-prem, they can still calculate the working costs of the processes running in their own datacenter.
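As a simple sketch of that dot-joining, the cost of a single job falls out of multiplying the resource time it consumed by the provider's, or the datacenter's own, unit rates. The rates and job records below are invented purely for illustration.

```python
# Hypothetical unit rates, e.g. taken from a cloud bill or internal costing model.
RATES_PER_HOUR = {"vcpu": 0.045, "gpu": 1.20, "storage_tb": 0.025}

def job_cost(usage_hours: dict) -> float:
    """Join job-level metrics to unit costs to price a single workflow job.

    usage_hours maps a resource name to (quantity * hours consumed),
    e.g. 16 vCPUs for 0.5 hours -> {"vcpu": 8.0}.
    """
    return sum(RATES_PER_HOUR[resource] * hours
               for resource, hours in usage_hours.items())

if __name__ == "__main__":
    # Metrics for one transcode job pulled from the workflow's analytics logs.
    transcode_job = {"vcpu": 8.0, "gpu": 0.5, "storage_tb": 0.2}
    print(f"Job cost: ${job_cost(transcode_job):.2f}")
```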

Workflow metrics become an analyst’s dream, as the abundant availability of metadata empowers them to understand exactly what is going on within their systems and why. Optimization isn’t just about saving money; it also embraces improving reliability, enhancing agility and responsiveness, and providing better visibility into overall operations.

Transitioning to cloud not only reduces risk for broadcasters but also allows them to take a deep look at their existing infrastructures and optimize their efficiency. This isn’t just about saving money; it is also about reducing risk, simplifying the number of processes involved in workflows, and making it easier to spread them over multiple sites.
