Cloud Workflow Intelligent Optimization - Part 1

The optimization gained from transitioning to the cloud isn’t just about saving money; it also means improving reliability, enhancing agility and responsiveness, and providing better visibility into overall operations.




The processing power, storage availability and network resilience that modern cloud systems offer now meet the demands of broadcasters. Furthermore, the responsiveness and speed with which cloud systems can scale are opening up untold opportunities.

Transitioning cloud systems into existing infrastructures workflow-by-workflow is by far the safest way to achieve integration. This not only allows broadcasters to take advantage of the scalable and flexible systems the cloud offers, but also gives them the opportunity to review their existing working practices, many of which may now be obsolete. The natural expansion of a broadcast facility over a twenty- or thirty-year lifetime often leaves processes and workarounds in place that few can remember the reason for. By taking the opportunity to analyze workflows, broadcasters can build much more efficient systems and retire many of the antiquated systems of the past.

Cloud isn’t just about moving static workflows from existing infrastructures into datacenters. Instead, broadcasters must look at the areas of their workflows that can benefit from scalability. If a process is not being used, why keep it operational and consuming valuable resource? Adopting an agile and dynamic mindset may be a leap of faith for many, but it’s a prerequisite for successful cloud integration.

It’s fair to say that a public cloud will have more resource than we would ever need, but understanding how processes behave on servers is another area where optimization can provide massive gains. FTP, for example, uses much more network I/O than transcoding does, so prioritizing I/O access is paramount. Not only does this better optimize the server the FTP process is running on, but the benefits are felt all the way through the workflow as other processes can complete their tasks faster and with less resource demand.
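As a rough illustration of that profiling idea, the sketch below samples a running process and labels it CPU-bound (typical of a transcode) or I/O-bound (typical of a file transfer) so a scheduler could place it on the most suitable server. It assumes the third-party psutil library; note that psutil’s per-process counters cover disk I/O, so this is only a proxy for network load, and the thresholds are hypothetical tuning values.

```python
import psutil  # third-party cross-platform process metrics library

CPU_BOUND_THRESHOLD = 75.0   # percent; hypothetical tuning value
IO_BOUND_THRESHOLD = 50.0    # MB/s; hypothetical tuning value

def classify_process(pid: int, sample_secs: float = 5.0) -> str:
    """Sample a running job and label it CPU-bound (e.g. a transcode)
    or I/O-bound (e.g. an FTP transfer) for placement decisions."""
    proc = psutil.Process(pid)
    io_start = proc.io_counters()
    cpu = proc.cpu_percent(interval=sample_secs)  # blocks for the sample
    io_end = proc.io_counters()
    bytes_moved = ((io_end.read_bytes - io_start.read_bytes)
                   + (io_end.write_bytes - io_start.write_bytes))
    mb_per_sec = bytes_moved / sample_secs / 1e6
    if cpu >= CPU_BOUND_THRESHOLD:
        return "cpu-bound"   # place on compute-optimized servers
    if mb_per_sec >= IO_BOUND_THRESHOLD:
        return "io-bound"    # place where I/O access can be prioritized
    return "mixed"
```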

Intelligent optimization is a lesson in data analytics. The amount of monitoring data available in cloud systems is breathtaking and can be put to great use. Not only does this allow broadcasters to spin resource up and down as it is needed, but pinch points can be easily detected, highlighting areas where workflow speeds could be improved. The response times cloud systems can achieve allow broadcasters to allocate workflows dynamically when they are needed, instead of leaving them running aimlessly and consuming expensive resource.
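A minimal sketch of that dynamic allocation follows, assuming the demand signal is a job-queue depth taken from monitoring: it derives how many workers should be running so idle resource is released rather than left ticking over. The sizing figures are illustrative, not any provider’s API.

```python
import math

def workers_needed(queue_depth: int, jobs_per_worker: int = 4,
                   min_workers: int = 0, max_workers: int = 20) -> int:
    """Size the worker pool from current demand; an empty queue
    scales everything down to zero so nothing is billed."""
    wanted = math.ceil(queue_depth / jobs_per_worker)
    return max(min_workers, min(max_workers, wanted))

print(workers_needed(13))  # 13 queued transcodes -> 4 workers
print(workers_needed(0))   # nothing queued -> 0 workers
```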

Adopting agile working practices to achieve a dynamic and scalable mindset is critical to improving workflows. There is no place for keeping things in case they come in useful one day, as this removes flexibility and creates waste. Workflows should only be created when needed and then rapidly deleted when they are not.

An often-unseen benefit for broadcasters transitioning to cloud operations is that specific jobs can be costed. All the data needed to understand resource costs, as well as which workflows will be required, is readily available, allowing broadcasters to quickly provide unit costs before a production is brought to life.
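A minimal sketch of such unit costing, assuming three billing dimensions and hypothetical list prices; in practice the rates and usage figures would come from the provider’s billing data.

```python
from dataclasses import dataclass

@dataclass
class JobUsage:
    compute_hours: float    # VM/container hours consumed by the job
    gb_stored_month: float  # storage footprint, GB-months
    gb_egress: float        # data delivered out of the cloud, GB

# Hypothetical list prices per unit; real rates come from the provider.
RATES = {"compute_hour": 0.34, "gb_month": 0.023, "gb_egress": 0.09}

def unit_cost(usage: JobUsage) -> float:
    """Cost a single workflow run so a production can be quoted up front."""
    return (usage.compute_hours * RATES["compute_hour"]
            + usage.gb_stored_month * RATES["gb_month"]
            + usage.gb_egress * RATES["gb_egress"])

print(f"${unit_cost(JobUsage(6.5, 120, 40)):.2f}")  # -> $8.57
```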

Transitioning to the cloud is a massive undertaking, but working with industry specialists who have already achieved integration will help deliver a safe, flexible and reliable operation.



Cloud systems benefit from being highly scalable and flexible, as entire infrastructures can be spun up on demand to respond to peaks without having to over-provision or over-engineer a design. The need to build “just in case” static systems, where expensive resource sits around doing nothing for weeks or even months on end, has been relegated to the history books.

Key to building optimized workflows is identifying where efficiencies can be improved by managing effort and finding repetitive, well-defined tasks that can be automated. Cloud computing not only provides the means to automate these tasks but, through the interconnectedness of the virtualized ecosystem, can also determine where the efficiencies can be made and how.

Static systems belong to the past because they are built around the peak-demand model: the highest possible demand on a system must be known in advance of its design. This was easy to cater for in the days of relatively slow-moving broadcast technology.

However, as technology has advanced beyond all recognition and new formats and business models have been introduced, fast-changing infrastructures have become a necessity, driven by rapidly evolving user requirements for Direct to Consumer (D2C) and streaming services.

Enabling Technologies

IP is an enabling technology for broadcasters as it allows them to harness the power and flexibility of COTS and cloud infrastructures. Although these may still rely on physical hardware components such as servers and network switches, they also enable a massively flexible software approach to system design.

If public cloud systems are used, broadcasters can concern themselves less with hardware resource management and procurement, and focus more on building flexible and scalable workflows through software design. The combination of software and software-controllable systems provides the ultimate in flexibility and scalability for workflow development.

It’s worth taking a step back and thinking about what we really mean by scalability and flexibility. Scalability is the ability to increase and decrease the capacity of a system. Not only is this achievable through virtualization and cloud infrastructures, but also through network routing with software-defined networks.

A virtualized environment can allocate different resource to different VMs so they can be fine-tuned to meet the needs of specific services. In this example, RP-B has more CPU power and memory than RP-A, allowing the connected VMs to be used for a CPU-intensive task such as transcoding.

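The resource-pool idea in the figure can be sketched in a few lines: two pools with different CPU and memory capacity, and a placement function that steers a CPU-intensive VM such as a transcoder to the larger pool. The pool names follow the figure, but the sizes and the placement logic are illustrative assumptions, not any hypervisor’s API.

```python
# Pool capacities are illustrative; RP-B is deliberately the larger pool.
POOLS = {
    "RP-A": {"vcpus": 8,  "ram_gb": 32},   # lighter tasks, e.g. file transfer
    "RP-B": {"vcpus": 32, "ram_gb": 128},  # CPU-intensive tasks, e.g. transcoding
}

def place_vm(task: str, vcpus: int, ram_gb: int) -> str:
    """Pick the smallest pool whose capacity satisfies the VM's request."""
    for name, cap in sorted(POOLS.items(), key=lambda p: p[1]["vcpus"]):
        if vcpus <= cap["vcpus"] and ram_gb <= cap["ram_gb"]:
            return name
    raise RuntimeError(f"no pool can host {task}")

print(place_vm("transcode", vcpus=16, ram_gb=64))  # -> RP-B
print(place_vm("ftp-push", vcpus=2, ram_gb=8))     # -> RP-A
```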

Improved Supply Chains

Most hardware components used in datacenters are readily available from industry-standard providers. Admittedly, the type of servers and network switches used for broadcasting won’t be available in a high street store, but they will be available from the many standard providers throughout the world. Furthermore, these suppliers and manufacturers provide equipment for much larger industries than broadcasting; consequently, it is much easier to procure than traditional broadcast gear.

IT industry vendors also provide many different service level agreement (SLA) warranties that give broadcasters the peace of mind needed when running a 24/7 facility. Most have representatives on every continent and can provide response and repair times ranging from hours to days or weeks, depending on the type of service the broadcaster requires.

Few broadcasters have the opportunity to build a brand-new greenfield site. Instead, they must integrate the new datacenter with the existing broadcast infrastructure, even while most of the facility is live and on-air. Datacenters can be built to meet the requirements of workflows, allowing broadcasters to transition safely and methodically, resulting in a low-risk integration as the system expands. In the IP world, there doesn’t have to be a single change-over event. Cloud systems can be brought online and accurately monitored to meet the needs of the service.

Complexity Choice

Public cloud systems further expand on the integration theme as they will have more capacity than we would probably ever need. A solution can be as simple as a single server or as complex as a highly integrated and fully scalable system. But as integration starts, the requirement will probably sit somewhere between these two extremes.

Virtualization makes the cloud possible, whether public or private. It not only provides resilience through distributed server processing, but also delivers scalability: the ability to match user demand quickly and efficiently.

Scalability works because it takes advantage of the redundant and unused resource in a server or cluster of servers. A transcoding process is generally CPU and memory intensive: anybody looking at even the most basic CPU and memory usage software will see virtually all the CPUs jump to 100% during a transcode. But running a similar test on a file transfer function such as FTP will show CPU and memory usage staying low while the I/O usage on the network card goes through the roof.
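That complementary profile is what lets a scheduler pack jobs together so neither the CPU nor the network card sits idle. The sketch below runs a first-fit packing pass over hypothetical CPU and network profiles, expressed as fractions of one host’s capacity, pairing a CPU-hungry transcode with an I/O-hungry transfer on the same host; the figures are illustrative, not measurements.

```python
# Hypothetical per-job resource profiles, as fractions of one host's capacity.
JOBS = [
    {"name": "transcode-4k", "cpu": 0.9, "net": 0.1},
    {"name": "ftp-push",     "cpu": 0.1, "net": 0.8},
    {"name": "proxy-gen",    "cpu": 0.6, "net": 0.2},
]

def fits(host: dict, job: dict) -> bool:
    """A job fits if both its CPU and network demands still fit on the host."""
    return host["cpu"] + job["cpu"] <= 1.0 and host["net"] + job["net"] <= 1.0

hosts: list[dict] = []
for job in JOBS:  # first-fit: reuse an existing host before starting a new one
    target = next((h for h in hosts if fits(h, job)), None)
    if target is None:
        target = {"cpu": 0.0, "net": 0.0, "jobs": []}
        hosts.append(target)
    target["cpu"] += job["cpu"]
    target["net"] += job["net"]
    target["jobs"].append(job["name"])

print([h["jobs"] for h in hosts])  # -> [['transcode-4k', 'ftp-push'], ['proxy-gen']]
```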
