No More Flying Blind In The Cloud
Migrating live and file-based video services to the cloud holds the promise of huge flexibility in deployment architectures, geographical backhaul possibilities, virtually limitless peak provisioning, pay-per-use pricing, on-demand access to machine learning and other advanced intellectual property, and much more. This migration is still in its infancy and will ultimately drive new cloud business models and partnerships to create viable financial common ground, but there are critical technical challenges facing video service providers looking to de-risk the move to the cloud.
Most content and service providers have operational teams that understand video very well and have years of experience working with on-premise video architectures. Many of these teams have Network Operation Centers (NOCs) that allow them to monitor the video as it traverses their video networks. The video is typically inspected at each demarcation point between pieces of equipment to provide transport (Quality of Service – QoS) and video/audio content (Quality of Experience – QoE) visibility, so operators know the video is good before it enters the delivery pipes to consumers and can then track it across the delivery network.
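To make that concrete, the sketch below shows the kind of low-level QoS check a probe might run at a demarcation point: scanning an MPEG-TS capture for continuity counter errors per PID, one of the classic TR 101 290 priority-1 measurements. It is a minimal illustration rather than any vendor's implementation, and the capture file name is an assumption.

```python
# Minimal sketch of a demarcation-point QoS check: count MPEG-TS continuity
# counter errors per PID. A full probe would also handle duplicate packets,
# the discontinuity_indicator flag and resynchronisation on sync loss.

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47
NULL_PID = 0x1FFF

def continuity_errors(ts_bytes: bytes) -> dict[int, int]:
    """Count continuity_counter discontinuities per PID."""
    last_cc: dict[int, int] = {}
    errors: dict[int, int] = {}
    for offset in range(0, len(ts_bytes) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        pkt = ts_bytes[offset:offset + TS_PACKET_SIZE]
        if pkt[0] != SYNC_BYTE:
            continue  # out of sync; a real probe would resynchronise here
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        if pid == NULL_PID:
            continue  # continuity counter is undefined for null packets
        has_payload = bool(pkt[3] & 0x10)
        cc = pkt[3] & 0x0F
        if has_payload:
            if pid in last_cc and cc != ((last_cc[pid] + 1) & 0x0F):
                errors[pid] = errors.get(pid, 0) + 1
            last_cc[pid] = cc
    return errors

if __name__ == "__main__":
    with open("demarcation_capture.ts", "rb") as f:  # hypothetical capture file
        report = continuity_errors(f.read())
    for pid, count in sorted(report.items()):
        print(f"PID 0x{pid:04X}: {count} continuity error(s)")
```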
For many content and service providers, moving their video services or video distribution to the cloud is a daunting prospect because their operations teams lose that visibility. Many are used to “walled garden” architectures and do not understand the complexity of the cloud, and the thought of sending precious streams off into the unknown with no knowledge of whether they arrived intact is just too big a leap to make. The latest wave of reliable transport technologies (SRT, Zixi, Aspera) can help carry the content to the relevant cloud data center for processing, but it still needs to be checked before and after the video processing pipeline. Otherwise, if the viewing experience is bad, how do you even begin to diagnose the issue? Figuring out where something has gone wrong without integrated cloud monitoring is like trying to find the needle in the proverbial haystack.
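As a rough illustration of checking the stream before and after the pipeline, the sketch below compares what ffprobe reports at the ingress and egress of a cloud processing stage. It assumes an ffprobe build with SRT (libsrt) support; the URLs are placeholders, and a production probe would measure far more than stream counts.

```python
# Hedged sketch: compare the streams ffprobe sees going into and coming out of
# a cloud processing pipeline. Assumes ffprobe was built with SRT support;
# the URLs below are placeholders, not real endpoints.
import json
import subprocess

def probe(url: str) -> dict:
    """Return ffprobe's JSON view of a source (live URL or file)."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_streams", "-show_format", url],
        capture_output=True, text=True, check=True, timeout=30,
    )
    return json.loads(result.stdout)

def stream_counts(url: str) -> dict[str, int]:
    """Count streams by type (video, audio, data, subtitle)."""
    counts: dict[str, int] = {}
    for s in probe(url).get("streams", []):
        kind = s.get("codec_type", "unknown")
        counts[kind] = counts.get(kind, 0) + 1
    return counts

if __name__ == "__main__":
    ingress = stream_counts("srt://ingest.example.com:9000?mode=caller")
    egress = stream_counts("srt://egress.example.com:9001?mode=caller")
    for kind, n in ingress.items():
        if egress.get(kind, 0) < n:
            print(f"egress is missing {kind} streams: {n} in, {egress.get(kind, 0)} out")
    print("ingress:", ingress, "egress:", egress)
```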
Plotting a safe course in your migration to the cloud
When migrating video services to the cloud, effective monitoring of all video streams is absolutely essential to understand what is happening at each stage of the delivery process. Video providers need to understand and plan for this before they start their migration to cloud-based video services, whether for 24/7 channels or for scheduled/on-demand live events. For live events, the channel and its associated monitoring may only be orchestrated and deployed for the duration of the event (with suitable buffers before and after for pre-testing and post-event data analysis), so operational validation of the streams as they transition to and across the cloud is critical to isolating issues should anything go wrong.
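A minimal sketch of that orchestration pattern follows: the monitoring window is derived from the event schedule plus pre- and post-event buffers, and the deploy/teardown calls are placeholders for whatever orchestration API is actually in use.

```python
# Minimal sketch of monitoring deployed only for the lifetime of a live event,
# with pre-event and post-event buffers. deploy_probes / retire_probes are
# placeholders, not a real vendor or cloud API.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class MonitoringWindow:
    channel: str
    start: datetime   # probes online (includes pre-event testing buffer)
    end: datetime     # probes retired (includes post-event analysis buffer)

def plan_window(channel: str, event_start: datetime, event_end: datetime,
                pre_buffer: timedelta = timedelta(minutes=30),
                post_buffer: timedelta = timedelta(hours=1)) -> MonitoringWindow:
    """Wrap the event with buffers for pre-testing and post-event data analysis."""
    return MonitoringWindow(channel, event_start - pre_buffer, event_end + post_buffer)

def deploy_probes(window: MonitoringWindow) -> None:    # placeholder
    print(f"deploy probes for {window.channel} at {window.start.isoformat()}")

def retire_probes(window: MonitoringWindow) -> None:    # placeholder
    print(f"retire probes for {window.channel} at {window.end.isoformat()}")

if __name__ == "__main__":
    w = plan_window("event-channel-1",
                    datetime(2024, 6, 1, 19, 0), datetime(2024, 6, 1, 22, 0))
    deploy_probes(w)
    retire_probes(w)
```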
As a core principle, the workflow’s monitoring needs to be designed in as early as possible. QoE/QoS is just as essential as ever, but different types of protocol monitoring are required at various stages in the chain, such as at the headend compared with later throughout the content delivery network (CDN). Furthermore, the system might be used for more than just quality monitoring: tracking SCTE marker insertion for advertising and checking correct splicing for Live-VoD and VoD-Live transitions are possible cross-over applications that may need to be considered at an early stage. Lastly, and many people overlook this, there is a need for a real-time feedback loop for dynamic behavior within the delivery system. As we start to take advantage of cloud flexibility for auto-scaling and self-verifying/healing architectures, real-time monitoring becomes an absolutely critical part of the feedback mechanism to make the right dynamic decisions.
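The sketch below illustrates the shape of such a feedback loop: monitoring KPIs feed a decision step that can trigger scaling or self-healing actions. The metric names, thresholds and actions are illustrative assumptions, not any vendor's actual API.

```python
# Hedged sketch of a real-time feedback loop: monitoring KPIs drive dynamic
# decisions for an auto-scaling / self-healing delivery architecture.
from dataclasses import dataclass

@dataclass
class StreamKpis:
    continuity_errors_per_min: float   # QoS: TR 101 290 style transport errors
    black_frame_seconds: float         # QoE: content-level check
    packet_loss_pct: float             # transport-level (e.g. SRT link stats)

def decide(kpis: StreamKpis) -> str:
    """Map current KPIs to a dynamic action for the delivery system."""
    if kpis.black_frame_seconds > 5:
        return "failover-to-backup-source"       # self-healing path
    if kpis.packet_loss_pct > 2.0:
        return "increase-retransmission-buffer"  # trade latency for resilience
    if kpis.continuity_errors_per_min > 10:
        return "restart-transcoder-instance"
    return "no-action"

if __name__ == "__main__":
    print(decide(StreamKpis(continuity_errors_per_min=0.2,
                            black_frame_seconds=0.0,
                            packet_loss_pct=3.4)))
```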
Cloud contribution of live video is becoming a hot topic, and effective, versatile monitoring is especially important for contribution. Live video needs smooth delivery, and traditional TCP-based delivery can be unstable. As low latency becomes more important, jitter will become more critical again, just as it did for multicast. The more real-time the video needs to be, the smoother the delivery needs to be, and any inconsistencies may affect the whole chain. This also drives the need for more real-time monitoring, because the tolerances in delivery between the elements of the delivery architecture become tighter.
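To show that smoothness is measurable rather than just a feeling, the sketch below computes a running interarrival jitter estimate in the spirit of RFC 3550, using packet arrival times against their nominal send-side spacing. The sample arrival times are made up for illustration.

```python
# Minimal sketch of a jitter measurement: a smoothed interarrival jitter
# estimate in the spirit of RFC 3550, from arrival times vs nominal spacing.

def interarrival_jitter(arrival_s: list[float], nominal_interval_s: float) -> float:
    """Return the smoothed jitter estimate (seconds) after the last packet."""
    jitter = 0.0
    for i in range(1, len(arrival_s)):
        # D = deviation of the observed spacing from the expected spacing
        d = abs((arrival_s[i] - arrival_s[i - 1]) - nominal_interval_s)
        jitter += (d - jitter) / 16.0   # RFC 3550 smoothing factor
    return jitter

if __name__ == "__main__":
    # Packets nominally sent every 10 ms; one late burst around packet 4.
    arrivals = [0.000, 0.010, 0.020, 0.031, 0.048, 0.050, 0.060]
    print(f"jitter ~= {interarrival_jitter(arrivals, 0.010) * 1000:.2f} ms")
```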
Hybrid approaches provide a bridge to the cloud
On-premise video infrastructures aren’t going to disappear overnight and will continue to play an important role for the foreseeable future. The $64 million question is how the industry can provide hybrid solutions that support and evolve existing deployments while enabling the transition of video workflows to the cloud.
Companies making the transition can’t handle too many changes at once, so transitioning ‘familiarity’ is key. At Telestream, we are investing in enabling a smooth transition with the live and VoD monitoring systems, file-based QC and our Vantage Media Processing Platform through Vantage Cloud Port. This hybrid approach means that workflows and transcoding functions can be deployed in the cloud as necessary, and in a seamless way, using the tools and APIs people are familiar with today. For monitoring, it is essential that the KPIs operations teams know from on-premise systems remain available in the cloud, supplemented with relevant KPIs for the newer transport protocols such as SRT and Zixi. As these protocols extend video transport globally across the major cloud provider backbones with the help of distribution frameworks like Haivision Hub, Zixi’s ZenMaster and AWS MediaConnect, monitoring of the video on a global basis across these networks will become as essential and familiar as it is today in traditional architectures.
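One way to picture that continuity is a single health record that keeps the familiar on-premise measurements alongside the supplementary transport-link KPIs, as sketched below. The field names are illustrative assumptions, not Telestream's or any other vendor's actual schema.

```python
# Hedged sketch of "familiar KPIs, supplemented": one per-channel record that
# carries traditional transport-stream measurements plus SRT-style link stats.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FamiliarKpis:
    # Measurements operators already know from on-premise monitoring
    tr101290_p1_errors: int = 0
    tr101290_p2_errors: int = 0
    audio_silence_s: float = 0.0
    video_freeze_s: float = 0.0

@dataclass
class SrtLinkKpis:
    # Supplementary KPIs for reliable-transport hops in the cloud path
    rtt_ms: float = 0.0
    retransmitted_pct: float = 0.0
    dropped_pct: float = 0.0
    negotiated_latency_ms: float = 0.0

@dataclass
class ChannelHealth:
    channel: str
    familiar: FamiliarKpis = field(default_factory=FamiliarKpis)
    srt: Optional[SrtLinkKpis] = None   # present only where the hop uses SRT

if __name__ == "__main__":
    health = ChannelHealth("news-hd", srt=SrtLinkKpis(rtt_ms=48.0, retransmitted_pct=0.7))
    print(health)
```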
We live in times of unprecedented change, and the on-premise-to-cloud transition is all the harder because it comes on the back of many other industry changes around 4K/8K, HDR, low latency, and the use of secure, reliable delivery protocols and architectures such as SRT and Zixi.
At the same time, the video industry is migrating to more evolved underlying architectures leveraging microservices and more on-demand API usage, so familiarity, to the extent possible, will be a big help while companies catch up and retrain to embrace new architectures and methodologies.