Improving OTT’s Sustainability With More JIT Capabilities

To be sustainable, OTT services must be energy-efficient. As with other production processes, just-in-time (JIT) principles should be applied as fully as possible to find new and fundamental efficiencies in OTT content delivery.


This article is part of 'The Big Guide To OTT - The Book'

Energy efficiency requires waste reduction. In a perfect world, this means we should only produce and deliver what is required – if no one asked for it, do not produce or deliver it. This World of OTT series started out by describing how OTT is a pull system, only delivering content when a customer requests it. Except that, to deliver video on demand (streamed files) and live content (live streams), there is typically a whole lot of pushing taking place, which we need to evaluate.

Addressing Wastefulness In Push-based Content Delivery

Linear broadcasting over terrestrial and satellite networks was built on the push model. The waste it generates has a much lower impact on the planet than that of other industries, such as manufacturing, because it does not take the form of physical inventory that might go to landfill at some point. Instead, the waste is in consuming the same resources whether one person or one million people actually watch the content.

The wastefulness of the push system can be observed most clearly on the distribution side of the value chain. From a distribution perspective, linear broadcasting is highly efficient. Passive receivers like aerials and satellite dishes can lie dormant and only receive content from a head-end when the television is tuned in. And those pieces of infrastructure can endure for many years, much longer than most CPU-based electronics like laptops and mobile phones. Video on demand changed the efficiency dynamic because content delivery now had to fulfil personalised consumption demands. New systems were deployed that could store content close to consumers, ready for delivery. In cable TV and IPTV systems, large VOD libraries were built out in multiple locations to deliver content quickly. Large transcoding farms were built to process thousands of VOD assets. Often, those VOD assets were stored in packaged and encrypted form already, because it was cheaper to store more assets than it was to process them on demand at the point of consumption.
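
A rough, back-of-envelope comparison shows why storing every pre-packaged variant often won out. All of the figures below – storage and compute prices, asset sizes, request rates – are illustrative assumptions, not measured values.

```python
# Illustrative comparison: store every pre-packaged variant of a VOD
# asset (push) vs. keep one mezzanine copy and package per request (JIT).
# All prices and sizes are hypothetical placeholders.

GB_MONTH_PRICE = 0.02        # assumed storage cost, $/GB/month
PACKAGE_JOB_PRICE = 0.005    # assumed compute cost per packaging request, $

mezzanine_gb = 10            # size of one title's mezzanine file
variants = 8                 # bitrate ladder x packaging formats
variant_gb = 2               # average size of one packaged variant
requests_per_month = 200

push_cost = (mezzanine_gb + variants * variant_gb) * GB_MONTH_PRICE
jit_cost = mezzanine_gb * GB_MONTH_PRICE + requests_per_month * PACKAGE_JOB_PRICE

print(f"pre-packaged storage: ${push_cost:.2f}/month")   # -> $0.52
print(f"JIT packaging:        ${jit_cost:.2f}/month")    # -> $1.20
```

With storage this cheap and a popular title, pre-packaging wins on cost, which is exactly the historical trade-off described above. The balance shifts towards JIT as catalogues grow and long-tail titles are rarely requested.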

This same model transferred over to OTT, which largely originated with video on demand services. Central push systems produced every variant of content required for the consumer. Then more live linear content became available and encoders and packagers pushed the content to the origin which then pushed the content to the CDN, which then pushed the content deep into its edge cache network, ready for the customer to request. Even if they didn’t request it.

Today, pull systems have been implemented in many parts of this ecosystem. Many video services only store the mezzanine file formats in central storage and do JIT packaging and encryption on demand. They no longer push all pre-prepared assets to the edge of their networks. Many CDNs have intermediate caching layers or caching intelligence so that a VOD asset or a live stream is kept cached in as few locations as possible until other locations really need it.
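
A minimal sketch of that pull-through idea: each cache tier only fetches and keeps an asset when a downstream request actually arrives, so nothing is positioned at the edge speculatively. The tier names and classes are illustrative, not any particular CDN's API.

```python
# Pull-through cache hierarchy: content is stored at a tier only when a
# request actually reaches it (pull), never pre-positioned (push).

class CacheTier:
    def __init__(self, name, upstream):
        self.name = name
        self.upstream = upstream     # next tier towards the origin
        self.store = {}              # asset_id -> content

    def get(self, asset_id):
        if asset_id in self.store:
            print(f"{self.name}: HIT {asset_id}")
            return self.store[asset_id]
        print(f"{self.name}: MISS {asset_id}, pulling from upstream")
        content = self.upstream.get(asset_id)
        self.store[asset_id] = content   # cached only because it was requested
        return content

class Origin:
    def get(self, asset_id):
        return f"<content of {asset_id}>"

mid = CacheTier("mid-tier", upstream=Origin())
edge_a = CacheTier("edge-A", upstream=mid)
edge_b = CacheTier("edge-B", upstream=mid)

edge_a.get("movie-123")   # misses all the way back to the origin
edge_b.get("movie-123")   # edge-B misses, but the mid-tier already has it
```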

The CDN model of delivery is not yet energy-efficient enough. But it is expected to become the dominant video delivery method because it fulfils our demand for personalisation and convenience, and it utilises the huge capacity of broadband networks. So what should happen next? There are two primary paths towards the sustainability of our OTT delivery systems: one is about infrastructure, and one is about processing philosophy.

Infrastructure

Subjects like video-tuned hardware, spot instances and increased bandwidth are the hot sustainability topics of today because they all seek to optimise resource usage.

Video-tuned refers to hardware that is architected specifically for video processing. Compared with website acceleration, video streaming involves fewer client requests, fewer and larger content chunks, and more consistent delivery of fewer assets. There is a fundamentally lower level of processing required for video streaming – around 30-40% lower by some test results – which can translate into a more efficient hardware platform. Given that about 80% of internet traffic is video, architecting purpose-built video infrastructure is a major opportunity for CDNs to contribute big sustainability improvements.
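
Taking those two figures at face value gives a rough sense of the ceiling on the opportunity; the arithmetic below simply combines the article's numbers and is not a measurement.

```python
# Back-of-envelope ceiling on CDN-wide processing savings, using the
# figures above: ~80% of traffic is video, and video-tuned platforms
# need ~30-40% less processing for it.

video_share = 0.80
saving_low, saving_high = 0.30, 0.40

overall_low = video_share * saving_low
overall_high = video_share * saving_high
print(f"potential overall saving: {overall_low:.0%} to {overall_high:.0%}")
# -> roughly 24% to 32%, if all video moved to video-tuned hardware
```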

Spot instances refer to optimally utilising available hardware for video processing, an approach cloud providers now support. A spot instance means that a piece of processing capacity, like a CPU or GPU, of a particular type and in a particular location, can be utilised for a very short time (e.g., seconds or minutes). This is developing into a very important approach for supporting peak VOD asset processing, as well as low-latency live processing that can span multiple cloud providers if required to access the necessary lowest-cost capacity.
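
The scheduling decision this enables can be sketched simply: pick the cheapest currently-available capacity of the required type, wherever it happens to be. The providers, prices and SpotOffer structure below are invented for illustration; a real scheduler would query each cloud's spot-pricing API.

```python
from dataclasses import dataclass

# Hypothetical snapshot of spot capacity across clouds and regions.
@dataclass
class SpotOffer:
    provider: str
    region: str
    instance_type: str      # e.g. a GPU class suited to transcoding
    price_per_hour: float
    available: bool

offers = [
    SpotOffer("cloud-a", "eu-west",    "gpu-video", 0.31, True),
    SpotOffer("cloud-b", "eu-central", "gpu-video", 0.27, True),
    SpotOffer("cloud-a", "us-east",    "gpu-video", 0.22, False),
    SpotOffer("cloud-c", "eu-west",    "gpu-video", 0.35, True),
]

def cheapest_capacity(offers, instance_type):
    candidates = [o for o in offers
                  if o.available and o.instance_type == instance_type]
    return min(candidates, key=lambda o: o.price_per_hour, default=None)

best = cheapest_capacity(offers, "gpu-video")
print(f"run job on {best.provider}/{best.region} at ${best.price_per_hour}/h")
```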

Increased bandwidth sounds simple, but where that bandwidth is placed is incredibly strategic, given the inherently massive web that exists to connect us all. Telcos are building out FTTH (fiber to the home), DAA (distributed access architectures) and 5G to meet our needs for next-gen video services. 5G means distributing smaller network nodes more widely, FTTH means centralising network nodes compared to today, and DAA means pushing processing capability deeper into cable networks. These different architectures create different choices for caching content. For instance, with 5G can we imagine mini-cache locations serving small groups of people? In FTTH, as network junctures are pulled back to more centralised positions, we will leverage a simpler light-speed path to the home. But will we ultimately need a new network splitter with caching functionality built close to the consumer again? Despite these questions, one thing seems certain: the push-pull optimisation will continue indefinitely in the quest for efficiency and sustainability in the face of growing demand.

Processing Philosophy

Our workflow choices make a big difference to OTT’s sustainability. In principle, if we implement a pure pull system for video delivery we would not store content anywhere other than at the centre of the ecosystem and we would rely on fast processing and bandwidth to do the rest. Only when someone requests content would we prepare and deliver it. It would be a pull-based playout system. Is this feasible and does it make sense?

The full answer is probably not. Theoretically, if bandwidth from the Origin to the consumer was sufficient, it would probably be technically and commercially feasible. The speed of light is fast enough. It would be beneficial to centralise power supplies and only originate streams when requested. Perhaps this could be done on an ISP-by-ISP basis for the largest D2C streamers, given that 80% of consumers in each country are generally served by a small number of ISPs. Those few ISPs could build centralised cache facilities in their networks that make a single request for content from the D2C streamer's Origin for live streams and VOD assets, and then deliver all the way to consumers over their fast broadband connections. The D2C streamers would simply serve content to 4-5 ISPs per country plus a generalist peering point for the remaining smaller ISPs. The big problem with this idea is that the ISP Core network, where a centralised cache would live, is a natural bottleneck given its sprawling connections with the Access network and other ISP networks. A lot of content would need to cross the Core network if it wasn't served from within the Access network. And it is neither easy nor inexpensive to keep expanding the Core. To keep an efficient Core, there is a need to build infrastructure deeper in the network rather than expand the whole of the Core on a regular basis.
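
A rough load comparison shows why the Core becomes the bottleneck: served from one centralised cache, every viewer's stream crosses the Core; served from caches inside each Access network, the Core only carries one cache-fill per asset per Access segment. The subscriber counts, bitrates and network sizes below are illustrative assumptions.

```python
# Illustrative Core-network load: one centralised ISP cache vs. caches
# placed inside each Access network. All numbers are assumptions.

concurrent_viewers = 2_000_000
bitrate_mbps = 8              # average stream bitrate
access_networks = 200         # Access segments in the ISP
unique_assets = 500           # distinct live channels / hot titles

# Centralised cache: every viewer's stream crosses the Core.
core_load_tbps = concurrent_viewers * bitrate_mbps / 1e6

# Access-level caches: the Core carries only one fill per asset
# per Access network, regardless of how many people watch.
fill_load_tbps = access_networks * unique_assets * bitrate_mbps / 1e6

print(f"centralised cache:   {core_load_tbps:.1f} Tbps across the Core")
print(f"access-level caches: {fill_load_tbps:.1f} Tbps of cache-fill traffic")
# -> 16.0 Tbps vs 0.8 Tbps under these assumptions
```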

ISPs and CDNs will therefore continue to create capacity closer to the consumer, and Access networks will expand their overall capacity. This allows consumers to stream to their hearts' content. But to be efficient and sustainable we still need intelligent caching that is as lean as possible, rather than brute-force big storage systems. We need rules for moving content closer to the consumer that emphasise lean pull thinking, not bloated push thinking, such as the admission rule sketched below.
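
One simple "lean pull" rule: an asset only earns a place in an edge cache once it has been requested several times within a time window, so one-off requests pass through to the upstream tier without occupying edge storage. The threshold and window below are illustrative tuning parameters, not recommendations.

```python
import time
from collections import defaultdict

# "Lean pull" admission: an asset earns its edge slot by being requested
# K times within WINDOW seconds; single requests are served via upstream.
K = 2
WINDOW = 300.0

recent_requests = defaultdict(list)   # asset_id -> request timestamps
edge_cache = {}

def should_admit(asset_id):
    now = time.time()
    hits = [t for t in recent_requests[asset_id] if now - t < WINDOW]
    hits.append(now)
    recent_requests[asset_id] = hits
    return len(hits) >= K

def serve(asset_id, fetch_upstream):
    if asset_id in edge_cache:
        return edge_cache[asset_id]
    content = fetch_upstream(asset_id)
    if should_admit(asset_id):         # demand-driven, never speculative
        edge_cache[asset_id] = content
    return content
```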

So while the content distribution architectures evolve to become more energy-efficient and lean out their management of content, the purist pull system implementation needs to focus further upstream, at the origination platform. This is the moment when push-centric broadcasting meets the issue of processing power. Already many OTT services operate using JIT packaging. This is relatively lightweight processing and can be performed on demand for VOD content, which is the heaviest user of push and pull technology for placing the right content in the right place at the right time. But transcoding is different. It is processing-intensive, and while faster-than-real-time processing is available, it is traditionally highly engineered and high cost. So JIT transcoding has not been the norm, except in live streaming, where the expense is simply part of the basic requirement to deliver content to the end device.
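
JIT packaging is lightweight because it only rewraps media that is already encoded. A sketch of the idea using ffmpeg's stream copy, triggered per request to repackage an already-encoded rendition into HLS; the file paths and segment length are illustrative.

```python
import subprocess

def jit_package_to_hls(rendition_mp4: str, out_playlist: str) -> None:
    """Rewrap an already-encoded rendition into HLS segments on request.

    Stream copy (-c copy) means no re-encoding takes place: this is
    container work only, which is why JIT packaging is cheap enough
    to run per request while JIT transcoding historically was not.
    """
    subprocess.run(
        [
            "ffmpeg", "-i", rendition_mp4,
            "-c", "copy",                 # no transcode, just repackage
            "-f", "hls",
            "-hls_time", "6",             # illustrative segment length
            "-hls_playlist_type", "vod",
            out_playlist,
        ],
        check=True,
    )

# e.g. jit_package_to_hls("movie-123_1080p.mp4", "movie-123_1080p.m3u8")
```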

The technical issue that needs to be solved is to move the pull line further away from the consumer. The use of spot instances in cloud infrastructure is a big enabler – this model allows access to the exact type of compute resource for very short periods. But to leverage this, software has to be designed for it. We have to break up a live stream or VOD asset, process it in a distributed manner and stitch it back together again. Potentially these chunks of a single video segment could be farmed out to different cloud environments to access the capacity they need, all done within milliseconds so it is unnoticeable to the human eye. Fiber connectivity, faster processing units and cloud infrastructure make this possible. Leading solutions built for hyper-elasticity are taking advantage. Quortex is one young company tackling this challenge. “JIT should be the core of a Live Streaming solution”, states CEO Marc Baillavoine. “Every single piece of the system should work in a pull mode. A stateless approach using spot instances and leveraging cloud scalability is extremely efficient. We always make sure that every kWh we use has a purpose.”
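
A sketch of the break-up/stitch-back pattern described above: split a source into independently decodable segments, transcode them concurrently on whatever capacity is available (spot instances, potentially in different clouds), then reassemble in order. The helper functions are placeholders for real splitters and workers, and the segment length is illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

SEGMENT_SECONDS = 4   # illustrative chunk length

def split_into_segments(source: str) -> list:
    """Placeholder: cut the source into independently decodable chunks
    (in practice, on keyframe/GOP boundaries)."""
    return [f"{source}.seg{i}" for i in range(10)]

def transcode_segment(segment: str) -> str:
    """Placeholder for a worker on short-lived spot capacity,
    potentially in a different cloud region per segment."""
    return f"{segment}.encoded"

def stitch(encoded_segments: list) -> str:
    """Placeholder: concatenate encoded chunks back into one stream,
    preserving the original order."""
    return "+".join(encoded_segments)

def jit_transcode(source: str) -> str:
    chunks = split_into_segments(source)
    with ThreadPoolExecutor(max_workers=8) as pool:
        # map() preserves input order, so the stitch stays frame-accurate
        encoded = list(pool.map(transcode_segment, chunks))
    return stitch(encoded)

print(jit_transcode("live-channel-feed"))
```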

Sustainable OTT

The OTT streaming industry will grow dramatically in the years to come. Network capacity requirements could easily expand 5-10x in the next 5 years, driven by demand for higher bitrates to feed our smart TVs and by larger audiences consuming content over their internet connections. Networks will also have to cope with more consistent usage as the audience shifts from traditional linear and pay-TV platforms to pure OTT services for more hours per day.

We therefore need to be serious about efficiency and sustainability. We need to think hard about maximising the promise of the OTT pull system. We need to ensure the solutions we deploy are optimised for video and do not create unnecessary energy-consumption burdens. We need our video delivery systems, but they must be lean. And because video creates a much larger load on our networks than other use cases, the onus is on the media industry and our media technologies to lead the way.


Contributor: Marc Baillavoine, CEO, Quortex.
