The Streaming Tsunami: The Necessary Evolution Of The TV (From A Broadcaster Perspective) - Part 2

The first part of this article described how Streamers, especially national Public Service Broadcasters, must grapple with the TV landscape to deliver Streaming services that fully satisfy the viewers while simultaneously making an efficient investment in technology. Netflix has set the gold standard. As Streaming scales up and broadcasters shift to becoming streaming-first, it’s now time for the PSBs to follow the Netflix way.


The Netflix way can be summarized as “controlling the User Interface (i.e., the browser) and the Delivery Network”. As PSBs look to deliver broadcast-grade viewing experiences to all of their viewers, both of these elements are critical. This article focuses on the User Interface, which incorporates Content Discovery, Content Playback, and Community Engagement. The Delivery Network, which is all about how to scale live and VOD streaming for the largest audiences, is not covered in this article, but recent articles on the subject are available.

Browser Technology Advancements Supporting PSBs

The User Interface for PSBs is based on a web browser structure. There are limitations in how a single PSB can manage its customer experience in an environment it does not fully control, or for which there are only generalized standards. While Netflix implements its own binary run-time across a wide range of devices, PSBs must adapt themselves to the environment provided by the viewing device. This creates a significant constraint on adapting to new devices, fixing device limitations, or creating a highly consistent user experience across many different devices. All of this adds cost and complexity to the PSBs' streaming services.

But new capabilities are emerging, enabling PSBs to become more Netflix-like.

A fundamental change is the move from the established Web Browser development framework, React, to a new generation of rendering approaches built on WebGL. React is relatively heavyweight in its use of technical resources, but it simplifies Service Operations and it is relatively easy to find Developers. WebGL, on the other hand, gives the application direct access to the device's graphics hardware rather than rendering through the browser's DOM and layout engine, which improves Browser performance compared to React.
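As a rough illustration of the difference, the hypothetical TypeScript snippet below draws directly to a GPU-backed canvas through WebGL rather than asking the browser to lay out DOM elements. The element id and colours are placeholder assumptions, not part of any particular PSB application.

```typescript
// Minimal sketch: render a solid colour straight to the GPU via WebGL,
// bypassing DOM layout entirely. The canvas id "player-ui" is a placeholder.
const canvas = document.getElementById("player-ui") as HTMLCanvasElement;
const gl = canvas.getContext("webgl");

if (gl) {
  // All drawing happens on the graphics hardware; the browser's layout
  // engine is not involved in producing these pixels.
  gl.clearColor(0.0, 0.0, 0.0, 1.0); // opaque black background
  gl.clear(gl.COLOR_BUFFER_BIT);
  // A UI framework built on WebGL would issue draw calls for text, images
  // and focus states here instead of creating DOM nodes.
}
```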

WebGL-based engines are commonly paired with WebAssembly (WASM), a binary format that gives lower-level access to the browser from mainstream programming languages such as C and C++. WASM runs alongside JavaScript, which all major browsers support, and most major Browsers already support WASM itself. A notable exception is Internet Explorer, where code must instead be compiled to asm.js, a highly optimized, low-level subset of JavaScript.
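A minimal sketch of how a browser application might load and call a WASM module compiled from C/C++ is shown below; the file name "layout.wasm" and the exported function "computeLayout" are illustrative assumptions rather than any specific PSB implementation.

```typescript
// Minimal sketch: load a WebAssembly module (e.g. compiled from C/C++) and
// call one of its exports from TypeScript. File name and export are assumed.
async function loadLayoutEngine(): Promise<void> {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch("/layout.wasm"),
    {} // import object: host functions the module needs, empty here
  );

  // Exported functions are called like ordinary JavaScript functions,
  // but execute as pre-compiled binary code.
  const computeLayout = instance.exports.computeLayout as (w: number, h: number) => number;
  console.log("layout nodes:", computeLayout(1920, 1080));
}

loadLayoutEngine().catch(console.error);
```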

WASM is overseen by the World Wide Web Consortium (W3C), which also oversees Web Browser development standards. The goal of WASM is to execute all layout and graphics work on the TV's onboard silicon, but Smart TV processors often lack the power to do this and are also often single-threaded.

Flutter is another tool, developed by Google, that allows Developers to build native-quality applications for mobile, web, and desktop platforms from a single codebase. It utilizes a reactive framework, enabling hot reloads, fast rendering, and expressive UI design.

Like gaming engines, where code is written once and compiled for different devices, Flutter compiles the same application for each platform it targets. This can give PSBs an engine tailored to each device, describing the graphics to render on that device.

Returning to React, this browser development tool has generally required sending the entire JavaScript framework and application code to the browser to process. This has resulted in larger download sizes, increased time to first interaction, reduced browser compatibility, and a poorer user experience on slower devices, especially mobile. Server-Side Rendering (SSR) support was added to the main JavaScript frameworks, which has improved these KPIs, particularly with the recent release of React 18 and its improved memory utilization. Initially, SSR support was delivered by running the rendering engine on a NodeJS server in a centralized Cloud; it then moved to serverless cloud functions, and there is now a trend to process at the Edge.
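The sketch below shows the basic SSR idea on a plain NodeJS server using React's renderToString; the component, page content and port are illustrative assumptions.

```typescript
// Minimal NodeJS sketch of Server-Side Rendering with React.
// The Home component, its content and the port are placeholder assumptions.
import { createServer } from "node:http";
import { createElement } from "react";
import { renderToString } from "react-dom/server";

function Home({ title }: { title: string }) {
  return createElement("h1", null, title);
}

createServer((req, res) => {
  // The HTML is produced on the server, so the browser can paint it
  // before any client-side JavaScript has downloaded or executed.
  const html = renderToString(createElement(Home, { title: "Tonight's schedule" }));
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(`<!doctype html><div id="root">${html}</div>`);
}).listen(3000);
```

The same rendering code can, in principle, be deployed to serverless or Edge runtimes, which is the trend described above.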

Another technology helping Streamers deliver better user experiences on Smart TVs via browsers is Jamstack. Jamstack is an architecture where a website is delivered statically from its hosting location or via CDNs, but provides dynamic content and an interactive experience through JavaScript. The name Jamstack represents the “JAM” in a website - JavaScript, APIs, and Markup. It allows Developers to create API-first, headless applications that run directly on Edge servers.
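As a minimal sketch of the Jamstack pattern, the markup below would be served statically (for example from a CDN) while a small piece of client-side code fetches dynamic content from an API; the endpoint "/api/schedule", the element id and the data shape are placeholder assumptions.

```typescript
// Minimal Jamstack-style sketch: the page itself is static HTML served from
// a CDN; only this fragment is fetched dynamically from an API.
async function hydrateSchedule(): Promise<void> {
  const response = await fetch("/api/schedule"); // assumed endpoint
  const items: { time: string; title: string }[] = await response.json();

  const list = document.getElementById("schedule"); // assumed element id
  if (list) {
    // The static markup stays cacheable; JavaScript adds the dynamic part.
    list.innerHTML = items
      .map((item) => `<li>${item.time} - ${item.title}</li>`)
      .join("");
  }
}

hydrateSchedule().catch(console.error);
```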

Developers' View Of The Edge

According to the State of the Edge 2023 by edgecomputing.io, most Developers agree that websites and applications will predominantly run at the Edge within the next 3 years, due to performance gains, cost savings, and ease of development that Edge functions offer. This is significant for Media Streaming Services given that Smart TV web browsers may need to be more frequently “worked around” to avoid device-level constraints impacting streaming media user experiences.

Key findings from the State of the Edge 2023, primarily based on inputs from Full Stack Engineers, are:

  • The main benefits of Edge computing are website speed, then cost savings, then ease of development.
  • The most valuable use cases for Edge computing are gluing together APIs / databases, authentication, and load balancing (a sketch of this pattern follows this list).
  • Most Developers are focused on these 3 use cases, but dynamic website personalization is a 4th area of focus in the near future.
  • The additional functionalities most important to pair with Edge computing are key value store, relational database, and cache.
  • The top 3 issues preventing the use of Edge computing are connecting Edge functions to other services, debugging issues, and caching data state.
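To make the API-gluing and key value store findings more concrete, the hypothetical edge function below combines two backend APIs into one response and caches the result in a key value store. The KV interface, the URLs and the cache TTL are assumptions for illustration, not any particular edge platform's API.

```typescript
// Hypothetical edge function: glue two backend APIs into one response close
// to the viewer and cache the combined result in a key value store.
// The KVStore interface, URLs and TTL are illustrative assumptions.
interface KVStore {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

export async function handleRequest(request: Request, kv: KVStore): Promise<Response> {
  const cached = await kv.get("home-rail");
  if (cached) {
    return new Response(cached, { headers: { "Content-Type": "application/json" } });
  }

  // "Gluing together APIs": aggregate two upstream services at the Edge.
  const [schedule, recommendations] = await Promise.all([
    fetch("https://api.example.com/schedule").then((r) => r.json()),
    fetch("https://api.example.com/recommendations").then((r) => r.json()),
  ]);

  const body = JSON.stringify({ schedule, recommendations });
  await kv.put("home-rail", body, { expirationTtl: 60 }); // cache for 60 seconds
  return new Response(body, { headers: { "Content-Type": "application/json" } });
}
```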
Streamers' View Of The Edge

A recent webinar involving DAZN, ITVX and MainStreaming highlighted that while Streamers currently approach Edge Computing as a way to decentralize Cloud processing to more distributed Edge locations, there is a strong expectation that offloading processing from Devices to the Edge will be the most important step for Streamers to take. The drivers for this trend are:

  • Streamers need to retain control of the User Experience of their Apps. Removing a dependency on device processing power supports this goal.
  • There are millions of TVs but only hundreds or single-digit thousands of Edge servers used by Media businesses. Saving energy on millions of devices is helpful for the industry’s sustainability credentials.
  • The Apps are becoming richer and more complex, combining multiple experiences into a single environment. Leveraging extra processing power from the Edge reduces the pressure on the processing power of the Device.
  • Delivering next-gen viewing experiences at scale, such as VR and AR, requires more processing power than normal devices can provide. Consolidating processing into suitably powerful Edge servers is a faster way to deliver more advanced viewing experiences.

Netflix have reached the gold standard of App performance and viewer experience, while serving VOD content that is arguably simpler to deliver than Live streaming content. National Broadcasters now have more tools and technologies available to them to close the UX gap with Netflix. Even if Broadcasters probably will not have their own binary run-time deployed at scale by individual TV manufacturers (as Netflix do), and even if they probably will not build and own their own Private Edge platform (as Netflix do), there are commercially available approaches that can deliver very close to the same thing. The dependency on TV manufacturers will persist for Broadcasters, particularly because of the longevity and variety of devices used by Broadcasters' viewers.

In Part 3 we return to this point to consider what Broadcasters, and PSBs in particular, need to manage in the TV domain, along with considerations for sustainability.
