Machine Learning (ML) For Broadcasters: Part 8 - AI And ML Drive TV UX Innovation

AI, primarily Machine Learning (ML), is driving progress in the evolution of the UI (User Interface) for TVs and other consumer devices, alongside other aspects of the overall UX (User Experience).

There is overlap between the UI and the wider user experience as far as ML is concerned, for example in emerging capabilities to match audio output to the user’s acoustic environment through adaptive feedback.

Another common thread is that many applications of AI or ML in the UI feed off advances already made in other sectors, especially personal computers, smart phones and enterprise IT. Past experience in these sectors can help avoid mistakes such as overcomplicating the UI or introducing too much sophistication too soon. UI advances should be led by the longstanding principle that users should be given the shortest possible route to the content they want to watch, which is admittedly easier to state than to achieve.

There are though some unique UI aspects relating to traditional lean back TV viewing, which is still widespread despite the proliferation of access to video and TV services from portable wireless devices. At the same time, the overall viewing experience should be as consistent as possible, which is especially relevant for broadcasters and providers of pay TV services, whether these are legacy, subscription VoD or AVoD (Advertising VoD). In all these cases viewing may occur on devices of widely varying format, lean forward or lean back. On the one hand, users expect the idiosyncrasies of different devices to be catered for, yet they desire some common elements in navigation, search and recommendation. They expect their preferences to be taken into account, yet those preferences are also shaped partly by variable factors such as device type, time of day, and even location. A growing number of video service providers utilize ML to help personalize their UIs while catering for these variables.
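To make this concrete, the sketch below shows one simple way such contextual personalization could work: a logistic regression over device-type, daypart and genre features, trained on synthetic viewing history. The feature names and data are purely illustrative; a real service would train a far richer model on actual viewing logs.

```python
# A minimal sketch of contextual personalization, assuming hypothetical
# feature names and synthetic viewing history; a real service would train a
# richer model on actual viewing logs.
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

# Each row: (device_type, daypart, genre) -> whether the item was watched.
history = [
    ("tv",     "evening", "drama", 1),
    ("tv",     "evening", "news",  0),
    ("phone",  "morning", "news",  1),
    ("phone",  "morning", "drama", 0),
    ("tablet", "evening", "drama", 1),
    ("tablet", "morning", "news",  1),
]

X_raw = [row[:3] for row in history]
y = [row[3] for row in history]

encoder = OneHotEncoder(handle_unknown="ignore")
X = encoder.fit_transform(X_raw)
model = LogisticRegression().fit(X, y)

# Rank candidate genres for the current context: main TV, evening viewing.
candidates = [("tv", "evening", genre) for genre in ("drama", "news", "sport")]
scores = model.predict_proba(encoder.transform(candidates))[:, 1]
for (_, _, genre), score in sorted(zip(candidates, scores),
                                   key=lambda pair: -pair[1]):
    print(f"{genre:<6} p(watch)={score:.2f}")
```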

A key aspect of the UI where ML is intimately involved lies in the evolution of the traditional remote control and the expansion of the TV domain across the home for control of other devices. There is a paradox here: on the one hand viewing is fragmenting across myriad devices and platforms beyond the living room, and yet the primary TV is finding new applications as the hub of the smart home and for non-TV applications that benefit from the big screen, such as video conferencing.

The move towards smart internet connected TVs has fuelled this trend by making it easier to implement applications such as casting from mobile devices, while enabling some computationally intensive ML-based UI related tasks to be performed remotely in tandem with the TV’s own processing capabilities.

The other big UI trend is the use of voice, which also straddles the different device platforms but increasingly features as a bridge between the legacy remote and more advanced UI capabilities, with greater individual personalization enhanced by ML. The remote had become a drag on UI innovation to some extent by preserving the traditional clunky manipulation of on-screen menus through the D-pad (Directional Pad), the four-way controller with one button on each point that has been the mainstay of such devices.

LG’s Magic Remote features AI technology for speech processing but still has the legacy D-Pad. (Source LG).

The D-pad has been preserved largely out of the conservatism inherent in traditional TV and a reluctance to alienate established users, but the effect has been to prevent lean back TV from being as fast and responsive as lean forward streaming services on personal devices. The route to content is often much longer on the main TV than it is on the laptop or smart phone, even though the number of actual clicks or manoeuvres may not be so different.

Recently though we have seen smart TV makers introduce voice in parallel with the traditional button control, bringing the TV UI more in line with streaming devices. AI and ML play various roles in voice UIs, from the underlying natural language processing to personalization and authentication of individual speakers. Voice allied to a traditional remote gives the opportunity to enable the one-to-one personalization long exploited for online video by identifying users from their voices.

Until recently, personalization has been confined to the household level with just some individualization through observation of content being viewed, or sometimes as a result of individual logins. Voice adds an additional dimension, making it easier for the system to identify individual users.
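As a rough illustration of how voice could select an individual profile, the sketch below matches an utterance’s speaker embedding against enrolled household members by cosine similarity. The embedding model, the 256-dimensional vectors and the threshold are all stand-ins; production systems use trained speaker-recognition networks and careful calibration.

```python
# A minimal sketch of voice-based profile selection: each household member is
# enrolled as a speaker embedding (a fixed-length vector that a speaker-
# recognition model would produce; here random vectors stand in for it).
# A new utterance is matched to the closest enrolled profile.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical enrolled profiles (in practice: averaged embeddings of a few
# enrolment utterances per person).
rng = np.random.default_rng(0)
profiles = {
    "alice": rng.normal(size=256),
    "bob":   rng.normal(size=256),
}

def identify_speaker(utterance_embedding: np.ndarray,
                     threshold: float = 0.4) -> str:
    """Return the best-matching profile, or fall back to the household view."""
    best_name, best_score = max(
        ((name, cosine_similarity(utterance_embedding, emb))
         for name, emb in profiles.items()),
        key=lambda pair: pair[1])
    return best_name if best_score >= threshold else "household"

# Simulate an utterance that sounds mostly like Alice.
utterance = profiles["alice"] + 0.3 * rng.normal(size=256)
print(identify_speaker(utterance))  # -> "alice"
```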

Voice UIs are complex and time consuming to develop from scratch, so even the largest video service providers are adopting technology already developed by major players in the field. These include major IT systems and services firms like IBM and also the Big Five tech companies, that is Microsoft, Apple, Google, Meta (formerly Facebook), and Amazon.

A number of set top and broadband gateway software vendors have collaborated with one or other of these major players over the voice UI, in some cases positioning the TV as a home control hub. Then voice becomes both the medium for the TV UI and also for controlling devices around the home, potentially including fridges, toasters, smart speakers and WiFi routers.

AI and ML are deeply involved in this expansion of the role played by voice assistants, helping orchestrate a number of the functions, ranging from parental control over access by children, to automating various applications of the smart home. This can extend beyond voice to facial recognition in security monitoring for example, with scope for contacting users remotely. In this way the TV UI becomes increasingly entwined with other services and applications around the home, bringing revenue generating opportunities for video service providers, especially if they are also in control of the broadband connection.

It is important to recognize that not all consumers are enamoured of voice, or for that matter touch screen control such as Apple TV provides, and that again points back to retention of the traditional remote with its D-pad, at least for now.

When voice is included, ML can help cater for varying levels of engagement by the user, allowing some to progress to “Conversational AI” for more complex interactions, while others advance more slowly with basic single-word commands. Designers of UIs should always obey Hick’s Law, which states that the more options users are given, the longer they take to reach a decision. Related to this is the principle of progressive disclosure, whereby users are asked just one question at a time rather than being confused with several at once. ML can help here by making intelligent deductions that speed up the process, reducing that “time to content”.
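A minimal sketch of how these two ideas combine is shown below: a recommendation model’s scores (hypothetical here) prune each step of the menu to a handful of likely choices, and a rough Hick’s Law estimate illustrates why the smaller option set is quicker to decide on.

```python
# A minimal sketch of progressive disclosure guided by ML scores: instead of
# exposing the whole menu at once, each step shows only the top few options
# for the user's current context. Hick's Law models decision time as roughly
# T = b * log2(n + 1), so shrinking n at each step keeps choices fast.
import math

def decision_time(n_options: int, b: float = 0.2) -> float:
    """Rough Hick's Law estimate of decision time in seconds (b is illustrative)."""
    return b * math.log2(n_options + 1)

def next_options(scored_items: dict[str, float], k: int = 4) -> list[str]:
    """Show only the k most likely choices for this step."""
    return sorted(scored_items, key=scored_items.get, reverse=True)[:k]

# Hypothetical per-user scores from a recommendation model.
scores = {"Continue watching": 0.9, "Live sport": 0.7, "New drama": 0.6,
          "News": 0.5, "Search": 0.4, "Kids": 0.2, "Settings": 0.1}

shortlist = next_options(scores)
print(shortlist)
print(f"~{decision_time(len(shortlist)):.2f}s to decide vs "
      f"~{decision_time(len(scores)):.2f}s for the full menu")
```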

While traditional remotes have defied predictions of their imminent death for years, rather like set top boxes have, they have been under threat from smart phones positioned as universal TV controls empowered by downloading of apps enhanced by ML in various ways. The idea of a universal TV controller was first posited almost as soon as remotes entered the consumer TV realm just over 40 years ago, but for years these failed to gain much traction because they only offered a subset of the full range of UI functions.

That constraint has been removed with the help of ML, which can enable the traditional remote format to be replicated on a smart phone screen while allowing advanced capabilities based on voice or gesture to be incorporated. The use of ML to enable control of basic functions by sweeping gestures picked up by the smartphone camera is under development by a number of app vendors and may soon become part of the TV UI armoury.
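The sketch below illustrates the general idea under simple assumptions: some hand-tracking model running on the phone’s camera frames is assumed to supply normalised horizontal hand positions, and a sustained sweep is mapped to a navigation command. Real gesture recognizers are trained models rather than a threshold on travel distance; the command names here are hypothetical.

```python
# A minimal sketch of gesture-to-command mapping: assumes a hand-tracking
# model (run on the phone's camera frames) supplies the horizontal position
# of the hand, normalised to 0..1, for each recent frame. A sustained sweep
# left or right is translated into a menu navigation command.
from typing import Optional

def classify_sweep(x_positions: list[float],
                   min_travel: float = 0.3) -> Optional[str]:
    """Return 'LEFT', 'RIGHT', or None for a sequence of hand x-coordinates."""
    if len(x_positions) < 2:
        return None
    travel = x_positions[-1] - x_positions[0]
    if travel >= min_travel:
        return "RIGHT"
    if travel <= -min_travel:
        return "LEFT"
    return None

# Hypothetical mapping from gestures to remote-style commands.
GESTURE_TO_COMMAND = {"LEFT": "previous_item", "RIGHT": "next_item"}

# A hypothetical track of a hand moving right across the frame.
track = [0.22, 0.31, 0.45, 0.58, 0.71]
gesture = classify_sweep(track)
if gesture:
    print(GESTURE_TO_COMMAND[gesture])  # -> "next_item"
```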

So, although AI and ML have been entering the TV UI realm for at least a decade now, it is only recently that they have started enabling more advanced capabilities alongside traditional TV remotes. Such devices will increasingly be augmented by AI and ML-related capabilities as they enter their final lap.
