Understanding OTT Systems - Part 1
In this series of articles, we investigate OTT distribution networks to better understand the unique challenges ahead and how to solve them. Unlike traditional RF broadcast and cable delivery networks, OTT comprises many systems operated by different companies to deliver programs to viewers, and it's these potential silos that are the root of the challenges OTT faces.
To enable delivery to a multitude of mobile devices such as phones, tablets, and laptop computers, OTT distribution systems provide multiple data streams at varying data-rates. A travelling viewer may move in and out of good cellular coverage, or a Wi-Fi network in a coffee bar may become congested as more customers arrive, reducing the available bandwidth.
Using adaptive streaming protocols such as DASH (Dynamic Adaptive Streaming over HTTP) or HLS (HTTP Live Streaming), multiple streams are provided to accommodate the available network bandwidth. For an HD service, there may be five data rates varying between 500Kb/s and 5Mb/s, for example. Mobile devices compliant with DASH and HLS receive a manifest file describing the different available streams, and use this information to switch between data-streams automatically.
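As a sketch of what such a manifest describes, an HLS master playlist simply lists the available variant streams and their bandwidths. The resolutions and paths below are illustrative assumptions, not taken from any real service:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=500000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2000000,RESOLUTION=1280x720
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
high/index.m3u8
```

The player fetches this playlist first, then requests segments from whichever variant its bandwidth measurements can sustain.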
Best Viewer Experience
Ideally, the mobile device will choose the highest available data-rate stream as this will deliver the best quality picture and sound. However, if the user moves into a network with poor coverage the data delivered will not reach the mobile device in time and buffer underruns will occur, resulting in the infamous “buffering, please wait” icon, thus seriously affecting the viewer experience.
The mobile device needs to keep its own internal buffer optimized to maintain a good quality of experience for the viewer. If the buffer runs too low, or empties, the picture and sound may freeze and break up. However, DASH and HLS allow the mobile device to detect low buffer levels, estimate the available bandwidth, and switch to the highest data-rate stream that bandwidth can sustain, keeping the buffer full. This compromise may deliver slightly lower quality video, but the quality of experience improves significantly for the viewer.
Switch to the Best Data-rate
If the mobile device detects higher data-rates are available from the network, it will switch back to the higher data-rate stream thus improving the quality of video and audio. All this happens automatically without user intervention.
At this stage, it’s worth remembering that broadcast television traditionally relies on sending a single data-stream per channel in one direction, from the broadcaster to the home. When the signal leaves the transmitter antenna, we can happily assume the signal is being received by the viewer.
Broadcast networks are either closed, that is owned and operated by the broadcaster, or signals are transferred over a private telco network directly contracted to the broadcaster. Either way, the broadcaster has complete visibility of the unidirectional broadcast from the TV station to the viewer at home via private telco networks and RF transmitters.
OTT Flexibility
Similar assumptions can be made about broadcast distribution using cable delivery. The network is closed, either completely owned by the broadcaster or delivered over private telco circuits contracted directly by the broadcaster.
OTT does not operate in this way.
OTT is incredibly popular from the viewer's perspective, as they can access the material via the internet. But there are many third parties involved in the distribution network over which the broadcaster has little influence and no direct control.
Figure 1 – traditional broadcasting is a one-to-many distribution system where signals are actively pushed to the television receiver. The transmitter has no knowledge of whether a viewer has received the signal or not. OTT relies on the receiver, such as a mobile or playback device, actively requesting data from the broadcaster.
When leasing a dedicated SDI circuit from a telco, a broadcaster can be certain that the circuit meets certain stringent requirements such as bandwidth and jitter. These will comply with SMPTE’s 292M specification for example. The broadcaster can be sure the telco will route the signal to its destination with virtually no delay and completely intact in accordance with the specification. The disadvantage of this system is that it is incredibly expensive, restrictive, and offers little flexibility. There is no easy way of sending a program to a mobile device using transmitters and SDI networks.
IP is Transport Stream Agnostic
Internet Protocol (IP) has emerged as the dominant delivery mechanism for the internet as it is transport-stream agnostic. IP can be distributed as easily over Ethernet as over Wi-Fi; the IP packets have no knowledge of the underlying transport medium. However, IP was never designed to be fault tolerant and is instead a best-effort delivery system.
To deliver fault tolerance we use the Transmission Control Protocol (TCP), which uses a windowing system to guarantee groups of packets are received by the destination. If the receiver doesn't send an acknowledgment, or the acknowledgment isn't received by the sender, then the group of packets is resent. This adds delay and increases latency, but it's not the full story.
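A toy model makes the cost of this resend behaviour concrete. Assuming, purely for illustration, that one window's worth of packets is sent per round trip and that a lost group or lost acknowledgment costs one full extra round trip, the total delivery time and the fraction of useful throughput can be computed as follows (the timing figures are assumptions, not measured values):

```python
def transfer_stats(n_groups, lost_attempts, rtt_ms):
    """Return (total_attempts, total_time_ms, goodput_fraction).

    n_groups: number of packet groups to deliver.
    lost_attempts: set of 0-based attempt indices where either the
        data or its acknowledgment was lost, forcing a resend.
    rtt_ms: assumed round-trip time per attempt."""
    attempts = 0
    delivered = 0
    while delivered < n_groups:
        if attempts not in lost_attempts:
            delivered += 1  # group arrived and was acknowledged
        attempts += 1       # a lost attempt still consumes a round trip
    return attempts, attempts * rtt_ms, n_groups / attempts
```

With ten groups, two lost attempts, and a 40ms round trip, twelve attempts are needed: the wire carries 20% more data than the payload requires, yet delivery takes 80ms longer, which is exactly the effect described in Figure 2 below.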
HTTP Transports Video and Audio
Internet web servers use Hypertext Markup Language (HTML) to provide the data and information for web browsers to display. In turn, HTML web pages are transported between web server and browser using the Hypertext Transfer Protocol (HTTP). And HTTP resides on top of TCP, which in turn resides on IP.
Although IP forms the basis of packet delivery for the internet, it’s HTTP that is the basis of application delivery and distribution for web pages, streamed audio and video. HTTP was chosen by the designers of DASH and HLS to transport video and audio to mobile devices as it forms the backbone of the internet and is interoperable (in theory).
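In practice, this means a DASH or HLS segment fetch is nothing more than an ordinary HTTP GET, indistinguishable at the network level from a web-page request. The host name, path, and content length below are hypothetical, shown only to illustrate the shape of the exchange:

```
GET /live/channel1/segment_001234.ts HTTP/1.1
Host: cdn.example.com
Accept: */*

HTTP/1.1 200 OK
Content-Type: video/mp2t
Content-Length: 1253644
```

It is this reuse of plain HTTP that lets CDNs, caches, and ISPs handle video segments with the same infrastructure they use for every other web request.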
Figure 2 – HTTP data is transported over TCP. Due to TCP's retransmissions, there can be unintended consequences when error rates are high: although the bit-rate on the wire increases, the useful data-throughput decreases and latency increases further. Care must be taken when interpreting the raw bit-rate of a network.
From a service provider's perspective, HTTP is incredibly useful to their business model. For example, a third-party Content Delivery Network (CDN) provider can spread the cost of their facility over many clients all using HTTP. ISPs adopt the same model, and it is this sharing that makes program delivery both flexible and cost effective for broadcasters. Being able to deliver to mobile devices gives broadcasters much greater audience reach.
HTTP Provides Flexibility
The ability for broadcasters to be able to use HTTP delivery over shared networks is what makes OTT so flexible. But with flexibility comes challenges and compromise. Reducing flexibility merely takes us back to dedicated privately leased SDI networks.
Consequently, broadcasters now find themselves sharing networks with other service providers and users. These networks are incredibly complicated and not dedicated to any single client. This leads to compromise on the part of the service provider as they do their best to keep all their clients happy.
These highly complex networks are dynamic, and their behaviors can be influenced by other users in the system. If a mobile phone operator suddenly issues a software upgrade and millions of users all download it at the same time, this could affect the network and influence the quality of experience of broadcast viewers, especially if the proper safeguards within the network haven't been implemented.
Silo Thinking
In a typical delivery channel, many service providers may form the link between the broadcaster and the viewer. Different suppliers provide geographically separated distribution through CDNs, and many ISPs and access providers deliver the end program to the viewer. This developing model has led to silo thinking, and establishing who is at fault, should an error occur, can be a complex and demanding task. A blame-game and finger-pointing culture soon establishes itself, with self-defeating consequences.
From the viewer's perspective, OTT is a solution in its infancy. Furthermore, many vendors are still experimenting with optimal methods of service provision. New architectures and protocols are constantly being developed to improve the viewer experience.
Optimization Incompatibility
Working within the confines of its own laboratory environment, a company may find it has been able to improve video delivery by designing a new architecture or optimizing a protocol. It's only when they connect to another service provider that they may find their solution is not completely compatible. And because such changes are not always advertised, even establishing that a modification has occurred can be difficult.
Some events create enormous loads on a delivery network, resulting in non-deterministic and unpredictable behavior. Broadcasters cannot always predict how many viewers will use OTT in preference to traditional broadcasting, so understanding where extra resource must be provisioned presents its own challenges, mainly due to the interaction of many differing elements from unrelated service providers.
With today's social-media-savvy consumers, many dissatisfied viewers resort to social media to vent their anger and dissatisfaction. Brands are easily damaged, and valuable revenue lost. With a plethora of alternative sources of programming, viewers readily switch to other program providers.
Faults are Complex
It's not unheard of for broadcasters to monitor social media feeds to establish the quality of their delivery. Clearly, this approach does not provide a sustainable business proposition.
Real-time live broadcast television delivery is time-sensitive and fault-intolerant. Many things can go wrong in many locations, and a seemingly obvious fault manifesting itself with one service provider may in fact be caused by another, for no obvious reason.
In the next article in this series we analyze a typical OTT delivery chain and investigate how faults and errors can interact between service providers to manifest problems elsewhere. And in part 3, we demonstrate how monitoring can help break down silo thinking to deliver more efficient and reliable networks leading to better viewer quality of experience.