CableLabs Certifies First Low Latency DOCSIS Modem
CableLabs, the industry body responsible for cable TV R&D and standards development, has certified the first cable modem supporting the low latency version of the DOCSIS data over cable specification.
This has been hailed as an important landmark for cable TV because it improves the experience of interactive, ultra-low latency applications delivered over broadband networks, such as gaming and Extended Reality (XR).
The device is the Motorola MG8725 from Minim, approved by the CableLabs Certification Board as supporting the low latency features that are now part of both the DOCSIS 3.1 and DOCSIS 4.0 specifications. Originally introduced to support internet access over cable TV networks, DOCSIS (Data Over Cable Service Interface Specification) has in successive generations increasingly been used for video delivery, with the rise of OTT services and internet access to media content.
Within DOCSIS, low latency has emerged as one of four key pillars of the cable industry's so-called 10G platform strategy, the others being speed, reliability and security. Of these, speed really means capacity, for it is not about decreasing end-to-end transmission time, which equates to latency, but about increasing overall bandwidth. It is equivalent to increasing the capacity of a road network by adding more lanes, which might cut average journey times by reducing congestion, but unless the speed limit were also raised the minimum journey time would be unchanged.
Latency therefore came to be seen as the major constraint in cable TV networks for a range of emerging use cases, just as it has for mobile communications under 5G. For the latter, ultra-low latency has emerged as one of three key pillars or use cases, the others being capacity and support for large numbers of simultaneous lower bit rate sessions for machine-to-machine communications in the Internet of Things (IoT).
Latency Sources
For cable TV, five sources of latency were identified and quantified, boiling down to just two significant contributors that have been addressed within the new standard: queuing delay and media acquisition delay.
Of the others, at the bottom of the pile comes switching/forwarding delay, the time it takes for the cable modem (CM) at the user end and the Cable Modem Termination System (CMTS) in the operator's network to process an IP packet by deciding where to send it. This delay can accumulate across a large IP router network, but is negligible when traversing just the two devices of the cable TV access network, at less than 0.04 ms. That is irrelevant when the target is to reduce the overall contribution of the DOCSIS infrastructure to latency from up to a few hundred ms down to 1 ms.
The second smallest contributor to latency within DOCSIS is propagation delay, resulting from the time taken by the signal to traverse the HFC (Hybrid Fiber Coax) plant of the cable TV network. That is largely determined by distance and dictated by the laws of physics, so there is little that can be done to reduce it. Fortunately it is usually quite small, sometimes even less than the switching/forwarding delay, but can be 0.6 ms or more in larger networks. It can make some highly latency-sensitive applications unviable across continents, but is generally not a significant problem given that most cable TV networks are quite localized.
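As a rough illustration of how distance translates into delay, the sketch below computes one-way propagation time across an assumed mix of fiber and coax. The velocity factors and plant lengths are illustrative assumptions, not figures from the DOCSIS specification.

```python
# Minimal sketch: one-way propagation delay across an HFC plant.
# Velocity factors and distances are illustrative assumptions.

SPEED_OF_LIGHT_KM_PER_MS = 299.792  # km travelled per millisecond in a vacuum

def propagation_delay_ms(fiber_km: float, coax_km: float,
                         fiber_vf: float = 0.67, coax_vf: float = 0.87) -> float:
    """Return the one-way signal travel time in milliseconds."""
    fiber_ms = fiber_km / (SPEED_OF_LIGHT_KM_PER_MS * fiber_vf)
    coax_ms = coax_km / (SPEED_OF_LIGHT_KM_PER_MS * coax_vf)
    return fiber_ms + coax_ms

# Example: 80 km of fiber plus 2 km of coax works out to roughly 0.4 ms one way.
print(f"{propagation_delay_ms(80, 2):.2f} ms")
```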
The next source of latency is encoding delay, associated with the upstream and downstream channel configuration options available to the operator. These can involve tradeoffs between latency and robustness, but on the whole operators are advised to select the option that imposes the least delay. It can impose delays in the range of 0.4 ms to 3.5 ms, according to CableLabs.
Addressing Delay
Of the top two sources of delay that have been seriously addressed within the new DOCSIS specifications, media acquisition delay results from the scheduling process DOCSIS uses to grant competing devices access to the shared upstream channel via a request-grant mechanism. This imposes delays of 2-8 ms even on an uncongested channel, which is significant for the most delay-sensitive applications. The new standard brings this down to well under 1 ms by introducing Proactive Grant Service (PGS), which eliminates the request loop so that nearly all IP packets can be transmitted straight into the network. This requires the CMTS to estimate on the fly how much capacity to grant each data flow, so inefficiencies can occur as a result of overprovisioning, and some flows can be denied the bandwidth they need. Compromises may have to be struck.
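The effect of removing the request loop can be seen in a toy timing model like the one below. It is not the DOCSIS scheduler itself; the MAP interval, CMTS processing time and grant spacing are assumptions chosen purely to illustrate the difference in waiting time.

```python
# Illustrative toy model (not the DOCSIS scheduler) of why the request-grant
# loop adds milliseconds that Proactive Grant Service (PGS) avoids.
# All timing constants are assumptions chosen for illustration.

MAP_INTERVAL_MS = 2.0       # assumed spacing of upstream scheduling maps
CMTS_PROCESSING_MS = 1.0    # assumed time for the CMTS to turn a request into a grant
PGS_GRANT_SPACING_MS = 0.5  # assumed spacing of proactively issued grants

def request_grant_delay_ms() -> float:
    """Packet waits for a request opportunity, then for the grant to arrive."""
    wait_for_request_slot = MAP_INTERVAL_MS / 2   # average arrival mid-interval
    return wait_for_request_slot + CMTS_PROCESSING_MS + MAP_INTERVAL_MS

def proactive_grant_delay_ms() -> float:
    """With PGS the grant already exists; the packet only waits for the next one."""
    return PGS_GRANT_SPACING_MS / 2               # average wait for a standing grant

print(f"request-grant: {request_grant_delay_ms():.2f} ms")   # ~4 ms
print(f"proactive:     {proactive_grant_delay_ms():.2f} ms") # ~0.25 ms
```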
No compromise is needed, though, when dealing with the final and biggest contributor to DOCSIS latency: queuing delay. This has become a growing problem with increased consumption of the internet by multiple processes and devices within single households or enterprises that share a cable modem. It is associated with the internet's TCP protocol and its variants or successors, under which applications compete for as much bandwidth as they can get via a congestion control algorithm and then adjust their data rate to the capacity allocated to them.
But to ensure all bandwidth is used, each process tends to transmit data at a rate faster than the network's bottleneck, or slowest link, can sustain, which results in queuing in buffers. If the queues grow too large, transmission stops until they empty, or else packets are dropped and quality suffers. In the absence of a quality decline, latency increases while waiting for the buffers to empty.
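To see why standing buffers translate directly into latency, a back-of-the-envelope calculation suffices: the delay a packet suffers behind a queue is the queue's size divided by the bottleneck link rate. The buffer size and link rate below are illustrative assumptions.

```python
# Back-of-the-envelope sketch of queuing delay: time spent behind a standing
# queue equals queue size divided by bottleneck rate. Values are illustrative.

def queuing_delay_ms(queue_bytes: int, bottleneck_mbps: float) -> float:
    """Milliseconds needed to drain queue_bytes at the bottleneck link rate."""
    bits = queue_bytes * 8
    return bits / (bottleneck_mbps * 1e6) * 1000

# A 1 MB standing buffer ahead of a 50 Mbps link adds about 160 ms of delay.
print(f"{queuing_delay_ms(1_000_000, 50):.0f} ms")
```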
Latency Compromise
The remedy came from realizing that some services, such as SVoD services like Netflix, tend to build queues because their demand is for quality at the expense of latency, with even a few hundred ms of delay being neither here nor there. The same holds for downloading videos and music, file sharing and email access, as well as uploading videos to, say, YouTube or Instagram, in addition to system updates.
There are also applications that are much less demanding of bandwidth because they send data intermittently and yet may require low latency, such as video chatting via FaceTime. These do not generate queues in DOCSIS networks, or any others for that matter.
The new Low Latency DOCSIS (LLD) exploits this dichotomy by allocating traffic to one of two logical queues depending on which of these categories it belongs to. This greatly reduces delay for those applications, like video chat, that do not contribute to queues and yet may be latency-sensitive. CableLabs emphasizes that, unlike some other approaches to low latency that have been employed in various contexts, LLD does not directly favor one class of traffic at the expense of others. The point is that the two queues share pooled bandwidth and each can be optimized for its own requirements, which means low latency in the case of, say, video chat, but quality for video streaming. In the latter case there is then scope for allowing some degradation in quality to prevent delay becoming unacceptably high, through use of adaptive streaming for example, although that is a separate mechanism.
The result is that the so-called queue-building traffic can still generate the queue it needs to achieve the desired throughput, while the non-queue-building traffic enjoys lower latency by bypassing the queues that accumulate around those streams, rather than having to wait for them to clear as before.
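As a simplified illustration of the dual-queue idea, the sketch below sorts packets into a low-latency queue or a classic queue based on a notional non-queue-building flag. The real LLD classification rules, queue protection and coupled scheduling are considerably more sophisticated; this only shows the split.

```python
# Simplified sketch of the dual-queue idea behind LLD: traffic flagged as
# non-queue-building goes to a low-latency queue, everything else to the
# classic queue. The flag and packet structure here are hypothetical.

from collections import deque

low_latency_queue: deque = deque()
classic_queue: deque = deque()

def classify(packet: dict) -> None:
    """Route a packet to one of the two logical queues sharing pooled bandwidth."""
    if packet.get("non_queue_building"):   # e.g. a video chat flow
        low_latency_queue.append(packet)
    else:                                  # e.g. a video streaming download
        classic_queue.append(packet)

classify({"flow": "video-chat", "non_queue_building": True})
classify({"flow": "svod-stream", "non_queue_building": False})
print(len(low_latency_queue), len(classic_queue))  # 1 1
```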