The Technology Of The Internet: Part 1 - How Video Streaming Shaped The Internet
This is the first in a series of articles examining the technology underlying video streaming as it becomes the dominant transmission medium for TV. This first article dissects the internet itself and examines the impact of streaming on it, setting the scene for a more in-depth look at specific components such as CDNs.
Video streaming has come of age through increased network capacity and specific technologies that address the unmanaged nature of the internet, such as adaptive bit rate streaming (ABRS). At the same time, video streaming has shaped the continuing evolution of the internet as it accounts for an ever-higher proportion of total data traffic and becomes the dominant medium for TV distribution.
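To make the role of ABRS concrete, below is a minimal sketch in Python of the selection logic at the heart of any ABRS player: measure recent download throughput, then pick the highest rendition that fits within it. The bitrate ladder and safety margin are hypothetical illustrations, not any particular service's values.

```python
# Minimal sketch of adaptive bit rate streaming (ABRS) rendition selection:
# the player measures recent throughput and picks the highest rendition from
# a bitrate ladder that fits, with a safety margin to absorb fluctuations.

# Hypothetical bitrate ladder in kilobits per second (real ladders vary by service).
BITRATE_LADDER_KBPS = [800, 1800, 3500, 7500]  # e.g. 480p, 720p, 1080p, 4K

def select_rendition(measured_throughput_kbps: float, safety_factor: float = 0.8) -> int:
    """Return the highest ladder bitrate that fits within the measured throughput."""
    budget = measured_throughput_kbps * safety_factor
    affordable = [b for b in BITRATE_LADDER_KBPS if b <= budget]
    return max(affordable) if affordable else BITRATE_LADDER_KBPS[0]

# Example: a connection sustaining ~4.8 Mbit/s selects the 3500 kbps rendition.
print(select_rendition(4800))  # -> 3500
```

A real player re-runs this decision for every media segment, which is what lets the stream step down gracefully during congestion instead of stalling.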
The internet has become woven so deeply into the fabric of social and working life that few people pause any more to consider what it is, never mind how content is delivered over it. Yet many broadcasters and video service providers are having to ponder just that question as they plot their path from traditional over-the-air or cable distribution to internet streaming as their predominant medium. This is a two-way process, in that the internet’s own development has become increasingly shaped by the demands of video streaming as it consumes an ever higher proportion of total data traffic, amid rising expectations over QoS.
Streaming has not changed the internet’s fundamentals as laid down during its inception as an academic network in the 1970s, but it has increasingly shaped the structure and technology of key components layered on top of it, such as CDNs and caching.
Fundamental to the internet is the TCP/IP protocol suite for transporting data packets, developed for the Advanced Research Projects Agency Network (ARPANET), the world’s first significant packet switched network, which launched in 1969 and standardized on TCP/IP in 1983. ARPANET evolved into what was essentially a single distributed network serving academia until the launch of the first public internet service 20 years after that original launch, in 1989, by the inaugural internet service provider (ISP), The World, based in Massachusetts, USA.
It was another decade before the internet gained much traction among general consumers, when search engines became available from the likes of Google, Microsoft and Yahoo. By this time most enterprises and public services had established websites on the World Wide Web (WWW), allowing access to documents located through strings of characters called uniform resource locators (URLs) and often formatted in Hypertext Markup Language (HTML).
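As a small aside on how those URL strings work in practice, this Python snippet (using the standard library’s urlparse, with a made-up address) shows how a URL decomposes into the scheme, host and document path a browser uses to locate a page.

```python
# A URL is a structured string; the standard library can split it into the
# scheme, host and path components used to locate a document.
from urllib.parse import urlparse

url = "https://www.example.com/news/article.html"  # hypothetical URL for illustration
parts = urlparse(url)
print(parts.scheme)  # "https" - the protocol to use
print(parts.netloc)  # "www.example.com" - the host to resolve and connect to
print(parts.path)    # "/news/article.html" - the document requested from that host
```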
The internet had by then evolved into the more complex structure prevalent today, comprising multiple IP networks run by different entities and interacting as a greater whole, sometimes called a meta network, through peering points located in data centers around the world. Access for consumers and enterprises is then typically provided by ISPs through local links sometimes referred to as the “last mile”, which can run over fiber, legacy telco twisted-pair copper, coaxial cable, or a mobile cellular network.
WiFi often provides the final hop from an IP router or gateway when the internet is accessed from a device such as a PC or tablet, or even a smartphone within range, to avoid incurring mobile data charges or eating into monthly data allowances.
The ISPs in turn connect to the shared public internet infrastructure through peering points, often Internet Exchange Points (IXPs), where one network peers with multiple other networks over a single connection. The alternative is private peering, where two networks exchange traffic directly at a private facility that may be owned by one of them or by a third party.
As the internet evolved in the 1990s, even without the extra stresses and demands of video, it became clear that some form of technology enhancement or overlay was required to improve performance and alleviate congestion around bottlenecks. This was especially the case when the origin server from which web pages were served or content distributed was a long way from the end user consuming it via a local ISP.
This led to Content Delivery Networks (CDNs) being implemented to resolve bottlenecks and reduce the latency experienced by the end user. Despite the name, CDNs did this primarily not by overlaying a new network but by deploying servers around the world at the edge of the public internet, where content could be replicated and cached for local delivery. Popular content such as news could then be delivered on demand from a local cache, with less delay and without consuming public internet bandwidth for every individual unicast request.
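A rough sketch of the idea in Python follows; the class name and capacity are illustrative, and real CDN caches add TTLs, invalidation and tiered hierarchies, but the hit/miss logic captures why a local cache cuts both latency and long-haul traffic.

```python
# Minimal sketch of caching logic at a hypothetical CDN edge node: serve
# popular objects from a local store and fall back to the distant origin
# server only on a cache miss, so repeat requests never cross the internet core.
from collections import OrderedDict

class EdgeCache:
    def __init__(self, capacity: int = 1000):
        self.capacity = capacity
        self.store = OrderedDict()  # url -> bytes, kept in least-recently-used order

    def fetch_from_origin(self, url: str) -> bytes:
        # Placeholder for a slow, long-haul request back to the origin server.
        return f"content of {url}".encode()

    def get(self, url: str) -> bytes:
        if url in self.store:                  # cache hit: served locally, low latency
            self.store.move_to_end(url)
            return self.store[url]
        content = self.fetch_from_origin(url)  # cache miss: go to origin once
        self.store[url] = content
        if len(self.store) > self.capacity:    # evict the least recently used object
            self.store.popitem(last=False)
        return content
```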
While CDNs evolved, consumers started to access video on PCs and laptops as these became capable of playback at reasonable quality. Although the history of desktop video playback dates back to the 1970s, it was only after about 1995, with the arrival of on-chip graphics capabilities, that video consumption gained significant traction, just as consumers started accessing the internet. At first bandwidth was nowhere near sufficient for streaming video even at low quality, so online video access was largely confined to download.
Apple’s launch of iTunes in January 2001 may have opened the era of mainstream online media consumption, but at first bandwidth was only sufficient for music, not video. The latter took off around five years later, and the 2008 Summer Olympic Games in Beijing was the first major event where live video was widely streamed over the internet. The experience was poor by today’s standards but, as with early mobile voice calling, there was a certain tolerance of low QoS as the price to pay for the utility of being able to stream multiple events to PCs rather than being reliant on TVs.
Video streaming had advanced greatly by the time of the 2012 London Olympics, where the BBC emerged as a major pioneer of live video streaming; it has continued to innovate in that field since, publishing prolifically in technical journals.
It was, however, on-demand content that dominated the early days of video streaming, with the emergence of major SVoD (Subscription VoD) players led by Netflix, which launched its first streaming service in 2007, followed by other big players such as Amazon, HBO, Disney and Comcast through NBC Universal.
As SVoD became an increasingly global business, competitive pressure extended to QoS as well as to the content itself, with growing demand not just for higher resolution and the elimination of impairments such as buffering and freezing, but also for faster start-up, even though the content was not live. This led first to the use of third-party CDNs, which bypass multiple internet switching points and save bandwidth by caching content closer to the user.
But then Netflix, as the market leader facing unprecedented growth in data volumes, found that third-party CDNs were increasingly unable to deliver consistent service. This led Netflix to develop its own CDN, called Open Connect, which went a step further by installing its own caching appliances inside ISP networks, taking another bite out of latency.
We will explore the details of such maneuvers in a future article, but one point to note here is that this had implications for net neutrality, the principle that the internet is equally open to all parties, that nobody’s content is favored over anybody else’s, and that nobody is blocked.
It is true that installation of CDN appliances in an ISP by a content provider does not strictly contravene net neutrality, because that same facility is open to others and is indeed exploited by CDN companies. In practice, though, it favors Netflix over smaller content providers that remain reliant on third-party CDNs. Netflix gains advantage because its size and scale justify investment in its own CDN, as do some of the other major streamers such as Amazon Prime Video.
CDNs are also critical for live streaming, even though in that case there is no time to transport content out to local caches as a first stage of distribution. For live sporting events such as the Olympics, the content plays out from the venue, and signal transmission time inevitably makes a larger contribution to the total latency experienced by the end user than it does for SVoD. The role of the CDN is then to distribute the content as efficiently as possible and strip latency to the bone.
Multicasting then comes in for live content, sending a single copy of each video stream along each branch of the distribution tree and replicating it only where paths diverge. This saves enormously on bandwidth and, by avoiding congestion, in effect bears down on latency as well.
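A back-of-envelope calculation, with purely illustrative figures, shows the scale of the saving:

```python
# Illustration (hypothetical figures) of why multicast saves bandwidth for
# live events: unicast sends one copy of the stream per viewer, while
# multicast sends one copy per network link regardless of audience size.
viewers = 1_000_000
stream_mbps = 5            # a typical HD rendition bitrate, assumed for illustration
links_on_tree = 10_000     # branches in the multicast distribution tree, assumed

unicast_total = viewers * stream_mbps          # 5,000,000 Mbit/s, i.e. ~5 Tbit/s
multicast_total = links_on_tree * stream_mbps  # 50,000 Mbit/s, i.e. ~50 Gbit/s
print(unicast_total / multicast_total)         # -> 100x reduction in this example
```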
A key point for IP multicast over the internet is that the IP routers forwarding packets along successive hops are inherently capable of replicating data across their multiple output links. An arriving IP multicast packet can thus be copied, on demand, onto each onward network hop leading towards an end user who wants to view the content of which that packet is a part. IP multicast is therefore a natural evolution of the internet.
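For illustration, here is a minimal sketch of the receiving end in Python, using the standard socket API; the group address and port are hypothetical. Joining the group causes the operating system to send an IGMP membership report, which is what prompts upstream routers to replicate packets onto this branch.

```python
# Minimal sketch of the receiver side of IP multicast: the application joins
# a multicast group, the kernel signals membership upstream via IGMP, and
# routers on the path replicate packets onto this branch.
import socket
import struct

GROUP = "239.1.1.1"   # administratively scoped multicast address (hypothetical example)
PORT = 5004           # port commonly used for RTP media, assumed here

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the group on the default interface; the kernel emits the IGMP report.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

packet, sender = sock.recvfrom(2048)  # blocks until one replicated media packet arrives
```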
Standards bodies such as the DVB have been working on common specifications for live IP multicast over the internet, and the BBC has again been in the vanguard. The BBC has said it will continue using conventional CDNs for serving pre-recorded drama, entertainment and documentaries on demand via its iPlayer portal, for example, but has meanwhile been developing a scalable IP multicast platform for live news, sport, music festivals and other big national events.
The technology and evolution of such platforms will be explored in future articles in this series. They underline the increasing influence of video streaming over internet evolution, above all live video.