Viewpoint: Secure Edge Cache: Optimizing the Network and Reducing Latency

Distributing content, especially video, on the web with the best performance and the highest quality of experience requires a large number of servers deployed as close as possible to end-users. Consequently, Content Providers (CPs) and third parties have built large networks of content distribution servers, also known as content delivery networks (CDNs).

Today, CDN owners partner with Internet Service Providers (ISPs) to deliver content jointly in the most efficient manner. This includes localizing a substantial amount of their traffic, so that assets are retrieved from a cache closer to the end-user, resulting in faster downloads and delivery times. Localizing traffic also helps ISPs lower the cost of serving the CP traffic requested by their subscribers, translating into substantial savings in transit and transport bandwidth. Indeed, if the requested content is already in the local cache and is considered “fresh”, it is served directly to the end-user, resulting in an improved user experience and bandwidth savings.

To localize traffic effectively, the CP asks the ISPs to deploy a certain number of the CP’s proprietary servers inside their networks; these servers provide caching functionality together with other optimizations. The ISPs work closely with the CP to map carefully where these servers should be deployed in the network, ensuring a well-targeted deployment that substantially enhances performance.

Caches allow an HTTP origin server to offload the responsibility for delivering certain content. The cache hit rates vary depending on the number of end-users served by the cache, the unique consumption patterns of end-users, and the size and type of the cache. It’s been reported that “between 70-90% of CP cacheable traffic can be served from the deployed CP’s cache infrastructure” [1].

However, existing solutions for content distribution have a major drawback: an origin is required to yield control over its content to the CDNs, allowing them to see and modify the content that they distribute. In some cases, expediency can dictate that the CDN be given control over the entire origin. As a result, over the past three years the larger CPs have built their own CDNs to overcome this problem, causing a proliferation of third-party proprietary cache boxes within the ISPs. This proliferation has become so extensive that the ISPs’ spending on those third-party boxes deployed in their networks has far exceeded the savings in transit and transport bandwidth.

Ericsson is an active member of the Internet Engineering Task Force (IETF), the large open international community of network designers, operators, vendors and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. Within the IETF, Ericsson is recommending a solution to the proprietary nature of caches in ISP networks that also ensures the privacy and protection of the content stored there.

Ericsson, together with other companies, is proposing to the IETF a new architecture for distributing content via a third-party CDN with a stronger level of security and privacy for the end user while reducing the security privileges of the CDN compared with current practice.   

The proposed architecture allows an origin server to delegate the responsibility for delivering the payload of an HTTP response (the content item) to a third party, in a way that prevents the third party from modifying the content. In this solution the content is also encrypted, which prevents the third party from “seeing” or learning anything about it.

An origin server can use this architecture to take advantage of CDNs in cases where security concerns might otherwise have prevented their use. This is also relevant for types of content that were previously deemed too sensitive for third-party distribution.

The architecture proposed by Ericsson consists of three basic elements:

  1. A delegation component
  2. Integrity attributes
  3. Confidentiality protection

Content Delegation

The out-of-band content encoding [2] provides the basis for delegation of content distribution.

  • The client makes a request to the origin server that includes the value "out-of-band" in the Accept-Encoding HTTP header field, indicating a willingness to use the secure content delegation mechanism. A new BC header field (defined in [5]) indicates that the client is connected to a proxy cache that it is willing to use for out-of-band requests.
  • In place of the complete response, the origin provides only the response header fields and a body with the out-of-band content encoding.
  • The origin server populates the proxy cache or CDN with the resource to be served, encrypted and integrity protected.
  • The out-of-band content encoding directs the client to retrieve the content from the cache or CDN. The URL used to acquire a resource from the CDN is unrelated to the URL of the original resource, which allows an origin server to hide from the CDN provider the relationship between the content in the CDN and the original resource requested by the client. A client-side sketch of this flow is shown after this list.
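
To make the flow above concrete, here is a minimal client-side sketch in Python. It is illustrative only: the exact payload format of the out-of-band content coding is defined in [2], and the URLs, the proxy-cache address and the "sr" field name used below are assumptions made purely for the purpose of the sketch.

```python
# Minimal client-side sketch of the out-of-band flow (illustrative only).
# The exact payload format of the "out-of-band" content coding is defined
# in [2]; the "sr" field name and all URLs below are assumptions.
import json
import requests

ORIGIN_URL = "https://origin.example.com/video/segment42"   # hypothetical origin resource
PROXY_CACHE = "https://cache.isp.example.net"                # hypothetical blind cache

# 1. Ask the origin for the resource, signalling that out-of-band delivery is
#    acceptable (Accept-Encoding) and that a blind cache is available (BC, see [5]).
resp = requests.get(
    ORIGIN_URL,
    headers={"Accept-Encoding": "out-of-band", "BC": PROXY_CACHE},
)

if resp.headers.get("Content-Encoding") == "out-of-band":
    # 2. The body is not the content itself, only a small map pointing to where
    #    the encrypted, integrity-protected copy can be fetched.
    oob = json.loads(resp.content)
    secondary_url = oob["sr"]  # assumed field name; see [2] for the real format

    # 3. Fetch the encrypted payload from the cache/CDN. This URL bears no
    #    relation to ORIGIN_URL, so the cache learns nothing about which origin
    #    resource the client actually asked for.
    encrypted_payload = requests.get(secondary_url).content
    # Integrity checking and decryption are covered in the next two sections.
else:
    # The origin chose to serve the content directly (in-band).
    payload = resp.content
```

The key point is step 3: the cache only ever sees an opaque URL and an encrypted payload, while the mapping back to the origin resource stays between the client and the origin.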

Content Integrity

Content integrity is crucial to ensuring that content cannot be improperly modified by the CDN.

Several options are available for authenticating content provided by the CDN [3]. Content that requires only integrity protection can be safely distributed by a third-party CDN using this solution.
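
The concrete integrity mechanisms are specified in [3]. As a simplified illustration, assume the origin communicates the expected SHA-256 digest of the delegated payload within its own TLS-protected response; the client can then reject anything the CDN returns that does not match:

```python
# Illustration only: [3] specifies the actual integrity mechanisms. Here the
# origin is assumed to have sent the expected SHA-256 digest of the delegated
# payload in its own (TLS-protected) response, so the client can detect any
# modification made by the cache/CDN.
import hashlib

def verify_payload(encrypted_payload: bytes, expected_digest_hex: str) -> bytes:
    """Return the payload unchanged if its digest matches, otherwise fail."""
    actual = hashlib.sha256(encrypted_payload).hexdigest()
    if actual != expected_digest_hex:
        raise ValueError("payload was modified by the cache/CDN or in transit")
    return encrypted_payload
```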

Confidentiality Protection

Confidentiality protection limits the ability of the delegated server to learn what the content holds.

Confidentiality for content is provided by applying an encryption content encoding [4] to the content before it is handed to a CDN. It is worth highlighting that the proposed solution only places content on the CDN if that content is protected by access controls on the origin server; this prevents the CDN from discovering the real resources at the origin by pretending to be a client and querying the origin directly.
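
The encrypted content coding of [4] defines its own record structure and key derivation; the following simplified sketch, using a generic AEAD cipher (AES-128-GCM), conveys only the principle: the origin encrypts before handing content to the CDN and shares the key with the client over their direct, TLS-protected connection, so the cache stores and serves bytes that it can neither read nor meaningfully alter.

```python
# Simplified stand-in for the encrypted content coding of [4], which defines
# its own record structure and key derivation. Here a generic AEAD cipher
# (AES-128-GCM from the "cryptography" package) illustrates the principle.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def origin_encrypt(content: bytes) -> tuple[bytes, bytes, bytes]:
    """Origin side: encrypt the content before pushing it to the cache/CDN."""
    key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, content, None)
    # The ciphertext goes to the CDN; the key and nonce are conveyed to the
    # client over its direct, TLS-protected connection with the origin.
    return key, nonce, ciphertext

def client_decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    """Client side: decrypt the payload fetched from the cache/CDN."""
    return AESGCM(key).decrypt(nonce, ciphertext, None)
```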

[1] Google Global Cache (GGC), https://peering.google.com/about/ggc.html (accessed 2015-04-29)

[2] J. Reschke, S. Loreto, “'Out-Of-Band' Content Coding for HTTP”, https://tools.ietf.org/html/draft-reschke-http-oob-encoding-04

[3] M. Thomson, G. Eriksson, C. Holmberg, “An Architecture for Secure Content Delegation using HTTP”, https://tools.ietf.org/html/draft-thomson-http-scd-00

[4] M. Thomson, “Encrypted Content-Encoding for HTTP”, https://tools.ietf.org/html/draft-ietf-httpbis-encryption-encoding-01

[5] M. Thomson, G. Eriksson, C. Holmberg, “Caching Secure HTTP Content using Blind Caches”, https://tools.ietf.org/html/draft-thomson-http-bc-00
