A Video Quality and Measurement Overview: Part 1

Consumers care less about how content is delivered to their device than about the quality of the experience (QoE) on that device. This means engineers need to rethink how test and measurement tools can best be applied.

The video distribution landscape has been in an accelerating state of flux over the last 10 years. We now have traditional cable distribution, IPTV, satellite, hybrid platforms, and Over-The-Top (OTT), which opens the door for content owners to market directly to viewers. At the same time, content licensing costs are making it very difficult to differentiate on price, and we’ve already seen many network “blackouts” caused by stalled licensing deals. Consumers now have access to hundreds of programs, channels and networks, all at similar prices from a variety of providers. In my opinion, the quality of experience - based on service performance - is now a key driver for customer acquisition and retention.

We are aware of some content and service providers trawling social media for real-time insight into the customer viewing experience. However, relying on social media to tell us when our quality is poor is not a solution; it is a symptom of our industry’s lack of maturity today. A quality solution must deliver real-time video intelligence that accurately represents the audience’s playback quality. This two-part series will address the measurement and monitoring/assurance challenges of providing high-quality viewer experiences across this dynamic, rapidly expanding landscape.

Defining a baseline

Qualifying and monitoring video delivery Quality of Experience (QoE) begins with defining clear demarcation points across the overall video delivery infrastructure. For the purpose of this article we will focus on general points that should apply to most infrastructures. The solution should always provide a guided drill-down approach to problem identification, isolation and resolution. Anything short of that will not effectively provide an actionable response to a video quality issue.

Is my incoming content good?

Let’s start with content acquisition at the head-end, or “how good is my source content?” At first glance, this might seem to be a fairly straightforward process, but it’s not as simple as it appears. Providers struggle with everything from regulatory requirements for audio loudness and closed captioning, to defining and monitoring compliance with Service Level Agreements (SLAs) with transport providers, to ad-insertion dynamics, and much more. The head-end is the critical point to perform true content quality measurements that directly impact QoE, and this requires an in-depth, full decode and measurement of incoming video quality. The measurement platform must be able to understand, monitor and alarm on video quality issues such as black frames (outages), audio clipping, and poor video quality (low Mean Opinion Score (MOS) ratings) at the acquisition ingress. However, it’s not enough for the solution to deliver on this; you also need a platform that can integrate with higher-level “northbound” systems, enabling powerful end-to-end correlation that can report the true impact of video impairments. It’s good to know you had a 5-minute outage, but without knowing whether it affected 5 or 500,000 viewers, you don’t have the full picture.
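To make one of these full-decode checks concrete, here is a minimal sketch of black-frame (outage) detection, assuming OpenCV (cv2) can decode the stream URL; the luma threshold and frame count are illustrative assumptions, not production-tuned values.

```python
# Minimal sketch of head-end black-frame (outage) detection, assuming
# OpenCV (cv2) is available and the source is a decodable file or URL.
# Thresholds here are illustrative, not production-tuned values.
import cv2

BLACK_LUMA_THRESHOLD = 16.0   # mean intensity below this is treated as "black"
MIN_BLACK_FRAMES = 30         # ~1 second at 30 fps before raising an alarm

def watch_for_black_frames(source: str) -> None:
    cap = cv2.VideoCapture(source)
    consecutive_black = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break  # end of stream or read error
        # Convert to grayscale and use the mean intensity as a luma proxy.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if gray.mean() < BLACK_LUMA_THRESHOLD:
            consecutive_black += 1
            if consecutive_black == MIN_BLACK_FRAMES:
                print(f"ALARM: sustained black video on {source}")
        else:
            consecutive_black = 0
    cap.release()
```

The same decode-and-inspect loop extends naturally to other ingress checks, such as freeze-frame detection via frame differencing.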

Am I preparing my content properly?

You’ll also want to monitor any content transformation points in the head-end; transformation may include transcoding, conditional access/encryption, ad insertion, multi-format packaging, etc. These processes may be performed by physical appliances, cloud-based services, or a hybrid physical/virtual combination. The goal here is to make sure that whatever content you have modified is qualified for further distribution. Problems caught here can save a lot of cost, time, and viewer irritation. It’s also important to understand the actual QoE impact of each type of issue or impairment identified. Historically, the measurements defined by TR 101 290 have been used as the yardstick. In reality, many video equipment vendors “bend” the rules somewhat in that regard, and you can end up with a Christmas tree of red lights if you try to apply a pure “standards”-based methodology. You then must go through each of these flags, squelching the 90% that don’t matter so you can find the 10% that do. This sets up a “boy who cried wolf” problem from the start.
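One way to tame that alarm flood is to triage checks by the priority levels TR 101 290 itself defines. The sketch below classifies check names by their standard priority and surfaces only the service-affecting ones by default; the Alarm record and routing policy are illustrative assumptions, not a prescribed design.

```python
# Hypothetical triage of TR 101 290 alarm flags by priority so that only
# decodability-threatening (Priority 1) checks page an operator by default.
# Check names follow the TR 101 290 priority tables; the routing policy
# and Alarm record are assumptions for illustration.
from dataclasses import dataclass

PRIORITY_1 = {"TS_sync_loss", "Sync_byte_error", "PAT_error",
              "Continuity_count_error", "PMT_error", "PID_error"}
PRIORITY_2 = {"Transport_error", "CRC_error", "PCR_error",
              "PCR_accuracy_error", "PTS_error", "CAT_error"}

@dataclass
class Alarm:
    check: str     # TR 101 290 check name, e.g. "PAT_error"
    program: str   # affected service/program identifier

def route(alarm: Alarm) -> str:
    if alarm.check in PRIORITY_1:
        return "page-operator"   # decodability is at risk: act now
    if alarm.check in PRIORITY_2:
        return "dashboard"       # log and trend, review during triage
    return "suppress"            # Priority 3: informational noise

print(route(Alarm("PAT_error", "Service-1")))   # -> page-operator
```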

I always recommend that customers start with basic, viable alarming such as video outage, audio outage and program outages. Then, as they learn and baseline the network’s characteristics, they can move on to more advanced alarming based on Media Loss Seconds (an MDI [1] based measurement) as an indicator of macro-blocking and so forth.


[1] Media Delivery Index: for more information, see this technical brief. http://bit.ly/1Vlz3WI 
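As a minimal sketch of that escalation path, the function below derives Media Loss Seconds from per-second MDI Media Loss Rate (MLR) samples (per RFC 4445) and alarms only when losses persist; the sample window and threshold are illustrative assumptions, not recommended values.

```python
# Sketch: derive Media Loss Seconds from per-second MDI Media Loss Rate
# (MLR) samples (RFC 4445). Sample data and threshold are illustrative.
def media_loss_seconds(mlr_per_second: list[int]) -> int:
    """Count the seconds in a window that saw any media packet loss."""
    return sum(1 for mlr in mlr_per_second if mlr > 0)

# One minute of per-second MLR samples for a stream (0 = clean second).
window = [0] * 55 + [3, 7, 0, 1, 0]   # brief loss burst near the end

mls = media_loss_seconds(window)
if mls >= 3:   # e.g., 3+ loss seconds per minute suggests macro-blocking
    print(f"ALARM: {mls} media loss seconds in the last minute")
```

The following table illustrates how a single viewer-observed symptom can have many different root causes.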

Poor Video Quality Symptoms and Causes

Figure 1. Visual impairments can occur anywhere in a delivery stream. For that reason, it is important to monitor multiple points along that path.

What we see in Figure 1 is that the end results of visual impairments are often the same regardless of where in the network the actual issue occurs. This reinforces the need for proper monitoring demarcation points in order to isolate and localize the source of issues as quickly as possible.

QoE: Measuring the Subjective…Objectively

It is critical to recognize that while a viewer’s opinion of quality is subjective, objective metrics can be used to accurately predict the viewer’s quality perception. Most QoE scoring algorithms in the marketplace rely on pseudo-analysis to balance processing requirements against accuracy. For example, some algorithms calculate a score without actually decoding the content, using only metadata. While this requires relatively little processing, it comes at the expense of actually “looking” at the content, and accuracy can suffer. In addition, many scoring algorithms are “black boxes”: it is very difficult to decompose the underlying scoring elements that could explain why a score improved or degraded. Without scoring transparency, it is almost impossible to know what content processing modifications should be made, and whether those changes will affect the required stream bandwidth and transport cost. For a score to be useful and actionable, it must offer algorithmic transparency.
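As an illustration of what “open box” scoring enables (not any vendor’s actual algorithm), the sketch below keeps each contributing element visible, so a change in the overall number can be traced back to its cause; the component names, weights, and 1-5 scale are hypothetical.

```python
# Illustrative transparent composite score: every contributing element
# stays visible alongside the headline number. Components, weights, and
# the 1-5 scale are hypothetical, chosen only to show decomposability.
def composite_score(components: dict[str, float],
                    weights: dict[str, float]) -> tuple[float, dict[str, float]]:
    contributions = {name: components[name] * weights[name] for name in weights}
    return sum(contributions.values()), contributions

components = {"resolution": 4.8, "compression": 3.2, "packet_integrity": 5.0}
weights    = {"resolution": 0.3, "compression": 0.5, "packet_integrity": 0.2}

score, breakdown = composite_score(components, weights)
print(f"score={score:.2f}")           # overall number for trending/alarming
for name, value in breakdown.items():
    print(f"  {name}: {value:.2f}")   # the 'why' behind the number
```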

Let’s consider a practical example: comparing two different encoders as part of the selection process many readers face. For simplicity, a single comparative score is desirable, but the ability to understand why a difference exists is critical to a true evaluation. Can anything be done to optimize the configurations and minimize the difference? Or is the difference inherent in a particular encoder’s processing algorithms?

At IneoQuest, we have taken a transparent, “open box” approach to our MOS-based QoE scoring algorithm. We provide the component scoring information to our customers so that they may understand, and potentially act on, the insight provided by the score. Contributing elements such as resolution, compression ratio, FEC errors, bitrates, and more are factored into the scoring metric. In addition, our calculation is based on a complete decode of the stream, for maximum accuracy.

The following screen capture illustrates a practical application of IneoQuest’s latest video processing scoring algorithm, indicated as “Video MOS”, which is leveraged by release 4.0 of our DVA IP video assurance element. Using the same input stream, Encoder 1 shows an output Video MOS of ~4.4, while Encoder 2 is scored at 3.56. Using the additional information provided by the DVA, Encoder 2’s lower score can quickly be attributed to a very large open GOP.

Figure 2. This display shows that with the same input stream, two different encoders can produce different Mean Opinion Score (MOS) ratings. The additional information on the screen reveals that Encoder 2’s lower score comes from its much larger open GOP: 60 frames versus 15.
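If you wanted to verify a GOP-length difference like the one in Figure 2 yourself, a sketch along these lines would work, assuming ffprobe is installed and the encoder output has been captured to a file (the filename here is hypothetical). ffprobe reports one picture type per frame, and GOP length can be measured as the frame distance between successive I-frames.

```python
# Sketch of measuring GOP length from a captured encoder output, assuming
# ffprobe is on the PATH. ffprobe emits one picture type (I/P/B) per frame;
# GOP length is the frame distance between successive I-frames.
import subprocess

def gop_lengths(path: str) -> list[int]:
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "frame=pict_type", "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True).stdout
    pict_types = [line.strip() for line in out.splitlines() if line.strip()]
    i_positions = [n for n, p in enumerate(pict_types) if p == "I"]
    return [b - a for a, b in zip(i_positions, i_positions[1:])]

lengths = gop_lengths("encoder2_output.ts")   # hypothetical capture file
if lengths:
    # Report the most common I-frame spacing as the typical GOP length.
    print(f"typical GOP length: {max(set(lengths), key=lengths.count)}")
```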

Quality in a Virtual World

There is currently a big push in the industry towards network function virtualization and software-defined networking (NFV/SDN), and the head-end is not immune to this trend. We are seeing many video network functions moving into cloud or virtualized environments, including encoding, transcoding, bulk encryption, and others. As you consider a shift to virtualized infrastructure, there are many new factors to consider:

  • Dedicated or multi-tenant deployments: Will the video network function share resources with other virtual machines? If so, is it over-subscribed or dedicated?
  • Choice of hypervisor: Will the system be consolidated on a commercial platform (VMware ESXi, Microsoft Hyper-V, etc.) or on open-source tools (KVM, Xen, etc.)? You’ll need very different skillsets and troubleshooting tools depending on which road you take.
  • Hardware considerations: Do you understand the performance penalties of improperly sizing a VM across NUMA boundaries?
  • And many more…

In addition, an NFV architecture adds a whole new level of complexity, and of course more potential problems. For example:

  • How will you know that your encoder isn’t being CPU-starved in a VM environment? You are no longer relying on dedicated ASICs/FPGAs for those functions; now you must rely on generic computing resources to do the same work (see the sketch after this list).
  • Can your monitoring platform logically associate multiple bitrate profiles so that you can analyze the performance and quality of those profiles as a single entity?
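On the CPU-starvation question, one classic signal on a Linux guest is the “steal” field in /proc/stat, which counts time the hypervisor gave the physical CPU to another tenant while this VM was runnable. A minimal sketch follows; the 5% threshold is an illustrative assumption.

```python
# Sketch of detecting CPU starvation on a Linux guest: sample the "steal"
# counter from /proc/stat twice and compute the steal percentage over the
# interval. A rising steal percentage suggests the hypervisor is running
# other tenants while this VM (and its encoder) wants the CPU.
import time

def cpu_times() -> tuple[int, int]:
    with open("/proc/stat") as f:
        fields = f.readline().split()   # aggregate "cpu" line
    values = [int(v) for v in fields[1:]]
    steal = values[7]   # user nice system idle iowait irq softirq STEAL
    return steal, sum(values)

s1, t1 = cpu_times()
time.sleep(5)
s2, t2 = cpu_times()
steal_pct = 100.0 * (s2 - s1) / (t2 - t1)
if steal_pct > 5.0:   # illustrative threshold
    print(f"WARNING: {steal_pct:.1f}% CPU steal; encoder may be starved")
```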

Measurement and monitoring techniques are a critical part of NFV/SDN, not only for quality assurance, but also to provide the essential control necessary to take advantage of the elasticity that this technology offers. New techniques for measurement and monitoring are being developed specifically to address the challenges of NFV/SDN.

Coming in Part 2

In Part 2 of this tutorial, we will focus on the distribution and consumption portions of the ecosystem, including network core, edge, and last-mile considerations, and of course the tremendous challenge of multiscreen and the ever-growing multitude of consumer devices that must now be supported.

Gino Dion, VP Engineering, IneoQuest Technologies


Editor’s Note: Part two of this two-part series will be published in December 2015.
