Proactively Monitor All Aspects Of Digital Video Services To Stem Customer Complaints

Research shows that the top four viewer complaints are macroblocking, blackout, freeze, and audio silence. To retain those customers, engineers need a better method of monitoring digital signals. This Tektronix tutorial will show you how.

It is well established that monitoring the quality of digital video services is an important step in maintaining a high level of customer satisfaction. But the manner in which that quality is monitored is an important consideration. According to research on cable and IPTV operators conducted by the Multimedia Research Group, the top four issues causing people to call in and complain are macroblocking, blackout, freeze, and audio silence, as shown in Table 1. These four errors account for 54 percent of the total number of viewer complaints.

What’s particularly noteworthy about these complaints is their diversity, which makes it clear that monitoring just one aspect of the digital video service stream is not enough to keep the phone from ringing or to minimize truck rolls. Therefore, proactively monitoring and measuring each service, from RF/IP down to individual pixels, is necessary to reduce complaints and boost satisfaction and retention.

Table 1. These top four errors account for more than half of customer complaints to operators.

This is because digital video services are segmented into several layers to maintain high quality using minimal bit rates. These segments include the following (a code sketch after the list shows the TS-layer fields in practice):

  • RF/IP layer with frequency, power level and modulation format, as well as IP headers, checksums, payloads and packet timing (jitter), among others
  • Transport Stream (TS) layer with headers, payloads, continuity counters, Program Clock Reference (PCR) timing, and Program Specific Information (PSI) tables (the basic electronic program guide)
  • Packetized Elementary Stream (PES) layer, including headers, payloads, and audio/video decode and presentation timing for access units
  • Elementary Stream sequence headers, including codec format, frame size and frame rate
  • Picture frame slice headers, macroblocks (16x16 sets of pixels) and, finally, blocks (8x8 pixels)
  • Audio Frames or access units in small blocks of time (e.g., 32 ms for Dolby AC-3 at 448 kbps using 5.1 surround)
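
To make the TS layer concrete, here is a minimal Python sketch that decodes the fixed 4-byte header of a 188-byte transport stream packet. The field layout follows ISO/IEC 13818-1; the function name and returned field names are illustrative only, not part of any particular monitoring product.

```python
# Minimal sketch: decode the fixed 4-byte header of a 188-byte MPEG-2
# Transport Stream packet. Field layout follows ISO/IEC 13818-1; the
# function name and returned field names are illustrative only.

def parse_ts_header(packet):
    """Return the TS-layer header fields of one 188-byte packet."""
    if len(packet) != 188 or packet[0] != 0x47:   # 0x47 is the TS sync byte
        raise ValueError("not a valid 188-byte TS packet")
    return {
        "transport_error":    bool(packet[1] & 0x80),
        "payload_unit_start": bool(packet[1] & 0x40),
        "pid":                ((packet[1] & 0x1F) << 8) | packet[2],
        "scrambling":         (packet[3] & 0xC0) >> 6,
        "adaptation_field":   (packet[3] & 0x30) >> 4,
        "continuity_counter": packet[3] & 0x0F,   # increments mod 16 per PID
    }
```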

To stem customer complaints, your monitoring equipment must be able to validate each of these many layers, as well as correlate issues across them, in order to achieve high confidence that the digital video service can be viewed with a high “Quality of Experience.”

A common misperception is that the ETSI TR 101 290 (ETR 290) standard is sufficient for testing digital services. While it is a very useful standard, it covers only one of the many different layers. For example, to say that the TS headers have been measured and comply with TR 101 290 requirements says nothing about the audio levels or the picture quality being delivered. Sometimes TR 101 290 errors do not negatively affect the A/V quality; conversely, bad underlying A/V quality may not manifest itself in any way that TR 101 290 testing can detect.
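
As an illustration of the standard’s scope, here is a simplified sketch of one TR 101 290 Priority 1 check, the continuity counter test. It reuses the parse_ts_header() helper from the earlier sketch and ignores the duplicate-packet and adaptation-field-only cases that a real analyzer handles. Note that a stream can pass this check while carrying badly degraded audio or video.

```python
# Simplified sketch of the TR 101 290 Priority 1 Continuity_count_error
# check: on each PID, the 4-bit continuity counter should increment by
# one, modulo 16, for every packet that carries a payload. This model
# omits the duplicate-packet and adaptation-field-only special cases.

def continuity_errors(packets):
    """Yield (packet_index, pid) for each continuity discontinuity."""
    last_cc = {}                              # PID -> last counter seen
    for i, pkt in enumerate(packets):
        hdr = parse_ts_header(pkt)
        pid, cc = hdr["pid"], hdr["continuity_counter"]
        if pid == 0x1FFF:                     # null packets are exempt
            continue
        if pid in last_cc and cc != (last_cc[pid] + 1) % 16:
            yield i, pid                      # a Priority 1 error
        last_cc[pid] = cc
```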

As such, you must be able to traverse from the highest layer of RF/IP all the way down to the pixel or audio level before you can be confident that the digital video service is acceptable. There are two different approaches to help maintain high quality. One is quality of service (QoS) testing and monitoring, which looks for errors at the physical layer and TS layer. The other is quality of experience (QoE) testing and monitoring, which focuses on the video and audio aspects of the decoded program. Both methods are very important, but each approaches the issue in a different way.

The differences between QoS and QoE are mapped out in Figure 1, which shows the different layers that can be evaluated during QoS and QoE testing and monitoring. It also shows that if your monitoring program is limited to QoS, you won’t be able to get ahead of the number one source of complaints: macroblocking.

Figure 1. These are the typical QoS and QoE layers.


To contrast QoS and QoE, think of QoS as a way to rate the quality of the signal, which, when error free, should produce a good digital video service at the TV or set-top box. Think of QoE from the viewer’s perspective: watch the video, listen to the audio, and rate the quality independent of the physical or TS layer quality.

It’s possible for bad QoS to lead to bad QoE, and there are times when QoS is good yet QoE is bad due to video or audio coding problems somewhere upstream. It’s best to think of QoS as an indicator of potential problems based on spec compliance, but too much reliance on QoS can result in many false positives where QoE is not actually impacted. An example of this is shown in Figures 2 and 3, where a perfect TR 101 290 result does not highlight audio and video QoE issues.

Figure 2. Example of an error-free transport stream according to TR 101 290 monitoring.


Figure 3. The same transport stream when monitored for QoE shows missing audio/video packets and slice errors.


To avoid these problems, a good monitor should go deep into QoE testing, monitoring every layer of every video and audio service in every TS. Whenever the monitor detects an erroneous audio or video codec element, it records a drop in QoE. The monitor weights each video impairment according to the type of video frame affected and where in the frame the impairment landed. Figure 3 shows QoE ratings dropping in response to audio and video errors, weighted by their impact on the viewing experience. For example, a problem in the viewer’s on-screen area of attention matters much more than something in the periphery, and certain kinds of errors occurring for short periods can have more impact on viewer QoE than a longer or more frequently occurring issue.
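
The following hypothetical sketch shows the kind of weighting just described. The specific weights, the attention model and the 100-point scale are assumptions for illustration; actual QoE engines use proprietary models.

```python
# Hypothetical sketch of impairment weighting: an error is penalized
# more on reference (I) frames, whose damage persists until the next
# GOP, and more when it falls near the center of the picture where
# viewer attention concentrates. All weights are illustrative.

FRAME_WEIGHT = {"I": 1.0, "P": 0.6, "B": 0.3}      # assumed relative impact

def impairment_penalty(frame_type, x, y, base_penalty=10.0):
    """Score one impairment at normalized frame position (x, y) in 0..1."""
    # Distance from frame center, scaled so a corner is ~1.0
    center_dist = ((x - 0.5) ** 2 + (y - 0.5) ** 2) ** 0.5 / 0.7071
    attention = 1.0 - 0.5 * center_dist            # center ~2x the corners
    return base_penalty * FRAME_WEIGHT[frame_type] * attention

qoe_score = 100.0
qoe_score -= impairment_penalty("I", x=0.5, y=0.5)  # slice error, mid-screen
```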

Develop a Service Benchmark

With video services often originating from a wide variety of sources, QoE impairments tend to differ widely when the sources are compared against each other. It is important, therefore, to normalize QoE measurements so that you can accurately compare different sources. Across a large collection of ingest sources, a QoE report can help focus work on the worst or bottom 10 percent rather than treating all sources equally. Another idea is to congratulate the top 10 percent as high achievers. The ability to generate QoE ratings and reports on a daily, weekly or monthly basis can be helpful in understanding content provider trends or transcoder performance.
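
A minimal sketch of that ranking idea might look like the following; the data shape (source names mapped to per-day scores on a 0-100 scale) is an assumption for illustration.

```python
# Minimal sketch of the benchmarking idea: average each source's recent
# QoE scores, flag the bottom 10 percent for engineering attention, and
# note the top 10 percent as high achievers.

from statistics import mean

def rank_sources(daily_scores):
    """daily_scores: dict of source name -> list of per-day QoE scores."""
    ranked = sorted(daily_scores.items(), key=lambda kv: mean(kv[1]))
    cut = max(1, len(ranked) // 10)        # 10 percent of the fleet
    worst = [name for name, _ in ranked[:cut]]
    best = [name for name, _ in ranked[-cut:]]
    return worst, best

worst, best = rank_sources({
    "ingest-A": [92, 95, 90],
    "ingest-B": [61, 58, 64],
    "ingest-C": [88, 90, 85],
})
```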

One example of scoring programs is to create a dashboard with a set of measurement categories. Within each category, a summary of all of the programs is shown. Figure 4 shows eight categories of measurements with a rating summary for each of the many programs. In most cases, green is a good sign and red is a bad sign. The dashboard makes it easy to see the health of the entire network at a glance.
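
As a toy illustration of the rollup behind such a dashboard, the sketch below aggregates per-program statuses into a worst-case status per category. The category names and the green/yellow/red scale are assumptions, not the actual dashboard shown in Figure 4.

```python
# Toy sketch of a dashboard rollup: for every category, report the worst
# status across all programs plus the programs currently in the red.

CATEGORIES = ["RF/IP", "TR 101 290", "PCR", "PSI",
              "Video QoE", "Audio QoE", "Loudness", "PVQ"]
SEVERITY = {"green": 0, "yellow": 1, "red": 2}

def dashboard(status):
    """status: dict of program name -> {category: 'green'|'yellow'|'red'}."""
    for cat in CATEGORIES:
        worst = max((s.get(cat, "green") for s in status.values()),
                    key=SEVERITY.get)
        in_red = [p for p, s in status.items() if s.get(cat) == "red"]
        print(f"{cat:12s} {worst:7s} {', '.join(in_red)}")

dashboard({"HBO HD": {"Video QoE": "red"}, "CNN": {"Loudness": "yellow"}})
```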

Figure 4. Here is an example of a QoS/QoE monitoring dashboard.


Staying Ahead of Subscribers

The last thing that a network operator wants to get is a call from a frustrated subscriber explaining audio or video problems in a service. The more frustrated the customer becomes, the more likely they are to cancel their subscription.

In order to track such impairments, the monitoring system should allow triggered alerts to send out an SNMP trap or an email message to one or more operators or administrators. Alert definitions should span a wide array of choices, from no audio or video over a defined window of time, to audio DialNorm/loudness deviations, to video over-compression. Once the alerts have been defined and applied to the various video services, any error beyond its threshold will immediately raise an SNMP trap or fire off an email message.
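
The sketch below illustrates this kind of threshold-based alerting. The rule names and thresholds are assumptions; email goes out via Python’s standard-library smtplib, and the SNMP trap is left as a stub because trap OIDs depend on the monitoring vendor’s MIB.

```python
# Sketch of threshold-based alerting. Rule names and thresholds are
# illustrative assumptions, not from any specific product.

import smtplib
from email.message import EmailMessage

ALERT_RULES = {
    "audio_silence_seconds": 10,     # assumed: no audio for 10 s in a window
    "loudness_deviation_db": 2,      # assumed: DialNorm/loudness drift limit
}

def check_and_alert(service, metrics, smtp_host, to_addr):
    """Compare measured metrics against the rules and dispatch alerts."""
    for rule, threshold in ALERT_RULES.items():
        if metrics.get(rule, 0) > threshold:
            msg = EmailMessage()
            msg["Subject"] = f"QoE alert: {service} breached {rule}"
            msg["From"], msg["To"] = "monitor@example.com", to_addr
            msg.set_content(f"{rule} = {metrics[rule]} (threshold {threshold})")
            with smtplib.SMTP(smtp_host) as smtp:
                smtp.send_message(msg)
            # send_snmp_trap(service, rule)   # vendor-specific stub
```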

Don’t Forget Video Quality

One other factor to consider in a video monitoring program is perceptual video quality, or PVQ. This test rates quality based on over-compression. If a video program always has enough bandwidth to maintain high quality, it will rate highly on a perceptual scale, but that can change from moment to moment.

Here’s an example of how this works. Figures 5 and 6 show the results of transcoding from 15 Mbps to 4.5 Mbps over a 30-minute clip. To see how these two video services compare, look at the same frame from a fast-action scene in each clip. Notice that in Figure 5 the image is not blocky, although it may look a little blurry due to the film and camera shutter speed. The same scene in Figure 6, at 4.5 Mbps, is heavily over-compressed due to high motion and limited bandwidth.

Figure 5. Transcoding at 15 Mbps of this high-action image results in generally acceptable picture quality.


Figure 6. Transcoding from 15 Mbps down to 4.5 Mbps results in blockiness and over-compression.


While there’s no question that customer satisfaction includes picture quality, far too often cable and IPTV operators use monitoring systems that do not look at the actual video quality, and they end up being fooled by valid QoS and QoE metrics. In the case of the action scene above, the QoS and QoE results reported no TR 101 290 issues and no syntax issues. But due to high motion in the video, there were times when the picture quality degraded.
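
One widely used ingredient of such picture-quality measurements is a blockiness estimate: comparing luma discontinuities at 8x8 coding-block boundaries with discontinuities elsewhere in the frame. The sketch below implements that generic idea with NumPy; it is not Tektronix’s proprietary PVQ algorithm.

```python
# Generic blockiness estimate of the kind a PVQ-style rating builds on:
# compare luma jumps across vertical 8x8 coding-block boundaries with
# jumps everywhere else. A ratio well above 1.0 suggests visible
# over-compression on this frame.

import numpy as np

def blockiness(luma):
    """luma: 2-D NumPy array of 8-bit luma samples for one frame."""
    col_diff = np.abs(np.diff(luma.astype(float), axis=1))
    at_edges = col_diff[:, 7::8].mean()              # columns 7|8, 15|16, ...
    elsewhere = np.delete(col_diff, np.s_[7::8], axis=1).mean()
    return at_edges / (elsewhere + 1e-9)
```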

The message is clear: If you really want to address customer complaints before they happen, you must measure every layer from RF/IP down to blockiness and audio/video quality. This will reduce subscriber loss in today’s highly competitive markets and will also reduce operational expenses.

About the Author

Sudeep Bose is a veteran product manager with more than 15 years’ experience and is currently managing the development of Tektronix' Cerify file-based content analyzer. He is well versed in a variety of technologies and earned a degree in electrical engineering from Georgia Institute of Technology.
