Navigating the Many Layers of the ATSC 3.0 Ecosystem: Part 2

Part One of this two-part series explored the various layers and protocols of ATSC 3.0 that broadcasters must understand to take full advantage of the opportunities available through the technology. This second and final installment explores best practices for signal verification and compliance across the ATSC 3.0 ecosystem.

Work on ATSC 3.0 recommended practices for verification and compliance is in progress. At this point in its development, it remains focused on the RF signal layers. For example, A/325 summarizes processes for testing RF performance in a lab environment, while A/326 presents objectives and a general methodology for testing RF performance in the field.

There is no question that a set of best practices for the overall ATSC 3.0 system will provide great value for broadcasters. End-to-end verification implies that all layers and all components must be evaluated as a whole rather than individually.

Some monitoring points will be the same as before: knowing whether what leaves the plant, what is received at the transmitter site, and what goes out over the air is good or bad. Unfortunately, given the large number of layers, features and dynamically configurable options, these three points can no longer be looked at in isolation. If deploying advanced features such as Audience Measurement, Ad Insertion, Second Screen, SFN and/or Channel Bonding, the broadcaster may also need to add last-mile and cloud monitoring points to the analysis mix.

Complex Tools Analyze a Complex Format

Many of the Quality of Service (QoS) and Quality of Experience (QoE) parameters are the same and can be meaningful on their own, provided the operator is aware of what is being monitored. For the most part, a tool that can compare and contrast all signal layers at multiple points along the path, in real time, is far more insightful and alleviates the need to be an expert in every detail. This capability also makes cause and effect immediately visible when making configuration changes and baselining operations.

As experienced with the analog-to-digital conversion, broadcasters discovered a new set of measurement tools known as QoS, QoE or Transport Stream Analyzers. The level of capability and complexity of ATSC 3.0 Next Gen TV will require even more sophisticated tools.

The following are potential issues with early ATSC 3.0 deployments and some approaches to resolve them quickly and mitigate the risk of recurrence. The system under discussion consists of two multi-input probes, one at each physical location, and one aggregation server to collect, correlate and analyze all the inputs. The next step is to choose a tool that allows additional transmitter sites to be added easily, giving centralized monitoring and a larger data set for correlation and analysis.
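
As a rough illustration of this topology, the sketch below models two probes and an aggregation server in Python, with adding another transmitter site reduced to registering another probe. All names, addresses and point assignments are hypothetical placeholders, not a real product configuration.

```python
from dataclasses import dataclass, field

@dataclass
class Probe:
    """A monitoring probe with one input per measurement point (Figure 1)."""
    name: str
    location: str
    inputs: dict  # point label -> signal source, e.g. {"Point 3": "udp://..."}

@dataclass
class MonitoringSystem:
    """Aggregation server that collects and correlates data from all probes."""
    aggregator: str
    probes: list = field(default_factory=list)

    def add_site(self, probe: Probe) -> None:
        # Adding another transmitter site is just registering another probe.
        self.probes.append(probe)

# Hypothetical addresses and point assignments, for illustration only.
system = MonitoringSystem(aggregator="https://aggregator.example.net")
system.add_site(Probe("studio-probe", "studio",
                      {"Point 1": "udp://239.1.1.1:30000",   # encoder/packager output
                       "Point 3": "udp://239.1.1.3:30000"})) # gateway (STLTP) output
system.add_site(Probe("tx-probe", "transmitter-site",
                      {"Point 6": "udp://239.1.1.6:30000",   # STL output at exciter input
                       "Point 9": "rf://ch=36"}))            # off-air RF receive
```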

Figure 1: A multipoint monitoring system.

Studio and Transmitter Site Outputs

The first logical point to monitor and analyze an ATSC 3.0 signal is at the RF output, but verifying pictures and levels isn't enough. It is important to analyze all the layers to verify a fully decodable signal, and to look for timing and synchronization abnormalities even when the signal appears decodable. The most logical way to do this is to compare the RF output to the output of the broadcast gateway or scheduler/framer, as shown in Figure 1 at Point 3.

Comparing the two signals will expose timing and latency problems, missing or misaligned objects and incomplete streams, and will verify bitrates as well as control and signaling information. Multi-layer analysis will also expose the impact these issues have on the decoded media. Not all discrepancies will cause poor viewer QoE, but it is good to know where and when that line is crossed. This level of monitoring is analogous to knowing that what leaves the plant is good and what goes over the air is good.
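
A minimal sketch of this kind of two-point comparison, assuming each probe reports a simple dictionary of per-layer measurements; the field names and thresholds are illustrative, not taken from any particular analyzer.

```python
def compare_points(ref: dict, air: dict, max_latency_s: float = 2.0) -> list:
    """Return human-readable discrepancies between the same stream measured at two points."""
    issues = []

    # End-to-end latency, derived from a shared wall-clock timestamp on each capture.
    latency = air["capture_time"] - ref["capture_time"]
    if latency > max_latency_s:
        issues.append(f"latency {latency:.2f}s exceeds {max_latency_s}s")

    # Services present at the gateway output but missing off air.
    missing = set(ref["services"]) - set(air["services"])
    if missing:
        issues.append(f"services missing off air: {sorted(missing)}")

    # Per-PLP bitrate sanity check (flag deviations greater than 5%).
    for plp, rate in ref["plp_bitrate_bps"].items():
        air_rate = air["plp_bitrate_bps"].get(plp, 0)
        if abs(air_rate - rate) > 0.05 * rate:
            issues.append(f"PLP {plp}: {air_rate} bps off air vs {rate} bps at gateway")

    return issues
```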

If the vendor equipment in the chain supports a remote monitoring interface, such as Simple Network Management Protocol (SNMP) messaging, it is recommended to also include Points 2, 7 and 8 in Figure 1 in the overall analysis set. This correlates the signal at the input and output of a particular piece of vendor equipment with the vendor-reported information, giving a second opinion or another data point for faster root-cause analysis or false-alarm detection. In a perfect world, if everything looks good here, all is set; if a broadcaster sees problems, abnormalities or inconsistencies, it will be time to dig deeper.
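
Where equipment does expose SNMP, a basic poll is straightforward. The sketch below uses the pysnmp library's classic synchronous high-level API to read a single OID; the device address is a placeholder, and real health metrics would come from vendor-specific OIDs in the equipment's own MIB rather than the standard sysDescr object shown here.

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def poll(host: str, oid: str, community: str = "public"):
    """Fetch a single OID value from a device via SNMP v2c; return None on failure."""
    error_indication, error_status, _, var_binds = next(
        getCmd(SnmpEngine(),
               CommunityData(community, mpModel=1),               # SNMP v2c
               UdpTransportTarget((host, 161), timeout=2, retries=1),
               ContextData(),
               ObjectType(ObjectIdentity(oid))))
    if error_indication or error_status:
        return None
    return var_binds[0][1]

# sysDescr.0 is standard MIB-II; vendor health OIDs would replace it in practice.
print(poll("192.0.2.10", "1.3.6.1.2.1.1.1.0"))
```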

With ATSC 3.0 capabilities, broadcasters will need to take both a micro- and a macroscopic view of signal delivery. Monitoring signal health throughout the delivery chain will be key to ensuring a high QoS.

Sources and Inputs

The next logical point to monitor in Figure 1 is Point 1, the input interface to the broadcast gateway. This provides independent verification of the output of the encoder and/or segmenter/packager, and also exposes structure and configuration data for interoperability.
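
In an ATSC 3.0 chain the packager output at Point 1 is typically DASH content carried over ROUTE, so one simple structural check is to fetch and inspect the DASH manifest. The sketch below, with a hypothetical MPD URL, pulls a few interoperability-relevant attributes; it stands in for the much deeper object and signaling analysis a real probe would perform.

```python
import xml.etree.ElementTree as ET
import requests

MPD_URL = "http://packager.example.net/service1/manifest.mpd"  # hypothetical

def check_mpd(url: str) -> dict:
    """Fetch a DASH MPD and report basic structural facts for interoperability checks."""
    resp = requests.get(url, timeout=5)
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    ns = {"mpd": "urn:mpeg:dash:schema:mpd:2011"}
    adaptation_sets = root.findall(".//mpd:AdaptationSet", ns)
    return {
        "type": root.get("type"),                        # should be "dynamic" for live
        "availabilityStartTime": root.get("availabilityStartTime"),
        "adaptation_sets": len(adaptation_sets),
        "mime_types": sorted({a.get("mimeType") for a in adaptation_sets if a.get("mimeType")}),
    }

print(check_mpd(MPD_URL))
```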

Encoding issues are an easy and obvious case in which a QoS/QoE system can identify the root cause of errors. Some tools, such as Qligent Vision's Match, can compare the video going into the encoder with the video coming out of the transmitter RF to isolate programmatic issues. Such a tool can also expose content insertions, aspect-ratio conversions and embedded metadata such as watermarks or triggers; measure percentage quality degradation and latency through the system; and offer a host of other useful analysis showing what was changed from the source streams.
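
The sketch below is not Match itself, just a toy illustration of one underlying measurement: a full-reference PSNR comparison between a source frame and the matching decoded off-air frame, using NumPy. A real system would first time-align the two streams and would use richer perceptual metrics.

```python
import numpy as np

def psnr(reference: np.ndarray, degraded: np.ndarray) -> float:
    """Peak signal-to-noise ratio between two aligned 8-bit frames (higher is better)."""
    mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10 * np.log10((255.0 ** 2) / mse)

# Toy frames standing in for a source frame and the matching off-air frame.
src = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
aired = np.clip(src.astype(int) + np.random.randint(-3, 4, src.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(src, aired):.1f} dB")
```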

Studio to Transmitter Link

Point 6 in Figure 1 is the output of the STL link, measured at the input to the exciter. If available via SNMP, include Points 4 and 5 from Figure 1 to collect data from the STL vendor and correlate it with the signal and the other vendor equipment findings.

Another valuable and quick check is to compare Points 3 and 6 in Figure 1, as they should be identical STLTP streams. Trending this data over time will profile the link's health, which is especially useful if the link runs over a shared or public network. The STL will use SMPTE 2022-1 FEC, so keep an eye on the number of FEC packets recovered and other parameters to see how clean the link is between sites. The signal could be getting through fine while the STL error correction works overtime to deliver it.
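
A small sketch of the kind of trending that catches a link "working overtime": sample the cumulative SMPTE 2022-1 "FEC packets recovered" counter and flag when the recovery rate climbs above a chosen threshold, even though the corrected output is clean. The window and threshold are arbitrary illustrative values.

```python
from collections import deque

class FecTrend:
    """Track 'FEC packets recovered' counter samples and flag a link that is
    error-free after correction but working increasingly hard."""

    def __init__(self, window: int = 60, threshold_per_min: float = 100.0):
        self.samples = deque(maxlen=window)    # (timestamp_seconds, counter_value)
        self.threshold = threshold_per_min

    def add(self, timestamp: float, recovered_total: int) -> bool:
        """Record a sample; return True when the recovery rate exceeds the threshold."""
        self.samples.append((timestamp, recovered_total))
        if len(self.samples) < 2:
            return False
        (t0, c0), (t1, c1) = self.samples[0], self.samples[-1]
        rate_per_min = 60.0 * (c1 - c0) / max(t1 - t0, 1e-9)
        return rate_per_min > self.threshold
```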

This Qligent Vision display screen illustrates how multiple key parameters can be monitored simultaneously. Any signals operating outside of user-set limits appear in red.

System Analysis Features 

The general features of a good end-to-end monitoring and analysis system consist of intuitive visualization, comprehensive analysis and actionable reporting.

Visualization is key to quickly understanding a system's overall health and performance. Dashboards and drill-downs, with sortable and filterable alarms, multi-layer correlation and other visual explanations of the system, are required in real time along with historic data. These views should be customizable for each user so they can focus on their areas of responsibility.

Analysis tools are important to help a user investigate the system from multiple perspectives. The operator needs to be able to identify the root cause of errors such as loss of captioning or a signaling failure. A measurement system should be able to tell the operator whether errors are one-offs or part of a ripple effect. Finally, the analysis platform should be able to offer predictive data based on observations and trend identification.
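
One simple way to separate one-offs from ripple effects is to group alarms from different monitoring points that occur within a short window of each other; a group spanning several points suggests a common upstream cause. The sketch below, with invented example alarms, shows the idea.

```python
from datetime import datetime, timedelta

def correlate(alarms: list, window: timedelta = timedelta(seconds=5)) -> list:
    """Group (timestamp, point, message) alarms that occur close together in time."""
    alarms = sorted(alarms)
    groups, current = [], []
    for alarm in alarms:
        if current and alarm[0] - current[-1][0] > window:
            groups.append(current)
            current = []
        current.append(alarm)
    if current:
        groups.append(current)
    return groups

# Hypothetical alarms: the same fault rippling downstream through three points.
events = [
    (datetime(2020, 1, 1, 12, 0, 0), "Point 1", "caption stream missing"),
    (datetime(2020, 1, 1, 12, 0, 2), "Point 3", "caption stream missing"),
    (datetime(2020, 1, 1, 12, 0, 3), "Point 6", "caption stream missing"),
]
for group in correlate(events):
    points = {point for _, point, _ in group}
    kind = "ripple effect" if len(points) > 1 else "one-off"
    print(kind, sorted(points))
```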

When collecting data from multiple signal points and status from multiple vendors' equipment, it is also necessary to have a tool with good data aggregation capabilities. Recording of raw transport and/or packet capture (PCAP) is vital for sharing information with colleagues and vendors for conducting secondary or post analysis.
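
For the PCAP side, even a short scripted capture of the stream in question is enough to share with a vendor for post analysis. The sketch below uses Scapy (which needs elevated privileges to sniff); the interface, multicast group and port are hypothetical.

```python
from scapy.all import sniff, wrpcap  # requires root/administrator privileges

# Capture a short burst of the STLTP multicast leaving the gateway and save it
# for offline or vendor analysis. Interface, group and port are placeholders.
packets = sniff(iface="eth1", filter="udp and host 239.1.1.3 and port 30000",
                count=1000, timeout=30)
wrpcap("stltp_point3.pcap", packets)
print(f"Saved {len(packets)} packets for offline analysis")
```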

Reporting features should cover a wide range of capabilities, from notifications to actual recordings to full report generation. Notifications are important for management by exception and need to be customizable, covering the usual visual and audible alarms along with emails and text messages. Reports should also be able to provide machine-to-machine data for external systems such as a Network Management System (NMS) or an Operations Support System (OSS). Finally, automated raw and consolidated reports, exported to spreadsheet or document form and accompanied by the associated recorded stream segments, are useful for cross-department and upper-management reporting.
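
As a rough sketch of the reporting end, the snippet below exports a consolidated report to a spreadsheet-friendly CSV and sends an exception notification by email using only the Python standard library. The SMTP host and addresses are placeholders, and real NMS/OSS integration would typically use SNMP traps or an API instead.

```python
import csv
import smtplib
from email.message import EmailMessage

def export_report(rows: list, path: str = "daily_report.csv") -> None:
    """Write a consolidated report (a list of dicts) to a spreadsheet-friendly CSV."""
    if not rows:
        return
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=sorted(rows[0]))
        writer.writeheader()
        writer.writerows(rows)

def notify(subject: str, body: str, to: str, smtp_host: str = "mail.example.net") -> None:
    """Send an exception notification by email; host and addresses are placeholders."""
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = subject, "monitor@example.net", to
    msg.set_content(body)
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)
```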

Long-term trend analysis of all parameters is an extremely valuable tool. There is no worse problem to resolve than the one that shows up every couple of weeks when no changes were made to either configuration or operation. A monitoring system that can trend and correlate several layers of the signal across several data points mitigates the long hours of troubleshooting these types of problems. Abnormalities naturally pop out, and having recordings associated with the time of the error helps the engineer quickly resolve the situation.
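
A minimal sketch of the long-term trending idea: keep a rolling history of each parameter and flag samples that deviate sharply from the recent baseline. The window size and z-score limit are illustrative; a production system would also correlate flagged samples across layers and monitoring points.

```python
import statistics
from collections import deque

class TrendWatcher:
    """Flag samples that deviate strongly from the recent baseline of a parameter."""

    def __init__(self, window: int = 1440, z_limit: float = 4.0):
        self.history = deque(maxlen=window)   # e.g. one sample per minute, 24-hour window
        self.z_limit = z_limit

    def add(self, value: float) -> bool:
        """Return True when the new sample is a statistical outlier versus history."""
        anomalous = False
        if len(self.history) >= 30:
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(value - mean) / stdev > self.z_limit
        self.history.append(value)
        return anomalous
```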

When tied to an ingest/playout automation system, a monitoring system can inform the operator of incorrect or missed feeds or advertisements.

Adding any of the new ATSC 3.0 features, especially the interactive ones, will create a whole new set of timing and control problems. Imagine the hybrid-mode use case, where the user's player switches from broadcast to broadband and back again, as with a mobile ATSC 3.0 receiver. Supporting this handover requires monitoring certain key signaling. Once these advanced features are deployed, adding monitoring at Points 10 and 11 in Figure 1 will close a bigger loop for a much safer, faster and less painful transition.
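
As a very small illustration of watching the broadband side of a hybrid service, the sketch below simply confirms that the resources a receiver might hand over to are reachable. The URLs are hypothetical; in practice they would be extracted from the broadcast service signaling rather than hard-coded, and the real monitoring task also covers the signaling itself.

```python
import requests

# Hypothetical broadband resources that the broadcast signaling points receivers
# to for hybrid delivery.
BROADBAND_URLS = [
    "https://cdn.example.net/service1/manifest.mpd",
    "https://cdn.example.net/service1/app/index.html",
]

def check_broadband_side(urls: list, timeout: float = 5.0) -> dict:
    """Confirm that each broadband resource a receiver may hand over to is reachable."""
    results = {}
    for url in urls:
        try:
            resp = requests.head(url, timeout=timeout, allow_redirects=True)
            results[url] = resp.status_code
        except requests.RequestException as exc:
            results[url] = f"unreachable ({exc.__class__.__name__})"
    return results

print(check_broadband_side(BROADBAND_URLS))
```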

Timely Investments

ATSC 3.0 is clearly structured to be a game changer for broadcasters fighting for viewer share in an increasingly fractured television marketplace. With the promise of a new dawn come great responsibilities: to learn, operate and maintain an infrastructure that will be unfamiliar to broadcasters in many respects. With so many new moving parts across the standards, streams and parameters built into ATSC 3.0, a head start on your monitoring strategy will go a long way toward solidifying the overall health and performance of your over-the-air system without requiring you to become an expert in every new standard.

The intrinsic IP networking capabilities of ATSC 3.0 make a simultaneous move to an IP- and cloud-based monitoring system a smart and timely investment. An end-to-end, holistic system approach makes future configuration changes less risky, as cause and effect throughout the system are automatically captured. Such an approach will close the signal monitoring loop that runs from the content source to the RF output, and will eventually extend to the viewer's receiver.

Ensuring that signals are being aired as intended across the many potential signal paths (RF, Cable, OTA, IP, OTT) is of critical importance. The ability to visualize, analyze, and report on the quality of the required signals (be it for experience or service) allows broadcasters to maximize the business capabilities of ATSC 3.0. The multi-point signal monitoring, big data analytics, and depth of troubleshooting enabled in a system like Vision will pay dividends from the moment a station’s ATSC 3.0 content delivery system goes on the air.

Part 1 in this two-part tutorial can be found here.

Ted Korte is Qligent COO.
