The Human Element in Digital Media Monitoring
Appropriate monitoring requires both a trained operator and the proper tools.
A good test and measurement solution combines both smart test equipment and a trained operator.
If anyone ever writes a history of work, it will no doubt have a prominent theme: the way machines have allowed people to do progressively more and be more productive. The human race isn’t the only species to use tools, but we are the only species that puts so much effort into developing and evolving them so that we can be ever more efficient in our work.
Sophisticated tools, and the power they give us to alter the conditions of our own environment, have enabled us to become the dominant species on the planet. And if we only wanted to till the earth and reproduce ourselves in comfort and plenty, we might have reached the point by now where there was no need to keep evolving new tools. But of course, we have greater ambitions and a seemingly endless desire for more.
This is the dynamic that drives industry, our behaviour as consumers, and our scientific enquiry. And although there are more humans on the planet than ever before, our impulse to know more, create more and have more means that we keep inventing new kinds of work – so much of it that there aren’t enough people to do it all.
Monitoring the new media
The work that we now call ‘digital media monitoring’ didn’t exist until quite recently; digital media delivery itself only emerged a few years ago. But this field has expanded explosively in a short space of time, creating a massive amount of new work that has to be done. And the volume of this work, and the rate of its expansion, is so great that it could not be done without the development of extremely powerful new tools.
In digital media delivery, the technology exists to monitor in fine-grained detail every aspect of a network and its performance. However, without the proper tools to help technicians make sense of such large amounts of data, it can be almost impossible to identify the cause of errors quickly.
And in this situation, much of the power of these tools comes from the gearing between the worker and the work done. It’s different from, for example, an accounting firm that takes on more auditing work; to get the audits completed, the firm has to hire more auditors, and each can only do roughly the same amount of work as the next. There’s no other way to get the job done.
But in digital media monitoring, the sheer amount of data to be analysed and understood every second of the day makes it impossible to take this approach. The tools have to provide enormous gearing, so that a single engineer can do an otherwise inconceivable amount of work, and do it in real time on live services.
New tools to the rescue
In the short space of time since digital media delivery began, monitoring solutions have evolved from the makeshift, sticking-plaster type, cobbled together from heterogeneous kit designed for a previous era of media operations. Now, the only viable systems are those designed from the ground up for the digital media monitoring era, with fully integrated end-to-end capability.
These systems give a coherent view from the initial ingest right through to the viewer’s device. But while it’s essential to have this coherent, omniscient capability, it does create a truly staggering amount of data to deal with. And the engineering control room of a digital media operation is not staffed by thousands of toiling technicians like some Orwellian Ministry of Data. Even in the largest operations, a handful of engineers have to be able to cope.
Smart monitoring can automatically assemble the proper data and then display it to the operator in an easily understood form. Note here that the operator has asked the system to show “only errors” over a 24-hour period.
And in fact a handful of highly qualified engineers is about all any digital media operator can hope to get hold of, because the expertise in this relatively new field is in high demand and short supply. So it’s vital that the monitoring systems don’t demand too much of the staff who use them. A system that spews out unmediated data and fills screens with dense numeric tables is not an efficient one, because it consumes too much of the engineer’s mental bandwidth, and that means the gearing between worker and work done is too low to be economically viable.
The benefits of smart monitoring
In the early years of digital media, it was just about possible for a skilled engineer to keep on top of a few channels simply by eyeballing the monitoring displays. Now that same control room probably has to deal with hundreds of channels, and that simply defeats the capabilities of a few pairs of human eyes, unless the monitoring system can do a lot of the work under the hood. Instead of a dense flow of numerical data, the hard-pressed engineer needs a highly evolved data presentation that aids at-a-glance recognition of anomalies.
If the system can itself point up anomalies, so much the better, but the problem is that some anomalies are not considered errors in the strict terms of the standards that define correct behaviour. Basing a monitoring strategy simply on detecting and flagging errors as defined in the ETR290 standard is not enough, because errors that may be considered acceptable within the standard can still cause problems that degrade the service. Some errors that cause critical failures aren’t even tested for by ETR290, so when one of these pops up, the engineer gets no help from the monitoring system in identifying it.
The smart monitoring approach
So a dumb approach to monitoring – limited to testing for conditions that breach parameters defined in the standard – is not the answer to the shortage of expert eyeballs. It makes far more sense to tailor the monitoring system to the realities of a digital media business today, where hard-pressed, time-poor engineers have to stay on top of constantly increasing numbers of services. In this context, smart monitoring technology makes it possible to set up the correct parameters for a new service quickly (otherwise a very time-consuming and error-prone task in itself), and it embraces not only the ETR290 tests but also tests for conditions that fall outside the scope of the standard, such as CAS errors and unintended language switches.
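To make the idea concrete, here is a minimal, hypothetical sketch of how a service profile might combine ETR290-style checks with checks the standard does not cover, such as CAS descrambling failures or an unexpected audio language. It is not taken from any real monitoring product; all names, thresholds and sample measurements are illustrative assumptions.

```python
# Hypothetical sketch: one rule set that mixes ETR290-style checks with
# service-level checks outside the standard. All names, thresholds and
# sample values are invented for illustration.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Check:
    name: str
    severity: str                      # "critical", "major" or "warning"
    test: Callable[[Dict], bool]       # returns True when the condition is OK

# A made-up snapshot of measurements for one service.
sample_measurements = {
    "continuity_errors_per_min": 0,    # ETR290-style check
    "pcr_jitter_ns": 310,              # ETR290-style check
    "cas_decrypt_failures": 2,         # outside the standard
    "audio_language": "eng",           # outside the standard
}

def build_profile(expected_language: str) -> List[Check]:
    """Assemble standard-defined and extra checks for one service."""
    return [
        Check("TS continuity",    "critical", lambda m: m["continuity_errors_per_min"] == 0),
        Check("PCR jitter",       "major",    lambda m: m["pcr_jitter_ns"] < 500),
        Check("CAS descrambling", "critical", lambda m: m["cas_decrypt_failures"] == 0),
        Check("Audio language",   "warning",  lambda m: m["audio_language"] == expected_language),
    ]

def run_profile(checks: List[Check], measurements: Dict) -> List[str]:
    """Return an alarm line for every check that fails."""
    return [f"{c.severity.upper()}: {c.name}" for c in checks if not c.test(measurements)]

if __name__ == "__main__":
    profile = build_profile(expected_language="eng")
    for alarm in run_profile(profile, sample_measurements):
        print(alarm)                   # prints: CRITICAL: CAS descrambling
```

In a real system a profile like this would be generated automatically from the service plan rather than written by hand; the point is simply that standard-defined errors and out-of-scope conditions can be evaluated in one unified rule set.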
This image displays a four-day alarm history of all transport streams from multiple locations, sorted by worst severity so that the services with issues appear at the top. The view shows alarm history for OTT streams, IPTV multicasts, QAM multiplexes, COFDM multiplexes, satellite transponders and ASI – all in one compact overview.
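As a rough illustration of the “worst severity first” sorting described in the caption above, the following sketch pools made-up alarm records from several delivery types and ranks the services so that the worst offenders rise to the top. The data structure, field names and severity scale are assumptions for illustration only.

```python
# Minimal sketch of "worst first" sorting across mixed delivery types.
# All records and field names are invented for illustration.

from collections import defaultdict

SEVERITY_RANK = {"critical": 3, "major": 2, "warning": 1, "ok": 0}

alarm_history = [
    {"service": "News HD (OTT)",      "type": "OTT",   "severity": "major"},
    {"service": "Sports 1 (IPTV)",    "type": "IPTV",  "severity": "critical"},
    {"service": "Movies (QAM mux 3)", "type": "QAM",   "severity": "warning"},
    {"service": "Sports 1 (IPTV)",    "type": "IPTV",  "severity": "major"},
    {"service": "Local TV (COFDM)",   "type": "COFDM", "severity": "ok"},
]

def worst_first(history):
    """Group alarms per service and sort services by their worst severity."""
    worst = defaultdict(int)
    for alarm in history:
        rank = SEVERITY_RANK[alarm["severity"]]
        worst[alarm["service"]] = max(worst[alarm["service"]], rank)
    return sorted(worst.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    for service, rank in worst_first(alarm_history):
        print(f"{service:25s} worst severity rank = {rank}")
```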
The smartest monitoring technology also lets engineers easily bring together a customised flight deck of monitoring instruments to best communicate the unique data required in each operational situation. This display of virtual data instruments can be reconfigured at the drop of a hat when requirements change, for example when a major live event is scheduled. And, crucially, the data wall can be viewed not only in the engineering control room but from any location in a standard internet browser. So the data can follow the eyeballs, wherever they are.
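One way to picture such a reconfigurable “flight deck” is as a layout defined purely as data, which can be edited in seconds and served to any browser. The sketch below is a hypothetical illustration of that idea; the panel names, instruments and fields are invented and do not describe any particular product.

```python
# Hypothetical sketch: a dashboard layout held as plain data, so it can be
# reconfigured quickly and serialized (e.g. as JSON) for a browser-based
# view. All panel and instrument names are invented.

import json

flight_deck = {
    "name": "Live event - Saturday",
    "panels": [
        {"instrument": "alarm_summary",   "service": "Sports 1 (IPTV)", "window": "24h"},
        {"instrument": "bitrate_trend",   "service": "Sports 1 (OTT)",  "window": "1h"},
        {"instrument": "etr290_overview", "service": "QAM mux 3",       "window": "4d"},
    ],
}

def reconfigure(deck, panel_index, **changes):
    """Return a copy of the deck with one panel's settings changed."""
    new_deck = json.loads(json.dumps(deck))          # cheap deep copy
    new_deck["panels"][panel_index].update(changes)
    return new_deck

if __name__ == "__main__":
    # Point the bitrate trend panel at a different service for the event.
    updated = reconfigure(flight_deck, 1, service="Sports 2 (OTT)")
    print(json.dumps(updated, indent=2))             # what a browser view might fetch
```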
Editor note:
The following articles by Simen Frostad contain more information on test and measurement and the monitoring of digital signals.