Using KPI for Media Workflows
Image courtesy of Erica Olsen of OnStrategy
Determining the Key Performance Indicators (KPI) required to measure media workflows is no different than for any other workflow. Attempting to set standards and take measurements without a thorough understanding of the process will produce misleading results, regardless of whether your workflow is for a doctor’s office or a post suite.
At IBC I encountered multiple exhibitors whose demonstrators mentioned KPI, claiming that the product always had the correct KPI and could often be tailored to the user or project. While this all sounds good, the fact is that using KPI properly requires some understanding of the underlying theory. This article will help you get started.
When developing key performance indicators, it is important to understand what processes are involved and what the desired output goals are.
Why have KPI?
The first question we need to ask is: “What is the purpose of the measurement?” The answer will include not only the reason but also the audience. In the case of the above workflow, our audience could include all stakeholders or just a single operator such as the Asset Manager. With so broad a range of possible audiences, how we measure a KPI for productivity will differ in each case.
Let’s assume we want to measure productivity and our audience is the Asset Manager. First, we need to define productivity within the constraints of the workflow. Rather than pick some theoretical or historically accepted measurement, it makes sense to look at our workflow and see whether any of its automatically generated data can give us usable information.
For instance, ask what productivity means to the Asset Manager. Is it personal performance, system performance, operator performance? Because the Asset Manager is client facing, the organization’s productivity might be down, regardless of how well the process performs, if orders are not being placed or if the orders are not profitable. What if the Asset Manager is doing excellent work but their major account just went out of business?
A KPI always measures something. Let us call this something the “facilitator”, regardless of whether it is a person, a machine or an entire system. Now we need to know what the facilitator can affect and how. Measuring the productivity of the Asset Manager by whether the client base increased makes no sense, but an increase in orders from an existing client is affected by the manager’s performance.
If your facilitator is a person, make sure they get the current status of the KPI so that they may explain when circumstances were beyond their control.
The Asset Manager
The Asset Manager also has a controlling function. Part of this is on-time, in-budget and to-specification delivery. The other part is profitability. How can the Asset Manager affect profitability? Let’s look at the workflow. All organisational tasks are handled by the Asset Manager, so what information does the workflow provide when these tasks are not being handled efficiently? Every workflow has a so-called “happy path” where everything goes as expected and there are no exceptions. Many in this industry may claim such a condition never occurs in broadcast or post work.
In this example workflow, all events go as planned and there are no exceptions.
As you can imagine, exceptions to an expected chain of events cost time and money. If the workflow was designed to allow for it, we can automatically check the actual exceptions against the planned exceptions outlined in the original order. This gives us a rough KPI, “Unplanned Exceptions”; however, the facilitator (the Asset Manager) cannot affect all unplanned exceptions.
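As a rough illustration only, here is a minimal sketch of how planned and actual exceptions might be compared automatically, assuming the workflow system can export exception events per order. The record layout, field names and exception codes are hypothetical, not taken from any particular product.

```python
from dataclasses import dataclass

@dataclass
class Order:
    order_id: str
    planned_exceptions: set   # exception codes allowed for in the original order
    actual_exceptions: set    # exception codes actually logged by the workflow system

def unplanned_exception_kpi(orders: list) -> float:
    """Return the average number of unplanned exceptions per order."""
    if not orders:
        return 0.0
    unplanned = sum(
        len(order.actual_exceptions - order.planned_exceptions)
        for order in orders
    )
    return unplanned / len(orders)

# Example: one order with a planned re-render and an unplanned re-delivery.
orders = [Order("PO-1001", {"re-render"}, {"re-render", "re-delivery"})]
print(unplanned_exception_kpi(orders))  # 1.0 unplanned exception per order
```

Because the comparison uses data the workflow already generates, no extra manual data entry is required to produce the figure.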
Now we have two cases where the workflow is not giving us all the KPI data we think we need. Whether the product was delivered to specification can be objective or subjective. If we are slightly off spec but the client is satisfied, who’s to say differently? The point is that the absence of this data does not affect the usability of the KPI for its intended purpose.
Also, if all specs are technically correct and the client is still dissatisfied, why is this? Remember, our audience is the Asset Manager measuring their own productivity, and the purpose of a KPI in this case is an alert. So, if client orders match market expectations, product is on-time, in-budget and technically correct, and unplanned exceptions are kept to a minimum, is this enough for the Asset Manager to measure their own productivity? Because this information is automatically generated as part of the production process, it is extremely accurate. Asking users to enter data that the workflow does not need is counterproductive and does not increase the usability of this KPI.
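To show what “KPI as an alert” could look like in practice, here is a small sketch that checks automatically reported values against thresholds. The metric names and threshold values are assumptions for the example; each organisation would set its own.

```python
# Hypothetical thresholds; rates should stay above them, exception counts below.
KPI_THRESHOLDS = {
    "on_time_delivery_rate": 0.95,          # fraction of orders delivered on time
    "in_budget_rate": 0.90,                 # fraction of orders within budget
    "unplanned_exceptions_per_order": 0.5,
}

def kpi_alerts(current: dict) -> list:
    """Return alert messages for KPI values outside their thresholds."""
    alerts = []
    for name, threshold in KPI_THRESHOLDS.items():
        value = current.get(name)
        if value is None:
            continue
        breached = value < threshold if name.endswith("_rate") else value > threshold
        if breached:
            alerts.append(f"{name}: {value} (threshold {threshold})")
    return alerts

print(kpi_alerts({"on_time_delivery_rate": 0.92, "unplanned_exceptions_per_order": 0.3}))
# -> ['on_time_delivery_rate: 0.92 (threshold 0.95)']
```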
Measure the correct parameters
Let’s change the audience: the boss wants performance data on the new Asset Manager. What is missing from the above? We need a baseline for comparison against some expected level. The best scenario would be a history of previous Asset Managers measured with the same KPIs.
Why not use “throughput”, you may ask? Historically, throughput was the easiest thing to measure, so many gadgets per hour, but today we can measure other things just as easily. Still, there is a missing element: how much of the available time was booked to which client, and what percentage of the total “work-time” was booked to client work. Also interesting for the boss is client to Asset Manager matching.
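An illustrative sketch of that booking calculation follows, assuming time bookings can be exported from the workflow or scheduling system. The record layout, client names and figures are invented for the example.

```python
def client_booking_ratio(bookings: list, available_hours: float) -> dict:
    """Return hours booked per client and the share of work-time spent on client work."""
    per_client = {}
    for booking in bookings:
        client = booking.get("client")
        if client:  # internal tasks carry no client reference
            per_client[client] = per_client.get(client, 0.0) + booking["hours"]
    client_hours = sum(per_client.values())
    return {
        "hours_per_client": per_client,
        "client_work_share": client_hours / available_hours if available_hours else 0.0,
    }

bookings = [
    {"client": "ACME", "hours": 12.0},
    {"client": "Globex", "hours": 6.5},
    {"client": None, "hours": 3.0},   # internal admin time
]
print(client_booking_ratio(bookings, available_hours=40.0))
# -> {'hours_per_client': {'ACME': 12.0, 'Globex': 6.5}, 'client_work_share': 0.4625}
```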
There is an iterative process between workflow design and KPI reporting. A well-designed workflow will capture all the information required to develop informative KPI without needing additional manual inputs. The opening image for this article is taken from a YouTube tutorial video on developing organizational KPI.