MAM is Dead. Long Live Media Logistics—Part 3
Conceptually, you are delivering your content to your audience
In the third and final part of BroadcastBridge’s MAM feature we contend that MAM as we’ve known it is dead and that today’s broadcasters and content delivery firms want a media logistics solution which encompasses all ingest, production, distribution and archive with rich metadata including rights. If so, are the tools in most MAMs appropriate for ‘orchestrating’ all of these assets?
Here are the comments of Tony Taylor, CEO TMD.
TT: Many MAM solutions have been designed around a siloed approach. This has been typical of the way software has been developed in the broadcast industry for many years. Even now I find it incredible when I hear some of the stories of MAM implementations that have taken no account of joining up the business of media across organisations.
That joining up has to start with the metadata. The successful media businesses are those who realise the value of the metadata which exists alongside the content, and implement a MAM solution that uses it to the fullest extent possible.
There can be no argument that the future will be around file-based workflows in data centre environments. This depends upon metadata: acting on it, reacting to it and enriching it as it passes between and through facilities. The protection and enrichment of metadata has always been at the heart of any asset management system worth the name, and today it is the only logical place to put the workflow orchestration layer.
If workflow orchestration is about drawing on and adding to metadata, why would you even consider putting orchestration in a separate system? It has to be in the system which is charged with holding the metadata.
Content preparation and delivery firms are required to deliver assets to an ever increasing variety of platforms. How have manufacturers helped content companies gear up for life in a multi-platform world?
TT: You have to think in terms of layers. At the bottom is the hardware: the servers, the encoders and transcoders, and the content delivery networks. Above that is a control layer, which tells the hardware what to do with each piece of content.
Above that is the business layer. This is where executives look at the economics of the operation and make commercial decisions. In a modern media enterprise, these executives should be able to make decisions based on purely commercial considerations, not what the technology allows them to do.
The middle layer is the asset and workflow management. Its rich metadata captures all the information on the content: what rights are available; when and where it can be shown; what content needs to take priority through the encode farms and more. Most important, the asset and workflow management system should both be controlling the hardware at the bottom, and reporting and responding to the business systems above it.
Put simply, a CEO should be able to look at one screen – familiar to him or her because it is in the enterprise management layer – and make a decision to, say, put a particular programme on iTunes. That decision should pass automatically to the workflow management system which will draw on the technical metadata to determine precisely which processes are required, and implement them at the right time, again fully automatically.
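The decision flow Taylor describes can be sketched in code. This is a hypothetical illustration only, not any real product’s API: the `Asset` class, `PLATFORM_PROFILES` table and `plan_delivery` function are invented names, standing in for the idea that a commercial decision (“put this programme on iTunes”) is resolved into technical processes purely from metadata.

```python
from dataclasses import dataclass

# Hypothetical sketch: all names here are illustrative, not a real MAM API.

@dataclass
class Asset:
    title: str
    rights: set        # platforms the rights metadata permits, e.g. {"itunes"}
    resolution: str    # e.g. "1080p"

# Technical requirements per delivery platform, held as metadata.
PLATFORM_PROFILES = {
    "itunes": {"codec": "h264", "max_resolution": "1080p"},
    "vod":    {"codec": "h265", "max_resolution": "2160p"},
}

def plan_delivery(asset: Asset, platform: str) -> list:
    """Turn a commercial decision into a list of technical processes,
    driven entirely by the asset's metadata."""
    if platform not in asset.rights:
        raise ValueError(f"No rights to publish {asset.title!r} on {platform}")
    profile = PLATFORM_PROFILES[platform]
    return [
        f"transcode:{profile['codec']}",
        f"conform:{profile['max_resolution']}",
        f"deliver:{platform}",
    ]

asset = Asset("Documentary", rights={"itunes"}, resolution="1080p")
print(plan_delivery(asset, "itunes"))
```

The point of the sketch is that the rights check and the choice of processes both come out of the metadata, so the decision-maker never needs to know what the encode farm looks like.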
What are the tools to create, deliver and store files and metadata for broadcast, VoD, mobile and web in one workflow?
TT: The very simple answer to that is a rich metadata schema. If the asset and workflow management system knows all there is to know about the content, from rights to resolution, then it can command whatever other equipment is around to make all these things happen.
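As a rough illustration of what a “rich metadata schema” might contain, here is a minimal sketch using a plain dictionary. The field names are assumptions for illustration, not a real schema; the idea is that a delivery decision needs nothing beyond the metadata itself.

```python
# Hypothetical fragment of a rich metadata record; field names are invented.
asset_metadata = {
    "id": "prog-001",
    "title": "Example Programme",
    "technical": {"resolution": "1920x1080", "frame_rate": 25, "codec": "prores"},
    "rights": {
        "territories": ["GB", "IE"],
        "platforms": ["broadcast", "vod"],
        "window": {"start": "2024-01-01", "end": "2024-12-31"},
    },
    "editorial": {"genre": "documentary"},
}

def can_publish(meta: dict, platform: str, territory: str) -> bool:
    """A publish decision answered from rights metadata alone."""
    return (platform in meta["rights"]["platforms"]
            and territory in meta["rights"]["territories"])

print(can_publish(asset_metadata, "vod", "GB"))     # True
print(can_publish(asset_metadata, "itunes", "GB"))  # False
```

One record like this, spanning technical, rights and editorial fields, is what lets a single workflow environment serve broadcast, VoD, mobile and web without duplicating the asset per platform.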
It is, frankly, ridiculous to think that the media industry can think about multi-platform delivery in anything other than a single workflow environment. Conceptually, you are delivering your content to your audience. It is one concept, so how can it be anything other than one workflow environment?
There are many tools that exist to achieve this, from editors to transcoders. But the primary tool to ensure efficient automated media business process management is content intelligence, relying on the metadata. There is no need to compromise if you use the intelligence inherently encapsulated in the metadata and content.
How important is the ability to integrate tools from a range of vendors?
TT: Broadcast engineers have always chosen best of breed solutions: the right set of functionality and performance for a specific installation. Do we really think anyone wants to change that?
However, as we move into the IT-centric and increasingly the cloud era, we have to find ways to maintain and simplify that choice. One of the biggest challenges is scaling services up and down to cater for peaks and troughs in volumes as well as introducing new technologies and services. At TMD we have designed, integrated and implemented a platform called UMS – unified media services – which is a simple approach to service-oriented architectures that enables broadcast and media organisations to cost effectively integrate third-party technologies.
There is of course the FIMS standard as a good open foundation, but this does not answer all of the needs of the current broadcast customer. So UMS provides a service bus to support integrations, which includes FIMS, proprietary APIs and other methods to decouple the technology from the operations, allowing users to choose best of breed hardware yet still operate it from automated, metadata-driven workflow orchestration.
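The decoupling Taylor attributes to a service bus can be sketched as follows. To be clear, this is not UMS code and every name in it is invented: it simply shows the pattern of workflows addressing a service by role while vendor-specific adapters sit behind a common interface.

```python
from abc import ABC, abstractmethod

# Illustrative decoupling sketch; all class and method names are invented.

class TranscodeService(ABC):
    """The role the workflow sees; vendors live behind it."""
    @abstractmethod
    def transcode(self, source: str, profile: str) -> str: ...

class VendorATranscoder(TranscodeService):
    def transcode(self, source, profile):
        # In reality this would call vendor A's proprietary API.
        return f"vendorA:{source}->{profile}"

class VendorBTranscoder(TranscodeService):
    def transcode(self, source, profile):
        # In reality this would call vendor B's API, or a FIMS endpoint.
        return f"vendorB:{source}->{profile}"

class ServiceBus:
    """Workflows request services by role, never by vendor."""
    def __init__(self):
        self._services = {}
    def register(self, role: str, service: TranscodeService):
        self._services[role] = service
    def request(self, role: str, source: str, profile: str) -> str:
        return self._services[role].transcode(source, profile)

bus = ServiceBus()
bus.register("transcode", VendorATranscoder())
print(bus.request("transcode", "master.mxf", "h264-web"))
```

Swapping best-of-breed hardware then changes one `register()` call rather than the metadata-driven workflow itself, which is the operational point of the architecture.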
Is it best to adopt a single system or opt for a modular workflow?
TT: It is best to implement a system that fulfils the real commercial needs of the media company. In some cases that can be done in a one-stop shop solution. In most cases, I suspect, it will best be served by components from a number of top vendors, brought together under a metadata-driven environment. Either way, the question should never be “who do I buy this from?” but “what do I need to make money?”. It has to be looked at from the business perspective and not simply the technology preference of an engineering or IT department.
TMD's Tony Taylor