As Demand For Content Increases, So Too Does The Need For MAM
While the amount of content generated by broadcasters and program providers continues to increase exponentially, the need to manage those assets and support collaborative production as well as a myriad of distribution platforms has never been greater. The pressure is on to compete in today’s multiplatform world, where even small-market TV stations are expected to deliver a steady stream of content to the Web and mobile devices in addition to their linear broadcasts.
For most, this has meant deploying software-defined systems, boosted by artificial intelligence (AI), that help curate pieces for multiple audiences. Improvements in software interfaces and interoperability standards have made MAM systems an easier, less costly proposition than they were even a few years ago.
Indeed, more and more broadcasters today are investing in MAM systems that use metadata to track a piece of content from ingest through production to playout, then archive it in a way that allows easy future access. There’s very little, if any, human intervention needed.
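A metadata-driven lifecycle like the one described above can be sketched as a simple asset record that logs each stage transition automatically. The stage names and fields below are illustrative only, not any vendor's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative lifecycle stages; real MAM schemas vary by vendor.
STAGES = ["ingest", "production", "playout", "archive"]

@dataclass
class Asset:
    asset_id: str
    title: str
    stage: str = "ingest"
    history: list = field(default_factory=list)

    def advance(self, new_stage: str) -> None:
        """Move the asset to the next lifecycle stage, logging the transition."""
        if STAGES.index(new_stage) != STAGES.index(self.stage) + 1:
            raise ValueError(f"cannot jump from {self.stage} to {new_stage}")
        self.history.append((self.stage, datetime.now(timezone.utc)))
        self.stage = new_stage

clip = Asset("A-0001", "Evening bulletin open")
clip.advance("production")
clip.advance("playout")
clip.advance("archive")
```

Because every transition is timestamped and validated, the record itself becomes the audit trail that makes "easy future access" possible without human intervention.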
Systems now on offer can run on private or public cloud infrastructure as well as on a hybrid of on-premises and cloud workflows, giving media companies fully automated and versatile MAM processes.
At Dalet, developers are building new AI-assisted tools to help journalists sift through and quickly find the digital content they ingest on a typical day. Machine learning and AI are also helping to automate processes with predictive analytics and (for viewers) better recommendations.
In today’s fast-paced environment, media and entertainment companies need to manage their digital assets efficiently, increase revenue opportunities, and streamline production costs. This requires automated processes and a distributed services architecture, all managed by a workflow orchestration layer.
Dalet Galaxy five is a combined MAM and workflow orchestration platform that unifies the content lifecycle by managing assets, metadata, workflows, and processes across multiple and diverse production and distribution systems. It includes a BPMN 2.0-compliant workflow engine that boosts productivity and agility while improving operational and business visibility across the media library.
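BPMN 2.0 workflows are defined in XML, so a simple "transcode, then QC, then publish" process driven by such an engine might look like the following sketch. The element IDs and task names are hypothetical, not taken from Dalet's actual configuration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
             targetNamespace="http://example.com/mam">
  <process id="deliverAsset" isExecutable="true">
    <startEvent id="ingestDone"/>
    <sequenceFlow id="f1" sourceRef="ingestDone" targetRef="transcode"/>
    <serviceTask id="transcode" name="Transcode to house format"/>
    <sequenceFlow id="f2" sourceRef="transcode" targetRef="qcCheck"/>
    <userTask id="qcCheck" name="Manual QC review"/>
    <sequenceFlow id="f3" sourceRef="qcCheck" targetRef="published"/>
    <endEvent id="published"/>
  </process>
</definitions>
```

Because the notation is a published OMG standard, a process modeled this way is portable between BPMN 2.0-compliant engines rather than being locked to one vendor's tooling.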
Grass Valley offers its Stratus Video Production & Content Management System for collaborative editing and news production. In a newsroom environment, the key to success is how quickly things get done. The GV Stratus toolset simplifies workflows with a combination of intelligent tools and customization. It enables everyone on the team to have access to every clip on the network with the tools required to manage content.
In addition, GV Stratus includes social media management capabilities that allow users to post, track and delete any asset. To adapt to more user-generated content, the XRE Transcoder is built into every GV Render Engine so broadcasters can perform transcoding without buying more hardware.
The software is further enhanced with the addition of Momentum, Grass Valley’s media workflow engine that supports metadata handling. Scalable to any application, Momentum allows production workflows to be automated throughout a media production and distribution operation.
Tedial provides MAM systems that manage long-form content for customers like Fox, HBO and Sony Pictures, and has been a leader in supporting the SMPTE Interoperable Master Format (IMF) standard, which makes it easier to reformat content for different international markets.
Tedial’s Evolution provides multi-site enterprise MAM and business process workflows while extending management functionality for improved integration between archive and workflow engines, reinforcing a collaborative environment. An Object Relational Database provides access to a new set of tools to manage group entities via a multi-level classification schema (collections, albums, series, projects, rights, delivery packages, etc.) based on dynamic, changing situations.
With the Object Relational Database, entities are logged as assets, which serve as a repository for all shared data; an asset can be categorized as a member of multiple entities, according to specific user needs.
The platform includes a Search/Indexing engine for organizing and searching program assets and other object-related entities, with the ability to index very large databases (via shared indexes) and automatically tag descriptive metadata by scoring text against “stop word” lists. The system autocompletes user keyword inputs, generating suggestions as each entry is typed, and offers new ways to browse the MAM by “department” using “Amazon.com-style” facet categories and/or group entities. It can also auto-tag content, relating assets by their most relevant tags.
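The stop-word scoring and prefix autocompletion described above can be sketched in a few lines. This is an illustrative toy, not Tedial's actual algorithm:

```python
# Illustrative sketch: tag descriptive metadata by counting non-stop-words,
# and suggest completions by matching the typed prefix against a vocabulary.
STOP_WORDS = {"the", "a", "an", "of", "in", "on", "and", "to", "for"}

def auto_tags(text: str, top_n: int = 3) -> list:
    """Return the most frequent meaningful words as candidate tags."""
    counts = {}
    for word in text.lower().split():
        word = word.strip(".,;:!?\"'")
        if word and word not in STOP_WORDS:
            counts[word] = counts.get(word, 0) + 1
    # Most frequent first; ties broken alphabetically for stable output.
    return sorted(counts, key=lambda w: (-counts[w], w))[:top_n]

def suggest(prefix: str, vocabulary: list) -> list:
    """Autocomplete: vocabulary terms that start with the typed prefix."""
    p = prefix.lower()
    return [term for term in vocabulary if term.lower().startswith(p)]
```

For example, `auto_tags("The final of the final match in the series of matches")` returns `["final", "match", "matches"]`, and `suggest("se", ["series", "sports", "segment"])` returns `["series", "segment"]`.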
Vidispine's VidiNet, a cloud-based media services platform from Arvato Systems, provides a broad range of media services in an integrated environment. The service is billed on a “pay-as-you-use” basis, whereby users are granted a volume license that is billed per analyzed image or analyzed video minute, as well as by module.
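A per-unit billing model of this kind is straightforward to reason about. The sketch below uses placeholder prices, since Arvato does not publish its exact rate card here:

```python
# Hypothetical rate card: the per-unit prices below are placeholders
# chosen to illustrate usage-based billing, not Arvato's actual rates.
RATES = {"video_minute": 0.05, "image": 0.002}  # price per analyzed unit

def monthly_charge(video_minutes: float, images: int, module_fee: float) -> float:
    """Usage-based bill: per-unit analysis charges plus a flat module fee."""
    usage = video_minutes * RATES["video_minute"] + images * RATES["image"]
    return round(usage + module_fee, 2)
```

With these placeholder rates, analyzing 100 video minutes and 500 images under a 10.00 module fee comes to 16.00 per month; costs scale linearly with analysis volume rather than with installed capacity.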
VidiCore is the media management backend and object repository that forms the basis of Vidispine's MAM solution. Vidispine itself represents a portfolio of tools to create, produce, prepare, manage and monetize media content, including proprietary applications and tightly integrated applications from third parties.
VidiNet users now have access to AI software from DeepVA (Freiburg, Germany), delivered through VidiNet’s Cognitive Services. This interface gives VidiCore customers direct access to cognitive-services software offerings. The integration of DeepVA simplifies media workflows across the content ecosystem and gives users easy access to AI within their MAM environment.
DeepVA is already used by large broadcasters, DAM/MAM providers, streaming services and city archives that produce significant volumes of video and images daily. The software supports AI applications such as visual concept recognition, face recognition, landmark recognition, brand/logo recognition and text recognition.
The technology’s AI tools enrich existing image and video material with metadata that accelerates the documentation and search of media assets. Custom AI models can be trained to individual requirements.
Face and label extraction can automatically build training data from videos and livestreams by linking names inserted in the image (such as lower-third captions) with the corresponding faces and storing them in a dataset. Individual training data is thus generated automatically within seconds, forming the basis for custom AI models and the recognition of media-specific information in image and video material.
Arvato said that thanks to “one-shot learning,” training an AI model usually requires only one image, a huge time saver when building training data.
Within VidiNet, users can access Face Recognition, Custom Faces (individual creation of AI face models), Dataset Creation (automated creation of training data) and Face Indexing models.
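The idea behind one-shot face recognition is that a deep network maps each face image to an embedding vector, and a new face is identified by comparing its embedding against a gallery holding a single reference vector per person. The sketch below assumes the embeddings already exist (the hand-made three-dimensional vectors stand in for real network output) and is not DeepVA's implementation:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(query, gallery, threshold=0.8):
    """Match a face embedding against one-example-per-person references."""
    best_name, best_score = None, threshold
    for name, ref in gallery.items():
        score = cosine(query, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name  # None means "unknown face"

# One reference embedding per person is enough -- the "one shot".
gallery = {"Anchor A": [0.9, 0.1, 0.2], "Reporter B": [0.1, 0.95, 0.1]}
```

A query embedding close to a stored reference is matched to that person, while anything below the similarity threshold is reported as unknown; this is why a single image per identity can suffice once a good embedding network exists.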
So, whether your company pursues a cloud-based or an on-premises approach, audio, video and metadata assets are gold waiting to be repurposed, and should be treated accordingly.