Digital Nirvana Updates MetadataIQ Metadata-Automation Tool For Avid Ecosystem

Digital Nirvana has announced an upgrade to MetadataIQ, its SaaS-based tool that automatically generates speech-to-text and video intelligence metadata, increasing the efficiency of production, preproduction, and live content creation services for Avid PAM/MAM users.

The new version, which will be previewed at the 2022 NAB Show, makes beta-tested video intelligence capabilities commercially available and integrates directly with Avid MediaCentral.

MetadataIQ 4.0 relies on advanced machine learning and high-performance AI capabilities in the cloud (speech to text, facial recognition, object identification, content classification, and more) to create highly accurate metadata more quickly and less expensively than traditional methods. Crucially, MetadataIQ is the only such tool that integrates with Avid today: it not only automatically generates speech-to-text transcripts of incoming feeds (or stored content) in real time, but also parses each transcript by time and indexes it back to the corresponding media in the Avid environment.

Since Digital Nirvana introduced MetadataIQ about a year ago, the primary use case has been generating speech-to-text (STT) transcripts in real time as massive volumes of live streams are ingested, then sending those transcripts, along with their timing information, into the Avid Interplay PAM system. The application’s unique ability to marry real-time transcript generation with real-time indexing in Avid means producers and editors can quickly find relevant media assets for their news stories, accelerating the entire production process.
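To illustrate the kind of time-based indexing described above, here is a minimal sketch that assumes an STT transcript arrives as timestamped segments and is converted into time-coded marker records for a PAM/MAM. The data shapes, the marker format, and all names are hypothetical illustrations, not Digital Nirvana's implementation or Avid's API.

```python
# Illustrative sketch only: convert timestamped speech-to-text segments into
# time-coded marker records keyed to a clip. All structures are hypothetical.
from dataclasses import dataclass

@dataclass
class TranscriptSegment:
    start_sec: float   # offset from the start of the recording, in seconds
    end_sec: float
    text: str

def seconds_to_timecode(offset_sec: float, fps: int = 25) -> str:
    """Convert a second offset into an HH:MM:SS:FF timecode string (non-drop-frame)."""
    total_frames = int(round(offset_sec * fps))
    frames = total_frames % fps
    seconds = (total_frames // fps) % 60
    minutes = (total_frames // (fps * 60)) % 60
    hours = total_frames // (fps * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

def segments_to_markers(clip_id: str, segments: list[TranscriptSegment], fps: int = 25) -> list[dict]:
    """Turn timed transcript segments into marker records aligned to the clip."""
    return [
        {
            "clip_id": clip_id,
            "in": seconds_to_timecode(s.start_sec, fps),
            "out": seconds_to_timecode(s.end_sec, fps),
            "comment": s.text,
        }
        for s in segments
    ]

if __name__ == "__main__":
    demo = [
        TranscriptSegment(0.0, 4.2, "Good evening, here are tonight's headlines."),
        TranscriptSegment(4.2, 9.8, "Crews are on the scene of a downtown fire."),
    ]
    for marker in segments_to_markers("CLIP-1234", demo):
        print(marker)
```

In a production workflow the resulting markers would be pushed to the PAM in real time so editors can search the transcript text and jump straight to the matching timecode in the clip.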

In the new version, MetadataIQ’s transcription and other video intelligence capabilities will move out of the proof-of-concept stage and become commercially available, on the strength of successful beta testing.

Also, instead of sending metadata only to on-prem Avid Interplay implementations, MetadataIQ 4.0 will integrate with Avid’s cloud-based MediaCentral hub, where editors access multiple Avid applications to do their work. Thanks to this cloud integration, editors will no longer be limited to searching one type of metadata at a time, as they are in Avid Interplay; in MediaCentral they will be able to combine searches across multiple forms of metadata. For example, if MetadataIQ generates metadata using OCR, facial recognition, and speech to text, a single search will query all three types of metadata simultaneously, giving editors more precise results even faster.
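As a rough illustration of what combining metadata types in one query can look like, here is a minimal sketch using a toy in-memory index; the record layout, field names, and function are hypothetical and are not MediaCentral's actual search interface.

```python
# Illustrative sketch only: match a search term across several metadata types
# (speech-to-text, OCR, facial recognition) at once and group hits per asset.
from collections import defaultdict

# Each record: (asset_id, metadata_type, timecode, text) -- hypothetical layout
RECORDS = [
    ("ASSET-01", "stt",  "00:00:12:05", "the mayor announced a new transit plan"),
    ("ASSET-01", "ocr",  "00:00:12:05", "CITY HALL PRESS ROOM"),
    ("ASSET-01", "face", "00:00:12:05", "Jane Doe"),
    ("ASSET-02", "stt",  "00:03:41:10", "transit fares will rise next spring"),
]

def combined_search(term: str, records=RECORDS) -> dict[str, list[tuple]]:
    """Return matches grouped by asset, searching every metadata type at once."""
    hits = defaultdict(list)
    for asset_id, md_type, timecode, text in records:
        if term.lower() in text.lower():
            hits[asset_id].append((md_type, timecode, text))
    return dict(hits)

if __name__ == "__main__":
    for asset, matches in combined_search("transit").items():
        print(asset, matches)
```

The point of the example is the grouping step: because all metadata types live in one searchable pool, a single term can surface an asset through its transcript, its on-screen text, or a recognized face, rather than requiring three separate searches.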
