Digital Nirvana’s MetadataIQ SaaS-based Tool Now Supports Avid CTMS APIs

CTMS integration allows Avid users without Interplay in their environments to benefit from MetadataIQ.
Digital Nirvana connects MetadataIQ directly to Avid Media Composer and MCCUX through new API support.
Digital Nirvana has announced that MetadataIQ, its SaaS-based tool that automatically generates speech-to-text and video intelligence metadata, now supports Avid CTMS APIs. As a result, video editors and producers can now use MetadataIQ to extract media directly from Avid Media Composer or Avid MediaCentral Cloud UX (MCCUX) rather than having to connect with Avid Interplay first. This capability lets broadcast networks, postproduction houses, sports organizations, houses of worship, and other Avid users without Interplay in their environments benefit from MetadataIQ.
Now all Avid Media Composer/MCCUX users will be able to extract media directly through MetadataIQ. They will also be able to:
- Ingest different types of metadata, such as speech-to-text, facial recognition, OCR, logos, and objects, each with customizable marker durations and color codes for easy identification of metadata type (see the sketch after this list).
- Submit files without having to create low-res proxies or manually import metadata files into Avid Media Composer/MCCUX.
- Automatically submit media files to Digital Nirvana’s transcription and caption service to receive the highest-quality, human-curated output.
- Submit data from MCCUX into Digital Nirvana’s Trance product to generate transcripts, captions, and translations in-house and publish files in all industry-supported formats.
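As a rough illustration of how such marker ingest could look, here is a minimal Python sketch that posts one timecoded marker to a CTMS-style REST endpoint. The host name, endpoint path, payload fields, and bearer-token auth are assumptions for illustration only, not the documented Avid CTMS contract or MetadataIQ's actual integration.

```python
# Illustrative sketch only: endpoint path, payload fields, and auth scheme are assumptions.
import requests

CTMS_BASE = "https://mcux.example.com/apis/ctms"  # hypothetical MediaCentral host
TOKEN = "..."  # hypothetical bearer token from the platform's auth service

def add_marker(asset_id, start_tc, duration_frames, text, color, marker_type):
    """Attach one metadata marker (e.g. speech-to-text or logo detection) to an asset."""
    payload = {
        "assetId": asset_id,
        "start": start_tc,            # timecode where the marker begins
        "duration": duration_frames,  # customizable marker duration
        "comment": text,              # transcript text, recognized face/logo, OCR string, etc.
        "color": color,               # color code identifying the metadata type
        "type": marker_type,
    }
    resp = requests.post(
        f"{CTMS_BASE}/assets/{asset_id}/markers",  # hypothetical endpoint
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Example: a speech-to-text segment shown as a green marker on the asset timeline
add_marker("asset-1234", "01:00:12:05", 96, "Welcome back to the show", "green", "speech-to-text")
```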
These capabilities will greatly improve the workflow for media companies that use Avid Media Composer or MCCUX to produce content.
For example, media operations can ingest raw camera feeds and clips to create speech-to-text and video intelligence metadata, which editors can consume in real time if required. Editors can easily type a search term within Media Composer or MCCUX, identify the relevant clip, and start creating content.
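To make the search idea concrete, here is a minimal Python sketch of a keyword lookup over speech-to-text markers; the marker structure is purely illustrative and not MetadataIQ's or Avid's actual schema.

```python
# Minimal sketch: find clips whose speech-to-text markers contain a phrase.
# The marker list below is illustrative sample data, not a real schema.
def find_clips(markers, phrase):
    """Return (clip, timecode) pairs whose transcript text mentions the phrase."""
    phrase = phrase.lower()
    return [(m["clip"], m["timecode"]) for m in markers if phrase in m["text"].lower()]

markers = [
    {"clip": "CAM_A_0012", "timecode": "00:04:18:10", "text": "And the crowd goes wild"},
    {"clip": "CAM_B_0007", "timecode": "00:10:02:00", "text": "Post-game interview begins"},
]
print(find_clips(markers, "crowd"))  # [('CAM_A_0012', '00:04:18:10')]
```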
For certain shows (reality, on-street interviews, etc.), the machine-generated or human-curated transcripts can be used in the script-generation process.
The postproduction team can submit files directly from the existing workflow to Digital Nirvana to generate transcripts, closed captions/subtitles, and translations. Then the team can either receive the output as sidecar files or ingest it directly back into Avid MCCUX as timeline markers.
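For the sidecar-file route, transcripts and captions are typically delivered in a standard format such as SRT. The following minimal Python sketch writes illustrative transcript segments to an SRT sidecar; the segment structure is an assumption for illustration, not the actual output schema of Digital Nirvana's service.

```python
# Minimal sketch: writing transcript segments as an SRT sidecar file.
# The 'segments' structure is an assumed example of speech-to-text output.
def to_srt_time(seconds):
    """Format seconds as HH:MM:SS,mmm per the SRT convention."""
    total_ms = int(round(seconds * 1000))
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def write_srt(segments, path):
    """segments: list of dicts with 'start' and 'end' in seconds plus 'text'."""
    with open(path, "w", encoding="utf-8") as f:
        for i, seg in enumerate(segments, start=1):
            f.write(f"{i}\n{to_srt_time(seg['start'])} --> {to_srt_time(seg['end'])}\n{seg['text']}\n\n")

write_srt(
    [{"start": 12.2, "end": 15.8, "text": "Welcome back to the show."}],
    "episode_101_en.srt",  # hypothetical sidecar filename
)
```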
If the postproduction team includes in-house transcribers/captioners/translators, editors can automatically route the media asset from Avid to MetadataIQ to create a low-res proxy, generate speech to text, and present it to the in-house team in Digital Nirvana’s user-friendly Trance interface. There, users get support from artificial intelligence and machine learning for efficient captioning and translation.
With timecoded logo detection metadata, sales teams can get a clearer picture of the total screen presence of each sponsor/advertiser.
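As a simple illustration of how such a report could be derived, the Python sketch below totals on-screen seconds per sponsor from a list of timecoded logo-detection markers; the marker fields, sponsor names, and frame rate are assumptions for illustration only.

```python
# Illustrative sketch: total screen presence per sponsor from logo-detection markers.
# Marker fields are assumptions, not a documented MetadataIQ schema.
from collections import defaultdict

def screen_presence(markers, fps=25):
    """Sum marker durations (in frames) per detected logo and return seconds on screen."""
    totals = defaultdict(int)
    for m in markers:
        totals[m["logo"]] += m["duration_frames"]
    return {logo: frames / fps for logo, frames in totals.items()}

detections = [
    {"logo": "Acme Sports", "start": "01:02:10:00", "duration_frames": 250},
    {"logo": "Acme Sports", "start": "01:15:42:12", "duration_frames": 125},
    {"logo": "Globex",      "start": "01:20:05:00", "duration_frames": 375},
]
print(screen_presence(detections))  # {'Acme Sports': 15.0, 'Globex': 15.0}
```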
For video on demand and content repurposing, the abundant video intelligence metadata helps accurately identify ad spots and supports additional brand/product placement and replacement.