What Are The Long-Term Implications Of AI For Broadcast?
We’ve all witnessed its phenomenal growth recently. The question is: how do we manage the process of adopting and adjusting to AI in the broadcasting industry? This article is more about our approach than specific examples of AI integration; there’s plenty of time for that later.
Writing about AI, it's all too easy to shoot off in multiple directions like a badly run firework display. But it's largely forgivable when that happens because - and this is one of the reasons why understanding AI is so challenging - you often find yourself having to make intellectual connections between concepts that you would never previously have associated.
It's not surprising because AI "learns" by being taught associations. A cow is to a calf what a dog is to a puppy. But they're all animals. So, a cow is an animal, but it's not a tree. But trees and cattle are both life forms. The more associations, and there could be millions or billions, the more powerful the inferences an AI can make. In other words, with more associations, it's more likely to reach a sensible answer and even to appear insightful.
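This kind of analogy-by-association can be sketched in a few lines of code. The idea, loosely in the spirit of word embeddings, is that related concepts sit near each other in a vector space, so "cow is to calf as dog is to ?" becomes simple arithmetic. All the vectors below are invented for illustration; real models learn millions of dimensions from data.

```python
# Toy sketch of analogy via associations. The 3-D "embeddings" are
# hand-crafted: axis 1 ~ species, axis 2 ~ age (adult vs. young),
# axis 3 ~ domesticity. They are NOT real model weights.
vectors = {
    "cow":   (1.0, 1.0, 0.9),
    "calf":  (1.0, 0.0, 0.9),
    "dog":   (2.0, 1.0, 1.0),
    "puppy": (2.0, 0.0, 1.0),
    "tree":  (9.0, 1.0, 0.0),
}

def nearest(target, exclude):
    """Return the word whose vector is closest to `target` (squared Euclidean)."""
    def dist(word):
        return sum((a - b) ** 2 for a, b in zip(vectors[word], target))
    return min((w for w in vectors if w not in exclude), key=dist)

# "cow is to calf as dog is to ?"  ->  dog + (calf - cow)
cow, calf, dog = vectors["cow"], vectors["calf"], vectors["dog"]
query = tuple(d + (c2 - c1) for d, c2, c1 in zip(dog, calf, cow))
print(nearest(query, exclude={"cow", "calf", "dog"}))  # -> puppy
```

With only five words the answer is trivial, but the same arithmetic over billions of learned associations is what lets a model appear insightful.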
Mission-critical
Even though broadcasting is rarely a matter of life or death in the way that flying an aircraft definitely is, if AI is going to be at the core of a broadcasting operation, it has to be able to deal with "mission-critical" decisions. If it's not, a station could go off the air, or a newsreader might unwittingly read a script with inaccurate content and make potentially libelous statements. An extreme but plausible outcome might be that a broadcaster could accidentally start a war.
In our first article, we covered a lot of ground; one key point was that it's a mistake to think of AI as an app or just a piece of software, as in: "That's a really clever app." We need to look beyond apps and remind ourselves that everything new an AI app brings us is a new capability that will spread like wildfire into our daily and, hence, professional lives.
Generative AI models scale new heights almost daily. We all thought DALL·E was amazing (it still is!), but now we have Sora. Think about it: you can now generate cinema-quality video with just a few words. About ten years ago, I recklessly predicted that at some distant point in the future, we would be able to feed a film script into a computer and get a feature film as a result. And now it's happening: instead of sending our document to the printer, we can send it to the cinema.
Maybe that's a bit of an exaggeration - but only for a month or so at the current rate of progress.
An AI Video Codec?
So, while we're all collectively reeling at the implications of that, what comes after it? I'm not even going to guess. But take a slightly sideways look at the potential for generative AI in cinema and television, and it's hard to escape the idea that AI will be involved in the production workflow to the extent that it becomes the next type of video codec.
What does that mean, exactly? Here's one possible answer. If you can create a short but otherwise near-perfect video clip from a few words (like "a smartly-dressed woman walking down a brightly lit urban street in Japan"), imagine what would happen if you gave the AI model the output from a Sony Burano sensor instead of words as a prompt. (In case you didn’t know, the Burano is one of Sony’s flagship cinema cameras.)
With massively more information to work on (i.e. a pixel-based version of the scene you want the AI to resynthesize), you could expect a perfect-looking image based on concepts rather than pixels, which is also independent of frame rate and resolution. This is profound (albeit speculative) stuff. The UK had PAL 625-line television as its standard for around 40 years. Imagine what could happen with AI in a fortieth of that time.
So, how do we cope with this? How do we build a broadcasting future around it?
The Semantic Workflow
First, remember that AI has legal, ethical, and safety implications. Resolving these will take a lot of hard, intellectual work. For example, copyright tends to lag years behind new technology. If it was ill-equipped to deal with Napster, imagine how much copyright could struggle with generative AI.
Next, look at the technology stack for broadcast production and find ways to make it compatible with AI. There isn't room here to go into detail, but at the very minimum, amongst all the existing layers, there has to be a "semantic" layer where concepts can be input as opposed to commands and where generated media is as much part of the workflow as conventional content.
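What might a semantic layer look like in practice? The sketch below is purely illustrative - every class, function, and command name is hypothetical, not a real broadcast API. The point is the shape of the idea: a request expressed as concepts (subject, editorial intent) gets resolved by the layer into the conventional command-level steps the rest of the stack already understands.

```python
# Illustrative sketch of a "semantic layer" for a broadcast stack.
# All names here are hypothetical assumptions, not an existing API.
from dataclasses import dataclass

@dataclass
class SemanticRequest:
    subject: str       # what the shot is about, expressed as a concept
    mood: str          # editorial intent, not a device setting
    duration_s: float  # requested clip length in seconds

def resolve(request: SemanticRequest) -> list[str]:
    """Translate a concept-level request into command-level steps.

    A real semantic layer might invoke a generative model here; this
    stub just maps moods to fixed grading presets for illustration.
    """
    grade = {"urgent": "high-contrast", "calm": "soft"}.get(request.mood, "neutral")
    return [
        f"GENERATE clip subject='{request.subject}' length={request.duration_s}s",
        f"GRADE preset='{grade}'",
        "INSERT INTO playout-queue",
    ]

commands = resolve(SemanticRequest("city street at night", "calm", 8.0))
for command in commands:
    print(command)
```

The design point is the boundary: above the semantic layer you describe what you want; below it, everything remains ordinary, auditable commands and media, so generated content flows through the same workflow as conventional content.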
AI requires a new type of thinking, with no holds barred. The rate of progress is breathtaking and is almost certain to continue. It will seep into every activity and every profession. It has the potential to improve our work, but also to replace our work.
Navigating the next few years will be unlike anything we've had to deal with before. You could argue that it's come at the right time. With products like Apple's Vision Pro giving us new ways to consume content, we also have generative AI that can potentially create much of that content. Nvidia takes this prospect seriously, and it should know, as the largest and most valuable manufacturer of AI chips in the world.
Even though our current infrastructure and broadcast standards will give us some respite from the tsunami that AI undoubtedly is - and even AI-generated media is still media - we have to be ready for drastic change. But if we approach this with optimism and caution, then it might be the most exciting time ever for the broadcast industry.