AI In The Content Lifecycle: Part 5 - Ethical Broadcasting And Regulatory Compliance

Broadcasters and video service providers are looking to AI to police the regulatory and ethical problems it has created, as well as to bear down on some longer-standing challenges. The latter include ensuring that content developed in one country complies with regulations in others.

The idea of AI policing itself has gained greater traction in broadcasting regulation and compliance than in some other sectors. There is a sense of AI chasing its own tail here, but also, more positively, of it helping to address compliance issues that predate AI’s modern incarnation and have in some cases been rumbling on for years.

When it comes to Generative AI, the emphasis has been on creating content artificially, leaving more traditional forms of machine learning to deal with the issues that arise, such as infringement of copyright or of rules designed to protect the rights of human scriptwriters. The settlement of the strike organized by the Writers Guild of America (WGA), the union representing TV and film writers, reached in September 2023, obliged studios and production companies to disclose whether any material given to writers had been generated partially or fully by an AI system. AI systems were also precluded from writing or rewriting so-called literary material. This in turn imposed compliance requirements on originators of content, including broadcasters, where machine learning can itself help ensure the rules are upheld.

That settlement between the studios and the WGA may at least temporarily have upheld the rights of human authors, but another question concerns the legal status of content that is still AI-generated. An attempt to resolve this was made by the US Copyright Office (USCO), with its ruling in March 2023 that Gen AI outputs cannot enjoy the same copyright protection as output from humans.

It was easy to see, though, that this was unsustainable given the increasingly blurred lines between content produced by humans and by algorithms. Experts in the field, such as Alex Connock, Senior Fellow at the University of Oxford and AI media guru, were quick to ridicule the move. He suggested that while the USCO move was well-intentioned, it was based on woolly wishful thinking.

“First of all, within 2024 almost every creative work features the use of AI and many of them generative AI. We all use machine learning-driven tools probably every five minutes of the day, from Google search to Word or Co-Pilot. There will come to be a dividing line between AI outputs and human outputs that is so ill-defined that a hard split between AI-created and human-created content will become entirely impossible to determine. I would say that that point is not in the future - it’s right now.”

The USCO itself has admitted it receives applications from human authors citing an AI system as co-author, immediately raising the question of whether the whole output can be copyrighted under the new rules, or whether, for example, the human author should hold rights to only half of any royalties.

But, as Connock went on to argue, it is impossible to determine the division of labor when a work partly involves Gen AI underpinned by a neural network with 80 billion parameters. As he said, there will be countless TikTok or YouTube videos where Gen AI has been involved in the background to an indeterminate degree.

Indeed, a number of longer-established media outlets have been lured too soon into overreliance on Gen AI for content production, sometimes being caught out and embarrassed in the process. Such cases again underline the growing overlap between content generation and management of AI as it expands, with tension between Gen AI as the creative arm and traditional AI for controlling and monitoring output for compliance with both external and internal regulations or principles.

There is growing overlap also between TV and print media, with the old boundaries between them breaking down as the former publish text-based stories on websites while the latter include video versions of their output. Some, such as Wired, have staked out the ethical high ground with rhetoric about sticking to traditional news values and rules prohibiting the use of Gen AI to produce even parts of stories, “except when the fact that it’s AI-generated is the whole point of the story”. Wired argued that Gen AI was not yet intelligent enough, or capable of the impartiality required, for unsupervised news reporting.

By contrast CNET, Wired’s rival in the technology and science publishing domain, was caught with its hands deep in the trough of Gen AI, as revealed by yet another tech and science magazine, Futurism. This led to CNET publishing corrections in 2023 for 41 of 77 stories generated by AI, and to the independent press regulator Impress updating its Standards Code to address such AI-generated content. Impress implored publishers to be “aware of the use of AI and other technology to create and circulate false content (for example, deepfakes), and exercise human editorial oversight to reduce the risk of publishing such content”.

In practice it will not be possible to exercise such human oversight on content produced lower down the content food chain, as is happening across numerous websites. There is a continuum between topping and tailing a press release, for which traditional AI has long been employed at some media outlets, and producing documentaries on complex issues behind the news.

As AI and Gen AI mature further, they will progress along this continuum and also become capable of flagging potential abuses or inaccuracies in their wake.

There is then the question of transparency: informing readers or viewers clearly of the provenance of stories and, where AI is involved, of its role along the production chain. Many outlets fail to do that, and the most egregious offenders go so far as to fabricate the names and biographies of humans claimed to have written content actually originated by AI.

A more pertinent issue for mainstream companies lies in the latent bias engendered by the Large Language Models (LLMs) underpinning Gen AI. Few if any outlets are impervious to bias, and it may even be part of the editorial or broadcasting strategy, as for example at The Guardian newspaper or Fox News. But Gen AI risks hard-wiring that bias into coverage, or amplifying it, according to a recent academic study of the issue (Fang, X., Che, S., Mao, M. et al. Bias of AI-generated content: an examination of news produced by large language models. Sci Rep 14, 5224 (2024). https://doi.org/10.1038/s41598-024-55686-2).

That study set out to investigate the bias of AI-generated content associated with seven representative LLMs, including ChatGPT and LLaMA. Somewhat controversially, the study collected news articles from The New York Times and Reuters for the purpose, deeming these two outlets to be dedicated to unbiased news. That is itself a statement open to some debate.

Indeed, the website Media Bias Fact Check (mediabiasfactcheck.com) judged The New York Times to be left-center biased politically and noted its emotive use of words in headlines such as “Trump Again Falsely Blames Democrats for His Separation Tactics”.

No one would have queried the headline had the words “again” and “falsely” been omitted. Even including “falsely” would be acceptable if the story backed that up with clear evidence. But including “again”, implying that Trump was a serial liar, has no place in a news story; that judgment should be left to the reader, with such opinionated comment confined to editorial leader columns.

This illustrates the risks inherent even in analysis of LLMs used in Gen AI for news generation. The study did, though, successfully identify fundamental biases in LLMs, all of which exhibited notable discrimination against women and people of color, especially black people. Among those seven LLMs, ChatGPT generated the content with the lowest level of bias, and it was the sole model capable of declining to generate content when fed prompts it considered biased.
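As a purely illustrative sketch of what such declination could look like in an implementation (the study does not describe ChatGPT’s internal mechanism, and the phrase list, stub functions and refusal message below are hypothetical stand-ins for a trained classifier), a generation pipeline can screen prompts before producing any text:

```python
# Hypothetical pre-generation bias gate: screen the prompt first and decline
# rather than generate. The phrase list is an illustrative stand-in for a
# trained bias classifier; call_model() is a placeholder for the real LLM call.

BIASED_FRAMINGS = (
    "women are naturally worse at",
    "those people always",
    "typical of their race",
)

def is_biased(prompt: str) -> bool:
    """Crude stand-in for a bias classifier run over the prompt."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BIASED_FRAMINGS)

def call_model(prompt: str) -> str:
    return f"[generated story for: {prompt}]"  # placeholder for the LLM call

def generate_story(prompt: str) -> str:
    if is_biased(prompt):
        # Decline, rather than hard-wire the prompt's bias into the output.
        return "Declined: the prompt presupposes a biased framing."
    return call_model(prompt)
```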

The case of ChatGPT shows that bias in Gen AI can be mitigated by incorporating its detection into model implementations. In a similar way, there is scope for addressing the growing regulatory compliance burden faced by most sectors, including entertainment and TV broadcasting. The ability of AI to process large amounts of data quickly is already transforming compliance, not always through automated actions but by generating snapshots of relevant documents and matching requirements against evidence of adherence so that recommendations can be made, as sketched below.
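As a minimal sketch of that matching step (the requirement texts, document texts and threshold are illustrative, and a production system would use far richer models and corpora), requirements can be compared against internal documents by text similarity, with weak matches flagged for review:

```python
# Minimal sketch: match regulatory requirements against internal policy
# documents by TF-IDF cosine similarity, recommending review where no
# document covers a requirement. All texts and the threshold are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

requirements = [
    "AI-generated material must be disclosed to the audience.",
    "Human editorial oversight is required before publication.",
]
documents = [
    "All synthetic or AI-assisted segments carry an on-screen disclosure label.",
    "Editors review every automated story before it goes live.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(requirements + documents)
scores = cosine_similarity(matrix[:len(requirements)], matrix[len(requirements):])

THRESHOLD = 0.2  # illustrative; would be tuned on real data
for requirement, row in zip(requirements, scores):
    status = "covered" if row.max() >= THRESHOLD else "REVIEW NEEDED"
    print(f"{status:>13}: {requirement} (best match {row.max():.2f})")
```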

The resulting recommendations can then be transmitted to relevant managers across the organization. AI also makes it easier to monitor compliance on an ongoing basis rather than just periodically, some of which, for live streaming, has to be performed in real time. That, however, is still work in progress, with major publishing sites such as YouTube only just starting to employ Gen AI to expand the range of classifiers their live content moderation systems can be trained on, along the lines of the generic sketch below.
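YouTube’s actual system is proprietary, so the following is only a generic sketch of continuous live-stream monitoring: each segment is scored against a set of moderation classifiers and escalated to a human reviewer when any score crosses its threshold (the labels, thresholds and stubbed classifier are all illustrative):

```python
# Generic sketch of continuous rather than periodic compliance monitoring for
# live streams. classify() is a stub standing in for real multi-label models
# over audio, video and transcript; labels and thresholds are illustrative.
from dataclasses import dataclass
from typing import Dict, Iterable

THRESHOLDS = {"hate_speech": 0.8, "graphic_violence": 0.7, "misinformation": 0.9}

@dataclass
class Segment:
    stream_id: str
    start_sec: float
    transcript: str

def classify(segment: Segment) -> Dict[str, float]:
    # Stand-in: a real system would run trained classifiers here.
    return {label: 0.0 for label in THRESHOLDS}

def monitor(segments: Iterable[Segment]) -> None:
    for seg in segments:
        scores = classify(seg)
        flagged = [label for label, s in scores.items() if s >= THRESHOLDS[label]]
        if flagged:
            # Escalate immediately rather than waiting for a periodic audit.
            print(f"Escalate {seg.stream_id} at {seg.start_sec}s: {flagged}")
```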

Here again content creation intersects with compliance, as producers are now required to disclose when realistic-looking content has been synthetically generated or altered, aiming to ensure viewers can distinguish between real and fake persons, places or events. The idea is that Gen AI would create labels that appear within the video description, as in the hypothetical sketch below.
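A hypothetical sketch of how such a label might be attached to a video’s metadata follows; the field names and label wording are illustrative, not any platform’s actual API:

```python
# Hypothetical sketch: prepend a synthetic-content disclosure to the video
# description when the creator declares realistic content as AI-generated.
# Field names and label wording are illustrative, not a real platform API.
from dataclasses import dataclass

DISCLOSURE = ("Altered or synthetic content: sound or visuals were "
              "significantly edited or digitally generated.")

@dataclass
class VideoMetadata:
    title: str
    description: str

def apply_disclosure(meta: VideoMetadata, declared_synthetic: bool) -> VideoMetadata:
    if declared_synthetic and not meta.description.startswith(DISCLOSURE):
        meta.description = f"{DISCLOSURE}\n\n{meta.description}"
    return meta
```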

Such labelling also raises issues of transparency and is likewise work in progress, as broadcasters navigate the shifting balance between generation and moderation under increasing levels of automation. Ultimately that balance will be best achieved by ensuring human managers retain executive control, so that AI and Gen AI remain agents rather than becoming masters.
