Managing Paradigm Change
When disruptive technologies transform how we do things, the shock to the system can feel sudden and daunting. Managing that change is a little easier when it is viewed through the lens of modular, incremental, and controlled evolution.
It’s always been ironic that a high-tech industry like broadcasting has tended to be conservative (with a small “c”). You only have to look at the BBC’s traditions and folklore to see examples of this. And yet, that same organization has led the way in many directions. Perhaps conservatism and iconoclasm are two faces of the same coin: proposing something radical and then opposing it - or at least moderating it - out of caution and concern.
This is not the overt contradiction it seems. Just because you can launch a rocket without it exploding doesn’t mean it’s ready for commercial passenger flight, any more than a novel transmission, delivery or production technology is ready for the rigors of live broadcast the day after the technical breakthrough.
Good broadcasters demand reliability, robustness, maintainability and flawless performance with virtually zero downtime. None of these are directly compatible with raw innovation.
Even if, as in the majority of cases, a broadcaster isn’t faced with a choice between brand-new, untested technologies, it can sometimes feel that way to an individual organization, especially when it has to contemplate jumping from one technology paradigm to another.
If we take audio as an example, researchers knew many of the mathematical principles of digital audio in the 1930s. Modern sampling came to the fore following the groundwork of Shannon and Nyquist, and as computing power has increased, so has the real-time nature of day-to-day digital audio practice. Even though today’s studios have little or no analog audio, the transition from analog to digital was far from a simple binary switchover. It might sound counterintuitive, but there are many possible intermediate stages between an analog and a digital organization. It’s a good exercise to look at this quite closely, because it can help inform us about how to manage future paradigm changes, like the move from on-premises to the cloud.
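The principle Shannon and Nyquist formalized is simple to state: to capture content up to a given frequency, you must sample at more than twice that frequency. Here is a minimal Python sketch using round, illustrative numbers rather than figures from any particular product:

```python
import numpy as np

# Minimal illustration of the Nyquist criterion: to represent a signal whose
# highest frequency is f_max, the sample rate must exceed 2 * f_max.
f_max_hz = 20_000                     # rough upper limit of human hearing
nyquist_rate_hz = 2 * f_max_hz

for candidate_rate in (32_000, 44_100, 48_000, 96_000):
    verdict = "adequate" if candidate_rate > nyquist_rate_hz else "too low"
    print(f"{candidate_rate} Hz: {verdict} for audio up to {f_max_hz} Hz")

# Ten milliseconds of a 1 kHz tone sampled at 48 kHz: 48 samples per cycle,
# comfortably inside the limit.
sample_rate_hz = 48_000
t = np.arange(0, 0.01, 1 / sample_rate_hz)
samples = np.sin(2 * np.pi * 1_000 * t)
```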
Imagine a world where you had no option but to switch from one paradigm to another in one dramatic - or traumatic - event. For a broadcaster, this might mean closing its analog facilities and moving to a digital equivalent. Of course, that’s already happened in audio and was essentially complete (depending on where you look) as much as twenty years ago. But look closer, and what you find is more of a continuum. How is this even possible?
It’s possible because of modularity, intentional or not, and largely facilitated by converters. We’re talking about audio here, so the converters in question would be analog-to-digital and digital-to-analog. Given that you could start in analog, convert to digital, do whatever you wanted to do in the digital domain (EQ, dynamic range compression, delay, etc.), and then convert back to analog, there was effectively no functional difference (in terms of input and output) between an analog module and a digital one. Within the modules there would likely be a world of difference, but that had no impact on their compatibility with an analog environment. A great example would be MiniDisc machines replacing cart machines in radio studios, or digital reverb replacing spring or plate devices in recording studios.
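One way to picture that interchangeability is as a wrapper: a digital process bracketed by converters presents the same analog input and output as the device it replaces. The following is a minimal Python sketch with invented, illustrative function names, not a model of any real converter hardware:

```python
# Hypothetical sketch of why a converter-wrapped digital module can drop straight
# into an analog chain: the outside world still sees analog in, analog out.

def analog_to_digital(analog_signal, sample_rate_hz=48_000):
    """Stand-in for an A/D converter stage."""
    return {"samples": list(analog_signal), "rate": sample_rate_hz}

def digital_to_analog(digital_signal):
    """Stand-in for a D/A converter stage."""
    return digital_signal["samples"]

def digital_delay(digital_signal, delay_samples):
    """Whatever happens inside the box is invisible from outside: here, a simple delay."""
    return {"samples": [0.0] * delay_samples + digital_signal["samples"],
            "rate": digital_signal["rate"]}

def delay_module(analog_in, delay_samples=48):
    """Analog in, analog out: from the mixing desk's point of view, a straight
    swap for the analog device it replaced."""
    return digital_to_analog(digital_delay(analog_to_digital(analog_in), delay_samples))

# Usage: the calling environment never needs to know the module went digital.
dry = [0.0, 0.5, 1.0, 0.5, 0.0]
wet = delay_module(dry, delay_samples=2)
```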
Ironically, this in-between world was ideal because no one had to worry about sample rates, word clocks, etc. After all, even if all the modules became digital, everything was still connected using analog infrastructure. It wouldn’t make sense for this to continue for long, because each A/D and D/A conversion represented a slight loss in quality, and a missed opportunity for a slick, all-digital ecosystem. Even today, though, a common quick fix in a digital audio environment is to go out to analog when an interfacing issue arises, like a sample rate mismatch or a clocking problem.
All-digital environments (audio and video - and metadata!) are theoretically ideal and, in practice, solve many problems.
We all know the advantages of digital:
- Digital doesn’t degrade
- You can make perfect copies
- You can send digital over a network
- You can share digital files (a consequence of the first three points)
- You can do math on digital. It makes the world of media computable (see the sketch after this list).
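A few lines of Python illustrate the last two points, using tiny made-up values rather than real media: a copy of a digital file is bit-for-bit identical to the original, and because samples are just numbers, processing them is arithmetic.

```python
import hashlib

# Digital media is just numbers, so copies are perfect and processing is arithmetic.
original = bytes([0, 12, 250, 7, 99, 3])       # a tiny stand-in for an audio file
copy = bytes(original)                          # copying loses nothing

# Bit-for-bit identical: the hashes match exactly.
assert hashlib.sha256(copy).digest() == hashlib.sha256(original).digest()

# "Doing math on digital": a 6 dB gain boost is simply multiplying every sample by 2.
samples = [0.10, -0.25, 0.50, -0.05]
boosted = [s * 2.0 for s in samples]
print(boosted)
```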
There’s no going back. The advantages are too great. Apart from those who miss the “mellow” sound of analog, nobody would seriously disagree with the creative and productivity-related advantages of an all-digital broadcasting environment.
But paradigm changes never stop. Thanks to the step-by-step approach we’ve already discussed, the boundary between one paradigm and the next is blurred, and nowhere more so than with networks.
Although virtually everything has been “networked” for years, largely thanks to the internet, the adoption of networks in broadcast and production environments has taken a bit longer. That’s partly because of bandwidth issues, but mainly because of the time it takes to arrive at a standard and equip products to support it.
Standards can feel like they hold things back. They can seem like a snapshot in time, crystallizing the state of the art and impervious to or at least ill-suited to supporting the endless blank canvas of future developments. However, the reality is more nuanced if, indeed, the seismic consequences of standards like HTML can be called nuance.
Network control preceded network transport because connecting devices and controlling them is easier than sending real-time media between locations. A command like “Push this virtual button” is minuscule compared to an IP video stream. But now, we’re well and truly into the IP media paradigm, where a single network backbone can handle all signal categories with equal aplomb.
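To put rough numbers on that difference, here is a back-of-the-envelope sketch; the control message and the video format are illustrative assumptions, not taken from any specific protocol or standard.

```python
# Back-of-the-envelope comparison: a control message versus an uncompressed HD stream.
control_message = b'{"action": "push_button", "button_id": 12}'   # hypothetical command
control_bits = len(control_message) * 8

width, height = 1920, 1080          # 1080p raster
bits_per_pixel = 20                 # 10-bit 4:2:2 averages 20 bits per pixel
frames_per_second = 50
video_bits_per_second = width * height * bits_per_pixel * frames_per_second

print(f"Control message: {control_bits} bits, sent once")
print(f"Video stream:    {video_bits_per_second / 1e9:.2f} Gb/s, continuously")
# Roughly 2 Gb/s of video against a few hundred bits of control data.
```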
Without networks and the internet, there would be no cloud.
It’s easy to forget two simple facts about the nature of the cloud. The first is that it’s not a cloud. The cheery, fluffy cloud symbol that network planners use to mean “this is where the network takes care of it” quite reasonably hides its true nature: a worldwide collection of cables, satellites, routers, servers and processors, together with layers of management and monitoring. A more realistic mental image of the cloud might be an enormous data center full of humming computers. The second fact is that the key to the entire nature of the cloud is virtualization: the idea that physical devices are decoupled from the processes they collectively enable and, thanks to the internet, from geography.
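As a toy illustration of that decoupling (the host names and the job below are entirely hypothetical), the caller asks for a piece of processing and neither knows nor cares which physical machine, in which location, actually runs it.

```python
import random

# Hypothetical sketch of virtualization: the caller describes the work, and some
# scheduler decides which physical machine, in which location, runs it.
AVAILABLE_HOSTS = ["on-prem-rack-3", "cloud-region-a", "cloud-region-b"]   # invented names

def run_job(job_name, payload):
    host = random.choice(AVAILABLE_HOSTS)       # placement is the platform's concern
    return {"host": host, "result": f"{job_name} processed {len(payload)} bytes"}

# The caller sees only the result; the physical location is an implementation detail.
outcome = run_job("loudness_normalize", b"\x00" * 1024)
print(outcome["result"])
```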
We’ve all been using cloud services for decades. If you use Google Mail, you’re using the cloud. But as you’d expect, the demands of digital media are hugely different in scale, and typically in nature, from those of email.
Since the introduction of web-based email, browser technology has grown to include powerful programming languages and surprisingly capable performance; today it even supports real-time media processing, allowing for activities like video editing. Cloud-based editing is a boon for news, sports and any kind of production that needs a rapid turnaround.
If you compare any kind of live production studio with its predecessor from forty years ago, or contrast workflows and working methods across the same period, you’ll recognize many familiar things. But, under the hood, everything’s changed. In every aspect of live broadcasting, digital media, computing and, very soon, AI either have changed or will soon change virtually everything. But does it all happen in an instant? Was there a big switch labelled “Pull me to make everything digital”? No, definitely not. The reality is much more pragmatic than that.

It turns out that the best way to manage massive technological change is to do it in small steps, whenever it makes sense. This approach squares the circle of “How do we cope with massive change in a cautious, conservative way?” The answer is to appreciate the lessons of the past. Make incremental changes and methodically road-test them. Always have a fallback. But also be prepared to say, at some point in the future, “We’re well and truly comfortable with the new paradigm, and there’s no going back.”