What Does Hybrid Really Mean?
In this article we discuss the philosophy of hybrid systems, where assets, software and compute resources are distributed across on-prem and cloud infrastructure.
“Cloud language”—how we talk about the cloud—has all kinds of oddities and anomalies, most of them unhelpful. For example, the term “the cloud” is itself misleading: it’s not actually a cloud at all.
To focus on what “hybrid” really means, we have to look closely at the two things that “hybrid” is not: it’s not the cloud, and it’s not a walled-off environment where no one is plugged into the internet.
Let’s look at the cloud first.
If you’ve never really thought about what the cloud is made of, you’re probably unwittingly closer to the essence of the cloud than if you start delving into the physical infrastructure that supports it. The cloud is all about abstraction, which in practice means you can get stuff done in the cloud without really knowing - or having to care - about how it’s done.
We’re all totally used to this situation. Remember your last email to the electricity company? Where is it now? Nobody knows, but, more importantly, it doesn’t matter. As long as it’s stored somewhere, everything’s OK. Media professionals aren’t used to the concept of “it doesn’t matter”, but, in a sense, that is the whole point of the cloud. The term “the cloud” means that you can throw your stuff over the fence, and it will get taken care of: either processed and sent back to you, stored until you need it again, or delivered to a third party. And that’s it. That’s the strict definition of the cloud.
Beyond that simple—and simplistic—definition of the cloud, there is a whole world of complexity and sophisticated technology, but that doesn’t detract from the elegant idea that the cloud is a service that takes care of stuff for you.
It is a genuinely compelling solution for an ever-increasing range of jobs, tasks, and processes. Impressively, the cloud is now quite capable of handling a wide variety of digital media-related tasks in real time.
So that’s the state of the world now, but what did we have before the cloud? It’s worth mentioning that while the cloud depends on the internet, the internet is not the cloud. Before the cloud, we still had internet-connected workflows, but they were typically point-to-point network connections, where users ran apps that communicated with other users directly rather than through a third party. And it’s important to note that many media facilities didn’t connect to the internet at all. It was seen as too much of a risk - initially from viruses and other malware, and later from more sophisticated attacks that could crash projects and “steal” commercially valuable material.
Security is an all-embracing, ever-present consideration for any kind of internet-connected workflow, and it’s comforting to know that cloud service providers and internet-connected application developers take it very seriously. One well-known company that made a video review platform had as many developers working on security as on every other aspect of the project, and that approach was a significant factor in the company’s multi-billion dollar buyout by a leading creative software vendor.
This preamble sets out the fundamentals that provide the context for hybrid working, which can’t be fully understood without that context - and which still needs quite a lot of explanation and clarification.
It’s tempting to visualize hybrid working as a simple block diagram, with on-prem denoted by a box on the left, cloud as a box on the right, and hybrid as a third box in the middle. As a conceptual proposition, it’s OK, but it doesn’t really tell us much. A better way to look at it might be as a spectrum, with air-gapped computing at one end, fully cloud-native at the other, and hybrid filling up all the space in between. While that schema doesn’t tell us much, either, we can learn from it that there are no sharp boundaries. And we can almost certainly conclude that virtually everything we do - apart from those two extremes - is a hybrid solution.
We will return to this way of thinking in a minute, but first, let’s examine some examples.
Media Asset Management is a field that could have been made for the cloud. A big film or tape archive was traditionally stored in a warehouse or other dedicated building. Digitization has changed this quite a bit, but not entirely, because the digital data is still likely to be stored in a warehouse - one we now call a data center - or in a storage facility belonging to the company that owns the data… or both. MAM is a great example because it demonstrates how the same data can exist simultaneously in multiple forms and in several places.
Imagine a large broadcaster or media company with branches around the world. Media arrives from multiple sources and is digitized and logged at the earliest opportunity - ideally conforming to a company-wide (or, even better, an industry-wide) standard. Workflow designers need to make several decisions, even at this early stage, and these depend on how the media will be used - now and in the future. Does the incoming media need to be stored at the highest quality? (What you might call “master” quality.) Or does it only need to be at “working” quality? Should the workflow require proxies, and if so, should they be generated at both high and low quality? What kind of database is needed? How deep and rich should the metadata be? The better the metadata, the more the workflow can be automated.
Assuming the workflow can link all versions of a clip together, where should each of them be stored?
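To make that idea of linked clip versions concrete, here is a minimal sketch of what such an asset record might look like in code. Everything in it - the field names, the tier labels, the storage locations - is an illustrative assumption rather than any particular MAM’s schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Minimal sketch of an asset record that links every rendition of a clip.
# Field names, tier labels and storage URIs are illustrative assumptions only.

@dataclass
class Rendition:
    tier: str          # e.g. "master", "mezzanine", "proxy"
    codec: str         # e.g. "ProRes 4444", "H.264"
    location: str      # e.g. "onprem://lto/tape-0042" or "s3://bucket/key"
    size_gb: float

@dataclass
class Asset:
    asset_id: str
    title: str
    metadata: Dict[str, str] = field(default_factory=dict)
    renditions: List[Rendition] = field(default_factory=list)

    def locate(self, tier: str) -> List[str]:
        """Return every storage location holding the requested tier."""
        return [r.location for r in self.renditions if r.tier == tier]

# The same clip existing simultaneously in several forms and several places.
clip = Asset(
    asset_id="news-2024-0117",
    title="Harbour flood aerials",
    metadata={"rights": "in-house", "language": "en", "shot_date": "2024-01-17"},
    renditions=[
        Rendition("master", "ProRes 4444", "onprem://lto/tape-0042", 412.0),
        Rendition("proxy", "H.264 540p", "s3://mam-proxies/news-2024-0117.mp4", 1.2),
    ],
)
print(clip.locate("proxy"))
```

The richer the metadata dictionary in a record like this, the more of the downstream workflow can be automated.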
Arguably, master-quality material should be stored on-prem because it is secure and relatively cheap (compared to the cost of uploading vast quantities of enormous files).
However, current and archive material can be extremely valuable, and much of that potential value could be lost if it is anchored in a single place. What’s more, storing media in a single location limits its usability in time-critical situations. It would be much better if versions of the media could also be stored in the cloud for immediate use. Some MAMs scan an entire organization’s storage estates and create proxies to make the whole collection available to an authorized person anywhere in the world.
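As a rough illustration of that scan-and-proxy approach, the sketch below walks a hypothetical on-prem storage tree and uses ffmpeg to create small H.264 proxies that could then be pushed to cloud object storage. The paths, file pattern and encoder settings are assumptions chosen for clarity, not a vendor recipe.

```python
import subprocess
from pathlib import Path

# Illustrative sketch only: walk an on-prem storage tree and create small
# H.264 proxies ready to be uploaded to cloud object storage.
SOURCE_ROOT = Path("/mnt/onprem-media")   # hypothetical on-prem mount
PROXY_ROOT = Path("/mnt/proxies")         # hypothetical proxy staging area

def make_proxy(source: Path) -> Path:
    proxy = PROXY_ROOT / source.relative_to(SOURCE_ROOT).with_suffix(".mp4")
    proxy.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", str(source),
            "-vf", "scale=-2:540",            # 540p viewing proxy
            "-c:v", "libx264", "-crf", "28",
            "-c:a", "aac", "-b:a", "96k",
            str(proxy),
        ],
        check=True,
    )
    return proxy

for clip in SOURCE_ROOT.rglob("*.mxf"):
    make_proxy(clip)
    # In a real system the proxy would now be uploaded and registered in the MAM.
```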
This arrangement is intrinsically hybrid, especially if the original media is stored near-line or offline, perhaps on LTO or some other kind of automatically addressable storage.
This example is, perhaps, the biggest reason why workflows will always tend to be hybrid. While it’s true that there’s no technical reason why even master-quality material should not be stored online, cost is a factor. And while cloud security is generally second to none, there is something about storing valuable material on-prem that, to some people, makes it seem safer. On that note, on-prem material is only secure if it is protected against physical theft, floods, malevolent acts and so on - so it’s not that hard to argue that it might actually be safer in the cloud! But it’s undoubtedly good to have these options.
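To see why cost pulls master-quality material toward on-prem storage, a back-of-envelope calculation helps. Every figure below - archive size, per-gigabyte prices - is a hypothetical assumption used purely to illustrate the arithmetic; real tariffs vary widely.

```python
# Back-of-envelope comparison of keeping masters on-prem versus in cloud
# object storage. All prices and sizes are hypothetical assumptions.

archive_tb = 500                      # assumed size of the master archive
gb = archive_tb * 1024

cloud_storage_per_gb_month = 0.02     # assumed object-storage rate ($/GB/month)
cloud_egress_per_gb = 0.09            # assumed data-transfer-out rate ($/GB)
onprem_per_gb_month = 0.01            # assumed amortized LTO/nearline cost

monthly_cloud = gb * cloud_storage_per_gb_month
monthly_onprem = gb * onprem_per_gb_month
one_restore = gb * 0.10 * cloud_egress_per_gb   # pulling back 10% of the archive

print(f"Cloud storage:  ${monthly_cloud:,.0f}/month")
print(f"On-prem (est.): ${monthly_onprem:,.0f}/month")
print(f"One-off egress to restore 10% of the archive: ${one_restore:,.0f}")
```

The point is not the specific numbers but the shape of the trade-off: storage costs recur monthly, while egress charges arrive exactly when the material is most urgently needed.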
Ironically, some applications explicitly designed for the cloud can now be run on your own premises. In a sense, building a private cloud has always been possible. AWS and Azure are Amazon’s and Microsoft’s private clouds, although that’s probably stretching a point too far. But there’s nothing to stop you from building your own cloud facility that’s entirely under your control. The upside is that it’s yours to do whatever you like with, you won’t be weighed down by multiple requests from third parties, and security is entirely under your control. The downside is that you’ll be responsible for maintenance and operation - things you can just “assume” if you outsource to a cloud service provider.
If you want a smaller setup—one that might not qualify as a “cloud”—you can still choose applications that behave as if they’re in the cloud when they’re actually running on your own premises. Some even include cloud-native tools like Kubernetes to help you manage scalability and software deployment.
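As a flavour of what running cloud-native tooling on your own hardware can look like, here is a minimal sketch that uses the Kubernetes Python client to scale a hypothetical on-prem transcoding service. The cluster, the “media” namespace and the “transcoder” deployment are all assumptions for illustration, not part of any product.

```python
# Minimal sketch: scaling an on-prem, cloud-style workload with Kubernetes.
# Assumes a local cluster and an existing deployment called "transcoder"
# in a "media" namespace - both hypothetical examples.
from kubernetes import client, config

config.load_kube_config()             # reads your local kubeconfig
apps = client.AppsV1Api()

# Scale the hypothetical transcoding service to five replicas for a busy period.
apps.patch_namespaced_deployment_scale(
    name="transcoder",
    namespace="media",
    body={"spec": {"replicas": 5}},
)

# Confirm what is now running in the namespace.
for d in apps.list_namespaced_deployment(namespace="media").items:
    print(d.metadata.name, d.spec.replicas)
```

The same few lines work whether the cluster lives in a public cloud region or in a rack down the corridor, which is precisely what makes this style of deployment attractive for hybrid setups.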
This article aims to show that there is no single, precise definition of hybrid working. The picture will remain blurred for the foreseeable future, as new communications and network topologies become available and new ways to manage, automate, and optimize resources emerge thanks to AI.
Here’s just a single, eye-opening example of how the cloud is moving closer to you at the same time as we are moving closer to it.
We’ve looked at the advantages of on-prem working: immediacy, security, speed, low cost, and so on. But here’s another option: Edge computing, which essentially outsources your on-prem computing to the cloud while keeping it physically local. Some cloud suppliers (including mobile phone and data providers) offer Edge services that place powerful cloud computing as close as possible to your physical location. It’s too early to be definitive about the cost of these facilities, but as they roll out, that will become clearer. Vodafone, for example, has teamed up with AWS to provide 5G Edge computing services to clients that need low latency cloud responses. While more traditional on-prem/hybrid/cloud relationships will likely persist, Edge computing is an example of how cloud topologies can adapt and mutate to meet specific requirements and add more tools to the mix.
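To make “low latency” tangible, the sketch below times TCP connections to two placeholder endpoints - one standing in for a nearby edge zone, one for a distant cloud region. The hostnames are illustrative assumptions, not real services.

```python
# Illustrative sketch: comparing round-trip connection time to a nearby edge
# zone versus a distant cloud region. Both hostnames are placeholders.
import socket
import time

ENDPOINTS = {
    "edge zone (placeholder)": ("edge.example.net", 443),
    "distant region (placeholder)": ("region.example.net", 443),
}

def tcp_rtt_ms(host: str, port: int, attempts: int = 5) -> float:
    """Average time to open a TCP connection, as a rough latency proxy."""
    total = 0.0
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            pass
        total += time.perf_counter() - start
    return (total / attempts) * 1000

for label, (host, port) in ENDPOINTS.items():
    try:
        print(f"{label}: {tcp_rtt_ms(host, port):.1f} ms")
    except OSError as err:
        print(f"{label}: unreachable ({err})")
```

Run against real endpoints, a test like this is what reveals whether an Edge zone actually delivers the single-digit-millisecond responses that interactive media workflows need.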
The rate of technology change is not always matched by the adoption rate. Committing your valuable resources to a third-party cloud is always likely to be a leap of faith. But as we all become more fluent in cloud topologies and as the technology matures and incrementally proves itself, the cloud is likely to become the default resource for broadcasters, and that includes every hybrid solution imaginable.