Differentiating Between Cloud And Virtualization

There’s usually a bit of confusion when people talk about going to the cloud vs. virtualizing. While most cloud providers use virtual machine configurations to optimize their offerings, that’s not the same thing.

An interesting historical note: the formal basis for virtualizing a computer was first laid out in 1974 by Gerald J. Popek and Robert P. Goldberg (computer scientists) in their article "Formal Requirements for Virtualizable Third Generation Architectures". However, I digress. As Moore's Law predicted, computers and servers have become far more powerful, and the programs running on them have demanded correspondingly more resources. This put a huge burden on equipment rooms: each time a new service or application was added, it required a new server. As the broadcast industry migrated to computer-based systems and services, lots of computers started appearing.

Along Comes Virtualization 21st Century Style

Using the multiple processors and cores available in modern computers, we now have the ability to partition a server, assigning segments of the available resources to independent operating systems running different applications in each segment, hence virtualizing the machine. The virtualized machine is still on premises; the difference is that one physical server can now host multiple virtual servers. This greatly reduces the rack real estate, power and HVAC needed in an equipment room for a media server farm. Of course, the broadcast vendors needed versions of their products that would run in a virtual environment.

There are a number of benefits to running virtual machines. The most obvious is scaling. If an application running on one VM instance can handle a certain number of users, adding users in the physical world requires another machine; in the virtual world, it is just a matter of setting up another instance on the same server. There is no need to purchase a new server or find space for it. Another benefit of virtual machines (VMs) is remote access: creating remote gateways and VPNs using VMs is more efficient and allows more users remote access. All of this is still on premises in the main equipment room, and the software licenses are much the same as regular individual server licenses. Virtualizing the data center is not the cloud.
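The consolidation arithmetic behind that scaling argument is simple to sketch. The core counts, memory sizes and per-VM allocations below are purely illustrative assumptions, not any vendor's specification:

```python
# Hypothetical sizing sketch: how many VM instances fit on one physical host?
# All figures below are illustrative assumptions, not vendor specifications.

HOST_CORES = 32         # physical cores in the server
HOST_RAM_GB = 256       # installed memory
VM_CORES = 4            # cores assigned to each virtual machine
VM_RAM_GB = 16          # memory assigned to each virtual machine
HYPERVISOR_RESERVE = 2  # cores held back for the hypervisor itself

def vms_per_host(host_cores, host_ram_gb, vm_cores, vm_ram_gb, reserve_cores):
    """Return how many identically sized VMs fit, limited by CPU or RAM."""
    by_cpu = (host_cores - reserve_cores) // vm_cores
    by_ram = host_ram_gb // vm_ram_gb
    return min(by_cpu, by_ram)

count = vms_per_host(HOST_CORES, HOST_RAM_GB, VM_CORES, VM_RAM_GB, HYPERVISOR_RESERVE)
print(f"{count} virtual servers on one physical host")  # 7: CPU is the limit here
```

Seven applications that once occupied seven boxes in the rack now share one, and whichever resource runs out first (CPU in this sketch) sets the ceiling.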

Moving to the Cloud

OK, so as we have discussed in many articles, the cloud is essentially a remote data center operated by an outside service provider, enabling clients to move their computer/server operations, i.e. applications, services and storage, off-site to someone else's facility. This saves physical space, environmental costs and hardware maintenance. All of it is accessed either via a browser or a local application that integrates with the cloud service. One important note: most cloud offerings are accessed through an open Internet connection.

An interesting note is that the cloud provider is more than likely running virtual machines in its own facility. This allows multiple clients on a single server, which is more efficient to manage and more profitable for the cloud provider. Using a cloud version of an application lets you set up additional instances, or request more CPU or memory, for only the time they are needed; you purchase nothing and simply rent.

Cloud software is typically accessed either via VPN, with some of the application on a local desktop or server, or via a browser interface. This changes the configuration of the application from the on-premises version to a cloud version. One of the interesting challenges is that not all software is VM-ready or VM-compatible, and products have different VM versions; add the cloud to the mix and things get complicated.

One of the changes that has come about with cloud offerings is that many broadcast vendors who had already moved their products to software-only with site licenses are now moving from an on-premises installed software version to a cloud-based subscription model. This changes a lot of things. On the plus side, the vendor maintains and manages the updates and version changes. However, instead of a capital expense, with the software used for as long as you can keep it running, it becomes a subscription with a monthly fee based on a long-term contract. This becomes an annuity for the vendor, and it is difficult to change vendors since all your content is in their cloud. Interoperability between cloud-based products is an ongoing work in progress.
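The capex-versus-subscription trade-off can be sketched with a simple running total. Every figure below is a hypothetical assumption for illustration, not any vendor's actual pricing:

```python
# Hypothetical cost comparison: perpetual on-premises license vs. cloud subscription.
# All figures are illustrative assumptions, not real vendor pricing.

CAPEX_LICENSE = 24000       # one-time perpetual license plus server purchase
ANNUAL_MAINTENANCE = 2400   # yearly support and hardware upkeep, on-premises
MONTHLY_SUBSCRIPTION = 900  # cloud subscription fee per month

def cumulative_onprem(years):
    """Total spent on the capital-purchase option after N years."""
    return CAPEX_LICENSE + ANNUAL_MAINTENANCE * years

def cumulative_cloud(years):
    """Total spent on the subscription option after N years."""
    return MONTHLY_SUBSCRIPTION * 12 * years

# First year in which the subscription's running total overtakes the purchase.
breakeven = next(y for y in range(1, 21)
                 if cumulative_cloud(y) > cumulative_onprem(y))
print(f"Subscription costs more from year {breakeven} onward")  # year 3 here
```

With these assumed numbers the subscription is cheaper up front but overtakes the perpetual license in year three, which is exactly why long-term contracts and content lock-in matter when evaluating the switch.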

In many cases, the user interface of the cloud version differs from the desktop version, and in some instances the feature sets differ as well.

Virtualizing a computer brings a level of efficiency and scalability; migrating to the cloud reduces the amount of hardware on site and changes capital investment into operating expense. Whether virtualizing software or moving it to the cloud, there are lots of considerations.
