Secure IP Infrastructures For Broadcasters - Part 3 - Secure Virtualization
In the previous articles in this series we looked at advanced server security and out-of-band monitoring and control, especially the security validation of peripheral device firmware. In this article, we investigate virtualization and its benefits for building secure broadcast infrastructures.
The cyber security landscape has changed beyond all recognition over the past twenty years. Back then, “script-kiddies” were responsible for hacking systems, and a large event would hit the news headlines once a year with a few thousand data records compromised. Today, the cyber security landscape is very different.
Millions of records are now at risk daily. The adversary has changed completely, with nation states and political activists attacking for ideological reasons. And cybercrime is now big business, with some estimates putting its value at one trillion dollars. This demonstrates a lot of determination and resource on the part of the attacker.
In the past, firewalls were the first and most effective go-to level of defense. High-value and critical resources were enclosed within a secure firewall zone and we were relatively sure our systems were safe. But now, in a highly connected world, it’s difficult to be certain firewalls will protect our systems completely.
Business-Wide Responsibility
More than ever, security must be considered the responsibility of the whole business and broadcast facility. It must be built into the core of everything we do. This is not just a challenge IT can solve, but a business-wide initiative. It’s about building secure applications, building the silicon root of trust into firmware and hardware, establishing trusted procurement paths, and understanding how we protect our data.
The primary benefit of virtualization is to make better and more efficient use of servers by allowing multiple operating systems to share the same physical hardware. For many user applications, the operating system spends a great deal of its time waiting for external events to occur. Disk reads and writes, keyboard and mouse actions, and network interface inputs and outputs are all relatively low-frequency events compared to the speed of the processor. Leaving the CPU idle while it waits is a highly inefficient use of expensive processor time.
But as we share the CPU, by implication we must share all the server’s resources too, including memory, disk drives, network interface cards, and graphics cards. And the fundamental challenge we face is that the original x86 architecture was never designed to be operated natively in a virtualized mode.
Hypervisor Virtualization
The software providing the virtualization service is called the hypervisor and runs on a host server. The component that allocates machine resources is called the VMM (Virtual Machine Monitor), and each virtual machine is referred to as a guest.
There are two types of virtualization topology: bare-metal and hosted. Servers running multiple virtual machines in a data center tend to use bare-metal hypervisors, while a hypervisor running within a desktop computer’s existing operating system is called hosted. For most data center broadcast infrastructures, the bare-metal approach (also called native) is the hypervisor of choice. It sits directly between the host hardware and the guest operating systems.
Fig 1 – Two types of virtualization are available. The hypervisor can either run on the bare-metal server or on an existing OS.
When a user program wants to read a file from a SATA-connected disk drive, it invokes a system call in the operating system (OS). This in turn runs some low-level instructions to talk directly to the disk drive through the CPU’s input/output bus. Without virtualization, that is, with just one OS running on a server, the OS has direct access to the hardware and will execute the necessary instructions. However, when multiple OSs are running on a virtualized server, they no longer have direct access to the hardware. Instead, the hypervisor detects that they are trying to access the hardware and intervenes. But critically, the OS doesn’t know the hypervisor has intervened; as far as each OS is concerned, it is running on its own server.
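As a rough sketch of this trap-and-emulate behavior (the instruction names, guest IDs and disk contents below are all hypothetical; real hypervisors trap privileged CPU instructions, not strings):

```python
# A toy model of trap-and-emulate I/O virtualization. The instruction names,
# guest IDs and disk contents are hypothetical; real hypervisors work at the
# level of privileged CPU instructions.

class Hypervisor:
    """Intervenes whenever a guest tries to touch real hardware."""

    PRIVILEGED = {"DISK_READ", "DISK_WRITE", "NIC_SEND"}

    def __init__(self):
        # One emulated (virtual) disk per guest; in reality this is a file
        # on the host's storage, as described later in the article.
        self.virtual_disks = {"guest-a": b"media asset 1", "guest-b": b"playout log"}

    def execute(self, guest_id, instruction):
        if instruction in self.PRIVILEGED:
            # Trap: the guest never reaches the physical device.
            return self._emulate(guest_id, instruction)
        # Non-privileged instructions run directly on the CPU at full speed.
        return "executed natively"

    def _emulate(self, guest_id, instruction):
        if instruction == "DISK_READ":
            # The guest believes it read its own SATA drive; it actually
            # received data from the hypervisor's proxy of that drive.
            return self.virtual_disks[guest_id]
        raise NotImplementedError(instruction)

hv = Hypervisor()
print(hv.execute("guest-a", "ADD"))        # runs natively, no intervention
print(hv.execute("guest-a", "DISK_READ"))  # trapped and emulated
```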
Before 2005, hypervisors used a method called binary translation: they read through the executable file of the running program on the guest OS, determined which instructions were virtualization-critical, and replaced them with the hypervisor’s code to give the desired intervention. Although this method worked, binary translation could carry a significant performance overhead.
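The idea can be illustrated with a toy translator that scans a guest’s instruction stream and rewrites the virtualization-critical instructions before they run. The mnemonics IN, OUT, CLI and STI are real x86 I/O and interrupt-control instructions, but everything else here is a simplification:

```python
# A toy illustration of binary translation. Real binary translators rewrite
# x86 machine code, not text mnemonics.

VIRTUALIZATION_CRITICAL = {"IN", "OUT", "CLI", "STI"}  # I/O and interrupt control

def binary_translate(guest_code):
    """Replace each critical instruction with a call into the hypervisor."""
    translated = []
    for instruction in guest_code:
        mnemonic = instruction.split()[0]
        if mnemonic in VIRTUALIZATION_CRITICAL:
            # Divert to hypervisor code that emulates the instruction.
            translated.append(f"CALL hypervisor_emulate('{instruction}')")
        else:
            translated.append(instruction)  # safe to run unchanged
    return translated

guest_code = ["MOV AX, 1", "OUT 0x3F8, AX", "ADD AX, 2"]
for line in binary_translate(guest_code):
    print(line)
```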
Hardware Virtualization
From 2005, Intel and AMD both started to build hardware-accelerated virtualization instructions into their CPUs. These detect code that is trying to access a peripheral device, such as a network interface card, and provide a hook back into the hypervisor so it can take over. This is the modern method of providing virtualization on servers, but it requires CPUs that have the feature built into them.
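On a Linux host you can check whether the CPU advertises these extensions: the flag vmx in /proc/cpuinfo indicates Intel VT-x, and svm indicates AMD-V. A minimal check, assuming a Linux server:

```python
# Check whether the host CPU advertises hardware virtualization support.
# Linux-specific: reads the CPU feature flags from /proc/cpuinfo.

def has_hw_virtualization(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                if "vmx" in flags:   # Intel VT-x
                    return "Intel VT-x"
                if "svm" in flags:   # AMD-V
                    return "AMD-V"
    return None

support = has_hw_virtualization()
print(support or "No hardware virtualization support detected")
```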
With both methods, from the point of view of the OS and the program it is executing, it has no idea a third party has diverted control away from the peripheral device and is running a proxy instead; this is often referred to as I/O virtualization. And this is one reason why virtualization is so secure. Combined with memory, network interface, and CPU virtualization, the hypervisor has a trusted level of control and monitoring. If a guest OS suffers a security breach, then not only can it be disabled, but a snapshot of its resources can be taken for later forensic analysis.
The hypervisor constantly cycles between the guest OSs to make the most efficient use of the server’s resources. Although physical devices such as memory and hard disk drives are shared across multiple guest OSs, each OS runs in its own shell and cannot communicate or exchange data with the others through these shared resources.
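Real hypervisor schedulers weigh priorities, I/O waits and CPU pinning, but the cycling can be pictured as a simple round-robin over the guests, each receiving a slice of CPU time (a toy sketch with hypothetical guest names):

```python
# A toy round-robin scheduler cycling CPU time between guest OSs.
# Real hypervisor schedulers also weigh priorities, I/O waits and CPU pinning.

from itertools import cycle, islice

guests = ["playout-server", "media-asset-db", "graphics-engine"]

def schedule(guests, time_slices):
    """Yield (slice_number, guest) pairs, giving each guest a turn in order."""
    for n, guest in enumerate(islice(cycle(guests), time_slices)):
        yield n, guest

for n, guest in schedule(guests, 6):
    print(f"slice {n}: running {guest}")
```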
Memory is divided into three categories: virtual, physical, and machine. In this instance, the physical memory is an abstraction that gives the guest OS the illusion it is running on the machine directly. The hypervisor translates between the physical and machine memory so that it alone polices access for each guest OS. Every guest OS running on the virtualized server uses a zero-based address space, so each OS thinks it’s running on the hardware memory from address zero. It’s not; it’s using whatever area of machine memory the hypervisor has allocated.
Fig 2 – Memory is abstracted away one layer to allow the hypervisor to check for attacks, while at the same time making each OS think it’s the only system running on the server.
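To make the translation chain concrete, here is a toy model of the two lookups, with the hypervisor refusing and logging any access outside the guest’s allocation. The page numbers and guest names are invented; real hypervisors perform this translation in hardware using extended page tables such as Intel EPT or AMD RVI:

```python
# A toy model of the virtual -> physical -> machine translation chain.
# Page numbers and guest names are hypothetical; real hypervisors use
# hardware-assisted page tables (e.g. Intel EPT / AMD RVI) for this.

import logging

logging.basicConfig(format="%(name)s: %(message)s")
log = logging.getLogger("hypervisor")

# Guest page table: guest-virtual page -> guest-"physical" page.
# Every guest sees a zero-based address space, exactly as described above.
guest_page_table = {0: 0, 1: 1, 2: 2}

# Hypervisor's table: guest-"physical" page -> real machine page.
# Only the hypervisor knows where the guest really lives in machine memory.
machine_map = {"guest-a": {0: 512, 1: 513, 2: 514}}

def translate(guest_id, virtual_page):
    physical_page = guest_page_table[virtual_page]   # guest's own lookup
    allocation = machine_map[guest_id]
    if physical_page not in allocation:
        # Access outside the guest's allocation: deny, log, raise an alarm.
        log.warning("guest %s denied access to page %d", guest_id, physical_page)
        raise MemoryError("access violation")
    return allocation[physical_page]                 # hypervisor's lookup

print(translate("guest-a", 0))     # -> machine page 512, permitted

guest_page_table[3] = 99           # a compromised guest forges a mapping
try:
    translate("guest-a", 3)        # the hypervisor detects and blocks it
except MemoryError as err:
    print("blocked:", err)
```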
If a guest OS is compromised and starts trying to find other guests within the virtualized server, by attempting memory buffer overflows or triggering segmentation faults, the hypervisor will detect this and stop it from accessing memory it isn’t authorized to use. Importantly, it will also log the event and send an alarm to the IT administrator to investigate.
Detecting and logging events is a core part of security, as there are two aspects to it: first, the detection itself; and second, the forensic analysis if a breach takes place. It’s great news to be able to stop an attack, but it’s equally important to know how it occurred and whether there is still malicious code lurking somewhere in the system. Logging helps enormously with this.
Virtual Drives
Although guest OS’s think they have a disk drive to themselves, in reality, their disk drive is a file on the server’s disc or connected storage device. If the server uses a disk that is 8TB and each OS has 1TB allocated to it, each 1TB allocation is in effect just a file giving the impression of a complete disk. When a guest OS accesses a disk drive, the hypervisor intervenes and first checks to confirm it has access to that virtual drive. If it does not then an exception is created back to the guest OS, an entry is made in the logging system and the IT administrator is notified.
As the virtual drive is a file, it can be transferred to another area if forensic analysis is required. This is also the principle behind snapshots, and the reason guest OSs can be moved between physical servers. As well as transferring the virtual disk, the hypervisor takes a copy of the CPU state, input/output state, and other connected peripherals and system configurations.
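Because the disk, CPU state and device state are all data the hypervisor holds, a snapshot reduces to copying them together. A much-simplified sketch, with hypothetical state fields:

```python
# A much-simplified snapshot: copy the virtual disk file plus the guest's
# CPU and device state so the machine can be analyzed or resumed elsewhere.

import json
import shutil
import time

def take_snapshot(guest_id, disk_file, cpu_state, io_state):
    stamp = int(time.time())
    # 1. Copy the virtual disk (it is only a file on the host).
    shutil.copyfile(disk_file, f"{guest_id}-{stamp}.img")
    # 2. Save CPU registers and I/O/device state alongside it.
    with open(f"{guest_id}-{stamp}.state", "w") as f:
        json.dump({"cpu": cpu_state, "io": io_state}, f)
    return stamp

with open("guest-a.img", "wb") as f:        # tiny demo "virtual disk"
    f.write(b"boot sector...")

stamp = take_snapshot(
    "guest-a", "guest-a.img",
    cpu_state={"rip": "0x401000", "rsp": "0x7ffe0000"},
    io_state={"nic": "up", "sata": "idle"},
)
print(f"snapshot {stamp} written")
```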
Network interface cards can also be virtualized. A virtual network adapter is created, and multiple guest OSs can access it, each thinking it is the only OS with access to the NIC. Multiple NICs can be combined to aggregate the link, making each OS think it has a high-speed connection. Not only does this provide better redundancy, it also gives much higher connection speeds. The hypervisor has abstracted the physical hardware away from the guest OSs and, in doing so, has added another level of security.
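A toy sketch of link aggregation through a virtual adapter follows; the device names are hypothetical, and real hypervisors implement this with a virtual switch and NIC teaming or bonding:

```python
# A toy virtual network adapter: several guests share a pool of physical
# NICs, and frames are spread across the pool for speed and redundancy.

from itertools import cycle

class VirtualNIC:
    def __init__(self, physical_nics):
        self.pool = list(physical_nics)
        self._next = cycle(self.pool)

    def fail(self, nic):
        """Drop a failed NIC from the pool; traffic continues on the rest."""
        self.pool.remove(nic)
        self._next = cycle(self.pool)

    def send(self, guest_id, frame):
        nic = next(self._next)   # aggregate: spread frames across the link
        print(f"{guest_id}: frame of {len(frame)} bytes out via {nic}")

vnic = VirtualNIC(["eth0", "eth1"])
vnic.send("guest-a", b"\x00" * 1500)   # out via eth0
vnic.send("guest-b", b"\x00" * 1500)   # out via eth1
vnic.fail("eth1")                      # redundancy: eth0 carries on alone
vnic.send("guest-a", b"\x00" * 1500)   # out via eth0
```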
Improved NIC Security
The virtualized NIC is a software abstraction of the underlying hardware, so extra levels of security can be added to it. For example, it can be enabled to provide VLAN layer 2 validation and firewall protection. IP access to the guest OSs can be limited to specific protocols such as HTTPS, thus preventing an external attacker from trying to run TELNET or some other control software.
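Because the vNIC is software, policy checks can sit directly in the packet path before anything reaches a guest. A toy ingress filter that admits only HTTPS (TCP port 443) and drops and logs everything else, such as TELNET (TCP port 23), might look like this:

```python
# A toy per-guest ingress filter on the virtual NIC: only HTTPS (TCP/443)
# is admitted; everything else, such as TELNET (TCP/23), is dropped and logged.

import logging

logging.basicConfig(format="%(name)s: %(message)s")
log = logging.getLogger("vnic-firewall")

ALLOWED = {("tcp", 443)}   # HTTPS only

def admit(guest_id, protocol, dst_port):
    if (protocol, dst_port) in ALLOWED:
        return True
    log.warning("dropped %s/%d to %s", protocol, dst_port, guest_id)
    return False

print(admit("guest-a", "tcp", 443))   # True  - HTTPS allowed through
print(admit("guest-a", "tcp", 23))    # False - TELNET dropped and logged
```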
As well as providing more efficient use of the server’s resources, the hypervisor provides an unprecedented level of security between guest OSs and to the connected world. It provides layers of monitoring, logging, and detection so that if a guest OS does become compromised, the IT administrators can be notified quickly without jeopardizing the rest of the infrastructure.