Scalable Dynamic Software For Broadcasters: Part 11 - Container Security
As broadcasters continue to transition to IP, the challenges of security are never far away. But security isn’t the responsibility of one person or department; it is the responsibility of everybody in the organization.
Taking the Zero Trust approach to security goes a long way to keeping systems secure. Instead of using the trusted network approach, where anybody within the network is trusted, Zero Trust assumes every transaction is a potential security breach and must be verified before the process can continue.
The traditional trusted network approach was fundamentally flawed: anybody who gained access to the network was trusted until they logged out again. This had potentially massive implications for the broadcaster, as a hostile actor could lie in wait within a network for days, weeks, or even months without being detected, silently gaining access to a whole host of user credentials and data, often with catastrophic consequences. Keeping them out was difficult, and detecting them once they were inside the network was almost impossible.
Microservice and container architectures use the internet model for data exchange, control, and issuing of commands. This not only provides incredible flexibility, as anybody with an internet browser can operate the system, but it also allows broadcasters to take advantage of the innovation in other industries and operate in datacenter and cloud environments, thus delivering exceptional resilience and scalability. However, one of the downsides of using this technology is that broadcasters must take the same security measures as high-end enterprise datacenters.
Using the Zero Trust model requires validation for every user at every data or control point within a system. Once a user is logged in, the validation doesn’t stop; it continues throughout the session. For example, if a user creates a transcode job, their login credentials are first verified by an Active Directory type service, which in turn requests an OAUTH2 token from the OAUTH2 server. When the user executes the process, the appropriate microservice receives the OAUTH2 token and checks it against the OAUTH2 validation server; only if it is authorized can the microservice continue.
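As a rough illustration of the microservice-side check, the Python sketch below asks the OAUTH2 validation server whether a presented token is still active and carries the right permissions before a transcode job is started. It assumes the server exposes a standard token introspection endpoint (RFC 7662); the URL, client credentials and scope name are illustrative assumptions, not details taken from this article.

```python
"""Minimal sketch: a microservice validating an OAUTH2 token via RFC 7662
token introspection before running a transcode job. Endpoint, credentials
and scope are hypothetical."""
import requests

INTROSPECTION_URL = "https://auth.example.broadcaster.internal/oauth2/introspect"  # hypothetical
CLIENT_ID = "transcode-service"   # hypothetical service credentials
CLIENT_SECRET = "change-me"

def token_allows_transcode(token: str) -> bool:
    # Ask the OAUTH2 validation server whether the token is still active.
    resp = requests.post(
        INTROSPECTION_URL,
        data={"token": token},
        auth=(CLIENT_ID, CLIENT_SECRET),
        timeout=5,
    )
    resp.raise_for_status()
    claims = resp.json()
    # Reject inactive (expired or revoked) tokens and tokens without the
    # required right - Zero Trust means every request is re-verified,
    # not just the initial login.
    return claims.get("active", False) and "transcode:create" in claims.get("scope", "").split()

def start_transcode_job(token: str, source: str, profile: str) -> None:
    if not token_allows_transcode(token):
        raise PermissionError("OAUTH2 token rejected - job refused")
    print(f"Transcoding {source} with profile {profile}")
```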
OAUTH2 tokens are critical to the operation of microservice architectures. They are not only validated against the user login credentials, but also have a time limit and user access rights associated with them. If a system administrator suspects a hostile actor has broken into the broadcast infrastructure, they can disable the token, and any further access attempts will result in the OAUTH2 validation server denying access to the process or storage.
Each memory access or process execution must have an authorized OAUTH2 token associated with it. Validating this token will significantly reduce the risk of a hostile actor breaking into the system.
Figure 1 – The OAUTH2 validation server issues a token that is then used by every process in the microservice architecture to validate access to the data.
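Where tokens are issued as signed JWTs (an assumption – the article does not specify the token format), the time limit and access rights can also be checked locally by the microservice. The sketch below uses the PyJWT library; the key file, issuer and scope name are placeholders. Revoked tokens can only be caught by the validation server, which is why the check shown in Figure 1 is still required.

```python
"""Sketch of the local checks a microservice can make when the OAUTH2
token is a signed JWT - an assumption for illustration only."""
import jwt  # PyJWT
from jwt import InvalidTokenError

PUBLIC_KEY = open("oauth2_server_public.pem").read()  # hypothetical key file

def check_token(token: str, required_scope: str) -> bool:
    try:
        claims = jwt.decode(
            token,
            PUBLIC_KEY,
            algorithms=["RS256"],
            issuer="https://auth.example.broadcaster.internal",  # hypothetical issuer
        )
    except InvalidTokenError:
        # Covers bad signatures and tokens past their 'exp' time limit.
        return False
    # User rights are carried as scopes; the process only proceeds if the
    # required right is present.
    return required_scope in claims.get("scope", "").split()
```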
Vulnerabilities
Although we talk a great deal about IT vulnerabilities and security issues, it’s worth remembering that traditional broadcast infrastructures also had their security weak points, even with SDI and AES. It’s just that they were much better contained due to the localized nature of the television station. But that containment resulted in a static infrastructure that was difficult to scale and lacked flexibility.
Areas of interest for microservice architecture security can be grouped into the following:
1. Container image software
2. Interactions between the container, host operating system, and other hosts
3. Host operating system
4. Networking and repositories
5. Runtime environments
Item 2 is generally taken care of by OAUTH2 authentication, and items 3, 4, and 5 should be taken into consideration as a matter of good practice for any system administrator. However, item 1 needs much more consideration as it has implications for procurement and software provenance.
A container encapsulates one or more microservices with the appropriate libraries and operating system dependencies. And these in themselves can be a source of security vulnerability.
If the microservice has been written in-house, it is the responsibility of the software or DevOps team to thoroughly test the software for vulnerabilities. This includes any code dependencies, such as libraries, that may be provided by an outside source. Furthermore, as a matter of good practice, developers must regularly check vulnerability catalogs such as those published by the Cybersecurity and Infrastructure Security Agency (CISA – cisa.gov). CISA is a US government agency that maintains a catalog of known exploited security vulnerabilities and aims to understand, manage, and reduce the risk to cyber and physical infrastructure.
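As a sketch of what such a routine check might look like, the snippet below downloads CISA’s Known Exploited Vulnerabilities (KEV) catalog and flags any CVEs from a container image scan that appear in it. The feed URL and JSON field names reflect the publicly documented KEV feed but should be verified against cisa.gov; the CVE list itself is illustrative.

```python
"""Sketch of an automated check against CISA's Known Exploited
Vulnerabilities (KEV) catalog. Feed URL and field names should be
confirmed against cisa.gov; the CVE IDs would come from an image scan."""
import requests

KEV_FEED = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def known_exploited(cve_ids: set[str]) -> set[str]:
    # Download the current KEV catalog and return any of our CVEs that
    # appear in it - these need patching with the highest priority.
    catalog = requests.get(KEV_FEED, timeout=30).json()
    kev_ids = {entry["cveID"] for entry in catalog.get("vulnerabilities", [])}
    return cve_ids & kev_ids

if __name__ == "__main__":
    # Illustrative findings from a container image scan.
    findings = {"CVE-2021-44228", "CVE-2023-0001"}
    for cve in sorted(known_exploited(findings)):
        print(f"{cve} is in the KEV catalog - patch immediately")
```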
Guidance for vulnerability management is provided by the UK’s National Cyber Security Centre (NCSC) at ncsc.gov.uk. The NCSC advises that a vulnerability management plan must be in place so that DevOps and IT teams know what vulnerabilities are present within their infrastructure, and that this plan is kept up to date.
Not only do the vulnerabilities need to be understood but meticulous processes must be in place to make sure the necessary patches are applied, deployed, and documented accordingly.
Although all IT and DevOps departments must be aware of these agencies and have the appropriate plans in place, if the microservice components or architecture is being provided by a third-party supplier, then the extent of their responsibility must be understood. And much of this comes down to understanding the provenance of the code and the processes the vendors have undertaken to make their software as secure as possible.
Recovery
Unfortunately, systems do occasionally go wrong and no matter how hard a broadcaster may try, it is an unfortunate fact of life that their high profile status attracts some of the best cybercriminals in the world. Consequently, a recovery plan must also be put in place.
Using microservice architectures makes this much easier than many of the monolithic software systems of the past would allow. The very nature of microservices means that there is a constant deployment cycle, which gives ample opportunity for the applications to be checked and verified against the cybersecurity vulnerability databases. This creates a culture of security first, which must be the mantra for any broadcaster operating an IP infrastructure.
Storage can be easily backed up in the cloud, as broadcasters are only limited by the amount of money they want to spend. Whether to provide incremental or full backups is not only a matter for budgets but also security. If a file is synced from an on-prem storage system to the cloud and the on-prem copy becomes compromised, then the act of copying the file to the cloud will also compromise the cloud copy. Using incremental backups often alleviates this, as there will be a historical copy of the file that has not been compromised.
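One way to preserve that historical copy is to make every backup write a new, uniquely named object rather than overwriting the last one. The sketch below, using boto3 against an S3-style bucket, is illustrative only – the bucket name and key scheme are assumptions, and a production system would add encryption, retention policies and integrity checks.

```python
"""Sketch of a versioned cloud backup: each run writes a new timestamped
object, so a compromised on-prem file cannot silently replace the last
clean copy. Bucket name and key scheme are hypothetical; assumes boto3
and valid cloud credentials."""
import datetime
import hashlib
import pathlib

import boto3

BUCKET = "broadcaster-media-backup"  # hypothetical bucket

def backup_file(local_path: str) -> str:
    path = pathlib.Path(local_path)
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    digest = hashlib.sha256(path.read_bytes()).hexdigest()[:12]
    # Key includes timestamp and content hash, so every run produces a
    # distinct object and the backup history is preserved.
    key = f"{path.name}/{stamp}-{digest}"
    boto3.client("s3").upload_file(str(path), BUCKET, key)
    return key
```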
SELinux
The security of the server’s operating system can be improved using SELinux (Security-Enhanced Linux), which gives administrators much finer control over which users and processes have access to the system. It was originally developed by the NSA (United States National Security Agency) as a series of patches to the Linux kernel, later implemented through the LSM (Linux Security Modules) framework. It was released as open source in 2000 and integrated into the mainline Linux kernel in 2003.
SELinux operates by defining access controls for applications, files, and processes within the server. A set of rules, known as security policies, authorizes what can and cannot be accessed. When an application accesses a file or an ethernet port, for example, SELinux checks the AVC (Access Vector Cache) to determine whether a cached decision already grants the necessary permissions. If no AVC entry exists, SELinux refers the request to the security server within the kernel, which evaluates the security policy to determine whether access is permissible.
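As a small illustration of the labels these policies act on, the Python sketch below reads a file’s SELinux security context directly from its extended attributes and reports whether the host is enforcing policy. It assumes a Linux host with SELinux enabled and uses no third-party bindings.

```python
"""Sketch: inspecting SELinux state on a Linux host with SELinux enabled.
Labels are read from the 'security.selinux' extended attribute."""
import os
import pathlib

def enforcing() -> bool:
    # /sys/fs/selinux/enforce holds "1" when SELinux is enforcing policy.
    try:
        return pathlib.Path("/sys/fs/selinux/enforce").read_text().strip() == "1"
    except FileNotFoundError:
        return False  # SELinux not enabled on this host

def file_context(path: str) -> str:
    # Every file carries a security context (user:role:type:level) that
    # the policy rules refer to.
    return os.getxattr(path, "security.selinux").decode().rstrip("\x00")

if __name__ == "__main__":
    print("Enforcing:", enforcing())
    print("/etc/passwd context:", file_context("/etc/passwd"))
```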
The effectiveness of this security relies on system administrators setting the relevant rules and testing them on a regular basis.
Configuring and operating SELinux is often a complex task, but it adds significant protection to a system as part of a Zero Trust strategy. The days when every user could have superuser access because it was more convenient for them are long gone.