Making Cloud Systems Secure - Part 1
Security for cloud and internet systems is playing an ever-increasing role in broadcast infrastructures. High-value media assets and communication channels to broad audiences are at risk, so it is reasonable to assume that unidentified hostile actors are lurking in every corner.
The good news is that there is much a broadcaster can do to help protect themselves from attack. Although no system can ever be completely secure, it’s worth remembering that even traditional SDI and AES broadcast facilities had their vulnerabilities. They were just different, and broadcasters assumed they knew where they were, but they often didn’t.
Managing and understanding risk is key to maintaining security. Furthermore, a vast array of detection and analysis tools is available to help broadcasters understand network and infrastructure vulnerabilities, especially as the IT industry has been working on solutions to these challenges for many years.
Delivering effective security relies not only on our technical understanding but also on the attitudes of users and how they approach security. To be effective, security must encompass a positive and productive mindset that is promoted and encouraged from the CEO down, so that it manifests itself as a culture throughout the broadcast facility.
As human error is one of the biggest vulnerabilities in any IT system, effective cloud security is a way of life that must be encouraged. Systems also need to be designed with the right processes, features, and patches in place from the beginning, and maintained throughout the lifetime of the system.
Problem to Solve
To deliver cloud security, broadcasters need to achieve the following:
- Protect data storage, processing systems, and networks from data theft.
- Develop a data recovery plan in case data is lost or corrupted.
- Minimize the opportunity for human negligence to compromise data.
- Ringfence the impact of data loss or a compromise of the system.
Although much of our approach to cloud security revolves around stopping malicious actors from penetrating the network in the first place, we must also be mindful of the need to back up data and be able to recover from data loss or corruption.
Intuitively, we may want to treat data recovery separately from stopping intrusion, but in many instances there is a great deal of overlap. Furthermore, data loss or corruption may not be a consequence of a malicious act, but instead a simple mistake such as a user deleting a master media file. The fact that users shouldn't be able to make these mistakes falls into the discipline of user access rights, but restoring the media asset should be a key part of the data recovery plan.
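As a minimal sketch of how access rights can prevent that kind of mistake, the snippet below assumes an AWS S3 bucket (the bucket name, account ID, and recovery role are invented for illustration) and uses the boto3 SDK to deny object deletion to everyone except a nominated recovery role:

```python
import json
import boto3  # AWS SDK for Python; assumes credentials are already configured

# Hypothetical bucket and recovery role, used purely for illustration.
BUCKET = "media-masters"
RECOVERY_ROLE = "arn:aws:iam::123456789012:role/MediaRecoveryAdmin"

# Deny object deletion to every principal except the dedicated recovery role,
# so an ordinary user cannot delete a master media file by accident.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDeleteExceptRecoveryRole",
        "Effect": "Deny",
        "Principal": "*",
        "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        "Condition": {"StringNotEquals": {"aws:PrincipalArn": RECOVERY_ROLE}},
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```

Other cloud providers offer equivalent controls; the point is that the recovery plan and the access-rights policy are designed together.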
Restoring a file is important, but it's only part of the equation. Equally important is isolating a hacker's ability to disrupt an ongoing service. This is where system redundancy and the ability to fail over to a backup system are important.
Cloud systems have the potential to make data recovery much easier due to the multitude of storage options available. High-speed near-line storage can be archived to off-line storage, which is often cheaper but slower. However, these storage systems also need to be protected from malicious attacks. Even if a broadcaster archives their cloud storage to on-prem storage, the two systems are intrinsically linked, and adequate security must be maintained between the two.
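To make the tiering idea concrete, here is a hedged sketch, again assuming AWS S3 and boto3 (the bucket name, prefix, and retention periods are illustrative assumptions, not recommendations), of a lifecycle rule that moves older material to progressively cheaper archive tiers. The same access controls and encryption must still apply to the archive tier.

```python
import boto3  # AWS SDK for Python; bucket, prefix, and tiers are illustrative

s3 = boto3.client("s3")

# Move finished programme files to cheaper archive storage after 30 days,
# and to deep archive after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket="media-masters",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "ArchiveFinishedProgrammes",
            "Status": "Enabled",
            "Filter": {"Prefix": "finished/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "GLACIER"},
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }]
    },
)
```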
Again, we need to address protecting active processing, not just storage.
Outdated Approaches
Traditional methods of IT security used the perimeter wall approach. That is, the access points to the network were heavily guarded so that if a hostile actor tried to gain access, they could only do so through a limited number of points that could be protected. One example of this is the firewall on the internet router.
Firewalls and intrusion detection systems would be placed at the internet connection point to the ISP so that any malicious access could be detected and stopped. The fundamental challenge with this strategy is that it relies on knowledge of the attack pattern, which can only be gained if another organization has been subject to the attack, detected it, noted the pattern, and then shared it. That said, this is still a very important part of infrastructure protection.
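The weakness is easy to see in a minimal, illustrative sketch of signature-based detection (the patterns and requests below are invented for the example): anything that matches a known signature is caught, and anything novel passes straight through.

```python
import re

# Known attack signatures, shared after other organizations have seen them.
KNOWN_SIGNATURES = [
    re.compile(r"union\s+select", re.IGNORECASE),  # SQL injection probe
    re.compile(r"\.\./\.\./"),                     # directory traversal
]

def looks_malicious(request_line: str) -> bool:
    """Flag a request only if it matches a previously documented pattern."""
    return any(sig.search(request_line) for sig in KNOWN_SIGNATURES)

print(looks_malicious("GET /media?id=1 UNION SELECT password FROM users"))  # True
print(looks_malicious("GET /api/v2/never-seen-before-payload"))             # False: unknown attack slips past
```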
The challenges of the perimeter method have been further compounded in recent years as users have become more reliant on the internet. Bring-your-own-device policies and the reliance on cloud systems have further exposed the inadequacies of this approach. Once attackers are inside the perimeter wall, they can cause all kinds of havoc, sometimes lying in wait for weeks or even months before launching their attack.
Figure 1 – Traditional perimeter approaches to network and resource security are flawed in modern cloud and IT infrastructures as they lack intra-zone traffic inspection, lack flexibility, and have single points of failure.
As IT and datacenter infrastructures increase in complexity, the need to improve our approach to security has become clear. This isn't just a matter of improving antivirus software or increasing our ability to detect rogue traffic in a network (although these are clearly very important), but also of adopting a new mindset that assumes nobody and no transaction can be trusted.
Encryption
Stored data is often encrypted so that if an unauthorized user does access the storage, then they will be unable to use the data. They will certainly be able to access it, but they won’t be able to decode and view it.
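As a small sketch of encryption at rest, assuming the widely used Python cryptography library (the key handling and metadata are simplified for illustration; in practice keys live in a key management service, never alongside the data they protect):

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a key; in a real system this would come from a key management service.
key = Fernet.generate_key()
cipher = Fernet(key)

media_metadata = b'{"asset": "EP101_master.mxf", "duration": "00:58:12"}'

# Encrypt before writing to storage; an intruder who copies the stored
# ciphertext can access it but cannot read it without the key.
ciphertext = cipher.encrypt(media_metadata)

# Only an authorized process holding the key can recover the original.
assert cipher.decrypt(ciphertext) == media_metadata
```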
Data is not only vulnerable when it is stored, but also when it is in transit between processes or if the entire platform is under attack. Anybody eavesdropping on the network will be able to gain a whole host of information about the infrastructure, leading to a potential attack.
Exchanges between cloud software, including microservices, often use the RESTful API method. As public and private clouds are connected via the internet, their communication protocols must comply with internet standards.
REST (Representational State Transfer) is an architectural style that provides methods and conventions for computer systems on the internet to exchange data and therefore communicate with each other. Although this is a massively versatile approach, it is typically carried over a plain-text protocol, meaning that without additional measures it is highly insecure. Anybody with a network sniffer will be able to view the messages and gain a great deal of knowledge about the sending and receiving networks. And this is especially worrying as the communications are being freely exchanged across the open internet.
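To illustrate the exposure, the sketch below simply prints what an unencrypted REST call looks like on the wire; the hostname, path, and token are invented purely for the example.

```python
# What actually travels over the network for an unencrypted REST call.
raw_request = (
    b"POST /api/v1/assets HTTP/1.1\r\n"
    b"Host: playout.example.com\r\n"
    b"Authorization: Bearer secret-api-token-12345\r\n"
    b"Content-Type: application/json\r\n"
    b"\r\n"
    b'{"asset_id": "EP101", "action": "publish"}'
)

# Over plain HTTP every byte above, including the bearer token and the shape
# of the API, is visible to anyone sniffing the network path.
print(raw_request.decode())
```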
This leads to another potential issue, that of man-in-the-middle attacks, as any endpoint using the RESTful API can be impersonated by a malicious actor. This scenario mainly occurs because endpoint validation was not built into the original web HTTP (Hyper Text Transfer Protocol) specifications. A malicious actor could intercept the traffic on the network, redirect the destination address to their own server so that all the traffic flows to it, and then easily harvest the user's credentials.
To alleviate both these challenges, a method of validating the API endpoints was developed using public-private key encryption. This resulted in the adoption of HTTPS (Hyper Text Transfer Protocol Secure), which uses TLS (Transport Layer Security) as its underlying security method. HTTPS solves three challenges: confidentiality, authenticity, and integrity. Confidentiality stops anybody snooping on the connection, as it is encrypted so that all sensitive data is obscured. Authenticity guarantees the sender and receiver are who they say they are (thus stopping man-in-the-middle attacks). And integrity guarantees that the data exchanged between the endpoints hasn't been tampered with or modified.
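As a brief sketch of the same exchange over HTTPS, assuming the popular Python requests library (the URL and token are again invented for illustration), certificate verification is what defeats the man-in-the-middle scenario described above:

```python
import requests  # widely used HTTP client; URL and token are illustrative

# Over HTTPS the request is encrypted end to end, and the server's certificate
# is checked against trusted certificate authorities before any data is sent
# (requests verifies certificates by default).
response = requests.get(
    "https://playout.example.com/api/v1/assets",
    headers={"Authorization": "Bearer secret-api-token-12345"},
    timeout=10,
)

# If an attacker redirects the traffic to their own server, they cannot present
# a valid certificate for the hostname; the library raises
# requests.exceptions.SSLError instead of silently handing over the credentials.
print(response.status_code)
```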