Making Cloud Systems Secure - Part 2
Correctly applied zero trust security methods stop intruders from entering networks and computing systems at all entry points within the infrastructure. And to help broadcasters understand security vulnerabilities further, resources such as the CVE Program are available.
OAuth 2.0
Although HTTPS provides a reliable method of data exchange for APIs, and services such as Active Directory provide authentication, neither delivers a method of limiting access to cloud services. User credentials achieve this to some extent, as the user provides a unique username and password to log onto a service, but to maintain adequate levels of security they would have to log on for every service they access. This may be workable for simple website access but is not a reliable method for API connectivity, especially when considering microservices, as they often establish multiple simultaneous connections to multiple services.
OAuth uses tokens that both replace the logon credentials of the user and define the scope of the service they are connecting to. For example, a transcoding microservice will receive a RESTful API instruction from a user who wants to transcode a file. Included in the API message is a token created by the OAuth authorization server, and embedded in this token is information that defines the resource scope, such as file access rights. In this instance, the user may have permission to read the input media file but not write to it, thus negating the possibility of the user deleting or corrupting the media file.
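As a minimal sketch of the scope check described above, the fragment below shows how a transcoding microservice might test a token's claims before touching a media file. The claim names ("sub", "scope") follow common OAuth/JWT conventions, but the function and claim values here are illustrative assumptions, not any specific vendor's API.

```python
# Illustrative scope check a transcoder microservice might perform.
# Claim names follow common OAuth/JWT conventions; values are hypothetical.

def is_allowed(token_claims: dict, required_scope: str) -> bool:
    """Return True if the token grants the required scope.

    OAuth scopes are conventionally carried as a space-separated
    string in the "scope" claim.
    """
    granted = token_claims.get("scope", "").split()
    return required_scope in granted

claims = {"sub": "user-42", "scope": "media:read transcode:run"}

print(is_allowed(claims, "media:read"))   # True: read access granted
print(is_allowed(claims, "media:write"))  # False: write access denied
```

With this check in place, the microservice can read the input file but refuses any request to overwrite it, exactly the behavior the scope is meant to enforce.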
Furthermore, if the system administrator wants to stop access to a specific resource, they can kill the token. Although it may still physically exist as a data string, the OAuth authorization server will refuse to validate it for any server that presents it, thereby restricting access to the resource.
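The revocation behavior can be sketched as follows: the authorization server keeps a list of killed token IDs (conventionally the JWT "jti" claim) and refuses to validate anything on that list. This is a simplified assumption about how a server might implement it, not a reference to a particular product.

```python
# Minimal sketch of server-side token revocation. The "jti" (JWT ID)
# claim uniquely identifies a token; the server keeps a revocation set.

revoked_jtis: set[str] = set()

def revoke(jti: str) -> None:
    """Administrator kills a token; it remains a valid-looking string."""
    revoked_jtis.add(jti)

def validate(claims: dict) -> bool:
    """Authorization server refuses any token on the revocation list."""
    return claims.get("jti") not in revoked_jtis

claims = {"jti": "token-001", "sub": "user-42"}
print(validate(claims))   # True: token accepted
revoke("token-001")
print(validate(claims))   # False: token killed, access refused
```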
Figure 1 - A session showing how a user is issued with an OAuth token. Initially the client logs onto the network and is validated by the AD, which in turn requests an OAuth token from the OAuth server (assuming the client passed AD validation). When the user requests a service through an API call, the client presents its token within a JWT (see text); the service then validates the token against the OAuth server and, assuming the token is valid, the API call is authorized to continue its task.
As well as embedding resource access information, OAuth tokens also have a limited lifetime. That is, after a predefined length of time (for example, an hour), the token will no longer be valid and any requests from microservices to the OAuth authorization server to validate it will be refused. The microservice will then report back to the user advising that it cannot complete the operation.
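The lifetime check can be illustrated with the standard JWT expiry claims: "iat" (issued at) and "exp" (expiry), both Unix timestamps. The function below is a hedged sketch of what a validating server might do, with hypothetical timestamp values.

```python
# Sketch of a token lifetime check using the standard JWT "exp" claim.
# Timestamps are Unix epoch seconds; the values here are hypothetical.

def is_expired(claims: dict, now: float) -> bool:
    """Return True if the token has passed its expiry time."""
    return now >= claims["exp"]

issued_at = 1_700_000_000
claims = {"iat": issued_at, "exp": issued_at + 3600}  # valid for one hour

print(is_expired(claims, now=issued_at + 1800))  # False: still within the hour
print(is_expired(claims, now=issued_at + 7200))  # True: validation will be refused
```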
Username and password credentials have neither this dynamic level of access granularity nor these time limit restrictions, making tokens much more secure and flexible.
The tokens are small data structures which must be validated by the OAuth server, and one of the most used formats is the JWT (JSON Web Token). As well as providing a convenient method of RESTful data exchange, JWTs secure and validate the data they carry through digital signatures, with encryption available as an option.
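To make the JWT structure concrete, the sketch below builds and verifies a token using only the Python standard library. A JWT is three base64url-encoded parts (header, payload, signature) joined by dots; here the signature is an HMAC-SHA256, corresponding to the JWT "HS256" algorithm. This is a teaching sketch under the assumption of a shared secret, not a production implementation (real systems should use a maintained JWT library).

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: bytes) -> str:
    """Build a JWT: header.payload.signature, signed with HMAC-SHA256."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

secret = b"shared-secret"
token = sign_jwt({"sub": "user-42", "scope": "media:read"}, secret)
print(verify_jwt(token, secret))           # True: signature checks out
print(verify_jwt(token, b"wrong-secret"))  # False: tampering or wrong key detected
```

Because the signature covers the header and payload, any alteration to the embedded scope or expiry invalidates the token, which is what allows a microservice to trust the claims it reads.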
Zero Trust
Security of modern cloud systems falls under the heading of “cyber security”. This covers the protection of computer systems from unauthorized or malicious users to prevent damage to the system or data theft. And one of the most prominent methods of achieving this is Zero Trust, an architectural model that has at its core the concept of “never trust, always verify”.
Work conducted by US cybersecurity researchers at NIST (National Institute of Standards and Technology) and NCCoE (National Cybersecurity Center of Excellence) resulted in the publication “SP 800-207, Zero Trust Architecture”. The idea behind Zero Trust is that the traditional model of maintaining a secure perimeter for the network is replaced by assuming that any data exchange, storage, or processing point is a potential threat. This may be through the internet, Wi-Fi, USB drive, or even malicious software masquerading as a legitimate product. Everything and everyone presented to the network must be continuously verified.
For example, a user logging onto a workstation must be validated, preferably through a centralized user verification system such as Okta or Azure Active Directory (AD). The AD then sends a message to the OAuth authorization server to create a token, and this token is used by each microservice to check with the OAuth server that the request is authorized to execute, as well as to determine its scope. Whenever the user tries to access any point within the infrastructure, the same OAuth token is used to verify their access, thus greatly improving security.
The fundamental difference between the perimeter and zero-trust models is that the perimeter method tries to build a wall around the infrastructure to keep hostile actors out, whereas the zero-trust method assumes we can never achieve this and that hostile actors are everywhere, not just where we would like them to be. By verifying each access using the OAuth token, every step in the workflow is secured.
The OAuth token is unique for every user session and not only carries read-write-execute rights, but also has a time limit placed on it. As each process execution, data transfer, and storage function is validated by the microservice against the OAuth authorization server, any suspicious behavior can be stopped by terminating the token, thus halting any further processing. This would not be achievable in the traditional perimeter model as, once the user is within the perimeter, they are difficult to stop.
Another major benefit of the zero-trust model is that every data access and storage process is validated against a centralized OAuth server, and this creates a wealth of metadata. Should a system be attacked, a forensic analysis of the problem can be established very quickly so that it can be dealt with and any security vulnerabilities rectified.
Collaboration
Security isn’t just about working in isolation; it relies on international collaboration. In the same way police services throughout the world collaborate to catch criminals crossing borders, cybersecurity organizations collaborate to stop cybercriminals attacking networks. One example of this is the CVE Program (Common Vulnerabilities and Exposures), whose mission is to identify, define, and catalog publicly disclosed cybersecurity vulnerabilities.
A vulnerability can be thought of as a mistake in the code that gives an attacker direct access to the network or system. For example, a vulnerability might allow an attacker to pose as a superuser, giving them unauthorized access to the data.
Software developers use libraries all the time, and subscribing to programs such as CVE allows them to constantly check their own code as well as receive notifications of vulnerabilities in the libraries they are using. Cybersecurity collaboration is a fundamental requirement for keeping cybercriminals at bay, and developers have a moral obligation (and in some countries a legal obligation) to notify organizations committed to combatting cybercrime should they find a vulnerability in their own code or in others’.
Conclusion
Maintaining high levels of cybersecurity is not just a technical challenge but applies to users of the system too. From the CEO down through the whole organization, keeping systems secure is the responsibility of everybody. This is particularly important when working with cloud systems and microservices as they are always online and so need to be secure at all times.
Broadcasters need to be certain that the software they are introducing into their infrastructures, whether on-prem or in the public cloud, is secure. And the best way of achieving this is to check the provenance of the code being introduced.
Zero-trust infrastructures are key to keeping broadcaster systems secure and high value media assets safe.