Computer Security: Part 4 - Making Hardware Secure
The history of computing has been dominated by the von Neumann computer architecture, also known as the Princeton architecture after the university of that name, in which a single common memory stores the operating system, user programs and the variables those programs operate on.
The advantage of the von Neumann architecture is flexibility. The proportions of the memory allocated to different requirements can be changed readily. When memory was expensive, physically large and power hungry, that was an important factor. In principle a von Neumann computer can run adaptive software that modifies itself as it runs in order to achieve some goal such as optimization. In practice the great majority of applications do not need that ability and, for security, do not want it.
However, isolation of one memory area from another, which is what prevents users attacking the operating system or kernel code, is achieved entirely by control of the physical memory address generation. The memory management unit (MMU) should be controlled only by the operating system working in kernel or privileged mode. If that system is compromised, by weak software or a hardware design fault, the whole machine is vulnerable. All the eggs are in one basket.
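As a rough sketch of that single layer of protection, the check a conventional MMU performs can be modelled in software along the following lines. This is a simplified illustration only; the structure and field names are invented for the example and do not correspond to any particular processor's page-table format.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative page-table entry. In a conventional von Neumann machine,
 * the only thing standing between a user program and any other memory is
 * whether the MMU maps the address and what permission bits the entry holds. */
typedef struct {
    uint64_t physical_base;   /* where the page really lives */
    bool     present;         /* is the page mapped at all? */
    bool     user_accessible; /* may user-mode code touch it? */
    bool     writable;        /* may it be written? */
} page_entry_t;

/* If the kernel (or a hardware fault) ever sets these bits wrongly,
 * there is no second check: the access simply goes through. */
bool access_allowed(const page_entry_t *e, bool user_mode, bool is_write)
{
    if (!e->present)
        return false;
    if (user_mode && !e->user_accessible)
        return false;
    if (is_write && !e->writable)
        return false;
    return true;
}
```

Everything hangs on those few bits being maintained correctly by trusted code.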
Software Integrity
The weakness of the von Neumann architecture is that it relies totally on the integrity of the software. Whilst the memory management unit may have flags that designate the purpose of each memory page, this is not sufficient. Computers frequently use buffers: pages of memory used temporarily to hold data. Much software assumes that stacks and buffers never exceed certain bounds, but does not actually check, because such checks slow the processor down.
In the absence of such checking, malicious input can cause a buffer to overflow its designated page so that data are written into a different area of memory. Numerous hackers have used this approach to insert malicious code into vulnerable computers.
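The classic form of the problem is an unchecked copy into a fixed-size buffer. The C fragment below is a minimal sketch of that pattern, with invented function names and buffer sizes; it shows both the unchecked copy the text describes and the bounds-checked version whose extra instructions programmers historically tried to avoid.

```c
#include <string.h>

/* Illustrative only: a fixed-size stack buffer filled with no bounds check.
 * If 'request' is longer than the buffer, the copy runs past the end and
 * overwrites adjacent memory, including saved return addresses. */
void handle_request(const char *request)
{
    char buffer[64];
    strcpy(buffer, request);            /* no length check: classic overflow */
    /* ... process buffer ... */
}

/* The checked version costs a few extra instructions per call. */
void handle_request_checked(const char *request)
{
    char buffer[64];
    strncpy(buffer, request, sizeof buffer - 1);
    buffer[sizeof buffer - 1] = '\0';
    /* ... process buffer ... */
}
```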
One famous attack displayed on the user's screen "Billy Gates why do you make this possible? Stop making money and fix your software!!"
It's a fair question, but it failed to suggest that some improvement was also needed in hardware. Sending the hacker to jail dealt with the symptom. We are still dealing with the problem after shooting the messenger.
ALU Optimization
The overriding ethic in computing has always been to obtain the most speed, which translates to getting the most value out of the traditionally expensive arithmetic logic unit (ALU). Whilst the technology of the ALU sets a speed bound that cannot be exceeded, it doesn't matter how fast the ALU itself runs if it is kept waiting for data or instructions.
That approach has been behind developments ranging from cache memory, where frequently accessed data end up in a local high-speed memory close to the ALU, to techniques such as pipelining and non-sequential execution of instructions.
One of the fundamentals of computing is the conditional instruction, where the outcome of an earlier instruction determines what should happen next. For example, an iterative process might set up a counter to be decremented at the end of an iteration. A conditional branch is then used to take some action when the iteration count has reached zero. Clearly the direction taken at the branch cannot be known until the state of the counter is considered.
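In C, the pattern described might look like the sketch below (the function is illustrative). The compiler lowers the loop condition to a compare followed by a conditional branch, and the processor cannot know which way that branch goes until the decremented counter has been evaluated.

```c
/* Illustrative counted loop: the branch back to the top of the loop is
 * taken on every pass until the counter reaches zero. */
void iterate(int count, void (*step)(int))
{
    while (count > 0) {   /* conditional branch depends on the counter */
        step(count);
        count--;          /* decremented at the end of each iteration */
    }
}
```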
Speculative Execution
One approach to gaining speed in such conditions is so-called speculative execution. The processor may start executing instructions beyond the conditional branch without knowing whether the branch will be taken. If the guess is right, all well and good; if it is wrong, the instructions and their outcomes are simply discarded.
Whilst speculative execution gained speed, it also allowed security to be compromised, because instructions were executed out of sequence. If an earlier instruction attempted an illegal memory access, or did something else that should have resulted in an exception or a call to the kernel, it was too late: the instructions that followed, which should have been stopped, had already been executed.
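The widely published Spectre variant 1 gadget is a concrete illustration of the hazard. The sketch below is a simplified version of that published pattern, with illustrative array names and sizes; it is not a working exploit.

```c
#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 4096];

/* If 'x' is attacker-controlled and out of range, the bounds check should
 * prevent the access. Under speculative execution the processor may run
 * the body before the comparison resolves, loading array2 at an address
 * derived from out-of-bounds (potentially secret) data. The result is
 * discarded, but the load leaves a measurable trace in the cache. */
void victim(size_t x)
{
    if (x < array1_size) {
        volatile uint8_t t = array2[array1[x] * 4096];
        (void)t;
    }
}
```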
For a long time developments in computing were simply refinements of the von Neumann architecture that had effectively taken root. Adopting a different architecture was only possible if appropriate operating systems were also available, and it was easier to stick with the existing approach and take an operating system off the shelf.
Backwards Compatibility Compromise
Another factor is that a computer user would strongly prefer a more modern replacement machine to be compatible with their existing software, as that is a lot cheaper than having to write new code. Although it makes good business sense, it does little for innovation or security.
Now, however, the process of refinement of the von Neumann architecture seems to be moving into the area of diminishing returns. Memory is no longer huge and expensive, and the logic actually used in the ALU forms an increasingly small corner of the processor, courtesy of all the clever tricks being used to increase throughput.
The adoption of alternative architectures also offers an opportunity to enhance security, but it remains to be seen whether that actually happens.
The von Neumann architecture can be retained but refined, or it can be abandoned altogether; both approaches are possible.
Common Memory Weakness
One weakness of the von Neumann architecture is the use of a common memory for the operating system, user software and variables. Hacking takes place when the operating system is modified so that it does things it ordinarily would not do. This can happen if memory locations containing the operating system can be overwritten, which is what buffer overflow attacks exploit.
In a computer with a tagged architecture, each memory location stores a few extra bits. These bits are set by the operating system when the locations are written and describe what is in the memory location, from which it can be determined what memory transactions are and are not allowed. The memory management unit is extended so that whenever a memory location is accessed, the proposed transaction is compared with the tags to see if it is legal. If it is not, the transaction will not take place. Instead, the memory management unit will record the error condition and cause a trap back to kernel mode.
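The mechanism can be modelled in software along the lines of the sketch below. The tag values, access types and function are all invented for illustration; a real implementation would perform the comparison in the memory management hardware.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative tag bits stored alongside every memory location. */
enum {
    TAG_EXECUTE = 1 << 0,   /* contents may be fetched as an instruction */
    TAG_READ    = 1 << 1,   /* contents may be read as data */
    TAG_WRITE   = 1 << 2    /* contents may be overwritten */
};

typedef enum { ACCESS_FETCH, ACCESS_READ, ACCESS_WRITE } access_t;

/* The extended MMU compares the proposed transaction with the tag bits.
 * A mismatch blocks the transaction, records the error and traps back to
 * kernel mode (modelled here as a logged message and a 'false' result). */
bool mmu_check(uint32_t address, uint8_t tag, access_t access)
{
    uint8_t required = (access == ACCESS_FETCH) ? TAG_EXECUTE
                     : (access == ACCESS_READ)  ? TAG_READ
                                                : TAG_WRITE;
    if (tag & required)
        return true;                               /* legal: proceed */

    fprintf(stderr, "tag violation at 0x%08x\n", (unsigned)address);
    return false;                                  /* trap to kernel mode */
}
```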
Security Tagging Memory
For example, operating system code typically does not change, so it could be tagged as execute-only, meaning it cannot be modified. Locations containing variables can be tagged so that the variable can be read or written but cannot be interpreted as an instruction. That could include memory areas such as stacks, which should only contain non-executable information such as the addresses at which execution should resume after returns from interrupts.
Ideally the memory management unit needs more information from the processor than it gets in a von Neumann machine. It would be beneficial if the MMU knew what the processor is trying to do when it sends an address. For example, if the processor has placed the program counter on the memory bus to fetch an instruction, the MMU would expect the address to map to a location tagged as executable rather than as a variable.
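Continuing the sketch above, the operating system would assign tags along these lines as it loads memory. The values reuse the same invented TAG_* bits and are not drawn from any real machine.

```c
/* Illustrative tag assignments (same invented TAG_* bits as above). */
#define TAG_EXECUTE (1u << 0)
#define TAG_READ    (1u << 1)
#define TAG_WRITE   (1u << 2)

/* Operating system code: execute-only, so an instruction fetch succeeds
 * but any attempt to read or modify it as data is refused. */
#define TAG_KERNEL_TEXT  TAG_EXECUTE

/* Stacks and variable pages: readable and writable, never executable, so
 * code smuggled into a buffer or onto a stack can never be fetched. */
#define TAG_STACK_DATA   (TAG_READ | TAG_WRITE)
```

An instruction fetch, identified by the MMU as such, would then only be allowed to land on locations carrying the execute tag.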
Logging Tag Violations
A tagged architecture reinforces the single layer of protection of a conventional MMU by placing an extra layer of protection after the virtual address generation. Tagged architectures are significantly more robust than conventional machines, especially if tag violations are logged and studied, because they may reveal coding errors in the operating system or faults in the memory management hardware that can then be remedied. They may also reveal that the system has been attacked, of course, which means that tagged architecture computers can be used as lures to attract and record viruses.
Tagged architecture has a tremendous advantage, which is that the extra protection is in hardware, works in parallel with the memory access itself and does not slow the processor down very much. In normal operation a memory transaction is received by the MMU and the legality of the transaction is checked without the processor being involved. Legacy software can still run on a tagged machine if it retains an appropriate instruction set.
Tagged architecture is robust against buffer overflow attacks because if a hacker incorporates malicious code in the buffer contents, the fact that it is a buffer transaction means that the code will be flagged as non-executable and the attack will not succeed.
Future Hardware Security
Tagged architecture is certainly not new. It has been around in various forms for about fifty years, and whenever it has been adopted various advantages were noted. Unfortunately, it largely got flattened by the von Neumann steamroller. In view of the increasing dependence of modern society on computers and the frequency of malicious attacks, tagged architecture will hopefully make a comeback as the von Neumann roller runs out of steam.
Whilst tagged architecture is a contribution to security, it can also make computers more efficient because tagging can be extended to describe the way in which data are encoded, for example whether variables are binary integers or floating-point parameters.
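As a sketch of that extension (again with invented values), a few additional tag bits could describe how the data at a location are encoded as well as which transactions are permitted:

```c
/* Illustrative type tags: alongside the permission bits, a small field
 * could record how the stored value is encoded, letting hardware treat
 * integers and floating-point parameters appropriately without the
 * instruction stream having to say so. */
typedef enum {
    TYPE_UNTYPED  = 0,
    TYPE_INT32    = 1,   /* binary integer */
    TYPE_FLOAT64  = 2,   /* floating-point parameter */
    TYPE_POINTER  = 3
} type_tag_t;
```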