Data Recording and Transmission: Part 10 - The Hard Disk Drive – Increasing Density
The hard disk drive rapidly converged on the concept of one head per surface with all of the heads moving together on a common positioner.
There are many factors that determine the storage density of a hard drive. Advances in magnetic materials allow the same magnetic energy to be packed into a smaller area on the track. Better head designs allow the same signals to be recovered from even smaller areas, and improvements in channel coding store more bits in a given number of magnetic transitions.
However, none of these improvements can be realized if the read head cannot follow the data track, so all of the magnetic and coding improvements had to be paralleled by developments in mechanical precision.
The first hard drives were physically large, and the moving head assembly was heavy. The first head actuators were hydraulic, using technology developed during WWII for powered flight controls and gun turrets. Companies such as Sperry that had made electronically controlled hydraulic systems for aircraft turned to computers after the war and were able to use their technology in data storage.
Rapid acceleration of the head assembly resulted in a Newtonian reaction that would cause the drive to wander about the computer room like an unbalanced washing machine. One innovative solution was to build a double disk drive having two sets of disks and a pair of actuators acting in opposition to cancel out the reaction.
Hydraulically operated hard drives were maintenance-intensive, and as the moving mass came down it became possible to use moving-coil linear actuators like those used in loudspeakers. The same developments in magnetics that improved the recording media also allowed the magnets in the actuator to become smaller and lighter. Aluminum coils further reduced the moving mass.
Initially, the actuator only moved the heads and as they got close to the desired cylinder they would be locked in place by a solenoid that engaged with teeth on the carriage. This worked well enough, but as tracks got narrower it was found that temperature changes in the disk were enough to move the tracks out of alignment with the heads. The result was the servo surface disk drive shown in Fig.1, in which one surface of the multiple disks contained special magnetic patterns that a dedicated head could locate and follow.
Fig.1 - The adoption of the servo surface meant that the expansion of the disk due to temperature changes didn't affect the head alignment. It did, however, assume all the disks had the same temperature.
The loss of one surface to servo patterns was more than compensated by the tremendous increase in capacity of the remaining surfaces due to the much narrower tracks that could be used. The servo patterns would also contain timing information so that the write clock and sector count could be synchronized to the rotation of the disk. The speed of the disk did not then need to be controlled and it was simply spun by an induction motor. The first drives of this kind could store about 100 Megabytes of data on a 14-inch disk pack. This was later doubled.
Such packs were removable and precautions had to be taken to ensure cleanliness so that head crashes were avoided. The radial position of the heads had to be adjusted using special alignment disks so that a disk written in one drive could be read in another. As the capacity of drives went up and the cost came down it became possible to conceive of a drive in which the disk was permanently installed. This offered a number of advantages. In a sealed unit contamination would no longer be a problem and the flying height of the heads could be reduced to raise density. As the heads only ever operated with one disk pack, the need for alignment went away.
IBM made a development model using these ideas. The model number was the same as that of a famous rifle and the idea became known as Winchester technology. With no need to exchange the disk, it was no longer necessary to retract the heads and instead they landed on a dedicated area of the disk when it stopped. Brakes were used to shorten the amount of time the heads slid over the disk.
Another advantage followed from sealing the disk inside the drive. It became simple to have two heads on every arm, such that each worked on its own part of the disk surface. Fig.2 shows that the travel of the actuator was halved and the access time was reduced.
As storage density rose ever higher, using one servo surface to align the heads with multiple disks was no longer accurate enough, and instead alignment patterns were interleaved with the data on every surface. These were known as embedded servo drives, an approach that continues to this day.
Fig.2 - With no need to retract the heads in a drive where the disk was not exchangeable, it became easy to have more than one head per surface, reducing the travel of the actuator.
In a linear head actuator, every part moves at the same speed. It was then realized that a rotary actuator (Fig.3) could have less inertia, because only the ends of the arms where the heads were mounted moved at full speed, while everything nearer the pivot moved more slowly. The moving coil principle was still employed, but the motor was now rotary and worked in the same way as a traditional d'Arsonval or Weston voltmeter.
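To see roughly why, treat the arm as a uniform rod of mass m and length L pivoted at one end (a simplification; real arms are tapered and counterbalanced). Its kinetic energy at a given head speed v is that of a much smaller mass moving linearly:

```latex
% Uniform arm pivoted at one end: moment of inertia and tip speed
I = \tfrac{1}{3} m L^{2}, \qquad v = \omega L
% Kinetic energy expressed in terms of the head (tip) speed:
E = \tfrac{1}{2} I \omega^{2} = \tfrac{1}{2}\left(\tfrac{m}{3}\right) v^{2}
```

In other words, the arm behaves as if only about a third of its mass were travelling at head speed, which is why the rotary actuator can accelerate harder for the same motor.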
The operation of the head actuator switches mode depending on whether it is following a track at constant radius or seeking from one cylinder to another. A seek would begin by computing the cylinder difference, which is the number of tracks the positioner must move, and the difference would then count down every time a track was crossed. The cylinder difference, with some modifications, would drive a DAC that controlled the speed of the actuator, often using velocity feedback from a tachometer. As the target cylinder was approached, the cylinder difference would fall, and with it the scheduled velocity, so the heads decelerated onto the correct track.
Fig.3 - The rotary actuator has the advantage that inertia is reduced because the inner parts travel slower than the extremities. A moving coil is still used, but it creates torque rather than thrust.
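As an illustration of the seek scheduling just described, here is a minimal sketch in Python. The square-root velocity profile, the gain values and all the names are assumptions chosen for clarity; they are not taken from any particular drive.

```python
MAX_SPEED = 1.0  # normalized full speed; the clipping limit for long seeks
GAIN = 0.05      # profile gain, set by the achievable deceleration

def scheduled_speed(cylinder_difference: int) -> float:
    """Map the remaining track count to the speed demanded via the DAC.
    Constant deceleration gives a speed proportional to the square root
    of the remaining distance, so the heads slow smoothly into the target."""
    speed = GAIN * abs(cylinder_difference) ** 0.5
    return min(speed, MAX_SPEED)  # clip to limit coil dissipation

def seek(current_cylinder: int, target_cylinder: int) -> None:
    """Count the cylinder difference down as tracks are crossed."""
    difference = target_cylinder - current_cylinder
    step = 1 if difference > 0 else -1
    while difference != 0:
        demand = scheduled_speed(difference)
        # ...demand drives the actuator here; velocity feedback from a
        # tachometer closes the loop around this scheduled speed...
        difference -= step  # a track crossing was detected
```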
The physics of moving coils showed that the power dissipated in the coil would go as the fourth power of the access speed, so it was important to limit the speed to prevent overheating. This was typically done by limiting or clipping the cylinder difference so that long seeks did not result in an attempt to reach excessive speed. Once the heads were up to speed, the carriage would coast, needing little power, before using power to decelerate.
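The fourth-power relationship follows from a simple scaling argument, assuming an idealized constant-acceleration seek of fixed length d completed in time t:

```latex
% Peak speed and acceleration for a seek of fixed length d in time t:
v \propto \frac{d}{t}, \qquad a \propto \frac{d}{t^{2}} \propto \frac{v^{2}}{d}
% Force and current scale with acceleration; dissipation with current squared:
F = ma \propto v^{2}, \qquad i \propto F, \qquad P = i^{2}R \propto v^{4}
```

Doubling the access speed therefore multiplies the heat in the coil by sixteen, which is why long seeks had to be clipped.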
The disk drive can access its data blocks randomly, but that is not quite the same as a RAM that can access individual words. IBM called them direct access storage devices (DASD). As the seek to the desired cylinder and the search for the desired sector are mechanical operations, there is a risk of reaching the wrong block. Recovering the wrong data is not as bad as writing in the wrong place, which is not recoverable.
Disk drives universally adopted a system to check for correct access. Prior to every data block is a header, which contains the address of the following block followed by a checksum. The disk drive, or strictly the controller, will not attempt a data transfer unless the cylinder, sector and head addresses and the checksum read from the header are correct. The headers are written when the disk is formatted. At the same time the data areas may also be written and read back to check their data integrity. Should a bad block be found, a flag would also appear in the header. A fictitious file would be created in the disk directory, making it look as if the bad blocks were already in use, so no real data would ever be recorded on them.
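A minimal sketch of that check, with hypothetical field names and a placeholder checksum (real controllers computed a CRC over the header bytes):

```python
from dataclasses import dataclass

@dataclass
class Header:
    cylinder: int
    head: int
    sector: int
    checksum: int
    bad: bool  # bad-block flag written when the disk was formatted

def header_checksum(cylinder: int, head: int, sector: int) -> int:
    """Placeholder only; real drives used a CRC over the header bytes."""
    return (cylinder ^ (head << 4) ^ (sector << 8)) & 0xFFFF

def may_transfer(read: Header, cylinder: int, head: int, sector: int) -> bool:
    """The controller only transfers data when the header just read
    matches the intended address and its checksum is intact."""
    return (
        not read.bad
        and (read.cylinder, read.head, read.sector) == (cylinder, head, sector)
        and read.checksum == header_checksum(cylinder, head, sector)
    )
```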
This simple precaution allowed drives to use relatively simple error correcting systems such as the Fire code, which will be discussed in a future article. The difficulty with bad blocks was that when clusters were used, a single bad block made a whole cluster unusable. In some drives a spare block would be placed at the end of each track so that a bad block in that track would bring the spare into use, maintaining capacity.
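A sketch of that per-track sparing, under the assumption of one spare physical sector at the end of each track (the sector count and names are illustrative):

```python
DATA_SECTORS = 17  # logical sectors per track; physical sector 17 is the spare

def physical_sector(logical: int, bad: int | None) -> int:
    """Redirect accesses to a known bad sector to the track's spare,
    so the track keeps its full logical capacity."""
    if logical == bad:
        return DATA_SECTORS  # bring the spare at the end of the track into use
    return logical
```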
The disk speed is uncontrolled, and the moment when data are written or read is determined by the instant the correct header is found; the rest of the system must be able to deal with that. Typically data would be read or written using direct memory access, so that the processor would be free for other work. The DMA system would be competing for memory access with other processes, and to guarantee that read data could be accepted, or write data provided, in unbroken blocks, it was usual to provide a buffer memory or silo between the drive and the DMA process. Digital video and audio recorders need the same function, but there it is known as a time base corrector.
When writing, the controller would start with a full silo and attempt to keep it full by reading from memory, whereas when reading it would start with an empty silo and attempt to keep it empty by writing to memory. In the event of a silo over- or underflow, the disk drive would complete the present block and would then have to wait one revolution to recommence the transfer.
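A minimal sketch of the silo on the read side, with an illustrative depth; a False return from the disk side signals the overflow that costs a revolution:

```python
from collections import deque

SILO_DEPTH = 64  # words of buffering between disk and memory (illustrative)

class Silo:
    """FIFO between the fixed-rate disk transfer and the contended DMA."""

    def __init__(self) -> None:
        self._fifo: deque[int] = deque()

    def from_disk(self, word: int) -> bool:
        """Called at disk rate during a read; False means overflow,
        so the transfer must resume on the next revolution."""
        if len(self._fifo) >= SILO_DEPTH:
            return False
        self._fifo.append(word)
        return True

    def to_memory(self) -> int | None:
        """Called each time the DMA controller wins a memory cycle."""
        return self._fifo.popleft() if self._fifo else None
```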