Data Recording and Transmission: Part 5 - Channel Coding
John Watkinson introduces the idea of channel coding to convert the uncontrolled characteristics of data into something that works within the limitations of real media.
It has been seen that various magnetic and optical media can reproduce binary, or strictly speaking m-ary, signals. However, the ideal medium does not exist and all real media fall short in various ways. It must be recalled that most data channels are analog and are not aware that the waveforms they carry will subsequently be quantized. Analog channels have a frequency response that is seldom flat.
Many magnetic recording channels have nulls at zero Hz and at a higher cut-off frequency. Optical discs respond down to zero Hz, but the lowest frequencies cannot be used for data storage because they would interfere with the focus and tracking mechanisms that are also reading the track.
In all practical recorders, the highest frequency that emerges from the channel coder must be within the cut-off frequency of the medium. In many cases, the signal must also be free of low frequencies, a so-called DC-free signal. Equally there must be a sufficient number of timing events in the recorded signal, known as clock content, so that the decoding process can find the centres of the eyes in the eye pattern.
Raw data are unconstrained and can have any combination of bit patterns, including runs of identical bits and unequal proportions of ones and zeros. Long runs of identical bits damage the clock content and an unequal number of ones and zeros causes DC offsets.
In electronic circuitry, there is usually a fixed-frequency clock signal available that synchronises all operations. In storage media this approach is not possible because the bit rate on replay is directly proportional to the speed of the medium, which is not necessarily constant.
In rotary-head video tape recorders, it was common to synchronise the rotation of the heads to the video signal. Whilst this provided sufficient timing accuracy for consumer products, professional machines still required time base correctors to produce signals that were sufficiently accurate for production purposes.
In contrast, the number of computer formats in which the medium is synchronized is very small. It is practically universal in data storage to let the medium run at some nominal speed and to make the electronics lock to the medium and not vice-versa. In some early computers back in the 1950s, the clock for the processor came from a magnetic track on the drum storage device. The drum, which was a huge and heavy device, could then be economically driven with an induction motor whose exact speed stability was unimportant.
One early approach to self-clocking was in the first seven-track computer tape format devised by IBM. Data were recorded in parallel across six of the tracks using a transition to represent a one (a system IBM called NRZI), whereas the seventh track was recorded with odd parity generated from the other tracks. A transition in any track would operate the read clock. In the worst case where the data were all zeros, the ones in the parity track would provide the clock.
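The parity-track scheme described above can be sketched in a few lines. This is an illustrative model, not IBM's actual implementation; the function name and frame representation are assumptions made for the example.

```python
def odd_parity_track(frames):
    """Given a list of 6-bit frames (one bit per data track), return the
    bit recorded on the seventh (parity) track for each frame.  Odd parity
    means the total count of ones across all seven tracks is odd, so every
    frame is guaranteed at least one transition to operate the read clock."""
    return [(sum(frame) + 1) % 2 for frame in frames]

# Worst case: an all-zero data frame still produces a one on the parity track.
frames = [[0, 0, 0, 0, 0, 0], [1, 0, 1, 0, 0, 0]]
parity = odd_parity_track(frames)   # [1, 1]
```

In both frames the seven recorded bits contain an odd number of ones, so at least one track always carries a transition.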
IBM's first magnetic tape data storage devices, introduced in 1952, used what is now generally known as 7 track tape. Image courtesy Lawrence Livermore National Laboratory.
The IBM tape format recorded only 200 bits per inch. As linear density increased, it became impossible to rely on timing being the same in different tracks. Disc drives would only read one track at a time. In both cases the data had to be made self-clocking and that was one of the forces that led to the development of channel coding. Channel coding will also be required for radio transmission or on cables.
Channel coding is a process whereby the data bits to be recorded/transmitted are converted to channel bits. There may be, and often is, a non-unity ratio between the number of data bits and the number of channel bits. Clearly in any format, there must be agreement between the recorder and the reproducer about this coding so that whatever is done when the recording is made can correctly be reversed on reproduction.
The obvious practical goals of channel coding are to control the spectrum of the recorded signal in order to guarantee clock content and, where necessary to produce a DC-free code. There is also an economic goal, which is to pack the highest practical data rate into the lowest possible channel bandwidth, since this strongly influences the linear density of the medium.
Figure 1. The FM code has a clock transition at the beginning of every bit, and an extra transition if the bit is a one.
Figure 1 shows the FM code, one of the first self-synchronising channel codes, which is known by various other names such as bi-phase mark. Once common in the audio-visual community, it was used for time code signalling and in the AES/EBU digital audio interface.
The basic element of FM is the transition, which is a change in the polarity of the signal on a wire, or in the direction of magnetization on a tape. The absolute polarity is unimportant: it is the changes that matter. An FM signal can be inverted or phase reversed and it makes no difference.
Each bit period, whether it is one or zero, begins with a transition. This guarantees clock content. If the bit is a zero, there will be no further transition until the start of the next bit. If the bit is a one, there will be a further transition in the centre of the bit period. It will be seen that a run of zeros produces a square wave at half the bit rate, whereas a run of ones produces a square wave at the bit rate. That is where the name frequency modulation, or FM, comes from. It will also be seen that every one bit is DC-free and every pair of zeros is DC-free, so there is a null at zero Hz.
It is also possible to describe FM code in terms of channel bits. As Figure 1 shows, each data bit results in two channel bits, where a channel bit one creates a transition and a channel bit zero produces no transition. In the case of a data 1, the channel bits will be 11; in the case of a data 0, the channel bits will be 10.
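The data-bit to channel-bit mapping above is simple enough to sketch directly. This is a minimal illustration; the function name is an assumption made for the example.

```python
def fm_encode(data_bits):
    """FM (bi-phase mark) encoder: each data bit becomes two channel bits.
    The first channel bit is always 1 (the clock transition at the start of
    the bit cell); the second is 1 only if the data bit is a one."""
    channel = []
    for bit in data_bits:
        channel += [1, bit]
    return channel

# A data one maps to channel bits 11, a data zero to 10.
fm_encode([1, 0])   # [1, 1, 1, 0]
```

Because the first channel bit of every pair is fixed at 1, it carries no information, which is the redundancy discussed below.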
The FM code is robust and can be recorded on a spare analog audio track and distributed on audio cabling. As it is DC-free, it will pass through transformers and coupling capacitors. However, it will be seen that the first channel bit is always the same, and so is redundant. This means that the channel bandwidth needed has been doubled. In the time code application this is unimportant as there is plenty of bandwidth in an audio channel for the small amount of information needed. In mass storage systems, its economic performance rules it out.
Figure 2. In the MFM code, there is no need for clock transitions if there is a data one recorded. In the case of zeros, the clock edges are placed between the bits. It will be seen that, unlike FM, channel ones are never adjacent, halving the maximum transition rate. MFM is not always DC-free, and the Miller2 variation overcomes that by omitting the transition for the last one when there is an even run of ones (shown by the asterisk).
Early floppy discs used FM, but this was soon replaced with the MFM code shown in Figure 2. The clock transition in every bit has been abandoned. The transition in the middle of the bit period, representing a one, is self-clocking. In order to make zeros self-clocking, transitions are placed on the bit boundary between them. The shortest time between transitions has been doubled, which means that for a given head and medium, twice as much data can be recorded in the same length of track. The phase locked loop has to work harder, as the clock content is reduced compared to FM.
It should be clear that channel coding is an enabling technology that has a major impact on the performance of storage devices. If the MFM code is considered in terms of the channel bits shown in Figure 2, it will be seen that channel bit ones are never adjacent. This means that transitions are a minimum of two channel bits apart, hence the halving of bandwidth compared to FM.
The performance of a code in this respect is measured by the density ratio (DR), which is the ratio of the minimum time between transitions to the data bit period. In FM the DR is 0.5; in MFM it is 1.0.
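These density ratios can be checked by measuring the minimum transition spacing in worst-case channel streams. The channel sequences below are hand-encoded from the rules given earlier, and the helper function is an illustrative assumption.

```python
def min_transition_spacing(channel_bits):
    """Smallest number of channel bit periods between successive
    transitions (channel ones) in a channel bit stream."""
    positions = [i for i, c in enumerate(channel_bits) if c]
    return min(b - a for a, b in zip(positions, positions[1:]))

# Worst-case channel streams for a run of data ones:
fm_ones  = [1, 1, 1, 1, 1, 1, 1, 1]   # FM: data 1 -> channel 11
mfm_ones = [0, 1, 0, 1, 0, 1, 0, 1]   # MFM: data 1 -> channel 01
# Both codes use two channel bits per data bit, so
# DR = minimum spacing / channel bits per data bit:
min_transition_spacing(fm_ones) / 2    # 0.5
min_transition_spacing(mfm_ones) / 2   # 1.0
```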
The MFM code has a small DC content, but there is a modification of it known as Miller2 code, also shown in Figure 2. If an even number of ones occurs between zeros, the transition at the last one is omitted. Figure 2 also shows that the channel bits can be integrated to produce a parameter called the digital sum value (DSV), which is a measure of the instantaneous DC content. The term running digital sum (RDS) will also be found. The idea is to keep the DSV/RDS at zero.
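The integration that produces the DSV can be sketched directly: each channel one toggles the signal level, and the running sum of the resulting +1/-1 levels is the DSV. The function name and the starting level are assumptions made for this example.

```python
def digital_sum_value(channel_bits, level=1):
    """Integrate the replay waveform implied by a channel bit stream.
    A channel one toggles the signal level (a transition); the running
    sum of the resulting +/-1 levels is the DSV.  A code is DC-free if
    the DSV repeatedly returns to zero."""
    running, trace = 0, []
    for c in channel_bits:
        if c:
            level = -level
        running += level
        trace.append(running)
    return trace

# FM channel bits for data 1, 0, 0 (11 10 10): the DSV ends at zero,
# illustrating that FM is DC-free over a one or a pair of zeros.
digital_sum_value([1, 1, 1, 0, 1, 0])   # [-1, 0, -1, -2, -1, 0]
```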
D-2 is a professional digital videocassette format created by Ampex and introduced at the 1988 NAB convention as a composite video alternative to the component video D-1 format. Like D-1, D-2 stored uncompressed digital video on a tape cassette; however, it stored a composite video signal, rather than component video as with D-1. Shown is a D-2 Sony DVR-28. Image courtesy CTV.
The Miller2 code was used in the D-2 digital VTR format introduced in 1988.
As the number of channel bits between transitions controls the bandwidth and the integral of the channel bits determines the DC content, it should be clear that all future developments in recording codes would be based on increasingly complex conversions from data bits to channel bits. This is effectively modulation, but done economically in the digital domain, using hardware or software according to the bit rate. As will be seen in the next piece, the density ratio can go above one in suitable codes.
Other John Watkinson articles you may find interesting are shown below. A complete list of his tutorials is available on The Broadcast Bridge website home page. Search for “John Watkinson”.
John Watkinson, Consultant, publisher, London (UK)