Broadcast For IT - Part 3 - Video Lines

In this series of articles, we explain broadcasting for IT engineers. Television is an illusion: there are no moving pictures, and today's broadcast formats are heavily dependent on decisions engineers made in the 1930s and 1940s. In this article we look at video lines, why we still need them, and how they relate to video frames.

A television image is made up of video frames, and each frame consists of video lines. The more video lines, the better the resolution of the television image. Until recently, video formats were often specified by their number of lines, with early television experiments consisting of only forty or fifty lines.

As technology improved, the number of lines increased: from 405 lines in the 1930s, to 525 and 625 lines in the 1960s, and then to 1080 lines in the 1990s. High-Definition used 1080 lines but, in a break from tradition, Ultra-High-Definition (UHD) started to express the image size in horizontal pixels – 4K and 8K.

Coils and Electron Beams

Prior to Charge Coupled Device (CCD) and Complementary Metal-Oxide-Semiconductor (CMOS) image sensors, broadcast television cameras used vacuum tubes to turn light into electrical current. These image-gathering tubes were cathode ray tubes (CRTs) in reverse. Light was focused on the faceplate, and a beam of electrons was directed within the tube onto the faceplate. The resulting electron current was proportional to the brightness of the image at that point on the faceplate.

Cameras used electromagnetic coils to deflect the electron beam very quickly in the horizontal direction and more slowly in the vertical direction. The result was to scan the electron beam across the inside of the tube and provide a signal proportional to the brightness of the image.
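For IT engineers, this scanning process maps naturally onto nested loops. The sketch below – a minimal illustration with made-up brightness values, not a broadcast specification – shows how a raster scan serializes a two-dimensional image into the one-dimensional signal a tube camera produced:

```python
# Sketch of a tube camera's raster scan: a fast horizontal sweep
# nested inside a slow vertical one turns a 2-D image into a
# 1-D signal. Image contents and sizes here are illustrative only.

def raster_scan(image):
    """Flatten a 2-D brightness array (a list of lines) into the
    sequence of samples the electron beam would read out."""
    signal = []
    for line in image:           # slow vertical deflection: next line
        for brightness in line:  # fast horizontal deflection: along a line
            signal.append(brightness)
    return signal

frame = [[0, 128, 255],
         [255, 128, 0]]
print(raster_scan(frame))  # → [0, 128, 255, 255, 128, 0]
```

The same idea in reverse – backwards compatibility with this line-by-line readout – is why CCD and CMOS sensors still deliver their pixels as an ordered sequence of lines.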

Backwards Compatibility

Modern cameras use grid array CCD and CMOS image gathering devices to turn light into pixels, but these had to maintain backwards compatibility with existing broadcast systems, so they used a pixel matrix that simulated the old tube camera’s scanning system.

Diagram 1.  Resolution of the eye is defined by the angle of the separation of the lines. The separation of the lines at d2 is s2, which is the same resolution as the separation of the lines (s1) at distance d1 from the eye.

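Diagram 1 can be checked with a little trigonometry: two line pairs are resolved equally when the ratio of separation to distance – and hence the angle they subtend at the eye – is the same. A minimal sketch in Python, using illustrative distances:

```python
import math

def subtended_angle_arcmin(separation_m, distance_m):
    """Angle (in minutes of arc) subtended at the eye by two lines
    separation_m apart, viewed from distance_m away."""
    return math.degrees(math.atan2(separation_m, distance_m)) * 60

s1, d1 = 0.00175, 6.1    # s1: 1.75 mm separation at distance d1
s2, d2 = 0.00350, 12.2   # s2: double the separation at double the distance

print(round(subtended_angle_arcmin(s1, d1), 3))  # ~1 minute of arc
print(round(subtended_angle_arcmin(s2, d2), 3))  # the same angle
```

Doubling both the separation and the distance leaves the subtended angle – and so the perceived resolution – unchanged.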

Black and white television cameras used a single tube and color used three tubes, one each for converting red, green and blue light. The specific reasons for this will be explained in a later article on colorimetry and how the human visual system perceives color.

Voltage Spikes and System Failure

The speed at which the electron beam could be scanned across the face of the tube depended on how fast the current in the electromagnetic coils around it could change. A fundamental law of physics states that the voltage created across a coil is proportional to the rate of change of current through it (v = L·di/dt). If the current in the coils changed too quickly, massive voltage spikes would be created on the coil driver circuits; in the extreme, this could stress the system and cause it to fail.
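As a rough illustration of the problem – the coil inductance and scan timings below are made-up round numbers, not values from any real camera – the induced voltage v = L·di/dt can be computed directly:

```python
# Illustrative calculation of why fast scan-coil current changes
# create voltage spikes: v = L * di/dt.
# All component values below are assumed round numbers.

def coil_voltage(inductance_h, delta_current_a, delta_time_s):
    """Average voltage induced across a coil: v = L * di/dt."""
    return inductance_h * delta_current_a / delta_time_s

L = 0.01    # 10 mH scan coil (assumed)
dI = 0.5    # 0.5 A current swing (assumed)

# Slow vertical scan: 0.5 A over 20 ms -> a modest voltage
print(coil_voltage(L, dI, 20e-3))   # 0.25 V

# Fast horizontal retrace: same swing in 10 us -> a large spike
print(coil_voltage(L, dI, 10e-6))   # 500 V
```

The same current swing over a two-thousand-times-shorter interval produces a voltage spike three orders of magnitude larger – which is why scan speed, and hence line count, was limited by the driver circuitry.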

The number of lines, and hence number of frames that could be scanned was heavily dependent on the operational speed of the scan coils within a camera.

Human Visual System

Another factor defining the number of lines that were scanned and displayed was the resolution of the human visual system.

There is only so much detail the average human eye can see, and this limit is referred to as visual acuity. The human eye, and its interaction with the brain, is an incredibly complex subject on which many books have been written, but we do need some understanding of it here.

Diagram 2.  More lines increase the size of the television screen for the same viewing distance. To see the increased resolution of a UHD screen, viewers usually have to move closer to it, which in turn means the brightness has to be turned down.

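Diagram 2 can also be put into numbers. Assuming the often-quoted one-arcminute acuity limit and an illustrative 55-inch 16:9 screen (about 1.21 m wide – both figures are assumptions for the sake of the example), we can estimate the farthest viewing distance at which adjacent pixels can still be resolved:

```python
import math

ARCMIN = math.radians(1 / 60)   # ~1 minute of arc, in radians

def max_distance_m(screen_width_m, horizontal_pixels):
    """Farthest distance at which the pixel pitch still subtends
    the assumed one-arcminute acuity limit."""
    pixel_pitch = screen_width_m / horizontal_pixels
    return pixel_pitch / math.tan(ARCMIN)

width = 1.21  # ~55-inch 16:9 screen width in meters (assumed)

print(round(max_distance_m(width, 1920), 2))  # HD:  ~2.17 m
print(round(max_distance_m(width, 3840), 2))  # UHD: ~1.08 m
```

Halving the pixel pitch halves the distance: to gain anything from UHD on the same size screen, the viewer has to sit roughly twice as close.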

There are two fundamental types of receptor in the human eye, providing two discrete functions: cones and rods. Each eye has approximately six million cones and one hundred and twenty million rods.

Cones and Rods

Cones detect color, have high acuity so can resolve fine detail, but have limited light sensitivity and only work in daylight – which is why we do not see color when it's dark.

Rods detect monochrome (black and white), have a low acuity so are poor at detecting fine detail, but are very sensitive in low light and work very well in the dark.

Cones are clustered in the center of the retina, so are used when looking straight ahead, such as when watching television or reading a book. Rods are dispersed throughout the rest of the retina and are particularly good at detecting movement in our peripheral vision.

Fight or Flight 

The human brain continuously receives information from the rods and cones and creates the image we see in our minds. Furthermore, the brain will subconsciously execute our “fight or flight” response when motion is detected in the peripheral vision, just in case we are under attack from something lurking in the shadows.

Tests have shown that, when standing 20 feet (6.1 meters) from a chart, two lines just over 1/16 inch (1.75mm) apart can just about be resolved – an angle of roughly one minute of arc. However, the ability of the human visual system to see detail tapers off as the image moves away from the center of the eye, away from the cones.

Defining Eye Resolution is Difficult 

In other words, the cones detect color and detail in daylight but contribute little at night, while the rods are extremely sensitive at night but provide little color information.

Determining the resolution of the human eye is very difficult as it varies depending on whether you are looking straight on, and whether it is light or dark.

Limited Sample Testing

Although there was a good deal of science involved, the human visual tests performed from the 1930s to the 1960s were subjective and relied on images being presented to limited sample groups of viewers. Unfortunately, the samples were small and many assumptions were made. Even so, a workable range of broadcast specifications emerged: from the 1960s the USA used 525 lines at 60 fields per second (changing to 59.94 fields per second when color was broadcast), while Europe and the UK used 625 lines at 50 fields per second.
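These figures combine in a simple way: lines per frame multiplied by frames per second gives the line frequency the scanning circuits had to sustain (each interlaced frame consists of two fields). A quick check in Python:

```python
# Back-of-envelope check on the 1960s standards mentioned above:
# lines scanned per second (line frequency) for each system.

def line_frequency_hz(lines_per_frame, fields_per_s, fields_per_frame=2):
    """Lines scanned per second for an interlaced system."""
    frames_per_s = fields_per_s / fields_per_frame
    return lines_per_frame * frames_per_s

print(round(line_frequency_hz(525, 59.94)))  # US color: ~15734 lines/s
print(round(line_frequency_hz(625, 50)))     # Europe/UK: 15625 lines/s
```

Both systems land within one percent of each other – around 15.6 to 15.7 kHz – reflecting the similar coil-drive and bandwidth limits of the technology of the day.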

Today’s broadcast television formats are still based on the subjective tests made during the early days of television, where compromises had to be made between the ideal requirements of the human visual system and the technology available at the time.

In the next article we look at how lines and frame rates are combined to provide a television signal and image – a system that provides the basis of today's digital transmission.
