A Practical Guide To RF In Broadcast: Broadcast Modulation

Defining the different types of RF modulation, power levels, regulatory standards and licensing for CW, AM, FM, and TV RF transmission.

Modulation of an RF carrier wave is what differentiates it from a plain continuous wave switched on and off to transmit messages wirelessly in Morse code. Several types of modulation make broadcasting to the public possible.

Morse code was the first way to add intelligence to a continuous wave (CW) radio signal, making dots and dashes the original manual modulation technique for wireless communications. Radio waves are alternating-current (AC) electromagnetic waves at frequencies between approximately 30 Hz and 300 GHz.

RF transmitters are relatively simple. At the most basic level, an RF transmitter consists of a power supply, an oscillator to create the original carrier wave at a specific frequency, and tuned power amplifiers connected by a feedline to an antenna. Modulating a broadcast transmitter with an exciter is where RF gets a bit trickier.

An exciter (also known as a modulator) creates the original modulated carrier wave at a specific frequency that a transmitter amplifies and sends to a broadcast antenna. The output of a TV exciter is typically 100 mW (0.1 watts). AM and FM transmitters also use exciters to ‘drive’ the transmitter’s power amplifiers (PAs). Amplitude Modulation (AM) varies the amplitude of the carrier signal with the modulation signal. Frequency Modulation (FM) varies the frequency of the carrier signal with the modulation signal.
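As a rough illustration of the difference between the two, the short Python sketch below (the carrier frequency, tone frequency, and deviation are arbitrary values chosen only for demonstration) generates an AM and an FM version of the same carrier from a single modulating tone.

```python
import numpy as np

fs = 1_000_000          # sample rate in Hz (illustrative)
t = np.arange(0, 0.01, 1 / fs)

fc = 100_000            # carrier frequency, Hz
fm = 1_000              # modulating (audio) tone, Hz
m = np.cos(2 * np.pi * fm * t)   # modulating signal, normalized to +/-1

# Amplitude Modulation: the carrier envelope follows the modulating signal.
mod_index_am = 0.5
am = (1 + mod_index_am * m) * np.cos(2 * np.pi * fc * t)

# Frequency Modulation: the instantaneous carrier frequency is shifted in
# proportion to the modulating signal (here +/-5 kHz peak deviation).
deviation = 5_000
phase = 2 * np.pi * fc * t + 2 * np.pi * deviation * np.cumsum(m) / fs
fm_wave = np.cos(phase)

print(am[:5], fm_wave[:5])
```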

Analog TV transmission combines an AM signal carrying the video with an FM aural carrier located 4.5 MHz above the visual carrier, all within a 6 MHz channel. Vestigial sideband (VSB) transmission sends one full sideband of the AM signal and only a vestige of the other, which fits the video signal into the available bandwidth. Analog NTSC uses this analog VSB scheme; 8VSB is the digital modulation method used for ATSC 1.0 transmission. 8VSB modulation converts a binary stream into an octal representation by amplitude-shift keying the carrier to one of eight levels. The net bit rate of a 6 MHz channel modulated by 8VSB is 19.39 Mbps.
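That 19.39 Mbps figure can be reproduced from the published 8VSB framing parameters. The sketch below is a back-of-the-envelope calculation rather than a model of the modulator: it starts from the ATSC symbol rate, removes the field sync segments, and credits each remaining data segment with one 188-byte transport packet.

```python
# Approximate net ATSC 1.0 (8VSB) payload rate from published parameters.
symbol_rate = 10_762_237        # 8VSB symbols per second
symbols_per_segment = 832       # 4 segment sync symbols + 828 data symbols
segments_per_field = 313        # 1 field sync segment + 312 data segments

segments_per_sec = symbol_rate / symbols_per_segment
data_segments_per_sec = segments_per_sec * 312 / segments_per_field

# Each data segment carries one 188-byte MPEG-TS packet
# (the packet sync byte is reinserted at the receiver).
bits_per_ts_packet = 188 * 8

net_rate = data_segments_per_sec * bits_per_ts_packet
print(f"{net_rate / 1e6:.2f} Mbps")   # ~19.39 Mbps
```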

Coded orthogonal frequency-division multiplexing (COFDM) is the modulation method used to transmit ATSC 3.0 and DVB-T signals. COFDM uses forward error correction and time/frequency interleaving to overcome transmission errors. The basis of COFDM is frequency-division multiplexing (FDM), with all subcarrier signals in a channel mathematically orthogonal to one another.
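Here "orthogonal" is a mathematical rather than geometric property: over one symbol period, two subcarriers spaced at a multiple of 1/T average to zero against each other, so they can overlap in frequency without mutual interference. A minimal numerical check (the symbol period and sample rate are illustrative values only):

```python
import numpy as np

T = 1e-3                       # OFDM symbol period (illustrative)
fs = 1_000_000
t = np.arange(0, T, 1 / fs)

# Two subcarriers spaced by exactly 1/T are orthogonal over the symbol period.
k1, k2 = 5, 6
s1 = np.exp(2j * np.pi * k1 / T * t)
s2 = np.exp(2j * np.pi * k2 / T * t)

inner_product = np.vdot(s1, s2) / len(t)
print(abs(inner_product))      # ~0: the subcarriers do not interfere
```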

Cable systems and most MVPDs use quadrature amplitude modulation (QAM) for signal distribution to customers. Thus, a cable-ready TV set must be able to decode both QAM and 8VSB. ATSC 3.0 (NextGen TV) sets decode COFDM signals.
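QAM carries data in both the amplitude and phase of the carrier by mapping groups of bits onto points of an I/Q constellation. A minimal sketch of a square 64-QAM mapper is shown below; the bit-to-level mapping is a generic one chosen for illustration, not the Gray-coded mapping specified for cable (ITU-T J.83) systems.

```python
import numpy as np

def qam64_map(bits):
    """Map a bit array (length a multiple of 6) to 64-QAM symbols.

    Illustrative mapping only: 3 bits select the I level and 3 bits the Q
    level from the 8 levels {-7, -5, ..., +7}.
    """
    bits = np.asarray(bits).reshape(-1, 6)
    levels = np.array([-7, -5, -3, -1, 1, 3, 5, 7])
    i_idx = bits[:, 0] * 4 + bits[:, 1] * 2 + bits[:, 2]
    q_idx = bits[:, 3] * 4 + bits[:, 4] * 2 + bits[:, 5]
    return levels[i_idx] + 1j * levels[q_idx]

rng = np.random.default_rng(0)
symbols = qam64_map(rng.integers(0, 2, 60))
print(symbols[:5])   # 10 complex constellation points carry 60 bits
```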

DVB, DVB-T and DVB-T2

The key obstacle for DTV transmission is the bandwidth of legacy analog TV channels. Depending on the country, TV channels can have a legal bandwidth of 6, 7, or 8 MHz. In the USA, 6 MHz channels are the standard. Thus, the challenge of broadcasting DTV is to transmit as much data as possible in a 6 MHz wide signal. Fortunately, digital audio and video can be compressed.

Digital Video Broadcasting (DVB) uses coded orthogonal frequency-division multiplexing (COFDM) modulation, which supports hierarchical transmission, also known as layered modulation. OFDM is a digital transmission method that encodes data on multiple carrier frequencies. In OFDM, multiple closely spaced, overlapping, orthogonal subcarrier signals are transmitted to carry data in parallel.
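In practice an OFDM symbol is built by loading data symbols (for example QPSK or QAM points) onto the subcarriers in the frequency domain and running an inverse FFT, so all subcarriers are transmitted in parallel within one time-domain symbol. A minimal sketch follows; the FFT size, carrier count, and guard interval are generic values loosely resembling a DVB-T 2K mode, not an exact implementation of any standard.

```python
import numpy as np

rng = np.random.default_rng(1)

n_fft = 2048                    # total subcarrier positions (illustrative)
n_active = 1705                 # active carriers (similar to a DVB-T 2K mode)
guard_fraction = 1 / 4          # cyclic prefix as a fraction of the symbol

# QPSK data on the active subcarriers, zeros elsewhere (guard bands).
qpsk = ((2 * rng.integers(0, 2, n_active) - 1)
        + 1j * (2 * rng.integers(0, 2, n_active) - 1)) / np.sqrt(2)
freq_domain = np.zeros(n_fft, dtype=complex)
freq_domain[:n_active] = qpsk

# The IFFT turns the parallel subcarrier data into one time-domain OFDM symbol.
time_domain = np.fft.ifft(freq_domain)

# The cyclic prefix (guard interval) is a copy of the symbol's tail, which
# absorbs multipath echoes shorter than the guard interval.
cp = time_domain[-int(n_fft * guard_fraction):]
ofdm_symbol = np.concatenate([cp, time_domain])
print(len(ofdm_symbol))         # 2560 samples for this illustrative mode
```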

Terrestrial TV uses DVB-T transmission. DVB-T uses OFDM transmission and supports QPSK, 16QAM and 64QAM modulation schemes. DVB-T multiplexes compressed video, audio, and data streams into MPEG program streams (MPEG-PSs).

One or more MPEG-PSs joined together create an MPEG transport stream (MPEG-TS). An MPEG-TS is a sequence of 188-byte packets. A first level of error correction is applied to the transmitted data that allows correction of up to 8 wrong bytes in each 188-byte packet. A single DVB-T signal can be transmitted on a 6, 7, or 8 MHz TV channel.
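That first level of error correction is the Reed-Solomon outer code, RS(204,188): 16 parity bytes are appended to each 188-byte packet, allowing up to 8 byte errors per packet to be corrected. A quick sketch of the overhead this adds:

```python
# DVB-T outer coding overhead: RS(204, 188).
packet_bytes = 188          # MPEG-TS packet
coded_bytes = 204           # after appending 16 Reed-Solomon parity bytes
correctable = (coded_bytes - packet_bytes) // 2

print(f"parity bytes per packet: {coded_bytes - packet_bytes}")
print(f"correctable byte errors per packet: {correctable}")        # 8
print(f"outer-code efficiency: {packet_bytes / coded_bytes:.3f}")  # ~0.922
```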

Hierarchical transmission is a signal processing technique for multiplexing multiple data streams into a single symbol stream. It is used to mitigate the digital cliff effect. Hierarchical transmission can simultaneously carry two different MPEG-TSs, typically used to transmit the same content in SDTV and HDTV on the same carrier. This allows a weak signal, backed up by a lower-quality fallback stream, to degrade gracefully instead of disappearing instantly. The DVB standard has been adopted by approximately 60 countries in Europe, Africa, Asia, and Australia.
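One way to picture hierarchical modulation is a 64-QAM constellation in which the two most significant bits of each symbol carry the robust high-priority stream (selecting the quadrant) and the remaining bits carry the low-priority stream (selecting the fine position within that quadrant). A strong receiver decodes both; a weak receiver may still resolve the quadrant and therefore the fallback service. The sketch below illustrates that idea with a uniform constellation; the actual DVB-T hierarchical modes use non-uniform constellations.

```python
import numpy as np

def hierarchical_64qam(hp_bits, lp_bits):
    """Combine a high-priority and a low-priority stream into one symbol stream.

    Illustrative mapping: 2 HP bits pick the quadrant, 4 LP bits pick the fine
    position inside it. Not the exact DVB-T constellation.
    """
    hp = np.asarray(hp_bits).reshape(-1, 2)
    lp = np.asarray(lp_bits).reshape(-1, 4)
    # Coarse quadrant from the HP stream (easy to resolve even in noise).
    quad = (2 * hp[:, 0] - 1) * 4 + 1j * ((2 * hp[:, 1] - 1) * 4)
    # Fine position from the LP stream (needs a cleaner signal to resolve).
    fine_levels = np.array([-3, -1, 1, 3])
    fine = (fine_levels[lp[:, 0] * 2 + lp[:, 1]]
            + 1j * fine_levels[lp[:, 2] * 2 + lp[:, 3]])
    return quad + fine

rng = np.random.default_rng(2)
symbols = hierarchical_64qam(rng.integers(0, 2, 20), rng.integers(0, 2, 40))
print(symbols)   # 10 symbols carrying both streams at once
```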

DVB-T2 was finalized in 2011 and stands for “Digital Video Broadcasting – Second Generation Terrestrial,” an extension of DVB-T. It transmits digital audio, video, and other data in “physical layer pipes” (PLPs), using OFDM modulation and provides a higher bit rate than DVB-T.

ATSC

The Advanced Television Systems Committee (ATSC) 1.0 standard uses eight-level vestigial sideband (8VSB) modulation for terrestrial broadcasting. This standard has been adopted by 9 countries including the United States, Canada, Mexico, and South Korea. Current ATSC 1.0 supports the H.264/MPEG-4 AVC video codec, capable of 10 bits per sample, or 1,024 levels per color channel. Early ATSC 1.0 signals used MPEG-2 at 8 bits per sample, or 256 levels per color channel. ATSC 1.0 supports a single fixed bit rate of 19.4 Mbps.

ATSC 3.0 supports RF transmission at UHD resolution of 3840x2160 at 60 Hz, although broadcasting a UHD channel OTA uses most of the 6 MHz TV channel. Instead, ATSC 3.0 uses a hybrid of OTA RF delivery and the internet to deliver UHD TV pictures and other content to home viewers. It transmits an HDTV signal over the air and sends additional UHD detail data over the internet. The OTA signal and the detail data are combined at the receiver to recreate and display a UHD picture.

ATSC 3.0 is a complex technology. A complete explanation of all it can do and how it all works is beyond the scope of this story.

Broadcasting IP

ATSC 3.0 is essentially IP over the air. It uses a physical layer based on orthogonal frequency-division multiplexing (OFDM) modulation with low-density parity-check (LDPC) forward error correction codes. In a 6 MHz TV channel the bit rate can range from 28 to 36 Mbps or higher, depending on the parameters used. As in DVB-T2, a channel can carry multiple physical layer pipes (PLPs) with different levels of robustness, although a single service is limited to four simultaneous PLPs. PLPs are logical channels carrying one or more services, with a modulation scheme and robustness level particular to that individual pipe. DVB-T and ATSC 1.0 do not have PLPs.
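The wide range of achievable bit rates follows from those physical layer parameters: higher-order constellations and higher LDPC code rates raise capacity at the cost of robustness. The sketch below makes a rough capacity estimate from generic OFDM parameters; the numbers are illustrative approximations and are not taken from the ATSC A/322 mode tables.

```python
# Rough OFDM payload estimate: capacity scales with constellation order,
# code rate, and the fraction of the symbol not spent on the guard interval.
# Illustrative parameters only; real ATSC 3.0 rates come from the A/322 tables.
def approx_payload_mbps(occupied_bw_hz, bits_per_cell, code_rate,
                        guard_fraction, overhead=0.05):
    useful = 1 - guard_fraction           # share of time carrying data cells
    rate = occupied_bw_hz * bits_per_cell * code_rate * useful * (1 - overhead)
    return rate / 1e6

# A robust profile vs. a high-capacity profile in a ~5.8 MHz occupied channel.
print(approx_payload_mbps(5.8e6, bits_per_cell=4, code_rate=7/15,
                          guard_fraction=1/8))    # ~9 Mbps, very rugged
print(approx_payload_mbps(5.8e6, bits_per_cell=8, code_rate=11/15,
                          guard_fraction=1/16))   # ~30 Mbps, high capacity
```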

ATSC 3.0 uses 10 bits/pixel and H.265 HEVC for transmission. Layers in the ATSC 3.0 protocol stack include system discovery and signaling, the physical layer using OFDM, internet protocols, and HTML5 applications.

Each ATSC 3.0 physical layer frame begins with a bootstrap signal, which allows a receiver to discover and identify the signals being transmitted. The bootstrap signal can also carry information to wake up a receiver so it can detect, receive, and display an emergency alert message when the TV set is turned off. Each frame also contains a 'preamble' carrying the signaling needed to decode the frame, followed by the 'payload' data.

HEVC compression supports video channels up to UHD resolution at 120 frames per second, wide color gamut, high dynamic range, Dolby AC-4 and MPEG-H 3D Audio, datacasting capabilities, and more robust mobile television support. ATSC 1.0 uses Dolby AC-3 for 5.1 channel surround sound. ATSC 3.0 uses Dolby AC-4 for up to 7.1.4 channel sound and it supports object-based audio formats like Dolby Atmos.  It also supports MPEG-H 3D Audio, which can provide up to 64 loudspeaker channels for immersive audio.

Because ATSC 3.0 is IP, it is well suited for private datacasting and services such as the ‘broadcast internet’ for inexpensive one-to-many data distribution. 

ATSC 3.0 doesn’t need an internet connection to be viewed OTA on a NextGen TV with an antenna, but it does require an internet connection to access many of its new features such as interactivity, VOD, hybrid UHD, targeted advertising, targeted public alerting and more.
