Vendor Content.
Software-Based Encoding Offers Flexible Advantages For Next-Generation OTA Transmission
With MPEG-2 still the dominant encoding format used by OTA broadcasters in North America, and with many looking to make the move to ATSC 3.0 (NextGen TV), software-based encoding systems are increasingly being deployed. This slow but steady evolution has led to smaller solutions that offer more features and flexibility and make more efficient use of the available bandwidth.
Indeed, getting the most out of each FCC-allotted 6 MHz channel (19.39 Mbps in ATSC 1.0) makes operations more efficient and supports a myriad of business models.
MPEG-2 Widely Used For Signal Delivery
The ATSC 1.0 broadcast transmission standard is almost 30 years old and continues to rely on MPEG-2 as its compression format. When it was first implemented in 1996, MPEG-2 was the best choice for converting from analog to digital broadcasting as well as compressing those signals so that they could be sent over the air.
The original MPEG encoders were hardware-based, using ASIC or FPGA circuit boards. They were large devices (some of the early models were 4-5 rack units tall) and were only able to process one video stream per encoder. As broadcasters became more accustomed to using their digital channel, some launched subchannels, or “DigiNets,” in standard definition as well as HD in order to generate more revenue.
However, hardware and processing costs were significant: a separate encoder was required for each individual program. Each encoder would output a Single Program Transport Stream (SPTS), and an external multiplexer was required to combine all of the SPTSs into a Multi-Program Transport Stream (MPTS). Additionally, stations needed yet another box, a PSIP generator, to create the dynamic signaling that also provided the data for the on-screen program guide.
So, OTA stations had to use all of these separate hardware boxes just to produce a single 19.39 Mbps signal.
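To picture what that external multiplexer was doing, here is a tiny, purely illustrative Python sketch of the concept: interleave several single-program streams into one multi-program stream while keeping their packet identifiers (PIDs) from colliding. The class and PID values are hypothetical and heavily simplified; a real MPEG-2 multiplexer also manages PAT/PMT tables, PCR timing and null packets.

```python
from dataclasses import dataclass
from itertools import zip_longest

@dataclass
class TsPacket:
    pid: int        # packet identifier (13 bits in a real MPEG-2 transport stream)
    payload: bytes  # 184-byte payload in a real 188-byte TS packet

def remap_pids(spts, base_pid):
    """Shift one program's PIDs into its own range so streams don't collide in the mux."""
    return [TsPacket(pid=base_pid + p.pid, payload=p.payload) for p in spts]

def mux_to_mpts(single_program_streams):
    """Round-robin interleave several SPTSs into one MPTS (toy model only)."""
    remapped = [remap_pids(s, base_pid=0x100 * (i + 1))
                for i, s in enumerate(single_program_streams)]
    mpts = []
    for packets in zip_longest(*remapped):
        mpts.extend(p for p in packets if p is not None)
    return mpts

# Example: three "encoders", each producing its own SPTS of three packets
spts_list = [[TsPacket(pid=1, payload=f"prog{i}-pkt{j}".encode()) for j in range(3)]
             for i in range(3)]
print(len(mux_to_mpts(spts_list)))  # 9 packets interleaved into a single stream
```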
As encoding systems progressed into the early 2000s, they were still limited to the hardware realm, but they naturally followed Moore's Law ("the number of transistors in an integrated circuit doubles about every two years"). Each individual program still required a dedicated encoder, but these encoders had become significantly less expensive and were typically 1RU in height. An external multiplexer and PSIP generator were still required.
Today there is ongoing work within the ATSC to allow the use of H.264 encoding for the digital subchannels to get better compression and free up some bandwidth for ancillary services. But MPEG-2 is still the primary standard.
The NextGen TV Migration
The FCC will not be allocating additional frequencies or channel bandwidth to OTA broadcasters. But there is a new set of standards defined as ATSC 3.0 (also called NextGen TV) that provides a new and more efficient encoding standard called High Efficiency Video Coding (HEVC, H.265) and a more flexible and scalable modulation format called Orthogonal Frequency-Division Multiplexing (OFDM).
ATSC 3.0, with its adoption of OFDM, provides various combinations of modulation and code rate to create what are called Physical Layer Pipes (PLPs). Broadcasters are no longer stuck with a fixed throughput and a fixed SNR requirement. Within the 6 MHz channel, broadcasters can now make choices based on the program resolution, the geographical terrain, and the devices they are targeting, whether fixed or mobile. The combination of modulation and code rate is known as a "ModCod." A UHD program could use a ModCod with high throughput (bit rate) and a high required SNR, targeting a fixed rooftop antenna, while within the same channel a mobile handheld or automotive service would use a ModCod with much lower throughput that can be received at a very low SNR, providing robust reception for mobility.
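As a rough illustration of that trade-off, the sketch below estimates how many useful bits per second a PLP might carry from just three assumed inputs: occupied bandwidth, bits per modulation symbol, and code rate. The formula and the 25% overhead figure are deliberate simplifications (they ignore FFT size, guard intervals, pilots and framing, all of which matter in a real ATSC 3.0 configuration), so treat it as an intuition aid, not a planning tool.

```python
def rough_plp_throughput(bandwidth_hz, bits_per_symbol, code_rate, overhead=0.25):
    """Very rough PLP capacity estimate: symbols/s * bits/symbol * code rate,
    minus an assumed lump-sum overhead for guard intervals, pilots and framing."""
    symbol_rate = bandwidth_hz            # ~1 symbol/s per Hz of occupied bandwidth
    raw = symbol_rate * bits_per_symbol * code_rate
    return raw * (1 - overhead)

bw = 6e6  # one 6 MHz channel

# A robust ModCod aimed at mobile/handheld reception (works at very low SNR)
mobile = rough_plp_throughput(bw, bits_per_symbol=2, code_rate=4/15)   # QPSK-like
# A high-throughput ModCod aimed at fixed rooftop antennas (needs high SNR)
fixed  = rough_plp_throughput(bw, bits_per_symbol=8, code_rate=11/15)  # 256QAM-like

print(f"mobile PLP ~ {mobile/1e6:.1f} Mbps, fixed PLP ~ {fixed/1e6:.1f} Mbps")
```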
HEVC (H.265) encoding is at least twice as efficient as MPEG-2: a typical SD program that required 2 Mbps in ATSC 1.0 will have comparable video quality at less than 1 Mbps, and an HD program that required 8-10 Mbps in ATSC 1.0 will be comparable at 2-3 Mbps.
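Using the figures above, a quick back-of-the-envelope comparison for a hypothetical lineup of one HD service and four SD subchannels shows how much of the channel each codec would consume:

```python
# Per-program bit rates (Mbps) from the figures above; the lineup is a hypothetical example
MPEG2 = {"HD": 9.0, "SD": 2.0}    # ATSC 1.0: HD 8-10 Mbps, SD ~2 Mbps
HEVC  = {"HD": 2.5, "SD": 0.9}    # HEVC: HD 2-3 Mbps, SD under 1 Mbps

lineup = ["HD", "SD", "SD", "SD", "SD"]   # one HD service plus four SD subchannels

mpeg2_total = sum(MPEG2[svc] for svc in lineup)
hevc_total  = sum(HEVC[svc] for svc in lineup)

print(f"MPEG-2 lineup: {mpeg2_total:.1f} Mbps of the 19.39 Mbps ATSC 1.0 channel")
print(f"HEVC lineup:   {hevc_total:.1f} Mbps, leaving room for more services or robustness")
```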
For a basic installation, the on-premises ATSC 3.0 equipment consists of only two COTS servers: one for the HEVC encoding and the other for the ROUTE signaling and broadcast gateway. Another job of the ATSC 3.0 software encoder is to package the HEVC-encoded video using DASH (Dynamic Adaptive Streaming over HTTP), in which each piece of media is broken into a sequence of small segments. These segments are delivered to ROUTE (Real-Time Object delivery over Unidirectional Transport), an application-layer protocol that encapsulates the video segments with signaling and sends them to the broadcast gateway. The broadcast gateway creates an STLTP (Studio to Transmitter Link Tunneling Protocol) stream, which is the input to the ATSC 3.0 transmitter.
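The sketch below models that chain at a very high level: HEVC output is cut into short DASH-style segments, each segment is wrapped with signaling for ROUTE delivery, and the result is handed to a gateway stage that would emit STLTP toward the transmitter. All class and function names are hypothetical; this is a conceptual walk-through of the data flow described above, not an implementation of the actual ATSC 3.0 protocol stack.

```python
from dataclasses import dataclass

@dataclass
class DashSegment:
    service_id: int
    seq: int
    duration_s: float
    data: bytes          # HEVC-encoded media for this segment

def package_dash(hevc_frames, service_id, segment_duration_s=2.0, fps=60):
    """Group encoded frames into fixed-duration DASH-style segments."""
    per_segment = int(segment_duration_s * fps)
    chunks = [hevc_frames[i:i + per_segment] for i in range(0, len(hevc_frames), per_segment)]
    return [DashSegment(service_id, seq, segment_duration_s, b"".join(chunk))
            for seq, chunk in enumerate(chunks)]

def route_encapsulate(segment):
    """Wrap a segment with (toy) signaling so a receiver knows what it is carrying."""
    header = f"ROUTE service={segment.service_id} seq={segment.seq}\n".encode()
    return header + segment.data

def broadcast_gateway(route_objects):
    """Stand-in for the gateway that would tunnel everything as STLTP to the transmitter."""
    return [b"STLTP|" + obj for obj in route_objects]

frames = [b"\x00" * 100 for _ in range(600)]          # 10 s of fake encoded frames at 60 fps
segments = package_dash(frames, service_id=1)
stltp_stream = broadcast_gateway(route_encapsulate(s) for s in segments)
print(len(segments), "segments ->", len(stltp_stream), "STLTP-bound objects")
```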
The Move To Software Compression
Into the 2010s, the industry was still marketing hardware encoding systems. It's only been within the last five to eight years that Commercial Off-The-Shelf (COTS) servers have offered enough horsepower to run these new compression algorithms in software for real-time encoding. Thanks to the capabilities of this new generation of COTS servers, broadcasters can do much more within the same OTA channel, and it's all processed in software.
The advantages of moving to software-defined systems are lower cost (COTS hardware), the ability to use IP-based (ST 2110) infrastructures, and much easier implementation. And because the ATSC 3.0 spec is built around IP, users can also control the encoding from anywhere in the world via a web-based GUI.
Another benefit of software encoding, where the server is installed between the facility and the transmitter, is flexibility. A station with an older ATSC 1.0 hardware system that upgrades to a software encoder also gains the ability to host both 1.0 and 3.0 signals on the same server, allowing it to make the move to NextGen TV when it is ready. So, having a server that can process MPEG-2 now and H.264 or HEVC in the future is a long-term cost saving.
Added Benefits Of Software Encoding
The true beauty of software running on COTS hardware is that it is natively IP, which enables additional features and capabilities to be easily integrated into 3.0 transmission workflows.
For example, getting better efficiency out of your bits is one thing, but improved picture quality is another benefit that has come with HEVC and software encoding. These systems support features that were not available in ATSC 1.0 with hardware encoders, including High Dynamic Range (HDR), Wide Color Gamut (WCG) and High Frame Rate (HFR). Adding these features to an HD program brings increased video quality at a much lower bit rate than UHD.
There is, however, a way to deliver UHD without consuming all of the OTA bandwidth: Scalable HEVC (SHVC) encoding. SHVC allows a station to send 1080p/60 as a base layer in the OTA signal and an enhancement layer over the top (OTT). A connected TV can then combine the two to create the UHD picture. So a station can save on cost (and bandwidth) by sending only a 1080p HD signal with software enhancements (which already looks great) to an OTA receiver hooked up to an antenna, while viewers whose TVs are also connected to the internet can be given a better picture quality experience via SHVC. These picture enhancements become more noticeable on larger TV sets.
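A minimal sketch of the receiver-side decision, using hypothetical decode/combine helpers, illustrates the idea: the base layer alone is always usable, and the enhancement layer is applied only when a broadband path is present.

```python
from typing import Optional

def decode_base_layer(ota_bitstream: bytes) -> str:
    """Stand-in for decoding the 1080p/60 base layer received over the air."""
    return "1080p60 picture"

def apply_enhancement(base_picture: str, ott_bitstream: Optional[bytes]) -> str:
    """Stand-in for combining the OTA base layer with the OTT enhancement layer."""
    if ott_bitstream is None:
        return base_picture                  # antenna-only viewer: HD picture
    return "2160p (UHD) picture"             # connected viewer: base + enhancement

# Antenna-only receiver
print(apply_enhancement(decode_base_layer(b"ota"), None))
# Receiver with both an antenna and a broadband connection
print(apply_enhancement(decode_base_layer(b"ota"), b"ott"))
```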
On-Prem Vs. Cloud
Another benefit of software-based encoding is that these new applications can be hosted in the cloud or on premises. Most broadcasters that have embraced software-based encoding systems have used on-premises deployments, with the manufacturer providing a COTS server preloaded with the apps.
For cloud deployments, broadcasters need to consider the fees and other costs involved in getting signals into and out of the cloud. Sending baseband HD-SDI into the cloud means a very high bit rate. To save on cost, some stations create a mezzanine-level encode: instead of sending 4:2:0 files, they send 4:2:2 at a high bit rate for encoding, which is still far lower (a much smaller payload) than the baseband SDI signal.
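To put rough numbers on that, the calculation below compares an approximate uncompressed 1080p/60 payload with assumed mezzanine and distribution rates (the 100 Mbps and 2.5 Mbps figures are illustrative assumptions, not quotes from any particular codec or service), which shows why contribution into the cloud is the expensive part:

```python
def uncompressed_mbps(width, height, fps, bit_depth, samples_per_pixel):
    """Approximate active-video bit rate, ignoring SDI blanking and ancillary data."""
    bits_per_pixel = bit_depth * samples_per_pixel   # e.g. 10-bit 4:2:2 -> ~20 bits/pixel
    return width * height * fps * bits_per_pixel / 1e6

baseband = uncompressed_mbps(1920, 1080, 60, bit_depth=10, samples_per_pixel=2)
mezzanine_mbps = 100      # assumed 4:2:2 contribution-quality encode
distribution_mbps = 2.5   # HEVC HD distribution rate quoted earlier in the article

print(f"baseband 1080p60 ~ {baseband:.0f} Mbps "
      f"vs mezzanine ~ {mezzanine_mbps} Mbps vs distribution ~ {distribution_mbps} Mbps")
```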
The cloud also facilitates elasticity and lower CapEx. If a station has one HD and four SD channels today and wants to add another HD tomorrow, and its current on-premises server doesn't have enough horsepower, it will have to add a second server (extra cost). In the cloud, no extra hardware or devices are needed. If the station wants to launch a temporary pop-up channel, it can go live over the air for a specified period and then be shut down when it is no longer needed. This flexibility is significant, even in the world of OTA broadcasting.
An Inevitable Evolution
Looking at the big picture of bandwidth management, the industry is moving toward software-only compression and there's no looking back. Virtualized processing allows OTA broadcasters to leverage more efficient compression schemes and serve both 1.0 and 3.0 audiences during the current transition period with one technology investment, not two.