Advances In 12G-SDI: Part 1 - Developing 12G-SDI
Since SMPTE formally standardized SDI in 1989, it has become the dominant video, audio and metadata transport mechanism for virtually all broadcast facilities throughout the world. Technology advances have not only made SDI incredibly reliable, but the specification has continued to progress, embracing ever increasing data-rates and video formats.
In this series of three articles we take a closer look at SDI to understand its origins, current state, and why it’s here to stay for a long while yet.
The first version of SDI, SMPTE 259M, was released by SMPTE in 1989 and supported four data rates: 143, 177, 270 and 360 Mbit/s. The two lower rates carried digitized composite NTSC and PAL, while the 270 and 360 Mbit/s rates carried component video encoded as YCrCb and subsampled at 4:2:2.
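The component data rates follow directly from the sampling arithmetic. As a sketch (assuming the well-known 10-bit, 4:2:2 case, where luma is sampled at 13.5 MHz for standard 4:3 SD and the two chroma components at half that rate each):

```python
# Sketch of how the SMPTE 259M component rates arise from 4:2:2 sampling.
# Assumes 10-bit words; the 13.5 MHz and 18 MHz luma rates are the
# Rec. 601 4:3 and widescreen 16:9 sampling frequencies respectively.

def sdi_bitrate(luma_mhz: float, bits: int = 10) -> float:
    """Serial rate in Mbit/s for 4:2:2 component video.

    Y is sampled at luma_mhz; Cb and Cr at half that rate each, so the
    multiplexed word rate is twice the luma sample rate.
    """
    word_rate_mhz = luma_mhz * 2      # Y words + interleaved Cb/Cr words
    return word_rate_mhz * bits       # bits per word

print(sdi_bitrate(13.5))  # 270.0 -> SD 4:3
print(sdi_bitrate(18.0))  # 360.0 -> SD 16:9
```

The same multiply-out works for the HD and UHD rates discussed later in the article, once blanking is included in the raster.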
At the time SDI debuted, video was generally distributed using PAL or NTSC. One of the major challenges with these formats was that the color was encoded onto a quadrature amplitude modulated subcarrier. Although this system worked well, the color needed to be decoded to its component form (YUV or RGB) before processing.
RGB Limitations
Decoding video from PAL, NTSC or SECAM into its component parts would quickly introduce artefacts into the image due to the imperfect separation of the modulated color from the luma. To alleviate this, many facilities at the time would distribute video as RGB or YUV to maintain high video quality and reduce the possibility of these artefacts. A significant disadvantage of this approach was the increase in cabling and complexity. Even with the video syncs encoded on the green signal, the amount of cabling needed increased three-fold as each of the R, G and B signals required its own cable.
Problems were further compounded as each of these cables had to be identical in length to prevent signal timing issues. Even a difference in cable length of a few meters would result in visible picture or color shifts potentially manifesting themselves as registration errors between the R, G, and B colors.
Although PAL and NTSC encoding solved the multiple-cable requirement, this solution came at the expense of a degradation in quality, and other issues, such as the 8-field PAL sequence, led to further compromises in editing.
Audio has historically been treated separately in broadcast stations. It is possible to modulate audio onto a PAL or NTSC carrier, and this is how analog television was broadcast, but it was rarely done within a broadcast infrastructure. Consequently, audio signals were distributed independently of the video. As framestores started to appear, lip-sync and audio-video timing issues soon became apparent.
Furthermore, as PAL and NTSC video were analog, they continually suffered degradation from noise and distortion, and the media on which they were recorded suffered from material wear and generational loss with every copy.
SDI Solutions
SDI solved many of these problems in one go. Only one cable was needed, the signals were digitally multiplexed as Y, Cr and Cb thus removing any crosstalk between the luma and chroma, and audio could be encoded into the “spare” data in the horizontal and vertical blanking. It’s true that the color bandwidth of YCrCb over SDI is half that of RGB but the benefits far outweigh the challenges.
Vertical (frame/field) and horizontal (line) syncs are technically no longer needed and could easily be replaced with a much more efficient method of system synchronization. However, they have been kept in the specifications to maintain backwards compatibility, especially as we transitioned from traditional Cathode Ray Tube televisions to flat screen TVs. Unique embedded codes within the SDI stream, the SAV (Start of Active Video) and EAV (End of Active Video) timing reference signals, identify the start and end of active picture, so the sync pulses are no longer needed. The space freed by the sync pulses can therefore be used for other information such as audio and metadata.
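The embedded codes that replace the sync pulses can be sketched as follows. This is a minimal illustration, not a full deserializer: SDI marks line boundaries with a Timing Reference Signal (TRS) of three fixed 10-bit words (0x3FF, 0x000, 0x000) followed by an "XYZ" status word carrying field, vertical and horizontal flags.

```python
# Minimal sketch of TRS detection. The protection (Hamming) bits in the
# lower part of the XYZ word are ignored here for simplicity.

def parse_trs(words):
    """Return (F, V, H) flags if the 4-word sequence is a TRS, else None.

    H = 0 marks SAV (start of active video), H = 1 marks EAV.
    """
    if words[:3] != [0x3FF, 0x000, 0x000]:
        return None
    xyz = words[3]
    f = (xyz >> 8) & 1   # field bit
    v = (xyz >> 7) & 1   # vertical blanking bit
    h = (xyz >> 6) & 1   # 0 = SAV, 1 = EAV
    return f, v, h

# 0x200 has only bit 9 set -> F=V=H=0: an SAV within active video.
print(parse_trs([0x3FF, 0x000, 0x000, 0x200]))  # (0, 0, 0)
```

A receiver scanning the word stream for these sequences can frame the active picture without any analog sync information.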
The next major advance for SDI was the move to SMPTE 292 to accommodate 1080i HD television. The bit rate increased to 1.485 Gbit/s for PAL countries and 1.485/1.001 Gbit/s for NTSC countries. Commonly referred to as 1.5G, SMPTE 292 also accommodated the change in aspect ratio from 4:3 to 16:9. SMPTE 296 further added support for 720p HD.
Moving to progressive 1080p formats meant a doubling of the bit rate from 1.485 Gbit/s to 2.97 Gbit/s. However, the chipsets and cable specifications weren't available at the time to achieve reliable distribution over a single link. To address this, SMPTE 372 was released to specify distribution of a 1080p59/60 video signal over two 1.5G SDI links.
Reducing Cables
By 2006 the chipsets and cable tolerances had improved so a single SDI link could reliably deliver a 2.97 Gbit/s signal. SMPTE 424M provided the specification for 3G-SDI to facilitate formats such as 1080p50/59.
Formats continued to develop, and SMPTE released its ST 2081 suite of specifications to facilitate the move to higher 1080p frame rates and the progression to 2160p, with a single link carrying 5.94 Gbit/s (or 5.94/1.001 Gbit/s).
6G became available in three versions: single link, dual link, and quad link, each doubling the data capacity of its predecessor. Single link (5.94 Gbit/s) supports distribution of 2160-line (4K) video up to 30fps and 1080-line video up to 60fps. Dual link (11.88 Gbit/s) can distribute 4Kp60 (4:2:2). Quad link (23.76 Gbit/s) provides 4Kp60 (4:4:4), 4Kp120 (4:2:2), and 8Kp30 (4:2:2).
Within these limits the specifications allow color subsampling to be traded against frame rate. For example, ST 2081-11 allows a video frame of 4096 x 2160, but you can have either 30fps with 4:4:4 color subsampling or 60fps with 4:2:2.
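A back-of-envelope calculation shows why these trade-offs fall where they do. This sketch assumes the 10-bit 4:2:2 case and a total 2160-line raster of 4400 x 2250 words including blanking, i.e. double the 2200 x 1125 raster used for 1080p:

```python
# Rough serial-rate check for 4:2:2 video, including blanking.
# Assumptions: 10-bit words, 2 words per pixel (Y plus interleaved Cb/Cr),
# and the doubled-HD total raster for the 2160-line formats.

def link_rate_gbps(total_w, total_h, fps, bits=10, words_per_pixel=2):
    """Serial rate in Gbit/s for a given total raster and frame rate."""
    return total_w * total_h * fps * bits * words_per_pixel / 1e9

print(link_rate_gbps(2200, 1125, 60))  # 2.97  -> 1080p60 fills a 3G link
print(link_rate_gbps(4400, 2250, 30))  # 5.94  -> 2160p30 fills single-link 6G
print(link_rate_gbps(4400, 2250, 60))  # 11.88 -> 2160p60 fills single-link 12G
```

Moving to 4:4:4 replaces the two words per pixel with three, which is why the same raster and frame rate then needs the next link size up.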
Automated Multi-Link Construction
Although the dual and quad links require two or four coaxial cable connections respectively, they do not suffer from the same picture shift issues as their analog RGB predecessors. The cables must be of similar length and type, but the specification and the chipsets in the sending and receiving devices automatically correct for small cable length discrepancies.
SMPTE released ST 2082 in 2015 to provide data rates of 11.88 Gbit/s and 11.88/1.001 Gbit/s. Now known as 12G-SDI, three versions are available: single-, dual- and quad-link.
The single link can provide 4K 4:4:4 at 30fps or 4:2:2 at 60fps. Dual link (23.76 Gbit/s) provides 8K (4:2:2) at 30fps, and as we move to quad link (47.52 Gbit/s), 8K (4:2:2) at 60fps becomes available along with 4K at 120fps.
Due to the large number of video formats supported within ST 2082, a simplified method of identifying the type of video a signal is carrying is needed. This is achieved using modes, and each link version has its own definition of modes.
A single 12G link (ST 2082-10) has two modes of operation: Mode 1 (4K/UHD up to 60fps) and Mode 2 (1080p 4:4:4, 10- and 12-bit, up to 120fps).
The dual 12G link provides three modes: mode 1 describes 8K formats, mode 2 describes 4K 4:4:4 formats, and mode 3 describes 4K 4:2:2 formats with higher (extended) frame rates.
The quad 12G link provides only two modes: mode 1 describes 8K formats and mode 2 identifies 4K formats.
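The mode structure above can be summarized as a simple lookup. This is only a paraphrase of the descriptions in the preceding paragraphs (labels only, not the full format tables from the ST 2082 documents):

```python
# Summary of 12G-SDI modes per link version, paraphrased from the article.
MODES_12G = {
    "single": {1: "4K/UHD up to 60fps",
               2: "1080p 4:4:4 10/12-bit up to 120fps"},
    "dual":   {1: "8K formats",
               2: "4K 4:4:4 formats",
               3: "4K 4:2:2 at extended frame rates"},
    "quad":   {1: "8K formats",
               2: "4K formats"},
}

print(MODES_12G["dual"][3])  # 4K 4:2:2 at extended frame rates
```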
VPIDs Deliver Reliability
To identify the format and mode in use, each link carries an embedded label called a VPID (Video Payload Identifier) that helps the receiver identify the signal so it can correctly decode it. VPIDs are sent in the ancillary data area, so any receiving equipment can easily identify the type of signal it should be decoding.
VPIDs are defined in SMPTE ST 352, as well as in the relevant SDI standard, and carry information that allows the receiver to quickly and efficiently identify the signal being sent to it.
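As a minimal sketch of how a receiver might locate this label: the VPID is carried as a four-byte payload in an ancillary data packet identified by DID 0x41 and SDID 0x01. The exact bit-field layout of the four bytes varies by standard, so only the byte split is shown; the example payload values are hypothetical.

```python
# Illustrative sketch of pulling the four VPID bytes out of an ancillary
# data packet, assuming a simplified packet layout of
# [DID, SDID, data_count, payload...]. Real ANC packets also carry the
# ancillary data flag and a checksum, omitted here.

def extract_vpid(anc_packet):
    """Return the four VPID bytes if this is an ST 352 packet, else None."""
    did, sdid, dc = anc_packet[0], anc_packet[1], anc_packet[2]
    if (did, sdid, dc) != (0x41, 0x01, 4):
        return None
    return tuple(anc_packet[3:7])

pkt = [0x41, 0x01, 4, 0xCE, 0x4B, 0x00, 0x01]  # hypothetical payload bytes
print(extract_vpid(pkt))
```

Once extracted, the four bytes tell the receiver the interface standard, picture rate, sampling structure and bit depth of the payload.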
Although it may seem intuitively correct to pair wide color gamut and high dynamic range with a 4K system, there are several variations of each, and ST 2082 allows parameters to be specified to identify the particular format in use, for example HLG or PQ, and Rec. 709 or Rec. 2020.
Again, these parameters are transmitted in the VPIDs allowing any receiver to quickly set up and establish the format of the signal and display it accordingly.
SDI is continuing to flex its muscles and shows no sign of standing aside. This isn't surprising: it is a thirty-year-old technology, and vendors, chip designers, and cable providers have had a long time to iron out any anomalies. The data rates have certainly increased, almost exponentially, from the 270 Mbit/s of 1989 to the aggregated 47.52 Gbit/s now available on quad-link 12G. SDI is as stable and easy to use as ever.