Understanding IP Broadcast Production Networks: Part 12 - Measuring Line Speeds
Broadcast and IT engineers take very different approaches to network speed and capacity - it is essential to reach a shared understanding.
In the analog days, broadcast engineers used frequency response to measure line capability. An audio twisted pair had a bandwidth of at least 20kHz and video had a bandwidth of approximately 8MHz. As we moved to AES3 digital audio and HD-SDI, these requirements increased to approximately 3Mb/s and 1.485Gb/s respectively.
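As a back-of-envelope check on where those figures come from, the sketch below assumes 48kHz stereo AES3 and a 1080i30 HD-SDI raster with 2,200 sample periods per line and 1,125 total lines; other sample rates and rasters will give different numbers.

```python
# Back-of-envelope bit-rate check (assumes 48kHz stereo AES3 and a
# 1080i30 HD-SDI raster - other formats will differ).

# AES3: two 32-bit subframes (one per channel) for every 48kHz sample period.
aes3_rate = 48_000 * 32 * 2              # = 3,072,000 b/s, roughly 3Mb/s

# HD-SDI (1080i30): 2,200 sample periods per line, 1,125 total lines,
# 30 frames/s, 20 bits per sample period (10-bit luma + 10-bit chroma).
hd_sdi_rate = 2_200 * 1_125 * 30 * 20    # = 1,485,000,000 b/s, 1.485Gb/s

print(f"AES3:   {aes3_rate / 1e6:.3f} Mb/s")
print(f"HD-SDI: {hd_sdi_rate / 1e9:.3f} Gb/s")
```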
Digital audio and video systems send high and low voltages to represent the ones and zeros. However, at these higher frequencies we must still take into account the analog qualities of the transmission cable and the equalizing circuits: return loss, reflections and phase distortion all remain significant factors.
When speaking to the IT department, we might assume we no longer need to worry about these qualities, as signal paths are now defined in bits per second. An internet line might be specified as 200Mb/s and an SFP link as 1Gb/s.
IT engineers tend to be product specialists in their own fields of Microsoft, Linux, and Cisco. Very few of them will have studied transmission theory in the way broadcast engineers have in the past, especially those who worked on transmitters. This can lead to some very frustrating and confusing conversations between IT and broadcast engineers.
An IT engineer might tell you that the bandwidth of a circuit is 200Mb/s, or that the delay is negligible. At this point a broadcast engineer's blood will boil as they have flashbacks to telegrapher's equations and two-port networks. The simple answer is that IT engineers think in terms of service level agreements: if a Telco has provided a circuit with 200Mb/s capacity, then they assume and expect it to be true.
IT engineers think of network capacity in terms of bits/second, as opposed to broadcast engineers who think in terms of bandwidth, return loss, phase distortion and reflections. Further problems arise when we start discussing actual data throughput in a system.
Broadcast engineers will assume that, just as a 10MHz point-to-point circuit carries a 10MHz signal, a 10Mb/s link will carry 10Mb/s of their data. As IT networks are based on packets of data, there is an inherent loss of capacity due to headers and inter-packet gaps. A 10Mb/s circuit might only have a useful data-rate of around 9.5Mb/s.
Ethernet frames are generally split into two parts: the header and the payload. The header includes information such as the destination and source addresses and the type of payload carried. The payload will be a protocol such as IP, which in turn contains its own header and payload, carrying information such as packet counts and flags.
Drilling down further into the Ethernet frame, four octets of Cyclic Redundancy Check (CRC) are appended to the end of the frame, and a minimum of twelve octets of inter-packet gap separate it from the next frame.
When discussing networks, engineers use octets instead of bytes to represent eight bits. This removes any ambiguity, as computer science tells us a byte is “a unit of information” whose size depends on the hardware design of the computer, and could just as easily be eight bits, ten bits or one hundred bits.
A full-size Ethernet frame on the wire, including the preamble, headers, CRC and inter-packet gap, occupies 1,538 octets, or 12,304 bits (1,542 octets if an 802.1Q VLAN tag is carried). Only the 1,500 octet payload, or 12,000 bits, is available to us when sending audio and video. If a camera uses UDP/IP to send its data over Ethernet, a further 28 octets are lost to the IP and UDP headers, leaving 1,472 octets, or 11,776 bits, and giving an overall data throughput of roughly 96%.
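As a sanity check on that figure, the sketch below walks through the per-frame octet budget, assuming full-size untagged frames, an IPv4 header with no options and a plain UDP header; streams carrying VLAN tags, IPv6 or RTP headers will lose slightly more.

```python
# Per-frame octet budget for a full-size, untagged Ethernet frame carrying
# UDP/IPv4 (assumes no VLAN tag, no IP options and no application headers
# such as RTP - real media streams lose a little more).

PREAMBLE_SFD = 8      # 7 octets preamble + 1 start-of-frame delimiter
MAC_HEADER   = 14     # destination MAC + source MAC + EtherType
MAX_PAYLOAD  = 1500   # maximum Ethernet payload
CRC          = 4      # frame check sequence
INTER_PACKET = 12     # minimum inter-packet gap

wire_octets = PREAMBLE_SFD + MAC_HEADER + MAX_PAYLOAD + CRC + INTER_PACKET  # 1,538

IP_HEADER  = 20       # IPv4, no options
UDP_HEADER = 8
useful_octets = MAX_PAYLOAD - IP_HEADER - UDP_HEADER                        # 1,472

efficiency = useful_octets / wire_octets
print(f"Wire octets per frame:   {wire_octets}")        # 1538
print(f"Useful octets per frame: {useful_octets}")      # 1472
print(f"Efficiency:              {efficiency:.1%}")     # ~95.7%
print(f"Usable rate on a 200Mb/s circuit: {200 * efficiency:.0f} Mb/s")  # ~191
```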
This theoretical maximum of roughly 96% assumes no congestion. The IT engineer is expecting 200Mb/s, but the Telco is really providing only about 191Mb/s of usable data. Clearly the Telco will dispute this, as they will show they are providing a 200Mb/s circuit, and the fact that you are spending around 4% of it on framing and header information is your problem, not theirs. To the letter of the contract, they are probably correct.
This may not sound like a lot, but if you suddenly find your network has a 4% reduction in capacity, you could find yourself with some tough questions to answer when approaching the Finance Director for more cash.
When analyzing network throughput, we must also have a thorough understanding of the protocols in use.
Transmission Control Protocol (TCP) sits on top of IP and Ethernet and provides a reliable, connection-based system that allows two devices to exchange data. Low-latency video and audio standards such as ST 2110 do not use TCP/IP: its data integrity is very high, but its throughput can be low and highly variable. However, some video and audio distribution systems do use TCP/IP, and understanding how it works is critical when considering data throughput and latency.
TCP works by sending a group of packets and then waiting for an acknowledgement packet from the receiver. If the “Ack” isn’t received, the same packets are resent, eventually timing out and reporting an error to the user if too many of these failures happen in succession. If the “Ack” is received by the sender, the next group of packets is sent. The reduced data throughput is the price we pay for this delivery guarantee.
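The retransmission principle can be illustrated with a deliberately simplified sketch. The code below is not TCP - it is a toy stop-and-wait sender over UDP sockets that tags each message with a sequence number, waits for an acknowledgement and resends on timeout, which is the behaviour that trades throughput for a delivery guarantee. The receiver address, timeout and retry count are placeholders.

```python
import socket

# Toy stop-and-wait sender (illustrative only - real TCP uses sliding
# windows, congestion control and much more). Assumes a receiver at
# RECEIVER that echoes back b"ACK" plus the sequence number it received.
RECEIVER = ("192.0.2.10", 5000)   # placeholder address
TIMEOUT_S = 0.5
MAX_RETRIES = 5

def send_reliably(payloads):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT_S)
    for seq, payload in enumerate(payloads):
        packet = seq.to_bytes(4, "big") + payload
        for attempt in range(MAX_RETRIES):
            sock.sendto(packet, RECEIVER)
            try:
                reply, _ = sock.recvfrom(1500)
                if reply == b"ACK" + seq.to_bytes(4, "big"):
                    break                      # acknowledged, move to next packet
            except socket.timeout:
                pass                           # no Ack - resend the same packet
        else:
            raise TimeoutError(f"packet {seq} was never acknowledged")
    sock.close()
```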
UDP/IP, which ST 2022 and ST 2110 use to transport video and audio, is a “fire and forget” protocol. The camera outputs its packets and makes no check to see whether they were received at the destination, such as a production switcher. ST 2022-5 adds forward error correction (FEC) to provide some resilience over the network.
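By contrast, a fire and forget sender is trivially simple. The sketch below just streams datagrams with an RTP-style sequence counter so a receiver can at least detect loss; the destination address and payload chunks are placeholders, and no acknowledgement is ever expected.

```python
import socket

# Fire-and-forget UDP sender (illustrative - no acknowledgements and no
# retransmission). The 16-bit sequence counter lets a receiver detect
# loss, much as RTP does; the destination address is a placeholder.
DESTINATION = ("192.0.2.20", 20000)

def send_stream(chunks):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq, chunk in enumerate(chunks):
        header = (seq & 0xFFFF).to_bytes(2, "big")
        sock.sendto(header + chunk, DESTINATION)   # no reply expected
    sock.close()
```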
“IPerf” is used to actively measure the maximum achievable bandwidth of an IP network; it is the closest tool we have for measuring a network’s capacity and is released under the BSD license. It runs on two computers, a transmitter at one end of the network and a receiver at the other. It works by flooding the link with IP datagrams and using the receiver to check their validity; consequently the measurement is potentially destructive to other users of the network and should be run in isolation.
Operating as a command line tool, IPerf can make many different measurements, from UDP/IP line-speed to TCP throughput. TCP measurements will always read lower because IPerf is then measuring the speed of the protocol, not the line-speed of the network.
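As a sketch of how such a test might be driven, the snippet below shells out to iperf3 for a UDP measurement. The server address, target rate and duration are placeholders, iperf3 must already be installed at both ends, and the classic iperf2 takes slightly different options.

```python
import subprocess

# Run a UDP throughput test against an iperf3 server started elsewhere
# with "iperf3 -s". Server address, target bit rate and duration are
# placeholders - adjust to the circuit under test. This will flood the
# link, so only run it on a network you are allowed to disrupt.
SERVER = "192.0.2.30"

result = subprocess.run(
    ["iperf3", "-c", SERVER, "-u", "-b", "200M", "-t", "10"],
    capture_output=True, text=True, check=False,
)
print(result.stdout)
```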
When working with the IT department, great care must be taken to understand exactly what you are measuring.