Using Configurable FPGAs For Integrated Flexibility, Scalability, And Resilience - Part 2

Software continues to demonstrate flexibility and scalability, and the new breed of software defined hardware architectures builds on that success to keep latency very low while keeping flexibility and scalability high.


This article was first published as part of Essential Guide: Using Configurable FPGAs To Deliver Integrated Flexibility, Scalability, And Resilience - download the complete Essential Guide HERE.

Software Hardware

These hardware resources deliver the benefits of ASICs while maintaining the flexibility of software programming. FPGAs are available in many different shapes and sizes and are also characterized by the amount of hardware resource they make available to the design engineer.

Programming an FPGA is often a three-stage process: modelling; simulation and verification; and synthesis and placement. All these stages are performed in software, and the output is a formatted binary file that is loaded into the FPGA during boot, or at any other time under the control of the system.

Modelling is the operation of designing the process the engineer is building and is often facilitated using languages such as Verilog and VHDL. Both of these are Hardware Description Languages (HDLs) and in appearance are not dissimilar from procedural programming languages. Simulation and verification provide offline testing of the design, where data samples are presented and the outputs are verified against the expected output. For example, an FIR filter is highly deterministic, and we know what the output values should be given a known input. The final stage is synthesis and routing, where the binary file is built and then programmed into the FPGA.
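
To make the verification stage more concrete, the short Python sketch below builds a software "golden" reference for a small FIR filter and compares it against the output captured from simulation. The taps, the test vector, and the simulated_output placeholder are illustrative assumptions only; in a real flow the simulated values would come from the HDL simulator running the Verilog or VHDL design.

```python
# Hypothetical golden-model check for a 4-tap FIR filter.
# Taps, input samples and the stand-in "simulated" output are illustrative;
# in practice the simulated values would be captured from the HDL simulator.

TAPS = [1, 2, 2, 1]           # example filter coefficients
SAMPLES = [0, 1, 2, 3, 4, 5]  # example input test vector

def fir_reference(samples, taps):
    """Software reference: y[n] = sum over k of taps[k] * x[n - k]."""
    out = []
    for n in range(len(samples)):
        acc = 0
        for k, tap in enumerate(taps):
            if n - k >= 0:
                acc += tap * samples[n - k]
        out.append(acc)
    return out

expected = fir_reference(SAMPLES, TAPS)

# Stand-in for values captured from the HDL simulation.
simulated_output = expected  # assume the design matches for this sketch

for n, (sim, exp) in enumerate(zip(simulated_output, expected)):
    assert sim == exp, f"Mismatch at sample {n}: got {sim}, expected {exp}"
print("FIR output matches the software reference:", expected)
```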

All these processes can take a considerable length of time, especially as designs become complex, which is why they are often divided into smaller test benches. But once the design of the function is complete, the final software file can be downloaded into the FPGAs in a matter of milliseconds.

What is really compelling about FPGAs is that once the circuit board hardware is designed and built, the rest of the implementation is based on software. This provides untold flexibility for vendors as they can literally make the hardware do anything they want (within the limits of the resource). And this is a real opportunity for broadcasters.

Proven FPGA Technology

Although FPGAs have been used for many years inside broadcast hardware designs, and are therefore proven technologies, it is only recently that arrays of FPGA ICs have become available as stand-alone ecosystems providing dynamic and scalable resource for broadcasters. For example, a single card could be programmed to be a proc-amp, but the next day be reprogrammed, by downloading a new binary file, to be a standards converter. This flexibility is something we’ve never seen before in broadcasting when considering the very low latencies involved. It’s possible for software running on COTS hardware to deliver this, but the latencies are variable and unpredictable, and the systems are incredibly complex, which is often an issue for live productions. FPGA arrays offer low, deterministic latency and are relatively easy to operate.
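
As a simple illustration of that reprogramming step, the sketch below shows how a management layer might repurpose a card by loading a different binary file. The function names, card identifiers, and placeholder binary images are hypothetical and stand in for whatever vendor-specific tooling is actually used.

```python
# Hypothetical sketch of repurposing an FPGA card by loading a different
# binary file. The card IDs, function names and placeholder binary images
# are illustrative assumptions, not a real vendor API.

BITSTREAMS = {
    "proc_amp": b"\x00" * 1024,             # placeholder binary image
    "standards_converter": b"\x00" * 2048,  # placeholder binary image
}

def load_bitstream(card_id: str, image: bytes) -> None:
    """Stand-in for the vendor tool that programs the card's configuration port."""
    print(f"Programming card {card_id} with {len(image)} bytes")

def assign_function(card_id: str, function: str) -> None:
    load_bitstream(card_id, BITSTREAMS[function])
    print(f"Card {card_id} is now a {function.replace('_', ' ')}")

# Today the card is a proc-amp; tomorrow it can be a standards converter.
assign_function("rack1-card3", "proc_amp")
assign_function("rack1-card3", "standards_converter")
```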

FPGA IC arrays provided on a single circuit board can then be replicated many times in a rack frame. With the appropriate management software, the functionality is effectively abstracted away from the underlying hardware, including the transport stream. This delivers incredible opportunities for flexible and scalable operation for broadcasters, especially when considering the potential for a multitude of licensing models. For example, using the pay-as-you-go model, centralized licensing repositories could be linked into the vendor’s management software to make modular functionality available, such as proc-amps, embedders, and frame-synchronizers, to name but a few.
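
To illustrate the pay-as-you-go idea, the sketch below checks a mocked licensing repository before enabling a function on a card. The account names, licence counts, and functions are hypothetical; a real deployment would query the vendor’s centralized licensing service rather than a local dictionary.

```python
# Hypothetical pay-as-you-go licence check before enabling a function.
# The repository is mocked as a dictionary for illustration only.

LICENCE_REPOSITORY = {
    "broadcaster-42": {"proc_amp": 8, "embedder": 4, "frame_sync": 2},
}

def licences_available(account: str, function: str, in_use: int) -> bool:
    """Return True if the account still has unused licences for 'function'."""
    allowed = LICENCE_REPOSITORY.get(account, {}).get(function, 0)
    return in_use < allowed

def enable_function(account: str, card_id: str, function: str, in_use: int) -> bool:
    if not licences_available(account, function, in_use):
        print(f"{function} refused on {card_id}: licence limit reached")
        return False
    print(f"{function} enabled on {card_id}")
    return True

enable_function("broadcaster-42", "rack2-card1", "frame_sync", in_use=1)
enable_function("broadcaster-42", "rack2-card1", "frame_sync", in_use=2)
```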

An FPGA consists of tens-of-thousands of hardware gates and functions that can be programmed allowing many different operations to be provided such as proc-amps, standards converters, and frame-synchronizers. Also, many FPGAs can be connected with high-speed differential pair busses to facilitate low latency signal processing across multiple FPGAs.


The really exciting aspect of this initiative is that once the rack of FPGA cards has been procured, all the operational functionality is provided by the vendor using software files. These can be updated and managed by the vendor so the broadcaster can focus on building their specific studio solution without having to worry about software versioning or configuration. Furthermore, by increasing the number of racks, the available resource scales appropriately. Therefore, when engineers are designing or expanding their facility, they can spread their estimated functionality requirements over many racks knowing the detailed functionality can be loaded into the FPGAs as required, thus making the system highly flexible and scalable. Futureproofing is provided without having to plan ten years ahead, as more FPGA racks can be added when needed.

Transport Stream Independent

This design philosophy also has some very interesting implications for transport stream interfacing, as it is taken care of by the FPGA circuit boards. The SDI, AES, ST2110 IP, and ST2022 IP protocols, and many others, are available as VHDL code libraries and so manifest themselves as physical interfaces on the FPGA. Consequently, transferring video and audio data to and from them is a relatively straightforward process as it’s all taken care of in the FPGA itself. We don’t need to be concerned with interface equipment to convert between the various transport streams; it all takes place inside the FPGA.
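
The sketch below illustrates the principle of transport independence: the same processing step runs unchanged whichever interface supplies the data. The interface classes and the proc_amp stand-in are illustrative assumptions, not a description of any particular FPGA design.

```python
# Hypothetical illustration of transport-stream independence: the same
# processing function is wrapped with whichever interface the facility
# needs. The classes and the "process" step are illustrative only.

def proc_amp(frame: bytes) -> bytes:
    """Stand-in for a processing core that is identical for every transport."""
    return frame  # the real gain/offset maths would live in the FPGA fabric

class SDIInterface:
    name = "SDI"
    def receive(self) -> bytes:
        return b"sdi-frame"

class ST2110Interface:
    name = "ST 2110"
    def receive(self) -> bytes:
        return b"st2110-frame"

def run(interface) -> None:
    frame = interface.receive()   # transport handled inside the FPGA
    result = proc_amp(frame)      # processing is transport-agnostic
    print(f"{interface.name}: processed {len(result)} bytes")

run(SDIInterface())
run(ST2110Interface())
```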

It’s fair to say that the hardware still needs physical interfaces and connectors, but again these can be provided as an array of assignable flexible resource instead of being statically dedicated to specific tasks, thus further improving flexibility and scalability.

Another interesting aspect of this design philosophy is that much of the video, audio and metadata signal routing takes place within the confines of the rack of FPGA circuit boards through high-speed backplanes. Not only does this keep latency low, but it also significantly reduces cabling.

Although cabling forms the core of any infrastructure, it has two undesirable attributes: it’s heavy, and it’s susceptible to damage. Weight is particularly important for OB trucks, and anything we can do to reduce it is a major bonus. Even where fiber is used to distribute IP, there are clear advantages to keeping the amount of fiber to a minimum: the less we have, the less there is to go wrong.

Keeping the signal processing within close proximity of a rack helps maintain resilience and, equally importantly, low and predictable latency. This also reduces the number of inputs and outputs on the central routing matrix, further keeping weight and power consumption down.

Multiple racks with dual power supplies and diverse power routings deliver high resilience, especially when combined with a software configuration and management system that can assign the FPGA functionality on-the-fly, thus delivering outstanding flexibility, resilience, and scalability.
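
As an illustration of that on-the-fly assignment, the sketch below moves a function from a failed card to spare capacity elsewhere. The card inventory and the reassignment logic are hypothetical and greatly simplified compared with a real management system.

```python
# Hypothetical sketch of on-the-fly reassignment for resilience: when a
# card fails, its function is moved to a spare card in another rack.
# The inventory and reassignment step are illustrative assumptions.

cards = {
    "rack1-card1": {"function": "frame_sync", "healthy": True},
    "rack1-card2": {"function": "proc_amp", "healthy": True},
    "rack2-card1": {"function": None, "healthy": True},  # spare capacity
}

def reassign_on_failure(failed_id: str) -> None:
    function = cards[failed_id]["function"]
    cards[failed_id]["healthy"] = False
    for card_id, card in cards.items():
        if card["healthy"] and card["function"] is None:
            card["function"] = function
            print(f"{function} moved from {failed_id} to {card_id}")
            return
    print(f"No spare capacity available for {function}")

reassign_on_failure("rack1-card1")
```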

Conclusion

Broadcasters looking to upgrade, improve, or expand their facilities are currently presented with some very difficult decisions. In part, this is due to the influence of IP and cloud computing. However, much of the functionality broadcasters currently need is difficult, and sometimes impossible, to implement in IP and cloud, and this is a natural consequence of the state of IP and cloud development at this moment in time.

The good news is that the new breed of assignable FPGA arrays not only makes the delivery of flexible and scalable functionality a reality, providing an outstanding compromise between hardware and software, but also abstracts the SDI, AES and IP transport streams away from the user operation. This allows broadcasters to mix and match technologies with ease and build the most flexible, resilient, and scalable broadcast infrastructure possible.
