Sports Production Infrastructure – Where’s The Compute?

The evolution of IP-based production and increased computer processing power have enabled new workflows, so how is compute resource being deployed to create new remote and hybrid approaches?
Since the dawn of digitization, most broadcast technology has been about software. The first generation of digital technologies presented their software within bespoke hardware – and in many ways it still looked and felt like hardware. Then came computers and the balance started to shift to feeling more like software – but in reality, that software was still very much running on dedicated computer hardware and heavily reliant on FPGA processing horsepower, not CPU and GPU.
As the power and sophistication of computers have grown, networks have gone IP, computers have turned into Virtual Machines running on COTS hardware, and the software has become more and more liberated from the hardware it runs on. As we move into 2025 it feels like the beginning of a new era, when the software can be run entirely on generic industrial grade compute hardware that can be located anywhere on a network, and scaled up and back down relatively quickly and easily. All of that, though, is of course a journalistic oversimplification of what is definitely not simple infrastructure. As will also become apparent, the density and efficiency of FPGA still very much has its place.
The reality of how to combine the staggering array of compute possibilities available to system designers takes huge skill and a lot of experience. Live sports production is undoubtedly the most demanding area of broadcast and as such has always been a focus for innovation. So here we talk to our panellists about how they see this evolution in compute technology. We asked them a simple question – where are we right now on that evolutionary path and how is progress intertwined with the emergence of decentralized and remote production workflows? The answers are far from simple.
Rainer Kampe. CTO, Broadcast Solutions: “In an OB van the change has not been as big as in fixed facilities because of the type of equipment, and the efficiency of equipment, used in an OB van. For many years now we have seen change in the audio domain, from bespoke hardware to computer-based or CPU-based processing. In video processing we try to avoid too much CPU because it definitely takes much more power than bespoke FPGA hardware, which then needs more effort in terms of power supply. Maybe you need a bigger generator for the OB van or the supply from the grid needs to be one size bigger.

You also need more air-con and this is really a drawback because everybody thinks, okay, we are getting our carbon footprint down but no, it goes up. In fixed facilities that’s a different thing, where you use the same hardware or the same computing platform for different purposes within a day. In an OB van you are on site for one single purpose, for that production. You cannot repurpose the kit in the OB van for another production that day but in a facility that’s different. There CPU-based and software-based systems might be a benefit.”
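Kampe’s point about power and cooling can be reduced to back-of-envelope arithmetic. The figures below are invented purely for illustration – they are not measured data for any real appliance or server – but they show why a CPU-heavy load can force a bigger generator and more air-con than an FPGA frame doing comparable work.

```python
# Illustrative only: hypothetical power figures, not measured data.
# Compare total OB-van electrical load for an FPGA appliance vs a
# CPU/GPU server doing comparable video processing, including the
# extra air-con power needed to remove the heat each produces.

def total_load_watts(processing_watts: float, cooling_overhead: float) -> float:
    """Processing power plus the cooling power needed to remove that heat."""
    return processing_watts * (1.0 + cooling_overhead)

# Assumed figures: an FPGA frame drawing 400 W vs a COTS server drawing
# 1200 W for the same workload, both cooled at an assumed 0.4 W of
# air-con per W of heat.
fpga_total = total_load_watts(400.0, 0.4)
server_total = total_load_watts(1200.0, 0.4)

print(f"FPGA appliance: {fpga_total:.0f} W, COTS server: {server_total:.0f} W")
print(f"Extra generator capacity needed: {server_total - fpga_total:.0f} W")
```

On these invented numbers the software route draws three times the power before a single extra fan spins up – which is exactly the generator-and-grid headroom problem Kampe describes.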

John Guntenaar. CTO, NEP Europe: “If we look at what we are currently hosting in the datacenter now in our remote models, it’s similar to the stuff that we normally put in a truck. But we are seeing the industry changing from hardware to software. If you think about the equipment that manufacturers have been creating for many years, they have created systems, appliances, and hardware that have a lot of interfaces that are not native to a computer. For example, SDI. A big SDI vision switcher, for instance, has boards in it filled with FPGAs and a whole load of SDI connectors on the back that require all of the real estate.”
“Once the world started moving from SDI to IP, that same board was used, but now it has a fiber-optic connector that could have multiple signals. You don’t need the real estate for all of the connectors that are non-native to a computer any more. A vision switcher is just something running software. The manufacturers have assembly lines where they’re creating PCBs, and they have mechanical engineering, electrical engineering and QA to put it together. In the end, it’s a computer running software. What we see happening is that when computers and COTS hardware improve, CPUs and GPUs improve, and you can have FPGA cards as PCI express cards in computers and you can run similar software applications, but in generic hardware.”
“We do see it moving in that direction, but not yet for everything, because there are certain applications where it doesn’t make sense to run it on generic computers – something like a multiviewer, or for instance when you require a lot of sources. CPUs do not guarantee your throughput and they also use a lot of energy. You see that FPGAs in that area are still widely used, but for other applications, you can use a CPU and GPU with an FPGA card in that computer. There are even some manufacturers that are now creating things like multiviewer applications that run on an FPGA card inside of a server.”
Dan Turk. CTO, NEP Americas: “If you look at the vendors, take a step back and look at where everyone’s going, I think we’re all aligned at different timelines of where it’s going. The days of everything being disparate FPGA hardware boxes are gone. Compute has caught up and can do a lot of the things that we couldn’t do in the past. That’s check box one; now the CPU is big enough and the graphics cards are big enough to deal with broadcast. We’re in a transition from FPGA, to bare hardware, to software, but there are going to be some processes that remain software-based with FPGA acceleration.”
“There are some that are going straight to software from hardware. For example, look at Grass Valley’s AMPP and Lawo’s Home, and at the 2024 IBC Show Riedel announced audio processing. They’re all in the cloud and that’s where everyone’s going. One of the things the experts in that area have been saying is that once we get our base and are able to add on to this and add features, it’s going to be a lot quicker – and that has come true. It’s amazing to see where the vendors were two years ago and what they can do now. It’s an exponential improvement. Even with hardware, systems that would have taken 30 servers a year ago are down to 15. And next year when you get to the next gen, it could be down to five. Processing at the CPU and GPU level is growing exponentially, and we’re able to take advantage of that.”

“How we as an industry grow from hardware to software also relates to datacenter models and the potential to spin things up, use them, turn them off, and take them down, so that the same compute can be used for other devices. This goes back to the idea that we all have to find efficiencies as the industry is transitioning. How do we do that? Can we do that in software? One option is to make it more flexible and not spend as much money on shipping large hardware around. Because it’s compute we can move it around, make it more efficient and therefore help everyone. We’re all trying to figure out this transition period we’re in now.”
John Guntenaar. NEP Europe: “The benefit of running in a datacenter setup is that the entire setup evolves over time, where for a traditional OB truck, you run it for a few years on the road and then refurbish it completely. In the datacenter it’s more of an ongoing process. We see more things moving towards standard compute. We have some audio applications for instance that we see now running on standard servers and there are some video applications doing the same, so it’s growing. The industry is moving from a hardware to a software-based world where the companies are software companies and not hardware manufacturers any more because you can run applications on COTS.”
“I cannot speak for any manufacturers but, if they can run their application on a server that Dell or HP, etc. can make for them, then why would they want to be different in the way that they make a server? I have no information about what they are exactly doing in that direction, but just seeing the trend around the industry, that is my takeaway. The game that we want to play is that we equip our datacenters with standard servers and we deploy those workloads where we need them from our TFC [Broadcast Controller] stack.”
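The “deploy workloads where we need them” idea can be illustrated with a toy placement routine. This is not how NEP’s TFC stack works – it is an invented sketch of the general pattern: each workload is assigned to the first server with enough free capacity, and the controller tracks what remains.

```python
# Toy workload placement (invented sketch, not TFC): assign each
# workload to the first server with enough free capacity units,
# decrementing the server's remaining capacity as jobs land on it.

def place_workloads(servers: dict, workloads: dict) -> dict:
    """servers: name -> capacity units; workloads: name -> units needed.
    Returns a workload -> server mapping, or raises if nothing fits."""
    free = dict(servers)  # remaining capacity per server
    placement = {}
    for name, need in workloads.items():
        for server in free:
            if free[server] >= need:
                free[server] -= need
                placement[name] = server
                break
        else:
            raise RuntimeError(f"no capacity for workload {name!r}")
    return placement

servers = {"dc1-srv1": 8, "dc1-srv2": 8}
workloads = {"vision-mixer": 6, "audio-proc": 4, "multiviewer": 3}
print(place_workloads(servers, workloads))
```

Real broadcast controllers add constraints this sketch ignores (network locality, redundancy, licensing), but the underlying first-fit idea – generic servers as a pool, workloads scheduled onto whatever has headroom – is the same.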
Pierre Mestrez. Senior Director, Software & Services, Broadcast Solutions: “Virtualization also introduces some different kinds of technical challenges. For example, there’s no clear standard between the vendors in terms of the technology underlying these virtual machines and their deployment. Software defined production is quite a catchy term, but in terms of the reality of applying it to multiple vendors’ solutions, it’s a challenge. Everybody tends to do their own thing. Then we play an important role in terms of ensuring compatibility between these different vendors’ platforms and frameworks. That’s one technical challenge for these new trends; another is probably related to the use of it – how a broadcaster can deal with such flexibility. Giving them a lot of repurposable tools gives them a wide range of possibilities and that could be a trap for them. You have all these new resources and possibilities but how will you efficiently manage them and not leave them as a wide ocean? It’s also our role to guide them in how to make better use of such technology.”
Data Center Models
We seem to have some broad consensus that we are on an evolutionary path to virtualization and software driven systems, but that for the moment, where high density processing tasks like running multiviewers or encoding or transcoding large numbers of video streams are called for, FPGA with its space, power and heat efficiency remains an important piece of the puzzle. Having that FPGA horsepower in a network addressable server as a pooled resource is also potentially part of the picture because of the flexibility this brings.
It is important to bear in mind that when our discussion thus far mentions systems within data centers, what is meant is private data centers, not the public cloud. As we will see, there can be a blend of technologies deployed within the private data centers and there is flexibility in where they are located – but there remains caution about the use of hyperscalers, aka the public cloud.
Rainer Kampe. Broadcast Solutions: “With processing in the OB you know what you have. You know how much power you need to calculate for processing. The question should be not so much can I do core processing in the cloud for an OB, but can we do add-on processing, like having a super slow mo processed in the cloud with the signal fed back to the OB. I think that’s really the way we should go because it minimizes the initial investment for the customer, and minimizes the environmental footprint in the OB van because it means less equipment, less power, less air-con. So why not? But at the moment the complete end-to-end workflow is not there, because you need a customer who accepts that this part is processed in the cloud, it’s not the high-end super motion camera on-site, and you might get the signal a bit later than you would expect. I see the possibility of having some of the processing in the cloud – not the core processing where we need to change the signals or alter the format or do corrections, but add-on processing.”
Pierre Mestrez. Broadcast Solutions: “It’s a best of breed approach where you would put the GPU, CPU and FPGA processing-intensive audio and video tasks in optimized edge devices, and then leave all the other metadata-driven value services on top – like repurposing, content indexing, analyzing, generating 9x16 content – which can be done offline, delegated to cloud services. This is, I would say, the best of both worlds.”

Antti Laurila. Chief Strategy Officer, Broadcast Solutions: “I think it’s the same not only for OBs but also for the remote hub. They don’t want to put these things into the public cloud either; it stays on prem or in a private cloud. Then it’s a local data center. But cloud, as in using Amazon or Azure, is too expensive, so they don’t put this into the cloud. It’s on the local on-prem servers.”
Dan Turk. NEP Americas: “We find that local cloud is becoming very popular. There are times when you want to use AWS or Google Cloud or Azure because you have a spike in requirement – it’s an important tool that we’re using. If it’s a show you do once a year, then AWS, Google or Azure is perfect. Spin it up, do it, and turn it off when you’re finished. You’re not paying for it the whole time. But we can also take that same software and run it in a private datacenter or on a server, and because we use it enough, it makes sense.”
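Turk’s “because we use it enough” judgement is, at heart, a break-even calculation. A minimal sketch, using invented placeholder prices rather than real AWS, Google or Azure rates:

```python
# Hypothetical break-even sketch: at what annual usage does owning
# capacity in a private datacenter beat renting it from a hyperscaler?
# All prices below are invented placeholders, not real cloud rates.

def breakeven_hours(owned_annual_cost: float, cloud_hourly_rate: float) -> float:
    """Hours of use per year above which owned capacity is cheaper."""
    return owned_annual_cost / cloud_hourly_rate

# Assumed: a private server costs 20,000/year amortized (hardware,
# power, rack space); renting equivalent cloud capacity costs 25/hour.
hours = breakeven_hours(20_000.0, 25.0)
print(f"Own the hardware if you use it more than {hours:.0f} hours/year")
```

On these assumed numbers the break-even sits at 800 hours a year: a once-a-year show stays well below it and belongs in the public cloud, while a workload that runs most weeks justifies the private datacenter – which is exactly the split Turk describes.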
Patrick Daly. VP Media Innovations, Diversified: “I think I’m seeing some capacity constraints on particular features like replay channels deployed within the trucks themselves. At certain times of the year those trucks are in high demand and we’ve utilized every one of those channels, and now we’re starting to spin up additional channels in the cloud, on hyperscaler infrastructure, oftentimes utilizing an increasing set of direct connects out of these venues into regional hyperscaler data centers.”
“It started, I think, with a desire to accommodate those bursts of capacity need for high profile events that require more channels. But with the success there with bursting capacity, and then looking at technology refresh cycles of on-premises installed replay packages, there’s a lot of talk right now in the industry of: what if we just put all of our replay in the cloud? What if all of the ingest and replay function was just there, and everything downstream of replay just hooks into the cloud, into the S3 buckets or into the EVS infrastructure? That’s kind of an interesting trend I’m seeing, but there’s no decreased demand for OB trucks. I think really it comes down to a fixed cost for a capability: if I’m producing an event, there’s peace of mind in being able to outsource that function for a known cost and just factor that into the financials.”
Conclusion
As we have seen, software running on COTS compute resource is commonplace. There are times the compute resource is in the public cloud but mostly it remains in the tightly controlled confines of private data centers.
As yet, most software services are either applications that have been lifted and shifted from dedicated compute resource to more flexible Virtual Machine compute resource, or software services used within a SaaS model – or, with a small group of pioneering vendors, even more scalable and flexible Function as a Service (FaaS) microservices running in a Kubernetes environment.
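To make the FaaS idea concrete: the defining property of such a microservice is that it is stateless, which is what lets an orchestrator like Kubernetes add or remove replicas on demand. The sketch below is purely illustrative – the request shape and function name are invented, not taken from any vendor’s API.

```python
# Illustrative sketch of a stateless FaaS-style microservice handler.
# The request/response shapes are invented for this example. Because no
# state is kept between invocations, any replica can serve any call,
# which is what makes scale-up and scale-down trivial for the platform.

def handle_transcode_request(event: dict) -> dict:
    """All context arrives in the request; nothing persists between calls."""
    source = event["source_url"]
    profile = event.get("profile", "1080p50")  # assumed default profile
    # A real service would hand the job to an encoder here; this sketch
    # just returns an acknowledgement.
    return {"status": "accepted", "source": source, "profile": profile}

request = {"source_url": "rtp://example.net/cam1", "profile": "720p50"}
print(handle_transcode_request(request))
```

The “true microservices” discussion below goes further, asking how such functions might exchange media streams directly rather than just control messages – which is where the open timing questions arise.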
The flexibility of these new approaches to provision of compute resource is enabling innovative approaches to sports production infrastructure that are proving their worth to rights holders and broadcasters.
We did get into discussion with some of our panellists about true microservices architecture – where microservices might exchange streams at a server level within a new asynchronous timing framework – but that currently remains a leading edge too far for this discussion and the subject of much ongoing developmental work by standards bodies, pioneering vendors and system designers.