The Last Mile Is The Hardest—Getting To Live IP

It is no secret that the media industry is moving towards an IP infrastructure. Far less clear is how much confusion the transition will cause. Media expert Gary Olson offers some guidance on this important evolution.

The last mile is the hardest in almost every area of technology. In telecom, getting services into the business or residential premises is the toughest stretch. In both software and hardware development, finishing the code or assembling all the components into a completed product is the hardest part of the project.

Now let’s apply this to the transition to IP. In my NAB musings I questioned how we can accept file-based workflows, metadata, file acceleration, cloud services and the introduction of SDN (Software Defined Networking), yet in the same breath industry leaders still say that IP is a few years out. It has all been IP for quite some time now, except for live production. That task is the last mile.

Confusion is still front of mind

There are a few initiatives starting to appear, and just as with most of the transition to IP, they are adding a touch of confusion. Handling time and sync seems to be the current speed bump to getting IP direct from a camera to a server and then recorded. A similar issue involves taking an IP stream from the camera directly into a production switcher and getting seamless switching between sources.

Then we have the challenge of intercutting between SDI and IP. We managed to figure it out going from analog to digital and from SD to HD. Can we do it here?

SMPTE has the ST 2022 family of standards for IP media creation and transport based on MPEG-TS. In addition, there are SMPTE ST 2059-1 and ST 2059-2: one covers a Precision Time Protocol (PTP) replacing timecode, the other a timing (sync) reference to replace genlock. Oops, not replace; I mean a next-generation time reference and a next-generation timing (sync) reference for “frame” accurate production switching. Here again, the intent is to have IP standards that can integrate live streams with SDI.

These will be layers (in the OSI seven-layer model sense) within the IP stream, part of the encapsulated package of audio, video, metadata, control and communications.
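
To make the time piece concrete: because the ST 2059 pair aligns every signal to the same PTP epoch, any device with a PTP-locked clock can work out where the current frame boundary falls without a separate genlock feed. The sketch below is a minimal illustration of that arithmetic, assuming frames are simply counted from the epoch at a fixed rate; it is not a compliant ST 2059 implementation.

```python
from fractions import Fraction

# Minimal illustration (not ST 2059 compliant): given a PTP time in seconds
# since the epoch and a frame rate, find how far into the current video
# frame that instant falls. With a common epoch, every device gets the
# same answer, which is what frame-accurate switching relies on.

def frame_phase(ptp_seconds: Fraction, frame_rate: Fraction) -> Fraction:
    """Return the offset, in seconds, into the current frame."""
    frames_since_epoch = ptp_seconds * frame_rate
    whole_frames = int(frames_since_epoch)          # complete frames so far
    return (frames_since_epoch - whole_frames) / frame_rate

rate = Fraction(30000, 1001)                        # 29.97 fps
t = Fraction(1_700_000_000_123_456, 1_000_000)      # an arbitrary PTP timestamp
print(float(frame_phase(t, rate)))                  # seconds into the current frame
```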

One of the new processes in IP workflow is orchestration. This is next-generation automation: it controls the movement of files and streams throughout the core infrastructure, directing content to the correct device or system and issuing the command structure that tells the system which functions to perform.
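
As a rough picture of what an orchestration command might carry, the snippet below builds a hypothetical transfer job: the orchestrator names the content, the operation and the target system. The field names and URIs are illustrative only and are not drawn from any particular orchestration product.

```python
import json

# Hypothetical orchestration command: direct a piece of content to the
# correct system and tell it what to do. Field names are illustrative.
job = {
    "job_id": "0001",
    "operation": "transfer",                          # move a file or stream
    "source": "storage://production/clip42.mxf",
    "destination": "playout://channel-1/ingest",
    "priority": "high",
    "on_complete": "notify://mcr",                    # hypothetical completion hook
}

# A real orchestrator would submit this to the target system's API;
# here we only serialize it to show the shape of the command.
print(json.dumps(job, indent=2))
```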

Typical IP network with multiple clients and storage. In this case, one network (top half) serves the business sector; the other (bottom half) is exclusively reserved and protected for content tasks.

There is a joint task force between the EBU and AMWA, with considerable participation from most of the industry vendors, to create a standard (protocol) that all devices and systems will recognize as a command structure. This is the Framework for Interoperable Media Services, more familiarly known as FIMS. Think of it as RS-422 for the IP world. This is great: everyone working together on a standard so devices can communicate with each other. Call it an IP version of “video out to video in” built on the SMPTE standards.
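
As a loose illustration of the FIMS idea, the sketch below shows what a vendor-neutral transfer request might look like when submitted over HTTP: the point is that the same command structure would be understood by any compliant device. The endpoint URL and payload fields here are hypothetical and are not taken from the published FIMS schemas.

```python
import json
import urllib.request

# Hypothetical, FIMS-flavored transfer request: the same payload could be
# posted to any vendor's service that speaks the common protocol.
transfer_request = {
    "resourceID": "urn:example:job:123",
    "profile": "file-transfer",
    "source": "smb://ingest-server/watchfolder/clip42.mxf",
    "destination": "smb://playout-server/today/",
}

req = urllib.request.Request(
    "http://media-service.example.com/api/transfer",   # hypothetical endpoint
    data=json.dumps(transfer_request).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)   # would submit the job if such a service existed
```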

SONY recently announced an encapsulating methodology (protocol) that addresses the transport of media and the timing reference. It takes the form of a chip set they propose embedding in all devices and systems to create the IP stream, giving true IP interoperability between streams.

According to conversations with the EBU, FIMS will work together with the SONY technology: SONY’s chip set encapsulates and transports the media, while the EBU FIMS protocol enables devices and systems to understand what it is and what to do with it. The result would be the EBU FIMS initiative’s IP version of “video out to video in” based on the SMPTE standards.

We need a solution for live content

Playout systems are splicing, grooming and layering every day, placing interstitial content, banners, lower thirds and snipes on all forms of file-based programming on the air. Online streaming services seem able to cut seamlessly between program and commercials every time I try to scrub ahead in a show, and in fact they have been doing it for a while. Is doing the same with a full-resolution IP stream that much more daunting?

No matter the internal signal format, live production requires a familiar production GUI, like that provided by this Grass Valley Karrera production switcher.

At each technology transition point there were little boxes that solved all of these problems. We have A/D, D/A, frame syncs, up/down converters, transcoders and transmuxers. It’s hard to have a conversation that doesn’t include API or XML. We now have middleware and all kinds of new tools to integrate these systems. Why should live production be treated as a different problem? If all content is converted to a stream before it enters the production switcher, then there is no need to intercut between SDI and IP. Even now, SDI is encoded to IP for file recording, distribution and transport.
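
To illustrate that "convert everything at the edge" argument, here is a conceptual sketch of a gateway that chops one frame of SDI data into sequenced UDP datagrams, so that downstream the switcher only ever deals with streams. It is purely illustrative, not an SMPTE 2022-6 compliant packetizer; the destination address, payload size and toy header are all assumptions.

```python
import socket
import struct

PAYLOAD = 1376                      # payload size that fits a 1500-byte MTU
DEST = ("127.0.0.1", 5004)          # a real gateway would use a multicast group

def send_frame(sock: socket.socket, frame: bytes, seq: int) -> int:
    """Split one frame's worth of bytes into sequenced UDP datagrams."""
    for offset in range(0, len(frame), PAYLOAD):
        header = struct.pack("!HI", seq & 0xFFFF, offset)   # toy header: sequence + offset
        sock.sendto(header + frame[offset:offset + PAYLOAD], DEST)
        seq += 1
    return seq

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
seq = send_frame(sock, bytes(64_000), seq=0)   # dummy buffer standing in for one SDI frame
```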

At NAB, according to comments posted in one of the social network discussion groups, SONY and Grass Valley informed those allowed past the double secret handshake and code-word authentication that they would be showing IP direct from a camera and IP direct into a production switcher soon. Could that be IBC soon, CCW soon, NAB 2016 soon, or just coming soon? During my own NAB research, a few server vendors were asked if they could ingest an IP stream directly and create a file. The answers were more often no than yes, yet the response was almost uniform: it could easily be done if anyone wanted it or even asked.

We can create, transmit and transport IP streams, and have been doing it for a while. We have published and accepted standards. We have the test, measurement and monitoring technology. The IP networks can support it in both bandwidth and performance.

Should this last mile be this hard?

Follow Gary Olson in his IP tutorial series "Smoothing the Rocky Road to IP"

The Anatomy of the IP Network, Part 1

The Anatomy of the IP Network, Part 2

The Anatomy of the IP Network, Part 3

Changing Technology: Panacea or Pandemonium
