The IP Showcase is a highly anticipated annual event at the NAB Show in April, bringing together many companies with complementary IP technology and spotlighting “real world” applications built with third-party products. Attendees value it for the hands-on look at how IP infrastructure can be set up and managed.
There’s a terrible tendency in cinematography to concentrate too much on the technology, overlooking the creative skills that often make a huge contribution. In the last two pieces of this series we went into some detail on the historical background to current camera technology. In this final piece on the art and science of sensors and lenses, we consider what difference all of this makes in the real world.
The Super Bowl may not be the most watched sporting event in the world, but it remains a showpiece for US broadcasting where the latest technologies and innovations in coverage are on display.
When Genelec began developing its GLM software about 20 years ago, many customers simply took their new audio monitors out of the box, plugged them in and started using them, often without even setting the DIP switches or considering proper set-up. Genelec knew it had a problem.
In Part 2 we looked at ways to keep AoIP systems simple and discussed the compromises and efficiencies that vendor-specific network systems provide. In Part 3, we look further into system management and network security.
Computer systems continue to dominate the landscape of broadcast innovation, and the introduction of microservices is having a major impact on the way we think about software. Microservices not only deliver improved productivity through more efficient workflow solutions for broadcasters, but also help vendors work more effectively to further improve the broadcaster experience.
Every digital audio workstation — even the free ones — comes with a set of plugins for processing audio. Most of us forget about them, concluding that quality audio processing requires spending big money on name-brand plugins endorsed by well-known names. Surprise! What you already have might do the job, and do it well.
Computer game apps read compressed descriptions of an artificial world from a disk file. The CPU regenerates this artificial world and loads it into the GPU, which displays it to the gamer. The gamer’s actions are fed back to the CPU, which dynamically modifies the artificial world the GPU displays.
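The loop described above can be sketched in a few lines. This is a minimal illustration, not any real engine’s code: the world format, the `load_world`/`apply_input`/`render` names, and the use of `zlib`/`json` as a stand-in for a game’s asset compression are all hypothetical, and `render` merely stands in for what a GPU would draw.

```python
import json
import zlib

def load_world(compressed: bytes) -> dict:
    """CPU step: regenerate the artificial world from its compressed on-disk form."""
    return json.loads(zlib.decompress(compressed))

def apply_input(world: dict, action: str) -> dict:
    """Feedback step: the gamer's action dynamically modifies the world."""
    if action == "move_east":
        world["player"]["x"] += 1
    return world

def render(world: dict) -> str:
    """Display step: stand-in for the GPU turning world state into a frame."""
    p = world["player"]
    return f"frame: player at ({p['x']}, {p['y']})"

# A tiny 'disk file': the world description, compressed like a game asset.
asset = zlib.compress(json.dumps({"player": {"x": 0, "y": 0}}).encode())

world = load_world(asset)                 # CPU regenerates the world
world = apply_input(world, "move_east")   # gamer input feeds back into it
print(render(world))                      # the modified world is displayed
```

Each pass through the last three lines corresponds to one frame of the loop: regenerate or update the world on the CPU, fold in player input, then hand the result off for display.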