Taking graphics to the next level
Image courtesy hdw.eweb4.com.
As graphics technology improves, it becomes more difficult to tell the difference between real and software-generated imagery.
Graphic elements serve many purposes. The line between content and promotion has become so blurred that consumers confuse the two, and that is intentional. Viewers tend to lower their critical filter for content; the trick is to keep it lowered for advertising and promotion as well.
One thing we can do is analyze the graphics in content (the structure of graphical elements is quite different from that of real-world objects) and apply similar templates (colors, fonts, transparency, etc.) to our advertising or promotional material. This works well for sporting events, but it raises interesting intellectual-property questions when we use music and graphics similar to an existing property (“Who Wants To Be A Millionaire”) to sell our soap.
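As a minimal sketch of the first step, matching promotional material to the look of content: the snippet below extracts the dominant colors from a frame so they can be reused in a promo template. It assumes frames arrive as simple lists of RGB tuples; `dominant_colors` is a hypothetical helper, not part of any vendor's toolkit.

```python
from collections import Counter

def dominant_colors(pixels, n=3, bucket=32):
    """Quantize RGB pixels into coarse buckets and return the n most
    common colors (as bucket-center values) for use as a palette.
    `pixels` is a list of (r, g, b) tuples, channels 0-255."""
    counts = Counter(
        (r // bucket, g // bucket, b // bucket) for r, g, b in pixels
    )
    # Map each winning bucket back to its center so it is a usable swatch.
    return [
        tuple(c * bucket + bucket // 2 for c in key)
        for key, _ in counts.most_common(n)
    ]

# A mock "frame": mostly team blue with some white graphics.
frame = [(10, 20, 200)] * 80 + [(250, 250, 250)] * 20
print(dominant_colors(frame, n=2))  # [(16, 16, 208), (240, 240, 240)]
```

A real pipeline would sample frames from the program feed and feed the resulting palette into the template engine's color parameters.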
On-air graphics engines are the key to this kind of functionality. Combining data mining with real-time pixel manipulation, these systems have changed our expectations of how news and sports should be presented. The results can be found in all types of non-drama programming, from dancing to surviving.
In the last few years a whole ecosystem has grown up around these systems. Companies specializing in template design build elements to match a show's or station's needs. Modular framework applications allow for connecting and applying logic to real-time data streams.
Social network filtering algorithms allow tweets and likes to be quantified and graphically represented. What added value do these systems offer? In the background is a script-driven 3D renderer with a database and a set of SQL statements taking advantage of open APIs to existing data sources. The size of the market has brought many vendors into the field, so while “roll your own” may be an option, today it just does not make sense. Applications range from the simple graphics any station would use in the daily lineup to the Emmy-nominated real-time interactive game show “Web vs Promi”.
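To make the "database plus SQL statements" part concrete, here is a minimal sketch of quantifying social mentions for a graphic. The table layout, names, and weights are invented for illustration; a production system would populate the table from a social API rather than hard-coded rows.

```python
import sqlite3

# Hypothetical mentions table: one row per tweet or like captured upstream.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE mentions (contestant TEXT, weight REAL)")
db.executemany(
    "INSERT INTO mentions VALUES (?, ?)",
    [("Anna", 1.0), ("Anna", 1.0), ("Ben", 1.0), ("Anna", 0.5)],
)

# Aggregate per contestant and normalize to 0..1 so the renderer
# can scale bar heights directly from these values.
rows = db.execute(
    "SELECT contestant, SUM(weight) FROM mentions GROUP BY contestant"
).fetchall()
peak = max(total for _, total in rows)
bars = {name: total / peak for name, total in rows}
print(bars)  # {'Anna': 1.0, 'Ben': 0.4}
```

The `bars` dictionary is exactly the kind of data a templated on-air graphic binds to: the SQL does the quantifying, the render engine does the drawing.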
The graphics engine software provided with most channel-in-a-box systems can provide all the functions required for a normal programming day, but getting the most out of it requires talent and knowledge. You may be better off outsourcing the design and programming tasks to one of the many firms providing these services.
The hardware behind all this is, in most cases, an off-the-shelf graphics card from either AMD or Nvidia, so all manufacturers start from the same point. These GPUs vary in price from $200 to around $1K, a small percentage of the total price of a channel-in-a-box solution.
There are two parts to developing state-of-the-art graphics: programming and hardware. Real-time photorealistic rendering at 4K 120fps remains somewhere in the future, but the hardware is getting better every day, and gaming engines are driving the development.
Lucasfilm, according to chief technology strategy officer Kim Libreri, has begun utilizing video game technology for feature production. "We think that computer graphics are going to be so realistic in real time computer graphics that, over the next decade, we'll start to be able to take the post out of post-production; where you'll leave a movie set and the shot is pretty much complete," Libreri said (BAFTA 2013). Broadcast graphics has always been about “real time”, currently 30fps at 1920x1080, so the question becomes: what is possible, and how can we integrate it into our programming?
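A quick back-of-the-envelope calculation shows how far today's broadcast baseline is from the photorealistic 4K 120fps target mentioned above, in raw pixel throughput alone:

```python
# Raw pixel rate: today's broadcast baseline vs the future target.
baseline = 1920 * 1080 * 30    # 1080p at 30 fps
target = 3840 * 2160 * 120     # 4K at 120 fps

print(baseline)           # 62208000 pixels/s
print(target)             # 995328000 pixels/s
print(target / baseline)  # 16.0 -- a 16x jump in pixel rate
```

And that factor of 16 ignores the far larger per-pixel cost of photorealistic shading, which is where the real gap lies.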
Replacing the actor with software
Before you conclude that actors are not replaceable, take a look at this video. You'll be hard-pressed to tell that's not a real live human head. The image is rendered on an NVIDIA GTX Titan card and shown on the Maximum Resolution Games View YouTube channel, which specializes in high-res imagery and has plenty of examples of software looking remarkably realistic.
The ability to create realistic avatars opens up many possibilities, but these are not yet off-the-shelf solutions. Mastering the engineering challenges implicit in a new programming paradigm, i.e. real-time interaction with user-generated content and live avatars, will be rewarded with market success.
Broadcasters have the unique advantage of being able to reach a large audience simultaneously in a cost effective manner. Giving viewers the possibility to interact will require simultaneous participation, thus playing to broadcast strengths. Creating the parameters of a live event without the overhead is good for everybody in the broadcast value chain.
Looking down the road for the next year or so, Thomas Molden, a player in automated graphics from day one, when Computersports presented the first working virtual studio, sees some trends to keep an eye on. “Second and third screens are going to be major users of data-generated graphics. The individual user profiles available will enhance the first screen experience.”
These social interactions make broadcasting a must if the viewer wants to be part of the action. Thomas also sees lots of opportunities in using the intelligence at the display device. “Generating graphic overlays at the display will allow for individualization even on the first screen.” Would you send a clean feed or reserve certain parts of the picture for additional information?
Thomas suggests a third option. “Screens get bigger and support higher resolutions, think 4K. These displays have little or no content in native resolution. Why not use the extra resolution and screen space for additional information instead of just blowing up the HD feed?” New televisions and STBs have the required intelligence on board, so this sounds like a really good idea to give the early adopters some added value. The lines between “content” and “graphics” are going to disappear!
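The geometry of that third option is easy to sketch. The function below (a hypothetical layout helper, not any STB's actual API) places an HD feed pixel-for-pixel on a UHD canvas and reports the sidebar region left free for locally generated overlays:

```python
def hd_in_uhd_layout(canvas=(3840, 2160), feed=(1920, 1080)):
    """Center an HD feed 1:1 (no upscaling) on a UHD canvas.
    Returns the feed rectangle and the column to its right that
    remains free for data-driven overlays; rects are (x, y, w, h)."""
    cw, ch = canvas
    fw, fh = feed
    x, y = (cw - fw) // 2, (ch - fh) // 2
    feed_rect = (x, y, fw, fh)
    sidebar = (x + fw, 0, cw - (x + fw), ch)
    return feed_rect, sidebar

feed_rect, sidebar = hd_in_uhd_layout()
print(feed_rect)  # (960, 540, 1920, 1080)
print(sidebar)    # (2880, 0, 960, 2160)
```

In other words, a 4K panel carrying an unscaled HD feed has a 960-pixel-wide, full-height column available on each side, plenty of room for stats, social feeds, or individualized graphics rendered at the display.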