The Interactive Rights Technology Ecosystem: Part 2

As we continue our exploration of the new frontier of Interactive Rights, we delve deeper into its technology ecosystem, examining the required functionality and the components that deliver it.
The technology stack for Interactive Rights (IR) begins with massive data input, some of which must be carefully curated before it can be ingested into the IR Data Engine. As described in Part 1 of this article, the IR Data Engine focuses on data source management and data ingest for all data types that the sports content owner wants to bring into the interactivity domain. The next step in the Interactivity Experience Workflow (see Figure 1) is to run the ingested data through the Presentation Decision Engine.

Figure 1: The Interactivity Experience Workflow shows how the input data is ingested, analyzed, and aligned with the policies and rules to make decisions on what to present to viewers.
The Presentation Decision Engine
This second step towards deciding the interactivity experience offered to the viewer combines User Behavior Data (the set of captured data that describes how viewers have and have not interacted with the interactive offerings presented to them) with Compliance Data (the set of rules about what interactive experiences can/cannot and should/should not be presented to an individual viewer). The output is a personalized interactivity opportunity for each viewer.
- User Behavior Data
The first data set in the interactivity technology stack is User Behavior Data, which has the goal of enabling interactivity to be personalized uniquely to every viewer, even without collecting personally identifiable information.
This data set is captured entirely through observation of viewer behavior. While it is possible to ask viewers for their preferences directly, the prevailing view is that this approach is too static for the dynamism of Interactive Rights, too invasive for viewers, too simplistic a way to identify preferences, or simply not effective enough at understanding what viewers really want. Direct observation is managed by tracking a viewer’s on-screen interactions, supported by AI as shown in Figure 1. Which interactive opportunities a person does and does not interact with is sufficient information to build a picture of their preferences. That picture drives what is presented to each viewer, and when and how it is done.
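As a simple illustration of the principle, the Python sketch below shows how anonymous interaction events could be rolled up into a per-viewer preference picture, keyed by an opaque viewer token rather than any personally identifiable information. The field names and event types are hypothetical and are not drawn from any particular vendor’s system.

    from collections import Counter
    from dataclasses import dataclass, field

    @dataclass
    class InteractionEvent:
        viewer_key: str   # opaque device/session token, not PII
        offer_type: str   # e.g. "trivia", "bet", "e-commerce", "advert"
        action: str       # "shown", "clicked", "dismissed", "completed"

    @dataclass
    class PreferenceProfile:
        shown: Counter = field(default_factory=Counter)
        engaged: Counter = field(default_factory=Counter)

        def record(self, event: InteractionEvent) -> None:
            # Count what was offered and what the viewer actually acted on.
            if event.action == "shown":
                self.shown[event.offer_type] += 1
            elif event.action in ("clicked", "completed"):
                self.engaged[event.offer_type] += 1

        def affinity(self, offer_type: str) -> float:
            # Rough engagement rate for one offer type (0.0 if never shown).
            shown = self.shown[offer_type]
            return self.engaged[offer_type] / shown if shown else 0.0

    profile = PreferenceProfile()
    profile.record(InteractionEvent("anon-123", "trivia", "shown"))
    profile.record(InteractionEvent("anon-123", "trivia", "clicked"))
    print(profile.affinity("trivia"))  # 1.0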
While viewer observation is central to the Interactive Rights ecosystem, many interactivity events are still triggered centrally so they are consistent for every viewer. Broadcast advertising “push” principles still apply, helping advertisers reach large audiences. For instance, when a goal is scored, every viewer may receive the same interactive offer.
But many offers are customized specifically to a viewer, often tied to content that suggests something to that viewer, for example an e-commerce offer, a game, an advert, or a bet. If the Interactivity Data (see Part 1) and User Behavior Data give choices of what to present to a viewer, then the AI has the job of selecting the content that will deliver the best value exchange to each viewer according to the Compliance Data. For example, with betting, only some viewers see bets at all; of the available bets, only a small sub-set is presented to a given viewer; and the AI learns how viewers respond, engage, and interact over time, adjusting its presentations accordingly. In another example, with e-commerce, if a viewer has bought a jersey in a previous transaction, the AI may present a hat or scarf, or perhaps an alternative jersey that fits the viewer’s profile.
The AI will learn which transactions are more and less engaging for viewers. It will learn which viewers never click on bets but always click on games. And once a viewer starts interacting, the AI learns quickly. This knowledge drives how frequently different interactive offers are promoted to each viewer.
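A minimal sketch of how that learned engagement could drive promotion frequency is shown below, assuming affinity scores between 0 and 1 like those built in the previous sketch. The exploration step simply ensures that offer types a viewer has never seen still surface occasionally; a production system would use far richer models, and all names here are illustrative only.

    import random

    def rank_offers(affinities: dict, candidates: list, explore_rate: float = 0.1) -> list:
        # Mostly promote what the viewer has engaged with before ("exploit"),
        # but occasionally score an offer at random ("explore") so new
        # offer types still get a chance to be shown.
        def score(offer_type: str) -> float:
            if random.random() < explore_rate:
                return random.random()
            return affinities.get(offer_type, 0.0)
        return sorted(candidates, key=score, reverse=True)

    # A viewer who never clicks on bets but often plays trivia games.
    print(rank_offers({"trivia": 0.6, "bet": 0.0}, ["bet", "trivia", "e-commerce"]))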
While rules trigger general content and AI triggers individual content, there is still a long way to go in refining the viewer experience. Various improvements will emerge in the years ahead, addressing the following types of current issues:
- How to best understand and address an individual viewer across multiple sports they watch when there are different sports content owners and Interactive Rights engines.
- How to manage interactivity in public viewing settings such as bars and theatres.
- How to manage the Web experience versus the TV experience for viewers, when the Web experience is being used as a group-viewing method (e.g., via a projector).
- Compliance Data
This data set is driven by the sports content owner, sometimes in conjunction with the broadcast rights holder, and sometimes by regional media regulators. Essentially, this data set is a list of rules broken down into two groups – the Content Compliance rules and the Display Compliance rules.
Content Compliance rules define what interactive content can be presented to viewers. For example, they can state how many times a trivia game can be presented during an event, how many times a betting opportunity can be presented to a single viewer, and which types of viewers cannot have bets presented to them at all. They also include which brands can and cannot be displayed on screen during the event, based on advertising arrangements. Finally, they set the pace of interactivity presentation, defining how many interactive moments can be presented in the early stages of an event versus the middle and latter stages.
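To make the idea concrete, the sketch below expresses a few Content Compliance rules of this kind as a simple check. The rule fields, frequency caps, and brand names are illustrative assumptions rather than a real rule schema.

    from dataclasses import dataclass

    @dataclass
    class ContentRules:
        max_per_event: dict    # frequency caps per offer type
        betting_blocked: bool  # e.g. this viewer type cannot be shown bets
        allowed_brands: set    # brands cleared under advertising arrangements

    def content_allowed(rules: ContentRules, offer_type: str,
                        shown_so_far: int, brand: str) -> bool:
        if offer_type == "bet" and rules.betting_blocked:
            return False
        if shown_so_far >= rules.max_per_event.get(offer_type, 0):
            return False
        return brand in rules.allowed_brands

    rules = ContentRules(max_per_event={"trivia": 5, "bet": 10},
                         betting_blocked=True,
                         allowed_brands={"SponsorA", "SponsorB"})
    print(content_allowed(rules, "bet", 0, "SponsorA"))     # False: no bets for this viewer
    print(content_allowed(rules, "trivia", 2, "SponsorB"))  # True: under the cap, brand cleared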
Display Compliance rules define how the interactive experience can be presented to viewers. Given the program content itself is at the core of the viewer experience, the first group in this rule-set relates to where on the screen an interactivity element can be shown.
The second rule group is about the format of presentation. For instance, must a particular interactive element be presented on the feed (an overlay) or around the feed (a squeeze back)? Whether overlaid or squeezed back, the size of the element must also comply with the rules.
The third rule group focuses on presentation branding. This ties all aspects of the interactivity component to the design and color schemes that the sports content owner has defined, which can be associated with: a) the league/tournament, b) the team, c) the athlete, or d) the sponsors.
The fourth rule group defines when interactive opportunities can be presented. Because Interactive Rights are about contextual timing during the sports event, this is different from the frequency rules that form part of the Content Compliance rules. Contextual timing is about presenting information at the appropriate moments relevant to each sport’s natural flow. For instance, the norm would be to avoid presenting a trivia question during a point in tennis, or to avoid overlaying a bet while a player runs up to take a penalty.
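The sketch below gathers the four Display Compliance rule groups (position, format and size, branding, and contextual timing) into a single illustrative check. Zone names, size limits, and theme identifiers are hypothetical.

    from dataclasses import dataclass

    ALLOWED_ZONES = {"overlay": {"lower-third"}, "squeeze-back": {"side-panel"}}
    MAX_SIZE_PCT = {"overlay": 0.15, "squeeze-back": 0.30}
    APPROVED_THEMES = {"league-2025", "team-home", "sponsor-a"}

    @dataclass
    class DisplayRequest:
        zone: str            # where on screen the element would appear
        format: str          # "overlay" or "squeeze-back"
        size_pct: float      # share of the screen area requested
        theme: str           # branding/design scheme to apply
        in_live_play: bool   # e.g. during a tennis point or a penalty run-up

    def display_allowed(req: DisplayRequest) -> bool:
        return (req.zone in ALLOWED_ZONES.get(req.format, set())
                and req.size_pct <= MAX_SIZE_PCT.get(req.format, 0.0)
                and req.theme in APPROVED_THEMES
                and not req.in_live_play)

    print(display_allowed(DisplayRequest("lower-third", "overlay", 0.10,
                                         "league-2025", in_live_play=False)))  # True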
After processing all data inputs and rules, the Presentation Decision Engine decides in real time which interactivity elements should be promoted to a viewer. Given the scale, dynamism, and real-time nature of this decisioning, the decision is an AI choice, which is refined over time.
The User Interface
The User Interface element in the technology stack focuses on how interactive content is displayed on the main screen using overlays, squeeze backs, and more, and how second screens are managed in terms of the interactivity experience.
The Decisioning process described previously drives all the required content onto the viewer’s screen. A push message is sent from the Presentation Decision Engine to the User Interface system, referencing the required images, text, and video, along with the relevant positioning, branding, exposure duration, etc.
The interactivity presentation on the screen is determined in multiple ways. Some devices have their own logic for how to present on their screens according to the available screen “real estate” (e.g., proprietary set-top boxes), while others use standard technology and formats (e.g., Android-based set-top boxes). Some devices are memory-weak (including some set-top boxes, older TVs, and simpler mobile devices) and cannot receive certain “heavy” media objects, such as dynamic graphics, onto their screens. These weaker devices force the technology workflow to render much of the visual content on a server rather than on the device itself, which adds cost to providing the service but ensures a seamless and complete viewing experience.
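A minimal sketch of what such a push message might carry, and how the render location could be chosen for memory-weak devices, is shown below. The message fields, thresholds, and asset names are illustrative assumptions, not a published interface.

    import json

    def build_push_message(offer_id: str, device_profile: dict) -> str:
        # Weak devices (low memory, no dynamic-graphics support) get content
        # pre-rendered on a server; capable devices render it themselves.
        server_side = (device_profile.get("memory_mb", 0) < 512
                       or not device_profile.get("supports_dynamic_graphics", False))
        message = {
            "offer_id": offer_id,
            "assets": {"image": "promo_01.png", "text": "Answer to win!", "video": None},
            "position": {"zone": "lower-third", "size_pct": 0.12},
            "branding": {"theme": "league-2025"},
            "exposure_seconds": 8,
            "render": "server" if server_side else "device",
        }
        return json.dumps(message)

    print(build_push_message("offer-42", {"memory_mb": 256}))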
The User Interface domain is driven by best practices to make all elements user-friendly and appealing, such as choice of fonts, colors, contrast, etc. Accessibility compliance is a critical subject for broadcasters, so accessibility capabilities must be available inside the UI, including high-contrast graphics/colors, large text, and more.
The Financial Engine
All the work described so far is aimed at monetizing the content by engaging viewers. Engaged viewers are those who interact with the content presented to them. The Financial Engine is therefore the final, and arguably most important, part of the technology ecosystem of Interactive Rights. Without it, engagement with viewers remains in the less targeted world of marketing and brand awareness, instead of the hyper-targeted world of sales and revenue generation that is the overarching commercial goal.
The Financial Engine is a clearinghouse and accounting system that tracks, reports, and executes every financial transaction related to the Interactive Rights. The clearinghouse contains a set of parameters that describe the nature and terms of the revenue split that is contractually agreed between the commercial parties involved. The parameters can be determined in multiple ways. For instance, if the revenue pot is $10 million, then it could be split as an equal percentage per party, or split based on active users per stakeholder, or split based on view-time per service provider, or in other ways. Rules can often vary between the primary screen and the second screen, because the primary screen is typically for the rights holder of sports content while the second screen is currently the domain of the content owner itself via their own website or app. With the different stakeholder groups in each domain, it is possible to apply different rules.
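As an illustration, the sketch below splits a notional revenue pot using the three example methods mentioned above (an equal share, active users, or view-time). Stakeholder names and figures are purely hypothetical.

    def split_revenue(pot: float, stakeholders: dict, method: str) -> dict:
        # "equal": every party gets the same share.
        if method == "equal":
            share = pot / len(stakeholders)
            return {name: share for name in stakeholders}
        # "active_users" or "view_time": split pro rata on the chosen metric.
        total = sum(metrics[method] for metrics in stakeholders.values())
        return {name: pot * metrics[method] / total
                for name, metrics in stakeholders.items()}

    stakeholders = {
        "content_owner": {"active_users": 400_000, "view_time": 1_200_000},
        "rights_holder": {"active_users": 350_000, "view_time": 900_000},
        "distributor":   {"active_users": 250_000, "view_time": 600_000},
    }
    print(split_revenue(10_000_000, stakeholders, "active_users"))
    # {'content_owner': 4000000.0, 'rights_holder': 3500000.0, 'distributor': 2500000.0}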
Each content owner decides, with input about best practices from its clearinghouse partner, which rules will apply to the revenues that are to be split. This forms the basis of the negotiation with the other commercial parties. The overriding principle is to be fair to all parties, encouraging maximum collaboration to maximize revenue generation, but the number of rules applied to different interactivity experiences can make revenue splits very dynamic. How these rules are devised will be the subject of a following article in this series.
The data captured by the Financial Engine provides key information to the content owner and their partners that informs future rights negotiations and changes to revenue split rules. The data provides insight into consumer engagement with the content and interactivity offerings, shows how interactivity affects watch-time, and shows how this all affects revenues. It also shows the breakdown of data by the content value chain that each viewer is using (e.g., the channel + primary service provider + secondary service provider where the content is onward distributed). The key point is that all parties can see the data, from the content owner to rights holders to distributors to the advertising brands.
Maximizing The Benefits Of Interactivity
Data, Decisioning, Presentation, Financial Tracking. The Technology Ecosystem of Interactive Rights has these four important tasks to perform. The number and type of data inputs dictate the universe of possible interactivity experiences for a viewer, and this is expected to grow and adapt as the success of different types of interactive experiences is judged over time.
It is intriguing to consider whether the content itself will be superseded by the gaming or social elements offered via Interactive Rights, based on the popularity of viewing vs. interacting. Today’s expectation, at the core of the business case for Interactive Rights, is that people will watch more content and pay more attention to the game action because of the interactivity offered. In addition, interactivity is expected to become more sophisticated through improved graphics, immersive viewing formats, micro-targeting, and more social capabilities, and simpler through user experience innovations, such as the use of seamless wallets.
The role of AI in making decisions is critical. The scale, dynamism, and real-time nature of interactivity, particularly in live sports, requires full automation and real-time decisioning. And yet human input, to carefully curate the gamification to ensure it is sufficiently accurate and entertaining, is also critical.
As data sources grow and adapt to viewers’ preferences, as interactivity takes a more important role in the viewer experience of live sports, and as the business benefits of Interactive Rights are realized, the technology stack will need to meet the necessary KPIs for reliability, scalability, and latency to keep up with sports audiences. Overarching all of this, however, is how the financial transactions are tracked, and how they inform content owners’ decisions as they continually shape their rights for maximum financial return.