The Future of Media Archives: Managing Media Across Time and Space
Policy-based workflows, not file-based workflows, will move content where it needs to go next.
Media storage is not just about meeting today's needs for size and speed. It's also about being able to access that content tomorrow, or 20 years from now.
Ever since video was first stored on hard disk drives some three decades ago, media operations have wanted faster, bigger, and cheaper storage. Fortunately, we've finally reached a point in the evolution of storage where faster, larger, and more cost-effective solutions are available.
Fast — With all the talk about 4K and 8K resolutions, high dynamic range, high frame rate, and so on, demand for speed is clearly at a new high. Fortunately, storage arrays and controllers have no problem delivering the gigabytes per second of throughput needed for multiple streams, even at the heaviest of these uncompressed rates.
Big — The concept of big storage got a whole lot bigger a few years ago when social media overtook professional and broadcast media as the ultimate warehouse for our cultural heritage. The capacity of broadcast and studio media repositories is now small compared to the tens of petabytes employed at photo- and video-sharing sites, from Flickr to Facebook to YouTube. So when it comes to supporting large production and distribution environments, “big” is no longer an issue.
Cheap — A terabyte costs less than 1/1000th what it did 15 years ago. And it will only get cheaper.
Clearly fast, big, and cheap are no longer the problems. So what is on everyone’s storage wish list today?
Storage Management: The New Cost Conundrum
These days, instead of worrying about the price per terabyte, the big expense on the radar is the cost of storing the media over time and making sure it's accessible and usable indefinitely. In other words, it's not just storing it today, but keeping it for tomorrow—especially in the case of broadcasters and studios, for whom "tomorrow" could mean a decade or even a century from now. Consider this: "Star Wars" opened almost 40 years ago. The original "20,000 Leagues Under the Sea" was released 100 years ago. How can we affordably manage content for that kind of longevity?
The actual location of stored content should be invisible to users. And storage needs to be more than just a place to manage content during the creation phase. A proper solution needs to provide accessibility—forever.
Object Stores: The Key to Affordable Long-Term Storage Management
Enter object stores. Unlike file systems, object stores group files and their metadata into objects that can be coherently accessed by different media asset management systems, file-delivery systems, and other production and distribution applications. They are designed to support multiple storage sites connected by a wide area network. As content ages, policies can automatically move content to tape or the cloud, eliminating the need to manually move or delete thousands or millions of files.
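To make that idea concrete, here is a minimal sketch of writing an asset and its descriptive metadata to an S3-compatible object store as a single object, using Python and the boto3 SDK. The endpoint, bucket, key, and metadata fields are all hypothetical.

```python
import boto3

# Connect to an S3-compatible object store (endpoint is hypothetical).
s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")

# Store the essence file and its descriptive metadata as one object,
# so any MAM or delivery system reading the object sees both together.
with open("ep101_master.mxf", "rb") as body:
    s3.put_object(
        Bucket="media-archive",
        Key="masters/ep101/ep101_master.mxf",
        Body=body,
        Metadata={
            "title": "Episode 101",
            "codec": "XDCAM HD422",
            "retention": "permanent",
        },
    )
```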
Some object stores are specifically designed to manage content through time and space — where time is measured in decades and hardware-platform transitions, and space is measured in the distance between cities and continents.
Across the next century, we will want to store our content on various hardware platforms — some yet to be invented. The ease of migrating content from one hardware platform to the next is one of the true values of an object store, and migrating content is the key to ensuring cost-effective long-term storage and accessibility.
An object store brings the data-management portion of the equation down to the storage layer, where costs can be driven out. In this way, small, specialized MAM providers needn't burden their cost structures by writing custom code to support data management, mobility, and migration technologies.
Here’s how it works: Storage vendors sell object stores into a variety of vertical industries and cloud-storage service providers. These storage vendors can afford to build robust APIs and have good reason to standardize these APIs across vendor platforms. Amazon’s Simple Storage Service (S3) has become the dominant API set. By making calls to this API set, MAM vendors in turn can initiate complex data management, movement, and migration — such as ensuring content is being moved to another site, deleted when appropriate, copied when needed, and migrated to new hardware when the time comes. No longer do they have to write specific code for each storage vendor.
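A rough sketch of what such calls look like, again using boto3 against the S3 API; the bucket names and keys are invented, and a real MAM would drive these calls from its own policy engine rather than a standalone script.

```python
import boto3

s3 = boto3.client("s3")
KEY = "masters/ep101/ep101_master.mxf"

# Replicate an asset to a second site (modeled here as a second bucket).
s3.copy_object(
    Bucket="archive-site-b",
    Key=KEY,
    CopySource={"Bucket": "archive-site-a", "Key": KEY},
)

# Delete a derived proxy that is no longer needed.
s3.delete_object(Bucket="archive-site-a", Key="proxies/ep101_proxy.mp4")
```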
Object storage allows media companies to cost-effectively employ various storage tiers through the years, such as lower-cost drives, future flash offerings, cloud storage, tape, and whatever comes next. With object storage, companies can manage all flavors of storage without the huge data-migration headaches that plague many global repository initiatives today.
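In S3 terms, tiering of this kind is usually declared as a bucket lifecycle policy rather than implemented as hand-written move jobs. A hedged example follows, with made-up rule names and retention windows; S3-compatible object stores typically map the standard storage classes onto their own back-end tiers, such as tape or cloud.

```python
import boto3

s3 = boto3.client("s3")

# Age masters onto progressively cheaper tiers; the store handles the moves.
s3.put_bucket_lifecycle_configuration(
    Bucket="media-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "age-masters-to-colder-tiers",
                "Filter": {"Prefix": "masters/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 90, "StorageClass": "STANDARD_IA"},  # infrequent access
                    {"Days": 365, "StorageClass": "GLACIER"},     # cold/archive tier
                ],
            }
        ]
    },
)
```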
The Future Topology
Over the next five years, a topology will evolve that combines a fast, thin production-storage tier with a large, slower object-storage tier behind it.
The first tier requires speed to support video production, processing, and delivery. For that reason, the dominant Tier 1 storage structure will continue to be either SAN or NAS, depending on a variety of workflow and network variables. This tier will most likely be made up of flash (solid-state drives) within the next couple of years. Media asset management systems will have the functionality to send content to the object store. From there, the object store will apply policies for data distribution across the multisite repository and prioritize content access as the assets age.
The second tier — the object store — will be extremely resilient, with embedded lifecycle-management functionality driven by policies that govern long-term management, movement, and migration of content. Importantly, that functionality includes periodically checking and rebuilding content as individual drives age and fail.
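That scrubbing and rebuilding happens inside the object store itself, but archives often layer an application-level fixity check on top. The sketch below assumes single-part S3 uploads, where the object's ETag equals the MD5 of its body, and a hypothetical checksum recorded at ingest; multipart uploads would need a part-aware comparison.

```python
import boto3

s3 = boto3.client("s3")

def fixity_check(bucket: str, key: str, recorded_md5: str) -> bool:
    """Compare the stored object's ETag against a checksum recorded at ingest.

    Valid for single-part uploads, where the S3 ETag is the MD5 of the
    object body.
    """
    etag = s3.head_object(Bucket=bucket, Key=key)["ETag"].strip('"')
    return etag == recorded_md5

# Example: verify one asset against a catalog entry (values hypothetical).
ok = fixity_check("media-archive", "masters/ep101/ep101_master.mxf",
                  "9e107d9d372bb6826bd81d3542a419d6")
```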
By marrying a production tier with an extremely resilient, flexible, and scalable object store, media enterprises can securely share content from the object-store tier among multiple sites. This new storage paradigm — one platform that resides in multiple sites — allows mobility and resiliency across geographies, migration across the storage tiers of today and tomorrow, and data security across time.
Whether you are producing on three continents or distributing across six, policy-based workflows, not file-based workflows, will move your content where it needs to go next. Operations that value speed more than long-term resiliency will be fine using file systems for years to come. But for those that need both speed and the ability to manage content throughout space and time, the future is now. The move to an object-store-enabled future is already underway.
Jason Danielson, Media and Entertainment Solution Marketing Manager, NetApp.