Applying Storage Media Optimally Across The Media Workflow
When media organizations combine various types of storage — high-performance disk, solid state drives, object storage, tape and the public cloud — with data management technology in a multi-tier storage strategy, they are in the best possible position to maximize the cost, access, and performance benefits of storage across all workflow areas.
The differing characteristics of various types of storage determine which part of the overall workflow they best support. The question, therefore, is not “What storage should I use?” but rather “When should I use it?” To answer this question, the organization must assess each workflow area — ingest, work in progress, editing and delivery, and archive and vault — and prioritize four key characteristics of storage: performance, protection, capacity and cost per storage unit.
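One simple way to picture that assessment is to assign priority weights to the four characteristics for each workflow area and score candidate tiers against them. The sketch below does exactly that; the weights and scores are hypothetical assumptions for illustration, not recommendations.

```python
# Illustrative only: hypothetical priority weights (1 = low, 5 = high) for the
# four storage characteristics in each workflow area. Adjust to your own priorities.
WORKFLOW_PRIORITIES = {
    "ingest_and_wip":       {"performance": 5, "protection": 4, "capacity": 3, "cost_per_unit": 2},
    "editing_and_delivery": {"performance": 3, "protection": 4, "capacity": 4, "cost_per_unit": 3},
    "archive_and_vault":    {"performance": 1, "protection": 5, "capacity": 5, "cost_per_unit": 5},
}

def rank_tiers(area, tier_scores):
    """Rank candidate storage tiers by how well they fit an area's priorities."""
    weights = WORKFLOW_PRIORITIES[area]
    return sorted(tier_scores,
                  key=lambda t: sum(weights[c] * tier_scores[t][c] for c in weights),
                  reverse=True)

# Example: score two hypothetical tiers for the archive area (scores are made up).
tiers = {
    "all_flash":   {"performance": 5, "protection": 4, "capacity": 2, "cost_per_unit": 1},
    "lto_library": {"performance": 1, "protection": 5, "capacity": 5, "cost_per_unit": 5},
}
print(rank_tiers("archive_and_vault", tiers))   # ['lto_library', 'all_flash']
```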
Ingest and Work in Progress
Given that many media organizations today work with high volumes of content, an increasing amount of it in 4K, the capture and playback processes associated with ingest and work in progress can put significant demands on disk-based storage. And the faster the drive, the more expensive it will be.
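To put those demands in perspective, the back-of-the-envelope calculation below estimates the bandwidth of a single uncompressed UHD stream; the resolution, bit depth and frame rate are assumptions chosen for illustration.

```python
# Rough bandwidth for one stream of UHD (3840x2160) 4:2:2 10-bit video,
# a common 4K production format. Figures are illustrative assumptions.
width, height = 3840, 2160
bits_per_pixel = 20          # 4:2:2 10-bit: 10 bits luma + 10 bits chroma on average
frame_rate = 25              # frames per second (assumed)

bits_per_second = width * height * bits_per_pixel * frame_rate
print(f"{bits_per_second / 1e9:.2f} Gbit/s")    # ~4.15 Gbit/s
print(f"{bits_per_second / 8 / 1e6:.0f} MB/s")  # ~518 MB/s per stream
```

Multiply that by several simultaneous streams and it becomes clear why ingest and work-in-progress storage is sized for performance first.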
Performance-optimized arrays built on flash-based solid-state drives (SSDs) offer the utmost in speed, but they also provide less capacity for the dollar. This approach is suitable for compositing systems that require multiple streams of high-resolution content for short-format projects. In this case, the added performance of an all-flash system is a key benefit, and the lack of high-capacity disks isn’t an issue. SSDs can also be the best option for large editorial departments, which may need this top-shelf performance to accommodate the random I/O generated by many simultaneous seeks.
Although optimization of storage for work in progress naturally focuses on performance, organizations would do well to remember that by moving content out of primary storage and into archive, they not only reduce the cost of storage but also free up space, which further improves storage performance. Ideally, storage remains at a working size optimized to support the work at hand, not simply to retain content.
In high-performance systems, RAID technology is the standard choice for protection against (and recovery from) disk hardware failures. As always with RAID, the organization must decide which RAID scheme offers the best set of benefits and compromises — the appropriate level of protection — for the desired cost, performance and capacity.
Ongoing improvements to RAID systems include ever-larger drive sizes as well as gains in speed. But as RAID arrays grow, drive failures happen more often, and with today’s high-capacity drives the rebuilds can take so long that recovery simply can’t keep up. Even if no data is lost, the constant cycle of failures and rebuilds can eventually become such a burden that the system effectively collapses.
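A rough, purely illustrative calculation shows why: with today's drive capacities a single rebuild can run for more than a day, and the larger the array, the more often rebuilds occur. The drive count, capacity and rebuild rate below are assumptions.

```python
# Illustrative arithmetic only: usable capacity and a naive rebuild-time estimate
# for a hypothetical array of 12 x 16 TB drives. Real rebuild times depend on
# controller load, but the trend (bigger drives = longer rebuilds) holds.
drives, drive_tb = 12, 16

usable_raid5 = (drives - 1) * drive_tb          # one drive's worth of parity
usable_raid6 = (drives - 2) * drive_tb          # two drives' worth of parity

rebuild_mb_per_s = 150                          # assumed sustained rebuild rate
rebuild_hours = drive_tb * 1e6 / rebuild_mb_per_s / 3600

print(f"RAID 5 usable: {usable_raid5} TB, RAID 6 usable: {usable_raid6} TB")
print(f"Rebuild of one {drive_tb} TB drive at {rebuild_mb_per_s} MB/s: ~{rebuild_hours:.0f} hours")
```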
Though limited in its ability to support work-in-progress tasks such as craft editing, color correction and compositing, the cloud can offer useful protection options on the front end of the workflow. While creatives work with original versions of newly ingested files on premises, copies of that new content can be pushed to a public cloud service for protection. This model allows a limited number of files to be recovered conveniently, typically to replace an unstable file. On a larger scale, however, the retrieval process becomes quite costly, eliminating the cloud as a feasible recovery option in the event of a disaster.
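A simple, hypothetical calculation illustrates the asymmetry; the egress rate and data volumes are assumptions, not quoted prices.

```python
# Illustrative sketch with a hypothetical egress rate: retrieving a handful of
# files from the cloud is cheap, but pulling back an entire library is not.
egress_per_gb = 0.09          # assumed $/GB retrieval and egress rate (hypothetical)

single_file_gb = 50           # replace one unstable camera file
full_restore_tb = 500         # disaster recovery of the whole repository

print(f"Single-file recovery:  ${single_file_gb * egress_per_gb:,.2f}")
print(f"Full-library recovery: ${full_restore_tb * 1000 * egress_per_gb:,.2f}")
```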
When more storage is needed in a hurry, though, the cloud is up to the challenge. Despite its high costs and performance limitations in certain scenarios, the cloud enables uniquely agile scaling that can be invaluable in a pinch.
Editing and Delivery
In some instances, editing and delivery tasks will require relatively high performance, such as that offered by larger disk arrays or even flash-based SSDs. In other instances, however, these tasks do not demand the performance of primary storage but still require better performance than long-term storage tiers built on tape can offer. Object storage fills this middle ground very well, and it also presents an alternative to RAID systems and their accompanying rebuilds.
Media organizations that use object storage give up some performance in exchange for flexible drive deployment and smoother migration of content across new generations of storage. Unlike RAID systems, object storage systems become more reliable as disks are added and the system grows. All the component levels — disks, nodes and racks — are managed as independent items; thus, a failed item can be replaced with a larger-capacity disk or newer-generation storage device.
Object storage is a good fit for repositories where content is stored and accessed for delivery, particularly when the content largely remains unchanged. Because media is stored as objects and must be converted to files for use by most media applications, the tracking and management of files and objects can grow complicated. Media asset management and file management systems today generally are equipped to handle these processes and simplify the use of content from object storage.
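The sketch below shows how that object-to-file conversion can look in practice, assuming an S3-compatible object store, the boto3 client, and hypothetical bucket, key and path names; a MAM or data mover would typically perform the equivalent steps automatically.

```python
# Minimal sketch, assuming an S3-compatible object store and the boto3 client;
# endpoint, bucket and key names are hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.com",  # assumed S3-compatible endpoint
)

# Pull an object down as a local file so an editing or delivery application can open it.
s3.download_file("delivery-masters", "promos/spot_30s.mxf", "/mnt/workspace/spot_30s.mxf")

# Push the finished deliverable back as an object for long-lived, rarely changed storage.
s3.upload_file("/mnt/workspace/spot_30s_final.mxf", "delivery-masters", "promos/spot_30s_final.mxf")
```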
Archive and Retention
A managed storage environment can enable the media organization to extend its visibility and access to content on lower-cost storage tiers. In this area of the overall workflow, data tape and managed archive libraries with automated tape handling can offer substantial cost savings — even when compared with the cloud and its still-falling storage rates. LTO remains a solid choice for archive and retention, and it is the storage tier that allows for the aforementioned movement of content off of more expensive primary storage.
Connecting Storage Tiers Across the Workflow
If content is to move from primary storage to another tier and still be usable, it must remain visible and accessible either through a MAM or built-in data management. The workflow shouldn't be impacted by the fact that the data has moved to another tier of storage.
But building a storage infrastructure that traverses multiple storage tiers isn't easy. The organization’s file system must extend across all tiers, ensuring that content moves smoothly to the right tier (and storage medium) at the right time. It must give users the ability to store and access files, and to set up automated data movement via scheduled or triggered moves. It must be equipped to deal with multiple storage protocols and to function smoothly under the control of MAMs and other data management systems.
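As a minimal sketch of the kind of scheduled data movement such a system automates (not a product feature), the example below sweeps a hypothetical primary-storage mount and demotes files untouched for 90 days to an archive mount. A true cross-tier file system or MAM would keep the moved content visible and accessible at its original path.

```python
# Minimal sketch: a scheduled sweep that relocates files untouched for N days
# from a primary-storage mount to an archive mount. Mount points are assumptions.
import os, shutil, time

PRIMARY = "/mnt/primary/projects"      # assumed primary-storage mount
ARCHIVE = "/mnt/archive/projects"      # assumed archive-tier mount
MAX_IDLE_DAYS = 90

cutoff = time.time() - MAX_IDLE_DAYS * 86400

for root, _dirs, files in os.walk(PRIMARY):
    for name in files:
        src = os.path.join(root, name)
        if os.stat(src).st_atime < cutoff:             # not accessed recently
            dst = os.path.join(ARCHIVE, os.path.relpath(src, PRIMARY))
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)                       # demote to the archive tier
```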