On-line, Near-line Or Archive. Just How Should Data Be Stored?
Shown is a Spectra Logic T-Finity tape archive system.
Any discussion of media storage relies on four generic terms: on-line, near-line, archive and off-line. Which storage technology is best suited to each task?
Let’s set some general definitions for our discussion of data storage. On-line means the information is immediately usable for any required purpose. The same data could be on-line for transcoding while at the same time being off-line for editing. In other words, storage terms like “on-line” cannot be used unless an application is also specified.
Off-line has occupied some under-defined usage scenarios for the last 10 to 20 years. For me it has always meant “Damn it, I have to wait for something to happen before I can do what I really wanted to do.” Obviously such interruptions in a workflow, to the extent that key performance indicators (KPIs) are not met, require the media to be less “off-line”.
Types of storage
Why do we need a third definition of storage usage scenarios? If something is “on-shelf”, is it not simply further off-line? I would like to propose that we define the difference in terms of human interaction.
Off-line can become on-line as an automated process; on-shelf storage requires a person to fetch it. The cheapest, and in some senses the most secure, storage is on the shelf, preferably kept in two geographically separated, disaster-protected locations. Deciding what to put where becomes easier if we know that all three kinds of storage are available. Just to be clear, these distinctions still have value even when provisioning from the cloud.
Localized storage typically consists of multiple HDs installed in rack mounts of various sizes and configurations. Shown here is a Facilis TX16 storage system.
Today’s storage systems are virtually all disk-based. While solid-state drives are available, and we use RAM, both provide insufficient storage capacity and cost more per terabyte than rotating disk, especially for media projects. So what are the key differences between types of storage, and how do we judge their performance?
The criteria for data storage are Permanence, Availability, Scalability and Security. Making an acronym, we get P.A.S.S. A sketch of matching these criteria to storage tiers follows the list below.
- Permanence means that data is never lost.
- Availability means that the user/application requirements for access/performance are met.
- Scalability defines the ease of meeting changing requirements.
- Security defines the granularity and durability of access privileges.
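As a minimal sketch, assume each tier can be scored 1 (weakest) to 5 (strongest) on each P.A.S.S. criterion; the tier names, scores and costs below are illustrative assumptions, not measured figures. An application then takes the cheapest tier that meets its minimum scores.

```python
# A minimal sketch of matching an application's P.A.S.S. requirements to
# storage tiers. Scores (1 = weakest, 5 = strongest) and relative costs
# are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    permanence: int
    availability: int
    scalability: int
    security: int
    cost_per_tb_month: float  # assumed relative cost, $/TB/month

TIERS = [
    Tier("on-line",   3, 5, 3, 4, 30.0),
    Tier("near-line", 4, 3, 4, 4, 10.0),
    Tier("archive",   5, 2, 5, 4, 0.5),
]

def cheapest_tier(p: int, a: int, s: int, sec: int):
    """Cheapest tier meeting the minimum P.A.S.S. scores, or None."""
    candidates = [t for t in TIERS
                  if t.permanence >= p and t.availability >= a
                  and t.scalability >= s and t.security >= sec]
    return min(candidates, key=lambda t: t.cost_per_tb_month, default=None)

# A grading suite needs maximum availability; a finished project needs
# maximum permanence but tolerates slow access.
print(cheapest_tier(p=3, a=5, s=2, sec=3).name)  # -> on-line
print(cheapest_tier(p=5, a=2, s=2, sec=3).name)  # -> archive
```

The point is not the particular numbers but the shape of the decision: requirements in, cheapest qualifying tier out.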
Providing storage that rates best on every P.A.S.S. criterion, regardless of application, is going to be prohibitively expensive. Therefore we need to match the storage to the application. That means trading storage performance and availability against cost, which in turn means we have to maintain multiple types of storage.
Cloud storage gives us the advantage of bespoke storage without the additional overhead. Generic cloud storage enables economies of scale where multiple clients are served from a single enterprise storage system. But you still need to ask: will the cloud provider at some point off-load your deep archive and ship it to Iron Mountain? And can you even afford to keep all of your projects on expensive, always-ready, on-line storage?
Let’s assume that we have determined what P.A.S.S. profile we need for each application used within our acquisition, post-production and distribution pipeline. How do we determine when to move the data to less-expensive storage? Changing technology (lower prices) means that this decision is always open to reinterpretation. It all comes back to KPIs: if we can achieve the KPI while moving the data to less expensive storage, then do it!
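As a sketch of that rule, assume only that each tier has a typical retrieval time and that tiers get cheaper as retrieval gets slower; the hour figures are placeholders, not quoted performance.

```python
# Sketch of the "achieve the KPI, then move to cheaper storage" rule.
# Retrieval times per tier are assumed placeholders.
RETRIEVAL_HOURS = {"archive": 8.0, "near-line": 0.5, "on-line": 0.0}

def cheapest_tier_meeting_kpi(kpi_max_hours: float) -> str:
    """Least expensive tier whose retrieval time still meets the KPI."""
    for tier in ("archive", "near-line", "on-line"):  # cheapest first
        if RETRIEVAL_HOURS[tier] <= kpi_max_hours:
            return tier
    raise ValueError("no tier can meet this KPI")

print(cheapest_tier_meeting_kpi(24.0))  # archive: finished projects
print(cheapest_tier_meeting_kpi(1.0))   # near-line: recently used material
print(cheapest_tier_meeting_kpi(0.0))   # on-line: material being edited now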
Match workflows to KPIs
An automated workflow should make data migration between types of storage a transparent background process. This works because the task requirements are anticipated and built into the system.
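A minimal sketch of such a background pass, assuming each asset records its current tier and last-access time; the idle-time thresholds are placeholders that a real system would derive from the anticipated task requirements.

```python
import time

DAY = 86400  # seconds

def target_tier(asset: dict, now: float) -> str:
    """Demote by idle time: hot stays on-line, warm goes near-line, cold archives."""
    idle_days = (now - asset["last_access"]) / DAY
    if idle_days < 7:
        return "on-line"
    if idle_days < 90:
        return "near-line"
    return "archive"

def migration_pass(assets: list) -> list:
    """Return (asset id, from-tier, to-tier) moves for a background daemon to apply."""
    now = time.time()
    return [(a["id"], a["tier"], target_tier(a, now))
            for a in assets
            if target_tier(a, now) != a["tier"]]
```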
Grading systems arguably have the highest availability requirements in the production process, but does the storage used for this application have to be mirrored? Do you have to keep every current project on it simultaneously? Or can you live with working on just one on-line project at a time? Tradeoffs can be made to reach a sufficient level of availability, redundancy and capacity. After all, the redundancy required for the safe operation of a nuclear power plant may be excessive for the production of the nightly news!
Diagram illustrating a typical broadcast workflow using Isilon technology. A modern storage platform allows users to adapt available storage to project needs, all without the complexity of becoming a storage infrastructure expert.
Correctly designed workflow management systems acquire the information needed to anticipate data access requirements and move data to where it will be needed in a timely manner. This can even include automatically ordering backups from Iron Mountain in advance, so that the material is in place when needed for post. Today’s workflow management solutions are sophisticated enough to anticipate virtually all of your production storage needs and to retrieve and move data where it’s needed without human intervention.
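A sketch of the anticipation step, assuming each tier has a known retrieval lead time (the hour and day figures are placeholders) and that the workflow system knows when post needs the material.

```python
from datetime import datetime, timedelta

# Assumed lead times per tier; a vault pull (e.g. Iron Mountain) takes days.
LEAD_TIME = {
    "on-line":  timedelta(0),
    "archive":  timedelta(hours=8),   # automated tape library
    "on-shelf": timedelta(days=3),    # manual, off-site retrieval
}

def order_by(needed: datetime, tier: str,
             margin: timedelta = timedelta(days=1)) -> datetime:
    """Latest date a retrieval must be ordered so material arrives on time."""
    return needed - LEAD_TIME[tier] - margin

session = datetime(2025, 6, 15, 9, 0)
print(order_by(session, "on-shelf"))  # order four days before the edit session
```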
Storage costs continue to drop. But that doesn’t mean you should chase them. Shown here is a Samsung 1TB SSD, which today costs about $400. A 1TB HD may cost less than $50.
Exact pricing for each storage option is a moving target; however, the relationship between the options should remain essentially the same.
Off-line storage costs roughly one-third as much as on-line storage, and that is without geographic replication. Using LTO-6 and 3TB cassettes at 50 cents per tape per month makes archive physical storage cost about 1/100 of off-line costs. The latter comparison is, of course, unfair, as it does not include the cost of the tape itself or the additional cost of physical retrieval.
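Working those ratios through: the on-line baseline of $50 per TB per month below is my assumption, chosen only to make the figures concrete; the tape numbers are the ones above.

```python
online_per_tb_month = 50.0                      # assumed baseline, $/TB/month
offline_per_tb_month = online_per_tb_month / 3  # about one-third of on-line
shelf_per_tb_month = 0.50 / 3.0                 # $0.50/month per 3TB cassette

print(f"off-line: ${offline_per_tb_month:.2f}/TB/month")
print(f"on-shelf: ${shelf_per_tb_month:.3f}/TB/month, about "
      f"1/{round(offline_per_tb_month / shelf_per_tb_month)} of off-line")
# -> off-line: $16.67/TB/month; on-shelf: $0.167/TB/month, about 1/100
# Excludes the cost of the cassettes themselves and of physical retrieval.
```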
However, the extreme discrepancy between automated retrieval taking hours and manual retrieval taking days leaves room for a new service offering 24-hour retrieval of on-shelf storage. When weighing the viability of shelf storage, remember that Disney destroyed the 4K data used for the latest release of Snow White and keeps only the physical separations!