A recent article at tvtechnology.com discusses the ambiguity of the term “archive” and uses the Library of Congress archives as an example of how digital media is changing the way films are preserved for future access and use.
It was not within the scope of the article to discuss how files are shuttled between locations, but it seems clear that file transfer acceleration would be extremely useful at a number of points, and it becomes all the more critical when the media being stored is uncompressed 1080p60 (with larger 2K and 4K resolutions steadily on the rise!). Looking at the “Production – Archive – Resiliency” architecture, most of the points at which files move from one stage to the next are candidates for file transfer acceleration:
The move from the field to production playout is largely the domain of realtime broadcast. Accelerated file transfer could assist if the scenario permitted “near-realtime,” but this is a broadcast situation rather than a point-to-point transfer. However, at the same time compressed video is being sent for playout, it is also being moved to the record/protect/archive (R/P/A) hardware, where content may be encoded into a specific format and stored. Assuming the field production units are in a different geographic location from the storage servers, they may face two issues:
- Needing to wait for data to arrive at the record/protect/archive location before “packing up”
- Requiring the ability for a transfer to be interrupted and resumed once a new connection is established
Accelerated file transfers make more efficient use of the available time, meaning the wait time in the first scenario is dramatically reduced. Managed file transfer technologies (whether accelerated or not) are built with retry/resume facilities as a key feature, which addresses the second. Software like FileCatalyst is therefore able to address both issues.
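FileCatalyst’s own protocol isn’t shown here, but the retry/resume idea is easy to sketch: the destination keeps whatever bytes have already arrived, and a re-established connection picks up from that offset instead of starting over. The minimal Python sketch below is illustrative only; the `resume_copy` helper, chunk size, and backoff values are assumptions, not FileCatalyst’s implementation.

```python
import os
import time

CHUNK = 4 * 1024 * 1024          # 4 MiB per write
MAX_RETRIES = 5                  # give up after this many dropped connections


def bytes_already_received(dest_path: str) -> int:
    """How much of the file the destination already holds (0 if none)."""
    return os.path.getsize(dest_path) if os.path.exists(dest_path) else 0


def resume_copy(src_path: str, dest_path: str) -> None:
    """Copy src to dest, resuming from wherever a previous attempt stopped."""
    for attempt in range(1, MAX_RETRIES + 1):
        offset = bytes_already_received(dest_path)
        try:
            with open(src_path, "rb") as src, open(dest_path, "ab") as dest:
                src.seek(offset)                 # skip what already arrived
                while True:
                    chunk = src.read(CHUNK)
                    if not chunk:
                        return                   # transfer complete
                    dest.write(chunk)
        except OSError as err:                   # stand-in for a dropped link
            print(f"attempt {attempt} interrupted at {offset} bytes: {err}")
            time.sleep(2 ** attempt)             # back off before reconnecting
    raise RuntimeError("transfer failed after retries")
```

The point is that the field unit never re-sends bytes the R/P/A side already holds, so a dropped connection costs only the time spent reconnecting.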
Moving along the chain, let’s follow the low-loss (or at least non-realtime) data. The next stop is an archive accessible to the Post Production facilities, where the archive is mirrored to servers that support further production, editing, and asset management. Ideally this would not be the same physical location as the R/P/A hardware, which means the files have another leg of transfer ahead of them. It is already a benefit to move the files from the presumably stationary R/P/A to the Post Production location at the highest possible speed: if the assets are to be further processed, the involved parties will want the fastest possible access. However, FileCatalyst can take it a step further. With the progressive transfer option, files may be forwarded to the next stop as soon as they begin arriving at the R/P/A location. To reiterate: a file need not have arrived in full at the R/P/A in order to start transferring to the Post Production locations. In effect, it arrives at both locations at nearly the same time.
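The article describes the progressive option in terms of its effect; mechanically, it amounts to tailing a file that is still growing and shipping each newly arrived chunk onward. In the hypothetical sketch below, `send_chunk` stands in for the onward transfer to Post Production and `file_is_complete` stands in for whatever signal marks the inbound transfer as finished; both are placeholders, not FileCatalyst APIs.

```python
import os
import time

CHUNK = 4 * 1024 * 1024


def send_chunk(data: bytes) -> None:
    """Placeholder for pushing a chunk onward to the Post Production location."""
    ...


def file_is_complete(path: str) -> bool:
    """Placeholder: in practice the inbound transfer would signal completion."""
    return os.path.exists(path + ".done")


def forward_progressively(path: str) -> None:
    """Forward a file that is still arriving, instead of waiting for all of it."""
    sent = 0
    while True:
        size = os.path.getsize(path) if os.path.exists(path) else 0
        if size > sent:
            with open(path, "rb") as f:
                f.seek(sent)
                data = f.read(size - sent)       # only the newly arrived bytes
            send_chunk(data)
            sent += len(data)
        elif file_is_complete(path):
            break                                # nothing left and upstream is done
        else:
            time.sleep(0.5)                      # wait for more data to arrive
```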
Continuing along the path toward the permanent archives via Resilient Disk Storage or Resilient Tape Archive, accelerated file transfer again provides the same benefits. Instead of waiting days for the data to reach the safe, redundant storage of the permanent archive, the file transfer could take mere hours; or, using the same progressive file transfer technology, the data could arrive at virtually the same time.
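To put rough, assumed numbers on “days versus hours”: uncompressed 8-bit 1080p60 runs to roughly 3 Gb/s, or about 1.3 TB per hour of footage. The back-of-the-envelope calculation below uses illustrative link rates, not measured figures:

```python
# Back-of-the-envelope transfer times; all rates are illustrative assumptions.
PIXELS_PER_FRAME = 1920 * 1080
BYTES_PER_PIXEL = 3                      # 8-bit RGB, uncompressed
FPS = 60

bytes_per_hour = PIXELS_PER_FRAME * BYTES_PER_PIXEL * FPS * 3600   # ~1.34 TB

for label, mbits_per_sec in [("untuned FTP over a long-haul WAN", 50),
                             ("accelerated transfer filling a 1 Gb/s link", 1000)]:
    seconds = bytes_per_hour / (mbits_per_sec * 1e6 / 8)
    print(f"{label}: {seconds / 3600:.1f} hours for one hour of 1080p60")
```

At an effective 50 Mb/s, typical of an untuned transfer over a high-latency path, that hour of footage takes roughly two and a half days to move; filling a 1 Gb/s link brings the same job down to about three hours.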
Even when time sensitivity is less of an issue (for example, it may not be critical to move data from Post Production to Archive at the fastest possible speed), accelerated and managed file transfer solutions provide reliability mechanisms along the way. From source to sink, the data is guaranteed to arrive intact. Other technologies for moving files (such as traditional FTP) are prone to error during long transfers and have no built-in verification methods. Using FTP would require human intervention to confirm the arrival of the complete, functioning file and to manually request retransmission in the event of failure. With FileCatalyst, verification, retry, and resume are automatic, ensuring that the archive in fact receives the motion picture intact.
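The verification step itself is simple to picture: checksum the file at the source, checksum what landed in the archive, and retransmit if they disagree. The sketch below uses SHA-256 and a caller-supplied retransmit function; both are assumptions for illustration, not FileCatalyst’s actual integrity mechanism.

```python
import hashlib


def sha256_of(path: str) -> str:
    """Checksum a file in chunks so arbitrarily large media files fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(4 * 1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_or_retransmit(src_path: str, archive_path: str, retransmit) -> None:
    """Compare source and archive checksums; call retransmit() until they match."""
    while sha256_of(src_path) != sha256_of(archive_path):
        retransmit(src_path, archive_path)   # e.g. the resume_copy sketch above
```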