rscheffler
I think RAID would be a reasonable near-term solution for quick access to an archive, where everything appears in a single volume or logical set of volumes, and which, depending on the system chosen, may be continuously expandable in capacity. But in itself, a RAID array is complex and is not considered a good archive solution. Basically, you will still need a back-up of some sort for the RAID. It's worth doing some research on this. What scared me off RAID were reports of controller cards, or enclosures and their controller cards, going bad and no longer being supported by the manufacturer, to the point where recovery was no longer a simple matter of putting the drives in a new box and turning it on. This is in addition to the usual drive failure concerns.
My feeling is that the more automated and simpler the solution becomes, the less control the user has over what's happening behind the scenes, especially when a problem arises. For example, Apple's Time Machine is very useful, slick, and simple, but I've run into problems with it where the archive was no longer accessible or reliable, and I couldn't really figure out why, at least not easily. I've since switched to Carbon Copy Cloner (CCC) for regular OS backups, which keeps the backup in a structure identical to the original and also allows it to be bootable if needed.
I've stuck to my own variation of a 'JBOD' system that many others also seem to have adopted. I transfer off the memory card to a ~500GB SSD that contains the currently active projects. I use an SSD for speed of access; it's also easy to bring along to offsite events/jobs and theoretically more durable in such applications. I'll immediately use CCC to back up the new project to two external drives. As I work on the project, I use CCC to copy over just the updated files. It's set up to move the older files on the back-up drives into 'archive' folders that can be revisited in case a recent update breaks something, or I need to recover a file deleted from the working drive.

When the project is finished, I'll do a final clone/back-up to the two drives. One drive is an archive specific to the type of project, while the other fills with projects of all kinds on a chronological basis. After being certain everything has copied over correctly, I'll delete the project from the SSD.

I also keep a set of low-rez captioned and keyworded images on the OS drive in a folder with the same name as the original, which makes them searchable with Spotlight. The low-rez files, like the originals, are named in a YYYYMMDD_XXXX format, and the project folder also begins with YYYYMMDD. It's easy to do a Spotlight image search based on a keyword, determine when the image was shot from the filename or EXIF/caption, and trace it back to the appropriate hard drive(s). Or, if it's just a file or two, it's sometimes easier to pull it from the cloud, especially if I'm not home. I use the same naming/filing system for backing up final processed files to the cloud, in my case to Amazon's AWS S3 service, with some duplicates also residing in dedicated Google Drive accounts, which I use for certain client deliveries where a web gallery is desirable.
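To make the date-prefix scheme concrete, here's a minimal sketch (my own illustration, not part of the actual workflow; the function names and folder list are hypothetical) of how a low-rez file named in the YYYYMMDD_XXXX format can be traced back to its matching project folder:

```python
import re

# A file like "20230415_0032.jpg" and a project folder like
# "20230415_ClientShoot" share the same YYYYMMDD date prefix.
FILENAME_RE = re.compile(r"^(\d{8})_(\d{4})")

def project_prefix(filename: str) -> str:
    """Extract the YYYYMMDD prefix from a YYYYMMDD_XXXX-style name."""
    m = FILENAME_RE.match(filename)
    if not m:
        raise ValueError(f"unexpected file name: {filename}")
    return m.group(1)

def find_project_folders(folders, filename):
    """Return the project folders sharing the file's date prefix."""
    prefix = project_prefix(filename)
    return [f for f in folders if f.startswith(prefix)]
```

The point of the convention is exactly this: the filename alone is enough to locate the project folder, with no database in between.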
Over the long term, a key seems to be to continuously migrate data forward to new drives and storage media as they become established and reliable. I agree that diversification of backup media is a good idea, and I'm no longer sold on CDs/DVDs for some of the reasons stated above; they just don't hold much and take up a lot of space. A collection of cheap micro SD or simple SDHC cards, written once, might be a better complement to the usual HDDs. My biggest concern with migration is the introduction of bit rot from physical media decay, which then carries over into future migrations. There are solutions to this, such as cloning tools that verify against the original, or the ZFS file system, but right now I'm playing the odds that it won't become a critical problem.
As already hinted, I'm relying on the cloud as a last-resort, offsite solution, to which I back up only the final processed JPEG selects from every shoot. Obviously this overlooks the original RAWs, but IME I'm unlikely to revisit 99% of my RAWs after the first pass through the RAW converter. This might change, but at present the storage and bandwidth requirements to back up even just the RAW selects are not feasible for me.