Hi,

I’m not sure if this is the right community for my question, but as my daily driver is Linux, it feels somewhat relevant.

I have a lot of data on my backup drives, and recently added 50GB to my already 300GB of storage (I can already hear the comments about how low/high/boring that is). It’s mostly family pictures, videos, and documents since 2004, much of which has already been compressed using self-made bash scripts (so it’s Linux-related ^^).

I have a lot of data that I don’t need regular access to and won’t be changing anymore. I’m looking for a way to archive it securely, separate from my backup but still safe.

My initial thought was to burn it onto DVDs, but that’s quite outdated and DVDs don’t hold much data. Blu-ray discs can store more, but I’m unsure about their longevity. Is there a better option? I’m looking for something immutable, safe, easy to use, and that will stand the test of time.

I read about data crystals, but they seem to be still in the research phase and not available for consumers. What about using old hard drives? Don’t they need to be powered on every few months/years to maintain the magnetic charges?

What do you think? How do you archive data that won’t change and doesn’t need to be very accessible?

Cheers

  • aurtzy@discuss.tchncs.de · 2 months ago

    You might be interested in git-annex (see the Bob use case).

    It has file tracking so you can - for example - “ask” a repository at drive A where some file is, and git-annex can tell you it’s on drives C and D.

    git-annex can also enforce rules like: “always have at least 3 copies of file X, and any drive will do”; “have one copy of every file at the drives in my house, and have another at the drives in my parents’ house”; or “if a file is really big, don’t store it on certain drives”.
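
    A rough sketch of what such rules look like in practice (the repository layout, remote names, and file paths here are made up; the git-annex walkthrough covers the real workflow):

    ```shell
    # Create an annex and set a global rule: keep at least 3 copies of everything.
    git init archive && cd archive
    git annex init "drive-A"
    git annex numcopies 3

    # Preferred-content rule for this drive: don't keep files larger than 1 GB here.
    git annex wanted here "not largerthan=1gb"

    # Ask which drives hold copies of a given file.
    git annex whereis photos/2004/img_0001.jpg
    ```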

  • DasFaultier@sh.itjust.works · 2 months ago

    This is my day job, so I’d like to weigh in.

    First of all, there’s a whole community of GLAM institutions (galleries, libraries, archives, museums) involved in what is called Digital Preservation (try googling that exact term). Here in Germany, many of them have founded the Nestor group (www.langzeitarchivierung.de) to further the cause and share knowledge. Recently, Nestor ran a discussion group on Personal Digital Archiving, addressing exactly your use case; they have published the results at https://meindigitalesarchiv.de/. Nestor publishes mostly in German, but online translators are a thing, so I think you will be fine.

    Some things that I want to address from your original post:

    • Keep in mind that file formats, just like hardware and software, become obsolete over time. Think about a strategy for migrating your files to a more recent format if your current format falls out of style and isn’t as widely supported anymore. I assume your photos are JPEGs, which are widely considered unsafe for preservation, as they use lossy compression and degrade with every re-encode. A suitable replacement might be PNG, though I wouldn’t go ahead and convert my JPEGs right away. For born-digital photo material, uncompressed TIFF is the preferred format.
    • Compression in general is considered a risk, because a single damaged bit can potentially impact a larger block of compressed data. Saving a few bytes on your storage isn’t worth losing your precious memories.
    • Storage media have different retention times. It’s true that magnetic tape has the best chance of survival, and it’s what we use for long-term cold storage, but it’s prohibitively expensive for home use. It’s also VERY slow on random access, because the tape has to be wound to the specific location of a file before reading. If you insist on using it, format your tapes with LTFS to eliminate the need for a storage management system like IBM Spectrum Protect. The next best choice are NAS-grade HDDs, which will last you upwards of five years. Redundancy and a self-correcting file system like ZFS (compression & dedup OFF!) will further increase your chances of survival. Keep your hands off optical storage media; according to studies on the subject, they can start to decay after as little as a year. Flash storage isn’t much better; avoid thumb drives at all costs. Quality SSD storage might last you a little longer. If you use ZFS or a comparable file system that provides snapshots, you can use those to implement immutability.
    • Kudos for using standard Linux tooling; it will help other people understand your stack if anything happens to you. Digital Preservation is all about removing dependencies on specific formats, technologies and (importantly) people.
    • Backup is not Digital Preservation, though I’ll admit the two tend to get mixed up in personal contexts. Backups save the state of a system at a specific point in time; DigiPres tries to preserve only data that isn’t specific to a system and tends to change very little. Also, and this is important, DigiPres tries to save context along with the actual payload, so you might want to store at least some metadata with your photos and keep them all in a structure made for preservation. I recommend BagIt: there’s a lot of existing tooling for creating it, it’s self-contained, it’s secured by strong checksums, and it’s an RFC.
    • Keep complexity as low as possible!
    • Last of all, good on you for doing SOMETHING. You don’t have to be perfect to improve your posture, and you’re on the right track, asking the right questions. Keep on going, you’re doing great.
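
    The BagIt structure mentioned above can even be hand-rolled with nothing but coreutils. A minimal sketch (the file and directory names are made up, and for real use the existing bagit tooling is more convenient):

    ```shell
    # Minimal hand-made BagIt bag; names are illustrative stand-ins.
    mkdir -p mybag/data
    printf 'stand-in payload\n' > mybag/data/example.txt

    # Required tag file declaring the BagIt version and encoding.
    printf 'BagIt-Version: 0.97\nTag-File-Character-Encoding: UTF-8\n' > mybag/bagit.txt

    # Checksum manifest over every payload file, with paths relative to the bag root.
    ( cd mybag && find data -type f -print0 | xargs -0 sha256sum > manifest-sha256.txt )

    # Fixity check: re-verify the payload against the manifest.
    ( cd mybag && sha256sum -c --quiet manifest-sha256.txt ) && echo "bag is intact"
    ```

    The whole bag directory is then what you copy to each archival medium; the manifest travels with the data, so any copy can be verified on its own.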

    Come back at me if you have any further questions.

    • some_guy@lemmy.sdf.org · 2 months ago

      And keep multiple copies in at least two locations of anything truly important, to guard against disaster (such as a fire or a regionally appropriate natural disaster). I got a spare drive, copied all the music I’ve made onto it, and sent it to my father in a different part of the country. I could lose everything else and be pretty bummed, but losing that would be a different story. I also endorse a safe deposit box at a bank if you don’t have someone who can hold data in a different city.

      • DasFaultier@sh.itjust.works · 2 months ago

        Yeah, you can always go crazy with (off site) copies. There’s a DigiPres software system literally called LOCKSS (Lots Of Copies Keep Stuff Safe).

        The German Federal Office for Information Security recommends a distance of at least 200km between (professional) sites that keep georedundant copies of the same data/service, so depending on your upload capacity and your familiarity with encryption (ALWAYS backup your keys!), some cloud storage provider might even be a viable option to create a second site.

        Spare drives absolutely work as well, but remember that, depending on the distance, the data on them will grow more or less stale, and you might not remember to refresh the hardware in a timely manner.

        A safe deposit box is something that I hadn’t considered for my personal preservation needs yet, but sounds like a good idea as well.

        Whatever you use, also remember to read back data from all copies regularly and recalculate checksums for fixity checks to make sure your data doesn’t get corrupted over time. Physical objects (like books) decay slowly over time, digital objects break more spontaneously and often catastrophically.
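
        The fixity check can be as simple as a stored checksum list that gets re-verified on every copy. A coreutils-only sketch (the paths and sample file are made up):

        ```shell
        # One-time: record a baseline of checksums next to the archive.
        mkdir -p archive
        printf 'photo bytes\n' > archive/photo1.jpg   # stand-in for real data

        find archive -type f -print0 | xargs -0 sha256sum > archive.sha256

        # On every copy, months or years later: recompute and compare.
        if sha256sum -c --quiet archive.sha256; then
            echo "fixity OK"
        else
            echo "fixity FAILED, restore this copy from another one" >&2
        fi
        ```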

      • DasFaultier@sh.itjust.works · 2 months ago

        Good to hear! You can’t go wrong with the UK National Archives; they have some very, VERY competent people on staff, who are also quite active in the DigiPres community. They are also the inventors of DROID and the maintainers of the widely used PRONOM database of file formats: https://www.nationalarchives.gov.uk/PRONOM/Default.aspx Absolute heroes of Digital Preservation.

  • phanto@lemmy.ca · 2 months ago

    This is actually a real problem… A lot of digital documents from the ’90s and early 2000s are lost forever. Hard drives die over time, and nobody has come up with a good way to permanently archive all that stuff.

    I am a crazy person, so I have RAID, Ceph, and JBOD in various and sundry forms. Still, drives die.

    • Peffse@lemmy.world · 2 months ago

      It’s crazy that there isn’t a company out there making viable cold storage for the average consumer. I feel like we’re getting even further from viability now that SSDs default to QLC. The rot will be so fast.

    • Sl00k@programming.dev · 2 months ago

      > nobody out there has come up with a good way to permanently archive all that stuff

      Personally, I can’t wait for the glass storage drives being researched to reach the consumer or even corporate level. Yes, they’re writable only once and read-only after that, but I absolutely love the concept of writing my entire Plex server to a glass drive, plugging it in, and never having to worry about it again.

  • Extras@lemmy.today · 2 months ago

    Might be a dumb idea, but hear me out. How about sealing a reputable enterprise or consumer SSD in one of those anti-static bags with a desiccant, sealing that inside a PVC pipe (also with desiccant), and then burying it below the frost line? You’d just have to dig it up and refresh everything every couple of years; I think 3 years at most for consumer drives, IIRC. Obviously this isn’t a replacement for a backup solution, just archival, so no interaction with it. It’ll protect the drive from the elements, house fires, flooding, temperature fluctuations, pretty much everything, and it’s cost-effective. Hell, you can even wrap the drive bag in foam before stuffing it into the pipe for added shock absorption. Make a map afterwards like a damn pirate. (It’s nighttime, so my bad if I sound deranged.)

    • ReversalHatchery@beehaw.org · 2 months ago

      > went with an ssd in this idea since its more durable than a mechanical, better price for storage capacity

      How? Sorry, but that doesn’t add up to me. For the price of a 2 TB SSD you could buy a much larger HDD.

      > and most likely to be compatible with other computers in the future in case you need it for whatever reason.

      Both use SATA plugs, so compatibility should be the same.

    • MentalEdge@sopuli.xyz · 2 months ago (edited)

      This is a very, very bad idea.

      SSDs are permanent flash storage, yes, but that doesn’t mean you can leave them unpowered for extended periods of time.

      Without a refresh, electrons can and do leak out of the charge traps that store the ones and zeroes. Depending on the exact NAND used, the data could start going corrupt within a year or so.

      HDDs suffer the same problem, though to a lesser degree. They can go several years, possibly a decade, but you’d still be risking the data on the drive by letting it sit unpowered for an extended time.

      For the “cold storage” approach you should really be using something that’s designed to retain data in such conditions, like optical media, or tape drives.

  • Max-P@lemmy.max-p.me · 2 months ago

    I would use maybe a Raspberry Pi or old laptop with two drives (preferably different brands/age, HDD or SSD doesn’t really matter) in it using a checksumming filesystem like btrfs or ZFS so that you can do regular scrubs to verify data integrity.

    Then, from that device, pull the data from your main system as needed (that way, the main system has no way of breaking into the backup device so won’t be affected by ransomware), and once it’s done, shut it off or even unplug it completely and store it securely, preferably in a metal box to avoid any magnetic fields from interfering with the drives. Plug it in and boot it up every now and then to perform a scrub to validate that the data is all still intact and repair the data as necessary and resilver a drive if one of them fails.

    The unfortunate reality is most storage mediums will eventually fade out, so the best way to deal with that is an active system that can check data integrity and correct the files, and rewrite all the data once in a while to make sure the data is fresh and strong.

    If you’re really serious about that data, I would opt for both an HDD and an SSD, and have two of those systems at different locations. That way, if something shakes up the HDD and damages the platters, the SSD is probably fine; and if the box is forgotten for a while, maybe the SSD’s memory cells will have faded but not the HDD. The strength is in the diversity of the mediums. Maybe burn a Blu-ray as well, just in case; it’ll fade too, but hopefully differently than an SSD or an HDD. The more copies, even partial copies, the more likely you can recover the entirety of the data, and you have the checksums to validate which blocks from which medium are correct. (Fun fact: people have been archiving LaserDiscs and repairing rips by ripping the same movie from multiple identical discs. The discs are unlikely to fade at exactly the same spots, so you can cross-reference the rips and usually get a near-perfect copy.)
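
    The periodic scrub step might look like this (ZFS commands; the pool name is made up, and on btrfs `btrfs scrub start` plays the same role):

    ```shell
    # Plug the archive box in, import the pool, verify every block, export again.
    zpool import coldpool
    zpool scrub coldpool          # reads all data, verifies checksums, self-heals from redundancy
    zpool status -v coldpool      # watch progress; lists repaired and unrecoverable errors
    zpool export coldpool         # clean detach before unplugging
    ```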

    • ReversalHatchery@beehaw.org · 2 months ago

      > with two drives (preferably different brands/age, HDD or SSD doesn’t really matter) in it using a checksumming filesystem like btrfs or ZFS so that you can do regular scrubs to verify data integrity.

      An important detail here is to add the two disks to the filesystem in a way where the second one does not extend the capacity, but adds redundancy. On ZFS, this can be done with a mirror vdev (simplest for this case) or a raidz1 vdev.
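
      For example (pool name and device paths are placeholders; listing the two disks without the `mirror` keyword would stripe them instead, doubling capacity but with no redundancy):

      ```shell
      # Create a two-disk mirror: capacity of one disk, every block stored twice.
      zpool create tank mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B

      # Confirm the layout before trusting it with data.
      zpool status tank
      ```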

  • Mountain_Mike_420@lemmy.ml · 2 months ago

    Don’t overcomplicate it. Follow 3-2-1: 3 copies (main, backup, and offsite); 2 different media (e.g. HDD and a data center); 1 copy offsite. I like Backblaze, but anything from Google to Amazon will work.

  • NaibofTabr@infosec.pub · 2 months ago (edited)

    Someone else has mentioned M-Disc and I want to second that. The benefit of using a storage format like this is that the actual storage media is designed to last a long time, and it is separate from the drive mechanism. This is a very important feature - the data is safe from mechanical, electrical and electronic failure because the storage is independent of the drive. If your drive dies, you can replace it with no risk to the data. Every serious form of archival data storage is the same - the storage media is separate from the reading device.

    An M-Disc drive is required to write data, but any DVD or BD drive can read the data. It should be possible to acquire a replacement DVD drive to recover the data from secondary markets (eBay) for a very long time if necessary, even after they’re no longer manufactured.

    • 8263ksbr@lemmy.ml (OP) · 2 months ago

      M-Disc, never heard of that. I did some quick research and it seems to be exactly what I was looking for. Thank you!

    • 8263ksbr@lemmy.ml (OP) · 2 months ago

      Checked it out, thanks. I have to figure out how it compares to my rsync script.

      • Nine@lemmy.world · 2 months ago

        Waaaaay better.

        Restic lets you make deduplicated snapshots of your data. Everything is there, and it’s damn hard to lose anything. I use Backblaze B2 as my long-term endpoint/offsite; some will use AWS Glacier. But you don’t have to use any cloud service: you can just keep a restic repository on some external drives. That’s what I use for my second copy of things. I also do an annual backup to a hard disk that I leave with a friend for a second offsite copy.

        I’ve been backing up all of my stuff like this for years now. I used to use Borg, which is another great tool, but restic is more flexible in allowing multiple systems to use a single repository, and it has native support for things like B2 that Borg doesn’t.

        We also use restic to back up the control nodes of some of the supercomputing clusters I manage. It’s that rock solid, IMHO.
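
        A minimal restic session against a local external drive might look like this (the paths are made up; swapping the `-r` target for a `b2:` repository string points it at Backblaze B2 instead):

        ```shell
        # Create an encrypted, deduplicating repository on the external drive.
        restic init -r /mnt/external/restic-repo

        # Take a snapshot; unchanged files are deduplicated against earlier snapshots.
        restic -r /mnt/external/restic-repo backup ~/photos

        # List snapshots and verify repository integrity.
        restic -r /mnt/external/restic-repo snapshots
        restic -r /mnt/external/restic-repo check
        ```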

  • JubilantJaguar@lemmy.world · 2 months ago

    The local-plus-remote strategy is fine for any real-world scenario. Make sure that at least one of the replicas is a one-way backup (i.e., no possibility of mirroring a deletion); that way you can keep adding to it with zero risk.

    And now for some philosophy. Your files are important, sure, but ask yourself how many times you have actually looked at them in the last year or decade. There’s a good chance it’s zero. Everything in the world will disappear and be forgotten, including your files and indeed you. If the worst happens and you lose it all, you will likely get over it just fine and move on. Personally, this rather obvious realization has helped me to stress less about backup strategy.

    • 8263ksbr@lemmy.ml (OP) · 2 months ago

      So you would suggest getting bigger and bigger storage?

      I really like and can embrace the philosophical part, and I do delete data rigorously. At the same time, I once had a data loss, because I was young and stupid and tried to install SuSE without a backup. I’m still sad that I can’t look at the images of me and my family from that time. I do look at those pictures/videos/recordings from time to time; it gives me a nice feeling of nostalgia, and it grounds me and shows me how much has changed.

      • JubilantJaguar@lemmy.world · 2 months ago

        Fair enough!

        > So you would suggest getting bigger and bigger storage?

        Personally, I would suggest never recording video. We did fine without it for aeons, and photos are plenty good enough. If you can stick to this rule, you will never have a single problem of bandwidth or storage ever again. Of course I understand that this is an outrageous and unthinkable idea for many people these days, but that is my suggestion.

        • 8263ksbr@lemmy.ml (OP) · 2 months ago

          Never recording videos… that is outrageous ;) Interesting train of thought, though. Video is the main data hog on my drives, and it’s easy to mess up the compression. At the same time, it combines audio, image and time in one easy-to-consume file. Personally, I would miss it.

  • Barx [none/use name]@hexbear.net · 2 months ago

    As a start, follow the 3-2-1 rule:

    • At least 3 copies of the data.

    • On at least 2 different devices / media.

    • At least 1 offsite backup.

    I would add one more thing: invest in a process for verifying that your backups are working. Like a test system that is occasionally restored to from backups.

    Let’s say what you care about most is photos. You’ll want to store them locally on a computer somewhere (one copy) and offsite somewhere (second copy), so all you need to figure out is one more local or offsite location for your third copy. Offsite is probably best, but more expensive. I would encrypt the data and then store it in the cloud for my main offsite backup. That way your data stays private, so it doesn’t matter that it’s sitting on someone else’s server.

    I am personally a fan of Borg backup because you can do incremental backups with a retention policy (like Macs’ Time Machine), the archive is deduped, and the archive can be encrypted.

    Consider this option:

    1. Your data raw on a server/computer in your home.

    2. An encrypted, deduped archive on that same computer.

    3. That archive regularly copied to a second device (ideally another medium) and synchronized to a cloud file storage system.

    4. A backup restoration test process that takes the backups and shows that they restore the important files: the right number, size, etc.

    If disaster strikes and all your local copies are toast, this strategy ensures you don’t lose important data. Regular restore testing ensures the remote copy is valid. And if you have two cloud copies, you are also protected against one provider screwing up and removing data without you knowing.
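
    For steps 2 and 3, a Borg session could be sketched like this (the paths and archive names are made up):

    ```shell
    # Create an encrypted, deduplicating repository (back up the key/passphrase!).
    borg init --encryption=repokey /mnt/backup/borg-repo

    # Take a dated snapshot of the photos; unchanged files cost almost nothing.
    borg create --stats /mnt/backup/borg-repo::photos-{now} ~/photos

    # Retention policy: thin out old snapshots over time.
    borg prune --keep-daily=7 --keep-weekly=4 --keep-monthly=12 /mnt/backup/borg-repo

    # Verify repository and archive consistency.
    borg check /mnt/backup/borg-repo
    ```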

    • 8263ksbr@lemmy.ml (OP) · 2 months ago

      Interesting take on the test process; I never really thought of that. I just trusted rsync’s error messages. Maybe I’ll write a script to automate those checks. Thanks!
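
      Such a check could be a small script that compares checksums of the source tree against a test restore. A coreutils-only sketch (the directories and sample file are stand-ins):

      ```shell
      # Stand-ins for the real data and a freshly restored copy of the backup.
      mkdir -p source restored
      printf 'important document\n' | tee source/doc.txt > restored/doc.txt

      # Checksum both trees with paths made relative, so the lists are comparable.
      ( cd source   && find . -type f -print0 | xargs -0 sha256sum | sort ) > source.sums
      ( cd restored && find . -type f -print0 | xargs -0 sha256sum | sort ) > restored.sums

      # Any difference means the backup is incomplete or corrupted.
      if diff -q source.sums restored.sums >/dev/null; then
          echo "restore test passed"
      else
          echo "restore test FAILED" >&2
      fi
      ```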

  • Andromxda 🇺🇦🇵🇸🇹🇼@lemmy.dbzer0.com · 2 months ago

    I use LTO magnetic tape for archiving data, but unfortunately the tape drives are VERY expensive. The tape itself is relatively cheap though (a 5-pack at 12 TB uncompressed / 30 TB compressed per cartridge totals 60 TB uncompressed, 150 TB compressed; that’s a lot cheaper than hard drives and lasts much longer), has large storage capacity and 30+ years of shelf life. Yes, I know LTO-9 is out, but I won’t be upgrading, because LTO-8 works just fine for me and is much cheaper. The drives are backwards compatible by one generation, e.g. you can use LTO-8 tape in an LTO-9 drive.

    • ouch@lemmy.world · 2 months ago

      €5k? No wonder nobody uses tape at home. You can come up with a lot of cheaper alternatives for that price.

  • rutrum@lm.paradisus.day · 2 months ago

    Use a RAID array and replace drives as they fail. Ideally, they won’t fail behind your back like an optical disc would.

    • 8263ksbr@lemmy.ml (OP) · 2 months ago

      That’s an always-on approach, for example with a NAS? While that’s a very safe approach, it doesn’t fit the idea of having something “on the shelf”. Thank you for the advice though :)

  • DeuxChevaux@lemmy.world · 2 months ago

    I use two external hard drives. They get rsynced every time something changes, so there’s a copy if one drive fails. Once a month, I encrypt the whole shebang with gpg and send it off to an AWS bucket.
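
    A hedged sketch of that routine (mount points and bucket name are made up; `gpg --symmetric` encrypts with a passphrase, which then needs a safe home of its own):

    ```shell
    # Mirror the two local drives (trailing slashes: sync contents, not the dir itself).
    rsync -a --delete /mnt/drive1/ /mnt/drive2/

    # Monthly: pack, symmetrically encrypt, and upload to S3.
    tar -czf - /mnt/drive1 | gpg --symmetric --cipher-algo AES256 -o backup-$(date +%Y-%m).tar.gz.gpg
    aws s3 cp backup-$(date +%Y-%m).tar.gz.gpg s3://my-archive-bucket/
    ```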