I have a load-bearing Raspberry Pi on my network - it runs a DNS server, zigbee2mqtt, a UniFi controller, and a restic REST server. This Raspberry Pi, as is tradition, boots from a microSD card. As we all know, microSD cards suck a little bit and die pretty often; I’ve personally had this happen not all that long ago.

I’d like to keep a reasonably up-to-date hot spare ready, so when it does give up the ghost I can just swap them out and move on with my life. I can think of a few ways to accomplish this, but I’m not really sure what’s the best:

  • The simplest is probably cron + dd, but I’m worried about filesystem corruption from imaging a running system, and wouldn’t this also wear out the spare card?
  • Recreate the partition structure, create an fstab with the new UUIDs, and rsync everything else. Backups are incremental and we won’t get filesystem corruption, but we still aren’t taking a point-in-time backup, which means data files could be inconsistent with each other. (Honestly unlikely with the services I’m running.)
  • Migrate to BTRFS or ZFS, send/receive snapshots. This would be annoying to set up because I’d need to switch the rpi’s filesystem, but once done I think this might be the best option? We get incremental updates, point-in-time backups, and even rollback on the original card if I want it.
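
For what it’s worth, the rsync step of that middle option might look roughly like this — a sketch only, where `SPARE_MNT` and the exclude list are assumptions, and the command is just echoed until you swap the echo out (the fstab UUID rewrite on the spare is a separate step, as noted above):

```shell
#!/usr/bin/env bash
# Sketch of the incremental rsync clone. SPARE_MNT is an assumption:
# wherever the spare card's root partition happens to be mounted.
SPARE_MNT="${SPARE_MNT:-/mnt/spare}"

# Build the command as an array so it can be inspected before running.
# -a: archive mode, -H: hardlinks, -A/-X: ACLs and xattrs,
# --delete: mirror deletions. Pseudo-filesystems and the spare's own
# mount point are excluded.
CLONE_CMD=(rsync -aHAX --delete
    --exclude=/dev/ --exclude=/proc/ --exclude=/sys/
    --exclude=/tmp/ --exclude=/run/ --exclude=/mnt/
    / "$SPARE_MNT"/)

# Echo instead of executing; replace the echo with "${CLONE_CMD[@]}"
# to actually sync.
echo "${CLONE_CMD[@]}"
```

Run from cron, each pass after the first only transfers what changed.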

I’m thinking out loud a little bit here, but do y’all have any thoughts? I think I’m leaning towards ZFS or BTRFS.

  • LifeBandit666@feddit.uk · 1 month ago

    I can’t remember the steps (they were simple though), but when my Home Assistant raspi SD card died, I bought a 128 GB SSD from AliExpress and a USB-SATA cable.

    I then did something to the pi that meant it can boot from the SSD, and flashed the SSD using balenaEtcher or Rufus or whatever (the same program I was using to flash my SD cards, basically).

    Then it was just a case of plugging in and turning it on.

    Runs exactly the same as with an SD card, with less dying, because SD cards aren’t meant for a lot of read/write but SSDs are.

  • Noxy@yiffit.net · 1 month ago

    I would ditch the SD cards entirely and boot off of USB attached SATA SSDs. But your idea still sounds cool if you can’t or don’t want to invest in SSDs!

    I’ve enjoyed btrfs on my laptops, definitely seems stable, and using bees for dedupe is rad (maybe don’t do that on an SD card tho…)

  • Decronym@lemmy.decronym.xyz (bot) · 1 month ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    DNS: Domain Name Service/System
    LVM: (Linux) Logical Volume Manager for filesystem mapping
    NAS: Network-Attached Storage
    SATA: Serial AT Attachment interface for mass storage
    SSD: Solid State Drive mass storage
    ZFS: Solaris/Linux filesystem focusing on data integrity

    [Thread #911 for this sub, first seen 8th Aug 2024, 15:25]

  • sugar_in_your_tea@sh.itjust.works · 1 month ago

    Do you need a backup image?

    For my NAS, all I do is:

    • keep notes of what’s installed and how to configure OS things
    • automatic, offsite backups of important configs and data

    Any full-disk backups just make the restore process easier; they’re hardly the primary plan. If you want that, just take a manual backup like once a year, and maybe swap the cards out every 2-3 years (or however long you think the SD card should last). If you keep writes down, it should last quite a while (and nothing in your use-case seems write-heavy).

    But honestly, you should always have a manual backup strategy in case something terrible happens (e.g. your house burns down). Make that your primary strategy, and hot spares would just be a time-saver for the more common case where HW fails.

  • Shadow@lemmy.ca · 1 month ago

    Why not just connect an ssd via USB and save yourself the hassle and torment?

  • Mellow@lemmy.world · 1 month ago

    I’ve had very bad luck with Raspberry Pis and SD cards. They just don’t seem to last very long. I swapped to USB storage and things got somewhat better. I just had a USB drive die after 3 to 4 years of use; when I was still using SD it seemed like multiple times a year. Heat, power loss, the fact that you can only punch holes in silicon so many times before it wears out; whatever the reason.

    My approach for this is configuration backup, not the entire OS. I think this approach is better for when it’s time to upgrade the OS or migrate to a new system.

    For my basic Pi running WireGuard and DNS, I keep an archive of documentation on the steps to reconfigure the system after a total loss. Static configs are backed up once, and if there are critical configuration items that change, I back those up weekly. I’ve got two systems (media-related servers, not Pis) for which I keep Ansible playbooks that configure 90% of the system from scratch, so it’s as hands-off as it can be.
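
    The weekly piece of that can be as small as a tar job in cron. A sketch of the shape — every path here is an assumption, not something from my actual setup:

```shell
#!/usr/bin/env bash
# Hypothetical weekly config backup. CONFIG_PATHS and BACKUP_DIR are
# assumptions; point them at whatever actually changes on your box,
# and ship BACKUP_DIR offsite however you like.
set -u

BACKUP_DIR="${BACKUP_DIR:-/tmp/config-backups}"
CONFIG_PATHS=(/etc/wireguard /etc/dnsmasq.conf)

mkdir -p "$BACKUP_DIR"
ARCHIVE="$BACKUP_DIR/config-$(date +%F).tar.gz"

# --ignore-failed-read keeps tar going if a listed path is missing on
# this machine; the || true keeps partial failures from aborting cron.
tar --ignore-failed-read -czf "$ARCHIVE" "${CONFIG_PATHS[@]}" 2>/dev/null || true

echo "wrote $ARCHIVE"
```

    Scheduled with something like `0 3 * * 0 /usr/local/bin/config-backup.sh` in the crontab.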

  • Avid Amoeba@lemmy.ca · 1 month ago

    Perhaps the best answer by far is ZFS, but I don’t know how much pain it is to set up booting from it on a Pi. The easiest to set up is probably LVM.

    With ZFS you can trivially keep a hot spare even over the network. Just tell syncoid where to replicate.
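
    A sketch of what that replication could look like — the pool/dataset names and the spare-pi host are made up:

```shell
#!/usr/bin/env bash
# Hypothetical syncoid invocation -- "rpool/ROOT" and "spare-pi" are
# assumptions. syncoid (from the sanoid project) snapshots the source
# dataset and sends it incrementally over SSH.
SYNC_CMD=(syncoid --recursive rpool/ROOT root@spare-pi:rpool/ROOT)

# Echoed here for illustration; in practice you'd run it from cron or
# a systemd timer on the live Pi.
echo "${SYNC_CMD[@]}"
```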