• TimeSquirrel@kbin.melroy.org · +81/-2 · 2 days ago

    It never ceases to amaze me how far we can still take a piece of technology that was invented in the 50s.

    That’s like developing punch cards to the point where the holes are microscopic and can also store terabytes of data. It’s almost Steampunk-y.

  • corroded@lemmy.world · +44 · 2 days ago

    I can’t wait for datacenters to decommission these so I can actually afford an array of them on the second-hand market.

  • TheRealKuni@lemmy.world · +22/-1 · 2 days ago

    30/32 = 0.938

    That’s less than a single terabyte. I have a microSD card bigger than that!

    ;)

    • 4grams@lemmy.world · +11 · 2 days ago

      My first HD was a 20 MB MFM drive :). Be right back, need some “Just For Men” for my beard (kidding, I’m proud of it).

      • I_Miss_Daniel@lemmy.world · +9 · 2 days ago

        So was mine, but the controller thought it was 10 MB, so I had to load a device driver to access the full size.

        It was fine until a friend defragged it and the driver moved out of the first 10 MB. Thereafter I had to keep a 360 KB 5¼" drive to boot from.

        That was in an XT.

  • JasonDJ@lemmy.zip · +11 · 2 days ago

    This is for cold and archival storage, right?

    I couldn’t imagine seek times on any disk that large. Or rebuild times…yikes.

    • ricecake@sh.itjust.works · +8 · 2 days ago

      Definitely not for either of those. You can get way better density from magnetic tape.

      They say they got the increased capacity by increasing storage density, so the head shouldn’t have to move much further to read data.

      You’ll get further putting a cache drive in front of your HDD regardless, so it’s vaguely moot.

    • RedWeasel@lemmy.world · +5 · 2 days ago

      For a full 32 TB at the max sustained speed (275 MB/s), it’s about 32 hours to transfer the whole drive, or 36 hours if you assume 250 MB/s for the whole run. That’s probably optimistic; CPU overhead could slow it down further in a rebuild. That said, in a RAID 5 of 5 disks that works out to a combined transfer speed of about 1 GB/s, assuming you don’t get close to the max rate. For a small business or home NAS that would be plenty unless you’re running faster than 10 Gbit Ethernet.
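
      Rough math behind those numbers (a quick sketch; the 275 MB/s and 250 MB/s rates are the assumptions from above, not a spec sheet):

      ```python
      # Time to read or write one full 32 TB drive at an assumed sustained rate.
      DRIVE_TB = 32
      for rate_mb_s in (275, 250):
          seconds = DRIVE_TB * 1_000_000 / rate_mb_s  # 1 TB = 1,000,000 MB (decimal units)
          print(f"{rate_mb_s} MB/s -> {seconds / 3600:.1f} hours for the whole drive")

      # A 5-disk RAID 5 stripes across 4 data disks, so sequential throughput is
      # roughly 4x one drive (ignoring parity and CPU overhead).
      print(f"approx array throughput: {4 * 250} MB/s")
      ```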

  • veee@lemmy.ca · +8 · 2 days ago

    Just one would be a great backup, but I’m not ready to run a server with 30TB drives.

    • mosiacmango@lemm.ee · +7 · 2 days ago

      I’m here for it. The 8-disk server is normally a great form factor for size, data density, and redundancy with RAID 6/raidz2.

      This would net around 180 TB in that form factor. That would go a long way for a long while.
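
      The capacity math behind that figure (raw numbers; real-world formatting and filesystem overhead shave a bit off):

      ```python
      disks, size_tb, parity = 8, 30, 2       # raidz2/RAID 6 keeps two disks' worth of parity
      usable_tb = (disks - parity) * size_tb  # 6 data disks x 30 TB
      print(usable_tb)                        # -> 180
      ```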

      • Badabinski@kbin.earth · +6 · 2 days ago

        I dunno if you would want to run raidz2 with disks this large. The resilver times would be absolutely bazonkers, I think. I have 24 TB drives in my server and run mirrored vdevs because the chance of one of those drives failing during a raidz2 resilver is just too high. I can’t imagine what it’d be like with 30 TB disks.
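
        A rough way to put a number on that risk (the failure rate and resilver time below are illustrative assumptions, not measured values):

        ```python
        afr = 0.02           # assumed annual failure rate per drive
        resilver_days = 4    # assumed resilver time for a ~30 TB disk
        other_disks = 7      # e.g. an 8-disk vdev with one disk already being replaced

        p_one = afr * resilver_days / 365        # chance a given surviving disk dies in that window
        p_any = 1 - (1 - p_one) ** other_disks   # chance at least one of them does
        print(f"~{p_any:.3%} chance of another failure during the resilver")
        ```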

        • taladar@sh.itjust.works · +3 · 2 days ago

          A few years ago I had a 12-disk RAID 6 array, and the power distributor (the bit between the redundant PSUs and the rest of the system) went and took 5 drives with it; I lost everything on there. Backup is absolutely essential, but if you can’t do that for some reason, at least use RAID 1, where you only lose part of your data if you lose more than 2 drives.

  • RememberTheApollo_@lemmy.world · +6 · 2 days ago

    I thought I read somewhere that larger drives had a higher chance of failure. A quick look around suggests that’s untrue for newer drives.

    • frezik@midwest.social · +13 · 2 days ago

      One problem is that larger drives take longer to rebuild the RAID array when one drive needs replacing. You’re sitting there for days hoping that no other drive fails while the process runs. Current SATA and SAS standards are already as fast as spinning platters can go; making the interfaces faster won’t help anything.

      There has been some debate among storage engineers about whether they even want drives bigger than 20 TB; avoiding the risk of data loss during a rebuild can be worth giving up some density. That will probably be true until SSDs get closer to the price per TB of spinning platters (not necessarily the same; possibly more like double the price).

      • GamingChairModel@lemmy.world · +5 · 2 days ago

        If you’re writing 100 MB/s, it’ll still take 300,000 seconds to write 30TB. 300,000 seconds is 5,000 minutes, or 83.3 hours, or about 3.5 days. In some contexts, that can be considered a long time to be exposed to risk of some other hardware failure.
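
        The same arithmetic as a quick script, for anyone who wants to plug in their own rates:

        ```python
        tb, rate_mb_s = 30, 100               # 30 TB rewritten at an assumed 100 MB/s
        seconds = tb * 1_000_000 / rate_mb_s  # 30,000,000 MB / 100 MB/s
        print(seconds, seconds / 60, seconds / 3600, seconds / 86400)
        # -> 300000.0 s, 5000.0 min, ~83.3 h, ~3.5 days
        ```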

      • RememberTheApollo_@lemmy.world · +5 · 2 days ago

        Yep. It’s a little nerve-wracking when I replace a RAID drive in our NAS, but I do it before there’s a problem with a drive. I can mount the old one back in, or try another new drive. I’ve only ever had one new drive arrive DOA; here’s hoping those stay few and far between.

      • oldfart@lemm.ee · +4 · 2 days ago

        What happened to using different kinds of drives in every mirrored pair? Not best practice any more? I’ve had Seagates fail one after another and the RAID was intact because I paired them with WD.

  • Avieshek@lemmy.world · +5/-1 · 2 days ago

    How can someone without programming skills make a cloud server at home for cheap?


    (Like connected to WiFi and that’s it)

    • ricecake@sh.itjust.works · +6 · 2 days ago

      Yes. You’ll have to learn some new things regardless, but you don’t need to know how to program.

      What are you hoping to make happen?

    • bruhduh@lemmy.world · +3 · 2 days ago

      Debian, Virtualmin, and Podman with Cockpit. Install these on any cheap used PC you find; after the initial setup, everything else is GUI-managed.

    • frezik@midwest.social · +3 · 2 days ago

      Raspberry Pi or an old office PC are the usual methods. It’s not so much programming as Linux sysadmin skills.

      Beyond that, you might consider OwnCloud for an app-like experience, or just Samba if all you want is local network files.

  • NuXCOM_90Percent@lemmy.zip · +9/-21 · 2 days ago

    Just a reminder: These massive drives are really more a “budget” version of a proper tape backup system. The fundamental physics of a spinning disc mean that these aren’t a good solution for rapid seeking of specific sectors to read and write and so forth.

    So a decent choice for the big machine you backup all your VMs to in a corporate environment. Not a great solution for all the anime you totally legally obtained on Yahoo.

    Not sure if the general advice has changed, but you are still looking for a sweet spot in the 8-12 TB range for a home NAS where you expect to regularly access and update a large number of small files rather than a few massive ones.

    • IrateAnteater@sh.itjust.works · +26 · 2 days ago

      HDD read rates are way faster than media playback rates, and seek times are just about irrelevant in that use case. Spinning rust is fine for media storage. It’s boot drives, VM/container storage, etc., that you would want on an SSD instead of the big HDD.
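
      A ballpark comparison of the two rates (every figure below is an assumption, not a measurement):

      ```python
      hdd_mb_s = 250  # assumed sustained sequential read for a modern HDD
      bitrates_mbit = {"1080p remux": 30, "4K remux": 80}  # assumed playback bitrates

      for name, mbit in bitrates_mbit.items():
          mb_s = mbit / 8
          print(f"{name}: {mb_s:.1f} MB/s -> one drive could feed ~{hdd_mb_s / mb_s:.0f} such streams")
      ```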

    • CarbonatedPastaSauce@lemmy.world · +16 · 2 days ago

      I’m really curious why you say that. I’ve been designing systems with high-IOPS data center application requirements for decades, so I know enterprise storage pretty well. These drives would cause zero issues for anyone storing and watching their media collection with them.

    • mosiacmango@lemm.ee · +14 · 2 days ago

      Not sure what you’re going on about here. Even these disks have plenty of read/write performance for rarely written data like media. They have the same ability to be used with error-checking filesystems like ZFS or btrfs, and they can be used in RAID arrays, which add redundancy against disk failure.

      The only negatives of large drives in home media arrays are the cost, slightly higher idle power usage, and the resilvering time when replacing a bad disk in an array.

      Your 8-12 TB recommendation already has most of these negatives. Adding more space per disk just scales them linearly.

      • ricecake@sh.itjust.works · +9 · 2 days ago

        Additionally, most media is read in a contiguous scan. Streaming media is very much not random access.

        Your typical access pattern is going to be seeking to a chunk, reading a few megabytes of data in a row for the streaming application to buffer, and then moving on. The ~10ms of access time at the start are next to irrelevant. Particularly when you consider that the OS has likely observed that you have unutilized RAM and loads the entire file into the memory cache to bypass the hard drive entirely.
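
        To put rough numbers on that (all figures are assumptions for illustration):

        ```python
        seek_ms = 10       # assumed HDD access time
        read_mb_s = 250    # assumed sustained sequential read rate
        chunk_mb = 4       # assumed amount the player buffers per request
        bitrate_mbit = 10  # assumed playback bitrate of the file

        fetch_ms = seek_ms + chunk_mb / read_mb_s * 1000  # time to grab one chunk
        covers_s = chunk_mb * 8 / bitrate_mbit            # playback time that chunk covers
        print(f"~{fetch_ms:.0f} ms to fetch a chunk that covers ~{covers_s:.1f} s of playback")
        ```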

    • Blue_Morpho@lemmy.world · +11 · 2 days ago

      > The fundamental physics of a spinning disc mean that these aren’t a good solution for rapid seeking of specific sectors to read and write and so forth.

      It’s no SSD, but it’s no slower than any other 12 TB drive. It’s not shingled (SMR); it’s HAMR. The sectors are closer together, so it actually has better seek performance than a regular 12 TB drive.

      > Not a great solution for all the anime you totally legally obtained on Yahoo.

      ???

      It’s absolutely perfect for that. Even if it were shingled tech, that would only slow write speeds, and unless you are editing your own video, write seek times are irrelevant. For media playback, only consistent read speed matters. Not even read seek matters, except in extreme cases like comparing tape seek to drive seek; you cannot measure a 10 ms difference between clicking a video and it starting to play, given all the other delays involved in streaming media over a network.

      But that’s not even relevant here, because these drives have faster read seeking than older drives, since the sectors are closer together.

    • barkingspiders@infosec.pub · +1 · 2 days ago

      Honestly curious: why the hell was this downvoted? I work in this space, and I thought this was still the generally accepted advice?