I’m currently running a Xeon E3-1231 v3 that’s been in service for about 10 years. It’s getting long in the tooth: it supports only 32GB of RAM and has only 16 PCIe lanes, and I’ve been butting up against those platform limitations for a couple of years now, so I’m ready to upgrade.

I’m hoping to future-proof the next system to also last 8-10 years (where reasonable, considering advancements in tech and improvements in efficiency), but I’m hitting a wall finding CPU candidates.

In a perfect world, I’d like an Intel with iGPU for QuickSync (HWaccel for Frigate/Immich/Jellyfin), AND I would like the 40+ PCIe lanes that the Intel Xeon Scalable CPUs offer.

With only my minimum required PCIe devices I’ve already surpassed the 20 lanes available on desktop CPUs with an iGPU (rough tally in the sketch below the lists):

  • Dual m.2 for Proxmox ZFS mirror (guest storage) - in addition to boot drive (8 lanes)
  • LSI HBA (8 lanes)
  • Dual SFP+ NIC (8 lanes)

Future proofing:

High priority

  • Dedicated GPU (16 lanes)

Low priority

  • Additional dual m.2 expansion (8 lanes)
  • USB expansion cards for simplified device passthrough (Coral TPU, Zigbee/Z-Wave for Home Assistant, etc.) (4 lanes per card) - this assumes the motherboard comes with at least 4 ports
  • Coral TPU PCIe (4 lanes?)
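
For my own reference, here’s that tally as a quick Python sketch (the device names and lane figures are just the numbers from the lists above, not anything measured):

```python
# Quick PCIe lane tally using the figures from the lists above.
minimum = {"dual M.2 mirror": 8, "LSI HBA": 8, "dual SFP+ NIC": 8}
high_priority = {"dedicated GPU": 16}
low_priority = {"second dual M.2": 8, "USB expansion card": 4, "Coral TPU (PCIe)": 4}

running = 0
for label, group in [("minimum", minimum),
                     ("+ high priority", high_priority),
                     ("+ low priority", low_priority)]:
    running += sum(group.values())
    print(f"{label:>16}: {running} lanes")
# minimum: 24, + high priority: 40, + low priority: 56 -- vs ~20 on desktop parts
```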

Is there anything that fulfills both requirements? Am I being unreasonable or overthinking it? Is there a solution that adds GPU hardware acceleration to the Xeon Silver line without significantly increasing power draw?

Thanks!

  • thumdinger@lemmy.worldOP · 11 days ago

    I hadn’t considered AMD, really only due to the high praise I’m seeing around the web for QuickSync, and AMD falling behind both Intel and Nvidia in hardware acceleration. I’ll certainly consider it if there isn’t a viable option with QuickSync anyway.

    And you’re right, the southbridge provides additional PCIe connectivity (on both AMD and Intel), but bandwidth has to be considered. Connecting an HBA (x8), 2x M.2 SSDs (x8), and a 10Gb NIC (x8) over the same x4 link for something like a TrueNAS VM (ignoring other VM IO requirements), you’re going to be hitting the NIC and the HBA and/or SSDs (think ZFS cache/logging) at max simultaneously, saturating the link and creating a significant bottleneck, no?
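
    To put rough numbers on that, here’s a quick sketch (the per-device throughput figures are ballpark assumptions on my part, not measurements):

    ```python
    # Ballpark peak throughput (GB/s) for devices hanging off the chipset.
    # All per-device figures below are rough assumptions for illustration.
    devices = {
        "dual SFP+ 10GbE NIC":       2 * 10 / 8,  # 2 ports x 10 Gb/s ~= 2.5 GB/s
        "2x PCIe 3.0 x4 NVMe":       2 * 3.5,     # ~3.5 GB/s each at best
        "HBA with 8 spinning disks": 8 * 0.25,    # ~250 MB/s per drive sequential
    }
    chipset_uplink = 4 * 1.97                     # PCIe 4.0 x4 ~= 7.9 GB/s

    demand = sum(devices.values())
    print(f"worst case: {demand:.1f} GB/s of demand vs {chipset_uplink:.1f} GB/s of uplink")
    # ~11.5 GB/s vs ~7.9 GB/s if everything peaks at once
    ```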

    • lemming741@lemmy.world · 11 days ago

      The chipset link is PCIe 4.0 x4 and daisy-chained, so 8GB per second. My use case is way more casual than what you’re looking for.

      I think what you’re up against is Intel locking features behind a paywall, like they have with desktop ECC and hyper-threading through the years.

      • thumdinger@lemmy.worldOP · 10 days ago

        Thanks, I’ll need to have a look at how the chipset link works, and how the southbridge funnels the incoming PCIe lanes to reduce the number of connections from the 24 in my example down to the 4 available. Even so, considering these devices are typically PCIe 3.0, running them all at maximum spec could swamp the link with 3x the data it has bandwidth for (24 lanes of PCIe 3.0 is 23.64GB/s, vs 7.88GB/s for PCIe 4.0 x4).
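
        For anyone following along, this is where those per-lane numbers come from (just the standard line rates and 128b/130b encoding, sketched in Python):

        ```python
        # Per-lane bandwidth from line rate (GT/s) and 128b/130b encoding.
        def lane_gb_per_s(gigatransfers):
            return gigatransfers * (128 / 130) / 8   # GB/s per lane

        pcie3 = lane_gb_per_s(8)     # ~0.985 GB/s per lane
        pcie4 = lane_gb_per_s(16)    # ~1.969 GB/s per lane

        device_bw = 24 * pcie3       # 24 lanes of PCIe 3.0 devices
        uplink_bw = 4 * pcie4        # chipset link, PCIe 4.0 x4

        print(f"{device_bw:.2f} GB/s of devices over a {uplink_bw:.2f} GB/s link "
              f"(~{device_bw / uplink_bw:.1f}x oversubscribed)")
        # ~23.6 GB/s vs ~7.9 GB/s, i.e. roughly 3x
        ```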

    • MangoPenguin@lemmy.blahaj.zone · 10 days ago

      You could always add an Intel GPU in a PCIe slot if you’re going for that kind of high-end build.

      Alternatively, if you run an Intel iGPU you don’t need a Coral TPU either, as Frigate can use OpenVINO and it works as well as the Coral or better anyway.
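
      If you want to sanity-check that OpenVINO actually sees the iGPU before pointing Frigate at it, something like this quick Python check should do (assumes the openvino package is installed; the iGPU should show up as a "GPU" device):

      ```python
      # Quick check that OpenVINO can see the Intel iGPU.
      from openvino.runtime import Core

      core = Core()
      print(core.available_devices)  # expect something like ['CPU', 'GPU']
      for dev in core.available_devices:
          print(dev, core.get_property(dev, "FULL_DEVICE_NAME"))
      ```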

      Also, if the LSI HBA is connecting to HDDs it won’t need very much bandwidth, so I’m not sure the lane restriction there would matter?
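
      Rough math on that, with the drive count and per-drive speed being example assumptions:

      ```python
      # Spinning disks behind the HBA vs the links they'd run over.
      drives = 8                        # example drive count
      per_drive = 0.25                  # GB/s, ~250 MB/s sequential per HDD
      hdd_total = drives * per_drive    # ~2 GB/s

      hba_slot = 8 * 0.985              # PCIe 3.0 x8, ~7.9 GB/s
      chipset_uplink = 4 * 1.97         # PCIe 4.0 x4, ~7.9 GB/s

      print(f"HDDs ~{hdd_total:.1f} GB/s vs HBA slot ~{hba_slot:.1f} GB/s "
            f"and chipset link ~{chipset_uplink:.1f} GB/s")
      # even 8 drives flat out use a fraction of either link
      ```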