I can’t say for sure, but there is a good chance I might have a problem.

The main picture attached to this post is a pair of dual-bifurcation cards, each with a pair of Samsung PM963 1T enterprise NVMes.

They are going into my r730XD, which… is getting pretty full. This will fill up the last empty PCIe slots.

But, knock on wood, my r730XD supports bifurcation! LOTS of bifurcation.

As a result, it now has more HDDs and NVMes than I can count.

What’s the problem, you ask? Well, that is just one of the many servers I have lying around here, all completely filled with NVMe and SATA SSDs…

Figured I would share. Seeing a bunch of SSDs is always a pretty sight.

And, as of two hours ago, my particular Lemmy instance was migrated to these new NVMes completely transparently, too.

  • HTTP_404_NotFound@lemmyonline.comOP · 11 months ago

    I will say, it’s nice not having to nickel-and-dime my storage.

    But, the way I have things configured, redundancy takes up a huge chunk of the overall storage.

    I have around 10x 1T NVMe and SATA SSDs in a Ceph cluster. 60% storage overhead there.

    Four of those 8T disks are in a ZFS striped mirror / RAID 10. 50% storage overhead.

    The 4x 970 EVO / EVO Plus drives are also in a striped-mirror ZFS pool. 50% overhead.

    But there's still PLENTY of usable storage, and highly available at that!
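For a rough sense of scale, the overhead figures in the comment above can be turned into usable-capacity numbers. This is just a back-of-the-envelope sketch: the drive counts come from the post, but the per-drive size for the 970 EVO pool is an assumption (1T each), and the overhead fractions are taken at face value.

```python
# Back-of-the-envelope usable capacity per pool.
# Drive counts are from the post; EVO sizes are assumed (1T each).

def usable_tb(raw_tb: float, overhead: float) -> float:
    """Usable capacity after subtracting redundancy overhead."""
    return raw_tb * (1.0 - overhead)

pools = {
    # ~10x 1T SSDs in Ceph, ~60% overhead (consistent with replicated pools)
    "ceph":    usable_tb(10 * 1.0, 0.60),
    # 4x 8T in a ZFS striped mirror (RAID 10), 50% overhead
    "zfs-8t":  usable_tb(4 * 8.0, 0.50),
    # 4x 970 EVO / EVO Plus (assumed 1T each), striped mirror, 50% overhead
    "zfs-evo": usable_tb(4 * 1.0, 0.50),
}

for name, tb in pools.items():
    print(f"{name}: {tb:.1f} TB usable")
```

Under those assumptions the three pools land at roughly 4, 16, and 2 TB usable, so trading more than half the raw capacity buys redundancy without leaving the box cramped.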