• TCB13@lemmy.world · 11 months ago

    Everything was fine until…

    these are PCIe Gen3 x2 only

    Fucks sake. I’ve seen ARM boards with better PCIe than that.

    • ferret@sh.itjust.works · 11 months ago

      What ARM board? :p
      Honest question. All the ones I have seen are really awful, and I would love to tinker with something that has real PCIe (Ampere workstations do not count).

      • TCB13@lemmy.world · 11 months ago

        Both the ROCKPro64 and the NanoPi M4 from 2018 have an x4 PCIe 2.1 interface. The same goes for almost all RK3399 boards that bother to expose the PCIe interface.

        Update: there’s also the more recent NanoPC-T6 with the RK3588, which has PCIe 3.0 x4.

        This board seems extremely poorly designed; have a look at the CPU specs: https://www.intel.com/content/www/us/en/products/sku/97926/intel-atom-processor-c3758-16m-cache-up-to-2-20-ghz/specifications.html

        They could’ve exposed more SATA ports and/or PCIe lanes but decided not to.

        And… let’s not even talk about the SFF-8087 connector, which isn’t rated for use as an external plug; you’ll likely ruin it quickly with repeated insertions and/or some light accident.

          • TCB13@lemmy.world · 11 months ago

            Generally, there’s a small difference in speeds:

            PCIe 2.0 x4 → 2.000 GB/s
            PCIe 3.0 x2 → 1.969 GB/s
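            Those two figures fall out of the per-lane line rate and the link encoding of each generation. A minimal sketch of the arithmetic (nominal numbers, ignoring protocol overhead beyond the line encoding):

```python
def pcie_bandwidth_gb_s(gt_per_s: float, encoding_efficiency: float, lanes: int) -> float:
    """Usable bandwidth in GB/s: transfers/s x encoding efficiency / 8 bits, per lane."""
    return gt_per_s * encoding_efficiency / 8 * lanes

# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding (80% efficient)
gen2_x4 = pcie_bandwidth_gb_s(5.0, 8 / 10, 4)

# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding (~98.5% efficient)
gen3_x2 = pcie_bandwidth_gb_s(8.0, 128 / 130, 2)

print(f"PCIe 2.0 x4: {gen2_x4:.3f} GB/s")  # 2.000 GB/s
print(f"PCIe 3.0 x2: {gen3_x2:.3f} GB/s")  # 1.969 GB/s
```

            So the older link with more lanes really does edge out the newer x2 link, by about 31 MB/s.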

            But we also have to consider that the suggested ARM CPU does PCIe 2.1, and we have to add this detail:

            PCIe 2.1 provides higher performance than PCIe 2.0 by facilitating a transparent upgrade from a 32-bit data path to a 64-bit data path at 33 MHz and 66 MHz.

            It shouldn’t have a large impact either, but maybe we should think about it a bit more.

            Anyway, I believe this really depends on your use case, whether you plan to bifurcate it or not, and what devices you’re going to have on the other end. For instance, for a NAS I would prefer the PCIe 2.1 x4, as you could have multiple SATA controllers, each with their own lanes, instead of sharing lanes on PCIe 3.0 through a MUX.
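            As a rough sanity check on the NAS point (nominal figures only, and assuming four disks for illustration): a handful of SATA III ports can in principle demand more bandwidth than either link supplies, which is why per-controller lanes versus a shared MUX matters at all.

```python
# Back-of-the-envelope numbers (nominal line rates, not measured throughput):
SATA3_GB_S = 0.6        # SATA III is ~600 MB/s per port
PCIE20_X4_GB_S = 2.0    # PCIe 2.0 x4 usable bandwidth
PCIE30_X2_GB_S = 1.969  # PCIe 3.0 x2 usable bandwidth

# If four disks all stream at full SATA III speed at once:
demand = 4 * SATA3_GB_S  # 2.4 GB/s aggregate

print(f"4x SATA III aggregate demand: {demand:.1f} GB/s")
print(f"PCIe 2.0 x4 shortfall: {demand - PCIE20_X4_GB_S:.3f} GB/s")
print(f"PCIe 3.0 x2 shortfall: {demand - PCIE30_X2_GB_S:.3f} GB/s")
```

            Spinning disks rarely sustain anywhere near 600 MB/s each, so in practice either link may be fine; the point is only that the margin is thinner on the x2 link.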

            Conclusion: your mileage may vary depending on the use case. But I was expecting more PCIe lanes to be exposed, be it via more M.2 slots or some other solution. I guess that when a CPU comes with everything baked in and the board maker “only has” to run wires around, they’d better do it properly and expose everything. Why not all the SATA ports, for instance?