It’s out!

  • suicidaleggroll@lemmy.world · 14 days ago

    It’s an MoE (Mixture of Experts) approach. An 80B-A3B model has 80B parameters total, so the total parameter count dictates the size of the model and the VRAM+RAM you need to hold it, but only 3B of those parameters are active for any given token. This reduces the intelligence of the model compared to an 80B dense model, but improves the speed. In the end it’s the size of an 80B model, with the intelligence of a ~40B model, running at the speed of a 3B model.
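    Conceptually it works something like this (a minimal numpy sketch with made-up toy sizes and names, not any real model’s architecture):

    ```python
    # Toy sketch of MoE routing: many expert MLPs sit in memory, but a
    # router picks only the top-k of them per token, so only a fraction
    # of the weights participate in each forward pass.
    import numpy as np

    rng = np.random.default_rng(0)

    d_model   = 64    # token embedding width (toy)
    d_hidden  = 256   # expert MLP hidden width (toy)
    n_experts = 16    # experts held in memory
    top_k     = 2     # experts actually run per token

    # Every expert's weights must be resident -- this is why an 80B-A3B
    # model still needs ~80B parameters' worth of RAM/VRAM.
    experts = [
        (rng.standard_normal((d_model, d_hidden)) * 0.02,
         rng.standard_normal((d_hidden, d_model)) * 0.02)
        for _ in range(n_experts)
    ]
    router = rng.standard_normal((d_model, n_experts)) * 0.02

    def moe_forward(x):
        """Route one token (shape [d_model]) through its top-k experts."""
        logits = x @ router
        top = np.argsort(logits)[-top_k:]              # chosen expert ids
        weights = np.exp(logits[top]) / np.exp(logits[top]).sum()
        out = np.zeros_like(x)
        for w, i in zip(weights, top):
            w_in, w_out = experts[i]
            out += w * (np.maximum(x @ w_in, 0) @ w_out)  # ReLU MLP expert
        return out

    _ = moe_forward(rng.standard_normal(d_model))

    per_expert = d_model * d_hidden * 2
    print("stored expert params:", n_experts * per_expert)  # all in memory
    print("active expert params:", top_k * per_expert)      # computed per token
    ```

    The gap between those two prints is the whole trick: the full expert table has to sit in memory, but each token only pays the compute cost of its top-k experts.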

    Pretty much all state-of-the-art models either have already switched to an MoE design or are in the process of doing so, since it significantly reduces the hardware required to run big models at usable speeds. You can often get usable speeds on MoEs without a GPU at all.

    • Avid Amoeba@lemmy.ca · 14 days ago

      Interesting. 3B models run decently fast on my CPU and I have a lot of system RAM. 🤔

      E: Just tried it 100% on CPU on an AMD 7700 with DDR5-3600 and it does 6.5 t/s. Not bad.
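      That’s in the ballpark of what a quick bandwidth estimate would suggest, since CPU token generation is mostly limited by how fast the active weights stream out of RAM (every constant below is a rough assumption, not a measurement):

      ```python
      # Back-of-envelope for why a 3B-active MoE is usable on CPU: each
      # generated token has to read all active weights from RAM, so the
      # speed ceiling is roughly bandwidth / active-weight bytes.
      active_params   = 3e9   # active parameters per token (the "A3B")
      bytes_per_param = 0.55  # ~4.4 bits/param for a Q4-ish quant (assumed)
      bandwidth_gbs   = 50    # rough dual-channel DDR5 bandwidth (assumed)

      bytes_per_token = active_params * bytes_per_param
      ceiling = bandwidth_gbs * 1e9 / bytes_per_token
      print(f"theoretical ceiling: ~{ceiling:.0f} t/s")  # ~30 t/s here
      ```

      Real runs land well below that ceiling once compute and cache effects kick in, so mid-single-digit t/s on a desktop CPU is plausible.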