• Rose@lemmy.zip
    5 days ago

    Given how LLMs always try to please, asking one to find something specific is very likely to produce exactly that. A more neutral query would be to ask it to simply summarize the history, though it’s weird that a purportedly anarchist group would rely on a completely centralized corporate LLM in its moderation. Even then, you need to be careful not to be swayed by the LLM’s suggestions and to analyze each highlighted comment in an unbiased manner.

    • ZombiFrancis@sh.itjust.works
      5 days ago

      What is weird is that it’s clear they didn’t decide to ban because of an LLM output, and that they used local and FOSS models to summarize the explanation. It’s the explanation provided, rather than just ‘Zionism’ or ‘Rule #X’ or ‘genocide denial’, that used the LLM.

      • Rose@lemmy.zip
        5 days ago

        Their own caption says it was ChatGPT, and I don’t believe that can be run locally. Either way, one of the many issues with LLMs is that they come across like a person and are convincing even when hallucinating. Couple that with the human tendency to take people at their word, and you have created a perfect environment for being manipulated. Going by the linked thread, in one instance the AI included a quote that wasn’t the exact quote. You could argue the actual comment wasn’t that different from it, but that’s more like confirmation bias. Even asking an AI not to comment on anything and just distill the provided content down to the most important quotes would be affected by selection bias, but that’s not even how they used the model. They literally asked it to find what they were looking for.

        • ZombiFrancis@sh.itjust.works
          5 days ago

          Right, and the LLM output isn’t something I would’ve used either. The only thing I’m noting as important is that the LLM was an extra and unnecessary step, if that makes sense.

          • Rose@lemmy.zip
            5 days ago

            I don’t know anything about that banned user, but perhaps they didn’t want to look like the main characters of this community, who essentially ban “just because,” and needed something they believed would be convincing?