• NaibofTabr@infosec.pub · 9 days ago

    I mean… duh? The purpose of an LLM is to map words to meanings… to derive what a human intends from what they say. That’s it. That’s all.

    It’s not a logic tool or a fact regurgitator. It’s a context interpretation engine.

    The real flaw is that people assume that because it can sometimes (better than past attempts) understand what you mean, it must also be capable of reasoning.

    • vithigar@lemmy.ca · 9 days ago

      Not even that. LLMs have no concept of meaning or understanding. What they do, in essence, is space-filling based on previously trained patterns.

      Like showing someone a bunch of shapes, then drawing a few lines and asking them to complete the shape. All the shapes are lamp posts, but you haven’t told them that, and they have no idea what a lamp post is. They will just produce results resembling the shapes you’ve shown them, which generally end up looking like lamp posts.

      Except the “shape” in this case is a sentence or a poem or self-insert erotic fan fiction, none of which an LLM “understands”; it just matches the shape of what’s been written so far against previous patterns and extrapolates.
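
      To make the “completing the shape” idea concrete, here’s a minimal sketch in Python: a toy bigram Markov model, which is obviously not an LLM, but performs the same core task of extending text purely from previously observed patterns, with zero notion of meaning. All names here (`train`, `complete`, the corpus) are made up for illustration.

      ```python
      import random
      from collections import defaultdict

      def train(corpus: str) -> dict:
          """Record which word follows which in the training text."""
          words = corpus.split()
          transitions = defaultdict(list)
          for prev, nxt in zip(words, words[1:]):
              transitions[prev].append(nxt)
          return transitions

      def complete(transitions: dict, prompt: str, length: int = 8) -> str:
          """'Complete the shape': extend the prompt by sampling observed next words."""
          out = prompt.split()
          for _ in range(length):
              candidates = transitions.get(out[-1])
              if not candidates:  # no trained pattern matches; stop
                  break
              out.append(random.choice(candidates))
          return " ".join(out)

      corpus = (
          "the lamp post stands on the corner "
          "the lamp post lights the street "
          "the street leads to the corner"
      )
      model = train(corpus)
      print(complete(model, "the lamp"))
      # e.g. "the lamp post lights the street leads to the corner"
      # -- a plausible-looking continuation, with zero understanding of lamp posts
      ```

      The output looks locally sensible only because the patterns it copies were sensible; the model never knows what a lamp post is. LLMs replace the word-pair table with a neural network conditioned on far more context, but the task, predicting a continuation from prior patterns, is the same.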