I'm fascinated by the idea of animals or other unusual witnesses being called to testify lol

I feel like the prosecution or whoever subpoenas it would just get the transcript, or the company would release their logs or something, but I wonder if that could ever end up happening, particularly whether it would be cross-examined

  • AnchoriteMagus@lemmy.world · 9 · 1 month ago

    In the form of some sort of record of user inputs, like a chat log between two humans, sure. Other than that, I sincerely hope not. Remember, LLMs and everything else we term “AI” are just predicting the statistically most probable answer to a question based on their training data. They have no concept of truth, nor any way to evaluate it.
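    To make the "statistically most probable answer" point concrete, here's a toy sketch (a simple bigram word-counter, nothing like a real transformer, but the same principle: the model emits the most frequent continuation seen in training, and truth never enters into it):

    ```python
    from collections import Counter, defaultdict

    # Tiny "training corpus" - the only thing the model ever sees.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count which word follows which.
    followers = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        followers[prev][nxt] += 1

    def predict(prev_word):
        # Return the most frequent continuation seen after prev_word.
        # There is no notion of "true" here, only "seen most often".
        return followers[prev_word].most_common(1)[0][0]

    print(predict("the"))  # "cat" - it followed "the" more often than "mat" or "fish"
    ```

    Real LLMs replace the counting table with a neural network over a vast corpus, but the output is still a probable continuation, not an evaluated claim.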

    • cheese_greater@lemmy.world OP · 1 · 1 month ago

      I know all that, but man, I dunno. I'm naturally critical, but even when they lie or don't know and try to bullshit, they seem to get somewhere closer to resembling reality/truth as you grill them and work thru the bullshit Socratically or whatever. It's fascinating.

      Like, objectively I think you're right, but I've had too many experiences with them where I pinned them down on a mistake/error or even a straight-up “hallucination” without giving them a direction to weasel into, and they seem to be pretty good at being led to sweep all that away and get to something actually sort of useful

      • OwOarchist@pawb.social · 7 · 1 month ago

        In other words, any lawyer ‘cross-examining’ an AI could ‘grill them’ and ‘work thru the bullshit Socratically’ or whatever and get the “witness” to say whatever it is they want it to say.

      • CorrectAlias@piefed.blahaj.zone · 5 · 1 month ago

        They don’t lie, that’s not how LLMs work. LLMs don’t think. They don’t learn. They respond based on your input, and they aren’t capable of doing anything more than that. They aren’t alive.

      • Whitebrow@lemmy.world · 5 · 1 month ago

        An exercise in futility.

        The more you “grill it”, or more specifically pigeonhole it into the narrative you want to see, the more likely it is to spit that narrative back at you sooner or later.

        The same thing happens when you push these text generators to convince you that the earth is flat, or that taking a hit of heroin to take the edge off is something you should be doing.

        The only thing it's useful for is logs, like originally stated; any sort of interaction will result in statistically plausible bullshit, not actual evidence.