• TachyonTele@lemm.ee

      There’s going to be an entire generation of people growing up with this and “learning” this way. It’s like every tech company got together and agreed to kill any chance of smart kids.

  • paddirn@lemmy.world

    And yet it doesn’t even list ‘Plum’. Or did it think ‘Applum’ was just a variation of a plum?

      • RealFknNito@lemmy.world

        Reminds me of how the “1800 gallons for one burger” statistic uses annual rainfall to calculate that, as if it were captured, stored, and used from our kitchen sinks.

        • bbuez@lemmy.world

          What’s that got to do with datacenters using evaporative cooling?

          Also, if you’re curious: cows drink 9-20 gallons of water a day, and over the typical 1-2 year lifespan that amounts to 3,285 gallons on the conservative side, or up to 14,000 gallons in hot climates, per cow. And depending on the cut, a cow yields some 800 quarter-pound patties, so using that conservative 1 year at 9 gallons a day, that works out to about…

          4 gallons per burger’s worth of meat. That total 3,000-14,000 gal/cow water usage is certainly an issue, especially in hot climates, but why make up bullshit?
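
          For anyone who wants to check that math, here’s a rough back-of-the-envelope sketch in Python (the 9-20 gal/day, 1-2 year, and ~800-patty figures are just the ones quoted above, not exact industry numbers):

              # Rough check of the gallons-per-burger figure, using the numbers quoted above.
              daily_gallons = 9        # conservative drinking water per day
              lifespan_days = 365      # conservative 1-year lifespan
              patties_per_cow = 800    # rough quarter-pound patties per cow

              conservative_total = daily_gallons * lifespan_days    # 3,285 gallons per cow
              hot_climate_total = 20 * 365 * 2                      # ~14,600 gallons per cow

              print(conservative_total / patties_per_cow)           # ~4.1 gallons per burger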

          • RealFknNito@lemmy.world

            Because statistics like those often ignore the fact that the water they’re counting isn’t accessible for other uses anyway. They count the rainwater used to make the grass grow, water we don’t collect and don’t have available for other uses, but it makes the number higher and more shocking. If you see “14,000 gallons of water per cow”, you think that’s how much water we’ve “lost”, when in reality it’s a massive bucket of rainwater they’re drinking out of, not a hit to our irrigation or water treatment facilities.

            It’s a misleading statistic meant to shock and manipulate you into a specific way of thinking, a lot like your original comment. I don’t give a shit how much rainwater a cow drinks; I care about how much is being pulled from local irrigation. Rainwater is going to lie in the dirt and evaporate anyway, so why is it being counted? If the answer to how much water is being pulled from our infrastructure is nearly zero, that’s how many fucks I dedicate to it.

            Should datacenters be operating in Silicon Valley, where water is already scarce? No. But people also shouldn’t be living in a fucking desert, overdrawing from the river that lets anyone live there, so maybe they should move. Not like they can’t afford to.

  • Empricorn@feddit.nl

    Some “AI” LLMs resort to light hallucinations. And then ones like this straight-up gaslight you!

    • eatCasserole@lemmy.world

      Factual accuracy in LLMs is “an area of active research”, i.e. they haven’t the foggiest how to make them stop spouting nonsense.

      • Swedneck@discuss.tchncs.de

        DuckDuckGo figured this out quite a while ago: just fucking summarize Wikipedia articles and link to the precise section the text was lifted from.

      • Excrubulent@slrpnk.net

        Because accuracy requires that you make a reasonable distinction between truth and fiction, and that requires context, meaning, understanding. Hell, actual humans aren’t that great at this task. This isn’t a small problem; I don’t think you solve it without creating AGI.

  • Sunny' 🌻@slrpnk.net

    It’s crazy how bad AI gets if you make it list names ending with a certain pattern. I wonder why that is.

    • Even_Adder@lemmy.dbzer0.com

      It can’t see which tokens it puts out; you would need additional passes over the output for it to get this right. That’s computationally expensive, so I’m pretty sure that didn’t happen here.

    • bisby@lemmy.world

      I’m not an expert, but it has something to do with full words vs. partial words. It also can’t play Wordle, because it doesn’t have a proper concept of individual letters in that way; it’s trained to only handle full words.

      • Swedneck@discuss.tchncs.de

        They don’t even handle full words; it’s just arbitrary groups of characters (including spaces and other stuff like apostrophes, afaik) represented to the software as indexes in a list. It literally has no clue what language even is; it’s a glorified calculator that happens to work on words.
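
        If you want to see that for yourself, here’s a small Python sketch using OpenAI’s open-source tiktoken tokenizer (just one example tokenizer; the exact splits vary by model):

            # pip install tiktoken
            import tiktoken

            enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models

            for word in ["plum", "Applum"]:
                ids = enc.encode(word)                   # list of integer token IDs
                pieces = [enc.decode([i]) for i in ids]  # the character groups those IDs stand for
                print(word, "->", ids, pieces)

            # The model only ever sees the integer IDs, not the letters inside them,
            # so "ends with 'um'" isn't something it can check directly.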

    • blindsight@beehaw.org

      LLMs aren’t really capable of understanding spelling. They’re token prediction machines.

      LLMs have three major components: a massive database of “relatedness” (how closely related the meanings of terms are), a transformer (figuring out which of the previous words carry the most contextual meaning), and statistical modeling (the likelihood of the next word, like what your cell phone does).

      LLMs don’t have any capability to understand spelling unless it’s something they’ve been specifically trained on, like “color” vs. “colour”, which is discussed in many training texts.

      “Fruits ending in ‘um’” or “Australian towns beginning with ‘T’” aren’t discussed in the training data often enough to build strong associations, so the model can’t answer those sorts of questions.

    • RGB3x3@lemmy.world

      I just tried to have Gemini navigate to the nearest Starbucks and the POS found one 8hrs and 38mins away.

      Absolute trash.

      • RGB3x3@lemmy.world

        Just tried it with Target and again, it’s sending me to Raleigh, North Carolina.

        • Randomocity@sh.itjust.works

          That leads me to believe it thinks you are in North Carolina. Have you given Gemini location access? Are you on a VPN?

          • RGB3x3@lemmy.world

            No VPN, it all has proper location access. I even tried it with a local restaurant that I didn’t think was a chain, and it found one in Tennessee. I’m like 10 minutes away from where I told it to go.

          • RGB3x3@lemmy.world

            I would totally leave if the “salary to cost of living” ratio wasn’t so damn good.

            I’d move to Germany or the Netherlands or Sweden or Norway so fast if I could afford it.