• markon@lemmy.world · 1 year ago

    It’s amazing how Microsoft can take good models and absolutely ruin them in production… ChatGPT isn’t perfect, but it’s like the difference between talking to a wall and talking to an average person whose reasoning in many domains equals or exceeds human performance, provided the user knows how to write a good prompt. That changes a little every time they do a major model update, though.

    I’ve had more intelligent conversations with my own computer running a 3-billion-parameter open-source model. They must be wasting an incredible amount of money, especially on GPT-4, considering it produces pretty shit results through Bing Chat…
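
    For anyone curious, here is a minimal sketch of what chatting with a ~3B open model locally can look like, assuming Hugging Face transformers; the model name below is just an illustrative pick, the comment doesn’t say which model was used:

    ```python
    # Rough sketch: local chat with a ~3B-parameter open-source model
    # using Hugging Face transformers. Model name is an assumed example.
    from transformers import pipeline

    chat = pipeline(
        "text-generation",
        model="stabilityai/stablelm-zephyr-3b",  # illustrative 3B model
        device_map="auto",  # place weights on GPU if one is available
    )

    out = chat(
        "Why do small local LLMs sometimes feel more responsive than hosted ones?",
        max_new_tokens=200,
    )
    print(out[0]["generated_text"])
    ```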

    • MacN'Cheezus@lemmy.today (OP) · 1 year ago

      I don’t think that’s a problem with the model itself; it’s that the model was heavily censored and lobotomized to achieve maximum political correctness, so they could avoid another Tay incident.

      • lud@lemm.ee · 1 year ago

        It makes sense that they do that, since the media and random people on the internet treat everything ChatGPT and Bing Chat say as being as valid as statements from official OpenAI and MS spokespersons.