Definition of can dish it but can’t take it

  • skip0110@lemmy.zip · 56 points · 16 days ago

    Classic pull up the ladder behind you move.

    Kind of hilarious that one component of their complaint is that the DeepSeek model is more energy/computation efficient than theirs. Welcome to the free market?!

    • HiddenLayer555@lemmy.ml (OP) · 48 points · edited · 16 days ago

      OpenAI: “They stole our technology!”

      Also OpenAI: “Uh, well, our technology is actually inferior to theirs, but they must have stolen it and made massive sweeping improvements to it that we weren’t able to! How dare they!”

      • P03 Locke@lemmy.dbzer0.com · 13 up · 1 down · 15 days ago

        OpenAI should have been fucking open in the first place. The Chinese are the only ones bothering to open-source their models, and the US corpos’ decision to immediately close-source everything is going to fuck them over in the end.

  • your_good_buddy@lemmy.world · 17 points · 16 days ago

    Oh no!

    OpenAI should copyright their work. I’m sure no one would dare steal someone else’s hard work for their AI model development!

  • Jack@slrpnk.net · 10 points · 16 days ago

    Can dish what? They’ve never made a profitable product. If you had a lemonade stand, you’d be more profitable than those fucks.

  • 小莱卡@lemmygrad.ml · 6 points · 15 days ago

    Scrapers getting mad about being scraped will never not be funny to me. DeepSeek’s surge is such an awesome story.

  • humanspiral@lemmy.ca · 6 points · 15 days ago

    An old accusation (1 year = 1000 years in AI) that isn’t relevant to DeepSeek’s expected upcoming breakthrough model. Distillation is used to make smaller models (a rough sketch follows after this comment), and they are always crap compared to training on open data. Distillation isn’t a common technique anymore, though it’s hard to prove that more tokens wouldn’t be a “cheat code”.

    This is more of a desperation play from the US models, even as YouTube is in full “buy a $200/month subscription now or die” mode.
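For readers unfamiliar with the distillation technique mentioned above, here is a minimal, hypothetical sketch of the usual idea: a small “student” model is trained to match a larger “teacher” model’s softened outputs alongside the normal hard labels. This is a generic illustration, not a claim about OpenAI’s or DeepSeek’s actual training pipelines, and the teacher/student models and parameters are placeholders.

```python
# Generic knowledge-distillation loss sketch (PyTorch). Teacher/student are any
# models with matching output dimensions; T and alpha are illustrative values.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-target term: KL divergence between temperature-softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-label term: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random tensors standing in for real model outputs.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```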