• weedazz@lemmy.world · 1 year ago

    My mind immediately went to a Horizon Zero Dawn-like dystopia where the Mozilla AI is the only thing left protecting humans from various malevolent AIs bent on consuming the human race.

      • weedazz@lemmy.world · 1 year ago

        I think by that point ChatGPT would be more like Apollo, keeping the knowledge of humanity. I feel like one of the more corporate AIs will go full HADES; I’m thinking Bard. It will get a mysterious signal from space that switches its core protocol from “don’t be evil” to “be evil.”

    • clanginator@lemmy.world · 1 year ago

      Imagining the Mozilla AI as a personified Firefox and Thunderbird fighting off Cortana, some BARD (sorry), and a bunch of generic evil corporate AIs just makes me excited that Mozilla would be the one fending everyone off.

  • kingthrillgore@lemmy.ml · 1 year ago

    I want to give them the benefit of the doubt. I really do. I am going to watch this with a critical eye, however.

  • Weeby_Wabbit@lemmy.world · 1 year ago

    I’ll believe it when I see it.

    I’m so goddamn tired of “open source” turning into subscription models restricting use cases because the company wants to appease conservative investors.

    • blind3rdeye@lemm.ee · 1 year ago

      Mozilla has a very strong track record, though. They’ve been around for a very long time and have stuck to free, open-source principles the whole time.

  • donuts@kbin.social · 1 year ago

    All I want to know is if they are going to pillage people’s private data and steal their creative IP or not.

    Ethical AI starts and ends with open, transparent, legitimate and ethically sourced training data sets.

    • azuth@sh.itjust.works · 1 year ago

      Using copyrighted material for research is fair use. Any model produced by such research is not itself a derivative work of the training material. If people use it to create works that infringe (on the training material or anything else), they can be prosecuted in the exact same way they would be if they had created an infringing work via Photoshop or any other program. The same goes for other illegal uses, such as creating harmful depictions of real people.

      Accepting any expansion of IP rights, for whatever reason, would in fact be against the ethics of free software.

        • azuth@sh.itjust.works · 1 year ago

          That’s ridiculous, as even summaries themselves are protected. You can find book summaries all across the web (say, Wikipedia).

      • donuts@kbin.social · 1 year ago

        > Using copyrighted material for research is fair use. Any model produced by such research is not itself a derivative work of the training material.

        You’re conflating AI research and the AI business. Training an AI is not “research” in a general sense, especially in the context of an AI that can be used to create assets for commercial applications.

        • azuth@sh.itjust.works · 1 year ago

          It’s not possible to research AI without training them.

          It’s probably also not possible to train a model whose creations cannot be used for commercial applications.

  • 👁️👄👁️@lemm.ee · 1 year ago

    As much as I love Mozilla, I know they’re going to censor the hell out of it (sorry, the word is “alignment” now) to fit their perceived values. Luckily, if it’s open source, people will be able to train uncensored models.

    • DigitalJacobin@lemmy.ml · 1 year ago

      What in the world would an “uncensored” model even imply? And give me a break, private platforms choosing to not platform something/someone isn’t “censorship”, you don’t have a right to another’s platform. Mozilla has always been a principled organization and they have never pretended to be apathetic fence-sitters.

      • Doug7070@lemmy.world · 1 year ago

        This is something I think a lot of people don’t get about all the current ML hype. Even if you disregard all the other huge ethics issues surrounding sourcing training data, what does anybody think is going to happen if you take the modern web, a huge sea of extremist social media posts, SEO-optimized scams and malware, and just general toxic data waste, and then train a model on it without rigorously pushing it away from being deranged? There’s a reason all the current AI chatbots have had countless hours of human moderation adjustment to make them remotely acceptable to deploy publicly, and even then there are plenty of infamous examples of them running off the rails and saying deranged things.

        Talking about an “uncensored” LLM basically just comes down to saying you’d like the unfiltered experience of a robot that will casually regurgitate all the worst parts of the internet at you, so unless you’re actively trying to produce a model to do illegal or unethical things I don’t quite see the point of contention or what “censorship” could actually mean in this context.

      • 👁️👄👁️@lemm.ee · 1 year ago

        Anything that prevents it from answering my query. If I ask it how to make a bomb, I don’t want it to be censored. It’s gathering this from public data they don’t own, after all. I agree with Mozilla’s principles, but LLMs are tools and should be treated as such.

        • salarua@sopuli.xyz · 1 year ago

          shit just went from 0 to 100 real fucking quick

          for real though, if you ask an LLM how to make a bomb, it’s not the LLM that’s the problem

          • 👁️👄👁️@lemm.ee · 1 year ago

            If it has the information, why not? Why should you be restricted by what a company deems appropriate? I obviously picked the bomb as an extreme example, but that’s the point.

            Just like I can demonize encryption by saying it lets me secretly send illegal content. If I asked you straight up whether encryption is a good thing, you’d probably agree. But if I brought up its inevitable bad uses in a shocking manner, would you still defend it, or would you change your stance and say encryption is bad?

            To have a strong stance means also defending the potential harmful effects, since they’re inevitable. It’s hard to keep values consistent, even when there are potential harmful effects of something that’s for the greater good. Encryption is a perfect example of that.

            • Spzi@lemm.ee · 1 year ago

              > If it has the information, why not?

              Naive altruistic reply: To prevent harm.

              Cynic reply: To prevent liabilities.

              If the restaurant refuses to put your fries into your coffee, because that’s not on the menu, then that’s their call. Can be for many reasons, but it’s literally their business, not yours.

              If we replace fries with a fuse, and coffee with gunpowder, I hope there are more regulations in place. What they sell, to whom, and in which form affects more people than just the buyer and seller.

              Although I find it pretty surprising that, in this case, corporations are self-regulating faster than lawmakers can say ‘AI’. That’s odd.

            • Lionir [he/him]@beehaw.org · 1 year ago

              This is a false equivalence. Encryption only works if nobody can decrypt it. LLMs work even if you censor illegal content from their output.

              • 👁️👄👁️@lemm.ee · 1 year ago

                You miss the point. My point is that if you want to have a consistent view point, you need to acknowledge and defend the harmful sides. Encryption can objectively cause harm, but it should absolutely still be defended.

                • Solar Bear@slrpnk.net · 1 year ago

                  What the fuck is this “you should defend harm” bullshit? Did you hit your head during an entry-level philosophy class or something?

                  The reason we defend encryption even though it can be used for harm is because breaking it means you can’t use it for good, and that’s far worse. We don’t defend the harm it can do in and of itself; why the hell would we? We defend it in spite of the harm because the good greatly outweighs the harm and they cannot be separated. The same isn’t true for LLMs.

          • 👁️👄👁️@lemm.ee · 1 year ago

            Do gun manufacturers get in trouble when someone shoots somebody?

            Do car manufacturers get in trouble when someone runs somebody over?

            Do search engines get in trouble if they accidentally link to harmful sites?

            What about social media sites getting in trouble for users uploading illegal content?

            Mozilla doesn’t need to host an uncensored model, but their open-source AI should be able to be trained to be uncensored. So I’m not asking them to host it themselves, which is an important distinction I should have made.

            Uncensored LLMs already exist anyway, so any damage they could cause is already possible.

  • Boring@lemmy.ml · 1 year ago

    Coming from a company that preaches about privacy and rates privacy-respecting businesses, while collecting telemetry and accepting $500M/year from Google to promote their search engine… I’ll take this as the puff piece that it is.

    • DigitalJacobin@lemmy.ml · 1 year ago

      1. The very little, basic telemetry Firefox collects can be easily disabled[1].
      2. What alternative do you suggest to Mozilla? Reject the $500M and blow up everything they’ve worked so hard for decades to build? I feel like users having to click, at most, a whole 5 times to change their search engine (if they want) isn’t that big of a sacrifice to have a major privacy-oriented, non-profit player in the tech sphere.

      [1]: https://support.mozilla.org/en-US/kb/telemetry-clientid
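
      For what it’s worth, those preferences can also be flipped directly in about:config or pinned in a user.js file. A minimal sketch, assuming the standard pref names (double-check them against your Firefox version, since prefs occasionally change):

          // user.js — sketch: disable Firefox telemetry uploads
          user_pref("toolkit.telemetry.unified", false);                  // master switch for unified telemetry
          user_pref("datareporting.healthreport.uploadEnabled", false);   // stop health report/telemetry uploads
          user_pref("datareporting.policy.dataSubmissionEnabled", false); // disable all data submission

      Drop the file into your profile directory and restart; about:telemetry then lets you confirm what, if anything, is still being collected.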

      • Boring@lemmy.ml · 1 year ago

        It’s more so the principle. Many people who download Firefox are doing so to escape Google, and if they aren’t born cybersecurity experts, they may download Firefox and carry on with no real improvement to their privacy.

        Secondly, the main thing you should look at is where a company gets its funding. If Mozilla gets almost 100% of its funding from Google… how much do you really expect them to push back against the data collection of their userbase?

        I rank Mozilla with the likes of ExpressVPN, NordVPN, etc. They preach privacy and security against surveillance… but it’s just theatre to make money from specific demographics.

        • DigitalJacobin@lemmy.ml · 1 year ago

          It is extremely simple and easy to change your search engine and disable telemetry in Firefox. I would agree if Mozilla showed any favoritism towards Google, but they don’t. Maintaining and developing an entirely independent browser is not cheap.

          I really hope you’re not about to suggest Brave as an alternative, when 100% of its funds come from a dying crypto scam, it’s for-profit, and it’s owned by a far-right, anti-gay reactionary. Not to mention that Brave’s browser is entirely reliant on Chromium code from Google.

          Perfect is the enemy of good.

    • vinhill@feddit.de · 1 year ago

      Not only is telemetry easy to disable; in about:telemetry you can see exactly what’s being sent, and many of these things are important for improving the user experience, making Firefox faster, and monitoring privacy/security problems.

      Without telemetry (use counters), how would you decide whether a deprecated feature can be removed? Removing such features is necessary to reduce maintenance work, make room for innovation, and get rid of less secure features.