A new wave of AI is poised to transform the technologies we use every day. Trust must be at the core of how we develop and deploy AI, all the time; it is not an optional ‘add-on’. Mozilla has long championed a world where AI is more trustworthy, investing in startups, advocating for laws, and…
As much as I love Mozilla, I know they’re going to censor (sorry, the word is “alignment” now) the hell out of it to fit their perceived values. Luckily, if it’s open source, people will be able to train uncensored models.
What in the world would an “uncensored” model even imply? And give me a break: private platforms choosing not to platform something or someone isn’t “censorship”; you don’t have a right to another’s platform. Mozilla has always been a principled organization, and they have never pretended to be apathetic fence-sitters.
This is something I think a lot of people don’t get about all the current ML hype. Even if you disregard all the other huge ethics issues around sourcing training data, what does anybody think is going to happen if you take the modern web (a huge sea of extremist social media posts, SEO-optimized scams and malware, and just general data toxic waste), train a model on it, and never rigorously push it away from being deranged? There’s a reason all the current AI chatbots have had countless hours of human moderation adjustment to make them remotely acceptable to deploy publicly, and even then there are plenty of infamous examples of them running off the rails and saying deranged things.
Talking about an “uncensored” LLM basically just comes down to saying you’d like the unfiltered experience of a robot that will casually regurgitate all the worst parts of the internet at you. So unless you’re actively trying to produce a model to do illegal or unethical things, I don’t quite see the point of contention, or what “censorship” could actually mean in this context.
It means they can’t make porn images of celebs or anime waifus, usually.
Anything that prevents it from answering my query. If I ask it how to make a bomb, I don’t want it to be censored. It’s gathering this from public data they don’t own, after all. I agree with Mozilla’s principles, but LLMs are also tools and should be treated as such.
shit just went from 0 to 100 real fucking quick
for real though, if you ask an LLM how to make a bomb, it’s not the LLM that’s the problem
If it has the information, why not? Why should you be restricted by what a company deems appropriate? I obviously picked the bomb as an extreme example, but that’s the point.
It’s just like how I can demonize encryption by saying I should be allowed to secretly send illegal content. If I asked you straight up whether encryption is a good thing, you’d probably agree. But if I brought up its inevitable bad uses in a shocking manner, would you defend the ability to do that, or change your stance and say encryption is bad?
To have a strong stance means also defending the potential harmful effects, since they’re inevitable. It’s hard to keep your values consistent when something that’s for the greater good also has potential harmful effects. Encryption is a perfect example of that.
Naive altruistic reply: To prevent harm.
Cynical reply: To avoid liability.
If the restaurant refuses to put your fries into your coffee because that’s not on the menu, then that’s their call. It can be for many reasons, but it’s literally their business, not yours.
If we replace the fries with a fuse and the coffee with gunpowder, I hope there are more regulations in place. What they sell, to whom, and in which form affects more people than just the buyer and the seller.
Though I find it pretty surprising that, in this case, corporations self-regulate faster than lawmakers can say ‘AI’. That’s odd.
This is a false equivalence. Encryption only works if nobody but the intended recipient can decrypt it. LLMs work even if you censor illegal content from their output.
You miss the point. My point is that if you want a consistent viewpoint, you need to acknowledge and defend the harmful sides. Encryption can objectively cause harm, but it should absolutely still be defended.
What the fuck is this “you should defend harm” bullshit? Did you hit your head during an entry-level philosophy class or something?
The reason we defend encryption even though it can be used for harm is because breaking it means you can’t use it for good, and that’s far worse. We don’t defend the harm it can do in and of itself; why the hell would we? We defend it in spite of the harm because the good greatly outweighs the harm and they cannot be separated. The same isn’t true for LLMs.
We don’t believe that at all; we believe privacy is a human right. Also, you’re just objectively wrong about LLMs. Offline uncensored LLMs already exist and will perpetually exist. We don’t defend tools doing harm, we acknowledge it.
If you ask how to build a bomb and it tells you, wouldn’t Mozilla get in trouble?
Do gun manufacturers get in trouble when someone shoots somebody?
Do car manufacturers get in trouble when someone runs somebody over?
Do search engines get in trouble if they accidentally link to harmful sites?
What about social media sites getting in trouble for users uploading illegal content?
Mozilla doesn’t need to host an uncensored model, but it should be possible to train their open-source AI to be uncensored. So I’m not asking them to host it themselves, which is an important distinction I should have made.
And uncensored LLMs already exist, so any damage they could cause is already possible.
Why are lolbertarians on Lemmy?