That’s exactly what I was thinking. And this is actually the first time I’ve heard of a use for LLMs that I might actually be interested in.
Yea, the anti-AI crowd on Lemmy tends to misplace their anger on all AI, when a lot of it should be directed at the corporate BS shoving it everywhere and anywhere to make a profit and make the line go up.
Nestle bottling water is bad, so my solution will be to never drink any water and make fun of people who do. This is how it always comes off to me.
The stuff that gets made fun of by most anti-AI people is AI “art” that people try to argue is equivalent to real, human art.
The main reason people hate AI in general is that nearly all models use data that was taken without the owners’ permission.
It isn’t equivalent to bottled water, it’s equivalent to the chocolate industry: it isn’t essential. So I will wait until there is an AI that was trained ethically, without stolen data, and doesn’t try to replace human art.
That AI is the one you make or at least host yourself. No one is going to host an online AI for you that is 100% ethical, because that isn’t profitable and it is very expensive.
When you villainize AI, you normalize the idea that all AI use is bad. The end result is not people stopping their use of AI, it is people being more okay with using less ethical AI. You can see this with folks driving SUVs and big trucks: they intentionally pick awful choices, because the fatigue of always being wrong for driving a car makes them just accept that it doesn’t matter.
It feels dumb, it is dumb, but it is what happens.
Most people can’t host their own AI. The only AI most people are aware of, and the models being pushed in everyone’s face, are the horrible ones. I think a blanket hatred of all AI is stupid, but it isn’t stupid to assume an AI is unethical, because it most likely is, especially if it’s a commercial one tech bros are posting about on corporate social media.
As long as more people aren’t being told that ethical AI is possible, there will be a large group of people wishing for its failure, especially since it has ruined so many parts of the internet, both with locally hosted models and with models like ChatGPT.
I get it, but we should as a community try to be better than that.
AI won’t fail. It’s already past the point where failing or being a fad was an option. Even if we wanted to go backwards, the steps that took us to where we are with AI have burned the bridges behind us. We won’t get 2014-quality search engines back. We can’t unshitify the internet.
As always, technology isn’t the enemy, it’s the corporations controlling it that are. And honestly the freely available local LLMs aren’t too far behind the big ones.
Well, in some ways they are. It also depends a lot on the hardware you have, of course; a normal 16 GB GPU won’t fit the huge LLMs.
The smaller ones are getting impressively good at some things, but a lot of them still struggle with non-English languages, for example.
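As a rough back-of-the-envelope check of why that is (assuming a quantized model where the weights dominate memory use, plus an arbitrary ~20% allowance for KV cache and runtime overhead), you can estimate VRAM from the parameter count:

```python
# Rough rule-of-thumb VRAM estimate for running a quantized LLM locally.
# These are approximations: weights dominate, and the ~20% overhead factor
# for KV cache and runtime buffers is an assumption, not a measured value.

def estimate_vram_gb(params_billions: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Approximate GB needed to hold the weights, plus overhead."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

for name, params in [("7B", 7), ("13B", 13), ("70B", 70)]:
    print(f"{name} @ 4-bit: ~{estimate_vram_gb(params, 4):.0f} GB")
# 7B @ 4-bit: ~4 GB   (fits a 16 GB card easily)
# 13B @ 4-bit: ~8 GB  (fits)
# 70B @ 4-bit: ~42 GB (does not fit on a single 16 GB GPU)
```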
I am very strongly anti-AI, though I think it has some legitimate uses that have probably saved and improved a lot of lives (like AlphaFold). My main problem with it (and most people’s main problem with it) is the way it has been trained on stolen data and art.
Since I don’t know much about non-corporate AI, I’m interested to know how an open-source LLM trained just off your bookmarks would work. I assumed it would still need to be trained on stolen data to be able to form sentences as well as the more popular models, but I may be wrong; maybe the volume of data needed for a system like that is small enough that it can be trained only on willingly donated data? I doubt it though.
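For what it’s worth, the usual approach for something like this isn’t to train on your bookmarks at all: you take an already-pretrained open-weight model (that pretraining is where the ability to form sentences comes from, and where the training-data ethics question lives) and only retrieve your bookmark text into the prompt at query time. A minimal sketch of that idea, assuming the gpt4all Python package, a hypothetical bookmarks.txt with one saved page per line, and an example model file name:

```python
# Retrieval sketch: the bookmarks are only searched and pasted into the
# prompt, never trained on. bookmarks.txt and the model file name are
# assumptions for illustration.
from gpt4all import GPT4All

def top_matches(query: str, lines: list[str], k: int = 3) -> list[str]:
    """Rank bookmark lines by naive keyword overlap with the query."""
    q = set(query.lower().split())
    return sorted(lines, key=lambda line: -len(q & set(line.lower().split())))[:k]

bookmarks = [l.strip() for l in open("bookmarks.txt") if l.strip()]
question = "What was that article about self-hosting a photo library?"

context = "\n".join(top_matches(question, bookmarks))
prompt = f"Using only these saved bookmarks:\n{context}\n\nQuestion: {question}\nAnswer:"

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # example model file
print(model.generate(prompt, max_tokens=200))
```

Whether that counts as ethical still hinges on how the base model itself was trained, which is the open question you’re pointing at.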
Wow, that’s a big net. Surely your comment is applicable to all your catch.
Right?
Yes, do tell me more about the tendencies of the crowd as a whole.
Fluid dynamics are quite effective at that. And there seem to be only a few main talking points that they bring up (environmental, energy, training data, art, job loss, personal dislike, wealth concentration (probably better to just say economic, but it’s pretty much just this), ill-fitting usages, or not understanding how the model works at a fundamental level). Unless people think about something a lot, they generally come up with similar arguments.
LLMs have their uses if you aren’t relying on them too much or trusting them to give true information.
I can’t trust bookmarks, I just use open tabs if I want to keep track of info. That’s why I have 75 tabs open most of the time.
Finding the setting that stopped tabs from being reset every time I closed my browser changed my life. I just don’t know if it was a positive or negative change yet.
Are you me? I have hundreds open at any given moment lmao
A friend of mine has 2.5k, and I don’t even know how many I have open. I have 13 windows of either ~5 or 100-200 tabs each.
I have 172 spread over 13 windows, loosely grouped by subject, but it gets muddied fast as I often just start searching new things in existing windows and then have some eclectic mixes.
I am like this as well, but since I also postpone everything I end up with a dozen more month after month. So for quite a while now I have been using https://raindrop.io/, and it is the best thing for me. You can export regularly if you worry about losing data. I don’t believe your privacy is at risk there, but you might want to check for yourself if you value that a lot.
I use gpt4all and a markdown list of notes for it to sort through. Kind of works, but I need to tinker with the application more, because it’s fun lol
Manually reading through is going to teach you more and give more context than a txt parser’s summary.
Just use your brain and don’t outsource your thinking.
Hah. Yeah, I’ll do that as soon as you invent a way to freeze time.
For what it’s worth, I’m pretty sure it’s less energy efficient to run a local open source LLM than to offload the task to a data center, but the flexibility and privacy are too big of a deal to ignore.
In any case, chatbots suck at finding accurate information reliably, but they are actually pretty good at surfacing things you already know or can verify at a glance, with surprisingly little input. The fact that a piece of tech is often misused doesn’t mean it’s useless. This simplistic black-and-white stuff is so dumb, and social media is so full of it. Speaking of often-misused technology, I suppose.
To pull this off I’d need everything in audio form
Use text to speech
Can I have an AI, or maybe go old school with an RSS feed, collect stuff and read it to me?
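The old-school version is pretty easy to wire up: pull an RSS feed and pipe the entries through offline text-to-speech. A minimal sketch assuming the feedparser and pyttsx3 packages (the feed URL is just a placeholder):

```python
# Pull the latest items from an RSS feed and read them aloud with offline TTS.
# The feed URL is a placeholder; swap in any feed you actually follow.
import feedparser
import pyttsx3

feed = feedparser.parse("https://example.com/feed.xml")  # placeholder feed URL
engine = pyttsx3.init()

for entry in feed.entries[:5]:
    # Read the headline and a short summary for each of the latest items.
    engine.say(entry.title)
    engine.say(entry.get("summary", ""))

engine.runAndWait()
```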
Depends on what knowledge we are talking about. Personally, I’d be feeding it tons of manuals so that I could ask questions like “Which version of software x introduced feature y?” There’s no extra context I need, I just need a version number to give to a customer. And in my industry, that type of info just doesn’t show up on Google. So having an LLM that can answer the question in seconds saves me an hour of sifting through manuals.
LMAO yea using an LLM to dig through things to find what I need faster so that I can read further on the non-summarized version for more depth is “outsourcing my thinking” 🙄
oh poor baby 🥺do you need the robot to read through your bookmarks? 🥺 yeah? 🥺do you need the bo-bot to write you essay too? 🥺 yeah ??? 🥺 you can’t do it?? 🥺 you’re a moron?? 🥺do you need chat gpt to fuck your wife ??
Joke’s on you, I don’t have a wife.
Not true in all cases. Yes, if you want to read a novel, you will enjoy reading it way more than a computer-generated summary. But if you want to source information, it’s a whole other story. Also, you still need to use your brain to understand summaries.