They don’t have to; algorithms do whatever they are designed to do. Long division is an algorithm.
Profit motives are the issue here.
Yeah, the narrowing of the word “algorithm” to only mean “social media recommendation algorithms” is getting on my nerves.
It’s the only time normies encounter the word.
How do you feel about crypto?
Cryptography is pretty useful
At this point, “crypto” is its own word short for “cryptocurrency”, and not for “cryptography” in the broader sense. It’s unfortunate, but that’s how people use it.
That isn’t even generally true. Try mentioning crypto on the LKML and see what they think you mean.
Scam from start to finish
I’ve always thought it was so funny when people say tHe aLgOrItHM like it’s a bad word or something. I know they mean social media & marketing, but it’s funny to think that they’re very concerned about something like bubble sort.
Algorithms [based on engagement]
…as opposed to platforms like Lemmy, where the only political ideologies you’ll find are “leftists” who, when asked what they even believe, respond with “what are you, a cop?”
are you? 👮‍♂️ 👀
If you’re a cop, you have to tell me, man! Like legally, you can’t arrest me without telling me you’re a cop!
For a long time Facebook counted an angry react as equal to five likes for measuring engagement. It’s very much intentional.
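To make that concrete, here's a toy sketch of what reaction-weighted engagement scoring might look like. The 5x weight on the angry react mirrors the reporting mentioned above; everything else (the function names, the other weights, the sample posts) is made up for illustration and is not Facebook's actual code.

```python
# Toy illustration of engagement-weighted ranking.
# The 5x "angry" weight follows the reporting on Facebook's reaction
# weighting; all other names and numbers here are hypothetical.

REACTION_WEIGHTS = {
    "like": 1,
    "love": 5,
    "haha": 5,
    "wow": 5,
    "sad": 5,
    "angry": 5,
}

def engagement_score(reactions: dict) -> int:
    """Sum each reaction count multiplied by its weight."""
    return sum(REACTION_WEIGHTS.get(kind, 1) * count
               for kind, count in reactions.items())

calm_post = {"like": 100, "love": 10}   # 100*1 + 10*5  = 150
rage_post = {"like": 10, "angry": 40}   # 10*1  + 40*5  = 210

# The rage-inducing post outranks the calm one despite far fewer
# total reactions, which is the whole point being made above.
print(engagement_score(calm_post))  # 150
print(engagement_score(rage_post))  # 210
```

Under weights like these, a feed sorted by `engagement_score` will systematically surface the content that makes people angriest.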
Based and reposted
Nobody likes Muhammad ibn Musa al-Khwarizmi anymore, I guess he had a good run since around 820 CE.
Wasn’t this literally the shady research that Facebook got caught doing with Cambridge Analytica? Specifically tweaking a user’s feed to be more negative resulted in that user posting more negative things themselves and more engagement overall.
I wonder exactly how much of Hawaii Zuckerberg has to own before people start to question what they are getting from Facebook.
Yep!
Facebook figured out how to monetize trolling.
Over 10 years later, it’s destroyed society, but made them a lot of money.
I think the word is ragebait
What I don’t get about this is why, in this day and age, with all the analytics tools we have, companies continue to just happily pay for simple eyeball exposure.
The only time they seem to have any pause at all on this model is if people post screenshots of ads for their products next to posts literally praising Nazis.
These so called AIs (LLMs) can learn to tell the difference between positive/happy/uplifting posts, neutral posts, and angry/sad/disturbing posts. The advertisers should be asking for their products to be featured next to the first and second groups of posts.
People engage based on anger, sure. They click posts and reply and whatnot. But do they click the ad next to a post that pisses them off and then buy the product?
Or is this purely a subconscious intrusion effort? Do the advertisers just want their products in front of eyeballs regardless of what’s around the ad? It seems like the answer is “no” when they’re called out. But maybe it’s “yes” if they can get away with it?
Every social media company’s content algorithm should be open source, or at the very least a government agency should be able to audit the code.
Algorithms simply determine which posts will get the most interaction and feed them to people. Does it benefit corps? Of course! But it’s driven by people who choose to engage with this content.
But… I love this cat and want my fingers to engage with their fuzzy head!
I’ve been participating in Threads (yeah, I know, should be ashamed) and I’m unfortunately a sucker for some of the ragebait, especially political.
Guess what Threads pushes at me. A lot of the dumbest ragebait. Not people that actually want to have a conversation. My fault for being a sucker, but the algorithms work.
Doesn’t really matter, I’m shadowbanned. Pissed off too many republican propagandists by refuting them, so as usual, the “report” button is their remedy.
deleted by creator
The old thread I posted this in was deleted, but I wrote this:
Okay so hear me out. I have this pet theory that might explain some of the divide between genders, but also political parties, causing paralysis which ultimately might lead to humanity’s extinction. Forgive me if I’m stating the obvious.
I’m going to set up two axioms to arrive at an extrapolated conclusion.
One: Human psychology tends to ascribe more weight to negative things than positive things in the short term. In the long term this generally balances out, but in the short term it’s more prudent in a biological sense to pay attention to the rustling in the bushes than the berries you might pick from them. This is known as the negativity bias.
Two: The modern gatekeepers of social interaction, Big Tech, employ blind algorithms that attempt to steer your attention towards spending more time on their platforms. These companies are the arbiters of the content we experience daily and what you do and don’t see is mostly at their discretion. The techniques they employ, in simple terms, are designed to provoke what they call ‘engagement’. They do this because at the end of the day FAANG have not only a financial interest, but a fiduciary duty to sell advertisements at the behest of their shareholders. The more they can engage you, the more ads they can sell. They employ live A-B testing, divide people into cohorts and poke and prod them with psychological techniques to try and glue your eyeballs to their ads.
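The live A/B testing described above usually works by deterministically bucketing users into cohorts. Here's a minimal sketch of the standard hashing trick; the function name, experiment names, and cohort counts are hypothetical illustrations, not any platform's real API.

```python
import hashlib

def assign_cohort(user_id: str, experiment: str, n_cohorts: int = 2) -> int:
    """Deterministically bucket a user into one of n_cohorts for a
    given experiment by hashing the experiment name and user id.
    The same user always lands in the same bucket, so behavior can
    be compared across cohorts over time."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_cohorts

# Hypothetical usage: split users 50/50 on a new feed-ranking tweak,
# then compare time-on-platform between the two groups.
bucket = assign_cohort("user123", "feed_ranking_v2")
print(bucket)  # 0 or 1, stable for this user and experiment
```

Because assignment is a pure function of the ids, no coordination or storage is needed: every server agrees on which variant each user sees, which is what makes poking and prodding whole cohorts this cheaply possible.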
Extrapolated conclusion: These companies have a financial and legally binding interest to divide the population against itself, obstructing politics and social interaction to the point where we might not be able to achieve any of the goals that we need to reach to prevent oblivion.
Thank you for coming to my TED Talk.
I don’t even think this is controversial in any way, in fact I used to assume this was just common knowledge after Cambridge Analytica…
I deleted, as in permanently, totally deleted my FB presence when that came out… but everyone else I explained … basically what you’ve just explained … to, thought I was insane or overreacting and paranoid.
…
It’s simple.
Engagement, usage, and time on platform are what’s being optimized for.
What drives these things most effectively?
Hatred, outrage, extremely offensive and divisive things.
…
… And they know that they can, through exposing people to such things, make said people more extreme and hateful and anxious and depressed.
So… from an ‘optimize for platform usage’ standpoint… perfect! It’s a reinforcing loop!
…
Zuckerberg stated at one point that his goal with Facebook was to be able to profile (and manipulate, but he didn’t say that part) users so well that he’d be able to predict what they’d post next.
He really did/does just view all social interaction as a very complex problem that can be ‘solved’, like a physics question can be solved, to make a predictive model.
They literally know that their business model is to ruin social discourse, ruin people’s mental health and their lives, and to polarize society.
It should not be surprising in any way that, well now society is extremely polarized and mentally ill.
The term “algorithm” in this context is simply a convenient term hiding the intentional right-wing radicalization of users to push them toward pro-business policies, so can we please call this out more often?
I’m quite tired of “algorithm” standing in for the intentions behind the owners who write and maintain it.
It was also an “algorithm” that inflated rent around the country, right?
An algorithm, yes. Written with the intention of inflating rent.
It’s not an accident. Algorithm my hair-hole