It’s probably nothing we didn’t already know, but researchers from the University of Amsterdam suggest that “core dysfunctions may be rooted in the feedback between reactive engagement and network growth, raising the possibility that meaningful reform will require rethinking the foundational dynamics of platform architecture.”

  • WoodScientist@lemmy.world · 10 days ago

    We need to outlaw individually targeted algorithmic content feeds. A feed is still fine as long as it’s entirely content from channels you subscribe to. If YouTube only showed you videos from channels you subscribe to, it would be a much less toxic platform. And new channels could still spread via search, peer-to-peer recommendations, etc. This also wouldn’t be censorship. You can publish whatever you want on your own site or whatever a platform allows you to publish on theirs. But it’s the individually targeted, psychologically optimized content feeds that are killing us. Feeds optimized for engagement and rage. Feeds optimized to be as addictive as possible. Facebook was fine when it was just posts from your actual friends. But like other social media, it’s degenerated into a rage box.

    I really think this is what we need to do. Social networks were a great idea; social media wasn’t. The TikTok feed and the YouTube algorithm need to be left in the ash heap of history. They’re just too addictive to be used responsibly. We should regulate social media companies like we do purveyors of alcohol and other addictive products. And that could start by banning individually targeted algorithmic feeds. “Going viral” is not something that should happen automatically. The only way to go viral should be if actual real human beings repeatedly recommend your content to other actual human beings.
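
    To make that distinction concrete, the two kinds of feed differ mainly in the ranking function that builds them. A minimal sketch in Python (the Post fields and function names are illustrative assumptions, not any platform’s actual API):

    ```python
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str        # channel or account that published the post
        created_at: float  # unix timestamp
        engagement: float  # whatever signal the platform optimizes (clicks, shares, outrage)

    def subscription_feed(posts, subscriptions):
        """Only posts from channels the user explicitly follows, newest first."""
        mine = [p for p in posts if p.author in subscriptions]
        return sorted(mine, key=lambda p: p.created_at, reverse=True)

    def targeted_feed(posts, predicted_engagement):
        """Individually targeted feed: rank everything by predicted engagement,
        whether or not the user ever asked to see the author."""
        return sorted(posts, key=predicted_engagement, reverse=True)
    ```

    Discovery still works in the first model; it just has to come from search or from people you already follow boosting something, rather than from a ranking model deciding what will keep you scrolling.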

  • Rimu@piefed.social · 10 days ago

    tldr:

    There are 3 problems:

    1. Echo chambers or filter bubbles. You need to have a diversity of opinion, a diversity of perspective.
    2. Concentration of influence. Deliberation needs to be among equals; people need to have more or less the same influence in the conversation.
    3. The social media prism. The more extreme users tend to get more attention online due to algorithmic amplification and human bias towards negativity.

    IMO Lemmy/PieFed only really suffers from the first problem.
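
    One way to make “more or less the same influence” measurable is an inequality index over how much attention each account gets: a Gini coefficient near 0 means attention is spread evenly, near 1 means a handful of accounts dominate the conversation. A rough sketch (the attention counts are made-up numbers):

    ```python
    def gini(values):
        """Gini coefficient: 0 = perfectly equal, 1 = one account gets everything."""
        xs = sorted(values)
        n, total = len(xs), sum(xs)
        cum = sum((i + 1) * x for i, x in enumerate(xs))
        return (2 * cum) / (n * total) - (n + 1) / n

    # Hypothetical impressions per account on two networks:
    flat_network = [90, 110, 95, 105, 100]   # everyone is heard about equally
    prism_network = [5, 10, 8, 7, 970]       # one loud account dominates
    print(gini(flat_network), gini(prism_network))  # low vs. high inequality
    ```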

    Despite the obvious problems with using LLMs to pretend to be people (which the authors acknowledge), they found this basic dynamic:

    You hit retweet when you see someone being angry about something, or doing something horrific, and then you share that. It’s well-known that this leads to toxic, more polarized content spreading more.

    But what we find is that it’s not just that this content spreads; it also shapes the network structures that are formed [i.e. who follows whom]. So there’s feedback between the affective, emotional action of choosing to retweet something and the network structure that emerges. And then in turn, that network structure feeds back into what content you see, resulting in a toxic network. The definition of an online social network is that you have these posting, reposting, and following dynamics; it’s quite fundamental to it. That alone seems to be enough to drive these negative outcomes.

    We tested six different interventions [chronological feeds, etc.], but it doesn’t really make a difference in changing the basic outcomes.
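
    The feedback loop described above, and what “testing an intervention” against it looks like, can be sketched as a tiny agent-based loop in which reposting both spreads a post and creates a follow edge, and an intervention is just a different feed-ranking rule plugged into the same loop. The rules and numbers below are illustrative guesses, not the paper’s actual model:

    ```python
    import random

    def simulate(rank_feed, agents=60, rounds=150, seed=1):
        """Toy repost/follow loop; returns mean toxicity of circulating posts."""
        rng = random.Random(seed)
        follows = {a: {rng.randrange(agents) for _ in range(3)} for a in range(agents)}
        posts = [(rng.randrange(agents), rng.random()) for _ in range(300)]  # (author, toxicity)
        for _ in range(rounds):
            new_posts = []
            for a in range(agents):
                # Each agent only sees posts from accounts it follows, ordered by the feed rule.
                visible = rank_feed([p for p in posts if p[0] in follows[a]])[:10]
                for author, tox in visible:
                    # Angrier content is more likely to be reposted...
                    if rng.random() < 0.02 + 0.15 * tox:
                        new_posts.append((a, tox))
                        # ...and reposting creates a follow edge back to the source,
                        # so the network itself grows around the most toxic content.
                        follows[a].add(author)
            posts = (posts + new_posts)[-3000:]  # keep the circulating pool bounded
        return sum(t for _, t in posts) / len(posts)

    # Two feed rules: an engagement-style ranking (crudely proxied by toxicity,
    # since outrage predicts engagement) vs. an unranked/chronological feed.
    engagement_ranked = lambda ps: sorted(ps, key=lambda p: p[1], reverse=True)
    chronological = lambda ps: ps  # whatever order posts arrive in, no engagement signal
    print(simulate(engagement_ranked), simulate(chronological))
    ```

    The quoted point is that swapping in rules like the chronological one leaves the repost-and-follow feedback itself intact, which is why none of the six interventions changed the basic outcomes much.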