• Flumpkin@slrpnk.net · 10 months ago

      That might actually be the kind of thing where open-source AI could help, at least I hope: detecting bias, lies, or AI-powered filtering/sorting of content.

        • Kedly@lemm.ee · 10 months ago

          See, THIS is the criticism of AI I can actually empathize with; I might even agree with it somewhat.

        • OhNoMoreLemmy@lemmy.ml · 10 months ago

          Honestly, most of what Cambridge analytica did was blackmail, illegal spending, and collusion between campaigns that were legally required to be separate.

          Much of the data processing/ML was intended as a smokescreen to distract from the big stuff that was known to work and had consequently been legislated against. The problem is that they were so incompetent that the distraction technique was also illegal.

          Maybe the machine learning also worked, but it's really not clear.