• AggressivelyPassive@feddit.de · 1 year ago

    Who actually gives a crap about AI in real life?

    Seriously, what possible actual, real-life use case does the average user (even a Pixel user) have? Image processing, maybe, but that’s nothing groundbreaking.

    Every single demo of anything AI-related I’ve seen is nothing more than a nice demo. Impressive, but still just a demo.

    Think about it: what are you actually doing with your phone that’s so much different from what you did 5 or 10 years ago? Maybe I’m a weirdo, but I use literally 80% of the very same apps. Do these need AI on my phone? Not really.

    • mctoasterson@reddthat.com · 1 year ago

      The truth is there is little in the way of use cases that directly benefit the user.

      Look at the MKBHD review of the new iPhone: he sums it up pretty accurately when talking about all the super fancy backend bullshit going into the photo software just to produce “slightly better pictures”, and he wasn’t even 100% sure about that part.

      Apple’s Neural Engine delivered marginal value to the phone’s actual user, and meanwhile the company harnessed that power mostly to do client-side scanning. They claimed to have ceased that effort, but once again, it’s black-box proprietary software, so it isn’t transparent to the user.

      My contention is that at this point the big tech companies are developing features to benefit their business model, not to deliver features to users. The marketing and surveillance state grows, because that is the real business these companies are in. Most of the AI gains we hear about benefit them directly, but not us.

      • AggressivelyPassive@feddit.de · 1 year ago

        Actually, I think the real reasons are far less “evil” than you might think: it’s marketing. People fall for it.

        Phones don’t improve really, haven’t for quite some time, but you still have to sell new ones. So you add bullshit features or advertise pseudo-improvements. I mean, Apple is currently marketing, that their side button is now programmable! Wow!

    • Barry Zuckerkorn@beehaw.org · 1 year ago

      Most of the normal apps on the phone are using AI on the edges.

      Image processing has come a long way using algorithms trained through those AI techniques. It’s not just the postprocessing of pictures already taken, like unblurring faces, removing unwanted people from the background, choosing a better frame of a moving shot, or white balance/color profile correction and noise reduction, but also the initial capture of the image: setting the physical focus and exposure on recognized subjects, applying software-based image stabilization in long-exposure shots or in video, etc. Most of these functions are on-device AI using the AI-optimized hardware on the phones themselves.
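
      As an illustration of the capture side, here’s roughly what the subject-detection step can look like with Google’s ML Kit face detector. This is a minimal sketch, not any vendor’s actual camera pipeline; the function name and the focus handoff are my own for the example:

      ```kotlin
      import android.graphics.Bitmap
      import com.google.mlkit.vision.common.InputImage
      import com.google.mlkit.vision.face.FaceDetection
      import com.google.mlkit.vision.face.FaceDetectorOptions

      // Sketch: detect faces in a captured frame so the camera pipeline
      // can set focus/exposure on them. Detection runs entirely on-device.
      fun focusOnFaces(frame: Bitmap) {
          val options = FaceDetectorOptions.Builder()
              .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_FAST) // latency over accuracy
              .build()

          FaceDetection.getClient(options)
              .process(InputImage.fromBitmap(frame, 0)) // 0 = rotation in degrees
              .addOnSuccessListener { faces ->
                  // In a real app each box would feed the camera's metering API
                  // (e.g. a CameraX FocusMeteringAction); here we just print it.
                  faces.forEach { face -> println("subject at ${face.boundingBox}") }
              }
              .addOnFailureListener { e -> println("detection failed: $e") }
      }
      ```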

      On-device speech recognition, speech generation, image recognition, and music recognition have come a long way in the last 5 years, too. A lot of that came from training models on big, robust servers, but once trained, executing the model on-device only requires the AI/ML chip on the phone itself.
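
      That split is easy to see in code: training happens once on servers, then the phone just loads the frozen model and runs it on the accelerator. A rough sketch with TensorFlow Lite’s Kotlin API, where the NNAPI delegate routes the work to the phone’s dedicated AI hardware (the model file name and tensor shapes are placeholders, not a real model):

      ```kotlin
      import org.tensorflow.lite.Interpreter
      import org.tensorflow.lite.nnapi.NnApiDelegate
      import java.io.File

      fun recognizeKeyword(audio: FloatArray): FloatArray {
          // The NNAPI delegate hands the model to the phone's AI/ML
          // accelerator; without it, inference falls back to the CPU.
          val options = Interpreter.Options().addDelegate(NnApiDelegate())

          // "keyword_model.tflite" is a placeholder for a model that was
          // already trained server-side and shipped with the app.
          val interpreter = Interpreter(File("keyword_model.tflite"), options)

          val input = arrayOf(audio)            // e.g. one second of 16 kHz samples
          val output = arrayOf(FloatArray(12))  // per-keyword scores; size depends on the model
          interpreter.run(input, output)

          interpreter.close()
          return output[0]
      }
      ```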

      In other words, a lot of these apps were already doing these things before on-device AI chips started showing up around 2017. But the on-device chips have made all these things much, much better, especially in the last 5 years as almost all phones started coming with dedicated hardware for these tasks.