• ABetterTomorrow@lemm.ee · 9 hours ago

    Current big tech is going to keep pushing limits, have social media influencers and YouTubers do the marketing, and let consumers pick up the R&D bill. Emotionally I want to say stop innovating, but really: cut your speed by 75%. We are going to witness an era of optimization and efficiency. Most users just need a Pi 5 16GB, an Intel NUC, or a base-model MacBook Air. Those are easy 7-10 year computers. No need to rush out and get the latest and greatest. I’m talking about computing in general. Case in point, gaming: more people are waking up and realizing they don’t need every new GPU, studios are burnt out, IPs are dying because there’s no lingering core base to keep the franchise afloat, and consumers can’t keep opening their wallets. Hence studios like Square Enix starting to support all platforms instead of the late-stage-capitalism move of launching their own launcher with its own store. It’s over.

  • Ledericas@lemm.ee · 12 hours ago

    It’s because customers don’t want it or care about it; it’s only the corporations themselves that are obsessed with it.

  • Not_mikey@lemmy.dbzer0.com · 12 hours ago

    The actual survey result:

    Asked whether “scaling up” current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was “unlikely” or “very unlikely” to succeed.

    So they’re not saying the entire industry is a dead end, or even that the newest phase is. They’re just saying they don’t think this current technology will get us to AGI when scaled up. I think most people agree, including the investors pouring billions into this. They aren’t betting it will turn into AGI; they’re betting there’s some application for the current AI. Are some of those applications dead ends? Most definitely. Are some of them revolutionary? Maybe.

    This would be like asking a researcher in the 90s whether scaling up the bandwidth and computing power of the average internet user would give us a vastly connected media-sharing network; they’d probably say no. It took more than a decade of software, cultural, and societal development to discover the applications for the internet.

    • stormeuh@lemmy.world · 12 hours ago

      I agree that it’s editorialized compared to the very neutral way the survey puts it. That said, I think you also have to take into account how AI has been marketed by the industry.

      They have been claiming AGI is right around the corner pretty much since ChatGPT first came to market. It’s either implied (e.g. “you’ll be able to replace workers with this”) or the timeline is kept vague (e.g. OpenAI saying they believe their research will eventually lead to AGI).

      With that context I think it’s fair to editorialize this as a dead end, because even with billions of dollars being poured in, they won’t be able to deliver AGI on the timeline they are promising.

      • jj4211@lemmy.world · 7 hours ago

        Yeah, it does some tricks, some of them even useful, but the investment isn’t for the demonstrated capability or any realistic extrapolation of it; it’s for the sort of product OpenAI is promising, like the equivalent of a full-time research assistant for $20k a month. Which is way more expensive than an actual research assistant, but that’s not stopping them from making the pitch.

      • morrowind@lemmy.ml · 2 hours ago

        Part of it is that we keep realizing AGI is a lot broader and more complex than we thought.

    • cantstopthesignal@sh.itjust.works · 10 hours ago

      It’s becoming clear from the data that each further bit of error correction needs exponentially more data. I suspect that pretty soon we will realize that what’s been built is a glorified homework cheater and a better search engine.
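
      As a rough back-of-the-envelope illustration of that claim (toy numbers of my own, not from the survey or this thread): if you assume the power-law scaling reported in LLM scaling-law work, loss ≈ a·N^(−α), then every halving of the error multiplies the required training data by 2^(1/α), which compounds brutally fast for small α.

      ```python
      # Toy sketch only: assumes loss ~ A * N**(-ALPHA). ALPHA is an assumed value,
      # roughly the order of magnitude reported for data scaling, not a measurement.
      ALPHA = 0.095   # assumed scaling exponent
      A = 1.0         # arbitrary scale constant

      def data_needed(target_loss):
          # invert loss = A * N**(-ALPHA)  =>  N = (A / target_loss)**(1 / ALPHA)
          return (A / target_loss) ** (1 / ALPHA)

      loss = 0.5
      for _ in range(3):
          factor = data_needed(loss / 2) / data_needed(loss)
          print(f"halving loss {loss:.3f} -> {loss / 2:.3f} needs ~{factor:,.0f}x the data")
          loss /= 2
      ```

      Under that (assumed) curve, each further increment of reliability costs more than everything spent before it, which is the ceiling being pointed at here.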

      • Sturgist@lemmy.ca · 10 hours ago

        what’s been built is a glorified homework cheater and an ~~better~~ unreliable search engine.

    • Pennomi@lemmy.world · 7 hours ago

      Right, simply scaling won’t lead to AGI; there will need to be some algorithmic changes. But nobody in the world knows what those are yet. Is it a simple framework on top of LLMs, like the “atom of thought” paper? Or are transformers themselves a dead end? Or is multimodality the secret to AGI? I don’t think anyone really knows.

      • relic_@lemm.ee · 2 hours ago

        No, there are some ideas out there. Concepts like hierarchical reinforcement learning, with its creation of foundational policies, are more likely to lead to AGI. The problem is that, as it stands, it’s a really difficult technique to use, so it isn’t used often. And LLMs have sucked all the research dollars away from every other idea.
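
        For readers who haven’t met the term, here’s a minimal sketch of the “options”-style idea behind hierarchical RL, on a toy gridworld of my own invention (nothing here is from the comment): a high-level policy learns which sub-goal to commit to, while a hard-coded low-level controller handles the primitive steps.

        ```python
        # Toy options-style hierarchical RL sketch (assumed setup: hard-coded low level,
        # bandit-style update for the high level). Illustrative only, not a real HRL system.
        import random

        SIZE = 5                               # 5x5 gridworld, start (0,0), reward at (4,4)
        SUBGOALS = [(4, 0), (0, 4), (4, 4)]    # sub-goals the high-level policy can pick
        q_high = {g: 0.0 for g in SUBGOALS}    # learned value of committing to each sub-goal

        def run_option(pos, goal, max_steps=10):
            """Low-level 'option': greedily walk one axis at a time toward the sub-goal."""
            for _ in range(max_steps):
                if pos == goal:
                    break
                (x, y), (gx, gy) = pos, goal
                pos = (x + (gx > x) - (gx < x), y) if x != gx else (x, y + (gy > y) - (gy < y))
            return pos

        random.seed(0)
        for episode in range(200):
            pos = (0, 0)
            for _ in range(3):                 # a few high-level decisions per episode
                explore = random.random() < 0.1
                goal = random.choice(SUBGOALS) if explore else max(q_high, key=q_high.get)
                pos = run_option(pos, goal)
                reward = 1.0 if pos == (SIZE - 1, SIZE - 1) else -0.1
                q_high[goal] += 0.1 * (reward - q_high[goal])   # only the high level learns here
                if reward > 0:
                    break

        print({g: round(v, 2) for g, v in q_high.items()})      # (4, 4) should come out on top
        ```

        The “foundational policies” mentioned above would correspond to reusable low-level options like run_option, except learned rather than hard-coded; getting those to emerge reliably is exactly the hard part.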

    • 10001110101@lemm.ee · 7 hours ago

      I think most people agree, including the investors pouring billions into this.

      The same investors that poured (and are still pouring) billions into crypto, invested in sub-prime loans, and valued pets.com at $300M? I don’t see any way the companies will be able to recoup the cost of their investment in “AI” data centers (e.g. the $500B Stargate project or Microsoft’s $80B; probably upwards of a trillion dollars invested globally in these data centers).

    • Prehensile_cloaca @lemm.ee · 7 hours ago

      The bigger loss is the ENORMOUS amounts of energy required to train these models. Training an AI can use up more than half the entire output of the average nuclear plant.

      AI data centers also generate a ton of CO₂. For example, training an AI produces more CO₂ than a 55-year-old human has produced since birth.

      Complete waste.

  • Korhaka@sopuli.xyz · 12 hours ago

    There are some nice things I have done with AI tools, but I do have to wonder if the amount of money poured into it justifies the result.

    • tetris11@lemmy.ml · 9 hours ago

      I like my project manager: they find me work, ask how I’m doing, and talk straight.

      It’s when the CEO/CTO/CFO speaks that my eyes glaze over, my mouth sags, and I bounce my neck at prompted intervals as my brain retreats into itself, frantically tossing words and phrases into the meaning grinder and cranking the wheel, only for nothing to come out of it time and time again.

      • MonkderVierte@lemmy.ml · 8 hours ago

        Right, that sweet spot between too little stimulus (so your brain just wants to sleep or run away) and just enough stimulus that you can’t fully zone out (or sleep).

      • killeronthecorner@lemmy.world · 7 hours ago

        COs are corporate politicians, media-trained to say only things that are completely unrevealing and devoid of any substance.

        This is by design, so that sensitive information is centrally controlled, leaks are difficult, and sudden changes in direction cause as little whiplash to ICs as possible.

        I have the same reaction as you, but the system is working as intended. Better to just shut it out as you described and use the time to think about that issue you’re having on a personal project or what toy to buy for your cat’s birthday.

      • spooky2092@lemmy.blahaj.zone · 6 hours ago

        The number of times my CTO says we’re going to do THING, only to have to be told that this isn’t how things work…

  • pixxelkick@lemmy.world · 15 hours ago

    Meanwhile a huge chunk of the software industry is now heavily using this “dead end” technology 👀

    I work in a pretty massive tech company (think the type that frequently acquires smaller companies and absorbs them).

    Everyone I know here is using it. A lot.

    However, my company also has tonnes of dedicated sessions and paid time to instruct its employees on how to use it well, how to get good value out of it, and what pitfalls it can have.

    So yeah turns out if you teach your employees how to use a tool, they start using it.

    I’d say LLMs have made me about 3x as efficient at my job.

    • andallthat@lemmy.world · 14 hours ago

      It’s not that LLMs aren’t useful as they are. The problem is that they won’t stay as they are today, because they are too expensive. There are two ways for this to go (or eventually a combination of both):

      • Investors believe LLMs are going to get better, so they keep pouring money into “AI” companies, allowing them to operate at a loss for longer. That’s tied to the promise of an actual “intelligence” emerging out of a statistical model.

      • Investments stop pouring in, the bubble bursts, and companies need to make money out of LLMs in their current state. To do that, they need to massively cut costs and monetize. I believe that’s called enshittification.

      • pixxelkick@lemmy.world · 14 hours ago

        You skipped possibility 3, which is actively happening:

        Advancements in tech enable us to produce results at a much, much lower cost.

        Which is happening with diffusion-style LLMs that simultaneously cost less to train and less to run, while also producing faster and better-quality outputs.
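
        To make “faster” concrete, here’s a toy of the decoding-loop shape behind masked/diffusion-style text generation (my own illustration; dummy_model is a stand-in for a trained denoiser, so this shows the parallel-unmasking control flow, not real output quality):

        ```python
        # Toy only: real diffusion-style decoders predict tokens plus confidences from a
        # trained model; the point here is committing many positions per pass instead of
        # generating one token at a time.
        import random

        random.seed(1)
        target = "diffusion style decoders fill many positions per step".split()
        seq = ["[MASK]"] * len(target)

        def dummy_model(position):
            return target[position], random.random()   # (proposed token, fake confidence)

        passes = 0
        while "[MASK]" in seq:
            passes += 1
            masked = [i for i, tok in enumerate(seq) if tok == "[MASK]"]
            proposals = {i: dummy_model(i) for i in masked}
            # commit the most confident half of the remaining positions in parallel
            keep = sorted(masked, key=lambda i: -proposals[i][1])[: max(1, len(masked) // 2)]
            for i in keep:
                seq[i] = proposals[i][0]

        print(passes, "passes vs", len(target), "autoregressive steps:", " ".join(seq))
        ```

        The training-cost and quality claims are the commenter’s; the sketch only shows why the step count drops.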

        That’s a big part people forget about AI: it’s a feedback loop of improvement as soon as you can start using AI to develop AI

        And we are past that mark now, most developers have easy access to AI as a tool to improve their performance, and AI is made by… software developers

        So you get this loop where as we make better and better AIs, we get better and better at making AIs with the AIs…

        It’s incredibly likely the new diffusion AI systems were built with AI assisting in the process, enabling them to make a whole new tech innovation much faster and easier.

        We are now in the uptick of the singularity, and have been for about a year now.

        Same goes for hardware: it’s very likely that Nvidia now has AI incorporated into its production process, using it for micro-optimizations in its architectures and designs.

        And then those same optimized gpus turn around and get used to train and run even better AIs…

        In 5-10 years we will look back on 2024 as the start of a very wild ride.

        Remember we are just now in the “computers that take up entire warehouses” step of the tech.

        Remember that in the 80s, a “computer” cost a fortune, took tonnes of resources, multiple people to run it, took up an entire room, was slow as hell, and could only do basic stuff.

        But now, 40 years later, they fit in our pockets and are (no hyperbole) billions of times faster.

        I think by 2035 we will look at AI as something mass-produced for consumers to just put in their homes: you’ll go to Best Buy and compare different AI boxes to pick which one you’re gonna get for your home.

        We are still at the stage of people in the 80s looking at computers and pondering “why would someone even need to use this, why would someone put one in their house, let alone their pocket”

        • andallthat@lemmy.world · 10 hours ago

          I want to believe that commoditization of AI will happen as you describe, with AI made by devs for devs. So far what I see is “developer productivity is now up and 1 dev can do the work of 3? Good, fire 2 devs out of 3. Or you know what? Make it 5 out of 6, because the remaining ones should get used to working 60 hours/week.”

          All that increased dev capacity needs to translate into new useful products. Right now the “new useful product” that all energies are poured into is… AI itself. Or even worse, shoehorning “AI-powered” features into every existing product, whether it makes sense or not (welcome, AI features in MS Notepad!). Once this masturbatory stage is over and the dust settles, I’m pretty confident that something new and useful will remain, but for now the level of hype is tremendous!

          • pixxelkick@lemmy.world · 5 hours ago

            Good, fire 2 devs out of 3.

            Companies that do this will fail.

            Successful companies respond to this by hiring more developers.

            Consider the taxi cab driver:

            With the invention of the automobile, cab drivers could do their job way faster and way cheaper.

            Did companies fire drivers in response? God no. They hired more

            Why?

            Because rides became more affordable, less wealthy clients could now afford their services, which meant demand went way, way up.

            If you can do your work for half the cost, demand usually goes up by way more than 2x, because as you go down the wealth levels of your target demographic, your pool of clients grows exponentially (a toy version of this is sketched at the end of this comment).

            If I go from “it costs me 100k to make you a website” to “it costs me 50k to make you a website” my pool of possible clients more than doubles

            Which means… you need to hire more devs asap to start matching this newfound level of demand

            If you fire devs when your demand is about to skyrocket, you fucked up bad lol
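
            A toy numerical version of that pricing argument (all numbers assumed for illustration, none of them from the comment): draw client budgets from a log-normal distribution, a common rough model for how budgets are spread, and count who can afford each price point.

            ```python
            # Toy with assumed numbers: log-normal budgets, median around $60k. Whether
            # halving the price "more than doubles" the pool depends entirely on this
            # assumed distribution; it is an illustration, not a general law.
            import random

            random.seed(0)
            budgets = [random.lognormvariate(11, 0.7) for _ in range(100_000)]

            def clients_at(price):
                return sum(b >= price for b in budgets)

            at_100k, at_50k = clients_at(100_000), clients_at(50_000)
            print(f"can pay $100k: {at_100k}")
            print(f"can pay  $50k: {at_50k}  ({at_50k / at_100k:.1f}x the pool)")
            ```

            Under those assumptions the pool grows roughly 2.6x when the price halves, which is the shape of the demand effect described above.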

      • pixxelkick@lemmy.world · 5 hours ago

        For sure, much like how a cab driver has to know how to drive a cab.

        AI is absolutely a “garbage in, garbage out” tool. Just having it doesn’t automatically make you good at your job.

        The difference between someone who can wield it well and someone who has no idea what they’re doing is palpable.

    • Snot Flickerman@lemmy.blahaj.zone · 15 hours ago

      Your labor before they had LLMs helped pay for the LLMs. If you’re 3x more efficient and not also getting 3x more time off for the labor you previously put in so your bosses could afford the LLMs, you got ripped off, my dude.

      If you’re working the same amount and not getting more time to cool your heels, maybe, just maybe, your own labor was exploited and used against you. Hyping how much harder you can work just makes you sound like a bitch.

      Real “tread on me harder, daddy!” vibes all throughout this thread. Meanwhile your CEO is buying another yacht.

      • LuigiDidNothingWrong87@lemmy.world · 15 hours ago

        This is how all tech innovation has gone. If you don’t let the bosses exploit your labour someone else will.

        If tech had unions this wouldn’t happen as much, but that’s why they don’t really exist.

      • pixxelkick@lemmy.world · 15 hours ago

        I am indeed getting more time off for PD

        We delivered a project 2 weeks ahead of schedule, so we were given raises, I got a promotion, and as a reward we got 2 weeks to just do some chill PD at our own discretion. All paid, on the clock.

        Some companies are indeed pretty cool about it.

        I was asked to give some demos and do some chats with folks to spread info on how we had such success, and they were pretty fond of my methodology.

        At its core delivering faster does translate to getting bigger bonuses and kickbacks at my company, so yeah there’s actual financial incentive for me to perform way better.

        You’re also ignoring the stress thing. If I can work 3x better, I can also just deliver in almost the same time but spend all that freed-up time focusing on quality: polishing the product, documentation, double-checking my work, testing, etc.

        Instead of scraping past the deadline by the skin of our teeth, we hit the deadline with a week or 2 to spare and spent a buncha extra time going over everything with a fine tooth comb twice to make sure we didn’t miss anything.

        And instead of mad-rushing for 8 hours straight, it’s just generally more casual. I can take it slower and do the same work, just in a less stressed-out way. So I’m literally just physically working less hard, I feel happier, my mood is way better overall, and I have way more energy.

        • Rimu@piefed.social · 15 hours ago

          That’s very cool.

          It’ll be interesting to see how it goes in a year’s time, maybe they’ll have raised their expectations and tightened the deadlines by then.

          • pixxelkick@lemmy.world · 15 hours ago

            The thing is, the tech keeps advancing too, so even if they tighten up deadlines, by the time they do, our productivity has taken another gearshift up and we’re still some degree ahead.

            This isn’t new; in software we have always been getting new tools to do our jobs better and faster, or to produce fancier results in the same time.

            This is just another tool in the toolbelt.

        • Lemminary@lemmy.world · 15 hours ago

          That sounds so cool! I’m glad you’re getting the benefits.

          I’m only wary that the cash-making machine will start tightening the ropes on the free time and the deadlines.

  • TommySoda@lemmy.world · 16 hours ago

    Technology in most cases progresses on a logarithmic curve when innovation isn’t prioritized. We’ve basically reached the plateau of what LLMs can currently do without a breakthrough. They could absorb all the information on the internet and still not come close to what they’re claimed to be. These days we’re in the “bells and whistles” phase, where they add unnecessary bullshit to make it seem new, like adding 5 cameras to a phone or touchscreens to cars. Things that make something seem fancy by slapping on buzzwords and features nobody needs, without actually changing anything except bumping up the price.

    • Balder@lemmy.world · 7 hours ago

      I remember listening to a podcast that’s about explaining stuff according to what we know today (scientifically). The guy explaining is just so knowledgeable about this stuff; he does his research and talks to experts when the subject involves something he isn’t an expert in himself.

      There was this episode where he got into the topic of how technology only evolves with science (because you need to understand the stuff you’re doing, and you need a theory of how it works before you can make new assumptions and test them). He gave the example of the Apple Vision Pro: despite the machine being new (the hardware capabilities, at least), the eye-tracking algorithm it uses was developed decades ago and was already well understood and proven correct in other applications.

      So his point in the episode is that real innovation just can’t be rushed by throwing money or more people at a problem, because real innovation takes real scientists having novel insights and running experiments to expand the knowledge we have. Sometimes those insights are completely random, often you need a whole career in the field, and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).

      Even the current wave of LLMs is simply a product of Google’s paper showing that language models could be parallelized, which led to the creation of ever larger language models. That was Google doing science. But you can’t control when some new breakthrough will be discovered, and LLMs are subject to this constraint.

      In fact, the only practice we know that actually accelerates science is the collaboration of scientists around the world, the publishing of reproducible papers so that others can expand upon and have insights you didn’t even think about, and so on.

      • morrowind@lemmy.ml · 2 hours ago

        There have been several smaller breakthroughs since then that arguably would not have happened without so many scientists suddenly turning their attention to the field.

  • NoiseColor @lemmy.world · 16 hours ago

    Worst-case scenario, I don’t think money spent on supercomputers is the worst way to spend money. That in itself has pushed chip design and development forward. Not to mention AI is already invaluable in a lot of scientific research. Invaluable!

  • brucethemoose@lemmy.world · 7 hours ago

    It’s ironic how conservative the spending actually is.

    Awesome ML papers and ideas come out every week. Low power training/inference optimizations, fundamental changes in the math like bitnet, new attention mechanisms, cool tools to make models more controllable and steerable and grounded. This is all getting funded, right?

    No.

    Universities and the like are seeding and putting out all this research, but the big model trainers holding the purse strings and GPU clusters are not using it. They just keep releasing very similar, mostly bog-standard transformer models over and over again, bar a tiny expense for a little experiment here and there. In other words, it’s full corporate: tiny, guaranteed incremental improvements without changing much, and no sharing with each other. It’s hilariously inefficient. And it relies on lies and jawboning from people like Sam Altman.

    Deepseek is what happens when a company is smart but resource-constrained: an order of magnitude more efficient, and even their architecture was very conservative.