
36 likes

Submitted 3 days ago by misk@piefed.social to technology@lemmy.zip

https://www.newyorker.com/culture/open-questions/what-if-ai-doesnt-get-much-better-than-this

What If A.I. Doesn’t Get Much Better Than This?

Archive: https://archive.ph/2025.08.13-105813/https://www.newyorker.com/culture/open-questions/what-if-ai-doesnt-get-much-better-than-this


Comments

  • spankmonkey@lemmy.world 3 days ago

    By design, LLMs can get faster, but they cannot get more accurate without a massive, intentional effort to verify their datasets, which isn’t feasible because verification runs up against everything that isn’t fact-based, and LLMs don’t understand context well enough to tell the difference. Basically, the training approach means they get filled with whatever the builders can get their hands on, and then they fall back to web searches, which return all kinds of unreliable stuff because LLMs have no way of verifying reliability.

    Even if they were perfect, they would not be able to keep up with the flood of new information that comes out every minute when used as general-purpose answer-anything tools.

    What AI actually excels at is pattern matching in controlled settings.
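
    A minimal sketch of what that kind of controlled-setting pattern matching looks like in practice, assuming scikit-learn and its bundled digits dataset (illustrative only, not a claim about any particular product):

```python
# Narrow, well-scoped pattern matching: fixed domain, labeled data,
# a simple answer -- the setting where current ML is genuinely reliable.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = LogisticRegression(max_iter=2000)  # a deliberately simple model
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2%}")  # typically ~96%
```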

    • slate@sh.itjust.works 3 days ago

      And now lots of web searches return AI SEO slop chock-full of incorrect information, which then fuels subsequent training sets and LLM web searches, creating a self-reinforcing feedback loop that could destroy the internet.

      • spankmonkey@lemmy.world 3 days ago

        The AI SEO slop is already destroying the internet, although that feedback loop is certainly accelerating the process.

      • Ramblingman@lemmy.world 2 days ago

        Apparently GPT-5 is much worse, or so the subreddit dedicated to it says. I wonder if that loop has already started.

  • Perspectivist@feddit.uk 3 days ago

    I think the title should be “What if LLMs don’t get much better than this?” because that’s effectively what the article is talking about. I see no reason to expect that our AI systems won’t keep improving even if LLMs don’t.

    • mindbleach@sh.itjust.works 3 days ago

      Neural networks becoming practical is world-changing. This lets us do crazy shit we have no idea how to program sensibly. Dead reckoning with an accelerometer could be accurate to the inch. Chroma keying should rival professional rotoscoping. Any question with a bunch of data and a simple answer can be trained at some expense and then run on an absolute potato.
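
      That train-once, run-cheap split, as a minimal PyTorch sketch; the tiny network and the stand-in data here are invented for illustration:

```python
# Hypothetical "bunch of data -> simple answer" task. Training is the
# expensive part and happens once; the trained net is a few small
# matrix multiplies afterwards.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Stand-in data: pretend these are sensor readings paired with a
# ground-truth scalar answer.
x = torch.randn(1024, 3)
y = x.sum(dim=1, keepdim=True)

for _ in range(200):  # the "at some expense" part, done once
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()

with torch.no_grad():  # the "absolute potato" part: cheap inference
    print(net(torch.randn(1, 3)))
```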

      So it’s downright bizarre that every single company is fixated on guessing the next word with transformers. Alternatives like text diffusion and Mamba pop up and then disappear, without so much as a “so that didn’t work” blog post.

    • kkj@lemmy.dbzer0.com 2 days ago

      Yeah, I think LLMs are close to their peak. Any new revolutionary developments in LLMs will probably be in efficiency rather than capability. Something that can actually think in a real sense will probably happen eventually, though, and unless it’s even more absurdly resource-intensive it’ll probably replace LLMs in everything but autocomplete (since they’re legitimately good at that).

    • theneverfox@pawb.social 3 days ago

      I think that’s true, but also missing the point… We’ve hit the peak of AI until the next transformative breakthrough.

      They’re still fucking magic. They’re really cool and useful when you use them correctly.

      But GPT-5 isn’t much better than 3.5. It’s a bit better: it requires less prompt engineering to get good results and gives more consistent results. But it’s still unreliable. And it weirdly likes to talk down to me now, as if I don’t know more than it does. I’m still the expert here; it’s a light-speed intern, it doesn’t know what’s going on.

  • givesomefucks@lemmy.world 3 days ago

    They’re operating under the long-outdated assumption that all you need to simulate a brain is to match the number of neurons…
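
    For scale, the rough arithmetic behind that neuron-count framing; the biological numbers are order-of-magnitude estimates, not from the comment:

```python
# Back-of-envelope comparison; all figures are coarse estimates.
neurons = 86e9        # ~86 billion neurons in a human brain
synapses = 1e14       # ~100 trillion synapses, arguably the closer
                      # analogue to model parameters than neuron count
gpt3_params = 175e9   # GPT-3 parameter count, for scale

print(f"params vs neurons:  {gpt3_params / neurons:.1f}x")   # ~2x
print(f"params vs synapses: {gpt3_params / synapses:.1%}")   # ~0.2%
```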

    That’s not how any of this works, but they’ve been saying “we’ll be there soon” for so long that, now that we’re almost able to do it, they’re gonna lose their main excuse and main reason for fundraising.

    They’ll have to tell investors the timeline just changed from years to maybe decades, if we’re lucky.

    And it’s gonna divebomb our whole economy, because fucking every fund manager is dumping insane levels of money into it.

    • Perspectivist@feddit.uk 3 days ago

      Which AI company has taken this approach, exactly? Who’s this “they” you’re referring to?

  • Rossphorus@lemmy.world 3 days ago

    Honestly? If AI systems stopped improving forever? That’s probably the best-case scenario. LLMs are already superhuman on a knowledge level and human-level in terms of speed (tokens per second, etc.), but subhuman in many other areas. This makes them useful for some tasks, but not so useful that they could pose any sort of existential threat to humanity (either in an economic sense or in a misalignment sense). If LLMs stagnate here, then we have at least one tool in our AI toolbox that we’re pretty sure isn’t conscious/sentient/etc., which is useful since that makes them predictable on some level. Humans can deal with that.

    Unfortunately, I see no reason why AI systems in general wouldn’t continue to improve. Even if LLMs do stagnate, they’re only one tiny branch of a much larger tree, and we already have at least one example of a generally intelligent system that is conscious and sentient: a human. This means that even if the human brain were somehow the only architecture ever capable of sentience (incredibly unlikely), we could always simulate/emulate a human brain to get human-level AGI. Simulate/emulate it faster? Superhuman AGI.

    • MotoAsh@lemmy.world 3 days ago

      No… you’re anthropomorphising the technology to hell and back…

      “Knowledge” takes understanding, and no current generation of “AI” has basically any level of understanding. Being able to crap out eloquently stated BS is neither knowledge nor thought.

      Yet they’re still a MASSIVE economic threat, mostly because moronic investors and C-suites are also anthropomorphising them and buying into a sales pitch that’s straight-up lies at a fundamental level…

      • Rossphorus@lemmy.world 3 days ago

        I don’t want to get into an argument over semantics. Whatever your definition of “knowledge” is, LLMs can recall a greater number of factoids than any individual human. That’s all I meant. Are they perfect? No, I never said that. They’re still far beyond the average human, however; hence, superhuman.

        I said that LLMs are not an existential threat to humanity, even economically. I never said they wouldn’t threaten individual jobs or cause a bubble. Please don’t strawman me. You and I are looking at completely different levels of effects; I’m looking at the big picture: is humanity, or society as we know it, going to continue to exist in 100 years (in this hypothetical where AI and/or LLMs stagnated)? If yes, then LLMs are not an existential threat. That’s what an existential threat means, after all.

        Is AI causing an economic bubble? Sure, but like all bubbles it will burst when people realise these tools have limited use due to their drawbacks. The world will then return to some semblance of normalcy. That’s a non-existential threat.

        Now, if we’re talking about a world in which AI systems continue to evolve? All bets are off.

  • mindbleach@sh.itjust.works 3 days ago

    Some guy blogged that the smart ones move to advertising.

  • m532@lemmygrad.ml 3 days ago

    That’s what they always say, and then… new stuff comes out.

    Qwen-Image (a diffusion model with an LLM as its text encoder) came out 10 days ago, and it understands the prompt something like 20x better than, for example, SDXL Turbo.
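
    A hedged sketch of trying it, assuming Qwen-Image loads through diffusers’ generic DiffusionPipeline entry point; check the model card for the exact arguments:

```python
# Sketch only: loads the published Qwen/Qwen-Image weights via the
# generic diffusers entry point; argument names beyond the prompt
# may differ from the model card.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    prompt="a hand-painted shop sign reading 'open late', photorealistic",
    num_inference_steps=50,
).images[0]
image.save("sign.png")
```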
