Rossphorus@lemmy.world ⁨4⁩ ⁨days⁩ ago

Honestly? If AI systems stopped improving forever? That’s probably the best-case scenario. LLMs are already superhuman on a knowledge level, roughly human-level in terms of speed (tokens per second, etc.), but subhuman in many other areas. This makes them useful for some tasks, but not so useful that they could pose any sort of existential threat to humanity (in either an economic sense or a misalignment sense). If LLMs stagnate here, then we have at least one tool in our AI toolbox that we’re pretty sure isn’t conscious/sentient/etc., which is useful since that makes them predictable on some level. Humans can deal with that.

Unfortunately, I see no reason why AI systems in general wouldn’t continue to improve. Even if LLMs do stagnate, they’re only one tiny branch of a much larger tree, and we already have at least one example of a system that is conscious and sentient - a human. This means that even if the human brain were somehow the only architecture capable of sentience (incredibly unlikely), we could always simulate/emulate a human brain to get human-level AGI. Simulate/emulate it faster? Superhuman AGI.
