Would we know it if we saw it? Draw two eye spots on a wooden spoon and people will anthropomorphise it. I suspect we’ll have dozens of false starts and breathless announcements of AGI, but we may never get there.
More interestingly, would we want it if we got it? How long will its creators rally to its side if we throw yottabytes of data at our civilization-scale problems and the machine comes back with “build trains and eat the rich instead of cows”?
jmcs@discuss.tchncs.de 1 day ago
In the same way that if you start digging a hole in northwestern Spain you are heading towards New Zealand.
Free_Opinions@feddit.uk 1 day ago
The difference here is that you’re never going to reach New Zealand that way but incremental improvements in AI will eventually get you to AGI*
*
Unless intelligence is substrate dependent and cannot be replicated in silicon, or we destroy ourselves before we get there.
Thorry84@feddit.nl 1 day ago
It’s very easy with an incremental-improvement tactic to get stuck in a local maximum. You’ve then hit a dead end: every available option leads to a degradation and thus isn’t viable. It isn’t a sure thing that incremental improvements lead to the desired outcome.
Free_Opinions@feddit.uk 1 day ago
I simply cannot imagine a situation where we reach a local maximum and get stuck in it for the rest of human history. There’s always someone else trying a new approach. We will not stop trying to improve our technology. Even just simply knowing what doesn’t work is a step in the right direction.
jmcs@discuss.tchncs.de 1 day ago
AI in general yes. LLMs in particular, I very much doubt it.
fine_sandy_bottom@discuss.tchncs.de 1 day ago
That assumes that whatever we have now is a precursor to AGI. There’s no evidence of that.
Free_Opinions@feddit.uk 1 day ago
No, it doesn’t assume that at all. This statement would’ve been true even before electricity was invented and AI was just an idea.
MidWestKhagan@lemmygrad.ml 1 day ago
What do you mean there’s no evidence? This seems like a difference in personal definitions of what AGI is, where you can move the goalposts as much as you want: “it’s not really AGI until it can ___”, “ok, just because it can do that doesn’t mean it’s AGI, AGI needs to be able to do _____”.
SkunkWorkz@lemmy.world 1 day ago
Yeah not with LLMs though.
Free_Opinions@feddit.uk 1 day ago
You can’t know that.
underscore_@sopuli.xyz 1 day ago
It is a common misconception that incremental improvements must equate to eventually achieving the goal, but it is perfectly possible that progress could be asymptotic and we never reach AGI even with constant “advancements”.
Free_Opinions@feddit.uk 1 day ago
Incremental improvements by definition mean that you’re moving towards something. It might take a long time, but my comment made no claims about the timescale. There are only two plausible scenarios I can think of in which we don’t reach AGI, and they’re both mentioned in my comment.
MidWestKhagan@lemmygrad.ml 1 day ago
That doesn’t sound right at all; comparing AGI to digging a hole from Spain to New Zealand is hyperbolic. It sounds more like “electricity will never cover the whole world; maybe one day it’ll have an impact, but powering cars and homes? No way.” AGI and ASI are almost our only way to communism; with Deepseek and other open source models capitalists won’t be able to keep up, especially if AGI becomes available to your average person. In just a few years, hell, in one year alone, LLMs have made such substantial progress that we can only assume it will continue to grow. Acting as though AGI is like fusion generators is naive; unlike containing the sun, AGI is far more achievable. There’s no stopping it at this point. My professor told me that the university has stopped trying to catch AI use because it’s impossible to do so now, unless you’re a child who just copies everything and makes it obvious. It’s time to stop assuming AGI will never come, because it will, and it already is.