It’s very easy with an incremental improvement tactic to get stuck in a local maximum. You’ve then hit a dead end: every available option leads to a degradation and thus isn’t viable. It isn’t a sure thing that incremental improvements will lead to the desired outcome.
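A minimal sketch of the idea in Python (the objective function, step size, and starting points are arbitrary choices for illustration): a greedy hill climber only ever takes steps that improve things, so it halts at whichever peak happens to be nearest, which need not be the highest one.

```python
import math

def objective(x: float) -> float:
    # A bumpy curve: the global maximum sits near x ≈ 0.3,
    # with smaller local peaks further out.
    return math.sin(5 * x) - 0.5 * x * x

def hill_climb(x: float, step: float = 0.01, iters: int = 10_000) -> float:
    for _ in range(iters):
        left, right = objective(x - step), objective(x + step)
        if max(left, right) <= objective(x):
            break  # every available move is a degradation: stuck
        x = x - step if left > right else x + step
    return x

print(hill_climb(2.0))  # stops at a local peak near x ≈ 1.5
print(hill_climb(0.0))  # finds the global peak near x ≈ 0.3
```

From one starting point the climber reaches the true maximum; from another it gets permanently stuck on a lesser peak, even though it never stops trying to improve.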
Free_Opinions@feddit.uk 1 day ago
I simply cannot imagine a situation where we reach a local maximum and get stuck in it for the rest of human history. There’s always someone else trying a new approach. We will not stop trying to improve our technology. Even just knowing what doesn’t work is a step in the right direction.
davidgro@lemmy.world 1 day ago
I can imagine it really easily for the foreseeable future: all that would need to happen is for the big corporations and well-funded researchers to stick to optimizing LLMs.
Yeah, that’s not the rest of human history (unless there isn’t much of it left), but it’s enough to make concerns about AGI someone else’s problem.
Free_Opinions@feddit.uk 1 day ago
Like I said, I’ve made no claims about the timeline. All I’ve said is that incremental improvements will get us there eventually.
davidgro@lemmy.world 1 day ago
In this scenario, reaching the goal would require an entirely different base technology; incremental improvements to what we have now never lead to AGI.
Kinda like how incremental improvements to cars or even trains won’t eventually get us to Mars.
jrs100000@lemmy.world 1 day ago
Just like incremental improvements to the bicycle will eventually allow for hypersonic pedaling.
chonglibloodsport@lemmy.world 1 day ago
By saying this, aren’t you assuming that human civilization will last long enough to get there?
Look at the timeline of other species on this planet. Vast numbers of them are long extinct. They never evolved intelligence to our level. Only we did. Yet we know our intelligence is quite limited.
What took biology billions of years, we’re attempting to do in a few generations (the AI project began in the 1950s). Meanwhile, our consumption of non-renewable energy resources has hit exponential takeoff. Our political systems are straining and stretching to the breaking point.
And of course, progress toward AI has not been steady. There was an initial burst of success in the ’50s, followed by a long AI winter when researchers got stuck in a local maximum. It’s not at all clear to me that we haven’t entered a new local maximum with LLMs.
Do we even have a few more generations left to work on this?
Free_Opinions@feddit.uk 1 day ago
I’m talking about AI development broadly, not just LLMs.
I also listed human extinction as one of the two possible scenarios in which we never reach AGI, the other being that there’s something unique about biological brains that cannot be replicated artificially.
chonglibloodsport@lemmy.world 1 day ago
We could witness a collapse of our high-tech civilization that effectively ends AI research without necessarily leading to extinction. Think of a Mad Max-style post-apocalyptic future supercharged by global warming: people still survive, but the population has crashed and there’s a lot of fighting for survival and scavenging among the ruins of civilization.
There’s gotta be countless other variations on this theme. Global dystopian techno-feudalism perhaps?