A probabilistic “word calculator” is not an intelligent, conscious agent? Oh noes! 🙄😅
Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.
Submitted 1 day ago by cm0002@infosec.pub to technology@lemmy.zip
Comments
ArgumentativeMonotheist@lemmy.world 1 day ago
ByteJunk@lemmy.world 1 day ago
I’ll bite.
How would you distinguish a sufficiently advanced word calculator from an actual intelligent, conscious agent?
lvxferre@mander.xyz 14 hours ago
How would you distinguish a sufficiently advanced word calculator from an actual intelligent, conscious agent?
The same way you distinguish a horse with a plastic horn from a real unicorn: you won’t see a real unicorn.
In other words, your question disregards what the text says: that you won’t get anything remotely similar to an actual intelligent agent through those large token models. You need a different approach, one acknowledging that linguistic competence is not the same as reasoning.
Nota bene: this does not mean “AGI is impossible”. That is not what I’m saying. I’m saying “LLMs are a dead end for AGI”.
ArgumentativeMonotheist@lemmy.world 1 day ago
If I can’t meet it and could only interact with it through a device, then I could be fooled, of course.
lvxferre@mander.xyz 1 day ago
Linguists have been saying this over and over, but almost everybody ignored it.
idiomaddict@lemmy.world 1 day ago
Linguists were divided until recently, to be fair.
lvxferre@mander.xyz 1 day ago
The main division was about why language appeared; to structure thought, communication, or both. But I genuinely don’t think anyone serious would claim reasoning appeared because of language. …or that if you feed enough tokens to a neural network it’ll become smart.
Formfiller@lemmy.world 14 hours ago
AI tech bros are like we’re going to build a technology without any ethics or regulations to eliminate all of your jobs and to make you obsolete while we strip all of your social safety nets so you and everyone you love will either die or go to prison for being homeless and work as a slave for us in that prison and we’re going to use your money for our evil plan and we’re all like maybe we can just vote blue no matter who even though they pay them to just pretend to try to do something about this but they ultimately do nothing because a few magically turn fascist when it matters so basically we’re just going to do nothing even though we know we’re being marched to our death by psychopaths who are very vocal about their intentions….am I following the story? Am I wrong?
krooklochurm@lemmy.ca 10 hours ago
You’re forgetting that, like all of America’s problems, the solution has been found elsewhere and you’re just not going to do it because reasons, I guess? Idk. I’m not American. Thank god.
Formfiller@lemmy.world 5 hours ago
Yeah watching your loved ones die without being able to access healthcare has been soul crushing torture for me.
ExtremeDullard@piefed.social 1 day ago
Well duh… Most politicians can talk.
Labna@lemmy.world 1 day ago
ByteJunk@lemmy.world 1 day ago
Let me grab all your downvotes by making counterpoints to this article.
I’m not saying that it’s not right to bash the fake hype that the likes of altman and alienberg are making with their outlandish claims that AGI is around the corner and that LLMs are its precursor. I think that’s 100% spot on.
But the news article is trying to offer an opinion as if it’s a scientific truth, and this is not acceptable either.
The basis for the article is the supposed “cutting-edge research” that shows language is not the same as intelligence. The problem is that they’re referring to a publication from last year that is basically an op-ed, where the authors go over existing literature and theories to cement their view that language is a communication tool and not the foundation of thought.
The original authors do acknowledge that the growth in human intelligence is tightly related to language, yet assert that language is overall a manifestation of intelligence and not a prerequisite.
The nature of human intelligence is a much debated topic, and this doesn’t particularly add to the existing theories.
Even if we accept the authors’ views, one might then wonder whether LLMs are the path to AGI at all. Obviously many leading researchers in AI think the same way - notably, Prof. LeCun is leaving Meta over this.
But the problem is that the Verge article then goes on to conclude the following:
an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.
This conclusion is a non sequitur. It generalizes a specific point about whether LLMs can evolve into true AGI into an “AI dumb” catchall that ignores even the most basic evidence they themselves give - like being able to “solve” Go, or play chess in a way that no human can even comprehend - and, to top it off, concludes that it “will never be able to” in the future.
Looking back at the last 2 years, I don’t think anyone can predict what AI research breakthroughs might happen in the next 2, let alone “forever”.
werebearstare@lemmings.world 1 day ago
This is not really cutting-edge research. These limitations were described philosophically for millennia, and then again mathematically through the various AI summers and winters since 1943.
tomiant@piefed.social 1 day ago
CUTTING EDGE RESEARCH SHOWS something everybody already knew and had been saying for years.
MrMcGasion@lemmy.world 1 day ago
Something I learned in film school 15 years ago was that communication happens when a message is perceived. Whether the message was intended or not is irrelevant. And yet here we are, “communicating” with a slightly advanced autocomplete algorithm and calling it intelligent.
pticrix@lemmy.ca 1 day ago
I keep saying that those llm peddlers are selling us a brain, when at most they only deliver Wernicke’s + Broca’s area of a brain.
Sure, they are necessary for a human-like brain, but it’s only 10% of the job done, my guys.
krooklochurm@lemmy.ca 10 hours ago
LLMs are actually very, very useful for certain things.
The problem isn’t that they lack utility. It’s that they’re constantly being shoehorned into areas where they aren’t useful.
They’re great at surfacing new knowledge about things you don’t have a complete picture of. You can’t take that knowledge at face value, but as a framework you can validate against external sources, it can be a massive timesaver.
They’re good at summarizing text. They’re good at finding solutions to very narrow and specific coding challenges.
They’re not useful at providing support. They are not useful at detailing specific, technical issues. They are not good friends.
lvxferre@mander.xyz 14 hours ago
when at most they only deliver Wernicke’s + Broca’s area of a brain.
Not even. LLMs don’t really understand what you say, and their output is often nonsensical babble.
pticrix@lemmy.ca 11 hours ago
You’re right. More like discussing with an Alzheimer’s-addled brain being coerced into a particular set of vocabulary.
SoupBrick@pawb.social 1 day ago
Monied interests beat science every day.
chicken@lemmy.dbzer0.com 1 day ago
LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning …
Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.
But take away language from a large language model, and you are left with literally nothing at all.
The author seems to be assuming that an LLM is the equivalent of the language-processing parts of the brain (which, according to the cited research, supposedly focus on language specifically while other parts of the brain do the reasoning), but that isn’t really how it works. LLMs have to internally model more than just the structure of language, because text contains information that isn’t just about the structure of language. The existence of multimodal models makes this kind of obvious: they train on more input types than just text, so whatever they’re doing internally is clearly more abstract than language alone.
Not to say the research on the human brain they’re talking about is wrong; it’s just that the way they’re trying to tie it into AI doesn’t make any sense.
kromem@lemmy.world 11 hours ago
Took a lot of scrolling to find an intelligent comment on the article about how outputting words isn’t necessarily intelligence.
Appreciate you doing the good work I’m too exhausted with Lemmy to do.
(And for those that want more research in line with what the user above is talking about, I strongly encourage checking out the Othello-GPT line of research and replication, starting with this write-up from the original study authors here.)
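(For the curious: the core of that line of work is a linear probe on frozen activations. Here’s a minimal sketch of just the probing step, in Python, with synthetic placeholder data standing in for real transformer activations - not the actual Othello-GPT code. The idea: if a deliberately simple linear classifier can read the board state out of a model’s hidden vectors, the model learned more than surface token statistics.)

```python
# Minimal sketch of the linear-probe methodology used in the Othello-GPT
# studies. All data here is a synthetic placeholder; in the real work the
# activations come from a transformer trained only on move sequences.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder: activations[i] stands in for the hidden vector the model
# produces after reading move sequence i; board_state[i] is the true
# contents of one board square (0 = empty, 1 = mine, 2 = theirs).
n_samples, d_model = 5000, 512
activations = rng.normal(size=(n_samples, d_model))
board_state = rng.integers(0, 3, size=n_samples)

# Make the toy data probe-able: leak the label into a few dimensions,
# standing in for a model that actually tracks the board internally.
activations[:, :3] += 2.0 * np.eye(3)[board_state]

X_train, X_test, y_train, y_test = train_test_split(
    activations, board_state, test_size=0.2, random_state=0
)

# The probe is kept linear on purpose: high accuracy then reflects
# structure already present in the activations, not probe capacity.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.3f}")
# Chance is ~0.33 for 3 classes; accuracy well above that suggests the
# activations linearly encode the board square.
```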
favoredponcho@lemmy.zip 1 day ago
CEOs are just hyping bullshit
Septimaeus@infosec.pub 22 hours ago
Because what we call intelligence (the human kind) usually is just an emergent property of the wielding of various combinations of first- or second-hand experience by “consciousness” which itself is…
What we like to call the tip of a huge fucking iceberg of constant lifelong internal dialogues, overlapping and integrating experiences all the way back to engrams/assemblies/memories so deep we can’t even summon them any longer but are still measurable, still there.
Humans continuously, reflexively, recursively tell and re-tell our own stories to ourselves all day, and even at night, just to make sense of the connections we made today, how to use them tomorrow, to know how they relate to connections we made a lifetime ago, and how it fits in the larger story of us. That “context integration window” absolutely DWARFS even the deepest language model, even though our own organic “neural net” is low-power, lacks back-propagation, etc etc, and it is all done using language.
So yes, language is not the same as intelligence (though at some point some would ask “who can tell the difference?”). HOWEVER… The semantic taxonomies, symbolic cognition, and various other mental tools that are enabled by language are absolutely, verifiably required for this massive context integration to take place.
msokiovt@lemmy.today 1 day ago
Somebody tell these absolute idiots that AI is NOT AN F’IN BUBBLE!
Here’s proof the USD and government bonds are the bubble, from Mark Moss: inv.nadeko.net/watch?v=xGoPdHH9PlE
lvxferre@mander.xyz 1 day ago
Whataboutism + false dichotomy.
sidebro@lemmy.zip 1 day ago
A wise man once said “The ability to speak does not make you intelligent.”
salacious_coaster@infosec.pub 1 day ago
And that man was Ra’s Al Ghul
sidebro@lemmy.zip 1 day ago
I mean, yes but no
djsaskdja@reddthat.com 1 day ago
How rude