Comment on Bcachefs creator claims his custom LLM is 'fully conscious'
echodot@feddit.uk 2 weeks ago
They’re not though; just because you’ve found a snippet that validates your incorrect understanding of AI doesn’t mean that it’s right. And since apparently the way you do research is to Google something and click on the first result, I’ll explain.
Large language models don’t use the kind of neural network that people are commonly familiar with. They use a lot of mathematics in high-dimensional spaces to generate their responses; they’re not achieving that by simulating neural pathways.
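To put some shape on that claim, here’s a minimal, hypothetical sketch (plain numpy, a toy vocabulary and invented numbers, nothing like a real model’s weights) of the kind of high-dimensional arithmetic used to score a next token:

```python
import numpy as np

# Toy vocabulary and hypothetical 4-dimensional embeddings.
# Real models use tens of thousands of tokens and thousands of dimensions.
vocab = ["cat", "sat", "mat", "ran"]
embeddings = np.array([
    [0.9, 0.1, 0.3, 0.0],   # cat
    [0.2, 0.8, 0.1, 0.4],   # sat
    [0.7, 0.2, 0.5, 0.1],   # mat
    [0.1, 0.6, 0.0, 0.9],   # ran
])

# A "context" vector summarising the prompt so far (made up here;
# a real model computes this with many layers of attention).
context = np.array([0.5, 0.5, 0.2, 0.3])

# Score every token by dot product in the high-dimensional space,
# then turn the scores into probabilities with softmax.
scores = embeddings @ context
probs = np.exp(scores) / np.exp(scores).sum()

for token, p in zip(vocab, probs):
    print(f"{token}: {p:.2f}")
```

It’s geometry and probability all the way down: no neuron is being simulated anywhere in that calculation.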
Neural networks simulate simple brains in order to produce output. It’s a much older branch of AI and is more like evolution simulation than artificial intelligence; there are plenty of videos on this on YouTube dating back well over a decade. What you are referring to in your original comment sounds very much like a neural network that has been trained on character recognition (again, loads of YouTube videos on the topic; see the sketch below). But there’s no understanding there, no comprehension, and no learning. It’s just a system evolving to identify patterns.
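For anyone who hasn’t watched those videos, a toy version of that kind of character-recognition network looks something like this: a single artificial “neuron” doing weighted pattern matching, with the 3x3 “characters”, seed, and training rule invented purely for illustration:

```python
import numpy as np

# Two 3x3 "characters" flattened to 9 pixels (1 = ink, 0 = blank).
X_shape = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1])  # an "X"
O_shape = np.array([1, 1, 1, 1, 0, 1, 1, 1, 1])  # an "O"

# A single artificial "neuron": weights plus a bias, nothing more.
rng = np.random.default_rng(0)
weights = rng.normal(size=9)
bias = 0.0

def predict(pixels):
    # Output > 0 means "X", otherwise "O": pure stimulus/response.
    return 1 if pixels @ weights + bias > 0 else 0

# "Training" just nudges the weights toward the right answer whenever
# the neuron is wrong: the evolutionary-style adjustment described above.
for _ in range(20):
    for pixels, label in [(X_shape, 1), (O_shape, 0)]:
        error = label - predict(pixels)
        weights += 0.1 * error * pixels
        bias += 0.1 * error

print(predict(X_shape))  # 1: classified as "X"
print(predict(O_shape))  # 0: classified as "O"
```

It classifies correctly after training, but nothing in there comprehends what an “X” is; the weights just drifted toward a pattern.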
But none of this is anything close to an early version of artificial general intelligence, because it’s all just responding to input. If you initialise a large language model and then just leave it, it’ll sit there and do nothing. A true artificial intelligence would have its own defined goals and take action to achieve them on its own, without any input from a human; it would also be capable of self-modification. LLMs and neural networks don’t do either of those things.
BCsven@lemmy.ca 2 weeks ago
Right, you missed the part about agency. I never said an LLM interaction model had agency; with agentic LLMs they do.
echodot@feddit.uk 2 weeks ago
“What would biological learning for an AI look like?” I don’t even know what that sentence means or what you’re trying to convey.
No they can’t. That’s the whole point: they can’t self-act, they have no free will, so they have no ability to take self-modification actions.
Yes, but so can a non-intelligent computer program. The ability to access the internet has nothing to do with intelligence. See humans.
I think this is where you’re getting confused. The “old research”, i.e. neural networks, didn’t hit a wall; it was just never particularly useful outside of very niche circumstances. It has been used extensively in OCR for decades, but it is no more intelligence than a plant turning towards the sun is intelligence. It’s just evolutionarily reinforced stimulus response. Large language models work on a completely different concept: you don’t get good results by feeding a neural network lots of input, because it just overwhelms it with signal and it can’t optimise towards anything. If you built a neural network with 100 trillion nodes you might actually get something useful, but it still wouldn’t be artificial intelligence, and no one’s doing that anyway because it’s prohibitively processor-intensive, and anyway LLMs exist.
It’s important to realise that words mean what they mean. Emergent behaviour just means that the behaviour is emergent; it doesn’t mean that the behaviour is intentional or directed. Large crowds have emergent behaviour, but that doesn’t mean there’s some hive mind controlling everyone.
BCsven@lemmy.ca 2 weeks ago
You missed what I meant, which is fine; English is 30% content and 70% disambiguation. I meant we are biological computers and these machines are non-biological ones, and to me that distinction doesn’t matter. If we get to a state where synapses can be replicated onto chips and experiences fed to them, then the “intelligence” is no different, and we delude ourselves if we think we are somehow a superior biological electrical brain.
I’m not trying to be condescending, so forgive me if it sounds like that, but you have to do some more reading here. Giving AI self-agency has been done, and those systems have the ability to self-act and adjust their learning (I’m not talking about ChatGPT’s locked model in a generate-responses mode, but systems built with the purpose of allowing them to backtrace, research, and self-adjust). There have been many papers and reports over the last three years of researchers setting this up, roughly along the lines sketched below.
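To be concrete about what I mean (a hypothetical skeleton, not any specific paper’s system, with call_llm and run_tool as stand-ins for whatever model API and tools are actually used): an agentic setup is essentially an LLM wrapped in a loop that acts, observes the result, and feeds it back in rather than waiting for a human prompt each turn:

```python
# Hypothetical skeleton of an agentic loop; call_llm() and run_tool()
# are placeholders for a real model API and a real tool runner.

def call_llm(prompt: str) -> str:
    """Placeholder: ask the model for its next action given the history."""
    return "DONE"  # stub so the sketch runs

def run_tool(action: str) -> str:
    """Placeholder: execute an action (web search, run code, etc.)."""
    return "no-op"

def run_agent(goal: str, max_steps: int = 10) -> None:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # The model proposes its next step from everything seen so far.
        action = call_llm("\n".join(history))
        if action == "DONE":
            break
        # Results, including failures, are fed back into the history:
        # that feedback is the backtrace-and-self-adjust part of the loop.
        result = run_tool(action)
        history.append(f"Action: {action}\nResult: {result}")

run_agent("research a topic and refine the answer")
```

Whether you count that loop as genuine agency is exactly the argument we’re having, but it does act and adjust without a human prompting every step.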
“I think this is where you’re getting confused. The ‘old research’, i.e. neural networks, didn’t hit a wall; it was just never particularly useful outside of very niche…”
That’s what they thought, but then they realised the models had way fewer neurons than humans do. As humans, though, we have a limited experience intake, and they found they could feed the models a million times more experience, which greatly improved the outcome, especially with the backtracing capabilities.
Again, you don’t have to take my word for it: check out the overview in the StarTalk episode where Neil deGrasse Tyson talks with one of the architects of AI, Geoffrey Hinton. Or review the last three years of researchers purposely giving “AI” agency.
That was my point: given enough pathways and the ability to self-tweak based on experience, “intelligence” seems to be an emergent behaviour without specifically programming for it, like us. There’s no magic in a human brain; we are a chemical computer that wanted to survive and has tweaked itself to become better, to the point where we believe we are “alive” because we “think” it.
CorrectAlias@piefed.blahaj.zone 2 weeks ago
Asking for evidence of extraordinary claims = trolling. Got it.
“Agentic LLMs” is just a corporate buzzword. It’s meaningless, because by the very nature of LLMs, they do not “think”. It’s simply not possible. Deep learning models, maybe, but not LLMs.
Also, lots of things can mimic brains, and not all “brains” are the same anyway. So what brain are we talking about here?