Comment on Bcachefs creator claims his custom LLM is 'fully conscious'
echodot@feddit.uk 2 weeks ago
LLMs are not neural networks and neural networks are not AI. But, you know, other than that.
BCsven@lemmy.ca 2 weeks ago
From the web since you trolls can’t search: Large Language Models (LLMs) are a type of advanced neural network specifically designed for understanding and generating human language
echodot@feddit.uk 2 weeks ago
They’re not though; just because you’ve found a snippet that validates your incorrect understanding of AI doesn’t mean that it’s right. And since apparently the way you do research is to Google something and then click on the first result, I’ll explain.
Large language models don’t use the classic neural networks that people are familiar with. They use a lot of mathematics in high-dimensional spaces to generate their responses; they’re not achieving that by simulating neural pathways.
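To make the vector-math point concrete, here’s a toy sketch of next-token scoring as dot products in a shared embedding space. The vocabulary, dimensions, and random weights are all made-up assumptions for illustration, not how any real model is parameterised:

```python
# Toy sketch: next-token prediction as high-dimensional vector math,
# not simulated neurons. All sizes and values are invented for scale.
import numpy as np

vocab = ["the", "cat", "sat", "mat"]
d_model = 8  # toy embedding width; real models use thousands of dimensions

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), d_model))  # one vector per token

def next_token_logits(context_ids):
    # Collapse the context to a single vector (real models use attention here)
    h = embeddings[context_ids].mean(axis=0)
    # Score every vocabulary entry by dot product in the shared space
    return embeddings @ h

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

probs = softmax(next_token_logits([0, 1]))  # context: "the cat"
print(dict(zip(vocab, probs.round(3))))
```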
Neural networks simulate simple brains in order to produce output. It’s a much older version of AI, more like evolution simulation than artificial intelligence; there are plenty of videos on this on YouTube dating back well over a decade. What you are referring to in your original comment sounds very much like a neural network that has been trained on character recognition (again, loads of YouTube videos on the topic). But there’s no understanding there, no comprehension, and no learning. It’s just a system evolving to identify patterns.
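As a concrete example of that evolving-to-identify-patterns idea, here’s a toy single-layer network nudged toward telling two 3x3 “characters” apart. The data and shapes are invented for illustration:

```python
# A toy single-layer network trained to separate two 3x3 "characters".
# The weights drift toward whatever splits the inputs; nothing here
# understands anything.
import numpy as np

X = np.array([
    [1, 0, 1, 0, 1, 0, 1, 0, 1],   # a crude "X"
    [1, 1, 1, 1, 0, 1, 1, 1, 1],   # a crude "O"
], dtype=float)
y = np.array([0.0, 1.0])           # 0 = "X", 1 = "O"

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=9)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):               # gradient descent on squared error
    pred = sigmoid(X @ w + b)
    grad = (pred - y) * pred * (1 - pred)
    w -= 0.5 * (X.T @ grad)
    b -= 0.5 * grad.sum()

print(sigmoid(X @ w + b).round(2))  # approaches [0, 1]
```

There’s no comprehension anywhere in that loop, just weights drifting toward whatever separates the two inputs.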
But none of this is anything close to an early version of artificial general intelligence, because it’s all just responding to input. If you initialise a large language model and then just leave it, it’ll sit there and do nothing. A true artificial intelligence would have its own defined goals and take action to achieve them on its own, without any input from a human; it would also be capable of self-modification. LLMs and neural networks do neither of those things.
BCsven@lemmy.ca 2 weeks ago
Right, you missed the part about agency. I never said an LLM interaction model had agency; with agentic LLMs they do.
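For what it’s worth, an “agentic” setup is usually just an ordinary loop someone wrote around the model. A minimal sketch, with call_llm and run_tool as hypothetical stand-ins rather than any real API:

```python
# Hedged sketch: the loop, not the model, supplies the persistence.
# call_llm and run_tool are hypothetical stand-ins, not a real API.
def call_llm(prompt: str) -> str:
    # hypothetical: one stateless text-in, text-out call
    return "FINAL: stub reply"

def run_tool(action: str) -> str:
    # hypothetical: execute whatever tool the model asked for
    return "tool output"

def agent_loop(goal: str, max_steps: int = 10) -> str:
    context = goal
    for _ in range(max_steps):
        reply = call_llm(context)
        if reply.startswith("FINAL:"):        # assumed stop convention
            return reply
        context += "\n" + run_tool(reply)     # feed tool results back in
    return "gave up"

print(agent_loop("tidy my inbox"))
```

Whether that loop counts as agency is exactly what’s being argued here.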
echodot@feddit.uk 2 weeks ago
“What would biological learning for an AI look like?” I don’t even know what that sentence means or what you’re trying to convey.
No they can’t. That’s the whole point: they can’t self-adjust. They have no free will, so they have no ability to take self-modification actions.
Yes, but so can a non-intelligent computer program. The ability to access the internet has nothing to do with intelligence. See humans.
I think this is where you’re getting confused. The “old research”, aka neural networks, didn’t hit a wall; it was just never particularly useful outside of very niche circumstances, although it has been used extensively in OCR for decades. But it is no more intelligence than a plant turning towards the sun is intelligence; it’s just evolutionarily enforced stimulus response. Large language models work on a completely different concept. You don’t get good results by feeding neural networks lots of input, because it just overwhelms them with signal and they can’t optimise towards anything. If you built a neural network with 100 trillion nodes you might actually get something useful, but it still wouldn’t be artificial intelligence, and no one’s doing that anyway because it’s prohibitively processor-intensive, and anyway LLMs exist.
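A quick back-of-envelope on the “100 trillion nodes” remark; every figure below is an assumption chosen just to show the scale, not a measurement:

```python
# Rough storage cost of a 100-trillion-node network under assumed
# fan-in and precision. Illustrative only.
nodes = 100e12
fan_in = 100                  # assumed average inbound connections per node
bytes_per_weight = 4          # float32
weights = nodes * fan_in
petabytes = weights * bytes_per_weight / 1e15
print(f"{weights:.0e} weights, roughly {petabytes:.0f} PB just to store them")
```

Which is the “prohibitively processor-intensive” part in one line of arithmetic.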
It’s important to realise that words mean what they mean. Emergent behaviour just means that the behaviour is emergent; it doesn’t mean that the behaviour is intentional or directed. Large crowds have emergent behaviour, but it doesn’t mean there’s some hive mind controlling everyone.
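The crowd analogy is easy to demonstrate: in the toy sketch below, each walker follows one purely local rule, yet the group drifts together with nothing directing it. All parameters are arbitrary values picked for illustration:

```python
# Emergence from local rules: no walker knows the group's state,
# yet the spread shrinks. Step size and noise are toy values.
import random

random.seed(0)
positions = [random.uniform(0, 100) for _ in range(20)]
print(f"spread before: {max(positions) - min(positions):.1f}")

for _ in range(1000):
    # Each walker steps toward the midpoint of its two neighbours, plus noise
    positions = [
        p + 0.1 * ((positions[i - 1] + positions[(i + 1) % 20]) / 2 - p)
        + random.uniform(-0.5, 0.5)
        for i, p in enumerate(positions)
    ]

print(f"spread after:  {max(positions) - min(positions):.1f}")
```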
CorrectAlias@piefed.blahaj.zone 2 weeks ago
Asking for evidence of extraordinary claims = trolling. Got it.
“Agentic LLMs” is just a corporate buzzword. It’s meaningless, because by the very nature of LLMs, they do not “think”. It’s simply not possible. Deep learning models, maybe, but not LLMs.
Also, lots of things can mimic brains, and not all “brains” are the same anyway. So what brain are we talking about here?