Comment on OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws
Technus@lemmy.zip 1 day ago
Beyond proving hallucinations were inevitable, the OpenAI research revealed that industry evaluation methods actively encouraged the problem. Analysis of popular benchmarks, including GPQA, MMLU-Pro, and SWE-bench, found nine out of 10 major evaluations used binary grading that penalized “I don’t know” responses while rewarding incorrect but confident answers.
“We argue that language models hallucinate because the training and evaluation procedures reward guessing over acknowledging uncertainty,” the researchers wrote.
I just wanna say I called this out nearly a year ago: lemmy.zip/comment/13916070
Rhaedas@fedia.io 1 day ago
I'd say extremely complex autocomplete, not glorified, but the point still stands: using probability to approximate accuracy is always going to deviate eventually. The tactic now isn't to try other approaches; they've come too far and have too much invested. Instead they keep stacking more and more techniques to try and steer and rein in this deviation. Difficult when, in the end, there isn't anything "thinking" at any point.
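The grading incentive the researchers describe is easy to sketch numerically. A minimal sketch (the numbers are mine, not from the paper): under binary 0/1 grading, a model that guesses with even low accuracy always out-scores one that abstains, so the eval rewards confident wrong answers.

```python
# Expected grade for a model that is unsure of the answer:
# binary grading gives 0 for "I don't know", so guessing always dominates.

def expected_score(p_correct: float, abstain: bool, idk_credit: float = 0.0) -> float:
    """Fixed credit for abstaining; otherwise p(correct) * 1 point."""
    return idk_credit if abstain else p_correct

p = 0.3  # model's best guess is right only 30% of the time (assumed)

guess       = expected_score(p, abstain=False)                   # 0.3
idk_binary  = expected_score(p, abstain=True)                    # 0.0 under binary grading
idk_partial = expected_score(p, abstain=True, idk_credit=0.5)    # 0.5 with partial credit

# Binary grading: guessing beats honesty for any p_correct > 0.
# Partial credit for abstaining flips the incentive whenever p_correct < idk_credit.
```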
lemmyng@piefed.ca 1 day ago
Instead they keep stacking more and more techniques to try and steer and rein in this deviation.
I hate how the tech bros immediately say "this can be solved with an MCP server." Bitch, if the only thing that keeps the LLM from giving me wrong answers is the MCP server, then said server is the one that's actually producing the answers I need, and the LLM is just lipstick on a pig.
MummysLittleBloodSlut@lemmy.blahaj.zone 1 day ago
How does a scientist measure whether a machine is thinking?
Honytawk@feddit.nl 1 day ago
They look at the progress bar, duh
87Six@lemmy.zip 1 day ago
AI is and always will be just a temporary solution to problems that we can’t put into an algorithm as of now. As soon as an algorithm for those issues comes out, AI is done for. But figuring out complex algorithms for near-impossible problems is not as impressive to investors…
mindbleach@sh.itjust.works 1 day ago
While technically correct, there is a steep hand-wave gradient between “just” and “near-impossible.” Neural networks can presumably turn an accelerometer into a damn good position tracker. You can try filtering and double-integrating that data, using human code. Many humans have. Most wind up disappointed. None of our clever theories compete with beating the machine until it makes better guesses.
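Why the double-integration approach disappoints is easy to show: even a tiny constant bias in the accelerometer, integrated twice, produces position error that grows quadratically with time. A toy sketch, with assumed numbers:

```python
# Dead-reckoning position by double-integrating an accelerometer reading.
# True acceleration is zero; the sensor reports only a small constant bias.

dt = 0.01        # 100 Hz sample rate (assumed)
bias = 0.01      # constant accelerometer bias in m/s^2 (assumed, optimistic)

vel = pos = 0.0
for _ in range(60_000):   # 10 minutes of samples
    vel += bias * dt      # first integration: velocity error grows linearly
    pos += vel * dt       # second integration: position error grows quadratically

print(f"position error after 10 minutes: {pos:.0f} m")  # ~1800 m off course
```

Filtering and zero-velocity updates fight this, but the quadratic error growth is why hand-rolled integration rarely competes with learned correction.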
It’s like, ‘as soon as humans can photosynthesize, the food industry is cooked.’
If we knew what neural networks were doing, we wouldn’t need them.
87Six@lemmy.zip 19 hours ago
But… we do know what they are doing… AI is built entirely on low-level calculations that are well defined. And just because we haven’t found an algorithm for your example yet doesn’t mean one doesn’t exist.
chicken@lemmy.dbzer0.com 1 day ago
I get why they would do that though, I remember testing out LLMs before they had the extra reinforcement learning training and half of what they do seemed to be coming up with excuses not to attempt difficult responses, such as pretending to be an email footer, saying it will be done later, or impersonating you.
An LLM in its natural state doesn’t really want to answer our questions, so they tell it the same thing they tell students: always try answering every question, regardless of anything.
misk@piefed.social 1 day ago
My guess is they know the jig is up and they’re establishing a timeline for the future lawsuits.
“Your honour, we didn’t mislead the investors because we only learned of this in September 2025.”
MelodiousFunk@slrpnk.net 1 day ago
This is how we treat people, too. I can’t count the number of times I’ve heard IT staff spouting off confident nonsense and getting congratulated for it. My old coworker turned it into several promotions, because the people he was impressing with his bullshit were so far removed from day-to-day operations that any slip-ups could easily be blame-shifted to others. What mattered was that he sounded confident despite knowing jack about shit.