Comment on OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws

Rhaedas@fedia.io 1 day ago
I'd say extremely complex autocomplete, not glorified, but the point still stands that using probability to find accuracy is always going to deviate eventually. The tactic now isn't to try other approaches; they've come too far and have too much invested. Instead they keep stacking more and more techniques to try to steer and rein in this deviation. Difficult when, in the end, there isn't anything "thinking" at any point.

MummysLittleBloodSlut@lemmy.blahaj.zone 1 day ago
How does a scientist measure whether a machine is thinking?
Honytawk@feddit.nl 1 day ago
They look at the progress bar, duh
87Six@lemmy.zip 1 day ago
AI is and always will be just a temporary solution to problems that we can’t put into an algorithm as of now. As soon as an algorithm for a given problem comes out, AI is done for there. But figuring out complex algorithms for near-impossible problems is not as impressive to investors…
mindbleach@sh.itjust.works 1 day ago
While technically correct, there is a steep hand-wave gradient between “just” and “near-impossible.” Neural networks can presumably turn an accelerometer into a damn good position tracker. You can try filtering and double-integrating that data, using human code. Many humans have. Most wind up disappointed. None of our clever theories compete with beating the machine until it makes better guesses.
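To see why the hand-coded route disappoints, here is a minimal sketch (not anyone's actual tracker; the bias and noise figures are made-up illustrative values) of naively double-integrating accelerometer samples. A tiny constant sensor bias turns into position error that grows quadratically with time:

```python
import random

def integrate_position(accels, dt):
    """Naively double-integrate acceleration samples into position."""
    v = p = 0.0
    positions = []
    for a in accels:
        v += a * dt   # first integration: velocity
        p += v * dt   # second integration: position
        positions.append(p)
    return positions

# A stationary sensor should report position 0 forever, but a small
# constant bias in each reading, once integrated twice, accumulates
# as roughly 0.5 * bias * t**2.
random.seed(0)
dt = 0.01      # 100 Hz sampling
bias = 0.05    # hypothetical constant bias, m/s^2
samples = [bias + random.gauss(0, 0.1) for _ in range(6000)]  # 60 s of data
positions = integrate_position(samples, dt)
print(f"drift after 60 s: {positions[-1]:.1f} m")
```

With these assumed numbers the sensor "walks" tens of meters in a minute while sitting still, which is why the clever filtering theories mentioned above exist, and why most of them still lose to learned models.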
It’s like, ‘as soon as humans can photosynthesize, the food industry is cooked.’
If we knew what neural networks were doing, we wouldn’t need them.
87Six@lemmy.zip 17 hours ago
But…we do know what they are doing…AI is based completely on calculations at the low level that are well defined. And just because we haven’t found an algorithm for your example yet doesn’t mean one doesn’t exist.
mindbleach@sh.itjust.works 12 hours ago
Knowing it exists doesn’t mean you’ll ever find it.
Meanwhile: we can come pretty close, immediately, using data alone. Listing all the math a program performs doesn’t mean you know what it’s doing. Decompiling human-authored programs is hard enough. Putting words to the algorithms wrenched out by backpropagation is a research project unto itself.
lemmyng@piefed.ca 1 day ago
I hate how the tech bros immediately say "this can be solved with an MCP server." Bitch, if the only thing that keeps the LLM from giving me wrong answers is the MCP server, then said server is the one that's actually producing the answers I need, and the LLM is just lipstick on a pig.