mindbleach
@mindbleach@sh.itjust.works
- Comment on No, Deus Ex Remastered, I simply do not believe you need an RTX 2080 to run at recommended settings 5 days ago:
Surely it just means that’s what they tested with. The minimum specs sound like the oldest machine they bothered to lay hands on.
Listed specs are not what’s worrisome about this project.
- Comment on Pax Dei, the medieval EVE Online-esque MMO, gets its 1.0 release next month 1 week ago:
In WHAT FUCKING MANNER does this on-foot low-tech whack-people-with-sticks game resemble a sci-fi starship combat game?
The inability to describe any game except in reference to other games is infuriating enough, without forgetting to make the goddamn comparison!
- Comment on Messenger is an absurdly slick, perfectly lovely free pocket world exploration game you can play in a browser 1 week ago:
That is a terrible name.
- Comment on Brazil's president has signed a ban on selling loot boxes to minors as part of a larger online child safety law 1 week ago:
The razor is: did you, the player, receive new content? Or did you get charged for permission?
Horse armor is fine. That’s how low the bar is. That’s how bad this abuse is. All microtransactions are “on-disc DLC,” where you’ve already been given the thing, inside the game you already paid for, but fuck you, pay us again. And again and again and again.
It’s the difference between Warhammer’s little plastic men being obscenely expensive, and Games Workshop expecting five actual dollars after every match to replace their imaginary bullets.
- Comment on Brazil's president has signed a ban on selling loot boxes to minors as part of a larger online child safety law 1 week ago:
Fuck them kids. This entire business model is an abuse against people with credit cards.
Nothing inside a video game should cost real money.
- Comment on Console wars death watch: Microsoft Flight Simulator coming to PS5 in December - Ars Technica 1 week ago:
The war’s been over since blue team and green team started releasing near-identical machines, for nearly the same price, at basically the same time. There are no consoles anymore. It’s all just computers. Some computers have shitty locked-down app stores.
- Comment on Charlie Kirk could be placed on US currency under new House GOP proposal 1 week ago:
- Comment on OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws 2 weeks ago:
Insisting that someone could figure it out does not mean anyone has.
Twenty gigabytes of linear algebra is a whole fucking lot of stuff going on. Creating it by letting the computer train is orders of magnitude easier than picking it apart to say how it works. Sure - you can track individual instructions, all umpteen billion of them. Sure - you can describe broad sections of observed behavior. But if any programmer today tried recreating that functionality, from scratch, they would fail.
Absolutely nobody has looked at an LLM, gone ‘ah-ha, so that’s it,’ and banged out a neat little C alternative. Lack of demand cannot be why.
- Comment on OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws 2 weeks ago:
Knowing it exists doesn’t mean you’ll ever find it.
Meanwhile: we can come pretty close, immediately, using data alone. Listing all the math a program performs doesn’t mean you know what it’s doing. Decompiling human-authored programs is hard enough. Putting words to the algorithms wrenched out by backpropagation is a research project unto itself.
- Comment on OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws 2 weeks ago:
… yes? This has been known since the beginning. Is it news because someone finally convinced Sam Altman?
Neural networks are universal estimators. “The estimate is wrong sometimes!” is… what estimates are. The chatbot is not an oracle. It’s still bizarrely flexible, for a next-word-guesser, and it’s right often enough for these fuckups to become a problem.
What bugs me are the people going ‘see, it’s not reasoning.’ As if reasoning means you’re never wrong. Humans never misremember, or confidently espouse total nonsense. And we definitely understand brain chemistry and neural networks well enough to say none of these bajillion recurrent operations constitute the process of thinking.
Consciousness can only be explained in terms of unconscious events. Nothing else would be an explanation. So there is some sequence of operations which constitutes a thought. Computer science lets people do math with marbles, or in trinary, or on paper, so it doesn’t matter how exactly that work gets done.
Though it’s probably not happening here. LLMs are the wrong approach.
- Comment on OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws 2 weeks ago:
My guy, Microsoft Encarta 97 doesn’t have senses either, and its recollection of the capital of Austria is neither coincidence nor hallucination.
- Comment on OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws 2 weeks ago:
While technically correct, there is a steep hand-wave gradient between “just” and “near-impossible.” Neural networks can presumably turn an accelerometer into a damn good position tracker. You can try filtering and double-integrating that data, using human code. Many humans have. Most wind up disappointed (a toy example of why is sketched below). None of our clever theories compete with beating the machine until it makes better guesses.
It’s like, ‘as soon as humans can photosynthesize, the food industry is cooked.’
If we knew what neural networks were doing, we wouldn’t need them.
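A toy illustration of that double-integration disappointment, with every number invented: assume a 100 Hz consumer accelerometer with a little noise and a small uncalibrated bias, sitting perfectly still. Naively integrating twice still drifts by tens of meters within a minute.

```python
# Naive dead reckoning: double-integrate accelerometer samples.
# All numbers are made up; the point is that a tiny constant bias
# grows quadratically once you integrate it twice.
import numpy as np

dt = 0.01                      # assumed 100 Hz sample rate
t = np.arange(0, 60, dt)       # one minute of samples
true_accel = np.zeros_like(t)  # the device is sitting perfectly still

# A cheap MEMS sensor: a bit of white noise plus a small constant bias.
noise = np.random.normal(0.0, 0.05, size=t.shape)  # m/s^2
bias = 0.02                                        # m/s^2, uncalibrated
measured = true_accel + noise + bias

# acceleration -> velocity -> position, the naive way
velocity = np.cumsum(measured) * dt
position = np.cumsum(velocity) * dt

print(f"drift after 60 s: {position[-1]:.1f} m (truth: 0 m)")
# The bias alone contributes 0.5 * 0.02 * 60^2 = 36 m of error.
```

A trained network gets to soak up the sensor’s bias and noise structure from data instead of someone deriving it by hand, which is the whole appeal.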
- Comment on Vimeo is getting acquired by Bending Spoons, the parent company of Evernote 4 weeks ago:
- Comment on The Chinese Room defend Bloodlines 2's paywalled vampire clans: "we have been expanding it from where we originally planned to land it" 1 month ago:
‘We changed scope and it’s your problem’ does not parse.
- Comment on Outlaws + Handful of Missions: Remaster is the next Nightdive Studios release 1 month ago:
Could LucasArts not secure the rights for “Fistful”?
- Comment on How AI researchers accidentally discovered that everything they thought about learning was wrong 1 month ago:
Quite possibly, yes. But how much is “a lot”? A wide network acts like many permutations tried at once.
Probing the space with small networks and brief training sounds faster, but large networks recreate that too: they’ll train for a bit, mark any weights near zero, reset, and zero those out (roughly the loop sketched below).
What training many small networks would be good for is experimentation. Super deep and narrow, just five big dumb layers, fewer steps with more heads, that kind of thing. Maybe get wild and ask a question besides “what’s the next symbol.”
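A minimal sketch of that prune-and-reset loop, lottery-ticket style, with a toy model, toy data, and made-up thresholds rather than anyone’s actual recipe:

```python
# Iterative magnitude pruning with rewind: train, prune the smallest
# surviving weights, reset the rest to their original initialization.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 1))
initial_state = copy.deepcopy(model.state_dict())   # the reset point

# Toy regression data so the loop actually runs.
x = torch.randn(1024, 32)
y = x[:, :1] * 2.0 + 0.1 * torch.randn(1024, 1)

masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}

for round_ in range(3):
    # 1. Train for a bit.
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
        with torch.no_grad():                 # keep pruned weights at zero
            for n, p in model.named_parameters():
                if n in masks:
                    p *= masks[n]

    # 2. Mark the weights nearest zero (another 20% of the survivors).
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in masks:
                alive = p[masks[n].bool()].abs()
                cutoff = alive.quantile(0.2)
                masks[n] = (p.abs() > cutoff).float() * masks[n]

    # 3. Reset to the original initialization and zero the marked weights out.
    model.load_state_dict(initial_state)
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in masks:
                p *= masks[n]

    kept = sum(m.sum().item() for m in masks.values())
    total = sum(m.numel() for m in masks.values())
    print(f"round {round_}: keeping {kept/total:.0%} of weights")
```

The rewind to the original initialization is the part that separates this from ordinary pruning; the surviving sparse subnetwork gets retrained from where it started.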
- Comment on Is Germany on the Brink of Banning Ad Blockers? User Freedom, Privacy, and Security Is At Risk. 1 month ago:
Never mind that tearing a page out of your own copy of a book is not a copyright issue… at all.
- Comment on China is about to launch SSDs so small you insert them like a SIM card 1 month ago:
Defragging wasn’t handled in hardware. The OS is free to frag it up.
- Comment on China is about to launch SSDs so small you insert them like a SIM card 1 month ago:
It’s a little weird that wear leveling isn’t handled at the software level, given that you can surely pick free sectors randomly. Random access is nearly free. So is idle CPU time.
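A toy sketch of what that could look like in software: the allocator just picks a random free block, so erases spread over the whole device. The class, block count, and workload are all invented for illustration.

```python
# Naive software wear leveling: allocate a random free block instead of the
# lowest-numbered one. Flash has no seek penalty, so scattering costs nothing.
import random

class NaiveWearLeveler:
    def __init__(self, num_blocks=1024):
        self.free = set(range(num_blocks))
        self.erase_counts = [0] * num_blocks

    def allocate(self):
        block = random.choice(tuple(self.free))  # any free block will do
        self.free.remove(block)
        return block

    def release(self, block):
        self.erase_counts[block] += 1  # erase cycles are what wear flash out
        self.free.add(block)

ftl = NaiveWearLeveler()
for _ in range(100_000):
    b = ftl.allocate()
    ftl.release(b)

print("max erases on any block:", max(ftl.erase_counts))
print("min erases on any block:", min(ftl.erase_counts))
```

Real flash translation layers also track erase counts and migrate cold data, but random placement alone already spreads wear fairly evenly.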
- Comment on China is about to launch SSDs so small you insert them like a SIM card 1 month ago:
Is there a difference, besides SSDs tending to be plugged in all the time? Maybe better firmware?
- Comment on China is about to launch SSDs so small you insert them like a SIM card 1 month ago:
So… an SD card?
- Comment on AI Eroded Doctors' Ability to Spot Cancer Within Months in Study 1 month ago:
Are you sure? Check.
Where you jumped in is me, pointing out, repeatedly, that LLMs and IT have nothing to do with the actual article. Y’know, the doctors I keep mentioning? They’re not decorative.
- Comment on AI Eroded Doctors' Ability to Spot Cancer Within Months in Study 1 month ago:
You literally did.
“Concerning that the same is happening in medical even for the experts.”
- Comment on AI Eroded Doctors' Ability to Spot Cancer Within Months in Study 1 month ago:
No. You’re making a faulty comparison. The thing in this article is exclusively for experts. Using it made them better doctors, but when they stopped using it, they were out of practice at the old way. Like any skill you stop exercising. Especially at an expert level. Your junior programmers incompetently trusting LLMs is not the same problem in any direction.
This is genuinely important, because people are developing prejudice against an entire branch of computer science. This stupid headline pretends AI made cancer detection worse. Cancer’s kind of a big deal! Disguising the fact that detection rates improved with this tool, by fixating on how they got worse without it, may cost lives.
A lot of people in this thread are theatrically advocating the importance of deep understanding of complex subjects, and then giving a kneejerk “fuckin’ AI, am I right?”
- Comment on 1 month ago:
Some guy blogged that the smart ones move to advertising.
- Comment on 1 month ago:
Neural networks becoming practical is world-changing. This lets us do crazy shit we have no idea how to program sensibly. Dead-reckoning with an accelerometer could be accurate to the inch. Chroma-key should rival professional rotoscoping. Any question with a bunch of data and a simple answer can be trained at some expense and then run on an absolute potato (a bare-bones example below).
So it’s downright bizarre that every single company is fixated on guessing the next word with transformers. Alternatives like text diffusion and Mamba pop up and then disappear, without so much as a ‘so that didn’t work’ blog post.
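The “absolute potato” half, in miniature: once training is done somewhere expensive, inference is just a few matrix multiplies. The layer sizes and random stand-in weights below are placeholders for whatever training would actually produce.

```python
# Inference for a tiny trained network is just matrix multiplies and a ReLU.
# The weights here are random stand-ins; in practice you'd load whatever
# training produced, on whatever hardware trained it.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layout: 6 IMU features in, 3 position deltas out.
W1, b1 = rng.standard_normal((6, 64)), np.zeros(64)
W2, b2 = rng.standard_normal((64, 3)), np.zeros(3)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)  # hidden layer with ReLU
    return h @ W2 + b2                # linear output

sample = rng.standard_normal(6)
print(forward(sample))  # a few hundred multiply-adds; potato-class hardware copes
```

The expensive, GPU-shaped part happens once; the thing you ship is a pile of constants and a loop.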
- Comment on AI Eroded Doctors' Ability to Spot Cancer Within Months in Study 1 month ago:
We’re not talking about LLMs.
These doctors didn’t ask ChatGPT “does this look like cancer.” We’re talking about domain-specific medical tools.
- Comment on AI Eroded Doctors' Ability to Spot Cancer Within Months in Study 1 month ago:
Should urologists still train to detect diabetes by taste? We wouldn’t want the complexity of modern medicine to stunt their growth. These quacks can’t sniff piss with nearly the accuracy of Victorian doctors.
When a tool gets good enough, not using it is irresponsible. Sawing lumber by hand is a waste of time. Farmers today can’t use scythes worth a damn. Programming in assembly is frivolous.
At what point do we stop practicing without the tool? How big can the difference be, and still be totally optional? It’s not like these doctors lost or lacked the fundamentals. They’re just rusty at doing things the old way. If the new way is simply better, good, that’s progress.
- Comment on AI Eroded Doctors' Ability to Spot Cancer Within Months in Study 1 month ago:
“Concerning that the same is happening in medical even for the experts.”
It isn’t.
Glad we cleared that up?
- Comment on AI Eroded Doctors' Ability to Spot Cancer Within Months in Study 1 month ago:
Tone policing, followed by essentialist insults. Zero self-awareness.