hendrik
@hendrik@palaver.p3x.de
- Comment on 4chan has been down since Monday night after “pretty comprehensive own” 2 days ago:
Lol. And what kind of people are on Soyjak? Is that site more or less degenerate?
- Comment on It’s game over for people if AI gains legal personhood 3 days ago:
Exactly. This is directly opposed to why we do AI in the first place. We want something to drive the Uber without earning a wage. A cheap factory workforce. Generating images without paying some artist $250... If we wanted that, we already have humans available; that's how the world has worked for quite some time now.
I'd say us giving AI human rights and reversing 99.9% of what it's intended for is less likely to happen than the robot apocalypse.
- Comment on Access to future AI models in OpenAI's API may require a verified ID 3 days ago:
They can't seriously complain about intellectual property theft, can they?
- Comment on Human-AI relationships pose ethical issues, psychologists say. 5 days ago:
I feel psychologists aren't really in the loop when people make decisions about AI or most of the newer tech. Sure, they ask the right questions. And all of this is a big, unanswered question. Plus how a modern society works with loneliness, skewed perspectives by social media... But does anyone really care? Isn't all of this shaped by some tech people in Silicon Valley and a few other places? And the only question is how to attract investor money?
And I think people really should avoid marrying commercial services. That doesn't end well. If you want to marry an AI, make sure it is its own entity and not just a cloud service.
- Comment on Most Americans don’t trust AI — or the people in charge of it 1 week ago:
Sure. I think you're right. I myself want an AI maid loading the dishwasher, doing the laundry, and dusting the shelves. A robot vacuum is nice, but that covers just a tiny fraction of the tedious everyday chores. Plus an AI assistant on my computer, cleaning up the hard drive, sorting my gigabytes of photos...
And I don't think we're there yet. Maybe it'd be the right amount of billions of dollars to pump into the hype if we anticipated all of this happening. But a lame assistant that can answer questions and gets the facts right 90% of the time, and whose attempts to 'improve' my emails are counterproductive much of the time, isn't really that helpful to me.
And with that it's just an overinflated bubble that is based on expectations, not actual usefulness or yield of the current state of technology.
- Comment on Most Americans don’t trust AI — or the people in charge of it 1 week ago:
At the current state of things, AI just feels like it's being forced on people. There isn't much transparency, and a lot happens without people's consent. Training data is taken without consent, and AI-written text, often riddled with misinformation, is displayed to me without anyone being upfront about it. I also stop reading most of the time, unless there is a comment section beneath for me to complain 😉
- Comment on Most Americans don’t trust AI — or the people in charge of it 1 week ago:
Uh. What do they say to an AI shill rewriting their social system with AI code? Or a president writing the country's economic strategy with AI? I also believe that's going to have... consequences...
- Comment on The Growing Number of Tech Companies Getting Cancelled for AI Washing 1 week ago:
Yea, I dunno. Seems investors like buzzwords more than anything else. I'm not really keeping track, but I remember all the crypto hype and then NFTs. I believe that has toned down a bit.
- Comment on The Growing Number of Tech Companies Getting Cancelled for AI Washing 1 week ago:
I think the mechanism behind that is fairly simple. AI is a massive hype, and companies could attract lots of investor money by slapping the word "AI" on things. And group dynamics make the rest of the companies want in, too.
- Comment on The UK Government Just Made Everyone Less Safe As Apple Shuts Down iCloud Encryption 1 month ago:
What kind of influence are you referring to? On how the internet gets shaped in the future? Or society and politics, or as part of the Five Eyes?
- Comment on The UK Government Just Made Everyone Less Safe As Apple Shuts Down iCloud Encryption 1 month ago:
Wow, the Brits are really killing it. (The internet.) First smaller forums having to shut down and the Fediverse needing to defederate and block UK users (at least technically). And now they also disable cloud encryption... But I guess they've always led the way with surveillance tech, porn filters on residential internet connections, etc.
- Comment on HP ditches 15-minute wait time policy due to 'feedback' 1 month ago:
Well, it's just like fighting fire with fire, isn't it? I suppose if someone tries hard enough, while simultaneously avoiding thinking about any consequences, they might be able to convince themselves.
- Comment on Most customizable desktop environment? 1 month ago:
KDE is the correct answer. I guess you could also learn coding, and then any piece of open-source software would become "customizable" to you...
- Comment on Does anyone else miss Marcan42's Mastodon page? 1 month ago:
Hmmh. I mean, sadly we don't have an abundance of free software developers, let alone kernel developers. So in reality we just can't pull them from anywhere. More often than not, it's hard enough to find one person, so I don't see how we'd get a second one on standby. But I agree: hypothetically, it'd be nice to have more than enough people working on it, and some leeway.
- Comment on Does anyone else miss Marcan42's Mastodon page? 1 month ago:
I don't think this is about specific people. It's a systemic problem, about drama, burn-out, and other issues. I mean, if someone leaves due to some larger issues, the issues don't necessarily vanish along with the person... It's not 100% that way, either. Sometimes people-problems do go away with the people involved. But I don't think this is about idolization.
- Comment on Grok 3 released as "truth-seeking AI". 1 month ago:
Hey @Cat@ponder.cat I think you're dumping too much AI news into the technology communities these days. I've been annoyed a bit lately, since it's mostly posts from you, and most of the articles aren't even particularly interesting. And I'm not even sure if they're interesting to you, because you don't seem to engage in the discussions below your own posts, if there is any. So IMO this just spams the place. Same applies to the technology community at LW.
- Comment on Why Does ChatGPT “Delve” So Much? Exploring the Sources of Lexical Overrepresentation in Large Language Models. 1 month ago:
Would be super interesting to follow up on that research and break it down by domains. Find out how medical papers compare to physics, to economy...
But it comes as no surprise to me. The number of papers written is the currency in science. The main part of the job is to push papers, pull in funding, advance one's career... So researchers turn to tools that make it easier to push out many papers.
- Comment on SemiAnalysis says DeepSeek spent more than it claims | Taiwan News | Feb. 5, 2025 18:56 2 months ago:
Uh, that's not good journalism. As far as I know, it's the parent company that has these datacenters for $1.6 billion, not DeepSeek itself. So it's way more complicated than that.
- Comment on No, DeepSeek isn’t uncensored if you run it locally 2 months ago:
Okay. I guess at this point there is every possible claim out there anyways. I've read it's too censored, it's not censored enough, it was cheap to train, it wasn't as cheap to train as they claimed, they used H800, they probably used other cards as well... There is just an absurd amount of unsubstantiated myths out there. Plus all the speculation regarding Nvidia's stock price...
- Comment on No, DeepSeek isn’t uncensored if you run it locally 2 months ago:
I've never heard that myth. But yeah, it's government-mandated censorship. No Chinese company can release a model that doesn't have censorship baked in. And it's not very hard to check this. The first thing I did was download one of the smaller variants of the R1 distills and ask it some provocative questions. And it refused to answer. Much like Meta's instruct-tuned models, or most models out there generally. Just with the political censorship on top.
- Comment on Unmasking AI’s Role in the Age of Disinformation: Friend or Foe? 2 months ago:
I -personally- don't think so. I also read these regular news articles claiming OpenAI has clandestinely achieved AGI, or that their models have developed sentience... And they're just keeping that from us. And it certainly helps increase the value of their company. But I think that's a conspiracy theory. Every time I try ChatGPT or Claude or whatever, I see how it's not that intelligent. It certainly knows a lot of facts. And it's very impressive. But it also often fails at helping me with more complicated emails, coding tasks, or just summarizing text correctly. I don't see how it's at the brink of AGI, if that's the public variant. And sure, they're probably not telling the whole truth. And they have lots of bright scientists working for them.
And they like some stuff to stay behind closed curtains. Most likely how they violate copyright... But I don't think they're that far ahead. They could certainly make a lot of money by increasing the usefulness of their product, yet it seems to me like it's stagnating. The reasoning ability is huge progress. But it still doesn't solve a lot of issues. And I'm pretty sure we'd have ChatGPT 5 by now if it were super easy to scale and make it more intelligent. Plus, it's been two weeks since a smaller (Chinese) startup proved other entities can compete with the market leader. And do it way more efficiently.
So I think there is lots of circumstantial evidence leading me to believe they aren't far ahead of what other people do. And we have academic research and workgroups working on it and publishing their results publicly. So I think we have a rough estimate of what issues they're facing and what AI progress is struggling with. And a lot of those issues are really hard to solve. I think it's going to take some time until we arrive at AGI. And I think it requires a fundamentally different approach than the current model design.
- Comment on Unmasking AI’s Role in the Age of Disinformation: Friend or Foe? 2 months ago:
I'm not a machine learning expert. But I think it's just that we haven't learned how to do it yet. It's not a technical matter, or a question of where to put it, but more that science has to figure out a few things. It'd be massively useful to guide these things. To control whether they hallucinate or tell the truth. To make them just do customer support based on factual information instead of also engaging in intimate conversations. To strip bias and stereotypes. To make them "safe". But if you look at these systems in practice, you'll see they often fail. And then someone writes a news article every few weeks. And it happens to all current AI systems, even the market leaders. So I figure science just can't do it yet and we're in the early stages. Nobody knows at this point how to set it up so a company could trust AI to act exactly in its interest. We might be able to do so someday, but that's still science fiction until we arrive at Skynet.
- Comment on Unmasking AI’s Role in the Age of Disinformation: Friend or Foe? 2 months ago:
I don't think we're quite there yet. It's important to align these models, and companies do it. But it's a huge issue that they are biased from the training data, reproduce stereotypes, mostly lean towards the left, etc. And they'll have read lots of Reddit posts saying Reddit or Meta sucks, or Google is unethical... And it'll show. Even if you, as a company, try your best to bake something on top of your models. So yeah, it's a valid concern. But it's not like it's easy for them to do it reliably at this point.
- Comment on Chatbot Software Begins to Face Fundamental Limitations. 2 months ago:
Meh. They cannot do everything in one shot. But we don't do that. We have thinking/reasoning models these days, and those theoretical limitations don't apply there. So it's quite the opposite of the headline.
- Comment on Security researchers tested 50 well-known jailbreaks against DeepSeek’s popular new AI chatbot. It didn’t stop a single one. 2 months ago:
I know. This isn't the first article about it. IMO this could have been done deliberately. They just slapped on something with a minimal amount of effort to pass Chinese regulation, and that's it. But all of this happens in a context, doesn't it? Did the scientists even try? What's the target use-case, and what are the implications for usage? And why is the baseline something that doesn't really compare, and why is the one category where they did add some censorship the only one missing?
- Comment on Security researchers tested 50 well-known jailbreaks against DeepSeek’s popular new AI chatbot. It didn’t stop a single one. 2 months ago:
Nice study. But I think they should have mentioned some more context. Yesterday people were complaining the models won't talk about the CCP, or Winnie the Pooh. And today the lack of censorship is alarming... Yeah, so much for that. And by the way, censorship isn't just a thing in the bare models. Meta, OpenAI, etc. all use frameworks and extra software around the models themselves to check input and output. So it isn't really fair to compare a pipeline with AI safety factored in to a bare LLM.
- Comment on Hugging Face researchers are trying to build a more open version of DeepSeek's AI 'reasoning' model 2 months ago:
Here's the link to the mentioned Github project page: https://github.com/huggingface/open-r1
- Comment on It is time to ban email. 2 months ago:
So, what's the successor? How do we send and receive messages then?
- Comment on Elon Musk email to X staff: ‘we’re barely breaking even’ 2 months ago:
[...] and while X has added some features, like job listings and a new video tab, there’s little sign of the service he’d said would be able to “someone’s entire financial life” by the end of 2024.
Yeah, wasn't his dream to create some unified platform with X, a "super-app" that does everything? I guess if he had been able to follow up on his promises, that'd generate some revenue... Instead, it's always been a moderately toxic platform, albeit well used, and then it just took a turn for the worse.
- Comment on Meta kills diversity programs, claiming DEI has become “too charged” 2 months ago:
Why that amount of bootlicking? Do they have any specific plans for the future I'm not aware of?