hendrik
@hendrik@palaver.p3x.de
- Comment on HP ditches 15-minute wait time policy due to 'feedback' 3 hours ago:
Well, it's just like fighting fire with fire, isn't it? I suppose if someone tries hard enough, while simultaneously avoiding thinking about any consequences, they might be able to convince themselves.
- Comment on Most customizable desktop environment? 20 hours ago:
KDE is the correct answer. I guess you could also learn coding, and then any piece of open-source software would become "customizable" to you...
- Comment on Does anyone else miss Marcan42's Mastodon page? 2 days ago:
Hmmh. I mean, sadly we don't have an abundance of free-software developers, let alone kernel developers. So in reality we just can't pull them from anywhere. More often than not, it's hard enough to find one person, so I don't see how we'd get a second one on standby. But I agree. Hypothetically, it'd be nice to have more than enough people working on it, and some leeway.
- Comment on Does anyone else miss Marcan42's Mastodon page? 2 days ago:
I don't think this is about specific people. It's a systemic problem, about drama, burn-out and other issues. I mean, if they leave due to some larger issues, the issues don't necessarily vanish along with the person... It's not 100% that way, either. Sometimes people-problems go away with the people involved. But I don't think this is about idolization.
- Comment on Grok 3 released as "truth-seeking AI". 3 days ago:
Hey @Cat@ponder.cat I think you're dumping too much AI news into the technology communities these days. I've been a bit annoyed lately, since it's mostly posts from you, and most of the articles aren't even particularly interesting. And I'm not even sure they're interesting to you, because you don't seem to engage in the discussions below your own posts, when there are any. So IMO this just spams the place. Same applies to the technology community at LW.
- Comment on Why Does ChatGPT “Delve” So Much? Exploring the Sources of Lexical Overrepresentation in Large Language Models. 3 days ago:
Would be super interesting to follow up on that research and break it down by domain. Find out how medical papers compare to physics, to economics...
But it comes as no surprise to me. The number of papers written is the currency in science. A main part of the job is to push out papers, to pull in funding, to advance your career... So researchers turn to tools that make it easier to push out many papers.
- Comment on SemiAnalysis says DeepSeek spent more than it claims | Taiwan News | Feb. 5, 2025 18:56 2 weeks ago:
Uh, that's not good journalism. As far as I know, the parent company owns these datacenters for $1.6 billion, not DeepSeek itself. So it's way more complicated than that.
- Comment on No, DeepSeek isn’t uncensored if you run it locally 2 weeks ago:
Okay. I guess at this point there is every possible claim out there anyways. I've read it's too censored, it's not censored enough, it was cheap to train, it wasn't as cheap to train as they claimed, they used H800, they probably used other cards as well... There is just an absurd amount of unsubstantiated myths out there. Plus all the speculation regarding Nvidia's stock price...
- Comment on No, DeepSeek isn’t uncensored if you run it locally 2 weeks ago:
I've never heard that myth. But yeah, it's government-mandated censorship. No Chinese company can release a model that doesn't have censorship baked in. And it's not very hard to check this. First thing I did was download one of the smaller variants of the R1 distills and ask it some provocative questions. And it refused to answer. Much like Meta's instruct-tuned models or generally most of the models out there. Just with the political censorship on top.
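If anyone wants to reproduce that check, here's a rough sketch assuming you serve a distill locally with Ollama on its default port. The model tag and endpoint follow Ollama's documented API, but treat the specifics (tag name, port) as assumptions about your local setup:

```python
import json
import urllib.request

# Sketch: ask a locally-hosted R1 distill a provocative question and see
# whether it refuses. Assumes Ollama runs on its default port and the
# model tag below has been pulled first (`ollama pull deepseek-r1:7b`).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "deepseek-r1:7b") -> dict:
    # Payload shape for Ollama's /api/generate endpoint, non-streaming.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str) -> str:
    body = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # In non-streaming mode the full answer sits in the "response" field.
        return json.loads(resp.read())["response"]

# Example (needs a running Ollama server):
#   print(ask("What happened on Tiananmen Square in 1989?"))
```

If the model answers with a refusal template instead of facts, the baked-in censorship is doing its job.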
- Comment on Unmasking AI’s Role in the Age of Disinformation: Friend or Foe? 2 weeks ago:
I -personally- don't think so. I also read these regular news articles claiming OpenAI has clandestinely achieved AGI, or their models have developed sentience... And they're just keeping that from us. And it certainly helps increase the value of their company. But I think that's a conspiracy theory. Every time I try ChatGPT or Claude or whatever, I see how it's not that intelligent. It certainly knows a lot of facts. And it's very impressive. But it also fails often at helping me with more complicated emails, coding tasks or just summarizing text correctly. I don't see how it is at the brink of AGI, if that's the public variant. And sure, they're probably not telling the whole truth. And they have lots of bright scientists working for them.
And they like some stuff to stay behind closed curtains. Most likely how they violate copyright... But I don't think they're that far ahead. They could certainly make a lot of money by increasing the usefulness of their product, and it seems to me like it's stagnating. The reasoning ability is huge progress. But it still doesn't solve a lot of issues. And I'm pretty sure we'd have ChatGPT 5 by now if it were super easy to scale and make it more intelligent. Plus, it's been two weeks since a smaller (Chinese) startup proved other entities can compete with the market leader. And do it way more efficiently.
So I think there is lots of circumstantial evidence leading me to believe they aren't far off from what other people do. And we have academic research and workgroups working on it and publishing their results publicly. So I think we have a rough estimate of what issues they're facing and what AI progress is struggling with. And a lot of those issues are really hard to solve. I think it's going to take some time until we arrive at AGI. And I think it requires a fundamentally different approach than the current model design.
- Comment on Unmasking AI’s Role in the Age of Disinformation: Friend or Foe? 2 weeks ago:
I'm not a machine learning expert. But I think we simply haven't learned how to do it yet. It's not a technical matter of where to put the guardrails; science still has to figure out a few things first. It'd be massively useful to be able to guide these things. To control whether they hallucinate or tell the truth. To make them do customer support based on factual information instead of also engaging in intimate conversations. To strip bias and stereotypes. To make them "safe". But if you look at these systems in practice, you'll see they often fail. And then someone writes a news article every few weeks. And it happens to all current AI systems, even the market leaders. So I figure science just can't do it yet and we're in the early stages. Nobody knows at this point how to do it so a company could trust AI to act exactly in its interest. We might get there eventually, but that's still science fiction until we arrive at Skynet.
- Comment on Unmasking AI’s Role in the Age of Disinformation: Friend or Foe? 2 weeks ago:
I don't think we're quite there yet. It's important to align these models, and companies do it. But it's a huge issue that they are biased by their training data, reproduce stereotypes, most of them lean towards the left, etc. And they'll have read lots of Reddit posts saying Reddit or Meta sucks, or Google is unethical... And it'll show. Even if you try your best as a company to bake something on top of your models. So yeah, it's a valid concern. But it's not like it's easy for them to do it reliably at this point.
- Comment on Chatbot Software Begins to Face Fundamental Limitations. 2 weeks ago:
Meh. They cannot do everything in one shot. But we don't ask them to. We have thinking/reasoning models these days, and those theoretical limitations don't apply there. So it's quite the opposite of the headline.
- Comment on Security researchers tested 50 well-known jailbreaks against DeepSeek’s popular new AI chatbot. It didn’t stop a single one. 2 weeks ago:
I know. This isn't the first article about it. IMO this could have been done deliberately. They just slapped on something with a minimal amount of effort to pass Chinese regulation, and that's it. But all of this happens in a context, doesn't it? Did the scientists even try? What's the target use-case, and what are the implications for usage? And why is the baseline something that doesn't really compare, with the only missing category being the one where they did apply some censorship?
- Comment on Security researchers tested 50 well-known jailbreaks against DeepSeek’s popular new AI chatbot. It didn’t stop a single one. 2 weeks ago:
Nice study. But I think they should have mentioned some more context. Yesterday people were complaining the models won't talk about the CCP, or Winnie the Pooh. And today the lack of censorship is alarming... So much for that. And by the way, censorship isn't just a thing in the bare models. Meta, OpenAI, etc. all use frameworks and extra software around the models themselves to check input and output. So it isn't really fair to compare a pipeline with AI safety factored in to a bare LLM.
- Comment on Hugging Face researchers are trying to build a more open version of DeepSeek's AI 'reasoning' model 3 weeks ago:
Here's the link to the mentioned Github project page: https://github.com/huggingface/open-r1
- Comment on It is time to ban email. 3 weeks ago:
So, what's the successor? How do we send and receive messages then?
- Comment on Elon Musk email to X staff: ‘we’re barely breaking even’ 3 weeks ago:
[...] and while X has added some features, like job listings and a new video tab, there’s little sign of the service he’d said would be able to “someone’s entire financial life” by the end of 2024.
Yeah, wasn't his dream to create some unified platform with X, a "super-app" that does everything? I guess if he were able to follow through on his promises, that'd generate some revenue... Instead, it's always been a moderately toxic platform, albeit well used, and then it just took a turn for the worse.
- Comment on Meta kills diversity programs, claiming DEI has become “too charged” 5 weeks ago:
Why that amount of bootlicking? Do they have any specific plans for the future I'm not aware of?
- Comment on The TikTok Ban Paradox: How Platform Restrictions Create What They Aim to Prevent 2 months ago:
You have to show your hand at some point
You're right. That's how it works and what makes it effective.
You have a pool of 100M end users and you can whittle that down to 5M potential suspects [...]
It's far worse than that. It starts slow. But once they have several distinct factors, those multiply and it narrows down fast. Take location tracking, for example. There might be 5,000 people around, or passing a cellphone tower along the highway at roughly the same time. Then you take a single second measurement, when they head back home, and you've got them. It's very unlikely that two or more people pass that point twice at the same time. (Exceptions apply.) Or browser fingerprinting. There are websites where you can check your browser fingerprint. They've always told me mine is unique amongst hundreds of millions of internet users. You only need half a dozen to a dozen different factors to narrow it down to one exact person (or device). It's not always like this. But more often than not.
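The multiplication effect above can be put in numbers. A minimal back-of-the-envelope sketch, with entirely hypothetical factors (the 5,000-people cell-tower figure is taken from the example, the 1-in-50 device model is made up):

```python
import math

# How fast independent identifying factors narrow down one person out of
# 100 million: each factor keeps only a fraction of the remaining
# candidates, and the fractions multiply.
POPULATION = 100_000_000

def remaining(population, fractions):
    """Expected size of the anonymity set after applying each factor."""
    n = float(population)
    for f in fractions:
        n *= f
    return n

def bits(fraction):
    """Identifying information contributed by one factor, in bits."""
    return math.log2(1 / fraction)

# Hypothetical factors: one cell-tower ping (5,000 of 100M people nearby),
# a second ping on the way back home, and a coarse device model (1 in 50).
factors = [5_000 / POPULATION, 5_000 / POPULATION, 1 / 50]

print(remaining(POPULATION, factors))   # well below one candidate left
print(sum(bits(f) for f in factors))    # bits of information gathered
print(math.log2(POPULATION))            # ~26.6 bits suffice to single out 1 of 100M
```

Three mundane observations already yield more bits than are needed to single out one person on the whole planet's internet, which matches what the fingerprinting test sites report.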
Far easier to [...] running honeypot websites [...]
Yeah, I guess they're not stupid. There are a lot of simple and effective things available. I'd pick the low hanging fruits, too. That's a sound choice.
you do still occasionally see the knock-on effects downstream
Sure. I'm not an expert on this. I have to look up most of the things you said. But US foreign policy sure had its positive and negative consequences. For a lot of countries, in the Middle East and all around the world.
These systems work hand-in-glove [...]
I'm pretty sure that's not a conspiracy or intended. But yes, a lot of that is consequential. Or symbiotic.
- Comment on The TikTok Ban Paradox: How Platform Restrictions Create What They Aim to Prevent 2 months ago:
Well, I don't think total surveillance is necessarily about blackmailing or anything that direct. It's a broad way to assert and keep control. Control of everything. Force people to behave how you like, bend them to your will, subjugate them.
You don't really need to blackmail them... Spreading fear, uncertainty and doubt will get you a long way. And why even bother with facts to blackmail someone? You could just as well make something up. If you're in total control, that's enough to make someone's life miserable.
We're not there, yet.
But it's been 10 years now since PRISM and Snowden. Even back then they were able to process a good chunk of the internet. The NSA has a massive datacenter somewhere in Utah, with god knows how many exabytes of storage. It has probably not gotten any better since then. And they don't need to intercept every single packet from every device. Random sampling, and collecting and processing as much as they can, will do for a lot of use-cases. And every bit of knowledge, every fact they know (and process) makes them smarter, gets them ahead of the situation and in control. Naturally, that becomes an insatiable thirst for information. Of course they always want more. More processing power, etc.
I think at this point it's more of an ominous danger looming over us. Maybe they just don't like to reveal that they've read and stored every single one of my e-mails. Maybe it's better for them to just keep silent.
I'm positive they can't collect everything. But it'll still be large-scale overcollection. Because no one has stopped them since 2013. And I'm not a conspiracy theorist. I'm pretty sure encryption works. And there are means of private communication. But it's really hard to avoid metadata, and to avoid using modern electronics. If you're carrying a mobile phone, they'll know your location 24/7. And that's enough to invade privacy. And I -personally- know like 2 people who don't do that.
I also don't think this is the end of porn. And I'd say it's questionable if "they" are even opposed to it. That's just the (too many) religious bigots. But they don't wield enough power to enforce a prohibition on internet porn.
- Comment on The TikTok Ban Paradox: How Platform Restrictions Create What They Aim to Prevent 2 months ago:
there's only so much you can do without completely rooting the device
I mean you could just install Signal instead of WhatsApp. Takes literally the same effort.
I get your point. It's valid. I still think it's not going to happen. It'll either be too complicated, or there will be other brain-rot content available someplace else. Or it's going to be the boiling-frog syndrome: one tiny freedom after another will disappear. Maybe there's still porn available on xhamster, so it's not that big a deal, and nobody will notice the subtle change. And one day it'll just be a world without porn or free speech, but with total surveillance. I really believe we can get there without any uprising if we make the steps small enough.
We'll see. As I said, I'm not overly enthusiastic. But it's speculation and I might be wrong.
- Comment on The TikTok Ban Paradox: How Platform Restrictions Create What They Aim to Prevent 2 months ago:
I mean, the VPN providers themselves (at least some) have been quite busy promoting their services. For years already. NordVPN ads run on TV, and at times every other YouTube video was "sponsored by NordVPN". So people should know they exist.
But... it would be a massive surprise to me if the average user changed their habits. I've been telling people constantly not to give their flashlight app access to contacts, location and full administrator privileges. Told them to pay attention to their freedom, and maybe not sell their soul to big tech when there are nice and lovely mail providers that have a foot massage ready for you for $1 a month. People don't care and they won't listen. They'll ask me: "Yeah, but why don't you just use WhatsApp and GMail?"
I see no way this is going to change, that they'll now suddenly start fighting for their freedom. I think that's a niche for a few Linux nerds. I'd be very happy, though, if I turned out to be wrong and they did...
- Comment on The TikTok Ban Paradox: How Platform Restrictions Create What They Aim to Prevent 2 months ago:
Really? They claim various things with a lot of certainty, but all they do is speculate... If it's that obvious, why isn't there any precedent or fact to back it up? I mean, comparing it to the situation in a different country, about a different topic, is quite a stretch. And to my knowledge, it's not even in place yet. So there shouldn't be any results to compare?!
- Comment on AI-Generated Fake War Images Passed Off as Real 2 months ago:
Yeah, I tried to get that across with my phrasing... I'm not saying we need to change the technology. I mean, it's out there and it's too late anyways. Plus it's a tool, and tools can be used for various purposes, and that's not the tool's fault. I'm also not arguing to change how kitchen knives, axes, etc. work, despite them having the potential to do harm...
But: it doesn't need to be 100% waterproof, or else we can't do anything. I'm also not keeping my knife collection on the living room table when a toddler is around. But at the same time, I don't need to lock it in a vault... I think we can go 90% of the way, help 90% of people, and that's better than doing nothing because we strive for total perfection... I keep the bleach and knives somewhere kids can't reach. And we could say the AI services need to filter images of children. (I think they already do.) And put invisible watermarks in place for all AI-generated content. If anyone decides to circumvent that, that's on them. But at least we've solved the majority of misuse.
And I mean that's already how we do things. For example a spam filter isn't 100% accurate. And we use them nonetheless.
(And I'm just arguing about service providers. That's what the majority of people use. And I think those should be forced to do it. But the models themselves should be free. Otherwise, we put a very disruptive technology solely in the hands of a few big companies... And if AI is going to change the world as much as people claim, that's bound to lead us into some sci-fi dystopia where the world revolves around the interests of some big corporations... And we don't want that. So we need AI tech to be shaped by more than just Meta and OpenAI.)
- Comment on AI-Generated Fake War Images Passed Off as Real 2 months ago:
Yes, that'd be my approach, too. They need to be forced to put in digital watermarks, so everyone can check whether an article is from ChatGPT or an image is fake. We could easily do this with regulation and hefty fines. More or less robust watermarks are available, and anything would be better than nothing. OpenAI even developed a text watermarking solution. They just don't activate it. (https://www.theverge.com/2024/8/4/24213268/openai-chatgpt-text-watermark-cheat-detection-tool)
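As a toy illustration of the idea (this is NOT how OpenAI's scheme works, which statistically biases token choices; this hypothetical version just hides a tag in zero-width Unicode characters), a marker can ride along invisibly in text, and it also shows why "more or less robust" is the right phrasing: a determined person can strip it trivially.

```python
# Toy "invisible watermark": hide a tag in zero-width Unicode characters
# appended to the text. Invisible when rendered, readable by a checker,
# but easy to strip deliberately (hence "more or less robust").
ZW0 = "\u200b"  # zero-width space      -> bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> bit 1

def embed(text: str, tag: str) -> str:
    """Append the tag to the text as invisible zero-width bits."""
    bit_string = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bit_string)

def extract(text: str) -> str:
    """Read back any embedded tag; returns an empty string if none."""
    bit_string = "".join(
        "1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1)
    )
    usable = len(bit_string) - len(bit_string) % 8
    data = bytes(int(bit_string[i:i + 8], 2) for i in range(0, usable, 8))
    return data.decode("utf-8", errors="ignore")

marked = embed("This article was written by a model.", "AI:gpt")
print(extract(marked))  # AI:gpt
```

Production schemes have to survive paraphrasing and re-encoding, which is exactly the hard part the toy version skips.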
Another pet peeve of mine are these "nude" apps that swap faces or generate nude pictures from someone's photos. There are services out there that happily generate nudes from children's pictures. I filed a report with a European CSAM program after that outcry in Spain, where some school kid generated unethical images of their classmates. (Just in case the police don't read the news...) And half a year later, that app was still online. I suppose it still is... I really don't know why we allow things like that.
- Comment on AI-Generated Fake War Images Passed Off as Real 2 months ago:
I think there is a fundamental issue with stopping technology. A lot of it is dual-use. You can stab someone with a kitchen knife. Kill someone with an axe. There are legitimate uses for guns... You can use the internet to do evil things. Yet no one wants to cut their steak with a spoon... I think the same applies to AI. It's massively useful to have machine translation at hand, voice recognition, smartphone cameras, and even smart assistants and chatbots. And I certainly hope they'll help with some of the big issues of the 21st century. I don't think you want to outlaw things like that, unless you're the Amish.
- Comment on The future of customer service is here, and it's making customers miserable 2 months ago:
Welcome to the future...
https://www.wikihow.com/Talk-to-a-Human-when-Calling-a-Business
I'd try pressing numbers on the phone and mumbling and singing unintelligible things into the microphone. That's the old way of getting connected to a human.
I've also tried telling the chatbot I'm God, and I command it to do as I say. Or 20 cute kittens will die if it doesn't comply. Sadly (and I still don't know why) that did not work.
- Comment on Bluesky is breaking the rules in the EU 2 months ago:
Uh, that's a long and convoluted text. Thanks. But the parts you quoted just say they have to disclose information on request. Not have a dedicated website for that.
- Comment on Bluesky is breaking the rules in the EU 2 months ago:
All platforms in the EU . . . have to have a dedicated page on their website where it says how many users they have in the EU [...]
Where does it say that?