Cooper8
@Cooper8@feddit.online
- Comment on Mozilla's Latest Quagmire 17 hours ago:
I wish projects like this would offer simple “security profile” settings that would let you batch-change the relevant settings between the most common suggested configurations for different use cases.
Just “General use” and “Privacy” profiles would go a long way.
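The profile idea above could be as simple as named presets of pref overrides. A minimal sketch, assuming a user.js-style pref file; the profile contents here are illustrative picks, not an official Mozilla preset:

```python
# Hypothetical "security profile" presets rendered as about:config-style
# overrides for a user.js file. The pref selections per profile are
# illustrative assumptions, not a vetted hardening list.
PROFILES = {
    "general": {
        "privacy.resistFingerprinting": False,
        "toolkit.telemetry.enabled": True,
    },
    "privacy": {
        "privacy.resistFingerprinting": True,
        "toolkit.telemetry.enabled": False,
    },
}

def render_user_js(profile: str) -> str:
    """Render the chosen preset as user.js lines, one pref per line."""
    prefs = PROFILES[profile]
    return "\n".join(
        f'user_pref("{key}", {str(value).lower()});'
        for key, value in sorted(prefs.items())
    )
```

Switching profiles would then just mean regenerating the file and restarting the browser.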
- Comment on Mozilla's Latest Quagmire 17 hours ago:
What would it take for a fork of Firefox to become the main branch, one must wonder? I know I switched to LibreWolf and IronFox when this all started, not Firefox. Now I’m hearing Waterfox works on the platforms I use (is it as good?)
None of these projects are doing core feature development on the browser engine, though, as far as I can tell. I guess what it would take is a heap of cash for them to really compete.
I see Ladybird and the grumbles about its sponsors, but at least they are actually building an engine from the core rather than modding an existing one.
- Comment on World Socialist Web Site to launch Socialism AI 3 days ago:
Eh, recent studies show some implicit bias in some of the best Chinese models, mostly in software dev output, so it’s less relevant here. Not the hugest deal for this kind of project either way, but it would be good to know whether they are using one of those vs., say, Llama, or just putting a fine-tuned front end on a closed model.
- Comment on Matrix Retiring the Slack Bridge by January 3 days ago:
In my dumb naive optimistic brain bridges are the perfect kind of software to be automatically developed, revised, tested, and updated by AI agents.
Two sets of rules with written documentation (APIs): translate a request from one into the other, validate the message sent against the message received, and if they don’t match, revise and test again.
Realistically, I know this would likely hit a brick wall quickly, but it sure seems like if AI driven software development is getting actually practical this type of thing should be on the easier end of the spectrum of possibilities.
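The translate-validate-revise loop above can be sketched in a few lines. Everything here is invented for illustration: both message shapes, the field names, and the stand-in `translate` function that an AI agent would actually be generating and revising:

```python
# Toy sketch of a one-direction chat bridge with a validate-and-revise
# loop. The Slack-ish and Matrix-ish payload shapes are assumptions,
# not the real APIs; translate() stands in for AI-generated code.
def translate(slack_msg: dict) -> dict:
    """Map a Slack-style payload to a Matrix-style event (assumed shapes)."""
    return {
        "type": "m.room.message",
        "body": slack_msg["text"],
        "sender": slack_msg["user"],
    }

def validate(sent: dict, received: dict) -> bool:
    """Check that the content survived the translation."""
    return sent["text"] == received["body"] and sent["user"] == received["sender"]

def bridge(slack_msg: dict, max_revisions: int = 3) -> dict:
    """Translate and validate; a real agent would revise translate() on failure."""
    for _ in range(max_revisions):
        event = translate(slack_msg)
        if validate(slack_msg, event):
            return event
        # revision step would regenerate translate() here
    raise RuntimeError("translation failed validation")
```

The hard part an agent would actually face is everything this sketch hides: auth, rate limits, threading models, and edits/deletes that don’t map one-to-one.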
- Comment on World Socialist Web Site to launch Socialism AI 3 days ago:
I notice it doesn’t say what model Socialism AI is built on top of. I’m guessing one of the Chinese models, fine-tuned?
- Comment on When DeepSeek-R1 receives prompts containing topics the CCP considers politically sensitive, the likelihood of it producing code with severe security vulnerabilities increases by up to 50%. 4 days ago:
Thanks
- Comment on When DeepSeek-R1 receives prompts containing topics the CCP considers politically sensitive, the likelihood of it producing code with severe security vulnerabilities increases by up to 50%. 4 days ago:
You can run it yourself on a closed network if you’re worried about telemetry; that’s part of the point.
- Comment on When DeepSeek-R1 receives prompts containing topics the CCP considers politically sensitive, the likelihood of it producing code with severe security vulnerabilities increases by up to 50%. 4 days ago:
is this with or without the prompt including politically sensitive topics?
- Comment on When DeepSeek-R1 receives prompts containing topics the CCP considers politically sensitive, the likelihood of it producing code with severe security vulnerabilities increases by up to 50%. 4 days ago:
Check out Apertus; the Swiss are showing how it should be done. 100% open: architecture, training data, weights, recipes, and final models are all publicly available and licensed under Apache 2.0. https://ethz.ch/en/news-and-events/eth-news/news/2025/09/press-release-apertus-a-fully-open-transparent-multilingual-language-model.html
- Comment on Why Apple Just Gave Up on AI - ColdFusion 2 weeks ago:
I remain convinced they have held back budget on AI because they are waiting for the bubble to burst so they can buy one of the bigger developers like Anthropic. Why burn a bunch of cash now just to lose the race when, at the end of the day, open-source options might come out competitive, or one of the leaders in the space can be bought out once valuations hit a reality check?
- Comment on Nested Learning: A new ML paradigm for continual learning 3 weeks ago:
Makes sense to me. I have always thought that if the goal is to emulate human-level intelligence then developers should consider the human brain, which not only has multiple centers of cognition dedicated to different functional operations, but also is massively parallel with mirroring as a fundamental part of the cognitive process. Essentially LLMs are just like the language centers being forced to do the work of the entire brain.
More functional systems will develop a top-level information and query routing system over many specialized sub-models, including ongoing learning and integration functions. The mirroring piece is key there: it lets the cognitive system keep a “stable” copy of a sub-model in place while a redundant copy is modified and tested by the learning and integration functions, then cross-check the new and old versions to decide which one gets “stable” status for the next round of integration.
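As a toy illustration of that router-plus-mirroring idea: a dispatcher routes queries to specialized sub-models, each held in a slot with a stable copy and a candidate copy that only gets promoted after a cross-check. All class names, the keyword router, and the scoring function are invented for the sketch:

```python
# Toy sketch: query routing over specialized sub-models, with each
# sub-model mirrored as stable + candidate copies. Names, routing
# rule, and scoring are illustrative assumptions only.
class SubModel:
    def __init__(self, name: str, style: str):
        self.name, self.style = name, style

    def run(self, query: str) -> str:
        return f"{self.style}: {query}"

class MirroredSlot:
    """Holds a stable sub-model plus an in-training candidate copy."""
    def __init__(self, stable: SubModel):
        self.stable, self.candidate = stable, None

    def propose(self, candidate: SubModel) -> None:
        self.candidate = candidate

    def promote_if_better(self, score) -> None:
        """Cross-check candidate vs. stable; the winner keeps 'stable' status."""
        if self.candidate and score(self.candidate) > score(self.stable):
            self.stable = self.candidate
        self.candidate = None

def route(query: str, slots: dict) -> str:
    """Trivial keyword router standing in for a learned routing model."""
    key = "math" if any(c.isdigit() for c in query) else "lang"
    return slots[key].stable.run(query)
```

The point of the mirroring is that `route` only ever touches `stable`, so learning can churn on `candidate` copies without destabilizing the live system.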
Anyway, thanks for sharing.