OpenStars
@OpenStars@piefed.social
Compassion >~ Thought
- Comment on Elon Musk and Sam Altman clashed on X after Musk shared a post about a man who committed a murder-suicide following delusional conversations with ChatGPT 2 days ago:
Well, it’s always worked before…
- Comment on NVIDIA CEO says relentless negativity around AI is hurting society and has "done a lot of damage" 1 week ago:
There are so many interconnected issues there:
- I thought “vibe-coding” inherently implies checking the output, but just as “patriots” or “believers” often do not actually believe in the principles that they espouse, perhaps “ai slop” would more rightly apply to much of the output, aka theory vs. actual practice
- similarly for videos, “ai slop” by its technical definition implies only minimal checking of the output; however any output - whether checked or not - from an unethically trained LLM, perhaps run in a datacenter that privatizes profits at the expense of public resources (e.g. water), can be considered theft
- so then is responsibly-trained output of AI, like using DeepSeek on a personal machine where someone pays for their own electricity, okay? What if an artist trained an LLM on their own OC - if such a person were to not modify their output (or do so only minimally, e.g. slapping on a label for attribution) before sharing, would that be considered okay? It would still meet the technical definition of “ai slop”, though?
- conversely, what about stealing memes on the internet and sharing those without attribution as to the source - why is that so very often considered okay and even somehow “good”? (let’s say for the sake of argument that we exclude those images that have been cropped specifically to remove the author attribution) Should we start calling those “human slop”, or “meme slop”?
- piracy likewise steals content and shares it - a huge difference there is attribution, but there are certain similarities to how common “ai” models also did not consider concerns about violations of copyright and IP. One is lifted up on the Threadiverse as being ethically good while the other is condemned as being bad. I know it is more complex than this… or at least surely it must be, but I definitely struggle with categorizing all of this in my own mind (perhaps the difference lies in the intent? one makes the common man happier. or perhaps the difference lies with the output, where one of the two harms us all? but doesn’t the other as well, if less content gets made from those sources that will not see their hoped-for ROI as a result?). Wow, I really did not expect to open up this rabbit-hole… I guess just ignore this one for now. :-P
- and then there’s the issue of whether content is properly labeled or not - I have far fewer problems (not none, but fewer) with something labeled “made with ChatGPT5[, trained on <source>]” than with something that has no label on it whatsoever.
- and finally there’s programming vs. video, yeah
I suppose I have mostly heard the phrase “vibe-coding” from its pro-ai proponents, while the anti-slop contingent has not really used a coherent phrase (at least that I have typically seen). I suspect that is because for coding, people have the expectation that you are supposed to be checking it, so the concern there is mostly about the low quality due to lack of rigorous post-production checking, rather than the theft of input sources - although I also suspect that most people have not really thought the issue through very in-depth. I know I have not.
Calling poor-quality vibe-coding “ai slop” could be a great way to shame it! :-P
- Comment on NVIDIA CEO says relentless negativity around AI is hurting society and has "done a lot of damage" 1 week ago:
That seems a solid listing. I would add one more: companies that did actually fire their workforces and attempt to replace them with “ai”, and now regret it - likely to the point of that decision having destroyed their entire company.
Although my earlier comment was purely about the slop present on YouTube - where slop or no slop, already the monetization aspects have been so destructive to the utility of those videos.
This now makes me curious: does the term “slop” apply beyond text, images, and videos? I thought “ai” coding was called “vibe-coding” rather than slop?
- Comment on NVIDIA CEO says relentless negativity around AI is hurting society and has "done a lot of damage" 1 week ago:
Only for them. Now get back to work you lazy slob!
- Comment on NVIDIA CEO says relentless negativity around AI is hurting society and has "done a lot of damage" 1 week ago:
Maybe if “ai” would make something better than mere slop, we might actually like it? (e.g. if instead of just stealing it might do something responsibly, and also well)
Somehow all this reminds me of spez… we are just landed gentry, don’t you know 😜
- Comment on Bluesky suspending antifascist researchers for sharing publicly available information about literal nazis. 3 weeks ago:
Then does “hesitations” mean censures received, or does that site just make no sense whatsoever, not showing censures received on the page where they would make the most sense?
- Comment on OpenAI's ChatGPT ads will allegedly prioritize sponsored content in answers 3 weeks ago:
It happened so fast though! (Almost like their only concern was ever profits)
- Comment on Bluesky suspending antifascist researchers for sharing publicly available information about literal nazis. 4 weeks ago:
There is such a thing for Lemmy, and Lemmy.ml has a “good reputation” listed on it.
See it here: https://gui.fediseer.com/instances/detail/lemmy.ml, noting the 15 “endorsements” (think upvotes) and only 2 “censures” (think downvotes), with another 2 “hesitations”. Fwiw, PieFed.social has 6 endorsements (all by Lemmy instances iirc) and 0 censures and 0 hesitations. lemmy.dbzer0.com has 49 endorsements, 136 censures, and 2 more hesitations.
So people definitely do issue censures and hesitations for some instances… just not lemmy.ml. Possibly the system admins are too afraid to call out the very developers of the code that they are running on their machines? (I don’t have to remind you of all people that system admins in most countries cannot be anonymous - unlike the rest of us, most people in that situation have to register with their country so as to be responsible for the content shown, e.g. CSAM.) Mostly, around lemmy.ml there is simply… silence, from the vast majority of the Threadiverse.
Which matches every other policy around Lemmy.ml across the Threadiverse: chiefly silence (at the “official” levels, e.g. sidebar text on an instance or in official documentation), leaving new people to constantly have to discover what is going on regarding it, mostly on their own.
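Just for illustration, the endorsement/censure/hesitation counts quoted above could be folded into a single toy score. To be clear, this `reputation` function and its weights are purely my own invention for the sake of the upvote/downvote analogy - Fediseer itself does not compute any such number:

```python
# Toy reputation score from Fediseer-style counts. The weighting here is
# illustrative only: endorsements as upvotes, censures as downvotes,
# and hesitations as half-weight downvotes.
def reputation(endorsements: int, censures: int, hesitations: int) -> float:
    return endorsements - censures - 0.5 * hesitations

# Counts quoted in the comment above (as of its writing):
print(reputation(15, 2, 2))    # lemmy.ml
print(reputation(6, 0, 0))     # piefed.social
print(reputation(49, 136, 2))  # lemmy.dbzer0.com
```

Even with this crude scoring, lemmy.dbzer0.com lands deeply negative while lemmy.ml stays positive - which is exactly the silence-around-lemmy.ml pattern being described.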
- Comment on Bluesky suspending antifascist researchers for sharing publicly available information about literal nazis. 4 weeks ago:
If you want an abortion, but your neighbor is willing to fully, literally, and actually kill you for attempting to get one, then how do you get along? Indeed…
The above example is auth-right, while tankies are auth-left. The common denominator is the auth part. You either give in and do whatever the other side wants, or… you do not do that.
Platforming the auth-left seems similar to trying to get people to join Reddit. Either way you are helping someone else feed forward their agenda, which will ultimately arrive at a bad ending.
I do note that PieFed is building an entirely new future, neither platforming tankies nor seeking profits to the exclusion of all else. I am putting my hopes into it.
- Comment on Bluesky suspending antifascist researchers for sharing publicly available information about literal nazis. 4 weeks ago:
Sorry for being confusing, but you are correct: tankies are not literal Nazis. I was referring to the Paradox of Tolerance, whereby - paradoxically, as some may consider it - if we attempt to tolerate everyone, then in reality we become less free than if we would exclude those who would act to take freedom away from others. This principle is often illustrated by the “Nazi bar” effect: you yourself may not be an actual “Nazi”, or in our case a tankie, and yet by virtue of association we are seen as such by Redditors who might otherwise flock here and contribute much more content than we currently have.
- Comment on Bluesky suspending antifascist researchers for sharing publicly available information about literal nazis. 4 weeks ago:
Almost no instances defederate from Lemmy.ml. And I had accounts on multiple instances that federated with both lemmygrad.ml and Hexbear.net. None of that was explained anywhere; we early adopters just had to figure it out.
And who tells new people to avoid Lemmy.ml in the first place? That join Lemmy website that “randomly” picks an instance for you has even selected it for me, as well as hexbear.net.
Face it: we are a Nazi bar. Yes it’s possible to walk through the crowd of Nazis at the front door to our corner of the room where it’s cool, but I understand if my Jewish friends will refuse to accept my invitations, seeing who they will encounter on the way over.
- Comment on Bluesky suspending antifascist researchers for sharing publicly available information about literal nazis. 4 weeks ago:
Tbh I think of Lemmy in the same terms. Like, people contemplating coming here from e.g. Reddit could block all the anti-Western propaganda (e.g. calling for actual murder against us), and find some pools of content that are halfway worthwhile… but like, why would they bother? For the ideological purity of not contributing to enshittification? Anyone who thinks that way is already here though.
Whether facing “leftist” tankies on Lemmy or “conservative” right-wingers on Nostr, mainstream non-technical normie users are going to just nope right out of either.
- Comment on Reddit’s CEO says r/popular ‘sucks,’ and it’s going away 1 month ago:
the most human place on the internet.
Yes… “human”, that’s right these are the most human humans that ever humaned their way to humanness, r-r-right!?
(Except for the bots ofc)
- Comment on AI finds errors in 90% of Wikipedia's best articles 1 month ago:
Using ChatGPT to “fix” Wikipedia, what could possibly go wrong? (/s as the approach seems valid, this is just a funny statement)
- Comment on New data shows companies are rehiring former employees as AI falls short of expectations 2 months ago:
… at the same or higher salary, r-r-right?
- Comment on Bluesky experiments with dislikes and 'social proximity' to improve conversations 2 months ago:
“disliking” a post isn’t going to do anything
Not true - it seems designed to increase advertising revenue for the CEO :-P. That’s… “something”, technically? 🤪
- Comment on Bluesky experiments with dislikes and 'social proximity' to improve conversations 2 months ago:
They are focusing on enshittification
It is what they want - for them it is a “feature” to exist safely inside of their echo chambers. MANY Lemmy instances - hexbear.net and Lemmy.ml to name just a couple - are the same, banning people who even remotely disagree with them.
Profit-seeking is not the only cause of enshittification.
- Comment on Bluesky experiments with dislikes and 'social proximity' to improve conversations 2 months ago:
I like the way that PieFed implements this.
“Highly contentious users” - i.e. those who are consistently heavily downvoted by “trusted instances” - get labeled. I don’t know the actual thresholds, but imagine someone who receives 10x more downvotes than upvotes. (And e.g. hexbear.net can be federated with but not “trusted”, so that downvote brigading can be eliminated - unless ofc they use their non-HB alts, but while nothing is perfect, every ounce of protection does help:-) There is currently no way that I am aware of to actually remove such users’ content, but it still helps to see that automatically-applied label as you scroll down, so that you can skip past it or at least realize that a reply is going to fall on deaf ears. People’s reputations precede us irl, so why not online as well, where it is so much easier to measure?
Individual content - posts and comments - that are highly contentious, according to user-defined thresholds, can be either automatically collapsed or even hidden. I personally disable both of these, but if someone wants to not see highly contentious content then this makes it happen for them. Similarly there are keyword filters - again nothing will ever be perfect but if you want to see less of e.g. Musk or Trump, then this is a method to help reduce the incoming flood of content related to such.
Communities have access to “community-specific” voting patterns. I know less about this aspect, but generally the entire community or perhaps an individual post can be limited to community-specific rules, like a member can vote but a non-member drive-by commenter might be disallowed under certain conditions. Not every community should be this way and I hope most won’t enable these features, but they are necessary sometimes - e.g. a community for and by women needs to exclude all the “don’t you know that I am such a nice man”-splaining that will inevitably arise.
Anyway, I love the hierarchy that distributes the work of moderation all the way from instance admins (for e.g. illegal content) through community mods (who have access to software to help them), and ultimately empowers end-users to control their own receipt of content, which they can change over time - e.g. rather than leave social media entirely, they could enable some of the contentious-user and/or keyword filter controls and thereby get themselves a break from the noise and hubbub that the entire internet tends to throw at us all the time.
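The contentious-user labeling described above could be sketched as a simple threshold check. To be clear, this is not PieFed's actual code: the function name is hypothetical and the 10:1 downvote-to-upvote ratio is only the guess I made, not a real threshold:

```python
# Toy sketch of "contentious user" labeling. The 10:1 ratio is an imagined
# threshold, NOT PieFed's real one, and this is not PieFed code.
def is_contentious(upvotes: int, downvotes: int, ratio: float = 10.0) -> bool:
    # The votes fed in here would be counted only from "trusted" instances,
    # which is how downvote brigading from untrusted ones gets excluded.
    if upvotes == 0:
        return downvotes > 0
    return downvotes / upvotes >= ratio

print(is_contentious(3, 50))   # 50/3 is about 16.7x, past the cutoff
print(is_contentious(40, 15))  # well under the cutoff, no label
```

The nice part of a check like this is that it runs automatically per user, so the label appears without any moderator having to act, while leaving the content itself in place.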
In contrast, whatever little moderation Bluesky has is obviously insufficient - the problems of outright monetization spam and highly contentious users seem to have overwhelmed whatever capacity there was to handle them.
PieFed has given me really high hopes for the entire Fediverse.
- Comment on Bluesky experiments with dislikes and 'social proximity' to improve conversations 2 months ago:
Moderation seems sorely lacking on Bluesky. And did you read the comment in the OP article? It gave off “I am such a nice man” vibes - technically not entirely wrong, either, yet failing to consider replies not offered in good faith, or the consent of the recipient to receive such shocks to their system.
- Comment on 'Godfather of AI' says tech giants can't profit from their astronomical investments unless human labor is replaced 2 months ago:
They want feudalism back. Ngl, the technology available today might make it work for them.
- Comment on When Everything Is Fake, What’s the Point of Social Media? 2 months ago:
Bots trained from bots, talking to bots, governed by THE ALGORITHM… surely this will end well.
- Comment on Kohler Wants to Put a Tiny Camera in Your Toilet and Analyze the Contents 2 months ago:
Worse, you have to provide your own shit! 💩
- Comment on Imgur's Community Is In Full Revolt Against Its Owner 4 months ago:
Tbf that's a big deal for Mastodon aka Fediverse, not so much on the Threadiverse with K/Mbin, Lemmy, or PieFed.
And the tools that exist to help are laughably bad - the last time I tried the auto-selector website, it chose hexbear.net for me, and I noticed Lemmy.ml was prominently displayed high up in their listing (surely the Windows-using centrists and conservatives on Reddit will have no problems joining that extremist leftist instance of FOSS enthusiasts... r-r-right?!).
- Comment on Chinese-developed AI system revolutionizes industrial fermentation process 5 months ago:
Do yourself a favor and skip the fluff piece and move straight to the pubmed article linked after it, unless you really enjoy reading sentences like:
It serves as an 'intelligent brain'...
- Comment on AI slop is ruining all of our favorite places to scroll 5 months ago:
It may not be much, but it's our garbage and waste-of-time content! 🤪
- Comment on Developer survey shows trust in AI coding tools is falling as usage rises 5 months ago:
Counterpoint: they want number go up.
Pro Tip: it doesn't even matter if number go up, when they know how to suck up to even higher-ups.
- Comment on Grok 4 seems to consult Elon Musk to answer controversial questions 6 months ago:
So... a direct message line to Musk you say?
- Comment on Reddit CEO Steve Huffman says Reddit will work with “various third-party services” to verify a user's humanity, after an unauthorized AI persuasion experiment 8 months ago:
We should all leave Reddit and move to the Threadiverse! Oh wait.... 😁
- Comment on A 25-Year-Old Is Writing Backdoors Into The Treasury’s $6 Trillion Payment System. What Could Possibly Go Wrong? 11 months ago:
No no no no no, you are supposed to reassure me with nice-sounding "factual" statements!? Everything will be okay bc... Cap't America, or sumtin.
What I know is that if people have principles but not convictions, then they have neither.
And unfortunately, greed is a principle.:-(
- Comment on A 25-Year-Old Is Writing Backdoors Into The Treasury’s $6 Trillion Payment System. What Could Possibly Go Wrong? 11 months ago:
What will really blow all of our minds is that, once we get this tiny little matter of the fate of the USA under control (I'm mostly joking here bc I think there's a strong, >50% chance of that never happening), there is still the fact that climate change has radically altered our world forever.
And the internet too.
And globalization as well.
Oh and automation likewise.
Meanwhile, to deal with all of THAT, we have... "Congress".
No matter what, things will never be the same again, nor would those of us who think about it even want them to be. You can't un-pop a bubble, and why would we want to make a new one? (bc that worked out so well the last time)
Damnit, I'm not trying to be fatalistic here.