Comment on NVIDIA CEO says relentless negativity around AI is hurting society and has "done a lot of damage"
OpenStars@piefed.social 12 hours ago
There are so many interconnected issues there:
- I thought “vibe-coding” inherently implies checking the output, but just as “patriots” or “believers” often do not actually believe in the principles that they espouse, perhaps “ai slop” would more rightly apply to much of the output, aka theory vs. actual practice
- similarly for videos, “ai slop” by its technical definition implies only minimal checking of the output, however any output - whether checked or not - from an unethically trained LLM, and perhaps using a datacenter that privatizes profits at the expense of public funding (water), can be considered theft
- so then is responsibly-trained AI output, like using DeepSeek on a personal machine where someone pays for their own electricity, okay? What if an artist trained an LLM on their own OC - if such a person were to not modify the output (or do so only minimally, e.g. slapping on a label for attribution) before sharing, would that be considered okay? It does meet the technical definition of “ai slop”, though.
- conversely, what about stealing memes on the internet and sharing those without attribution as to the source - why is that so very often considered okay and even somehow “good”? (let’s say for the sake of argument that we exclude those images that have been cropped specifically to remove the author attribution) Should we start calling those “human slop”, or “meme slop”?
- piracy likewise takes content and shares it - a huge difference there is attribution, but there are certain similarities to how common "ai" models also did not consider concerns about violations of copyright and IP. One is lifted up on the Threadiverse as ethically good while the other is condemned as bad. I know it is more complex than this… or at least surely it must be, but I definitely struggle with categorizing all of this in my own mind (perhaps the difference lies in the intent? one makes the common man happier. or perhaps the difference lies in the output, where one of the two harms us all? but doesn't the other as well, if less content is made from sources that will not see their hoped-for ROI as a result?). Wow, I really did not expect to open up this rabbit-hole… I guess just ignore this one for now. :-P
- and then there’s the issue of whether content is properly labelled or not - I have far fewer problems (not none, but fewer) with something labelled “made with ChatGPT5[, trained on <source>]” than with something that has no label on it whatsoever.
- and finally there’s programming vs. video, yeah
I suppose I have mostly heard the phrase “vibe-coding” from its pro-AI proponents, while the anti-slop contingent has not really settled on a coherent phrase (as far as I have typically seen). I suspect that is because for coding, people expect that you are supposed to be checking the output, so the concern there is mostly about low quality due to a lack of rigorous post-production checking, rather than the theft of input sources - although I also suspect that most people have not really thought the issue through very in-depth. I know I have not.
Calling poor-quality vibe-coding “ai slop” could be a great way to shame it! :-P
lvxferre@mander.xyz 4 hours ago
If I got this right, what most people call “slop” is mass-produced and low quality. Following that definition you could have human-made slop, but it’s less like a low quality meme and more like corporate “art”. Some however seem to be using it exclusively for AI generated content, so for those “human-made slop” would be an oxymoron.
Human reviewing is not directly related to that - only insofar as a human would be expected to remove really junky output and let only the decent stuff in.
Vibe coding actually implies the opposite: you don’t check the output. You tell the bot what you want, it outputs some code, you test that code without checking it, then you ask the bot for further modifications.
That’ll depend on the person. In my opinion, AI usage is mostly okay if:
Key differences: a meme is typically made to be shared, without too many expectations of recognition; people sharing it will likely do it for free; and memes in general take relatively low effort to generate. The content typically fed into those models, on the other hand, is often important for the author/artist, takes a lot more effort to generate, and the people feeding those models typically expect to be paid for it.
Even then, note that a lot of people hate memes for a reason rather similar to AI output: “it takes the space of more interesting stuff”. That’s related to your point #6 - labelling makes it a non-issue for people who’d rather avoid consuming AI output as content.
It’s less about intent and more about effect. A pirated copy typically benefits the pirate by a lot, while it only harms the author by a wee bit.
Note I don’t consider piracy as “theft” or “stealing”, but something else. It’s illegal, but not always immoral.