Comment on NVIDIA CEO says relentless negativity around AI is hurting society and has "done a lot of damage"

OpenStars@piefed.social 12 hours ago

There are so many interconnected issues there:

  1. I thought “vibe-coding” inherently implies checking the output, but just as “patriots” or “believers” often do not actually believe in the principles that they espouse, perhaps “ai slop” would more rightly apply to much of the output - i.e. theory vs. actual practice
  2. similarly for videos, “ai slop” by its technical definition implies only minimal checking of the output; however, any output - whether checked or not - from an unethically trained LLM, perhaps running in a datacenter that privatizes profits while pushing costs onto the public (e.g. water use), can be considered theft
    1. so then is responsibly-trained output of AI okay, like using DeepSeek on a personal machine where someone pays for their own electricity? And what if an artist trained an LLM on their own OC and then shared its output unmodified (or only minimally modified, e.g. slapping on a label for attribution) - would that be considered okay? It still meets the technical definition of “ai slop”, though
    2. conversely, what about stealing memes on the internet and sharing those without attribution as to the source - why is that so very often considered okay and even somehow “good”? (let’s say for the sake of argument that we exclude those images that have been cropped specifically to remove the author attribution) Should we start calling those “human slop”, or “meme slop”?
    3. piracy likewise steals content and shares it - a huge difference there is attribution, but there are certain similarities in how common “ai” models also did not consider concerns about violations of copyright and IP. One is lifted up on the Threadiverse as ethically good while the other is condemned as bad. I know it is more complex than this - or at least surely it must be - but I definitely struggle with categorizing all of this in my own mind. Perhaps the difference lies in the intent, where one makes the common man happier? Or perhaps it lies in the output, where one of the two harms us all - but doesn’t the other as well, if less content gets made from sources that will not see their hoped-for ROI as a result? Wow, I really did not expect to open up this rabbit-hole… I guess just ignore this one for now. :-P
  3. and then there’s the issue of whether content is properly labeled or not - I have far fewer problems (not none, but fewer) with something labeled “made with ChatGPT5[, trained on <source>]” than with something that has no label on it whatsoever.
  4. and finally there’s programming vs. video, yeah

I suppose I have mostly heard the phrase “vibe-coding” from its pro-ai proponents, while the anti-slop contingent has not really settled on a coherent phrase (at least not that I have typically seen). I suspect that is because for coding, people expect that you are supposed to be checking the output, so the concern there is mostly about low quality due to a lack of rigorous post-production checking, rather than the theft of the input sources - although I also suspect that most people have not really thought the issue through very in-depth. I know I have not.

Calling poor-quality vibe-coding “ai slop” could be a great way to shame it! :-P
