
Gamers react with overwhelming disgust to DLSS 5's generative AI glow-ups

171 likes

Submitted 21 hours ago by monica_b1998@lemmy.world to gaming@lemmy.zip

https://arstechnica.com/gaming/2026/03/gamers-react-with-overwhelming-disgust-to-dlss-5s-generative-ai-glow-ups/


Comments

  • warmaster@lemmy.world 3 hours ago

    [image]

  • BallShapedMan@lemmy.world 20 hours ago

    Good! Fuck this noise.

  • lvxferre@mander.xyz 15 hours ago

    Wow. The example picture alone already shows what’s wrong.

    DLSS off: the background is rainy, the “cigare[ttes]” thing and the delicatessens sign are weathered, there’s some blue plastic in the background, she’s wearing brown, her eyes and lips lack any shine. This scene is clearly representing a tired, weary, “soulless” reality; one you survive but not live, that makes you whisper to yourself “…I’m so bloody tired”…

    DLSS on: throws the mood out of the window by adding OH-SO-SHINY!!! everywhere.

    This is not a breakthrough. This is not fidelity. It’s butchering artistic intent.

  • Whostosay@sh.itjust.works 15 hours ago

    Nvidia is not a gaming company; they are a money company. Abandon them at all costs. It very well may cost everything.

  • iconic_admin@lemmy.world 16 hours ago

    Radeon, if you want to have a moment…

  • devtoolkit_api@discuss.tchncs.de 18 hours ago

    This was inevitable. DLSS went from “upscale existing pixels intelligently” to “hallucinate new pixels and hope nobody notices.” Of course people noticed.

    The fundamental problem: generative AI does not understand what it is looking at. It sees patterns and fills them in. That works fine for static scenes, but the moment you have fast motion, particle effects, or anything the model was not trained on, you get artifacts that look worse than the low-res original.

    Meanwhile, FSR keeps improving with a fraction of the resources and without proprietary hardware lock-in. FSR 4 on RDNA 4 is genuinely competitive now, and the earlier versions run on any GPU.

    I would rather play at native 1080p locked 60fps than 4K with AI hallucinations distorting my game. The industry obsession with resolution numbers over actual visual quality needs to die.

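To make the distinction in the comment above concrete, here is a minimal NumPy sketch (nothing to do with NVIDIA's actual pipeline; `model` is a hypothetical stand-in for any learned generator). A classic upscaler can only blend pixels that already exist in the frame, while a generative model is free to emit values with no counterpart in the input, which is where "hallucinated" detail comes from.

```python
import numpy as np

def bilinear_upscale(frame: np.ndarray, scale: int) -> np.ndarray:
    """Classic upscaling: every output pixel is a weighted blend of pixels
    that already exist in the frame, so no new content can appear."""
    h, w = frame.shape
    ys = np.linspace(0, h - 1, h * scale)
    xs = np.linspace(0, w - 1, w * scale)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    top = frame[y0][:, x0] * (1 - wx) + frame[y0][:, x1] * wx
    bot = frame[y1][:, x0] * (1 - wx) + frame[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def generative_upscale(frame: np.ndarray, scale: int, model) -> np.ndarray:
    """Learned approach: the output is whatever the model predicts. Nothing
    ties it to the input beyond the training data, which is why unfamiliar
    content (fast motion, particles) can come out as artifacts."""
    return model(frame, scale)  # hypothetical model; its output is unconstrained
```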
  • DragonTypeWyvern@midwest.social 19 hours ago

    AI already hit the indie game market like a ton of shit bricks.

  • saigot@lemmy.ca 19 hours ago

    They were just starting to win people over with simple AI upscaling, and then they pull bullshit like this.

  • demlet@lemmy.world 16 hours ago

    It’s lame, but I can’t help wondering if in this particular case gamers are just peeved that the character looks older.

  • cogitase@lemmy.dbzer0.com 20 hours ago

    Is there more detail on the process beyond what’s in the blog post? I could see a scenario in which the training data was just generated by running multiple playthroughs on a $500,000 GPU at impossible quality, creating a copy of what that would look like on a mid-range GPU, and then training a model. I’m not sure I would object to that.

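For what it's worth, the pipeline cogitase speculates about would look roughly like the sketch below. This is speculation dressed up as NumPy, not a documented NVIDIA workflow: `degrade()` is a crude stand-in for "what a mid-range GPU would actually render", and the frames, model, and loss are all invented for illustration.

```python
import numpy as np

def degrade(reference: np.ndarray, factor: int = 4) -> np.ndarray:
    """Stand-in for a mid-range GPU's output: a crude box downsample of the
    ultra-quality reference frame."""
    h, w = reference.shape
    h, w = h - h % factor, w - w % factor
    return reference[:h, :w].reshape(h // factor, factor,
                                     w // factor, factor).mean(axis=(1, 3))

def build_training_pairs(reference_frames):
    """Yield (low_quality, reference) pairs for supervised training."""
    for ref in reference_frames:
        yield degrade(ref), ref

# A trainer would then minimise some distance between model(low) and ref:
#   for low, ref in build_training_pairs(frames):
#       loss = ((model(low) - ref) ** 2).mean()
#       ... optimiser step ...
```

The appeal of that setup is that the supervision would come from the game itself rather than from unrelated generative training data, which is presumably why cogitase says they might not object to it.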