Comment on Peter Molyneux thinks generative AI is the future of games, all but guaranteeing that it won't be
NuXCOM_90Percent@lemmy.zip 2 months ago

Yes and no.
The thing to understand about “AI” is that basically all of it is old tech with a few advances and much better branding.
“Generative AI” to make worlds is very much the future of games… it is also the past. When Bethesda could do no wrong and Oblivion was the new hotness, there was a big deal about the forest (and I think even town?) generation tech and how it let them make a much denser world than Morrowind ever was. And… it did. It just also felt samey (which is actually realistic, as anyone who spends time walking through forests and/or suburbia can attest, but…).
Which led to a strong pushback against admitting these tools were used. I want to say the UE4/5 demos of this kind of tech usually include an “and then you modify it” step after generating a forest or whatever. And MS Flight Sim 2020/2024 is heavily dependent on this kind of tech.
But as things get more advanced? It suddenly gets a lot easier to make a good open world (of which, for all its flaws, Ubi’s Ghost Recon Breakpoint is a great example) where you have giant forests with natural-ish paths that funnel you to POIs, driven by a text prompt or a configuration file.
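To make that concrete, here is a purely illustrative toy sketch of config-driven world generation: a handful of made-up config keys seed an RNG, fill the map with forest, scatter POIs, and carve rough paths between them. None of this is pulled from any actual engine; it is just the shape of the idea.

```python
# Toy sketch (not any engine's real pipeline): a config drives procedural
# placement of POIs in a forested map, with simple paths funneling the
# player between them. All names and parameters are invented.
import random

CONFIG = {
    "seed": 42,
    "map_size": 48,          # 48x48 tile world
    "tree_density": 0.6,     # fraction of tiles that get a tree
    "poi_count": 5,          # points of interest to scatter
}

def generate_world(cfg):
    rng = random.Random(cfg["seed"])
    size = cfg["map_size"]
    # Dense forest by default...
    world = [["T" if rng.random() < cfg["tree_density"] else "."
              for _ in range(size)] for _ in range(size)]
    # ...then scatter POIs and clear rough paths between consecutive ones,
    # which is what "funnels" the player from place to place.
    pois = [(rng.randrange(size), rng.randrange(size))
            for _ in range(cfg["poi_count"])]
    for (x1, y1), (x2, y2) in zip(pois, pois[1:]):
        x, y = x1, y1
        while (x, y) != (x2, y2):
            world[y][x] = " "                # clear the tile for the path
            x += (x2 > x) - (x2 < x)         # step one tile toward target
            y += (y2 > y) - (y2 < y)
    for x, y in pois:
        world[y][x] = "P"
    return world

if __name__ == "__main__":
    for row in generate_world(CONFIG):
        print("".join(row))
```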
The other aspect which, funny enough, also goes back to Oblivion is the idea of procedurally generated quests/stories and narratives. A big part of Oblivion was its Radiant AI: every NPC needed to eat every N hours, which is the big reason everyone would kill themselves by eating a mysterious apple you reverse-pickpocketed onto them. Its successor in Skyrim added Radiant quests, where a random NPC would ask you to go to a random dungeon and fetch a random item.
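For a sense of how little machinery that takes, here is a toy sketch of a radiant-style fetch quest generator. The NPCs, dungeons, and items are invented for illustration (this is not Bethesda’s actual system); the point is that the “content” is just three random draws glued into a sentence, which is exactly why it gets samey so fast.

```python
# Toy radiant-style fetch quest generator: pick a random giver, a random
# location, and a random objective, then stitch them into a quest.
# All data below is made up for illustration.
import random

NPCS = ["blacksmith", "innkeeper", "court wizard", "farmer"]
DUNGEONS = ["the Flooded Mine", "the Old Fort", "the Sunken Crypt", "the Bandit Camp"]
ITEMS = ["family sword", "stolen ledger", "enchanted amulet", "rare herb"]

def radiant_quest(rng: random.Random) -> dict:
    """Assemble a quest from one random NPC, one random dungeon, one random item."""
    return {
        "giver": rng.choice(NPCS),
        "location": rng.choice(DUNGEONS),
        "objective": rng.choice(ITEMS),
    }

if __name__ == "__main__":
    rng = random.Random()
    for _ in range(3):
        q = radiant_quest(rng)
        print(f"The {q['giver']} asks you to retrieve the {q['objective']} "
              f"from {q['location']}.")
```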
And… the fact that people had so much trouble realizing how pointless those radiant quests were says a lot about how many basement rats and yak asses we kill in the average RPG. Which is why there are increasingly guides for the Dragon Ages of the world that list which quests are “worth it” based on narrative and the like. Which gets back to the idea of generating en masse and then fine-tuning.
The real sticking point, like all things AI once you get past the knee-jerk bullshit and marketing, is assets and proper credit. Making a voice, texture, or mesh model based on previous work is trivial and has been a thing for most of the past decade. The big issue is that getting that training data is complicated, and there are very important discussions to be had about what it means to compensate creators for using their art/being as training data. And companies are glad to skip all that and just get it “for free”.