
I wonder whether it'll be possible to compress enough of the game to make (almost) every possible scenario you could encounter in the game playable. The same issue that the previous AI experiment for Minecraft (and others) had is that objects and enemies seem to pop in and out of nowhere. Could the "learned" probability be high enough for this to never be an issue? You know how you sometimes think you're seeing something in real life but it's just an optical illusion? It kinda feels like that to me. Obviously this still requires an entire game to be made before you can train on it, but it could maybe open up other ways of developing and testing games.
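(To illustrate the pop-in problem: these world models condition on only the last few frames, so anything that scrolls out of that window has no guaranteed persistence and has to be hallucinated back. A toy sketch, not any real system's code:)

    # Toy sketch of an autoregressive "neural game engine" loop.
    # The model here is a hypothetical stand-in, not GameNGen/Oasis
    # or any real API.
    import numpy as np

    K = 4  # context window: how many past frames the model sees

    def fake_next_frame(context_frames, action, rng):
        # Stand-in for a trained next-frame model: blends recent
        # context with noise. It can only use the last K frames;
        # the 'action' input is ignored by this dummy.
        mean = np.mean(context_frames, axis=0)
        return np.clip(mean + 0.1 * rng.standard_normal(mean.shape), 0.0, 1.0)

    rng = np.random.default_rng(0)
    frames = [rng.random((8, 8)) for _ in range(K)]  # seed frames
    for step in range(64):
        frames.append(fake_next_frame(frames[-K:], action=step % 4, rng=rng))

    # Any object not visible in the last K frames is simply gone from
    # the model's state; if it reappears, that's a fresh sample from
    # the learned distribution. That's exactly the pop-in effect.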


> Obviously this still requires an entire game to be made before you can train on it, but it could maybe open up other ways of developing and testing games.

The idea of developing a game where the "code" is totally opaque and non-deterministic honestly sounds like an absolute nightmare. How would you even begin to QA something like that?


> The idea of developing a game where the "code" is totally opaque and non-deterministic honestly sounds like an absolute nightmare. How would you even begin to QA something like that?

I have a fear that we are going to experience a significant regression in our ability to develop software as new "programmers" normalize the idea of "generating" "code" this way. Some kind of dystopian future where people think an "is-negative" module is a good idea, except now that module has been "generated" by "AI". Bone chilling.
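(For anyone who missed that era: the entire substance of the is-negative package is essentially one comparison. A Python rendition for illustration; the real thing is a standalone npm/JavaScript module:)

    def is_negative(n):
        # Roughly all the real package does: test whether a number
        # is less than zero, published as its own dependency.
        return n < 0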

Re: QA

Clearly we just need another generative "AI" to act as QA in an adversarial capacity to the "AI" generating the "code". Turtles all the way down.

"The Machine Stops".


This proposed direction is even worse than generating code, it's eliminating code altogether. The project "source" would just be a big blob of weights that you indirectly prod and poke until it hopefully does what you want, and nobody could understand exactly what's going on under the hood even if they wanted to.


Computer "programs" being big hairy balls of "intent" derived from some corpus of inputs and a prompt is horrifying.

I kind of wish hardware hadn't gotten fast enough to enable this future. Humans being lazy, as they are, and the output of this kind of horror show eventually being "good enough", this is going to get normalized.

Anybody working to enable this future is, to me, acting unethically. This is the "AI" apocalypse I'm worried about, not the AGI singularity fever dreams that garner headlines.


Worse yet: that big blob of weights only works at all because it's been trained on a huge corpus of data from existing games. Doing anything actually novel in a game - like implementing new game mechanics, or even using a distinctive art style - would be next to impossible.


Wasn’t is-negative the dystopian past of ten years ago?



