The LLM I kinda get; you can almost picture training a word predictor.
Working code with comments is pretty mind-blowing, even if it’s not better than what a professional would write.
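In case it helps to see it spelled out, here’s a toy sketch of what “training a word predictor” boils down to: show the model a stream of words, ask it to guess the next one, and penalize bad guesses. The tiny corpus, the LSTM, and the PyTorch choices are all just illustrative; real LLMs are transformers trained on vastly more text, but the objective is the same idea.

```python
# Toy sketch of next-word prediction, the objective behind “training a word predictor.”
# The corpus, the LSTM, and the hyperparameters are illustrative only.
import torch
import torch.nn as nn

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = {w: i for i, w in enumerate(sorted(set(corpus)))}
ids = torch.tensor([vocab[w] for w in corpus])

embed = nn.Embedding(len(vocab), 16)
lstm = nn.LSTM(16, 32, batch_first=True)
head = nn.Linear(32, len(vocab))
opt = torch.optim.Adam([*embed.parameters(), *lstm.parameters(), *head.parameters()], lr=1e-2)

inputs = ids[:-1].unsqueeze(0)   # every word except the last
targets = ids[1:].unsqueeze(0)   # the word that came next, shifted by one
for step in range(200):
    hidden, _ = lstm(embed(inputs))
    logits = head(hidden)        # a score for every word in the vocabulary
    loss = nn.functional.cross_entropy(logits.squeeze(0), targets.squeeze(0))
    opt.zero_grad(); loss.backward(); opt.step()
```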
But diffusion… that’s BLACK MOTHERFUCKING MAGIC.
Here’s a picture of me. Here’s a picture of me with a hat. The vector between the two is “hat.” Now put a hat on that squirrel… and it works…
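To be fair, that hat example is really the old latent-arithmetic trick (word2vec’s king − man + woman, or GAN latent editing) rather than how diffusion models actually condition on text, but the arithmetic itself is this simple. The encode function and file names below are made-up stand-ins, not any real library:

```python
# The “vector between the two is hat” idea, written out as latent arithmetic.
# encode() and the image file names are hypothetical stand-ins, not a real API;
# here it just returns random vectors so the snippet runs.
import numpy as np

rng = np.random.default_rng(0)

def encode(image_path: str) -> np.ndarray:
    """Hypothetical encoder mapping an image to a 512-dim latent vector."""
    return rng.standard_normal(512)

me = encode("me.jpg")
me_with_hat = encode("me_with_hat.jpg")
squirrel = encode("squirrel.jpg")

hat_direction = me_with_hat - me              # the difference vector ≈ “hat”
squirrel_with_hat = squirrel + hat_direction  # push the squirrel along that direction

# In the idealized story, decoding squirrel_with_hat renders a squirrel in a hat.
```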
I’m sure it doesn’t work as well as depicted, but speaking of black magic (and holodecks)… Genie 3.
If they could deliver on one-hundredth of that, it would be truly amazing.
They can’t be generating straight-up video; that would be way too slow.
They said that they’re not using pre-built worlds, so they’re not pre-generating an entire world that you can run around in, which would still be fucking great.
I wonder if they’re doing procedural world generation on the fly, then dynamically creating individual assets on demand.
Doing it any other way seems like it would be too slow and power-hungry.
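Just to make that speculation concrete, here’s roughly what a hybrid like that could look like: the world itself is cheap procedural math, and the expensive generative call only happens (once, then cached) when the player actually asks for a specific asset. Entirely hypothetical; nobody has said Genie 3 works anything like this.

```python
# Pure speculation: cheap procedural terrain up front, with the expensive
# generative call deferred (and cached) until a specific asset is requested.
# Nothing here reflects how Genie 3 actually works.
import math
from functools import lru_cache

def terrain_height(x: float, z: float) -> float:
    """Cheap procedural terrain: a couple of octaves of sine “noise.” """
    return math.sin(x * 0.1) * 4 + math.sin(z * 0.07 + x * 0.03) * 2

@lru_cache(maxsize=256)
def get_asset(prompt: str) -> str:
    """On-demand asset generation, cached so each thing is only dreamed up once.
    A real system would call a generative model here; this just fakes a mesh id."""
    return f"mesh::{abs(hash(prompt)) % 10_000}"

# The world only becomes concrete when the player looks at or asks for something.
print(terrain_height(12.0, 40.0))
print(get_asset("weathered stone well, mossy"))
```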
At some point we’re going to get some crazy amazing games out of this.