I've dreamed of a NeRF-powered backrooms walking simulator for quite a while now. This approach is "worse" in the sense that the mesh is explicit rather than the world simply becoming whatever you look at, but that's arguably better for real-world use cases, of course.
It's about generating interesting virtual space!
> The code is being prepared for public release; pretrained weights and full training/inference pipelines are planned.
Any ideas of how it would be different from and better than "traditional" PCG? It seems like it'd give you higher resource consumption, worse results, and less control, none of which seem like a benefit.