- I feel like there's a bit of a disconnect between the cool video demos shown here and, say, the type of world models someone like Yann LeCun is talking about.
A proper world model like JEPA should be predicting in latent space, where the representation of what is going on is highly abstract.
Video generation models by definition are predicting in noise or pixel space (latent noise, if the diffuser is diffusing in a variational autoencoder's latent space); the sketch after this comment illustrates the distinction.
It seems like what this lab is doing is quite vanilla, and I'm wondering if they are doing any sort of research in less demo-sexy joint embedding predictive spaces.
There was a recent paper, LeJEPA, from LeCun and a postdoc that actually fixes many of the distribution/mode-collapse issues with the JEPA embedding models I just mentioned.
I'm waiting for the startup or research group that gives us an unsexy world model: instead of 1080p video of supermodels camping, give us a slideshow of something a six-year-old child would draw. That would be a more convincing demonstration of an effective world model.
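A minimal sketch of the contrast being drawn here, assuming a generic PyTorch setup; the encoder architecture, sizes, and EMA detail are invented for illustration and are not from any particular lab's or paper's code. It simply contrasts a pixel-space regression target with a JEPA-style prediction of the next frame's embedding under a gradient-free target encoder:

```python
# Hypothetical sketch: pixel-space prediction vs. JEPA-style latent prediction.
# All module names, sizes, and data are made up for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, dim), nn.ReLU(), nn.Linear(dim, dim),
        )
    def forward(self, x):
        return self.net(x)

enc = Encoder()                        # online/context encoder
target_enc = Encoder()                 # target encoder (frozen copy, typically EMA-updated)
target_enc.load_state_dict(enc.state_dict())
for p in target_enc.parameters():
    p.requires_grad_(False)

predictor = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))
pixel_head = nn.Linear(256, 3 * 64 * 64)   # pixel-space baseline, for contrast

frame_t  = torch.randn(8, 3, 64, 64)   # current frames (stand-in data)
frame_t1 = torch.randn(8, 3, 64, 64)   # next frames

z_t = enc(frame_t)

# Pixel/noise-space objective: regress the raw next frame.
pixel_loss = F.mse_loss(pixel_head(z_t).view(8, 3, 64, 64), frame_t1)

# JEPA-style objective: regress the next frame's *representation*.
with torch.no_grad():
    z_t1_target = target_enc(frame_t1)
latent_loss = F.mse_loss(predictor(z_t), z_t1_target)

# After each optimizer step, the target encoder would normally be updated as an
# exponential moving average of the online encoder (one anti-collapse trick;
# LeJEPA-style work proposes other regularizers, not shown here).
```

The point of predicting in embedding space is that the model never has to commit to pixel-level detail (exactly where a tail sits in the frame); the trade-off is that collapse, where everything maps to the same embedding, has to be prevented by a stop-gradient/EMA trick or a regularizer, which is the issue the LeJEPA line of work targets.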
- As a machine learning researcher, I don't get why these are called world models.
Visually, they are stunning, but they're nowhere near physically consistent. I mean, look at that video with the girl and the lion: the tail teleports between its legs and then becomes attached to the girl instead of the lion.
Just because the visuals are high quality doesn't mean it's a world model or has learned physics. I feel like we're conflating these things. I'm much happier to call something a world model if its visual quality is dogshit but it is consistent with its world. And I say its world because it doesn't need to be consistent with ours
by superb_dev
1 subcomment
- None of these example videos seem like the kind of “experiments” they’re talking about simulating with these models.
I was expecting them to test a simple hypothesis and compare the model's results to a real-world test.
- Given the near-impossibility of predicting something as "simple" as a stock market due to its recursive nature, I'm not sure I see how it would be possible to simulate an infinitely more complicated "world"
- The reason they are called "world models" is that the internal representation behind what they display is a "world" rather than a video frame or image. The model needs to "understand" geometry and physics to output a video.
Just because there are errors in this doesn't mean it isn't significant. If a machine learning model understands how physical objects interact with each other, that is very useful.
- For a minute I was like (spoiler alert) « wow, the creepy sci-fi theories from the DEVS TV show are playing out »… then I looked up the video, and it's just video generation at this point.
- Please, AI: lions have their tails attached to their backs, not their fronts. The lion's tail in the "Girl with a lion" video is misplaced.
by anigbrowl
1 subcomment
- This appears to be a simulator that produces only nice things.
by nylonstrung
1 subcomment
- I can't wait for companies like this to run out of money
- Interesting. I imagine quite a few issues stem from the inherent nature of generative AI; we even see several in these demos themselves. One particularly stood out to me: the one where the man is submerged. For a good while, bubbles come out quite consistently from his mask, and then suddenly one of the bubbles turns into a jellyfish. At a specific frame, the AI decided it looked more like a jellyfish than a bubble, and now the world has a jellyfish to deal with.
It'll surely take a looot of video data, even more than humans can possibly produce, to build a normalized, Euclidean, physics-adherent world model. Data could be synthetically generated, checked thoroughly, and fed into the training process, but at the end of the day it seems... wasteful. As if we're looking at a local optimum.
by actionfromafar
0 subcomments
- The videos are like waking up from a dream: monstrous, inexplicable details.
by pedalpete
1 subcomment
- This looks interesting, but can someone explain to me how this is different from video generators that use the previous frames as input to generate the next frame?
Is this more than recursive video (sketched below)? If so, how?
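For reference, a rough sketch of the "recursive video" baseline the question is comparing against, assuming a generic autoregressive, frame-conditioned model; the class name, context length, and tensor shapes are all hypothetical and not taken from this lab's system:

```python
# Hypothetical sketch of "recursive video": a model that conditions on a
# window of previous frames and feeds its own outputs back in as context.
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    def __init__(self, context_len=4, dim=128):
        super().__init__()
        self.context_len = context_len
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(context_len * 3 * 32 * 32, dim),
            nn.ReLU(),
            nn.Linear(dim, 3 * 32 * 32),
        )
    def forward(self, context):                  # context: (B, context_len, 3, 32, 32)
        return self.net(context).view(-1, 3, 32, 32)

model = NextFramePredictor()
frames = [torch.randn(1, 3, 32, 32) for _ in range(4)]   # seed clip (stand-in data)

# Recursive rollout: each new frame is generated from previously generated
# frames, so errors (a bubble drifting into a jellyfish) can compound, and
# nothing outside the context window is remembered.
for _ in range(16):
    context = torch.stack(frames[-4:], dim=1)
    next_frame = model(context)
    frames.append(next_frame.detach())
```

If the answer is essentially this loop, consistency only lasts as long as the context window, which is one plausible reading of artifacts like the teleporting tail; a model that maintains an explicit, persistent world state would be doing something more than recursive frame prediction.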
- I guess this might be a chance to plug the fact that Matrix came up with their own Metaverse thing (for lack of a better word) called Third Room. It represented the rooms you joined as spaces/worlds, and they built some limited-functionality demos before the funding dried up.
by arminiusreturns
0 subcomments
- I'm doing a metasim in full 3D with physics. I just keep seeing the limitations of the video format, but it is amazing when done right. The other big concern is licensing of the output.
by no_no_no_no
0 subcomments