The interesting thing here is the core of it: Android XR and its deep AI integration, especially the spatial awareness. Devices will come and go, but the OS is the part that stays, grows, and evolves over time. I am very curious how much of this is exposed as OS foundations to build on versus a monolithic app Google built to look like an OS. This has been a large part of Meta's mistake: the OS doesn't provide many of these fundamentals, so any app you see doing them is mostly re-inventing them itself or relying on third-party tools like Unity to do the heavy lifting.
The really impressive part of the Vision Pro is actually how well thought out the OS underneath it is, exposing fundamentals of how 3D computing can work, especially the compositing of multiple spatial apps living together in the same shared space and even interacting with each other (e.g., one app can emit a lighting effect that shades another app's rendering).
I am very curious whether Google has done this kind of foundational work, especially whether it is designed (as they claim) from the ground up to interface with AI models, e.g., a 3D vision-language model that can reason across everything in your shared space, including your pass-through reality, and respond to it. That would be truly amazing, but there is zero technical information I can see at this point to tell whether Google really built these foundations.
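To make concrete what I mean by "OS foundations", here's a toy sketch of the kind of primitive I'd hope the platform exposes. Every type and name below is hypothetical (this is not a real Android XR or visionOS API); it just illustrates the compositor, not individual apps, owning cross-app lighting in a shared space:

```kotlin
// Hypothetical sketch only: what an OS-level "shared space" primitive might
// look like. None of these types exist in Android XR or visionOS; the point
// is that the system compositor owns cross-app effects like lighting.

data class Pose(val x: Float, val y: Float, val z: Float)
data class Light(val pose: Pose, val colorRgb: Int, val intensity: Float)

// Each spatial app registers its content with the OS compositor instead of
// owning the whole frame, the way a 2D window manager owns layers.
interface SpatialApp {
    val id: String
    fun lightsEmitted(): List<Light>       // e.g. a virtual lamp this app renders
    fun render(sceneLights: List<Light>)   // shaded by *all* apps' lights
}

class SharedSpaceCompositor {
    private val apps = mutableListOf<SpatialApp>()

    fun register(app: SpatialApp) { apps.add(app) }

    // One compositor pass: gather every app's emitted lights, then let each
    // app shade itself against the union, so one app's lamp lights another's model.
    fun composeFrame() {
        val sceneLights = apps.flatMap { it.lightsEmitted() }
        apps.forEach { it.render(sceneLights) }
    }
}
```

If the OS doesn't own something like that loop, every app ends up faking it alone, which is exactly the Meta failure mode described above.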
Seems like there are now ~4 places to buy content (Oculus, Steam, Google Play, Apple App Store).
If you buy on Steam, your catalog is reasonably portable over time - you can buy another vendor's headset and still access your catalog. The cost is that you have to bring a separate device with you to host the catalog (unless/until the rumored Steam Frame comes out).
Oculus and Play are both based on Android. I suspect there will be e.g. guides on Reddit to sideload one vendor's catalog onto the other vendor's device.
I can imagine a world where someone prefers to buy content in one of these stores, to have everything in one place for portability to future devices. You're already seeing this in computer gaming with Steam (and Epic, Xbox, etc.).
* https://www.samsung.com/us/xr/galaxy-xr/galaxy-xr/
* https://www.samsung.com/us/business/xr/galaxy-xr/galaxy-xr/
> 12 months of Google AI Pro, YouTube Premium, and Google Play Pass.
Not a bad deal for those who pay for those services.
What does Apple bundle with the Vision Pro for $3,500?
I even installed Termux via F-Droid today, and I have a Bluetooth keyboard with a touchpad connected to it.
Then when they say "explore Google Maps": OK, fun. But for what, ten minutes? How prominent is that need/activity in our lives?
All the use cases that Apple, and now Google/Samsung, showcase are "imaginary", wishful-thinking use cases. They don't stick. They're more like party tricks than something that can integrate into our lives and fill a real gap.
I wonder what I get for the other $1.6k that would make me want it...
[0] https://play.google.com/store/apps/details?id=com.htl.agmous...
Games targeting the system won't be stuck in the Apple TV/Vision Pro conundrum of having no clear target hardware, or having to ask the user to go buy a controller from another platform they might never have heard of.
Does anyone have any recommendations on the matter? It would be super helpful, as we have a flight coming up soon (in 2 months) and I can already see her anxiety levels rising.
The use cases they showed are just as stupid as those shown in Apple's event over two years ago.
Reminder that the Vision Pro has a dedicated R1 chip with a blistering 256 GB/s of memory bandwidth (while the actual CPU "only" gets 153 GB/s)! That's as much as the quad-channel LPDDR5X memory of Strix Halo!
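A quick sanity check of that comparison, assuming Strix Halo's widely reported 256-bit LPDDR5X-8000 interface (a back-of-the-envelope sketch, not anyone's official spec-sheet math):

```kotlin
// Memory bandwidth in GB/s = (bus width in bytes) * (transfer rate).
// Strix Halo is reported as a 256-bit ("quad-channel") LPDDR5X-8000 interface.
fun bandwidthGBps(busWidthBits: Int, megaTransfersPerSec: Int): Double =
    busWidthBits / 8.0 * megaTransfersPerSec / 1000.0

fun main() {
    // 256 bits / 8 = 32 bytes per transfer; 32 B * 8000 MT/s = 256 GB/s,
    // which matches the R1 figure quoted above.
    println(bandwidthGBps(256, 8000))  // 256.0
}
```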
It'll be interesting to see how Samsung and then others fare at this, and over time how much Google, Qualcomm, or other platform providers help versus leave device makers to fend for themselves on sensor fusion and other ultra-realtime tasks (a toy sketch of what that fusion means follows the sensor list below). Whether the Snapdragon XR2+ Gen 2 can do enough, and whether the software can make decent use of that hardware, is very much TBD for this new ecosystem. It's not at all clear who is leading the charge to make it all slick and smooth. My default assumption is that Qualcomm likely holds a big chunk of the stack, and sole proprietorship of the stack like that seems like a real threat to the long-term viability of XR as a technology: as the Valve Steam Deck so strongly exhibited, it's only through intense cross-stack ownership and close collaboration (in the Linux kernel, in that case) that genuinely good products emerge.
Sensors, from Samsung's specs page:
* Two high-resolution pass-through cameras
* Six world-facing tracking cameras
* Four eye-tracking cameras
* Five inertial measurement units (IMUs) [commentary: whoa, that's a lot]
* One depth sensor
* One flicker sensor
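To make the fusion point concrete, here's a toy single-axis complementary filter, the textbook baby version of what a headset has to do across those five IMUs and the tracking cameras (real trackers use Kalman filters and visual-inertial odometry; this only shows the core idea of blending a drifting sensor with a noisy one):

```kotlin
import kotlin.math.atan2

// Toy single-axis IMU fusion: the gyro integrates smoothly but drifts,
// the accelerometer is noisy but drift-free, so blend the two.
class ComplementaryFilter(private val alpha: Double = 0.98) {
    var pitchRad = 0.0
        private set

    // gyroRate: rad/s around the pitch axis; ax/az: accelerometer axes (m/s^2);
    // dt: seconds since the last sample.
    fun update(gyroRate: Double, ax: Double, az: Double, dt: Double): Double {
        val gyroPitch = pitchRad + gyroRate * dt   // smooth, drifts over time
        val accelPitch = atan2(ax, az)             // noisy, anchored to gravity
        pitchRad = alpha * gyroPitch + (1 - alpha) * accelPitch
        return pitchRad
    }
}
```

Now imagine running the real version of that at headset frame rates, fused with six camera feeds, and the question of who owns that layer of the stack starts to matter a lot.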
As an aside, this sort of makes me want a device that just does eye tracking. Four eye-tracking cameras seems wild! I've mostly seen some pretty chill examples of webcam-based tracking; it'd be neat to see what kind of user interface we could build if we really could see where people are looking.
Also maybe worth reviewing what Android ARCore offers, since it defines so much of what we get here. I'd love to see more depth-based capture systems around in general: not just on XR displays but on regular devices too, to build a better library of media that carries depth. Apple has had LiDAR since the iPhone 12 Pro (2020)! There's some ToF on Android phones but close to zero LiDAR. We also see tons of big fancy dual-sensor XR cameras out there, but AFAIK nothing for phones! Just adding a second camera for stereoscopic capture to the back of phones would be such an obvious move, and would do so much to help the XR world. It feels like XR products are being left to stand all on their own with no help from the rest of the mobile-device ecosystem, and that feels both shortsighted and unworkable.
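On the eye-tracking idea above: the core interaction logic is simple even without knowing any vendor's API. Here's a hypothetical dwell-to-select sketch; `GazeSample` and its fields are made up (a real system would feed this from the eye-tracking cameras), only the dwell logic matters:

```kotlin
// Hypothetical dwell-to-select: "click" a UI target by looking at it for 600 ms.
// GazeSample and the target ids are invented for illustration.
data class GazeSample(val targetId: String?, val timestampMs: Long)

class DwellSelector(private val dwellMs: Long = 600) {
    private var currentTarget: String? = null
    private var dwellStartMs = 0L

    // Feed gaze samples each frame; returns a target id once the user
    // has fixated on the same target long enough.
    fun onGaze(sample: GazeSample): String? {
        if (sample.targetId != currentTarget) {
            currentTarget = sample.targetId      // gaze moved: restart the timer
            dwellStartMs = sample.timestampMs
            return null
        }
        val target = currentTarget ?: return null
        return if (sample.timestampMs - dwellStartMs >= dwellMs) {
            dwellStartMs = sample.timestampMs    // re-arm so we don't fire every frame
            target
        } else null
    }
}
```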
It seems the battery is external, like in the Apple Vision Pro, but it's not clear. The display (OLED) resolution is also the same.
Is there a fight between Google and Netflix?
Also USD 1800 per headset ... wow.
If Apple couldn't make it work, does Google really think they can? This should be headlining an event, not relegated to a blog post.