The biggest hurdle is that none of the large companies think there is enough profit to be made from AR. The HoloLens 2 is the only headset on the market that is both capable of running the required software and safe to use in an active shop environment (VR with passthrough is not suitable). Unfortunately, the HoloLens 2 is almost 6 years old and is being stretched to the absolute limits of its hardware. The technology is good but feels like it is only 90% of the way to where it needs to be. Even a simple revision with double the RAM and a faster, more power-efficient processor would alleviate many of the issues we've experienced.
Ultimately, from what I've seen, AR is about making the human user better at their job, and there are plenty of industries where it could apply. But tech companies don't actually want to make things that are directly useful to people who work with their hands, so instead we will just keep tossing money at AI in the hope of making ourselves obsolete.
I use VR for gaming. The headsets are uncomfortable after about 45 minutes, they're hot and sweaty, and they're incredibly isolating. All of that is fine if you want to slay baddies alone at home, but utterly repellent to most people.
- Viture Pro XR glasses
- Vuzix Z100 glasses (through Mentra)
I use the Vitures as a lightweight alternative to VR headsets like the Meta Quest: I lie down on the couch or in bed and watch videos while wearing them.
The Vuzix are meant to be daily-wear glasses with a HUD; I have yet to break them in.
Later this year, Google and Samsung are due for big AR releases, and I think Meta is as well.
It'll be the debut of Android XR.
I saw it in action a couple of times; it's really impressive.
We post case studies regularly on our blog, so you can read about real-world deployments there: blog.resolvebim.com
From my experience, the hardware is still a hurdle simply because it doesn’t completely replace all PC-based workflows right now, and therefore has to be used selectively, at the right moments, alongside 2D monitors.
Worked great to avoid eye fatigue/posture issues on airplanes though. I'm happy I have them, but in hindsight I'd have gotten a Viture or something with a better nose bridge and a narrower field of view.
Hurdles? Battery life, proper hardening against dust/water.
We have another product that's geared towards collaborating and sharing data between teams and vendors, and it seems better suited there, but that one is a web application, and I don't know how well VR glasses are supported there.
I think it'd be awesome in the CAD applications themselves but I don't know if any of them support it out of the box.
https://loop.equinor.com/en/stories/shaping-the-future-with-...
https://loop.equinor.com/en/stories/developers-trip-johan-sv...
At the end of the day, you are asking someone to put something on their face that is still ergonomically very different from glasses (and I’m not sure even glasses would overcome enough friction). The ROI has to overcome the business (or personal) friction of buying the hardware, the friction of the form factor, plus any friction from changed workflows.
Now put that in an operational workflow instead of training and the risks go up. Most are still skeptical of device reliability (not to say there aren’t suitable devices for operational roles, but the perception is still a hurdle, and the applicability is often device-specific). Add to that limited experience with the devices (many decision makers have never put one on), added security complications, specialized software development skills, limited content libraries, and very real accessibility concerns, and a lot of enterprises never get past an “innovation center demo.”
For many industries the value proposition just isn’t there yet. That said, I’d recommend digging a little deeper, as there are a lot of existing use cases and deployments, both failed and successful, outside of IVAS.
They use the Apple Vision Pro headset fairly heavily for human interaction and data gathering, which they then use for simulations.
We haven't been able to get a contract in nearly two years. Almost all of our competitors in this sector have gone bust, and my company is about to follow suit.
The answer appears to be "no". The industry at large does not have enough interest in AR/XR to sustain any sort of competitive business to provide those products.
I spent a lot of time in graduate school researching AR/VR technology (specifically regarding its utility as an accessibility tool) and learning about barriers to adoption.
In my opinion, there are three major hurdles preventing widespread adoption of this modality:
1. *Weight*: To achieve powerful computation like that of the HoloLens, you need powerful processing. The simplest solution to this is to put the processing in the device, which adds weight to it. The HoloLens 2 weighs approximately 566g (or 1.24lb), which is a LOT of weight compared to a pair of traditional glasses, which weigh approximately 20-50g. Speaking as someone who developed with the HL2 for a few years, all-day wear with the device is uncomfortable and untenable. The weight of the device HAS to be comfortable for all-day use, otherwise it hinders adoption.
2. *Battery*: Ironically, making the device smaller to accommodate all-day wear means simultaneously reducing its battery life, which reduces its utility as an all-day wearable: any onboard battery must be smaller, and thus store less energy. This is a problematic trade-off: you don't want the device to weigh so much that people can't wear it, but you also don't want it to weigh so little that it ceases to be functional.
3. *Social Acceptability*: This is where I have some expertise, as it was the subject of my research. Simply put, if a wearer feels as though they stand out by wearing an XR device, they're hesitant to wear it at all when interacting with others. This means that an XR device must not be ostentatious, as the Apple Vision Pro, HoloLens, MagicLeap, and Google Glass all were.
In recent years, there have been a lot of strides in this space, but there's a long way to go.
Firstly, there is increasingly an understanding that the futuristic devices we see in sci-fi cannot be achieved with onboard computation (yet). That said, local, bidirectional, wireless streaming between a lightweight XR device (the glasses) and a device with stronger processing power (a la smartphone) provides a potential way of offloading computation from the device itself and simply displaying results onboard.
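To make that offloading pattern concrete, here is a minimal, hypothetical sketch in Python of what the split can look like: the glasses side streams frames to a nearby, more powerful host and only a small result comes back for display. The host address, wire format, and the capture/render callbacks are assumptions for illustration, not any shipping XR API.

```python
# Hypothetical sketch of compute offloading from lightweight glasses to a phone.
# Heavy data (camera frames) goes out over a local socket; only a small JSON
# result comes back to be drawn on the HUD.
import json
import socket
import struct

PHONE_ADDR = ("192.168.1.50", 9000)  # assumed companion device on the local network


def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from a TCP stream."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed")
        buf += chunk
    return buf


def send_frame(sock: socket.socket, frame: bytes) -> None:
    """Length-prefix each frame so messages stay framed over TCP."""
    sock.sendall(struct.pack("!I", len(frame)) + frame)


def glasses_loop(capture_frame, render_overlay) -> None:
    """On-device loop: capture, ship the heavy data out, draw the tiny result."""
    with socket.create_connection(PHONE_ADDR) as sock:
        while True:
            frame = capture_frame()                      # e.g. a JPEG from the glasses camera
            send_frame(sock, frame)                      # the phone does the expensive processing
            result_len = struct.unpack("!I", _recv_exact(sock, 4))[0]
            result = json.loads(_recv_exact(sock, result_len))
            render_overlay(result.get("labels", []))     # cheap text/bitmap overlay on the HUD
```

The point of this arrangement is that the glasses only need enough compute and battery to encode frames and draw text, which is exactly the trade-off the weight and battery hurdles above push toward.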
Secondly, Li+ battery tech continues to improve, and there are now [simple head-worn displays capable of rendering text and bitmaps](https://www.vuzix.com/products/z100-smart-glasses) with a battery life of an entire day. There is also active development work by the folks at [Mentra (YC W25)](https://www.ycombinator.com/companies/mentra) on highlighting these devices' utility, even with their limited processing power.
Lastly, with the first two developments combined, social acceptability is improving dramatically! There are lots of new head-worn displays emerging with varying levels of capability. There was the recent [Android XR keynote](https://www.youtube.com/watch?v=7nv1snJRCEI), which showed some impressive spatial awareness, as well as the [Mentra Live](https://mentra.glass/pages/live) (an open-source clone of the Meta Ray-Bans). In terms of limited displays with social acceptability, there are the [Vuzix Z100](https://www.vuzix.com/products/z100-smart-glasses) and [Even Realities G1](https://www.evenrealities.com/g1), which can display basic information (that still has a lot of utility!).
As an owner of the Vuzix Z100 and a former developer in the XR space, I'd say the progress is slow but steady. The rapid improvements in machine learning (specifically in STT, TTS, and image understanding) indirectly improve the AR space as well.
In my day job I occasionally hear about some AR startup doing demos for training and parts setup on CNC machines, but the value-add seems too small for the work required.
VR is the zombie category that comes around every 10 years. All that's missing is another Lawnmower Man sequel.
Multiple companies have bought it, and we have large companies as clients who’ve used it to train thousands of their blue-collar workers, even in sectors such as construction, in a market that is relatively challenging in terms of pricing and value.
We have a significant (I think!) number of devices deployed, and most of my clients end up purchasing more after the initial purchase and pilot.
I think that’s for a few reasons:
1) VR, when well designed, can offer a first-person experience of being an accident victim due to the viewer’s own oversight or someone else’s. That makes it a far more effective way to draw the learner’s attention to the importance of safety protocols, etc.
2) Our solution is multi-lingual: it’s currently available in 10 regional Indian languages - that matters, since a significant fraction of the workforce may not understand English. Our localisation extends beyond that, but language is a big thing in enabling access and usage.
3) Onboarding time: if you have to invest 10-15 minutes per learner (often one-on-one with the instructor) before they can use your solution, it becomes very difficult to scale and hard to keep cost-effective. So that’s something we focus on heavily.
4) Setup time: don’t create a solution that requires IT support or someone who knows how to set up and load SteamVR, Oculus Link, or Meta Horizon. If the solution adds 20-30 minutes of workload for the staff on a site, adopting it becomes that much more painful - so we’ve worked very hard to develop an integrated system where the instructor can onboard 10-15 learners and get the session going in 5 minutes.
5) Workflow changes: introducing VR often means changing some part of the organisation’s workflow. Many VR solutions don’t acknowledge this in their design - clients get excited initially, but when it comes to actually using it on a daily basis, the deployment and workflow frictions can completely tank VR adoption.
I’ve seen multiple solutions fail because of this, and we focus extensively on this when we design our solutions.
India is a hard market for VR, honestly, because it’s very price-sensitive. But I think we’ve made some progress here, because we’ve focused extensively on system robustness, ease of deployment, localization, and a lot of user-centered design.
We’ve also developed sophisticated VR-based solutions for SOP training. VR can be, and is, very effective for initial onboarding and SOP training. Again, the challenge here is usability - most learners don’t know how to use the controllers, and learning to use them is not easy and takes time. So that onboarding is critical and needs to be done well.
In SOP training, our experience is that it can, if designed well, significantly reduce on-boarding time; however, you still need the last 20% of training on the actual thing for it to stick, and for the learner to actually _learn_.
Edit: formatting and word choice
He wouldn’t invest in Palantir either.
Convince the best seed fund in the world that it has a blind spot, maybe some risks will yield something great.