Rob Enderle | 2019-03-14 13:51:00
[Disclosure: Microsoft is a client of the author.]
A couple of weeks back I wrote that HoloLens 2 was the beginning of a computer revolution. Last week I was able to spend some time with HoloLens 2, and I was pleasantly surprised: the device is actually better than I thought it would be.
I’ve been involved with HoloLens since it was a targeted prototype for Lawrence Livermore Labs, when they pretty much had to assemble it on your head (and it was tethered)! Now it can be successfully deployed by someone other than an expert.
I still believe some form of mixed reality is in our computing future, and AR games like the upcoming Harry Potter Wizards Unite will continue to focus us on what this technology could be rather than what it is. This is important because it drives investor interest, and investor interest drives startups and more rapid advances.
The first edition of this product had a number of issues. It wasn’t easy to swap users. The product’s construction put the weight on the front, which tired your neck prematurely. Occlusion wasn’t very good, so the rendered objects were translucent. And the field of view was annoyingly small. But it wasn’t a bad-looking product, and it looked like the kind of protective shield you’d find on a helmet. I’ve often thought they should just build this into a helmet, given how often it is used in industrial settings and because the result would look less like a huge device that didn’t belong on your head. (It should be noted that Microsoft opened up the design, and a third party is now building it into a helmet.)
I didn’t get to try the helmet, sadly, but I was able to use the new stand-alone HoloLens 2. What I found was that the product is far better balanced, which makes it feel lighter (even though it isn’t) by removing the strain on the back of my neck. The increased field of view made an enormous difference because it no longer felt like I had some kind of little window in front of my face. Occlusion still isn’t where I’d like it to be, but it’s far better than it was. Now, if you focus on an object, it looks more real than it did, and while it wouldn’t yet fool anyone as to its actual reality, it’s close enough that you can more easily suspend disbelief. For the initial industrial use this isn’t that important (though it likely reduces eye strain), but this capability will be huge when they pivot HoloLens to the consumer market in a few years.
One of the most interesting improvements is the ability to scan your hands and then use them as they were intended: as pointing and grasping implements. It used to be that you had to use your hands like some kind of mouse hybrid, which made HoloLens 1 counter-intuitive and relatively difficult to learn to use well. Now interfacing with the device is far more intuitive, and while they still need to fine-tune the accuracy of this feature (it sometimes took a couple of attempts to actually grasp something), it remains a huge improvement over the earlier “mouse hands” approach.
While the product is still undergoing final testing and won’t be ready until later in the year, for many users it would be acceptably usable now. Mostly Microsoft is just tweaking the code and improving the ecosystem, which was also significantly enhanced with new services at launch.
What’s in store for HoloLens 3?
I think they’ll begin to pivot back to a more design-forward focus with the next generation of the device as they explore a move toward the higher-volume consumer market. When they make that move, I’d expect them to bifurcate the line, retaining a focus on business with a new iteration of the existing design and adding a more fluid, less expensive design for consumers. So the future line would have a lower-cost consumer/entry-level offering and a more expensive pro offering for true professionals.
I’d anticipate a line of accessories that can be used with HoloLens 3 to improve the experience on the consumer side and to better integrate tools with the device on the business side. Part of this may be advanced tools like the Kinect Full Sensing Camera, which could be used to help capture elements of the world around the user and create the impression that the user can see through objects by enabling a camera on the other side of the object. This could be useful for some building and prototyping projects, or to place the image of a remote mentor in the same space as the user to facilitate training.
But the real advancements will occur once HoloLens 3 is put into production at scale and the developers enabled by the latest announcements start executing against their own visions. This too will define HoloLens 3 and likely make it very different from the device we have now (for instance, I expect expanded power options so the device can remain in use longer).
HoloLens represents what may be the future of personal computing. In fact, I expect it’s more a matter of when than if. The changes made to the current device came, as they always should, from user feedback, and Microsoft seems to have hit all of the needed improvements I knew of (other than better battery life, which is a technical limitation at the moment).
But these are still early days for this technology, which will likely shift sharply to the web and Azure for performance advancements and go through some serious design changes to meet the needs of the new audiences that will use it.
I can hardly wait for the future, when I can place virtual manuals anywhere I need them while working on my car and battle wizards in a future version of the Harry Potter Wizards Unite game. Until then, those who need a mixed reality solution for a professional purpose should find HoloLens 2 a vast improvement over the first edition.
I expect version 3 to be an even bigger change. With every step of this technology we get closer and closer to creating what looks like magic. Now if we can just get a set of haptic gloves so we can feel the virtual objects we touch, and improve occlusion so the objects become more photorealistic, we’ll be there. One step at a time.
This article is published as part of the IDG Contributor Network.