The biggest irritation I’ve found with the Magic Leap so far (there are others, but they’re not as big) is the state of video capture.
- The capture camera and the render camera are offset from each other, so nothing lines up in a recorded video.
- The capture FOV is narrow. If you want something in the recording, it needs to be pretty much dead center of your view.
- Getting media off the device is OK, on par with pulling movies from an iPhone onto a PC, and the Device Bridge app in the Lab makes it easy.
I won’t pretend to understand the choices they made getting this product to market, but suffice it to say that if you’re not the one wearing the device, your understanding of the experience will be vastly different from that of the person who is.
To that end, I’ve tried to show what it looks like through the device by holding a smartphone (an iPhone XR, if you’re curious) up to the inside of the right eyepiece. It’s a little bleed-y and a bit awkward to manage, but it does show the status rings lining up properly, and it lets me pull back to show the DJ rig in total.
Some other notes:
- Hand meshing and occlusion are still broken in the UE4 project. When they work, they’re kind of cool, but they’re not a must-have for now.
- Hand tracking does NOT work when the device isn’t being worn. I’m not sure why, but that’s why the hand markers aren’t visible in this video (see the sketch after this list).
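For anyone hitting the same wall: UE4 does expose whether the headset is worn, so the project can at least detect that state and hide the markers deliberately instead of leaving them dangling. A minimal sketch, assuming a hypothetical AHandMarker actor standing in for my hand markers; UHeadMountedDisplayFunctionLibrary::GetHMDWornState() is the stock engine call (HeadMountedDisplay module), and the UCLASS boilerplate is elided.

```cpp
// Minimal sketch: hide the hand markers when the device reports it isn't worn,
// since hand tracking produces nothing useful in that state.
// AHandMarker is a hypothetical actor; GetHMDWornState() is stock UE4.
#include "HeadMountedDisplayFunctionLibrary.h"

void AHandMarker::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    const EHMDWornState::Type WornState =
        UHeadMountedDisplayFunctionLibrary::GetHMDWornState();

    // NotWorn means the tracking data is stale or absent, so hide the markers.
    // Some platforms only ever report Unknown; treat that as worn so the
    // markers still show up in desktop preview.
    SetActorHiddenInGame(WornState == EHMDWornState::NotWorn);
}
```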
The best way to show this stuff (and I’m dreading it, because it’s a ton of boilerplate) is a cross-platform solution involving an ARKit/ARCore-enabled mobile device. Even a second Magic Leap ML1 wouldn’t help; you’d still get the offsets, you’d just have them in a 3rd person view. So I guess multi-device/cross-platform networking is next. Take the lessons from last year’s UE4 networking fun, apply them to this project, and see if there’s a way to make a better demo of a sustained DJ set. Something like the replicated-actor sketch below is probably the starting point.
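To make “apply the networking lessons” concrete, here’s roughly where I’d start: one small replicated actor that owns the rig’s shared state, spawned by whichever device hosts as the listen server. The class and property names (ADJRigState, ActiveDeck, PlaybackSeconds) are hypothetical placeholders for whatever the DJ rig actually needs; the replication plumbing itself is stock UE4.

```cpp
// DJRigState.h -- hypothetical replicated actor holding the DJ rig's shared state.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Net/UnrealNetwork.h"
#include "DJRigState.generated.h"

UCLASS()
class ADJRigState : public AActor
{
    GENERATED_BODY()

public:
    ADJRigState()
    {
        bReplicates = true;      // server-authoritative; clients receive updates
        bAlwaysRelevant = true;  // tiny actor, keep it visible to every client
    }

    // Hypothetical shared state: which deck is live and where playback is,
    // so every device renders the same moment of the set.
    UPROPERTY(Replicated)
    int32 ActiveDeck = 0;

    UPROPERTY(Replicated)
    float PlaybackSeconds = 0.f;

    virtual void GetLifetimeReplicatedProps(
        TArray<FLifetimeProperty>& OutLifetimeProps) const override
    {
        Super::GetLifetimeReplicatedProps(OutLifetimeProps);
        DOREPLIFETIME(ADJRigState, ActiveDeck);
        DOREPLIFETIME(ADJRigState, PlaybackSeconds);
    }
};
```

The idea: one device hosts as a listen server, the phone joins as a client, and since the state lives in a single replicated actor, the ARKit/ARCore device can render the same rig from its own tracked pose instead of trying to film through the headset.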
Apologies for the vertical video.