As Microsoft explains, “Research Mode is a capability of HoloLens that provides application access to the key sensors on the device.” This raises an interesting question: what if we could directly get point clouds from its depth sensor? So I ran several experiments with it.
Visualization
Basically, Microsoft lets developers access the media frames of each sensor. In the repo available on my GitHub, Research Mode, you can find the following code:
    private async void InitSensor()
    {
        ...
        var mediaFrameSourceGroup = mediaFrameSourceGroupList[0];
        var mediaFrameSourceInfo = mediaFrameSourceGroup.SourceInfos[0];
        ...
    }
Basically, what it does is let you switch from the RGB camera to the key sensors: the index into mediaFrameSourceGroupList selects the sensor group, and SourceInfos determines which sensor within that group you get access to. A sketch of a full reader built on this idea is shown below.
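To make this concrete, here is a minimal sketch of how such a reader could be wired up with the UWP MediaCapture APIs. The [0] indices are placeholders (in practice you would inspect the SourceInfos entries, e.g. their Id and MediaStreamType, to find the stream you want), and the class and handler names are mine, not from the repo.

    using Windows.Media.Capture;
    using Windows.Media.Capture.Frames;

    public class SensorReader
    {
        private MediaCapture mediaCapture;
        private MediaFrameReader frameReader;

        public async void InitSensor()
        {
            // Enumerate every frame source group; with Research Mode enabled this
            // list includes the RGB camera and the research sensors.
            var mediaFrameSourceGroupList = await MediaFrameSourceGroup.FindAllAsync();
            var mediaFrameSourceGroup = mediaFrameSourceGroupList[0];         // placeholder index
            var mediaFrameSourceInfo = mediaFrameSourceGroup.SourceInfos[0];  // placeholder index

            var settings = new MediaCaptureInitializationSettings
            {
                SourceGroup = mediaFrameSourceGroup,
                SharingMode = MediaCaptureSharingMode.SharedReadOnly,
                MemoryPreference = MediaCaptureMemoryPreference.Cpu,
                StreamingCaptureMode = StreamingCaptureMode.Video
            };

            mediaCapture = new MediaCapture();
            await mediaCapture.InitializeAsync(settings);

            // Look up the frame source that matches the chosen SourceInfo and
            // start a reader on it.
            var frameSource = mediaCapture.FrameSources[mediaFrameSourceInfo.Id];
            frameReader = await mediaCapture.CreateFrameReaderAsync(frameSource);
            frameReader.FrameArrived += OnFrameArrived;
            await frameReader.StartAsync();
        }

        private void OnFrameArrived(MediaFrameReader sender, MediaFrameArrivedEventArgs args)
        {
            using (var frame = sender.TryAcquireLatestFrame())
            {
                var bitmap = frame?.VideoMediaFrame?.SoftwareBitmap;
                // bitmap holds the latest sensor frame (Gray16 for the depth stream).
            }
        }
    }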
After several experiments, I concluded that the short-range depth camera, which senses depth from about 0.15 m to 0.95 m, is what I need for generating point clouds. Here is a video that shows the depth camera frames in real time.
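To display those frames, one simple approach is to rescale the raw 16-bit depth values into an 8-bit preview image. The sketch below assumes the depth stream arrives as a Gray16 SoftwareBitmap with values in millimetres (an assumption on my part) and clamps to the 0.15 m–0.95 m range mentioned above; the helper name is hypothetical.

    using System;
    using System.Runtime.InteropServices.WindowsRuntime;
    using Windows.Graphics.Imaging;

    public static class DepthPreview
    {
        // Converts a Gray16 depth SoftwareBitmap (assumed millimetres) into an
        // array of 8-bit intensities, clamping to the sensor's usable range.
        public static byte[] ToGrayscale(SoftwareBitmap depthBitmap,
                                         ushort minMm = 150, ushort maxMm = 950)
        {
            int width = depthBitmap.PixelWidth;
            int height = depthBitmap.PixelHeight;

            // Copy the raw 16-bit pixels out of the bitmap.
            var raw = new byte[width * height * 2];
            depthBitmap.CopyToBuffer(raw.AsBuffer());

            var preview = new byte[width * height];
            for (int i = 0; i < preview.Length; i++)
            {
                ushort depth = BitConverter.ToUInt16(raw, i * 2);
                if (depth < minMm || depth > maxMm)
                {
                    preview[i] = 0;   // out of range -> black
                }
                else
                {
                    // Linearly map [minMm, maxMm] to [0, 255].
                    preview[i] = (byte)(255 * (depth - minMm) / (maxMm - minMm));
                }
            }
            return preview;
        }
    }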
Calibration
Getting the depth image frame by frame is clearly not enough. As you can see, the image is distorted like a fish-eye camera, and we know nothing about its intrinsics. We need to somehow undistort the image frame and recover the exact 3D (x, y, z) position of each 2D pixel. The first thing I tried was to query the intrinsics directly with the UWP method MediaFrameSource.TryGetCameraIntrinsics
and … it returns null. As the teaching assistant of the Augmented Reality course, I made this calibration and visualization work one of the course projects. I closely mentored the team, and the result is great: we are able to get a point cloud of the hand. The full poster can be found here: poster.
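Once intrinsics are available from such an offline calibration, each pixel can be back-projected into 3D. The sketch below uses a plain pinhole model with assumed parameters fx, fy, cx, cy (TryGetCameraIntrinsics returns null here, so they have to come from calibration) and assumes the depth value is measured along the optical axis; a complete solution would also undistort (u, v) first, given how fish-eye-like this camera is.

    using System.Numerics;

    public static class PointCloudMath
    {
        // Back-projects a pixel (u, v) with depth z (metres along the optical
        // axis) into a 3D point in the camera coordinate frame.
        public static Vector3 Unproject(float u, float v, float z,
                                        float fx, float fy, float cx, float cy)
        {
            float x = (u - cx) / fx * z;
            float y = (v - cy) / fy * z;
            return new Vector3(x, y, z);
        }
    }

Running this over every valid depth pixel in a frame yields the point cloud.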