Experiments with HoloLens (1st Gen) Research Mode

As Microsoft explains, “Research Mode is a capability of HoloLens that provides application access to the key sensors on the device.” That raises an interesting question: what if we could get point clouds directly from its depth sensor? So I ran several experiments with it.

Visualization

Basically, Microsoft lets developers access the media frame of each sensor. In the repo available on my GitHub, Research Mode, you can find the following code:

    private async void InitSensor()
    {
        ...
        // Choose a frame-source group, then a specific sensor within it.
        var mediaFrameSourceGroup = mediaFrameSourceGroupList[0];
        var mediaFrameSourceInfo = mediaFrameSourceGroup.SourceInfos[0];
        ...
    }

Basically, what this does is let you switch from the RGB camera to the key sensors: changing the index into mediaFrameSourceGroupList and SourceInfos determines which sensor you get access to.
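To make that concrete, here is a fuller sketch of the enumeration step, assuming the Research Mode sensors show up as extra MediaFrameSourceGroup entries once Research Mode is enabled. The indices and the OnFrameArrived handler name are placeholders for illustration, not the exact code from the repo:

    using Windows.Media.Capture;
    using Windows.Media.Capture.Frames;

    private MediaCapture mediaCapture;
    private MediaFrameReader frameReader;

    private async void InitSensor()
    {
        // Enumerate every frame-source group the device exposes
        // (RGB camera, Research Mode sensor streams, ...).
        var mediaFrameSourceGroupList = await MediaFrameSourceGroup.FindAllAsync();

        // Picking a different group / source is what switches between
        // the RGB camera and the key sensors.
        var mediaFrameSourceGroup = mediaFrameSourceGroupList[0];
        var mediaFrameSourceInfo = mediaFrameSourceGroup.SourceInfos[0];

        mediaCapture = new MediaCapture();
        await mediaCapture.InitializeAsync(new MediaCaptureInitializationSettings
        {
            SourceGroup = mediaFrameSourceGroup,
            // Sensor frames are read on the CPU rather than the GPU.
            MemoryPreference = MediaCaptureMemoryPreference.Cpu,
            StreamingCaptureMode = StreamingCaptureMode.Video
        });

        // Create a reader for the chosen source and start receiving frames.
        var source = mediaCapture.FrameSources[mediaFrameSourceInfo.Id];
        frameReader = await mediaCapture.CreateFrameReaderAsync(source);
        frameReader.FrameArrived += OnFrameArrived;
        await frameReader.StartAsync();
    }

In practice, inspecting each group's DisplayName and each SourceInfo's Id at runtime is the easiest way to figure out which entry corresponds to which sensor.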

After several experiments, I concluded that the short-range depth camera, which senses depth from about 0.15 m to 0.95 m, is what I need for generating point clouds. Here is a video that shows the depth camera frames in real time.
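As a sketch of what the frame handler for that sensor might look like, assuming the raw frames are 16-bit depth values (two bytes per pixel is an assumption) that DepthScaleInMeters converts to meters; the 0.15 m to 0.95 m bounds are just the sensing range quoted above:

    using System.Runtime.InteropServices.WindowsRuntime; // for IBuffer.ToArray()
    using Windows.Media.Capture.Frames;

    private void OnFrameArrived(MediaFrameReader sender, MediaFrameArrivedEventArgs args)
    {
        using (var frameRef = sender.TryAcquireLatestFrame())
        {
            var videoFrame = frameRef?.VideoMediaFrame;
            var depthFrame = videoFrame?.DepthMediaFrame;
            if (depthFrame == null) return;

            // Scale factor that converts the raw 16-bit values to meters.
            double scale = depthFrame.DepthFormat.DepthScaleInMeters;

            var bitmap = videoFrame.SoftwareBitmap;
            var buffer = new Windows.Storage.Streams.Buffer(
                (uint)(bitmap.PixelWidth * bitmap.PixelHeight * 2));
            bitmap.CopyToBuffer(buffer);
            byte[] pixels = buffer.ToArray();

            int validCount = 0;
            for (int i = 0; i < pixels.Length; i += 2)
            {
                ushort raw = (ushort)(pixels[i] | (pixels[i + 1] << 8));
                double meters = raw * scale;

                // Keep only pixels inside the short-throw sensing range.
                if (meters >= 0.15 && meters <= 0.95) validCount++;
            }
            System.Diagnostics.Debug.WriteLine($"{validCount} pixels in range");
        }
    }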

Calibration

Getting the depth image frame by frame is clearly not enough. As you can see, the image is distorted like a fish-eye camera, and we know nothing about its intrinsics. We need to somehow undistort the image frame and get the exact x, y, z 3D position of each 2D pixel. The first thing I tried was to get the intrinsics directly using the UWP method MediaFrameSource.TryGetCameraIntrinsics, and … it returned null. As the teaching assistant of the Augmented Reality course, I turned this calibration and visualization work into one of the course projects. I closely mentored the team, and the result is great: we are able to get the point cloud of the hand. The full poster can be found here: poster.
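Once intrinsics are available from an offline calibration like the one above, turning a depth pixel into a 3D point is straightforward. Here is a minimal back-projection sketch under a standard pinhole model, assuming the frame has already been undistorted; fx, fy, cx, cy stand in for the calibrated values and are not the real numbers from the project:

    using System.Numerics;

    // Back-project an undistorted pixel (u, v) with depth z (in meters)
    // through a pinhole model with focal lengths (fx, fy) and
    // principal point (cx, cy).
    static Vector3 Unproject(int u, int v, float z,
                             float fx, float fy, float cx, float cy)
    {
        float x = (u - cx) * z / fx;
        float y = (v - cy) * z / fy;
        return new Vector3(x, y, z);
    }

Note that this assumes the sensor reports z-depth along the optical axis; one possible explanation for the curved-wall artifact mentioned in the comments below is that the raw values are radial distances instead, in which case each value needs to be rescaled along its ray direction before back-projection.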


2 thoughts on “Experiments with HoloLens (1st Gen) Research Mode”

  1. Cool work!
    Did you validate the depth values of your undistorted depth image?
    When Research Mode became available a year ago, I tried simple pinhole calibration, but simply transferring the depth values pixel by pixel did not work well. I got a curved surface/point cloud when pointing the HoloLens at a wall. I tried fisheye-model undistortion, too. Same problem.

    I still haven’t solved it properly.
    Maybe you have an idea?


    1. Hi Kevin,

      Thanks for your interest. I didn’t validate the depth value. It was a preliminary experiment just to try to handle the hand occlusion problem. I believe the source code and everything should be in the CAMP GitLab. Maybe you can take a look at that.

      Thanks.

