Dancing in digital rain: HoloLens used to see real-time motion capture

Using the HoloLens in Motion Capture / Dance / Visual Effects production
Augmented reality headsets might not be able to create entire digital worlds for you to walk around in, but they can do things that VR headsets cannot, such as layering the visual data from a motion-capture recording over the real-world performer it was captured from. That is exactly what the WholoDance project experimented with during development using a HoloLens headset, and it seemed to work very well.

WholoDance is a project that explores new ways to teach dance, especially through technology, while preserving the cultural history carried within the movements. HoloLens and augmented reality were an exciting development for the project, letting the developers try out something genuinely new.

Not only could the director of the motion-capture session view the dancer’s digital form while she danced beside it, but immediately afterwards the team could play the moving 3D model back to the dancer herself, who, while wearing the headset, was able to critique both her own performance and that of the capture technology.

That in itself could be of great help to dancers, who traditionally rely on 2D recordings to analyze their performance; being able to review a take so quickly in 3D, and to walk around their digital ghost, could be extremely useful. As motion-capture developer Jasper Brekelmans said of the project (via RoadtoVR), “Nuances of how the hips moved during balancing or how footwork looked for example became much more apparent and clear when walking around a life-size 3D character in motion than watching the same thing on a 2D screen.”
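Production HoloLens apps are written in C# against Unity, but the core of this kind of playback is simple enough to sketch in Python. A minimal sketch, assuming each recorded frame carries a capture-relative timestamp and a set of joint transforms; play_back and render_skeleton are hypothetical names for illustration, not part of any actual mocap or HoloLens API:

```python
import time

# Hypothetical frame format: {"t": seconds_since_start, "joints": pose_data}
def play_back(frames, render_skeleton):
    """Replay recorded frames at their original timing so a viewer
    wearing the headset can walk around the life-size 3D character."""
    start = time.monotonic()
    for frame in frames:
        # Sleep until this frame's capture-relative timestamp is due,
        # keeping playback speed identical to the live performance.
        delay = (start + frame["t"]) - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        render_skeleton(frame["joints"])  # draw the hologram in this pose
```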

One aspect that is likely to be improved in the future, though, is interaction. In the video above, we see Brekelmans and the dancers using an Xbox gamepad for input. While there are certainly more intuitive ways to interact with a virtual space, the team felt that a reliable, well-tested controller was better suited to the job than inputs it was less familiar with, such as motion controls or voice commands.

The WholoDance project is also interested in experimenting with someone dancing while wearing the headset, potentially giving them a HUD or information overlay that could help with certain movements, or even with learning the dances in the first place.

Editors' Recommendations

Jon Martindale
Jon Martindale is the Evergreen Coordinator for Computing, overseeing a team of writers addressing all the latest how to…
HoloLens can be used to guide the blind through buildings faster

Researchers at the California Institute of Technology developed an application for Microsoft's HoloLens that can steer visually impaired individuals through a complex building. Rather than delivering raw images to the brain, as seen in recent prosthetic attempts, this "non-invasive" method relies on 360-degree sound and real-time room and object mapping to navigate wearers through an unfamiliar multi-story building on their first attempt.
Typically, HoloLens renders interactive virtual objects within your full view of the real world. For example, engineers can construct a 3D model of a building in physical space and examine each side by simply walking around the virtual structure. You can also use HoloLens to shop for furniture online by placing a 3D model of the desired chair or table in your living room to see how it blends in with your current décor before making a purchase.
The drawback to HoloLens, for now at least, is that all virtual objects reside only in the wearer's view; these "holograms" can't be seen by anyone else unless they have a device capable of sharing the same experience. In this case, the wearer can't see anything at all, so the researchers fell back on the headset's real-time room- and object-mapping capabilities and conveyed the results through sound.
"Our design principle is to give sounds to all relevant objects in the environment," the paper states. "Each object in the scene can talk to the user with a voice that comes from the object’s location. The voice’s pitch increases as the object gets closer. The user actively selects which objects speak through several modes of control." 
These modes consist of scan, spotlight, and target. After selecting scan mode with a clicker, each object calls out its name in sequence from left to right via spatial audio, so the wearer can get a sense of each object's real-world placement from the distance and direction of its voice. Spotlight mode forces the object directly in front of the wearer to speak, and target mode makes a chosen object repeatedly call out its name. Meanwhile, obstacles and walls hiss if the wearer gets too close.
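The three modes amount to a small selection policy over the mapped objects. Here is a minimal Python sketch of that logic under stated assumptions: SceneObject, the pitch formula, and the print-based speak stand-in are hypothetical illustrations, not the CalTech application's actual code, which runs on HoloLens with spatialized text-to-speech:

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    angle_deg: float    # bearing relative to the wearer; 0 = straight ahead
    distance_m: float

def voice_pitch(distance_m, base_hz=220.0):
    # The paper says pitch rises as an object gets closer; scaling
    # inversely with distance is one simple way to model that.
    return base_hz * (1.0 + 1.0 / max(distance_m, 0.5))

def speak(obj):
    # Stand-in for spatialized text-to-speech played from the object's location.
    print(f"{obj.name} speaks at {voice_pitch(obj.distance_m):.0f} Hz, "
          f"{obj.angle_deg:+.0f} degrees, {obj.distance_m:.1f} m away")

def scan(objects):
    # Scan mode: every object calls out its name in sequence, left to right.
    for obj in sorted(objects, key=lambda o: o.angle_deg):
        speak(obj)

def spotlight(objects, cone_deg=30.0):
    # Spotlight mode: only the object most directly ahead of the wearer speaks.
    ahead = [o for o in objects if abs(o.angle_deg) <= cone_deg / 2]
    if ahead:
        speak(min(ahead, key=lambda o: abs(o.angle_deg)))

def target(obj, repeats=3):
    # Target mode: a chosen object repeatedly calls out its name.
    for _ in range(repeats):
        speak(obj)

room = [SceneObject("door", -40.0, 3.0), SceneObject("chair", 5.0, 1.2)]
scan(room)
```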
In one test, researchers created a virtual chair and directed HoloLens wearers to approach the object using target mode. Most relied on a two-phase method: localize the voice by turning in place, then quickly head to the correct destination. Afterwards, researchers put a physical chair in the same location and asked the individuals to find it using their typical walking aid. Without the help of HoloLens, the process took eight times longer and covered 13 times more distance.
HoloLens can be used for long-range guided navigation, too. Researchers created a virtual guide that followed a pre-computed path and called out "follow me" to the wearer. It continuously monitored the wearer's progress and remained a few feet ahead; if the wearer strayed off course, the virtual guide would stop and wait for them to catch up. The test included crossing a building's main lobby, climbing two flights of stairs, walking around a few corners, and stopping in an office.
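The guide's behavior reads like a simple per-tick state update: advance along the pre-computed path while staying a few feet ahead, and pause when the wearer strays. A minimal Python sketch, with LEAD_M and MAX_STRAY_M as assumed values the paper doesn't specify:

```python
LEAD_M = 1.0       # assumed lead distance ("a few feet ahead")
MAX_STRAY_M = 2.0  # assumed tolerance before the guide stops and waits

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def step_guide(guide_pos, wearer_pos, remaining_path):
    """Return the guide's next position; called once per update tick."""
    if not remaining_path:
        return guide_pos  # no path left: stay put
    # If the wearer has strayed or fallen behind, stop and wait.
    if dist(guide_pos, wearer_pos) > LEAD_M + MAX_STRAY_M:
        return guide_pos
    # Otherwise advance to the first upcoming waypoint that keeps the
    # guide roughly LEAD_M ahead of the wearer.
    for waypoint in remaining_path:
        if dist(waypoint, wearer_pos) >= LEAD_M:
            return waypoint
    return remaining_path[-1]  # end of the path: the destination
```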

HoloLens virtual touchscreen is the futuristic tech we’ve been waiting for

Microsoft Research has developed a way to give HoloLens users a virtual touchscreen through a system called MRTouch. It gives users of Microsoft's mixed-reality headset an additional way to interact, complementing gesture, voice, and controller inputs, according to a Microsoft Research video demoing the MRTouch prototype.

Although Microsoft Research hasn't announced plans to bring MRTouch to market or to let third-party developers use the multi-touch interactions at this time, the good news is that it works on an unmodified Microsoft HoloLens headset. All a user needs to do is swipe a finger across a flat surface, such as a wall or tabletop, to create a virtual touchscreen. The virtual touch area can display content, and you interact with it using multi-touch gestures, much as you would with a tablet.
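Microsoft hasn't published MRTouch's internals, but the core idea of a virtual touchscreen can be sketched: once the headset has detected a flat surface, a tracked fingertip close enough to that plane becomes a 2D touch point. A minimal sketch under those assumptions; the plane representation and the 1 cm threshold are illustrative guesses, not MRTouch's actual method:

```python
import numpy as np

TOUCH_THRESHOLD_M = 0.01  # assume a fingertip within 1 cm counts as touching

def touch_point(fingertip, plane_origin, plane_u, plane_v):
    """Project a tracked 3D fingertip onto a detected plane spanned by
    orthonormal vectors plane_u and plane_v; return (u, v) surface
    coordinates in meters, or None if the finger is only hovering."""
    offset = np.asarray(fingertip, float) - np.asarray(plane_origin, float)
    normal = np.cross(plane_u, plane_v)       # unit normal of the surface
    if abs(np.dot(offset, normal)) > TOUCH_THRESHOLD_M:
        return None                           # hovering, not touching
    return float(np.dot(offset, plane_u)), float(np.dot(offset, plane_v))
```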

Leap Motion’s prototype augmented reality headset includes hand tracking

Leap Motion just revealed a new prototype headset that mixes hand tracking with augmented reality. The device, dubbed North Star, relies on ellipsoidal reflectors, which are typically sections cut from a larger, curved ellipsoid mirror. In this case, the reflectors are made of optical-grade acrylic coated with a thin layer of silver.
For the first version of Leap Motion's prototype, the company mounted these reflectors inside a huge head-mounted device seemingly ripped out of a 1980s mad scientist movie. On each side of the helmet's inner wall was a 5.5-inch LCD panel along with bulky ribbon cables connecting to display drivers mounted on the top of the HMD. Needless to say, it wasn't sleek and compact like Microsoft's HoloLens. 
"While it might seem a bit funny, it was perhaps the widest field of view, and the highest-resolution AR system ever made," the company says. "Each eye saw digital content approximately 105 degrees high by 75 degrees wide with a 60 percent stereo overlap, for a combined field of view of 105 degrees by 105 degrees with 1440 × 2560 resolution per eye." 
Sound confusing? Imagine putting on a helmet, and inside you see two reflective surfaces curving outward, with one end connecting close to the bridge of your nose, and the other end extending forward and out. Meanwhile, there is an LCD screen mounted to the left and right of your eyes, reflected on the curved mirrored surface. This provides a huge field of view both horizontally and vertically. 
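Those numbers are internally consistent, and it's worth seeing why: with a 60 percent stereo overlap, the second eye extends the combined view by only the 40 percent of its width that isn't shared. A quick check of the quoted figures:

```python
# Each eye sees a 75-degree-wide image, and 60 percent of that width is
# shared between the two eyes, so the combined horizontal field of view is:
per_eye_deg = 75
overlap_deg = 0.60 * per_eye_deg        # 45 degrees seen by both eyes
combined_deg = 2 * per_eye_deg - overlap_deg
print(combined_deg)                     # 105.0, matching the quoted figure
```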
Eventually, the team whittled the headset down from a mad-scientist contraption to something that looks more like a current VR headset. The prototype now relies on two fast-switching 3.5-inch LCD screens manufactured by BOE Displays and driven by an Analogix display driver. While it has fewer parts than the gargantuan first prototype, Leap Motion says it managed to preserve "most" of the natural field of view.
In its current state, Leap Motion's prototype provides a 1,600 x 1,440 resolution for each eye -- lower than the hulking first-gen model -- running at 120 frames per second. It also sports a field of view "covering over a hundred degrees" with the two screens combined. Meanwhile, the hand-tracking runs at 150 frames per second over a 180- x 180-degree field of view.  
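A little arithmetic on the article's own figures puts the trade-off in perspective: the current displays draw about 37 percent fewer pixels per eye than the first prototype's, yet at 120 frames per second they still push over half a billion pixels per second across both eyes:

```python
# Per-eye pixel counts of the two prototypes, and the pixel throughput
# the current displays imply at 120 frames per second.
gen1_pixels = 1440 * 2560             # 3,686,400 pixels per eye
gen2_pixels = 1600 * 1440             # 2,304,000 pixels per eye (lower, as stated)
pixels_per_second = gen2_pixels * 2 * 120
print(gen1_pixels, gen2_pixels, pixels_per_second)  # throughput: 552,960,000
```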
According to the post, the team pulled the reflectors away from the user's face just slightly to make room for a disassembled wide field-of-view camera manufactured by Logitech to record the augmented reality headset in action. But they're still not done -- other refinement tasks include better cable management, better curves, additional room for enclosed electronics and sensors, and more. 
Once Leap Motion makes those refinements, there are other details that could improve the headset's AR experience, such as inward-facing cameras for precise augmented image alignment, face tracking, and eye tracking. Head-mounted ambient light sensors would support 360-degree lighting estimation while directional speakers would provide "discrete" localized audio feedback. Micro-actuators could be used to adjust the displays. 
Leap Motion plans to release this design to the open source community next week.  
