Videos

Real-Time Volumetric 3D Capture of Room-Sized Scenes for Telepresence

Video corresponding to the paper “Real-Time Volumetric 3D Capture of Room-Sized Scenes for Telepresence” by A. Maimone and H. Fuchs, to appear at 3DTV-Con 2012 (Zurich, Switzerland, 15-17 Oct 2012). Video prepared by Andrew Maimone.

Video
Enhanced 3D Capture of Room-sized Dynamic Scene with Commodity Depth Cameras

In this project, we designed a system to capture the enhanced 3D structure of a room-sized dynamic scene with commodity depth cameras, such as Microsoft Kinects. Our system incorporates temporal information to achieve a noise-free and complete 3D capture of the entire room. More specifically, we pre-scan the static parts of the room offline and track their movements online. For dynamic objects, we perform non-rigid alignment between frames and accumulate data over time. Our system also handles topology changes of the objects and their interactions. Video prepared by Mingsong Dou.
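To make the accumulate-over-time step concrete, here is a minimal sketch of temporal depth fusion as a per-pixel weighted running average. It is only an illustration under simplifying assumptions (frames already aligned to a common grid, rigidly for the pre-scanned static parts or non-rigidly for dynamic objects), not the system's actual pipeline, and the names (`fuse_depth`, `max_weight`) are hypothetical.

```python
import numpy as np

def fuse_depth(accum_depth, accum_weight, new_depth, max_weight=50.0):
    """Fold a new, already-registered depth frame into a running model.

    accum_depth  : per-pixel depth estimate built from previous frames
    accum_weight : per-pixel confidence (observation count, capped)
    new_depth    : current registered frame; 0 marks invalid pixels
    Averaging many frames suppresses per-frame sensor noise and fills
    holes; capping the weight lets the model adapt when the scene changes.
    """
    w = (new_depth > 0).astype(np.float64)      # 1 where the frame has data
    total = accum_weight + w
    fused = np.where(total > 0,
                     (accum_depth * accum_weight + new_depth * w)
                     / np.maximum(total, 1e-9),
                     0.0)
    return fused, np.minimum(total, max_weight)

# Usage: fold a stream of registered frames into one low-noise depth map.
model, weight = np.zeros((480, 640)), np.zeros((480, 640))
for frame in np.random.uniform(1.0, 3.0, size=(10, 480, 640)):  # stand-in frames
    model, weight = fuse_depth(model, weight, frame)
```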

Video

Augmented Reality Telepresence with 3D Scanning and Optical See-Through HMDs

Video corresponding to the paper “General-Purpose Telepresence with Head-Worn Optical See-Through Displays and Projector-Based Lighting” by A. Maimone, X. Yang, N. Dierk, A. State, M. Dou, and H. Fuchs, to appear in IEEE Virtual Reality 2013 (Orlando, FL, USA, March 16-23, 2013).
In the video, a system is demonstrated that adapts to a wide variety of telepresence scenarios. By combining Kinect-based 3D scanning with optical see-through HMDs, we can precisely control which parts of the scene are real and which are virtual or remote. For example, the remote participant can appear *inside* the local environment (e.g., seated at a table), and the local user can move his head to look around him as if the two were actually sharing the same table. The remote user’s environment can also appear to *extend* beyond the local environment, as if the two spaces were adjoined, or the local user can become totally *immersed* in the remote environment. Projector-based lighting control illuminates only the local objects that are not occluded by remote objects, allowing those virtual objects to appear opaque. Kinect-based depth sensing also supports occlusion of virtual objects by real objects: only the parts of virtual objects that are closer to the viewer than the real surface are drawn. Video prepared by Andrew Maimone.
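The occlusion and lighting logic above reduces to a per-pixel depth comparison. Below is a minimal sketch, assuming color and depth buffers already registered to the viewer’s perspective; the function and buffer names are hypothetical, not the paper’s code. A virtual pixel is drawn only where it is nearer than the real surface, and the projector mask lights only real surfaces not covered by virtual content.

```python
import numpy as np

def composite_and_light(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel occlusion test between real and virtual/remote content.

    All buffers are registered to the viewer's eye; a depth of 0 means
    'nothing there'. A virtual pixel wins only where it exists and is
    closer than the real surface, so real objects occlude virtual ones.
    """
    virt_there = virt_depth > 0
    real_there = real_depth > 0
    show_virtual = virt_there & (~real_there | (virt_depth < real_depth))

    # Display image for the see-through HMD: virtual content where it wins.
    out = np.where(show_virtual[..., None], virt_rgb, real_rgb)

    # Projector mask: illuminate only real surfaces NOT hidden behind
    # virtual content, so virtual objects can appear opaque.
    light_mask = real_there & ~show_virtual
    return out, light_mask

# Usage with stand-in buffers (depth in meters): the left half holds a
# virtual object at 1.5 m in front of a real wall at 2.0 m, so it occludes.
h, w = 480, 640
real_rgb, real_depth = np.random.rand(h, w, 3), np.full((h, w), 2.0)
virt_rgb, virt_depth = np.random.rand(h, w, 3), np.zeros((h, w))
virt_depth[:, : w // 2] = 1.5
frame, mask = composite_and_light(real_rgb, real_depth, virt_rgb, virt_depth)
```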