Physics-based collaboration in a shared virtual environment

To support physics-based collaboration in a shared virtual environment, both architectural and implementation extensions of the Laboratory's virtual reality simulator are scheduled as maintenance activities. These will allow more users to interact immersively, including interfacing with the rich stream of multi-person tracking data. The first implementation targets local operation, with participants co-located in the Laboratory; the next one is designed to test and demonstrate remote collaboration between properly equipped laboratories networked with the CIRA Laboratory. The geographically distributed version will also address the latency between input and output (the effect visually perceived by each concurrent user), in order to minimize the annoying and inhibiting simulator sickness.

A large-area multi-person tracking system will enable a new, advanced contactless optical skeleton tracking of the participants in the collaborative immersive experience (two or more people). It will consist of four Microsoft Kinect v2 units, a modern RGB + depth camera that provides real-time tracking of the main body parts (skeleton) of up to six people at once. Each Kinect unit will cover a single tracking quadrant, and its data, after an initial co-registration, will be fused into a single data set used to drive the motion of the virtual mannequins in the common virtual environment that act as the avatars of each participant in the immersive collaborative simulation. Being able to see the other participants through their mannequin-avatars, animated in real time by the corresponding tracking data, is decisive in a collaborative environment. It is important that this representation be as realistic and life-like as possible, and achieving that requires handling, in real time, all the main Kinect v2 data sources: the depth map, the high-level skeleton data, and the HD color video stream.
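The following minimal sketch illustrates the kind of fusion step described above: skeleton joints reported by several sensors are brought into a common reference frame via their co-registration transforms and merged by confidence weighting. Sensor names, calibration matrices, and the weighting scheme are illustrative assumptions, not the Laboratory's actual pipeline or the Kinect SDK.

    # Hypothetical sketch: fusing skeleton joints seen by several depth sensors
    # (e.g. four Kinect v2 units, one per tracking quadrant) into one skeleton
    # expressed in a common world frame. All names and values are placeholders.
    import numpy as np

    def to_common_frame(joints_xyz, extrinsic):
        """Apply a sensor's 4x4 co-registration transform to (N, 3) joint positions."""
        homogeneous = np.hstack([joints_xyz, np.ones((joints_xyz.shape[0], 1))])
        return (homogeneous @ extrinsic.T)[:, :3]

    def fuse_skeletons(per_sensor_joints, per_sensor_confidence, extrinsics):
        """Confidence-weighted average of the same skeleton seen by several sensors.

        per_sensor_joints:     dict sensor_id -> (N, 3) joints in the sensor frame
        per_sensor_confidence: dict sensor_id -> (N,) per-joint tracking confidence
        extrinsics:            dict sensor_id -> (4, 4) sensor-to-world transform
                               obtained from the initial co-registration step
        """
        weighted_sum, weight_total = None, None
        for sensor_id, joints in per_sensor_joints.items():
            world_joints = to_common_frame(joints, extrinsics[sensor_id])
            w = per_sensor_confidence[sensor_id][:, None]  # shape (N, 1)
            weighted_sum = w * world_joints if weighted_sum is None else weighted_sum + w * world_joints
            weight_total = w if weight_total is None else weight_total + w
        # Joints tracked by no sensor keep weight 0; guard against division by zero.
        return weighted_sum / np.maximum(weight_total, 1e-6)

    if __name__ == "__main__":
        # Minimal usage example: two sensors observing a 3-joint skeleton.
        rng = np.random.default_rng(0)
        skeleton = rng.normal(size=(3, 3))
        identity = np.eye(4)
        fused = fuse_skeletons(
            {"kinect_A": skeleton, "kinect_B": skeleton + 0.01},
            {"kinect_A": np.ones(3), "kinect_B": np.full(3, 0.5)},
            {"kinect_A": identity, "kinect_B": identity},
        )
        print(fused)

In a real setup the extrinsic transforms would come from the initial co-registration of the four units, and the fused joints would then drive the mannequin-avatars each frame; the confidence weighting simply downplays a sensor that sees a joint poorly (e.g. occluded or at the edge of its quadrant).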
