Journal
ACM TRANSACTIONS ON APPLIED PERCEPTION
Volume 15, Issue 2
Publisher
ASSOC COMPUTING MACHINERY
DOI: 10.1145/3147884
Keywords
Immersive virtual environments; stereo displays; joint action; pedestrian road crossing; co-occupied virtual environments; large screen VE; joint affordance
Funding
- National Science Foundation [BCS-1251694, CNS-1305131]
- US Department of Transportation, Research and Innovative Technology Administration, Prime DFDA [20.701, DTRT13-G-UTC53]
We investigated how two people jointly coordinate their decisions and actions in a co-occupied, large-screen virtual environment. Participants physically crossed a virtual road with continuous traffic without being hit by a car, either alone or with another person (see Figure 1). Two separate streams of non-stereo images were generated from the dynamic locations of the two viewers' eye-points, and the stereo shutter glasses were programmed to pass a single image stream to each viewer, so that each viewer saw perspectively correct non-stereo images for his or her own eye-point. We found that participant pairs often crossed through the same gap and closely synchronized their movements while crossing. Pairs also chose larger gaps than individuals did, presumably to accommodate the extra time needed to cross through a gap together. These results demonstrate how two people interact and coordinate their behaviors when performing whole-body joint actions in a co-occupied virtual environment. The study also provides a foundation for future studies of joint action in shared VEs in which participants are represented by graphic avatars.
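The two-viewer display scheme described in the abstract (one perspectively correct, non-stereo image stream per tracked viewer, time-multiplexed through shutter glasses) can be sketched as follows. This is a minimal illustrative simulation under assumed conventions, not the authors' implementation: the function names, the even/odd frame assignment, and the hypothetical eye-point coordinates are all assumptions made for illustration.

```python
# Sketch of two-viewer frame multiplexing: the display alternates between
# images rendered for each tracked viewer's eye-point, and each pair of
# shutter glasses opens only on the frames rendered for its wearer.
# All names and coordinates here are hypothetical.

def render_frame(frame_index, eye_points):
    """Render one display frame. Even frames are generated for viewer 0,
    odd frames for viewer 1, each from that viewer's tracked eye-point.
    A real renderer would set the camera to `eye` and draw the scene;
    here we just record which viewer the frame was generated for."""
    viewer = frame_index % 2
    eye = eye_points[viewer]
    return {"frame": frame_index, "viewer": viewer, "eye_point": eye}

def frames_seen_by(viewer, frames):
    """Shutter glasses pass only the frames rendered for their wearer,
    so each viewer perceives a single non-stereo image stream."""
    return [f for f in frames if f["viewer"] == viewer]

# Hypothetical head-tracked eye-points (metres) for the two viewers.
eye_points = {0: (-0.5, 1.6, 2.0), 1: (0.5, 1.6, 2.0)}
frames = [render_frame(i, eye_points) for i in range(8)]
```

Each viewer thus receives every second display frame, rendered from his or her own eye-point; the cost of supporting two tracked viewers is a halved per-viewer frame rate.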
Authors