Detecting speaking intentions in multi-user VR environments can facilitate turn-taking, making group interactions in VR more effective. This study aims to recognize speaking intentions from motion and gaze data captured by VR devices during interactions involving multiple participants. Through in-depth statistical analysis, we identified head and right-hand features associated with speaking intentions and found that motion and gaze features exhibit different temporal dependencies. We show that these features enable effective detection of speaking intentions: a random forest (RF) classifier achieves the highest F1 score of 0.824 when motion and gaze features extracted with different data window sizes are combined.
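A minimal sketch of the kind of pipeline the abstract describes, assuming scikit-learn is available. The channel counts, window sizes, and synthetic data below are hypothetical stand-ins; the paper's actual features come from VR head, right-hand, and gaze streams, and its reported F1 of 0.824 is not reproduced here.

```python
# Hypothetical sketch: windowed motion/gaze statistics fed to a
# random forest classifier for speaking-intention detection.
# All shapes, window sizes, and data here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

def window_features(stream, window):
    """Summarize the most recent `window` frames of a time series
    as per-channel mean and standard deviation."""
    seg = stream[-window:]
    return np.concatenate([seg.mean(axis=0), seg.std(axis=0)])

# Synthetic stand-in data: 500 samples, each with 90 frames of
# head/right-hand motion (6 channels) and gaze (2 channels).
n_samples = 500
motion = rng.normal(size=(n_samples, 90, 6))
gaze = rng.normal(size=(n_samples, 90, 2))
labels = rng.integers(0, 2, size=n_samples)  # 1 = speaking intention

# Mix motion and gaze features extracted at *different* window sizes,
# mirroring the abstract's finding of distinct temporal dependencies
# (the specific sizes 60 and 30 are assumptions, not the paper's values).
X = np.array([
    np.concatenate([window_features(m, 60), window_features(g, 30)])
    for m, g in zip(motion, gaze)
])

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("F1:", f1_score(y_test, clf.predict(X_test)))
```

On random labels this scores near chance; the point is only the structure: per-window summary statistics from each modality, concatenated at modality-specific window sizes, then classified with an RF.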