Within-Object Intention Prediction Model

2024.02.02
We present a novel Within-Object Intention Prediction model for bare-hand interactions in VR (e.g., manipulation tasks). Based on (a) the user's hand skeleton, extracted from the Oculus Quest, and (b) grasp taxonomies, (c) we analyze and extract four key geometrical features that capture the user's grasp behaviour. (d) We exploit these features to generate planes onto which the user's skeleton is projected. (e) These planes act as cut sections over a virtual object of interest and predict the user's future contact locations prior to the interaction. (f) The user then interacts with the virtual object at the predicted positions. A minimal geometric sketch of this projection idea is given below.
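
The snippet below is a minimal, hypothetical illustration (in Python/NumPy) of the plane-projection idea from steps (d)-(e): fit a plane to a few hand joints, project the fingertips onto it, and look up nearby surface points of the object as candidate contact locations. The chosen joint sets, the least-squares plane fit and the nearest-vertex lookup are assumptions for illustration, not the paper's actual feature set or intersection method.

```python
# Minimal sketch (not the authors' implementation): build a "cut-section" plane
# from hand-skeleton joints and project fingertip positions onto it to estimate
# candidate contact locations on a target mesh. Joint sets, the plane fit and
# the nearest-vertex lookup are illustrative assumptions.
import numpy as np

def fit_plane(points):
    """Least-squares plane through a set of 3D points: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # The normal is the singular vector with the smallest singular value
    # of the centred point cloud.
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def project_onto_plane(points, origin, normal):
    """Orthogonal projection of points onto the plane defined by (origin, normal)."""
    offsets = (points - origin) @ normal
    return points - np.outer(offsets, normal)

def predicted_contacts(fingertips, palm_joints, mesh_vertices, max_dist=0.15):
    """For each fingertip projected onto the grasp plane, pick the closest mesh
    vertex within max_dist metres as a naive stand-in for the predicted contact."""
    origin, normal = fit_plane(palm_joints)            # grasp "cut-section" plane
    proj = project_onto_plane(fingertips, origin, normal)
    contacts = []
    for p in proj:
        d = np.linalg.norm(mesh_vertices - p, axis=1)
        i = int(np.argmin(d))
        contacts.append(mesh_vertices[i] if d[i] < max_dist else None)
    return contacts
```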

We present a novel computational model that favours bare-hand interactions with haptic technologies in Virtual Reality environments. Using grasp taxonomies defined in the literature, we break users' gestures down into four key geometrical features and develop a model that dynamically predicts the locations of users' future within-object grasp intentions. The model supports a wide range of grasps, including precision and power grasps, pulling and pushing, as well as two-handed interactions. Moreover, its implementation requires neither calibration (parameter-free, user-independent) nor specific devices such as eye-trackers. We evaluate the model in a user study involving various shapes, sizes and gestures. The results show that our model predicts future interaction locations with high accuracy (error below 30 mm) more than one second prior to interaction. Finally, we propose use cases for our model, such as redirection techniques or encountered-type haptic devices.
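
As a rough illustration of how such an accuracy figure can be computed, the hypothetical snippet below bins the Euclidean error between predicted and actual contact points by time remaining before contact. The trial data layout is an assumption for illustration only and is not taken from the study.

```python
# Hypothetical evaluation sketch: mean Euclidean prediction error (in mm),
# bucketed by time remaining before the actual contact. The trial format
# (timestamped predictions plus a ground-truth contact point) is assumed.
import numpy as np

def error_by_time_to_contact(trials, bin_edges=(0.0, 0.5, 1.0, 1.5, 2.0)):
    """trials: iterable of (times_to_contact, predicted_points, true_point)."""
    bins = [[] for _ in range(len(bin_edges) - 1)]
    for ttc, preds, true_pt in trials:
        # Error of each prediction against the ground-truth contact, in millimetres.
        errs = np.linalg.norm(np.asarray(preds) - np.asarray(true_pt), axis=1) * 1000.0
        idx = np.digitize(ttc, bin_edges) - 1
        for i, e in zip(idx, errs):
            if 0 <= i < len(bins):
                bins[i].append(e)
    return [float(np.mean(b)) if b else None for b in bins]
```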