We present a novel computational model to facilitate bare-hand interactions with haptic technologies in Virtual Reality environments. Using grasp taxonomies defined in the literature, we break users' gestures down into four key geometrical features and develop a model that dynamically predicts the locations on an object where users intend to grasp it. The model supports a wide range of grasps, including precision and power grasps, as well as pulling, pushing, and two-handed interactions. Moreover, its implementation requires neither calibration (it is parameter-free and user-independent) nor dedicated devices such as eye trackers. We evaluate the model in a user study involving various shapes, sizes, and gestures. The results show that our model predicts future interaction locations accurately (error below 30 mm) more than one second prior to interaction. Finally, we propose use cases for our model, such as redirection techniques and encountered-type haptic devices.
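To make the prediction task concrete, the following is a minimal, hypothetical sketch of such a prediction loop, not the model described above: it extrapolates the tracked hand position along its current velocity over an assumed one-second horizon and snaps the result to the closest surface point of the target object. All names, the look-ahead horizon, and the constant-velocity assumption are illustrative choices, not taken from the paper.

```python
import numpy as np

def predict_grasp_location(hand_pos, hand_vel, object_vertices, horizon=1.0):
    """Return the object surface point closest to the extrapolated hand position.

    hand_pos, hand_vel : (3,) arrays in metres and metres/second.
    object_vertices    : (N, 3) array of points sampled on the object's surface.
    horizon            : look-ahead time in seconds (assumed value).
    """
    extrapolated = hand_pos + horizon * hand_vel           # naive constant-velocity extrapolation
    dists = np.linalg.norm(object_vertices - extrapolated, axis=1)
    return object_vertices[np.argmin(dists)]                # predicted within-object contact point

# Example: a hand moving toward a small cube represented by sampled surface points
cube = np.array([[x, y, z] for x in (0.4, 0.5) for y in (0.0, 0.1) for z in (0.0, 0.1)])
print(predict_grasp_location(np.array([0.0, 0.05, 0.05]),
                             np.array([0.45, 0.0, 0.0]), cube))
```

Such a predicted contact point could then be fed to a redirection technique or used to reposition an encountered-type haptic device ahead of contact.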