We present a novel computational model to enable bare-hands interaction with haptic technologies in Virtual Reality environments. Building on grasp taxonomies defined in the literature, we break users' gestures down into four key geometrical features and develop a model that dynamically predicts users' future within-object grasp locations. The model supports a wide range of grasps, including precision and power grasps, pulling and pushing, as well as two-handed interactions. Moreover, its implementation requires neither calibration (no parameters, user-independent) nor specific…