
CoVR

Published 2020.07.06 · Updated 2024.02.02
Research Projects
Elodie Bouzbib

A Large-Scale Force-Feedback Robotic Interface for Non-Deterministic Scenarios in VR

FULL PDF
PREVIEW VIDEO
FULL-LENGTH VIDEO
This teaser demonstrates the use of CoVR in four scenarios; in all of them, the user wears an Oculus Rift S head-mounted display. In the first, the user pushes on one of CoVR's panels while her virtual avatar pushes on a brick wall. In the second, the user leans on CoVR's panels while her avatar leans on a chimney (a reference to Harry Potter). In the third, the user is pulled by CoVR; the background shows a park/forest reflecting the user's viewpoint in virtual reality, suggesting that she is flying above it. In the fourth, the user is transported from one location of the room to another, with an arrow showing CoVR's displacement.

CoVR is a physical column mounted on a 2D Cartesian ceiling robot that provides strong kinesthetic feedback (over 100 N) in a room-scale VR arena.
The column's panels are interchangeable, and its movements can safely reach any location in the VR arena thanks to XY displacements and trajectory generation that avoids collisions with the user.
When CoVR is static, it can resist body-scale user actions, such as (A) pushing with high force on a tangible rigid wall or (B) leaning on it.
When CoVR is dynamic, it can act on users: (C) it can pull them to provide large force feedback or even (D) transport them.

15-minute Presentation
5-minute Presentation

All the details regarding CoVR and the interactions it enables will be available soon. In the meantime, the technical aspects of CoVR and its control are described in the following paragraphs.
A UnityPackage to replicate our controls/models is available below.

In the paper, we present a model to control the robot's displacements. While the Cartesian structure (XY displacements) makes trajectories easy to generate, the algorithm's inputs need to be defined for scenarios involving multiple objects of interest, and safety measures around the user need to be implemented.

The main idea is to attach the robot to a virtual proxy (a ball with mass and gravity) with a spring-damper model.
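As a rough, minimal Unity-style sketch of such a coupling (the class name, gains, and coupling details below are illustrative assumptions, not taken from the released package), the robot setpoint can be made to chase the proxy ball through a spring-damper link:

```csharp
using UnityEngine;

// Illustrative sketch: the Cartesian robot's setpoint follows a
// physics-driven proxy ball through a spring-damper link, which
// naturally smooths the commanded trajectories.
public class RobotProxyCoupling : MonoBehaviour
{
    public Rigidbody proxyBall;    // virtual proxy: a ball with mass and gravity
    public Transform robotTarget;  // setpoint sent to the XY ceiling robot
    public float stiffness = 80f;  // spring constant k (assumed value)
    public float damping = 15f;    // damper constant c (assumed value)

    Vector3 robotVelocity;

    void FixedUpdate()
    {
        // Spring-damper acceleration: a = k * (x_proxy - x_robot) - c * v_robot
        Vector3 toProxy = proxyBall.position - robotTarget.position;
        Vector3 accel = stiffness * toProxy - damping * robotVelocity;

        robotVelocity += accel * Time.fixedDeltaTime;
        robotVelocity.y = 0f;  // the ceiling rig only moves in the horizontal plane
        robotTarget.position += robotVelocity * Time.fixedDeltaTime;
    }
}
```

The spring-damper acts as a low-pass filter on the proxy's motion, so abrupt changes in the virtual scene never translate into abrupt robot movements.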

The ball's displacements depend on (1) the user's location, to avoid collisions; (2) the user's intentions; and (3) the progress of the scenario, to attract the ball towards the objects users are most likely to interact with next. A key contribution of our trajectory generation model is a low-computation user intention model that works with common HMDs.
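For instance, one cheap intention estimate can be computed every frame from the head pose that any common HMD already reports. The sketch below is a simplified assumption, not the paper's actual equation (which ships in the UnityPackage): it scores each virtual object of interest by head-gaze alignment and proximity.

```csharp
using UnityEngine;

// Illustrative sketch of a low-computation intention score: nearer VOIs
// that the user is looking at receive higher weights.
public class IntentionScore : MonoBehaviour
{
    public Transform head;  // HMD pose, e.g. the tracked camera of an Oculus Rift S

    public float Score(Vector3 voiPosition)
    {
        Vector3 toVoi = voiPosition - head.position;
        float distance = toVoi.magnitude;

        // Alignment in [0, 1]: 1 when the user looks straight at the VOI.
        float alignment = Mathf.Max(0f, Vector3.Dot(head.forward, toVoi.normalized));

        // Nearer, better-aligned objects score higher (the blend is assumed).
        return alignment / (1f + distance);
    }
}
```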

Control algorithm relying on a physical model: (a) the virtual proxy of the physical CoVR column is connected by springs to all virtual objects of interest (VOIs), with weights depending on the users' intention to interact with them, while the user and other forbidden zones are covered by a rigid cone-like obstacle that repels the proxy; (b) whenever the user is about to interact with a VOI, the proxy (and thus CoVR) moves towards it while naturally avoiding obstacles (e.g. the user).
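Read as equations, the caption above suggests that the net force on the proxy is a weighted sum of spring pulls towards the VOIs plus a repulsive term near the user. The sketch below is one plausible, simplified reading; all names, gains, and the linear repulsion profile are assumptions rather than the paper's exact model.

```csharp
using UnityEngine;

// Illustrative sketch of the proxy's net force: weighted springs toward
// each VOI plus a repulsive push out of the user's safety zone.
public class ProxyForceModel : MonoBehaviour
{
    public Rigidbody proxyBall;
    public Transform[] vois;            // virtual objects of interest
    public float[] intentionWeights;    // e.g. outputs of an intention model
    public Transform userObstacle;      // centre of the zone around the user
    public float springGain = 20f;      // assumed gain
    public float repulsionGain = 200f;  // assumed gain
    public float safeRadius = 1.5f;     // assumed repulsion range, in metres

    void FixedUpdate()
    {
        Vector3 force = Vector3.zero;

        // Attraction: every VOI pulls the proxy, scaled by its intention weight.
        for (int i = 0; i < vois.Length; i++)
            force += intentionWeights[i] * springGain *
                     (vois[i].position - proxyBall.position);

        // Repulsion: push the proxy out of the zone around the user.
        Vector3 away = proxyBall.position - userObstacle.position;
        if (away.magnitude < safeRadius)
            force += repulsionGain * (safeRadius - away.magnitude) * away.normalized;

        proxyBall.AddForce(force);
    }
}
```

Because repulsion grows as the proxy gets closer to the user, the ball (and hence CoVR) routes around the user instead of crossing the forbidden zone.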

The 3D scenes from our technical evaluation (Data Collection and Simulation) are available by clicking the "Download" button below. They are packaged as a UnityPackage containing:

  • 3D scenes and models (user obstacle, proxy, scripts);
  • data from our 6 participants;
  • all the simulation files;
  • the attached Jupyter Notebook to analyse and compare the intention parameters.

Other intention models can be implemented by changing the equation in the script “Object_Of_Interest”.
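For instance, an alternative equation might blend head-gaze alignment with the user's walking direction. The helper below is purely illustrative; its name and signature are hypothetical and would need to be adapted to the actual structure of the released script.

```csharp
using UnityEngine;

// Hypothetical alternative intention equation (illustrative only).
public static class AlternativeIntention
{
    public static float Score(Vector3 voiPosition, Vector3 headPosition,
                              Vector3 headForward, Vector3 headVelocity)
    {
        Vector3 toVoi = (voiPosition - headPosition).normalized;

        // Is the user looking at the VOI?
        float gaze = Mathf.Max(0f, Vector3.Dot(headForward, toVoi));

        // Is the user walking towards the VOI?
        float approach = headVelocity.sqrMagnitude > 1e-6f
            ? Mathf.Max(0f, Vector3.Dot(headVelocity.normalized, toVoi))
            : 0f;

        return 0.7f * gaze + 0.3f * approach;  // assumed blend weights
    }
}
```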

Download

Research Projects
Tags: Actuated device, Encountered-type of Haptic interface, Kinesthetic Feedback, Robotic Graphics, Robotic interface, Virtual reality

