RoSA: Evaluation of Touch and Speech Input Modalities for On-Site HRI and Telerobotics

Exploring hybrid local and remote human–robot interaction through touch and speech input with RoSA 3.

Dominykas Strazdas, Matthias Busch, Rijin Shaji, Ingo Siegert, Ayoub Al-Hamadi

Bridging Local and Remote Human–Robot Interaction with RoSA 3

This paper investigates hybrid interaction models for collaborative robots, combining on-site and telerobotic control through the same interface. Building on earlier RoSA frameworks, the study evaluates touch and speech input modalities to understand their strengths and trade-offs in both local and remote settings.

Key features of RoSA 3:

  • Supports touchscreen and speech input through a unified interaction concept
  • Evaluates two robots: Rosa (UR5e with gripper) and Ari (humanoid with touchscreen)
  • Incorporates a React-based touchscreen UI integrated with ROS and a local speech understanding pipeline using Picovoice SLU (a recognition loop of this kind is sketched after this list)
  • Includes a new Commander module for managing and fusing multimodal commands (an illustrative fusion scheme follows the first sketch)
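
The on-device speech pipeline is built on Picovoice SLU (the Rhino speech-to-intent engine). As a rough, hypothetical sketch, not the authors' code, a Rhino recognition loop in Python typically looks like the following; the access key, context file, and intent/slot names are placeholders:

import pvrhino
from pvrecorder import PvRecorder

# Minimal on-device speech-to-intent loop with Picovoice Rhino.
# Access key, context file, and intent/slot names are placeholders,
# not the actual RoSA 3 configuration.
rhino = pvrhino.create(
    access_key="YOUR_PICOVOICE_ACCESS_KEY",  # placeholder
    context_path="rosa_commands.rhn",        # hypothetical context file
)

recorder = PvRecorder(frame_length=rhino.frame_length)
recorder.start()

try:
    while True:
        pcm = recorder.read()          # one frame of 16-bit audio samples
        if rhino.process(pcm):         # True once an utterance is finalized
            inference = rhino.get_inference()
            if inference.is_understood:
                # e.g. intent="pick_object", slots={"object": "cube"}
                print(inference.intent, inference.slots)
finally:
    recorder.stop()
    recorder.delete()
    rhino.delete()

Because both the wake-word context and the inference run locally, no audio leaves the robot, which is what makes the same pipeline usable on-site and remotely.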
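
The Commander module arbitrates between commands arriving from the touch UI and the speech pipeline. The paper specifies the actual fusion strategy; the sketch below only shows one plausible scheme (the newest command wins, and near-simultaneous conflicting commands are discarded), with all names and the 0.4 s window invented for illustration:

import time
from dataclasses import dataclass, field
from typing import Optional

CONFLICT_WINDOW_S = 0.4  # illustrative value, not from the paper

@dataclass
class Command:
    modality: str   # "touch" or "speech"
    intent: str     # e.g. "pick", "place", "stop"
    timestamp: float = field(default_factory=time.monotonic)

class Commander:
    """Hypothetical multimodal arbiter; not the actual RoSA 3 logic."""

    def __init__(self) -> None:
        self._pending: Optional[Command] = None

    def submit(self, cmd: Command) -> None:
        prev = self._pending
        if (prev is not None
                and prev.intent != cmd.intent
                and cmd.timestamp - prev.timestamp < CONFLICT_WINDOW_S):
            # Two modalities disagree almost simultaneously:
            # drop both and wait for an unambiguous command.
            self._pending = None
            return
        self._pending = cmd  # otherwise the newest command wins

    def next_command(self) -> Optional[Command]:
        cmd, self._pending = self._pending, None
        return cmd

commander = Commander()
commander.submit(Command("speech", "pick"))
commander.submit(Command("touch", "pick"))  # agrees with speech: kept
print(commander.next_command())             # touch "pick" command

Centralizing arbitration in one module keeps the touch and speech front ends independent of each other; how RoSA 3 actually resolves conflicting input is detailed in the paper.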


Full-Text Access

Accepted for publication in Frontiers in Robotics and AI; the full text will be available soon.


Citing

@article{RoSA_Frontiers2024,
  author={Strazdas, Dominykas and Busch, Matthias and Shaji, Rijin and Siegert, Ingo and Al-Hamadi, Ayoub},
  title={Robot System Assistant (RoSA): Evaluation of Touch and Speech Input Modalities for On-Site HRI and Telerobotics},
  journal={Frontiers in Robotics and AI},
  year={2024},
  note={in press}
}