Implicit mobile human-robot communication for spatial action coordination with action-specific semantic environment modelling

With the increasing use of robots in industrial and everyday contexts, the need for systems that can interact flexibly and cooperatively with humans is growing. Collaborative robots (cobots) in particular, which share a common workspace with humans and can react to complex, dynamic environments, are coming into focus. A central challenge here is the context-related perception of the environment and the appropriate interpretation of human behavior.

Funding

The project SEMIAC - Implicit mobile human-robot communication for spatial action coordination with action-specific semantic environment modelling (Implizite mobile Mensch-Roboter-Kommunikation für die räumliche Handlungskoordination mit aktionsspezifischer semantischer Umgebungsmodellierung) is funded by the German Research Foundation (DFG) under grant No. 502483052, with a planned duration of three years (2023 to 2026).

Publications

Multi-Head Attention-Based Framework with Residual Network for Human Action Recognition
May 06, 2025
We propose a novel HAR framework integrating residual networks, Bi-LSTM, and multi-head attention with a motion-based frame selection strategy. It...
IM HERE – Interaction Model for Human Effort based Robot Engagement
April 29, 2025
We present our novel engagement modeling framework IMHERE, designed to improve human-robot interaction by capturing relational dynamics using a formal...
Automating 3D Dataset Generation with Neural Radiance Fields
April 29, 2025
We present a fully automated pipeline that leverages Radiance Field–based universal 3D representations to rapidly generate high-quality models of arbitrary...
Toward Truly Intelligent Autonomous Systems: A Taxonomy of LLM Integration for Everyday Automation
April 28, 2025
With the rapid development of large language models (LLMs), their integration into autonomous systems has become essential. This integration significantly...
Eye Contact Based Engagement Prediction for Efficient Human-Robot Interaction
April 28, 2025
We introduce a new approach to predict human engagement in human-robot interactions (HRI), focusing on eye contact and distance information....
Mobile Robot Navigation with Enhanced 2D Mapping and Multi-Sensor Fusion
April 10, 2025
We propose an enhanced SLAM framework that combines RGB-D and 2D LiDAR data using late fusion and adaptive resampling for...
Fine-grained gaze estimation based on the combination of regression and classification losses
September 03, 2024
We present our recent work on gaze estimation. Our approach is a novel two-branch CNN architecture with a multi-loss approach...