ERDF research project

ENABLING

Resilient Human-Robot Collaboration in Mixed-Skill Environments

ENABLING (Resilient Human-Robot Collaboration in Mixed-Skill Environments) addresses the development of AI methods that complement the skills of robots and humans. It thereby enables research innovations in cross-sectional areas of IT and key enabling technologies, and forms the basis for future applications in the lead markets. The challenges lie, first, at the interface between robotics and AI; second, in the complexity of tasks in a mixed-skill environment; and third, in resilient and responsible collaboration. These are to be met by developing key technologies for:

1. robust recording of the affective user state,
2. semantic environment analysis,
3. intention-based interpretation of user actions, and
4. generative models for capturing complex behavior in mixed-skill environments.

The project is funded by the European Regional Development Fund (ERDF) under grant No. ZS/2023/12/182056, with a planned duration of four years (2024 to 2027).

Multi-Head Attention-Based Framework with Residual Network for Human Action Recognition
May 06, 2025
We propose a novel HAR framework integrating residual networks, Bi-LSTM, and multi-head attention with a motion-based frame selection strategy. It...
IM HERE – Interaction Model for Human Effort based Robot Engagement
April 29, 2025
We present our novel engagement modeling framework IMHERE, designed to improve human-robot interaction by capturing relational dynamics using a formal...
Automating 3D Dataset Generation with Neural Radiance Fields
April 29, 2025
We present a fully automated pipeline that leverages Radiance Field–based universal 3D representations to rapidly generate high-quality models of arbitrary...
Toward Truly Intelligent Autonomous Systems: A Taxonomy of LLM Integration for Everyday Automation
April 28, 2025
With the rapid development of large language models (LLMs), their integration into autonomous systems has become essential. This integration significantly...
Eye Contact Based Engagement Prediction for Efficient Human-Robot Interaction
April 28, 2025
We introduce a new approach to predict human engagement in human-robot interactions (HRI), focusing on eye contact and distance information....
Mobile Robot Navigation with Enhanced 2D Mapping and Multi-Sensor Fusion
April 10, 2025
We propose an enhanced SLAM framework that combines RGB-D and 2D LiDAR data using late fusion and adaptive resampling for...
Fine-Grained Gaze Estimation Based on the Combination of Regression and Classification Losses
September 03, 2024
We present our recent work on gaze estimation. Our approach is a novel two-branch CNN architecture with a multi-loss approach...