
Design and implement a full perception-to-action pipeline: from visual input and computer vision analysis to LLM-based reasoning and real robotic execution. Collaborate in international teams, work hands-on with Raspberry Pi and robotics hardware.

In this Erasmus+ Blended Intensive Programme (5 ECTS), you will develop an end-to-end perception-to-action system:
See → Reason → Act → Evaluate.
You will work with real-time camera perception, build a structured world state (JSON), and use an LLM (via an online API) for interpretation, safe planning, and explainable decisions.
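A structured world state of the kind described above could, for example, be a JSON document assembled from per-frame detections. The schema below (object class, position, confidence) is one illustrative layout, not a prescribed course format:

```python
import json

def build_world_state(detections, frame_id):
    """Convert raw camera detections into a structured world state.

    `detections` is assumed to be a list of (label, x, y, confidence)
    tuples produced by the vision stage; the JSON schema here is
    illustrative only.
    """
    return json.dumps({
        "frame": frame_id,
        "objects": [
            {"class": label, "position": {"x": x, "y": y}, "confidence": conf}
            for label, x, y, conf in detections
        ],
    }, indent=2)

# Example: two detected objects from one frame
state = build_world_state(
    [("cube", 0.12, 0.40, 0.97), ("bin", 0.55, 0.30, 0.91)],
    frame_id=42,
)
```

A JSON document like this can then be passed verbatim into the LLM prompt, keeping the perception and reasoning stages cleanly decoupled.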
Execution is done via a deterministic controller on real hardware (e.g., Raspberry Pi, robot platform, camera; optional sensors).
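One common way to keep execution deterministic while the LLM only plans is a fixed action whitelist: the planner emits named steps, and the controller refuses anything it does not recognize. The action names, plan format, and stand-in robot below are assumptions for illustration, not the course's actual interface:

```python
ALLOWED_ACTIONS = {"move_to", "grasp", "release"}

def execute_plan(plan, robot):
    """Run LLM-proposed steps through a deterministic whitelist controller.

    `plan` is assumed to be a list of {"action": name, "args": {...}}
    dicts; `robot` exposes one method per allowed action. Any step
    outside the whitelist aborts the run instead of being executed.
    """
    for step in plan:
        action = step.get("action")
        if action not in ALLOWED_ACTIONS:
            raise ValueError(f"rejected unsafe action: {action!r}")
        getattr(robot, action)(**step.get("args", {}))

class LogRobot:
    """Minimal stand-in robot that only records calls (for dry runs)."""
    def __init__(self):
        self.log = []
    def move_to(self, x, y):
        self.log.append(("move_to", x, y))
    def grasp(self):
        self.log.append(("grasp",))
    def release(self):
        self.log.append(("release",))

robot = LogRobot()
execute_plan(
    [{"action": "move_to", "args": {"x": 0.1, "y": 0.2}},
     {"action": "grasp"}],
    robot,
)
```

The design point is the safety boundary: the LLM can propose, but only the deterministic controller decides what actually reaches the hardware.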
Teams choose at least one challenge: pick & place, sorting, or navigation. Final deliverables: a live demo plus a poster/report and an evaluation (logging, success rate, robustness).
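The evaluation listed above (logging, success rate) can start from a simple trial log. The field names here are illustrative, not a required format:

```python
def success_rate(trials):
    """Fraction of successful trials.

    Each trial is assumed to be a dict with at least a boolean
    "success" field; an empty log yields 0.0 rather than an error.
    """
    if not trials:
        return 0.0
    return sum(t["success"] for t in trials) / len(trials)

# Example log from three hypothetical runs
trials = [
    {"task": "pick_place", "success": True},
    {"task": "pick_place", "success": False},
    {"task": "sorting",    "success": True},
]
rate = success_rate(trials)  # 2 of 3 trials succeeded
```

Logging per-trial dicts like these during the project sprint makes the final success-rate and robustness numbers reproducible from raw data.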
Virtual phase (online): 26–29 May 2026
- Kick-off, team formation, tools & datasets
- Setup, baselines, initial experiments

On-site phase (Heidelberg): 15–19 June 2026
- Workshops & lab sessions
- Mentoring & project sprint
- Integration: perception → world state → LLM planning → execution
- Evaluation & finale: live demo + poster presentation