RoboVision
07.2023 - Stuttgart, Germany
Integrative Technologies and Architectural Design Research (ITECH)
Master of Science
Seminar Project (Human Robot Collaboration)
ABSTRACT
This project aims to equip robots with vision capabilities, enabling real-time responses to their environment. This development has significant potential in architecture and construction, where on-site uncertainties often present major challenges. The research explores the integration of vision into human-robot collaboration, emphasizing the critical role of human input in guiding design while robots perform pick-and-place tasks. Specifically, ArUco markers are used as visual cues to facilitate robotic operations.
#Linux, #ROS2, #C++, #Python, #RealSense, #ComputerVision, #ArUcoMarkers, #RaspberryPi, #TCP/UDP, #Grasshopper
Workflow
At the core of the system, the pipeline runs as follows:
1. A RealSense camera detects objects and generates positional data from ArUco markers.
2. A Raspberry Pi running ROS2 processes this data. It controls an electromagnet through its GPIO pins to pick up or place objects, and transmits the processed positions over UDP to a laptop running Grasshopper.
3. The laptop, serving as the computational hub, uses the data to generate movement paths and communicates these instructions to a KUKA robotic arm via TCP.
4. The robotic arm executes the tasks, signaling the Raspberry Pi to activate or deactivate the electromagnet at the appropriate points.
A human operator plays a critical role by guiding the design direction and interpreting the visual data, while the system automates execution. This setup demonstrates seamless integration of real-time vision, computational planning, and robotic action, achieving efficient and adaptive human-robot collaboration.
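The electromagnet switching in step 2 can be sketched as below. This is a minimal illustration, not the project's actual code: the BCM pin number, active-high wiring, and relay setup are all assumptions, and a small stub stands in for RPi.GPIO so the sketch also runs off-device.

```python
try:
    import RPi.GPIO as GPIO  # real library when running on the Raspberry Pi
except ImportError:
    class _StubGPIO:
        # Minimal stand-in so the sketch runs on machines without GPIO hardware.
        BCM, OUT, HIGH, LOW = "BCM", "OUT", 1, 0
        def setmode(self, mode): pass
        def setup(self, pin, mode): pass
        def output(self, pin, value): self.last = (pin, value)
        def cleanup(self): pass
    GPIO = _StubGPIO()

MAGNET_PIN = 17  # hypothetical BCM pin driving the electromagnet relay

def set_magnet(on: bool):
    """Energize (pick) or release (place) the electromagnet."""
    GPIO.output(MAGNET_PIN, GPIO.HIGH if on else GPIO.LOW)

GPIO.setmode(GPIO.BCM)
GPIO.setup(MAGNET_PIN, GPIO.OUT)
set_magnet(True)   # pick: robot arm signals that it is over the object
set_magnet(False)  # place: robot arm signals that the target pose is reached
GPIO.cleanup()
```

In the real system these calls would be triggered by the robot arm's signals rather than invoked in sequence.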
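The UDP link from the Raspberry Pi to the Grasshopper laptop (step 2) could be implemented roughly as follows. The JSON message format, port number, and field names are illustrative assumptions; the demo loops back over localhost, with a plain socket standing in for the Grasshopper UDP listener.

```python
import json
import socket

GH_ADDR = ("127.0.0.1", 5005)  # hypothetical address of the Grasshopper UDP listener

def send_pose(sock, marker_id, xyz):
    """Serialize one detected marker pose and send it as a UDP datagram."""
    payload = json.dumps({"id": marker_id, "x": xyz[0], "y": xyz[1], "z": xyz[2]})
    sock.sendto(payload.encode("utf-8"), GH_ADDR)

# Loopback demo: a receiver standing in for the laptop-side listener.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(GH_ADDR)

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_pose(send_sock, 7, (0.42, -0.10, 0.03))  # assumed marker id and xyz in metres

data, _ = recv_sock.recvfrom(1024)
pose = json.loads(data)
print(pose["id"], pose["x"])  # → prints: 7 0.42

send_sock.close()
recv_sock.close()
```

UDP suits this stream because each pose is a small, self-contained update and a dropped packet is simply superseded by the next detection; the command channel to the KUKA arm uses TCP instead, where delivery must be guaranteed.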