As wearable robotic devices for movement assistance and rehabilitation begin to populate our daily environments, their ability to autonomously and seamlessly adapt to environmental state changes in order to restore motor control in persons with impairments becomes increasingly critical. Wearable robots that assist the lower limbs, for example, should autonomously adjust their assistance profiles depending on the environment and the locomotor activity, such as level-ground walking or stair climbing. Similarly, wearable robots for the upper limbs should adapt their powered assistance to the user's body segment parameters and the weight of manipulated objects. Achieving such versatility requires not only recognizing user intention but also obtaining information about the surroundings. Compared with non-visual sensors such as tactile sensors, computer vision can provide rich, direct, and interpretable information about the environment during interaction. This workshop will uniquely focus on the challenges and opportunities of integrating contextual awareness into the automated high-level control and decision-making of wearable robotic devices, building on state-of-the-art advances in computer vision, machine learning, and sensor fusion. The workshop will discuss technical engineering solutions for vision-based rehabilitation and assistive robotics, bridging the gaps between researchers in wearable robotics and computer vision, as well as between academia and industry.
Letizia Gionfrida, Harvard Paulson School of Engineering and Applied Sciences
Robert D. Howe, Harvard Paulson School of Engineering and Applied Sciences
Daekyum Kim, Harvard Paulson School of Engineering and Applied Sciences
Brokoslaw Laschowski, Temerty Faculty of Medicine, University of Toronto
Michele Xiloyannis, Sensory-Motor Systems Lab, ETH
This proposal has been endorsed by the IEEE Computer and Robot Vision Technical Committee and by the IEEE Wearable Robotics Technical Committee.