About the project
This project explores how the functionality of applications or products can be extended by recognizing particular motions or actions from sensor data using machine learning. Issues examined may include methods for improving the accuracy of action recognition in various contexts, the challenges of integrating data from heterogeneous devices, and how feedback generated through action recognition can be used to change behavior. Initial work will examine how sensors on a smart construction helmet can provide information related to worker safety, and how data from mobile phone sensors can be used to distinguish between manual manipulation of the phone and physical activities of the user.
Motion identification and action recognition have been used to enable novel interaction modes, such as virtual keyboards, Microsoft’s Kinect motion-sensing game controller, and other hands-free interfaces, including those used in virtual reality. In healthcare, motion detection has been used to identify abnormal movement patterns associated with neurodegenerative disorders such as Parkinson’s or Huntington’s disease (Binder, 2019) and to assist in rehabilitation training (Hu et al., 2016), as well as to enable individuals to track their own training activities.
In this project we will explore various use cases of motion recognition, including applications to increase worker safety, to prevent injuries, and to track compliance with exercise protocols in a game context. Using input from sensors (accelerometers, gyroscopes, cameras, etc.), machine learning algorithms will be used to identify particular actions of humans and other objects, enabling digi-physical interaction, where physical movements are reflected in the digital environment. Motion recognition will be used, for example, to provide feedback that helps a worker adjust how an action is performed to avoid injury, and to detect ‘cheating’ (moving the device rather than the body) in a game that incorporates particular physical activities. A further aim is to increase the accuracy of motion recognition while reducing the amount of labelled data needed to adapt the parameters of the machine learning algorithms.
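To make the sensor-based recognition pipeline concrete, the sketch below classifies short windows of accelerometer samples, for instance to tell a phone held still apart from one being shaken. It is a minimal illustration only, not the project's actual method: the features (mean and standard deviation of acceleration magnitude) and the nearest-centroid classifier are assumptions chosen for simplicity.

```python
import math
from statistics import mean, stdev

def extract_features(window):
    """Reduce a window of (x, y, z) accelerometer samples to two features:
    mean and standard deviation of the acceleration magnitude."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    return (mean(mags), stdev(mags))

def fit_centroids(labelled_windows):
    """Average the feature vectors per action label to form one centroid
    per action (a nearest-centroid classifier's training step)."""
    sums = {}
    for label, window in labelled_windows:
        f = extract_features(window)
        (s0, s1), n = sums.get(label, ((0.0, 0.0), 0))
        sums[label] = ((s0 + f[0], s1 + f[1]), n + 1)
    return {label: (s[0] / n, s[1] / n) for label, (s, n) in sums.items()}

def classify(centroids, window):
    """Assign the window to the action whose centroid is nearest in feature space."""
    f = extract_features(window)
    return min(centroids, key=lambda label: math.dist(f, centroids[label]))
```

In practice the features would be richer (frequency-domain energy, per-axis statistics) and the classifier learned from labelled recordings, but the window-features-classifier structure is the same.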
This builds on insights from the action recognition research of Johnsson et al. (see Buonamente et al., 2016; Gharaee et al., 2017), which investigated principles for using hierarchies of self-organizing feature representations to recognize actions while avoiding the need to segment (in time) the stream of input from a camera. Extensive research on making such an action recognition system work well has been carried out, covering both 2D videos of agents performing actions and sequences of joint positions obtained from 3D cameras. For more information, see the web page https://magnusjohnsson.se/ar.html.
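For readers unfamiliar with self-organizing feature representations, the sketch below shows a minimal self-organizing map (SOM), the kind of building block such hierarchies are typically composed of: a grid of nodes whose weight vectors are pulled toward each input around the best-matching unit, so that nearby grid nodes come to represent similar inputs. The grid size, decay schedules, and Gaussian neighborhood are illustrative choices for this example, not details of the cited systems.

```python
import math
import random

def train_som(data, grid_w, grid_h, epochs=20, lr0=0.5):
    """Train a small self-organizing map on a list of equal-length vectors.

    Each grid node (i, j) holds a weight vector. For every input, the
    best-matching unit (BMU) and its grid neighbors are moved toward the
    input, with learning rate and neighborhood radius decaying over time.
    """
    dim = len(data[0])
    radius0 = max(grid_w, grid_h) / 2
    nodes = {(i, j): [random.uniform(-1, 1) for _ in range(dim)]
             for i in range(grid_w) for j in range(grid_h)}
    steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in data:
            decay = 1 - t / steps
            lr = lr0 * decay
            radius = radius0 * decay + 1e-9
            bmu = min(nodes, key=lambda n: math.dist(nodes[n], x))
            for n, w in nodes.items():
                d = math.dist(n, bmu)  # distance measured on the grid
                if d <= radius:
                    h = math.exp(-d * d / (2 * radius * radius))
                    nodes[n] = [wi + lr * h * (xi - wi) for wi, xi in zip(w, x)]
            t += 1
    return nodes

def best_matching_unit(nodes, x):
    """Map an input to the grid node with the nearest weight vector."""
    return min(nodes, key=lambda n: math.dist(nodes[n], x))
```

In a hierarchy of such maps, the BMU coordinates (or activation pattern) of one map over successive time steps become the input to the next map, which is one way the cited work avoids explicit temporal segmentation of the input stream.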