A context-aware multimodal interactive augmented reality system (toward a human-machine symbiosis)
Starting: 01/09/2016
PhD Student: Damien Brun
Advisor(s): Sébastien George (LIUM - IEIAH) & Charles Gouin-Vallerand (LICEF, Université du Québec)
Co-advisor(s):
Funding: CRSNG (Conseil de Recherches en Sciences Naturelles et en Génie du Canada)
Much like smartphones, augmented reality eyewear devices are poised to become ubiquitous by providing quicker and more convenient access to information. There is theoretically no limit to their application areas and use cases, many of which have already been explored: military, medical, industrial, educational, entertainment, and so on. Some interactions are becoming standard, such as mid-air hand gestures and voice commands. Paradoxically, in many of the use cases where these devices are currently deployed, the user cannot perform these interactions without constraint: for example, when users are already using their hands to hold something, when the environment is too noisy for voice commands, when, on the contrary, silence is required, or in a social context where both gestures and voice commands may appear awkward.
Thus, this thesis project aims to extend the interactivity of augmented reality eyewear devices:
1) By providing more discreet interactions, such as head gestures based on cognitive image schema theory and metaphorical extension, and natural user interfaces based on smartwatch finger gestures.
2) By using the user's context to provide the most convenient interface and feedback at the right place and time, as sketched below.
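As a minimal illustration of this second axis (a sketch with purely hypothetical names and hand-written rules, not the model developed in the thesis), the following Python snippet selects the least constrained interaction modality from a few context flags corresponding to the situations mentioned above:

from dataclasses import dataclass

@dataclass
class Context:
    """Hypothetical snapshot of the user's situation (illustrative only)."""
    hands_busy: bool        # the user is already holding something
    noisy: bool             # ambient noise prevents reliable voice commands
    silence_required: bool  # e.g. a meeting or a library
    social_setting: bool    # bystanders make mid-air gestures awkward

def pick_modality(ctx: Context) -> str:
    """Return the least constrained interaction modality for this context.

    The rules simply encode the constraints listed in the project
    description; a real system would learn such a policy from data.
    """
    if ctx.hands_busy and (ctx.noisy or ctx.silence_required):
        return "head gesture"                 # hands and voice both unavailable
    if ctx.hands_busy:
        return "head gesture" if ctx.social_setting else "voice command"
    if ctx.noisy or ctx.silence_required or ctx.social_setting:
        return "smartwatch finger gesture"    # discreet and silent
    return "mid-air hand gesture"             # no constraint: default modality

# Example: a technician holding a part in a loud workshop
print(pick_modality(Context(hands_busy=True, noisy=True,
                            silence_required=False, social_setting=False)))
# -> head gesture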
Machine learning techniques will be used both to implement these interactions and to achieve context awareness. Several user experiments are being (and will be) conducted to assess the proposed solutions.
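As an illustration of how machine learning could support such interactions, the sketch below trains an off-the-shelf classifier to recognize head gestures from windowed gyroscope data; the gesture set, window length, features, and synthetic training data are all assumptions standing in for the real recordings and models of the thesis:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

GESTURES = ["nod", "shake", "tilt_left", "tilt_right"]  # assumed gesture set
WINDOW = 50  # samples per gesture window (roughly one second of gyroscope data)

def features(window: np.ndarray) -> np.ndarray:
    """Simple per-axis statistics of a (WINDOW, 3) gyroscope window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

# Synthetic stand-in for labelled recordings of head movements.
rng = np.random.default_rng(0)
X, y = [], []
for label, gesture in enumerate(GESTURES):
    for _ in range(100):
        window = rng.normal(loc=label, scale=1.0, size=(WINDOW, 3))
        X.append(features(window))
        y.append(label)
X, y = np.array(X), np.array(y)

# Train and evaluate a standard classifier on held-out windows.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
print("predicted gesture:", GESTURES[clf.predict(X_test[:1])[0]])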
The underlying objective of this project is to facilitate the acceptance and usage of augmented reality eyewear devices.