ECOMODE was a research project funded under Horizon 2020, in which we developed an innovative technology that allows visually impaired people and older adults to interact with mobile devices using mid-air gestures and voice commands. The technology integrates a neuromorphic camera inspired by human vision, making it possible to control smartphones and tablets through multimodal interaction, regardless of environmental conditions or background noise.
My role
The project took place between 2015 and 2018. I was part of the design and evaluation team, collaborating with my teammates on user research, benchmarking and co-design activities, and formative and summative evaluation, and contributing to the major deliverables on design and evaluation throughout the project. I was also in charge of organizing the Multimodal workshop, held at CHItaly in Cagliari in 2017.
The problem
The problem was twofold.
First, the traditional approach toward ageing and visual impairment had focused on designing assistive technology aimed at compensating for people's frailty and disabilities. This compensation model, sharply critiqued in Rogers and Marsden's article "Does he take sugar?", lagged behind the growing emphasis on value, engagement, empowerment, and user experience in human-centered design.
Second, designing effective and efficient mid-air gesture and voice interaction is challenging because traditional technologies lack the robustness and reliability to cope with adverse environmental conditions, especially outdoors.
The challenge
On the one hand, we wanted to move beyond the compensation model and involve people directly, empowering them and designing our technology through a value-sensitive approach. On the other hand, developing and miniaturizing a novel and complex technology was a major challenge for our research partners.
The process
This short video explains the project and the main challenges we faced.