Multimodal driver displays, autonomous car handovers, and inclusiveness
In this talk, I will discuss some of my PhD and PostDoc work on multimodal driver displays, autonomous car handovers, and inclusiveness. During my PhD, I investigated the utility of multimodal driver displays: multisensory ways of alerting drivers to events on the road using audio, vibration, and visual cues. I studied the effectiveness of such displays in both manual and autonomous driving scenarios, and found that they can help people recognise the urgency of the situation being signified.

My fascination with this topic, together with the fact that autonomous cars are quickly becoming a reality, led me to continue researching autonomous cars in my PostDoc. I am currently working at the Department of Engineering, Engineering Design Centre, as part of the project Human Interaction: Designing Autonomy in Vehicles, funded by EPSRC and Jaguar Land Rover. The focus of the project is to design inclusive interfaces for autonomous cars, meaning interfaces that most people (and not only highly technical and highly capable people) are likely to find useful.

A particularly critical part of the interaction between the car and the driver in autonomous cars is the transition between manual and autonomous modes, called a handover of control. Through an iterative design cycle involving questionnaires, focus groups, and design workshops, we created a set of design concepts to assist these handovers. We then designed a set of dialogue interactions for this transition and evaluated them with an inclusive user group in an autonomous car simulator. The results revealed the potential of our dialogue-based concepts for handovers, and we are now improving them based on our findings, with the aim of testing them on a test track and on the road in the coming years.