The Human-Computer Interaction Lab at the University of Haifa conducts research on a wide range of topics in Human-Computer Interaction and Information Visualization. The research focuses on the design, implementation, and evaluation of novel technologies, as well as on studying and understanding how technologies affect human behavior. The lab is part of the Information Systems department at the University of Haifa.
To support a large number of autonomous vehicles (AVs), a usable and efficient user interface for monitoring and controlling AVs is essential. Such an interface should be designed to maximize the remote operator's situational awareness while minimizing cognitive load and performance time. Taking full remote control of an AV demands the full cognitive attention and resources of an operator, but our premise is that such intervention will often not be necessary: in many cases, a remote operator can make a simple decision that provides an AV with the information it needs in its current situation. We therefore aim to understand how to design a user interface that supports the monitoring and immediate decision making of a remote operator working with multiple remote AVs.
Within the consortium, we have two main, disparate tasks. The first focuses on the user experience and user interface design of a remote teleoperation station. We apply a user-centered design (UCD) process to understand, design, implement, and evaluate a user interface that provides immediate solutions to an autonomous vehicle. We focus on Teleassistance: an interface that supports immediate handling of AVs that need remote help, using a set of given commands.
The second task addresses how people perceive semantically segmented images and videos. A major issue in teleoperation is latency. To address this problem, we propose to examine the effect of replacing the RGB video sent from the vehicle to the teleoperation station with a significantly lighter, semantically segmented video. In our research, we examine how people perceive driving videos under different levels of semantic segmentation, compared to the original RGB videos.
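To make the "significantly lighter" claim concrete, a back-of-the-envelope sketch can compare the raw size of an RGB frame with that of a per-pixel class-label map. The resolution and class count below are illustrative assumptions, not figures from the project:

```python
import math

# Assumed camera resolution and label set size (illustrative values only).
WIDTH, HEIGHT = 1280, 720   # 720p frame
NUM_CLASSES = 20            # e.g. a Cityscapes-style label set (assumption)

# Raw RGB frame: one 8-bit value per channel, three channels per pixel.
rgb_bytes = WIDTH * HEIGHT * 3

# Semantic label map: one 8-bit class id per pixel.
label_bytes = WIDTH * HEIGHT

# With only NUM_CLASSES distinct labels, each id fits in fewer bits.
bits_per_label = math.ceil(math.log2(NUM_CLASSES))  # 5 bits for 20 classes
packed_bytes = math.ceil(WIDTH * HEIGHT * bits_per_label / 8)

print(f"RGB frame:                 {rgb_bytes / 1e6:.2f} MB")
print(f"Label map (8-bit ids):     {label_bytes / 1e6:.2f} MB")
print(f"Label map ({bits_per_label}-bit, packed): {packed_bytes / 1e6:.2f} MB")
```

Even before any video compression, the label map is a third of the raw frame size, and under a quarter once the ids are bit-packed; large flat label regions also tend to compress far better under standard video codecs than textured RGB content does.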