The use of Virtual Reality technology in vehicles is poised to bring about a new era of transport experiences, but its use can cause or enhance motion sickness. However, with careful design, VR devices have the potential to mitigate motion sickness and provide immersive experiences while travelling. We propose a novel implicit motion cue for manipulating a virtual display presented in Virtual and Extended Reality. Our design mitigates motion sickness by providing awareness of physical orientation changes through changes in the visual orientation of the virtual planar content. We performed two experiments on a rotating chair, testing the mitigation properties of different levels of rotational gain applied to the display. Our results showed that the technique significantly reduced motion sickness without negatively affecting task performance. Our findings show that motion cues can be subtly interleaved into existing spatial content, such as planar displays, and that this can help lessen the motion sickness experienced.
The paper presents a study on motion sickness mitigation while driving with a fully automated vehicle on a test track. 31 participants who were susceptible to motion sickness experienced a 25-minute drive with multiple motion-sickness-provoking decelerations and accelerations while reading a text on a tablet. The participants experienced three different conditions in separate sessions: 1) a control condition without countermeasure, 2) a drive with an active seat belt tensioner, and 3) a drive with a two-step driving profile. The participants rated their motion sickness on the MSTT scale (during the drive) [1] and on the MSAQ (pre and post drive) [2]. After each drive, drivers rated their subjective experience of the vehicle behavior and the countermeasures. On the MSTT, the results showed no significant differences in the development of motion sickness across the three conditions. However, the two-step driving profile reduced the development of motion sickness as assessed via the MSAQ. Furthermore, both countermeasures appear to have the potential to positively influence the perception of the automation as safer, more trustworthy, and more reliable.
Nowadays, cars offer many possibilities to explore the world around you by providing location-based information displayed on a 2D-Map. However, this information is often only available to front-seat passengers while being restricted to in-car displays. To propose a more natural way of interacting with the environment, we implemented an augmented reality head-mounted display to overlay points of interest onto the real world. We aim to compare multiple selection techniques for digital objects located outside a moving car by investigating head gaze with dwell time, head gaze with hardware button, eye gaze with hardware button, and hand pointing with gesture confirmation. Our study was conducted in a moving car under real-world conditions (N=22), with significant results indicating that hand pointing usage led to slower and less precise content selection while eye gaze was preferred by participants and performed on par with the other techniques.
As opposed to conventional AR-HUDs, volumetric AR-HUDs provide multiple focal distances which could potentially benefit distance perception. In this work, we investigated the effect of variable and different fixed focus distances on male participants’ performance during perceptual distance matching of seven target distances between 6 and 24 meters from the driver’s position. In Study 1, 44 male participants completed the task with a variable and a fixed focal plane at 15 meters. In Study 2, 44 male participants completed the task with fixed focal planes at 6, 10, 12 and 24 meters. While the results are limited by the male-only samples, matching target distances closer than 15 meters was generally less accurate with fixed focus distances. Distances beyond 15 meters were matched most accurately with a fixed focal plane at 10 meters. However, results from subjective evaluation were in favor of the variable focal plane.
Augmented Reality (AR) is a key technology for digitization in enterprises. However, there is often a lack of stable tracking solutions for use inside manufacturing environments. Many different tracking technologies are available, yet it can be difficult to choose the most appropriate tracking solution for different use cases with their varying conditions. In order to shed light on common tracking requirements and conditions for automotive AR use cases, we conducted a use case study spanning 61 use cases across the complete product life-cycle of a large automotive manufacturer. By analyzing the gathered data, we were able to determine the frequency of different tracking requirements and conditions within automotive AR use cases. Based on these use cases, we also derived common factors of influence for AR tracking in the automotive industry, which illustrate the various challenges automotive AR tracking currently faces.
Perceived security is crucial for the widespread adoption of shared automated vehicles (SAVs) and shuttle buses. However, there is currently no validated instrument to measure perceived security in this context, and little research has been done to determine the factors that contribute to perceived security. We propose the Perceived Security Scale for Shared Automated Vehicles (PSSAV), a questionnaire that assesses various aspects of perceived security in SAVs. The scale was evaluated using an exploratory, data-driven approach in a pilot study with 60 German participants, and a main study with 114 German and 101 Colombian participants experiencing a positive or negative ride in an automated shuttle bus (between-subjects design) presented as videos in an online study. The results suggest that trust, privacy, and control are key factors that influence security in the context of SAVs. The PSSAV questionnaire is reliable and sensitive to manipulation, indicating its construct validity.
The Box Task combined with a Detection Response Task (BT + DRT) is a relatively less investigated but promising method for evaluating visual-manual and cognitive task demand arising from interaction with in-vehicle information systems while driving. The BT includes the tracking of a dynamic box whose size and position follow a sinusoidal pattern with uniform amplitudes and frequencies. However, it is unclear whether participants are able to predict and adapt to these uniform dynamics, which might reduce the sensitivity of the BT + DRT. The present study aimed to examine differences in BT + DRT performance depending on uniform and non-uniform BT dynamics. A laboratory study was conducted with N = 41 participants. The experimental conditions differed in the type and difficulty level of the secondary tasks as well as in the BT dynamics (uniform, varying amplitude, varying frequency). While the uniform BT dynamics could be more predictable, the non-uniform BT dynamics were designed to be slightly easier by using a lower frequency or amplitude. The results revealed no performance benefits when performing uniform BT dynamics compared to non-uniform BT dynamics. The varying-frequency BT condition was related to a significantly lower variability of box position and higher gaze duration on the secondary task compared to the uniform BT dynamics. These findings suggest that participants are not, or only negligibly, able to adapt to the uniform BT dynamics. Therefore, it is recommended to use the uniform BT dynamics as suggested and implemented in previous studies.
Appropriate monitoring and successfully intervening when automation fails is one of the most critical issues in level 2 automated driving, since drivers suffer from low situation awareness when using such systems. To counter this, we present a gamified in-vehicle interface based on ideas from previous work, where drivers have to support the vehicle by pointing at other traffic objects in the environment. We hypothesized that this system could help drivers in the monitoring task, maintain their situation awareness, and result in lower crash rates. We implemented a prototype of this system and evaluated it in a lab study with N=20 participants. The results indicate that participants looked more intensively at lead vehicles and performed stronger braking actions. However, there was no measurable benefit for situation awareness and intervention performance in critical situations. We conclude by discussing differences to related experiments and present future ideas.
User Experience (UX) professionals need to be able to analyze large amounts of usage data on their own to make evidence-based design decisions. However, the design process for In-Vehicle Information Systems (IVISs) lacks data-driven support and effective tools for visualizing and analyzing user interaction data. Therefore, we propose ICEBOAT, an interactive visualization tool tailored to the needs of automotive UX experts to effectively and efficiently evaluate driver interactions with IVISs. ICEBOAT visualizes telematics data collected from production line vehicles, allowing UX experts to perform task-specific analyses. Following a mixed methods User-Centered Design (UCD) approach, we conducted an interview study (N=4) to extract the domain-specific information and interaction needs of automotive UX experts and used a co-design approach (N=4) to develop an interactive analysis tool. Our evaluation (N=12) shows that ICEBOAT enables UX experts to efficiently generate knowledge that facilitates data-driven design decisions.
Interacting with a smart parking system to find a parking spot might be tedious and unsafe if performed while driving. We present a system based on a Boosted Tree classifier that runs on the smartphone and automatically detects when the driver is cruising for parking. The system does not require direct intervention from the driver and is based on the analysis of context data. The classifier was trained and tested on real data (615 car trips) collected by 9 test users. With this research, we contribute (i) by providing a literature review on cruising detection, (ii) by proposing an approach to model cruising behavior, and (iii) by describing the design, training, and testing of the classifier and discussing its results. In the long term, our work aims to improve user experience and safety in car-related contexts by relying on human-centered features that implicitly understand users’ behavior and anticipate their needs.
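The abstract above names a Boosted Tree classifier over context data but gives no detail. As a purely illustrative sketch of the boosting idea (not the authors' actual model or features), the following trains an AdaBoost-style ensemble of one-feature decision stumps on synthetic, hypothetical context windows (speed, turn rate, distance to destination) labelled as cruising-for-parking vs. normal driving:

```python
import math
import random

# Hypothetical context features per trip window:
# [speed_kmh, turns_per_min, dist_to_destination_km, label]
# Label 1 = cruising for parking, 0 = normal driving. Synthetic data only.
random.seed(0)
data = [[random.gauss(15, 4), random.gauss(4, 1.0), random.gauss(0.3, 0.1), 1]
        for _ in range(40)]
data += [[random.gauss(45, 8), random.gauss(1, 0.5), random.gauss(5.0, 2.0), 0]
         for _ in range(40)]

def stump_error(data, weights, feat, thresh, pol):
    """Weighted error of a one-feature threshold rule."""
    err = 0.0
    for row, w in zip(data, weights):
        pred = 1 if pol * row[feat] < pol * thresh else 0
        if pred != row[3]:
            err += w
    return err

def train_adaboost(data, rounds=10):
    n = len(data)
    weights = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        # Exhaustively pick the stump with the lowest weighted error.
        best = None
        for feat in range(3):
            for thresh in sorted(set(row[feat] for row in data)):
                for pol in (1, -1):
                    e = stump_error(data, weights, feat, thresh, pol)
                    if best is None or e < best[0]:
                        best = (e, feat, thresh, pol)
        e, feat, thresh, pol = best
        e = max(e, 1e-10)
        alpha = 0.5 * math.log((1 - e) / e)
        ensemble.append((alpha, feat, thresh, pol))
        # Re-weight: boost the misclassified windows for the next round.
        new_w = []
        for row, w in zip(data, weights):
            pred = 1 if pol * row[feat] < pol * thresh else 0
            new_w.append(w * math.exp(alpha if pred != row[3] else -alpha))
        s = sum(new_w)
        weights = [w / s for w in new_w]
    return ensemble

def predict(ensemble, row):
    score = sum(a * (1 if pol * row[feat] < pol * thresh else -1)
                for a, feat, thresh, pol in ensemble)
    return 1 if score > 0 else 0

model = train_adaboost(data)
acc = sum(predict(model, row) == row[3] for row in data) / len(data)
print(f"training accuracy: {acc:.2f}")
```

In practice one would use a library implementation (e.g. gradient-boosted trees) on real trip data; the sketch only illustrates how weak threshold rules over context features combine into a cruising detector.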
In conditionally automated driving, assertive takeover requests (TORs) have been found to elicit faster responses in the transition from automated to manual control. However, despite this apparent benefit, concerns have been raised that assertive signals can stress the drivers, resulting in worse takeover quality and subsequent driving performance.
To investigate the phenomenon, we conducted a driving simulator study exploring the effects of assertive and non-assertive TOR signals. In a series of tasks where the drivers were required to take over control of the vehicle to manually handle a critical situation, immediate and sustained measures of driving performance, stress, and subjective perceptions were compared.
Consistent with previous research, the assertive signal led to shorter reaction times and stronger braking. However, no other immediate or delayed differences in the measures were observed. These findings suggest that assertive TOR signals might be a preferable design option; however, the phenomenon should be further explored.
Takeover safety is a critical issue when using Level 2 advanced driver assistance systems. Understanding the context of takeover can facilitate the development of driver monitoring systems that adapt to changing environments for more contextually appropriate assistance during takeover. The paper presents a hierarchical clustering analysis of hundreds of post-takeover vehicle kinematic profiles in the MIT-AVT naturalistic driving study. Results show similar types of takeovers between Super Cruise and Autopilot: normal takeover, braking takeover, accelerating takeover, evasive-manoeuvre takeover, and right-swerve takeover (Autopilot only). Context analysis showed that braking takeovers occurring at normal highway speed were often associated with upcoming highway exits and foreseeable low-speed situations, while accelerating, evasive-manoeuvre, and right-swerve takeovers were caused by strong braking (for Super Cruise) or large steering inputs (for Autopilot) during slow car following. The findings indicate the potential for sensor-based approaches to assessing various contexts and facilitating a more holistic takeover reference model.
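The hierarchical clustering step above can be illustrated with a minimal single-linkage agglomerative sketch on toy, hypothetical kinematic summaries (peak deceleration and peak steering per takeover event); the actual study's features and linkage choice are not specified in the abstract:

```python
import math

# Toy post-takeover kinematic summaries (hypothetical values):
# (peak_decel_ms2, peak_steer_deg) per takeover event.
events = [
    (0.5, 2), (0.6, 3), (0.4, 2),      # "normal"-style takeovers
    (4.0, 3), (4.5, 2), (3.8, 4),      # "braking"-style takeovers
    (0.7, 35), (0.5, 40), (0.8, 38),   # "evasive-manoeuvre"-style takeovers
]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def agglomerative(points, n_clusters):
    """Single-linkage agglomerative clustering: repeatedly merge the
    two clusters with the smallest minimum pairwise distance until
    n_clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

clusters = agglomerative(events, 3)
sizes = sorted(len(c) for c in clusters)
print(sizes)  # the three toy takeover styles separate into three clusters
```

Cutting the merge tree at different heights yields different numbers of takeover types, which is how such an analysis can surface the five types reported above.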
Drivers want to retain a sense of control when driving (partially) automated vehicles (AVs). Future AVs will continue to offer the possibility to drive manually, potentially leading to challenging driver-initiated takeovers (DITs) due to the "out-of-the-loop problem" and reduced driving performance. A driving simulator study (N=24) was conducted to explore whether cooperative systems, without full control of driving tasks, provide a sense of control to mitigate DITs in varying conflict situations. Conflict levels were operationalized by an AV performing overtaking maneuvers under free, 100m, and 50m visibility on a two-lane rural road. Participants experienced three systems: a no-intervention, a cooperative-choice, and a manual-control system. Results showed that participants had a similar sense of control with the cooperative system compared to the manual one and preferred it over the manual system. The likelihood of DITs increased with conflict intensity, and trust in the AV moderated the conflict-DIT association.
The role of the driver is changing from controller to regulator as automated vehicles become more common. This change leads to decreased situation awareness (SA) because of passive engagement and increased non-driving related tasks (NDRTs), and might affect drivers' takeover performance. This study designed a gamified prototype named Dangerous Slime to help drivers maintain SA during automated driving. Dangerous Slime turned surrounding cars into slimes that attacked drivers' cars; drivers needed to respond to these attacks, which increased their attention to nearby objects. Drivers can play the game as an NDRT throughout the automated drive. Compared to the NDRT of watching films, the game improved drivers' SA and received positive user feedback. Moreover, game style affected drivers' behavior and SA. This study revealed how the game influenced drivers' SA and took a step toward improving the safety of automated driving in a pleasant way.
Future automated driving systems (ADS) are discussed as having the ability to “override” driver control inputs. Yet, little is known about how drivers respond to this, nor how a human-machine interaction (HMI) for them should be designed. This work identifies intervention types associated with an ADS that holds such control authority and outlines an experiment method which simulates a deficit in driver situation awareness, enabling the study of their responses to interventions in a controlled environment. In a simulator study (N = 18), it was found that drivers express more negative valence when their control input is blocked (p = .046) than when it is taken away. In safety-critical scenarios, drivers respond more positively to interventions (p = .021) and are willing to give the automation more control (p = .018). An experimental method and HMI design insights are presented, and ethical questions about the development of automated driving are raised.
This paper presents design guidelines for teleoperated driving interfaces within computational driver assistance systems for unstructured environments. The purpose of the guidelines is to manage the unpredictability of computational model-based assistance in unstructured environments and thereby reduce user workload. To develop the guidelines, we conducted a user study to evaluate workload and gain insights into both the advantages and disadvantages of the computational driver assistance system. The study used a deep learning-based driver assistance method in simulated environments to observe users’ workload during teleoperated driving with the assistance method. Based on the user study, we propose guidelines for teleoperated driving interfaces with computational driver assistance systems. We anticipate that the proposed guidelines can improve the understanding of computational driver assistance systems and reduce the workload of teleoperated driving in unstructured environments, thereby enhancing drivers’ trust as well as comfort.
Autonomous vehicles (AVs) may use external interfaces, such as LED light bands, to communicate with pedestrians safely and intuitively. While previous research has demonstrated the effectiveness of these interfaces in simple traffic scenarios involving one pedestrian and one vehicle, their performance in more complex scenarios with multiple road users remains unclear. The scalability of AV external communication has therefore attracted increasing attention, prompting the need for further investigation. This scoping review synthesises information from 54 papers to identify seven key scalability issues in multi-vehicle and multi-pedestrian environments, with Clarity of Recipients, Information Overload, and Multi-Lane Safety emerging as the most pressing concerns. To guide future research in scalable AV-pedestrian interactions, we propose high-level design directions focused on three communication loci: vehicle, infrastructure, and pedestrian. Our work contributes the groundwork and a roadmap for designing simplified, coordinated, and targeted external AV communication, ultimately improving safety and efficiency in complex traffic scenarios.
Locating out-of-view vehicles can help pedestrians to avoid critical traffic encounters. Some previous approaches focused solely on visualising out-of-view objects, neglecting localisation and its limitations. Other methods rely on continuous camera-based localisation, raising privacy concerns. Hence, we propose the ARcoustic system, which uses a microphone array to localise nearby moving vehicles and visualises out-of-view vehicles to support pedestrians. First, we present the implementation of our sonic-based localisation and discuss its current technical limitations. Next, we present a user study (n = 18) in which we compared two state-of-the-art visualisation techniques (Radar3D, CompassbAR) to a baseline without any visualisation. Results show that both techniques present too much information, resulting in below-average user experience and longer response times. Therefore, we introduce a novel visualisation technique that aligns with the technical localisation limitations and meets pedestrians’ preferences for effective visualisation, as demonstrated in a second user study (n = 16). Lastly, we conducted a small field study (n = 8) testing our ARcoustic system under realistic conditions. Our work shows that out-of-view object visualisations must align with the underlying localisation technology and fit the concrete application scenario.
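The microphone-array localisation above is not detailed in the abstract; a common textbook building block is bearing estimation from the time difference of arrival (TDOA) between two microphones. The following sketch assumes a hypothetical two-microphone setup with 0.2 m spacing, not the ARcoustic array itself:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C
MIC_SPACING = 0.2       # assumed distance in metres between the two mics

def direction_from_tdoa(delta_t, spacing=MIC_SPACING, c=SPEED_OF_SOUND):
    """Far-field bearing of a sound source from the time difference of
    arrival between two microphones: sin(theta) = c * dt / d.
    Returns the angle in degrees off the broadside direction."""
    s = max(-1.0, min(1.0, c * delta_t / spacing))  # clamp rounding noise
    return math.degrees(math.asin(s))

# A vehicle directly broadside reaches both mics simultaneously:
print(direction_from_tdoa(0.0))               # 0.0 degrees
# A delay of ~0.29 ms corresponds to roughly 30 degrees off broadside:
print(round(direction_from_tdoa(0.2915e-3), 1))
```

A real array uses more microphones and cross-correlation to estimate the delays robustly; this two-mic geometry only shows why such a system yields a direction (and its front-back ambiguity) rather than a precise position, matching the "technical localisation limitations" the paper discusses.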
With the introduction of autonomous vehicles, pedestrians may no longer expect explicit communication from drivers. Despite the anticipated safety benefits of anthropomorphic human–machine interfaces (HMIs) for pedestrian crossings, the impact of different levels of anthropomorphism and communication on pedestrian safety remains insufficiently understood. We proposed a virtual-agent (VA) HMI that mimics driver behavior and investigated pedestrians’ preferences through augmented reality (AR) experiments. Eighteen participants made decisions about crossing after receiving cues about the vehicle’s intentions from VA HMIs. Participants preferred the "characterized" VA HMI owing to its aesthetically pleasing design and found the "eye contact + hand gesture" combination to be more easily comprehensible. We found that while the degree of anthropomorphism did not significantly affect pedestrians’ crossing decisions, more explicit communication was helpful. Our study provides empirical evidence regarding users’ experiences of HMI in AR and the effectiveness of VA HMIs that imitate driver communication modes.
One way to improve road safety for cyclists is the development of hazard notification systems. Instead of in field experiments, such systems could be tested in safe and more controlled simulated environments; however, their validity needs verification. We evaluated the validity of mixed reality (MR) simulation for bicycle support systems notifying of dooring hazards in a mixed-design study (N=43) with environment type (MR/test track) as within-subjects factor and hazard notification (with/without) as between-subjects factor, comparing subjective and objective measures across environments.
Overall, MR simulation showed absolute validity for user experience and perceived safety, and relative validity for workload, standard deviation of lateral position, and speed. However, MR simulation was not valid for lateral distance, as participants cycled more in the center of the street than on the test track, perhaps to avoid simulator sickness. Thus, we conclude that MR simulation is valuable for studying bicycle safety.
Autonomous vehicles (AVs) must communicate their intentions to nearby road users and may use external Human-Machine Interfaces (eHMIs). Most eHMIs have focused on interaction with pedestrians at crossings. However, these may not work for cyclists, who can be anywhere around vehicles and encounter them in diverse traffic scenarios. We used participatory design with cyclists and AutoUI researchers (N = 12) collaborating in a real-world setting to design eHMIs around an actual vehicle. Participants preferred eHMIs that co-exist with traditional vehicle signals and use a single design language across traffic scenarios to communicate awareness and intent quickly without distracting their attention from the road. We used our findings to develop a taxonomy of eHMI features for cyclists, allowing us to synthesise the designs and contribute versatile eHMI concepts catered to cyclists’ needs. This is important to ensure AVs can safely navigate and interact with cyclists in all road scenarios.
Using in-vehicle infotainment systems degrades driving performance and increases crash risk. To address this, we developed air gesture interfaces using various auditory displays. Thirty-two participants drove a simulator with air-gesture menu navigation tasks. A 4x2 mixed-model design was used to explore the effects of auditory displays as a within-subjects variable (earcons, auditory icons, spearcons, and no-sound) and menu-generation interfaces as a between-subjects variable (fixed and adaptive) on driving performance, secondary task performance, eye glance, and user experience. The adaptive condition centered the menu around the user’s hand position at the moment of activation, whereas the fixed condition always located the menu at the same position. Results demonstrated that spearcons produced the least visual distraction and workload, yielded the best system usability, and were favored by participants; and that fixed menu generation outperformed adaptive menu generation in driving safety and secondary task performance. Findings will inform design guidelines for in-vehicle air-gesture interaction systems.
Despite significant advances in gesture recognition technology, recognizing gestures in a driving environment remains challenging due to limited and costly data and its dynamic, ever-changing nature. In this work, we propose a model-adaptation approach to personalize the training of a CNN-LSTM model and improve recognition accuracy while reducing data requirements. Our approach contributes to the field of dynamic hand gesture recognition while driving by providing a more efficient and accurate method that can be customized for individual users, ultimately enhancing the safety and convenience of in-vehicle interactions, as well as drivers’ experience and system trust. We incorporate hardware enhancement using a time-of-flight camera and algorithmic enhancement through data augmentation, personalized adaptation, and incremental learning techniques. We evaluate the performance of our approach in terms of recognition accuracy, achieving up to 90%, and show the effectiveness of personalized adaptation and incremental learning for a user-centered design.
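The incremental-learning idea above (keep updating a base model with each new user sample rather than retraining from scratch) can be sketched with a deliberately tiny stand-in: an online logistic-regression classifier updated by one SGD step per sample. The CNN-LSTM, features, and gesture classes here are replaced by hypothetical 2-D toy data; only the update pattern is the point:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))

class IncrementalClassifier:
    """Tiny online logistic-regression stand-in for incremental,
    per-user adaptation: each new labelled sample triggers a single
    SGD step, so the model personalizes without full retraining."""
    def __init__(self, n_features, lr=0.5):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        return sigmoid(sum(wi * xi for wi, xi in zip(self.w, x)) + self.b)

    def update(self, x, y):
        # One SGD step on the log-loss gradient for sample (x, y).
        err = self.predict_proba(x) - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

random.seed(1)
# Hypothetical 2-D gesture features for two gesture classes of one user.
samples = [([random.gauss(1, 0.3), random.gauss(0, 0.3)], 1) for _ in range(50)]
samples += [([random.gauss(-1, 0.3), random.gauss(0, 0.3)], 0) for _ in range(50)]
random.shuffle(samples)

clf = IncrementalClassifier(n_features=2)
for x, y in samples:          # incremental, sample-by-sample adaptation
    clf.update(x, y)

acc = sum((clf.predict_proba(x) > 0.5) == (y == 1) for x, y in samples) / len(samples)
print(f"accuracy after incremental updates: {acc:.2f}")
```

In the paper's setting the same pattern would fine-tune (parts of) the pretrained CNN-LSTM on a user's own gesture recordings; a linear model is used here only to keep the sketch self-contained.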
Current interior sensing systems already enable the detection of critical driver states such as drowsiness or inattention. To extend these systems’ capabilities, this work investigates the detection of driver fainting via an interior sensing camera. An approach that supports the simulation of driver fainting is developed and realized in a parked vehicle as well as during manual and automated driving, with 61 participants in total. Moreover, multiple instructed intentional movements with forward and sideward movements of the body are recorded. Classification models are developed based on features derived from head and body pose data. These models are then applied to the complete video streams, which include various waiting and driving scenarios. The best classification results are achieved with Random Forest classifiers, with up to 84% true positive detections and 0.33 false positive detections per hour. The majority of false positive detections occurred during automated driving. Implications and options for future research are discussed.
This study examines three modalities for designing live-in van interiors. Participants (N=18) situated within an empty van were asked to explore potential designs using physical cardboard prototyping, a commercial software application (Vanspace 3D) for planning van interiors, and an augmented reality application that we developed. Participants were asked to think aloud as they designed van interiors for fictive journeys using each modality. A qualitative evaluation was conducted to assess how participants’ conceptualizations of space shifted across the use of each prototyping method. The results demonstrate that each design method influenced design outcomes due to the physicality of the task. This study highlights the importance of considering the role of physicality in the selection of prototyping modality for the design exploration process.
With the rising popularity of electrified vehicles, emphasis has been placed on encouraging charging with renewable energy and maximizing battery longevity to improve vehicle sustainability. Many mobile applications offer tools to suggest charging times with more sustainable renewable energy and charging strategies that preserve battery health. However, these options often result in longer, less convenient charging times for drivers. Here we conducted three charging scenario studies to identify factors that influence willingness to wait for sustainable charging. Participants selected between faster but less sustainable charging options and slower charging options that either reduce charging emissions or improve battery longevity. We find that people’s willingness to wait for green energy is influenced by situational factors; further, we find that information and battery longevity interventions can increase willingness to wait for sustainable charging. Finally, we provide design recommendations to promote sustainability in charging behaviors.
Automated vehicles (AVs) are expected to encounter various ambiguous space-sharing conflicts in urban traffic. Bottleneck scenarios, where one of the parties needs to resolve the conflict by yielding priority to the other, can serve as a representative ambiguous scenario for understanding human behavior in experimental settings. We conducted a controlled field experiment with a Wizard of Oz automated car in a bottleneck scenario. 24 participants took part in the study driving their own cars. They made yielding or priority-taking decisions based on implicit and explicit locomotion cues of the AV, realized with an external display. Results indicate that acceleration and deceleration cues affected participants’ driving choices and their perception of the social behavior of the AV, which further serves as ecological validation of related simulation studies.
This study investigated drivers' preconceived notions about manoeuvres of Automated Vehicles (AVs) compared to manually driven vehicles (MVs) using a pseudo-coupled driving simulator. The simulator displayed a message indicating the state of approaching vehicles (AV/MV) in a bottleneck scenario, while participants were informed that the MV was controlled by an experimenter using another simulator, despite all trials having the same preprogrammed behaviours. Results showed that the types of AV/MV did not impact participants’ subjective responses. Communication through kinematic cues of the AV/MV was effective, with higher perceived safety, comprehension, and trust reported for approaching vehicles that yielded with an offset away from participants. Perceived safety and trust of the AV were also higher for trials with a light-band external Human Machine Interface (eHMI). This study highlights the value of both explicit and implicit cues for the communication of AVs with other drivers.
Eliciting and capturing drivers’ affective responses in a realistic outdoor setting with pedestrians poses a challenge when designing in-vehicle, empathic interfaces. To address this, we designed a controlled, outdoor car driving circuit where drivers (N=27) drove and encountered pedestrian confederates who performed non-verbal positive or non-positive road crossing actions towards them. Our findings reveal that drivers reported higher valence upon observing positive, non-verbal crossing actions, and higher arousal upon observing non-positive crossing actions. Drivers’ heart signals (BVP, IBI and BPM), skin conductance and facial expressions (brow lowering, eyelid tightening, nose wrinkling, and lip stretching) all varied significantly when observing positive and non-positive actions. Our car driving study, by drawing on realistic driving conditions, further contributes to the development of in-vehicle empathic interfaces that leverage behavioural and physiological sensing. Through automatic inference of driver affect resulting from pedestrian actions, our work can enable novel empathic interfaces for supporting driver emotion self-regulation.
Automation transparency offers a promising way for users to calibrate their trust in autonomous vehicles. However, it is still unknown what kind of information should be provided in driving scenarios with different risks and how this affects user trust. We developed driving scenarios with different risk levels and Human-Machine Interfaces (HMIs) with different transparency levels based on the Situation Awareness-Based Agent Transparency (SAT) model to investigate the impact of risk and transparency on user trust, using nine simulated fully autonomous drives within a static driving simulator. Results showed that lower-risk driving scenarios and higher-transparency HMIs increased user trust-related beliefs and intention to use. Perceived reliability and trust fully mediated the effects of risk and transparency on intention to use. The findings of this study provide insights into HMI transparency under different driving scenarios that may impact user trust.
Cooperation between drivers and automated vehicles requires transparent communication of the automation’s current status. This can be achieved by communicating confidence or certainty in its current perception or decision. We evaluate different sensory modalities for communicating information about how safely an automated driving system can perform the driving task in critical traffic situations to a driver who is present as a cooperator. We aimed to improve communication between the driving system and the human driver, ultimately increasing the overall driving experience and performance. In a virtual reality driving simulation study with 34 participants, we presented confidence information across three modalities: visual, auditory, and vibrotactile, compared to a baseline condition. Our results indicate that communicating automation uncertainty through the auditory and vibrotactile modalities improved user experience, trust in automation, and perceived safety. At the same time, interactions with the Non-Driving Related Task were reduced by communicating confidence information in critical driving situations.