Creating a diverse and comprehensive dataset of hand gestures for dynamic human-machine interfaces in the automotive domain can be challenging and time-consuming. To overcome this challenge, we propose using synthetic gesture datasets generated by virtual 3D models. Our framework utilizes Unreal Engine to synthesize realistic hand gestures, offering customization options and reducing the risk of overfitting. Multiple variants, including gesture speed, performance, and hand shape, are generated to improve generalizability. Additionally, we simulate different camera locations and types, such as RGB, infrared, and depth cameras, without the additional time and cost of acquiring these cameras. Experimental results demonstrate that our proposed framework, SynthoGestures, enhances gesture recognition accuracy and can replace or augment real-hand datasets. By saving time and effort in dataset creation, our tool accelerates the development of gesture recognition systems for automotive applications.
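To illustrate the kind of variant randomization such a framework enables, here is a minimal Python sketch; the parameter names and value ranges are hypothetical and do not reflect the actual SynthoGestures API.

```python
import random
from dataclasses import dataclass

# Illustrative sketch only: parameter names and value ranges are
# hypothetical and do not reflect the actual SynthoGestures API.
@dataclass
class GestureVariant:
    gesture: str            # gesture class, e.g. "swipe_left"
    speed_factor: float     # playback-speed multiplier for the animation
    hand_shape: str         # 3D hand model variant
    camera_type: str        # simulated sensor type
    camera_position: tuple  # (x, y, z) camera location in cabin coordinates

def sample_variants(gesture, n):
    """Randomly sample n synthetic variants of one gesture class."""
    hand_shapes = ["small", "average", "large", "gloved"]
    cameras = ["rgb", "infrared", "depth"]
    positions = [(0.0, 0.4, 1.2), (0.2, 0.5, 1.1), (-0.2, 0.5, 1.1)]
    return [
        GestureVariant(
            gesture=gesture,
            speed_factor=random.uniform(0.7, 1.3),
            hand_shape=random.choice(hand_shapes),
            camera_type=random.choice(cameras),
            camera_position=random.choice(positions),
        )
        for _ in range(n)
    ]

variants = sample_variants("swipe_left", 100)  # 100 renders of one gesture
```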
Recent advancements in autonomous technology allow for new opportunities in vehicle interior design. This shift in in-vehicle activity suggests that vehicle interior spaces should adequately address users’ affective desires. Therefore, this study aims to investigate the affective role of future vehicle interiors. Thirty-one participants in ten focus groups were interviewed about the challenges they face with their current vehicle interiors and their expectations for future vehicles. Content analyses revealed the affective role of future vehicle interiors. Advanced exclusiveness and advanced convenience were the two primary aspects identified. Each aspect comprises four visceral levels, for a total of eight: focused, stimulating, amused, pleasant, safe, comfortable, accommodated, and organized. We expect the results of this study to support the development of affective vehicle interiors by providing fundamental knowledge for developing conceptual directions and evaluating their impact on user experiences.
Level 3 autonomous cars may benefit older and younger drivers, but their perspectives on the technology remain understudied. We employed a questionnaire and focus groups to examine how older drivers, treated as a heterogeneous age group (60-80 years old, divided into four subgroups), and younger drivers (22-25 years old) perceive the trust, safety, and usefulness of Level 3 automation and the non-driving related tasks (NDRTs) they would perform. The 60-65 group was mostly resistant towards Level 3, whereas the 76-80 group saw it as a chance to stay mobile. All groups were eager to engage with NDRTs; however, prior to gaining trust they would not engage with highly distracting tasks such as reading. The 76-80 group stressed the importance of designing takeover requests that consider their decline in physical and cognitive abilities. In this research, we highlight the importance of considering age-related needs in the HMI design of Level 3 cars.
Anger can lead to aggressive driving and other negative behaviors. While previous studies treated anger as a single dimension, the present research proposed that anger has distinct intensities and aimed to understand the effects of different anger intensities on driver behaviors. After developing the anger induction materials, we conducted a driving simulator study with 30 participants and assigned them to low, medium, and high anger intensity groups. We found that drivers with low anger intensity were not able to recognize their emotions and exhibited speeding behaviors, whereas drivers with medium and high anger intensities appeared to be aware of their anger and its adverse effects and adjusted their longitudinal control accordingly. However, angry drivers generally exhibited compromised lateral control, as indicated by steering and lane-keeping behaviors. Our findings shed light on the potentially different influences of anger intensities on young drivers’ behaviors, especially the importance of anger recognition for intervention solutions.
If we want to develop automated vehicles that are truly human-centered, we need to keep in mind that our reasoning ability is far from “flawless”, as we are susceptible to inherent cognitive biases. With this work, we aim to systematically investigate cognitive biases in the context of automated driving for the first time. Using an online study (N = 99) and a short experiment (N = 21), we investigated two exemplary cognitive biases (1. illusion of control and 2. automation bias). Our results suggest that both can occur during automated driving. Specifically, we found that not only a greater amount of information about vehicle behavior but also the humanization of this information can lead to an (illusory) higher sense of control.
We present an interaction scenario-based design space for autonomous vehicles (AVs) and external road users (ERUs), emphasizing the gaps and fragmented approach in current standards development. We first provide an overview of research in AV-ERU interactions and ongoing standards and policy development efforts. We then outline the AV-ERU ecosystem and identify key information modes that encapsulate potential interaction scenarios, such as states, intent, and responses of AVs. We describe the elements of the design space, including the interfaces, interactors and agents, modalities of interaction, and design considerations. We demonstrate the application of our design space and map existing standards to highlight its significance as a comprehensive tool for future standards development. We discuss the need for collaborative efforts among stakeholders to expedite and reduce bias in the establishment of standards.
Autonomous vehicles (AVs) have the potential to transform the way we live. Nevertheless, at least in the foreseeable future, it seems that AVs will not be able to resolve every problem they encounter, and thus there will be a need for human involvement in the form of remote vehicle teleoperation. However, it is still unclear how such a teleoperation station should be designed and which views should be presented to the teleoperator. In this study, we examine the effect of a bird's eye view (BEV) on situation awareness in tele-driving. We compare a regular frontal view with two different arrangements of a BEV added to the frontal view. Our findings indicate that adding a BEV does not improve the situation awareness of the tele-driver and, in some cases, even makes it more difficult for the user to perceive some important environmental features.
Arousing empathy is an emerging theme in human-vehicle interaction design, as it can help minimize misunderstandings between drivers and improve driving safety. In this study, we investigated potential empathic triggers in in-vehicle voice-based interactions using the "co-constructing stories method" with 11 participants. Our findings suggest that effective empathic triggers possess three characteristics: they are (1) explainable, (2) definite, and (3) either objective and irresistible or subjective and urgent. We propose a design principle in which empathic triggers are constructed from one Basic element (Reason) and at least one Detail element (Identity, Destination, or Time). Additionally, we discuss drivers’ willingness to share information. This study serves as a foundation for developing design principles that utilize voice to arouse empathy in human-vehicle interactions.
Various methods have been developed to measure user experience (UX). The majority of measures consider the multidimensionality of the construct, including the User Experience Questionnaire (UEQ). However, the UEQ neglects that the individual UX components might be of unequal importance to different cultures. Therefore, the aim of this contribution was to gain first insights into what users from different countries, i.e., China, Germany, and the United States, cherish in an automotive user interface (UI), using the six factors provided by the UEQ. The results indicated that participants from different cultural backgrounds preferred different UI qualities. Based on the findings, we compiled some initial thoughts on how to consider cultural differences both in the design of an automotive UI and in the methodological approach. We raise the question of whether a weighting scheme for the UEQ is worth considering in cross-cultural UX research.
Fully automated vehicles (SAE Level 5) will eliminate the need for a human driver, allowing passengers to focus on non-driving activities, such as those provided by infotainment services. There has been previous research on the use of non-driving related content on windshield displays. However, the placement area is limited and the content is likely to be viewable only from a single viewpoint (perspective issues). Therefore, we propose to detach content from the windshield and instead use an augmented reality space to provide infotainment content to passengers. In a within-subject virtual reality user study (N=19), we examined how front seat passengers would place infotainment content windows (and with what properties) in the open space using controllers while seated in a fully automated vehicle (within-subject factor: seating position, driver’s or front passenger’s seat). We also looked at the similarities and differences based on sitting position (left or right). We found that most content was not placed on top of windows/the windshield, with the exception of video content, which raises the question of whether windshield displays are the right medium for fully automated vehicles. In addition, between 40% and 63% of content was placed in mirrored positions when comparing seating positions.
Driving demands a significant amount of visual attention, which may impair drivers’ ability to detect and react to visual information such as blind spot warnings. This online pilot study (N = 242) evaluated the perception of binaural auditory cues and concurrent visual animations about overtaking vehicles designed to improve drivers’ situational awareness. The results showed that the spatial direction of the sounds was perceived with a high degree of precision, and the sounds were rated as pleasant and natural sounding. Spatial congruency between the two information modalities was accurately perceived, with visualizations rated as significantly more reliable blind spot indicators than sounds. The results suggest that the spatial direction of binaural sounds can be consistently interpreted both alone and in tandem with visualizations in the given driving scenario. Binaural sounds could enhance drivers’ situational awareness in an eyes-free way, helping them anticipate overtaking vehicles even before they trigger visual blind spot warnings.
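The abstract does not specify how the binaural directions were rendered; as a point of reference, a classical first-order model of the direction cue is Woodworth's spherical-head formula for the interaural time difference (ITD), sketched below in Python (full spatialization would typically use HRTFs instead).

```python
import math

HEAD_RADIUS = 0.0875    # m, average adult head radius
SPEED_OF_SOUND = 343.0  # m/s

def woodworth_itd(azimuth_deg):
    """Interaural time difference (s) for a source at the given azimuth
    (0 = straight ahead, 90 = fully lateral), per Woodworth's
    spherical-head model: ITD = (r / c) * (theta + sin(theta))."""
    theta = math.radians(min(abs(azimuth_deg), 90.0))
    return HEAD_RADIUS / SPEED_OF_SOUND * (theta + math.sin(theta))

print(f"{woodworth_itd(60.0) * 1e6:.0f} microseconds")  # ~488 at 60 degrees
```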
Conducting research on user-energy interaction in automotive systems in controlled settings is challenging due to the lack of low-cost driving simulation environments that enable both (1) a precise simulation of vehicle energy dynamics and (2) a high-fidelity representation of the driving environment. This Extended Abstract presents EcoSimLab, a driving simulator environment for the rapid prototyping, testing, and evaluation of energy interface and eco-assistance design concepts, as well as for comprehensive studies on eco-driving behavior and further facets of user-energy interaction. We present the system architecture based on BeamNG.tech, initial data on usability and energy model validity, and we discuss future enhancements and potential applications of EcoSimLab.
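As background on what "vehicle energy dynamics" typically entails, the sketch below computes instantaneous traction power from the standard road-load equation; it is an illustrative textbook model, not the energy model actually used in EcoSimLab or BeamNG.tech, and all parameter values are assumptions.

```python
import math

# Standard road-load equation; NOT the actual EcoSimLab/BeamNG.tech
# energy model. All parameter values below are illustrative assumptions.
def traction_power(v, a, grade=0.0, m=1800.0, c_d=0.28, area=2.3,
                   c_rr=0.010, rho=1.225, g=9.81):
    """Instantaneous traction power (W) at speed v (m/s), acceleration
    a (m/s^2), and road grade (rad)."""
    f_aero = 0.5 * rho * c_d * area * v ** 2  # aerodynamic drag
    f_roll = c_rr * m * g * math.cos(grade)   # rolling resistance
    f_grade = m * g * math.sin(grade)         # grade resistance
    f_inertia = m * a                         # acceleration force
    return (f_aero + f_roll + f_grade + f_inertia) * v

# Energy for one 50 ms simulation step at 90 km/h with mild acceleration:
e_kwh = traction_power(v=25.0, a=0.2) * 0.05 / 3.6e6
```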
Representing sub-systems of automated vehicles with a team of conversational agents has been suggested in the past to improve the process of trust calibration. Experiments so far have not included embodied agents. In this work, we present two potential analogies, derived from a company hierarchy and a spaceship crew. We developed comic strips showing everyday interactions with these agents in an automated vehicle and evaluated them in a vignette study in comparison with single-agent approaches. Our results suggest that a multi-agent approach could be beneficial for trust calibration, but only for the company analogy, not the spaceship analogy. Only some of the design decisions regarding the agents’ personalities worked out as intended. Still, we believe the concept is promising for future investigations.
Automated shuttle buses promise to bring numerous benefits, particularly for groups of people who are unable to drive themselves, such as senior citizens, teenagers, children, and people with disabilities. However, these groups are currently underrepresented in research within the automotive area. In this paper, we report on user enactments with a diverse range of potential users (N = 48) in terms of age, physical limitations, and the intersection of these characteristics. We used a low-fidelity prototype of an automated shuttle bus with adjustable dimensions and seat arrangements and gathered qualitative data to identify specific challenges and needs for ensuring the inclusion of a wide range of individuals in these buses. Our data show that certain changes are needed, such as adopting a multimodal approach, avoiding elevation, and strategically planning seat distribution, in order to ensure inclusivity.
In complex driving scenarios, drivers often face the challenge of making quick decisions regarding the safety of crossing intersections or entering roundabouts. These decisions, prone to human error, can compromise road safety and driving efficiency. Recent advancements in augmented reality (AR) glasses hold significant potential for assisting drivers in avoiding such dangers. Unlike traditional AR heads-up displays (HUDs), AR glasses provide a larger field of view. While previous research has proposed various driving-assistance concepts using simulated displays, only a few studies have explored actual implementation or experimentation with real AR glasses. This study introduces a novel concept of visualizing risks through AR for driving assistance systems. We have designed and implemented two different interfaces specifically tailored for real AR glasses, integrating them into a driving simulator system. To evaluate the effectiveness of AR glasses in driving assistance, we plan to conduct experiments based on this platform in the future.
Modern vehicles are equipped with advanced sensors and capabilities, yet engaging human drivers effectively in hazard perception remains a challenging research area. This paper investigates the potential of mixed reality (MR) to enhance driver warning systems. The study examines the impact of attention-shifting methods, including visual cues, audio cues, and a combination of both, on drivers’ hazard perception. A preliminary study involving six participants was conducted, and the NASA-TLX analysis did not yield significant differences. However, through the use of functional near-infrared spectroscopy (fNIRS), we unveil distinct brain activation patterns associated with visual and sound cues. Further research with larger sample sizes and diverse driving scenarios is required to validate and expand upon these preliminary results.
An increasing number of traffic accidents are attributed to distracted driving, encompassing both mind wandering (MW) and secondary task-related distractions. Autonomous vehicles (AVs) have various support features to reduce these safety issues. However, given that AVs on the road are not fully autonomous, human drivers are still required to be ready to take over when the system encounters limitations. Studying the driver's mental state during the takeover process therefore becomes vital. Signal displays have been confirmed as an effective means of alerting drivers to takeover requests, yet there remains a paucity of studies investigating the effectiveness of multimodal displays in AVs. This study aims to compare the impacts of internal distraction (mind wandering) and external distraction (secondary tasks) on takeover performance in partially automated vehicles, and how multimodal displays (comprising visual (V), tactile (T), and visual + tactile (VT) elements) can effectively notify drivers of a takeover request. A human subject experiment will be conducted, and the insights garnered from this study will guide the future design of AVs while also contributing to the development of a mental model to predict drivers’ takeover performance when distracted.
The development of automated vehicles (AVs) brings new challenges to human-vehicle communication, primarily in urban environments. One way to facilitate communication is seen in the use of external human-machine interfaces (eHMIs). These indicate the current status as well as the intentions of AVs to other road users. This paper focuses on designing and evaluating eHMIs for communication between multiple AVs and pedestrians at multi-lane intersections. For this purpose, we analyzed the current situation with traffic signals and future scenarios with eHMIs. We designed an eHMI concept suitable for scenarios with many vehicles and different intersection situations. In an online survey (N=104), we used videos from four test scenarios for a comparative evaluation (baseline: traffic signals). The results show that the eHMI concept received higher ratings for user experience and emotional state in some cases, but also highlight the need for consistent behavior across different vehicles, which would be the default with traffic lights.
Due to the occlusion caused by front vehicles, in most cases drivers can only react to the directly leading vehicle ahead of them (DLV) in chain braking scenarios. Research on connected vehicles has suggested that awareness of traffic flow in chain braking scenarios can improve traffic safety. Thus, in a video simulation experiment, we explored the safety benefits and user acceptance of two windshield-displayed V2V-communication-based HMIs that inform drivers of chain braking events: one with streaming video from the camera in front of the DLV (vHMI) and the other with a warning sign indicating chain braking events (sHMI). We found that both HMIs improved drivers' understanding of the situation and reduced reaction time and braking response time (BRT) compared to the baseline condition (without HMIs). Further, users reported higher usability and satisfaction with the sHMI. This research highlights the benefits of providing front-traffic information to drivers in chain braking scenarios.
Driver monitoring systems detect driver distraction and will become a standard safety feature in vehicles soon. Advanced driver distraction warnings prompt drivers through different modalities to keep their attention on the road. With added driving automation, research suggests that drivers tend to be distracted and take their eyes off the road more often. To increase safety, distraction warnings will be used in conjunction with driving automation. However, there is no empirical evidence on how drivers’ altered gaze behavior affects the occurrence of distraction warnings when adding automation. In the present driving simulator study, N=22 participants performed use cases that triggered distraction warnings while driving in different automation modes. Results showed that warnings were most frequently triggered by long glances away from the roadway. Furthermore, it became evident that the warning frequency differed, depending on the automation mode. Finally, future research on advanced driver distraction warnings is outlined.
With the continual growth of new technology in modern-day vehicles, more applications are being implemented in these complex devices. These applications push various notifications and messages to the driver from a variety of sources, such as audio, communication, or vehicle-state systems. While attending to these messages, which involves reading and sometimes responding, drivers must decide how to manage their attention. This involves moving their eyes from the forward roadway to attend to content located within the vehicle. To understand the impact of the vehicle's message presentation location on driver attention and task performance, we evaluated four different Human Machine Interface configuration locations. The results from a simulator study (N = 22) indicate that the center screen location reduces drivers’ attention to the road when compared to both the instrument cluster and Head-Up Display locations. Additionally, we highlight some benefits of using attention-based metrics over distraction-based metrics.
Despite extensive analysis of the relationship between emotion and driving, the effects of empathy on driving remain less explored. This paper focuses on the role of empathy, particularly cognitive and affective empathy, as a potential mitigator of negative emotional states. We investigated how empathic responses from an in-vehicle agent influence a driver’s emotional state and driving performance through a between-subjects simulation study. Thirty participants were assigned to one of three in-vehicle agents: a cognitive empathy style, an affective empathy style, or a non-empathy style agent. They drove using a driving simulator and received empathic responses from the in-vehicle agent when adverse events happened. The results showed that the affective empathy style agent helped drivers drive more safely and with lower negative affect compared to the cognitive empathy style agent and no agent. We expect the findings of this study to provide valuable insights for designing empathic interactions between a driver and a vehicle.
We employed the ‘Ghost Driver’ methodology to emulate an autonomous vehicle (AV) and explored pedestrians’ (n=520) crossing behaviour in response to external human-machine interfaces (eHMIs). Three eHMI designs were created to replace absent pedestrian-driver communication; each had different anthropomorphic elements and was identified as ‘explicit’, ‘implicit’, or ‘low’ to reflect the conspicuity of anthropomorphism. They were displayed on an LED matrix and strip mounted to the front of a Nissan Leaf vehicle, which was driven around the university campus over 5 days. Video analysis highlighted differences in pedestrians’ behaviour, with the explicit anthropomorphism eHMI extending crossing time and attracting more visual attention. Additionally, some pedestrians continued to use gestures, ostensibly to indicate their intention to cross or to thank the vehicle, despite the absence of a visible driver. While preliminary findings support the application of anthropomorphism in AV-pedestrian communications, further research will explore designs in more controlled, experimental settings.
Performing non-driving-related tasks as a car passenger reduces visual perception of the surroundings, which may cause a conflict with the human vestibular system and thus lead to motion sickness. Augmented reality head-mounted displays offer a possible solution to this phenomenon by presenting digital content at head level, as opposed to common displays placed on one’s lap, keeping peripheral vision intact. However, technical limitations such as the end-to-end latency of video see-through devices may counteract this advantage. Therefore, we investigated a mobile office scenario by comparing video see-through augmented reality to a traditional laptop setup with regard to motion sickness and task performance in a moving car. Our results suggest similar responses to motion sickness between conditions, with limited effects on task performance and improved ergonomics when using augmented reality.
As an interactive medium between drivers and vehicles, in-vehicle virtual robots significantly impact user performance and experience. Among them, robots with anthropomorphic appearances are the mainstream. As an emerging human-vehicle interaction interface, the Augmented Reality Head-Up Display (AR-HUD) has not yet been used to showcase in-vehicle virtual robots. For this reason, it is currently unclear what preferences users have for the anthropomorphic appearance of virtual robots when using an AR-HUD. In this work, we conducted two online experimental studies focusing on two aspects of anthropomorphic appearance: human similarity and morphological completeness. Valid data sets from 257 participants were collected. Participants rated the in-vehicle virtual robot’s anthropomorphic appearance on five preference dimensions: Pleasure, Fear, Trust, Comprehensibility, and Acceptance. We found that users prefer virtual robot appearances with medium human similarity and high morphological completeness, which aligns with current theories on robot appearance. Our future research will focus on developing and validating design principles for the appearances of in-vehicle virtual robots.
Manipulating mind wandering (MW) in driving simulator studies without introducing a non-driving-related task remains challenging. This study aims to empirically evaluate the modified speed regulation n-back task, a tool to manipulate drivers’ MW. Our experiment involved 23 participants who experienced a 0-back and a 2-back driving condition, with task-irrelevant novel environmental sounds used to elicit P3a event-related potentials. Results indicate that the 2-back condition was perceived as more demanding, evidenced by higher NASA-TLX scores (overall score, mental and temporal demand, effort, frustration). The mean P3a amplitude was diminished during the 2-back condition compared to the 0-back condition, suggesting that drivers experienced higher MW and had fewer resources available to process the novel environmental sounds. This study provides empirical evidence indicating that the speed regulation n-back task could be a valid, effective, and reproducible method to manipulate MW in driving research.
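For readers unfamiliar with the paradigm, the sketch below generates target sequences for a speed-based n-back task in Python, under the assumption that spoken speed cues are presented and the driver must regulate to the cue presented n stimuli earlier; the cue values and pacing are hypothetical and may differ from the modified task evaluated in the paper.

```python
import random

# Sketch of n-back target bookkeeping, assuming spoken speed cues and
# the rule "regulate to the cue presented n stimuli earlier"; cue
# values and pacing are hypothetical, not those of the actual task.
def speed_nback_targets(stimuli, n):
    """For each stimulus index i, return the speed the driver should
    currently hold: the cue presented n steps back (None until enough
    cues have been heard)."""
    return [stimuli[i - n] if i >= n else None for i in range(len(stimuli))]

cues = [random.choice([30, 50, 70]) for _ in range(12)]  # km/h
targets_0back = speed_nback_targets(cues, 0)  # follow the current cue
targets_2back = speed_nback_targets(cues, 2)  # hold the cue from 2 back
```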
Age-related macular degeneration (AMD) is a leading cause of blindness, and it limits the ability of older adults to drive independently. Highly autonomous vehicles (AVs) offer a potential solution for those with low vision, but accessibility issues must be addressed to create a safe and pleasant experience. A study was conducted using a human-centered design process consisting of five stages to design a prototype that aids older adults with AMD to maintain situational awareness and control while riding in a highly or fully AV. The prototype includes a voice-activated navigation system with three levels of detail, a 360° in-vehicle camera, a retractable microphone, and physical buttons on the console-side of the right and left front seats to manually activate the navigation system. These features are designed to help passengers with low vision to travel with a necessary level of situational awareness and remain in control during the ride.
This paper presents a data review conducted as part of a larger study commissioned by the Queensland Government. The overall study focuses on improving transportation and mobility access for people with disabilities (PwDs) by developing design principles for accessible autonomous vehicles (AAVs). Existing literature highlights the cumulative disadvantage faced by PwDs, including lack of mobility and social inclusion, and violation of human rights due to barriers in transportation. An autonomous future has the potential to provide accessible transport options, leading to improved access to medical care, employment, social inclusion, and safety. The data review, based on available numerical and metric data, identifies key information for accessible AVs, such as wheeled mobility user anthropometric percentiles, clear floor space requirements, recommended ramp angles, interior layout configurations, and door dimensions. However, there is a need for further research in co-design prototyping, analysis of standards, and current design best practices. The data review, alongside the larger ongoing research, aims to provide a comprehensive set of design principles for AAVs that cater to the diverse needs of PwDs, including people who are blind or have low vision, deaf or hard of hearing, or mobility impaired, people with intellectual and developmental disabilities, and elderly populations. The review concludes by discussing the clusters of information identified, gaps in data, and opportunities for further research.
Automated buses are expected to improve mobility services in depopulated regions of Japan. However, passenger concerns about safety and information provision must be addressed through an internal human-machine interface (HMI) that displays information and allows communication with remote supervisors. Through an online survey, this study investigates passengers' preferences regarding HMI designs. The survey required participants to rate four HMI elements (text, humanoid voice, supervisor's voice, and supervisor's voice and face) under seven scenarios on three aspects: sense-of-safety, trust, and necessity. The opinions of 324 frequent bus users were collected. The results indicated that using text and a humanoid voice to display information is sufficient in normal situations. Regarding communication during abnormal situations, while presenting the supervisor's face can increase the sense-of-safety and trust, participants considered the supervisor's voice alone sufficiently necessary. These findings can provide useful insights for parameterizing internal HMIs of automated buses.
Empathic in-vehicle interfaces are critical to improving user safety and experiences. There has been much research on how to estimate drivers’ affective states, whereas little research has investigated intervention methods that mitigate the potential impacts of a driver's affective states on their driving performance and user experience. To advance the development of in-vehicle interfaces that consider emotional aspects, we have organized a workshop series gathering automotive user interface experts to discuss this topic at the International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI). The present paper focuses particularly on the intervention methods created by the experts and proposes design recommendations for future empathic in-vehicle interfaces. We hope this work can spark lively discussions on the importance of drivers’ affective states in their user experience of automated vehicles and help set the right direction for future work.
Research on automotive user interfaces has highlighted the importance of transparent and explainable user interfaces, especially in safety-critical situations where automation failures can lead to (fatal) accidents. In this work, we investigated the use of augmented reality (AR) windshield displays as a means to improve explainability in safety-critical driving situations. To this end, we conducted a mixed-methods study (VR-based driving simulation combined with a semi-structured interview) with N = 24 participants and evaluated two concepts, containing (1) first-person information about vehicle behavior and (2) additional information about other traffic objects. Results showed that confidence and situational awareness were significantly higher when additional information was presented. The highest scores were achieved when participants were shown "how & why" information.
Safety assurance remains a challenge for the large-scale deployment of Automated Driving Systems (ADS). Safety models monitor the performance of the ADS. Most safety models are validated both in simulation and during on-road tests. However, first-hand experiences and analyses of ADS safety models are not easily accessible to the general research community. This paper introduces the RSS driving demonstrator, an open-source simulation tool that enables first-hand experience of the Responsibility Sensitive Safety (RSS) safety model proposed by Intel and Mobileye and adopted by several automotive industry standards and regulatory frameworks. The RSS demonstrator enables first-hand interaction with and experience of ADS safety model restrictions in both automated and manual driving conditions. As a User Experience (UX) tool, it provides quantitative safety metrics and flexible user interaction features. The results indicate that it served both to communicate the RSS ADS safety model to a lay audience and as a versatile tool for automotive UX development.
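At the core of RSS is a closed-form rule for the minimum safe longitudinal gap between two vehicles. The sketch below implements the published RSS following-distance formula in Python; the parameter values are illustrative defaults, not the ones used by the demonstrator.

```python
def rss_min_longitudinal_gap(v_rear, v_front, rho=0.5, a_accel_max=3.5,
                             a_brake_min=4.0, a_brake_max=8.0):
    """Minimum safe following distance (m) per the published RSS rule:
    in the worst case, the rear vehicle accelerates at a_accel_max for
    the response time rho and then brakes at only a_brake_min, while
    the front vehicle brakes at full force a_brake_max."""
    v_rear_worst = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_rear_worst ** 2 / (2 * a_brake_min)
         - v_front ** 2 / (2 * a_brake_max))
    return max(d, 0.0)

# Both vehicles at 20 m/s (~72 km/h): a gap of roughly 45 m is required.
gap = rss_min_longitudinal_gap(20.0, 20.0)
```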
With the prevalence of smart cockpits, passengers spend increasingly more time engaging with non-driving related tasks (NDRTs, e.g., in-vehicle entertainment and meetings) in vehicles. However, prolonged NDRT engagement can cause sensory conflict and result in motion sickness. Previous research found that motion sickness is highly associated with head motion, which can further be influenced by the positions of the NDRT displays. To guide the design of the smart cockpit, this paper examines the impact of four common NDRT display locations on passengers’ motion sickness. A motion transfer function was utilized to translate a real-world vehicle trajectory into head motion when passengers looked at displays at different locations. Then, a 6 Degree-of-Freedom subjective vertical conflict model was adopted to evaluate the motion sickness caused by head motion. The results of this study can inform the design of in-vehicle displays and improve passenger comfort.
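As a rough intuition for this model class (not the 6-DoF subjective vertical conflict model used in the paper), the following one-dimensional Python sketch illustrates the core idea of the Bos & Bles model family: the internal estimate of the vertical is modeled as a low-pass filtered version of the sensed gravito-inertial acceleration, and the conflict driving sickness is their difference; the time constant and filter form are simplifying assumptions.

```python
import numpy as np

# Highly simplified 1-D illustration of the subjective-vertical-conflict
# idea; the paper uses a 6-DoF model. The internal estimate of the
# vertical is a first-order low-pass of the sensed gravito-inertial
# acceleration (time constant tau is an assumed value).
def svc_conflict(gia, dt, tau=5.0):
    """Return the instantaneous conflict magnitude for a 1-D
    gravito-inertial acceleration trace sampled every dt seconds."""
    gia = np.asarray(gia, dtype=float)
    est = np.empty_like(gia)
    est[0] = gia[0]
    alpha = dt / (tau + dt)  # discrete first-order low-pass coefficient
    for i in range(1, len(gia)):
        est[i] = est[i - 1] + alpha * (gia[i] - est[i - 1])
    return np.abs(gia - est)

# Example: a 0.1 Hz lateral sway of 1 m/s^2 sampled at 10 Hz.
t = np.arange(0, 60, 0.1)
conflict = svc_conflict(np.sin(2 * np.pi * 0.1 * t), dt=0.1)
```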
Driving simulators are among the most frequently used research tools in the AutomotiveUI community. However, despite numerous publications on the topic throughout the past four decades, no common understanding exists of when a simulator should be considered valid or of how driving simulator validity should be investigated. The present paper aims to achieve a more refined understanding of what driving simulator validity actually is. We propose a framework which may be used in the context of driving simulator studies and provide recommendations for researchers approaching simulator validation.
A paradigm shift is taking place in the automotive industry, where combustion engines are being replaced with batteries. Electric cars have a shorter range and are to a greater extent affected by the driver’s behavior and other external factors. The need to teach electric car owners about their car’s performance has thus emerged. This paper investigates how to coach users of electric vehicles to drive more energy efficiently while simultaneously enhancing the driving experience by utilizing a smartphone and an in-car application. By collecting data about the user group and the contexts, the understanding of Battery Electric Vehicle (BEV) drivers has deepened. Based on the findings, prototypes were developed and evaluated iteratively with users, enabling further investigation into their needs. This resulted in ten design recommendations, addressing three domains: Feedback, Usability, and Device.
An emerging challenge concerns the interaction between automated personal mobility vehicles (APMVs) and other road users to avoid accidents. In this article, we report on the effects of two external human-machine interface (eHMI) types and compare them with no eHMI as the baseline. A complex mobility scenario was designed in which the APMV approached the pedestrian at 60, 90, and 120 degrees. The pedestrian's crossing initiation time and eHMI visibility distance were measured in virtual reality with eye-tracking (N=30). Significant differences were found across all variables, and participants crossed the road earlier in response to the textual eHMI compared to the light bar eHMI. However, when the APMV approached from a 90- or 120-degree angle, participants made a faster decision to cross when presented with the light bar eHMI. This study highlights the importance of the eHMI type in a complex situation where an APMV is coming from a blind spot.
Cooperative driving has been established as a method to engage the human driver even in higher levels of automated driving, in which human intervention is not strictly necessary. However, a key question regarding a cooperative approach is whether users of automated vehicles even feel the need for cooperative guidance in vehicles equipped with SAE Level 4 and 5 automation, where the human driver is no longer required to intervene. To answer this question, we conducted a use case analysis in which we interviewed participants (N=16) regarding their need for cooperation in different situations. Results show that nearly all participants still wanted to be able to intervene in the driving task, although this depended on the specific situation. User needs were similarly situation-dependent, while “Autonomy”, “Security”, “Competence”, and “Relatedness” emerged as the most important psychological needs.
Over the past few years, there has been increased emphasis placed on the research and development of in-vehicle advanced driver assistance systems (ADAS) that can be used in both traditional and self-driving (so-called autonomous) vehicles. This is a huge step toward providing better comfort and improving the driver experience, coupled with improvements to safety. Despite this, we have found that drivers do not use ADAS to their full potential in everyday use, and a number of factors could be at play. The primary purpose of this workshop is to shed light on the reasons why participants are not activating their ADAS and other comfort functions. In addition, it will serve as a useful benchmark against which to measure the progress of future driver expectations and requirements for ADAS.
As the development of mixed reality (MR) technology continues to progress, the range of applications and purposes within vehicles and for road users is expanding. MR holds the potential to enhance road safety, enable drivers to engage in non-driving related tasks (NDRTs), and improve passenger experiences. Moreover, the utilization of head/helmet-mounted displays (HMDs) with MR technology for cyclists can augment their vision and contribute to the safety of both cyclists and pedestrians. Furthermore, MR can play a crucial role in facilitating the transition towards automated driving. Nevertheless, there remain several challenges associated with the application of MR in vehicles, as well as human factors issues that require resolution. Additionally, virtual reality (VR) has the capability to immerse passengers in virtual worlds, offering opportunities for enjoyable passenger experiences. Presently, most MR research primarily focuses on individual users at a particular point along the reality-virtuality continuum. In this workshop, we will examine the potential, limitations, impact, role, and suitability of MR in driving applications and simulations. This will encompass topics such as holistic MR experiments, transitional interfaces, and HMDs specifically designed for cyclists. The primary objective of this workshop is to establish a research agenda for the application of MR utilized by road users over the next 3 to 5 years and beyond.
The adoption of automated vehicles will be a positive step towards road safety and environmental benefits. However, one major challenge that still exists is motion sickness. The move from drivers to passengers who will engage in non-driving related tasks, as well as the potential change in the layout of the car interior that will come with automated vehicles, are expected to result in a worsened experience of motion sickness. The previous workshop [18] highlighted the need for consensus on guidelines regarding study design for motion sickness research. Hence, this workshop will develop a guide for motion sickness research through reflection and discussions on the current methodologies used by experts in the field. Further, it will build on the knowledge collected from the previous workshop and will thereby facilitate not only new research ideas and fruitful collaborations but also help find a consensus in the field with regard to study design and methodologies.
While the benefits of open science and open data practices are well understood, experimental data sharing is still uncommon in the AutoUI community. The goal of this workshop is to address the current lack of data sharing practices and to promote a culture of openness. By discussing barriers to data sharing, defining best practices, and exploring open data formats, we aim to foster collaboration, improve data quality, and promote transparency. Special interest groups will be formed to identify parameter sets for recurring research topics, so that data collected in different individual studies can be used to generate insights beyond the results of the individual studies. Join us at this workshop to help democratize knowledge and advance research in the AutoUI community.
Vanlife refers to a lifestyle that may include part- or full-time habitation of a modified van. The challenges that Vanlifers face in customizing their vans are similar to those of customizing automotive interiors. Designing a van requires people to think about the balance between their personal priorities and the physical constraints. We are proposing a workshop that asks participants to bring the prototyping methods they are most familiar with to design their own Vanlife. In this workshop, we aim to observe how people create and interact with their vans using the prototypes they chose or developed. We intend to understand how design tools shape how people navigate designing for the unique constraints of a van.
This workshop aims to design advanced empathic user interfaces for in-vehicle displays, particularly for high-level automated vehicles (SAE level 3 or higher). Incorporating model-based approaches for understanding human emotion regulation, it seeks to enhance the user-vehicle interaction. A unique aspect of this workshop is the integration of generative artificial intelligence (AI) tools in the design process. The workshop will explore generative AI’s potential in crafting contextual responses and its impact on user experience and interface design. The agenda includes brainstorming on various driving scenarios, developing emotion-oriented intervention methods, and rapid prototyping with AI tools. The anticipated outcome includes practical prototypes of affective user interfaces and insights on the role of AI in designing human-machine interactions. Through this workshop, we hope to contribute to making automated driving more accessible and enjoyable.
As the field of automated vehicles (AVs) advances, it has become increasingly critical to develop human-machine interfaces (HMI) for both internal and external communication. Critical dialogue is emerging around the potential necessity for a holistic approach to HMI designs, which promotes the integration of both in-vehicle user and external road user perspectives. This approach aims to create a unified and coherent experience for different stakeholders interacting with AVs. This workshop seeks to bring together designers, engineers, researchers, and other stakeholders to delve into relevant use cases, exploring the potential advantages and challenges of this approach. The insights generated from this workshop aim to inform further design and research in the development of coherent HMIs for AVs, ultimately for more seamless integration of AVs into existing traffic.
Being the premier forum for automotive user interface research and other vehicular technologies, AutomotiveUI brings together professionals, academics, researchers, and industry representatives from all around the world interested in innovation, research, and application of automotive user interface topics, embodying diversity at its core. This diversity, however, is not always reflected in the conference’s main program. In order to expand the topic foci of the conference in the future, this workshop aims to identify the key factors that influence the main program creation and create strategies that can help increase its diversity and accessibility, culturally and geographically. We aim to exchange ideas and experiences and start conversations that raise awareness about this topic, in order to inspire longer-term follow-up activities which will eventually result in increased diversity and accessibility not only at AutomotiveUI but at international conferences in general.
The sense of presence is commonly defined as the recipient’s subjective sense of being there in a virtual environment. In the context of HCI research in traffic, virtual reality methods as well as simulators of all kinds, including driving simulators, bicycle simulators, and pedestrian simulators, are part of the standard research repertoire today. In these simulated traffic environments, the aim is usually to produce a realistic impression of the corresponding real-world situation, creating a high sense of presence as well as realistic driving behavior. Throughout the course of the presented workshop, we intend to collect measures and exchange views about methods which can be applied to increase the sense of presence in driving simulation, avoiding so-called "breaks in presence". Different measures to increase presence shall be generated and evaluated by the workshop participants. We further seek to discuss different components of presence and how these relate to various types of simulation, as well as simulation validity. In summary, the present workshop aims to establish current knowledge regarding the concept of presence, while identifying promising future measures to increase the sense of presence in simulated traffic environments.
As the adoption of automated vehicles becomes more prevalent, there is a need for systematic approaches to understanding, communicating, and analyzing the design aspects of the technology. Designing a taxonomy—a hierarchical framework that classifies and organizes design elements and features—is a critical tool in this pursuit. A well-defined taxonomy provides an effective communication tool and enables stakeholders to understand, discuss, and compare system designs. However, current taxonomies often fail to consider technological advancements and the needs of stakeholders. This gap limits the understanding of vehicle automation and hinders addressing critical aspects such as safety and social impact. This workshop aims to propose a stakeholder-centered taxonomy for automated vehicles, taking into consideration the perspectives and requirements of different stakeholder groups, such as technology professionals, policymakers, and end-users. The workshop aims to identify relevant considerations for taxonomy design that cater to different stakeholders, incorporate diverse perspectives, and highlight opportunities and challenges for effective communication and decision-making in vehicle automation.
The rapid advancement of electric vertical take-off and landing (eVTOL) technology presents new opportunities and challenges in the realm of urban air mobility. As eVTOLs transition from conceptual designs to practical implementations, ensuring a seamless and delightful user experience becomes crucial. This workshop aims to explore how cabin design including user interfaces (UI) and indoor environment can enhance the user experience at two different stages of eVTOL development: the initial phase, where increasing perceived safety is paramount, and the mature phase, where factors like comfort and hedonic quality play a significant role. During the workshop, experts and researchers will delve into the multifaceted aspects of UI design for eVTOLs. Participants will engage in discussions, share insights, and examine case studies to understand the potential impact of eVTOL UI design on user experience.
The 2023 European New Car Assessment Programme (Euro NCAP) [9] protocol states that Original Equipment Manufacturers (OEMs) should include Driver Monitoring Systems (DMS) and appropriate technical assessment dossiers for evaluation by driving authorities. This includes demonstrating how the system can identify elements of driver state: driver distraction, fatigue, and unresponsiveness. Whilst visual distractions have been detailed extensively, cognitive distraction has received less attention within these protocols. Part of the reason for this could be the lack of understanding or general consensus on cognitive distraction within the context of driver state. For example, how do we assess driver state, how do we develop ground truths, how much distraction should be considered too much, and what is and is not considered cognitive? To answer these questions, workshop participants will focus on the methods and metrics used to assess cognitive load and the impact this has on driver state and performance, whether during manual driving, monitoring an automated vehicle, or during takeovers after periods of automation.
There are several methods to design for positive user experiences (PUX). However, when it comes to technology-push innovations, identifying relevant user activities where the application of novel technologies can truly enable PUX is challenging. The workshop format is based on an innovative method – the "Inspiration Matrix" [2]. The method aims to systematically identify user activities in which the technology has the potential to deliver exceptional PUX. It does so by combining technology functions with specific user activities, allowing participants to foster their creativity systematically. We invite researchers and practitioners to join us in this interactive workshop, where we apply the Inspiration Matrix to develop PUX potentials of novel interaction technologies for next-generation vehicles, using the ProTable projection technology [3] as an example. This paper provides a comprehensive overview of the workshop's goals, schedule, and activities, with a focus on the anticipated outcomes that will drive future advancements in the field.
Human drivers are being gradually replaced by highly automated driving systems, and this trend is expected to continue. Alternatives should be available if driving algorithms are incapable of resolving ambiguous driving scenarios. What happens if an autonomous vehicle follows a vehicle traveling below the posted speed limit? Should the autonomous vehicle overtake the leading vehicle or maintain a safe distance? We must have solutions to address such situations. The way an autonomous vehicle responds to a variety of ambiguous driving scenarios is crucial for legal and safety reasons. To improve future road safety and convenience, this workshop aims to enhance a framework for developing various ambiguous driving scenarios and plausible AV actions in each of them. The results of this workshop will aid scientists in their strategic policymaking and algorithm design for AV responses to ambiguous driving scenarios.
The Workshop on Evaluating Augmented Reality in Transportation (EvalAR) brings together researchers and practitioners to address the challenges of evaluating augmented reality head-up displays (AR HUDs) with safety as a priority. With a collaborative approach, this workshop endeavors to shine a spotlight on the unique features of augmented reality, critically examine existing evaluation practices, and collectively identify future hurdles and actionable solutions. Our overarching goal is to collaboratively establish a strategic roadmap that addresses these challenges over the next 3-5 years and beyond. A key highlight of EvalAR is the introduction of the United Nations Economic Commission for Europe (UNECE) and its Working Party on General Safety Provisions to the AutoUI community. This introduction fosters invaluable collaboration and knowledge exchange, enabling researchers and practitioners to leverage each other's expertise. By facilitating discussions on knowledge and evidence provision, our workshop aims to bolster the academic community's contributions to regulatory improvements in transportation safety. Furthermore, EvalAR actively explores avenues for alignment with global regulations and industry standards, creating a fertile ground for potential collaborations, funding opportunities, and transformative advancements in augmented reality research for enhanced transportation safety.
Camera-based driver monitoring systems (DMS) are a mature technology capable of reliably detecting a wide range of driver behaviours and states [3, 7]. However, fully realising the safety potential of DMS requires accurate monitoring be paired with effective human-machine interfaces (HMI) to facilitate behavioural change when risky behaviours or unsafe driver states are detected. Further, increasing levels of driving automation pose challenges for researchers and manufacturers in determining how automated components of the vehicle-system should react when such states are detected. This workshop will address the issue of DMS-HMI integration by providing a space for collaborators to discuss key emerging issues for driver monitoring and HMI response. The organisers will present an introduction to DMS concepts and industry standards, and outline the challenge of implementing DMS in both manual and assisted driving environments, followed by small-group discussions. The organisers will combine industry and academic perspectives to facilitate problem solving with participants.
An incremental but increasingly fast-paced revolution is taking place in our complex world: the automation capabilities of vehicles and development speed are growing, amplified by an even stronger push towards sustainability. In these progressively more cooperative systems, humans, technical co-systems, organizations, society, and the environment all play their role in solving complex scenarios; a structured and integrated design is therefore one of the keys to successful cooperation between automated vehicles and humans (as drivers, passengers, and road users). This development not only carries risks but also opens the opportunity to design systems capable of dynamic cooperation instead of merely completing predefined tasks. In this workshop, we provide an introduction to the human systems integration approach to structured interaction design for cooperative vehicles by providing the tools and methods needed for a successful design process that integrates and balances the needs of experts, users, and other stakeholders.
In this demonstration, we present novel interaction modalities and use cases for AutoVis, a tool for the mixed-immersive analysis of automotive user interface (AUI) interaction studies. AutoVis uniquely enables exploration of AUI studies’ multilayered spatio-temporal interplay between humans, vehicles, and their surroundings by combining a non-immersive desktop view with a virtual reality view. It facilitates the analysis of passenger behavior, physiology, spatial interactions, and events within replications of study environments, employing avatars, trajectories, and heatmaps. To extend AutoVis and streamline interactions with it, we created a novel concept for gaze and gesture-supported analysis control. In addition, we conducted an exemplary use case study in the context of traffic accident reconstructions to explore the applicability of AutoVis apart from AUIs. By demonstrating these extensions, we contribute to the underexplored area of immersive analytics for AUIs and promote a more efficient and effective understanding of human-vehicle interaction.
Virtual (VR) and mixed reality (MR) simulations allow researchers to explore technologies and concepts that are not yet available. In the automotive sector, driving simulators are often used in user studies to investigate the experiences of drivers and passengers in manual and automated driving. However, it is also important to study interactions with other road users, in particular with vulnerable road users (VRUs). User studies can help understand how VRUs interact with vehicles or explore new user interfaces that improve their safety or comfort on the road. MR simulation is a valuable tool for this purpose, as it allows participants to experience potentially dangerous situations without being exposed to physical harm. A major improvement over previous VR/MR environments is our “procedural road generator” – a component that allows the situations participants encounter to be randomized in a “controlled” manner. Study settings are built for this purpose in blocks of predefined road segments, which are arranged in random order one after the other. As an application example, we showcase an MR bicycle simulator to study the effects of hazard messages for cyclists in connected traffic in different traffic situations.
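A minimal Python sketch of this block-randomization idea follows; the segment names and lengths are hypothetical, not the generator's actual segment library or layout logic.

```python
import random

# Minimal sketch of the block-randomization idea; segment names and
# lengths are hypothetical, not the generator's actual segment library.
SEGMENT_LIBRARY = {
    "straight": 100.0,      # block length in meters
    "curve_left": 80.0,
    "curve_right": 80.0,
    "intersection": 40.0,
}

def generate_route(blocks_per_route, seed=None):
    """Arrange predefined road-segment blocks in random order; fixing
    the seed reproduces the same route, e.g. for counterbalancing."""
    rng = random.Random(seed)
    names = list(SEGMENT_LIBRARY)
    return [rng.choice(names) for _ in range(blocks_per_route)]

route = generate_route(10, seed=42)
total_length = sum(SEGMENT_LIBRARY[name] for name in route)
```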
Autonomous vehicles (AVs) are a disruptive mode of transportation that is rapidly advancing. However, it is widely acknowledged in industry and academia that AVs may not be capable of handling every traffic situation independently, necessitating remote human intervention. Existing teleoperation methods face significant challenges, highlighting the need for innovative remote operation approaches. One such approach is tele-assistance, where remote operators (ROs) offer high-level guidance while delegating low-level controls to automation. Our research focuses on designing a tele-assistance interface. By interviewing 14 AV teleoperation experts and conducting an elicitation study with 17 experienced teleoperators, we identify road scenarios requiring remote human assistance. We then devise a set of discrete high-level commands that enable the resolution of these scenarios without manually controlling AVs. Finally, we integrate these findings into the design of a novel user interface for teleoperating autonomous vehicles.
Urban air mobility has attracted a lot of attention recently and will become more and more prevalent and established as a new form of mobility in the coming years. However, it should not be ignored that numerous people are skeptical of new technologies and reject them. This applies in particular to safety-critical areas such as (autonomous) flying. We created this demo setup to address community concerns and identify acceptance criteria together with the public. The demo allows viewers to experience flight scenarios in different conditions (day/night; normal weather/rain/snow/fog) in a true-to-scale mock-up of a passenger drone. This demonstrator will provide insight into this future transportation option, creating a space for open discussion.
This demo showcases the integration of a virtual reality (VR) driving simulator with a traffic simulator to enhance usability, improve traffic safety, and optimize traffic flow. By combining these two technologies, an immersive and holistic evaluation platform can be created. The VR driving simulator provides a realistic driving environment that enables users to experience various traffic scenarios, including complex and hazardous situations, in a controlled and safe manner (“me”-perspective). The traffic simulator complements the VR driving simulator by simulating realistic traffic patterns, including vehicle interactions, traffic signals, and road conditions based on statistical models of individualistic driving (“we”-perspective). The integration of these simulators allows for the evaluation of driver performance, decision-making, and reactions within a dynamic and realistic traffic environment. This combined approach offers benefits such as enhanced driver training, assessment of driver behavior, optimization of traffic management strategies, and development of intelligent transportation systems. By leveraging VR technology and traffic simulation, this integrated approach holds significant potential for future mixed traffic challenges.
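A minimal co-simulation loop for such an integration might look like the sketch below. It assumes SUMO as the traffic simulator, driven through its real TraCI Python API, while the `vr.update_vehicle` call is a hypothetical stand-in for the VR driving simulator's interface; neither tool is named in the demo itself:

```python
import traci  # SUMO's TraCI Python bindings

# Start the traffic simulation ("we"-perspective); the config file is assumed.
traci.start(["sumo", "-c", "scenario.sumocfg"])
try:
    while traci.simulation.getMinExpectedNumber() > 0:
        traci.simulationStep()  # advance the statistical traffic model one step
        # Mirror surrounding traffic into the VR view ("me"-perspective).
        for veh_id in traci.vehicle.getIDList():
            pos = traci.vehicle.getPosition(veh_id)
            angle = traci.vehicle.getAngle(veh_id)
            # vr.update_vehicle(veh_id, pos, angle)  # hypothetical VR call
finally:
    traci.close()
```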
Platooning in automated road transport is a promising method for reducing resource expenses while at the same time increasing efficiency and safety. By leveraging advanced technologies and communication systems, platooning enables vehicles to travel in close proximity, resulting in improved aerodynamics and reduced fuel consumption. While these pragmatic aspects show the potential of platooning, human factors issues also need to be taken into account. To simulate platooning, we created an interactive demo where participants can experience an automated ride in a platoon. They can adjust the gap to the leading vehicle and see how different distances affect performance metrics. Additionally, a miniaturized physical prototype and a web-based prototype for learning about the topic of platooning are presented. With this demo, our goal is to provide a hands-on experience of platooning and encourage discussions on the human factors aspects of this novel concept.
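The adjustable gap in the demo can be related to the constant time-gap spacing policy that is standard in the platooning and cooperative adaptive cruise control literature; the sketch below shows that textbook formulation, not necessarily the demo's internal model:

```python
def desired_gap(ego_speed_mps, time_gap_s=0.6, standstill_m=2.0):
    """Constant time-gap spacing policy: d = d0 + h * v.

    Smaller time gaps shorten the platoon and improve aerodynamics,
    but place higher demands on communication and control.
    """
    return standstill_m + time_gap_s * ego_speed_mps

# At 25 m/s (90 km/h) with a 0.6 s time gap: 2.0 + 0.6 * 25 = 17.0 m
print(desired_gap(25.0))
```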
This demo presents a virtual reality (VR) environment developed for evaluating signalized and unsignalized intersection scenarios using low-cost head-mounted displays (HMDs). Prior work focused on VR simulations that re-create less complex scenarios for the evaluation of different autonomous vehicle (AV) behavior and external human-machine interfaces (eHMIs). The proposed VR environment allows participants to walk across multiple road lanes and to experience and interact with high-fidelity simulated traffic scenarios, including signalized and unsignalized intersections at 1:1 scale, with vehicular traffic making turns and providing audio feedback. This simulation can be used for conducting user studies to test AV eHMIs or behavior interventions with participants.
Since future mobility development is user-centric, integrating user feedback into the early phases of development is essential. The Mixed Reality Tool is a novel approach to automotive interior visualization and evaluation using green screens and mixed reality glasses. Utilizing a four-sided green-screen arrangement in a studio setup, realistic automotive interiors are visualized, allowing for various evaluations and user interactions by immersing real individuals in a virtual environment. Two examples are explored: (1) showcasing the interior of a shared automated vehicle (SAV) and overlaying a real person using chroma keying, and (2) simulating an automated car interior to analyze the possible transitions and time involved in various non-driving related activities (NDRAs) carried out by real individuals. This innovative approach offers numerous advantages over traditional VR glasses, including improved immersion, realistic integration of real elements, and the ability to conduct evaluations with human presence for enhanced user experience and design optimization.
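The chroma-keying step that composites a real person into the virtual interior can be sketched with standard image processing. The example below shows the general technique only, with illustrative HSV thresholds that would need tuning for a specific studio; it is not the tool's actual pipeline:

```python
import cv2
import numpy as np

def chroma_key(frame_bgr: np.ndarray, background_bgr: np.ndarray) -> np.ndarray:
    """Replace green-screen pixels in a camera frame with a rendered background.

    Both images must have the same shape. The hue/saturation/value bounds
    below are typical for studio green but are setup-dependent assumptions.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (35, 80, 80), (85, 255, 255))  # green pixels
    mask = cv2.medianBlur(mask, 5)  # suppress speckle along the key edges
    out = frame_bgr.copy()
    out[mask > 0] = background_bgr[mask > 0]
    return out
```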
In contemporary automotive contexts, the growing substitution of physical buttons with digital screens has diminished the sensory feedback associated with physical forms, materials, and rich interaction. In response, this demonstration introduces shape-changing physical buttons that address this limitation. Building upon a material-oriented design exploration, "Trycka", the buttons were created and seamlessly integrated into the leather fabric of the car interior. These buttons possess temporal forms that enable multi-sensory (haptic and visual) interactive feedforward and feedback. As the development of automated vehicles and intelligent systems progresses, these buttons serve as output compositions of the system, allowing expressivity in the interactions. By presenting experiential prototypes in this demonstration, we provide the automotive community an opportunity to engage with and explore tactile sensations. This paper presents the explorative process of designing shape-changing buttons and the implications for the automotive industry.
In the near future, Automated Urban Air Mobility (UAM) will revolutionize passenger transportation in metropolitan areas. However, for this mode of transportation to gain acceptance, potential passengers must overcome their limited knowledge and concerns about its safety and reliability. To address these challenges, our demonstration shows augmented ego path visualizations of an air taxi above New York City, offering an immersive experience that enhances situational awareness and fosters trust towards UAM. By integrating daytime, night, and foggy weather conditions into the simulation, we bring the experience closer to what passengers may encounter in their actual journeys. Through this demonstration, we contribute to the broader emerging conversation surrounding UAM by emphasizing the importance of situation-aware visualizations in future forms of transportation.
The Science Gallery in Ingolstadt’s city center serves as a platform for public engagement and discussion on important future topics concerning urban development. The exhibition aims to make science tangible and accessible to all, fostering dialogue and promoting social acceptance of automated and networked mobility solutions. The Science Gallery offers the possibility to engage via interactive elements, including a driving simulator and virtual reality experiences of an autonomous shuttle bus interior. As the exhibition will be closed by September 2023, we will provide a 360° video demonstration at the conference, allowing participants to experience the Science Gallery virtually. We reflect on and discuss the lessons learned when realizing a physical space for knowledge transfer and feedback encouragement from the local community.
To address future mobility needs, Urban Air Mobility (UAM) is emerging as a new transport concept, integrating electric vertical take-off and landing (eVTOL) vehicles into existing transport networks. However, despite significant investment and advances in the technology, potential adopters remain sceptical towards it, with safety, environmental concerns, and practicality presenting the most pressing issues. As several barriers hinder real-life testing of possible solutions, virtual reality (VR) offers a practical tool to explore some of the emerging issues. This demonstration simulates an airport-to-city UAM shuttle, presenting three scenarios with variable weather, cabin interfaces, and abnormal events in a high-fidelity representation of a potential use case. As such, it provides a groundwork for future exploration of UAM acceptance factors in VR studies.
Since July 2022, newly registered passenger cars have included numerous safety-relevant advanced driver assistance systems (ADAS). In addition, other ADAS and communication-based systems (V2X communication) on the market can help drivers manage their driving tasks. Existing research on driver assistance systems and V2X warnings has examined their impact on reaction times, considering additional distractions. However, these studies do not align with commercially available vehicles’ ADAS and V2X communication designs. More detailed research is needed to understand how real-world vehicle ADAS and V2X communication activities affect driver reaction times and behavior. With this interactive demo, we aim to showcase how ADAS activities and warnings through V2X communications can influence driver behavior and thus impact a potential traffic accident. In particular, determining reaction times for accident reconstruction is the focus of the investigation.
External Human-Machine Interfaces (eHMI) enable interaction between vehicles and Vulnerable Road Users (VRU), for example, to warn VRUs of the car’s presence. Warning systems should convey the situation’s urgency, which can be achieved using Machine Learning (ML)-based VRU detection models. ML models and eHMI interaction concepts are usually developed by different teams and tested separately, often resulting in integration problems. This work contributes a low-code approach to modeling interaction concepts involving ML models to enable end-to-end prototypes for early integration and User eXperience (UX) testing. We use flow-based modeling with Node-RED, the virtual driving simulator CARLA, and YOLOv5 as a state-of-the-art deep learning technique for VRU detection. We show two scenarios (cornering lights and context-aware VRU warning) in an interactive demonstrator with manual live control of a pedestrian and a car. We consider our approach of modeling and evaluating interaction concepts without writing code to be feasible for non-computer scientists.
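The VRU-detection building block of this pipeline can be reproduced with the public YOLOv5 entry point on PyTorch Hub; the sketch below covers single-frame inference only, leaving out the Node-RED flow and the CARLA camera wiring:

```python
import torch

# Load the small pretrained YOLOv5 model from the ultralytics hub repo.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def detect_vrus(frame_rgb):
    """Return pedestrian and cyclist detections for one RGB camera frame."""
    results = model(frame_rgb)         # inference on a numpy RGB image
    boxes = results.pandas().xyxy[0]   # DataFrame: box coords, score, class name
    return boxes[boxes["name"].isin(["person", "bicycle"])]
```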
Current research on vehicle-pedestrian interactions focuses on the reaction of one actor rather than on the interaction of two actors, and rarely considers the impact of the real-time behaviour of both actors on each other. To address this issue, the current study replicated a natural vehicle-pedestrian interaction in a virtual environment by connecting a high-fidelity driving simulator to a CAVE-based pedestrian simulator. Behaviours of both actors in response to each other were observed in different situations, including two crossing locations and five time gaps. The proposed method enabled simultaneous interaction in a controlled and safe environment and provided implications for future AV design.
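Coupling two simulators of this kind ultimately comes down to exchanging actor states in real time. The following deliberately simplified sketch sends the driving simulator's vehicle pose over UDP each frame; the message format, host name, and port are our assumptions, not the study's actual protocol:

```python
import json
import socket

SIM_PEER = ("pedestrian-sim.local", 5005)  # hypothetical CAVE-simulator host
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_vehicle_state(x, y, heading_deg, speed_mps):
    """Broadcast the driving simulator's vehicle state once per frame."""
    msg = {"x": x, "y": y, "heading": heading_deg, "v": speed_mps}
    sock.sendto(json.dumps(msg).encode("utf-8"), SIM_PEER)
```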
New research topics are appearing in the emerging field of urban air mobility (UAM). Successful introduction requires a high level of acceptance in society; this can be achieved by means of user-centered design and by involving citizens in the early stages of development. For immersive user studies, e.g., through user enactment, a physical prototype is helpful, especially since there are currently no drones that can fly on public routes and carry passengers. This project, therefore, developed a physical, true-to-scale prototype of a passenger drone and enhanced it with in-cabin services and a mobile booking app. Scale, interior design, and service concepts were derived from related work and our own research. The prototype “FeliFly” can now visualize several phases of a customer journey and help to investigate acceptance factors as well as expectations and potential problems along a passenger drone flight, in order to derive suggestions for a later real-world implementation.
Smart infrastructures provide the opportunity to study the interaction between traffic participants in various situations, such as dense or light traffic, daytime or nighttime, and sunny or rainy weather. Such a smart infrastructure, a High-Definition Testfield (HDT), has been set up in Ingolstadt; it allows studying the interaction between traffic participants and user interfaces at intersections. The HDT comprises three highly congested intersections with local sensor technology, i.e., LiDAR scanners and infrared cameras. A part of the HDT has a physical twin on the CARISSMA outdoor facility, which allows collecting data also in critical traffic scenarios that are rarely observed in real traffic. This work presents the setting for collecting data with a focus on critical scenarios and safety use cases. User interfaces for future mobility solutions have to take into account the information from smart infrastructures as well as the use cases that can be implemented in such an environment.
In the future, automated vehicles can allow much longer journeys to be undertaken without a driver. This could involve travel across countries with different road systems, traffic rules, and regulations. Disparate cultures also have different expectations of how vehicles should behave. These need to be taken into account when designing HMIs to convey vehicle behaviour and intent. In this work, we present four scenarios where culturally-sensitive HMIs could benefit both drivers and other road users in a cross-cultural setting. This video will hopefully provoke further discussion and research interest in cross-cultural human-vehicle interaction.
In a fully automated vehicle (SAE L5), the vehicle performs lateral and longitudinal vehicle control entirely on its own. The former driver no longer has to intervene in the driving process, monitor it, or respond to take-over requests. At this automation level, vehicle occupants can focus exclusively on non-driving-related tasks. However, passengers might still want to influence driving behavior by choosing a different route or altering travel speed. Therefore, collaborative driving could be a viable option. However, it needs to be clarified how best to realize such interactions with the vehicle, as previous input modalities such as the steering wheel and pedals could be omitted in the future. In a prototyping workshop, we explored different interactions for collaborative driving with automated vehicles. We propose two interaction concepts in the accompanying video: 1) a mobile button with a turn-and-slide function and 2) shifting weight in the seat to convey the desired intentions.
As the introduction of automated vehicles (AVs) into road traffic accelerates, establishing user acceptance is increasingly crucial. User comfort, largely influenced by the AVs’ driving styles, is one of the essential factors influencing acceptance. This video submission provides a methodological overview of a qualitative interview study, which used a Wizard-of-Oz method to investigate participants’ comfort levels during automated driving on real roads. By understanding the specific comfort experiences of both older and younger users, we can inform the design process for AVs, thereby enhancing user experience and facilitating broader acceptance of technology across a more diverse and inclusive demographic spectrum.
Despite increased use of lights, gates, and other active warning devices, crashes still happen at Highway-Rail Grade Crossings (HRGCs). To improve safety at HRGCs, we designed an in-vehicle auditory alert (IVAA) and conducted a multi-site driving simulator study to evaluate the effect of the IVAA on driving behavior at HRGCs. The video shows results of a collaboration between Virginia Tech, Michigan Tech, and the Volpe National Transportation Center, which recruited a total of N = 72 younger drivers. Driving simulator testing showed that the IVAA improved driving behavior near HRGCs, in particular gaze behavior: drivers looked both ways at crossings more often when the IVAA was present. We expect to run additional tests to further improve the IVAA. Our study can contribute to research efforts targeting driving safety at HRGCs.
In conditional driving automation (SAE L3), a vehicle may issue a take-over request at any time, and the driver must intervene by resuming control. Normally, the driver would manually resolve a situation that the vehicle could not handle and then switch back to full driving automation mode. In this video, however, we focus only on unsuccessful take-over attempts that resulted in collisions, and on their causes. An expert driver observed and labeled video recordings of 18 unsuccessful take-over attempts out of 216 take-overs performed in a driving simulator user study. The unsuccessful take-overs were then categorized into five groups in which the driver: 1) did not brake; 2) took over too late; 3) did not take over at all; 4) was too immersed in the secondary task; and 5) had problems with the user interface. The resulting video shows an example of each type of unsuccessful take-over event.
While the literature often does not discern between autonomous, automated, or highly automated vehicles, there is an important distinction to be made: truly autonomous vehicles could act on their own behalf, circumventing or ignoring the wishes of their owners. With this video, we want to stimulate discussions on how such autonomous vehicles could impact our lives positively, negatively, and ambiguously. We do not promote such autonomous vehicles but rather explore this possibility.