Autonomous vehicles (AVs) necessitate a transformation in how pedestrians and vehicles communicate. Existing research largely focuses on unidirectional communication from AVs to pedestrians. However, traffic interactions are inherently reciprocal and social. To lay the foundation for truly bidirectional interactions with AVs, this study maps the diverse scenarios, messages, and gestures used by pedestrians to initiate communication with human drivers in today’s traffic. Following an online ethnography approach, we conducted an analysis of 20 Reddit discussions comprising 1,094 comments. Our findings reveal a rich communication landscape where pedestrians use gestures to signal urgency, negotiate space, convey warnings, and express needs beyond basic crossing intent. We highlight individual, cultural, and geographical variations in gestures. Based on this expanded understanding, we propose design implications for AV interactions, crucial for promoting safer, more intuitive, and trust-building communication between pedestrians and AVs.
This paper explores optimising in-cabin communication and user experience in future vehicles through dynamic, expressive interior lighting. Traditionally, vehicle interior lighting has been static and functional, but recent advancements have shifted towards mood-enhancing lighting. With AI technologies becoming integral to vehicles, sophisticated and intelligent interaction experiences are essential. This study uses the Volkswagen ID series' dynamic lighting feature to examine dynamic mood lighting as a tool for intuitive, non-intrusive communication. We assess the effectiveness and limitations of various lighting sequences through quantitative and qualitative methods. Our findings reveal key insights into user preferences and challenges, providing a framework for designing adaptive interior lighting systems. The results highlight the importance of user-centred design in enhancing driving experiences and offer directions for future research in automotive human-computer interaction.
Most of today’s studies investigating the driver-vehicle interaction of automated driving systems are conducted in simulated environments like driving simulators or virtual reality. While this simulation-based experimental research can produce valuable and valid results, it is at the same time limited by the inherent lack of realism. Important insights into real-world driving experiences and repeated system usage are rarely collected due to the constraints imposed by time and financial resources. In a multi-step research approach, we aim to connect the AutoUI research with real-world users. In the first step, we conducted qualitative interviews with 10 experienced, tech-savvy users of current automated driving systems (Waymo, Cruise, Tesla) and clustered the results into the most important issues from a human factor perspective.
The perceived safety of Autonomous Vehicles (AV) is considered a major challenge towards their public acceptance. Here we explore the role of vehicle design in instilling a sense of safety in prospective passengers. Previous work showed that variations in exterior design affect people's perceptions, although the particular design features driving these shifts in perception were not elaborated on. We aim to identify and validate the relative importance of design features, not limited to the vehicle exterior alone. The work contributes towards the development of a wider Data-Informed-Design (D-I-D) approach to assist designers in making mindful decisions, empowering their creativity, and increasing design efficiency and effectiveness by sizing or shaping the design space. Based on a series of semi-structured interviews with senior designers, we developed an initial taxonomy of design features. Their relative importance was subsequently validated in a forced-choice experiment in which a panel was asked to judge the perceived safety of concept autonomous vehicles featuring particular design features. Preliminary results indicated that not all features had the intended effect, suggesting a “knowledge gap” on the part of the designers and pointing to the potential benefits of a Data-Informed-Design approach.
Passengers in autonomous vehicles need to perceive information about the position and size of external objects. This study investigates how hiding external object information can help reduce passenger stress. We propose four image processing patterns, including semi-transparent, bounding box, shadow, and fully concealed, to reduce the amount of visual information while preserving object position and size information to reduce stress and improve situational understanding. We categorized external stressors according to passengers’ stress levels and the need for an immediate understanding of the stressors using a VR-HMD simulator. This was done through subjective evaluations based on stress reduction and situation awareness questionnaires. The results from 20 subjects showed that the compatibility index was lower in the group with a higher need for immediate situation awareness compared to the group that prioritized confidentiality. This supports the idea that stressors requiring immediate awareness call for both reduced visual information and preserved situation awareness.
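For illustration, a minimal sketch (not the authors' implementation) of how the four occlusion patterns could be rendered over a detected object region with OpenCV, assuming a bounding box (x, y, w, h) is already available; names and constants are placeholders:

```python
import cv2
import numpy as np

def apply_pattern(frame, box, pattern="bounding_box"):
    """Return a copy of `frame` with the object region rendered per `pattern`."""
    x, y, w, h = box
    out = frame.copy()
    roi = out[y:y + h, x:x + w]

    if pattern == "semi_transparent":
        # Blend the object region toward mid-grey to reduce visual detail.
        grey = np.full_like(roi, 128)
        out[y:y + h, x:x + w] = cv2.addWeighted(roi, 0.3, grey, 0.7, 0)
    elif pattern == "bounding_box":
        # Replace the object with a flat rectangle keeping position and size.
        cv2.rectangle(out, (x, y), (x + w, y + h), (90, 90, 90), thickness=-1)
    elif pattern == "shadow":
        # Keep only a dark silhouette of the object region.
        grey = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        _, silhouette = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
        out[y:y + h, x:x + w] = cv2.merge([silhouette // 4] * 3)
    elif pattern == "fully_concealed":
        # Inpaint over the object so it disappears from the scene.
        mask = np.zeros(frame.shape[:2], dtype=np.uint8)
        mask[y:y + h, x:x + w] = 255
        out = cv2.inpaint(frame, mask, 3, cv2.INPAINT_TELEA)
    return out

# Demo on a synthetic frame with a stand-in "stressor" object.
frame = np.full((480, 640, 3), (40, 60, 40), dtype=np.uint8)
cv2.circle(frame, (320, 240), 60, (0, 0, 255), -1)
cv2.imwrite("pattern_demo.png", apply_pattern(frame, (260, 180, 120, 120), "shadow"))
```

Each variant keeps the object's position and size while progressively removing visual detail, mirroring the trade-off between stress reduction and situation awareness that the study evaluates.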
As automated vehicles advance towards higher levels of automation, users are increasingly engaged in non-driving related activities, diverting their attention from the road and the vehicle’s operations. Even in non-critical conditions where automated driving is routine and uninterrupted, users may experience automation surprise (AS) when they fail to anticipate the vehicle’s behaviour, potentially impacting their comfort and acceptance of automated vehicles. To explore AS in non-critical automated driving scenarios from an experiential perspective beyond safety considerations, we conducted a pilot study using a qualitative analysis approach, on-road probing and the think-aloud method. Initial findings indicate an expanded understanding of AS, which can arise in non-critical conditions, accompanied by cognitive and emotional responses. Future research aims to validate and refine the expanded definition through a more comprehensive study and inform the development of effective AS mitigation strategies to improve the general user experience.
This work seeks to ascertain the validity of pupil diameter as a physiological measure of arousal in a driving simulation study in the context of a highly automated system (SAE Level 4). N = 18 participants were repeatedly exposed to four different driving scenarios in which an authoritative control intervention occurred — either the automation completely blocked driver control authority (Type 1 intervention) or took control authority away from the driver (Type 2 intervention). No significant change in pupil diameter was observed following the interventions, and neither scenario criticality (p = 0.874) nor the intervention type (p = 0.932) impacted pupil diameter. Multilevel modelling results show (1) marginal significance linking pupil diameter change to participants’ prior experience with automated systems and age, and (2) significant levels of inter-individual variation. To better understand driver arousal during authoritative control interventions, future research should additionally consider factors that cause inter-individual variation, such as other genetic and psychological information.
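As an illustration of the analysis described, a minimal multilevel-model sketch in Python (statsmodels), using synthetic stand-in data with hypothetical column names rather than the study's actual variables:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: 18 participants x 8 intervention trials (hypothetical columns).
rng = np.random.default_rng(0)
n_participants, n_trials = 18, 8
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_trials),
    "intervention_type": np.tile(["type1", "type2"], n_participants * n_trials // 2),
    "criticality": rng.choice(["low", "high"], n_participants * n_trials),
    "age": np.repeat(rng.integers(20, 60, n_participants), n_trials),
    "prior_experience": np.repeat(rng.integers(0, 2, n_participants), n_trials),
    "pupil_change": rng.normal(0.0, 0.2, n_participants * n_trials),
})

# Random intercept per participant captures inter-individual variation; fixed effects
# test intervention type, scenario criticality, age, and prior experience.
model = smf.mixedlm(
    "pupil_change ~ intervention_type + criticality + age + prior_experience",
    data=df, groups=df["participant"],
)
print(model.fit().summary())
```

The random intercept per participant corresponds to the reported inter-individual variation, while the fixed effects correspond to the predictors named in the abstract.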
Despite significant advancements of recommendation systems across various applications, recommendation systems within the automotive domain face a unique set of challenges and requirements. This paper underscores the need for a specialized approach in developing recommender systems for in-vehicle infotainment (IVI) systems, aimed at enhancing the driving experience through personalized, routine-based feature suggestions under appropriate contexts. Through the evaluation and comparison of various ensemble machine learning models, including Random Forest, LightGBM, XGBoost and CatBoost, our research identifies effective strategies for delivering context-aware recommendations. Moreover, a broad spectrum of IVI and contextual features from real-world driving data have been applied to the proposed algorithms, justifying the efficacy of the models. Consequently, our study demonstrates the potential for enhanced driving experiences through machine learning-based feature recommendations and lays the groundwork for future innovations in personalization in the automotive domain.
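A hedged sketch of the kind of ensemble comparison described, using a synthetic stand-in for the contextual and IVI usage features rather than the paper's real-world driving data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier
from catboost import CatBoostClassifier

# Synthetic stand-in: X = contextual + IVI usage features, y = feature to recommend.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           n_classes=5, random_state=0)

models = {
    "RandomForest": RandomForestClassifier(n_estimators=300, random_state=0),
    "LightGBM": LGBMClassifier(random_state=0),
    "XGBoost": XGBClassifier(random_state=0),
    "CatBoost": CatBoostClassifier(verbose=0, random_state=0),
}

# 5-fold cross-validated accuracy as a simple comparison criterion.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} ± {scores.std():.3f}")
```

In practice the comparison criterion and feature engineering would follow the paper's own evaluation protocol; the sketch only shows the structure of such a model comparison.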
This research proposes leveraging commute time by car for idea incubation, targeting 80% of the US workforce who commute daily. Commuting, a potentially "mindless" task for seasoned drivers, offers an ideal environment for creative thinking that can be enhanced by mild immersive interactions to improve idea generation. The project integrates psychological interventions tailored for driving, drawing on automated driving advancements to ensure safety and effectiveness. Using a context-aware sensing framework, interactions are timed during appropriate driving scenarios to minimize distraction. The system obtains context data from car sensors and employs generative AI to deliver subtle interactions, enhancing user engagement and idea output without compromising safety. Evaluation includes measuring user engagement and post-drive self-reports to quantify the impact on idea generation and driving performance. This innovation integrates creativity support tools into daily commutes, potentially transforming in-car intelligence systems and enhancing productivity and innovation capabilities.
Virtual Reality (VR) in vehicles poses challenges, notably motion sickness. Using matched motion—mirroring actual vehicle movement in VR—can alleviate this. However, such visual cues may be distracting to users. This paper compares acceleration-based cues that only display changes in velocity, with traditional matched motion (speed-based) on straight roads. The research assesses their effectiveness in reducing motion sickness while potentially offering less distraction. Results show that displaying acceleration only cues in VR can mitigate motion sickness comparably to matched motion while causing fewer distractions. Our design is fully implemented in VR without additional hardware. This can facilitate a broader deployment. Additionally, it emphasizes the delicate balance between reducing motion sickness and preserving user experience.
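To make the distinction concrete, a small illustrative sketch (an assumption, not the paper's implementation) of deriving acceleration-only cues from the vehicle's speed signal, alongside speed-based matched motion:

```python
import numpy as np

def acceleration_cue(speeds_mps, dt=0.02, deadband=0.3):
    """Per-sample cue intensity in [-1, 1]; zero while speed is constant."""
    accel = np.gradient(np.asarray(speeds_mps, dtype=float), dt)   # m/s^2
    accel = np.where(np.abs(accel) < deadband, 0.0, accel)         # ignore sensor noise
    return np.clip(accel / 3.0, -1.0, 1.0)                         # ~3 m/s^2 maps to a full cue

def matched_motion_offset(speeds_mps, dt=0.02):
    """Speed-based matched motion: integrate speed to move the virtual camera."""
    return np.cumsum(np.asarray(speeds_mps, dtype=float)) * dt

speeds = [10.0, 10.0, 10.5, 11.0, 11.0]    # m/s samples on a straight road
print(acceleration_cue(speeds))             # non-zero only where speed changes
print(matched_motion_offset(speeds))        # continuously increasing displacement
```

The deadband and scaling constants are placeholders; the key point is that the acceleration cue disappears on steady straight-road segments, whereas matched motion moves the virtual scene continuously.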
Recently, the acceptance of autonomous vehicles (AVs) has been actively discussed. However, the impact of an automated driving society varies significantly depending on the stakeholders’ places of residence and work. This paper reports on an educational pilot study proposed at Kogakuin University as part of a Faculty of Engineering course focusing on AVs and public transportation. The aim is to explore reframing AV development to include urban and suburban elements, and to consider approaches for incorporating DEI (Diversity, Equity, and Inclusion) in engineering education. This pilot study involved a survey of university students that revealed their attitudes toward AVs and the pain points they encountered during their commutes. Our findings provide foundational insights into the precise needs of an automated driving society. These results indicate that understanding the diverse needs of urban and suburban populations is crucial for the development of effective AV systems and educational programs.
Driver Monitoring Systems (DMS) are designed to assist drivers by issuing warnings when distraction is detected. However, evidence from other warning systems suggests that warnings may also increase the workload of the driver. This study therefore examined the impact of DMS warnings on drivers’ workload, measured by the NASA TLX. Participants performed typical in-vehicle infotainment tasks during driving. For half of the tasks, the DMS was active and triggered warnings, whereas for the other half, it was inactive and did not trigger warnings. The results indicate that warnings, issued by the active DMS, resulted in a significantly higher self-reported workload, particularly in terms of higher perceived temporal demand and lower perceived performance. Furthermore, there was a significant positive correlation between the self-reported workload and the number of warnings. These findings underscore the necessity of carefully designing DMS to balance the benefits of distraction mitigation and the potential impacts on workload.
This study examined driver failure and traffic conflict using a driving simulator during lane-change scenarios on a 2+1 road, focusing on differences between day and night conditions. Data on driver failure status and driving behavior were captured through a survey. The study also evaluated driver performance and electrodermal activity. The findings revealed that nighttime lane changes had a shorter minimum time-to-collision (MTTC), indicating a higher traffic conflict severity compared to daytime (p = 0.046). It was also found that increased driver stress was correlated with decreased MTTC (p = 0.039). Drivers who were prone to making mistakes were closer to collisions (p = 0.005), whereas those prone to violations avoided collisions better (p = 0.027). Despite facing traffic conflicts, the drivers reported no perception, decision, or planning errors. Moreover, “procedure error” was the most common cause of failure. The study suggests that advanced human-machine interface systems are required to aid perception and decision-making and recommends future research with larger, diverse samples.
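For reference, minimum time-to-collision can be computed from simulator logs roughly as follows; this is a sketch under the assumption of per-frame gap and closing-speed signals, not the study's exact tooling:

```python
import numpy as np

def minimum_ttc(gap_m, closing_speed_mps):
    """Minimum time-to-collision over a maneuver; TTC is undefined while not closing."""
    gap = np.asarray(gap_m, dtype=float)
    closing = np.asarray(closing_speed_mps, dtype=float)
    ttc = np.where(closing > 0, gap / closing, np.inf)
    return float(np.min(ttc))

# Example: a lane change that closes the gap faster yields a smaller MTTC (higher severity).
print(minimum_ttc([30.0, 22.0, 15.0], [2.0, 4.0, 6.0]))   # -> 2.5 s
```

Smaller MTTC values correspond to more severe traffic conflicts, which is the sense in which the nighttime lane changes in the study were more critical.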
This study explored communication designs for driverless automated service vehicles (ASVs) and pedestrians' trust and perceived safety when pedestrians cross narrow roads in residential areas. Forty participants carried out VR experiments that simulated interactions between implicit (vehicle behavior) and explicit communications (text-based external human-machine interfaces (eHMI)) of two ASV types (bus and golf cart) in residential areas. Results indicated that pedestrians trusted the bus-type ASV more when it stopped further away than when it decelerated relatively late. Pedestrians' perception of safety varied depending on the type of ASV, with the golf cart type being perceived as safer than the bus type. Further, pedestrians exhibited greater trust when the ASV did not display any messages. However, text messages indicating the ASV's status, such as "In Automated Driving," improved their perception of safety. This highlights the need for effective communication methods to enhance road user attitudes towards ASVs.
Electric vehicles (EVs) are emerging as the leading alternative to internal combustion engines (ICE) and offer an opportunity to curb emissions, reduce air pollution, and decrease fossil fuel dependence. Key to their successful adoption involves understanding consumer expectations and preferences for EV user interfaces (UIs). This study explores consumer preferences and acceptance of EV UIs, focusing on expectations, perceptions and preferences of aesthetics, design, function, and features. The work presented here is part of a larger project investigating EV adoption in the Australian market and presents a qualitative observation of participants interacting with EVs. Key areas of EV interiors that impact perception and preferences include center console and dashboard, driving controls, button quantity, and screen size. Furthermore, participants expect EVs to have fewer physical buttons, prefer uncluttered UIs, favour large screens, and desire physical buttons for critical functions. These insights guide future automotive UI designs, enhancing EV adoption.
When considering novel, futuristic vehicle designs, it is important to consider the technology's “adoption potential”. Literature suggests that perceived risk is one of the key determinants of trust, which in turn affects adoption potential. Our research investigates how risky the general public perceives different design configurations for futuristic urban mobility vehicles, comparing highly futuristic flying cars to more traditional cars, while factoring in the issue of whether the vehicle is operated manually or autonomously. The initial results indicate that there is a combined effect of the level of autonomy and the driving mode of the vehicle on the way people perceive its riskiness. Further, the “exposure” component of perceived risk could be an important driver of people's judgments for future technologies. More research is required to establish the main drivers of futuristic, multimodal vehicles’ risk perception.
This study investigates the relationship between gender and anthropomorphism in shaping perceptions of autonomous vehicles, focusing on positive, negative, and competence perceptions. The findings reveal notable interaction effects: men with a high propensity to anthropomorphize report enhanced positive perceptions and higher competence ratings, whereas women with similar tendencies display more negative perceptions and reduced competence ratings. These findings highlight the critical importance of considering the combined effects of gender and anthropomorphic tendencies in the design and marketing of AutoUI to improve acceptance and user experience.
The increasing prevalence of in-vehicle voice assistants (VA) requires efficient evaluation methods. Existing scales provide valuable insights into human-technology interaction. However, their individual application does not provide a holistic impression of VAs, while using all these scales would be time-consuming. Therefore, we present the Short Assessment of Voice Assistants Scale (SAVAS) as a screening instrument, serving as a first step in the evaluation process. Based on a literature review and expert evaluations, we generated an initial item pool and subsequently conducted an online study, involving N = 285 participants evaluating an interaction. An exploratory factor analysis revealed five factors: usability, sociability, transparency, anthropomorphism, and overall visual design with good to excellent reliability values. Our methodological contribution provides a first step towards efficiently investigating users’ attitudes towards VAs. The brief but versatile SAVAS has implications for both industry and academia providing a means to economically assess attitudes on five dimensions.
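A minimal sketch of the exploratory factor analysis step using the factor_analyzer package; the item responses here are synthetic stand-ins with hypothetical names, not the SAVAS data:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Synthetic stand-in for the N = 285 item responses (15 hypothetical items, 5 latent factors).
rng = np.random.default_rng(0)
latent = rng.normal(size=(285, 5))
loadings_true = np.kron(np.eye(5), np.ones((1, 3)))        # 3 items per factor
items = pd.DataFrame(
    latent @ loadings_true + rng.normal(scale=0.5, size=(285, 15)),
    columns=[f"item_{i+1}" for i in range(15)],
)

fa = FactorAnalyzer(n_factors=5, rotation="oblimin")        # five factors, oblique rotation
fa.fit(items)

print(pd.DataFrame(fa.loadings_, index=items.columns).round(2))   # item-factor loadings
print(fa.get_factor_variance())                                   # variance explained per factor
```

Inspecting the loading matrix is what supports the interpretation of factors such as usability, sociability, transparency, and anthropomorphism in the actual study.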
Anthropomorphism in product marketing is particularly prevalent in Eastern versus Western marketplaces, potentially driven by a cultural preference. This trend may extend to the automotive domain, where leading Chinese car manufacturers have already adopted anthropomorphic designs for intelligent personal assistants (IPAs). This paper examines cross-cultural differences in the prevalence of anthropomorphic IPAs, aiming to determine if a disproportionality exists between global markets. The study also explores underlying cultural dimensions (i.e., collectivism, power distance, uncertainty avoidance) that may account for these observed differences. The findings reveal a notably higher prevalence of anthropomorphic IPAs in vehicles by Chinese manufacturers compared to global counterparts. Correlational analyses suggest specific cultural patterns may contribute to the differing prevalence across markets. However, further user studies are necessary to ascertain whether Chinese consumers in fact respond more favorably to anthropomorphic IPAs. This exploration offers first insights that can inform the adoption of anthropomorphic in-vehicle IPAs.
User interfaces are crucial for easy travel. To understand user preferences for travel information during automated shuttle rides, we conducted an online survey with 51 participants from 8 countries. The survey focused on the information passengers wish to access and their preferences for using mobile, private, and public screens during boarding and travelling on the bus. It also gathered opinions on the usage of Near-Field Communication (NFC) for shuttle bus confirmation and viewing assistance to help passengers stand precisely where the shuttle will arrive, overcoming navigation and language barriers. Results showed that 72.54% of participants indicated a need for NFC and 82.35% for viewing assistance. There was a strong correlation between preferences for shuttle bus schedules, route information (r=0.55), and next-stop information (r=0.57) on mobile screens, suggesting that passengers who value one type of information are likely to value related kinds too.
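The reported correlations can in principle be reproduced with a simple Pearson analysis over the survey ratings; the sketch below uses hypothetical column names and invented ratings purely for illustration:

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical Likert-style preference ratings (1-5) for information on mobile screens.
ratings = pd.DataFrame({
    "schedule_pref":  [5, 4, 3, 5, 2, 4, 5, 3, 4, 5],
    "route_pref":     [4, 4, 3, 5, 2, 3, 5, 2, 4, 4],
    "next_stop_pref": [5, 3, 3, 4, 2, 4, 5, 3, 3, 5],
})

r_route, p_route = pearsonr(ratings["schedule_pref"], ratings["route_pref"])
r_stop, p_stop = pearsonr(ratings["schedule_pref"], ratings["next_stop_pref"])
print(f"schedule vs route:     r = {r_route:.2f} (p = {p_route:.3f})")
print(f"schedule vs next stop: r = {r_stop:.2f} (p = {p_stop:.3f})")
```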
Autonomous shuttles can address the lack of reliability and service quality of public transport, but people are still skeptical about fully automated driving. This research aims to identify implications for future HMI design by investigating relationships between quality of experience, amount of information and willingness to use autonomous shuttles. An online survey with N = 175 participants indicated that a positive experience increases the propensity to trust in autonomous systems, and preference to use autonomous shuttles over conventional buses. Consequently, HMI should utilize user feedback to create positive experiences, for instance by explaining the behavior of automated driving systems. Additionally, passengers with a higher affinity for technology expressed interest in a greater amount of technical information. To enhance positive experiences here, the HMI and information presentation may be personalized and adapted to each individual user. Encouraging the use of public transportation can help to achieve goals of sustainable mobility.
In conditional automated driving, the car sometimes has to initiate a take-over. The safety of a take-over depends on the driver's situation awareness, which may be compromised due to engagement in non-driving related tasks. Light cues can direct the driver's gaze towards critical areas. This study investigates whether light cues also enhance situation awareness. A driving simulation study was conducted to compare situation awareness between a group receiving light cues and a control group. Situation awareness was assessed using the Situational Awareness Global Assessment Technique (SAGAT). No significant differences were found between the groups (N = 48). Situation awareness varied across different areas of interest, with near-perfect situation awareness for the speedometer and close to guessing probability for the rear-view mirror. Methodological approaches to assessing situation awareness during take-overs are discussed. Future analyses will examine the effect of light cues on driving behavior as an outcome of situation awareness.
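For context, SAGAT scores per area of interest (AOI) reduce to the proportion of correctly answered freeze-probe questions; a small sketch assuming a hypothetical probe-level table:

```python
import pandas as pd

# Hypothetical probe-level SAGAT data: one row per freeze-probe answer.
probes = pd.DataFrame({
    "group":   ["light_cue", "light_cue", "light_cue", "control", "control", "control"],
    "aoi":     ["speedometer", "rear_view_mirror", "road", "speedometer", "rear_view_mirror", "road"],
    "correct": [1, 0, 1, 1, 0, 1],
})

# Situation awareness per group and AOI = proportion of correctly answered probes.
sa = probes.groupby(["group", "aoi"])["correct"].mean().unstack("aoi")
print(sa.round(2))
```

Comparing such per-AOI proportions between the light-cue and control groups is what underlies the reported pattern of near-perfect awareness for the speedometer and near-chance awareness for the rear-view mirror.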
Modern automotive infotainment systems offer a complex and wide array of controls and features through various interaction methods. However, such complexity can distract the driver from the primary task of driving, increasing response time and posing safety risks to both car occupants and other road users. Additionally, an overwhelming user interface (UI) can significantly diminish usability and the overall user experience. A simplified UI enhances user experience, reduces driver distraction, and improves road safety. Adaptive UIs that recommend preferred infotainment items to the user represent a form of intelligent UI, potentially enhancing both user experience and traffic safety. Hence, this paper presents a deep learning foundation model to develop a context-aware recommender system for infotainment systems (CARSI). It can be adopted universally across different user interfaces and car brands, providing a versatile solution for modern infotainment systems. The model demonstrates promising results in identifying driving contexts and providing contextually appropriate UI item recommendations, even for previously unseen users. Furthermore, the model’s performance is evaluated with fine-tuning to assess its ability to make personalized recommendations to new users.
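As a rough illustration only (not the CARSI architecture), a context-aware recommender can be framed as a context encoder that scores every infotainment UI item; the PyTorch sketch below uses made-up dimensions and features:

```python
import torch
import torch.nn as nn

class ContextAwareRecommender(nn.Module):
    """Scores every infotainment UI item given a vector of driving-context features."""
    def __init__(self, n_context_features, n_items, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_context_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.item_head = nn.Linear(hidden, n_items)   # one logit per UI item

    def forward(self, context):
        return self.item_head(self.encoder(context))

model = ContextAwareRecommender(n_context_features=12, n_items=30)
context = torch.randn(1, 12)                  # e.g. speed, time of day, weather, route type
top3 = model(context).topk(3).indices         # the three items to surface in the UI
print(top3)
```

Fine-tuning such a model on a small amount of data from a new user is one way to approach the personalization step the paper evaluates.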
Our research focuses on the smart pole interaction unit (SPIU) as an infrastructure external human-machine interface (HMI) to enhance pedestrian interaction with autonomous vehicles (AVs) in shared spaces. We extensively study SPIU with external human-machine interfaces (eHMI) on AVs as an integrated solution. To discuss interaction barriers and enhance pedestrian safety, we engaged 25 participants aged 18-40 to brainstorm design solutions for Pedestrian-AV interactions, emphasising effectiveness, simplicity, visibility, and clarity. Findings indicate a preference for real-time SPIU interaction over eHMI on AVs in multiple AV scenarios. However, the combined use of SPIU and eHMI on AVs is crucial for building trust in decision-making. Consequently, we propose innovative design solutions for both SPIU and eHMI on AVs, discussing their pros and cons. This study lays the groundwork for future autonomous mobility solutions by developing human-centric eHMI and SPIU prototypes as ieHMI.
The Panoramic Road Scene Generation (PRSG) pipeline is a framework designed to generate realistic and contextually accurate panoramic road scenes from textual descriptions. This approach leverages a combination of a stable diffusion model and super-resolution techniques to produce panoramic images for autonomous vehicle (AV) simulations and VR/AR applications. By generating detailed panoramic road scenes viewable in a VR HMD, an additional avenue for easily creating and simulating vehicular scenarios is possible. The pipeline utilizes a customized dataset with specifically tailored captions to fine-tune the generation process, ensuring both visual fidelity and contextual relevance. The evaluation framework includes both subjective and objective metrics to assess the quality and applicability of the generated images. A pilot study determined the generated road scenes to be relatively realistic and acceptable for viewing. Potential applications include the creation of new synthetic scenes for simulating AV scenarios, or training and validation of autonomous systems.
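A hedged sketch of such a text-to-image plus super-resolution chain using the diffusers library; the model IDs, prompt, and resolutions below are placeholders, and the actual pipeline additionally fine-tunes on a customized captioned dataset:

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionUpscalePipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Base text-to-image model generates a wide, panoramic-aspect road strip.
base = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
prompt = "wide panoramic view of a suburban road at dusk, wet asphalt, photorealistic"
low_res = base(prompt, height=256, width=1024).images[0]

# A 4x super-resolution model sharpens the strip for VR viewing (large strips may
# need to be tiled before upscaling, depending on available memory).
upscaler = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler"
).to(device)
high_res = upscaler(prompt=prompt, image=low_res).images[0]
high_res.save("panorama.png")
```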
Advanced Driver Assistance Systems (ADAS) enhance vehicle safety by providing critical information, warning drivers, and automating control tasks to reduce manual operation. However, the acceptability of ADAS is often limited by the human-machine interface (HMI) used, due to issues such as perceived usefulness, ease of use and trust of the ADAS operations. This study proposes using a robotic human-machine interface (RHMI) to improve the acceptability of ADAS and explores whether a small humanoid robot or a minimally designed robot is more effective as an RHMI in three widely used ADAS operations: Adaptive Cruise Control (ACC), Lane Tracking Assistance (LTA), and Blind Spot Monitoring (BSM). We conducted three experimental conditions in a driving simulator using a within-subject design: a conventional HMI only (C-HMI), C-HMI with a humanoid RHMI (RoBoHoN), and C-HMI with a minimally designed RHMI prototype (RHMI-P). Participants’ subjective assessments and eye gaze data were analyzed. The findings indicate that the acceptability of the BSM operation increased with RoBoHoN due to its familiar and human-like appearance. However, the objective measures revealed that RHMI-P increased gaze alertness and was perceived as more competent and trustworthy. This study highlights the importance of incorporating human-like elements and effectively using non-verbal cues when designing an interface for ADAS to improve the acceptability of ADAS operations and increase their usage for safer roads.
The cooperation between humans and automated vehicles (AVs) has emerged as an important perspective for understanding the authority dynamic of future transportation. Previous research indicates that passengers desire to influence the AV’s behavior in certain traffic situations. However, automation could counter this desire for control by providing more detailed explanations of its behavior and actions. Consequently, this paper investigates the potential relationship between an AV’s level of explainability and passengers’ desire for control to intervene in certain traffic situations while also examining the impact on passengers’ perception concerning UX, trust, and acceptance of AVs. Preliminary results from a virtual reality driving simulator study (N = 50) indicate that the level of explainability of an AV does not significantly influence passengers’ desire to intervene or their perception of control over the vehicle. It can also be observed that the level of explainability significantly impacts the user experience, trust in AVs, and perceived safety.
This work-in-progress examines how the use of different non-driving related task (NDRT) interfaces (mobile phone vs. head-up display and baseline) influences drivers’ visual attention and scanning strategies during conditional automated driving. We present preliminary findings from a driving simulator study (N=46) that utilised Markov Chain analysis of gaze transitions and gaze dispersion metrics. Results show that NDRTs, particularly on mobile phones, compromise drivers’ attention distribution and gaze transitions. While HUDs keep drivers’ gaze closer to the road, they may still hinder hazard perception. Markov Chain analysis reveals valuable insights into drivers’ attention management, informing the design of safer in-vehicle interfaces for automated vehicles. These findings highlight the need for careful consideration when designing in-vehicle interfaces and NDRT interactions for automated vehicles.
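For illustration, gaze transitions can be summarized as a first-order Markov transition matrix over areas of interest (AOIs); the sketch below uses hypothetical AOI labels rather than the study's coding scheme:

```python
import numpy as np
import pandas as pd

def transition_matrix(aoi_sequence):
    """First-order Markov transition probabilities between consecutive AOI fixations."""
    aois = sorted(set(aoi_sequence))
    idx = {a: i for i, a in enumerate(aois)}
    counts = np.zeros((len(aois), len(aois)))
    for cur, nxt in zip(aoi_sequence[:-1], aoi_sequence[1:]):
        counts[idx[cur], idx[nxt]] += 1
    probs = counts / counts.sum(axis=1, keepdims=True)
    return pd.DataFrame(probs, index=aois, columns=aois)

# Example fixation sequence from one (made-up) participant.
print(transition_matrix(["road", "phone", "phone", "road", "mirror", "road"]))
```

Comparing such matrices between the mobile-phone, head-up display, and baseline conditions is what reveals how an NDRT interface alters drivers' scanning strategies.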
As the popularity of Battery Electric Vehicles (BEV) increases, creating awareness regarding vehicle maintenance becomes crucial. A user-centred approach is necessary when implementing new features as previous research reveals that users have difficulty understanding EV-specific information in the interface. With new legislation requiring BEVs to display the percentage of their battery State of Health (SoH), this study investigates what information should be displayed using a user-centred iterative process by testing it on 15 participants in total. The objective is to improve users’ understanding of battery SoH and enhance in-vehicle experience for BEV drivers. Presenting the battery’s SoH in the form of a percentage alone was found to be insufficient. The research has led to the development of 11 design recommendations for effectively communicating battery SoH information to BEV drivers.
Physiological measurements have been widely used to study interactions between automated vehicles (AVs) and drivers, yet they have received little attention in AV-pedestrian interaction research. This paper reviews nine articles from a systematic query, outlining the current state and proposing future opportunities for using physiological measurements in this area. Our findings reveal a targeted application of these measures, with gaze fixation predominantly used to assess visual attention and electroencephalogram employed to evaluate emotional valence and cognitive workload. The review highlights the motivation (e.g., supporting the comparison of multiple experimental conditions), implementation (e.g., equipment, physiological indicators, and their inferred states), and impact (e.g., facilitating exchanges with self-reported measures) of using these measurements. Moreover, we offer promising directions for future research, laying a foundation for further incorporating physiological measurements in AV-pedestrian studies and advancing objective and quantitative methods in this field.
Cyclists are exposed to numerous potential hazards in traffic, often due to interactions with motorized vehicles. Early notifications about these hazards could help cyclists react more consciously and proactively before a hazard materializes. Future technologies, such as connected traffic systems, could provide real-time hazard information via smart glasses. Similar to head-up displays in vehicles, relevant information is presented in the cyclist’s direct line of sight. A crucial aspect for usability is how visual information is presented to support cyclists, ensuring quick comprehension of notifications without causing additional distraction. Our study involved focus group discussions with two groups: cycling HCI researchers (N = 3) and user experience design master’s students (N = 4). We explored key aspects of delivering information through smart glasses for cyclists. Furthermore, we asked participants to generate visual notification concepts they deemed suitable for providing hazard information. Many of these concepts involve integrating augmented reality overlays with real-world objects, posing a specific challenge in overlay placement due to the HUD moving with the cyclist’s head. Additionally, it is essential to identify which hazards are relevant for notification to not overload cyclists with excessive information by indicating every potential hazard.
In the ongoing project Ride the Future in Linköping, two automated shuttles have been operating since 2019, seven days a week, on the campus and in a residential area, sharing the road with cars, buses, cyclists, and pedestrians. Addressing the conflicts arising from unfamiliar technology, speed variations, and unpredictable behavior, this survey study with 67 respondents delves into the experience of the local community and students with the automated shuttles. Results highlight that while a high level of trust in the shuttles is established due to their low speed and cautiousness, this feature also leads respondents to choose other modes of transportation, such as cycling or walking. To mitigate road conflicts, suggestions include implementing separate lanes, providing more operational and project-related information, and adding auditory signals to the shuttles. These insights can help shape the future of the project as it is extended to new areas, and can inform other researchers working on similar topics.
Despite their potential to support multimodal communication between autonomous vehicles (AVs) and pedestrians, speech interfaces are relatively underexplored. Important voice characteristics such as gender and age have been investigated in robots and in-vehicle systems; however, no study has yet compared their impact on pedestrians’ acceptance of AVs through external speech interfaces. Through a within-subjects virtual reality experiment (n=10), we examine five conditions, including synthesised feminine and masculine voices from younger and older adults, alongside a baseline condition without sound. Results show that speech interfaces complement visual cues, generally enhancing pedestrians’ acceptance. Further, acceptance was influenced by gendered and aged voices, with female voices being favoured for their perceived likability, intelligence, and trustworthiness. This research enhances understanding of AV external speech interfaces and aids in developing socially acceptable robotic speech designs. Our future work includes involving more participants from a broader age spectrum and cultural backgrounds.
Human-machine interfaces (HMIs) are important for the introduction of automated vehicles (AVs). Even though interactions can involve multiple users and modes of transportation, current research and ideation for HMIs are often directed at only one road user group. This reductionist approach goes against the principles of design, which argue for a holistic understanding. To address this gap, we conducted a novel role-switching approach where participants explored a traffic scenario from four roles: pedestrian, cyclist, driver of a manually-driven vehicle and passenger of an AV. After experiencing all roles, participants evaluated each role and generated HMI designs. Results demonstrate that the roles were perceived differently and that switching between these different perspectives contributed to participants’ understanding of the traffic scenario and the generated designs. This paper reports insights on the value of a role-switching approach to promote the future development of a more holistic approach towards HMIs.
Introducing in-car health monitoring systems offers substantial potential to improve driver safety. However, camera-based sensing technologies introduce significant privacy concerns. This study investigates the impact of transparent user interface design on user acceptance of these systems. We conducted an online study with 42 participants using prototypes varying in transparency, choice, and deception levels. The prototypes included three onboarding designs: (1) a traditional Terms and Conditions text, (2) a Business Nudge design that subtly encouraged users to accept default data-sharing options, and (3) a Transparent Walk-Through that provided clear, step-by-step explanations of data use and privacy policies. Our findings indicate that transparent design significantly affects user experience measures, including perceived creepiness, trust in data use, and trustworthiness of content. Transparent onboarding processes enhanced user experience and trust without significantly increasing onboarding time. These findings offer practical guidance for designing user-friendly and privacy-respecting in-car health monitoring systems.
Driver Monitoring Systems (DMS) detect visual distraction and alert drivers to maintain focus on the road. The European New Car Assessment Programme developed an algorithm that triggers warnings upon detecting distraction. Previous research showed that off-road glances during lane changes triggered warnings and that drivers had difficulties understanding why these warnings occurred. Therefore, a simulator study was conducted in which drivers performed an overtaking maneuver in dense traffic. The DMS was either inactive and did not generate any warnings, or it was active and triggered warnings with or without drivers being instructed beforehand. Results revealed that driving-related distraction was recorded in 50% of the overtaking maneuvers. There was a significant difference in warnings between the inactive, active, and instructed DMS conditions. It was observed that side mirror glances prior to overtaking paradoxically prompted warnings. Findings indicated the necessity to refine DMS algorithms to discern safety-relevant behaviors from genuine distractions.
As automated vehicles (AVs) progress in driving safely on the road, there is growing concern about the balance and adaptability of their driving styles. This simulator study examines how the driving styles of AVs and human drivers influence drivers’ decisions, behaviors, and trust during interactions in a mixed-traffic environment. In chicken game scenarios, we find that AVs with conservative driving styles are more frequently exploited by drivers, while aggressive drivers are more inclined to exploit AVs. AVs with aggressive driving styles significantly influence driver behavior in lateral interactions. Notably, the conservative driving style of an AV increases drivers’ trust, while the aggressive driving style does not harm it. This indicates that drivers can have a certain degree of confidence in the behavior of AVs with appropriate aggression. Our findings provide insights into how AV design should incorporate scenario-specific adaptive driving styles to balance driver confidence and AV trustworthiness.
Automated vehicles will soon be common in traffic, necessitating effective communication with pedestrians. eHMIs facilitate these interactions, but scalable solutions are needed to address various challenges. In urban environments, pedestrians often fail to perceive AVs’ intentions, especially when distracted by multitasking, flocking, or using gadgets. This study enhances attention, perception, and safety of inattentive pedestrians through eHMIs using multi-modal targeted communication (conditions P and N). These targeted eHMIs are evaluated against common multi-modal broadcast communication with audio and visual cues (baseline). The study examines situational awareness, intent perception, and perceived safety among pedestrians using mobile phones while crossing streets. Two targeted eHMI variants were tested: one with projection and personalized audio cues (condition P) and another with personalized mobile notifications and audio cues (condition N). A usability study with 28 participants using a mixed-methods approach shows that targeted nudging from eHMIs improves situational awareness, intent perception, and perceived safety among distracted pedestrians compared to broadcast eHMIs. Thus, targeted eHMI systems could improve pedestrian safety and reduce AV traffic time.
To maximize the benefits of driving automated vehicles (AVs), humans can allow the more capable party to drive in certain scenarios, which requires proper trust and takeover intention from humans. We conducted an online survey study with 136 participants to examine how drivers’ self-confidence and confidence in AV, as indicators of perceived driving capability, influence their trust and takeover intention in AVs. Results suggested that confidence in AV had a larger effect than self-confidence. Generally, low confidence in AV predicted lower trust and higher takeover intention, while low self-confidence predicted lower takeover intention. Specifically, low self-confidence predicted higher trust only when confidence in AV was low. Furthermore, we observed similar levels of trust across different self-confidence levels when confidence in AV was high, suggesting a potential ceiling effect of trust on imperfect AVs. Our findings provide a better understanding of human-AV teaming patterns from the perspective of human factors.
In-car interfaces are the primary medium for communication between the occupants and the increasingly agentic vehicle systems. Although many universities teach automotive user experience and design courses, there is no consensus on what topics to cover. Some schools may choose to focus on the interior design of the cabin, including, but not limited to, physical controls and ergonomics, while other schools may just focus on the usability of what is shown to the driver and passengers. Participants in our workshop will discuss various topics for teaching Automotive UX and UI at both undergraduate and graduate levels, participating in interactive activities such as panels, breakout discussions, and syllabus design. Participants will then consolidate their findings into a course outline based on themes (e.g., UI, Human Factors). This workshop is expected to achieve general consensus on an Automotive UX curriculum drawing from diverse stakeholders, including academia, industry, and government.
Micromobility vehicles, such as electric scooters and bicycles, have emerged as promising solutions to urban transportation challenges. However, their integration into existing transport systems poses various challenges related to safety, regulation, and infrastructure. This workshop aims to address these challenges by bringing together researchers, practitioners, and industry experts to discuss innovative approaches to enhance road safety and sustainability in urban environments and cooperative intelligent transportation systems.
Despite the widespread integration of Advanced Driver-Assistance Systems (ADAS) in modern vehicles, a significant gap remains in user understanding and proper use, which can lead to critical safety issues. Training, information, and education for drivers are methods that may address these gaps. Researchers have assessed the preliminary efficacy of ADAS training and education among various driving populations via different training processes, content, and media. However, training drivers to understand and use ADAS is a complex, multifaceted process that is influenced by multiple factors, making it difficult to draw conclusions from the current state of the literature. Developing a framework examining the various factors and their interactions would benefit the driver safety community and facilitate the development of new ADAS training and education by stakeholders. The main goal of this workshop is to establish a framework and identify tools that can be used to optimize the effectiveness of ADAS training and education. Discussions among participants will inform the framework, which is intended to improve user understanding and usage of ADAS and lead to safe and responsible use.
In this workshop, we aim to explore how the design of technology can encourage sustainable mobility practices and facilitate interactions that promote environmentally friendly, prosocial transportation choices. We intend to identify real-world scenarios where these interactions can be implemented, discuss the challenges and opportunities they present, and develop actionable strategies for their application. We will use speculative design methods such as design fiction and anticipatory ethnography to envision alternative future mobility practices. This holistic approach aims to create a comprehensive understanding of how technology can shape sustainable and inclusive mobility ecosystems, and critique the current practices. By bringing together researchers, practitioners, and stakeholders from various disciplines, we hope to foster a collaborative network that will drive future advancements in sustainable mobility. Our goal is to address the urgent need to reduce ecological footprints and improve social experiences through innovative technological solutions.
There are many conversations among researchers about the limited utility of the SAE Levels of Automation (LoAs) for defining behavior-related research questions. The goal of this workshop is to examine these issues and to present a complementary framework to describe the interactions between people and automation, to account for the differences in capabilities between the human operator and the autonomous system, and to match the user’s expectations to system capabilities. This workshop will identify considerations for research and design that are neglected in the SAE framework. It will help address the interaction between humans and automation in a way more adequate for human-centered design.
Most of today’s studies investigating the driver-vehicle interaction of automated driving systems are conducted in simulated environments like driving simulators or virtual reality. While this simulation-based experimental research can produce valuable and valid results, it is at the same time limited by the inherent lack of realism. Important insights into real-world driving experiences and repeated system usage are rarely collected due to the constraints imposed by time and financial resources. In a two-step research approach, we aim to connect the AutoUI research with real-world users. In the first step, we conducted qualitative interviews with 10 experienced, tech-savvy users of current automated driving systems (Waymo, Cruise, Tesla) and clustered the results into the most important topics from a human factor perspective. On this basis, the workshop now aims to bring these insights into the AutoUI research community to identify the most relevant and urgent issues that should be addressed in the coming years.
Recent advances in automated vehicle technology could lead to a “hybrid society” with close integration of traffic participants with different mobilities. This integration will likely impact their societal acceptance as well. While traditional methods of evaluating safety remain necessary, consideration of the overall well-being of traffic participants is also needed. Transportation engineering and psychology researchers have considered well-being across the overall travel experience. However, these methods do not yet capture well-being during interactions with different mobilities. In the proposed workshop, we plan to define and characterize more precisely what should be considered when thinking about the well-being of traffic participants, how traffic participants will interact with semi-autonomous or automated mobility systems in shared spaces, and the design considerations needed to facilitate cooperative interactions. Through this workshop, the organizers look to integrate the findings into a research agenda, which can lead to future collaborations initiated during this event.
Autonomous vehicle (AV) technologies aim for fully self-driving cars but still require human intervention for edge-case road scenarios. Remote human operation can mitigate AV shortcomings through two major teleoperation paradigms: remote driving (tele-driving) and remote assistance (tele-assistance). Tele-driving involves a remote operator (RO) continuously controlling the vehicle, while tele-assistance allows the RO to make high-level decisions by delegating the execution to automation. Tele-driving is complex, mostly due to latency and the physical disconnect between the RO and the AV. Tele-assistance faces unresolved design, implementation, and regulatory challenges. This workshop provides a platform for researchers and practitioners to explore teleoperation challenges and opportunities. Participants will address teleoperation methods and design innovative remote AV operation paradigms through interactive activities, discussions, and poster sessions. The workshop aims to explore teleoperation challenges, create a teleoperation paradigm-scenario mapping, and generate novel concepts and interaction methods for resolving edge-case road scenarios.
The rapid advancement of automated vehicles has aroused the curiosity of researchers in the automotive field. Understanding the emotional aspects of this technology is critical to improving human-vehicle interactions. The topics of the proposed workshop will be expanded from internal to external empathetic interface designs of automated vehicles. The workshop will gather researchers and practitioners to brainstorm and design affective internal and external interfaces for automated vehicles, targeting specific use cases within the social context. During the workshop, participants will use an affective design tool and generative AI to prototype affective interface designs in automated vehicles. With this creative approach, we aim to expand the knowledge of affective eHMIs in addition to in-vehicle designs and understand social factors that contribute to the user perceptions of automated vehicles.
As the development of Generative AI technology continues to progress, the opportunity for innovation with AI in the form of user interfaces, products and services within vehicles is expanding. Furthermore, automobiles are undergoing major transformations in design due to changes in the underlying technology, resulting in evolved user needs, behaviors, activities and aspirations. This workshop is aimed at providing participants with hands-on experience of designing novel Generative AI interfaces for vehicles. While working on the design challenge as the connecting thread, we will introduce and weave together modules of knowledge domains focusing on Human-centered design, Ethical and Responsible behavior, and Autonomy in vehicles. Participants will learn about and engage collaboratively in employing design methods such as Co-creation using Activity Canvases, Enactment, Wizard of Oz, Bodystorming and inter-group discussion. As the outcome, we aim to publish participants' design concepts as a booklet and a research paper, and to seek new research collaborations.
In the future, roads will host a complex mix of automated and manually operated vehicles, along with vulnerable road users. However, most automotive user interfaces and human factors research focus on single-agent studies, where one human interacts with one vehicle. Only a few studies incorporate multi-agent setups. This workshop aims to (1) examine the current state of multi-agent research in the automotive domain, (2) serve as a platform for discussion toward more realistic multi-agent setups, and (3) discuss methods and practices to conduct such multi-agent research. The goal is to synthesize the insights from the AutoUI community, creating the foundation for advancing multi-agent traffic interaction research.
This workshop is designed to explore the potential synergies between established research on Social and Assistive Robots within HRI and the emerging field of Autonomous Vehicle and Other Road Users (AV-ORU) interactions. It examines the bidirectional applicability of principles, methodologies, and insights and seeks to initiate a detailed discussion on the distinctions and parallels between HRI and AV-ORU interactions to promote more meaningful, context-aware exchange and prevent the oversimplification of transferability. The goal is to encourage the integration of methodologies, broaden the ethical considerations in play, improve the focus on user-centric research in interactions, and ultimately address shared challenges with innovative solutions towards sophisticated, ethically sound, and socially integrated autonomous systems.
We present a demonstration system that combines off-the-shelf capacitive screens with easy-to-produce stencils to facilitate the rapid iteration of tactile in-cabin user interfaces. Designers can use this tool to experiment with the position and layout of interactive components; they can 3D print or manually cut stencils out of common material and mount them to the display surface with low-tack glue to create a tactile experience that simulates tactile controls.
Standalone eXtended Reality (XR) headsets can be used across mobile use cases, including on different forms of transport. These scenarios provide unique opportunities for novel applications, as the vehicle motion can be used to create and enhance movement- and location-based experiences. The daily commute can be turned into a spaceship shooting game, where the journey of the player’s ship follows the motion of the car. But we can also use perceptual manipulation - altering the virtual motion - to give the sensation of moving in very different ways to reality, making even a daily commute feel different each time. This demo will let attendees play a VR shooting game in a moving car, where we detect the motion of the vehicle and apply translational gain (amplified forward motion) and rotational gain (amplified turning) to make it feel like a 460 km/h spaceship journey with varied turns.
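Conceptually, the gains amount to scaling the sensed per-frame car motion before applying it to the virtual spaceship; the sketch below uses illustrative gain values, not the demo's actual tuning:

```python
def virtual_motion(delta_forward_m, delta_yaw_deg,
                   translational_gain=4.0, rotational_gain=3.0):
    """Scale real per-frame car motion (e.g. so ~115 km/h of real speed reads as ~460 km/h)."""
    virtual_forward = delta_forward_m * translational_gain
    virtual_yaw = delta_yaw_deg * rotational_gain
    return virtual_forward, virtual_yaw

# Example: 0.3 m and 0.5 deg of real motion in one frame become 1.2 m and 1.5 deg in VR.
print(virtual_motion(0.3, 0.5))
```

Because the virtual motion stays phase-locked to the real vehicle motion, the visual and vestibular cues remain congruent even though the magnitudes are exaggerated, which is what allows the amplified experience without aggravating motion sickness.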
Driving simulators are vital for human-centered automotive research, offering safe, replicable environments for studying human interaction with transportation technology interfaces and behaviors. However, traditional driving simulators are not well-suited to studying traffic interactions with various degrees of freedom in a way that allows for the capture of nuances in implicit and explicit interactions, e.g. gestures, body language, and movement. We developed a multi-participant virtual reality (VR) driving simulation platform to study these interactions. This portable system supports cross-cultural experiments by modeling diverse scenarios, generating analyzable data, and capturing human behaviors in traffic. Our interactive demo allows participants to experience roles as drivers or pedestrians in a shared virtual environment, with the goal of providing a hands-on experience with this open-source VR simulator and demonstrating its affordability and scalability for traffic interaction studies to researchers and practitioners.
This study investigates a novel methodology for enhancing passenger comfort in highly automated vehicles (HAVs) by deriving driving patterns from professional chauffeurs. We collected detailed driving and interview data from chauffeurs, who are known for their smooth and comfortable driving styles. A Two-Step deceleration profile was derived from this data. Two different versions of the Two-Step Profile (V1 and V2), along with a One-Step profile, were integrated into an HAV and tested in various driving scenarios with 36 participants. Participants experienced each profile twice: once while attentive and once while engaged in a non-driving activity. By transferring the chauffeur's driving style to an HAV, this study evaluates the feasibility and comfort of human-like driving patterns in automated systems. The findings provide valuable insights for designing HAVs that prioritize passenger comfort, bridging the gap between human driving expertise and automated technology.
Autonomous vehicles are redefining the future of transportation, shifting the focus from mere transit to passenger-centric experiences. Among various in-vehicle activities, comfortable rest is a significant preference of passengers. Herein, we propose an innovative method employing user experiences to support and enhance passenger rest in future mobility. From our in-depth user interviews, we identified common pain points related to resting in autonomous vehicles and developed personalized climate control and an AI-based interaction system named “Zephy.” Our findings highlight the importance of tailoring vehicle environments to passenger needs, ensuring a restful and comfortable journey. These advancements facilitate not only physical comfort but also the overall enhancement of passenger experiences.
In modern society, communication between neighbors is not as effective as before. Owing to the COVID-19 pandemic, direct communication with others has decreased. During the pandemic, limited service platforms were available, resulting in an increase in Internet usage and online communication among users. As the pandemic recedes, online activities are transitioning offline. However, the lack of secure and safe meeting spaces has limited face-to-face interactions. To address this problem, we propose an autonomous purpose-built vehicle design and related services for local communities. By configuring a space based on a vehicle, scalability and mobility can be ensured. In addition, the space was planned to accommodate diverse people. Subsequently, we explored additional ways to provide a comfortable community space for a variety of people.
Considerable research has been carried out into explainable artificial intelligence (XAI) in automated driving to enhance user trust in these technologies. This work examines how XAI and HCI can improve user trust through visual text design in L3 automated vehicles. We aim to develop user interfaces (UIs) that make AI-driven vehicle decisions more transparent and understandable. Employing a mixed-methods approach (TiA scale, qualitative feedback) in a between-groups design (n=12 participants), we combine empirical analyses with UX design principles. The study indicates that effectively communicating AI processes through visual texts in UIs can bridge the gap between complex algorithms and user understanding, thus fostering trust. Results further indicate that contextual, clear, and expected multimodal interaction enhances user trust and understanding of automated systems. These findings are crucial for developing future automated systems in a user-centered manner.
Most vehicles on the road today range from SAE Level 1 to Level 3 automation, where Level 3 automated vehicles can handle most of the driving tasks on the road. Nevertheless, there are still moments when drivers need to prepare to take over the vehicle, such as when an object suddenly appears in the middle of the road. To respond to these critical events in time, multimodal displays have been introduced in instructional and informational formats. Yet, the effects of multimodal displays during takeovers for people with hearing impairments have yet to be studied. To address this, we investigated how signal type (single-modal vs. multimodal), information type (instructional vs. informational), and hearing impairment (hearing-impaired vs. non-hearing-impaired drivers) impact drivers’ takeover performance. These findings can be used when implementing multimodal displays in automated vehicles for drivers with hearing impairments.
In this paper, we present a concept for a vehicle interior feature of the future that fuses projection mapping, sensors, and computer vision to create a novel user experience. The Dynamic Interior Light prototype, an actuated projection unit with a sensor array, was fabricated and integrated into a production vehicle to demonstrate the various additional functionalities that we created. Utilizing Unity, a variety of use cases were developed and implemented. Some of these use cases are more passive interactions (e.g. displaying a message on a seat or creating an atmospheric scene) while others are more active (e.g. playing a game or Body UI). The Dynamic Interior Light prototype demonstrates that this type of system can be implemented in a vehicle to provide a novel, unique, and useful interaction paradigm. However, additional considerations are necessary to address the various challenges for this concept.
Driving automation systems have provided great benefits to drivers and passengers by reducing fatigue, preventing accidents, and improving user experience. However, the adoption and proper use of such systems largely depend on a driver's understanding of the system's capabilities and limitations. Currently available automated vehicles on the road generally lack explanations that help users understand system malfunctions and incapabilities, which results in user confusion and prevents users from using the system effectively. Additionally, those systems often do not disclose their operational domain, which might lead users to develop unrealistic reliance on driving automation systems. Providing meaningful explanations is considered a solution to address those issues. The presented video first showcases some dilemmas that users might encounter while using driving automation systems, followed by scenarios showing explanation options and potential user reactions. With this video, we anticipate eliciting in-depth discussions on how to effectively construct explanations for future automated vehicles.
The complexity of automotive user interfaces impacts the cognitive load and situation awareness of drivers. The aim of the present study is to empirically investigate the relationship between the complexity of the infotainment system in manual vehicles and driver situation awareness. In this experiment, we compare a simple and a complex infotainment system, manipulating the independent variable of infotainment system complexity at two levels (low and high) within a simulated driving environment. The Situation Awareness Global Assessment Technique (SAGAT) was utilized to measure situation awareness in a sample of n=6 licensed drivers; this work is an exercise conducted as part of a lecture. Our findings indicate a variation in SAGAT scores between the two complexity conditions, though there are no statistically significant differences. These findings indicate the necessity of balancing infotainment system complexity with user-centered design principles to enhance driver safety in manual vehicles. The study advocates for more extensive and diverse investigations in future research.