New Advanced Driver Assistance Systems (ADAS) are now available to support urban driving. To use ADAS adequately, especially in complex situations, drivers must comprehend them. An on-road study was conducted to investigate mental model development while interacting with a state-of-the-art ADAS in both a rural (less complex) and an urban (more complex) context. Forty-six participants experienced two rounds of each context. After each round, drivers rated their mental model, acceptance, and trust. Results indicate that in the rural context, participants learned the system’s functionality in the first round, with no further improvement thereafter. In the urban context, the mental model was generally less accurate but improved in the second round. Trust increased from the first to the second rural round, while acceptance did not change significantly within either context. The results provide a first glimpse into the importance of evaluating different contexts and interaction scenarios for ADAS.
In this study, we focus on the different strategies drivers use to interleave driving and non-driving related tasks (NDRTs) while taking back control from automated driving. We conducted two driving simulator experiments to examine how different cognitive demands of texting, priorities, and takeover time budgets affect drivers’ takeover strategies. We also evaluated how different takeover strategies affect takeover performance. We found that the choice of takeover strategy was influenced by priority and takeover time budget, but not by the cognitive demand of the NDRT. The takeover strategy did not have any effect on takeover quality or NDRT engagement, but it influenced takeover timing.
Drifting, a skillful driving technique involving intentional traction loss and counter-steering, traditionally demands high-speed maneuvers under high-friction conditions, posing significant risks and fear for novices. Our study explores low-µ (low-friction) emulation, simulating icy conditions to facilitate drift training at safer, lower speeds. This approach not only enhances safety and mitigates fear by reducing the speed required for drifting, but also extends the time trainees have to react. A between-group design was employed, comparing drift training outcomes between participants trained exclusively in higher-µ conditions (control group) and those trained initially in lower-µ conditions before transitioning to higher-µ conditions (target group). Performance was assessed through the average distance of continuous sliding, along with subjective measures of motivation and workload. The results showed that the target group achieved greater slide distances in the retention session and reported higher scores on positive intrinsic motivation factors, suggesting enhanced performance and engagement.
Driving simulators often lack fundamental components needed for accurate simulation of energy dynamics. We introduce EcoSimLab, a comprehensive electric vehicle driving simulation framework consisting of (1) a simulation of electric vehicle energy dynamics, (2) an optimization-based approach to structuring eco-driving behaviors, and (3) a synthetic driver module serving as a versatile benchmark model for analyzing human behavior. Guided by fundamentals of energy modeling and considerations of human action regulation, we further present the development of the EcoDrivingTestPark, an exemplary set of energy-relevant scenarios to enable the analysis of individual differences in eco-driving and of intervention effects (e.g., HMIs). To generate a first characterization of driving behavior, we conducted two empirical studies with human (<Formula format="inline"><TexMath><?TeX $N_\text{S1}~=~31$?></TexMath><AltText>Math 1</AltText><File name="automotiveui24-4-inline1" type="svg"/></Formula>, <Formula format="inline"><TexMath><?TeX $N_\text{S2a}~=~41$?></TexMath><AltText>Math 2</AltText><File name="automotiveui24-4-inline2" type="svg"/></Formula>) and synthetic drivers (<Formula format="inline"><TexMath><?TeX $N_\text{S2b}~=~3$?></TexMath><AltText>Math 3</AltText><File name="automotiveui24-4-inline3" type="svg"/></Formula>). Results indicate substantial variation in driver behavior and considerable challenges for human drivers in matching synthetic driver performance. Implications for augmenting human action regulation in eco-driving are discussed.
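The kind of energy-dynamics component this abstract describes can be illustrated with a minimal longitudinal EV model; the function names and all parameter values below are illustrative assumptions for a generic mid-size EV, not EcoSimLab’s actual implementation.

```python
# Illustrative sketch of an EV longitudinal energy model (not EcoSimLab's
# actual implementation; all parameter values are assumed for illustration).

G = 9.81          # gravitational acceleration [m/s^2]
RHO_AIR = 1.225   # air density [kg/m^3]

def power_at_wheels(v, a, mass=1800.0, c_rr=0.01, c_d=0.28, area=2.3):
    """Tractive power [W] needed at speed v [m/s] with acceleration a [m/s^2]."""
    f_roll = c_rr * mass * G                     # rolling resistance
    f_aero = 0.5 * RHO_AIR * c_d * area * v**2   # aerodynamic drag
    f_inertia = mass * a                         # inertial force
    return (f_roll + f_aero + f_inertia) * v

def battery_energy(v_trace, dt=0.1, eta_drive=0.9, eta_regen=0.6):
    """Integrate battery energy [J] over a speed trace, with regenerative braking."""
    energy = 0.0
    for i in range(1, len(v_trace)):
        v = v_trace[i]
        a = (v_trace[i] - v_trace[i - 1]) / dt
        p = power_at_wheels(v, a)
        # Drivetrain losses when propelling; partial recuperation when braking.
        energy += (p / eta_drive if p > 0 else p * eta_regen) * dt
    return energy
```

A model of this shape is enough to compare eco-driving strategies, since smoother speed traces lower both the aerodynamic and the inertial terms of the integral.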
According to the Society of Automotive Engineers (SAE), in Level 2 (L2) systems the system executes the longitudinal and lateral control of the vehicle, while the driver is required to monitor the environment and intervene when necessary. To further improve safety and driver engagement, we compared a fully shared longitudinal control system, which permits speed adjustments via acceleration and braking without deactivation, with a conventional system that disengages upon braking. In a simulator study involving 61 participants, both systems were well received in terms of acceptance and user experience. The fully shared longitudinal control led to more frequent and earlier braking, suggesting anticipatory driving, without compromising perceived safety. Furthermore, it outperformed the conventional system on the hedonic qualities of user experience and elicited a stronger intention to use. Our findings indicate that fully shared longitudinal control can enhance driver engagement, offering a valuable improvement for L2 automated systems.
Amidst the replication crisis, it is increasingly clear that we need to understand contextual factors that drive participant behavior, because those factors influence the applicability of study findings more broadly. For AutoUI, as we conduct interaction studies involving drivers, pedestrians, and other traffic participants, it is useful to characterize the traffic contexts that human participants are familiar with, because their prior experiences with traffic are likely to influence their behaviors within the context of a controlled study.
To address this, we propose a new method, ‘POV (point-of-view) camera-driven urban fingerprinting,’ which can be used to characterize differentiating features of urban environments. We introduce two approaches, Small-Scale Custom Instrumentation and Large-Scale Collection and Aggregation, and show how they can be used to acquire a broad picture of the characteristics of any city. One key benefit of POV camera-based data collection is that it captures the experiential aspects of traffic and road scenes better than methods such as satellite imaging. This work is the first to formalize a specification for this data collection method by describing existing work and outlining standards to serve as a baseline for downstream research. It posits the medium of crowd-sourced POV camera footage as a new and useful tool for transportation/automotive interaction studies and infrastructure analysis. Finally, we provide future researchers with guidelines for characterizing urban traffic in cities for interaction study design.
Augmented Reality (AR) holds great potential for enhancing pedestrians’ urban experiences; however, its use in road traffic poses safety concerns due to potential distractions from interacting with AR interfaces. This paper investigates the effectiveness of AR applications for assisting pedestrians in crossing scenarios, compared with traditional crossing methods, by incorporating a collision warning system that uses an arrow to indicate the direction of a potential danger, and a virtual traffic light showing whether it is safe to cross. Leveraging Vehicle-to-Everything (V2X) communications within the living lab of Aveiro, Portugal, we conducted a user study to evaluate workload, perceived safety, and system usability in a realistic scenario. The findings from our study involving 20 participants reveal significant improvements in pedestrians’ perceived safety and a decrease in perceived workload when using AR for pedestrian crossings, with both the collision warning system and the virtual traffic light demonstrating excellent usability.
The Wizard-of-Oz method has been widely used recently, as it allows mimicking automated vehicles with relatively few resources. In some studies, it is challenging to ensure that the wizard remains fully hidden from participants, despite this being a crucial aspect of such experiments. To determine whether participants’ awareness of the wizard influences the outcomes of these studies, we conducted an experiment investigating participants’ crossing behavior and subjective perception of a remote-controlled automated vehicle. Participants were exposed to two conditions: in one, they solely focused on a simulated vehicle driving autonomously; in the other, they observed a wizard with a remote control and were instructed to imagine the car was automated. Based on scales for user experience, acceptance, and trust, as well as crossing behavior, results were similar across both conditions. However, participants’ knowledge of the wizard necessitates careful interpretation when system errors are simulated. We conclude with recommendations for future Wizard-of-Oz experiments.
In the emerging automated vehicle (AV)-passenger interaction domain, there is no agreed-upon set of methods for designing early concepts. Non-designers may find it challenging to brainstorm interfaces for unfamiliar technology like AVs. Therefore, we explore using an immersive virtual environment to enable expert and non-expert designers to actively participate in the design phases. We built AV-Sketch, an in-situ (on-site) simulator that allows the creation of automotive interfaces while immersed in VR depicting diverse AV-passenger interactions. First, we conducted a participatory design study (N=15) utilizing PICTIVE (Plastic Interface for Collaborative Technology) to conceptualize human-machine interfaces for AV passengers. The findings led to the design of AV-Sketch, which we tested in a design session (N=10) assessing users’ design experiences. Overall, participants felt more engaged and confident with the in-situ experience, which enabled better contextualization of design ideas in real-world scenarios, with improved spatial considerations and dynamic aspects of in-vehicle interfaces.
We review the state of open science and the perspectives on open data sharing within the automotive user research community. Openness and transparency are critical not only for judging the quality of empirical research, but also for accelerating scientific progress and promoting an inclusive scientific community. However, there is little documentation of these aspects within the automotive user research community. To address this, we report two studies that identify (1) community perspectives on motivators and barriers to data sharing, and (2) how openness and transparency have changed in papers published at AutomotiveUI over the past 5 years. We show that while open science is valued by the community and openness and transparency have improved, overall compliance is low. The most common barriers are legal constraints and confidentiality concerns. Although research published at AutomotiveUI relies more on quantitative methods than research published at CHI, openness and transparency are not as well established. Based on our findings, we provide suggestions for improving openness and transparency, arguing that the motivators for open science must outweigh the barriers. All supporting materials are freely available at: https://osf.io/zdpek/
For people who are blind and low vision (BLV), ridesharing provides an important means of independence and mobility. However, a common challenge relates to finding the vehicle when it arrives at an unanticipated location. Although coordinating with the driver for assistance is serviceable in the near term, new solutions are necessary when a human is no longer available in future automated vehicles. Therefore, this paper presents and evaluates a multisensory smartphone-based map system designed to enable nonvisual tracking of summoned vehicles. Results from a user study with BLV users (N=12) suggest that vibro-audio maps (VAMs) promote superior spatial confidence and reasoning compared to current nonvisual audio interfaces in ridesharing apps, while also being desirable and easy to use. A subsequent expert evaluation based on improvements suggested during the user study indicates the practical utility of VAMs for addressing both current and future wayfinding challenges for BLV travelers.
Gesture-based interactions for automotive infotainment systems offer advantages over touchscreens, such as reducing demands on the visual field. While the focus of these advantages is on improving the driving task, it is also important that users feel in control and perceive influence over the in-vehicle system. This is known in psychology as the user’s sense of agency, and sensory feedback is a key aspect of it. The current study involved a dual-task setup combining (simulator) driving with gesture-controlled infotainment interaction, accompanied by mid-air haptic or audio feedback. With 30 participants, we utilized an experimental approach with implicit and explicit measures of agency, as well as trust and usability. Results showed no difference in explicit judgements of agency; however, mid-air haptic feedback improved the implicit feeling of agency. More trust was also reported in the system with mid-air haptics. Our findings provide empirical evidence that mid-air haptics foster user agency and trust in gesture-based automotive UIs.
Interacting with in-vehicle infotainment screens (IVIS) while driving influences how drivers allocate attention to the road, which can affect both task and driving performance. Haptic feedback in IVIS has emerged as a promising avenue to support drivers’ interactions. However, little is known about the impact of diverse feedback modalities on the different types of interactions commonly carried out with IVIS. Two studies were conducted using production vehicles to understand the effects of IVIS haptic feedback modalities (vibration and force touch) compared to other modalities (audio-only and no haptic feedback) for different types of interactions. Findings indicate that vibration and audio-only feedback supported drivers’ attention to the road and increased task performance in most types of interactions. Conversely, force touch did not support the driver and yielded the same results as no haptic feedback. These results can inform the design and enhancement of IVIS to better support drivers.
This paper explores the development of a touch-sensitive interactive 3D central console for vehicles aimed at simplifying access to tasks unrelated to driving, such as adjusting seats and controlling music. We investigated three console designs with front surface angles of 45°, 90°, and 135°. The initial study assessed how users interact with these three prototypes. Subsequently, we examined how easily users could reach across the different shapes. Additionally, we conducted a guessability-style gesture elicitation study with the 135° model, focusing on user interaction with four applications: a radial menu, a 2D menu, car seat adjustments, and map navigation. In summary, this work aims to create a central console that is ergonomic and minimalist and utilizes surface gestures, setting a new standard for future car interiors.
A technology scan of in-vehicle touchscreens was conducted to identify trends in the use of touchscreen displays and controls in North American passenger vehicles, and relevant human factors literature was used to explore the safety implications of these trends. The technology scan focused on touchscreen devices installed as original equipment in new and recent passenger vehicles (i.e., from 2020-2024). Relevant data were extracted from owner’s manuals, manufacturer websites, and online videos. Results indicate a trend towards larger touchscreens in the last five years and a migration of functions (e.g., climate control, audio entertainment, primary vehicle controls) from physical buttons to touchscreens. Based on findings from the human factors literature, these trends may have a negative impact on road safety. For example, moving primary vehicle controls and infotainment tasks to touchscreens may result in increased distraction and impaired driving performance. Insights about consumer recalls related to touchscreens are also reported.
The advancement of automated driving technologies is fundamentally transforming the relationship between humans and vehicles, shifting from direct control to a more collaborative dynamic. Consequently, the design of in-vehicle Human-Machine Interfaces (iHMIs) is becoming increasingly intricate, focusing on aspects beyond mechanics and ergonomics towards enriched interaction and enhanced user experience. This shift has prompted research efforts to explore and advance iHMI concepts. Despite the iterative nature of design and its role in knowledge creation, our high-level understanding of the design processes utilised in iHMI development remains limited. To provide a comprehensive overview, this paper presents a scoping review of 324 papers (2013–2023) focused on the design underpinnings of iHMI development. Our review presents a categorisation of study goals and a detailed classification of five key stages within the design process. Based on these analyses, we discuss the influence of design and identify potential avenues for future research on iHMIs.
Automotive manufacturers are continually looking for a competitive edge, which may include implementing novel user interface solutions. This research aimed to understand how novel location and control type impact the discoverability of vehicle functions. Participants were asked to complete visual search tasks in two virtual vehicle interiors (wearing a virtual reality headset) in which control types and their locations either met the de facto industry standard or did not. The tasks were repeated four times for each interior. Results showed that the mean search time and number of errors were higher when the control type or location did not align with the de facto standard. Furthermore, a novel location was found to have a greater impact on search time, whereas the number of errors was greater when searching for a novel control type. The findings suggest that moving away from de facto standards should be carefully considered by automotive manufacturers.
While the introduction of automated driving is intended to make driving safer and more comfortable, it may also make the interaction between user and vehicle more complex and prone to confusion. It is therefore important to be able to assess to what extent ADSs provide their users with clear and unambiguous information concerning the system state and the actions allowed or expected from the user. In this project, we developed a questionnaire to measure whether the information provided by an ADS to its user is experienced as clear and unambiguous. The questionnaire consists of nine items, which were shown to discriminate between good and bad HMI designs in terms of user perception and comprehension. The development and evaluation of this questionnaire proceeded in several steps. First, a set of 15 questionnaire items was developed that could be presented as Likert items on a 7-point scale. Next, an online survey was conducted to test the reliability and internal consistency of the questionnaire and to explore underlying factors. Data analysis showed that responses on some of the items were highly correlated. A subset of nine items was found to discriminate well between HMIs with different levels of clarity and (un)ambiguity. Exploratory factor analysis revealed two underlying factors, which could be interpreted as relating to perception and comprehension of HMI information. In the final step, the nine-item questionnaire was tested in an experimental setup. Twenty-three participants were presented with movie clips of transitions from or to automated driving, including a view of the traffic situation (through the windshield) and of the instrument cluster and steering wheel. Again, HMIs were designed to be either clear and unambiguous or unclear and ambiguous. The results showed that the questionnaire discriminated reliably between both versions of the HMI. As in the online survey, high reliability and internal consistency were observed.
Additionally, qualitative measurement of user experience by means of Product Reaction Cards also showed clear differences between the two different HMI versions.
AV-cyclist interaction research has focused on individual interfaces, e.g., eHMIs or AR glasses. However, cyclists using multiple independent interfaces could receive redundant or contradictory messages, causing unsafe encounters. We introduce the Holistic AV-Cyclist Interface: multiple displays facilitating communication together to ensure safe road sharing. We used participatory design with cyclists and HCI researchers (N = 12) to develop holistic interfaces around an actual vehicle equipped with an eHMI. Participants created holistic interfaces communicating blind-spot warnings and AV intent and accommodating two-way communication, e.g., through on-bike directional indicators. They mostly placed interfaces on the bicycle handlebars and cycling accessories, e.g., the helmet. Holistic interfaces could overcome design challenges: they can be multimodal, facilitate communication between AVs and multiple cyclists, and work in varying weather/visibility conditions. We categorised participant designs into a taxonomy and developed a novel holistic interface based on our findings. Our results are important for the safe integration of AVs into traffic.
In this review, we analyze the current state of the art of computational models for in-vehicle User Interface (UI) design. Driver distraction, often caused by drivers performing Non-Driving-Related Tasks (NDRTs), is a major contributor to vehicle crashes. Accordingly, in-vehicle UIs must be evaluated for their distraction potential. Computational models are a promising solution for automating this evaluation, but they are not yet widely used, limiting their real-world impact. We systematically review the existing literature on computational models for NDRTs to analyze why current approaches have not yet found their way into practice. We found that while many models are intended for UI evaluation, they focus on small and isolated phenomena that are disconnected from the needs of automotive UI designers. In addition, very few approaches make predictions detailed enough to inform current design processes. Our analysis of the state of the art, the identified research gaps, and the formulated research potentials can guide researchers and practitioners toward computational models that improve the automotive UI design process.
Situation Awareness (SA) is one of the core concepts describing drivers’ interaction with vehicles, and the lack of SA has contributed to multiple incidents with automated systems. Despite existing definitions and measurements, little is known about what constitutes the concept of situations from users’ perspective, i.e., do they have a similar or different understanding of situation dynamics? Therefore, we conducted a video-based experiment where participants had to mark the onset of new situations from their perspective, provide a continuous criticality rating, and justify their decisions in a post-test interview. Our results indicate that the understanding of situations, their complexity, and their duration is quite diverse between people and independent of properties such as age, gender, or driving experience, while partly being influenced by the road type. Additionally, we found correlations between subjective situation durations, criticality ratings, and algorithm output, which can be exploited by future applications and experiments.
Research has characterized mind-wandering as humans’ natural mental state, with moments of task-focused attention being the exception. With this framing, mind-wandering while driving likely occurs more than generally acknowledged, and seems poised to increase with higher levels of automation. This in turn may have adverse effects on drivers’ abilities to regain situation awareness or resume control when needed. Of the prior work on detecting mind-wandering while driving, none focuses on automation or complex urban environments. We ran an exploratory study (N = 14) of an automated drive through New York City in a two-dimensional virtual reality context, focusing on physiological measures such as gaze distribution, pupillometry, and heart rate. We also explored whether drivers’ missing critical events may serve as a potential new measure. Results varied between focused and mind-wandering mental states and between moving and stopped driving contexts. These observations are an initial step toward understanding mind-wandering across diverse driving scenarios.
A method is presented for enhancing the steering maneuvers of human drivers using peripheral visual information generated from the vehicle’s lateral position on the road. Human drivers control vehicles mainly using visual feedback received from the road. However, such feedback may sometimes become degraded for various reasons. On the other hand, recent advances in sensing technologies make it possible to detect vehicle behavior on the road quite reliably. In this paper, we investigate a way to compensate for such impaired feedback using artificial visual stimuli. A series of LED devices was installed in the car’s doors to display the stimuli. We designed three types of illumination patterns (flow, position, and width conditions) that react to the current lateral position of the vehicle. Simulated driving studies were conducted to evaluate the effect of each pattern on the driver’s control of the vehicle. As a result, we found that presenting the artificial visual stimuli, even without explanation, could significantly reduce inappropriate steering maneuvers in the degraded feedback situations.
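A mapping from lateral position to a peripheral LED pattern of the kind described above can be sketched as follows; the pattern logic, strip size, and offset range are illustrative assumptions and not the authors’ actual implementation.

```python
# Illustrative sketch: mapping the vehicle's lateral offset to a strip of
# door-mounted LEDs (akin to a "position"-style condition). All parameters
# are assumed values, not those used in the study.

NUM_LEDS = 16        # LEDs per door strip (assumed)
MAX_OFFSET = 1.0     # lateral offset [m] mapped to the strip edge (assumed)

def position_pattern(lateral_offset):
    """Return per-LED brightness (0.0-1.0) for a position-style condition.

    A single bright spot moves along the strip with the vehicle's lateral
    offset from the lane center (negative = left, positive = right).
    """
    # Clamp the offset and normalize it to an index along the strip.
    x = max(-MAX_OFFSET, min(MAX_OFFSET, lateral_offset))
    center = (x + MAX_OFFSET) / (2 * MAX_OFFSET) * (NUM_LEDS - 1)
    # Brightness falls off linearly with distance from the spot's center,
    # so the spot appears as a soft glow in peripheral vision.
    return [max(0.0, 1.0 - abs(i - center) / 2.0) for i in range(NUM_LEDS)]
```

Flow- and width-style patterns would follow the same structure, varying animation speed or the number of lit LEDs with the offset instead of the spot’s location.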
Conditionally automated driving systems face two main safety challenges: the inability to autonomously handle all situations the vehicle encounters, and the allowed inattention of drivers during these critical moments. Our study focuses on enhancing drivers’ situation awareness at such times by embedding information about system status and the road environment in the visual signals displayed when control is transferred from the automated driving system. Six visual signals, each including different levels of situation awareness information, were compared to examine how they influence drivers’ levels of situation awareness in a simulated environment. The results show that signals incorporating higher levels of situation awareness information about the environment significantly facilitate the recovery of situation awareness after engaging in non-driving related tasks. This research provides insights into how visual cues can be optimized to facilitate quicker recovery of situation awareness for drivers transitioning from non-driving tasks in conditionally automated vehicles.
The design of self-driving vehicles requires an understanding of the social interactions between drivers in resolving ambiguous encounters, such as at un-signalized intersections. In this paper, we make the case for social situation awareness as a model for understanding everyday driving interaction. Using a dual-participant VR driving simulator, we collected data from driving encounter scenarios to understand how participant drivers (N=170) behave with respect to one another. Using a social situation awareness questionnaire we developed, we assessed participants’ social awareness of the other driver’s direction of approach to the intersection, and also logged signaling, speed and speed change, and heading of the vehicle. Drawing upon the statistically significant relationships among the variables in the study data, we propose a Social Situation Awareness model based on approach, speed, change of speed, heading, and explicit signaling from drivers.
E-bikes are recognized for their sustainable transportation benefits. However, the higher speeds associated with e-bikes pose an increased risk of potential accidents and hinder fluid riding in swarms with conventional bicycles. In this paper, we analyze the accuracy of maintaining an unknown speed, assess the associated workload, and investigate the self-reported speeds of e-bike cyclists in order to adapt the electric assistance to dynamic speed limits based on the surrounding traffic conditions. Our results from a pilot study with 15 participants show that the accuracy of maintaining a speed limit through active motor control and the associated workload are influenced by factors such as the level of electrical assistance and the perception of motor disengagement. E-bike cyclists using higher levels of electrical assistance demonstrated more accurate target speed maintenance. On average, participants consistently underestimated adapted speed limits, which were also influenced by the level of electrical support.
Research on cycling safety has recently gained the attention of the HCI community. While there have been multiple proposals for automated driving features on bikes, we are unaware of a project that systematically aims to translate and evaluate driver assistance systems from the automotive domain to the bicycle domain to promote cycling safety in traffic. Thus, we implemented an adaptive cruise control and a lane-keeping/centering system with hardware and software on a motion-based bicycle simulator and investigated their potential in a virtual reality experiment. Based on performance measurements and subjective ratings, results showed significant improvements in technology acceptance, subjective workload, and driving performance for the cruise control. In contrast, the lane-centering and lane-keeping features were rated significantly worse than the baseline without such assistance. The paper concludes with a critical reflection on automated driving features for bicycles.
Micromobility devices, such as e-scooters and delivery robots, hold promise as eco-friendly and cost-effective alternatives for future urban transportation. However, their lack of societal acceptance remains a challenge. Therefore, we must consider ways to promote prosocial behavior in micromobility interactions. We investigate how post-ride feedback can encourage the prosocial behavior of e-scooter riders while interacting with sidewalk users, including pedestrians and delivery robots. Using a web-based platform, we measure the prosocial behavior of e-scooter riders. Results showed that post-ride feedback can successfully promote prosocial behavior, and objective measures indicated better gap behavior, lower speeds at interaction, and longer stopping time around other sidewalk actors. The findings of this study demonstrate the efficacy of post-ride feedback and provide a step toward designing methodologies to improve the prosocial behavior of mobility users.
When employing autonomous driving technology in logistics, small autonomous delivery vehicles (aka delivery robots) encounter challenges different from those of passenger vehicles when interacting with other road users. We conducted an online video survey as a pre-study and found that autonomous delivery vehicles need external human-machine interfaces (eHMIs) to ask for help, due to their small size and functional limitations. Inspired by everyday human communication, we chose an arm as the eHMI, conveying the robot’s requests through limb motion and gesture. We held an in-house workshop to identify the requirements for designing a specific arm with shrink-ability (conspicuous when delivering messages, but not affecting traffic at other times). We prototyped a small delivery robot with a shrinkable arm and filmed experiment videos. We conducted two studies (one video-based and one 360-degree-photo VR-based) with 18 participants. We demonstrated that arms on delivery robots can increase interaction efficiency by drawing more attention and communicating specific information.
This paper analyzes honks directed at autonomous vehicles (AVs) by other drivers. As honks often mark problems, this focus allows us to better understand the challenges that AVs face in real traffic. Performing a sequential video analysis of 63 honk incidents uploaded by Tesla beta testers on YouTube, we identify how problematic situations emerge as honkable Traffic Gestalts. We identify four types of situated problems with AV driving performance marked by other drivers’ honks: they may wait too long, steer inconsistently, stop instead of going, and go too fast. We further show how a honk may be understandable as a warning, a nudge, or a reprimand. Our work suggests that honk design for AVs should focus on relevant contexts, supported by developing bidirectional interfaces and audio analysis methods that consider the interplay of auditory and visual information in traffic.
The proportion of highly automated vehicles in traffic (i.e., the prevalence of AVs) is likely to increase over time. The aim of this study was to investigate whether the prevalence of AVs may influence how pedestrians interact with AVs and with conventional, human-driven vehicles (CVs). A video-based laboratory study was conducted using a two-group mixed design. Participants took the perspective of pedestrians about to cross the road in a situation where AVs (with eHMIs) and CVs were approaching their position. The prevalence of AVs was manipulated between groups (low/high). The participants indicated the moment they decided to cross in front of the vehicles. Our results show that AV prevalence did indeed significantly influence when participants decided to cross. Overall, participants decided to cross earlier in front of the more prevalent vehicle type. Therefore, taking into account the given prevalence of AVs could significantly benefit AVs in predicting pedestrian behavior.
The development of fully autonomous vehicles (AVs) would fundamentally change the nature of in-vehicle user interactions, behaviors, needs, and activities. Passengers freed from driving would expect to undertake diverse non-driving-related tasks to keep themselves occupied. Introducing Conversational Artificial Intelligence (CAI) in Level 5 AVs could improve the in-vehicle user experience (UX). To explore this, we first identify what roles and relationships CAI can assume toward end-users of AVs through end-user interviews and thematic analysis. Second, we examine how end-users qualitatively assess the embodied UX of these CAI roles and relationships through guided brainstorming and post-simulator-interaction experiments employing a Wizard-of-Oz setup and Participant Enactment methods. Results show that Tour Guide, Mentor, and Storyteller were the most preferred CAI roles, and that human-CAI relationships are maintained if the CAI mediates in-vehicle user activities, interactions, sharing of vehicle control, and deep conversations. We discuss the research implications and propose design guidelines.
As vehicle automation advances to SAE Levels 3 to 5, transitioning driving control from human to system, ensuring automated vehicles (AVs) align with user preferences becomes a challenge. Natural interaction emerges as a common goal, offering ways to convey user interests in a user-friendly manner. However, technical, legal, or design constraints may prevent fulfilling these preferences, leading to potential conflicts. Through an online survey (N=50), potential driver-passenger conflicts and their handling strategies were explored. Subsequently, in a Virtual Reality study (N=14), we applied identified strategies (ranging from distracting to motivating and adhering to social norms) to user-AV interactions using a state-of-the-art language model (GPT-4 Turbo) primed with the strategies to simulate realistic dialogues. Additionally, adaptive communication was compared to non-adaptive communication. Our findings reveal a preference for adaptive communication. Yet, despite using advanced modeling, accurately predicting user interactions remained challenging, with users often trying to outsmart the AI.
We investigate the integration of embodied agents within the automotive environment to enhance driver guidance. We explore the concept through two user studies. In a gesture elicitation study using motion capture technology, we first identified a set of whole-body gestures suitable for in-car driver assistance. We then recreated the selected gestures by acting them out and recording them as animations. The selected gestures were evaluated and mapped to different driving-related scenarios in the second study, conducted in a simulated in-car condition. The most promising gestures for an embodied in-car driving assistant were pointing in a certain direction, signaling warnings by crossing the arms either over the chest or above the head, and waving to alert the driver or capture their attention. By charting these potential signaling gestures, we aim to advance the development of more intuitive in-car guidance systems, ultimately promoting road safety.
Automated Vehicles (AVs) must deal with uncivil human behaviors and may even encourage new ones. To provide an in-depth understanding of such behaviors, we studied interactions between AVs (shuttles, personal cars, delivery robots) and other road users on highways, urban roads, and pedestrian areas based on extensive data from 6 real-world tests in France. These interactions were categorized into prototypical cases, which were analyzed using a descriptive model we developed to capture their situational, motivational, and behavioral factors. The results show that uncivil behavioral patterns involve a mix of emotion and trajectory management, focusing on displacement, space sharing, or personal interaction. Uncivil behavior is thus defined by its disrespectful outcomes on traffic fluidity, disputed right of way, and personal offense (e.g., space intrusion, aggression, griefing of AVs). Improved infrastructure, better information for other road users, and enhanced AV behavior could mitigate uncivil behaviors and facilitate AV acceptance.
Shared Automated Vehicles (SAVs) present a promising solution for future urban mobility. However, SAVs will reach the limits of their capabilities in some edge cases. As with personal AVs, passengers in SAVs might be called upon to resolve such situations; thus, methods for interaction between passengers and the automation or a teleoperator need to be explored. This study investigates whether the presence of other passengers leads to more passive behavior in critical situations (bystander effect). The results did not show significant differences in the participants’ behavior depending on whether the ride was experienced alone or with other passengers. However, the qualitative data indicate that the presence of other passengers can trigger psychological processes that promote a bystander effect. The findings emphasize the importance of considering group effects in the context of SAVs. By understanding these dynamics, we aim to help design SAVs that promote safer and more inclusive future transport systems.
In hybrid mobility societies, where automated vehicles (AVs) and humans interact in public spaces, the significance of prosocial behaviors intensifies. These behaviors are crucial for the smooth functioning of an interdependent transportation environment, mitigating challenges from the integration of AVs and human-operated systems, and enhancing user well-being by fostering more efficient, less stressful, and inclusive environments. This study explores the impact of receiving prosocial behaviors on the cognition, riding behavior, and well-being of micromobility users in interdependent traffic situations within a simulated urban environment. Our mixed-design study involved two types of social interaction as between-subject conditions (prosocial and asocial) and three categories of time constraint as within-subject conditions (relaxed, neutral, and pressed). The findings reveal that receiving prosocial and asocial behaviors can affect the state of well-being and trial performance in a mobility environment.
On-demand ridepooling (ODR) can transform public transport by addressing urban challenges. However, to motivate usage, ODR should satisfy users’ psychological needs. The present study investigated to what extent needs predict ODR use and to what extent ODR need satisfaction differs from that of key transportation modes. We conducted a longitudinal study spanning three months with weekly online surveys, focusing on nighttime ODR in an urban area. Longitudinal data were available from N = 29 participants. Results showed that need fulfillment significantly predicted ODR use, especially perceived competence. Analyses of the discrete dimensions showed that autonomy and competence significantly predicted higher ODR usage. Comparing ODR and the public bus, significant differences were found in all need dimensions, with ODR performing consistently better. Comparing ODR and the car, the only significant difference was monetary: ODR was perceived as more cost-effective. In conclusion, psychological needs shape ODR usage and are crucial for designing such services.