{"id":2045,"date":"2018-12-05T21:45:11","date_gmt":"2018-12-05T20:45:11","guid":{"rendered":"https:\/\/cms.auto-ui.org\/19\/?page_id=2045"},"modified":"2020-09-25T13:58:03","modified_gmt":"2020-09-25T12:58:03","slug":"proceedings","status":"publish","type":"page","link":"https:\/\/cms.auto-ui.org\/20\/proceedings\/","title":{"rendered":"Proceedings"},"content":{"rendered":"
Proceedings
The full proceedings and the adjunct proceedings of the AutoUI 2020 conference can be downloaded from the ACM Digital Library.
You can find a table of contents and individual papers for the proceedings of the 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications at the following links:
TOC – Main Proceedings
This paper reports a study investigating the effects of tactile input and output from the steering wheel and the centre console on non-driving task performance. While driving, participants were asked to perform list selection tasks using tactile switches and to experience tactile feedback on the non-dominant hand, the dominant hand, or both hands as they browsed the list. Our results show that the average duration for selecting an item is 30% shorter when interacting with the steering wheel. They also show a 20% increase in performance when tactile feedback is provided. Our findings reveal that input location prevails over output location when designing interaction for drivers. However, tactile feedback on the steering wheel is beneficial when provided at the same location as the input or to both hands. The results will help designers understand the trade-offs of using different interaction locations in the car.
Currently, the visual demand incurred by vehicle displays is evaluated using time criteria (such as those provided by NHTSA). This 60-participant driving simulator study investigated to what extent glance time criteria apply to Head-up Display (HUD) imagery, considering 48 locations across the windshield (and 3 in-vehicle display positions). Participants were required to make a long, controlled, continuous glance to a sample of these locations, so that the time at which lateral/longitudinal unsafe driving occurred (e.g. deviating out of lane, unacceptable time to collision) could be assessed. Using the selected measures, the results suggest that drivers are able to maintain driving performance while engaging with HUD imagery in various locations for longer than NHTSA guidelines recommend for in-vehicle displays. Importantly, the data from this study provide initial maps for designers highlighting the visual demand implications of HUD imagery across the windshield.
Understanding user needs and behavior in automated vehicles (AVs) while traveling is essential for future in-vehicle interface and service design. Since AVs are not yet market-ready, current knowledge about AV use and perception is based on observations in other transportation modes, interviews, or surveys about the hypothetical situation. In this paper, we close this gap by presenting real-world insights into attitudes towards highly automated driving and non-driving-related activities (NDRAs). Using a Wizard of Oz AV, we conducted a real-world driving study (N = 12) with six rides per participant across multiple days. We provide insights into the users' perceptions and behavior. We found that (1) users trust a human driver more than a system, (2) safety is the main acceptance factor, and (3) the most popular NDRAs were being idle and using the smartphone.
While automated vehicles are supposed to become places for purposes beyond transportation, motion sickness is still a largely unsolved issue that may be critical for this transformation. Due to its previously shown positive impact on the gastric and central nervous system, we hypothesize that olfaction (in particular the scents of lavender and ginger) may be able to reduce motion sickness symptoms in a non-invasive manner. We investigate the effects of these scents on the driver-passenger in chauffeured drives in a test track study with a reading-span non-driving related task. Evaluations of self-rated (Simulator Sickness Questionnaire, UX Curves) and physiologically measured motion sickness (Electrogastrography, Electrocardiography), together with observations, are presented and discussed. Results indicate that the issued scents were detrimental to the well-being of participants in the comparisons between post-task (baseline, scented) and pre-test measurements, with symptoms in the lavender-scented group being perceived as slightly less harsh than in the ginger-scented group.
Trust is important in determining how drivers interact with automated vehicles. Overtrust has contributed to fatal accidents, and distrust can hinder successful adoption of this technology. However, existing studies on trust are often hard to compare, given the complexity of the construct and the absence of standardized measures. Further, existing trust scales often do not consider its multi-dimensionality. Another challenge is that driving is strongly context- and situation-dependent. We present the Situational Trust Scale for Automated Driving, a short questionnaire to assess different aspects of situational trust, based on the trust model proposed by Hoff and Bashir. We evaluated the scale using an online study in the US and Germany (N=303), where participants watched different videos of an automated vehicle. Results confirm the existence of situational factors as components of trust, and support the scale being a valid measure of situational trust in this automated driving context.
As semi-automated vehicles become able to drive themselves, it is important (1) to explore drivers' affective states, which may influence takeover performance, and (2) to design optimized control transition displays that warn drivers to take control back from the vehicles. The present study investigated the influence of anger on drivers' takeover reaction time and quality, with varying urgency of auditory takeover request displays. Using a driving simulator, 36 participants experienced takeover scenarios in a semi-automated vehicle with a secondary task (game). Higher frequency and more repetitions of the auditory displays led to faster takeover reaction times, but there was no difference between angry and neutral drivers. For takeover quality, angry drivers drove faster, took longer to change lanes, and had lower steering wheel angles than neutral drivers, resulting in riskier driving. Results are discussed with respect to the necessity of affect research and display design guidelines in automated vehicles.
We quantify the time-course of glance behavior and steering wheel control level in driver-initiated, non-critical disengagements of Tesla Autopilot (AP) in naturalistic driving. Although AP is widely used, there are limited objective data on its impact on driver behavior. We offer insights from 19 Tesla vehicle owners on driver behavior when using AP and transitioning to manual driving. Glance behavior and steering wheel control level were coded for 298 highway driving disengagements. The average proportion of off-road glances decreased from 36% when AP was engaged to 24% while driving manually after AP disengagement. Most of the off-road glances before the transition were downward and to the center stack (17%). Lastly, in 33% of the events drivers were not holding the steering wheel prior to AP disengagement. The study begins to enhance society's understanding of real-world AP use and provides a reference for it.
In conditionally automated driving, drivers engaged in non-driving related tasks (NDRTs) have difficulty taking over control of the vehicle when requested. This study aimed to examine the relationships between takeover performance and drivers' cognitive load, takeover request (TOR) lead time, and traffic density. We conducted a driving simulation experiment with 80 participants, where they experienced 8 takeover events. For each takeover event, drivers' subjective ratings of takeover readiness, objective measures of takeover timing and quality, and NDRT performance were collected. Results showed that drivers had lower takeover readiness and worse performance when they were in high cognitive load, short TOR lead time, and heavy oncoming traffic density conditions. Interestingly, if drivers had low cognitive load, they paid more attention to driving environments and responded more quickly to takeover requests in high oncoming traffic conditions. The results have implications for the design of in-vehicle alert systems to help improve takeover performance.
Our work extends contemporary research into visualizations and related applications for automobiles. Focusing on external car bodies as a design space, we introduce External Automotive Displays (EADs), which provide visualizations that can share context- and user-specific information as well as offer opportunities for direct and mediated interaction between users and automobiles. We conducted a design study with interaction designers to explore design opportunities for EADs to provide services to different road users: pedestrians, passengers, and drivers of other vehicles. Based on the design study, we prototyped four EADs in virtual reality (VR) to demonstrate the potential of our approach. This paper contributes our vision for EADs, the design and VR implementation of several EAD prototypes, a preliminary design critique of the prototypes, and a discussion of the possible impact and future usage of external automotive displays.
Autonomous ride-sharing services have the potential to disrupt future transportation ecosystems. It is critical to understand factors that influence user experience in autonomous vehicles (AVs) to design for widespread adoption. We conducted an on-road driving study in a mock AV to examine how the amount of information provided by an in-vehicle digital assistant, and the manner in which information is delivered, can impact one's overall AV experience. Passengers were divided into two cohorts, based on their assigned in-vehicle digital assistant (Lilly vs. Julie). Through a mixed-methods analysis, the data showed that the quantity and quality of information presented via the digital assistant had a significant impact on one's confidence in an AV's driving capability and willingness to ride again. These findings highlight that although the two cohorts were identical with respect to the actual vehicle driven, differences in in-vehicle digital assistant design can alter passengers' perceptions of their overall AV experience.
Electric vehicles' (EVs) nearly silent operation has proved to be dangerous for bicyclists and pedestrians, who often use an internal combustion engine's sound as one of many signals to locate nearby vehicles and predict their behavior. Inspired by regulations currently being implemented that will require EVs and hybrid vehicles (HVs) to play synthetic sound, we used a Wizard-of-Oz AV setup to explore how adding synthetic engine sound to a hybrid autonomous vehicle (AV) influences how pedestrians interact with the AV in a naturalistic field study. Pedestrians reported increased interaction quality and clarity of the vehicle's intent to yield compared to a baseline condition without any added sound. These findings suggest that synthetic engine sound will not only be effective at helping pedestrians hear EVs, but may also help AV developers implicitly signal to pedestrians when the vehicle will yield.
Understanding how personalities relate to driving styles is crucial for improving Advanced Driver Assistance Systems (ADASs) and driver-vehicle interactions. Focusing on the "high-risk" population of young male drivers, the objective of this study is to investigate the association between personality traits and driving styles. An online survey was conducted among 46 males aged 21-30 to gauge their personality traits, self-reported driving style, and driving history. Hierarchical clustering was used to identify driving styles and revealed two subgroups of drivers with either a "risky" or a "compliant" driving style. Compared to the compliant group, the risky cluster sped more frequently, was easily distracted and affected by negative emotion, and often behaved recklessly. The logit model results showed that the risky driving style was associated with lower Agreeableness and Conscientiousness, but higher driving exposure. An interaction effect between age and Extraversion in forming a risky driving style was also detected.
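As an illustrative aside, the abstract's pairing of hierarchical clustering with a logit model can be sketched in a few lines of Python; the data, feature names, and parameters below are invented for illustration and are not the authors' materials.

```python
# Minimal sketch (not the authors' code): cluster self-reported
# driving-style scores, then relate cluster membership to traits.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
style_scores = rng.random((46, 6))  # hypothetical: one row per driver, driving-style items
traits = rng.random((46, 7))        # hypothetical: Big Five traits, exposure, age

# Ward-linkage hierarchical clustering, cut into two style groups.
labels = fcluster(linkage(style_scores, method="ward"), t=2, criterion="maxclust")
risky = (labels == 2).astype(int)   # 1 = "risky" cluster, 0 = "compliant" cluster

# Logit model: which predictors are associated with the risky cluster?
model = LogisticRegression().fit(traits, risky)
print(model.coef_)                  # sign/magnitude of each predictor's association
```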
Automated vehicles will change the trucking industry as human drivers become increasingly absent. For crossing scenarios, external communication concepts are already being evaluated to resolve potential issues. However, automated delivery poses unique communication problems. One specific situation is delivery to the curb with the truck remaining partially on the street, blocking sidewalks. Here, pedestrians have to walk past the vehicle with reduced sight, resulting in safety issues. To address this, we conducted a literature survey, which revealed that external communication of automated vehicles in situations other than crossings has hardly been addressed. Afterwards, a study in Virtual Reality (N=20) revealed the potential of such communication. While the visualization (e.g., arrows or text) of whether it is safe to walk past the truck played only a minor part, the information of being able to safely walk past was highly appreciated. This shows that external communication concepts carry great potential beyond simple crossing scenarios.
Autonomous vehicles (AVs) have the potential to reduce accident and injury rates in urban areas and improve safety for vulnerable road users (VRUs). To realize these benefits, AVs have to communicate with VRUs such as pedestrians. While there are proposed solutions concerning the visualization or modality of external human-machine interfaces, a research gap exists regarding the AVs' communication strategy when interacting with pedestrians. Our work presents a comparative study of an autonomous delivery vehicle with three communication strategies, ranging from polite to dominant, in two scenarios: at a crosswalk and on the street. We investigated these strategies in an online video study with a German (N = 34) and a Chinese sample (N = 56) regarding compliance, acceptance, and trust. We found that a polite strategy led to more compliance in the Chinese but not the German sample. However, the polite strategy positively affected trust and acceptance of the AV in both samples equally.
The introduction of micro-mobility, such as e-scooters, brings new challenges. Nevertheless, these trendy devices are spreading rapidly without a comprehensive study of their interactions with other road users. For example, many countries currently require riders of e-scooters to signal turns by hand. In this work, we investigate whether e-scooter riders can do this without losing control and whether they perceive hand signals as safe enough to use in traffic. We conducted two studies with 10 and 24 participants, respectively. Each participant was able to perform hand signals without apparent problems. We also observed a strong training effect regarding the handling of e-scooters. Nevertheless, our results indicate that, with the currently prevailing e-scooter designs and regulations, a considerable number of inexperienced riders will turn without signaling outside the laboratory due to uncertainties.
We are interested in the role of field user interaction data in the development of In-Vehicle Information Systems (IVIS), the potential practitioners see in analyzing this data, the concerns they share, and how this compares to companies with digital products. We conducted interviews with 14 UX professionals, 8 from automotive and 6 from digital companies, and analyzed the results by emergent thematic coding. Our key findings indicate that implicit feedback through field user interaction data is currently not evident in the automotive UX development process. Most decisions regarding the design of IVIS are made based on personal preferences and the intuitions of stakeholders. However, the interviewees also indicated that user interaction data has the potential to lower the influence of guesswork and assumptions in the UX design process and can help make the UX development lifecycle more evidence-based and user-centered.
With increasing automation, vehicles could soon become mobile work and living spaces, but traditional user interfaces (UIs) are not designed for this domain. We argue that high levels of productivity and user experience will only be achieved in SAE L3 automated vehicles if UIs are modified for non-driving related tasks. As controls might be far away (up to 2 meters), we suggest using gaze-based interaction with windshield displays. In this work, we investigate the effect of different dwell times and feedback designs (circular and linear progress indicators) on user preference, task performance, and error rates. Results from a user study conducted in a virtual reality driving simulator (N = 24) highlight that circular feedback animations around the viewpoint are preferred for gaze input. We conclude this work by pointing out the potential of gaze-based interactions with windshield displays for future SAE L3 vehicles.
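To make the dwell-time mechanic concrete, here is a minimal, hypothetical sketch of gaze-based dwell selection with a progress readout that could drive a circular or linear indicator; the class, threshold, and sample format are assumptions, not the study's implementation.

```python
from typing import Optional

class DwellSelector:
    """Select a gaze target after the gaze has rested on it for a dwell time."""

    def __init__(self, dwell_time: float = 0.8):
        self.dwell_time = dwell_time        # seconds the gaze must rest on a target
        self._target: Optional[str] = None
        self._start = 0.0

    def update(self, t: float, target: Optional[str]) -> Optional[str]:
        """Feed one (timestamp, target) gaze sample; return the id once dwell completes."""
        if target != self._target:          # gaze moved: restart the dwell timer
            self._target, self._start = target, t
            return None
        if target is not None and t - self._start >= self.dwell_time:
            self._target = None             # fire once, then reset
            return target
        return None

    def progress(self, t: float) -> float:
        """Fraction of the dwell completed, e.g. to animate a progress indicator."""
        if self._target is None:
            return 0.0
        return min(1.0, (t - self._start) / self.dwell_time)

selector = DwellSelector(dwell_time=0.8)
for t, target in [(0.0, "radio"), (0.4, "radio"), (0.9, "radio")]:
    if (selected := selector.update(t, target)) is not None:
        print("selected:", selected)        # fires at t = 0.9
```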
After their success in the smart home, voice assistants are becoming increasingly popular in automotive user interfaces. These voice assistants are traditionally designed to provide a human-like dialog with the user. Thus, when processing voice input, dealing with uncertainty is an especially important factor that needs to be considered when designing system responses. While state-of-the-art voice assistants offer responses based on their certainty of what they understood, these response thresholds are largely under-explored. In this work, we close this gap by providing a user-centered approach to investigate which responses are acceptable for voice input users depending on input certainty. Through findings from semi-structured online interviews with 101 participants, we provide insights about designing voice user interface responses based on system certainty. Our findings reveal a sweet spot for executing a task versus requesting additional user input. Further, we provide data-driven guidelines for different in-car voice assistant behaviors.
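The certainty-dependent response behavior the study probes can be pictured as a simple threshold policy; the thresholds and action names below are purely hypothetical stand-ins for the empirically derived "sweet spot".

```python
def choose_response(certainty: float) -> str:
    """Map system certainty about the recognized command to a response style.

    Thresholds are invented placeholders; the paper derives acceptable
    behavior empirically from user interviews.
    """
    if certainty >= 0.85:
        return "execute"                   # act directly, no confirmation
    if certainty >= 0.55:
        return "execute_and_announce"      # act, but state the interpretation
    return "request_clarification"         # ask the user for additional input
```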
Autonomous driving systems are increasingly substituting for human responsibilities in the driving task. However, this does not mean that vehicles should no longer interact with their driver, even in the case of full automation. One reason is that the automation is not yet advanced enough to predict other road users' behavior in complex situations, which can lead to sub-optimal action choices and decreased comfort and user experience. In contrast, a human driver may have a more reliable understanding of other road users' intentions, which could complement that of the automation. We propose a framework that distinguishes between four levels of interaction with automation. Based on the framework, we introduce a concept that allows drivers to provide prediction-level guidance to an automated driving system through gaze-speech interaction. Results of a pilot user study show that people hold a positive attitude towards prediction-level intervention as well as the gaze-based interaction method.
Autonomous vehicles carry the potential to greatly improve mobility and safety in traffic. However, this technology has to be accepted by and of value to the intended users. One challenge on this path is the detection and recognition of pedestrians and their intentions. While there are technological solutions to this problem, there seems to be no research on how to make this information transparent to the user in order to calibrate the user's trust. Our work presents a comparative study of 5 visualization techniques using Augmented Reality or tablet-based visualization technology and two or three information clarity states of pedestrian intention in the context of highly automated driving. We investigated these in a user study in Virtual Reality (N=15). We found that such a visualization was rated reasonable and necessary, and that especially the Augmented Reality-based version with three clarity states was preferred.
External human-machine interfaces (eHMIs) support automated vehicles (AVs) in interacting with vulnerable road users such as pedestrians. While related work has investigated various eHMI concepts, these concepts communicate their message in one go at a single point in time. There are no empirical insights yet on whether distance-dependent multi-step information, which provides additional context as the vehicle approaches a pedestrian, can improve the user experience. We conducted a video-based study (N = 24) with an eHMI concept that offers pedestrians information about the vehicle's intent without providing any further context information, and compared it with two novel eHMI concepts that provide additional information when approaching the pedestrian. Results show that additional distance-based information on eHMIs for yielding vehicles enhances pedestrians' comprehension of the vehicle's intention and increases their willingness to cross. This insight underscores the importance of distance-dependent information in the development of eHMIs to enhance the usability, acceptance, and safety of AVs.
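A distance-dependent, multi-step eHMI can be thought of as a message schedule keyed on the distance to the pedestrian, sketched hypothetically below; the distances and messages are invented, not the concepts tested in the study.

```python
def ehmi_message(distance_m: float) -> str:
    """Pick a message step for a yielding vehicle by distance to the pedestrian."""
    if distance_m > 40:
        return "Pedestrian detected"    # far: context (you have been seen)
    if distance_m > 15:
        return "Slowing down"           # approach: intent
    return "Safe to cross"              # near: explicit yielding cue
```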
Road traffic is a social situation in which participants heavily interact with each other. Consequently, communication plays an important role. Typically, the communication between pedestrians and drivers is nonverbal and consists of a combination of gestures, eye contact, and body movement. However, when vehicles become automated, this will change. Previous work has investigated the design and effectiveness of additional communication cues between pedestrians and automated vehicles. It remains unclear, though, how this impacts perceptions of the quality of communication and impressions of mindfulness and prosociality. In this paper, we report an online experiment in which we evaluated the perception of communication cues in the form of on-road light projections, across different traffic scenarios and roles. Our results indicate that, while the cues can improve communication, their effect depends on the traffic scenario. These results provide preliminary implications for the design of communication cues that consider their prosocial aspects.
Autonomous vehicles are on the verge of entering the mass market. Communication between these vehicles and vulnerable road users could increase safety and ease their introduction by helping people understand the vehicle's intention. Numerous communication modalities and messages have been proposed and evaluated. However, these explorations do not account for the factors described in communication theory. Therefore, we propose a two-part design space consisting of a concept part with 3 dimensions and a situation part with 6 dimensions, based on a literature review of communication theory and a focus group with experts (N=4) on communication. We found that most work to date does not address situation-specific aspects of such communication.
Fully autonomous driving leaves drivers with little opportunity to intervene in driving decisions. Giving drivers more control can enhance their driving experience. We develop two collaborative interface concepts to improve the user experience of drivers in autonomous vehicles. Our aim is to increase the joy of driving and to give drivers competence and autonomy even when driving autonomously. In a driving simulator study (N = 24) we investigate how vehicles and drivers can collaborate to decide on driving actions together. We compare autonomous driving (AD), the option to take back driving control (TBC), and two collaborative driving interface concepts by evaluating usability, user experience, workload, psychological needs, performance criteria, and interview statements. The collaborative interfaces significantly increase autonomy and competence compared to AD. Joy is highly represented in the qualitative data during TBC and collaboration. Collaboration proves beneficial in situations in which quick decisions are called for.
Influencing factors on take-over performance during conditionally automated driving are being intensively researched. Most studies focus on visual and motoric reactions; only limited information is available about what happens on the cognitive level during the transition from automated to manual driving. Thus, the aim of this study is to investigate a measurement method for assessing cognitive take-over performance. In this method, the cognitive component of decision-making is operationalized via concurrent verbalization of action decisions. The results suggest that valid predictions for the time of the decision can be provided. Additionally, it seems that the effects of situational complexity on driver behavior can be extended to cognitive processes. A temporal classification of decision-making within the take-over process is derived that can be applied to the development of cognitively plausible assistance systems.
In conditionally automated driving, drivers do not have to constantly monitor their vehicle, but they must be able to take over control when necessary. In this paper, we assess the impact of instructions about the limitations of automation and the presentation of context-related information through a mobile application on the situation awareness and takeover performance of drivers. We conducted an experiment with 80 participants in a fixed-base driving simulator. Participants drove for an hour in conditional automation while performing secondary tasks on a tablet. They also had to react to five different takeover requests. In addition to the assessment of behavioral data (e.g. quality of takeover), participants rated their situation awareness after each takeover situation. Instructions and context-related information on limitations, combined, showed encouraging results for raising awareness and improving takeover performance.
Recent research indicates that transparent information on the behavior of automated vehicles positively affects trust, but how such feedback should be composed and whether user trust influences the amount of desired feedback is relatively unexplored. Consequently, we conducted an interview study with 56 participants, who were shown different videos of an automated vehicle from the ego-perspective. Subjects rated their trust in the vehicle in these situations and could arbitrarily select objects in the driving environment that should be included in augmented reality feedback systems so that they are able to trust the vehicle and understand its actions. The results show an inverse correlation between situational trust and participants' desire for feedback and further reveal reasons why certain objects should be included in feedback systems. The study also highlights the need for more adaptive in-vehicle interfaces for trust calibration and outlines necessary steps for automatically generating feedback in the future.
With an increasing ability to answer and fulfill user requests, voice-enabled Conversational Agents (CAs) are becoming more and more powerful. However, as the complexity of the requests increases, the time for the CAs to process and fulfill the tasks can become longer. In other cases, where input prediction is available, some requests can be processed and answered even before the user has finished saying the command. However, the effects of these positive and negative delays in system response time are still under-explored. In this paper, we systematically analyze the effects of different response delays on usability and acceptability, considering three common interaction techniques for voice-enabled CAs. Our results reveal that an unnaturally long positive delay in system response time leads users to assume that an error occurred, while a negative delay is perceived by users as rude. Based on our findings, we present design guidelines for voice-enabled CAs.
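One common engineering response to long positive delays, sketched here under assumed names, is to emit a filler utterance once processing exceeds a threshold; this illustrates the delay problem and is not a technique from the paper.

```python
import asyncio

async def slow_process():
    await asyncio.sleep(3)                 # stand-in for a complex request
    return "Here is your answer."

async def respond(process, filler_after: float = 1.5):
    task = asyncio.create_task(process())
    try:
        # If the answer arrives quickly, deliver it directly.
        return await asyncio.wait_for(asyncio.shield(task), timeout=filler_after)
    except asyncio.TimeoutError:
        print("One moment...")             # filler utterance masks the long delay
        return await task                  # then deliver the real answer

print(asyncio.run(respond(slow_process)))
```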
Driverless shuttles pose different and novel challenges for passengers. One of these relates to capacity management: such shuttles are often small (usually 6 to 12 seats) with limited capabilities to (re-)assign seating, control reservations, or arrange travel for groups that exceed a shuttle's capacity. Since there is no bus driver, passengers need to resolve conflicts or uncertainties on their own, unless additional systems provide such support. In this paper, we present the results of a laboratory study in which we investigated passenger needs in relation to booking and reserving spots (seats, standing spots, and stroller spots) in an automated shuttle. We found that such functionalities have a low-to-medium impact on an overall scale but could constitute exclusion criteria for more vulnerable parts of the population, such as older adults, families with small children, or physically impaired individuals.
Emerging connected vehicle (CV) technologies provide timely, advance warnings of roadway hazards to road users who may otherwise be incapable of perceiving them. Between 2012 and 2019, United States government transportation agencies deployed V2X Hub, an open platform supporting a broad set of transportation safety applications. This platform facilitates real-time data sharing between infrastructure, in-vehicle, and mobile devices to communicate hazards using Dedicated Short-Range Communications (DSRC). Armed with expertise gained in developing the technology, along with a renewed focus on the design of the in-vehicle safety system's Human-Machine Interface (HMI), the Rail Crossing Violation Warning (RCVW) research team evaluated practical driver use cases and extended the system's capabilities to meet a broader set of safety goals.
Self-driving vehicles have been described as one of the most significant advances in personal mobility of the past century. By minimizing the role of arguably error-prone human drivers, self-driving vehicles are heralded for improving traffic safety. Primarily driven by the technology's potential impact, there is a rapidly evolving body of literature focused on consumer preferences. Missing, we argue, are studies that explore the needs and design preferences of older adults (60+). This is a significant knowledge gap, given the disproportionate impact that self-driving vehicles may have on personal mobility for older adults who are unable or unwilling to drive. Within this paper, we explore the design and interaction preferences of older adults through a series of enactment-based design sessions. This work contributes insights into the needs of older adults, which may prove critical if equal access to emerging self-driving technologies is to be realized.
TOC – Adjunct Proceedings
Highly and fully automated vehicles are not expected on public roads in the near future, but at lower levels of automation, several applications and business models are being discussed by vehicle manufacturers and fleet operators. Automated valet parking (AVP) is one of them, and it could be implemented almost immediately. Vehicles with an AVP feature are able to drive independently in a parking garage and find and occupy a free parking space. However, a better understanding of the public's opinion of this service is needed. In this paper, we present the findings from an exploratory interview study on the public's opinion of automated valet parking. Results suggest that the main benefits from the user perspective are clearly practical in nature (time saving, efficient use of parking lots), but are mitigated by emotional concerns (a feeling of uncertainty, loss of control). We therefore conclude that these concerns must be addressed to ultimately ensure automated valet parking's success and benefits in society.
In this study, we provide a first look at driver responses when using augmented reality (AR) head-up displays (HUDs) during an unexpected and potentially dangerous event. Twenty participants followed a lead car in a driving simulator while completing either no task or distracting secondary tasks presented on AR HUDs in three different vertical positions or on head-down displays (HDDs). After a series of uneventful drives, the lead car unexpectedly braked while participants completed a distractor task, requiring them to respond quickly to avoid a collision. We qualitatively analyzed participants' glance behavior, crash avoidance, and self-reported experience. We found that participants using HDDs all frequently glanced back toward the roadway and lead vehicle, while those using AR HUDs were inconsistent. Our results suggest that more research must be done to fully understand AR HUDs' impact on drivers during surprise events, but that display location may impact behavior.
With current technology, mobile working has become a real trend. With wireless head-mounted displays, we could soon even be using immersive working environments while commuting. However, it is unclear what such a virtual workplace will look like. In anticipation of autonomous cars, we investigate the use of VR in the rear seat of current cars. Given the limited space, how will interfaces make us productive, but also keep us aware of the essentials of our surroundings? In interviews with 11 commuters, they could generally imagine using VR in cars for working, but were concerned about their physical integrity while in VR. Two types of preferred working environments stood out in the physical dimension, and three information levels for rear-seat VR productivity emerged from our interviews: productivity, notification, and environment. We believe that the interview results and proposed information levels can inspire the UI structure of future ubiquitous productivity applications.
Despite the rapid progress of Autonomous Vehicle (AV) technology, remote human situational assessment continues to be required. However, remote operation introduces several challenges, such as limited perception and difficulty in maintaining Situation Awareness (SA). In this regard, this research provides first-hand SA requirements for remote teleoperation-based interfaces. Complementary to a previous literature review on requirements for Human-Machine Interfaces for unmanned systems, we conducted two user studies (N = 18, N = 10). To ascertain the views of the users, we employed two methodologies, in-depth interviews and traditional statistical analysis, to find out specific preferences. We collected a total of 80 statements that we could cluster into 12 categories, presenting a comprehensive overview of SA user requirements. This research is envisioned to be used by others as a tool to help the development of AV teleoperation-based interfaces.
In-vehicle gesture interfaces show potential to reduce visual demand and improve task performance when supported with mid-air, ultrasound-haptic feedback. However, comparative studies have tended to select gestures and haptic sensations based either on experimental convenience or to conform with existing interfaces, and thus may have fallen short of realising their full potential. Aiming to design and validate an exemplar set of ultrasonic, mid-air haptic icons ("ultrahapticons"), a participatory design exercise was conducted, whereby seventeen participants were presented with seven in-vehicle infotainment tasks. Participants were asked to describe their mental models for each, and then sketch the visual, tactual, and auditory associations. 'Haptifiable' elements were extracted, and these were analysed using semiotics principles, resulting in thirty ultrahapticon concepts. These were subsequently evaluated and further refined in a workshop involving user experience and haptics experts. The final seventeen concepts will be validated in a salience recognition and perspicuity study.
This paper investigates how the working of a Convolutional Neural Network (CNN) can be explained through visualization in the context of machine perception for autonomous vehicles. We visualize the types of features extracted in different convolutional layers, which helps us understand how the CNN gradually refines spatial information in every layer and concentrates on regions of interest in every transformation. Visualizing activation heat maps helps us understand how the CNN classifies and localizes different objects in an image. This study also helps us reason about the low accuracy of a model and thereby increase trust in the object detection module.
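As a rough illustration of the approach (not the paper's code), the following sketch captures intermediate feature maps of a stand-in CNN (torchvision's ResNet-18) with forward hooks and reduces one layer's activations to a coarse, class-agnostic heat map.

```python
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()
activations = {}

def hook(name):
    def fn(module, inputs, output):
        activations[name] = output.detach()   # store this layer's feature maps
    return fn

# Register hooks on the four convolutional stages.
for name in ["layer1", "layer2", "layer3", "layer4"]:
    getattr(model, name).register_forward_hook(hook(name))

x = torch.randn(1, 3, 224, 224)   # stand-in for a camera frame
_ = model(x)

# Channel-averaged activation map: bright regions indicate where this layer
# responds most strongly, a simple proxy for a class-agnostic heat map.
heatmap = activations["layer4"].mean(dim=1)[0]
print(heatmap.shape)              # e.g. torch.Size([7, 7])
```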
New mobility concepts in public transport will benefit from automated driving systems. Nevertheless, fully automated vehicles are not expected within the next few years. For this reason, a remote operating fallback authority might be a promising solution. To cope with highly complex automation tasks, teleoperation with a distinct human-machine interaction could be used. This work describes a task analysis conducted to derive requirements for the design of a future control center workplace that deals with the control of driverless shuttles in combination with mobility-on-demand services in public transport. The results will contribute to creating an efficient, valid, and capable human-machine interaction concept for vehicle teleoperation.
A driver's confusion about the dynamic operating modes of an Automated Vehicle (AV), and thereby about their driving responsibilities, can compromise safety. To be able to detect drivers' mode confusion in AVs, we expand on a previous theoretical model of mode confusion and operationalize it by first defining the possible operating modes within an AV. Then, using these AV modes as different classes, we propose a classification framework that can potentially detect a driver's mode confusion by classifying the driver's perceived AV mode using measures of their gaze behavior. The potential applicability of this novel framework is demonstrated by a classification algorithm that can distinguish between drivers' gaze behavior measures during two AV modes, fully-automated and non-automated driving, with 93% average accuracy. The dataset was collected from older drivers (65+), who, due to changes in sensory and/or cognitive abilities, can be more susceptible to mode confusion.
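The classification idea can be prototyped roughly as below; the gaze features, random data, and classifier choice are assumptions for illustration, whereas the paper reports 93% average accuracy on real gaze data from older drivers.

```python
# Hypothetical sketch: classify perceived AV mode from gaze-behavior features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# One row per time window: e.g. % of gaze on road, mean off-road glance
# duration, glance rate to the instrument cluster (invented features).
X = rng.random((200, 3))
y = rng.integers(0, 2, size=200)  # 0 = non-automated, 1 = fully-automated mode

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # ~chance on random data; real gaze data is needed
```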
There is a current trend towards natural, adaptive in-vehicle spoken dialogue systems that react flexibly to the individual requirements of a driver or the driving situation. They aim to provide the driver with the most efficient form of interaction and thereby reduce the driver's cognitive load. Studies show that even the syntactic form of system output has an influence on drivers and their driving performance. Against this background, in this paper we present our user-centered approach for a user- and situation-adaptive strategy for the syntactic design of voice output. Based on the data collected in two user studies, we combine the two aspects of speech production and perception and compare actual language behaviour with syntactic preferences. The resulting strategy will be evaluated and elaborated further in future user studies.