Proceedings
The full proceedings and the adjunct proceedings of the Auto-UI 2019 conference will be distributed at the conference. In addition, both proceedings can be downloaded free of charge from the ACM Digital Library for one month after the start of the conference.
You can find a table of contents and individual papers for the proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications with the following links:
TOC – Main Proceedings
Advances in vehicle automation and the resulting change of the interior of cars lead to new challenges for user interface concepts. Augmented reality (AR) is a promising solution for the emerging design needs due to its diverse opportunities for user interaction and presenting information. This paper supports the development of novel AR applications. We describe a corresponding use case set consisting of 98 examples from a literature review and two focus groups. Based on these samples we present a design space for in-car AR applications. To demonstrate the benefit thereof, we show a fictional design process including our proposed design space to derive a custom AR system. This work supports designers and engineers by providing a systematic approach for integrating 3D AR interfaces in a vehicle, excluding windshields and windows.
The lack of automated cars above SAE level 3 raises challenges for conducting User Experience Design (UXD) and behaviour research for automated driving. User-centred methods are critical to ensuring a human-friendly progress of vehicle automation. This work introduces the Immersive Video-based Automated Driving (IVAD) Simulator. It uses carefully recorded 180/360° videos that are played back in a driving simulator, providing immersive driving experiences in visually realistic and familiar environments. This paper reports lessons learned from the iterative development of IVAD, and findings of two user studies: a simulator study (N=15) focused on the immersive experience, and a VR study (N=16) focused on rapid prototyping and the evaluation of Augmented Reality (AR) concepts. Overall, we found the method to be a useful, versatile and low-budget UXD tool with a high level of immersion that is uniquely aided by the familiarity of the environment. IVAD’s limitations and future improvements are discussed in relation to research applications within AutoUI.
While the general debate about the potential of automated vehicles is pervasive, less is known about how people experience those vehicles in everyday life. To this end, we studied the experiential consequences of a speculative automated “valet parking” service. We designed three variants and confronted participants “in the wild.” The automation replaced practical worries about how to navigate a parking garage with unease about the safety of the car, which clearly limited potential gains in positive experiences. The unease could be counteracted by providing feedback about the car itself and the process. This created the impression that the car is properly cared for. In addition, while parking in garages in itself was not especially enjoyable, it played an important role in a number of related positive practices (e.g., shopping routines). To ensure acceptance, automated systems need to carefully address the question of how those systems become embedded in everyday life.
The design of effective energy interfaces for electric vehicles needs an integrated perspective on the technical and psychological factors that together establish real-world vehicle energy efficiency. The objective of the present research was to provide a transdisciplinary synthesis of key factors for the design of energy interfaces for battery electric vehicles (BEVs) that effectively support drivers in their eco-driving efforts. While previous research tends to concentrate on the (visual) representation of common energy efficiency measures, we focus on the design of action-integrated metrics and indicators for vehicle energy efficiency that account for the perceptual capacities and bounded rationality of drivers. Based on this rationale, we propose energy interface examples for the most basic driving maneuvers (acceleration, constant driving, deceleration) and discuss challenges and opportunities of these design solutions.
Recognising that one of the aims of conversation is to build, maintain and strengthen positive relationships with others, the study explores whether passengers in an autonomous vehicle display similar behaviour during transactions with an on-board conversational agent-interface; moreover, whether related attributes (e.g. trust) transcend to the vehicle itself. Employing a counterbalanced, within-subjects design, thirty-four participants were transported in a self-driving pod using an expansive testing arena. Participants undertook three journeys with an anthropomorphic agent-interlocutor (via Wizard-of-Oz), a voice-command interface, or a traditional touch-surface; each delivered equivalent task-related information. Results show that the agent-interlocutor was the most preferred interface, attracting the highest ratings of trust, and significantly enhancing the pleasure and sense of control over the journey experience, despite the inclusion of ‘trust challenges’ as part of the design. The findings can help support the design and development of in-vehicle agent-based voice interfaces to enhance trust and user experience in autonomous cars.
An exploratory literature review method was applied to publications from several sources on Human-Computer Interaction (HCI) for In-Vehicle Information Systems (IVIS). The novel approach to bibliographic classification uses a graph database to investigate connections between authors, papers, methods used, and interface types investigated. This allows the application of algorithms to find similarities between different publications and overlaps between different usability evaluation methods. Through community detection algorithms, the publications can be clustered based on similarity relationships. For the proposed approach, several thousand papers were systematically filtered, classified, and stored in a graph database. The survey shows a trend towards usability assessment methods with direct involvement of users, especially observation of users and performance-related measurements, as well as questionnaires and interviews. However, methods usually applied in early stages of development that rely on assessment by models or experts, as well as collaborative and creativity methods, do not seem very popular in automotive HCI research.
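The clustering step this abstract describes can be sketched in miniature. The paper presumably used richer, modularity-based community-detection algorithms on its graph database; the toy below uses a simple union-find over a similarity graph just to convey the idea, and every paper name and edge in it is invented for illustration.

```python
from collections import defaultdict

def similarity_components(edges):
    """Group papers into clusters via similarity edges between them.

    A crude stand-in for graph-database community detection: papers
    connected (directly or transitively) by shared evaluation methods
    end up in the same cluster.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in edges:
        parent[find(a)] = find(b)  # union the two components

    clusters = defaultdict(set)
    for node in list(parent):
        clusters[find(node)].add(node)
    return list(clusters.values())

# Hypothetical papers, linked when they share a usability-evaluation method
edges = [
    ("PaperA", "PaperB"),  # both use questionnaires
    ("PaperB", "PaperC"),  # both use interviews
    ("PaperD", "PaperE"),  # both use eye tracking
]
clusters = similarity_components(edges)
# → two clusters: {PaperA, PaperB, PaperC} and {PaperD, PaperE}
```

On a real corpus the edges would come from graph-database queries over shared methods, co-authorship, and citations rather than a hand-written list.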
This paper gives an overview of the ten-year development of the papers presented at the International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI) from 2009 to 2018. We categorize the topics into two main groups, namely, manual driving-related research and automated driving-related research. Within manual driving, we mainly focus on studies on user interfaces (UIs), driver states, augmented reality and head-up displays, and methodology; within automated driving, we discuss topics such as takeover, acceptance and trust, interacting with road users, UIs, and methodology. We also discuss the main challenges and future directions for AutoUI and offer a roadmap for the research in this area.
This paper presents a systematic review of the literature on the effectiveness of engineering countermeasures at reducing unintentional Red-Light Running (RLR) violations and improving safety at traffic intersections. Twenty-six relevant studies on countdown timers, pavement markings, signal operations, advance warning systems, and in-vehicle warning systems are discussed and their results summarized. While all countermeasures demonstrated varying levels of effectiveness, in-vehicle warning systems that provided audio and/or visual feedback to drivers were found to be the most promising in lowering RLR rates, with studies showing an RLR reduction of 84.3%, a collision rate reduction of 37%, lower RLR probability, and lower risks of crashes. Limitations of each countermeasure are discussed and research shortcomings are indicated. Further areas of potential advancement are highlighted, and refinements of countermeasures are proposed with the aim of improving their effectiveness in reducing RLR violations and improving intersection safety.
Automated driving has the potential to reduce road fatalities. However, public opinion about automated driving can be described as skeptical. To increase the use of automated driving features, we investigate the persuasion principle of opt-out permission policies for enabling the automation, i.e. enabling the automation automatically unless users veto. In a driving simulator study (n = 19), participants drove on three different tracks (city, highway, rural). Three different interface concepts (opt-out, opt-in, control) were examined regarding their effects on automation use, trust, and acceptance. We found that an opt-out activation policy may increase automation usage for some participants. However, opt-out was perceived as more persuasive and more patronizing than the other conditions. Most importantly, opt-out can lead to mode confusion and therefore to dangerous situations. When such an opt-out policy is used in an automated vehicle, mode confusion must be addressed.
In light of recent incidents, it has become increasingly relevant to determine who is responsible in case of accidents involving automated vehicles. In this paper, we investigate the question of liability in automated vehicles of SAE levels 3 and above. We claim that there is a mismatch between current liability practices, where a designated driver is usually held responsible, and future perspectives, where the human increasingly assumes a passive, passenger-like role. Our claims are supported by results from an interview study with insurance companies from two European countries. We show that insurers lack sufficient data to make informed decisions on how to apportion liability in SAE level 3+ scenarios. We discuss how these considerations have to be reflected in interfaces for the driver in order to make the legal status transparent to the driver.
Advanced Driver Assistance Systems (ADAS) aim to increase safety by supporting drivers in the driving task. Especially older drivers (65+ years), given the nature of aging, could benefit from these systems. However, little is known about older drivers’ acceptance of ADAS in general and how particular acceptance aspects influence their intention to use such systems. To address this research gap, we present results from a large-scale online survey (n=1328) with aging drivers, which was conducted in three European countries in 2019. We identified several demographic and driving-related variables that are significantly related to acceptance. Furthermore, we found that older drivers’ intention to use ADAS is most strongly predicted by favorable acceptance aspects (i.e., usefulness, reassurance, and trust), while unfavorable aspects (i.e., annoyance, irritation, and stress) were found to have little to no predictive power. The findings are discussed with regard to future research directions in this area.
Efficient and safe interactions between automated vehicles and other road users can be supported through external Human-Machine Interfaces (eHMI). The success of these interactions relies on the eHMI signals being adequately understood by other road users. A paired-comparison forced choice task (Task 1) and a 6-point rating task (Task 2) were used to assess the extent to which ten different eHMI signals conveyed three separate messages: ‘I am giving way’, ‘I am in automated mode’ and ‘I will start moving’. The different eHMI options consisted of variations of a 360° lightband, a single lamp, and an auditory signal. Results demonstrated that the same eHMI format could convey different messages equally well, suggesting a need for caution when designing eHMI, to avoid presenting misleading, potentially unsafe, information. Future research should investigate whether the use of an eHMI signal indicating a change in the AV’s behaviour is sufficient for conveying intention.
Automated vehicles will eventually operate safely without the need for human supervision and fallback; nevertheless, scenarios will remain that are managed more efficiently by a human driver. A common approach to overcoming such weaknesses is to shift control to the driver. Control transitions are challenging due to human-factors issues such as post-automation behavior changes. We thus investigated cooperative overtaking, wherein driver and vehicle complement each other: drivers support the vehicle in perceiving the traffic scene and decide when to execute a maneuver, whereas the system steers. We explored two maneuver approval and cancel techniques on touchscreens and show that cooperative overtaking is feasible: both interaction techniques provide good usability and were preferred over manual maneuver execution. However, participants disregarded rear traffic in more complex situations. Consequently, system weaknesses can be overcome with cooperation, but drivers should be assisted by an adaptive system.
In the context of highly automated driving, the driver has to be aware of driving risks and to take over control of the car in hazardous situations. The goal of this paper is to categorize and analyze the factors that lead to such critical scenarios. To this purpose, we analyzed limitations of Advanced Driver-Assistance Systems (ADAS) extracted from the owner manuals of 12 partially automated cars available on the market. A taxonomy with 6 macro-categories and 26 micro-categories is proposed to classify and better understand the limitations of these vehicles. We also investigated whether these limitations are conveyed to the driver through Human-Machine Interaction (HMI) in the car. Some suggestions are made to better communicate these limitations to the driver in order to raise their situation awareness.
Automated driving will mean that people can engage in other activities, and an important concern will be how to alert the driver to critical events that require their intervention. This study evaluates how various levels of assertiveness of voice commands in a semi-automated vehicle and different degrees of immersion in a non-driving task may affect people’s attention on the road. In a simulated set-up, 20 participants were required to execute actions on the steering wheel when a voice command was given while playing a mobile game. Regardless of how immersed the driver was in the game, a more assertive voice resulted in faster reaction times to the instructions and was perceived as more urgent than a less assertive voice. Automotive systems should use an assertive voice to effectively grab people’s attention; this is effective even when drivers are engaged in an immersive secondary task.
When evaluating drivers’ trust in automated systems, perceived risk is an inevitable, yet underestimated component, especially during initial interaction. We designed two experimental studies focusing on how people assess risk in different driving environments and how introductory information about automation reliability influences trust and risk perception. First, we designed nine driving scenarios to determine which factors influence Perceived Situational Risk (PSR) and Perceived Relational Risk (PRR). Results showed that participants identified levels of risk based on traffic type and vehicles’ abnormal behaviors. We then evaluated how introductory information and situational risk influence trust and PRR. Results showed that participants reported the highest level of trust, the highest perceived automation reliability, and the lowest level of PRR when presented with information about a highly reliable system, and when driving in a low-risk situation. These results highlight the importance of incorporating perceived risk and introductory information to support trust calibration in automated vehicles.
In the domain of automated driving, numerous (technological) problems have been solved in recent years, but many limitations remain that could eventually prevent the deployment of automated driving systems (ADS) beyond SAE level 3. A remote operating fallback authority might be a promising solution. In order for teleoperation to function reliably and universally, it will make use of existing infrastructure, such as cellular networks. Unfortunately, cellular networks may suffer from variable performance. In this work, we investigate the effects of latency on task performance and perceived workload for different driving scenarios. Results from a simulator study (N=28) suggest that latency has a negative influence on driving performance and subjective factors, and led to decreased confidence in teleoperated driving during the study. A latency of about 300 ms already led to deteriorated driving performance, whereas variable latency did not consistently deteriorate driving performance.
Public opinion suggests that it is still unclear how people will react when automated vehicles (AVs) emerge on the roads. Fatal accidents involving AVs have received wide media attention, possibly disproportionate to their frequency. How does the framing of such stories affect public perceptions of AVs? Few drivers have encountered AVs, but how do they imagine interacting with them in the near future? This survey study with 600 UK and Hong Kong drivers addressed these two questions. After reading news ‘vignettes’ reporting an imagined car crash, respondents presented with subjective information perceived AVs as less safe than those presented with factual information. We draw implications for countering negative newsflow with factual information. Respondents were also presented with an imagined interaction with human-driven vehicles and AVs and did not differentiate between the two. Results for other variables, e.g. first- and third-person framing and cultural differences, are also reported.
The intentions of an automated vehicle are hard to spot in the absence of eye contact with a driver or other established means of communication. External car displays have been proposed as a solution, but what if they malfunction or display misleading information? How will this influence pedestrians’ trust in the vehicle? To investigate these questions, we conducted a between-subjects study in Virtual Reality (N = 18) in which one group was exposed to erroneous displays. Our results show that participants already started with a very high degree of trust. Incorrectly communicated information led to a strong decline in trust and perceived safety, but both recovered very quickly. This was also reflected in participants’ road crossing behavior. We found that malfunctions of an external car display motivate users to ignore it and thereby aggravate the effects of overtrust. Therefore, we argue that the design of external communication should avoid misleading information and at the same time prevent the development of overtrust by design.
In a longitudinal study, 49 drivers undertook a commute-style journey, with part of the route supporting level-3 automation, over five consecutive days. Bespoke HMIs were provided to keep drivers in-the-loop during automation, and help them regain situational awareness (SA) during handovers, in a 2×2 between-subjects design. Drivers demonstrated high levels of trust from the outset, delegating control to the vehicle (when available) and directing attention to their own activities/devices. Ratings of trust and technology acceptance increased during the week — even following an unexpected, emergency handover on day four — with the highest ratings recorded on day five. High levels of lateral instability were observed immediately following takeovers, although improvements were noted during the week and following the provision of SA-enhancing handover advice. Results demonstrate benefits associated with novel HMI designs to keep drivers in-the-loop and improve takeover performance, as well as the necessity of multiple exposures during the evaluation of future, immersive technologies.
Riding a highly automated bus has the potential to bring about a set of novel challenges for the passenger. As there is no human driver present, there is no one to talk to regarding driving direction, stops, or delays. This lack of a human element is likely to cause a stronger reliance on the in-vehicle means of communication, such as displays. In this paper, we present the results from a qualitative study in which we tested three different on-screen visualizations for passenger information during an automated bus trip. The designs focused primarily on signaling the next stop and the proper time to request the bus to stop in the absence of a human driver. We found that adding geo-spatial details can easily confuse more than help, and that the absence of a human driver makes passengers feel more insecure about being able to exit at the right stop. Thus, passengers are less receptive to visual cues signaling upcoming stops and more likely to input stop requests immediately upon leaving the station.
Full windshield displays (WSDs) have the potential to present imagery across the windshield. Current knowledge on display location has not covered translucent displays at high eccentricities from the driver’s forward view. A simulator study (n=26) was conducted aiming to (a) investigate the effects of Head-Up Display (HUD) location across the entire windshield on driving performance, and (b) better understand how the visual demand of complex HUD imagery differs from that of a Head-Down Display (HDD). Lane-keeping was poorer when HUD imagery was furthest from the driver (and for the HDD compared to the HUD). Equally, counts of “unacceptable” driving behaviour were greater for displays furthest from the driver’s forward view. Furthermore, drivers preferred HUD imagery that was closer to them. The results indicate that HUD evaluations should account for image location, because of how driver gaze location can impact lateral driving performance.
The visual demand of finger-touch based interactions with touch screens has been increasingly modelled using Fitts’ Law. With respect to driving, these models facilitate the prediction of mean glance duration and total glance time with an index of difficulty based on target size and location. Strong relationships between measures have been found in the controlled conditions of driving simulators. The present study aimed to validate such models in naturalistic conditions. Nineteen experienced drivers carried out a range of touchscreen button-press tasks in an instrumented car on a UK motorway. In contrast with previous simulator-based work, our on-road data produced much weaker relationships between the index of difficulty and glance times. The model improved by focusing on tasks that required one glance only. Limitations of Fitts’ Law in the more complex and dynamic real-world driving environment are discussed, as are the potential drawbacks of driving simulators for conducting visual demand research.
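The modelling approach this abstract refers to can be sketched as follows. The Shannon formulation of the index of difficulty is an assumption about the exact variant used, and the coefficients `a` and `b` are illustrative placeholders, not values fitted in the study.

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits,
    from target distance and target width (same units)."""
    return math.log2(distance / width + 1)

def mean_glance_duration(id_bits: float, a: float = 0.4, b: float = 0.25) -> float:
    """Linear glance-time model: duration (s) = a + b * ID.

    a and b are placeholder coefficients for illustration; the study
    reports that on-road fits of such models were much weaker than
    simulator-based ones.
    """
    return a + b * id_bits

# A distant, small target is predicted to demand a longer glance
near_large = mean_glance_duration(index_of_difficulty(100, 100))  # ID = 1 bit
far_small = mean_glance_duration(index_of_difficulty(700, 100))   # ID = 3 bits
```

Total glance time would be modelled the same way with its own fitted intercept and slope.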
Automated vehicles (AVs) introduce a new challenge to human-computer interaction (HCI): pedestrians are no longer able to communicate with human drivers. Hence, new HCI designs need to fill this gap. This work presents the implementation and comparison of different interaction concepts in virtual reality (VR). They were derived after an analysis of 28 works from research and industry, which were classified into five groups according to their complexity and the type of communication. We implemented one concept per group for a within-subject experiment in VR. For each concept, we varied whether the AV is going to stop and how early it starts to activate its display. We observed effects on safety, trust, and user experience. A good concept displays information on the street, uses unambiguous signals (e.g., green lights) and has high visibility. Additional feedback, such as continuously showing the recognized pedestrian’s location, seems to be unnecessary and may irritate users.
Top-down simulations of autonomous intersections neglect considerations for the human experience of being in cars driving through these autonomous intersections. To understand the impact that perspective has on perception of autonomous intersections, we conducted a driving simulator experiment and studied the experience in terms of perception, feelings, and pleasure. Based on this data, we discuss experiential factors of autonomous intersections that are perceived as beneficial or detrimental for the future driver. Furthermore, we present what the change of perspective implies for designing intersection models, future in-car interfaces and simulation techniques.
Recent studies have investigated new approaches for communicating an autonomous vehicle’s (AV) intent and awareness to pedestrians. This paper adds to this body of work by presenting the design and evaluation of in-situ projections on the road. Our design combines common traffic light patterns with aesthetic visual elements. We describe the iterative design process and the prototyping methods used in each stage. The final design concept was represented as a virtual reality simulation and evaluated with 18 participants in four different street crossing scenarios, which included three scenarios that simulated various degrees of system errors. We found that different design elements were able to support participants’ confidence in their decision even when the AV failed to correctly detect their presence. We also identified elements in our design that needed to be more clearly communicated. Based on these findings, the paper presents a series of design recommendations for projection-based communication between AVs and pedestrians.
Autonomous vehicles’ (AVs) interactions with pedestrians remain an ongoing uncertainty. Several studies have claimed the need for explicit external human-machine interfaces (eHMI) such as lights or displays to replace the lack of eye contact with and explicit gestures from drivers; however, this need is not thoroughly understood. We review literature on explicit and implicit eHMI, and discuss results from a field study with a Wizard-of-Oz driverless vehicle that tested pedestrians’ reactions in everyday traffic without explicit eHMI. While some pedestrians were surprised by the vehicle, others did not notice its autonomous nature, and all crossed in front without explicit signaling, suggesting that pedestrians may not need explicit eHMI in routine interactions—the car’s implicit eHMI (its motion) may suffice.
Semiautonomous driving still requires the driver’s control and attention in certain situations. Control transitions in particular, i.e. take-over and hand-over situations, are important for safety. Our aim was to study control transitions supported by unimodal (i.e. visual, auditory, or haptic) or multimodal (i.e. visual, auditory and haptic) signals indicating the change from manual to autonomous driving and vice versa. The signals were abstract visual blinks, auditory beeps, or haptic vibrations. The task was to take over driving while either looking through the windshield or playing a game. In addition, in half of the control transitions a feedback signal indicated a successful control transition. The results showed that a secondary task slowed down reaction times, but there was great variation between individuals. In general, the response to the auditory signal was slower than to visual, haptic, or multimodal signals. Moreover, users preferred feedback during control transitions, but this slowed down reaction times.
Haptic feedback has frequently been proposed as a means to support eco-driving behaviour. While force and vibrotactile feedback have proven to be effective and safe approaches, no studies were found that assessed the user experience of different feedback designs. We describe the design of six haptic effects which were implemented in a custom designed accelerator pedal. The user experience of three effects (linear force increase, bump, and pulse) was assessed in a driving simulator and compared to a baseline with no feedback. Results show that the haptic pedal effects were rated positively on attractiveness, dependability, stimulation, and novelty. The pulsating effect scored significantly lower on attractiveness and dependability but highest on novelty. Qualitative results suggest that combining a bump and pulse could increase the positive experience of a haptic pedal. Consequently, we argue for more experiential approaches to haptic feedback design in accelerator pedals.
As a consequence of insufficient situation awareness and inappropriate trust, operators of highly automated driving systems may be unable to safely perform takeovers following system failures. The communication of system uncertainties has been shown to alleviate these issues by supporting trust calibration. However, the existing approaches rely on information presented in the instrument cluster and therefore require users to regularly shift their attention between road, uncertainty display, and non-driving related tasks. As a result, these displays have the potential to increase workload and the likelihood of missed signals. A driving simulator study was conducted to compare a digital uncertainty display located in the instrument cluster with a peripheral awareness display consisting of a light strip and vibro-tactile seat feedback. The results indicate that the latter display affords users flexibility to direct more attention towards the road prior to critical situations and leads to lower workload scores while improving takeover performance.
With increasing automation, vehicles could soon become “mobile offices”, but traditional user interfaces (UIs) for office work are not optimized for this domain. We hypothesize that productive work will only be feasible in SAE level 3 automated vehicles if UIs are adapted to (A) the operational design domain, and (B) driver-workers’ capabilities. Consequently, we studied adapted interfaces for a typical office task (text comprehension) by varying display modality (heads-up reading vs. auditory listening), as well as UI behavior in conjunction with take-over situations (attention-awareness vs. no attention-awareness). Self-ratings, physiological indicators, and objective performance measures in a driving simulator study (N = 32) allowed us to derive implications for a mobile-workspace automated vehicle. Results highlight that heads-up displays promote sequential multi-tasking and thereby reduce workload and improve productivity in comparison to auditory displays, which were nevertheless more attractive to users. Attention-awareness led to reduced stress but later driving reactions, consequently requiring further investigation.
Several research projects have recently explored the use of physiological sensors such as electroencephalography (EEG) or electrooculography (EOG) to measure the engagement and vigilance of a user in the context of car driving. However, these systems still suffer from limitations such as the absence of a socially acceptable form factor and the use of impractical, gel-based electrodes. We present AttentivU, a device using both EEG and EOG for real-time monitoring of physiological data. The device is designed as a socially acceptable pair of glasses and employs silver electrodes. It also supports real-time delivery of feedback in the form of an auditory signal via a bone conduction speaker embedded in the glasses. A detailed description of the hardware design and proof-of-concept prototype is provided, as well as preliminary data collected from 20 users performing a driving task in a simulator in order to evaluate the signal quality of the physiological data.<\/p>\n<\/div><\/div>\n
In road-crossing situations involving negotiation with approaching vehicles, pedestrians need to take into account the behavior of the car before making a decision. To investigate what information about the car pedestrians seek, and where they look for it, we conducted an eye-tracking study with 26 participants and analyzed fixation behavior when interacting with a manually driven vehicle that slowed down as it approached, displaying a yielding behavior. Results show that pedestrians follow a clear pattern of gaze behavior when looking at a vehicle during road-crossing situations, as a function of the vehicle’s distance. When the car is far away, pedestrians look at the environment or the road space ahead of the car. As the car approaches, the gaze gradually shifts to its windshield. We conclude by discussing the implications of this insight for the user-centered design of optimal external Human-Machine Interfaces for automated vehicles.<\/p>\n<\/div><\/div>\n
Driving simulators are necessary for evaluating automotive technology with human users. While simulators vary in fidelity, it is essential that users experience minimal simulator sickness and high presence in them. In this paper, we present two experiments that investigate how a virtual driving simulation could be visually presented within a real vehicle, which moves on a test track but displays a virtual environment. Specifically, we contrasted presentation of the simulation using either head-mounted displays (HMDs) or fixed displays in the vehicle itself. Overall, we find that fixed displays induced less simulator sickness than HMDs. Neither HMDs nor fixed displays induced a stronger presence in our implementation, even when the field of view of the fixed display was extended. We discuss the implications of this, particularly in the context of scenarios that could induce considerable motion sickness, such as testing non-driving activities in automated vehicles.<\/p>\n<\/div><\/div>\n<\/div>\n<\/div>\n
TOC – Adjunct Proceedings<\/p>\n
There are more types of vehicles than the automobile, and many are used for purposes other than transporting passengers or goods. They are often dedicated to enabling the user to perform specific manual tasks in parallel to driving. Such heavy vehicles range from construction vehicles, such as excavators and articulated haulers, to agricultural vehicles, such as tractors and harvesters. They also include speciality vehicles such as lifts and cranes. Recent advances in information technology radically increase their productivity and safety. Moreover, heavy vehicles are increasingly sensor- and software-driven, as well as connected and integrated with information systems. This development creates new interaction challenges and research areas. The aim of this workshop is to gather practitioners, researchers, and professionals who wish to explore potential opportunities, identify research challenges, and innovate in the domain of heavy vehicles.<\/p>\n<\/div><\/div>\n
The AutoUI conference is the premier forum for user interface research in the automotive domain, annually bringing together over 200 researchers and practitioners interested in both the technical and the human aspects of in-vehicle user interfaces and applications. However, over 80% of its published papers come from only five countries in Western Europe and North America. Considering the importance and valuable impact this conference has had on the research and development of HMI (Human-Machine Interface) and automated systems in recent years, there is a need for greater diversity and inclusion of researchers and practitioners from other continents. The goal of this workshop is to bring together researchers, practitioners, experts, and students from different research backgrounds, influenced by or influencing the automotive domain, to discuss cross-cultural differences in driving behaviors and infrastructure, which is an essential prerequisite for future vehicle systems and driving safety.<\/p>\n<\/div><\/div>\n
Properly calibrated trust in automation is a key issue for the successful implementation of automated vehicle technology. Recent research and investigations of accidents involving automated driving systems have shown that drivers have difficulty adjusting their trust levels appropriately to system performance, which is a key requirement for trust calibration [7]. Whereas previous editions of this workshop concentrated on suitable definitions, measurements, and factors influencing trust, this year’s edition shifts the focus to the question: <i>How does trust interact with and influence other latent constructs, such as risk behavior, situation awareness, or users’ willingness to engage in non-driving related tasks?<\/i> The workshop thereby welcomes both experts and young researchers who already conduct, or want to conduct, research in this timely area, with the aim of developing concrete research programs and experimental designs that close existing knowledge gaps and allow further progress in the domain of trust calibration.<\/p>\n<\/div><\/div>\n Automation is increasingly gaining traction not only for individual but also for public transportation, especially in the last-mile sector. With no human driver at the helm, there is a need for adequate interaction replacements for passenger and roadside information – not only while the bus is in transit, but also before and during boarding. This workshop is intended to address these needs by exploring this design space in a hands-on setting. The expected outcome of the workshop is a set of interaction scenarios, design concepts, and future challenges, which should serve as a basis for ongoing research and development in the field.<\/p>\n<\/div><\/div>\n\n2<sup>nd<\/sup> workshop on user interfaces for public transport vehicles: interacting with automation<\/a>
\n<\/h3>\n\n
\nSimulator showdown: pitch your virtual ride<\/a>