{"id":3249,"date":"2019-07-23T17:51:19","date_gmt":"2019-07-23T16:51:19","guid":{"rendered":"https:\/\/cms.auto-ui.org\/19\/?page_id=3249"},"modified":"2019-08-08T15:27:50","modified_gmt":"2019-08-08T14:27:50","slug":"demo-program","status":"publish","type":"page","link":"https:\/\/cms.auto-ui.org\/19\/program\/demo-program\/","title":{"rendered":"Demos (Program)"},"content":{"rendered":"
Tentative program: Interactive Demos
Tuesday 24 September, during the coffee breaks, in room “Ontwerpruimte” of “De Munt”.

Interactive Demos (or, for short: demos) are presentations/demonstrations that highlight and foster discussion of current research and development in the area of automotive user interfaces and interactive vehicular applications. We see demos as catalysts for innovation and collaboration, and we encourage submissions from both industry and academia.

Demo authors can set up their demo on Tuesday morning before the first coffee break (and ideally before the first paper session). The demos are on display in the room “Ontwerpruimte”.

Overview of demos

Michael Hildebrandt, Jens-Patrick Langstrand, Hoa Thi Nguyen (Institute for Energy Technology)

Synopticon is a collection of tools for managing complex, multi-sensory data streams when conducting behavioral research in simulators or on the road. Synopticon's functionality includes automatic gaze object detection, multi-camera synchronization, camera-sensor synchronization (e.g. with physiological sensors), camera-simulator synchronization, and support for computer vision and machine learning.
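Synopticon's internals are not shown here; as a minimal, purely illustrative sketch of the kind of problem that camera-sensor synchronization addresses, the following Python snippet pairs two independently timestamped recordings (camera frames and a physiological sensor) by nearest timestamp. All names and data are hypothetical and are not taken from Synopticon.

```python
# Illustrative sketch only: nearest-timestamp alignment of two
# independently recorded data streams (e.g. camera frames and a
# physiological sensor). Hypothetical data; not Synopticon code.
from bisect import bisect_left

def align_streams(frame_times, sensor_samples):
    """Pair each camera frame with the sensor sample closest in time.

    frame_times    -- sorted list of frame timestamps (seconds)
    sensor_samples -- sorted list of (timestamp, value) tuples
    """
    sensor_times = [t for t, _ in sensor_samples]
    pairs = []
    for ft in frame_times:
        i = bisect_left(sensor_times, ft)
        # Consider the neighbours before and after the insertion point
        # and keep whichever is closest to the frame time.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(sensor_times)]
        best = min(candidates, key=lambda j: abs(sensor_times[j] - ft))
        pairs.append((ft, sensor_samples[best]))
    return pairs

if __name__ == "__main__":
    frames = [0.00, 0.04, 0.08, 0.12]             # 25 fps camera
    heart = [(0.01, 71), (0.05, 72), (0.11, 70)]  # sparse sensor readings
    for frame_t, (sensor_t, bpm) in align_streams(frames, heart):
        print(f"frame@{frame_t:.2f}s -> sensor@{sensor_t:.2f}s ({bpm} bpm)")
```

In practice, streams recorded on different devices also need clock-offset estimation and drift correction before such an alignment is meaningful.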
Jan Conrad, Dieter P. Wallach, Arthur Barz, Daniel Kerpen, Tobias Puderer, Andreas Weisenburg (University of Applied Sciences Kaiserslautern)

Developing and evaluating automotive user interfaces and driver assistance systems in real vehicles and in simulators is an effortful and costly process. To reduce the effort of defining and conducting user studies, this demo introduces the concept simulator K3F. K3F is a highly flexible driving simulation environment that allows quick and easy modification and exchange of its hardware and software components, including the simulation software, dashboard/infotainment, and peripheral systems. The K3F software supports a convenient setup of user studies and comprehensive data collection. It simplifies the creation and modification of virtual environments and the development of real-world street scenarios. The integration of a driver model based on the ACT-R cognitive architecture allows not only the explanation of observed behavior but also the prediction of human driving patterns.

Haoyu Dong (TU Eindhoven)

Thanks to advances in microcontrollers, sensors, and actuators, interactive devices can now easily disappear into a wide range of physical contexts. However, such disappearing interaction can leave users confused about where and how to interact. A research project was therefore conducted to investigate different inherent feedforwards for disappearing interaction in textile surfaces. A Tangible User Interface (TUI) for volume adjustment was designed that provides both visual and shape-changing feedforward. The interface can be embedded in ubiquitous soft surfaces; in this demo, it takes the form of a textile-based Human-Machine Interface (HMI) in the vehicle seat. The textile interface offers a natural and enjoyable HMI concept. This report describes the theoretical background, prototype, user test, and demo setup and contribution.

Andreas Riegler (University of Applied Sciences Upper Austria), Andreas Riener (Technische Hochschule Ingolstadt), Clemens Holzmann (FH Oberösterreich)

Human factors research in automated driving is nowadays often conducted using either low-quality setups, such as 2D monitors, or highly expensive driving simulators with motion platforms. Additionally, software for automated driving scenarios is often expensive and hard to adapt to different scenarios. We intend to bridge this gap with a low-cost, high-fidelity immersive prototyping solution that exploits recent advances in virtual reality (VR): AutoWSD, an automated driving simulator for research on windshield displays. We showcase a hybrid software and hardware solution based on the popular and widely used Unity development platform. Furthermore, we demonstrate how little effort is needed to create scenarios for user studies, and thereby foster discussion about potential improvements and extensions of AutoWSD, as well as about trust, acceptance, user experience, and simulator sickness in automation.
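AutoWSD itself is built on the Unity platform (which is scripted in C#); as a language-neutral, purely hypothetical illustration of what a declarative study-scenario description might contain, consider the following Python sketch. Every field name here is invented for illustration and is not part of AutoWSD's actual API.

```python
# Hypothetical, illustrative only: a declarative description of a
# windshield-display study scenario, of the kind a simulator such as
# AutoWSD could load. All field names are invented for this sketch.
from dataclasses import dataclass, field

@dataclass
class HudElement:
    name: str                    # e.g. "speed", "takeover_warning"
    position: tuple              # normalized (x, y) on the windshield
    visible_from_s: float = 0.0  # when the element appears

@dataclass
class Scenario:
    name: str
    automation_level: int                         # SAE level being simulated
    route: list = field(default_factory=list)     # ordered waypoint ids
    hud: list = field(default_factory=list)       # windshield-display content
    events: list = field(default_factory=list)    # (time_s, description)

trust_study = Scenario(
    name="highway_takeover",
    automation_level=3,
    route=["onramp", "highway_a", "exit_12"],
    hud=[HudElement("speed", (0.1, 0.9)),
         HudElement("takeover_warning", (0.5, 0.5), visible_from_s=120.0)],
    events=[(120.0, "request manual takeover")],
)

print(f"{trust_study.name}: {len(trust_study.hud)} HUD elements")
```

Keeping the scenario as plain data, separate from simulator code, is what makes it cheap to vary conditions between user-study sessions.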
PAST EVENTS
Toronto 2018