Digital government opens up opportunities to critically reflect on administrative and governmental processes and to rethink them accordingly. Whereas citizen participation processes were subject to numerous hurdles in the past, e-participation offers ways to combine them with modern technologies that enable low-threshold involvement. The research project Take Part, funded by the Bundesministerium für Bildung und Forschung, investigates innovative forms of citizen participation in urban and construction planning using augmented and virtual reality (AR and VR). The main goals are to create new incentives, to motivate citizens to participate, and thereby to reduce the potential for conflict around construction projects. With the app developed within Take Part, citizens can discuss construction projects, give feedback, or vote on them while the subject of participation is presented to them vividly in AR and VR. At the same time, initiators can use a participation ecosystem to configure participation for a given construction project by combining and configuring existing modules as well as purchasing suitable services, such as 3D modeling. This contribution presents the concrete technological developments (including outdoor AR tracking and spatially anchored discussions, sketched below) as well as the participation ecosystem (a service development and execution platform). In doing so, the developed prototype is presented comprehensively for the first time. The challenge of developing an e-participation app that integrates different interaction concepts with one another while still providing a convincing user experience is also addressed. Finally, the potential of such a solution for digital co-determination in local government is discussed, particularly with regard to users' enhanced ability to envision planned projects and their motivation to participate, and is placed in the context of the COVID-19 pandemic.
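As a purely illustrative sketch of what a "spatially anchored discussion" could look like as a data structure, the following Python snippet ties a comment thread and a vote tally to a geo-coordinate so that it could be rendered at the corresponding spot in an outdoor AR view. All field names and values are assumptions for illustration and are not taken from the Take Part prototype.

```python
# Hypothetical sketch of a spatially anchored discussion record: a comment thread
# pinned to a geo-coordinate so it can be shown at that location in outdoor AR.
# Names and fields are illustrative assumptions, not the Take Part data model.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class GeoAnchor:
    latitude: float    # WGS84 coordinates of the anchor in the planning area
    longitude: float
    altitude_m: float  # height above ground at which the AR marker is placed


@dataclass
class Comment:
    author: str
    text: str
    created_at: datetime


@dataclass
class AnchoredDiscussion:
    project_id: str                 # the construction project being discussed
    anchor: GeoAnchor               # where the discussion is pinned in AR
    comments: list[Comment] = field(default_factory=list)
    votes_for: int = 0              # simple vote tally for the proposal
    votes_against: int = 0


# Example: a citizen pins a comment to a planned building site (invented values).
discussion = AnchoredDiscussion(
    project_id="new-community-center",
    anchor=GeoAnchor(latitude=52.28, longitude=8.05, altitude_m=1.5),
)
discussion.comments.append(
    Comment(
        author="citizen42",
        text="Please keep the old trees on this corner.",
        created_at=datetime.now(),
    )
)
```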
One key aspect of the Internet of Things (IoT) is that human-machine interfaces are disentangled from the physicality of the devices. This gives designers more freedom, but it may also lead to more abstract interfaces, as they lack the natural context created by the presence of the machine. Mixed Reality (MR), on the other hand, is a key technology that enables designers to place user interfaces anywhere, either linked to a physical context (augmented reality, AR) or embedded in a virtual context (virtual reality, VR). Designing MR interfaces is particularly challenging today, as there is not yet a common design language or a set of standard functionalities and patterns. In addition, neither customers nor future users have substantial experience in using MR interfaces.
Companies have the opportunity to better engage potential customers by presenting products to them in a highly immersive virtual reality (VR) shopping environment. However, little is known about whether and why customers will adopt such fully immersive shopping environments. We therefore develop and experimentally validate a theoretical model that explains how immersion affects adoption. The participants experienced the environment either by using a head-mounted display (high immersion) or by viewing product models in 3D on a desktop (low immersion). We find that immersion does not affect the users’ intention to reuse the shopping environment, because two paths cancel each other out: highly immersive shopping environments positively influence a hedonic path through telepresence, but, surprisingly, they negatively influence a utilitarian path through product diagnosticity. We can explain this effect by the low readability of product information in the VR environment and expect VR’s full potential to develop as the technology advances. Our study contributes to the literature on immersive systems and to IS adoption research by introducing a research model for the adoption of VR shopping environments. A key practical implication of our study is that system designers need to pay special attention to the current state of technology when designing VR applications.
With high-immersive virtual reality (VR) systems approaching mass markets, companies are seeking to better understand how consumers behave when shopping in VR. A key feature of high-immersive VR environments is that they can create a strong illusion of reality to the senses, which could substantially change consumer choice behavior compared to online shopping. We compare consumer choice from virtual shelves in two environments: (i) a high-immersive VR environment using a head-mounted display and hand-held controllers and (ii) a low-immersive environment showing products as rotatable 3D models on a desktop computer screen. We use an incentive-aligned choice experiment to investigate how immersion affects consumer choice. Our investigation comprises three key choice characteristics: variety-seeking, price-sensitivity, and satisfaction with the choice made. The empirical results provide evidence that consumers in high-immersive VR choose a larger variety of products and are less price-sensitive. Choice satisfaction, however, did not increase in high-immersive VR.
Virtual reality (VR) has gained increasing academic attention in recent years, and a possible reason for that might be its spread of applications across different sectors of life. Since the advent of the WebVR 1.0 API (application programming interface), released in 2016, it has become easier for developers without extensive knowledge of programming and 3D-object modeling to build and host applications that can be accessed anywhere with a minimal set of devices. The development of WebVR, now continued as WebXR, is therefore especially relevant for research on education and teaching, since VR experiments previously required not only expertise in computer science but also state-of-the-art hardware, both of which could be limiting factors for researchers and teachers. This paper presents the results of a project conducted at CITEC (Cluster of Excellence Cognitive Interaction Technology), Bielefeld University, Germany, which aimed to teach English for a specific purpose in a VR environment using Amazon Sumerian, a web-based service. Contributions and limitations of this project are also discussed.
Classifying information search behavior helps tailor recommender systems to individual customers’ shopping motives. But how can we identify these motives without requiring users to exert too much effort? Our research goal is to demonstrate that eye tracking can be used at the point of sale to do so. We focus on two frequently investigated shopping motives: goal-directed and exploratory search. To train and test a prediction model, we conducted two eye-tracking experiments in front of supermarket shelves. The first experiment was carried out in immersive virtual reality; the second in physical reality, that is, as a field study in a real supermarket. We conducted a virtual reality study because recently launched virtual shopping environments suggest that there is great interest in using this technology as a retail channel. Our empirical results show that support vector machines allow the correct classification of search motives with 80% accuracy in virtual reality and 85% accuracy in physical reality. Our findings also imply that eye movements allow shopping motives to be identified relatively early in the search process: our models achieve 70% prediction accuracy after only 15 seconds in virtual reality and 75% in physical reality. Applying an ensemble method increases the prediction accuracy substantially, to about 90%. Consequently, the approach that we propose could be used for a satisfactory classification of consumers in practice. Furthermore, the best predictor variables of both environments overlap substantially. This finding provides evidence that information search behavior in virtual reality might be similar to that in physical reality. Finally, we also discuss managerial implications for retailers and companies that are planning to use our technology to personalize a consumer assistance system.
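To make the kind of pipeline described above concrete, the following minimal sketch classifies goal-directed versus exploratory search from aggregated gaze features with a support vector machine and then with a simple soft-voting ensemble. The feature set, the synthetic data, and the choice of ensemble members are assumptions for illustration only and are not taken from the study.

```python
# Minimal sketch (synthetic data, assumed features): classify goal-directed vs.
# exploratory search from aggregated eye-tracking features with an SVM, then
# with a soft-voting ensemble, as the abstract above describes at a high level.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical features aggregated over the first 15 s of a search episode:
# fixation count, mean fixation duration, mean saccade amplitude, shelf dwell time.
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)  # 0 = exploratory, 1 = goal-directed (synthetic labels)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
print("SVM cross-validated accuracy:", cross_val_score(svm, X, y, cv=5).mean())

# The abstract reports that an ensemble raises accuracy further; one generic way
# to set this up is soft voting over heterogeneous classifiers.
ensemble = VotingClassifier(
    estimators=[
        ("svm", svm),
        ("logreg", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",
)
print("Ensemble cross-validated accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```

With real gaze data, the reported early-identification result would correspond to computing these features over only the first seconds of each search episode before applying the trained classifier.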
Although today's consumer-level head-mounted displays provide high-quality room-scale tracking, and thus support a high level of immersion and presence, there are application contexts in which seated setups are necessary. Classroom-sized training groups are one highly relevant example. However, what is lost when constraining cybernauts to a stationary, seated physical space? What is the impact on immersion, presence, and cybersickness, and what implications does this have for training success? Can careful design for seated virtual reality (VR) mitigate some of these effects? In this line of research, this study compares standing and seated long (50–60 min) procedural VR training sessions of chemical operators performing a realistic and lengthy chemical procedure (a combination of digital and physical actions) inside a large three-floor virtual chemical plant. In addition, a VR training framework based on Maslow's hierarchy of needs (MHN) is proposed to systematically analyze user needs in VR environments. In the first of a series of studies, the physiological and safety needs of the MHN are evaluated for seated and standing groups in terms of cybersickness, usability, and user experience. The results (n=32, real personnel of a chemical plant) show no statistically significant differences between the seated and standing groups. Both conditions showed low levels of cybersickness along with good usability and user-experience scores. These results suggest that the seated condition does not introduce significant problems that might hinder its application in classroom training. A follow-up study with a larger sample will provide a more detailed analysis of differences in experienced presence and learning success.
The academization of midwifery education entails novel challenges. The Heb@AR App is an innovative Augmented Reality (AR) training application designed to support this transition sustainably and to foster the transfer of acquired theoretical knowledge into practice. It is currently being deployed for self-directed and curricular on-site learning in several midwifery degree programs in Germany. The aim is to strengthen students' practical competencies, especially in emergency situations. The Heb@AR App is available free of charge for handheld devices through the Android and iOS app stores and currently offers five midwifery-specific training scenarios, including supplementary material.