User experience questionnaires are a common and valid method to measure the user experience (UX) of a product or service, and in recent years they have become established for measuring various aspects of UX. An evaluation tool is usually offered alongside the questionnaire so that the results of a study can be interpreted; as a rule, the evaluation consists of preparing the data and comparing it with a benchmark. This interpretation is often not sufficient, as it only assesses the current user experience, whereas it is desirable to determine exactly where action is needed. In our article we present an approach that evaluates the results of the User Experience Questionnaire (UEQ) using importance-performance analysis (IPA). The aim is to provide an additional way to interpret UEQ results and to derive recommendations for action from them. In a first study with 219 participants, we validated the proposed approach using YouTube and WhatsApp. The results show that the IPA provides additional insights from which further recommendations for action can be derived.
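To make the IPA step concrete, the following minimal sketch classifies UEQ-style scale means into the classic IPA quadrants. The scale names, ratings, and grand-mean cut-offs are hypothetical illustrations, not data or code from the study described above.

```python
# Minimal sketch of an importance-performance analysis (IPA) over UEQ-style
# scale means. All scale names and numbers are hypothetical examples.

from statistics import mean

# (scale, importance, performance) -- hypothetical mean ratings
scales = [
    ("Efficiency",    2.1, 1.8),
    ("Perspicuity",   1.9, 0.9),
    ("Stimulation",   1.2, 1.6),
    ("Dependability", 1.7, 0.7),
]

imp_cut = mean(i for _, i, _ in scales)    # grand mean as quadrant boundary
perf_cut = mean(p for _, _, p in scales)

def quadrant(importance, performance):
    """Classify a scale into the classic IPA quadrants."""
    if importance >= imp_cut and performance < perf_cut:
        return "Concentrate here"          # important but underperforming
    if importance >= imp_cut:
        return "Keep up the good work"
    if performance >= perf_cut:
        return "Possible overkill"
    return "Low priority"

for name, imp, perf in scales:
    print(f"{name:14s} -> {quadrant(imp, perf)}")
```

Scales falling into the "Concentrate here" quadrant are the ones from which recommendations for action would typically be derived.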
User experience is a relevant aspect of successful product development. Questionnaires such as the UEQ+ are a suitable instrument for measuring user experience quantitatively; the UEQ+ allows product-specific questionnaires to be assembled from the scales it provides. It is noticeable, however, that some UX factors are more relevant for particular product categories than for others. This work shows which factors are important in which concrete use case. To this end, two studies from 2016 and 2017 were analysed. A correlation analysis and t-tests showed that the results of both studies are robust enough to be combined into a single data set. From this data set, a global relevance threshold was derived: a factor that exceeds it can be considered relevant for a product. The results show that the assumed differences in importance could indeed be demonstrated. This should be taken into account in product development, for example to identify a product's strengths and weaknesses.
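As a rough illustration of the robustness check described above (correlating per-factor results of two studies and testing for mean differences before pooling), the sketch below uses scipy. All values are hypothetical placeholders, not the 2016/2017 data, and the pooling criterion is only an assumed example.

```python
# Hedged sketch: check agreement between two studies' per-factor importance
# means before pooling them into one data set. Values are hypothetical.

import numpy as np
from scipy import stats

# Hypothetical mean importance ratings, one value per UX factor.
study_2016 = np.array([2.1, 1.8, 1.5, 2.4, 1.2, 1.9])
study_2017 = np.array([2.0, 1.9, 1.4, 2.3, 1.3, 1.8])

r, p_corr = stats.pearsonr(study_2016, study_2017)    # agreement between studies
t, p_ttest = stats.ttest_ind(study_2016, study_2017)  # difference in overall level

print(f"Pearson r = {r:.2f} (p = {p_corr:.3f})")
print(f"t = {t:.2f} (p = {p_ttest:.3f})")

if r > 0.8 and p_ttest > 0.05:                        # assumed pooling criterion
    pooled = np.concatenate([study_2016, study_2017])
    relevance_threshold = pooled.mean()               # one possible global cut-off
    print(f"Pooling justified; relevance threshold = {relevance_threshold:.2f}")
```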
As collaborative technologies become integral in both professional and leisure settings, especially with the rise of remote work and digital communities due to COVID-19, understanding user experience (UX) factors is critical. This study explores the differential importance of these UX factors across professional and leisure contexts, leveraging the widespread use of collaboration tools for an in-depth analysis. The objective is to identify and assess key UX factors in collaboration tools and to quantify their differential impact in professional and leisure settings. Our research underscores the nuanced role of context in evaluating the importance of UX factors in collaboration tools, with significant variances observed across professional and leisure settings. While some UX factors, including accessibility, clarity, and intuitive use, maintained universal importance across contexts and tools, others, specifically dependability and efficiency, contradicted assumptions of being universal "hygiene factors", demonstrating the complexity of UX evaluations. This complexity necessitates a differentiated approach for each context and type of collaboration tool, challenging the possibility of a single evaluation or statement.
A Benchmark for the UEQ+ Framework: Construction of a Simple Tool to Quickly Interpret UEQ+ KPIs
(2024)
Questionnaires are a highly efficient method to compare the user experience (UX) of different interactive products or versions of a single product. Concretely, they allow us to evaluate the UX easily and to compare different products with a numeric UX score. Often, however, only one UX score from a single evaluated product is available. Without a comparison to other measurements, it is difficult to interpret an individual score, e.g., to decide whether a product's UX is good enough to compete in the market. Many questionnaires offer benchmarks to support researchers in these cases. A benchmark is the result of a larger set of product evaluations performed with the same questionnaire. The score obtained from a single product evaluation can be compared to the scores in this benchmark data set to quickly interpret the results. In this paper, the first benchmark for the UEQ+ (User Experience Questionnaire +) is presented, created from 3,290 UEQ+ responses for 26 successful software products. The UEQ+ is a modular framework that contains a large number of validated user experience scales that can be combined to form a UX questionnaire. Until now, no benchmark has been available for this framework, making the benchmark constructed in this paper a valuable interpretation tool for UEQ+ questionnaires.
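To illustrate how such a benchmark is used in practice, the sketch below maps a single scale mean onto benchmark categories. The category names and boundary values are invented for illustration and are not the UEQ+ benchmark published in the paper.

```python
# Illustrative benchmark lookup: compare a single UEQ+ scale mean against
# category boundaries derived from a benchmark data set. Boundaries here are
# made up for illustration only.

HYPOTHETICAL_BENCHMARK = {
    # category: lower bound of the scale mean (ordered from best to worst)
    "excellent":     1.75,
    "good":          1.50,
    "above average": 1.00,
    "below average": 0.60,
}

def interpret(scale_mean: float) -> str:
    """Map a scale mean onto the first benchmark category it reaches."""
    for category, lower_bound in HYPOTHETICAL_BENCHMARK.items():
        if scale_mean >= lower_bound:
            return category
    return "bad"

print(interpret(1.62))  # -> 'good' under these assumed boundaries
```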
Questionnaires are a popular method to measure user experience (UX). These UX questionnaires cover different UX aspects with their scales. However, UX includes a huge number of semantically different aspects of a user's interaction with a product, so it is practically impossible to cover all of them in a single evaluation study. A researcher must select those UX aspects that are most important to the users of the product under investigation. Some papers have examined which UX aspects are important for specific product categories: participants in these studies rated the importance of UX aspects for different product categories, each described by a category name and several example products. In principle, the results of these studies can be used to indicate which UX aspects should be measured for a particular product in the corresponding product category. This is especially useful for modular frameworks, e.g., the UEQ+, that allow researchers to create a questionnaire by selecting the relevant scales from a catalog of predefined scales. In this paper, it is investigated how accurate the UX aspect suggestions derived from category-level studies are for individual products. The results show that the importance of a UX aspect predicted from its category is fairly precise.
Collaboration tools are heavily used in work, education, and leisure. Yet what makes a good collaboration tool is not well researched. This study focuses on what users expect of collaboration tools by investigating how they are used and which UX aspects are important to users when using them. In a survey, 184 participants described their use of collaboration tools and then rated the importance of 19 given UX aspects in their specific scenario. The results show that seven UX aspects are almost universally seen as most important, and five further aspects appear to be especially relevant in specific usage domains. This indicates that the context of use, especially the usage domain, influences which UX aspects are important to users. Organisations can use these results as a guideline when selecting a collaboration tool suitable for their members, so that the tool can be adopted successfully.