Output list
Journal article
Published 2026-02
Journal of Retailing and Consumer Services, 89, 104638
Consumers often ask service providers for predictions about future events in service processes, and the current study examines reactions to service agents' predictions about when an event will happen. More specifically, since it has been argued that non-human service agents (such as chatbots) will become increasingly common, the study assesses if service agent identity (human vs. AI-powered chatbot) influences receivers' reactions to the agent's when-predictions. This identity aspect has so far not been addressed in the service literature on reducing uncertainty about future events. Two between-subjects experiments, in which agent identity was manipulated in a flight delay context, were conducted for this assessment. The main result was that a human service agent generated a more positive evaluation of a when-prediction than a non-human agent. The study also provides explanations for this result in terms of underlying mechanisms: the impact of agent identity on the evaluation of a prediction was found to be (serially) mediated by attribution of mind to the agent and the agent's perceived prediction skills. Conceptually, theory of mind (the capability to understand others' minds) is highly needed when it comes to predicting future events – particularly in a service system in which several human minds are involved. The main result implies that firms' responses to customers' requests for when-predictions would benefit if the predictions were delivered by a human service agent rather than a non-human service agent.
Journal article
First online publication 2025-10-25
Services Marketing Quarterly, 1 - 22
This study examines morality issues with respect to one understudied role of service robots: the service robot as a moral judge. A main tenet is that robots' moral judgments of human behavior invite us humans to compare such judgments with our own judgments, and it is hypothesized that the resulting level of human-to-robot similarity influences evaluations of robots. This hypothesis was confirmed in an experiment. The results also show that the impact of human-to-robot similarity on robot evaluations was mediated by attribution of morality to the robot and trust. The net result was that human-to-robot similarity boosted the evaluation of the robot.
Journal article
Published 2025-10
The International Journal of Tourism Research, 27, 5, e70139
While existing research on happiness in tourism has primarily focused on short‐term, traditional contexts (e.g., leisure and recreation), little is known about the sources of happiness in long‐term, extended tourism contexts. Grounded in a phenomenological approach, this study explores the sources of happiness throughout a study‐abroad exchange journey (i.e., educational tourism). Data were collected across three stages of the study‐abroad exchange journey using semi‐structured interviews, scale data, reflection diaries, and short videos. Using longitudinal interpretative phenomenological analysis, the findings reveal nine sources of happiness. These are integrated into an experiencescape framework comprising social, cultural, natural, sensorial, functional, and psychological components. The findings enrich subjective well‐being theory by providing an understanding of the sources of happiness and their evolving meanings throughout the study‐abroad exchange journey. The study offers actionable insights for tourism practitioners and policymakers seeking to enhance happiness in educational tourism.
Journal article
Receiving Employee Attention on the Floor of the Store and Its Effects on Customer Satisfaction
Published 2025-07
Journal of Consumer Behaviour, 24, 4, 1656 - 1668
This study examines the effects of the customer's perceptions of being the object of employee attention in physical store settings. Our specific concern is employee attention when the customer is browsing in a store and does not require any particular service from employees. A main thesis is that we humans are highly sensitive to others' attention (and inattention) in social settings and that others' attention can influence our sense of importance from a meaningfulness-in-life point of view. Three experiments in which employee attention was manipulated (low vs. high) confirm this: high employee attention enhanced a sense of importance, which mediated the impact of employee attention on customer satisfaction. The net result, in each experiment, was a higher level of customer satisfaction under the condition of high employee attention.
Journal article
Self-Promotion by Non-Human Service Agents: An Examination of the Impact on Customer Satisfaction
Published 2025
Services Marketing Quarterly, 46, 3-4, 113 - 137
This study examines one specific form of presentation content for digital service agents, self-promotion. Self-promotion occurs frequently among humans in various settings, and it is likely to be richly represented in the training material for digital agents. Two experiments, in which customer satisfaction was the dependent variable, were conducted to manipulate a digital agent's level of self-promotion (relatively low vs. relatively high) in initial service encounters with potential users. The results show that a relatively high level of self-promotion attenuated customer satisfaction, and that the agent's perceived self-focus, warmth, and competence mediated this negative impact.
Journal article
Published 2024-11
Journal of Retailing and Consumer Services, 81, 103964
A frequently made assumption – supported in a large number of empirical studies – is that customer satisfaction stemming from a service encounter influences the customer's subsequent word-of-mouth activities. The present study re-examines this association with respect to both human service employees and service robots (which are expected to become more common in service encounters in the near future). First, it is assumed that the customer's attribution of theory of mind to a service agent is an important source of information for the formation of a satisfaction assessment. Indeed, it is assumed that the agent's theory of mind is a prerequisite for understanding the customer's needs. Second, in contrast to many existing studies, word-of-mouth is captured in terms of the valence of what customers actually say (as opposed to various forms of intentions to engage in word-of-mouth, which represent a dominant contemporary operationalization of word-of-mouth). A between-subjects experiment was conducted in which a service agent's identity (service robot vs. human) and service performance (poor vs. good) were the manipulated factors. The results show that both these factors influenced attribution of theory of mind to the agent, and that attribution of theory of mind enhanced customer satisfaction. The results also show that customer satisfaction affected word-of-mouth content in a valence-congruent way.
Journal article
Published 2024-06
Technology in Society, 77, 102560
When we need service, we will soon be interacting with various non-human AI-powered agents. In the first phase of a transformation from human-to-human to human-to-robot service encounters, it can also be expected that many of us will share the same robot in multi-party settings in which several users are present at the same time. This setting is particularly challenging for a service robot when users have conflicting demands for what the robot should do. And conflicts are ubiquitous in human behavior. The present study examines this understudied situation with an experimental approach: a service robot's ability to detect inter-user conflicts was manipulated (low vs. high) in a domestic setting (a kitchen). The results show that a service robot with a high conflict-detection ability boosted (1) the perceived usefulness of the robot and (2) overall robot evaluations.
Journal article
Published 2024
International Review of Retail, Distribution and Consumer Research, 34, 2, 228 - 250
Virtual agents (VAs) used by retailers for online contacts with customers are becoming increasingly common. So far, however, many of them display relatively poor performance in conversations with users – and this is expected to continue for some time to come. The present study examines one aspect of conversations between VAs and humans, namely what happens when a VA openly discloses its knowledge gaps versus when it attempts to conceal them in a setting in which it cannot answer user questions. A between-subjects experiment with a manipulated VA, and with perceived service quality as the main dependent variable, shows that a display of a high level of ability to answer user questions boosts perceived service quality. The study also offers explanations of this outcome in terms of mediating variables (perceived VA competence, openness to disclose own knowledge limits, usefulness, and learning-related benefits).
Journal article
Published 2023-11
Technological Forecasting & Social Change, 196, 122831
Service robots are expected to become increasingly common. As their capabilities become more advanced, it is also expected that they will be involved in tasks for which a human user would want to know why they do what they are doing. One way to accomplish this is to program robots so that they verbalize (i.e., think aloud) while they are providing service. This ability is likely to be particularly useful for tasks that involve behavioral norms. The present study used an experimental design to manipulate the level of a robot's ability to verbalize motivations for its behavior (low vs. high) while it was asked by a human to carry out a task with moral implications. The results show that robot verbalizing contributed positively to satisfaction with the robot's performance, and that this impact was mediated by understandability, perceived morality and intellectual stimulation.
Journal article
Service robots and artificial morality: An examination of robot behavior that violates human privacy
Published 2023-07-19
Journal of Service Theory and Practice, 33, 7, 52 - 72
Purpose: Service robots are expected to become increasingly common, but the ways in which they can move around in an environment with humans, collect and store data about humans and share such data produce a potential for privacy violations. In human-to-human contexts, such violations are transgressions of norms to which humans typically react negatively. This study examines if similar reactions occur when the transgressor is a robot. The main dependent variable was the overall evaluation of the robot.
Design/methodology/approach: Service robot privacy violations were manipulated in a between-subjects experiment in which a human user interacted with an embodied humanoid robot in an office environment.
Findings: The results show that the robot's violations of human privacy attenuated the overall evaluation of the robot and that this effect was sequentially mediated by perceived robot morality and perceived robot humanness. Given that a similar reaction pattern would be expected when humans violate other humans' privacy, the present study offers evidence in support of the notion that humanlike non-humans can elicit responses similar to those elicited by real humans.
Practical implications: The results imply that designers of service robots and managers in firms using such robots for providing service to employees should be concerned with restricting the potential for robots' privacy violation activities if the goal is to increase the acceptance of service robots in the habitat of humans.
Originality/value: To date, few empirical studies have examined reactions to service robots that violate privacy norms.