Output list
Conference paper
Published 2024
Proceedings of the ... Annual Hawaii International Conference on System Sciences, 5754–5763
Although trust has been identified as critical for successfully integrating Artificial Intelligence (AI) into organizations, we know little about trust in AI within the organizational context and even less about distrust. In this paper, we investigate how distrust in AI unfolds in the organizational setting. We draw on a longitudinal case study in which we follow a data analytics team assigned to develop numerous AI algorithms for an organization striving to become AI-driven. Using the principles of grounded theory, our research reveals that distinct organizational dynamics shape distrust in AI. From this, we develop three key insights. First, we reveal that distrust in AI is situated and involves both social and technical trust referents. Second, we show that when a trust referent is rendered partly invisible to the trustor, distrust is misattributed. Lastly, we show how distrust is transferred between social and technical trust referents. We contribute to the growing literature on integrating AI in organizations by articulating a broader and richer understanding of distrust in AI. We present a model of distrust transference actuated by social and technical trust referents. We also contribute to the literature on trust by showing how AI artifacts are implicated in trust relations within organizations.
Journal article
Getting AI Implementation Right: Insights from a Global Survey
Published 2023-11
California Management Review, 66, 1, 5–22
While the promise of artificial intelligence (AI) is pervasive, many companies struggle with AI implementation challenges. This article presents results from a survey of 2,525 decision-makers with AI experience in China, Germany, India, the United Kingdom, and the United States, as well as interviews with 16 AI implementation experts, to understand the challenges companies face when implementing AI. The study covers technological, organizational, and cultural factors and identifies key challenges and solutions for AI implementation. This article develops a diagnostic framework to help executives navigate AI challenges as companies gain momentum, manage organization-wide complexities, and curate a network of partners, algorithms, and data sources to create value through AI.
Journal article
Blinded by the person? Experimental evidence from idea evaluation
Published 2023-10
Strategic Management Journal, 44, 10, 2443–2459
Research Summary: Seeking causal evidence on biases in idea evaluation, we conducted a field experiment in a large multinational company with two conditions: (a) blind evaluation, in which managers received no information about the proposer, and (b) non-blind evaluation, in which they received the proposer's name, unit, and location. To our surprise, and in contrast to the preregistered hypotheses, we found no biases against women or against proposers from different units and locations that blinding could ameliorate. Addressing challenges that remained intractable in the field experiment, we conducted an online experiment, which replicated the null findings. A final vignette study showed that people overestimated the magnitude of the biases. The studies suggest that idea evaluation can be less prone to biases than previously assumed and that evaluators separate ideas from proposers.

Managerial Summary: We wanted to find out whether there were biases in the way managers evaluate ideas from their employees. We ran a field experiment in a large multinational technology company in which we tested two ways of evaluating ideas: one where managers knew nothing about the person who proposed the idea, and one where they knew the person's name, unit, and location. The results were surprising: we found no bias against women or against employees who did not work in the same location and unit as the evaluator. Managers are advised that hiding the identity of idea proposers from idea evaluators may not be a silver bullet for improving idea evaluation.
Dissertation
Published 2023
Artificial Intelligence (AI) is advancing its position in organizations by performing tasks historically perceived as exclusive to humans. As AI becomes more commonplace in the work environment, there is an increasing need to understand the implications of integrating AI into the fabric of organizations. This thesis investigates how AI-related dynamics are manifested in organizations and how they affect employee trust and distrust in AI. The thesis consists of three articles, each based on a unique data set: a multinational survey, a longitudinal case study, and a field experiment. Together, the articles show that AI will not only induce continuous transformation of the organization but can also generate persistent uncertainty among employees. Such uncertainty can then be amplified by employees' and managers' limited understanding of AI functionality, resulting in distrust. The results also show that human biases, a challenge AI is expected to address, manifest differently than commonly believed. Such misconceptions can create unrealistic expectations of AI, further contributing to employee uncertainty. The thesis ends with a call to increase our general knowledge of AI and its reliance on organizational data, followed by three suggestions for future research on the implications of integrating AI in organizations.
Dataset
Dahlander et al. (2023) - Blinded by the Person
Published 2023
This is the data for the article "Blinded by the Person? Experimental Evidence from Idea Evaluation" (Dahlander et al., 2023). It includes data and code for the online experiment and the vignette study. We cannot share the data from the field experiment because they are proprietary.
Conference paper
Published 2020
EGOS Colloquium, 2020-07-02–2020-07-04, Hamburg, Germany