Output list
Conference paper
Published 2024
Proceedings of the ... Annual Hawaii International Conference on System Sciences, 5754–5763
Although trust has been identified as critical for successfully integrating Artificial Intelligence (AI) into organizations, we know little about trust in AI within the organizational context and even less about distrust. In this paper, we investigate how distrust in AI unfolds in the organizational setting. We draw from a longitudinal case study in which we follow a data analytics team assigned to develop numerous AI algorithms for an organization striving to become AI-driven. Using the principles of grounded theory, our research reveals that different organizational distrust dynamics shape distrust in AI. Thus, we develop three significant insights. First, we reveal that distrust in AI is situated and involves both social and technical trust referents. Second, we show that when a trust referent is rendered partly invisible to the trustor, this leads to the misattribution of distrust. Lastly, we show how distrust is transferred between social and technical trust referents. We contribute to the growing literature on integrating AI in organizations by articulating a broader and richer understanding of distrust in AI. We present a model of distrust transference actuated by social and technical trust referents. We also contribute to the literature on trust, showing how AI artifacts are implicated in trust relations within organizations.
Conference paper
Published 2020
EGOS Colloquium, 2–4 July 2020, Hamburg, Germany