Abstract
As artificial intelligence (AI) systems increasingly make impactful decisions in the workplace, issues of explainability have gained prominence. However, current debates around the explainability of AI either take a technical perspective or focus on the use of AI for augmentation, in which professionals can decide to ignore or override AI outputs when hindered by opacity. Given that current AI tools are increasingly able to act on their own, a deeper understanding is needed of how professionals manage explainability in cases of AI automation. Building on a comparative field study, we identify different practices that professionals enacted to produce post hoc explanations for clients of decisions made by AI tools. These practices varied depending on whether professionals relied on their own expertise or on AI techniques, and on whether they deeply engaged with the AI tool in constructing explanations. Our preliminary findings yield important implications for the literature on AI and professions. © 2023 International Conference on Information Systems, ICIS 2023: "Rising like a Phoenix: Emerging from the Pandemic and Reshaping Hu. All Rights Reserved.