Expertise
Elmira van den Broek is an Assistant Professor at the Stockholm School of Economics, House of Innovation. She holds a Ph.D. from Vrije Universiteit, KIN Center for Digital Innovation with a focus on artificial intelligence (AI) and work. During her doctoral studies, she was a visiting scholar at New York University, Stern School of Business.
Her research lies at the intersection of technology, work, and organizations. Specifically, she explores the implications of emerging technologies such as AI for work and organizing. She primarily uses qualitative, ethnographic methods and a practice approach to understand how new technologies shape knowledge work, ethical decision-making, and occupations in organizations.
Her work has been published in the Journal of Management Studies, MIS Quarterly, Information and Organization, and the Journal of Management Inquiry, and has received various awards, including the 2023 Grigor McClelland Award for Best Doctoral Dissertation and Best Paper awards at the Annual Meeting of the Academy of Management, the EGOS Colloquium, and the International Conference on Information Systems. To extend the impact of her work beyond academia, Elmira regularly translates research insights for practitioner audiences.
Highlights - Output
Journal article
Published 2025-09
Information and Organization, 35, 3, 100584
The rise of data-driven artificial intelligence (AI) technologies has sparked intense debates about their implications for work. These discussions often portray AI as an agentic force that turns data into knowledge and ultimately, “better” decisions, casting shadows over the labor that sustains and supports these technologies. This paper argues that to develop a grounded understanding of how AI contributes to transformations in the workplace, we must unpack AI at work, that is, how algorithms are shaped by, and in turn, shape everyday work practices. Building on a longstanding tradition of research that examines the interplay between technology and work, this study foregrounds three types of work that gain renewed significance in the context of AI: data work, knowledge work, and values work. Drawing on the empirical example of hiring, this study illustrates how these forms of work are critical not only for understanding how AI technologies are brought to life but also for recognizing deeper, often unforeseen changes in the workplace. By surfacing the hidden, interrelated, and ever-evolving nature of work for AI, the AI at work lens put forward in this study offers critical implications for information systems and organizational research, as well as practical insights for practitioners, policymakers, and regulators.
• Dominant AI narratives obscure the human labor that shapes and sustains these technologies.
• Proposes an AI at work lens to unpack how algorithms are shaped by, and in turn, shape everyday work practices.
• Data work, knowledge work, and values work gain new contours and significance in the context of AI.
• An AI at work lens reveals the multifaceted, interrelated, and constantly evolving nature of work with AI.
Journal article
First online publication 2025-08-24
Journal of Management Studies
As predictive artificial intelligence (AI) technologies increasingly steer workplace decisions, debates around fairness have intensified. Existing research often approaches fairness either as a set of universal principles supported or undermined by algorithms, or as a product of social interpretations, thereby providing either technologically deterministic or purely social accounts. Drawing on an ethnographic study of a human resources (HR) department of a large international company that introduced AI in hiring, this study offers an alternative view that shifts focus to how fairness emerges through the ways people define, embed, and perform values with algorithms. Taking a sociomaterial perspective, we find that the introduction and use of AI resulted in crowding out expert practices of performing fairness, favouring instead the version performed by HR. Our process model explains this outcome by the growing symbiosis between HR's professional mandate for fairness and AI procedures, where each legitimizes, shapes, and protects the other over time. This study thus shows that fairness is not pre-given but constantly redefined and enacted through evolving associations between professional mandates and AI technologies.
Journal article
The Future of Research in an Artificial Intelligence-Driven World
Published 2024-07
Journal of Management Inquiry, 33, 3, 207 - 229
Current and future developments in artificial intelligence (AI) systems have the capacity to revolutionize the research process for better or worse. On the one hand, AI systems can serve as collaborators as they help streamline and conduct our research. On the other hand, such systems can also become our adversaries when they impoverish our ability to learn as theorists, or when they lead us astray through inaccurate, biased, or fake information. No matter which angle is considered, and whether we like it or not, AI systems are here to stay. In this curated discussion, we raise questions about human centrality and agency in the research process, and about the multiple philosophical and practical challenges we are facing now and ones we will face in the future.
Journal article
When the Machine Meets the Expert
Published 2021
MIS Quarterly, 45, 3, 1557 - 1580
The introduction of machine learning (ML) in organizations comes with the claim that algorithms will produce insights superior to those of experts by discovering the "truth" from data. Such a claim gives rise to a tension between the need to produce knowledge independent of domain experts and the need to remain relevant to the domain the system serves. This two-year ethnographic study focuses on how developers managed this tension when building an ML system to support the process of hiring job candidates at a large international organization. Despite the initial goal of getting domain experts "out of the loop," we found that developers and experts arrived at a new hybrid practice that relied on a combination of ML and domain expertise. We explain this outcome as resulting from a process of mutual learning in which deep engagement with the technology triggered actors to reflect on how they produced knowledge. These reflections prompted the developers to iterate between excluding domain expertise from the ML system and including it. Contrary to common views that imply an opposition between ML and domain expertise, our study foregrounds their interdependence and as such shows the dialectic nature of developing ML. We discuss the theoretical implications of these findings for the literature on information technologies and knowledge work, information system development and implementation, and human-ML hybrids.