RP22 - Digital sovereignty of medical practitioners

A crucial prerequisite for any technological development to thrive in a clinical environment is medical practitioners' willingness to engage with the technology and to consider its advice and support. Research on algorithm aversion and algorithm appreciation [1], [2], however, shows that, whether due to prior attitudes or to negative experiences, users may trust AI-based support too little, or too much. Against this background, the current project explores how the digital sovereignty of medical practitioners can be fostered so that they are empowered to better judge when to trust AI-generated advice and when not to. Theoretically, the work is grounded in assumptions on calibrated trust [3], on algorithm literacy and explainable AI [4], and on the balance of understanding and trust [5]. By means of qualitative interviews, subsequent quantitative surveys, and experimental designs, it will be scrutinized which factors influence whether medical practitioners' trust matches the extent to which trust in a specific system is actually warranted, and how digital sovereignty can be developed.

[1] Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126. https://doi.org/10.1037/xge0000033

[2] Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103. https://doi.org/10.1016/j.obhdp.2018.12.005

[3] Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392

[4] Leichtmann, B., Humer, C., Hinterreiter, A., Streit, M., & Mara, M. (2023). Effects of explainable artificial intelligence on trust and human behavior in a high-risk decision task. Computers in Human Behavior, 139, 107539. https://doi.org/10.1016/j.chb.2022.107539

[5] Krämer, N., Wischnewski, M., & Müller, E. (2023, May 7). Interacting with autonomous systems and intelligent algorithms – new theoretical considerations on the relation of understanding and trust. Preprint. https://doi.org/10.31234/osf.io/h32ze