RP24 - Where does the AI look? Influence of convergence of human attention foci and AI attention foci on reliance on a clinical decision support system

In the previous WisPerMed projects, important steps were taken towards a better understanding of how human biases can be overcome with the help of clinical decision support systems [1]. However, the conditions under which humans trust a clinical decision support system and follow its advice still need closer scrutiny. Trust in and acceptance of decisions can be based on a basic understanding of the principles on which an AI system bases its suggestions [2] and on the perception of how close these principles are to one’s own. Specifically, when decisions about correct diagnoses and treatments are made on the basis of images (e.g., in radiology or dermatology), it may be relevant to examine which parts of the image humans look at in order to make decisions, and how their trust in a clinical decision support system is increased or diminished when the system (in the sense of explainable AI) displays which parts of the image are crucial for its suggestion. A series of studies combining eye tracking with experimental designs will test the impact of increased knowledge about the system’s procedure and decision background on user trust and reliance on AI-generated suggestions.

[1] Küper, A., Lodde, G., Livingstone, E., Schadendorf, D., & Krämer, N. (2023). Mitigating cognitive bias with clinical decision support systems: An experimental study. Journal of Decision Systems. https://doi.org/10.1080/12460125.2023.2245215

[2] Krämer, N., & Szczuka, J. (2023). Experiments in human–machine communication research. SAGE Publications Ltd. https://doi.org/10.4135/9781529782783