
User Trust and Understanding of Explainable AI: Exploring Algorithm Visualisations and User Biases

Conference paper, published in: Human-Computer Interaction. Human Values and Quality of Life (HCII 2020)

Abstract

Artificial intelligence (AI) is increasingly being integrated into many areas of our lives. AI has the potential to increase productivity and reduce the workload of staff in high-pressure jobs such as healthcare. However, most AI healthcare tools have failed. For AI to be effective, it is vital that users can understand how the system is processing data. Explainable AI (XAI) moves away from the traditional 'black box' approach, aiming to make the processes behind the system more transparent. This experimental study uses real healthcare data, and combines computer science and psychological approaches, to investigate user trust in, and understanding of, three popular XAI algorithms (Decision Trees, Logistic Regression and Neural Networks). The results question the contribution of understanding towards user trust, suggesting that understanding and explainability are not the only factors contributing to trust in AI. Users also show biases in trust and understanding, with a particular bias towards malignant results. This raises important questions about how humans can be encouraged to make more accurate judgements when using XAI systems. These findings have implications for ethics, future XAI design, healthcare and further research.
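As an illustration of the three model types compared in the study, the sketch below trains a decision tree, a logistic regression and a small neural network on the scikit-learn copy of the Wisconsin Breast Cancer dataset (the dataset the paper draws on). This is not the authors' code; the hyperparameters (tree depth, hidden-layer size, train/test split) are illustrative assumptions.

```python
# Illustrative sketch only: the three XAI model types from the study,
# trained on the scikit-learn Wisconsin Breast Cancer dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# A shallow tree keeps the decision path human-readable, in the spirit
# of an explainable visualisation; the other two models are scaled
# pipelines so training converges reliably.
models = {
    "Decision Tree": DecisionTreeClassifier(max_depth=3, random_state=0),
    "Logistic Regression": make_pipeline(
        StandardScaler(), LogisticRegression(max_iter=1000)),
    "Neural Network": make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(30,), max_iter=1000,
                      random_state=0)),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.2f}")
```

All three classifiers reach high accuracy on this dataset, which is part of why the study can focus on how users perceive their explanations rather than on raw predictive performance.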



Author information

Correspondence to Dawn Branley-Bell.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Branley-Bell, D., Whitworth, R., Coventry, L. (2020). User Trust and Understanding of Explainable AI: Exploring Algorithm Visualisations and User Biases. In: Kurosu, M. (ed.) Human-Computer Interaction. Human Values and Quality of Life. HCII 2020. Lecture Notes in Computer Science, vol. 12183. Springer, Cham. https://doi.org/10.1007/978-3-030-49065-2_27


  • DOI: https://doi.org/10.1007/978-3-030-49065-2_27


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-49064-5

  • Online ISBN: 978-3-030-49065-2

