
CBR-LIME: A Case-Based Reasoning Approach to Provide Specific Local Interpretable Model-Agnostic Explanations

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12311)

Abstract

Research on eXplainable AI has proposed several model-agnostic algorithms, LIME [14] (Local Interpretable Model-Agnostic Explanations) being one of the most popular. LIME works by perturbing the query input locally: instead of trying to explain the entire model, the specific input instance is modified, and the impact on the predictions is monitored and used to build the explanation. Although LIME is general and flexible, there are scenarios where simple perturbations are not enough, which has motivated approaches such as Anchor [15], where the perturbation strategy depends on the dataset. In this paper, we propose a CBR solution to the problem of configuring the parameters of the LIME algorithm for the explanation of an image classifier. The case base reflects the human perception of the quality of the explanations generated with different parameter configurations of LIME. This parameter configuration is then reused for similar input images.
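The retrieve-and-reuse idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the case base, the image descriptors, and the LIME parameter names (`num_samples`, `num_segments`, `kernel_width`) are all hypothetical placeholders standing in for whatever representation and configuration space the authors actually use.

```python
import numpy as np

# Hypothetical case base: each case pairs an image descriptor (here, a toy
# 3-bin colour histogram) with the LIME parameter configuration that users
# judged to produce the best explanation for that image. All values are
# illustrative, not taken from the paper.
CASE_BASE = [
    (np.array([0.9, 0.1, 0.0]),
     {"num_samples": 1000, "num_segments": 50, "kernel_width": 0.25}),
    (np.array([0.2, 0.7, 0.1]),
     {"num_samples": 500, "num_segments": 150, "kernel_width": 0.50}),
    (np.array([0.1, 0.2, 0.7]),
     {"num_samples": 2000, "num_segments": 80, "kernel_width": 0.75}),
]

def retrieve_lime_params(query_descriptor):
    """Retrieve the nearest case (1-NN, Euclidean) and reuse its LIME config."""
    dists = [np.linalg.norm(query_descriptor - desc) for desc, _ in CASE_BASE]
    nearest = int(np.argmin(dists))
    return CASE_BASE[nearest][1]

# A query image whose descriptor is closest to the first case
params = retrieve_lime_params(np.array([0.85, 0.15, 0.0]))
print(params["num_segments"])  # reuses the first case's configuration → 50
```

The retrieved configuration would then be passed to LIME when explaining the query image; the paper's footnote indicates 3-NN retrieval was used in the experiments, whereas this sketch keeps 1-NN for brevity.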

Supported by the Spanish Committee of Economy and Competitiveness (TIN2017-87330-R) and UCM Research Group 921330.


Notes

  1. https://github.com/marcotcr/lime.

  2. Explanations were generated using 3-NN, as there are no significant changes with other k values.

References

  1. Breiman, L.: Random forests. Mach. Learn. 45(1), 5–32 (2001). https://doi.org/10.1023/A:1010933404324


  2. Doyle, D., Cunningham, P., Bridge, D.G., Rahman, Y.: Explanation oriented retrieval. In: Funk, P., González-Calero, P.A. (eds.) Advances in Case-Based Reasoning, ECCBR 2004. Lecture Notes in Computer Science, vol. 3155, pp. 157–168. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-28631-8_13

  3. Friedman, J.H.: Greedy function approximation: A gradient boosting machine. Ann. Statist. 29(5), 1189–1232 (2001). https://doi.org/10.1214/aos/1013203451

  4. Gates, L., Kisby, C., Leake, D.: CBR confidence as a basis for confidence in black box systems. In: Bach, K., Marling, C. (eds.) Case-Based Reasoning Research and Development, ICCBR 2019. Lecture Notes in Computer Science, vol. 11680, pp. 95–109. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-29249-2_7

  5. Goldstein, A., Kapelner, A., Bleich, J., Pitkin, E.: Peeking inside the black box: visualizing statistical learning with plots of individual conditional expectation. J. Computat. Graph. Stat. 24(1), 44–65 (2015). https://doi.org/10.1080/10618600.2014.907095


  6. Keane, M.T., Kenny, E.M.: How case-based reasoning explains neural networks: A theoretical analysis of XAI using post-hoc explanation-by-example from a survey of ANN-CBR twin-systems. In: Bach, K., Marling, C. (eds.) Case-Based Reasoning Research and Development, ICCBR 2019. Lecture Notes in Computer Science, vol. 11680, pp. 155–171. Springer, Heidelberg (2019). https://doi.org/10.1007/978-3-030-29249-2_11

  7. Krishna, R., et al.: Visual genome: connecting language and vision using crowdsourced dense image annotations (2016). https://arxiv.org/abs/1602.07332

  8. Leake, D.B., McSherry, D.: Introduction to the special issue on explanation in case-based reasoning. Artif. Intell. Rev. 24(2), 103–108 (2005). https://doi.org/10.1007/s10462-005-4606-8


  9. Li, O., Liu, H., Chen, C., Rudin, C.: Deep learning for case-based reasoning through prototypes: a neural network that explains its predictions. In: McIlraith, S.A., Weinberger, K.Q. (eds.) Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, AAAI-18. pp. 3530–3537. AAAI Press (2018)


  10. Lipton, Z.C.: The mythos of model interpretability. Commun. ACM 61(10), 36–43 (2018). https://doi.org/10.1145/3233231


  11. Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. In: Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.) Advances in neural information processing systems, 30, pp. 4765–4774. Curran Associates, Inc. (2017)


  12. Miller, T.: Explanation in artificial intelligence: Insights from the social sciences. CoRR abs/1706.07269 (2017). http://arxiv.org/abs/1706.07269

  13. Molnar, C.: Interpretable Machine Learning (2019). https://christophm.github.io/interpretable-ml-book/

  14. Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?”: explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144. Association for Computing Machinery, New York, NY, USA (2016). https://doi.org/10.1145/2939672.2939778

  15. Ribeiro, M.T., Singh, S., Guestrin, C.: Anchors: High-precision model-agnostic explanations. In: McIlraith, S.A., Weinberger, K.Q. (eds.) Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, AAAI-2018, pp. 1527–1535. AAAI Press (2018). https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16982

  16. Roth-Berghofer, T., Richter, M.M.: On explanation. Künstliche Intelligenz KI 22(2), 5–7 (2008)


  17. Sanchez-Ruiz, A.A., Ontanon, S.: Structural plan similarity based on refinements in the space of partial plans. Comput. Intell. 33(4), 926–947 (2017). https://doi.org/10.1111/coin.12131

  18. Sheikh, H.R., Sabir, M.F., Bovik, A.C.: A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans. Image Process. 15(11), 3440–3451 (2006). https://doi.org/10.1109/TIP.2006.881959


  19. Sørmo, F., Cassens, J., Aamodt, A.: Explanation in case-based reasoning-perspectives and goals. Artif. Intell. Rev. 24(2), 109–143 (2005). https://doi.org/10.1007/s10462-005-4607-7


  20. Suju, D.A., Jose, H.: FLANN: fast approximate nearest neighbour search algorithm for elucidating human-wildlife conflicts in forest areas. In: 2017 Fourth International Conference on Signal Processing, Communication and Networking (ICSCN), pp. 1–6, March 2017. https://doi.org/10.1109/ICSCN.2017.8085676

  21. Szegedy, C., et al.: Going deeper with convolutions. In: Computer Vision and Pattern Recognition (CVPR) (2015). http://arxiv.org/abs/1409.4842

  22. Vedaldi, A., Soatto, S.: Quick shift and kernel methods for mode seeking. In: Forsyth, D., Torr, P., Zisserman, A. (eds.) Computer Vision - ECCV 2008, pp. 705–718. Springer, Heidelberg (2008)


  23. Weber, R.O., Johs, A.J., Li, J., Huang, K.: Investigating textual case-based XAI. In: Cox, M.T., Funk, P., Begum, S. (eds.) Case-Based Reasoning Research and Development, ICCBR 2018, Proceedings. Lecture Notes in Computer Science, vol. 11156, pp. 431–447. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01081-2_29

  24. Weld, D.S., Bansal, G.: The challenge of crafting intelligible intelligence. Commun. ACM 62(6), 70–79 (2019). https://doi.org/10.1145/3282486


  25. Zhou, W., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13(4), 600–612 (2004). https://doi.org/10.1109/TIP.2003.819861


Author information


Corresponding author

Correspondence to Juan A. Recio-García.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Recio-García, J.A., Díaz-Agudo, B., Pino-Castilla, V. (2020). CBR-LIME: A Case-Based Reasoning Approach to Provide Specific Local Interpretable Model-Agnostic Explanations. In: Watson, I., Weber, R. (eds) Case-Based Reasoning Research and Development. ICCBR 2020. Lecture Notes in Computer Science, vol 12311. Springer, Cham. https://doi.org/10.1007/978-3-030-58342-2_12


  • DOI: https://doi.org/10.1007/978-3-030-58342-2_12


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58341-5

  • Online ISBN: 978-3-030-58342-2

  • eBook Packages: Computer Science (R0)
