
Explaining Machine Learning Decisions

Published online by Cambridge University Press: 31 January 2022

John Zerilli
Affiliation:
University of Oxford, Oxford, UK

Abstract

The operations of deep networks are widely acknowledged to be inscrutable. The growing field of Explainable AI (XAI) has emerged in direct response to this problem. However, owing to the nature of the opacity in question, XAI has been forced to prioritise interpretability at the expense of completeness, and even realism, so that its explanations are frequently interpretable without being underpinned by more comprehensive explanations faithful to the way a network computes its predictions. While this has been taken to be a shortcoming of the field of XAI, I argue that it is broadly the right approach to the problem.
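To see the trade-off concretely, consider a LIME-style local surrogate, one of the XAI techniques at issue. The sketch below is purely illustrative rather than the author's method: assuming scikit-learn and NumPy, it probes an opaque classifier in the neighbourhood of a single input and fits a small linear model whose coefficients are readable, yet describe only a local approximation rather than the way the network actually computes its prediction.

```python
# A minimal, illustrative LIME-style local surrogate (assumes scikit-learn
# and NumPy; all names here are hypothetical, not a canonical implementation).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Stand-in for an opaque deep network.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                          random_state=0).fit(X, y)

# Probe the model's behaviour in the neighbourhood of one instance.
x0 = X[0]
neighbours = x0 + rng.normal(scale=0.3, size=(1000, x0.size))
probs = black_box.predict_proba(neighbours)[:, 1]

# Fit an interpretable linear surrogate, weighting samples by proximity to x0.
weights = np.exp(-np.linalg.norm(neighbours - x0, axis=1) ** 2)
surrogate = Ridge(alpha=1.0).fit(neighbours, probs, sample_weight=weights)

# The coefficients are readable ("interpretable"), but they characterise a
# local linear approximation, not the network's actual computation.
for i, coef in enumerate(surrogate.coef_):
    print(f"feature {i}: local effect {coef:+.3f}")
```

The printed coefficients are exactly the sort of explanation the abstract describes: interpretable to a human reader, but not a complete or faithful account of how the underlying model arrives at its prediction.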

Type
Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of the Philosophy of Science Association
