School of Computing and Information Systems, University of Melbourne, Melbourne, Australia. E-mail: tmiller@unimelb.edu.au
The Knowledge Engineering Review, 2021, Volume 36
RESEARCH ARTICLE   Open Access    

Contrastive explanation: a structural-model approach

  • Abstract: This paper presents a model of contrastive explanation using structural causal models. The topic of causal explanation in artificial intelligence has gathered interest in recent years as researchers and practitioners aim to increase trust and understanding of intelligent decision-making. While different sub-fields of artificial intelligence have looked into this problem with a sub-field-specific view, there are few models that aim to capture explanation more generally. One general model is based on structural causal models. It defines an explanation as a fact that, if found to be true, would constitute an actual cause of a specific event. However, research in philosophy and social sciences shows that explanations are contrastive: that is, when people ask for an explanation of an event—the fact—they (sometimes implicitly) are asking for an explanation relative to some contrast case; that is, ‘Why P rather than Q?’. In this paper, we extend the structural causal model approach to define two complementary notions of contrastive explanation, and demonstrate them on two classical problems in artificial intelligence: classification and planning. We believe that this model can help researchers in subfields of artificial intelligence to better understand contrastive explanation.
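The flavour of contrastive query the abstract describes—‘Why P rather than Q?’ posed against a structural causal model—can be sketched with a toy example. This is an illustrative sketch only: the loan-decision model, its variable names, and its rules are assumptions made for the example, not the paper’s formal definitions.

```python
# Toy structural causal model (SCM) illustrating a contrastive query:
# "Why was the loan refused (the fact P) rather than approved (the foil Q)?"
# The model and all variable names are illustrative assumptions,
# not taken from the paper.

def scm(income, debt, interventions=None):
    """Evaluate the structural equations in causal order, optionally
    overriding endogenous variables with interventions (a do-operation)."""
    interventions = interventions or {}
    v = {"income": income, "debt": debt}
    # Structural equations for the endogenous variables.
    v["ratio_ok"] = interventions.get("ratio_ok", v["debt"] < 0.4 * v["income"])
    v["approved"] = interventions.get("approved",
                                      v["ratio_ok"] and v["income"] > 30000)
    return v

# Actual world: the fact P = "not approved".
actual = scm(income=40000, debt=20000)
assert actual["approved"] is False

# Contrastive question: why refused *rather than* approved?
# Intervening on ratio_ok flips the outcome to the foil, so the
# debt-to-income ratio is a difference condition between fact and foil.
counterfactual = scm(income=40000, debt=20000, interventions={"ratio_ok": True})
assert counterfactual["approved"] is True
```

The point of the sketch is only that a contrastive explanation picks out variables whose intervention moves the model from the fact to the foil, rather than listing every cause of the fact.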
  • References
    Akula, A. R., Wang, S. & Zhu, S.-C. 2020. CoCoX: Generating conceptual and counterfactual explanations via fault-lines. In AAAI, 2594–2601.
    Angwin, J., Larson, J., Mattu, S. & Kirchner, L. 2016. Machine bias. ProPublica, May 23.
    Bromberger, S. 1966. Why-questions. In Mind and Cosmos: Essays in Contemporary Science and Philosophy, Colodny, R. G. (ed.). Pittsburgh University Press, 68–111.
    Buchanan, B. & Shortliffe, E. 1984. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Addison-Wesley.
    Chandrasekaran, B., Tanner, M. C. & Josephson, J. R. 1989. Explaining control strategies in problem solving. IEEE Expert 4 (1), 9–15.
    Chin-Parker, S. & Cantelon, J. 2017. Contrastive constraints guide explanation-based category learning. Cognitive Science 41 (6), 1645–1655.
    Dhurandhar, A., Chen, P.-Y., Luss, R., Tu, C.-C., Ting, P., Shanmugam, K. & Das, P. 2018. Explanations based on the missing: Towards contrastive explanations with pertinent negatives. In Advances in Neural Information Processing Systems, 592–603.
    Garfinkel, A. 1981. Forms of Explanation: Rethinking the Questions in Social Theory. Yale University Press.
    Grice, H. P. 1975. Logic and conversation. In Syntax and Semantics 3: Speech Acts. Academic Press, 41–58.
    Halpern, J. Y. 2015. A modification of the Halpern–Pearl definition of causality. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI 2015), 3022–3033.
    Halpern, J. Y. & Pearl, J. 2005a. Causes and explanations: A structural-model approach. Part I: Causes. The British Journal for the Philosophy of Science 56 (4), 843–887.
    Halpern, J. Y. & Pearl, J. 2005b. Causes and explanations: A structural-model approach. Part II: Explanations. The British Journal for the Philosophy of Science 56 (4), 889–911.
    Haynes, S. R., Cohen, M. A. & Ritter, F. E. 2009. Designs for explaining intelligent agents. International Journal of Human-Computer Studies 67 (1), 90–110.
    Hesslow, G. 1983. Explaining differences and weighting causes. Theoria 49 (2), 87–111.
    Hesslow, G. 1988. The problem of causal selection. In Contemporary Science and Natural Explanation: Commonsense Conceptions of Causality, 11–32.
    Hilton, D. J. 1990. Conversational processes and causal explanation. Psychological Bulletin 107 (1), 65–81.
    Kean, A. 1998. A characterization of contrastive explanations computation. In Pacific Rim International Conference on Artificial Intelligence. Springer, 599–610.
    Krarup, B., Cashmore, M., Magazzeni, D. & Miller, T. 2019. Model-based contrastive explanations for explainable planning. In 2nd ICAPS Workshop on Explainable Planning (XAIP-2019). AAAI Press.
    Lewis, D. 1986. Causal explanation. Philosophical Papers 2, 214–240.
    Lim, B. Y. & Dey, A. K. 2009. Assessing demand for intelligibility in context-aware applications. In Proceedings of the 11th International Conference on Ubiquitous Computing. ACM, 195–204.
    Linegang, M. P., Stoner, H. A., Patterson, M. J., Seppelt, B. D., Hoffman, J. D., Crittendon, Z. B. & Lee, J. D. 2006. Human-automation collaboration in dynamic mission planning: A challenge requiring an ecological approach. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 50 (23), 2482–2486.
    Lipton, P. 1990. Contrastive explanation. Royal Institute of Philosophy Supplement 27, 247–266.
    Madumal, P., Miller, T., Sonenberg, L. & Vetere, F. 2020. Explainable reinforcement learning through a causal lens. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2493–2500.
    Mercado, J. E., Rupp, M. A., Chen, J. Y. C., Barnes, M. J., Barber, D. & Procci, K. 2016. Intelligent agent transparency in human–agent teaming for multi-UxV management. Human Factors 58 (3), 401–415.
    Miller, T. 2018. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence. https://arxiv.org/abs/1706.07269.
    Mothilal, R. K., Sharma, A. & Tan, C. 2020. Explaining machine learning classifiers through diverse counterfactual explanations. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 607–617.
    Ruben, D.-H. 1987. Explaining contrastive facts. Analysis 47 (1), 35–37.
    Slugoski, B. R., Lalljee, M., Lamb, R. & Ginsburg, G. P. 1993. Attribution in conversational context: Effect of mutual knowledge on explanation-giving. European Journal of Social Psychology 23 (3), 219–238.
    Sreedharan, S., Srivastava, S. & Kambhampati, S. 2018. Hierarchical expertise level modeling for user specific contrastive explanations. In IJCAI, 4829–4836.
    Stubbs, K., Hinds, P. & Wettergreen, D. 2007. Autonomy and common ground in human-robot interaction: A field study. IEEE Intelligent Systems 22 (2), 42–50.
    Swartout, W. R. & Moore, J. D. 1993. Explanation in second generation expert systems. In Second Generation Expert Systems. Springer, 543–585.
    Temple, D. 1988. The contrast theory of why-questions. Philosophy of Science 55 (1), 141–151.
    Van Bouwel, J. & Weber, E. 2002. Remote causes, bad explanations? Journal for the Theory of Social Behaviour 32 (4), 437–449.
    Van Fraassen, B. C. 1980. The Scientific Image. Oxford University Press.
    Waa, J., van Diggelen, J., Bosch, K. & Neerincx, M. 2018. Contrastive explanations for reinforcement learning in terms of expected consequences. In Proceedings of the Workshop on Explainable AI at IJCAI.
    Wachter, S., Mittelstadt, B. & Russell, C. 2017. Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology 31, 841.
    Wang, D., Yang, Q., Abdul, A. & Lim, B. Y. 2019. Designing theory-driven user-centric explainable AI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–15.
    Winikoff, M. 2017. Debugging agent programs with Why? questions. In Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, AAMAS 2017. IFAAMAS, 251–259.
    Ylikoski, P. 2007. The idea of contrastive explanandum. In Rethinking Explanation. Springer, 27–42.

  • Cite this article

    Tim Miller. 2021. Contrastive explanation: a structural-model approach. The Knowledge Engineering Review 36(1), doi: 10.1017/S0269888921000102

Notes

    • Van Fraassen (1980, p. 127) attributes the idea of contrastive explanation to Bengt Hansson, in an unpublished manuscript circulated in 1974.

    • In this trivial example we could technically infer them all, but this is a property of the particular example, not of ‘rather than’ questions and structural models in general.

    • Note that this is the later definition from Halpern (2015), which is simplified compared to the original definition of Halpern & Pearl (2005a). Halpern argues this updated definition is more robust.

    • We abuse notation slightly here: $\vec{X} = \vec{x}$ is the conjunction of the first items of all of the subset; similarly $\vec{X} = \vec{y}$ is the conjunction of the second items.

    • In the case of an explainer and explainee, we may say that it is ‘believed’ by the explainer.

    • © The Author(s), 2021. Published by Cambridge University Press.