Explainable Artificial Intelligence (XAI)
Adoption and Advocacy
The field of explainable artificial intelligence (XAI) advances techniques, processes, and strategies that provide explanations for the predictions, recommendations, and decisions of opaque and complex machine learning systems. Increasingly, academic libraries provide users with systems, services, and collections created and delivered by machine learning. Academic libraries should adopt XAI as a tool set to verify and validate these resources, and should advocate for public policy on XAI that serves libraries, the academy, and the public interest.
Abeba Birhane et al., “The Values Encoded in Machine Learning Research,” ArXiv:2106.15590 [Cs], 2021, http://arxiv.org/abs/2106.15590.
Ahmed Alkhateeb, “Science Has Outgrown the Human Mind and Its Limited Capacities,” Aeon, April 24, 2017, https://aeon.co/ideas/science-has-outgrown-the-human-mind-and-its-limited-capacities.
Alex Campolo et al., AI Now 2017 Report (New York: AI Now Institute, 2017).
Alfred Ng, “Can Auditing Eliminate Bias from Algorithms?,” The Markup, February 23, 2021, https://themarkup.org/ask-the-markup/2021/02/23/can-auditing-eliminate-bias-from-algorithms.
Alisa Bokulich, “How Scientific Models Can Explain,” Synthese 180, no. 1 (2011): 33–45, https://doi.org/10.1007/s11229-009-9565-1.
Amina Adadi and Mohammed Berrada, “Peeking inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI),” IEEE Access 6 (2018): 52138–60, https://doi.org/10.1109/ACCESS.2018.2870052.
Amitai Etzioni and Oren Etzioni, “Incorporating Ethics into Artificial Intelligence,” The Journal of Ethics 21, no. 4 (2017): 403–18, https://doi.org/10.1007/s10892-017-9252-2.
Ana Brandusescu, Artificial Intelligence Policy and Funding in Canada: Public Investments, Private Interests (Montreal: Centre for Interdisciplinary Research on Montreal, McGill University, 2021).
Andrew M. Cox, The Impact of AI, Machine Learning, Automation and Robotics on the Information Professions (CILIP, 2021), http://www.cilip.org.uk/resource/resmgr/cilip/research/tech_review/cilip_–_ai_report_-_final_lo.pdf.
Andrew Tutt, “An FDA for Algorithms,” Administrative Law Review 69, no. 1 (2017): 83–123.
Ashraf Abdul et al., “Trends and Trajectories for Explainable, Accountable, and Intelligible Systems: An HCI Research Agenda,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, CHI ’18 (New York: ACM, 2018), 582:1–582:18, https://doi.org/10.1145/3173574.3174156.
Association for Computing Machinery, Statement on Algorithmic Transparency and Accountability (New York: ACM, 2017), http://www.acm.org/binaries/content/assets/public-policy/2017_joint_statement_algorithms.pdf.
Babatunde K. Olorisade, Pearl Brereton, and Peter Andras, “Reproducibility in Machine Learning-Based Studies: An Example of Text Mining,” in Reproducibility in ML Workshop (International Conference on Machine Learning, Sydney, Australia, 2017), https://openreview.net/pdf?id=By4l2PbQ-.
Babatunde Kazeem Olorisade, Pearl Brereton, and Peter Andras, “Reproducibility of Studies on Text Mining for Citation Screening in Systematic Reviews: Evaluation and Checklist,” Journal of Biomedical Informatics 73 (2017): 1–13, https://doi.org/10.1016/j.jbi.2017.07.010.
Benjamin Haibe-Kains et al., “Transparency and Reproducibility in Artificial Intelligence,” Nature 586, no. 7829 (2020): E14–E16, https://doi.org/10.1038/s41586-020-2766-y.
Benjamin J. Heil et al., “Reproducibility Standards for Machine Learning in the Life Sciences,” Nature Methods, August 30, 2021, https://doi.org/10.1038/s41592-021-01256-7.
Beta Writer, Lithium-Ion Batteries: A Machine-Generated Summary of Current Research (Heidelberg: Springer Nature, 2019), https://link.springer.com/book/10.1007/978-3-030-16800-1.
Bryce Goodman and Seth Flaxman, “European Union Regulations on Algorithmic Decision Making and a ‘Right to Explanation’,” AI Magazine 38, no. 3 (2017): 50–57, https://doi.org/10.1609/aimag.v38i3.2741.
Cade Metz, Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World (Dutton, 2021).
Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Crown, 2016).
Chris Brinton, “A Framework for Explanation of Machine Learning Decisions” (IJCAI-17 Workshop on Explainable AI (XAI), Melbourne: IJCAI, 2017), http://www.intelligentrobots.org/files/IJCAI2017/IJCAI-17_XAI_WS_Proceedings.pdf.
Chris Olah, Alexander Mordvintsev, and Ludwig Schubert, “Feature Visualization,” Distill, November 7, 2017, https://doi.org/10.23915/distill.00007.
Christian Sandvig et al., “Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms,” Data and Discrimination: Converting Critical Concerns into Productive Inquiry, 2014, http://www-personal.umich.edu/~csandvig/research/Auditing%20Algorithms%20--%20Sandvig%20--%20ICA%202014%20Data%20and%20Discrimination%20Preconference.pdf.
Cliff Kuang, “Can A.I. Be Taught to Explain Itself?,” The New York Times Magazine, November 21, 2017, 50, https://nyti.ms/2hR1S15.
Corinne Cath et al., “Artificial Intelligence and the ‘Good Society’: The US, EU, and UK Approach,” Science and Engineering Ethics, March 28, 2017, https://doi.org/10.1007/s11948-017-9901-7.
Daniel Johnson, Machine Learning, Libraries, and Cross-Disciplinary Research: Possibilities and Provocations (Notre Dame, Indiana: Hesburgh Libraries, University of Notre Dame, 2020), https://dx.doi.org/10.7274/r0-wxg0-pe06.
Danielle Keats Citron and Frank Pasquale, “The Scored Society: Due Process for Automated Predictions,” Washington Law Review 89 (2014): 1–33.
DARPA, Explainable Artificial Intelligence (XAI) (Arlington, VA: DARPA, 2016), http://www.darpa.mil/attachments/DARPA-BAA-16-53.pdf.
David S. Watson and Luciano Floridi, “The Explanation Game: A Formal Framework for Interpretable Machine Learning,” Synthese 198, no. 10 (2020): 9214, https://doi.org/10.1007/s11229-020-02629-9.
Dillon Reisman et al., Algorithmic Impact Assessment: A Practical Framework for Public Agency Accountability (New York: AI Now Institute, 2018), https://ainowinstitute.org/aiareport2018.pdf.
Don R. Swanson, “Medical Literature as a Potential Source of New Knowledge,” Bulletin of the Medical Library Association 78, no. 1 (1990): 29–37.
Don R. Swanson, “Undiscovered Public Knowledge,” The Library Quarterly 56, no. 2 (1986): 103–18.
Donald A. Norman, “Some Observations on Mental Models,” in Mental Models, ed. Dedre Gentner and Albert L. Stevens (New York: Psychology Press, 1983), 7–14.
Duri Long and Brian Magerko, “What Is AI Literacy? Competencies and Design Considerations,” in Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI ’20 (Honolulu, HI: Association for Computing Machinery, 2020), 2, https://doi.org/10.1145/3313831.3376727.
Ed Finn, “Algorithm of the Enlightenment,” Issues in Science and Technology 33, no. 3 (2017): 24.
Emanuel Moss et al., Assembling Accountability: Algorithmic Impact Assessment for the Public Interest (Data & Society, 2021), https://datasociety.net/wp-content/uploads/2021/06/Assembling-Accountability.pdf.
European Commission, “Artificial Intelligence Act,” 2021, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206.
European Union, “Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016,” 2016, http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32016R0679.
Frank C. Keil, “Explanation and Understanding,” Annual Review of Psychology 57 (2006): 227–54, https://doi.org/10.1146/annurev.psych.57.102904.190100.
Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Cambridge, Mass.: Harvard University Press, 2015).
Gesina Schwalbe and Bettina Finzel, “XAI Method Properties: A (Meta-) Study,” ArXiv:2105.07190 [Cs], 2021, http://arxiv.org/abs/2105.07190.
Giulia Vilone and Luca Longo, “Explainable Artificial Intelligence: A Systematic Review,” ArXiv:2006.00093 [Cs], 2020, http://arxiv.org/abs/2006.00093.
Guoying Liu, “The Application of Intelligent Agents in Libraries: A Survey,” Program: Electronic Library and Information Systems 45, no. 1 (2011): 78–97, https://doi.org/10.1108/00330331111107411.
Henning Schoenenberger, Christian Chiarcos, and Niko Schenk, preface to Lithium-Ion Batteries.
Herbert A. Simon, “What Is an ‘Explanation’ of Behavior?,” Psychological Science 3, no. 3 (1992): 150–61, https://doi.org/10.1111/j.1467-9280.1992.tb00017.x.
IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems (New York: IEEE, 2019), https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead1e.pdf.
Ingrid Nunes and Dietmar Jannach, “A Systematic Review and Taxonomy of Explanations in Decision Support and Recommender Systems,” User Modeling and User-Adapted Interaction 27, no. 3 (2017): 393–444, https://doi.org/10.1007/s11257-017-9195-0.
Isto Huvila et al., “Information Behavior and Practices Research Informing Information Systems Design,” Journal of the Association for Information Science and Technology, 2021, 1–15, https://doi.org/10.1002/asi.24611.
Jack Anderson, “Understanding and Interpreting Algorithms: Toward a Hermeneutics of Algorithms,” Media, Culture & Society 42, no. 7–8 (2020): 1479–94, https://doi.org/10.1177/0163443720919373.
Jason Griffey, ed., “Artificial Intelligence and Machine Learning in Libraries,” Library Technology Reports 55, no. 1 (2019), https://doi.org/10.5860/ltr.55n1.
Jenna Burrell and Marion Fourcade, “The Society of Algorithms,” Annual Review of Sociology 47, no. 1 (2021): 231, https://doi.org/10.1146/annurev-soc-090820-020800.
Jenna Burrell, “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms,” Big Data & Society 3, no. 1 (2016), https://doi.org/10.1177/2053951715622512.
Jenny Bunn, “Working in Contexts for Which Transparency Is Important: A Recordkeeping View of Explainable Artificial Intelligence (XAI),” Records Management Journal 30, no. 2 (2020): 143–53, https://doi.org/10.1108/RMJ-08-2019-0038.
Joachim Diederich, “Methods for the Explanation of Machine Learning Processes and Results for Non-Experts,” PsyArXiv, 2018, https://doi.org/10.31234/osf.io/54eub.
Joelle Pineau, “Reproducibility Challenge,” October 6, 2017, http://www.cs.mcgill.ca/~jpineau/ICLR2018-ReproducibilityChallenge.html.
Jos de Mul and Bibi van den Berg, “Remote Control: Human Autonomy in the Age of Computer-Mediated Agency,” in Law, Human Agency, and Autonomic Computing, ed. Mireille Hildebrandt and Antoinette Rouvroy (Abingdon: Routledge, 2011), 59.
Joshua Alexander Kroll, “Accountable Algorithms” (PhD diss., Princeton University, 2015).
Julia Angwin et al., “Machine Bias,” ProPublica, May 23, 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
Julie Gerlings, Arisa Shollo, and Ioanna Constantiou, “Reviewing the Need for Explainable Artificial Intelligence (XAI),” in Proceedings of the Hawaii International Conference on System Sciences, 2020, http://arxiv.org/abs/2012.01007.
Kamran Alipour et al., “Improving Users’ Mental Model with Attention-Directed Counterfactual Edits,” Applied AI Letters, 2021, e47, https://doi.org/10.1002/ail2.47.
Kate Crawford and Jason Schultz, “Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms,” Boston College Law Review 55, no. 1 (2014): 93–128.
Law Commission of Ontario and Céline Castets-Renard, Comparing European and Canadian AI Regulation, 2021, https://www.lco-cdo.org/wp-content/uploads/2021/12/Comparing-European-and-Canadian-AI-Regulation-Final-November-2021.pdf.
Lilian Edwards and Michael Veale, “Enslaving the Algorithm: From a ‘Right to an Explanation’ to a ‘Right to Better Decisions’?,” IEEE Security & Privacy 16, no. 3 (2018): 46–54.
Lilian Edwards and Michael Veale, “Slave to the Algorithm? Why a ‘Right to Explanation’ Is Probably Not the Remedy You Are Looking For,” Duke Law & Technology Review 16 (2017): 18–84.
Linda C. Smith, “Artificial Intelligence in Information Retrieval Systems,” Information Processing and Management 12, no. 3 (1976): 189–222, https://doi.org/10.1016/0306-4573(76)90005-4.
“Loomis v. Wisconsin,” SCOTUSblog, June 26, 2017, http://www.scotusblog.com/case-files/cases/loomis-v-wisconsin/.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin, “Model-Agnostic Interpretability of Machine Learning,” ArXiv:1606.05386 [Cs, Stat], 2016, http://arxiv.org/abs/1606.05386.
Margot E. Kaminski, “The Right to Explanation, Explained,” Berkeley Technology Law Journal 34, no. 1 (2019): 189–218, https://doi.org/10.15779/Z38TD9N83H.
Mariarosaria Taddeo, “Trusting Digital Technologies Correctly,” Minds and Machines 27, no. 4 (2017): 565, https://doi.org/10.1007/s11023-017-9450-5.
Matt Turek, “Explainable Artificial Intelligence (XAI),” DARPA, https://www.darpa.mil/program/explainable-artificial-intelligence.
Matthew U. Scherer, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies,” Harvard Journal of Law & Technology 29, no. 2 (2016): 353–400.
Michael Power, The Audit Society: Rituals of Verification (Oxford: Oxford University Press, 1997).
Michael Ridley and Danica Pawlick-Potts, “Algorithmic Literacy and the Role for Libraries,” Information Technology and Libraries 40, no. 2 (2021), https://doi.org/10.6017/ital.v40i2.12963.
Michael Ridley, “Explainable Artificial Intelligence,” Research Library Issues, no. 299 (2019): 28–46, https://doi.org/10.29242/rli.299.3.
Michael Ridley, “Machine Information Behaviour,” in The Rise of AI: Implications and Applications of Artificial Intelligence in Academic Libraries, ed. Sandy Hervieux and Amanda Wheatley (Association of College and University Libraries, 2022).
Nick Seaver, “Seeing like an Infrastructure: Avidity and Difference in Algorithmic Recommendation,” Cultural Studies 35, no. 4–5 (2021): 775, https://doi.org/10.1080/09502386.2021.1895248.
Nick Wallace, “EU’s Right to Explanation: A Harmful Restriction on Artificial Intelligence,” TechZone360, January 25, 2017, http://www.techzone360.com/topics/techzone/articles/2017/01/25/429101-eus-right-explanation-harmful-restriction-artificial-intelligence.htm#.
Norbert Schwarz et al., “Ease of Retrieval as Information: Another Look at the Availability Heuristic,” Journal of Personality and Social Psychology 61, no. 2 (1991): 195–202, https://doi.org/10.1037/0022-3514.61.2.195.
Or Biran and Courtenay Cotton, “Explanation and Justification in Machine Learning: A Survey” (International Joint Conference on Artificial Intelligence, workshop on Explainable Artificial Intelligence (XAI), Melbourne, 2017), http://www.cs.columbia.edu/~orb/papers/xai_survey_paper_2017.pdf.
Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information.
Paul Thagard, “Evaluating Explanations in Law, Science, and Everyday Life,” Current Directions in Psychological Science 15, no. 3 (2006): 141–45, https://doi.org/10.1111/j.0963-7214.2006.00424.x.
Philip Adler et al., “Auditing Black-Box Models for Indirect Influence,” Knowledge and Information Systems 54 (2018): 95–122, https://doi.org/10.1007/s10115-017-1116-3.
Pigi Kouki et al., “User Preferences for Hybrid Explanations,” in Proceedings of the Eleventh ACM Conference on Recommender Systems, RecSys ’17 (New York, NY: ACM, 2017), 84–88, https://doi.org/10.1145/3109859.3109915.
Rao Aluri and Donald E. Riggs, “Application of Expert Systems to Libraries,” ed. Joe A. Hewitt, Advances in Library Automation and Networking 2 (1988): 1–43.
Roger Brownsword, “From Erewhon to AlphaGo: For the Sake of Human Dignity, Should We Destroy the Machines?,” Law, Innovation and Technology 9, no. 1 (January 2, 2017): 117–53, https://doi.org/10.1080/17579961.2017.1303927.
Ryan Cordell, Machine Learning + Libraries: A Report on the State of the Field (Washington DC: Library of Congress, 2020), https://labs.loc.gov/static/labs/work/reports/Cordell-LOC-ML-report.pdf.
Safiya Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: New York University Press, 2018).
Sahil Verma et al., “Pitfalls of Explainable ML: An Industry Perspective,” in MLSys JOURNE Workshop, 2021, http://arxiv.org/abs/2106.07758.
Sandra Wachter, Brent Mittelstadt, and Luciano Floridi, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation,” International Data Privacy Law 7, no. 2 (2017): 76–99, https://doi.org/10.1093/idpl/ipx005.
Sara Wachter-Boettcher, Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech (New York: W. W. Norton, 2017).
Sarah Lippincott, Mapping the Current Landscape of Research Library Engagement with Emerging Technologies in Research and Learning (Washington DC: Association of Research Libraries, 2020), https://www.arl.org/wp-content/uploads/2020/03/2020.03.25-emerging-technologies-landscape-summary.pdf.
Sarah Myers West, Meredith Whittaker, and Kate Crawford, Discriminating Systems: Gender, Race, and Power in AI (AI Now Institute, 2019), https://ainowinstitute.org/discriminatingsystems.html.
Sarah Tan et al., “Detecting Bias in Black-Box Models Using Transparent Model Distillation,” ArXiv:1710.06169 [Cs, Stat], November 18, 2017, http://arxiv.org/abs/1710.06169.
Sebastian Bach et al., “On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation,” PLoS ONE 10, no. 7 (2015): e0130140, https://doi.org/10.1371/journal.pone.0130140.
Sebastian Palacio et al., “XAI Handbook: Towards a Unified Framework for Explainable AI,” ArXiv:2105.06677 [Cs], 2021, http://arxiv.org/abs/2105.06677.
Shane T. Mueller et al., “Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI,” ArXiv:1902.01876 [Cs], 2019, http://arxiv.org/abs/1902.01876.
Shane T. Mueller et al., “Principles of Explanation in Human-AI Systems” (Explainable Agency in Artificial Intelligence Workshop, AAAI 2021), http://arxiv.org/abs/2102.04972.
“State v. Loomis,” Harvard Law Review 130, no. 5 (2017), https://harvardlawreview.org/2017/03/state-v-loomis/.
Taina Bucher, If ... Then: Algorithmic Power and Politics (New York: Oxford University Press, 2018).
Tania Lombrozo, “Explanatory Preferences Shape Learning and Inference,” Trends in Cognitive Sciences 20, no. 10 (2016): 756, https://doi.org/10.1016/j.tics.2016.08.001.
Thomas Padilla, Responsible Operations: Data Science, Machine Learning, and AI in Libraries (Dublin, OH: OCLC Research, 2019), https://doi.org/10.25333/xk7z-9g97.
Tim Miller, “Explanation in Artificial Intelligence: Insights from the Social Sciences,” Artificial Intelligence 267 (2019): 3, https://doi.org/10.1016/j.artint.2018.07.007.
Tom Simonite, “Google’s AI Guru Wants Computers to Think More like Brains,” Wired, December 12, 2018, https://www.wired.com/story/googles-ai-guru-computers-think-more-like-brains/.
Treasury Board of Canada Secretariat, “Directive on Automated Decision-Making,” 2019, http://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592.
Vijay Arya et al., “One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques,” ArXiv:1909.03012 [Cs, Stat], 2019, http://arxiv.org/abs/1909.03012.
Waddah Saeed and Christian Omlin, “Explainable AI (XAI): A Systematic Meta-Survey of Current Challenges and Future Opportunities,” ArXiv:2111.06420 [Cs], 2021, http://arxiv.org/abs/2111.06420.
William J. Clancey, “The Epistemology of a Rule-Based Expert System—a Framework for Explanation,” Artificial Intelligence 20, no. 3 (1983): 215–51, https://doi.org/10.1016/0004-3702(83)90008-5.
William Swartout, “XPLAIN: A System for Creating and Explaining Expert Consulting Programs,” Artificial Intelligence 21 (1983): 285–325.
William Swartout, Cecile Paris, and Johanna Moore, “Design for Explainable Expert Systems,” IEEE Expert-Intelligent Systems & Their Applications 6, no. 3 (1991): 58–64, https://doi.org/10.1109/64.87686.
Wojciech Samek and Klaus-Robert Müller, “Towards Explainable Artificial Intelligence,” in Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, ed. Wojciech Samek et al., Lecture Notes in Artificial Intelligence 11700 (Cham: Springer International Publishing, 2019), 17.
Copyright (c) 2022 Michael Ridley
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.