The TREAT project reading group meets every Tuesday at 1pm in 1532-314 (G.31) to discuss a paper related to explainable AI. Feel free to contact us with any questions, or simply show up. We are also open to suggestions for papers to read, and to using the time to workshop a paper, proposal, or similar that an attendee wants to share.
Reading plan for the current semester (Spring 2025):
21 Jan (note: room changed to 1532-318 (G.32)): Saphra, N., & Wiegreffe, S. 2024. Mechanistic? In Belinkov, Kim, Jumelet, Mohebbi, Mueller and Chen (eds.): Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, Association for Computational Linguistics, pp. 480–498. https://aclanthology.org/2024.blackboxnlp-1.30/
28 Jan: Kästner, L., Crook, B. 2024. Explaining AI through mechanistic interpretability. European Journal for Philosophy of Science 14, 52. https://doi.org/10.1007/s13194-024-00614-4
4 Feb: Budding, C., & Zednik, C. 2024. Does Explainable AI Need Cognitive Models? Proceedings of the Annual Meeting of the Cognitive Science Society, 46. Retrieved from https://escholarship.org/uc/item/8h546501
11 Feb: [RN absent] Zednik, C. 2021. Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Philos. Technol. 34, 265–288. https://doi.org/10.1007/s13347-019-00382-7
18 Feb: Wang, H. 2022. Transparency as Manipulation? Uncovering the Disciplinary Power of Algorithmic Transparency. Philos. Technol. 35, 69. https://doi.org/10.1007/s13347-022-00564-w
25 Feb: Franke, U. 2022. How Much Should You Care About Algorithmic Transparency as Manipulation? Philos. Technol. 35, 92. https://doi.org/10.1007/s13347-022-00586-4
+
Wang, H. 2023. Why Should We Care About the Manipulative Power of Algorithmic Transparency? Philos. Technol. 36, 9. https://doi.org/10.1007/s13347-023-00610-1
4 Mar: Klenk, M. 2023. Algorithmic Transparency and Manipulation. Philos. Technol. 36, 79. https://doi.org/10.1007/s13347-023-00678-9
11 Mar: Franke, U. 2024. Algorithmic Transparency, Manipulation, and Two Concepts of Liberty. Philos. Technol. 37, 22. https://doi.org/10.1007/s13347-024-00713-3
+
Klenk, M. 2024. Liberty, Manipulation, and Algorithmic Transparency: Reply to Franke. Philos. Technol. 37, 48. https://doi.org/10.1007/s13347-024-00739-7
18 Mar: Lauritsen, S.M., et al. 2020. Explainable Artificial Intelligence Model to Predict Acute Critical Illness from Electronic Health Records. Nature Communications 11(1), 1–11. https://doi.org/10.1038/s41467-020-17431-x
+
Babic, B., Gerke, S., Evgeniou, T., and Cohen, I.G. 2021. Beware explanations from AI in health care. Science 373, 284–286. https://www.science.org/doi/pdf/10.1126/science.abg1834
----------------------------------------------------------------
Previous semesters
Autumn 2024
4 October: Nannini, L., Marchiori Manerba, M., & Beretta, I. 2024. Mapping the landscape of ethical considerations in explainable AI research. Ethics and Information Technology 26, 44. https://doi.org/10.1007/s10676-024-09773-7
10 October: Sullivan, E. and Verreault-Julien, P. 2022. From Explanation to Recommendation: Ethical Standards for Algorithmic Recourse. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (AIES '22), pp. 712–722. https://doi.org/10.1145/3514094.3534185
24 October: Sullivan, E. and Kasirzadeh, A. 2024. Explanation Hacking: The perils of algorithmic recourse. Forthcoming in Philosophy of Science for Machine Learning: Core issues and new perspectives, eds. Juan Durán and Giorgia Pozzi. Synthese Library, Springer. Preprint available at: https://arxiv.org/abs/2406.11843
31 October: Sullivan, E. 2024. SIDEs: Separating Idealization from Deceptive 'Explanations' in xAI. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24). Association for Computing Machinery, pp. 1714–1724. https://doi.org/10.1145/3630106.3658999
7 November: Vredenburgh, K. 2022. The Right to Explanation. Journal of Political Philosophy 30, 209–229. https://doi.org/10.1111/jopp.12262
14 November: CANCELLED!
21 November: Kawakami, A., et al. 2022. Improving Human-AI Partnerships in Child Welfare: Understanding Worker Practices, Challenges, and Desires for Algorithmic Decision Support. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22). Association for Computing Machinery, Article 52. https://doi.org/10.1145/3491102.3517439
28 November: NO MEETING
5 December: Peters, U. 2023. Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque. AI and Ethics 3, 963–974. https://doi.org/10.1007/s43681-022-00217-w
12 December: Smart, A., Kasirzadeh, A. 2024. Beyond model interpretability: socio-structural explanations in machine learning. AI & Society. https://doi.org/10.1007/s00146-024-02056-1
19 December: Boge, F.J. 2024. Functional Concept Proxies and the Actually Smart Hans Problem: What’s Special About Deep Neural Networks in Science. Synthese 203, 16. https://doi.org/10.1007/s11229-023-04440-8