
Reading Group

The TREAT project reading group meets every Thursday at 2pm in 1520-516 to discuss a paper related to explainable AI. Please feel free to contact us with any questions, or simply show up. We also welcome suggestions for papers to read, as well as requests to use the time to workshop a paper, proposal, or similar work that an attendee wants to share.

The reading list for the current semester can be found below:

4 October: Nannini, L., Marchiori Manerba, M. & Beretta, I. Mapping the landscape of ethical considerations in explainable AI research. Ethics and Information Technology 26, 44 (2024). https://doi.org/10.1007/s10676-024-09773-7

10 October: Emily Sullivan and Philippe Verreault-Julien. 2022. From Explanation to Recommendation: Ethical Standards for Algorithmic Recourse. Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (AIES '22), 712–722. https://doi.org/10.1145/3514094.3534185 

24 October: Sullivan, E. and Kasirzadeh, A. 2024. Explanation Hacking: The perils of algorithmic recourse. Forthcoming in Philosophy of Science for Machine Learning: Core issues and new perspectives, eds. Juan Durán and Giorgia Pozzi. Synthese Library, Springer. Preprint available at: https://arxiv.org/abs/2406.11843

31 October: Sullivan, E. 2024. SIDEs: Separating Idealization from Deceptive 'Explanations' in xAI. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24). Association for Computing Machinery, pp. 1714–1724. https://doi.org/10.1145/3630106.3658999

7 November: Vredenburgh, K. 2022. The Right to Explanation. Journal of Political Philosophy 30, 209–229. https://doi.org/10.1111/jopp.12262

14 November: CANCELLED!

21 November: Kawakami et al. 2022. Improving Human-AI Partnerships in Child Welfare: Understanding Worker Practices, Challenges, and Desires for Algorithmic Decision Support. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI '22). Association for Computing Machinery, Article 52. https://doi.org/10.1145/3491102.3517439  

28 November: NO MEETING

5 December: Peters, Uwe. 2023. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque. AI and Ethics 3, 963–974. https://doi.org/10.1007/s43681-022-00217-w 

12 December: Smart, A., Kasirzadeh, A. 2024. Beyond model interpretability: socio-structural explanations in machine learning. AI & Society. https://doi.org/10.1007/s00146-024-02056-1

19 December: Boge, F.J. Functional Concept Proxies and the Actually Smart Hans Problem: What’s Special About Deep Neural Networks in Science. Synthese 203, 16 (2024). https://doi.org/10.1007/s11229-023-04440-8 

Provisional readings, Spring 2025 (dates TBD):