
Towards Responsible Explainable AI Technologies (TREAT)

Towards Responsible Explainable AI Technologies (TREAT) examines the advantages and disadvantages of so-called “Explainable AI” technologies for creating and using AI in an ethically responsible manner. One of the main ethical concerns regarding the use of complex AI systems in high-stakes decision making is that they risk becoming unintelligible black boxes. In response, a new subfield within AI research, known as explainable AI (XAI), seeks to develop tools for generating explanations of AI systems. Such explanations are important because they enable people to understand, and thereby think critically about, the AI systems that increasingly impact our everyday lives. However, explanations are not an unqualified ethical good. They also risk creating a false sense of understanding, which can be exploited to mislead or even manipulate.

To resolve this dilemma, we need an ethical framework for distinguishing beneficial from pernicious uses of explanation. Using case studies of current and foreseeable XAI technologies, TREAT seeks to develop both philosophically grounded theories of representational adequacy, explanatory honesty, and legitimacy, and practical guidelines for how XAI technologies can be deployed responsibly.

If you are interested in the project, feel free to contact us at treat@au.dk.

We also run a weekly reading group, open to anyone.

Funding

The project is funded by a Sapere Aude Research Leader Grant from the Independent Research Fund Denmark (case no. 3119-00051B).

About the banner

The banner is a slice of an image provided by Yutong Liu & Kingston School of Art / Better Images of AI / Talking to AI / Licensed by CC-BY 4.0.
