Imagine going to the doctor for a routine check-up, where an advanced AI is responsible for diagnosing you. Unfortunately, the AI overlooks a dangerous tumor—not because of any mistake by the doctor or the developer, but because the AI has altered its own algorithm. Scenarios like this raise fundamental questions about moral responsibility. Who is truly accountable for the harm caused by the AI?
The HARM project addresses the "responsibility gaps" that arise when an AI causes harm but no moral agent can clearly be held responsible. As AI technologies become more widespread, we will encounter more of these gaps. They are problematic not only because someone needs to be held legally accountable for the harm caused by AIs, but also because legal demands for compensation can be unjust if the person held responsible by law is not, in a deeper sense, morally responsible for the harm.
Integrating insights from social ontology and AI ethics, the project aims to develop a new way of closing these responsibility gaps. The central idea is that AIs can often be considered members of larger organizations or companies, forming what we call "hybrid agents." In the project, we will examine the conditions under which hybrid agents can be held responsible for the actions performed by their AI members. Our hypothesis is that hybrid agents are morally responsible for the actions of their AI members in much the same way that companies are morally responsible for the actions of their employees. The project will help us understand how AI technology can be used in a morally responsible way and assess whether regulatory adjustments are needed.
Nicolai Knudsen (Co-PI)
Lauritz Munch (Co-PI)
David Ekdahl
HARM is funded by the Independent Research Fund Denmark.