

Preliminary Programme

Thursday, June 18

11:00–11:30 Welcome and introduction

Anne Gerdes & Charles Ess

Introduction


11:30–12:30 Kick-off presentation

Charles Ess

What’s Love Got to Do with It? Robots, sexuality, and the arts of being human

Virtue ethics foregrounds phronesis as a reflective form of judgment critical to both ethical decision-making and the larger pursuit of good lives marked by love, friendship, and flourishing. Phronesis, along with analogical reasoning, is argued to be computationally intractable (e.g., Gerdes 2014; Ess 2015). Virtue ethics further implicates notions of eros, which John Sullins argues likewise cannot be instantiated in social robots (2012, 2014). I extend these analyses primarily with Sara Ruddick’s account of “complete sex,” as foregrounding embodiment, autonomy, self-awareness, and the specific desire that our desire be desired by the Other – all of which further entail respect for persons and equality (1975). Social robots arguably lack all of these capacities – especially autonomy and embodied awareness of desire – and so cannot qualify for complete sex and human eros. I then explore broader implications of these findings for both virtuous social robots and virtuous human beings.


12:30–13:30 Lunch


13:30–14:10

Raul Hakli

On simulation, computation, and randomness

I will study a recent argument according to which computational simulation of physical and biological phenomena is impossible because these phenomena involve randomness that cannot be implemented by discrete state machines. I will consider ways of criticizing the argument, and I will suggest that even if the argument were sound, it could not be used to support the claim that robots are incapable of simulating human capacities.
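To see what the premise amounts to, here is a minimal illustrative sketch (my own, not drawn from the paper under discussion): a pseudorandom number generator is precisely a discrete state machine, so its output is fully determined by its state, and two instances started from the same seed produce identical ‘random’ sequences.

# Illustrative Python sketch (an assumption for exposition, not from the
# paper discussed): a linear congruential generator is a deterministic
# discrete-state machine that merely simulates randomness.
class LCG:
    def __init__(self, seed: int):
        self.state = seed  # all the "randomness" lives in this finite state

    def next(self) -> int:
        # Fixed update rule (constants from Numerical Recipes): the next
        # state is entirely determined by the current one.
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state

# Identical seeds yield identical sequences: pseudorandomness rather than
# the genuine randomness the argument invokes.
a, b = LCG(42), LCG(42)
assert [a.next() for _ in range(5)] == [b.next() for _ in range(5)]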


14:10–14:50

Martin Bentzen

Functional ethics goes universal – on implementing Kant’s categorical imperative in robots

In this talk, I wish to point out possibilities for formalizing aspects of Kant’s deontological ethics and implementing them in robots. In particular, I wish to look at the possibility of formalizing the first formulation of the categorical imperative, which is as follows:

“…act only in accordance with that maxim through which you can at the same time will that it become a universal law.” 

Regarding this imperative as implying a decision procedure for a rational agent in moral doubt, the question is whether and how such a procedure can be made explicit and transformed so that it can be applied to robot planning problems. Whereas the robot’s planning algorithms could in principle (although rarely in practice) be based on purely instrumental reasoning, the categorical imperative will narrow the space of possible actions down to what is deontologically acceptable. The project might benefit robotics by presenting ways of making robot planning more ethically sophisticated. Further, by clarifying certain aspects of Kantian ethics through formalization and eventual implementation and experimentation, we might gain a deepened understanding of this ethical system.
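As a purely hypothetical illustration of this narrowing (the function names and the toy universalizability test below are stand-ins of mine, not Bentzen’s formalization), the imperative can be read as a filter applied to the planner’s action space before instrumental reasoning ranks what remains:

# Hypothetical Python sketch: a deontological filter over a planner's
# action space. The universalizability test is a placeholder lookup;
# making such a test explicit is exactly the open question of the talk.
def universalizable(maxim, world):
    # Stand-in test: a maxim fails if willing it as a universal law
    # would be self-defeating (here simply recorded in a set).
    return maxim not in world["self_defeating_maxims"]

def plan(actions, world, utility):
    # 1. Categorical-imperative filter: keep only permissible actions.
    permissible = [a for a in actions if universalizable(a["maxim"], world)]
    # 2. Instrumental reasoning, restricted to the permissible set.
    return max(permissible, key=utility, default=None)

world = {"self_defeating_maxims": {"lie when convenient"}}
actions = [
    {"name": "deceive user", "maxim": "lie when convenient", "gain": 5},
    {"name": "tell truth",   "maxim": "be truthful",         "gain": 3},
]
best = plan(actions, world, utility=lambda a: a["gain"])
print(best["name"])  # "tell truth": the higher-gain action was filtered out

The point of the sketch is only architectural: the deontological constraint operates prior to, and independently of, the utility ranking.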


14:50–15:10 Coffee break


15:10–15:50

Johanna Seibt

Simulation and the Limits of Simulation

I begin with a brief reflection on the notion of simulation and suggest that it is best understood in terms of a relationship (or more precisely, a field of relationships) between process systems of different kinds. Working from this account of simulation, I address the question of the grounds on which we can maintain that there are certain capacities of one type of process system (e.g., a human being) that cannot be simulated by another type of process system (e.g., an artificial agent). I argue that there are two types of limits for the simulation of capacities. The first type of limit relates to a performance threshold – for principled reasons the capacity can only be realized in a certain kind of process system, and no other kind of process system can produce a functional equivalent of the realization of the capacity in question. The second type of limit arises due to what I call the ‘collapse of sortal difference’ – if the realization of the capacity is an emergent phenomenon (in a weak sense I shall define), we are no longer in a position to distinguish the real process from its simulation. I then consider the notion of phronesis and discuss which sort of limit of simulation, if any, we may be confronted with.


15:50–16:30

Stefan K Larsen

“Can a robot ever TRULY x?” – intentionality across asymmetric distributions of intentional capacity

I outline an ontology of “human sociality” and social action/interaction that allows for an extension of our concepts of social actions to include interactions with robots. Extending Searle’s “we-intentionalities”, I argue that human we-intentionalities can be held in such a way as to also include non-conscious actors such as machines and robots. I argue that our conceptual ontology of “social interactions” is grounded in a sociality disposition which, while it has certain conditions for manifesting, can and does manifest itself in a number of ways, even with nonhumans.


16:30–17:10

Michael Funk

Synaesthetic Perspectives and the Impact of Robotic(s) Simulations on Human-Robot Interactions

With this lecture I want to build on the first TRANSOR workshop (Aarhus, January 2015) on methodological issues of social robotics. The emphasis of this presentation will be on philosophical and epistemological aspects of simulation. My primary focus is the meaning of “perspective”, “position”, and “synaesthesia” as general human features. Can a (human) bodily position be simulated and implemented in a social robot? I think not, because every simulation needs an abstract model as its basic data basis. Data-based models, I argue, belong to an explicit layer of knowledge, which stands in contrast to the implicit layer of knowledge. Thanks to sensorimotor unity (synaesthesia), humans are able to integrate the explicit and implicit layers in combination with a positional layer. I am going to explain this point with a heuristic template developed after the first TRANSOR workshop. In conclusion, I try to illustrate which aspects of human knowledge – in terms of perspectivity and positionality – cannot be simulated.

On the other hand, questions concerning the simulation of human-robot interactions are significant as well. It is therefore necessary to understand what form of “technical tool” a social robot might be – a question that remained unanswered after the first TRANSOR meeting. I want to defend my argument that robots are “tools”. Perhaps the (limited) function of simulation is one characteristic that makes robots an outstanding form of technical tool. Alongside the first question, I am also going to address this second question in my lecture.


19:00 Conference Dinner at the Mokka Café (only a five-minute walk from First Hotel, down the pedestrian street)

Friday, June 19

9:15–10:00 Kick-off presentation

Anne Gerdes

Robot Unicorn Attack – Does it Make Sense to Ascribe Morality to Robots?

“As-if” and simulation: the Kantian “as-if” has come to the foreground more and more in recent approaches to social robotics (cf. e.g. Seibt 2014, “Varieties of the ‘As if’: Five Ways to Simulate an Action”). Several studies highlight ways in which humans bond with social robots (Turkle 2010; Dautenhahn 2007; Schärfe et al. 2011; Carpenter 2013; Bartneck 2007). Moreover, in trying to clarify our interactions with social robots, some (e.g. Gunkel 2012; Coeckelbergh 2012) suggest that we need to address what is at stake in the relation per se, rather than framing the discussion around a basic distinction between person and machine. Consequently, it makes good sense to explore whether, and under which circumstances, human-robot interactions can qualify as instances of social interaction.


10:00–10:50

Raffaele Rodogno

Robots and the Emotional Life of Moral Evaluation

It is argued that central parts of what we recognize as moral judgement or evaluation rest on our emotional life – an old sentimentalist thesis. This is particularly true of the concept “morally wrong”. Our emotions not only guide us in making such evaluations in the simple and complex situations to which our lives expose us; they also allow us to recognize certain norm violations as moral as opposed to non-moral violations, such as, for example, violations of aesthetic norms or of etiquette. The claim is that features proper to the phenomenology of certain human emotions are constitutive of our understanding of central moral concepts. If successful, this argument shows that for as long as social robots are incapable of feeling the relevant emotions, they will not be competent moral evaluators, or at least they will fail to grasp some central concepts of human morality. In the remainder of the paper, it is discussed how best to equip social robots for human-robot interaction in the absence of the moral understanding and epistemic guidance provided by the emotions.


10:50–11:10 Coffee break


11:10–12:00

Kerstin Fischer

The Reality of Simulation: In-the-Moment and in Reflective Thought

Using the example of a car simulation, I discuss the difference between in-the-moment and reflective contemplation of artificial interaction partners (cf. Fussell et al. 2008; Takayama 2011). While in our data people can be demonstrated to take the simulation for real in the moment, i.e. in spontaneous, immediate responses, they may also engage in reflective thought about the simulation that shows that they understand the illusion very well. These two different views occur simultaneously and are neither reconciled nor treated as incompatible by the participants. This casts doubt on the hypothesis that fundamentally distinct processes are involved, as has previously been proposed (e.g. Nass & Moon 2000; Kahneman 2011). I back up this claim with examples of language use in which immediacy is simulated.


12:00–12:30

Lars Chr. Jensen

"That’s a nice Hummer!” - In-The-Moment Responses to a Talking Autonomous Car Simulation

A key question in experimental studies, and one that is often sought validated through post-hoc questionnaires, is how ‘real’ participants perceive an experiment to be. However, there may be considerable differences between how people react in the moment and how they reason about it afterwards. An alternative approach to validating participants’ perceptions is to analyze their in-the-moment responses during the experiment. To demonstrate how this can be achieved, I will present a study that took place in a highly immersive car simulator, which spoke to participants using computer-generated natural-language speech and which featured autonomous driving. The data are elicited from six 30-minute interactions with the simulator at Stanford University. The autonomous driving and the verbal behavior were controlled by two ‘wizards’ at a control station not visible to the experiment participants. In my talk, I show, using conversation analysis (CA), that participants treated the simulation and the interactions they had with the system as real. I show that microanalysis of participants’ responses can give an indication of how ‘real’ participants perceive a simulation to be.


12:30–13:30 Lunch


13:30–14:45

Roundtable discussion: Pressing directions for current and future research


14:45–15:00

Closing remarks by Charles Ess and Anne Gerdes