
Program

Wednesday, January 28:

13:15-13:45 (Welcome and Introduction to TRANSOR)

Johanna Seibt

Dept of Philosophy and the History of Ideas, Aarhus University; Research Group PENSOR

Is Social Robotics a Multi-, Inter-, Cross-, or Transdisciplinary Research Area?—Reflections on the Role of Philosophy for Social Robotics

Multidisciplinary research areas can be characterized in terms of their structures—the dynamic structures of research flow and conceptual revisions. Prima facie, the core disciplines in the area of social robotics operate with two feedback cycles: roboticists draw on research in developmental psychology and cognitive science to build robots that afford social interactions with humans; human-robot interactions are in turn studied by psychologists, sociologists, and anthropologists, and robotic architectures for certain specific capacities (e.g., speech acquisition) provide heuristics for cognitive science research. In my talk I will discuss how different parts of philosophy fit into the structure of the research area of social robotics. The answer depends, first, on whether human-robot interaction is viewed as an interdisciplinary, crossdisciplinary, or transdisciplinary field (Mitchell 2005, Fawcett 2013) and, second, on the extent to which the terminology of social robotics is to abide by extant conceptual constraints for the classification of human-robot interactions.

13:45-14:05 Discussion


14:15-14:45

Cathrine Hasse

Dept of Education, Aarhus University / Emdrup Campus; Technucation Lab

Sci-Fi Normativity in Robotics

The paper takes up the notion of robots as artefacts. Are robots tools, and is their sign-function tied to a tool functionality? In this paper I argue that robots are not developed as tools, but rather as signs referring to a science fiction universe that influences the imaginaries of the technical sciences. I shall discuss how, from this perspective, socially assistive robots impact professional work life and professional identities as multistable normative change agents. I propose a new understanding of normativity in machines, building on a combination of postphenomenology and cultural-historical activity theory (CHAT), to capture the science-fiction-embedded cultural and historical learning processes tied to human-robot interactions in professional work life in nursing homes, schools, and robot laboratories. This approach opens up new questions about the distribution of normativity between humans and machines.

14:45-15:05 Discussion


15:30-16:00

Charles Ess

The only good robot is a dead robot?

I explore social robots through three large philosophical, communicative, and cultural dimensions. First, social robots serve as very fine-grained test-beds and experiments that – as exemplified in the projects of Artificial Intelligence – issue in a techno-philosophical anthropology that helps us empirically define and (re)affirm distinctive human capacities vis-à-vis the machine. I will illuminate these findings in part in conjunction with recent work from John Sullins, Luciano Floridi, Wendell Wallach, and others. Second, virtue ethics is especially helpful in exploring questions of how we should program social robots as moral agents, where "morality" in human beings implicates autonomy, emotion, and other essential human social and communicative skills. Finally, the social and communicative dimensions of social robots bring to the foreground important cultural assumptions regarding technology. I will contrast the characteristic "Western" trope of "the evil robot" in science fiction with salient Japanese cultural dimensions, beginning with animism, that apparently issue in a greater optimism regarding the development and presence of social robots. I will argue that the Western thematic is comparatively recent and more superficial – and hence more easily moved beyond – than we might assume. The upshot is cautious optimism regarding our futures with social robots.

16:00-16:20 Discussion


16:30-17:00

Martin Mose Bentzen

(Philosophy), Technical University of Denmark, Copenhagen

Further steps towards ethical robots: the double effect principle formalized and applied to ethical dilemmas

Currently, there is a need for roboticists and ethicists to work together on providing principles for ethical robots. For instance, a recent paper reporting on an experiment conducted by roboticists with rescue robots facing an ethical dilemma concludes that there is a need for a collaborative effort in which ethicists suggest principled ways out of the dilemma; see (Winfield, Blum and Liu, 2014). The way out of the dilemma, if such a way exists, must not only be technically feasible but also ethically justifiable. In this talk, I consider how the ethical principle of double effect could be used to solve the dilemma.
The principle of double effect states conditions for ethically acceptable behavior when the intended outcome of an act has both positive and negative consequences (effects). The act itself must be good or neutral; the positive consequence must be intended while the negative one must not be; the negative consequence must not be a means to obtaining the positive consequence; and the positive consequence must be proportionally preferable to the negative one. I propose extensions of well-known formal models of consequentialist reasoning from game theory and stit theory. In particular, I suggest how to handle intentions, means-end reasoning, and the proportionality of several positive or negative aspects of an event. I apply these formal methods in an analysis of a couple of thought experiments from ethics. I then show how the principle could be applied to robots facing dilemmas.
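By way of illustration, the four conditions can be read as a single conjunctive test. The sketch below is a deliberately minimal rendering, not the game-theoretic or stit-based formalization the talk proposes; the field names and the simple utility comparison standing in for proportionality are illustrative assumptions only.

    # Minimal illustrative sketch of the four double-effect conditions as a
    # conjunctive check. NOT Bentzen's stit-based formalism: the Act fields
    # and the utility-based proportionality test are assumptions for
    # illustration only.
    from dataclasses import dataclass

    @dataclass
    class Act:
        good_or_neutral: bool    # 1. the act itself is good or neutral
        positive_intended: bool  # 2a. the positive effect is intended
        negative_intended: bool  # 2b. the negative effect must NOT be intended
        negative_is_means: bool  # 3. the negative effect must NOT be a means
                                 #    to obtaining the positive effect
        positive_value: float    # 4. inputs to the proportionality comparison
        negative_value: float

    def permitted_by_double_effect(act: Act) -> bool:
        """True iff all four conditions of the double effect principle hold."""
        return (
            act.good_or_neutral
            and act.positive_intended
            and not act.negative_intended
            and not act.negative_is_means
            and act.positive_value > act.negative_value  # proportionality
        )

    # Example: a rescue robot swerves to save a person, foreseeing (but not
    # intending) minor property damage that plays no role in the rescue.
    rescue = Act(good_or_neutral=True, positive_intended=True,
                 negative_intended=False, negative_is_means=False,
                 positive_value=10.0, negative_value=1.0)
    print(permitted_by_double_effect(rescue))  # True

The talk's actual contribution lies in replacing such boolean flags with formal models of intention, means-end structure, and proportionality over multiple effects.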

17:00-17:20 Discussion


Thursday, January 29:

9:15-9:45

Kerstin Fischer

Dept of Design and Communication (Linguistics), University of Southern Denmark

Methodological Problems in Determining How Robots Become Social Actors

One of the core research questions in social robotics is how robots become social actors. Several suggestions have been made for processes that influence the relationship between human users and robots; the most prominent is certainly the mindless transfer hypothesis (Nass and Moon 2000), which suggests that 'Computers Are Social Actors' (CASA) due to a kind of confusion on the side of the human. Moreover, anthropomorphism has been suggested to influence the degree to which people interpret robots in social ways (e.g. Lee 2008). Other suggestions include joint pretense (Clark 1999, Fischer 2006), where humans are taken to engage in a kind of play with the robot, interactive alignment (Pickering and Garrod 2004), where people are taken to align automatically with robot behaviors, and the idea that social practices are carried over from human conversation to interactions with artificial communication partners (Hutchby 2001). Thus, the mechanisms possibly involved range from cognitive through automatic to normative and jointly negotiated ones. Given this multitude of possible factors, designing studies that help us distinguish empirically between these approaches constitutes an enormous challenge. I will try to outline some methodological implications.

9:45-10:05 Discussion


10:20-10:50

Lars Christian Jensen

Dept of Design and Communication (Linguistics), University of Southern Denmark

Designing Empirical Studies to Determine how Robots Become Social Actors – Putting Theory Into Practice

People respond in social ways to robots and computers. This has been shown in numerous studies by Nass and colleagues (2009; 2010; 2011), who attribute people's social responses towards robots to evolutionary psychology. However, this explanation is contested by Clark, who suggests that when people respond in social ways to robots, they are really responding to the human operators and programmers that robots represent (Clark, 1996, p. 20). To test this hypothesis we carried out an experiment with 40 naïve participants who interacted briefly with a Care-O-bot 3 robot under one of three levels of autonomy. Participants were either introduced to a visible operator, told that the robot was being remote-controlled, or given the impression that the robot was fully autonomous. The experiment was set up so that the robot was introduced to the participants, but while approaching the participant, it made a serious mistake by knocking over a camera. The seriousness of the mistake was reinforced by the experimenter rushing to the camera to see if it was okay. After the interaction, participants were asked to fill out a pen-and-paper questionnaire in which they rated the robot as well as the robot operator (if any) and assigned blame to the robot, the programmer, or the operator. The results show no significant differences in participants' ratings of the robot between the conditions. These results, or lack thereof, are puzzling and stand in contrast to what we know about the impact telepresence has on people's anthropomorphizing of robots (Groom et al., 2011). I will therefore invite a discussion of the study and its results, as well as a general discussion of how to design experiments to determine how robots become social actors.

10:50-11:10 Discussion


11:15-11:45

Mikala Hansbøl

University College Zealand, Education Lab - Research program for technology and educational design

Empirical-Methodological-Theoretical Gatherings of Relationships Between IT, Autism, and Learning

What is social robotics when it comes to children with autism (and other children with disabilities) and educational matters? What kinds of (con-)figurations and articulations of children with disabilities and educational robots / educational robotics are made in existing (HRI) research? How does social robotics research contribute to the development of inclusion practices and educational practices? With which benefits? For whom? Under which circumstances? In which contexts? At present I am working my way into the field of social robotics and autism. University College Zealand has new project activities in 2015 dealing with social robotics, education, and inclusion. The talk will articulate my own (very preliminary!) entrances (primarily through reading) into the field of social robotics. I will then try to challenge the configurations of social robotics and children with disabilities that I have (so far) met, drawing on my own relational materialist (inspired by e.g. Latour) research into enactments of relationships between ITs and education in an everyday perspective. Through these alignments, I try to open up discussions of the existing knowledge and knowledge practices of social robotics in education.

11:45-12:05 Discussion


13:15-15:30

ROCA (Robot Culture and Aesthetics), Copenhagen

Gunhild Borggreen, Bojana Romic, Signe Juhl Møller, Thomas Sørensen, Stina Hasse, and Elizabeth Jochum

Panel: Robotic Art and Practice-Based Research

A two-hour panel of six participants from the ROCA (Robot Culture and Aesthetics) research group based at the University of Copenhagen. The panel will consist of six presentations of 10 minutes each and a 15-minute video presentation, followed by a short break and a 30-minute Q&A/discussion session.

This panel provides reflections on various methodologies connected to a practice-based workshop on Art and Robots. The workshop was a collaboration between the ROCA (Robot Culture and Aesthetics) research group at the University of Copenhagen and the Los Angeles-based artist and roboticist Ian Ingram. During an artist residency in Copenhagen, Ingram developed the artistic framework for the workshop activities and conducted the various stages of constructing, programming, and exploring simple mechatronic and robotic systems.

The panel will discuss various methodologies connected with the workshop format as a practice-based research method that can bring forth new types of knowledge production.



16:00-16:30

Michael Funk

Dept of Philosophy, TU Dresden (Germany)

Material Hermeneutics as methodology for SR – A theoretical investigation with respect to a case study of German-Japanese music-making robots

From a philosophical point of view, I am going to focus on the notions of "transdisciplinarity" and "perspectivity" with respect to SR methodology. Thereby, I want to emphasize a specific interdisciplinary and intercultural study, as well as general methodological issues of SR. Between 2010 and 2013, a research project investigated "Robotics in Germany and Japan: Philosophical and Technical Perspectives" (Funk, Michael & Bernhard Irrgang (eds.), 2014, Peter Lang). In this lecture to the First TRANSOR Workshop, I am going to summarize some of its results – mainly the epistemological and methodological issues. The focus will be on the meaning of "perspective" in intercultural situations (Germany and Japan), as well as in concrete human-robot interactions. A trajectory will be developed from this concrete case study to more general reflections on and conceptual analysis of transdisciplinary research, its definition, and its concrete practice in SR. I am going to elaborate on the impact of the 2PP (2nd-person perspective), mirror neurons and alterity relations, and the "quasi-other" (Don Ihde, Mark Coeckelbergh); tacit and implicit knowledge (Michael Polanyi, Bernhard Irrgang); and the meaning of "perspectivity" as a general epistemological term in the methodology of transdisciplinary research (Jürgen Mittelstraß). The theoretical focus will be on the relations between philosophy and the engineering sciences with respect to "material hermeneutics" (Don Ihde, Peter-Paul Verbeek, Bernhard Irrgang, Hans Lenk, Hans Poser, Walther Zimmerli) and its application to SR. This will be related to the practical example of music robots.

16:30-16:50 Discussion 


Friday, January 30:

9:15-9:45

John Michael

Cognitive Science, Central European University Budapest; Centre for Subjectivity Research Copenhagen; Interacting Minds Centre, Aarhus University

Implementing commitment in Human-Robot Interaction

Commitment is a fundamental building block of human social life. By generating and/or stabilizing expectations about the contributions that individual agents will make to the goals of other agents or to shared goals, commitments facilitate the planning and coordination of actions involving multiple agents. Moreover, they can also increase individual agents' motivation to contribute to other agents' goals or to shared goals, as well as their willingness to rely on other agents' contributions. In this paper, I discuss the possibility of designing robots that participate in commitments with human agents – i.e. robots that are motivated to honor commitments made to human agents, such that human agents are motivated to honor commitments made to them, and such that each expects the other to be so motivated. My strategy will be first to introduce a minimal operational definition of commitment, and then to identify factors that modulate the level of commitment in the sense of this minimal operational definition.

9:45-10:05 Discussion


10:15-10:45

Oliver Lauenstein

Dept of Psychology, University of Bamberg (Germany)

"I, for one, salute our new robotic overlords…": Studying lay moralities of robots, society, and human futures

Most of the research on human interactions with "social robots" focuses on interactions at an individual or, at most, institutional level. In other words, the social in social robotics is mostly understood as 'sociable' (gesellig) and related to direct social interactions. On this level, there is already some understanding of social robots and their impact on relationships. Reviewing the literature, it is apparent that despite initial concerns, the majority of carers, elderly or disabled people, and others can become accepting of social robots and perceive them as useful.

Taking a broader definition of social as 'societal' (gesellschaftlich), the perspective is less positive. According to a recent Eurobarometer survey (2012), a majority of respondents (60%) would ban robots in care contexts, and 86% feel 'uncomfortable' when thinking of an elderly or disabled relative being cared for by a robot. This sceptical-to-anxious reaction towards robots, especially outside industrial and housework contexts, has been replicated in a variety of other samples. It also holds for Japanese respondents, despite Japan's common portrayal as a 'robot kingdom' and its widespread application of robots. According to Sabanovic (2010), the common approach to this popular rejection of robots is technological determinism, i.e. the understanding that society will (ultimately) have to cope with and find ways of adapting to (social) robots. In contrast, she argues, a more dynamic approach that allows for society and robots mutually shaping each other is called for.

Taking up this research, and knowing that scepticism and anxiety about robots and their societal impact exist, I will draw on theories from social and moral psychology to suggest a framework for understanding: 1) which people are anxious about social robots, 2) in which (moral) concerns their negative feelings are rooted, and 3) how these concerns are related to lay understandings of future societies, robotics, and elderly or disabled people (as the affected population). Drawing on Cuddy, Fiske, and Glick's (2008) Stereotype Content Model and Haidt's (2012) Moral Foundations Theory, I suggest that part of the conflict might be rooted in our understanding of robots as 'cold but capable', of people with disabilities as 'warm but incapable', and in both the social and the personal norm to protect the latter against the former.

10:45-11:05 Discussion


11:10-11:40

Iordanis Kavathatzopoulos and Ingrid Björk

Dept of Information Technology, Uppsala University (Sweden)

How ethical robots process information, communicate and act

Robots can be of great help in obtaining optimal solutions to problems in situations where humans have difficulty perceiving and processing information, or making decisions and implementing actions, because of the quantity, variation, and complexity of the information.

However, if they do not act in accordance with our ethical values, they will not be used, or they will cause harm. Classical philosophical theory and psychological research on problem solving and decision making give us a concrete definition of ethics and open up the way for the construction of robots that can support the handling of moral problems. Linguistic research focusing on language use as the realization of meaning in communication between humans and robots gives us the tools for investigating how particular linguistic features, such as words and grammar, may be related to ethical thinking.

In such research work we can focus on three different kinds of robots. The first is already programmed to act in certain ways, and the focus is on designers using ethical tools to identify moral problems and formulate solutions. The second is an integrated system which is likewise preprogrammed but also contains an ethical tool to gather information, present it to the operators, and communicate with them. The third is a trained autonomous system in which we will implement automatic judgment. Such research will help us to clarify theoretical issues, formulate working methods, and develop technical solutions that will support the ethical decision making of automated IT systems.

11:40-12:00 Discussion 


12:00-12:30 Final discussion

 

Lunch

 

13:00 End of workshop


References:

Clark, H. H. (1996). Using Language. Cambridge: Cambridge University Press.

Groom, V., Chen, J., Johnson, T., Kara, F. A., & Nass, C. (2010). Critic, compatriot, or chump?: Responses to robot blame attribution. Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction, 211-218.

Groom, V., Srinivasan, V., Bethel, C. L., Murphy, R., Dole, L., & Nass, C. (2011). Responses to robot social roles and social role framing. Paper presented at the International Conference on Collaboration Technologies and Systems (CTS).

Takayama, L., Groom, V., & Nass, C. (2009). I'm sorry, Dave: I'm afraid I won't do that: Social aspects of human-agent conflict. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2099-2108. doi: 10.1145/1518701.1519021

Winfield, A., Blum, C., & Liu, W. (2014). Towards an ethical robot: Internal models, consequences and ethical action selection. In M. Mistry, A. Leonardis, M. Witkowski, & C. Melhuish (Eds.), TAROS 2014 – Towards Autonomous Robotic Systems (Lecture Notes in Computer Science, Vol. 8717, pp. 85-96). Springer International Publishing.