
How puzzling is the social artifact puzzle?

Published online by Cambridge University Press:  05 April 2023

Tom Ziemke
Affiliation:
Department of Computer and Information Science, Linköping University, 58183 Linköping, Sweden. tom.ziemke@liu.se; https://liu.se/en/employee/tomzi64
Sam Thellman
Affiliation:
Department of Computer and Information Science, Linköping University, 58183 Linköping, Sweden. sam.thellman@liu.se; https://liu.se/en/employee/samth78

Abstract

In this commentary we would like to question (a) Clark and Fischer's characterization of the “social artifact puzzle” – which we consider less puzzling than the authors do – and (b) their account of social robots as depictions involving three physical scenes – which to us seems unnecessarily complex. We contrast the authors' model with a more parsimonious account based on attributions.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

We fully agree with Clark and Fischer's (C&F's) conclusion that no new ontological category is required for understanding people's interactions with social robots. What we would like to question in this commentary, however, is (a) the authors' characterization of the “social artifact puzzle” (target article, sect. 1, para. 2) – which we consider less puzzling than the authors make it out to be – and (b) their account of social robots as depictions involving three physical scenes – which to us seems unnecessarily complex.

Our own perspective is roughly in line with what C&F characterize as the trait attribution approach. We have recently published a systematic review of 155 empirical studies of mental state attribution to robots (Thellman, de Graaf, & Ziemke, 2022), which shows that most research so far has been concerned with determinants (causes) and consequences, that is, the questions of when and why people attribute mental states to robots. Known determinants include robot factors, such as appearance and behavior, and human factors, such as age and motivation. Known consequences include increased predictability, explainability, and trust, but also increases in cognitive drain and moral concern. However, relatively little is known about the how, that is, the mechanisms underlying such attributions – and this is of course where C&F's account of social robots as depictions involving three physical scenes could potentially make an important contribution.

We think that the three-physical-scenes account works best in cases where there is a clear difference between the depiction and the depicted. When, for example, viewers see the actor Mark Hamill portraying Luke Skywalker in The Empire Strikes Back, it is easy for them to understand Luke's physical and psychological pain when he gets his hand chopped off by Darth Vader, who then also turns out to be Luke's father, although it is of course more or less clear to everybody that the actor experiences neither of those pains. Things are less clear, we think, in C&F's example of Kermit the Frog depicting “a ranarian creature named Kermit” (target article, sect. 10, para. 2). It seems to us that in this case the distinction between the Kermit that does the depicting and the Kermit that is being depicted might not be particularly useful. One might also ask what motivates the limitation to exactly three physical scenes. Is not Kermit (the depicted) himself also a depiction of a certain type of human personality, rather than just a ranarian creature? Is not the fact that Kermit and Piggy are depictions of very different human personality types part of the reason why their relationship is funny to us? Are these examples of a possible fourth level in C&F's model, or alternative third scenes, or maybe a blended third scene? In cases like this, in our opinion, the attribution account is preferable, because it seems relatively straightforward to view people as attributing any number and combination of human, ranarian, and possibly other traits to Kermit.

To get back to socially interactive artifacts, let us take a concrete example (cf. Thellman et al., 2022; Ziemke, 2020): As a pedestrian encountering a driverless car at a crosswalk, you might be asking yourself: Has that car seen me? Does it understand I want to cross the road? Does it intend to stop for me? This would be an example of Dennett's (1988) intentional stance, that is, an interpretation of the car's behavior in terms of attributed mental states, such as beliefs and intentions. C&F's analysis in terms of three types of agents is clearly also applicable here: We have the self-driving car, the pedestrian, and the authorities responsible for the car (maker, owner, etc.). If we look at this in terms of C&F's three physical scenes, though, we are again (as in Kermit's case) not quite sure who or what is the character depicted. Is the software controlling the car a depiction of a human driver? That seems unlikely, given that the software as such usually remains invisible to the pedestrian. Or is the self-driving car as a whole a depiction of a normal, human-driven car? This might be in line with arguments that people should not even need to know whether a car is self-driving or not. Or is the car as such a depiction of a self-driving car? It is not clear to us why one would want to distinguish between the depiction and the depicted here. Instead of interpreting this case in terms of three physical scenes, it seems more straightforward to distinguish between the physical car and people's attributions to that car. Moreover, from the perspective of situated and embodied cognition, it would also seem more straightforward to view the pedestrian as interacting with the car in front of them – rather than interacting with some internal representation or an imagined depicted character. In other words, we think the attribution account is more parsimonious, and therefore preferable.
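To make this contrast concrete, the following is a minimal sketch – purely a conceptual illustration, not part of C&F's proposal or of our published work – of how the attribution account might be operationalized for the crosswalk example. All names (PerceivedCar, attribute_mental_states, decide_to_cross) and the decision rules are hypothetical; the point is only that the attributions attach directly to the observed car, with no intermediate "depicted character" needed to generate a behavioral prediction.

```python
# Illustrative sketch only: a pedestrian attributing mental states to a
# driverless car (cf. Dennett's intentional stance). Names and rules are
# hypothetical; no depicted character mediates the interaction.

from dataclasses import dataclass


@dataclass
class PerceivedCar:
    """Observable cues the pedestrian has about the physical car."""
    is_slowing_down: bool
    headlights_flashed: bool
    distance_m: float


@dataclass
class AttributedStates:
    """Mental states the pedestrian attributes to the car itself."""
    has_seen_me: bool
    intends_to_stop: bool


def attribute_mental_states(car: PerceivedCar) -> AttributedStates:
    # The attribution is made directly to the observed artifact, based on
    # its behavior -- not to an imagined human driver behind it.
    has_seen_me = car.is_slowing_down or car.headlights_flashed
    intends_to_stop = car.is_slowing_down and car.distance_m > 5.0
    return AttributedStates(has_seen_me, intends_to_stop)


def decide_to_cross(states: AttributedStates) -> bool:
    # The pedestrian's prediction of the car's behavior follows from the
    # attributed beliefs and intentions.
    return states.has_seen_me and states.intends_to_stop


if __name__ == "__main__":
    car = PerceivedCar(is_slowing_down=True, headlights_flashed=False, distance_m=12.0)
    states = attribute_mental_states(car)
    print(f"Attributed: {states} -> cross: {decide_to_cross(states)}")
```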

To get back to C&F's notion of the “social artifact puzzle,” we do not agree that there is something “self-contradictory, even irrational” about the fact that “people are willing to interact with a robot as if it was a social agent when they know it is a mechanical artifact” (target article, sect. 1, para. 2). In the example above, instead of the intentional interpretation, the pedestrian could of course take what Dennett refers to as the design stance and predict the car's behavior based on the general assumption that such vehicles are designed to detect people and not harm them. That might seem safer or more appropriate to some pedestrians (and readers), but note that this would still require additional, more situation-specific assumptions about whether the car has actually detected you (Thellman & Ziemke, 2021; Ziemke, 2020). This brings us back to what we said earlier about the consequences of mental state attribution to robots: In a nutshell, such attributions have been found to increase predictability and trust, which means that treating such artifacts as intentional, social agents might simply make them easier to interact with. In that sense, C&F's “social artifact puzzle” is less puzzling than it might seem.

Financial support

Both authors are supported by ELLIIT, the Excellence Center at Linköping-Lund in Information Technology (https://elliit.se/).

Competing interest

None.

References

Dennett, D. C. (1988). Précis of The Intentional Stance. Behavioral and Brain Sciences, 11(3), 495–505.
Thellman, S., de Graaf, M., & Ziemke, T. (2022). Mental state attribution to robots: A systematic review of conceptions, methods, and findings. ACM Transactions on Human–Robot Interaction, 11(4), article 41 (51 pages). https://doi.org/10.1145/3526112
Thellman, S., & Ziemke, T. (2021). The perceptual belief problem: Why explainability is a tough challenge in social robotics. ACM Transactions on Human–Robot Interaction, 10(3), article 29 (15 pages). https://doi.org/10.1145/3461781
Ziemke, T. (2020). Understanding robots. Science Robotics, 5(46), eabe2987. https://doi.org/10.1126/scirobotics.abe2987