
The now and future of social robots as depictions

Published online by Cambridge University Press: 05 April 2023

Bertram F. Malle
Affiliation:
Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI 02912, USA. bfmalle@brown.edu; http://bit.ly/bfmalle
Xuan Zhao
Affiliation:
Department of Psychology, Stanford University, Stanford, CA 94305, USA. xuanzhao@stanford.edu; https://www.xuan-zhao.com/

Abstract

The authors at times propose that robots are mere depictions of social agents (a philosophical claim) and at other times that people conceive of social robots as depictions (an empirical psychological claim). We evaluate each claim's accuracy both now and in the future and, in doing so, identify two dangerous misperceptions people have, or will have, about social robots.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

When interacting with robots, people face an attribution problem (Heider, 1958): To what entity should they attribute the various actions that a robot performs, such as greeting a hotel guest, tutoring a second-language speaker, or recommending a new song? A common assumption is that people conceive of the robot itself as performing these actions. Clark and Fischer (C&F, this issue) propose instead that people often engage in a pretense and take an imagined character to do the greeting or tutoring or recommending – a character that is merely depicted by the machine in front of them. The authors' innovative depiction thesis suggests similarities between social robots and other human-created depictions, such as maps, puppets, and movies, and they provide illuminating examples suggesting that at least some people, some of the time, treat robots as depictions.

To evaluate the thesis of social robots as depictions, however, we must distinguish two versions of the thesis: that social robots are mere depictions of social agents (a philosophical claim); and that people conceive of social robots as depictions (an empirical psychological claim). Moreover, we must evaluate not only how the thesis fares in the present but also how it will fare in the future. Analyzing these four combinations (see Table 1), we find that evidence for the depiction thesis is limited, but the analysis reveals two dangerous misperceptions people have, or will have, about social robots: Right now, people often treat robots as autonomous agents even though in reality the robots are little more than depictions. In the future, people may fail to treat robots as the autonomous agents that they are bound to become, far more powerful than today's depictions.

Table 1. Depiction thesis, in two interpretations, now and in the future

Consider what robots are now. Like children's dolls and ventriloquist dummies, social robots are dressed up to perform actions that in actuality they do not perform: they cannot hold a conversation, be empathic, or have relationships. Like nonsocial robots (vacuum bots, manufacturing automata), social robots are programmed and controlled by designers to perform a limited number of actions; but unlike nonsocial robots, current social robots are advertised to be much more capable than they really are – that is, they are largely a pretense, a fiction.

Now consider how people treat current social robots. C&F offer vivid anecdotes but only a small number of studies that support the claim that people conceive of robots as depictions. In fact, there is considerable evidence that people often do the opposite – they treat robots as autonomous agents when they should not. People spontaneously take a robot's visual perspective (and more so if it looks highly humanlike; Zhao & Malle, 2022); people ascribe personality to robots (Ferguson, Mann, Cone, & Shen, 2019) as well as cognitive and moral capacities (Malle, 2019; Weisman, Dweck, & Markman, 2017; and more so if the robot looks highly humanlike; Zhao, Phillips, & Malle, 2019); and people feel empathy for robots, especially when the robots have an animal-like appearance (Darling, 2016; Rosenthal-von der Pütten, Krämer, Hoffmann, Sobieraj, & Eimler, 2013). In all these cases, people's psychological response to robots – so well-practiced in encounters with other human beings – seems to be directed at the robot-proper, not at a depicted character. Or at least there is no evidence that people compartmentalize the depiction from the depicted (as C&F suggest, p. x). Thus, people often fail to take the stance of pretense that the depiction thesis postulates; instead, they fall prey to an illusion created by designers and engineers, who exploit the deep-seated human psychology of generalization (Shepard, 1987) and lure people into a dangerous overestimation of capabilities that robots-proper currently do not have (Malle, Fischer, Young, Moon, & Collins, 2020).

Now consider what robots will be like in the future. They will not just be depictions; they will instantiate, as robots-proper, the actions that current robots only depict. Unlike dolls and dummies, they will not just be crafted and controlled by human programs. They will rapidly evolve through directing their own learning and devising their own programs. They will increasingly make autonomous decisions enabled by continuously updated and massively expanded algorithms. And equipped with complex capacities, they will perform socially significant actions – making a customer feel welcome, consoling a child, or caring for an older adult in distress.

In this future, people will ascribe such significant actions to the robot in front of them, not to any depicted character. And yet people will underestimate future robots' capacities, because our human psychology – evolved to co-exist with other humans – will be unprepared for robots' superhuman speed and scope of information processing and their ability to acquire vast numbers of roles and capabilities. (The reader is encouraged to watch the movie Her to see an example of such a being.) Designers, engineers, and scientists must help users set the right expectations of what such robots are capable of and simultaneously build robots that can communicate their capabilities to users.

But the greatest fear in fiction and philosophy has always been that robots will develop their own preferences and interests that may be in conflict with those of humans. To allay this fear, policies and regulations must be in place to ensure the design and manufacturing of robots that, while being autonomous, are still fully responsive to human influence. For this is what humans are – autonomous but also responsive to each other's influence. Robots of the future, like humans, must be able to learn the norms and values of our communities, improve from people's moral criticism, and be altered or excluded if they fail to correct themselves. Experts and community members alike must be teachers of future robots – robots as real agents, not merely as depictions.

Financial support

This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.

Competing interest

None.

References

Darling, K. (2016). Extending legal protection to social robots: The effects of anthropomorphism, empathy, and violent behavior towards robotic objects. In Calo, R., Froomkin, A. M., & Kerr, I. (Eds.), Robot law (pp. 213–232). Edward Elgar Publishing. https://doi.org/10.4337/9781783476732.00017
Ferguson, M. J., Mann, T. C., Cone, J., & Shen, X. (2019). When and how implicit first impressions can be updated. Current Directions in Psychological Science, 28(4), 331–336. https://doi.org/10.1177/0963721419835206
Heider, F. (1958). The psychology of interpersonal relations. Wiley. https://doi.org/10.1037/10628-000
Malle, B. F. (2019). How many dimensions of mind perception really are there? In Goel, E. K., Seifert, C. M., & Freksa, C. (Eds.), Proceedings of the 41st annual meeting of the Cognitive Science Society (pp. 2268–2274). Cognitive Science Society.
Malle, B. F., Fischer, K., Young, J. E., Moon, A. J., & Collins, E. C. (2020). Trust and the discrepancy between expectations and actual capabilities of social robots. In Zhang, D., & Wei, B. (Eds.), Human–robot interaction: Control, analysis, and design (pp. 1–23). Cambridge Scholars.
Rosenthal-von der Pütten, A. M., Krämer, N. C., Hoffmann, L., Sobieraj, S., & Eimler, S. C. (2013). An experimental study on emotional reactions towards a robot. International Journal of Social Robotics, 5(1), 17–34. https://doi.org/10.1007/s12369-012-0173-8
Shepard, R. N. (1987). Toward a universal law of generalization for psychological science. Science, 237(4820), 1317–1323.
Weisman, K., Dweck, C. S., & Markman, E. M. (2017). Rethinking people's conceptions of mental life. Proceedings of the National Academy of Sciences of the United States of America, 114(43), 11374–11379. https://doi.org/10.1073/pnas.1704347114
Zhao, X., & Malle, B. F. (2022). Spontaneous perspective taking toward robots: The unique impact of humanlike appearance. Cognition, 224, 105076. https://doi.org/10.1016/j.cognition.2022.105076
Zhao, X., Phillips, E., & Malle, B. F. (2019). How people infer a humanlike mind from a robot body [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/w6r24