
On the potentials of interaction breakdowns for HRI

Published online by Cambridge University Press: 05 April 2023

Britta Wrede
Affiliation:
Software Engineering for Cognitive Robots and Cognitive Systems, University of Bremen, 28359 Bremen, Germany. bwrede@techfak.uni-bielefeld.de
Anna-Lisa Vollmer
Affiliation:
Medical Assistive Systems, Bielefeld University, 33615 Bielefeld, Germany. anna-lisa.vollmer@uni-bielefeld.de
Sören Krach
Affiliation:
Department of Psychiatry and Psychotherapy, Social Neuroscience Lab (SNL), Lübeck University, Center of Brain, Behavior and Metabolism (CBBM), 23538 Lübeck, Germany. soeren.krach@uni-luebeck.de

Abstract

How do we switch between “playing along” and treating robots as technical agents? We propose interaction breakdowns as a key to this “social artifact puzzle”: Breakdowns shift users from fluid interaction to explicit reasoning and interaction with the raw artifact. These shifts are closely linked to understanding the technical architecture and could be used to design better human–robot interaction (HRI).

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

Clark and Fischer (C&F) propose a new account of what they call the “social artifact puzzle”: the observation that humans tend to interact with robots as if they were social agents framed in a specifically intended social situation while at the same time being aware of their technical nature, switching smoothly from “playing along” to treating them like technical tools. C&F solve this riddle by proposing three levels at which a social robot is construed – the raw artifact, its depiction, and the scene depicted – between which human interactants switch seemingly effortlessly. This approach elegantly explains the contradictory observations.

The switch from “playing along” to treating robots as technical agents is said to happen “effortlessly,” “smoothly,” “implicitly,” “automatically,” and “unconsciously,” and it has been proposed that “people are predisposed” to it or that they apply “natural rules” of communication. This assumption is in line with the work of Nass and coworkers on stereotypes and with research on anthropomorphism (e.g., Nass & Moon, 2000; Złotowski et al., 2018).

Here, we argue that unexpected and difficult-to-interpret “breaks” or “interruptions” in the interaction – for example, when the robot crashes, falls, or shuts down – provide a valuable source of information about the “social artifact puzzle.” When human partners are urged to deal with questions such as “Did the character fall asleep, or did the robot's battery die?,” such breaks may shake the human interaction partner out of an automatic interaction style and into a more conscious process that requires more explicit strategies. From this we derive the following three considerations:

1. Breaks structure the interaction into phases that require different processing approaches: As C&F note: “As we noted at the beginning, when a robot stops moving, viewers must decide, ‘Did the character fall asleep, or did the robot's battery die?’” (target article, introduction, para. 2). Thus, while interaction at the level of the scene depicted progresses rather effortlessly, drawing on intuitive human interaction strategies that are strengthened by the anthropomorphization of the robot, interaction at the raw artifact level requires explicit reasoning in order to find an explanation of the (unexpected) robot behavior. In line with this, studies indicate that human–robot interaction (HRI) is facilitated when users have a better understanding of the architecture, that is, the raw artifact, and are thus better able to derive the reasons for interaction errors (Hindemith, Göpfert, Wiebel-Herboth, Wrede, & Vollmer, 2021). Moreover, higher anthropomorphism scores, that is, perceiving the robot as more human-like, were associated with a poorer understanding of interaction errors and less interaction success (Hindemith et al., 2021), suggesting that a convincingly depicted scene, as indicated by high anthropomorphism scores, hindered the correct processing of the raw artifact. These findings are in line with neurobiological investigations of HRI showing that brain regions associated with theorizing about another agent's putative intentions were increasingly engaged the more human-like the scene was depicted (Hegel, Krach, Kircher, Wrede, & Sagerer, 2008; Krach et al., 2008).

2. How do prior experiences, expertise, or maturity affect these processes? Vollmer, Read, Trippas, and Belpaeme (2018) showed that children were more likely to “play along” in a social group pressure situation with a robot group than adults, who were less affected by the social group pressure exerted by robots. This could indicate that adults, who have more experience with and thus stronger prior beliefs about robot behavior than children, were able to direct their attention more strongly to the raw artifact level, thus increasing the effect of the raw artifact on the depicted scene level. We therefore assume that children will be less inclined to change levels in the interaction with a robot and that more “severe” breaks would be necessary to shake up children during HRI. It is unclear, though, how expertise in robotics would affect this process. On the one hand, we would assume that more expertise allows the user to more easily spot when and why things go awry during the interaction with the robot. This would allow experts to switch to interaction at the raw artifact level more smoothly than naïve interaction partners (see Fig. 1).

Figure 1. The proposed process of switching between “playing along” and treating robots as technical agents, triggered by an interaction breakdown, and back again.

On the other hand, it could be that children immerse themselves in the scene more easily. Why should that be? According to Schilbach and colleagues, an immersive social interaction requires at least two factors: a dynamic interaction between two agents and high emotional engagement (Pfeiffer, Timmermans, Vogeley, Frith, & Schilbach, 2013; Schilbach et al., 2013). Studies indicate that children show higher engagement during HRI (Burdett, Ikari, & Nakawake, 2022); thus, one may reason that emotional engagement modulates how easily children can step out of the scene and change to the raw artifact level.

These thoughts lead to a final question:

3. What can roboticists and robot designers learn from these observations, and how can they be used to improve HRI? Because robots are built on fundamentally different architectures than humans, interaction with them – at least at the current state of the art – differs fundamentally from human–human interaction, even when developers try to mimic human-like behavior. Human interaction partners therefore need to switch to the raw artifact level from time to time in order to understand the underlying rules of the artificial interaction with the robot. In the didactics of computer science, switching between function (i.e., the scene depicted) and structure (i.e., the raw artifact) is regarded as an important strategy for learners to comprehend computational artifacts (Schulte, 2008). This raises the question of how such breaking points could be used in HRI. It may be useful, for example, to experimentally control failures and insert them into the interaction to help humans learn and better understand how the robot works; a minimal sketch of such a setup follows below. On the other hand, what strategies can help to guide the user back to an implicit and smoother interaction?
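To make the idea of experimentally inserted failures concrete, consider the following minimal, purely illustrative Python sketch (all names, breakdown types, and parameters below are our hypothetical choices, not part of the target article or of any specific robot platform): a wrapper that occasionally replaces the robot's intended per-turn behavior with a scripted breakdown and logs each injection, so that observed switches between construal levels can later be aligned with known breakdown events.

```python
import random
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical breakdown types an experimenter might script.
BREAKDOWNS = ["freeze", "delayed_response", "off_topic_reply"]

@dataclass
class BreakdownInjector:
    """Wraps the robot's intended per-turn behavior and, with probability
    `rate`, substitutes a scripted breakdown. A fixed `seed` makes the
    breakdown schedule reproducible across participants."""
    rate: float = 0.15
    seed: int = 42
    log: List[Tuple[int, str]] = field(default_factory=list)

    def __post_init__(self) -> None:
        self._rng = random.Random(self.seed)

    def step(self, turn: int, intended_behavior: str) -> str:
        if self._rng.random() < self.rate:
            breakdown = self._rng.choice(BREAKDOWNS)
            self.log.append((turn, breakdown))  # record for later analysis
            return breakdown
        return intended_behavior

# Usage: a scripted 20-turn interaction with controlled breakdowns.
injector = BreakdownInjector()
for turn in range(20):
    behavior = injector.step(turn, intended_behavior="answer_question")
    # Here, `behavior` would be dispatched to the robot middleware
    # (platform-specific and omitted in this sketch).

print("Injected breakdowns:", injector.log)
```

Logging the turn and type of each injected breakdown is what would allow behavioral or physiological markers of a switch to the raw artifact level to be traced back to a known, experimenter-controlled cause.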

Overall, these considerations suggest that breakdowns may play an important role in HRI and deserve further research.

Acknowledgment

We thank Helen Beierling for the two illustrations of human–robot interactions.

Financial support

BW and ALV received funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation): TRR 318/1 2021 – 438445824.

Competing interest

None.

References

Burdett, E. R. R., Ikari, S., & Nakawake, Y. (2022). British children's and adults' perceptions of robots. Human Behavior and Emerging Technologies, 2022, 1–16.
Hegel, F., Krach, S., Kircher, T., Wrede, B., & Sagerer, G. (2008). Understanding social robots: A user study on anthropomorphism. In RO-MAN 2008 – The 17th IEEE International Symposium on Robot and Human Interactive Communication (pp. 574–579). IEEE.
Hindemith, L., Göpfert, J. P., Wiebel-Herboth, C. B., Wrede, B., & Vollmer, A.-L. (2021). Why robots should be technical. Interaction Studies, 22(2), 244–279.
Krach, S., Hegel, F., Wrede, B., Sagerer, G., Binkofski, F., & Kircher, T. (2008). Can machines think? Interaction and perspective taking with robots investigated via fMRI. PLoS ONE, 3(7), e2597.
Nass, C., & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56(1), 81–103.
Pfeiffer, U. J., Timmermans, B., Vogeley, K., Frith, C. D., & Schilbach, L. (2013). Towards a neuroscience of social interaction. Frontiers in Human Neuroscience, 7, 22.
Schilbach, L., Timmermans, B., Reddy, V., Costall, A., Bente, G., Schlicht, T., & Vogeley, K. (2013). Toward a second-person neuroscience. Behavioral and Brain Sciences, 36(4), 393–414.
Schulte, C. (2008). Duality reconstruction – Teaching digital artifacts from a socio-technical perspective. ISSEP 2008.
Vollmer, A.-L., Read, R., Trippas, D., & Belpaeme, T. (2018). Children conform, adults resist: A robot group induced peer pressure on normative social conformity. Science Robotics, 3(21), eaat7111.
Złotowski, J., Sumioka, H., Eyssel, F., Nishio, S., Bartneck, C., & Ishiguro, H. (2018). Model of dual anthropomorphism: The relationship between the media equation effect and implicit anthropomorphism. International Journal of Social Robotics, 10(5), 701–714.