Clark and Fischer (C&F) offer an excellent analysis of what they call the social artefact puzzle: why it is that people simultaneously (1) hold the view that social robots – whether in the shape of animals or humans – are merely designed mechanical artefacts, and (2) show willingness to interact with them as if they were real agents. Their solution to this apparent inconsistency is to suggest that people do not inherently treat social robots as real agents, but rather as interactive depictions (i.e., analogues) of real agents. To our surprise, however, in their discussion the authors do not mention Daniel Dennett's (1987, 1988) distinction between the intentional stance and the design stance – two attitudes that humans routinely take in their engagement with the world. Yet we think that it is precisely this distinction that can help to address some of the unresolved issues the authors identify as lacking from alternative perspectives: (i) why people differ in their willingness to interact with social robots, (ii) why people can rapidly change their perspective on social robots from agents to artefacts, and (iii) why people seem to treat social robots as agents only selectively.
The intentional stance, according to Dennett, involves treating “the system whose behavior is to be predicted as a rational agent; one attributes to the system the beliefs and desires it ought to have, given its place in the world and its purpose, and then predicts that it will act to further its goals in the light of its beliefs” (Dennett, 1988, p. 496). This stance can be applied to other agents as well as to oneself (Veit, 2022; Veit et al., 2019). On the other hand, when one takes the design stance, “one predicts the behavior of a system by assuming that it has a certain design (is composed of elements with functions) and that it will behave as it is designed to behave under various circumstances” (Dennett, 1988, p. 496).
When humans are faced with a social robot, both stances are useful for predicting how the robot is going to behave, so people are faced with a choice of how to treat it. Which stance they adopt may depend on a range of factors, including individual differences and the particular goals of the interaction. For instance, people differ in their social personality traits and in their prior experience with social robots or similar artificial agents, so it is unsurprising that they also differ in their willingness to adopt the intentional stance and interact with robots as if they were real agents with beliefs and desires, as opposed to adopting the design stance and treating them more pragmatically, as useful objects but nothing more (though we note that Marchesi et al., 2019, did not find any differences within the demographic groups they screened for).
Thinking about these perspectives as conditional and changing stances, rather than strong ontological and normative commitments about the status of social robots and how they should be treated, removes the mystery regarding why and how people can rapidly change their perspectives of social robots, treating them as artefacts at one point in time and as agents at another. It can now be regarded as a fairly simple switch from one stance to another. This also provides a solution to the question of why people show selectivity in their interpretation of the capacities and abilities of social robots. People can adopt one stance or the other, depending on the context and goals of the particular interaction.
It is important to keep in mind that both stances are ultimately meant to be useful within different contexts. Our interactions with social robots will occur across a range of contexts, and people will have vastly different goals depending both on their own aims and values and on the situation in which they encounter the robot. In some cases it will be useful for someone, given their goals, to ignore the nonhuman-like features of a social robot and treat it as another social agent. In particular, in light of the evidence the authors discuss of people's strong emotional responses to some social robots (e.g., companion “animals”), there may be psychological and social benefits in adopting the intentional stance and treating the robot as a social agent (indeed, this would appear to be the very purpose of these robots in the first place). It may also assist in rapid and flexible prediction of behaviour, consistent with the finding that people more readily adopt the intentional stance when viewing social robots interacting with other humans than when viewing them acting alone (Spatola, Marchesi, & Wykowska, 2021). In other cases, often even within the same interaction, it will be more useful to ignore the human-like features and focus on the more mechanical properties, shifting to treating the robot as an artefact instead. This is more likely where interaction with the robot is more instrumental, in service of some other goal.
We want to emphasise that one does not have to see Dennett's account as a competitor to C&F's; indeed, we think the two are complementary. Our suggestion is that the authors could incorporate this distinction within their proposal, drawing more links between their account and existing studies that explore the intentional and design stances in relation to people's responses to robots (e.g., Marchesi et al., 2019; Perez-Osorio & Wykowska, 2019; Spatola et al., 2021). In particular, we see benefit in more empirical research on people's interactions with, and attitudes towards, social robots, to test these ideas and to see which apply more strongly within different contexts. The current evidence base is small and underdetermines the available theories. If we want to advance our understanding of when, how, and why ordinary people treat social robots as agents, we will ultimately need further empirical work, and we think that Dennett's distinction provides an additional useful framework from which to build it.
Financial support
WV's research was supported under the Australian Research Council's Discovery Projects funding scheme (project number FL170100160).
Competing interest
None.