
25 - Communicative Gesturing in Interaction with Robots

from Part V - Gestures in Relation to Interaction

Published online by Cambridge University Press: 01 May 2024

Alan Cienki
Affiliation: Vrije Universiteit, Amsterdam

Summary

We explore multimodal communication in robot agents, focusing on communicative gesturing as a means to improve the naturalness of human–robot interaction and to create shared context between the user and the robot. We discuss challenges related to accurate timing and acute perception of the partner’s gestures, so as to support appropriate presentation of the message and understanding of the partner’s speech. We also discuss how such conversational behavior can be modelled for a robot agent within context-aware dialogue modelling. The chapter discusses technologies and the building of models for appropriate and adequate gesturing in human–robot interaction (HRI), and presents experimental research that addresses these challenges. The aim of the research is to gain a better understanding of the gesture modality in HRI and to explore innovative solutions that improve human well-being and quality of life in contemporary society. The chapter draws examples from the AICO corpus, which was collected for comparative gaze and gesture studies of human–human and human–robot interactions.
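Two of the technical themes mentioned in the summary are selecting which gesture accompanies an utterance and timing its stroke against the co-occurring speech. The sketch below is a purely illustrative, minimal example of such gesture–speech scheduling; it is not the chapter’s own system, and all names (GestureCommand, plan_turn, estimate_word_onsets), the constant speech-rate assumption, and the printed schedule are hypothetical stand-ins for a robot’s TTS and motion interfaces.

```python
# Hypothetical sketch of co-speech gesture scheduling in a dialogue turn.
# Assumptions: word onsets are approximated from a constant speaking rate;
# a real system would obtain onsets from the TTS engine and send commands
# to the robot's motion controller instead of printing them.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class GestureCommand:
    """A gesture anchored to a word in the utterance (illustrative type)."""
    name: str               # e.g. "beat", "point_left", "nod"
    anchor_word_index: int  # word whose onset the gesture stroke should match
    duration_s: float       # approximate stroke duration in seconds


def estimate_word_onsets(words: List[str], speech_rate_wps: float = 2.5) -> List[float]:
    """Crude onset estimate assuming a constant rate in words per second."""
    return [i / speech_rate_wps for i in range(len(words))]


def plan_turn(utterance: str, focus_word: Optional[str] = None) -> List[GestureCommand]:
    """Plan gestures for one spoken turn: a beat on the focused word,
    plus a turn-final nod as a feedback-eliciting cue."""
    words = utterance.split()
    gestures: List[GestureCommand] = []
    if focus_word and focus_word in words:
        gestures.append(GestureCommand("beat", words.index(focus_word), 0.4))
    gestures.append(GestureCommand("nod", len(words) - 1, 0.6))
    return gestures


def execute_turn(utterance: str, gestures: List[GestureCommand]) -> None:
    """Print a schedule interleaving speech and gesture strokes."""
    words = utterance.split()
    onsets = estimate_word_onsets(words)
    schedule = [(onsets[g.anchor_word_index], g) for g in gestures]
    print(f"SAY: {utterance}")
    for t, g in sorted(schedule, key=lambda item: item[0]):
        print(f"  t={t:.2f}s  GESTURE: {g.name} ({g.duration_s}s) "
              f"on word '{words[g.anchor_word_index]}'")


if __name__ == "__main__":
    turn = "The castle was built in the twelfth century"
    execute_turn(turn, plan_turn(turn, focus_word="twelfth"))
```

Even in this toy form, the design choice is the one the summary highlights: the gesture is planned relative to the dialogue content (the focused word, the turn boundary) rather than appended as an independent animation, so that timing and meaning stay coupled.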

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2024


