
7 - Tracking of visual attention and adaptive applications

Published online by Cambridge University Press: 04 February 2011

Claudia Roda
Affiliation: The American University of Paris, France

Summary

This chapter presents a number of software applications that make use of an eye tracker. It builds on the knowledge of visual attention and its control mechanisms presented in chapters 3 and 5. It provides a tour through the years, showing how the use of eye gaze as an indicator of visual attention has developed from an additional input modality, supporting the disambiguation of fuzzy signals, into an interaction enhancement technique that allows software systems to work proactively and retrieve information without explicit commands from the user.

Introduction

Our environment provides far more perceptual information than we can effectively process. Hence, the ability to focus our attention on the essential is a crucial skill in a world full of visual stimuli. What we see is determined by what we attend to: the direction of our eye gaze is closely tied to the focus of our visual attention.

Eye trackers, devices that measure the point of gaze, have developed rapidly in recent years, although the history of eye-tracking equipment is long. For decades, eye trackers have been used as diagnostic equipment in medical laboratories and to enable and support communication for severely disabled people (see, e.g., Majaranta and Räihä 2007). Only recently have eye trackers reached a level of development at which they can be considered input devices for commonly used computing systems.
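
To make the idea of gaze as an input device concrete, here is a minimal sketch (not from the chapter) of dwell-time selection, a standard gaze-interaction technique in which looking at one spot for long enough triggers a selection (cf. Jacob 1991; Majaranta and Räihä 2007). Everything named in it is a placeholder: real eye trackers expose vendor-specific APIs, and the get_gaze_point callback, the thresholds, and the simulated tracker are hypothetical. The sketch is in Python.

import math
import time

DWELL_TIME = 0.8     # seconds the gaze must rest on one spot to count as a selection
DWELL_RADIUS = 40    # pixels of tolerance for fixation jitter

def dwell_select(get_gaze_point, on_select, duration=5.0, poll_hz=60):
    """Poll gaze samples; call on_select(x, y) whenever the gaze stays
    within DWELL_RADIUS of one spot for DWELL_TIME seconds."""
    anchor = get_gaze_point()
    start = time.monotonic()
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        x, y = get_gaze_point()
        if math.dist((x, y), anchor) > DWELL_RADIUS:
            # Gaze moved away: restart the dwell timer at the new location.
            anchor, start = (x, y), time.monotonic()
        elif time.monotonic() - start >= DWELL_TIME:
            on_select(*anchor)
            # Reset so a continued stare does not fire repeatedly.
            anchor, start = (x, y), time.monotonic()
        time.sleep(1.0 / poll_hz)

if __name__ == "__main__":
    # Hypothetical tracker: simulates a perfectly steady gaze at the screen centre.
    dwell_select(get_gaze_point=lambda: (512.0, 384.0),
                 on_select=lambda x, y: print(f"dwell selection at ({x:.0f}, {y:.0f})"),
                 duration=2.0)

The dwell threshold is the central design trade-off: too short and the interface suffers from the 'Midas touch' problem of unintended selections (Jacob 1991); too long and interaction feels sluggish.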

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2011


References

Abrams, R. A., and Christ, S. E. 2003. Motion onset captures attention, Psychological Science 14: 427–32
Bartram, L., Ware, C., and Calvert, T. 2003. Moticons: Detection, distraction and task, International Journal of Human–Computer Studies 58: 515–45
Baudisch, P., DeCarlo, D., Duchowski, A. T., and Geisler, W. S. 2003. Focusing on the essential: Considering attention in display design, Communications of the ACM 46(3): 60–6, http://doi.acm.org/10.1145/636772.636799
Biedert, R., Buscher, G., Schwarz, S., Hees, J., and Dengel, A. 2010. Text 2.0, in Extended Abstracts of the SIGCHI Conference on Human Factors in Computing Systems (CHI EA '10), Atlanta, GA: ACM Press: 4003–8, http://doi.acm.org/10.1145/1753846.1754093
Bolt, R. A. 1980. ‘Put-That-There’: Voice and gesture at the graphics interface, in Proceedings of the 7th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH'80), Seattle, WA: ACM Press: 262–70, http://doi.acm.org/10.1145/800250.807503
Bolt, R. A. 1981. Gaze-orchestrated dynamic windows, in Proceedings of the 8th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH'81), Dallas, TX: ACM Press: 109–19, http://doi.acm.org/10.1145/800224.806796
Bolt, R. A. 1982. Eyes at the interface, in Proceedings of the 1982 Conference on Human Factors in Computing Systems (CHI'82), Gaithersburg, MD: ACM Press: 360–2, http://doi.acm.org/10.1145/800049.801811
Bolt, R. A. 1984. The Human Interface. New York: Van Nostrand Reinhold
Breazeal, C. 2003. Emotion and sociable humanoid robots, International Journal of Human–Computer Studies 59(1–2): 119–55
Buscher, G., and Dengel, A. 2008. Attention-based document classifier learning, in Proceedings of the 8th IAPR Workshop on Document Analysis Systems (DAS'08), Nara, Japan: IEEE Xplore: 87–94
Buscher, G., Dengel, A., and Elst, L. 2008. Query expansion using gaze-based feedback on the subdocument level, in Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'08), Singapore: ACM Press: 387–94, http://doi.acm.org/10.1145/1390334.1390401
COGAIN 2007. COGAIN Student Competition Results. European Network of Excellence on Communication by Gaze Interaction. Retrieved 13 July 2010 from: www.cogain.org/node/41
Duchowski, A. T. 2002. A breadth-first survey of eye tracking applications, Behavior Research Methods, Instruments, and Computers (BRMIC) 34: 455–70
Duncan, J. 1984. Selective attention and the organization of visual information, Journal of Experimental Psychology 113: 501–17
Eaddy, M., Blasko, G., Babcock, J., and Feiner, S. 2004. My own private kiosk: Privacy-preserving public displays, in Proceedings of the 8th International Symposium on Wearable Computers (ISWC'04), Arlington, VA: IEEE Computer Society: 132–5
Fono, D., and Vertegaal, R. 2005. EyeWindows: Evaluation of eye-controlled zooming windows for focus selection, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'05), Portland, OR: ACM Press: 151–60, http://doi.acm.org/10.1145/1054972.1054994
Franconeri, S. L., and Simons, D. J. 2003. Moving and looming stimuli capture attention, Perception and Psychophysics 65(7): 999–1010
Groner, R., and Groner, M. T. 1989. Attention and eye movement control: An overview, European Archives of Psychiatry and Neurological Sciences 239(1): 9–16
Horvitz, E., Kadie, C., Paek, T., and Hovel, D. 2003. Models of attention in computing and communication: From principles to applications, Communications of the ACM 46(3): 52–9, http://doi.acm.org/10.1145/636772.636798
Hyrskykari, A. 2006. Eyes in attentive interfaces: Experiences from creating iDict, a gaze-aware reading aid. Dissertations in Interactive Technology 4, Department of Computer Sciences, University of Tampere. Retrieved 13 July 2010 from: http://acta.uta.fi/pdf/951-44-6643-8.pdf
Hyrskykari, A., Majaranta, P., and Räihä, K.-J. 2003. Proactive response to eye movements, in Rauterberg, M., Menozzi, M., and Wesson, J. (eds.), Proceedings of INTERACT 2003, Zurich: IOS Press: 129–36
Hyrskykari, A., Majaranta, P., and Räihä, K.-J. 2005. From gaze control to attentive interfaces, in Proceedings of HCII 2005, Las Vegas
Jacob, R. J. K. 1991. The use of eye movements in human–computer interaction techniques: What you look at is what you get, ACM Transactions on Information Systems 9(2): 152–69, http://doi.acm.org/10.1145/123078.128728
Kembel, J. A. 2003. Reciprocal eye contact as an interaction technique, in Extended Abstracts of the SIGCHI Conference on Human Factors in Computing Systems (CHI'03), Fort Lauderdale, FL: ACM Press: 952–3, http://doi.acm.org/10.1145/765891.766089
Koons, D., and Flickner, M. 2003. PONG: The attentive robot, Communications of the ACM 46(3): 50 (sidebar), http://doi.acm.org/10.1145/636772.636797
Kumar, M., Paepcke, A., and Winograd, T. 2007. EyePoint: Practical pointing and selection using gaze and keyboard, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'07), San Jose, CA: ACM Press: 421–30, http://doi.acm.org/10.1145/1240624.1240692
Lieberman, H. A., and Selker, T. 2000. Out of context: Computer systems that adapt to, and learn from, context, IBM Systems Journal 39: 617–31
Maglio, P. P., and Campbell, C. S. 2003. Attentive agents, Communications of the ACM 46(3): 47–51, http://doi.acm.org/10.1145/636772.636797
Maglio, P. P., Matlock, T., Campbell, C. S., Zhai, S., and Smith, B. A. 2000. Gaze and speech in attentive user interfaces, in Proceedings of the 3rd International Conference on Advances in Multimodal Interfaces (ICMI 2000), Beijing: 1–7
Majaranta, P., and Räihä, K.-J. 2007. Text entry by gaze: Utilizing eye-tracking, in MacKenzie, I. S. and Tanaka-Ishii, K. (eds.), Text Entry Systems: Mobility, Accessibility, Universality. San Francisco: Morgan Kaufmann: 175–87
Morimoto, C. H., Koons, D., Amir, A., and Flickner, M. 2000. Pupil detection and tracking using multiple light sources, Image and Vision Computing 18(4): 331–5
Oh, A., Fox, H., Kleek, M., Adler, A., Gajos, K., Morency, L.-P., and Darrell, T. 2002. Evaluating Look-to-Talk: A gaze-aware interface in a collaborative environment, in Extended Abstracts of the SIGCHI Conference on Human Factors in Computing Systems (CHI'02), Minneapolis: ACM Press: 650–1, http://doi.acm.org/10.1145/506443.506528
Ohno, T. 2004. EyePrint: Support of document browsing with eye gaze trace, in Proceedings of the 6th International Conference on Multimodal Interfaces (ICMI'04), State College, PA: ACM Press: 16–23, http://doi.acm.org/10.1145/1027933.1027937
Pashler, H., Johnston, J. C., and Ruthruff, E. 2001. Attention and performance, Annual Review of Psychology 52: 629–51
Porta, M. 2002. Vision-based user interfaces: Methods and applications, International Journal of Human–Computer Studies 57: 27–73
Posner, M. I. 1980. Orienting of attention, Quarterly Journal of Experimental Psychology 32: 3–25
Qvarfordt, P., Beymer, D., and Zhai, S. 2005. RealTourist: A study of augmenting human–human and human–computer dialogue with eye-gaze overlay, in Costabile, M. F. and Paternò, F. (eds.), Proceedings of INTERACT 2005, Rome: Springer: 767–80
Qvarfordt, P., and Zhai, S. 2005. Conversing with the user based on eye-gaze patterns, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'05), Portland, OR: ACM Press: 221–30, http://doi.acm.org/10.1145/1054972.1055004
Räihä, K.-J., and Špakov, O. 2009. Disambiguating Ninja cursors with eye gaze, in Proceedings of the 27th International Conference on Human Factors in Computing Systems (CHI'09), Boston: ACM Press, http://doi.acm.org/10.1145/1518701.1518913
Raskin, J. 2000. The Humane Interface. Reading, MA: Addison-Wesley
Roda, C., and Thomas, J. 2006a. Attention aware systems: Introduction to special issue, Computers in Human Behavior 22(4): 555–6
Roda, C., and Thomas, J. 2006b. Attention aware systems: Theories, applications, and research agenda, Computers in Human Behavior 22(4): 557–87
Rudmann, D. S., McConkie, G. W., and Zheng, X. S. 2003. Eyetracking in cognitive state detection for HCI, in Proceedings of the 5th International Conference on Multimodal Interfaces (ICMI'03), Vancouver: ACM Press: 159–63, http://doi.acm.org/10.1145/958432.958464
Selker, T. 2004. Visual attentive interfaces, BT Technology Journal 22: 146–50
Shell, J. S., Selker, T., and Vertegaal, R. 2003. Interacting with groups of computers, Communications of the ACM 46(3): 40–6, http://doi.acm.org/10.1145/636772.636796
Shell, J. S., Vertegaal, R., and Skaburskis, A. W. 2003. EyePliances: Attention-seeking devices that respond to visual attention, in Extended Abstracts of the SIGCHI Conference on Human Factors in Computing Systems (CHI'03), Fort Lauderdale, FL: ACM Press: 770–1, http://doi.acm.org/10.1145/765891.765981
Sibert, L. E., and Jacob, R. J. K. 2000. Evaluation of eye gaze interaction, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI 2000), The Hague: ACM Press: 281–8, http://doi.acm.org/10.1145/332040.332445
Starker, I., and Bolt, R. A. 1990. A gaze-responsive self-disclosing display, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'90), Seattle: ACM Press: 3–10, http://doi.acm.org/10.1145/97243.97245
Thorisson, K. R., Koons, D. B., and Bolt, R. A. 1992. Multi-modal natural dialogue, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'92), Monterey, CA: ACM Press: 653–4, http://doi.acm.org/10.1145/142750.150714
Velichkovsky, B. M., Rothert, A., Kopf, M., Dornhoefer, S. M., and Joos, M. 2002. Towards an express diagnostics for level of processing and hazard perception, Transportation Research Part F 5(2): 145–56
Vertegaal, R. 1999. The GAZE groupware system: Mediating joint attention in multiparty communication and collaboration, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'99), Pittsburgh: ACM Press: 294–301, http://doi.acm.org/10.1145/302979.303065
Vertegaal, R. 2003. Introduction to special issue on ‘Attentive user interfaces’, Communications of the ACM 46(3): 30–3, http://doi.acm.org/10.1145/636772.636794
Vertegaal, R., Weevers, I., and Sohn, C. 2002. GAZE-2: An attentive video conferencing system, in Extended Abstracts of the SIGCHI Conference on Human Factors in Computing Systems (CHI'02), Minneapolis: ACM Press: 736–7, http://doi.acm.org/10.1145/506443.506572
Vesterby, T., Voss, J. C., Hansen, J. P., Glenstrup, A. J., Hansen, D. W., and Rudolph, M. 2005. Gaze-guided viewing of interactive movies, Digital Creativity 16(4): 193–204
Wang, H., Chignell, M., and Ishizuka, M. 2006. Empathic tutoring software agents using real-time eye tracking, in Proceedings of the 2006 Symposium on Eye Tracking Research and Applications (ETRA'06), San Diego: ACM Press: 73–8, http://doi.acm.org/10.1145/1117309.1117346
Ware, C. 2008. Visual Thinking for Design. Burlington, MA: Morgan Kaufmann
Wolfe, J. M. 1998. Visual search, in Pashler, H. (ed.), Attention (5th edn). Hove: Psychology Press: 13–73
Xu, S., Jiang, H., and Lau, F. C. 2008. Personalized online document, image and video recommendation via commodity eye-tracking, in Proceedings of the 2008 ACM Conference on Recommender Systems (RecSys'08), Lausanne, Switzerland: ACM Press: 83–90, http://doi.acm.org/10.1145/1454008.1454023
Yantis, S., and Jonides, J. 1990. Abrupt visual onsets and selective attention: Voluntary versus automatic allocation, Journal of Experimental Psychology 16(1): 121–34
Yonezawa, T., Yamazoe, H., Utsumi, A., and Abe, S. 2007. Gaze-communicative behavior of stuffed-toy robot with joint attention and eye contact based on ambient gaze-tracking, in Proceedings of the 9th International Conference on Multimodal Interfaces (ICMI'07), Nagoya, Japan: 140–5, http://doi.acm.org/10.1145/1322192.1322218
Zhai, S. 2003. What's in the eyes for attentive input? Communications of the ACM 46(3): 34–9, http://doi.acm.org/10.1145/636772.636795
Zhai, S., Morimoto, C., and Ihde, S. 1999. Manual and gaze input cascaded (MAGIC) pointing, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'99), Pittsburgh, PA: ACM Press: 246–53, http://doi.acm.org/10.1145/302979.303053
