In gaze-based geographic human-computer interaction (HCI), a person’s visual attention is used in real time to adapt the interface to their information needs. A gaze-based map interface, for instance, could automatically pan or zoom the map based on the user’s current gaze position.
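To make the pan example concrete, here is a minimal sketch of gaze-driven panning: when the gaze dwells near a screen edge, the map center drifts in that direction. The function name, the edge threshold, and the pan speed are all illustrative assumptions, not taken from any specific system described here.

```python
def pan_offset(gaze_x, gaze_y, edge=0.15, speed=0.02):
    """Compute a map pan step from a gaze sample.

    Gaze coordinates are normalized to [0, 1] x [0, 1] screen space.
    Returns a (dx, dy) offset in map units; (0.0, 0.0) when the gaze
    rests in the central region of the screen.
    All thresholds are illustrative, not from a published system.
    """
    dx = dy = 0.0
    if gaze_x < edge:
        dx = -speed          # gaze near left edge -> pan west
    elif gaze_x > 1.0 - edge:
        dx = speed           # gaze near right edge -> pan east
    if gaze_y < edge:
        dy = speed           # gaze near top edge -> pan north
    elif gaze_y > 1.0 - edge:
        dy = -speed          # gaze near bottom edge -> pan south
    return dx, dy

# A fixation in the screen center leaves the map unchanged:
print(pan_offset(0.5, 0.5))   # -> (0.0, 0.0)
# A fixation near the right edge pans east:
print(pan_offset(0.95, 0.5))  # -> (0.02, 0.0)
```

In a real interface this step would run per gaze sample, typically after smoothing the raw eye-tracker signal, and only after a short dwell time so that incidental glances toward the edge do not move the map.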
Cognitively motivated approaches try to infer the user’s activities, intentions, or plans from gaze and provide assistance accordingly. The goal is to incorporate knowledge and models derived from empirical eye tracking studies into the interaction concept.
For more details, read our papers about the FeaturEyeTrack platform, the Gaze-Adaptive Lenses and Gaze-Adaptive Legends approaches, and about the user experience of gaze-based map adaptation.
Current and former projects on Gaze-Based Interaction with Maps:
Read also all our news posts tagged with “Gaze-Based Interaction”.