
Author Archives: Tianyi Xiao


CHI '26 paper accepted!

geoGAZElab has one paper accepted at ACM CHI, the leading international conference on Human–Computer Interaction.

📄 CoMap: A Collaborative 3D Sketch Mapping Game for Search and Rescue

Authors: Tianyi Xiao, Sailin Zhong, Peter Kiefer, Miki Mizuki, Phoebe O. Toups Dugas, and Martin Raubal.

🚨 What is it about?

Search and rescue operations rely on fast and accurate spatial communication between commanders and field teams, often under severe time pressure and with asymmetric information. While maps are central to this process, traditional paper-based sketching struggles with 3D environments and remote collaboration.

We present CoMap, a collaborative 3D sketch mapping system, validated through a virtual reality fire-rescue game. In a controlled study with 13 commander–field team pairs, CoMap enabled more accurate and efficient spatial communication than conventional 2D sketch mapping and fostered more proactive communication strategies.

The paper also distills three design implications for future mapping tools to support SAR training and real-world operations.

🔗 More information:

This work is part of 3D Sketch Maps, a project funded by the SNSF Sinergia programme. Project page: https://geogaze.ethz.ch/3d-sketch-maps/comap/



Design++ XR in AEC Summer School 2025

As a member of geoGAZElab, Tianyi Xiao participated in organizing the Design++ XR in AEC Summer School 2025 and served as a workshop mentor.

Titled “Enhancing Spatial Communication through 3D Sketch Mapping in Extended Reality,” the workshop is based on our project funded by the Swiss National Science Foundation: 3D Sketch Maps. It aims to impart foundational theoretical knowledge of spatial cognition to students, introduce research findings on 3D sketch mapping, teach basic empirical research methods, and have students conduct user studies with the 3D sketch mapping tool developed within the project.

The timetable for the workshop is as follows:



CHI 2025: two papers accepted

We are pleased to share that two of our papers were accepted at the ACM (Association for Computing Machinery) CHI Conference on Human Factors in Computing Systems 2025, the most important international conference in the field of human–computer interaction. CHI 2025 took place in Yokohama, Japan, from 26 April to 1 May 2025.

Adrian Sarbach presented his paper “Next-Generation Navigation: Evaluating the Impact of Augmented Reality on Situation Awareness in General Aviation Cockpits”, written together with Thierry Weber. The paper presents findings on how AR technology (tested with the Microsoft HoloLens 2) can help pilots obtain and maintain a higher level of situation awareness in the aircraft cockpit.

Tianyi Xiao presented his paper “Sketch2Terrain: AI-Driven Real-Time Terrain Sketch Mapping in Augmented Reality”, written together with Yizi Chen, Sailin Zhong, Peter Kiefer, Jakub Krukar, Kevin Gonyop Kim, Lorenz Hurni, Angela Schwering, and Martin Raubal. The paper presents a generative AI model that helps novice mappers externalize complex terrain memory in XR.

Please refer to our papers if you are interested in aviation or in 3D sketch mapping with VR/AR technologies!

———-

Here again are the papers and their respective abstracts:


Next-Generation Navigation: Evaluating the Impact of Augmented Reality on Situation Awareness in General Aviation Cockpits

Flights in general aviation require pilots to navigate using 2D maps, which splits their attention between the cockpit and the outside environment, reducing situation awareness. Augmented reality (AR) can bridge the gap between the inside and outside world, and thus can resolve the issue of attention switches. In a mixed methods simulator study with 19 pilots, we tested an AR application that integrated invisible and hard-to-see aeronautical data and navigation features with the visible world. Results show that the AR tool enhances and accelerates orientation, and can result in flight trajectories being more accurate with AR than without AR. Situation awareness, measured with a subjective self-rating, was not increased with AR support. Participants voiced concerns about AR content occluding outside features, while positive feedback included use cases in unfamiliar areas and in low visibility, as well as highlighting of hazards.


Sketch2Terrain: AI-Driven Real-Time Terrain Sketch Mapping in Augmented Reality

Dataset, open-source project page

Sketch mapping is an effective technique to externalize and communicate spatial information. However, it has been limited to 2D mediums, making it difficult to represent 3D information, particularly for terrains with elevation changes. We present Sketch2Terrain, an intuitive generative-3D-sketch-mapping system combining freehand sketching with generative Artificial Intelligence that radically changes sketch map creation and representation using Augmented Reality. Sketch2Terrain empowers non-experts to create unambiguous sketch maps of natural environments and provides a homogeneous interface for researchers to collect data and conduct experiments. A between-subject study (N=36) revealed that generative-3D-sketch-mapping improved efficiency by 38.4%, terrain-topology accuracy by 12.5%, and landmark accuracy by up to 12.1%, with only a 4.7% trade-off in terrain-elevation accuracy compared to freehand 3D-sketch-mapping. Additionally, generative-3D-sketch-mapping reduced perceived strain by 60.5% and stress by 39.5% over 2D-sketch-mapping. These findings underscore potential applications of generative-3D-sketch-mapping for in-depth understanding and accurate representation of vertically complex environments. The implementation is publicly available.



New publication in the International Journal of Human-Computer Studies

The 3D Sketch Maps project team recently published a paper entitled “VResin: Externalizing spatial memory into 3D sketch map”. In this paper, we refined the concept of 3D sketch maps, proposed a technological framework based on 3D Cartesian coordinate axes, and provided design criteria for 3D sketch mapping interfaces. We implemented a VR-based 3D sketch mapping tool called VResin, which is associated with traditional resin painting, featuring a layer-by-layer sketching interface that considers both researcher and user needs. We then conducted a comparative user study with 48 participants, comparing mid-air sketching and layer-by-layer sketching interfaces for memorising multi-layered buildings. We found that VResin helps users create less distorted sketches while maintaining levels of completeness and generalisation comparable to mid-air sketching in VR. Finally, we presented application scenarios demonstrating how 3D sketch maps can support people’s externalisation of their 3D spatial understanding. This study is part of a Sinergia project called “3D Sketch Maps”, funded by the Swiss National Science Foundation (SNSF) [grant number 202284].

The interface design of VResin