ARTICLE
TITLE

A Visual Attention Model Based on Eye Tracking in 3D Scene Maps

Bincheng Yang and Hongwei Li    

Abstract

Visual attention plays a crucial role in map reading and is closely tied to map cognition. Eye-tracking data contain a wealth of visual information that can be used to identify cognitive behavior during map reading, yet few researchers have applied these data to quantifying visual attention. This study proposes a method for quantitatively calculating visual attention from eye-tracking data for 3D scene maps. First, eye-tracking technology was used to capture differences in participants' gaze behavior while browsing a street view map in a desktop environment, and to establish a quantitative relationship between eye-movement indices and visual saliency. Then, experiments using vector 3D scene maps as stimulus material were carried out to determine the quantitative relationship between visual saliency and visual factors. Finally, a visual attention model was obtained by fitting the data. A combination of three visual factors (color, shape, and size) was shown to represent the visual attention value of a 3D scene map, with a goodness of fit (R²) greater than 0.699. This research helps determine and quantify visual attention allocation during map reading, laying a foundation for automated machine mapping.
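The abstract describes fitting a model in which an object's visual attention value is represented by a combination of color, shape, and size factors, evaluated by goodness of fit (R²). The exact fitting procedure and variable encodings are not given here, so the sketch below is only a minimal, hypothetical illustration of that kind of model: an ordinary least-squares fit on synthetic data. All variable names, feature scalings, and data values are assumptions, not the authors' published method.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical sketch only: data, feature encodings, and model form are
# assumptions, not the procedure from the paper.
rng = np.random.default_rng(0)

# Synthetic stand-ins for per-object visual factors of a 3D scene map,
# each scaled to [0, 1] (e.g. color contrast, shape complexity, relative size).
n_objects = 200
X = rng.uniform(0.0, 1.0, size=(n_objects, 3))   # columns: color, shape, size

# Synthetic visual-attention values, standing in for saliency scores derived
# from eye-movement indices (e.g. fixation duration and fixation count).
true_weights = np.array([0.5, 0.3, 0.2])
y = X @ true_weights + rng.normal(0.0, 0.05, size=n_objects)

# Fit the combined model and report goodness of fit, analogous to the
# R² > 0.699 reported in the abstract.
model = LinearRegression().fit(X, y)
r2 = r2_score(y, model.predict(X))
print("fitted weights (color, shape, size):", model.coef_)
print("R^2:", round(r2, 3))

A linear combination is used here purely for illustration; the paper's actual model could take a different functional form.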

Similar Articles

Jakub Wabinski and Emilia Smiechowska-Petrovskij    
Much attention is currently being paid to developing universally designed solutions. Tactile maps, designed for people with visual impairments (PVI), require both graphic and tactile content. While many more- or less-official guidelines regarding tactile...

 
Merve Keskin, Vassilios Krassanakis and Arzu Çöltekin    
This study investigates how expert and novice map users' attention is influenced by the map design characteristics of 2D web maps by building and sharing a framework to analyze large volumes of eye tracking data. Our goal is to respond to the following r...

 
Haoming Liang, Jinze Du, Hongchen Zhang, Bing Han and Yan Ma    
Recently, few-shot learning has attracted significant attention in the field of video action recognition, owing to its data-efficient learning paradigm. Despite the encouraging progress, identifying ways to further improve the few-shot learning performan...
Journal: Future Internet

 
Mingyang Yu, Haiqing Xu, Fangliang Zhou, Shuai Xu and Hongling Yin    
Accurate and efficient classification maps of urban functional zones (UFZs) are crucial to urban planning, management, and decision making. Due to the complex socioeconomic UFZ properties, it is increasingly challenging to identify urban functional zones...

 
Song Yuan, Zexin Lu, Qiyuan Li and Jinguang Gu    
Due to inter-modal effects hidden in multi-modalities and the impact of weak modalities on multi-modal entity alignment, a Multi-modal Entity Alignment Method with Inter-modal Enhancement (MEAIE) is proposed. This method introduces a unique modality call...