ARTICLE
TITLE

Multi-Scale Remote Sensing Semantic Analysis Based on a Global Perspective

Wei Cui    
Dongyou Zhang    
Xin He    
Meng Yao    
Ziwei Wang    
Yuanjie Hao    
Jie Li    
Weijie Wu    
Wenqi Cui and Jiejun Huang    

Abstract

Remote sensing image captioning involves remote sensing objects and their spatial relationships. However, it is still difficult to determine the spatial extent of a remote sensing object and the size of a sample patch. If the patch size is too large, it will include too many remote sensing objects and their complex spatial relationships, which increases the computational burden of the image captioning network and reduces its precision. If the patch size is too small, it often fails to provide enough environmental and contextual information, which makes the remote sensing object difficult to describe. To address this problem, we propose a multi-scale semantic long short-term memory network (MS-LSTM). The remote sensing images are cropped into pairs of image patches at different spatial scales. First, the large-scale patch covers a larger area: a Visual Geometry Group (VGG) network extracts its features, which are fed into the improved MS-LSTM network as semantic information. This provides a larger receptive field and richer contextual semantic information for small-scale image captioning, playing the role of a global perspective and enabling the accurate identification of small-scale samples with the same features. Second, the small-scale patch is used to highlight remote sensing objects and simplify their spatial relations. In addition, the multiple receptive fields provide perspectives ranging from local to global. The experimental results demonstrate that, compared with the original long short-term memory network (LSTM), the MS-LSTM's Bilingual Evaluation Understudy (BLEU) score increased by 5.6% to 0.859, reflecting that the MS-LSTM has a more comprehensive receptive field that provides richer semantic information and improves the remote sensing image captions.
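The abstract gives no implementation details, but the pipeline it describes (VGG features extracted from the large-scale patch and injected as global semantic context into an LSTM caption decoder for the small-scale patch) can be sketched roughly as follows. The sketch is an illustration in Python/PyTorch under stated assumptions, not the authors' code: the module names, the 4096-dimensional VGG fc7 features, and the fusion strategy (concatenating the two patch feature vectors to initialize the LSTM state) are all assumptions.

# Minimal sketch (assumed, not the authors' released code): VGG features of a
# large-scale patch supply global context for captioning the small-scale patch.
import torch
import torch.nn as nn
from torchvision import models


class MultiScaleCaptioner(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512, feat_dim=4096):
        super().__init__()
        # Shared VGG-16 backbone; its fc7 activations (4096-d) serve as patch
        # features. Pretrained weights would normally be loaded here.
        vgg = models.vgg16(weights=None)
        self.backbone = nn.Sequential(vgg.features, vgg.avgpool, nn.Flatten(),
                                      *list(vgg.classifier[:5]))
        # Fuse small-scale (local) and large-scale (global) features to
        # initialize the LSTM state -- one plausible fusion choice.
        self.init_h = nn.Linear(2 * feat_dim, hidden_dim)
        self.init_c = nn.Linear(2 * feat_dim, hidden_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, small_patch, large_patch, captions):
        # captions: (batch, seq_len) token ids, teacher forcing during training.
        local_feat = self.backbone(small_patch)    # (batch, feat_dim)
        global_feat = self.backbone(large_patch)   # (batch, feat_dim)
        fused = torch.cat([local_feat, global_feat], dim=1)
        h0 = torch.tanh(self.init_h(fused)).unsqueeze(0)   # (1, batch, hidden)
        c0 = torch.tanh(self.init_c(fused)).unsqueeze(0)
        emb = self.embed(captions)                 # (batch, seq_len, embed_dim)
        hidden, _ = self.lstm(emb, (h0, c0))
        return self.out(hidden)                    # (batch, seq_len, vocab)


if __name__ == "__main__":
    model = MultiScaleCaptioner(vocab_size=1000)
    small = torch.randn(2, 3, 224, 224)    # small-scale patches (resized)
    large = torch.randn(2, 3, 224, 224)    # large-scale patches (resized)
    caps = torch.randint(0, 1000, (2, 12))
    print(model(small, large, caps).shape)  # torch.Size([2, 12, 1000])

Concatenation followed by a learned projection is only one plausible way to combine the local and global features; the paper's improved MS-LSTM presumably integrates the large-scale semantics into the recurrent cell in its own way.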

Similar articles

       
 
Wenzhuo Zhang, Mingyang Yu, Xiaoxian Chen, Fangliang Zhou, Jie Ren, Haiqing Xu and Shuai Xu    
Deep learning technologies, such as fully convolutional networks (FCNs), have shown competitive performance in the automatic extraction of buildings from high-resolution aerial images (HRAIs). However, there are problems of over-segmentation and internal c...
Journal: Buildings

 
Jiangfan Feng and Chengjie Yi    
Recent advances in unmanned aerial vehicles (UAVs) have increased altitude capability in road-traffic monitoring. However, state-of-the-art vehicle detection methods still lack accuracy and lightweight structures on the UAV platform due to the ...
Journal: Drones

 
Yifan Liu, Qigang Zhu, Feng Cao, Junke Chen and Gang Lu    
Semantic segmentation has been widely used in the basic task of extracting information from images. Despite this progress, there are still two challenges: (1) it is difficult for a single-size receptive field to acquire sufficiently strong representation...

 
Peng Li, Dezheng Zhang, Aziguli Wulamu, Xin Liu and Peng Chen    
A deep understanding of our visual world is more than an isolated perception of a series of objects, and the relationships between them also contain rich semantic information. Especially for those satellite remote sensing images, the span is so large tha...

 
Tong Yu, Wenjin Wu, Chen Gong and Xinwu Li    
Tropical forests are of vital importance for maintaining biodiversity and regulating climate and material cycles, while facing various pressures from deforestation and agricultural reclamation. Remote sensing (RS) can support effective monitoring and m...