Information, Vol. 11, No. 6 (2020)
ARTICLE
TITLE

Attention-Based SeriesNet: An Attention-Based Hybrid Neural Network Model for Conditional Time Series Forecasting

Yepeng Cheng, Zuren Liu and Yasuhiko Morimoto

Abstract

Traditional time series forecasting techniques cannot extract sufficiently rich features from sequence data, so their accuracy is limited. The deep learning architecture SeriesNet is an advanced method that adopts hybrid neural networks, including a dilated causal convolutional neural network (DC-CNN) and a long short-term memory recurrent neural network (LSTM-RNN), to learn multi-range and multi-level features from multi-conditional time series with higher accuracy. However, it does not consider attention mechanisms for learning temporal features. Moreover, its conditioning method for the CNN and RNN is not specific, and the number of parameters in each layer is tremendous. This paper proposes a conditioning method for the two types of neural networks and, to reduce the number of parameters, replaces LSTM and DC-CNN with the gated recurrent unit network (GRU) and dilated depthwise separable temporal convolutional networks (DDSTCNs), respectively. Furthermore, this paper presents a lightweight RNN-based hidden state attention module (HSAM) combined with the proposed CNN-based convolutional block attention module (CBAM) for time series forecasting. Experimental results show that our model is superior to other models in both forecasting accuracy and computational efficiency.
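To make the DDSTCN building block concrete, the following is a minimal NumPy sketch (not the authors' implementation) of one dilated, causal, depthwise separable temporal convolution. The function name and weight shapes are illustrative assumptions; the sketch only shows the two defining properties: causality via left-padding, and the depthwise-then-pointwise factorization that reduces parameters.

```python
import numpy as np

def dilated_causal_depthwise_separable_conv(x, depth_w, point_w, dilation=1):
    """Illustrative sketch of one DDSTCN-style layer (not the paper's code).

    x:        (T, C)      input time series with C channels
    depth_w:  (K, C)      one length-K filter per input channel (depthwise step)
    point_w:  (C, C_out)  1x1 mixing across channels (pointwise step)

    Causality: the output at time t only depends on inputs at times <= t,
    achieved by left-padding the series with (K - 1) * dilation zeros.
    """
    T, C = x.shape
    K = depth_w.shape[0]
    pad = (K - 1) * dilation
    xp = np.vstack([np.zeros((pad, C)), x])  # left-pad only: causal
    depth_out = np.zeros((T, C))
    for t in range(T):
        # K dilated taps ending at the current time step (oldest tap first)
        taps = xp[t + pad - dilation * np.arange(K)[::-1]]   # (K, C)
        depth_out[t] = np.sum(taps * depth_w, axis=0)        # per-channel conv
    return depth_out @ point_w                               # pointwise mixing
```

The parameter saving is the point of the factorization: this layer has K*C + C*C_out weights, whereas a standard temporal convolution with the same kernel size and channel counts has K*C*C_out.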

Similar Articles

Xinhua Wang, Xuemeng Yu, Lei Guo, Fangai Liu and Liancheng Xu    
As students' behaviors are important factors that can reflect their learning styles and living habits on campus, extracting useful features of them plays a helpful role in understanding the students' learning process, which is an important step towards p...
Journal: Information

 
Long Wu, Ta Li, Li Wang and Yonghong Yan    
As demonstrated in the hybrid connectionist temporal classification (CTC)/Attention architecture, joint training with a CTC objective is very effective at solving the misalignment problem existing in attention-based end-to-end automatic speech recognition ...
Journal: Applied Sciences