Xuanzhi Liao, Shahnorbanun Sahran, Azizi Abdullah and Syaimak Abdul Shukor    
Adaptive gradient descent methods such as Adam, RMSprop, and AdaGrad have achieved great success in training deep learning models. These methods adaptively adjust the learning rate, yielding faster convergence. Recent studies have shown their prob...
Journal: Applied Sciences    Format: Electronic
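The abstract's claim that these optimizers "adaptively adjust the learning rate" can be illustrated with a minimal scalar Adam update. This is a generic sketch of the standard Adam rule, not code from the article; the function and variable names are hypothetical:

```python
import math

def adam_step(theta, grad, state, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter (illustrative sketch)."""
    state["t"] += 1
    # First and second moment estimates of the gradient
    state["m"] = beta1 * state["m"] + (1 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1 - beta2) * grad * grad
    # Bias correction for the zero-initialized moments
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    # The effective step size adapts via sqrt(v_hat): large recent
    # gradients shrink the step, small ones enlarge it.
    return theta - lr * m_hat / (math.sqrt(v_hat) + eps)

# Minimize f(x) = x^2 (gradient 2x) starting from x = 5.0
x = 5.0
state = {"t": 0, "m": 0.0, "v": 0.0}
for _ in range(500):
    x = adam_step(x, 2 * x, state)
```

RMSprop and AdaGrad differ mainly in how the per-parameter denominator is accumulated (exponential average without bias correction, and a plain running sum, respectively), but the adaptive-scaling idea is the same.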
