ARTICLE

Learnable Leaky ReLU (LeLeLU): An Alternative Accuracy-Optimized Activation Function

Andreas Maniatopoulos and Nikolaos Mitianoudis    

Abstract

In neural networks, the activation function is a vital component of both the learning and inference processes. Many approaches exist, but only nonlinear activation functions, commonly called nonlinearities, allow such networks to solve non-trivial problems using only a small number of nodes. With the emergence of deep learning came the need for competent activation functions that can enable or expedite learning in deeper layers. In this paper, we propose a novel activation function that combines features of several successful activation functions and achieves 2.53% higher accuracy than the industry-standard ReLU across a variety of test cases.
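
The abstract does not spell out the functional form of LeLeLU, so the PyTorch sketch below is only an illustration, not the authors' definition: it assumes a leaky-ReLU shape scaled by a single trainable parameter alpha per layer, which matches the name "Learnable Leaky ReLU" but should be checked against the full paper. The class name, the per-layer alpha, and the fixed negative slope of 0.01 are all assumptions.

import torch
import torch.nn as nn

class LeLeLU(nn.Module):
    # Hypothetical formulation (inferred from the name, not from the text above):
    #   f(x) = alpha * x          for x >= 0
    #   f(x) = alpha * slope * x  for x <  0
    # where alpha is learned jointly with the network weights.
    def __init__(self, slope: float = 0.01):
        super().__init__()
        # A single learnable scale per layer; a per-channel alpha would be an
        # equally plausible design choice.
        self.alpha = nn.Parameter(torch.ones(1))
        self.slope = slope

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Leaky-ReLU shape, globally scaled by the trainable alpha.
        return self.alpha * torch.where(x >= 0.0, x, self.slope * x)

# Usage: a drop-in replacement for nn.ReLU(); alpha is updated by the
# optimizer like any other parameter.
act = LeLeLU()
y = act(torch.randn(4, 8))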

Similar Articles

Simone Becarelli, Giacomo Bernabei, Giovanna Siracusa, Diego Baderna, Monica Ruffini Castiglione, Giampiero De Simone and Simona Di Gregorio    
To accelerate the depletion of total petroleum hydrocarbons, a hydrocarbonoclastic ascomycete, Lambertella sp. MUT 5852, was bioaugmented into dredged sediments co-composted with a lignocellulosic matrix. After only 28 days of incubation, a complete depl... see more
Journal: Water

 
Dario Guidotti, Laura Pandolfo and Luca Pulina    
Interest in machine learning and neural networks has increased significantly in recent years. However, their applications in safety-critical domains are limited by the lack of formal guarantees on their reliability and behavior. This paper shows rece... see more
Journal: Information

 
Felipe C. Farias, Teresa B. Ludermir and Carmelo J. A. Bastos-Filho    
In this paper, we propose a procedure for training several independent Multilayer Perceptron Neural Networks with different numbers of neurons and activation functions in parallel (ParallelMLPs) by exploring the principle of locality and par... see more
Journal: AI

 
Napsu Karmitsa, Sona Taheri, Kaisa Joki, Pauliina Paasivirta, Adil M. Bagirov and Marko M. Mäkelä    
In this paper, a new nonsmooth optimization-based algorithm for solving large-scale regression problems is introduced. The regression problem is modeled as a fully connected feedforward neural network with one hidden layer, piecewise linear activation, an... see more
Journal: Algorithms

 
Andrinandrasana David Rasamoelina, Ivan Cík, Peter Sincak, Marián Mach, Lukáš Hruška     pp. 95-109