Applied Sciences, Vol. 9, No. 8 (2019)

Compressed Learning of Deep Neural Networks for OpenCL-Capable Embedded Systems

Sangkyun Lee and Jeonghyun Lee    

Abstract

Deep neural networks (DNNs) have been quite successful in solving many complex learning problems. However, DNNs tend to have a large number of learning parameters, leading to large memory and computation requirements. In this paper, we propose a model compression framework for efficient training and inference of deep neural networks on embedded systems. Our framework provides data structures and kernels for OpenCL-based parallel forward and backward computation in a compressed form. In particular, our method learns sparse representations of parameters using ℓ1-based sparse coding during training, storing them in compressed sparse matrices. Unlike previous works, our method does not require a pre-trained model as input and is therefore more versatile across application environments. Even though the use of ℓ1-based sparse coding for model compression is not new, we show that it can be far more effective than previously reported when we use proximal point algorithms and the technique of debiasing. Our experiments show that our method can produce minimal learning models suitable for small embedded devices.
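
To make the abstract concrete, below is a minimal NumPy/SciPy sketch of its three main ingredients: an ℓ1-regularized proximal-gradient (ISTA-style) update built on soft-thresholding, a debiasing pass that re-fits only the surviving nonzero weights, and storage of the result in a compressed sparse matrix. This is an illustration under simplifying assumptions, not the authors' OpenCL implementation; the toy least-squares layer and the values of lr, lam, and the iteration counts are illustrative choices.

import numpy as np
from scipy import sparse

def soft_threshold(w, t):
    # Proximal operator of t * ||w||_1: element-wise soft-thresholding.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def prox_step(W, grad, lr, lam):
    # One proximal-gradient update: plain gradient step, then the l1 prox.
    return soft_threshold(W - lr * grad, lr * lam)

def debias(W_sparse, X, Y, lr=0.05, iters=300):
    # Debiasing: keep the sparsity pattern fixed and re-fit only the
    # surviving weights of the toy layer Y ~ X @ W, removing the shrinkage
    # bias introduced by the l1 penalty.
    mask = (W_sparse != 0.0)
    W = W_sparse.copy()
    for _ in range(iters):
        grad = X.T @ (X @ W - Y) / X.shape[0]
        W -= lr * grad * mask
    return W

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 64))                      # toy inputs
W_true = sparse.random(64, 32, density=0.1, random_state=0).toarray()
Y = X @ W_true                                          # toy targets

W = np.zeros((64, 32))
lr, lam = 0.05, 0.02
for _ in range(500):            # l1-regularized training via proximal steps
    grad = X.T @ (X @ W - Y) / X.shape[0]
    W = prox_step(W, grad, lr, lam)

W = debias(W, X, Y)             # re-fit the surviving weights
W_csr = sparse.csr_matrix(W)    # compressed sparse storage for inference
print("nonzero weights:", W_csr.nnz, "out of", W.size)

In the paper's setting, the same prox-then-debias pattern would presumably be applied layer-wise during DNN training, with the resulting compressed sparse matrices consumed by OpenCL kernels for forward and backward computation; the sketch above only demonstrates the optimization and storage idea on a single linear layer.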

Similar articles

Miu Sakaida, Takaaki Yoshimura, Minghui Tang, Shota Ichikawa and Hiroyuki Sugimori    
Convolutional neural networks (CNNs) in deep learning have input pixel limitations, which lead to lost information regarding microcalcification when mammography images are compressed. Segmenting images into patches retains the original resolution when i... see more
Journal: Algorithms

 
Jonathan Miquel, Laurent Latorre and Simon Chamaillé-Jammes    
Biologging refers to the use of animal-borne recording devices to study wildlife behavior. In the case of audio recording, such devices generate large amounts of data over several months, and thus require some level of processing automation for the raw d... see more

 
Elena Loli Piccolomini, Marco Prato, Margherita Scipione and Andrea Sebastiani    
In this paper, we propose a new deep learning approach based on unfolded neural networks for the reconstruction of X-ray computed tomography images from few views. We start from a model-based approach in a compressed sensing framework, described by the m... see more
Journal: Algorithms

 
Artemiy Belousov, Ivan Kisel, Robin Lakos and Akhil Mithran    
Algorithms optimized for high-performance computing, which ensure both speed and accuracy, are crucial for real-time data analysis in heavy-ion physics experiments. The application of neural networks and other machine learning methodologies, which are fa... see more
Journal: Algorithms

 
Carmine Paolino, Alessio Antolini, Francesco Zavalloni, Andrea Lico, Eleonora Franchi Scarselli, Mauro Mangia, Alex Marchioni, Fabio Pareschi, Gianluca Setti, Riccardo Rovatti, Mattia Luigi Torres, Marcella Carissimi and Marco Pasotti    
Analog In-Memory Computing (AIMC) is a novel paradigm that seeks to avoid unnecessary data transfers by distributing computation within memory elements. One such operation is matrix-vector multiplication (MVM), a workhorse of many fiel... see more