6 Articles

 
Online
Guido Bologna
The explainability of connectionist models remains an ongoing research issue. Before the advent of deep learning, propositional rules were generated from Multi-Layer Perceptrons (MLPs) to explain how they classify data. This type of explanation techn…
Journal: Information    Format: Electronic

 
Online
Guido Bologna
In machine learning, ensembles based on Multi-Layer Perceptrons (MLPs) or decision trees are considered successful models. However, explaining their responses is a complex problem that requires new methods of interpretation. A n…
Journal: Algorithms    Format: Electronic

 
Online
Guido Bologna and Yoichi Hayashi
A natural way to determine the knowledge embedded within connectionist models is to generate symbolic rules. Nevertheless, extracting rules from Multi-Layer Perceptrons (MLPs) is NP-hard. With the advent of social networks, techniques applied to Sentimen…
Journal: Big Data and Cognitive Computing    Format: Electronic
