Future Internet, Vol. 15, No. 7 (2023)
ARTICLE
TITLE

Synonyms, Antonyms and Factual Knowledge in BERT Heads

Lorenzo Serina, Luca Putelli, Alfonso Emilio Gerevini and Ivan Serina

Abstract

In recent years, many studies have been devoted to uncovering the inner workings of Transformer-based models such as BERT, attempting to identify what information is contained within them. However, little is known about how these models store this information across their millions of parameters and which parts of the architecture are most important. In this work, we propose an approach to identify the self-attention mechanisms, called heads, that contain semantic and real-world factual knowledge in BERT. Our approach combines a metric computed from attention weights with a standard clustering algorithm for extracting the most relevant connections between tokens in a head. In our experimental analysis, we focus on how heads can connect synonyms, antonyms and several types of factual knowledge regarding subjects such as geography and medicine.
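To make the general pipeline concrete, the sketch below shows how attention weights for a single BERT head can be extracted and clustered to separate strong token-to-token connections from the near-zero background. It is only an illustration of the idea, not the authors' implementation: the choice of the Hugging Face bert-base-uncased checkpoint, the use of 2-means over the flattened weights as the "standard clustering algorithm", and the (layer, head) indices in the usage example are all assumptions, since the abstract does not specify them.

```python
import torch
from sklearn.cluster import KMeans
from transformers import BertModel, BertTokenizer

# Assumption: bert-base-uncased; any BERT checkpoint with attention outputs works.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

def head_connections(sentence: str, layer: int, head: int):
    """Return the strongest token-to-token connections in one attention head."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # outputs.attentions: one (batch, heads, seq, seq) tensor per layer
    attn = outputs.attentions[layer][0, head]  # (seq, seq) weights for this head
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])

    # Cluster the flattened weights into two groups; the group containing the
    # maximum weight is taken as the "relevant connections" cluster. (2-means
    # is an assumption; the paper only says a standard clustering algorithm.)
    weights = attn.flatten().unsqueeze(1).numpy()
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(weights)
    relevant = labels == labels[weights.argmax()]

    seq = len(tokens)
    return [
        (tokens[i], tokens[j], round(attn[i, j].item(), 3))
        for i in range(seq)
        for j in range(seq)
        if relevant[i * seq + j]
    ]

# Example: inspect which tokens a head links in a sentence with an antonym pair.
# The layer/head indices are arbitrary placeholders for illustration.
for src, dst, w in head_connections("The water was hot, not cold.", layer=9, head=7):
    print(f"{src:>8} -> {dst:<8} {w}")
```

Under this kind of scheme, a head that consistently places pairs such as "hot"/"cold" in the relevant cluster across many sentences would be a candidate for encoding antonymy; the same test can be repeated with synonym pairs or factual pairs (e.g., a city and its country) to probe other kinds of knowledge.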