
How Transformer-Based LLMs Extract Knowledge From Their Parameters


In recent times, transformer-based large language models (LLMs) have become extremely popular due to their ability to capture and store factual knowledge. However, how these models extract factual associations during inference remains relatively underexplored. A recent study by researchers from Google DeepMind, Tel Aviv University, and Google Research aimed to examine the internal mechanisms by which transformer-based LLMs store and extract factual associations.

The study proposed an information flow approach to analyze how the model predicts the correct attribute and how internal representations evolve across layers to generate outputs. Specifically, the researchers focused on decoder-only LLMs and identified critical computational points related to the relation and subject positions. They achieved this by using a "knockout" technique that blocks the last position from attending to other positions at specific layers, and then observing the impact on inference.
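To make the idea concrete, here is a minimal, hypothetical PyTorch sketch of such an attention knockout: it masks the pre-softmax attention scores so that a chosen query position (e.g., the last token) cannot attend to a set of blocked key positions at a given layer. The function name and tensor shapes are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of an attention "knockout": zero out the attention the
# last position pays to chosen source positions by masking pre-softmax scores.
import torch
import torch.nn.functional as F

def knocked_out_attention(scores: torch.Tensor,
                          query_pos: int,
                          blocked_key_pos: list) -> torch.Tensor:
    """scores: [n_heads, seq_len, seq_len] pre-softmax attention scores."""
    scores = scores.clone()
    # Prevent the query position (e.g. the last token) from attending
    # to the blocked key positions (e.g. the subject tokens).
    scores[:, query_pos, blocked_key_pos] = float("-inf")
    return F.softmax(scores, dim=-1)

# Toy usage: 4 heads, sequence of 6 tokens; block last token -> positions 1-2.
scores = torch.randn(4, 6, 6)
probs = knocked_out_attention(scores, query_pos=5, blocked_key_pos=[1, 2])
assert torch.allclose(probs[:, 5, [1, 2]], torch.zeros(4, 2))
```

In an actual experiment, a mask like this would be applied inside the model's attention modules over a window of layers, and the change in the predicted attribute's probability would be recorded.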

To further pinpoint where attribute extraction occurs, the researchers analyzed the information propagating at these critical points and the preceding process of representation construction. They achieved this through additional interventions to the model's multi-head self-attention (MHSA) and multi-layer perceptron (MLP) sublayers, and projections to the vocabulary.
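One common way to carry out this kind of vocabulary-space inspection is a logit-lens-style projection of an intermediate hidden state through the model's final layer norm and output (unembedding) matrix. The sketch below uses randomly initialized stand-ins (`hidden`, `final_ln`, `unembed`) and illustrates only the general technique, not the authors' code.

```python
# Hedged sketch: inspect which tokens an intermediate hidden state "encodes"
# by projecting it onto the vocabulary. All tensors here are random stand-ins
# for a real decoder-only model's components.
import torch

d_model, vocab_size = 64, 1000
hidden = torch.randn(d_model)                 # residual-stream state at some layer/position
final_ln = torch.nn.LayerNorm(d_model)        # the model's final layer norm
unembed = torch.randn(vocab_size, d_model)    # output (unembedding) matrix

logits = final_ln(hidden) @ unembed.T         # project into vocabulary space
top_tokens = torch.topk(logits, k=5).indices  # token ids most strongly encoded
print(top_tokens.tolist())
```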


The researchers identified an internal mechanism for attribute extraction based on a subject enrichment process and an attribute extraction operation. Specifically, information about the subject is accumulated in the representation of the last subject token across the early layers of the model, while the relation is passed to the last token. Finally, the last token uses the relation to extract the corresponding attribute from the enriched subject representation via the attention head parameters.
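This extraction step can be pictured as a single attention head whose query is formed from the relation-bearing last token and whose keys and values come from the enriched subject position; the head's output projection then writes the attribute into the last token's residual stream. The sketch below is a schematic, hypothetical illustration of that read-out, with randomly initialized weights and made-up names such as `resid` and `W_o`.

```python
# Illustrative (not the paper's code): one attention head moving information
# from the enriched subject representation to the last position.
import torch
import torch.nn.functional as F

d_model, d_head, seq_len = 64, 16, 5
W_q, W_k, W_v = (torch.randn(d_model, d_head) for _ in range(3))
W_o = torch.randn(d_head, d_model)

resid = torch.randn(seq_len, d_model)  # residual-stream states; one position is the last subject token
last = resid[-1]                       # last-token state, carrying the relation

q = last @ W_q                         # relation-dependent query
k = resid @ W_k                        # keys over all positions (incl. the enriched subject)
v = resid @ W_v                        # values: what each position can contribute

attn = F.softmax((q @ k.T) / d_head ** 0.5, dim=-1)  # ideally peaks on the subject position
attribute_update = (attn @ v) @ W_o                  # written back to the last position
print(attribute_update.shape)                        # torch.Size([64])
```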

The findings offer insights into how factual associations are stored and extracted internally in LLMs. The researchers believe these findings could open new research directions for knowledge localization and model editing. For instance, the study's approach could be used to identify the internal mechanisms by which LLMs acquire and store biased information and to develop methods for mitigating such biases.

Overall, this study highlights the importance of examining the internal mechanisms by which transformer-based LLMs store and extract factual associations. By understanding these mechanisms, researchers can develop more effective methods for improving model performance and reducing biases. Moreover, the study's approach could be applied to other areas of natural language processing, such as sentiment analysis and machine translation, to better understand how these models operate internally.


Check out the Paper. Don't forget to join our 20k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more. If you have any questions regarding the above article or if we missed anything, feel free to email us at Asif@marktechpost.com



Niharika is a Technical Consulting Intern at Marktechpost. She is a third-year undergraduate, currently pursuing her B.Tech from the Indian Institute of Technology (IIT), Kharagpur. She is a highly enthusiastic individual with a keen interest in machine learning, data science, and AI, and an avid reader of the latest developments in these fields.

