Unlocking the Brain’s Language Response: How GPT Models Predict and Influence Neural Activity

Recent advances in machine learning and artificial intelligence (AI) are being applied across virtually every field. These systems have been made possible by advances in computing power, access to vast amounts of data, and improvements in training techniques. Large language models (LLMs), which require enormous amounts of data to train, can generate human-like language for many applications.

A new study by researchers from MIT and Harvard University offers new insights into predicting how the human brain responds to language. The researchers note that this may be the first AI model capable of both driving and suppressing responses in the human language network. Language processing relies on the language network, a set of brain areas located primarily in the left hemisphere, including parts of the frontal and temporal lobes. Prior research has examined how this network functions, but much remains unknown about the mechanisms underlying language comprehension.

In this study, the researchers evaluated how effectively LLMs predict brain responses to various linguistic inputs, and sought to better understand which characteristics of a stimulus drive or suppress responses in the human language network. They formulated an encoding model based on a GPT-style LLM to predict the brain’s responses to arbitrary sentences presented to participants. The encoding model used last-token sentence embeddings from GPT2-XL and was trained on responses from five participants to a dataset of 1,000 diverse, corpus-extracted sentences. Tested on held-out sentences, the model achieved a correlation coefficient of r = 0.38 between predicted and observed responses.
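The encoding-model setup described above can be sketched as a regularized linear map from sentence embeddings to brain responses. The snippet below is a minimal illustration only: the embeddings and "brain responses" are synthetic stand-ins (the real model used GPT2-XL last-token activations and fMRI data), and ridge regression is assumed as the regression method, a common choice for encoding models; the article does not specify the exact estimator.

```python
import numpy as np

# Illustrative encoding-model pipeline: map sentence embeddings to a
# scalar "brain response" with ridge regression, then evaluate on
# held-out sentences with Pearson correlation (the metric the study
# reports, r = 0.38). All data here is synthetic.
rng = np.random.default_rng(0)

n_train, n_test, d = 800, 200, 64  # sentence counts and embedding dim (illustrative)
X = rng.standard_normal((n_train + n_test, d))  # stand-in sentence embeddings
true_w = rng.standard_normal(d)                 # hidden linear relationship
y = X @ true_w + 5.0 * rng.standard_normal(n_train + n_test)  # noisy response

X_tr, y_tr = X[:n_train], y[:n_train]
X_te, y_te = X[n_train:], y[n_train:]

# Closed-form ridge regression: w = (X'X + lam*I)^(-1) X'y
lam = 10.0
w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)

# Predict held-out responses and compute the Pearson correlation.
pred = X_te @ w
r = np.corrcoef(pred, y_te)[0, 1]
print(f"held-out r = {r:.2f}")
```

Because the synthetic data has a genuine linear signal, the held-out correlation is positive; with real fMRI data the ceiling is much lower, which is why a cross-subject r of 0.38 is notable.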

To further evaluate the model’s robustness, the researchers ran several additional tests using alternative methods for obtaining sentence embeddings, as well as embeddings from another LLM architecture. The model maintained high predictive performance across these tests, and its predictions remained accurate when restricted to anatomically defined language regions.
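One of the robustness checks mentioned above swaps in alternative sentence-embedding methods. A minimal sketch of two such strategies, assuming a matrix of per-token activations from an LLM (synthetic here): taking the last token's activation, as in the main model, versus mean-pooling over all tokens.

```python
import numpy as np

# Two common ways to collapse per-token LLM activations into a single
# sentence embedding. The token activations below are synthetic.
rng = np.random.default_rng(1)
tokens = rng.standard_normal((12, 64))  # 12 tokens, 64-dim activations each

last_token_emb = tokens[-1]             # last-token embedding (main model's choice)
mean_pooled_emb = tokens.mean(axis=0)   # alternative: average over all tokens

print(last_token_emb.shape, mean_pooled_emb.shape)
```

Either strategy yields a fixed-size vector regardless of sentence length, which is what lets the same downstream regression be reused when the embedding method changes.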

The researchers emphasized that these findings hold substantial implications for both fundamental neuroscience research and real-world applications. The ability to manipulate neural responses within the language network could open new avenues for studying language processing and, potentially, for treating disorders that affect language function. In addition, using LLMs as models of human language processing could improve natural language processing technologies such as virtual assistants and chatbots.

In conclusion, this study is a significant step toward understanding the relationship and functional similarity between AI and the human brain. By using LLMs, researchers can probe the mysteries of language processing and develop novel strategies for influencing neural activity. As AI and ML continue to evolve, more exciting discoveries in this domain can be expected.


Check out the Paper. All credit for this research goes to the researchers of this project.



Rachit Ranjan is a consulting intern at MarktechPost. He is currently pursuing his B.Tech at the Indian Institute of Technology (IIT) Patna. He is actively shaping his career in the field of Artificial Intelligence and Data Science and is passionate about and dedicated to exploring these fields.


