
This AI Paper Presents A Comprehensive Study of Knowledge Editing for Large Language Models


Recently, GPT-4 and other Large Language Models (LLMs) have demonstrated a formidable capability for Natural Language Processing (NLP): memorizing extensive amounts of knowledge, possibly more than humans do. The success of LLMs in coping with massive amounts of knowledge has led to the development of models of the generative process that are more concise, coherent, and interpretable, a "world model," if you will.

Additional insights come from LLMs' capability to understand and manipulate intricate strategic contexts; for instance, previous research has shown that transformers trained to predict the next token in board games like Othello build detailed models of the current game state. Researchers have also discovered the ability of LLMs to learn representations that reflect perceptual and symbolic notions and to track subjects' boolean states within certain situations. With this two-pronged capability, LLMs can store massive amounts of knowledge and organize it in ways that mimic human thought processes, making them ideal knowledge bases.
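The probing methodology behind findings like these can be illustrated with a short sketch: freeze the language model, collect hidden states from one layer, and train a linear classifier to decode the latent state of interest (a board square in Othello, or a subject's boolean state). The model, dimensions, and data below are placeholder assumptions, not the setup from any specific study.

```python
import torch
import torch.nn as nn

hidden_dim = 512   # assumed hidden size of the frozen language model
num_classes = 3    # e.g., empty / black / white for one Othello square

probe = nn.Linear(hidden_dim, num_classes)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_probe(hidden_states, labels, epochs=10):
    """hidden_states: (N, hidden_dim) activations collected from one layer;
    labels: (N,) ground-truth latent states for the same positions."""
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(probe(hidden_states), labels)
        loss.backward()
        optimizer.step()
    # High accuracy on held-out data suggests the layer linearly
    # encodes the latent state.
    return (probe(hidden_states).argmax(-1) == labels).float().mean()
```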

Factual errors, the potential to generate harmful content, and out-of-date information are among the limitations of LLMs that stem from the limits of their training. Retraining a model to fix these problems would cost both money and time. In response, there has been a proliferation of LLM-centric knowledge editing approaches in recent years, allowing for efficient, on-the-fly model tweaks. Understanding how LLMs display and process information is critical for guaranteeing the fairness and safety of Artificial Intelligence (AI) systems; these techniques focus on specific areas for change without affecting overall performance. The primary goal of this work is to survey the history and current state of knowledge editing for LLMs.

New research by a team from Zhejiang University, the National University of Singapore, the University of California, Ant Group, and Alibaba Group takes the first step by providing an overview of Transformers' design, the way LLMs store knowledge, and related approaches such as parameter-efficient fine-tuning, knowledge augmentation, continual learning, and machine unlearning. The team then lays out the groundwork, formally defines the knowledge editing problem, and provides a new taxonomy that brings together theories from education and cognitive science to offer a coherent perspective on knowledge editing techniques. Specifically, they classify knowledge editing strategies for LLMs into three families: resorting to external knowledge, merging knowledge into the model, and editing internal knowledge.

The researchers present their classification criteria in the paper as follows (a minimal code sketch of each strategy appears after the list):

  • Drawing on Information from Other Sources: This method is analogous to the recognition phase of human cognition, which, upon an initial encounter with new information, requires exposure to the information within an appropriate context. 
  • Integrating Experiential Data Into The Model: By drawing parallels between the incoming information and the model's current knowledge, this method is analogous to the association phase in human cognitive processes. These methods combine a learned knowledge representation with the model's output or intermediate output, or use it in their place. 
  • Revising Inherent Information: Revising knowledge in this fashion is analogous to the "mastery phase" of learning something new. It entails modifying the LLM's weights so that the knowledge is incorporated directly into the model's parameters.
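To make the taxonomy concrete, here is a minimal sketch of each strategy family. These are illustrative toy implementations under our own assumptions (the gating constant and the rank-one update form are placeholders), not the paper's code.

```python
import torch

# 1) Resorting to external knowledge: leave the weights untouched and
#    prepend a retrieved fact to the prompt (in-context editing).
def edit_with_context(prompt: str, retrieved_fact: str) -> str:
    return f"Fact: {retrieved_fact}\n{prompt}"

# 2) Merging knowledge into the model: combine a learned representation
#    (e.g., from an adapter or a patch memory) with an intermediate output.
def merge_hidden(hidden: torch.Tensor, learned_patch: torch.Tensor,
                 gate: float = 0.5) -> torch.Tensor:
    return (1 - gate) * hidden + gate * learned_patch

# 3) Editing internal knowledge: modify the weights directly. A rank-one
#    update of a linear layer, in the spirit of locate-then-edit methods:
def rank_one_edit(weight: torch.Tensor, key: torch.Tensor,
                  value_delta: torch.Tensor) -> torch.Tensor:
    """weight: (out, in); key: (in,) activation that should trigger the
    new fact; value_delta: (out,) desired change in the layer's output
    whenever `key` is the input."""
    key = key / key.norm() ** 2   # so that (W + u k^T) @ key = W @ key + u
    return weight + torch.outer(value_delta, key)
```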

Subsequently, twelve natural language processing datasets are subjected to thorough experiments in this work, with performance, usability, underlying mechanisms, and other issues rigorously considered in their design.

To provide a fair comparison and show how well these methods work in knowledge insertion, modification, and erasure settings, the researchers construct a new benchmark called KnowEdit and report the empirical results of state-of-the-art LLM knowledge editing techniques. 
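Benchmarks in this area typically score an edit along three standard axes: reliability (the edited prompt now yields the new target), generalization (paraphrases do too), and locality (unrelated prompts are unchanged). The sketch below is a hedged illustration of that scoring scheme; `generate` and the field names of `case` are assumptions, and KnowEdit's exact protocol may differ.

```python
def exact_match(prediction: str, target: str) -> float:
    return float(prediction.strip().lower() == target.strip().lower())

def evaluate_edit(generate, case: dict) -> dict:
    """generate: callable mapping a prompt to the edited model's answer.
    case: the edit prompt, its paraphrases, locality probes, the new
    target, and the pre-edit answers for the locality probes."""
    reliability = exact_match(generate(case["prompt"]), case["target_new"])
    generalization = sum(
        exact_match(generate(p), case["target_new"])
        for p in case["paraphrases"]
    ) / len(case["paraphrases"])
    # Locality: answers to unrelated prompts should match pre-edit behavior.
    locality = sum(
        exact_match(generate(p), old)
        for p, old in zip(case["locality_prompts"], case["pre_edit_answers"])
    ) / len(case["locality_prompts"])
    return {"reliability": reliability,
            "generalization": generalization,
            "locality": locality}
```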

The researchers show how knowledge editing affects both general tasks and multi-task knowledge editing, suggesting that modern knowledge editing methods successfully update facts with little impact on the model's cognitive abilities and adaptability across different knowledge domains. In edited LLMs, they find that one or several columns in the value layer are heavily emphasized. They also suggest that LLMs may arrive at answers either by recalling information from their pre-training corpus or through a multi-step reasoning process. 

The findings suggest that knowledge-locating processes, such as causal analysis, concentrate on areas related to the entity in question rather than the whole factual context. Moreover, the team explores the potential for knowledge editing of LLMs to have unintended side effects, an important element to consider thoroughly. 
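Causal analysis of this kind is often done with activation patching: run the model on a corrupted prompt, restore one hidden state from the clean run, and measure how much of the correct prediction comes back. The sketch below assumes a Hugging Face GPT-2-style model (`model.transformer.h`); for other architectures the layer path and output format will differ, and this is not the paper's implementation.

```python
import torch

def causal_trace(model, clean_ids, corrupt_ids, layer, position):
    """Score how much restoring the hidden state at (layer, position)
    from the clean run recovers the clean prediction on corrupt_ids."""
    with torch.no_grad():
        clean_out = model(clean_ids, output_hidden_states=True)
        # hidden_states[0] is the embedding output, so block `layer`
        # produces hidden_states[layer + 1].
        clean_hidden = clean_out.hidden_states[layer + 1][:, position]

    def restore_hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden[:, position] = clean_hidden  # patch in the clean state
        return output

    handle = model.transformer.h[layer].register_forward_hook(restore_hook)
    try:
        with torch.no_grad():
            patched_logits = model(corrupt_ids).logits[:, -1]
    finally:
        handle.remove()
    # Compare against clean and corrupted baselines to score this site's
    # contribution to recalling the fact.
    return patched_logits
```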

Lastly, they explore the vast array of uses for knowledge editing, looking at its possibilities from several angles. These uses include trustworthy AI, efficient machine learning, AI-generated content (AIGC), and individualized agents in human-computer interaction. The researchers hope this study may spark new lines of inquiry into LLMs with an eye toward efficiency and creativity. They have released all of their resources, including code, data splits, and trained model checkpoints, to the public to facilitate and encourage further study.


Check out the Paper. All credit for this research goes to the researchers of this project. Also, don't forget to follow us on Twitter. Join our 35k+ ML SubReddit, 41k+ Facebook Community, Discord Channel, and LinkedIn Group.

If you like our work, you will love our newsletter.



Dhanshree Shenwai is a Computer Science Engineer with experience in FinTech companies covering the Financial, Cards & Payments, and Banking domains, and a keen interest in applications of AI. She is passionate about exploring new technologies and advancements in today's evolving world that make everyone's life easier.


