
UC Berkeley Researchers Introduce Ghostbuster: A SOTA AI Method for Detecting LLM-Generated Text


ChatGPT has made it easy to produce large volumes of fluent text on a wide range of topics. But how good is that text really? Language models are prone to factual errors and hallucinations, so knowing whether such tools have been used to ghostwrite news articles or other informative text helps readers decide whether to trust a source. The rapid advancement of these models has also raised concerns about the authenticity and originality of text, and many educational institutions have restricted the use of ChatGPT because content is so easy to produce with it.

LLMs like ChatGPT generate responses based on patterns and knowledge in the vast amount of text they were trained on. They do not reproduce responses verbatim but generate new content by predicting the most suitable continuation for a given input. Nevertheless, responses may draw upon and synthesize information from the training data, leading to similarities with existing content. It is important to note that although LLMs aim for originality and accuracy, they are not infallible. Users should exercise discretion and not rely solely on AI-generated content for critical decision-making or situations requiring expert advice.

Many detection frameworks exist, such as DetectGPT and GPTZero, that try to determine whether an LLM generated a given piece of text. However, their performance falters on datasets they were not originally evaluated on. Researchers from the University of California, Berkeley present Ghostbuster, a detection method based on structured search and linear classification.

Ghostbuster uses a three-stage training process: probability computation, feature selection, and classifier training. First, it converts each document into a series of vectors by computing per-token probabilities under a series of language models. Then, it selects features by running a structured search over a space of vector and scalar functions that combine these probabilities: it defines a set of operations for combining the probabilities and runs forward feature selection over the resulting candidates. Finally, it trains a simple classifier on the best probability-based features and a few additional manually chosen features.
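To make the pipeline concrete, here is a minimal sketch of the three stages in Python, assuming per-token probabilities have already been computed. The function names, the candidate scalar features, and the greedy selection loop are illustrative simplifications, not the actual Ghostbuster implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def scalar_features(token_probs):
    """Collapse a vector of per-token probabilities into candidate scalar features."""
    p = np.asarray(token_probs)
    return {
        "mean_logprob": float(np.log(p).mean()),
        "min_prob": float(p.min()),
        "max_prob": float(p.max()),
        "var_prob": float(p.var()),
    }

def build_feature_matrix(docs_token_probs):
    """Stage 1 + 2: turn each document's token probabilities into a fixed-length feature vector."""
    return np.array([list(scalar_features(p).values()) for p in docs_token_probs])

def forward_select(X, y, max_features=3):
    """Greedy forward feature selection (a stand-in for the structured search
    over probability-combining operations described in the paper)."""
    chosen, remaining = [], list(range(X.shape[1]))
    while remaining and len(chosen) < max_features:
        best_j, best_score = None, -1.0
        for j in remaining:
            cols = chosen + [j]
            clf = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
            score = clf.score(X[:, cols], y)
            if score > best_score:
                best_j, best_score = j, score
        chosen.append(best_j)
        remaining.remove(best_j)
    return chosen

# Toy usage with synthetic per-token probabilities (20 "AI-like" and 20 "human-like" documents).
rng = np.random.default_rng(0)
docs = [rng.uniform(0.4, 0.9, size=50) for _ in range(20)]
docs += [rng.uniform(0.05, 0.6, size=50) for _ in range(20)]
labels = np.array([1] * 20 + [0] * 20)

X = build_feature_matrix(docs)
selected = forward_select(X, labels)
classifier = LogisticRegression(max_iter=1000).fit(X[:, selected], labels)
```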

Ghostbuster’s classifiers are trained on combinations of the probability-based features chosen through structured search and seven additional features based on word length and the largest token probabilities. These additional features are intended to capture qualitative heuristics observed about AI-generated text.
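As an illustration only, the snippet below shows the kind of hand-chosen features such heuristics might translate into; the exact seven features used by Ghostbuster may differ.

```python
import numpy as np

def handcrafted_features(text, token_probs, k=3):
    """Hypothetical hand-chosen features: word count plus the k largest token probabilities."""
    p = np.sort(np.asarray(token_probs))[::-1]  # probabilities, largest first
    feats = {"num_words": len(text.split())}
    for i in range(k):
        feats[f"top_{i + 1}_token_prob"] = float(p[i]) if i < len(p) else 0.0
    return feats

print(handcrafted_features("A short example document.", [0.91, 0.42, 0.77, 0.15]))
```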

Ghostbuster’s performance gains over previous models are robust with respect to the similarity of the training and testing datasets. Ghostbuster achieved 97.0 F1 averaged across all conditions, outperforming DetectGPT by 39.6 F1 and GPTZero by 7.5 F1. It also outperformed the RoBERTa baseline on all domains except creative writing out-of-domain, and RoBERTa showed much worse out-of-domain performance overall. The F1 score is a metric commonly used to evaluate the performance of a classification model; it combines precision and recall into a single value and is particularly useful when dealing with imbalanced datasets.
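For readers unfamiliar with the metric, F1 is the harmonic mean of precision and recall. The toy example below, with made-up labels and predictions, shows how it is computed with scikit-learn.

```python
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 1, 1, 0, 0, 1, 0, 1]  # ground-truth labels (1 = AI-generated)
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]  # classifier predictions

p = precision_score(y_true, y_pred)  # 0.8
r = recall_score(y_true, y_pred)     # 0.8
print(f1_score(y_true, y_pred))      # 2 * p * r / (p + r) = 0.8
```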


Check out the Paper and Blog Article. All credit for this research goes to the researchers of this project. Also, don’t forget to join our 33k+ ML SubReddit, 41k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.

If you like our work, you will love our newsletter.


Arshad is an intern at MarktechPost. He is currently pursuing his Integrated MSc in Physics at the Indian Institute of Technology Kharagpur. He believes that understanding things at a fundamental level leads to new discoveries, which in turn lead to advancements in technology. He is passionate about understanding nature at a fundamental level with the help of tools like mathematical models, ML models, and AI.


