This AI Research Dives Into The Limitations and Capabilities of Transformer Large Language Models (LLMs), Empirically and Theoretically, on Compositional Tasks

ChatGPT is trending, and millions of people are using it every single day. With its impressive ability to imitate humans on tasks such as question answering, generating original and creative content, summarizing large volumes of text, completing code, and powering highly useful virtual assistants, ChatGPT is making our lives easier. Developed by OpenAI, ChatGPT relies on the transformer architecture of GPT-3.5 and GPT-4 (Generative Pre-trained Transformer). GPT-4, the newest version of OpenAI's language models, is multimodal in nature, i.e., it accepts input in the form of both text and images, unlike its predecessors. Other Large Language Models (LLMs) like PaLM, LLaMA, and BERT are also being used in applications across domains such as healthcare, e-commerce, finance, and education.

In a recently released research paper, a team of researchers has highlighted the contrast between the impressive performance of LLMs like GPT on complex tasks and their struggles with seemingly simple ones. Diving into the limitations and capabilities of Transformer LLMs, the team has conducted experiments on three representative compositional tasks: multi-digit multiplication, logic grid puzzles, and a classic dynamic programming problem. These tasks require breaking a problem down into smaller steps and combining those steps to arrive at an exact solution.
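To make the notion of a compositional task concrete, here is a minimal Python sketch (our own illustration, not the paper's code) of the first task, multi-digit multiplication, broken into the single-digit steps whose results must be combined to reach the final answer.

```python
# Illustrative sketch (not from the paper): multi-digit multiplication
# decomposed into the smaller single-digit steps that must be composed correctly.

def multiply_by_digits(a: int, b: int) -> int:
    """Multiply a and b by accumulating single-digit partial products."""
    total = 0
    for i, da in enumerate(reversed(str(a))):      # digits of a, least significant first
        for j, db in enumerate(reversed(str(b))):  # digits of b, least significant first
            partial = int(da) * int(db)            # one single-digit multiplication step
            total += partial * 10 ** (i + j)       # shift-and-add composition step
    return total

assert multiply_by_digits(123, 456) == 123 * 456   # 56088
```

A model that has genuinely learned the procedure must get every one of these small steps right and combine them correctly, which is exactly what makes the task a useful probe of multi-step reasoning.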

With the aim of studying the limits of Transformers in solving compositional tasks that require multi-step reasoning, the authors have proposed two hypotheses. The first is that Transformers accomplish these tasks by linearizing multi-step reasoning into path matching, relying on pattern matching and shortcut learning rather than truly comprehending and applying the underlying computational rules needed to derive correct solutions. This approach enables fast and accurate predictions on patterns similar to those seen during training but fails to generalize to uncommon, more complex examples. The second hypothesis states that Transformers may have inherent limitations when solving high-complexity compositional tasks with novel patterns: early computational errors can propagate and compound in later steps, preventing the models from ever arriving at the correct solution.
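A small, hypothetical illustration of the second hypothesis: if one early partial product is wrong, every quantity computed from it is also wrong, so the mistake compounds into the final answer. The `faulty_step` parameter is our own device for injecting the error; it is not part of the paper's setup.

```python
# Hypothetical illustration (not the paper's code) of error propagation:
# one wrong partial product early in the computation corrupts every later
# step that depends on it, so the final answer comes out wrong.

def multiply_with_fault(a: int, b: int, faulty_step: int = -1) -> int:
    """Digit-by-digit multiplication; optionally corrupt one partial product."""
    total, step = 0, 0
    for i, da in enumerate(reversed(str(a))):
        for j, db in enumerate(reversed(str(b))):
            partial = int(da) * int(db)
            if step == faulty_step:
                partial += 1                       # simulate one early computational error
            total += partial * 10 ** (i + j)
            step += 1
    return total

print(multiply_with_fault(123, 456))                 # 56088 (correct)
print(multiply_with_fault(123, 456, faulty_step=0))  # 56089: the early error survives to the end
```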


To investigate the two hypotheses, the authors have formulated the compositional tasks as computation graphs. These graphs decompose the process of solving a problem into smaller, more manageable submodular functional steps, enabling structured measures of problem complexity and the verbalization of computation steps as input sequences to language models. They also use information gain to predict, without running full computations within the graph, which patterns the models are likely to learn from the underlying task distribution.
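Below is a minimal sketch of what such a computation graph might look like for 12 * 34, with nodes as intermediate values and edges as the smaller steps they depend on. The structure and node names are our own illustration, not the paper's implementation; the depth and width of a graph like this are the kind of structured complexity measures the paper describes.

```python
# Minimal sketch (assumed structure, not the paper's implementation) of a
# computation graph for 12 * 34: leaf nodes are single-digit products,
# internal nodes combine the results of earlier steps.

graph = {
    "p1": {"inputs": [], "op": lambda: 2 * 4},                        # ones * ones
    "p2": {"inputs": [], "op": lambda: 2 * 3},                        # ones * tens
    "p3": {"inputs": [], "op": lambda: 1 * 4},                        # tens * ones
    "p4": {"inputs": [], "op": lambda: 1 * 3},                        # tens * tens
    "row1": {"inputs": ["p1", "p2"], "op": lambda p1, p2: p1 + 10 * p2},
    "row2": {"inputs": ["p3", "p4"], "op": lambda p3, p4: 10 * p3 + 100 * p4},
    "answer": {"inputs": ["row1", "row2"], "op": lambda r1, r2: r1 + r2},
}

def evaluate(node: str, cache: dict = None) -> int:
    """Recursively evaluate the graph; recursion depth mirrors reasoning depth."""
    cache = {} if cache is None else cache
    if node not in cache:
        args = [evaluate(parent, cache) for parent in graph[node]["inputs"]]
        cache[node] = graph[node]["op"](*args)
    return cache[node]

assert evaluate("answer") == 12 * 34  # 408
```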

Based on their empirical findings, the authors propose that Transformers handle compositional challenges by reducing multi-step reasoning to linearized subgraph matching. They also provide theoretical arguments, based on abstract multi-step reasoning problems, showing that as task complexity increases, Transformers' performance rapidly deteriorates. This suggests that the models may be inherently constrained in their ability to handle compositional problems of great complexity.

In conclusion, the empirical and theoretical results imply that rather than a thorough comprehension of the underlying reasoning processes, Transformers' performance is largely driven by pattern matching and subgraph matching, which also supports the idea that Transformers will find it difficult to handle increasingly complex tasks.


Check out the Paper. Don't forget to join our 22k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more. If you have any questions regarding the above article or if we missed anything, feel free to email us at Asif@marktechpost.com



Tanya Malhotra is a final-year undergraduate at the University of Petroleum & Energy Studies, Dehradun, pursuing a BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning.
She is a Data Science enthusiast with strong analytical and critical thinking skills, along with a keen interest in acquiring new skills, leading teams, and managing work in an organized manner.


