
AI’s Analogical Reasoning Abilities: Challenging Human Intelligence?


Analogical reasoning, the ability to solve unfamiliar problems by drawing parallels with known ones, has long been considered a distinctly human cognitive function. However, a groundbreaking study by UCLA psychologists presents compelling findings that may push us to rethink that assumption.

GPT-3: Matching Up to Human Intellect?

The UCLA research found that GPT-3, an AI language model developed by OpenAI, demonstrates reasoning capabilities nearly on par with those of college undergraduates, especially when tasked with solving problems akin to those seen in intelligence tests and standardized exams such as the SAT. This finding, published in the journal Nature Human Behaviour, raises an intriguing question: does GPT-3 emulate human reasoning as a byproduct of its extensive language training dataset, or is it tapping into a fundamentally new cognitive process?

The precise workings of GPT-3 remain concealed by OpenAI, leaving the UCLA researchers curious about the mechanism behind its analogical reasoning skills. Despite GPT-3’s laudable performance on certain reasoning tasks, the tool is not without flaws. Taylor Webb, the study’s first author and a postdoctoral researcher at UCLA, noted, “However impressive our findings, it’s important to emphasize that this system has significant limitations. GPT-3 can perform analogical reasoning, but it struggles with tasks that are trivial for humans, such as using tools to solve a physical task.”

GPT-3’s capabilities were put to the test using problems inspired by Raven’s Progressive Matrices, a test built around intricate sequences of shapes. By converting the images into a text format GPT-3 could parse, Webb ensured these were entirely new challenges for the AI. Compared against 40 UCLA undergraduates, GPT-3 not only matched human performance but also mirrored the mistakes humans made. The model correctly solved 80% of the problems, exceeding the average human score yet falling within the range of the top human performers.
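To make that conversion step concrete, here is a minimal sketch of how a Raven’s-style matrix might be rendered as plain text for a language model. The digit tokens, the grid layout, and the build_prompt helper are hypothetical illustrations, not the encoding the UCLA team actually used.

```python
# Hypothetical text encoding of a Raven's-style matrix problem.
# Each cell is a short token standing in for a shape; the model
# must infer the row/column pattern and fill in the blank cell.

matrix = [
    ["1", "2", "3"],
    ["2", "3", "4"],
    ["3", "4", "?"],  # "?" marks the cell the model must complete
]

answer_choices = ["3", "4", "5", "6"]

def build_prompt(matrix, choices):
    """Render the matrix and answer options as a plain-text prompt."""
    rows = "\n".join("  ".join(row) for row in matrix)
    opts = ", ".join(choices)
    return (
        "Below is a 3x3 grid that follows a pattern. "
        "One cell is marked '?'.\n\n"
        f"{rows}\n\n"
        f"Which option best completes the grid: {opts}?"
    )

if __name__ == "__main__":
    print(build_prompt(matrix, answer_choices))
```

Because the puzzle arrives as ordinary text, it can be posed to a text-only model such as GPT-3 without any image input, which is what made a direct comparison with human test-takers possible.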

The team further probed GPT-3’s prowess using unpublished SAT analogy questions, with the AI outperforming the human average. However, it faltered when asked to draw analogies between short stories, although the newer GPT-4 model showed improved results.
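A verbal SAT-style analogy can be serialized just as easily. The sketch below is again only an illustration; the word pairs are invented, since the study deliberately used unpublished questions.

```python
# Hypothetical SAT-style analogy prompt; the word pairs are invented
# examples, not items from the study's unpublished question set.
def analogy_prompt(a: str, b: str, c: str, choices: list[str]) -> str:
    """Pose an A : B :: C : ? analogy as a multiple-choice question."""
    opts = ", ".join(choices)
    return f"'{a}' is to '{b}' as '{c}' is to which of these: {opts}?"

print(analogy_prompt("glove", "hand", "sock", ["foot", "shoe", "toe", "leg"]))
```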

Bridging the AI-Human Cognition Divide

UCLA’s researchers aren’t stopping at mere comparisons. They have embarked on developing a computer model inspired by human cognition, continually comparing its abilities against those of commercial AI models. Keith Holyoak, a UCLA psychology professor and co-author, remarked, “Our psychology-inspired AI model outperformed other models on analogy problems until GPT-3’s most recent upgrade, which displayed equivalent or superior capabilities.”

However, the team identified certain areas where GPT-3 lagged, especially in tasks requiring comprehension of physical space. In challenges involving tool use, GPT-3’s solutions were markedly off the mark.

Hongjing Lu, the study’s senior author, expressed amazement at the leaps in technology over the past two years, particularly in AI’s capacity to reason. But whether these models genuinely “think” like humans or merely mimic human thought remains up for debate. Gaining real insight into AI’s cognitive processes would require access to the models’ backend, a step that could shape AI’s future trajectory.

Echoing that sentiment, Webb concludes, “Access to the backend of GPT models would immensely benefit AI and cognitive science researchers. Currently, we’re limited to inputs and outputs, and that isn’t as decisive as we’d like.”
