While AI has made tremendous progress and has become a beneficial tool in many domains, it is not a substitute for humans' unique qualities and capabilities. The most effective approach, in many cases, involves humans working alongside AI, leveraging one another's strengths to achieve the best outcomes. There are fundamental differences between human and artificial intelligence, and there are tasks and domains where human intelligence remains superior.
Humans can think creatively, imagine new concepts, and innovate. AI systems are limited by the data and patterns they have been trained on and often struggle with truly novel and creative tasks. However, the question is: can an average human outperform an AI model?
Researchers compared the creativity of humans (n = 256) with that of three current AI chatbots, ChatGPT3.5, ChatGPT4, and Copy.AI, using the alternate uses task (AUT), a divergent thinking task. It is a cognitive method used in psychology and creativity research to assess a person's ability to generate creative and novel ideas in response to a specific stimulus. Such tasks measure an individual's capacity for divergent thinking, the ability to think broadly and generate multiple solutions or ideas from a single problem.
Participants were asked to generate unusual and creative uses for everyday objects. The AUT consisted of four tasks with the objects rope, box, pencil, and candle. The human participants were instructed to aim for quality of ideas rather than relying solely on quantity. The chatbots were tested 11 times with the four object prompts in different sessions, and each of the four objects was tested only once within a given session.
They collected subjective creativity or originality ratings from six professionally trained human raters to evaluate the results. The order in which the responses within object categories were presented was randomized individually for each rater. Each rater's scores were averaged across all the responses a participant, or a chatbot in a session, gave for an object, and the final subjective score for each object was formed by averaging the six raters' scores.
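The two-stage averaging described above can be sketched in a few lines of Python. This is an illustrative sketch only; the function name and data layout are assumptions, not taken from the paper.

```python
# Sketch of the two-stage averaging of originality ratings.
# Layout assumption: one list per rater, each holding that rater's
# ratings for every response a participant (or chatbot session)
# gave for a single object.

def score_object(ratings_by_rater):
    # Stage 1: average each rater's ratings across all responses.
    per_rater_means = [sum(r) / len(r) for r in ratings_by_rater]
    # Stage 2: average the raters' means into one subjective score.
    return sum(per_rater_means) / len(per_rater_means)

# Hypothetical example with two raters and three responses to "rope":
print(score_object([[3.0, 4.0, 5.0], [2.0, 4.0, 6.0]]))  # 4.0
```

Averaging per rater first, before combining raters, keeps a rater who scored many responses from outweighing one who scored few.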
On average, the AI chatbots outperformed the human participants. While human responses included poor-quality ideas, the chatbots generally produced more creative responses. However, the best human ideas still matched or exceeded those of the chatbots. While this study highlights the potential of AI as a tool to enhance creativity, it also underscores the unique and complex nature of human creativity, which may be difficult for AI technology to fully replicate or surpass.
However, AI technology is developing rapidly, and the results may look different in half a year. Based on the current study, the clearest weakness in human performance lies in the relatively high proportion of poor-quality ideas, which were absent from chatbot responses. This weakness may be due to normal variations in human performance, including failures in associative and executive processes as well as motivational factors.
Check out the Paper. All credit for this research goes to the researchers on this project. Also, don't forget to join our 30k+ ML SubReddit, 40k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more. If you like our work, you will love our newsletter.
Arshad is an intern at MarktechPost. He is currently pursuing his Int. MSc Physics at the Indian Institute of Technology Kharagpur. Understanding things at a fundamental level leads to new discoveries, which in turn lead to advancements in technology. He is passionate about understanding nature at a fundamental level with the help of tools like mathematical models, ML models, and AI.