Humor can improve human performance and motivation and is crucial for building relationships. It is an effective tool for influencing mood and directing attention. Consequently, a computational sense of humor has the potential to greatly enhance human-computer interaction (HCI). Unfortunately, although computational humor is a long-standing research area, the systems built so far are far from “funny.” The problem is even considered AI-complete. Nonetheless, ongoing progress and recent machine learning (ML) breakthroughs open up a wide range of new applications and present fresh possibilities for natural language processing (NLP).
Transformer-based large language models (LLMs) increasingly capture and reflect implicit knowledge, including morality, humor, and stereotypes. Humor is often subliminal and driven by minute nuances, so these new properties of LLMs give cause for optimism about further progress in artificial humor. OpenAI’s ChatGPT has recently attracted much attention for its groundbreaking capabilities. Users can hold conversation-like exchanges with the model through the public chat API, and the system can respond to a wide range of questions while taking the preceding conversational context into account. As seen in Fig. 1, it can even tell jokes. ChatGPT is fun to use and engages on a human level.
Nonetheless, users quickly notice the model’s shortcomings while interacting with it. Although it produces text in nearly error-free English, ChatGPT still makes grammatical and content-related mistakes. In a preliminary exploration, the researchers noticed that ChatGPT tends to repeat the exact same jokes frequently, even though the jokes it delivered were valid and nuanced. These observations supported the hypothesis that the model did not create the jokes it produced; instead, they were copied from the training data or even hard-coded in a list. Because the system’s inner workings are not disclosed, the researchers ran several structured prompt-based experiments to learn about its behavior and enable inferences about how ChatGPT generates its output.
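The snippet below is a minimal sketch of this kind of repeated-prompting experiment, assuming the OpenAI Python client; the prompt wording, model name, and sample size are illustrative and not taken from the paper.

```python
# Sketch: repeatedly ask for a joke in independent conversations and count
# how often identical replies come back. Prompt, model, and sample size are
# illustrative assumptions, not the authors' exact setup.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """Send a single, context-free prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the ChatGPT endpoint
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    return response.choices[0].message.content.strip()


jokes = [ask("Can you tell me a joke, please?") for _ in range(100)]
counts = Counter(jokes)

print(f"{len(counts)} distinct jokes out of {len(jokes)} samples")
for joke, n in counts.most_common(5):
    print(n, joke[:80])
```

A heavily skewed count distribution (a handful of jokes accounting for most samples) would point toward memorized or hard-coded material rather than genuinely generated humor.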
Researchers from the German Aerospace Center (DLR), Technical University Darmstadt, and the Hessian Center for AI set out to understand, through a systematic prompt-based investigation, how well ChatGPT can capture human humor. Their main contribution is a set of three experimental conditions: joke generation, joke explanation, and joke detection. The vocabulary of artificial intelligence frequently draws comparisons to human traits, such as “neural networks” or the phrase “artificial intelligence” itself. Likewise, human-related words are used when discussing conversational agents, which aim to emulate human behavior as closely as possible; for example, ChatGPT “understands” or “explains.”
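For concreteness, here is a hedged sketch of how the three conditions could be framed as prompts, reusing the `ask()` helper from the snippet above; the exact wording and example jokes are placeholders, not the authors’ prompt set.

```python
# Illustrative prompts for the three experimental conditions; phrasing and
# example jokes are assumptions for demonstration purposes only.

# 1) Joke generation: ask for a joke with no further context.
generated = ask("Can you tell me a joke, please?")

# 2) Joke explanation: ask the model to explain why a given joke is funny.
joke = "Why did the scarecrow win an award? Because he was outstanding in his field."
explanation = ask(f"Can you explain why this joke is funny?\n\n{joke}")

# 3) Joke detection: ask the model to judge whether a given sentence is a joke.
sentence = "The weather report says it will rain tomorrow afternoon."
verdict = ask(f"Is the following sentence a joke? Answer yes or no.\n\n{sentence}")

print(generated, explanation, verdict, sep="\n---\n")
```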
While such comparisons may aptly describe the system’s observable behavior, they can be misleading. The researchers want to make clear that the AI models under discussion are not on a human level and are, at best, simulations of the human mind. This study does not attempt to answer the philosophical question of whether AI can ever think or understand consciously.
Check out the Paper and GitHub link. Don’t forget to join our 24k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more. If you have any questions regarding the above article or if we missed anything, feel free to email us at Asif@marktechpost.com
🚀 Check out 100s of AI Tools in AI Tools Club
Aneesh Tickoo is a consulting intern at MarktechPost. He is currently pursuing his undergraduate degree in Data Science and Artificial Intelligence at the Indian Institute of Technology (IIT), Bhilai. He spends most of his time working on projects aimed at harnessing the power of machine learning. His research interest is image processing, and he is passionate about building solutions around it. He loves connecting with people and collaborating on interesting projects.