
AI literacy is likely to be ChatGPT’s biggest lesson for schools


This year, millions of people have tried, and been wowed by, artificial-intelligence systems. That's in no small part thanks to OpenAI's chatbot ChatGPT.

When it launched last November, the chatbot became an instant hit among students, many of whom embraced it as a tool to write essays and finish homework. Some media outlets went so far as to declare that the college essay is dead.

Alarmed by an influx of AI-generated essays, schools around the world moved swiftly to ban the use of the technology.

But nearly half a year later, the outlook is a lot less bleak. For MIT Technology Review's upcoming print issue on education, my colleague Will Douglas Heaven spoke to a number of educators who are now reevaluating what chatbots like ChatGPT mean for how we teach our kids. Many teachers now believe that far from being just a dream machine for cheaters, ChatGPT could actually help make education better. Read his story here.

What's clear from Will's story is that ChatGPT will change the way schools teach. But the biggest educational consequence of the technology may not be a new way of writing essays or doing homework. It's AI literacy.

AI is becoming an increasingly integral part of our lives, and tech companies are rolling out AI-powered products at a breathtakingly fast pace. AI language models could become powerful productivity tools that we use every day.

I've written a lot about the dangers associated with artificial intelligence, from biased avatar generators to the impossible task of detecting AI-generated text.

Every time I ask experts what ordinary people can do to protect themselves from these kinds of harms, the answer is the same. They say there is an urgent need for the public to be better informed about how AI works and what its limitations are, so that we can avoid being fooled or harmed by a computer program.

Until now, uptake of AI literacy programs has been sluggish. But ChatGPT has forced many schools to adapt quickly and start teaching kids an ad hoc curriculum of AI 101.

The teachers Will spoke to had already started applying a critical lens to technologies such as ChatGPT. Emily Donahoe, a writing tutor and educational developer at the University of Mississippi, said she thinks that ChatGPT could help teachers shift away from an excessive focus on final results. Getting a class to engage with AI and think critically about what it generates could make teaching feel more human, she says, "rather than asking students to write and perform like robots."

And because the AI model was trained on North American data and reflects North American biases, teachers are finding that it's a great way to start a conversation about bias.

David Smith, a professor of bioscience education at Sheffield Hallam University in the UK, allows his undergraduate students to use ChatGPT in their written assignments, but he'll assess the prompt as well as, or even instead of, the essay itself. "Knowing the words to use in a prompt and then understanding the output that comes back is important," he says. "We need to teach how to do that."

One of the biggest flaws of AI language models is that they make things up and confidently present falsehoods as facts. This makes them unsuitable for tasks where accuracy is critically important, such as scientific research and health care. But Helen Crompton, an associate professor of instructional technology at Old Dominion University in Norfolk, Virginia, has found the AI model's "hallucinations" a useful teaching tool too.

"The fact that it's not perfect is great," Crompton says. It's an opportunity for productive discussions about misinformation and bias.

These kinds of examples give me hope that education systems and policymakers will realize just how important it is to teach the next generation critical thinking skills around AI.

For adults, one promising AI literacy initiative is a free online course called Elements of AI, developed by the startup MinnaLearn and the University of Helsinki. It launched in 2018 and is now available in 28 languages. Elements of AI teaches people what AI is and, most important, what it can and can't do. I've tried it myself, and it's a great resource.

My bigger concern is whether we'll be able to get adults up to speed quickly enough. Without AI literacy among the internet-surfing adult population, more and more people are bound to fall prey to unrealistic expectations and hype. Meanwhile, AI chatbots could be weaponized as powerful phishing, scamming, and misinformation tools.

The kids will be alright. It's the adults we need to worry about.

Deeper Learning

The complex math of counterfactuals could help Spotify pick your next favorite song

A new kind of machine-learning model built by a team of researchers at the music-streaming firm Spotify captures, for the first time, the complex math behind counterfactual analysis, a precise technique that can be used to identify the causes of past events and predict the effects of future ones. By tweaking the right things, it's possible to separate true causation from correlation and coincidence.

What's the big deal: The model could improve the accuracy of automated decision-making, especially personalized recommendations, in a range of applications from finance to health care. In Spotify's case, that might mean choosing which songs to show you or when artists should drop a new album. Read more from Will Douglas Heaven here.

Bits and Bytes

Sam Altman’s PR blitz continues
It's fascinating to see the birth of tech folklore in real time. Two profiles of OpenAI founder Sam Altman, from the New York Times and the Wall Street Journal, paint a picture of Altman as a new tech luminary, akin to Steve Jobs or Bill Gates. The Times calls Altman the "ChatGPT King," while the Journal goes for "AI Crusader." Yet more proof that the Great Man myth is still alive and well in tech.

ChatGPT invented a sexual harassment scandal and accused a real law professor
AI models make things up, and sometimes they even offer legitimate-looking citations for their nonsense. This story about an innocent professor who was accused of sexual harassment illustrates the very real harm that can result. "Hallucinations" are already getting OpenAI into legal trouble. Last week, an Australian mayor threatened to sue OpenAI for defamation unless it corrects false claims that he served time in prison for bribery. This is something I warned about last year. (Washington Post)

How Lex Fridman's podcast became a safe space for the "anti-woke" tech elite
A fascinating read on the rise of Lex Fridman, the controversial and hugely popular AI researcher turned podcaster, and his complicated relationship with the AI community, and with Elon Musk. (Business Insider)

Pollsters are starting to survey AIs instead of people
People don't respond to political polls. A new research experiment is trying to see if AI chatbots could help by mirroring how certain demographics would answer polling questions. Polling is already a dubious science, and this is likely to make it even more so. (The Atlantic)

Fashion brands are using AI-generated models in the name of diversity
Brands such as Levi's and Calvin Klein are using AI-generated models to "supplement" their representation of people of various sizes, skin tones, and ages. But why not just hire diverse humans? *Screams into the void* (The Guardian)
