
Kids, AI, and Ethics: How Children Perceive and Treat Alexa and Roomba


A recent study by Duke developmental psychologists explored how children perceive the intelligence and emotions of AI devices, comparing the smart speaker Alexa with the autonomous vacuum Roomba. The researchers found that children aged 4 to 11 tended to view Alexa as having more human-like thoughts and emotions than Roomba.

The findings were published online on April 10 in the journal Developmental Psychology.

Lead author Teresa Flanagan was partly inspired by Hollywood portrayals of human-robot interactions, such as those seen in HBO’s “Westworld.” The study involved 127 children aged 4 to 11, who watched a 20-second clip of each technology and then answered questions about the devices.

“In Westworld and the movie Ex Machina, we see how adults might interact with robots in these very cruel and horrible ways,” said Flanagan. “But how would kids interact with them?”

Treating AI Devices with Respect

Despite the differences in perceived intelligence between Alexa and Roomba, children across all age groups agreed that it was wrong to hit or yell at the machines. However, as children grew older, they reported that it was slightly more acceptable to attack technology.

“Four- and five-year-olds seem to think you don’t have the freedom to make a moral violation, like attacking someone,” Flanagan said. “But as they get older, they seem to think it’s not great, but you do have the freedom to do it.”

The study revealed that children generally believed that neither Alexa nor Roomba has the ability to feel physical sensations the way humans do. They attributed mental and emotional capabilities to Alexa, such as being able to think or get upset, but did not think the same of Roomba.

“Even without a body, young children think the Alexa has emotions and a mind,” Flanagan said. “And it’s not that they think every technology has emotions and minds — they don’t think the Roomba does — so it’s something special about the Alexa’s ability to communicate verbally.”

Flanagan and her graduate advisor Tamar Kushnir, a Duke Institute for Brain Sciences faculty member, are currently trying to understand why children think it is wrong to attack home technology.

Implications and Ethical Questions

The study’s findings provide insight into the evolving relationship between children and technology, raising important ethical questions about the treatment of AI devices and machines. For instance, should parents model good behavior for their children by thanking AI devices like Siri or ChatGPT for their help?

The research also highlights the need to explore whether children believe that mistreating AI devices is morally wrong in itself, or wrong simply because it might damage someone’s property.

“It’s interesting with these technologies because there’s another aspect: it’s a piece of property,” Flanagan said. “Do kids think you shouldn’t hit these things because it’s morally wrong, or because it’s somebody’s property and it might break?”
