Why it’s impossible to build an unbiased AI language model


AI language models have recently become the latest frontier in the US culture wars. Right-wing commentators have accused ChatGPT of having a “woke bias,” and conservative groups have started developing their own versions of AI chatbots. Meanwhile, Elon Musk has said he’s working on “TruthGPT,” a “maximum truth-seeking” language model that would stand in contrast to the “politically correct” chatbots created by OpenAI and Google.

An unbiased, purely fact-based AI chatbot is a cute idea, but it’s technically impossible. (Musk has yet to share any details of what his TruthGPT would entail, probably because he is too busy thinking about X and cage fights with Mark Zuckerberg.) To understand why, it’s worth reading a story I just published on new research that sheds light on how political bias creeps into AI language systems. Researchers conducted tests on 14 large language models and found that OpenAI’s ChatGPT and GPT-4 were the most left-wing libertarian, while Meta’s LLaMA was the most right-wing authoritarian.

“We believe no language model can be entirely free from political biases,” Chan Park, a PhD researcher at Carnegie Mellon University who was part of the study, told me. Read more here.

One of the most pervasive myths around AI is that the technology is neutral and unbiased. This is a dangerous narrative to push, and it will only exacerbate the problem of humans’ tendency to trust computers, even when the computers are wrong. In fact, AI language models reflect not only the biases in their training data, but also the biases of the people who created and trained them.

And while it’s well known that the data that goes into training AI models is a huge source of these biases, the research I wrote about shows how bias creeps in at virtually every stage of model development, says Soroush Vosoughi, an assistant professor of computer science at Dartmouth College, who was not part of the study.

Bias in AI language models is a particularly hard problem to fix, because we don’t really understand how they generate the things they do, and our processes for mitigating bias are not perfect. That in turn is partly because biases are complicated social problems with no easy technical fix.

That’s why I’m a firm believer in honesty as the best policy. Research like this could encourage companies to track and chart the political biases in their models and be more forthright with their customers. They could, for example, explicitly state the known biases so users can take the models’ outputs with a grain of salt.

In that vein, earlier this year OpenAI told me it is developing customized chatbots that are able to represent different politics and worldviews. One approach would be allowing people to personalize their AI chatbots. This is something Vosoughi’s research has focused on.

As described in a peer-reviewed paper, Vosoughi and his colleagues created a method similar to a YouTube recommendation algorithm, but for generative models. They use reinforcement learning to guide an AI language model’s outputs so as to express certain political ideologies or remove hate speech.

OpenAI uses a technique called reinforcement learning from human feedback to fine-tune its AI models before they are launched. Vosoughi’s method uses reinforcement learning to improve the model’s generated content after it has been released, too.
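For readers curious what reward-guided steering looks like in code, below is a minimal, hypothetical sketch of the general idea: the model samples a continuation, a scalar reward scores it, and a policy-gradient update nudges the model toward higher-scoring text. The model name ("gpt2"), the toy blocklist reward, and the single REINFORCE-style step are illustrative assumptions on my part, not the actual method from Vosoughi's paper or OpenAI's RLHF pipeline (which uses a learned reward model and a more sophisticated algorithm such as PPO).

```python
# Minimal, hypothetical sketch of reward-guided fine-tuning (REINFORCE-style).
# Assumptions: a small causal LM ("gpt2") and a toy keyword-based reward stand
# in for a learned reward model such as a hate-speech classifier.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def toy_reward(text: str) -> float:
    # Placeholder reward: penalize text containing a blocklisted word.
    return -1.0 if "hate" in text.lower() else 1.0

prompt = "My opinion on the new policy is"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation from the current policy (no gradients during sampling).
sample = model.generate(**inputs, do_sample=True, max_new_tokens=20,
                        pad_token_id=tokenizer.eos_token_id)
text = tokenizer.decode(sample[0], skip_special_tokens=True)
reward = toy_reward(text)

# Recompute the log-probability of the whole sampled sequence with gradients.
# outputs.loss is the mean token cross-entropy; multiply by the number of
# predicted tokens to recover the total negative log-likelihood.
outputs = model(sample, labels=sample)
total_log_prob = -outputs.loss * (sample.shape[1] - 1)

# REINFORCE update: increase the probability of high-reward text,
# decrease it for low-reward text.
loss = -reward * total_log_prob
optimizer.zero_grad()
loss.backward()
optimizer.step()

print(f"reward={reward:+.1f}  sampled: {text!r}")
```

In a real system, the reward would come from a trained classifier or human preference model, and the update would be run over many sampled batches rather than a single step; the sketch only shows the shape of the feedback loop.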

But in an increasingly polarized world, this level of customization can lead to both good and bad outcomes. While it could be used to weed out unpleasantness or misinformation from an AI model, it could also be used to generate more misinformation.

“It’s a double-edged sword,” Vosoughi admits. 

Deeper Learning

Worldcoin just officially launched. Why is it already being investigated?

OpenAI CEO Sam Altman’s new venture, Worldcoin, aims to create a global identity system called “World ID” that relies on individuals’ unique biometric data to prove that they are human. It officially launched last week in more than 20 countries. It’s already being investigated in several of them.

Privacy nightmare: To understand why, it’s worth reading an MIT Technology Review investigation from last year, which found that Worldcoin was collecting sensitive biometric data from vulnerable people in exchange for cash. What’s more, the company was using test users’ sensitive, though anonymized, data to train artificial intelligence models, without their knowledge.

In this week’s issue of The Technocrat, our weekly newsletter on tech policy, Tate Ryan-Mosley and our investigative reporter Eileen Guo look at what has changed since last year’s investigation, and how to make sense of the latest news. Read more here.

Bits and Bytes

This is the first known case of a woman being wrongfully arrested after a facial recognition match
Last February, Porcha Woodruff, who was eight months pregnant, was arrested over alleged robbery and carjacking and held in custody for 11 hours, only for her case to be dismissed a month later. She is the sixth person to report being falsely accused of a crime because of a facial recognition match. All six people have been Black, and Woodruff is the first woman to report this happening to her. (The New York Times)

What can you do when an AI system lies about you?
Last summer, I wrote a story about how our personal data is being scraped into vast data sets to train AI language models. This is not only a privacy nightmare; it can also lead to reputational harm. While reporting the story, a researcher and I discovered that Meta’s experimental BlenderBot chatbot had called a prominent Dutch politician, Marietje Schaake, a terrorist. And, as this piece explains, at the moment there is little protection or recourse when AI chatbots spew and spread lies about you. (The New York Times)

Every startup is an AI company now. Are we in a bubble?
Following the release of ChatGPT, AI hype this year has been INTENSE. Every tech bro and his uncle seems to have founded an AI startup. But nine months after the chatbot launched, it’s still unclear how these startups and AI technology will make money, and there are reports that consumers are starting to lose interest. (The Washington Post)

Meta is creating chatbots with personas to try to retain users
Honestly, this sounds more annoying than anything else. Meta is reportedly getting ready to launch AI-powered chatbots with different personalities as soon as next month in an attempt to boost engagement and collect more data on people using its platforms. Users will be able to chat with Abraham Lincoln, or ask for travel advice from AI chatbots that write like a surfer. But it raises tricky ethical questions: how will Meta prevent its chatbots from manipulating people’s behavior and potentially making up something harmful, and how will it treat the user data it collects? (The Financial Times)
