We need to focus on the AI harms that already exist

This is an excerpt from Unmasking AI: My Mission to Protect What Is Human in a World of Machines by Joy Buolamwini, published on October 31 by Random House. It has been lightly edited.

The term “x-risk” is used as shorthand for the hypothetical existential risk posed by AI. While my research supports the idea that AI systems should not be integrated into weapons systems because of their lethal dangers, this is not because I believe AI systems by themselves pose an existential risk as superintelligent agents.

AI systems falsely classifying individuals as criminal suspects, robots being used for policing, and self-driving cars with faulty pedestrian tracking can already put your life in danger. Sadly, we do not need AI systems to have superintelligence for their use to have fatal outcomes for individual lives. Existing AI systems that cause demonstrated harms are more dangerous than hypothetical “sentient” AI systems because they are real.

One problem with minimizing existing AI harms by saying hypothetical existential harms are more important is that it shifts the flow of valuable resources and legislative attention. Companies that claim to fear existential risk from AI could show a genuine commitment to safeguarding humanity by not releasing the AI tools they claim could end humanity.

I am not opposed to preventing the creation of fatal AI systems. Governments concerned with the lethal use of AI systems can adopt the protections long championed by the Campaign to Stop Killer Robots to ban lethal autonomous systems and digital dehumanization. The campaign addresses potentially fatal uses of AI without making the hyperbolic jump that we are on a path to creating sentient systems that will destroy all humankind.

Though it is tempting to view physical violence as the ultimate harm, doing so makes it easy to forget the pernicious ways our societies perpetuate structural violence. The Norwegian sociologist Johan Galtung coined this term to describe how institutions and social structures prevent people from meeting their fundamental needs and thus cause harm. Denial of access to health care, housing, and employment through the use of AI perpetuates individual harms and generational scars. AI systems can kill us slowly.

Given what my “Gender Shades” research revealed about algorithmic bias from some of the leading tech companies in the world, my concern is about the immediate problems and emerging vulnerabilities with AI, and about whether we can address them in ways that also help create a future where the burdens of AI do not fall disproportionately on the marginalized and vulnerable. AI systems with subpar intelligence that lead to false arrests or wrong diagnoses need to be addressed now.

When I think of x-risk, I think of the people being harmed now and those who are at risk of harm from AI systems. I think about the risk and reality of being excoded. You can be excoded when a hospital uses AI for triage and leaves you without care, or uses a clinical algorithm that precludes you from receiving a life-saving organ transplant. You can be excoded when you are denied a loan based on algorithmic decision-making. You can be excoded when your résumé is automatically screened out and you are denied the opportunity to compete for the remaining jobs that are not replaced by AI systems. You can be excoded when a tenant-screening algorithm denies you access to housing. All of these examples are real. No one is immune from being excoded, and those already marginalized are at greater risk.

This is why my research cannot be confined just to industry insiders, AI researchers, or even well-meaning influencers. Yes, academic conferences are important venues. For many academics, presenting published papers is the capstone of a specific research exploration. For me, presenting “Gender Shades” at New York University was a launching pad. Leaving the island of decadent desserts, I felt motivated to put my research into action: beyond talking shop with AI practitioners, beyond academic presentations, beyond private dinners. Reaching academics and industry insiders is simply not enough. We need to make sure everyday people at risk of experiencing AI harms are part of the fight for algorithmic justice.

Read our interview with Joy Buolamwini here
