
Juliette Powell & Art Kleiner, Authors of The AI Dilemma – Interview Series


The AI Dilemma is written by Juliette Powell & Art Kleiner.

Juliette Powell is a writer, a television creator with 9,000 live shows under her belt, and a technologist and sociologist. She is also a commentator on Bloomberg TV and Business News Networks, and a speaker at conferences organized by The Economist and the International Finance Corporation. Her TED talk has 130K views on YouTube. Juliette identifies the patterns and practices of successful business leaders who bank on ethical AI and data to win. She is on the faculty at NYU’s ITP, where she teaches four courses, including Design Skills for Responsible Media, a course based on her book.

Art Kleiner is an author, editor, and futurist. His books include , , , and . He was editor of strategy+business, the award-winning magazine published by PwC. Art is a longstanding faculty member at NYU-ITP and IMA, where his courses include co-teaching Responsible Technology and the Future of Media.

“The AI Dilemma” is a book that focuses on the dangers of AI technology in the wrong hands, while still acknowledging the benefits AI offers to society.

Problems arise because the underlying technology is so complex that it becomes impossible for the end user to truly understand the inner workings of a closed-box system.

One of the most significant issues highlighted is how the definition of responsible AI is always shifting, as societal values often do not stay consistent over time.

I quite enjoyed reading “The AI Dilemma”. It is a book that does not sensationalize the dangers of AI or delve deeply into the potential pitfalls of Artificial General Intelligence (AGI). Instead, readers learn about the surprising ways our personal data is used without our knowledge, as well as some of the current limitations of AI and reasons for concern.

Below are some questions that are designed to show our readers what they can expect from this groundbreaking book.

What initially inspired you to write “The AI Dilemma”?

Juliette went to Columbia partly to study the limits and possibilities of regulating AI. She had heard firsthand from friends working on AI projects about the tension inherent in those projects. She came to the conclusion that there was an AI dilemma, a much bigger problem than self-regulation. She developed the Apex benchmark model — a model of how decisions about AI tended toward low responsibility because of the interactions among companies and groups within companies. That led to her dissertation.

Art had worked with Juliette on a number of writing projects. He read her dissertation and said, “You have a book here.” Juliette invited him to coauthor it. In working on it together, they found they had very different perspectives but shared a strong view that this complex, highly risky AI phenomenon needed to be understood better so that people using it could act more responsibly and effectively.

One of the fundamental problems highlighted in The AI Dilemma is how it is currently impossible to know whether an AI system is responsible or whether it perpetuates social inequality simply by studying its source code. How big of a problem is this?

The problem is not primarily with the source code. As Cathy O’Neil points out, when there is a closed-box system, it is not just the code. It is the sociotechnical system — the human and technological forces that shape each other — that needs to be explored. The logic that built and released the AI system involved identifying a purpose, identifying data, setting the priorities, creating models, establishing guidelines and guardrails for machine learning, and deciding when and how a human should intervene. That is the part that needs to be made transparent — at least to observers and auditors. The risk of social inequality and other harms is much greater when these parts of the process are hidden. You can’t really reengineer the design logic from the source code.

Can focusing on Explainable AI (XAI) ever address this?

To engineers, explainable AI is currently regarded as a group of technological constraints and practices, aimed at making the models more transparent to the people working on them. For someone who is being falsely accused, explainability has a whole different meaning and urgency. They need explainability to be able to push back in their own defense. We all need explainability in the sense of making the business or government decisions underlying the models clear. At least in the United States, there will always be a tension between explainability — humanity’s right to know — and an organization’s right to compete and innovate. Auditors and regulators need a different level of explainability. We go into this in more detail in The AI Dilemma.

Can you briefly share your views on the importance of holding stakeholders (AI companies) accountable for the code that they release to the world?

So far, as in the Tempe, AZ self-driving car collision that killed a pedestrian, the operator has been held responsible. A person went to jail. Ultimately, however, it was an organizational failure.

When a bridge collapses, the mechanical engineer is held responsible. That’s because mechanical engineers are trained, continually retrained, and held accountable by their profession. Computer engineers are not.

Should stakeholders, including AI companies, be trained and retrained to make better decisions and take on more responsibility?

The AI Dilemma focused a lot on how companies like Google and Meta can harvest and monetize our personal data. Could you share an example of significant misuse of our data that should be on everyone’s radar?

From The AI Dilemma, page 67ff:

New cases of systematic personal data misuse continue to emerge into public view, many involving covert use of facial recognition. In December 2022, MIT Technology Review published accounts of a longstanding iRobot practice. Roomba household robots record images and videos taken in volunteer beta-testers’ homes, which inevitably means gathering intimate personal and family-related images. These are shared, without testers’ awareness, with groups outside the country. In at least one case, an image of an individual on a toilet was posted on Facebook. Meanwhile, in Iran, authorities have begun using data from facial recognition systems to track and arrest women who are not wearing hijabs.16

There’s no need to belabor these stories further. There are so many of them. It is important, however, to identify the cumulative effect of living this way. We lose our sense of having control over our lives when we feel that our private information might be used against us, at any time, without warning.

One dangerous concept that was brought up is how our entire world is designed to be frictionless, with the definition of friction being “any point in the customer’s journey with a company where they hit a snag that slows them down or causes dissatisfaction.” How does our expectation of a frictionless experience potentially lead to dangerous AI?

In New Zealand, a Pak’n’Save meal-planning bot suggested a recipe that would create chlorine gas if followed. This was promoted as a way for customers to use up leftovers and save money.

Frictionlessness creates an illusion of control. It’s faster and easier to listen to the app than to look up grandma’s recipe. People follow the path of least resistance and don’t realize where it’s taking them.

Friction, by contrast, is creative. You get involved. This leads to real control. Real control requires attention and work, and — in the case of AI — doing an extended cost-benefit analysis.

With the illusion of control, it feels like we live in a world where AI systems are prompting humans, instead of humans remaining fully in control. What are some examples you can give of humans collectively believing they have control, when in reality they have none?

San Francisco right now, with robotaxis. The idea of self-driving taxis tends to bring up two conflicting emotions: excitement (“taxis at a much lower cost!”) and fear (“will they hit me?”). Thus, many regulators suggest that the cars get tested with people in them who can manage the controls. Unfortunately, having humans on alert, ready to override systems in real time, may not be a good test of public safety. Overconfidence is a frequent dynamic with AI systems. The more autonomous the system, the more human operators tend to trust it and not pay full attention. We get bored watching over these technologies. When an accident is actually about to happen, we don’t expect it and we often don’t react in time.

A lot of research went into this book; was there anything that surprised you?

One thing that really surprised us was that people around the world couldn’t agree on who should live and who should die in The Moral Machine’s simulation of a self-driving car collision. If we can’t agree on that, then it’s hard to imagine that we could have unified global governance or universal standards for AI systems.

You both describe yourselves as entrepreneurs; how will what you learned and reported on influence your future efforts?

Our AI advisory practice is oriented toward helping organizations grow responsibly with the technology. Lawyers, engineers, social scientists, and business thinkers are all stakeholders in the future of AI. In our work, we bring all these perspectives together and practice creative friction to find better solutions. We have developed frameworks like the calculus of intentional risk to help navigate these issues.
