
Responsible technology use in the AI age

In association with Thoughtworks

The sudden appearance of application-ready generative AI tools over the past year has confronted us with difficult social and ethical questions. Visions of how this technology could deeply alter the ways we work, learn, and live have also accelerated conversations—and breathless media headlines—about how and whether these technologies can be used responsibly.

Responsible technology use, of course, is nothing new. The term encompasses a broad range of concerns, from the bias that can be hidden inside algorithms, to the data privacy rights of the users of an application, to the environmental impacts of a new way of working. Rebecca Parsons, CTO emerita at the technology consultancy Thoughtworks, collects all of these concerns under “building an equitable tech future,” where, as new technology is deployed, its benefits are equally shared. “As technology becomes more important in significant aspects of people’s lives,” she says, “we want to think about a future where the tech works right for everyone.”

Technology use often goes wrong, Parsons notes, “because we’re too focused on either our own ideas of what good looks like or on one particular audience versus a broader audience.” That may look like an app developer building only for an imagined customer who shares his geography, education, and affluence, or a product team that doesn’t consider what damage a malicious actor could wreak in their ecosystem. “We expect people are going to use my product the way I intend them to use my product, to solve the problem I intend for them to solve in the way I intend for them to solve it,” says Parsons. “But that’s not what happens when things get out in the real world.”

AI, of course, poses some distinct social and ethical challenges. Some of the technology’s unique challenges are inherent in the way AI works: its statistical rather than deterministic nature, its identification and perpetuation of patterns from past data (thus reinforcing existing biases), and its lack of awareness of what it doesn’t know (leading to hallucinations). And some of its challenges stem from what AI’s creators and users themselves don’t know: the unexamined bodies of data underlying AI models, the limited explainability of AI outputs, and the technology’s ability to deceive users into treating it as a reasoning human intelligence.

Parsons believes, however, that AI has not changed responsible tech so much as it has brought some of its problems into a new focus. Concepts of intellectual property, for example, date back hundreds of years, but the rise of large language models (LLMs) has posed new questions about what constitutes fair use when a machine can be trained to emulate a writer’s voice or an artist’s style. “It’s not responsible tech if you’re violating anyone’s intellectual property, but thinking about that was a whole lot more straightforward before we had LLMs,” she says.

The principles developed over many decades of responsible technology work still remain relevant during this transition. Transparency, privacy and security, thoughtful regulation, attention to societal and environmental impacts, and enabling wider participation via diversity and accessibility initiatives remain the keys to making technology work toward human good.

MIT Technology Review Insights’ 2023 report with Thoughtworks, “The state of responsible technology,” found that executives are taking these considerations seriously. Seventy-three percent of business leaders surveyed, for instance, agreed that responsible technology use will come to be as important as business and financial considerations when making technology decisions.

This AI moment, however, may represent a unique opportunity to overcome barriers that have previously stalled responsible technology work. Lack of senior management awareness (cited by 52% of those surveyed as a top barrier to adopting responsible practices) is certainly less of a concern today: savvy executives are quickly becoming fluent in this new technology and are continually reminded of its potential consequences, failures, and societal harms.

The other top barriers cited were organizational resistance to change (46%) and internal competing priorities (46%). Organizations that have realigned themselves behind a clear AI strategy, and that understand its industry-altering potential, may be able to overcome this inertia and indecision as well. At this singular moment of disruption, when AI provides both the tools and the motivation to redesign many of the ways in which we work and live, we can fold responsible technology principles into that transition—if we choose to.

For her part, Parsons is deeply optimistic about humans’ ability to harness AI for good, and to work around its limitations with common-sense guidelines and well-designed processes with human guardrails. “As technologists, we just get so focused on the problem we’re trying to solve and how we’re trying to solve it,” she says. “And all responsible tech is really about is lifting your head up, and looking around, and seeing who else might be in the world with me.”

To read more about Thoughtworks’ analysis and recommendations on responsible technology, visit its .
