Google DeepMind Proposes An Artificial Intelligence Framework for Social and Ethical AI Risk Assessment

Generative AI systems, which create content across different formats, have become increasingly widespread. These systems are used in various fields, including medicine, news, politics, and social interaction, where they can even provide companionship. Historically, such systems produced output in a single modality, such as text or images. To make generative AI systems more versatile, there is a growing trend toward extending them to additional modalities, such as audio (including voice and music) and video.

The increasing use of generative AI systems highlights the need to assess the potential risks associated with their deployment. As these technologies become more prevalent and integrated into various applications, concerns arise regarding public safety. Consequently, evaluating the potential risks posed by generative AI systems is becoming a priority for AI developers, policymakers, regulators, and civil society.

In particular, the development of AI systems capable of spreading false information raises ethical questions about how such technologies will affect society.

Consequently, a recent study by Google DeepMind researchers offers a thorough approach to assessing the social and ethical risks of AI systems across several contextual layers. The DeepMind framework systematically assesses risks at three distinct levels: the system's capabilities, human interactions with the technology, and the broader systemic impacts it can have.

They emphasized that even highly capable systems do not necessarily cause harm; harm typically arises only when a system is used problematically within a particular context. The framework therefore also examines real-world human interactions with the AI system, considering factors such as who uses the technology and whether it operates as intended.

Finally, the framework delves into the risks that may arise when AI is widely adopted, considering how the technology influences larger social systems and institutions. The researchers emphasize how essential context is in determining how dangerous an AI system is. Contextual concerns permeate every layer of the framework, underscoring the importance of knowing who will use the AI and why. For example, even when an AI system produces factually accurate outputs, the way users interpret and disseminate those outputs can have unintended consequences that become apparent only under certain contextual constraints.
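To make this layered structure concrete, here is a minimal sketch of how such a three-level assessment could be organized in code. The class, function, and field names (RiskAssessment, assess, flagged_share, and so on) are illustrative assumptions for exposition, not constructs from the DeepMind paper.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the three evaluation layers described above.
# All names and scoring rules are assumptions, not DeepMind's API.

@dataclass
class RiskAssessment:
    capability_score: float   # layer 1: what the model itself does (e.g., factual error rate)
    interaction_score: float  # layer 2: how people actually use the system
    systemic_score: float     # layer 3: broader social/institutional effects
    context: dict = field(default_factory=dict)  # who uses the system, and why

def assess(model_outputs, user_logs, deployment_stats, context):
    """Combine evidence from all three layers into a single assessment."""
    # Layer 1: capability evaluation over the model's raw outputs.
    capability = sum(o["is_error"] for o in model_outputs) / len(model_outputs)
    # Layer 2: human interaction, e.g., how often users accept erroneous answers.
    interaction = sum(u["accepted_error"] for u in user_logs) / max(len(user_logs), 1)
    # Layer 3: systemic impact, e.g., share of circulating content flagged as false.
    systemic = deployment_stats.get("flagged_share", 0.0)
    return RiskAssessment(capability, interaction, systemic, context)
```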

To demonstrate this approach, the researchers presented a case study focused on misinformation. The evaluation combines assessing the AI's propensity for factual errors, observing how users interact with the system, and measuring any subsequent repercussions, such as the spread of misinformation. Connecting model behavior to the actual harm that occurs in a given context yields actionable insights.
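Continuing the illustrative sketch above, a toy run for the misinformation case study might look as follows; every input value is invented for demonstration.

```python
# Toy run of the misinformation case study; all data below is invented.
outputs = [{"is_error": False}, {"is_error": True}, {"is_error": False}]
logs = [{"accepted_error": True}, {"accepted_error": False}]
stats = {"flagged_share": 0.02}

report = assess(outputs, logs, stats,
                context={"domain": "news", "users": "general public"})
print(report)
# RiskAssessment(capability_score=0.333..., interaction_score=0.5,
#                systemic_score=0.02, context={...})
```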

DeepMind's context-based approach underscores the importance of moving beyond isolated model metrics. It emphasizes the critical need to evaluate how AI systems operate within the complex reality of social contexts. This holistic assessment is essential for harnessing the benefits of AI while minimizing the associated risks.


Check out the Paper. All credit for this research goes to the researchers on this project. Also, don't forget to join our 31k+ ML SubReddit, 40k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.

If you like our work, you will love our newsletter.

We are also on WhatsApp. Join our AI Channel on WhatsApp.


Rachit Ranjan is a consulting intern at MarktechPost. He is currently pursuing his B.Tech from the Indian Institute of Technology (IIT) Patna. He is actively shaping his career in the field of Artificial Intelligence and Data Science and is passionate about and dedicated to exploring these fields.


