
AI growth and advancement have been exponential over the past few years. Statista reports that by 2024, the global AI market will generate a staggering revenue of around $3,000 billion, compared with $126 billion in 2015. However, tech leaders are now warning us about the various risks of AI.
In particular, the recent wave of generative AI models like ChatGPT has introduced new capabilities in various data-sensitive sectors, such as healthcare, education, and finance. These AI-backed developments are vulnerable because of many AI shortcomings that malicious agents can exploit.
Let’s discuss what AI experts are saying about these developments and highlight the potential risks of AI. We’ll also briefly touch on how these risks can be managed.
Tech Leaders & Their Concerns Related to the Risks of AI
Geoffrey Hinton
Geoffrey Hinton, a famous AI tech leader (and godfather of the field) who recently quit Google, has voiced his concerns about rapid AI development and its potential dangers. Hinton believes that AI chatbots could become “quite scary” if they surpass human intelligence.
Hinton says:
“Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has, and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning. And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.”
Furthermore, he believes that “bad actors” can use AI for “bad things,” such as allowing robots to set their own sub-goals. Despite his concerns, Hinton believes that AI can bring short-term benefits, but we should also invest heavily in AI safety and control.
Elon Musk
Elon Musk’s involvement in AI began with his early investment in DeepMind in 2010 and has extended to co-founding OpenAI and incorporating AI into Tesla’s autonomous vehicles.
Although he is enthusiastic about AI, he frequently raises concerns about its risks. Musk says that powerful AI systems can be more dangerous to civilization than nuclear weapons. In a Fox News interview in April 2023, he said:
“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production. In the sense that it has the potential — however small one may regard that probability — but it is non-trivial and has the potential of civilization destruction.”
Furthermore, Musk supports government regulation of AI to ensure safety from potential risks, even though “it’s not so fun.”
Pause Giant AI Experiments: An Open Letter Backed by Thousands of AI Experts
The Future of Life Institute published an open letter on 22nd March 2023. The letter calls for a temporary six-month halt to the development of AI systems more advanced than GPT-4. The authors express concern that the pace at which AI systems are being developed poses severe socioeconomic challenges.
Furthermore, the letter states that AI developers should work with policymakers to document AI governance systems. As of June 2023, the letter has been signed by more than 31,000 AI developers, experts, and tech leaders. Notable signatories include Elon Musk, Steve Wozniak (co-founder of Apple), Emad Mostaque (CEO, Stability AI), Yoshua Bengio (Turing Award winner), and many more.
Counter Arguments on Halting AI Development
Two prominent AI leaders, Andrew Ng and Yann LeCun, have opposed the six-month pause on developing advanced AI systems and consider it a bad idea.
Ng says that although AI carries some risks, such as bias and the concentration of power, the value it creates in fields such as education, healthcare, and responsive coaching is tremendous.
Yann LeCun says that research and development should not be halted, though the AI products that reach end users can be regulated.
What Are the Potential Dangers & Immediate Risks of AI?
1. Job Displacement
AI experts believe that intelligent AI systems can replace cognitive and creative tasks. Investment bank Goldman Sachs estimates that around 300 million jobs could be automated by generative AI.
Hence, AI development should be regulated so that it does not cause a severe economic downturn. Educational programs for upskilling and reskilling employees are also needed to meet this challenge.
2. Biased AI Systems
Biases prevalent among human beings regarding gender, race, or color can inadvertently permeate the data used to train AI systems, making the resulting systems biased.
For instance, in the context of job recruitment, a biased AI system can discard resumes of individuals from specific ethnic backgrounds, creating discrimination in the job market. In law enforcement, biased predictive policing could disproportionately target specific neighborhoods or demographic groups.
Hence, it is essential to have a comprehensive data strategy that addresses AI risks, particularly bias. AI systems must be evaluated and audited regularly to keep them fair, for example with simple group-level checks like the sketch below.
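To make the auditing point concrete, here is a minimal sketch of one such check, demographic parity, applied to a hypothetical resume-screening model. The data, group labels, and the rough 0.1 threshold are illustrative assumptions, not a complete fairness methodology.

```python
# Minimal bias-audit sketch: compare positive-outcome rates across groups.
# All data below is made up for illustration; real audits need far more care.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (gap, per-group rates), where gap is the largest difference in
    positive-outcome rates (e.g., resumes advanced to interview) across groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model decisions (1 = advance to interview) and applicant groups.
gap, rates = demographic_parity_gap(
    predictions=[1, 1, 1, 0, 1, 0, 0, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(rates, gap)  # a gap well above ~0.1 would warrant a closer fairness review
```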
3. Safety-Critical AI Applications
Autonomous vehicles, medical diagnosis and treatment, aviation systems, nuclear power plant control, etc., are all examples of safety-critical AI applications. These AI systems should be developed cautiously because even minor errors could have severe consequences for human life or the environment.
For instance, the malfunctioning of the automated flight-control software called the Maneuvering Characteristics Augmentation System (MCAS) is considered partly responsible for the two Boeing 737 MAX crashes, the first in October 2018 and the second in March 2019. Tragically, the two crashes killed 346 people.
How Can We Overcome the Risks of AI Systems? – Responsible AI Development & Regulatory Compliance
Responsible AI (RAI) means developing and deploying fair, accountable, transparent, and secure AI systems that ensure privacy and follow legal regulations and societal norms. Implementing RAI can be complex given the breadth and speed of AI development.
However, big tech companies have developed RAI frameworks, such as:
- Microsoft’s Responsible AI
- Google’s AI Principles
- IBM’s Trusted AI
AI labs across the globe can take inspiration from these principles or develop their own responsible AI frameworks to build trustworthy AI systems. Transparency artifacts such as model cards, sketched below, are one practical starting point.
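As an illustration of the transparency side of these frameworks, here is a minimal, hypothetical model-card record. The field names and values are assumptions for illustration; none of the frameworks above mandates this exact schema.

```python
# Illustrative model card: a structured, human-readable summary of what a model
# does, what it was trained on, and how it was evaluated. All values are fictional.
import json

model_card = {
    "model_name": "resume-screening-classifier",  # hypothetical model
    "intended_use": "Rank resumes for recruiter review, not automatic rejection",
    "training_data": "Internal resumes 2018-2022, de-identified",
    "evaluated_groups": ["gender", "ethnicity"],
    "fairness_metrics": {"demographic_parity_gap": 0.04},
    "known_limitations": "Lower precision for non-traditional career paths",
    "review_cadence": "Quarterly fairness and drift audit",
    "contact": "responsible-ai-team@example.com",
}

print(json.dumps(model_card, indent=2))
```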
AI Regulatory Compliance
Since data is an integral component of AI systems, AI-based organizations and labs must comply with the following regulations to ensure data security, privacy, and safety. One practical building block, data pseudonymization, is sketched after the list.
- GDPR (General Data Protection Regulation) – an information protection framework by the EU.
- CCPA (California Consumer Privacy Act) – a California state statute for privacy rights and consumer protection.
- HIPAA (Health Insurance Portability and Accountability Act) – a U.S. law that safeguards patients’ medical data.
- EU AI Act and the Ethics Guidelines for Trustworthy AI – AI regulation and guidance from the European Commission.
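To make this concrete, here is a minimal sketch of pseudonymizing direct identifiers before data is used for AI training. The field names and salt handling are assumptions for illustration, and pseudonymization alone does not make a dataset GDPR-compliant (pseudonymized data is still personal data); it is just one common technical safeguard.

```python
# Pseudonymization sketch: replace direct identifiers with salted hashes before
# the data enters an AI training pipeline. Field names are illustrative.
import hashlib

DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def pseudonymize(record, salt="replace-with-a-secret-salt"):
    """Return a copy of the record with direct identifiers replaced by hashes."""
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:16]
        else:
            out[key] = value
    return out

patient = {"name": "Jane Doe", "email": "jane@example.com", "diagnosis": "flu"}
print(pseudonymize(patient))
```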
There are various regional and local laws enacted by different countries to protect their residents. Organizations that fail to ensure regulatory compliance around data can face severe penalties. For instance, GDPR sets fines of up to €20 million or 4% of annual global turnover for serious infringements such as unlawful data processing, lack of proven data consent, violations of data subjects’ rights, or unprotected data transfers to international entities.
AI Development & Regulations – Present & Future
With every passing month, AI advancements are reaching unprecedented heights, but the accompanying AI regulations and governance frameworks are lagging behind. They need to be more robust and specific.
Tech leaders and AI developers have been raising alarms about the risks of AI if it is not adequately regulated. Research and development in AI can bring further value to many sectors, but it is clear that careful regulation is now imperative.
For more AI-related content, visit unite.ai.