Concerns Over Potential Risks of ChatGPT Are Gaining Momentum but Is a Pause on AI a Good Move?

AI applications are pervasive, impacting virtually every facet of our lives. A pause on development may be laudable in spirit, but it could also be implausible in practice.

There are palpable concerns driving calls for increased regulatory oversight to rein in the technology's potential harmful impacts.

Only recently, the Italian Data Protection Authority temporarily blocked ChatGPT nationwide over privacy concerns about the collection and processing of personal data used to train the model, as well as an apparent lack of safeguards to prevent children from being exposed to responses “absolutely inappropriate to their age and awareness.”

The European Consumer Organisation (BEUC) is urging the EU to investigate the potential harmful impacts of large-scale language models, given “concerns growing about how ChatGPT and similar chatbots might deceive and manipulate people. These AI systems need greater public scrutiny, and public authorities must reassert control over them.”

In the US, the Center for AI and Digital Policy has filed a complaint with the Federal Trade Commission alleging that ChatGPT violates Section 5 of the Federal Trade Commission Act (FTC Act) (15 USC 45). The premise of the complaint is that ChatGPT allegedly fails to meet the FTC's guidance on transparency and explainability of AI systems. The complaint references ChatGPT's acknowledged risks, including compromising privacy rights, generating harmful content, and propagating disinformation.

The utility of large-scale language models such as ChatGPT notwithstanding, research points to their potential dark side. ChatGPT has been shown to produce incorrect answers, because the underlying model relies on deep learning algorithms trained on large data sets drawn from the web. Unlike earlier chatbots, ChatGPT uses language models based on deep learning techniques that generate text much like human conversation, and the platform “arrives at an answer by making a series of guesses, which is part of the reason it can argue wrong answers as if they were completely true.”
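To make that “series of guesses” concrete: an autoregressive language model picks each next token from a probability distribution conditioned on the text so far, with no step that checks whether the resulting claim is true. The sketch below is a deliberately tiny stand-in, with a hand-written next-word table in place of a trained network (an assumption purely for illustration), but the sampling loop mirrors the real mechanism.

```python
import random

# Toy stand-in for a trained language model: maps a context word to a
# probability distribution over possible next words. A real model learns
# these probabilities from web-scale text; nothing here checks facts.
NEXT_TOKEN_PROBS = {
    "<start>": {"The": 1.0},
    "The":     {"capital": 0.6, "answer": 0.4},
    "capital": {"is": 1.0},
    "answer":  {"is": 1.0},
    "is":      {"Paris": 0.5, "Lyon": 0.3, "42": 0.2},  # right or wrong, same fluency
    "Paris":   {"<end>": 1.0},
    "Lyon":    {"<end>": 1.0},
    "42":      {"<end>": 1.0},
}

def generate(max_tokens: int = 10) -> str:
    """Autoregressive generation: each word is a guess sampled from the
    distribution conditioned on the previous word."""
    context = "<start>"
    words = []
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS[context]
        context = random.choices(list(dist), weights=dist.values())[0]
        if context == "<end>":
            break
        words.append(context)
    return " ".join(words)

print(generate())  # e.g. "The capital is Lyon" -- fluent, confidently wrong
```

The loop rewards fluency, not truth: “The capital is Lyon” comes out of exactly the same mechanism as the correct answer, which is why confidently wrong output is a structural feature rather than an occasional glitch.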

Moreover, ChatGPT has been shown to intensify and amplify bias, leading to “answers that discriminate against gender, race, and minority groups, something which the company is attempting to mitigate.” ChatGPT could also be a bonanza for nefarious actors who exploit unsuspecting users, compromising their privacy and exposing them to scam attacks.

These concerns prompted the European Parliament to publish a commentary reinforcing the need to further strengthen the current provisions of the draft EU Artificial Intelligence Act (AIA), which is still pending ratification. The commentary points out that the current draft of the proposed regulation focuses on what are known as narrow AI applications, consisting of specific categories of high-risk AI systems such as recruitment, creditworthiness, employment, law enforcement, and eligibility for social services. However, the draft AIA regulation does not cover general-purpose AI, such as large language models that provide more advanced cognitive capabilities and can “perform a wide range of intelligent tasks.” There are calls to extend the scope of the draft regulation to include a separate, high-risk category of general-purpose AI systems, requiring developers to undertake rigorous ex ante conformance testing before placing such systems on the market and to continuously monitor their performance for potentially unexpected harmful outputs.

A very helpful piece of research draws attention to this gap, noting that the EU AIA regulation is “primarily focused on conventional AI models, and not on the new generation whose birth we are witnessing today.”

It recommends four strategies for regulators to consider:

  1. Require developers of such systems to regularly report on the efficacy of their risk-management processes for mitigating harmful outputs.
  2. Businesses using large-scale language models should be obligated to disclose to their customers that content was AI-generated.
  3. Developers should adopt a formal process of staged releases, as part of a risk-management framework, designed to safeguard against potentially unexpected harmful outcomes.
  4. Place the onus on developers to “mitigate the risk at its roots” by having to “pro-actively audit the training data set for misrepresentations” (a minimal sketch of such an audit follows this list).
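To ground recommendation 4, here is a minimal sketch of what a proactive training-data audit might look like. Everything in it is an illustrative assumption: the blocklist of misrepresentations, the `audit_training_data` helper, and the toy corpus are all hypothetical, and a production audit would rely on curated fact-checking sources and trained classifiers rather than regular expressions.

```python
import re
from typing import Iterable

# Hypothetical examples of known misrepresentations to screen for;
# a real audit would draw on curated fact-checking databases.
KNOWN_MISREPRESENTATIONS = [
    r"the earth is flat",
    r"vaccines cause autism",
]

PATTERNS = [re.compile(p, re.IGNORECASE) for p in KNOWN_MISREPRESENTATIONS]

def audit_training_data(documents: Iterable[str]) -> list[tuple[int, str]]:
    """Flag documents containing any known misrepresentation, returning
    (document index, matched pattern) pairs for human review."""
    flagged = []
    for idx, doc in enumerate(documents):
        for pattern in PATTERNS:
            if pattern.search(doc):
                flagged.append((idx, pattern.pattern))
    return flagged

corpus = [
    "Satellite imagery confirms the Earth is an oblate spheroid.",
    "Some forums still insist the earth is flat despite the evidence.",
]
print(audit_training_data(corpus))  # [(1, 'the earth is flat')]
```

Note that the flagged document merely mentions the false claim rather than asserting it, which is precisely why simple pattern matching can only route documents to human review, not replace it.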

A factor that perpetuates the risks associated with disruptive technologies is innovators' drive to gain first-mover advantage through a “ship first and fix later” business model. While OpenAI is somewhat transparent about the potential risks of ChatGPT, it has released the system for broad commercial use with a “buyer beware” onus on users to weigh and assume the risks themselves. That may be an untenable approach given the pervasive impact of conversational AI systems. Proactive regulation coupled with robust enforcement measures must be paramount when handling such a disruptive technology.

Artificial intelligence already permeates nearly every part of our lives, so a pause on AI development would likely bring a host of unexpected obstacles and consequences. Instead of suddenly pumping the brakes, industry and legislative players should collaborate in good faith to enact actionable regulation rooted in human-centric values like transparency, accountability, and fairness. By building on existing frameworks such as the draft AIA, leaders in the private and public sectors can design thorough, globally standardized policies that prevent nefarious uses and mitigate adverse outcomes, keeping artificial intelligence within the bounds of improving human experiences.
