
Where Generative AI and Large Language Models such as ChatGPT are concerned, AI enthusiasm is mixed with technophobia. That is natural for most people: they like the latest exciting things, but they are afraid of the unknown. What is new is that several outstanding scientists have themselves become techno-sceptics, if not technophobes. The case of the scientists and industrialists asking for a six-month ban on AI research, or the skepticism of the eminent AI scientist Prof. G. Hinton, are such examples. The closest historical equivalent I can recall is the criticism of atomic and nuclear bombs by part of the scientific community during the Cold War. Luckily, humanity managed to address those concerns in a relatively satisfactory way.
After all, everyone has the right to question the current state of AI affairs:
- Nobody knows why Large Language Models work so well, or whether they have a limit.
- Many dangers lurk that bad actors may create ‘AI bombs’, particularly if states remain passive bystanders when it comes to regulation.
These are legitimate concerns that fuel the fear of the unknown, even in outstanding scientists. After all, they are human themselves.
Nevertheless, can AI research stop, even temporarily? In my view, no, as AI is humanity’s response to a global society and a physical world of ever-increasing complexity. Because the processes driving this physical and social complexity are very deep and seemingly relentless, AI and citizen morphosis are our only hope for a smooth transition from the current Information Society to a Knowledge Society. Otherwise, we may face a catastrophic social implosion.
The answer is to deepen our understanding of AI advances, speed up AI development, and regulate its use so as to maximize its positive impact while minimizing its already evident, as well as hidden, negative effects. AI research can and should become different: more open, democratic, scientific, and ethical. Here is a proposed list of points to this end:
- The final word on vital AI research issues that have far-reaching social impact should be delegated to elected Parliaments and Governments, rather than to corporations or individual scientists.
- Every effort should be made to facilitate the exploration of the positive aspects of AI for social and economic progress and to minimize its negative aspects.
- The positive impact of AI systems can greatly outweigh their negative aspects, if proper regulatory measures are taken. Technophobia is neither justified nor a solution.
- In my view, the biggest current threat comes from the fact that such AI systems can remotely deceive many ordinary citizens who have little (or average) education and/or little investigative capacity. This can be extremely dangerous for democracy and for any form of socio-economic progress.
- In the near future, we should counter the big threat posed by the use of LLMs and/or GANs in illegal activities (cheating in university exams is a rather benign example within the spectrum of related criminal possibilities).
- Their impact on labor and markets will be very positive in the medium to long run.
- In view of the above, AI systems should: a) be required by international law to be registered in an ‘AI system register’, and b) notify their users that they are conversing with, or using the results of, an AI system.
- As AI systems have a huge societal impact, and in order to maximize benefit and socio-economic progress, key advanced AI system technologies should become open.
- AI-related data should be (at least partially) democratized, again in order to maximize benefit and socio-economic progress.
- Proper, strong financial compensation schemes should be foreseen for AI technology champions, to compensate for any profit loss resulting from the aforementioned openness and to ensure strong future investment in AI R&D (e.g., through technology patenting and obligatory licensing schemes).
- The AI research balance between academia and industry should be rethought to maximize research output, while maintaining competitiveness and rewarding undertaken R&D risks.
- Education practices should be revisited at all education levels to maximize the benefit of AI technologies, while creating a new breed of creative and adaptable citizens and (AI) scientists.
- Proper AI regulatory, supervisory, and funding mechanisms should be created and beefed up to ensure the above.
Several of these points are treated in detail in my recent four-volume book on ‘AI Science and Society’, particularly in Volumes A (rewritten in May 2023 to cover LLMs and Artificial General Intelligence) and C.
Book References:
Artificial Intelligence Science and Society Part A: Introduction to AI Science and Information Technology
Artificial Intelligence Science and Society Part C: AI Science and Society