LAION (the Large-scale Artificial Intelligence Open Network) and other renowned research institutions have published an open letter addressed to the European Parliament. The letter emphasizes the negative repercussions the draft AI Act could have on open-source research and development (R&D) in artificial intelligence (AI).
The letter underlines the essential role that open-source R&D plays in guaranteeing the safety, security, and competitiveness of AI throughout Europe, while cautioning against inhibiting such groundbreaking work.
The letter addresses the following points, as outlined by LAION.
The Importance of Open-Source AI
The letter outlines three fundamental reasons why open-source AI is worth protecting:
- Safety through transparency: Open-source AI promotes safety by enabling researchers and authorities to audit model performance, discover risks, and establish mitigations or countermeasures.
- Competition: Open-source AI allows small and medium-sized enterprises to build on existing models and drive productivity, rather than relying on a few large firms for essential technology.
- Security: Public and private organizations can adapt open-source models for specialized applications without sharing sensitive data with proprietary firms.
Concerns with the Draft AI Act
The draft AI Act may introduce new requirements for foundation models, which could negatively impact open-source R&D in AI. The letter argues that “one size fits all” rules will stifle open-source R&D and will:
- Entrench proprietary gatekeepers, often large firms, to the detriment of open-source researchers and developers
- Limit academic freedom and prevent the European research community from studying models of public significance
- Reduce competition between model providers and drive investment in AI overseas
Recommendations for the European Parliament
The open letter makes three key recommendations:
- Ensure open-source R&D can comply with the AI Act: The Act should promote open-source R&D and recognize the distinctions between closed-source AI models offered as a service and AI models released as open-source code. Where appropriate, the Act should exempt open-source models from regulations intended for closed-source models.
- Impose requirements proportional to risk: The Act should impose rules for foundation models that are proportional to their actual risk. A “one size fits all” framework could make it impossible to field low-risk and open-source models in Europe.
- Establish public research facilities for compute resources: The EU should establish large-scale supercomputing facilities for AI research, enabling the European research community to study open-source foundation models under controlled conditions with public oversight.
The Future of AI in Europe
The letter concludes with a call to action for the European Parliament to consider the points raised and foster a legislative environment that supports open-source R&D. This approach will promote safety through transparency, drive innovation and competition, and accelerate the development of a sovereign AI capability in Europe.
With numerous esteemed supporters, including the European Laboratory for Learning and Intelligent Systems (ELLIS), the Pan-European AI Network of Excellence, and the German AI Association (KI-Bundesverband), the letter serves as a powerful reminder of the importance of protecting open-source AI for the future of Europe.
Supporters
- European Laboratory for Learning and Intelligent Systems (ELLIS) – Pan-European AI Network of Excellence
- German AI Association (KI-Bundesverband) – With more than 400 companies, the largest AI network in Germany
- Prof. Jürgen Schmidhuber: Scientific Director of the Swiss AI Lab IDSIA (USI & SUPSI), Co-Founder & Chief Scientist of NNAISENSE, Inventor of LSTM Networks
- Prof. Sepp Hochreiter: JKU Linz, Inventor of LSTM Networks
- Prof. Bernhard Schölkopf: Director, Max Planck Institute for Intelligent Systems and ELLIS Institute, Tübingen, Germany
- Prof. Serge Belongie: University of Copenhagen; Director, Pioneer Centre for AI
- Prof. Andreas Geiger: University of Tübingen and Tübingen AI Center
- Prof. Irina Rish: Full Professor at Université de Montréal, Canada Excellence Research Chair (CERC) in Autonomous AI and Canada CIFAR AI Chair, core member of Mila – Quebec AI Institute.
- Prof. Antonio Krüger: CEO of the German Research Center for AI (DFKI) and Professor at Saarland University
- Prof. Kristian Kersting: Full Professor at Technical University of Darmstadt and Co-Director, Hessian Center for AI (hessian.AI)
- Jörg Bienert: CEO of German AI Association, CPO of Alexander Thamm GmbH
- Patrick Schramowski: Researcher at the German Research Center for Artificial Intelligence (DFKI) and Hessian Center for AI (hessian.AI)
- Dr. Jenia Jitsev: Lab Leader at Juelich Supercomputing Center, Research Center Juelich, Helmholtz Association, ELLIS member
- Dr. Sampo Pyysalo: Research Fellow at the University of Turku, Finland
- Robin Rombach: Co-Developer of Stable Diffusion, PhD Candidate at LMU Munich
- Prof. Michael Granitzer: Chair of Data Science, University of Passau, Germany, and Coordinator of OpenWebSearch.eu
- Prof. Dr. Jens Meiler: Leipzig University, ScaDS.AI Center for Scalable Data Analytics and Artificial Intelligence
- Prof. Dr. Martin Potthast: Leipzig University, ScaDS.AI Center for Scalable Data Analytics and Artificial Intelligence, and OpenWebSearch.EU
- Prof. Dr. Holger Hoos: Alexander von Humboldt Professor in AI at RWTH Aachen University (Germany) and Professor of Machine Learning at Universiteit Leiden (Netherlands)
- Prof. Dr. Henning Wachsmuth: Chair of Natural Language Processing at the Institute of Artificial Intelligence, Leibniz University Hannover
- Prof. Dr. Wil van der Aalst: Alexander von Humboldt Professor in Process and Data Science at RWTH Aachen University and Chief Scientist at Celonis
- Prof. Dr. Bastian Leibe: Chair of Computer Vision at RWTH Aachen University (Germany)
- Prof. Dr. Martin Grohe: Chair for Logic and the Theory of Discrete Systems, RWTH Aachen University
- Prof. Ludwig Schmidt: Paul G. Allen School of Computer Science & Engineering, University of Washington
- Dr. Morten Irgens: Vice Rector, Kristiania, Co-founder and board member of CLAIRE (the Confederation of Laboratories of AI Research in Europe), Adra (the AI, Data and Robotics Association) and NORA (the Norwegian AI Research Consortium)
- Prof. Dr. Hector Geffner: Alexander von Humboldt Professor in AI at RWTH Aachen University (Germany), and Wallenberg Guest Professor in AI at Linköping University, Sweden
- Prof. Dr. Hilde Kuehne: Goethe University Frankfurt (Germany), MIT-IBM Watson AI Lab (USA)
- Prof. Gerhard Lakemeyer, Ph.D.: Head of the Knowledge-based Systems Group and Chair of the Computer Science Department, RWTH Aachen University, Germany
- Sebastian Nagel: Crawl Engineer, Common Crawl, Konstanz, Germany
While not officially on the Supporters list, Unite.AI also supports this Open Letter.