
The rapidly growing field of artificial intelligence (AI) is renowned for its performance but comes at a considerable energy cost. A novel approach, proposed by two leading scientists at the Max Planck Institute for the Science of Light in Erlangen, Germany, aims to train AI more efficiently, potentially revolutionizing the way AI processes data.
Current AI models consume vast amounts of energy during training. While precise figures are elusive, estimates by Statista suggest that training GPT-3 required roughly 1,000 megawatt hours, comparable to the annual consumption of 200 large German households. This energy-intensive training has tuned GPT-3 to predict likely word sequences, but there is broad consensus that it has not grasped the inherent meaning of those phrases.
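As a quick sanity check on that comparison, the arithmetic can be spelled out in a few lines of Python. The two input figures are those quoted above; the per-household value is derived from them, not separately sourced:

```python
# Back-of-the-envelope check of the energy comparison above.
gpt3_training_mwh = 1_000   # estimated GPT-3 training energy (MWh), per the text
households = 200            # number of large German households, per the text

per_household_kwh = gpt3_training_mwh * 1_000 / households
print(f"Implied annual use: {per_household_kwh:,.0f} kWh per household")
# -> 5,000 kWh/year, a plausible figure for a large German household
```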
Neuromorphic Computing: Merging Brain and Machine
While conventional AI systems rely on digital artificial neural networks, the future may lie in neuromorphic computing. Florian Marquardt, a director at the Max Planck Institute and professor at the University of Erlangen, explained the drawback of traditional AI setups.
“The data transfer between processor and memory alone consumes a significant amount of energy,” Marquardt highlighted, noting the inefficiencies that arise when training vast neural networks.
Neuromorphic computing takes inspiration from the human brain, processing data in parallel rather than sequentially. Essentially, synapses in the brain act as both processor and memory. Systems that mimic these characteristics, such as photonic circuits that use light for calculations, are currently being explored.
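The contrast with conventional hardware can be made concrete with a toy model. The sketch below is generic Python, not a description of any specific neuromorphic device: it compares a von Neumann-style computation, in which every weight must be fetched from memory before it is used, with an idealized in-memory step, in which the stored “synaptic” weights act on the entire input at once.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(128, 128))   # "synaptic" weights
x = rng.normal(size=128)          # input signal

# Von Neumann style: every weight is shuttled from memory to the
# processor before use; counting fetches is a crude proxy for
# data-movement energy.
def sequential_matvec(W, x):
    fetches = 0
    y = np.zeros(W.shape[0])
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            y[i] += W[i, j] * x[j]   # one weight fetched per multiply
            fetches += 1
    return y, fetches

# Neuromorphic style (idealized): the weights live where the
# computation happens, e.g. in a photonic or memristive crossbar,
# so one physical step applies the whole matrix in parallel.
def in_memory_matvec(W, x):
    return W @ x, 0   # no per-weight transfer in this toy model

y_seq, n_fetch = sequential_matvec(W, x)
y_par, _ = in_memory_matvec(W, x)
assert np.allclose(y_seq, y_par)
print(f"Sequential version moved {n_fetch:,} weights; the analog step moved none.")
```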
Training AI with Self-Learning Physical Machines
Working alongside doctoral student Víctor López-Pastor, Marquardt introduced an innovative training method for neuromorphic computers. Their “self-learning physical machine” optimizes its parameters through an inherent physical process, making external feedback redundant. “Not requiring this feedback makes the training much more efficient,” Marquardt emphasized, suggesting that the method would save both energy and computing time.
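What carries the error information backwards in such a machine is the physics itself. The toy below is a deliberately simplified, linear, digital simulation, emphatically not the authors’ actual scheme; it only illustrates the core ingredient, which the next paragraph spells out as a requirement: if the process is reversible, the output error can travel back through the same dynamics run in reverse, reproducing the gradient without any external feedback computation.

```python
import numpy as np

rng = np.random.default_rng(1)

# A reversible linear "physical process": for an orthogonal matrix,
# running the dynamics in reverse (W.T) is exactly the inverse (W^-1),
# so no information is lost along the way.
W, _ = np.linalg.qr(rng.normal(size=(4, 4)))
assert np.allclose(W.T @ W, np.eye(4))

x = rng.normal(size=4)        # input encoded in the system
target = rng.normal(size=4)   # desired output

y = W @ x                     # forward pass through the "physics"
error = y - target            # mismatch measured at the output

# Reversibility lets the error signal travel back through the same
# hardware run in reverse, yielding the gradient at the input without
# any external digital computation of derivatives:
grad_physical = W.T @ error

# Digital cross-check via finite differences on the loss 0.5*||Wx - t||^2:
loss = lambda v: 0.5 * np.sum((W @ v - target) ** 2)
eps = 1e-6
grad_fd = np.array([(loss(x + eps * e) - loss(x - eps * e)) / (2 * eps)
                    for e in np.eye(4)])
assert np.allclose(grad_physical, grad_fd, atol=1e-5)
print("Time-reversed pass reproduces the loss gradient.")
```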
Yet this groundbreaking technique has specific requirements. The physical process must be reversible, ensuring minimal energy loss, and sufficiently complex, that is, non-linear. “Only non-linear processes can carry out the intricate transformations between input data and results,” Marquardt stated, drawing the distinction between linear and non-linear processes.
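Marquardt’s distinction is easy to verify numerically: any chain of linear steps collapses into a single linear map, so only non-linear steps add expressive power. A minimal demonstration in plain numpy, with no particular hardware assumed:

```python
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(size=(8, 8))
W2 = rng.normal(size=(8, 8))
x = rng.normal(size=8)

# Two linear stages always collapse into one linear map, so a purely
# linear machine gains nothing from extra depth:
assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

# Inserting a non-linearity (tanh) between the stages breaks the
# collapse. Quick test: any linear map f satisfies f(2x) = 2 f(x);
# the non-linear composition does not.
f = lambda v: W2 @ np.tanh(W1 @ v)
print(np.allclose(f(2 * x), 2 * f(x)))   # False: the map is non-linear
```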
Towards Practical Implementation
The duo’s theoretical groundwork is now moving toward practical application. Collaborating with an experimental team, they are developing an optical neuromorphic computer that processes information using superimposed light waves. Their objective is clear: to realize the self-learning physical machine in hardware.
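How superimposed waves can compute is suggested by the following toy picture, a generic illustration of optical interference rather than the team’s actual design: each input is encoded in a wave’s complex amplitude, attenuation and phase shifts along each path act as weights, and the field arriving at a detector is the weighted sum of all superposed contributions.

```python
import numpy as np

# Toy model of computing with superimposed light waves (generic, not
# the Erlangen team's device): inputs live in complex amplitudes,
# path losses and phase shifts act as weights, and the detector sees
# the interference of all paths at once.
inputs = np.array([0.8, 0.3 + 0.2j, -0.5j])   # encoded signals
weights = np.array([0.5, 1.0, 0.7]) * np.exp(1j * np.array([0.0, np.pi / 4, np.pi / 2]))

field_at_detector = np.sum(weights * inputs)   # superposition = weighted sum
intensity = abs(field_at_detector) ** 2        # what a photodetector measures
print(field_at_detector, intensity)
```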
“We hope to present the first self-learning physical machine within three years,” projected Marquardt, indicating that these future networks would handle more data and be trained on larger data sets than today’s systems. Given the rising demands on AI and the intrinsic inefficiencies of current setups, the shift toward efficiently trained neuromorphic computers seems both inevitable and promising.
In Marquardt’s words, “We are confident that self-learning physical machines stand a good chance in the continued evolution of artificial intelligence.” The scientific community and AI enthusiasts alike wait with bated breath for what the future holds.