Photolithography involves manipulating light to precisely etch features onto a surface, and is commonly used to fabricate computer chips and optical devices like lenses. But tiny deviations during the manufacturing process often cause these devices to fall short of their designers’ intentions.
To help close this design-to-manufacturing gap, researchers from MIT and the Chinese University of Hong Kong used machine learning to build a digital simulator that mimics a specific photolithography manufacturing process. Their technique uses real data gathered from the photolithography system, so it can more accurately model how the system would fabricate a design.
The researchers integrate this simulator into a design framework, along with another digital simulator that emulates the performance of the fabricated device in downstream tasks, such as producing images with computational cameras. These connected simulators enable a user to produce an optical device that better matches its design and reaches the best task performance.
This technique could help scientists and engineers create more accurate and efficient optical devices for applications like mobile cameras, augmented reality, medical imaging, entertainment, and telecommunications. And because the pipeline of learning the digital simulator uses real-world data, it can be applied to a wide range of photolithography systems.
“This idea sounds simple, but the reasons people haven’t tried this before are that real data can be expensive and there are no precedents for how to effectively coordinate the software and hardware to build a high-fidelity dataset,” says Cheng Zheng, a mechanical engineering graduate student who is co-lead author of an open-access paper describing the work. “We have taken risks and done extensive exploration, for example, developing and trying characterization tools and data-exploration strategies, to determine a working scheme. The result is surprisingly good, showing that real data work much more efficiently and precisely than data generated by simulators composed of analytical equations. Even though it can be expensive and one can feel clueless at the beginning, it is worth doing.”
Zheng wrote the paper with co-lead author Guangyuan Zhao, a graduate student at the Chinese University of Hong Kong; and her advisor, Peter T. So, a professor of mechanical engineering and biological engineering at MIT. The research will be presented at the SIGGRAPH Asia Conference.
Printing with light
Photolithography involves projecting a pattern of light onto a surface, which causes a chemical reaction that etches features into the substrate. However, the fabricated device ends up with a slightly different pattern because of minuscule deviations in the light’s diffraction and tiny variations in the chemical reaction.
Because photolithography is complex and hard to model, many existing design approaches rely on equations derived from physics. These general equations give some sense of the fabrication process but can’t capture all the deviations specific to a particular photolithography system, which can cause devices to underperform in the real world.
For their technique, which they call neural lithography, the MIT researchers build their photolithography simulator using physics-based equations as a base, and then incorporate a neural network trained on real, experimental data from a user’s photolithography system. This neural network, a type of machine-learning model loosely based on the human brain, learns to compensate for many of the system’s specific deviations.
The researchers gather data for their method by generating many designs that cover a wide range of feature sizes and styles, which they fabricate using the photolithography system. They measure the final structures and compare them with the design specifications, pairing those data and using them to train a neural network for their digital simulator.
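The paper's actual neural network and dataset are not described here, but the idea of pairing a physics prior with real measurements can be sketched in a few lines. In this toy stand-in, `physics_prior` plays the role of the analytical lithography equations, the "measurements" carry a systematic deviation the equations miss, and a least-squares affine correction stands in for the learned network — all names and models are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def physics_prior(design):
    """Analytical lithography model (toy stand-in): a crude blur that
    cannot capture the real system's systematic deviations."""
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(design, kernel, mode="same")

# Simulated "experimental" data: the real system applies an extra
# deviation (here, a uniform shrink plus offset) the equations miss.
designs = [rng.random(32) for _ in range(200)]
measured = [0.9 * physics_prior(d) + 0.02 for d in designs]

# Fit a residual correction on top of the prior via least squares —
# a tiny, hypothetical stand-in for training the paper's network.
X = np.concatenate([physics_prior(d) for d in designs])
y = np.concatenate(measured)
A = np.stack([X, np.ones_like(X)], axis=1)
scale, offset = np.linalg.lstsq(A, y, rcond=None)[0]

def learned_simulator(design):
    """Physics prior plus data-driven correction."""
    return scale * physics_prior(design) + offset

# On a held-out design, the corrected simulator tracks the "real"
# system far more closely than the bare physics prior does.
test = rng.random(32)
truth = 0.9 * physics_prior(test) + 0.02
err_prior = np.abs(physics_prior(test) - truth).max()
err_learned = np.abs(learned_simulator(test) - truth).max()
```

The point of the sketch is only the structure: equations supply a base prediction, and paired design/measurement data teach a correction the equations cannot express.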
“The performance of a learned simulator depends on the data fed into it, and data artificially generated from equations can’t cover real-world deviations, which is why it is crucial to have real-world data,” Zheng says.
Dual simulators
The digital lithography simulator consists of two separate components: an optics model that captures how light is projected onto the surface of the device, and a resist model that shows how the photochemical reaction occurs to produce features on the surface.
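That two-stage decomposition — optics first, resist second — can be illustrated with a minimal sketch. Both models here are hypothetical toy stand-ins (a Gaussian blur for diffraction, a soft threshold for the photochemical response), not the paper's learned components.

```python
import numpy as np

def optics_model(mask, sigma=1.0):
    """Aerial image: Gaussian blur of the mask, a toy approximation
    of how diffraction spreads the projected light."""
    x = np.arange(-3, 4)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    return np.convolve(mask, k, mode="same")

def resist_model(aerial, threshold=0.5, steepness=20.0):
    """Resist response: a smooth threshold on exposure dose, a toy
    approximation of the photochemical reaction that prints a feature."""
    return 1.0 / (1.0 + np.exp(-steepness * (aerial - threshold)))

def lithography_simulator(mask):
    """Compose the two components: optics model, then resist model."""
    return resist_model(optics_model(mask))

# A 1-D "mask" with one open region: well-exposed pixels print (~1),
# unexposed pixels do not (~0), with blurred transitions at the edges.
mask = np.zeros(64)
mask[24:40] = 1.0
printed = lithography_simulator(mask)
```

Factoring the simulator this way mirrors the physics: light propagation and chemical development are distinct processes, so each can be modeled (or learned) separately and then composed.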
For a downstream task, they connect this learned photolithography simulator to a physics-based simulator that predicts how the fabricated device will perform on that task, such as how a diffractive lens will diffract the light that strikes it.
The user specifies the outcomes they want a device to achieve. Then these two simulators work together within a larger framework that shows the user how to make a design that will reach those performance goals.
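The overall loop — optimize the design through both the lithography simulator and the task simulator — can be sketched as follows. Everything here is a hypothetical miniature: a toy lithography model, an identity task simulator, and finite-difference gradient descent standing in for the differentiable optimization a real framework would use.

```python
import numpy as np

def litho_sim(mask):
    """Toy learned lithography simulator: blur plus soft threshold."""
    k = np.array([0.25, 0.5, 0.25])
    aerial = np.convolve(mask, k, mode="same")
    return 1.0 / (1.0 + np.exp(-10.0 * (aerial - 0.5)))

def task_sim(printed):
    """Toy downstream-task simulator: here just the printed pattern
    itself, to keep the sketch short."""
    return printed

target = np.zeros(16)
target[6:10] = 1.0            # desired task outcome, set by the user

def loss(m):
    """How far the simulated task output is from the user's goal."""
    return np.mean((task_sim(litho_sim(m)) - target) ** 2)

mask = np.full(16, 0.5)       # the design variable being optimized
lr, eps = 0.5, 1e-4
for _ in range(200):          # finite-difference gradient descent
    g = np.array([(loss(mask + eps * np.eye(16)[i]) - loss(mask)) / eps
                  for i in range(16)])
    mask = np.clip(mask - lr * g, 0.0, 1.0)
```

Because the lithography simulator sits inside the loss, the optimizer automatically pre-compensates the design for fabrication deviations — the structural idea behind putting the photolithography model "in the loop."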
“With our simulator, the fabricated object can get the best possible performance on a downstream task, like computational cameras, a promising technology to make future cameras miniaturized and more powerful. We show that even if you use post-calibration to try to get a better result, it will still not be as good as having our photolithography model in the loop,” Zhao adds.
They tested the technique by fabricating a holographic element that generates a butterfly image when light shines on it. Compared with devices designed using other techniques, their holographic element produced a near-perfect butterfly that more closely matched the design. They also produced a multilevel diffractive lens, which had better image quality than the other devices.
In the future, the researchers want to enhance their algorithms to model more complicated devices, and also test the system using consumer cameras. In addition, they want to expand their approach so it can be used with different types of photolithography systems, such as systems that use deep or extreme ultraviolet light.
This research is supported, in part, by the U.S. National Institutes of Health, Fujikura Limited, and the Hong Kong Innovation and Technology Fund.