In an era when artificial intelligence (AI) learns the laws of physics on its own, a South Korean research team has made that learning method more stable and smarter.
The Gwangju Institute of Science and Technology (GIST) team led by Professor Hwang Ui-seok of the School of Electrical Engineering and Computer Science said on Oct. 20 that it has developed a new technology that resolves training instability in AI models that compute physical laws.
The "Langevin Adaptive Sampling (LAS)" method developed by the team helps a physics-informed neural network (PINN), an AI model that solves partial differential equations (PDEs), learn stably. The result was selected as a Spotlight paper, corresponding to the top about 3.5% of all submissions, at NeurIPS, one of the most prestigious conferences in artificial intelligence. The paper was accepted for publication on Sept. 18 and will be presented in San Diego in December.
Partial differential equations mathematically describe physical phenomena that change over time and space, such as temperature, pressure, fluid flow, and electromagnetic fields. A physics-informed neural network is a technique that lets AI solve such equations by building the physical laws directly into the training process, rather than simply memorizing data, which improves computational efficiency and reduces the cost of collecting data.
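The article does not describe the team's implementation, but the general PINN idea can be illustrated with a minimal sketch. The snippet below, assuming PyTorch, the 1D heat equation u_t = α·u_xx, and an arbitrary small network (all choices here are illustrative, not the paper's setup), trains a network by penalizing the PDE residual at randomly sampled points:

```python
# Minimal PINN sketch (illustrative only, not the paper's implementation).
# Assumes PyTorch and the 1D heat equation u_t = alpha * u_xx; the network
# is trained so that the PDE residual vanishes at sampled collocation points.
import torch

alpha = 0.1  # assumed diffusion coefficient

# Small fully connected network mapping (x, t) -> u(x, t)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def pde_residual(xt):
    """Residual r = u_t - alpha * u_xx; zero where the PDE is satisfied."""
    xt = xt.requires_grad_(True)
    u = net(xt)
    # First derivatives of u with respect to (x, t) via autograd
    grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:]
    # Second derivative u_xx
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x),
                               create_graph=True)[0][:, :1]
    return u_t - alpha * u_xx

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(10_000):
    xt = torch.rand(256, 2)  # uniform collocation points in [0, 1]^2
    # Physics loss: mean squared residual (boundary/initial-condition
    # losses, which a full PINN also needs, are omitted for brevity)
    loss = pde_residual(xt).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the loss is the equation's own residual rather than a fit to labeled data, the network learns to satisfy the physics directly, which is what lets PINNs get by with little or no measured data.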
But the existing approach had limits. The residual is the degree to which the AI's predictions fail to satisfy the governing equations, that is, the error. When the residual grew large in certain regions during training, the AI fixated on those regions, causing lopsided learning, and training was so unstable that even a slight change in learning rate could lead to dramatically different results.
To keep the AI from losing its way in difficult parts of the computation, the researchers applied a training method that mimics particle motion, known as Langevin dynamics. Just as particles move randomly yet pass through important regions more often, the AI is designed to concentrate its exploration on regions with large errors or complex conditions.
They also kept the AI from "fixating" only on high-error regions by having it track how the error changes. In other words, it judges not only "how wrong it is" but also "which way to go to be less wrong." By mixing in a small amount of random movement, the AI is guided toward gentle, stable regions instead of bouncing through unstable ones.
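The paper's exact LAS update rule is not given in the article, but a generic Langevin-style resampler conveys the idea. The sketch below reuses `pde_residual` from the earlier snippet and treats the squared residual as an unnormalized density, so collocation points drift along the residual's gradient while Gaussian noise prevents them from collapsing onto a single high-error spot; the function name, step sizes, and domain clamp are all assumptions for illustration:

```python
# Illustrative Langevin-style adaptive sampling of collocation points
# (a sketch of the general idea; the paper's actual LAS update may differ).
# Reuses `pde_residual` from the PINN snippet above.
import torch

def langevin_resample(xt, step_size=1e-2, n_steps=10):
    """Drift collocation points toward high-residual regions, with noise.

    Gradient ascent on log r(x)^2 supplies the "which way" signal the
    article describes, and the injected Gaussian noise provides the
    random movement that keeps exploration from fixating.
    """
    for _ in range(n_steps):
        xt = xt.detach().requires_grad_(True)
        # Squared residual as an unnormalized log-density (small epsilon
        # avoids log(0) where the network already satisfies the PDE)
        log_density = torch.log(pde_residual(xt).pow(2) + 1e-8).sum()
        grad = torch.autograd.grad(log_density, xt)[0]
        noise = torch.randn_like(xt)
        # Standard Langevin update: gradient step plus scaled noise
        xt = xt + step_size * grad + (2 * step_size) ** 0.5 * noise
        xt = xt.clamp(0.0, 1.0)  # keep points inside the assumed [0, 1]^2 domain
    return xt.detach()
```

In a training loop, such a resampler would periodically replace the uniformly drawn collocation points, so the physics loss is evaluated more often where the model is currently most wrong, without ever abandoning the rest of the domain.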
As a result, LAS significantly reduced error compared with existing methods and produced consistent results even when the learning rate or network architecture changed. In particular, LAS stably found solutions to high-dimensional heat transfer problems in 4 to 8 dimensions where existing techniques failed, and it was computationally efficient, delivering faster and more accurate results at a cost similar to prior methods.
Hwang said, "This study presents a way to enable stable training even in complex models while reducing computational cost," adding, "It could provide reliable AI solutions across industries, including manufacturing, energy, the environment, and climate."