Date
December 17 (Wed) 11:00 - 12:00, 2025 (JST)
Speaker
  • Liu Liu (Assistant Professor, Department of Mathematics, The Institute of Mathematical Sciences, The Chinese University of Hong Kong, Hong Kong)
Language
English
Host
Antoine Diez

In this talk, we will introduce a bi-fidelity Asymptotic-Preserving Neural Network (BI-APNN) framework designed to efficiently solve forward and inverse problems for the linear Boltzmann equation. Our approach builds upon the previously studied Asymptotic-Preserving Neural Networks (APNNs), which employ a micro-macro decomposition to handle the model's multiscale nature. We specifically address a bottleneck of the original APNNs: the slow convergence of the macroscopic density in the near fluid-dynamic regime. The bi-fidelity strategy significantly accelerates training convergence and improves the accuracy of the forward-problem solution, particularly in the fluid-dynamic limit. Several numerical experiments on both the linear Boltzmann equation and the Boltzmann-Poisson system show that the new BI-APNN method produces more accurate and robust results for forward and inverse problems than the standard APNNs. This is joint work with Zhenyi Zhu and Xueyu Zhu.
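The micro-macro decomposition mentioned in the abstract splits a kinetic distribution into its velocity average (the macroscopic density) and a rescaled remainder with zero velocity average. As a rough illustration only (not the speaker's implementation), the following hedged sketch applies this decomposition to a toy distribution on a small grid; the scaling parameter `eps`, the grids, and the sample distribution are all illustrative assumptions.

```python
import numpy as np

# Toy micro-macro decomposition: f(x, v) = rho(x) + eps * g(x, v),
# where rho = <f> is the velocity average and <g> = 0.
# All names and values below are illustrative, not from the talk.

eps = 1e-2                          # Knudsen-like scaling parameter (assumed)
x = np.linspace(0.0, 1.0, 64)       # spatial grid
v = np.linspace(-1.0, 1.0, 32)      # velocity grid (uniform quadrature)

# A toy kinetic distribution: an equilibrium part plus an O(eps) perturbation.
X, V = np.meshgrid(x, v, indexing="ij")
f = np.exp(-X) * (1.0 + eps * V * np.sin(np.pi * X))

# Macroscopic density: average of f over velocity.
rho = f.mean(axis=1)                # shape (64,)

# Microscopic remainder, rescaled by eps.
g = (f - rho[:, None]) / eps

# Consistency checks: <g> = 0 and the decomposition reconstructs f exactly.
assert np.allclose(g.mean(axis=1), 0.0, atol=1e-12)
assert np.allclose(rho[:, None] + eps * g, f)
```

In an APNN-style method, `rho` and `g` would each be parameterized by a neural network and trained against the micro-macro form of the equation; the point of the rescaling by `eps` is that the decomposition remains well behaved as `eps` tends to zero, i.e. in the fluid-dynamic limit.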

This is a closed event for scientists; non-scientists are not allowed to attend. If you are not a member or an affiliated person and would like to attend, please contact us using the inquiry form. Please note that the event organizer or speaker must authorize your request to attend.

Inquire about this event