## MIM: A deep mixed residual method for solving high-order partial differential equations

#### Computational Mathematics Seminar, School of Mathematical Sciences, Beijing Normal University

### Title: **MIM: A deep mixed residual method for solving high-order partial differential equations**

Speaker: *Jingrun Chen (Soochow University)*

Place: *Room 103, Electronics Building; Tencent Meeting: https://meeting.tencent.com/s/5OddolQfcg8M, Meeting ID: 807 363 025*

Time: *September 28, 2020, 14:30*

Inviter: *Yongyong Cai*

### Abstract

In recent years, a significant amount of attention has been paid to solving partial differential equations (PDEs) by deep learning. For example, the deep Galerkin method (DGM) uses the PDE residual in the least-squares sense as the loss function and a deep neural network (DNN) to approximate the PDE solution. In this work, we propose a deep mixed residual method (MIM) to solve PDEs with high-order derivatives. Notable examples include the Poisson equation, the Monge-Ampère equation, the biharmonic equation, and the Korteweg-de Vries equation. In MIM, we first rewrite a high-order PDE as a first-order system, very much in the same spirit as the local discontinuous Galerkin method and the mixed finite element method in classical numerical analysis. We then use the residual of the first-order system in the least-squares sense as the loss function, which is in close connection with the least-squares finite element method. For the aforementioned classical methods, the choice of trial and test functions is often important for stability and accuracy. MIM shares this property when DNNs are employed to approximate the unknown functions in the first-order system: in one case, we use nearly the same DNN to approximate all unknown functions, and in the other, we use entirely different DNNs for different unknown functions. Numerous results of MIM with different loss functions and different choices of DNNs are given for the four types of PDEs. In most cases, MIM provides better approximations than DGM (not only for high-order derivatives of the PDE solution but also for the solution itself) with nearly the same DNN and the same execution time, sometimes by more than one order of magnitude. When different DNNs are used, MIM in many cases provides even better approximations than MIM with a single DNN, again sometimes by more than one order of magnitude.
Numerical observations also suggest a steady improvement in approximation accuracy as the problem dimension increases, as well as interesting connections between MIM and classical numerical methods. We therefore expect MIM to open up a possibly systematic way to understand and improve deep learning for solving PDEs from the perspective of classical numerical analysis.
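To make the mixed-residual idea concrete, the following is a minimal sketch for the 1D Poisson problem -u'' = f on (0, 1) with u(0) = u(1) = 0. Instead of the DNNs used in the talk, a small sine/cosine expansion stands in for the two unknown functions (u and the auxiliary variable p = u'), so the least-squares problem is linear and can be solved directly; all variable names and the choice of basis are illustrative assumptions, not from the paper.

```python
import numpy as np

# Mixed first-order reformulation of -u'' = f:
#     p - u' = 0,    p' + f = 0,
# with the residuals of this system minimized in the least-squares
# sense at collocation points (the MIM loss, in a linear toy setting).

K = 4                                   # basis modes for u and p
M = 32                                  # collocation points
x = np.linspace(0.0, 1.0, M + 2)[1:-1]  # interior points only
k = np.arange(1, K + 1)

f = np.pi**2 * np.sin(np.pi * x)        # chosen so exact u = sin(pi x)

# Ansatz: u(x) = sum_k a_k sin(k pi x)  (boundary conditions built in)
#         p(x) = sum_k b_k cos(k pi x)  (separate "network" for p)
S = np.sin(np.outer(x, k * np.pi))      # sin(k pi x_j), shape (M, K)
C = np.cos(np.outer(x, k * np.pi))      # cos(k pi x_j), shape (M, K)

# Residual rows, unknowns stacked as [a; b]:
#   p - u' = 0:  sum_k b_k cos(k pi x) - sum_k a_k (k pi) cos(k pi x) = 0
#   p' + f = 0:  sum_k b_k (k pi) sin(k pi x) = f
A = np.block([
    [-C * (k * np.pi), C],
    [np.zeros((M, K)), S * (k * np.pi)],
])
rhs = np.concatenate([np.zeros(M), f])

coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)
a, b = coef[:K], coef[K:]

u_approx = S @ a
err = np.max(np.abs(u_approx - np.sin(np.pi * x)))
print(f"max |u - u_exact| = {err:.2e}")
```

Because the right-hand side lies in the span of the basis, the least-squares solve recovers the exact solution up to rounding; with a nonlinear DNN ansatz the same residual would instead be minimized by gradient descent.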