Deep Learning Models for Global Coordinate Transformations that Linearize PDEs

Abstract

We develop a deep autoencoder architecture that can be used to find a coordinate transformation which turns a nonlinear PDE into a linear PDE. Our architecture is motivated by the linearizing transformations provided by the Cole-Hopf transform for Burgers equation and the inverse scattering transform for completely integrable PDEs. By leveraging a residual network architecture, a near-identity transformation can be exploited to encode intrinsic coordinates in which the dynamics are linear. The resulting dynamics are given by a Koopman operator matrix K. The decoder allows us to transform back to the original coordinates as well. Multiple time step prediction can be performed by repeated multiplication by the matrix K in the intrinsic coordinates. We demonstrate our method on a number of examples, including the heat equation and Burgers equation, as well as the substantially more challenging Kuramoto-Sivashinsky equation, showing that our method provides a robust architecture for discovering interpretable, linearizing transforms for nonlinear PDEs.
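As a concrete instance of the linearize-propagate-invert pipeline the abstract describes, the classical Cole-Hopf transform can be sketched in a few lines: it maps Burgers equation u_t + u u_x = nu u_xx to the heat equation v_t = nu v_xx, which is then propagated linearly (here in Fourier space) before transforming back. This is a minimal illustration of the known analytic transform, not the paper's learned architecture; the grid size, viscosity, and the crude cumulative-sum antiderivative are our own choices.

```python
import numpy as np

# Cole-Hopf demo: u = -2*nu*v_x / v linearizes Burgers' equation
# into the heat equation v_t = nu*v_xx on a periodic domain.

nu = 0.1                                  # viscosity (arbitrary choice)
N = 256                                   # grid points (arbitrary choice)
L = 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi  # integer wavenumbers

u0 = np.sin(x)                            # initial condition for Burgers

# Forward transform: v = exp(-(1/(2*nu)) * integral of u dx),
# using a crude left Riemann sum for the antiderivative.
U = np.cumsum(u0) * (L / N)
v0 = np.exp(-U / (2 * nu))

# Linear propagation of the heat equation in Fourier space:
# each mode decays as exp(-nu * k^2 * t).
t = 0.5
v_hat = np.fft.fft(v0) * np.exp(-nu * k**2 * t)
v = np.real(np.fft.ifft(v_hat))

# Inverse transform back to Burgers variables, with v_x computed spectrally.
v_x = np.real(np.fft.ifft(1j * k * np.fft.fft(v)))
u = -2 * nu * v_x / v
```

The same three-stage structure (encode, evolve linearly, decode) is what the autoencoder learns from data, with the hand-derived Cole-Hopf map replaced by the encoder/decoder networks and the heat-equation propagator replaced by repeated multiplication by the learned matrix K.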

Related