├── DL_Phy.md
├── IP.md
├── ODEDL.md
└── readme.md
/DL_Phy.md:
--------------------------------------------------------------------------------
1 | # Deep Learning For Physics
2 |
3 | ### Deep Learning And PDE
4 |
5 | Han J, Jentzen A, Weinan E. Solving high-dimensional partial differential equations using deep learning[J]. Proceedings of the National Academy of Sciences, 2018, 115(34): 8505-8510.
6 |
7 | Raissi M, Perdikaris P, Karniadakis G E. Physics informed deep learning (part i): Data-driven solutions of nonlinear partial differential equations[J]. arXiv preprint arXiv:1711.10561, 2017.
8 |
9 | Long Z, Lu Y, Ma X, et al. PDE-net: Learning PDEs from data[J]. arXiv preprint arXiv:1710.09668, 2017.
10 |
11 | Raissi M. Forward-backward stochastic neural networks: Deep learning of high-dimensional partial differential equations[J]. arXiv preprint arXiv:1804.07010, 2018.
12 |
13 | Sun Y, Zhang L, Schaeffer H. NeuPDE: Neural Network Based Ordinary and Partial Differential Equations for Modeling Time-Dependent Data[J]. arXiv preprint arXiv:1908.03190, 2019.
14 |
15 | Yufei Wang, Ziju Shen, Zichao Long and Bin Dong, Learning to Discretize: Solving 1D Scalar Conservation Laws via Deep Reinforcement Learning, arXiv: 1905.11079, 2019.
16 |
17 | Turbulence forecasting via Neural ODE link
18 |
19 | System Identification with Time-Aware Neural Sequence Models, AAAI 2020.
20 |
21 |
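The common thread in the PDE papers above is minimizing a PDE residual over a parametric function class. A toy sketch of that idea (my own illustration, not taken from any listed paper): solve the 1D Poisson problem u'' = f with u(0) = u(1) = 0 by least squares over a small sine basis; PINN-style methods replace the basis with a neural network and obtain derivatives by automatic differentiation.

```python
import numpy as np

# Toy PDE-residual fit: u'' = f on (0, 1), u(0) = u(1) = 0,
# with f(x) = -pi^2 sin(pi x), so the exact solution is u(x) = sin(pi x).
# The sine basis satisfies the boundary conditions by construction.
x = np.linspace(0.0, 1.0, 101)[1:-1]   # interior collocation points
J = 5                                  # number of basis functions
# u(x) = sum_j c_j sin(j pi x)  =>  u''(x) = -sum_j (j pi)^2 c_j sin(j pi x)
A = np.stack([-(j * np.pi) ** 2 * np.sin(j * np.pi * x)
              for j in range(1, J + 1)], axis=1)
f = -np.pi ** 2 * np.sin(np.pi * x)
# Minimize the squared residual ||A c - f||^2; here the model is linear
# in its parameters, so this is an ordinary least-squares problem.
c, *_ = np.linalg.lstsq(A, f, rcond=None)
print(c[0])  # ≈ 1: the exact solution is exactly the first basis function
```

The nonlinear, neural-network version trades this closed-form solve for gradient descent on the same residual loss, plus boundary/data terms.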
22 | ### Physically Meaningful Embeddings
23 |
24 | Variational Integrator Networks for Physically Meaningful Embeddings link
25 |
26 | ### Physics-informed Neural Network Architectures
27 |
28 | de Bezenac, E., Pajot, A., & Gallinari, P. (2017). Deep learning for physical processes: Incorporating prior scientific knowledge. arXiv preprint arXiv:1711.07970. (**ICLR 2018**) link
29 |
30 | Lutter, M., Ritter, C., & Peters, J. (2019). Deep lagrangian networks: Using physics as model prior for deep learning. arXiv preprint arXiv:1907.04490. (**ICLR 2019**) link
31 |
32 | de Avila Belbute-Peres, F., Smith, K., Allen, K., Tenenbaum, J., & Kolter, J. Z. (2018). End-to-end differentiable physics for learning and control. In Advances in Neural Information Processing Systems (pp. 7178-7189). (**NeurIPS 2018**) link
33 |
34 | Schütt, K., Kindermans, P. J., Felix, H. E. S., Chmiela, S., Tkatchenko, A., & Müller, K. R. (2017). SchNet: A continuous-filter convolutional neural network for modeling quantum interactions. In Advances in Neural Information Processing Systems (pp. 991-1001). (**NeurIPS 2017**) link
35 |
36 | Li Y, Wu J, Tedrake R, et al. Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids[J]. arXiv preprint arXiv:1810.01566, 2018.
37 |
--------------------------------------------------------------------------------
/IP.md:
--------------------------------------------------------------------------------
1 | #### Image Processing
2 |
3 |
4 | Liu R, Lin Z, Zhang W, et al. Learning PDEs for image restoration via optimal control[C]//European Conference on Computer Vision. Springer, Berlin, Heidelberg, 2010: 115-128.
5 |
6 | Chen Y, Yu W, Pock T. On learning optimized reaction diffusion processes for effective image restoration CVPR2015
7 |
8 | Xiaoshuai Zhang*, Yiping Lu*, Jiaying Liu, Bin Dong. "Dynamically Unfolding Recurrent Restorer: A Moving Endpoint Control Method for Image Restoration." Seventh International Conference on Learning Representations (ICLR), 2019. (*equal contribution)
9 |
10 | Xixi Jia, Sanyang Liu, Xiangchu Feng, Lei Zhang, "FOCNet: A Fractional Optimal Control Network for Image Denoising," in CVPR 2019.
11 |
--------------------------------------------------------------------------------
/ODEDL.md:
--------------------------------------------------------------------------------
1 | # Deep Learning And ODE
2 |
3 | A very early paper using differential equations to design a residual-like network:
4 |
5 | **Chen Y, Yu W, Pock T. On learning optimized reaction diffusion processes for effective image restoration CVPR2015**
6 |
7 | The first papers introducing the link between ODEs and deep ResNets:
8 |
9 | **Weinan E. A proposal on machine learning via dynamical systems[J]. Communications in Mathematics and Statistics, 2017, 5(1): 1-11.**
10 |
11 | **Sonoda S, Murata N. Transport analysis of infinitely deep neural network[J]. The Journal of Machine Learning Research, 2019, 20(1): 31-82. (It's on arxiv 2017)**
12 |
13 | **Haber E, Ruthotto L. Stable architectures for deep neural networks[J]. Inverse Problems, 2017, 34(1): 014004.**
14 |
15 | **Lu Y, Zhong A, Li Q, et al. Beyond finite layer neural networks: Bridging deep architectures and numerical differential equations[J]. arXiv preprint arXiv:1710.10121, 2017. (ICLR Workshop 2018 / ICML 2018)**
16 |
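The core observation in the papers above is that a residual block h ← h + f(h) is one explicit-Euler step of the ODE dh/dt = f(h). A minimal sketch (the tanh layer, step size, and dimensions are illustrative choices, not from the papers):

```python
import numpy as np

def layer(h, W):
    # one residual branch; stands in for the ODE vector field f(h)
    return np.tanh(W @ h)

def resnet_forward(h0, weights, dt):
    # stacking residual blocks h <- h + dt * f(h) integrates
    # dh/dt = f(h, W_k) with the explicit (forward) Euler scheme
    h = h0
    for W in weights:
        h = h + dt * layer(h, W)
    return h

rng = np.random.default_rng(0)
weights = [rng.normal(scale=0.1, size=(4, 4)) for _ in range(10)]
h0 = rng.normal(size=4)
out = resnet_forward(h0, weights, dt=0.1)
```

Reading depth as integration time is what lets the later papers import stability analysis, other integrators, and optimal-control training methods.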
17 | #### Architecture Design
18 |
19 | Chang B, Meng L, Haber E, et al. Reversible architectures for arbitrarily deep residual neural networks[C]//Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
20 |
21 | Haber E, Ruthotto L. Stable architectures for deep neural networks[J]. Inverse Problems, 2017.
22 |
23 | Lu Y. et al., Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations, ICML 2018.
24 |
25 | Chang B, Chen M, Haber E, et al. AntisymmetricRNN: A dynamical system view on recurrent neural networks[J]. arXiv preprint arXiv:1902.09689, 2019. (ICLR 2019)
26 |
27 | Latent ODEs for Irregularly-Sampled Time Series
28 | Yulia Rubanova, Ricky T. Q. Chen, David Duvenaud
29 | Advances in Neural Information Processing Systems (NeurIPS).
30 |
31 | Chen R T Q, Duvenaud D. Neural Networks with Cheap Differential Operators[C]//2019 ICML Workshop on Invertible Neural Nets and Normalizing Flows (INNF). 2019.
32 |
33 | Dupont E, Doucet A, Teh Y W. Augmented neural odes[J]. arXiv preprint arXiv:1904.01681, 2019.
34 |
35 | Zhong Y D, Dey B, Chakraborty A. Symplectic ODE-Net: Learning Hamiltonian Dynamics with Control[J]. arXiv preprint arXiv:1909.12077, 2019.
36 |
37 | Che Z, Purushotham S, Cho K, et al. Recurrent neural networks for multivariate time series with missing values[J]. Scientific reports, 2018, 8(1): 6085.
38 |
39 | ###### Modeling Other Networks
40 |
41 | Tao Y, Sun Q, Du Q, et al. Nonlocal Neural Networks, Nonlocal Diffusion and Nonlocal Modeling. NeurIPS 2018. *(Modeling nonlocal neural networks)*
42 |
43 | Lu Y, Li Z, He D, et al. Understanding and Improving Transformer From a Multi-Particle Dynamic System Point of View. arXiv preprint arXiv:1906.02762, 2019.*(Modeling Transformer like seq2seq learning networks)*
44 |
45 | Variational Integrator Networks for Physically Meaningful Embeddings link
46 |
47 |
48 | ###### Changing Schemes
49 |
50 | Zhang L, Schaeffer H. Forward Stability of ResNet and Its Variants. arXiv preprint arXiv:1811.09885, 2018.
51 |
52 | Zhu M, Chang B, Fu C. Convolutional Neural Networks combined with Runge-Kutta Methods. arXiv:1802.08831, 2018.
53 |
54 | Xie X, Bao F, Maier T, Webster C. Analytic Continuation of Noisy Data Using Adams Bashforth ResNet. arXiv:1905.10430, 2019.
55 |
56 | Dynamical System Inspired Adaptive Time Stepping Controller for Residual Network Families, AAAI 2020.
57 |
58 | Herty M, Trimborn T, Visconti G. Kinetic Theory for Residual Neural Networks[J]. arXiv preprint arXiv:2001.04294, 2020.
59 |
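Several of the papers in this section (e.g. Zhu, Chang & Fu above) swap forward Euler for a higher-order integrator. A sketch of one classical RK4 step reused as a network block; the scalar test equation is just an illustrative sanity check:

```python
import numpy as np

def rk4_block(h, f, dt):
    # one classical 4th-order Runge-Kutta step; the four evaluations of f
    # correspond to four weight-shared sub-branches inside a single block
    k1 = f(h)
    k2 = f(h + 0.5 * dt * k1)
    k3 = f(h + 0.5 * dt * k2)
    k4 = f(h + dt * k3)
    return h + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# sanity check on dh/dt = -h, whose exact solution is exp(-t)
h, dt = 1.0, 0.1
for _ in range(10):
    h = rk4_block(h, lambda v: -v, dt)
print(abs(h - np.exp(-1.0)))  # 4th-order accuracy: error well below 1e-5
```

In the learned setting, f would be a parametric layer and the same architecture-as-integrator reading applies.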
60 | #### Training Algorithm
61 |
62 | ###### Adjoint Method
63 |
64 | Li Q, Chen L, Tai C, et al. Maximum principle based algorithms for deep learning[J]. The Journal of Machine Learning Research, 2017, 18(1): 5998-6026.
65 |
66 | Li Q, Hao S. An optimal control approach to deep learning and applications to discrete-weight neural networks[J]. arXiv preprint arXiv:1803.01299, 2018.
67 |
68 | Chen T Q, Rubanova Y, Bettencourt J, et al. Neural ordinary differential equations[C]//Advances in neural information processing systems. 2018: 6571-6583.
69 |
70 | Zhang D, Zhang T, Lu Y, et al. You only propagate once: Painless adversarial training using maximal principle[J]. arXiv preprint arXiv:1905.00877, 2019. (NeurIPS 2019)
71 |
72 | ###### Multi-grid-like Algorithms
73 |
74 | Chang B, Meng L, Haber E, et al. Multi-level residual networks from dynamical systems view[J]. arXiv preprint arXiv:1710.10348, 2017.
75 |
76 | Günther S, Ruthotto L, Schroder J B, et al. Layer-parallel training of deep residual neural networks[J]. SIAM Journal on Mathematics of Data Science, 2020, 2(1): 1-23.
77 |
78 | Parpas P, Muir C. Predict Globally, Correct Locally: Parallel-in-Time Optimal Control of Neural Networks. arXiv:1902.02542.
79 |
80 | #### Linking SDE
81 |
82 | Lu Y. et al., Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations, ICML 2018.
83 |
84 | Sun Q, Tao Y, Du Q. Stochastic training of residual networks: a differential equation viewpoint[J]. arXiv preprint arXiv:1812.00174, 2018.
85 |
86 | Tzen B, Raginsky M. Neural Stochastic Differential Equations: Deep Latent Gaussian Models in the Diffusion Limit[J]. arXiv preprint arXiv:1905.09883, 2019.
87 |
88 | Twomey N, Kozłowski M, Santos-Rodríguez R. Neural ODEs with stochastic vector field mixtures[J]. arXiv preprint arXiv:1905.09905, 2019.
89 |
90 | Neural Jump Stochastic Differential Equations, arXiv:1905.10403
91 |
92 | Neural Stochastic Differential Equations, arXiv:1905.11065
93 |
94 | Wang B, Yuan B, Shi Z, et al. EnResNet: ResNet ensemble via the Feynman-Kac formalism. arXiv preprint arXiv:1811.10745, 2018.
95 |
96 | Li X, Wong T K L, Chen R T Q, et al. Scalable Gradients for Stochastic Differential Equations[J]. arXiv preprint arXiv:2001.01328, 2020.
97 |
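The SDE viewpoint in the papers above (e.g. Sun, Tao & Du) reads a residual block with injected noise as one Euler–Maruyama step of dh = f(h) dt + σ dB_t. A minimal sketch (the tanh drift and constant diffusion are illustrative choices):

```python
import numpy as np

def em_forward(h0, weights, dt, sigma, rng):
    # Euler-Maruyama: h <- h + dt * f(h) + sigma * sqrt(dt) * xi,
    # i.e. a residual block with Gaussian noise injected at every layer
    h = h0
    for W in weights:
        noise = sigma * np.sqrt(dt) * rng.normal(size=h.shape)
        h = h + dt * np.tanh(W @ h) + noise
    return h

rng = np.random.default_rng(2)
weights = [rng.normal(scale=0.2, size=(4, 4)) for _ in range(8)]
h0 = rng.normal(size=4)
noisy = em_forward(h0, weights, dt=0.1, sigma=0.1, rng=np.random.default_rng(3))
clean = em_forward(h0, weights, dt=0.1, sigma=0.0, rng=np.random.default_rng(3))
# with sigma = 0 the scheme reduces to the deterministic Euler ResNet
```

This is the discretization under which noise injections such as dropout or stochastic depth get reinterpreted as diffusion terms.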
98 | #### Theoretical Papers
99 |
100 | Weinan E, Han J, Li Q. A mean-field optimal control formulation of deep learning[J]. Research in the Mathematical Sciences, 2019, 6(1): 10.
101 |
102 | Thorpe M, van Gennip Y. Deep limits of residual neural networks[J]. arXiv preprint arXiv:1810.11741, 2018.
103 |
104 | Avelin B, Nyström K. Neural ODEs as the Deep Limit of ResNets with constant weights[J]. arXiv preprint arXiv:1906.12183, 2019.
105 |
106 | Zhang H, Gao X, Unterman J, et al. Approximation Capabilities of Neural Ordinary Differential Equations[J]. arXiv preprint arXiv:1907.12998, 2019.
107 |
108 | Hu K, Kazeykina A, Ren Z. Mean-field Langevin System, Optimal Control and Deep Neural Networks[J]. arXiv preprint arXiv:1909.07278, 2019.
109 |
110 | Tzen B, Raginsky M. Theoretical guarantees for sampling and inference in generative models with latent diffusions[J]. arXiv preprint arXiv:1903.01608, 2019. (COLT 2019)
111 |
112 | #### Robustness
113 |
114 | Zhang J, Han B, Wynter L, Low KH, Kankanhalli M. Towards robust resnet: A small step but a giant leap. IJCAI 2019.
115 |
116 | Yan H, Du J, Tan V Y F, et al. On Robustness of Neural Ordinary Differential Equations[J]. arXiv preprint arXiv:1910.05513, 2019.
117 |
118 | Liu X, Si S, Cao Q, et al. Neural SDE: Stabilizing Neural ODE Networks with Stochastic Noise[J]. arXiv preprint arXiv:1906.02355, 2019.
119 |
120 | Reshniak V, Webster C. Robust learning with implicit residual networks[J]. arXiv preprint arXiv:1905.10479, 2019.
121 |
122 | Wang B, Yuan B, Shi Z, et al. EnResNet: ResNet ensemble via the Feynman-Kac formalism[J]. arXiv preprint arXiv:1811.10745, 2018. (NeurIPS 2019)
123 |
124 |
125 |
126 | #### Generative Models
127 |
128 | Neural Ordinary Differential Equations (BEST PAPER AWARD)
129 | Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, David Duvenaud
130 | Advances in Neural Information Processing Systems (NeurIPS).
131 |
132 | FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models (ORAL)
133 | Will Grathwohl, Ricky T. Q. Chen, Jesse Bettencourt, Ilya Sutskever, David Duvenaud
134 | International Conference on Learning Representations (ICLR).
135 |
136 | Invertible Residual Networks (LONG ORAL)
137 | Jens Behrmann, Will Grathwohl, Ricky T. Q. Chen, David Duvenaud, Jörn-Henrik Jacobsen
International Conference on Machine Learning (ICML).
138 |
139 | Residual Flows for Invertible Generative Modeling (SPOTLIGHT)
140 | Ricky T. Q. Chen, Jens Behrmann, David Duvenaud, Jörn-Henrik Jacobsen
141 | Advances in Neural Information Processing Systems (NeurIPS).
142 |
143 | Quaglino A, Gallieri M, Masci J, et al. Accelerating Neural ODEs with Spectral Elements[J]. arXiv preprint arXiv:1906.07038, 2019.
144 |
145 | Yıldız Ç, Heinonen M, Lähdesmäki H. ODE$^2$VAE: Deep generative second order ODEs with Bayesian neural networks[J]. arXiv preprint arXiv:1905.10994, 2019. (NeurIPS 2019)
146 |
147 | ANODEV2: A Coupled Neural ODE Framework, arXiv:1906.04596
148 |
149 | Port-Hamiltonian Approach to Neural Network Training, CDC 2019
150 |
151 | How to train your neural ODE, arXiv:2002.02798, by Chris Finlay, Jörn-Henrik Jacobsen, Levon Nurbekyan, Adam M. Oberman
152 |
153 | #### Image Processing
154 |
155 |
156 | Liu R, Lin Z, Zhang W, et al. Learning PDEs for image restoration via optimal control[C]//European Conference on Computer Vision. Springer, Berlin, Heidelberg, 2010: 115-128.
157 |
158 | Chen Y, Yu W, Pock T. On learning optimized reaction diffusion processes for effective image restoration CVPR2015
159 |
160 | Xiaoshuai Zhang*, Yiping Lu*, Jiaying Liu, Bin Dong. "Dynamically Unfolding Recurrent Restorer: A Moving Endpoint Control Method for Image Restoration." Seventh International Conference on Learning Representations (ICLR), 2019. (*equal contribution)
161 |
162 | Xixi Jia, Sanyang Liu, Xiangchu Feng, Lei Zhang, "FOCNet: A Fractional Optimal Control Network for Image Denoising," in CVPR 2019.
163 |
164 |
165 | ###### Very Early Work on Learning ODEs/PDEs
166 | Zhu S C, Mumford D B. Prior learning and Gibbs reaction-diffusion[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997.
167 |
168 | Gilboa G, Sochen N, Zeevi Y Y. Estimation of optimal PDE-based denoising in the SNR sense[J]. IEEE Transactions on Image Processing, 2006, 15(8): 2269-2280.
169 |
170 | Bongard J, Lipson H. Automated reverse engineering of nonlinear dynamical systems[J]. Proceedings of the National Academy of Sciences, 2007, 104(24): 9943-9948.
171 |
173 |
174 |
175 | #### Review Paper
176 |
177 | Liu G H, Theodorou E A. Deep learning theory review: An optimal control and dynamical systems perspective[J]. arXiv preprint arXiv:1908.10920, 2019.
178 |
179 | #### 3D Vision
180 |
181 | He X, Cao H L, Zhu B. AdvectiveNet: An Eulerian-Lagrangian Fluidic Reservoir for Point Cloud Processing[J]. arXiv preprint arXiv:2002.00118, 2020.
182 |
183 |
--------------------------------------------------------------------------------
/readme.md:
--------------------------------------------------------------------------------
1 | # Paper List
2 |
3 | ### ODE Based Analysis For Deep Learning
4 |
5 | - Paper List link
6 | - Computer Vision Papers(Image Processing) link
7 |
8 | ### Deep Learning For Physics
9 |
10 | - Paper List link
11 |
--------------------------------------------------------------------------------