A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation
Volume 10, Issue 6
Jun. 2023

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 11.8, Top 4% (SCI Q1)
    CiteScore: 17.6, Top 3% (Q1)
    Google Scholar h5-index: 77, Top 5
S. B. Cheng, C. Quilodrán-Casas, S. Ouala, A. Farchi, C. Liu, P. Tandeo, R. Fablet, D. Lucor, B. Iooss, J. Brajard, D. H. Xiao, T. Janjic, W. P. Ding, Y. K. Guo, A. Carrassi, M. Bocquet, and  R. Arcucci,  “Machine learning with data assimilation and uncertainty quantification for dynamical systems: A review,” IEEE/CAA J. Autom. Sinica, vol. 10, no. 6, pp. 1361–1387, Jun. 2023. doi: 10.1109/JAS.2023.123537

Machine Learning With Data Assimilation and Uncertainty Quantification for Dynamical Systems: A Review

doi: 10.1109/JAS.2023.123537
  • Data assimilation (DA) and uncertainty quantification (UQ) are extensively used to analyse and reduce error propagation in high-dimensional spatio-temporal dynamics. Typical applications span from computational fluid dynamics (CFD) to geoscience and climate systems. Recently, much effort has been devoted to combining DA, UQ and machine learning (ML) techniques. These research efforts seek to address critical challenges in high-dimensional dynamical systems, including but not limited to dynamical system identification, reduced-order surrogate modelling, error covariance specification and model error correction. The large number of techniques and methodologies developed to date, and their broad applicability across numerous domains, call for a comprehensive guide. This paper provides the first overview of state-of-the-art research in this interdisciplinary field, covering a wide range of applications. The review is aimed at ML scientists who seek to apply DA and UQ techniques to improve the accuracy and interpretability of their models, but also at DA and UQ experts who intend to integrate cutting-edge ML approaches into their systems. It therefore places special focus on how ML methods can overcome the existing limits of DA and UQ, and vice versa. Some exciting perspectives of this rapidly developing research field are also discussed.
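To make the data assimilation vocabulary above concrete, the following is a minimal toy sketch (not taken from the paper) of a single Kalman analysis step: a forecast state is corrected toward an observation, with the background error covariance B and observation error covariance R — the quantities that "error covariance specification" refers to — controlling the weighting. All variable names and values here are illustrative assumptions.

```python
import numpy as np

def kalman_analysis(x_b, B, y, H, R):
    """Toy Kalman analysis step: x_a = x_b + K (y - H x_b)."""
    S = H @ B @ H.T + R             # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)  # Kalman gain
    return x_b + K @ (y - H @ x_b)

x_b = np.array([1.0, 2.0])   # background (forecast) state
B = np.eye(2) * 0.5          # background error covariance
H = np.array([[1.0, 0.0]])   # observation operator: observe 1st component only
R = np.array([[0.5]])        # observation error covariance
y = np.array([2.0])          # observation

x_a = kalman_analysis(x_b, B, y, H, R)
# With equal background and observation variances, the observed component
# moves halfway toward the observation: x_a = [1.5, 2.0]
```

The ML-augmented approaches surveyed in the review replace or learn parts of this pipeline, e.g. estimating B and R from data or substituting a learned surrogate for the forecast model that produces x_b.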

     

  • loading
  • [1]
    A. Voulodimos, N. Doulamis, A. Doulamis, and E. Protopapadakis, “Deep learning for computer vision: A brief review,” Comput. Intell. Neurosci., vol. 2018, p. 7068349, Feb. 2018.
    [2]
    T. Young, D. Hazarika, S. Poria, and E. Cambria, “Recent trends in deep learning based natural language processing [Review Article],” IEEE Comput. Intell. Mag., vol. 13, no. 3, pp. 55–75, Aug. 2018. doi: 10.1109/MCI.2018.2840738
    [3]
    G. E. Karniadakis, I. G. Kevrekidis, L. Lu, P. Perdikaris, S. F. Wang, and L. Yang, “Physics-informed machine learning,” Nat. Rev. Phys., vol. 3, no. 6, pp. 422–440, Jun. 2021. doi: 10.1038/s42254-021-00314-5
    [4]
    S. Ravuri, K. Lenc, M. Willson, D. Kangin, R. Lam, P. Mirowski, M. Fitzsimons, M. Athanassiadou, S. Kashem, S. Madge, R. Prudden, A. Mandhane, A. Clark, A. Brock, K. Simonyan, R. Hadsell, N. Robinson, E. Clancy, A. Arribas, and S. Mohamed, “Skilful precipitation nowcasting using deep generative models of radar,” Nature, vol. 597, no. 7878, pp. 672–677, Sept. 2021. doi: 10.1038/s41586-021-03854-z
    [5]
    J. S. Dramsch, “70 years of machine learning in geoscience in review,” Adv. Geophys., vol. 61, pp. 1–55, Jun. 2020.
    [6]
    S. L. Brunton and J. N. Kutz, Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control. 2nd ed. Cambridge, UK: Cambridge University Press, 2022.
    [7]
    A. Carrassi, M. Bocquet, L. Bertino, and G. Evensen, “Data assimilation in the geosciences: An overview of methods, issues, and perspectives,” WIREs Climate Change, vol. 9, no. 5, p. e535, Sep.-Oct. 2018.
    [8]
    M. Abdar, F. Pourpanah, S. Hussain, D. Rezazadegan, L. Liu, M. Ghavamzadeh, P. Fieguth, X. C. Cao, A. Khosravi, U. R. Acharya, V. Makarenkov, and S. Nahavandi, “A review of uncertainty quantification in deep learning: Techniques, applications and challenges,” Inf. Fusion, vol. 76, pp. 243–297, Dec. 2021. doi: 10.1016/j.inffus.2021.05.008
    [9]
    R. C. Smith, Uncertainty Quantification: Theory, Implementation, and Applications. Philadelphia, USA: SIAM, 2014.
    [10]
    T. J. Sullivan, Introduction to Uncertainty Quantification. Cham, Germany: Springer, 2015.
    [11]
    A. J. Geer, “Learning earth system models from observations: Machine learning or data assimilation?” ECMWF, Reading, UK, Tech. Rep. 863, May 2020.
    [12]
    J. Brajard, A. Carrassi, M. Bocquet, and L. Bertino, “Combining data assimilation and machine learning to emulate a dynamical model from sparse and noisy observations: A case study with the Lorenz 96 model,” J. Comput. Sci., vol. 44, p. 101171, Jul. 2020. doi: 10.1016/j.jocs.2020.101171
    [13]
    R. Arcucci, J. C. Zhu, S. Hu, and Y.-K. Guo, “Deep data assimilation: Integrating deep learning with data assimilation,” Appl. Sci., vol. 11, no. 3, p. 1114, Jan. 2021. doi: 10.3390/app11031114
    [14]
    C. Buizza, C. Q. Casas, P. Nadler, J. Mack, S. Marrone, Z. Titus, C. Le Cornec, E. Heylen, T. Dur, L. B. Ruiz, C. Heaney, J. A. D. Lopez, K. S. S. Kumar, and R. Arcucci, “Data learning: Integrating data assimilation and machine learning,” J. Comput. Sci., vol. 58, p. 101525, Feb. 2022. doi: 10.1016/j.jocs.2021.101525
    [15]
    M. Bocquet, J. Brajard, A. Carrassi, and L. Bertino, “Bayesian inference of chaotic dynamics by merging data assimilation, machine learning and expectation-maximization,” Found. Data Sci., vol. 2, no. 1, pp. 55–80, Mar. 2020. doi: 10.3934/fods.2020004
    [16]
    M. Bonavita, “Data assimilation and machine learning science at ECMWF,” ECMWF, Reading, UK, Oct. 29, 2014.
    [17]
    M. Raissi, P. Perdikaris, and G. E. Karniadakis, “Inferring solutions of differential equations using noisy multi-fidelity data,” J. Comput. Phys., vol. 335, pp. 736–746, Apr. 2017. doi: 10.1016/j.jcp.2017.01.060
    [18]
    P. Perdikaris, M. Raissi, A. Damianou, N. D. Lawrence, and G. E. Karniadakis, “Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling,” Proc. Roy. Soc. A: Math. Phys. Eng. Sci., vol. 473, no. 2198, p. 20160751, Feb. 2017.
    [19]
    G. Berkooz, P. Holmes, and J. L. Lumley, “The proper orthogonal decomposition in the analysis of turbulent flows,” Annu. Rev. Fluid Mech., vol. 25, pp. 539–575, Jan. 1993. doi: 10.1146/annurev.fl.25.010193.002543
    [20]
    G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, no. 5786, pp. 504–507, Jul. 2006. doi: 10.1126/science.1127647
    [21]
    H.-X. Lin, J. B. Jin, and H.-J. van den Herik, “Air quality forecast through integrated data assimilation and machine learning,” in Proc. 11th Int. Conf. Agents and Artificial Intelligence, Prague, Czech Republic, 2019, pp. 787–793.
    [22]
    C. Quilodrán-Casas, R. Arcucci, L. Mottet, Y. K. Guo, and C. Pain, “Adversarial autoencoders and adversarial LSTM for improved forecasts of urban air pollution simulations,” arXiv preprint arXiv: 2104.06297, 2021.
    [23]
    M. O. Williams, C. W. Rowley, I. Mezić, and I. G. Kevrekidis, “Data fusion via intrinsic dynamic variables: An application of data-driven Koopman spectral analysis,” Europhys. Lett., vol. 109, no. 4, p. 40007, Feb. 2015. doi: 10.1209/0295-5075/109/40007
    [24]
    C. Quilodrán Casas, “Fast ocean data assimilation and forecasting using a neural-network reduced-space regional ocean model of the north Brazil current,” Ph.D. dissertation, Imperial College London, London, UK, 2018.
    [25]
    I. Mezić and T. Runolfsson, “Uncertainty propagation in dynamical systems,” Automatica, vol. 44, no. 12, pp. 3003–3013, Dec. 2008. doi: 10.1016/j.automatica.2008.04.020
    [26]
    R. Ghanem and J. Red-Horse, “Propagation of probabilistic uncertainty in complex physical systems using a stochastic finite element approach,” Phys. D: Nonlinear Phenom., vol. 133, no. 1–4, pp. 137–144, Sept. 1999. doi: 10.1016/S0167-2789(99)00102-5
    [27]
    D. Lucor, C.-H. Su, and G. E. Karniadakis, “Generalized polynomial chaos and random oscillators,” Int. J. Numer. Methods Eng., vol. 60, no. 3, pp. 571–596, May 2004. doi: 10.1002/nme.976
    [28]
    J. Li and D. B. Xiu, “A generalized polynomial chaos based ensemble Kalman filter with high accuracy,” J. Comput. Phys., vol. 228, no. 15, pp. 5454–5469, Aug. 2009. doi: 10.1016/j.jcp.2009.04.029
    [29]
    A. Borovikov, M. M. Rienecker, C. L. Keppenne, and G. C. Johnson, “Multivariate error covariance estimates by Monte Carlo simulation for assimilation studies in the Pacific Ocean,” Mon. Wea. Rev., vol. 133, no. 8, pp. 2310–2334, Aug. 2005. doi: 10.1175/MWR2984.1
    [30]
    N. Bousserez, D. K. Henze, A. Perkins, K. W. Bowman, M. Lee, J. Liu, F. Deng, and D. B. A. Jones, “Improved analysis-error covariance matrix for high-dimensional variational inversions: Application to source estimation using a 3D atmospheric transport model,” Quart. J. Roy. Meteor. Soc., vol. 141, no. 690, pp. 1906–1921, Jul. 2015. doi: 10.1002/qj.2495
    [31]
    Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, May 2015. doi: 10.1038/nature14539
    [32]
    Y. Gal, “Uncertainty in deep learning,” Ph.D. dissertation, University of Cambridge, Cambridge, UK, 2016.
    [33]
    E. Hüllermeier and W. Waegeman, “Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods,” Mach. Learn., vol. 110, no. 3, pp. 457–506, Mar. 2021. doi: 10.1007/s10994-021-05946-3
    [34]
    A. Saltelli, S. Tarantola, F. Campolongo, and M. Ratto, Sensitivity Analysis in Practice: A Guide to Assessing Scientific Models. Hoboken, USA: John Wiley & Sons, Ltd., 2004.
    [35]
    R. Ghanem, D. Higdon, and H. Owhadi, Handbook of Uncertainty Quantification. Cham, Germany: Springer, 2017.
    [36]
    S. Da Veiga, F. Gamboa, B. Iooss, and C. Prieur, Basics and Trends in Sensitivity Analysis: Theory and Practice in R. Philadelphia, USA: SIAM, 2021.
    [37]
    F. A. Rihan, “Sensitivity analysis for dynamic systems with time-lags,” J. Comput. Appl. Math., vol. 151, no. 2, pp. 445–462, Feb. 2003. doi: 10.1016/S0377-0427(02)00659-3
    [38]
    D. G. Cacuci, M. Ionescu-Bujor, and I. M. Navon, Sensitivity and Uncertainty Analysis, Volume II: Applications to Large-Scale Systems. Boca Raton, USA: CRC Press, 2005.
    [39]
    B. Iooss, R. Kennet, and P. Secchi, “Different views of interpretability,” in Interpretability for Industry 4.0: Statistical and Machine Learning Approaches, A. Lepore, B. Palumbo, and J.-M. Poggi, Eds. Cham, Germany: Springer, 2022, pp. 1–20.
    [40]
    M. Il Idrissi, N. Bousquet, F. Gamboa, B. Iooss, and J.-M. Loubes, “Quantile-constrained wasserstein projections for robust interpretability of numerical and machine learning models,” arXiv preprint arXiv: 2209.11539, 2022.
    [41]
    C. Molnar, Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. 2nd ed. 2022.
    [42]
    M. D’Elia and A. Veneziani, “Uncertainty quantification for data assimilation in a steady incompressible Navier-Stokes problem,” ESAIM: Math. Modell. Numer. Anal., vol. 47, no. 4, pp. 1037–1057, Jul.-Aug. 2013. doi: 10.1051/m2an/2012056
    [43]
    B. Wang, X. L. Zou, and J. Zhu, “Data assimilation and its applications,” Proc. Natl. Acad. Sci. USA, vol. 97, no. 21, pp. 11143–11144, Oct. 2000. doi: 10.1073/pnas.97.21.11143
    [44]
    A. C. Lorenc, “Analysis methods for numerical weather prediction,” Quart. J. Roy. Meteor. Soc., vol. 112, no. 474, pp. 1177–1194, Oct. 1986. doi: 10.1002/qj.49711247414
    [45]
    R. N. Bannister, “A review of forecast error covariance statistics in atmospheric variational data assimilation. II: Modelling the forecast error covariance statistics,” Quart. J. Roy. Meteor. Soc., vol. 134, no. 637, pp. 1971–1996, Oct. 2008. doi: 10.1002/qj.340
    [46]
    T. Janjić, N. Bormann, M. Bocquet, J. A. Carton, S. E. Cohn, S. L. Dance, S. N. Losa, N. K. Nichols, R. Potthast, J. A. Waller, and P. Weston, “On the representation error in data assimilation,” Quart. J. Roy. Meteor. Soc., vol. 144, no. 713, pp. 1257–1278, Apr. 2018. doi: 10.1002/qj.3130
    [47]
    Y. Tr’emolet, “Accounting for an imperfect model in 4D-Var,” Quart. J. Roy. Meteor. Soc., vol. 132, no. 621, pp. 2483–2504, Oct. 2006. doi: 10.1256/qj.05.224
    [48]
    L. Hascoët, “Adjoints by automatic differentiation,” in Advanced Data Assimilation for Geosciences: Lecture Notes of the Les Houches School of Physics: Special Issue, June 2012, É. Blayo, M. Bocquet, E. Cosme, and L. F. Cugliandolo, Eds. Oxford, UK: Oxford University Press, 2014, pp. 349–370.
    [49]
    G. A. Gottwald and S. Reich, “Combining machine learning and data assimilation to forecast dynamical systems from noisy partial observations,” Chaos: Interdiscip. J. Nonlinear Sci., vol. 31, no. 10, p. 101103, Oct. 2021. doi: 10.1063/5.0066080
    [50]
    A. H. Jazwinski, Stochastic Processes and Filtering Theory. New York, USA: Dover Publications, Inc., 2007.
    [51]
    R. N. Miller, M. Ghil, and F. Gauthiez, “Advanced data assimilation in strongly nonlinear dynamical systems,” J. Atmos. Sci., vol. 51, no. 8, pp. 1037–1056, Apr. 1994. doi: 10.1175/1520-0469(1994)051<1037:ADAISN>2.0.CO;2
    [52]
    M. Ghil and P. Malanotte-Rizzoli, “Data assimilation in meteorology and oceanography,” Adv. Geophys., vol. 33, pp. 141–266, Dec. 1991.
    [53]
    P. de Rosnay, G. Balsamo, C. Albergel, J. Muñoz-Sabater, and L. Isaksen, “Initialisation of land surface variables for numerical weather prediction,” Surv. Geophys., vol. 35, no. 3, pp. 607–621, May 2014. doi: 10.1007/s10712-012-9207-x
    [54]
    G. Evensen, “Using the extended Kalman filter with a multilayer quasi-geostrophic ocean model,” J. Geophys. Res.: Oceans, vol. 97, no. C11, pp. 17905–17924, Nov. 1992. doi: 10.1029/92JC01972
    [55]
    G. Evensen, Data Assimilation: The Ensemble Kalman Filter. 2nd ed. Berlin, Heidelberg, Germany: Springer, 2009.
    [56]
    P. J. Van Leeuwen, H. R. Künsch, L. Nerger, R. Potthast, and S. Reich, “Particle filters for high-dimensional geoscience applications: A review,” Quart. J. Roy. Meteor. Soc., vol. 145, no. 723, pp. 2335–2365, Jul. 2019. doi: 10.1002/qj.3551
    [57]
    A. Carrassi, M. Bocquet, J. Demaeyer, C. Grudzien, P. Raanes, and S. Vannitsem, “Data assimilation for chaotic dynamics,” in Data Assimilation for Atmospheric, Oceanic and Hydrologic Applications (Vol. IV), S. K. Park and L. Xu, Eds. Cham, Germany: Springer, 2022, pp. 1–42.
    [58]
    M. Asch, M. Bocquet, and M. Nodet, Data Assimilation: Methods, Algorithms, and Applications. Philadelphia, USA: SIAM, 2016.
    [59]
    E. Calvello, S. Reich, and A. M. Stuart, “Ensemble Kalman methods: A mean field perspective,” arXiv preprint arXiv: 2209.11371, 2022.
    [60]
    G. Evensen, F. C. Vossepoel, and P. J. van Leeuwen, Data Assimilation Fundamentals: A Unified Formulation of the State and Parameter Estimation Problem. Cham, Germany: Springer, 2022.
    [61]
    R. E. Kalman, “A new approach to linear filtering and prediction problems,” J. Basic Eng., vol. 82, no. 1, pp. 35–45, Mar. 1960. doi: 10.1115/1.3662552
    [62]
    S. Ouala, R. Fablet, L. Drumetz, B. Chapron, A. Pascual, F. Collard, and L. Gaultier, “End-to-end Kalman filter for the reconstruction of sea surface dynamics from satellite data,” in Proc. IEEE Int. Geoscience and Remote Sensing Symp., Brussels, Belgium, 2021, pp. 7414–7417.
    [63]
    S. L. Brunton, J. L. Proctor, and J. N. Kutz, “Discovering governing equations from data by sparse identification of nonlinear dynamical systems,” Proc. Natl. Acad. Sci. USA, vol. 113, no. 15, pp. 3932–3937, Mar. 2016. doi: 10.1073/pnas.1517384113
    [64]
    U. Fasel, J. N. Kutz, B. W. Brunton, and S. L. Brunton, “Ensemble-SINDy: Robust sparse model discovery in the low-data, high-noise limit, with active learning and control,” Proc. Roy. Soc. A: Math. Phys. Eng. Sci., vol. 478, no. 2260, p. 20210904, Apr. 2022.
    [65]
    S. L. Brunton, J. L. Proctor, and J. N. Kutz, “Sparse identification of nonlinear dynamics with control (SINDYc),” IFAC-PapersOnLine, vol. 49, no. 18, pp. 710–715, Dec. 2016. doi: 10.1016/j.ifacol.2016.10.249
    [66]
    S. B. Cheng, J. H. Chen, C. Anastasiou, P. Angeli, O. K. Matar, Y.-K. Guo, C. C. Pain, and R. Arcucci, “Generalised latent assimilation in heterogeneous reduced spaces with machine learning surrogate models,” J. Sci. Comput., vol. 94, no. 1, p. 11, Jan. 2023. doi: 10.1007/s10915-022-02059-4
    [67]
    S. B. Cheng, Y. F. Jin, S. P. Harrison, C. Quilodrán-Casas, I. C. Prentice, Y.-K. Guo, and R. Arcucci, “Parameter flexible wildfire prediction using machine learning techniques: Forward and inverse modelling,” Remote Sens., vol. 14, no. 13, p. 3228, Jul. 2022. doi: 10.3390/rs14133228
    [68]
    H. L. Gong, S. B. Cheng, Z. Chen, Q. Li, C. Quilodrán-Casas, D. H. Xiao, and R. Arcucci, “An efficient digital twin based on machine learning SVD autoencoder and generalised latent assimilation for nuclear reactor physics,” Ann. Nucl. Energy, vol. 179, p. 109431, Dec. 2022. doi: 10.1016/j.anucene.2022.109431
    [69]
    A. D. Carnerero, D. R. Ramirez, and T. Alamo, “State-space kriging: A data-driven method to forecast nonlinear dynamical systems,” IEEE Control Syst. Lett., vol. 6, pp. 2258–2263, Jan. 2022. doi: 10.1109/LCSYS.2021.3140167
    [70]
    Z. Bai and L. Q. Peng, “Non-intrusive nonlinear model reduction via machine learning approximations to low-dimensional operators,” Adv. Model. Simul. Eng. Sci., vol. 8, no. 1, p. 28, Dec. 2021. doi: 10.1186/s40323-021-00213-5
    [71]
    H. L. Gong, S. B. Cheng, Z. Chen, and Q. Li, “Data-enabled physics-informed machine learning for reduced-order modeling digital twin: Application to nuclear reactor physics,” Nucl. Sci. Eng., vol. 196, no. 6, pp. 668–693, Feb. 2022. doi: 10.1080/00295639.2021.2014752
    [72]
    B. Lakshminarayanan, A. Pritzel, and C. Blundell, “Simple and scalable predictive uncertainty estimation using deep ensembles,” in Proc. 31st Int. Conf. Neural Information Processing Systems, Long Beach, USA, 2017, pp. 6405–6416.
    [73]
    S. B. Cheng, I. C. Prentice, Y. H. Huang, Y. F. Jin, Y.-K. Guo, and R. Arcucci, “Data-driven surrogate model with latent data assimilation: Application to wildfire forecasting,” J. Comput. Phys., vol. 464, p. 111302, Sept. 2022. doi: 10.1016/j.jcp.2022.111302
    [74]
    K. Liu, K. Ok, W. Vega-Brown, and N. Roy, “Deep inference for covariance estimation: Learning gaussian noise models for state estimation,” in Proc. IEEE Int. Conf. Robotics and Automation, Brisbane, Australia, 2018, pp. 1436–1443.
    [75]
    M. Amendola, R. Arcucci, L. Mottet, C. Q. Casas, S. W. Fan, C. Pain, P. Linden, and Y.-K. Guo, “Data assimilation in the latent space of a neural network,” arXiv preprint arXiv: 2012.12056, 2020.
    [76]
    Y. L. Zhuang, S. B. Cheng, N. Kovalchuk, M. Simmons, O. K. Matar, Y.-K. Guo, and R. Arcucci, “Ensemble latent assimilation with deep learning surrogate model: Application to drop interaction in a microfluidics device,” Lab Chip, vol. 22, no. 17, pp. 3187–3202, Jul. 2022. doi: 10.1039/D2LC00303A
    [77]
    Y. S. Wang, H. X. Yao, and S. C. Zhao, “Auto-encoder based dimensionality reduction,” Neurocomputing, vol. 184, pp. 232–242, Apr. 2016. doi: 10.1016/j.neucom.2015.08.104
    [78]
    S. B. Cheng and M. M. Qiu, “Observation error covariance specification in dynamical systems for data assimilation using recurrent neural networks,” Neural Comput. Appl., vol. 34, no. 16, pp. 13149–13167, Aug. 2022. doi: 10.1007/s00521-021-06739-4
    [79]
    G. Revach, N. Shlezinger, X. Y. Ni, A. L. Escoriza, R. J. G. van Sloun, and Y. C. Eldar, “KalmanNet: Neural network aided Kalman filtering for partially known dynamics,” IEEE Trans. Signal Process., vol. 70, pp. 1532–1547, Mar. 2022. doi: 10.1109/TSP.2022.3158588
    [80]
    S. Legler and T. Janjić, “Combining data assimilation and machine learning to estimate parameters of a convective-scale model,” Quart. J. Roy. Meteor. Soc., vol. 148, no. 743, pp. 860–874, Jan. 2022. doi: 10.1002/qj.4235
    [81]
    L. V. Jospin, H. Laga, F. Boussaid, W. Buntine, and M. Bennamoun, “Hands-on bayesian neural networks—a tutorial for deep learning users,” IEEE Comput. Intell. Mag., vol. 17, no. 2, pp. 29–48, May 2022. doi: 10.1109/MCI.2022.3155327
    [82]
    T. Pfaff, M. Fortunato, A. Sanchez-Gonzalez, and P. W. Battaglia, “Learning mesh-based simulation with graph networks,” in Proc. 9th Int. Conf. Learning Representations, Austria, 2021.
    [83]
    R. Rakhimov, D. Volkhonskiy, A. Artemov, D. Zorin, and E. Burnaev, “Latent video transformer,” in Proc. 16th Int. Joint Conf. Computer Vision, Imaging and Computer Graphics Theory and Applications, 2021, pp. 101–112.
    [84]
    T. Y. Lin, Y. X. Wang, X. Y. Liu, and X. P. Qiu, “A survey of transformers,” AI Open, vol. 3, pp. 111–132, Oct. 2022. doi: 10.1016/j.aiopen.2022.10.001
    [85]
    R. Fu, D. Xiao, I. M. Navon, and C. Wang, “A data driven reduced order model of fluid flow by auto-encoder and self-attention deep learning methods,” arXiv preprint arXiv: 2109.02126, 2021.
    [86]
    T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2nd ed. New York, USA: Springer, 2009.
    [87]
    S. Shalev-Shwartz and S. Ben-David, Understanding Machine Learning: From Theory to Algorithms. New York, USA: Cambridge University Press, 2014.
    [88]
    DEEL Certification Workgoup, “Machine learning in certified systems,” DEpendable & Explainable Learning (DEEL), Tech. Rep. S079L03T00-005, Mar. 2021.
    [89]
    L. Le Gratiet, S. Marelli, and B. Sudret, “Metamodel-based sensitivity analysis: Polynomial chaos expansions and Gaussian processes,” in Handbook of Uncertainty Quantification, R. Ghanem, D. Higdon, and H. Owhadi, Eds. Cham, Germany: Springer, 2017, pp. 1289–1325.
    [90]
    G. Blatman, “Adaptive sparse polynomial chaos expansions for uncertainty propagation and sensitivity analysis,” Ph.D. dissertation, Université Blaise Pascal, Clermont-Ferrand, France, 2009.
    [91]
    C. Demay, B. Iooss, L. Le Gratiet, and A. Marrel, “Model selection based on validation criteria for Gaussian Process regression: An application with highlights on the predictive variance,” Qual. Reliab. Eng. Int., vol. 38, no. 3, pp. 1482–1500, Apr. 2022. doi: 10.1002/qre.2973
    [92]
    N. Meinshausen, “Forest garrote,” Electron. J. Statist., vol. 3, pp. 1288–1304, Dec. 2009.
    [93]
    L. Mentch and G. Hooker, “Quantifying uncertainty in random forests via confidence intervals and hypothesis tests,” J. Mach. Learn. Res., vol. 17, no. 1, pp. 841–881, Jan. 2016.
    [94]
    Y. Gal and Z. Ghahramani, “Dropout as a Bayesian approximation: Representing model uncertainty in deep learning,” in Proc. 33rd Int. Conf. Machine Learning, New York, USA, 2016, pp. 1050–1059.
    [95]
    J. Gawlikowski, C. R. N. Tassi, M. Ali, J. Lee, M. Humt, J. X. Feng, A. Kruspe, R. Triebel, P. Jung, R. Roscher, M. Shahzad, W. Yang, R. Bamler, and X. X. Zhu, “A survey of uncertainty in deep neural networks,” arXiv preprint arXiv: 2107.03342, 2021.
    [96]
    A. F. Psaros, X. H. Meng, Z. R. Zou, L. Guo, and G. E. Karniadakis, “Uncertainty quantification in scientific machine learning: Methods, metrics, and comparisons,” J. Comput. Phys., vol. 477, p. 111902, Mar. 2023. doi: 10.1016/j.jcp.2022.111902
    [97]
    G. Shafer and V. Vovk, “A tutorial on conformal prediction,” J. Mach. Learn. Res., vol. 9, pp. 371–421, Jun. 2008.
    [98]
    G. Rega and H. Troger, “Dimension reduction of dynamical systems: Methods, models, applications,” Nonlinear Dyn., vol. 41, no. 1-3, pp. 1–15, Aug. 2005. doi: 10.1007/s11071-005-2790-3
    [99]
    S. Klus, F. Nüske, P. Koltai, H. Wu, I. Kevrekidis, C. Schütte, and F. Noé, “Data-driven model reduction and transfer operator approximation,” J. Nonlinear Sci., vol. 28, no. 3, pp. 985–1010, Jun. 2018. doi: 10.1007/s00332-017-9437-7
    [100]
    D. J. Lucia, P. S. Beran, and W. A. Silva, “Reduced-order modeling: New approaches for computational physics,” Prog. Aerosp. Sci., vol. 40, no. 1-2, pp. 51–117, Feb. 2004. doi: 10.1016/j.paerosci.2003.12.001
    [101]
    K. Lee and K. T. Carlberg, “Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders,” J. Comput. Phys., vol. 404, p. 108973, Mar. 2020. doi: 10.1016/j.jcp.2019.108973
    [102]
    D. Wu, Y. He, X. Luo, and M. C. Zhou, “A latent factor analysis-based approach to online sparse streaming feature selection,” IEEE Trans. Syst.,Man,Cybern.: Syst., vol. 52, no. 11, pp. 6744–6758, Nov. 2022. doi: 10.1109/TSMC.2021.3096065
    [103]
    H. Sowrirajan, J. B. Yang, A. Y. Ng, and P. Rajpurkar, “MoCo pretraining improves representation and transferability of chest X-ray models,” in Proc. Medical Imaging with Deep Learning, Lübeck, Germany, 2021, pp. 728–744.
    [104]
    K. T. Carlberg, A. Jameson, M. J. Kochenderfer, J. Morton, L. Q. Peng, and F. D. Witherden, “Recovering missing CFD data for high-order discretizations using deep neural networks and dynamics learning,” J. Comput. Phys., vol. 395, pp. 105–124, Oct. 2019. doi: 10.1016/j.jcp.2019.05.041
    [105]
    S. E. Otto and C. W. Rowley, “Linearly recurrent autoencoder networks for learning dynamics,” SIAM J. Appl. Dyn. Syst., vol. 18, no. 1, pp. 558–593, Jan. 2019. doi: 10.1137/18M1177846
    [106]
    N. Takeishi, Y. Kawahara, and T. Yairi, “Learning Koopman invariant subspaces for dynamic mode decomposition,” in Proc. 31st Int. Conf. Neural Information Processing Systems, Long Beach, USA, 2017, pp. 1130–1140.
    [107]
    C. Q. Casas, R. Arcucci, and Y. K. Guo, “Urban air pollution forecasts generated from latent space representations,” in Proc. Int. Conf. Learning Representations, 2020, pp. 1–6.
    [108]
    T. R. F. Phillips, C. E. Heaney, P. N. Smith, and C. C. Pain, “An autoencoder-based reduced-order model for eigenvalue problems with application to neutron diffusion,” Int. J. Numer. Methods Eng., vol. 122, no. 15, pp. 3780–3811, Aug. 2021. doi: 10.1002/nme.6681
    [109]
    D. P. Kingma and M. Welling, “Auto-encoding variational bayes,” in Proc. 2nd Int. Conf. Learning Representations, Banff, Canada, 2014.
    [110]
    A. van den Oord, O. Vinyals, and K. Kavukcuoglu, “Neural discrete representation learning,” in Proc. 31st Int. Conf. Neural Information Processing Systems, Long Beach, USA, 2017, pp. Pages 6309–6318.
    [111]
    A. Polyak and L. Wolf, “Attention-based wavenet autoencoder for universal voice conversion,” in Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing, Brighton, UK, 2019, pp. 6800–6804.
    [112]
    H. Song, C. Sun, X. X. Wu, M. Chen, and Y. D. Jia, “Learning normal patterns via adversarial attention-based autoencoder for abnormal event detection in videos,” IEEE Trans. Multimedia, vol. 22, no. 8, pp. 2138–2148, Aug. 2020. doi: 10.1109/TMM.2019.2950530
    [113]
    A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” in Proc. 31st Int. Conf. Neural Information Processing Systems, Long Beach, USA, 2017, pp. Pages 6000–6010.
    [114]
    A. Sanchez-Gonzalez, J. Godwin, T. Pfaff, R. Ying, J. Leskovec, and P. W. Battaglia, “Learning to simulate complex physics with graph networks,” in Proc. 37th Int. Conf. Machine Learning, 2020, pp. 8459–8468.
    [115]
    K. M. He, X. L. Chen, S. N. Xie, Y. H. Li, P. Dollár, and R. Girshick, “Masked autoencoders are scalable vision learners,” in Proc. IEEE/CVF Conf. Computer Vision and Pattern Recognition, New Orleans, USA, 2022, pp. 15979–15988.
    [116]
    D. Xiao, P. Yang, F. Fang, J. Xiang, C. C. Pain, and I. M. Navon, “Non-intrusive reduced order modelling of fluid-structure interactions,” Comput. Methods Appl. Mech. Eng., vol. 303, pp. 35–54, May 2016. doi: 10.1016/j.cma.2015.12.029
    [117]
    C. Quilodrán-Casas, R. Arcucci, C. Pain, and Y. K. Guo, “Adversarially trained LSTMs on reduced order models of urban air pollution simulations,” arXiv preprint arXiv: 2101.01568, 2021.
    [118]
    S. B. Reddy, A. R. Magee, R. K. Jaiman, J. Liu, W. Xu, A. Choudhary, and A. A. Hussain, “Reduced order model for unsteady fluid flows via recurrent neural networks,” in Proc. 38th Int. Conf. Ocean, Offshore Mechanics and Arctic Engineering, Glasgow, UK, 2019, pp. V002T08A007.
    [119]
    S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Comput., vol. 9, no. 8, pp. 1735–1780, Nov. 1997. doi: 10.1162/neco.1997.9.8.1735
    [120]
    T. Nakamura, K. Fukami, K. Hasegawa, Y. Nabae, and K. Fukagata, “Convolutional neural network and long short-term memory based reduced order surrogate for minimal turbulent channel flow,” Phys. Fluids, vol. 33, no. 2, p. 025116, Feb. 2021. doi: 10.1063/5.0039845
    [121]
    A. T. Mohan and D. V. Gaitonde, “A deep learning based approach to reduced order modeling for turbulent flow control using LSTM neural networks,” arXiv preprint arXiv: 1804.09269, 2018.
    [122]
    X. J. Shi, Z. R. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W.-C. Woo, “Convolutional LSTM network: A machine learning approach for precipitation nowcasting,” in Proc. 28th Int. Conf. Neural Information Processing Systems, Montreal, Canada, 2015, pp. 802–810.
    [123]
    K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink, and J. Schmidhuber, “LSTM: A search space odyssey,” IEEE Trans. Neural Netw. Learn. Syst., vol. 28, no. 10, pp. 2222–2232, Oct. 2017. doi: 10.1109/TNNLS.2016.2582924
    [124]
    M. Cheng, F. Fang, C. C. Pain, and I. M. Navon, “Data-driven modelling of nonlinear spatio-temporal fluid flows using a deep convolutional generative adversarial network,” Comput. Methods Appl. Mech. Eng., vol. 365, p. 113000, Jun. 2020. doi: 10.1016/j.cma.2020.113000
    [125]
    J. L. Tong, L. P. Xie, W. K. Yang, and K. J. Zhang, “Probabilistic decomposition transformer for time series forecasting,” arXiv preprint arXiv: 2210.17393, 2022.
    [126]
    E. Kaiser, J. N. Kutz, and S. L. Brunton, “Sparse identification of nonlinear dynamics for model predictive control in the low-data limit,” Proc. Roy. Soc. A: Math. Phys. Eng. Sci., vol. 474, no. 2219, p. 20180335, Nov. 2018.
    [127]
    A. A. Kaptanoglu, B. M. de Silva, U. Fasel, K. Kaheman, A. J. Goldschmidt, J. Callaham, C. B. Delahunt, Z. G. Nicolaou, K. Champion, J.-C. Loiseau, J. N. Kutz, and S. L. Brunton, “PySINDy: A comprehensive Python package for robust sparse system identification,” J. Open Source Softw., vol. 7, no. 69, p. 3994, Jan. 2022. doi: 10.21105/joss.03994
    [128]
    C. Paglia, A. Stiehl, and C. Uhl, “Identification of low-dimensional nonlinear dynamics from high-dimensional simulated and real-world data,” in Proc. 15th APCA Int. Conf. Automatic Control and Soft Computing, Caparica, Portugal, 2022, pp. 205–213.
    [129]
    Y. J. Cai, X. Z. Wang, G. Joos, and I. Kamwa, “An online data-driven method to locate forced oscillation sources from power plants based on sparse identification of nonlinear dynamics (SINDy),” IEEE Trans. Power Syst., 2022. DOI: 10.1109/TPWRS.2022.3189602
    [130]
    K. Champion, B. Lusch, J. N. Kutz, and S. L. Brunton, “Data-driven discovery of coordinates and governing equations,” Proc. Natl. Acad. Sci. USA, vol. 116, no. 45, pp. 22445–22451, Oct. 2019. doi: 10.1073/pnas.1906995116
    [131]
    B. Kim, V. C. Azevedo, N. Thuerey, T. Kim, M. Gross, and B. Solenthaler, “Deep fluids: A generative network for parameterized fluid simulations,” Comput. Graph. Forum, vol. 38, no. 2, pp. 59–70, May 2019. doi: 10.1111/cgf.13619
    [132]
    S. Wiewel, M. Becher, and N. Thuerey, “Latent space physics: Towards learning the temporal evolution of fluid flow,” Comput. Graph. Forum, vol. 38, no. 2, pp. 71–82, May 2019. doi: 10.1111/cgf.13620
    [133]
    N. Thuerey, K. Weißenow, L. Prantl, and X. Y. Hu, “Deep learning methods for Reynolds-averaged Navier-Stokes simulations of airfoil flows,” AIAA J., vol. 58, no. 1, pp. 25–36, Jan. 2020. doi: 10.2514/1.J058291
    [134]
    A. Sanchez-Gonzalez and K. Stachenfeld, “Learning general-purpose CNN-based simulators for astrophysical turbulence,” in Proc. Int. Conf. Learning Representations, 2021.
    [135]
    S. Rasp, M. S. Pritchard, and P. Gentine, “Deep learning to represent subgrid processes in climate models,” Proc. Natl. Acad. Sci. USA, vol. 115, no. 39, pp. 9684–9689, Sept. 2018. doi: 10.1073/pnas.1810286115
    [136]
    T. Bolton and L. Zanna, “Applications of deep learning to ocean data inference and subgrid parameterization,” J. Adv. Model. Earth Syst., vol. 11, no. 1, pp. 376–399, Jan. 2019. doi: 10.1029/2018MS001472
    [137]
    X. W. Jia, J. Willard, A. Karpatne, J. Read, J. Zwart, M. Steinbach, and V. Kumar, “Physics guided RNNs for modeling dynamical systems: A case study in simulating lake temperature profiles,” in Proc. SIAM Int. Conf. Data Mining, Calgary, Canada, 2019, pp. 558–566.
    [138]
    P. A. G. Watson, “Applying machine learning to improve simulations of a chaotic dynamical system using empirical error correction,” J. Adv. Model. Earth Syst., vol. 11, no. 5, pp. 1402–1417, May 2019. doi: 10.1029/2018MS001597
    [139]
    M. Bonavita and P. Laloyaux, “Machine learning for model error inference and correction,” J. Adv. Model. Earth Syst., vol. 12, no. 12, p. e2020MS002232, Dec. 2020.
    [140]
    J. Brajard, A. Carrassi, M. Bocquet, and L. Bertino, “Combining data assimilation and machine learning to infer unresolved scale parametrization,” Philos. Trans. Roy. Soc. A: Math. Phys. Eng. Sci., vol. 379, no. 2194, p. 20200086, Apr. 2021.
    [141]
    D. J. Gagne II, H. M. Christensen, A. C. Subramanian, and A. H. Monahan, “Machine learning for stochastic parameterization: Generative adversarial networks in the Lorenz ’96 model,” J. Adv. Model. Earth Syst., vol. 12, no. 3, p. e2019MS001896, Mar. 2020.
    [142]
    A. Wikner, J. Pathak, B. R. Hunt, I. Szunyogh, M. Girvan, and E. Ott, “Using data assimilation to train a hybrid forecast system that combines machine-learning and knowledge-based components,” Chaos: Interdiscip. J. Nonlinear Sci., vol. 31, no. 5, p. 053114, May 2021. doi: 10.1063/5.0048050
    [143]
    A. Farchi, P. Laloyaux, M. Bonavita, and M. Bocquet, “Using machine learning to correct model error in data assimilation and forecast applications,” Quart. J. Roy. Meteor. Soc., vol. 147, no. 739, pp. 3067–3084, Jul. 2021. doi: 10.1002/qj.4116
    [144]
    A. Farchi, M. Bocquet, P. Laloyaux, M. Bonavita, and Q. Malartic, “A comparison of combined data assimilation and machine learning methods for offline and online model error correction,” J. Comput. Sci., vol. 55, p. 101468, Oct. 2021. doi: 10.1016/j.jocs.2021.101468
    [145]
    S. Barthélémy, J. Brajard, L. Bertino, and F. Counillon, “Super-resolution data assimilation,” Ocean Dyn., vol. 72, no. 8, pp. 661–678, Aug. 2022. doi: 10.1007/s10236-022-01523-x
    [146]
    M. A. Sacco, J. J. Ruiz, M. Pulido, and P. Tandeo, “Evaluation of machine learning techniques for forecast uncertainty quantification,” Quart. J. Roy. Meteor. Soc., vol. 148, no. 749, pp. 3470–3490, Oct. 2022. doi: 10.1002/qj.4362
    [147]
    D. P. Dee, “Bias and data assimilation,” Quart. J. Roy. Meteor. Soc., vol. 131, no. 613, pp. 3323–3343, Oct. 2005. doi: 10.1256/qj.05.137
    [148]
    A. Carrassi and S. Vannitsem, “Treatment of the error due to unresolved scales in sequential data assimilation,” Int. J. Bifurcation Chaos, vol. 21, no. 12, pp. 3619–3626, Dec. 2011. doi: 10.1142/S0218127411030775
    [149]
    M. Bocquet, A. Farchi, and Q. Malartic, “Online learning of both state and dynamics using ensemble Kalman filters,” Found. Data Sci., vol. 3, no. 3, pp. 305–330, Sept. 2021. doi: 10.3934/fods.2020015
    [150]
    Q. Malartic, A. Farchi, and M. Bocquet, “State, global, and local parameter estimation using local ensemble Kalman filters: Applications to online machine learning of chaotic dynamics,” Quart. J. Roy. Meteor. Soc., vol. 148, no. 746, pp. 2167–2193, Jul. 2022. doi: 10.1002/qj.4297
    [151]
    A. Farchi, M. Chrust, M. Bocquet, P. Laloyaux, and M. Bonavita, “Online model error correction with neural networks in the incremental 4D-Var framework,” arXiv preprint arXiv: 2210.13817, 2022.
    [152]
    Y. M. Chen, D. Sanz-Alonso, and R. Willett, “Autodifferentiable ensemble Kalman filters,” SIAM J. Math. Data Sci., vol. 4, no. 2, pp. 801–833, Jun. 2022. doi: 10.1137/21M1434477
    [153]
    J. F. Shogren, Encyclopedia of Energy, Natural Resource, and Environmental Economics. Amsterdam, The Netherlands: Elsevier, 2013.
    [154]
    H. Modares, A. Alfi, and M.-M. Fateh, “Parameter identification of chaotic dynamic systems through an improved particle swarm optimization,” Expert Syst. Appl., vol. 37, no. 5, pp. 3714–3720, May 2010. doi: 10.1016/j.eswa.2009.11.054
    [155]
    C. W. Brown, R. R. Hood, W. Long, J. Jacobs, D. L. Ramers, C. Wazniak, J. D. Wiggert, R. Wood, and J. Xu, “Ecological forecasting in Chesapeake Bay: Using a mechanistic-empirical modeling approach,” J. Mar. Syst., vol. 125, pp. 113–125, Sept. 2013. doi: 10.1016/j.jmarsys.2012.12.007
    [156]
    F. A. Albini, “Estimating wildfire behavior and effects,” Intermountain Forest and Range Experiment Station, Forest Service, US Department of Agriculture, Ogden, USA, Gen. Tech. Rep. INT-30, 1976.
    [157]
    C. Lautenberger, “Wildland fire modeling with an Eulerian level set method and automated calibration,” Fire Saf. J., vol. 62, pp. 289–298, Nov. 2013. doi: 10.1016/j.firesaf.2013.08.014
    [158]
    A. Alessandri, P. Bagnerini, M. Gaggero, and L. Mantelli, “Parameter estimation of fire propagation models using level set methods,” Appl. Math. Modell., vol. 92, pp. 731–747, Apr. 2021. doi: 10.1016/j.apm.2020.11.030
    [159]
    P. J. Smith, S. L. Dance, M. J. Baines, N. K. Nichols, and T. R. Scott, “Variational data assimilation for parameter estimation: Application to a simple morphodynamic model,” Ocean Dyn., vol. 59, no. 5, pp. 697–708, Nov. 2009. doi: 10.1007/s10236-009-0205-6
    [160]
    N. Wanders, M. F. P. Bierkens, S. M. de Jong, A. de Roo, and D. Karssenberg, “The benefits of using remotely sensed soil moisture in parameter identification of large-scale hydrological models,” Water Resour. Res., vol. 50, no. 8, pp. 6874–6891, Aug. 2014. doi: 10.1002/2013WR014639
    [161]
    Y. Ruckstuhl and T. Janjić, “Combined state-parameter estimation with the LETKF for convective-scale weather forecasting,” Mon. Wea. Rev., vol. 148, no. 4, pp. 1607–1628, Apr. 2020. doi: 10.1175/MWR-D-19-0233.1
    [162]
    G. A. Kivman, “Sequential parameter estimation for stochastic systems,” Nonlinear Process. Geophys., vol. 10, no. 3, pp. 253–259, Jun. 2003. doi: 10.5194/npg-10-253-2003
    [163]
    J. D. Annan, J. C. Hargreaves, N. R. Edwards, and R. Marsh, “Parameter estimation in an intermediate complexity Earth system model using an ensemble Kalman filter,” Ocean Modell., vol. 8, no. 1-2, pp. 135–154, Dec. 2005. doi: 10.1016/j.ocemod.2003.12.004
    [164]
    A. A. Emerick and A. C. Reynolds, “History matching time-lapse seismic data using the ensemble Kalman filter with multiple data assimilations,” Comput. Geosci., vol. 16, no. 3, pp. 639–659, Jun. 2012. doi: 10.1007/s10596-012-9275-5
    [165]
    J. J. Ruiz, M. Pulido, and T. Miyoshi, “Estimating model parameters with ensemble-based data assimilation: A review,” J. Meteor. Soc. Japan, vol. 91, no. 2, pp. 79–99, Apr. 2013. doi: 10.2151/jmsj.2013-201
    [166]
    D. J. Posselt and C. H. Bishop, “Nonlinear parameter estimation: Comparison of an ensemble Kalman smoother with a Markov chain Monte Carlo algorithm,” Mon. Wea. Rev., vol. 140, no. 6, pp. 1957–1974, Jun. 2012. doi: 10.1175/MWR-D-11-00242.1
    [167]
    D. J. Posselt, D. Hodyss, and C. H. Bishop, “Errors in ensemble Kalman smoother estimates of cloud microphysical parameters,” Mon. Wea. Rev., vol. 142, no. 4, pp. 1631–1654, Apr. 2014. doi: 10.1175/MWR-D-13-00290.1
    [168]
    D. Hodyss, “Ensemble state estimation for nonlinear systems using polynomial expansions in the innovation,” Mon. Wea. Rev., vol. 139, no. 11, pp. 3571–3588, Nov. 2011. doi: 10.1175/2011MWR3558.1
    [169]
    Y. M. Ruckstuhl and T. Janjić, “Parameter and state estimation with ensemble Kalman filter based algorithms for convective-scale applications,” Quart. J. Roy. Meteor. Soc., vol. 144, no. 712, pp. 826–841, Apr. 2018. doi: 10.1002/qj.3257
    [170]
    M. C. Rochoux, S. Ricci, D. Lucor, B. Cuenot, and A. Trouvé, “Towards predictive data-driven simulations of wildfire spread-part I: Reduced-cost ensemble Kalman filter based on a polynomial chaos surrogate model for parameter estimation,” Nat. Hazards Earth Syst. Sci., vol. 14, no. 11, pp. 2951–2973, Nov. 2014. doi: 10.5194/nhess-14-2951-2014
    [171]
    P. Nadler, R. Arcucci, and Y. K. Guo, “A neural SIR model for global forecasting,” in Proc. Machine Learning for Health, 2020, pp. 254–266.
    [172]
    P. Nadler, R. Arcucci, and Y.-K. Guo, “Data assimilation for parameter estimation in economic modelling,” in Proc. 15th Int. Conf. Signal-Image Technology & Internet-Based Systems, Sorrento, Italy, 2019, pp. 649–656.
    [173]
    X. Li, C. Xiao, A. Cheng, and H. Lin, “Joint estimation of parameter and state with hybrid data assimilation and machine learning,” Preprint, 2022.
    [174]
    L. A. Ferrat, M. Goodfellow, and J. R. Terry, “Classifying dynamic transitions in high dimensional neural mass models: A random forest approach,” PLoS Comput. Biol., vol. 14, no. 3, p. e1006009, Mar. 2018. doi: 10.1371/journal.pcbi.1006009
    [175]
    M. Frangos, Y. Marzouk, K. Willcox, and B. van Bloemen Waanders, “Surrogate and reduced-order modeling: A comparison of approaches for large-scale statistical inverse problems,” in Large-Scale Inverse Problems and Quantification of Uncertainty, L. Biegler, G. Biros, O. Ghattas, M. Heinkenschloss, D. Keyes, B. Mallick, Y. Marzouk, L. Tenorio, B. van Bloemen Waanders, and K. Willcox, Eds. Hoboken, USA: John Wiley & Sons, Ltd., 2011, pp. 123–149.
    [176]
    L. Cai, L. Ren, Y. H. Wang, W. X. Xie, G. Y. Zhu, and H. Gao, “Surrogate models based on machine learning methods for parameter estimation of left ventricular myocardium,” Roy. Soc. Open Sci., vol. 8, no. 1, p. 201121, Jan. 2021. doi: 10.1098/rsos.201121
    [177]
    P. Tandeo, P. Ailliot, M. Bocquet, A. Carrassi, T. Miyoshi, M. Pulido, and Y. C. Zhen, “A review of innovation-based methods to jointly estimate model and observation error covariance matrices in ensemble data assimilation,” Mon. Wea. Rev., vol. 148, no. 10, pp. 3973–3994, Oct. 2020. doi: 10.1175/MWR-D-19-0240.1
    [178]
    G. Desroziers, L. Berre, B. Chapnik, and P. Poli, “Diagnosis of observation, background and analysis-error statistics in observation space,” Quart. J. Roy. Meteor. Soc., vol. 131, no. 613, pp. 3385–3396, Oct. 2005. doi: 10.1256/qj.05.108
    [179]
    T. Berry and T. Sauer, “Adaptive ensemble Kalman filtering of non-linear systems,” Tellus A: Dyn. Meteorol. Oceanogr., vol. 65, no. 1, p. 20331, Dec. 2013. doi: 10.3402/tellusa.v65i0.20331
    [180]
    R. Mehra, “On the identification of variances and adaptive Kalman filtering,” IEEE Trans. Automat. Control, vol. 15, no. 2, pp. 175–184, Apr. 1970. doi: 10.1109/TAC.1970.1099422
    [181]
    D. P. Dee, “On-line estimation of error covariance parameters for atmospheric data assimilation,” Mon. Wea. Rev., vol. 123, no. 4, pp. 1128–1145, Apr. 1995. doi: 10.1175/1520-0493(1995)123<1128:OLEOEC>2.0.CO;2
    [182]
    J. R. Stroud, M. Katzfuss, and C. K. Wikle, “A Bayesian adaptive ensemble Kalman filter for sequential state and parameter estimation,” Mon. Wea. Rev., vol. 146, no. 1, pp. 373–386, Jan. 2018. doi: 10.1175/MWR-D-16-0427.1
    [183]
    R. H. Shumway and D. S. Stoffer, “An approach to time series smoothing and forecasting using the EM algorithm,” J. Time Ser. Anal., vol. 3, no. 4, pp. 253–264, Jul. 1982. doi: 10.1111/j.1467-9892.1982.tb00349.x
    [184]
    P. Tandeo, M. Pulido, and F. Lott, “Offline parameter estimation using EnKF and maximum likelihood error covariance estimates: Application to a subgrid-scale orography parametrization,” Quart. J. Roy. Meteor. Soc., vol. 141, no. 687, pp. 383–395, Jan. 2015. doi: 10.1002/qj.2357
    [185]
    M. Pulido, G. Scheffler, J. J. Ruiz, M. M. Lucini, and P. Tandeo, “Estimation of the functional form of subgrid-scale parametrizations using ensemble-based data assimilation: A simple model experiment,” Quart. J. Roy. Meteor. Soc., vol. 142, no. 701, pp. 2974–2984, Oct. 2016. doi: 10.1002/qj.2879
    [186]
    D. Dreano, P. Tandeo, M. Pulido, B. Ait-El-Fquih, T. Chonavel, and I. Hoteit, “Estimating model-error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximization algorithm,” Quart. J. Roy. Meteor. Soc., vol. 143, no. 705, pp. 1877–1885, Apr. 2017. doi: 10.1002/qj.3048
    [187]
    M. Pulido, P. Tandeo, M. Bocquet, A. Carrassi, and M. Lucini, “Stochastic parameterization identification using ensemble Kalman filtering combined with maximum likelihood methods,” Tellus A: Dyn. Meteorol. Oceanogr., vol. 70, no. 1, p. 1442099, Jan. 2018. doi: 10.1080/16000870.2018.1442099
    [188]
    T. J. Cocucci, M. Pulido, M. Lucini, and P. Tandeo, “Model error covariance estimation in particle and ensemble Kalman filters using an online expectation-maximization algorithm,” Quart. J. Roy. Meteor. Soc., vol. 147, no. 734, pp. 526–543, Jan. 2021. doi: 10.1002/qj.3931
    [189]
    T. T. T. Chau, P. Ailliot, V. Monbet, and P. Tandeo, “Comparison of simulation-based algorithms for parameter estimation and state reconstruction in nonlinear state-space models,” Discrete Contin. Dyn. Syst.: Ser. S, vol. 16, no. 2, pp. 240–264, Feb. 2023. doi: 10.3934/dcdss.2022054
    [190]
    G. Desroziers and S. Ivanov, “Diagnosis and adaptive tuning of observation-error parameters in a variational assimilation,” Quart. J. Roy. Meteor. Soc., vol. 127, no. 574, pp. 1433–1452, Apr. 2001. doi: 10.1002/qj.49712757417
    [191]
    R. Ménard, “Error covariance estimation methods based on analysis residuals: Theoretical foundation and convergence properties derived from simplified observation networks,” Quart. J. Roy. Meteor. Soc., vol. 142, no. 694, pp. 257–273, Jan. 2016. doi: 10.1002/qj.2650
    [192]
    W. Vega-Brown, A. Bachrach, A. Bry, J. Kelly, and N. Roy, “CELLO: A fast algorithm for covariance estimation,” in Proc. IEEE Int. Conf. Robotics and Automation, Karlsruhe, Germany, 2013, pp. 3160–3167.
    [193]
    Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-Net: Learning dense volumetric segmentation from sparse annotation,” in Proc. 19th Int. Conf. Medical Image Computing and Computer-Assisted Intervention, Athens, Greece, 2016, pp. 424–432.
    [194]
    Z. C. Long, Y. P. Lu, and B. Dong, “PDE-Net 2.0: Learning PDEs from data with a numeric-symbolic hybrid deep network,” J. Comput. Phys., vol. 399, p. 108925, Dec. 2019. doi: 10.1016/j.jcp.2019.108925
    [195]
    M. Andrychowicz, M. Denil, S. G. Colmenarejo, M. W. Hoffman, D. Pfau, T. Schaul, B. Shillingford, and N. de Freitas, “Learning to learn by gradient descent by gradient descent,” in Proc. 30th Int. Conf. Neural Information Processing Systems, Barcelona, Spain, 2016, pp. 3988–3996.
    [196]
    M. Raissi, P. Perdikaris, and G. E. Karniadakis, “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations,” J. Comput. Phys., vol. 378, pp. 686–707, Feb. 2019. doi: 10.1016/j.jcp.2018.10.045
    [197]
    A. Barth, A. Alvera-Azcárate, M. Licer, and J.-M. Beckers, “DINCAE 1.0: A convolutional neural network with error estimates to reconstruct sea surface temperature satellite observations,” Geosci. Model Dev., vol. 13, no. 3, pp. 1609–1622, Mar. 2020. doi: 10.5194/gmd-13-1609-2020
    [198]
    G. E. Manucharyan, L. Siegelman, and P. Klein, “A deep learning approach to spatiotemporal sea surface height interpolation and estimation of deep currents in geostrophic ocean turbulence,” J. Adv. Model. Earth Syst., vol. 13, no. 1, p. e2019MS001965, Jan. 2021.
    [199]
    P. Boudier, A. Fillion, S. Gratton, and S. Gürol, “DAN-An optimal data assimilation framework based on machine learning recurrent networks,” arXiv preprint arXiv: 2010.09694, 2020.
    [200]
    R. T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. Duvenaud, “Neural ordinary differential equations,” in Proc. 32nd Int. Conf. Neural Information Processing Systems, Montreal, Canada, 2018, pp. 6572–6583.
    [201]
    S. Ouala, R. Fablet, C. Herzet, B. Chapron, A. Pascual, F. Collard, and L. Gaultier, “Neural network based Kalman filters for the spatio-temporal interpolation of satellite-derived sea surface temperature,” Remote Sens., vol. 10, no. 12, p. 1864, Nov. 2018. doi: 10.3390/rs10121864
    [202]
    T. Hospedales, A. Antoniou, P. Micaelli, and A. Storkey, “Meta-learning in neural networks: A survey,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 9, pp. 5149–5169, Sept. 2022.
    [203]
    R. Fablet, B. Chapron, L. Drumetz, E. Mémin, O. Pannekoucke, and F. Rousseau, “Learning variational data assimilation models and solvers,” J. Adv. Model. Earth Syst., vol. 13, no. 10, p. e2021MS002572, Oct. 2021.
    [204]
    R. S. Liu, L. Ma, X. M. Yuan, S. Z. Zeng, and J. Zhang, “Bilevel integrative optimization for ill-posed inverse problems,” arXiv preprint arXiv: 1907.03083, 2019.
    [205]
    N. Cressie and C. K. Wikle, Statistics for Spatio-Temporal Data. Hoboken, USA: John Wiley & Sons, Inc., 2011.
    [206]
    R. Fablet, Q. Febvre, and B. Chapron, “Multimodal 4DVarNets for the reconstruction of sea surface dynamics from SST-SSH synergies,” arXiv preprint arXiv: 2207.01372, 2022.
    [207]
    S. Ouala, D. Nguyen, L. Drumetz, B. Chapron, A. Pascual, F. Collard, L. Gaultier, and R. Fablet, “Learning latent dynamics for partially observed chaotic systems,” Chaos: Interdiscip. J. Nonlinear Sci., vol. 30, no. 10, p. 103121, Oct. 2020. doi: 10.1063/5.0019309
    [208]
    N. Lafon, R. Fablet, and P. Naveau, “Uncertainty quantification when learning dynamical models and solvers with variational methods,” 2023.
    [209]
    T. Eltoft, T. Kim, and T.-W. Lee, “On the multivariate Laplace distribution,” IEEE Signal Process. Lett., vol. 13, no. 5, pp. 300–303, May 2006. doi: 10.1109/LSP.2006.870353
    [210]
    J. Y. Yao, W. W. Pan, S. Ghosh, and F. Doshi-Velez, “Quality of uncertainty quantification for Bayesian neural network inference,” arXiv preprint arXiv: 1906.09686, 2019.
    [211]
    Y. L. Huang, Y. G. Zhang, and J. A. Chambers, “A novel Kullback-Leibler divergence minimization-based adaptive Student’s t-filter,” IEEE Trans. Signal Process., vol. 67, no. 20, pp. 5417–5432, Oct. 2019. doi: 10.1109/TSP.2019.2939079
    [212]
    R. M. Neal, “MCMC using Hamiltonian dynamics,” in Handbook of Markov Chain Monte Carlo, S. Brooks, A. Gelman, G. Jones, and X.-L. Meng, Eds. New York, USA: Chapman and Hall/CRC, 2011.
    [213]
    A. D. Cobb and B. Jalaian, “Scaling Hamiltonian Monte Carlo inference for Bayesian neural networks with symmetric splitting,” in Proc. 37th Conf. Uncertainty in Artificial Intelligence, 2021, pp. 675–685.
    [214]
    M. Welling and Y. W. Teh, “Bayesian learning via stochastic gradient Langevin dynamics,” in Proc. 28th Int. Conf. Machine Learning, Bellevue, USA, 2011, pp. 681–688.
    [215]
    C. Nemeth and P. Fearnhead, “Stochastic gradient Markov chain Monte Carlo,” J. Am. Stat. Assoc., vol. 116, no. 533, pp. 433–450, Jan. 2021. doi: 10.1080/01621459.2020.1847120
    [216]
    B. Staber and S. Da Veiga, “Benchmarking Bayesian neural networks and evaluation metrics for regression tasks,” arXiv preprint arXiv: 2206.06779, 2022.
    [217]
    B. Lakshminarayanan, A. Pritzel, and C. Blundell, “Simple and scalable predictive uncertainty estimation using deep ensembles,” in Proc. 31st Int. Conf. Neural Information Processing Systems, Long Beach, USA, 2017, pp. 6405–6416.
    [218]
    W. J. Maddox, T. Garipov, P. Izmailov, D. Vetrov, and A. G. Wilson, “A simple baseline for Bayesian uncertainty in deep learning,” in Proc. 33rd Int. Conf. Neural Information Processing Systems, Vancouver, Canada, 2019, p. 1179.
    [219]
    D. Seuß, “Bridging the gap between explainable AI and uncertainty quantification to enhance trustability,” arXiv preprint arXiv: 2105.11828, 2021.
    [220]
    G. Agarwal, L. Hay, I. Iashvili, B. Mannix, C. McLean, M. Morris, S. Rappoccio, and U. Schubert, “Explainable AI for ML jet taggers using expert variables and layerwise relevance propagation,” J. High Energy Phys., vol. 2021, no. 5, p. 208, May 2021. doi: 10.1007/JHEP05(2021)208
    [221]
    X. G. Zhang, F. T. S. Chan, and S. Mahadevan, “Explainable machine learning in image classification models: An uncertainty quantification perspective,” Knowl.-Based Syst., vol. 243, p. 108418, May 2022. doi: 10.1016/j.knosys.2022.108418
    [222]
    K. Sokol and P. A. Flach, “Counterfactual explanations of machine learning predictions: Opportunities and challenges for AI safety,” in Proc. 33rd AAAI Conf. Artificial Intelligence, Honolulu, USA, 2019.
    [223]
    R. Arcucci, L. Mottet, C. Pain, and Y.-K. Guo, “Optimal reduced space for variational data assimilation,” J. Comput. Phys., vol. 379, pp. 51–69, Feb. 2019. doi: 10.1016/j.jcp.2018.10.042
    [224]
    P. Binev, A. Cohen, W. Dahmen, R. DeVore, G. Petrova, and P. Wojtaszczyk, “Data assimilation in reduced modeling,” SIAM/ASA J. Uncertainty Quantif., vol. 5, no. 1, pp. 1–29, Jan. 2017. doi: 10.1137/15M1025384
    [225]
    S. B. Cheng, D. Lucor, and J.-P. Argaud, “Observation data compression for variational assimilation of dynamical systems,” J. Comput. Sci., vol. 53, p. 101405, Jul. 2021. doi: 10.1016/j.jocs.2021.101405
    [226]
    D. Xiao, J. Du, F. Fang, C. C. Pain, and J. Li, “Parameterised non-intrusive reduced order methods for ensemble Kalman filter data assimilation,” Comput. Fluids, vol. 177, pp. 69–77, Nov. 2018. doi: 10.1016/j.compfluid.2018.10.006
    [227]
    C. Q. Casas, R. Arcucci, P. Wu, C. Pain, and Y.-K. Guo, “A reduced order deep data assimilation model,” Phys. D: Nonlinear Phenom., vol. 412, p. 132615, Nov. 2020. doi: 10.1016/j.physd.2020.132615
    [228]
    D. Xiao, F. Fang, C. E. Heaney, I. M. Navon, and C. C. Pain, “A domain decomposition method for the non-intrusive reduced order modelling of fluid flow,” Comput. Methods Appl. Mech. Eng., vol. 354, pp. 307–330, Sept. 2019. doi: 10.1016/j.cma.2019.05.039
    [229]
    S. B. Cheng, J.-P. Argaud, B. Iooss, A. Ponçot, and D. Lucor, “A graph clustering approach to localization for adaptive covariance tuning in data assimilation based on state-observation mapping,” Math. Geosci., vol. 53, no. 8, pp. 1751–1780, Nov. 2021. doi: 10.1007/s11004-021-09951-z
    [230]
    X. Luo, Z. G. Liu, M. S. Shang, J. G. Lou, and M. C. Zhou, “Highly-accurate community detection via pointwise mutual information-incorporated symmetric non-negative matrix factorization,” IEEE Trans. Netw. Sci. Eng., vol. 8, no. 1, pp. 463–476, Jan.–Mar. 2021. doi: 10.1109/TNSE.2020.3040407
    [231]
    P. R. Vlachas, W. Byeon, Z. Y. Wan, T. P. Sapsis, and P. Koumoutsakos, “Data-driven forecasting of high-dimensional chaotic systems with long short-term memory networks,” Proc. Roy. Soc. A: Math. Phys. Eng. Sci., vol. 474, no. 2213, p. 20170844, May 2018.
    [232]
    C. Liu, R. Fu, D. Xiao, R. Stefanescu, P. Sharma, C. Zhu, S. Sun, and C. Wang, “EnKF data-driven reduced order assimilation system,” Eng. Anal. Bound. Elem., vol. 139, pp. 46–55, Jun. 2022. doi: 10.1016/j.enganabound.2022.02.016
    [233]
    M. Peyron, A. Fillion, S. Gürol, V. Marchais, S. Gratton, P. Boudier, and G. Goret, “Latent space data assimilation by using deep learning,” Quart. J. Roy. Meteor. Soc., vol. 147, no. 740, pp. 3759–3777, Oct. 2021. doi: 10.1002/qj.4153
    [234]
    R. Maulik, V. Rao, J. L. Wang, G. Mengaldo, E. Constantinescu, B. Lusch, P. Balaprakash, I. Foster, and R. Kotamarthi, “Efficient high-dimensional variational data assimilation with machine-learned reduced-order models,” Geosci. Model Dev., vol. 15, no. 8, pp. 3433–3445, May 2022. doi: 10.5194/gmd-15-3433-2022
    [235]
    J. Mack, R. Arcucci, M. Molina-Solana, and Y.-K. Guo, “Attention-based convolutional autoencoders for 3D-variational data assimilation,” Comput. Methods Appl. Mech. Eng., vol. 372, p. 113291, Dec. 2020. doi: 10.1016/j.cma.2020.113291
    [236]
    S. M. Razak, A. Jahandideh, U. Djuraev, and B. Jafarpour, “Deep learning for latent space data assimilation in subsurface flow systems,” SPE J., vol. 27, no. 5, pp. 2820–2840, Oct. 2022. doi: 10.2118/203997-PA
    [237]
    Y. Y. Wang, X. M. Shi, L. L. Lei, and J. C.-H. Fung, “Deep learning augmented data assimilation: Reconstructing missing information with convolutional autoencoders,” Mon. Wea. Rev., vol. 150, no. 8, pp. 1977–1991, Aug. 2022. doi: 10.1175/MWR-D-21-0288.1
    [238]
    A. Storto, G. De Magistris, S. Falchetti, and P. Oddo, “A neural network-based observation operator for coupled ocean-acoustic variational data assimilation,” Mon. Wea. Rev., vol. 149, no. 6, pp. 1967–1985, Jun. 2021.
    [239]
    L. Espeholt, S. Agrawal, C. Sønderby, M. Kumar, J. Heek, C. Bromberg, C. Gazen, R. Carver, M. Andrychowicz, J. Hickey, A. Bell, and N. Kalchbrenner, “Deep learning for twelve hour precipitation forecasts,” Nat. Commun., vol. 13, no. 1, p. 5145, Sept. 2022. doi: 10.1038/s41467-022-32483-x
    [240]
    Y.-G. Ham, J.-H. Kim, and J.-J. Luo, “Deep learning for multi-year ENSO forecasts,” Nature, vol. 573, no. 7775, pp. 568–572, Sept. 2019. doi: 10.1038/s41586-019-1559-7
    [241]
    J. Paduart, L. Lauwers, J. Swevers, K. Smolders, J. Schoukens, and R. Pintelon, “Identification of nonlinear systems using polynomial nonlinear state space models,” Automatica, vol. 46, no. 4, pp. 647–656, Apr. 2010. doi: 10.1016/j.automatica.2010.01.001
    [242]
    S. H. Rudy, S. L. Brunton, J. L. Proctor, and J. N. Kutz, “Data-driven discovery of partial differential equations,” Sci. Adv., vol. 3, no. 4, p. e1602614, Apr. 2017. doi: 10.1126/sciadv.1602614
    [243]
    J.-C. Loiseau, B. R. Noack, and S. L. Brunton, “Sparse reduced-order modelling: Sensor-based dynamics to full-state estimation,” J. Fluid Mech., vol. 844, pp. 459–490, Jun. 2018. doi: 10.1017/jfm.2018.147
    [244]
    Y. F. Guan, S. L. Brunton, and I. Novosselov, “Sparse nonlinear models of chaotic electroconvection,” Roy. Soc. Open Sci., vol. 8, no. 8, p. 202367, Aug. 2021. doi: 10.1098/rsos.202367
    [245]
    A. A. Kaptanoglu, K. D. Morgan, C. J. Hansen, and S. L. Brunton, “Physics-constrained, low-dimensional models for magnetohydrodynamics: First-principles and data-driven approaches,” Phys. Rev. E, vol. 104, no. 1, p. 015206, Jul. 2021. doi: 10.1103/PhysRevE.104.015206
    [246]
    E. N. Lorenz, “Predictability: A problem partly solved,” in Proc. Seminar on Predictability, Reading, UK, 1995.
    [247]
    W. Maass, T. Natschläger, and H. Markram, “Real-time computing without stable states: A new framework for neural computation based on perturbations,” Neural Comput., vol. 14, no. 11, pp. 2531–2560, Nov. 2002. doi: 10.1162/089976602760407955
    [248]
    H. Jaeger and H. Haas, “Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication,” Science, vol. 304, no. 5667, pp. 78–80, Apr. 2004. doi: 10.1126/science.1091277
    [249]
    J. Pathak, Z. X. Lu, B. R. Hunt, M. Girvan, and E. Ott, “Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data,” Chaos: Interdiscip. J. Nonlinear Sci., vol. 27, no. 12, p. 121102, Dec. 2017. doi: 10.1063/1.5010300
    [250]
    J. Pathak, B. Hunt, M. Girvan, Z. X. Lu, and E. Ott, “Model-free prediction of large spatiotemporally chaotic systems from data: A reservoir computing approach,” Phys. Rev. Lett., vol. 120, no. 2, p. 024102, Jan. 2018. doi: 10.1103/PhysRevLett.120.024102
    [251]
    R. Fablet, S. Ouala, and C. Herzet, “Bilinear residual neural network for the identification and forecasting of dynamical systems,” arXiv preprint arXiv: 1712.07003, 2017.
    [252]
    R. T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. Duvenaud, “Neural ordinary differential equations,” in Proc. 32nd Int. Conf. Neural Information Processing Systems, Montreal, Canada, 2018, pp. 6572–6583.
    [253]
    S. Ouala, L. Debreu, A. Pascual, B. Chapron, F. Collard, L. Gaultier, and R. Fablet, “Learning Runge-Kutta integration schemes for ODE simulation and identification,” arXiv preprint arXiv: 2105.04999, 2021.
    [254]
    C. Herrera, F. Krach, and J. Teichmann, “Neural jump ordinary differential equations: Consistent continuous-time prediction and filtering,” in Proc. 9th Int. Conf. Learning Representations, Austria, 2021.
    [255]
    R. Wang, R. Walters, and R. Yu, “Incorporating symmetry into deep dynamics models for improved generalization,” in Proc. 9th Int. Conf. Learning Representations, Austria, 2021.
    [256]
    H. Ma, X. Y. Hu, Y. X. Zhang, N. Thuerey, and O. J. Haidn, “A combined data-driven and physics-driven method for steady heat conduction prediction using deep convolutional neural networks,” arXiv preprint arXiv: 2005.08119, 2020.
    [257]
    R. Wang, K. Kashinath, M. Mustafa, A. Albert, and R. Yu, “Towards physics-informed deep learning for turbulent flow prediction,” in Proc. 26th ACM SIGKDD Int. Conf. Knowledge Discovery & Data Mining, USA, 2020, pp. 1457–1466.
    [258]
    X. W. Jin, S. Z. Cai, H. Li, and G. E. Karniadakis, “NSFnets (Navier-Stokes flow nets): Physics-informed neural networks for the incompressible Navier-Stokes equations,” J. Comput. Phys., vol. 426, p. 109951, Feb. 2021. doi: 10.1016/j.jcp.2020.109951
    [259]
    S. Z. Cai, Z. C. Wang, S. F. Wang, P. Perdikaris, and G. E. Karniadakis, “Physics-informed neural networks for heat transfer problems,” J. Heat Transfer, vol. 143, no. 6, p. 060801, Jun. 2021. doi: 10.1115/1.4050542
    [260]
    K. Kashinath, M. Mustafa, A. Albert, J.-L. Wu, C. Jiang, S. Esmaeilzadeh, K. Azizzadenesheli, R. Wang, A. Chattopadhyay, A. Singh, A. Manepalli, D. Chirila, R. Yu, R. Walters, B. White, H. Xiao, H. A. Tchelepi, P. Marcus, A. Anandkumar, P. Hassanzadeh, and N. Prabhat, “Physics-informed machine learning: Case studies for weather and climate modelling,” Philos. Trans. Roy. Soc. A: Math. Phys. Eng. Sci., vol. 379, no. 2194, p. 20200093, Apr. 2021.
    [261]
    M. Mahmoudabadbozchelou, M. Caggioni, S. Shahsavari, W. H. Hartt, G. Em Karniadakis, and S. Jamali, “Data-driven physics-informed constitutive metamodeling of complex fluids: A multifidelity neural network (MFNN) framework,” J. Rheol., vol. 65, no. 2, pp. 179–198, Feb. 2021. doi: 10.1122/8.0000138
    [262]
    E. Kharazmi, Z. Q. Zhang, and G. E. M. Karniadakis, “hp-VPINNs: Variational physics-informed neural networks with domain decomposition,” Comput. Methods Appl. Mech. Eng., vol. 374, p. 113547, Feb. 2021. doi: 10.1016/j.cma.2020.113547
    [263]
    D. Lucor, A. Agrawal, and A. Sergent, “Simple computational strategies for more effective physics-informed neural networks modeling of turbulent natural convection,” J. Comput. Phys., vol. 456, p. 111022, May 2022. doi: 10.1016/j.jcp.2022.111022
    [264]
    R. Lguensat, P. Tandeo, P. Ailliot, M. Pulido, and R. Fablet, “The analog data assimilation,” Mon. Wea. Rev., vol. 145, no. 10, pp. 4093–4107, Oct. 2017. doi: 10.1175/MWR-D-16-0441.1
    [265]
    Y. C. Zhen, P. Tandeo, S. Leroux, S. Metref, T. Penduff, and J. Le Sommer, “An adaptive optimal interpolation based on analog forecasting: Application to SSH in the Gulf of Mexico,” J. Atmos. Oceanic Technol., vol. 37, no. 9, pp. 1697–1711, Sept. 2020. doi: 10.1175/JTECH-D-20-0001.1
    [266]
    F. Takens, “Detecting strange attractors in turbulence,” in Dynamical Systems and Turbulence, Warwick 1980, D. Rand and L.-S. Young, Eds. Berlin, Heidelberg, Germany: Springer, 1981, pp. 366–381.
    [267]
    M. Sangiorgio, F. Dercole, and G. Guariso, “Forecasting of noisy chaotic systems with deep neural networks,” Chaos, Solitons & Fractals, vol. 153, p. 111570, Dec. 2021. doi: 10.1016/j.chaos.2021.111570
    [268]
    S. Rasp, P. D. Dueben, S. Scher, J. A. Weyn, S. Mouatadid, and N. Thuerey, “WeatherBench: A benchmark data set for data-driven weather forecasting,” J. Adv. Model. Earth Syst., vol. 12, no. 11, p. e2020MS002203, Nov. 2020.
    [269]
    T. Kurth, S. Subramanian, P. Harrington, J. Pathak, M. Mardani, D. Hall, A. Miele, K. Kashinath, and A. Anandkumar, “FourCastNet: Accelerating global high-resolution weather forecasting using adaptive Fourier neural operators,” arXiv preprint arXiv: 2208.05419, 2022.
    [270]
    A. McGovern, R. Lagerquist, D. J. Gagne II, G. E. Jergensen, K. L. Elmore, C. R. Homeyer, and T. Smith, “Making the black box more transparent: Understanding the physical implications of machine learning,” Bull. Am. Meteor. Soc., vol. 100, no. 11, pp. 2175–2199, Nov. 2019. doi: 10.1175/BAMS-D-18-0195.1
    [271]
    B. A. Toms, E. A. Barnes, and I. Ebert-Uphoff, “Physically interpretable neural networks for the geosciences: Applications to earth system variability,” J. Adv. Model. Earth Syst., vol. 12, no. 9, p. e2019MS002002, 2020.
    [272]
    C. Irrgang, N. Boers, M. Sonnewald, E. A. Barnes, C. Kadow, J. Staneva, and J. Saynisch-Wagner, “Towards neural Earth system modelling by integrating artificial intelligence in Earth system science,” Nat. Mach. Intell., vol. 3, no. 8, pp. 667–674, Aug. 2021. doi: 10.1038/s42256-021-00374-3
    [273]
    I. Ayed, E. de Bézenac, A. Pajot, J. Brajard, and P. Gallinari, “Learning dynamical systems from partial observations,” arXiv preprint arXiv: 1902.11136, 2019.
    [274]
    B. Boots and G. Gordon, “An online spectral learning algorithm for partially observable nonlinear dynamical systems,” in Proc. 25th AAAI Conf. Artificial Intelligence, San Francisco, USA, 2011, pp. 293–300.
    [275]
    M. E. Levine and A. M. Stuart, “A framework for machine learning of model error in dynamical systems,” Commun. Am. Math. Soc., vol. 2, no. 7, pp. 283–344, Oct. 2022. doi: 10.1090/cams/10
    [276]
    H. D. I. Abarbanel, “Modeling chaos,” in Analysis of Observed Chaotic Data, H. D. I. Abarbanel, Ed. New York, USA: Springer, 1996, pp. 95–114.
    [277]
    J. Frank, S. Mannor, and D. Precup, “Activity and gait recognition with time-delay embeddings,” in Proc. 24th AAAI Conf. Artificial Intelligence, Atlanta, USA, 2010, pp. 1581–1586.
    [278]
    A. Kazem, E. Sharifi, F. K. Hussain, M. Saberi, and O. K. Hussain, “Support vector regression with chaos-based firefly algorithm for stock market price forecasting,” Appl. Soft Comput., vol. 13, no. 2, pp. 947–958, Feb. 2013. doi: 10.1016/j.asoc.2012.09.024
    [279]
    T. Berry and J. Harlim, “Forecasting turbulent modes with nonparametric diffusion models: Learning from noisy data,” Phys. D: Nonlinear Phenom., vol. 320, pp. 57–76, Apr. 2016. doi: 10.1016/j.physd.2016.01.012
    [280]
    D. J. Gauthier, E. Bollt, A. Griffith, and W. A. S. Barbosa, “Next generation reservoir computing,” Nat. Commun., vol. 12, no. 1, p. 5564, Sept. 2021. doi: 10.1038/s41467-021-25801-2
    [281]
    K. M. He, X. Y. Zhang, S. Q. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Las Vegas, USA, 2016, pp. 770–778.
    [282]
    R. Krishnan, U. Shalit, and D. Sontag, “Structured inference networks for nonlinear state space models,” in Proc. 31st AAAI Conf. Artificial Intelligence, San Francisco, USA, 2017, pp. 2101–2109.
    [283]
    D. Nguyen, S. Ouala, L. Drumetz, and R. Fablet, “EM-like learning chaotic dynamics from noisy and partial observations,” arXiv preprint arXiv: 1903.10335, 2019.
    [284]
    H. D. I. Abarbanel, “Choosing time delays,” in Analysis of Observed Chaotic Data, H. D. I. Abarbanel, Ed. New York, USA: Springer, 1996, pp. 25–37.
    [285]
    H. D. I. Abarbanel, “Choosing the dimension of reconstructed phase space,” in Analysis of Observed Chaotic Data, H. D. I. Abarbanel, Ed. New York, USA: Springer, 1996, pp. 39–67.
    [286]
    S. Ouala, S. L. Brunton, A. Pascual, B. Chapron, F. Collard, L. Gaultier, and R. Fablet, “Bounded nonlinear forecasts of partially observed geophysical systems with physics-constrained deep learning,” arXiv preprint arXiv: 2202.05750, 2022.
    [287]
    S. Ouala, P. Tandeo, B. Chapron, F. Collard, and R. Fablet, “End-to-end Kalman filter in a high dimensional linear embedding of the observations,” in Stochastic Transport in Upper Ocean Dynamics, B. Chapron, D. Crisan, D. Holm, E. Mémin, and A. Radomska, Eds. Cham, Switzerland: Springer, 2023, pp. 211–221.
    [288]
    P. Tandeo, P. Ailliot, and F. Sévellec, “Data-driven reconstruction of partially observed dynamical systems,” EGUsphere, 2022. doi: 10.5194/egusphere-2022-1316
    [289]
    T. Sauer, J. A. Yorke, and M. Casdagli, “Embedology,” J. Stat. Phys., vol. 65, no. 3, pp. 579–616, Nov. 1991.
    [290]
    R. M. Noyes and R. J. Field, “Oscillatory chemical reactions,” Annu. Rev. Phys. Chem., vol. 25, pp. 95–119, Oct. 1974. doi: 10.1146/annurev.pc.25.100174.000523
    [291]
    K. P. Sharp, “Stochastic differential equations in finance,” Appl. Math. Comput., vol. 37, no. 2, pp. 131–148, May 1990.
    [292]
    U. Piomelli, “Large-eddy simulation: Achievements and challenges,” Prog. Aerosp. Sci., vol. 35, no. 4, pp. 335–362, May 1999. doi: 10.1016/S0376-0421(98)00014-1
    [293]
    D. D. Holm, “Variational principles for stochastic fluid dynamics,” Proc. Roy. Soc. A: Math. Phys. Eng. Sci., vol. 471, no. 2176, p. 20140963, Apr. 2015.
    [294]
    E. Mémin, “Fluid flow dynamics under location uncertainty,” Geophys. Astrophys. Fluid Dyn., vol. 108, no. 2, pp. 119–146, May 2014. doi: 10.1080/03091929.2013.836190
    [295]
    B. Chapron, P. Dérian, E. Mémin, and V. Resseguier, “Large-scale flows under location uncertainty: A consistent stochastic framework,” Quart. J. Roy. Meteor. Soc., vol. 144, no. 710, pp. 251–260, Jan. 2018. doi: 10.1002/qj.3198
    [296]
    K. P. Champion, S. L. Brunton, and J. N. Kutz, “Discovery of nonlinear multiscale systems: Sampling strategies and embeddings,” SIAM J. Appl. Dyn. Syst., vol. 18, no. 1, pp. 312–333, Jan. 2019. doi: 10.1137/18M1188227
    [297]
    H. Frezat, G. Balarac, J. Le Sommer, R. Fablet, and R. Lguensat, “Physical invariance in neural networks for subgrid-scale scalar flux modeling,” Phys. Rev. Fluids, vol. 6, no. 2, p. 024607, Feb. 2021. doi: 10.1103/PhysRevFluids.6.024607
    [298]
    H. Frezat, J. Le Sommer, R. Fablet, G. Balarac, and R. Lguensat, “A posteriori learning for quasi-geostrophic turbulence parametrization,” arXiv preprint arXiv: 2204.03911, 2022.
    [299]
    R. Vinuesa and S. L. Brunton, “Enhancing computational fluid dynamics with machine learning,” Nat. Comput. Sci., vol. 2, no. 6, pp. 358–366, Jun. 2022. doi: 10.1038/s43588-022-00264-7
    [300]
    B. Ouyang, L.-T. Zhu, Y.-H. Su, and Z.-H. Luo, “A hybrid mesoscale closure combining CFD and deep learning for coarse-grid prediction of gas-particle flow dynamics,” Chem. Eng. Sci., vol. 248, p. 117268, Feb. 2022. doi: 10.1016/j.ces.2021.117268
    [301]
    T. M. Bury, R. I. Sujith, I. Pavithran, M. Scheffer, T. M. Lenton, M. Anand, and C. T. Bauch, “Deep learning for early warning signals of tipping points,” Proc. Natl. Acad. Sci. USA, vol. 118, no. 39, p. e2106140118, Sept. 2021. doi: 10.1073/pnas.2106140118
    [302]
    P. J. van Leeuwen, “Nonlinear data assimilation in geosciences: An extremely efficient particle filter,” Quart. J. Roy. Meteor. Soc., vol. 136, no. 653, pp. 1991–1999, Oct. 2010. doi: 10.1002/qj.699
    [303]
    R. H. Reichle, “Data assimilation methods in the Earth sciences,” Adv. Water Resour., vol. 31, no. 11, pp. 1411–1418, Nov. 2008. doi: 10.1016/j.advwatres.2008.01.001
    [304]
    X. Zou, I. M. Navon, and F. X. Le Dimet, “Incomplete observations and control of gravity waves in variational data assimilation,” Tellus A: Dyn. Meteor. Oceanogr., vol. 44, no. 4, pp. 273–296, Jan. 1992. doi: 10.3402/tellusa.v44i4.14961
    [305]
    J. Y. Liang, K. Terasaki, and T. Miyoshi, “A machine learning approach to the observation operator for satellite radiance data assimilation,” in Proc. 23rd EGU General Assembly, 2021.
    [306]
    S. Y. Jing, “Data assimilation with a machine learned observation operator and application to the assimilation of satellite data for sea ice models,” University of North Carolina at Chapel Hill, Orange County, USA, Rep. DMS-1722578, Apr. 2019.
    [307]
    T. Frerix, D. Kochkov, J. A. Smith, D. Cremers, M. P. Brenner, and S. Hoyer, “Variational data assimilation with a learned inverse observation operator,” in Proc. 38th Int. Conf. Machine Learning, 2021, pp. 3449–3458.
    [308]
    F. Z. Zhuang, Z. Y. Qi, K. Y. Duan, D. B. Xi, Y. C. Zhu, H. S. Zhu, H. Xiong, and Q. He, “A comprehensive survey on transfer learning,” Proc. IEEE, vol. 109, no. 1, pp. 43–76, Jan. 2021. doi: 10.1109/JPROC.2020.3004555
    [309]
    W. M. Kouw and M. Loog, “A review of domain adaptation without target labels,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 43, no. 3, pp. 766–785, Mar. 2021. doi: 10.1109/TPAMI.2019.2945942
    [310]
    R. Schneider, M. Bonavita, A. Geer, R. Arcucci, P. Dueben, C. Vitolo, B. Le Saux, B. Demir, and P.-P. Mathieu, “ESA-ECMWF Report on recent progress and research directions in machine learning for Earth System observation and prediction,” npj Climate Atmos. Sci., vol. 5, no. 1, p. 51, Jun. 2022. doi: 10.1038/s41612-022-00269-z

    Figures(8)  / Tables(5)


    Highlights

    • To the best of the authors' knowledge, this is the first review that addresses the combination of machine learning, data assimilation and uncertainty quantification for dynamical systems
    • This paper has a special focus on how machine learning methods can contribute to the key challenges of data assimilation and uncertainty quantification, and vice versa
    • This review covers a range of key applications of data assimilation, uncertainty quantification and machine learning, such as numerical weather prediction, environmental modelling and computational fluid dynamics
    • Some promising and insightful research perspectives and challenges are discussed
