We discuss novel multilevel best linear unbiased estimators (BLUEs) introduced in [4]. The goal is the estimation of the expectation of a scalar-valued quantity of interest associated with a family of model evaluations. The key idea of the multilevel BLUEs is to reformulate
the estimation as a generalized linear regression problem. By construction, BLUEs are variance minimal within the class of linear unbiased estimators. By solving a sample allocation problem, we further construct a variance-minimal, linear, and unbiased estimator for a given computational budget. We compare our proposed estimator to other multilevel estimators such as multilevel Monte Carlo [1], multifidelity Monte Carlo [3], and approximate control variates [2]. In addition, we show that our estimator approaches a sharp lower bound that holds for any linear unbiased multilevel estimator in the infinite low-fidelity data limit. Finally, we specialize the results in [4] to PDE-based models that are parameterized by a discretization quantity, e.g., the finite element mesh size. We prove that in this case the complexity of the BLUE is not worse than the complexity of multilevel Monte Carlo [5]. In practice, we observe a complexity reduction for selected random elliptic PDE problems.
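To illustrate the regression viewpoint, the following sketch computes the BLUE for a generalized linear model y = X beta + eps with known noise covariance Sigma, via generalized least squares. The concrete X, Sigma, and data below are toy choices for illustration only; they are assumptions, not the multilevel estimator or sample allocation of [4].

```python
import numpy as np

def blue(X, Sigma, y):
    """BLUE (generalized least squares) for y = X beta + eps, Cov(eps) = Sigma:
    beta_hat = (X^T Sigma^{-1} X)^{-1} X^T Sigma^{-1} y."""
    Sinv_X = np.linalg.solve(Sigma, X)
    Sinv_y = np.linalg.solve(Sigma, y)
    return np.linalg.solve(X.T @ Sinv_X, X.T @ Sinv_y)

# Toy setting (an assumption for illustration): two correlated "model levels",
# each an unbiased observation of the same scalar mean mu.
rng = np.random.default_rng(0)
mu = 1.5
Sigma = np.array([[1.0, 0.9],
                  [0.9, 1.0]])       # known covariance of the observation noise
X = np.ones((2, 1))                  # both observations are unbiased for mu
L = np.linalg.cholesky(Sigma)
y = mu + L @ rng.standard_normal(2)  # one correlated observation pair
beta_hat = blue(X, Sigma, y)         # BLUE of mu from the two observations
```

With uncorrelated, equal-variance noise (Sigma proportional to the identity), the BLUE reduces to the sample mean; correlation between levels changes the optimal weights, which is what the multilevel formulation exploits.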
[1] M. B. Giles, Multilevel Monte Carlo methods, Acta Numerica, 24, pp. 259-328, 2015.
[2] A. A. Gorodetsky, G. Geraci, M. Eldred, J. D. Jakeman, A Generalized Framework for Approximate Control Variates, J. Comput. Phys., 408, 109257, 2020.
[3] B. Peherstorfer, K. Willcox, M. Gunzburger, Optimal Model Management for Multifidelity Monte Carlo Estimation, SIAM J. Sci. Comput., 38, pp. A3163-A3194, 2016.
[4] D. Schaden, E. Ullmann, On multilevel best linear unbiased estimators. To appear in SIAM/ASA J. Uncert. Quantif.
[5] D. Schaden, E. Ullmann, Asymptotic analysis of multilevel best linear unbiased estimators. Submitted.