Choosing models from a hypothesis space is a frequent task in inverse problems. Cross-validation is a classical tool in the learner's repertoire for estimating the goodness of such a model. Much work has been dedicated to computing this quantity quickly, but its theoretical properties turn out to be difficult to tackle. So far, most optimality results are stated in an asymptotic fashion. In this talk we propose a concentration inequality for the difference between the cross-validation score and the risk functional with respect to the squared error. This gives a pre-asymptotic bound which holds with high probability. Our assumptions rely on bounds on the uniform error of the model, which allows for a broadly applicable framework.
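To fix ideas, the two quantities being compared can be written in a standard form; the notation below is ours and merely illustrative, and the talk's exact setting may differ. Given data $(x_i, y_i)_{i=1}^{n}$, a model $\hat f$ fitted on the full sample, and $\hat f^{(-i)}$ fitted with the $i$-th observation removed,
\[
  \mathrm{CV}_n \;=\; \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i - \hat f^{(-i)}(x_i)\bigr)^2,
  \qquad
  \mathcal{R}(\hat f) \;=\; \mathbb{E}\bigl[\bigl(Y - \hat f(X)\bigr)^2\bigr],
\]
and a pre-asymptotic guarantee of the advertised kind bounds $\mathbb{P}\bigl(\lvert \mathrm{CV}_n - \mathcal{R}(\hat f)\rvert > \varepsilon\bigr)$ explicitly in $n$ and $\varepsilon$ rather than only in the limit $n \to \infty$.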
We support our claims by applying this machinery to Shepard's model, for which we are able to determine precise constants in the concentration inequalities. Fast numerical experiments indicate the applicability of our results.
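As a purely illustrative sketch (not the authors' code; the exponent mu, the noise level, and the synthetic data are arbitrary choices of ours), the following computes the leave-one-out cross-validation score for Shepard's inverse-distance-weighting model on a one-dimensional toy problem and compares it with a Monte Carlo estimate of the squared-error risk:

    # Illustrative sketch: LOO cross-validation for Shepard's model on synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)

    def shepard_predict(x_train, y_train, x_query, mu=2.0, eps=1e-12):
        """Shepard / inverse-distance-weighting prediction at the query points."""
        d = np.abs(x_query[:, None] - x_train[None, :])   # pairwise distances
        w = 1.0 / (d**mu + eps)                           # weights grow near training points
        return (w @ y_train) / w.sum(axis=1)              # weighted average of labels

    def loo_cv_score(x, y, mu=2.0):
        """Leave-one-out cross-validation score under the squared error."""
        errs = []
        for i in range(len(x)):
            mask = np.arange(len(x)) != i
            pred = shepard_predict(x[mask], y[mask], x[i:i + 1], mu=mu)
            errs.append((y[i] - pred[0])**2)
        return np.mean(errs)

    # synthetic regression data: noisy samples of a smooth function
    f = lambda t: np.sin(2 * np.pi * t)
    n, sigma = 50, 0.1
    x = rng.uniform(0, 1, n)
    y = f(x) + sigma * rng.normal(size=n)

    # Monte Carlo estimate of the squared-error risk of the full-sample model
    x_test = rng.uniform(0, 1, 10_000)
    y_test = f(x_test) + sigma * rng.normal(size=10_000)
    risk = np.mean((y_test - shepard_predict(x, y, x_test))**2)

    print(f"LOO-CV score:  {loo_cv_score(x, y):.4f}")
    print(f"risk estimate: {risk:.4f}")

The explicit loop is kept for readability; for Shepard-type weights the leave-one-out predictions could equally be read off from the full weight matrix with its diagonal removed, so such experiments remain cheap even for larger samples.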