The total variation (of the gradient) is widely applied as a regularization prior for diverse inverse problems. It is most useful when the true data is expected to be nearly piecewise constant, for example in the recovery of relatively simple images consisting of well-defined objects with limited texture, or in the identification of physical parameters expected to contain inclusions or discontinuities. A basic question for any regularization method is consistency in the low noise regime with vanishing regularization parameter. For total variation regularization, basic compactness considerations yield convergence in L^p norms, while adding a source condition involving the subgradient of the total variation at the least-energy exact solution yields convergence rates in the Bregman distance. However, these distances do not provide much information in the setting of nearly piecewise constant functions that motivates the use of the total variation in the first place.
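As a point of reference, and with notation that is ours rather than fixed by the text above, a standard formulation takes regularized solutions
\[ u_\alpha^\delta \in \operatorname{argmin}_u \ \tfrac{1}{2}\,\|Au - f^\delta\|^2 + \alpha \,\mathrm{TV}(u), \]
the source condition asks for some \( w \) with \( \xi = A^* w \in \partial \mathrm{TV}(u^\dagger) \) at the least-energy exact solution \( u^\dagger \), and the resulting rates are measured in the Bregman distance
\[ D_\xi(u, u^\dagger) = \mathrm{TV}(u) - \mathrm{TV}(u^\dagger) - \langle \xi, u - u^\dagger \rangle . \]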
A different, perhaps more adequate choice is convergence of the boundaries of level sets with respect to the Hausdorff distance, which can be loosely interpreted as uniform convergence of the objects to be recovered. Such a result requires a suitable choice of (possibly Banach) spaces for the measurements and dual stability estimates to account for the noise, which together provide uniform weak regularity estimates for the level sets. We present some recent results obtaining this type of convergence for regularization of linear inverse problems under the same type of source condition, and for denoising of simple data without a source condition, along with some additional consequences of this point of view.
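For concreteness, and again with purely illustrative notation, writing the level sets as \( E_t(u) = \{ x : u(x) > t \} \), the convergence in question is
\[ d_H\big(\partial E_t(u_\alpha^\delta),\, \partial E_t(u^\dagger)\big) \to 0 \quad \text{as } \delta \to 0,\ \alpha = \alpha(\delta), \]
where \( d_H \) denotes the Hausdorff distance; in the nearly piecewise constant setting this captures uniform convergence of the recovered objects and their boundaries.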