This talk studies the supervised learning of linear operators between infinite-dimensional Hilbert spaces. Learning is framed as a Bayesian inverse problem in which the unknown parameter is a linear operator. Assuming that the true operator is diagonalizable in a known basis, this work solves the equivalent inverse problem of estimating the operator's eigenvalues from the data. The analysis establishes posterior contraction rates in the infinite-data limit under Gaussian priors. These convergence rates reveal fundamental principles of operator learning that could help guide practical developments and reduce the required data volume. Numerical evidence supports the theory in diagonal and non-diagonal settings corresponding to familiar PDE operators.
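A minimal sketch of the kind of data model such an analysis typically assumes (the symbols below are illustrative and not taken from the talk): training pairs $(x_n, y_n)$ are generated by
\[
y_n = L^\dagger x_n + \eta_n, \qquad n = 1, \dots, N,
\]
where $L^\dagger$ is the unknown linear operator and $\eta_n$ is Gaussian noise. If $L^\dagger$ is diagonalizable in a known orthonormal basis $\{\varphi_j\}$, i.e. $L^\dagger \varphi_j = \lambda_j^\dagger \varphi_j$, then recovering $L^\dagger$ is equivalent to recovering the eigenvalue sequence $\{\lambda_j^\dagger\}$, and placing a Gaussian prior on this sequence yields a posterior whose contraction rate can be studied as $N \to \infty$.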