Principal component analysis is a ubiquitous tool in exploratory data analysis, widely used by applied scientists for visualization and interpretability. We raise an important issue (the curse of isotropy) about the interpretation of principal components with close eigenvalues: such components suffer from substantial rotational variability, which is a pitfall for interpretation. Through the lens of a probabilistic covariance model parameterized with flags of subspaces, we show that the curse of isotropy cannot be overlooked in practice. In this context, we propose to transition from ill-defined principal components to more interpretable principal subspaces. The resulting methodology (principal subspace analysis) is extremely simple and shows promising results on a variety of datasets from different fields.
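The phenomenon can be illustrated with a minimal NumPy sketch (an illustration of the issue, not the paper's method): when the two leading eigenvalues of the covariance are nearly equal, the individual principal directions estimated from two independent samples can differ substantially, while the two-dimensional subspace they span stays stable. All numerical choices below (eigenvalues, sample size, seed) are arbitrary assumptions for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Population covariance with two close leading eigenvalues (3.0 vs 2.95):
# a near-isotropic pair, well separated from the third eigenvalue (0.5).
cov = np.diag([3.0, 2.95, 0.5])

def top_pcs(X, k=2):
    # Leading principal directions via eigendecomposition of the sample covariance.
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:k]]

# Two independent samples from the same distribution.
X1 = rng.multivariate_normal(np.zeros(3), cov, size=500)
X2 = rng.multivariate_normal(np.zeros(3), cov, size=500)
U1, U2 = top_pcs(X1), top_pcs(X2)

# The first principal components may point in quite different directions
# (rotational variability): alignment of 1 means identical directions.
pc1_alignment = abs(U1[:, 0] @ U2[:, 0])

# The 2D principal subspaces, however, nearly coincide: compare the
# orthogonal projectors onto span(U1) and span(U2).
P1, P2 = U1 @ U1.T, U2 @ U2.T
subspace_gap = np.linalg.norm(P1 - P2, 2)

print("PC1 alignment:", pc1_alignment)
print("subspace gap:", subspace_gap)
```

Across random seeds, the subspace gap remains small (the eigengap to the third eigenvalue is large), whereas the PC1 alignment fluctuates, which is exactly the instability that motivates reporting subspaces rather than individual components.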