Invariance is a fundamental principle underlying the definition of shape: an object's shape is what remains after quotienting out transformations such as translations and rotations; in other words, a shape descriptor is invariant under the action of a group G. In this talk, we ask: can we design neural networks that leverage this principle to achieve better generalization? Group-Equivariant Convolutional Neural Networks (G-CNNs) offer one answer, in which invariance to the group action is achieved by max pooling over the group. Yet this pooling operation is excessively invariant: it destroys most of the information about the object and is therefore inadequate to represent any notion of shape, which in turn contributes to the general lack of robustness observed in both classical CNNs and G-CNNs. To address this, we take a spectral perspective and propose novel pooling primitives that achieve lossless invariance in CNNs and G-CNNs. We demonstrate that these primitives can improve both accuracy and robustness over traditional pooling, and we examine the trade-offs they entail between computational efficiency and performance. Our approach illustrates the benefits of reexamining even the most established deep learning primitives through the lens of algebra and geometry, revealing new insights and unlocking performance gains.
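To make the contrast concrete, the sketch below is a hypothetical NumPy example, not the exact primitive proposed in the talk: it compares max pooling over a cyclic-group orbit with the triple correlation, a classical spectral invariant that is complete up to a group shift for generic signals. The group (Z_4), the feature values, and the function names max_pool and triple_correlation are all assumptions made for illustration.

```python
import numpy as np

def max_pool(f):
    # Standard invariant pooling over the group orbit: keeps only the maximum response.
    return f.max()

def triple_correlation(f):
    # Triple correlation over the cyclic group Z_n (illustrative spectral invariant):
    #   T[k1, k2] = sum_g f[g] * f[(g + k1) % n] * f[(g + k2) % n]
    # It is invariant to cyclic shifts of f and, unlike max pooling, retains enough
    # information to recover f up to a shift for generic signals.
    n = len(f)
    return np.array([[sum(f[g] * f[(g + k1) % n] * f[(g + k2) % n] for g in range(n))
                      for k2 in range(n)] for k1 in range(n)])

# Hypothetical responses of one filter across the four elements of the cyclic group.
f = np.array([0.9, 0.1, 0.4, 0.2])
f_shifted = np.roll(f, 1)           # the same object under a group action: a cyclic shift
g = np.array([0.9, 0.4, 0.1, 0.2])  # a different object that happens to share the same maximum

print(max_pool(f) == max_pool(f_shifted))   # True: invariant to the group action
print(max_pool(f) == max_pool(g))           # True: but it cannot tell f and g apart
print(np.allclose(triple_correlation(f), triple_correlation(f_shifted)))  # True: still invariant
print(np.allclose(triple_correlation(f), triple_correlation(g)))          # False: information preserved
```

In a G-CNN, f would play the role of one filter's responses across the group elements; the point of the sketch is only that a spectral invariant can replace the max without collapsing the orbit information.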