Pooling - are there any advantages to not pooling?

Classic texts on convolutional neural networks (CNNs) always show the use of pooling. Clearly, this reduces the size of the data being processed, which in many cases might be necessary. However, in cases where sizes are not large, might there be a case for not pooling? Put another way, is pooling in any way detrimental to securing the highest classification accuracy? After all, pooling does actually discard information, which might otherwise be useful.

Not using pooling in CNNs can preserve fine details and improve feature learning but may increase computational costs and the risk of overfitting. It’s beneficial when detail preservation is crucial, but it requires more resources and may not capture global context as effectively.
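To make the information-discard point concrete, here is a minimal sketch in plain Python (not from the thread) of 2x2 max pooling with stride 2. Of the 16 input values, only 4 survive; the rest are thrown away before the next layer ever sees them.

```python
def max_pool_2x2(x):
    """Max-pool a 2D grid with 2x2 windows, stride 2, no padding."""
    h, w = len(x), len(x[0])
    return [[max(x[i][j], x[i][j + 1], x[i + 1][j], x[i + 1][j + 1])
             for j in range(0, w - 1, 2)]
            for i in range(0, h - 1, 2)]

image = [
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 0, 5, 6],
    [1, 2, 7, 8],
]
pooled = max_pool_2x2(image)
# 16 values reduced to 4: [[4, 2], [2, 8]]; 12 values are discarded,
# and the exact position of each maximum within its window is lost.
```

Note that the positional information within each window is lost too, which is part of what gives pooling its (approximate) translation invariance, for better or worse.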

Thank you for your response. It seems, then, that pooling acts like a form of regularisation. Possibly, when the image contains few pixels, each carrying a lot of information, pooling might not be such a good idea. If, for example, images are already oversampled, i.e. sampled at many times the diffraction limit of the optics, discarding information by pooling might be a benefit, acting as a data reduction technique. However, when you are sampling just at the diffraction limit, not pooling might be the best option to preserve valuable information. I don't really know, I'm just supposing, but I will continue to think about this one.
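The oversampling intuition above can be illustrated with a small sketch (my own, not from the thread): average-pooling a heavily oversampled smooth signal loses very little, because neighbouring samples are nearly redundant. The signal and pool size here are arbitrary choices for illustration.

```python
import math

def avg_pool(x, k):
    """Average-pool a 1D sequence with window k, stride k, no padding."""
    return [sum(x[i:i + k]) / k for i in range(0, len(x) - k + 1, k)]

# One period of a sine sampled at 64 points: far above the rate the
# signal needs, i.e. "oversampled".
fine = [math.sin(2 * math.pi * t / 64) for t in range(64)]

# Pooling by 4 reduces 64 samples to 16, yet each pooled value stays
# close to the underlying smooth signal at the window centre.
coarse = avg_pool(fine, 4)
errors = [abs(coarse[i] - math.sin(2 * math.pi * (4 * i + 1.5) / 64))
          for i in range(len(coarse))]
```

For a signal sampled right at the limit of the optics, neighbouring pixels are no longer redundant, and the same averaging would genuinely destroy detail, which matches the supposition above.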