Ok, firstly you need to understand that some values in the analysis are heuristic (which means they can be reported incorrectly). Most of the values used in the rating calculation are very strict, but "clipping" is not: the current version of the algorithm is made to be fast, and sometimes it can show high values even when there is no clipping at all (a very rare condition). And secondly, some decoders decode and interpret a song with a very high level of anti-aliasing and averaging of such artifacts, for example Apple QuickTime (it decodes all data as 32-bit floating point, even if the native data is 8-bit integer, with a very high level of smoothing). This is good for playback, not for analysis, since such methods hide some artifacts.
What does clipping mean? As an example, take 16-bit stereo sound data (the sample rate isn't important) and look at the raw values:
-30000, -30001, -29988, -16000, 89, 8900,
The values near -30000 are clipped, because they are located near the extreme (min/max) of the data and they repeat (within some small range). Normally encoded sound data doesn't have such repeating values at all; if we had one value near the extreme, the next value would be very different. You also need to understand that the min/max values don't depend on the min/max of the integer range (-32768, +32767), they are completely unrelated: some songs can have clipping with a min/max equal to 10000, they are just made quiet.
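To make that idea more concrete, here is a minimal sketch in Python of the heuristic described above: flag runs of samples that repeat within a small range while sitting near the track's own min/max (not near the integer limits). The tolerance, minimum run length, and edge margin here are illustrative assumptions, not the analyzer's actual parameters.

```python
# A minimal sketch of the idea described above (not the analyzer's real code):
# clipping is suspected when samples repeat within a small range close to the
# track's OWN min/max, not close to the integer limits (-32768/+32767).
# tolerance, min_run and edge_margin are illustrative assumptions.

def count_clipped_runs(samples, tolerance=16, min_run=2, edge_margin=0.01):
    """Count runs of near-identical samples sitting near the track's own extremes."""
    lo, hi = min(samples), max(samples)
    margin = (hi - lo) * edge_margin        # "near the extreme" band, relative to the data
    runs = 0
    run_len = 1
    for prev, cur in zip(samples, samples[1:]):
        near_extreme = cur <= lo + margin or cur >= hi - margin
        if near_extreme and abs(cur - prev) <= tolerance:
            run_len += 1                    # sample repeats near an extreme -> extend the run
        else:
            if run_len >= min_run:
                runs += 1                   # close a suspicious run
            run_len = 1
    if run_len >= min_run:
        runs += 1
    return runs

# The example values from above: the first three samples sit near the data's own
# minimum and stay within a small range of each other, so they count as one run.
print(count_clipped_runs([-30000, -30001, -29988, -16000, 89, 8900]))  # -> 1
```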