Underflow in computing occurs when a calculation produces a nonzero result whose magnitude is smaller than the smallest value the number format can represent, so it is rounded to zero when stored. Floating-point formats have finite precision and a limited number of exponent bits, which puts a hard lower bound on how close to zero a representable nonzero value can be. It's like trying to measure a grain of sand with a ruler marked in centimeters: the grain is so small that the ruler simply cannot register it.
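
As a minimal sketch, here is how underflow looks in Python, whose `float` is an IEEE 754 double-precision number on virtually all platforms. The specific values printed (the smallest normal double, the smallest subnormal) come from the double-precision format itself, not from anything Python adds:

```python
import sys

# Smallest positive "normal" double-precision value (about 2.2e-308).
print(sys.float_info.min)   # 2.2250738585072014e-308

# Squaring a tiny number would give 1e-400, which is below even the
# smallest subnormal double (about 5e-324), so the result underflows
# to exactly 0.0 instead of a tiny nonzero value.
tiny = 1e-200
print(tiny * tiny)          # 0.0
print(tiny * tiny == 0.0)   # True

# Gradual underflow: magnitudes between ~5e-324 and sys.float_info.min
# are still representable as subnormals, at the cost of precision.
print(5e-324)               # 5e-324 (smallest positive subnormal)
print(5e-324 / 2)           # 0.0 -- rounds all the way to zero
```

Note that the multiplication raises no error; the zero result silently replaces the true value, which is exactly why underflow can be hard to notice in a larger computation.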