
> equal precision across the range

What? Pretty sure there's more precision in [0-1] than there is in really big numbers.



Precision in numerics is usually measured in relative terms (e.g. significant figures). Every normal floating-point number carries the same number of significand bits. It is true, though, that half of the floats lie between -1 and 1. That is precisely because precision is equal across the range: each power-of-two interval contains the same number of floats.
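A quick sketch of the "equal relative precision" claim, using Python's `math.ulp` (the gap from a float to the next larger one): for normal doubles the gap scales with the magnitude, so the relative gap stays within a factor of two of 2**-52 across the whole normal range.

```python
import math

# For normal doubles, the gap to the next float (the "ulp") scales with
# the magnitude, so relative precision is roughly constant: ulp(x)/x is
# always within a factor of two of 2**-52.
for x in [1.0, 2.0, 1e10, 1e-10, 1e300]:
    print(f"{x:>8.0e}  ulp={math.ulp(x):.3e}  relative={math.ulp(x) / x:.3e}")

# A double has 52 fractional significand bits, so the relative gap just
# above a power of two is exactly 2**-52.
assert math.ulp(1.0) == 2**-52
assert math.ulp(2.0) / 2.0 == 2**-52
```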


Only the normal floating-point numbers have this property; the subnormals do not.

In single-precision floats, for example, there is no 0.000000000000000000000000000000000000000000002; it goes straight from 0.000000000000000000000000000000000000000000001 to 0.000000000000000000000000000000000000000000003.
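You can check this from the bit patterns with the standard `struct` module: interpreting the 32-bit integers 1, 2, 3 as IEEE-754 singles yields the three smallest positive subnormals, spaced 2**-149 (about 1.4e-45) apart.

```python
import struct

def f32_from_bits(n: int) -> float:
    # Reinterpret a 32-bit integer as an IEEE-754 single-precision float;
    # for small n these are the subnormals n * 2**-149.
    return struct.unpack('<f', struct.pack('<I', n))[0]

for n in (1, 2, 3):
    print(n, f32_from_bits(n))
# The spacing is 2**-149 (~1.4e-45), so printed to one significant digit
# the three smallest positive float32 values read 1e-45, 3e-45, 4e-45:
# nothing lands at "2e-45".
assert f32_from_bits(1) == 2**-149
assert f32_from_bits(2) == 2 * 2**-149
assert f32_from_bits(3) == 3 * 2**-149
```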

So that's not even one whole digit of precision.


Yes, that is true. The subnormal numbers gradually lose precision going towards zero.
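A sketch of that gradual loss, using doubles for convenience: in the normal range the absolute gap shrinks along with the value, but in the subnormal range it is pinned at 2**-1074, so relative precision decays as you approach zero.

```python
import math
import sys

# At the smallest normal double and below, the absolute gap stops
# shrinking: it is stuck at 2**-1074 for every subnormal.
assert math.ulp(sys.float_info.min) == 2**-1074  # smallest normal
assert math.ulp(1e-310) == 2**-1074              # a subnormal
assert math.ulp(5e-324) == 2**-1074              # the smallest subnormal

# So relative precision degrades: ~2e-16 for normals, but far worse
# deep in the subnormal range.
print(math.ulp(1.0) / 1.0)        # ~2.2e-16
print(math.ulp(1e-310) / 1e-310)  # ~5e-14
```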


Subnormals are a dirty hack to squeeze a bit more breathing space around zero for people who really need it. They aren't even really supported in hardware. Using them in normal contexts is usually an error.


As of 2025, they finally have hardware support from Intel and AMD. IIRC it took until Zen 2 and Ice Lake to do this.


Oh joy! Just in time for all computation to move to GPUs running eight-bit "floats".




