Precision in numerics is usually considered in relative terms (e.g. significant figures). Every floating point number carries the same number of bits of precision in its significand. It is true, though, that roughly half of the floats lie between -1 and 1. That is because relative precision is constant across the range: each power-of-two interval holds the same number of floats, and about half of the exponent range covers magnitudes below 1.
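To make that concrete, here's a quick sketch (assuming Python with NumPy; np.spacing gives the gap from a float to the next representable one):

    import numpy as np

    # The absolute gap between adjacent float32 values grows with magnitude,
    # but the relative gap stays within a factor of two of 2**-23.
    for x in [1.0, 1000.0, 1e30]:
        x32 = np.float32(x)
        print(x32, np.spacing(x32), np.spacing(x32) / x32)

    # For positive floats, numeric order matches bit-pattern order, so counting
    # bit patterns counts values: just under half of the positive finite float32
    # values have magnitude below 1.
    one_bits = np.float32(1.0).view(np.uint32)           # 0x3F800000
    max_bits = np.finfo(np.float32).max.view(np.uint32)  # 0x7F7FFFFF
    print(one_bits / max_bits)                           # ~0.498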
Only the normal floating point numbers have this property; the subnormals do not.
In single-precision floats, for example, there is no 0.000000000000000000000000000000000000000000002
it goes straight from 0.000000000000000000000000000000000000000000001
to 0.000000000000000000000000000000000000000000003
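You can see the gap near zero directly; a minimal sketch, again assuming Python with NumPy (nextafter steps to the adjacent representable value):

    import numpy as np

    zero, one = np.float32(0.0), np.float32(1.0)

    # Smallest positive float32 is the subnormal 2**-149, about 1.4e-45.
    tiny = np.nextafter(zero, one)
    print(tiny)                       # prints 1e-45 (shortest round-trip decimal)

    # Its neighbour is 2 * 2**-149, about 2.8e-45, which prints as 3e-45.
    print(np.nextafter(tiny, one))    # prints 3e-45

    # No float32 has 2e-45 as its shortest decimal form; the literal rounds
    # back down to the smallest subnormal.
    print(np.float32(2e-45) == tiny)  # True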
Subnormals are a dirty hack to squeeze a bit more breathing space around zero for people who really need it. They aren't well supported in hardware: on many CPUs an operation that touches one falls back to a much slower assist path, and plenty of SIMD/GPU pipelines just flush them to zero. If they show up in a normal computation, it usually means something has underflowed and is about to go wrong.
What? Pretty sure there's more precision in [0-1] than there is in really big numbers.