I think this has to do with the nature of the metric underneath. Closed-open intervals are the way to go for integers. However, they don't seem to be a good fit for sampling from a continuous space.


When you are working on a continuous space, what difference does it actually make in a program whether you use a closed-closed or a half-open interval?

The only case that comes to my mind where it makes a difference at all is when you take discrete values from the interval and happen to hit exactly the endpoint. But then the difference comes from the "discrete" part again. As long as you work on the continuous space, everything you typically do (e.g. integrals, averages, ...) gives the same result no matter whether the endpoint is part of the interval.
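
To make that precise (a sketch, assuming the standard Lebesgue measure on the reals): a single point has measure zero, so including or excluding an endpoint cannot change any integral over the interval.

    % A single endpoint has Lebesgue measure zero, so the
    % closed and half-open integrals necessarily agree:
    \lambda(\{b\}) = 0
    \quad\Longrightarrow\quad
    \int_{[a,b]} f \,\mathrm{d}\lambda = \int_{[a,b)} f \,\mathrm{d}\lambda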

(You mentioned sampling, but I'd suggest that sampling by just taking point values without lowpass filtering first is a bad idea anyway, and with a filter you are back to an integral, which doesn't care about open/closed.)


It has to do with how CPUs do floating-point division.

Here is an article that discusses this, along with workarounds:

https://dl.acm.org/doi/10.1145/3503512

From the abstract: "Drawing a floating-point number uniformly at random from an interval [a, b) is usually performed by a location-scale transformation of some floating-point number drawn uniformly from [0, 1). Due to the weak properties of floating-point arithmetic, such a transformation cannot ensure respect of the bounds, uniformity or spatial equidistributivity."
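
A minimal sketch of the failure mode in Python (the concrete values a = 1.0, b = 3.0 are my own illustration, not taken from the paper): under the default round-to-nearest-even, the location-scale transform a + u*(b-a) can return b even though u < 1.

    import math

    a, b = 1.0, 3.0
    u = math.nextafter(1.0, 0.0)  # largest double strictly below 1.0, i.e. 1 - 2**-53

    # Location-scale transform of u in [0, 1) onto the interval [a, b):
    x = a + u * (b - a)

    # u * (b - a) = 2 - 2**-52 is computed exactly, but adding a = 1.0
    # gives 3 - 2**-52, which lies exactly halfway between two doubles;
    # ties round to the even significand, which here is 3.0.
    print(x == b)  # True: the sample lands on the "excluded" upper bound

Clamping the result back to math.nextafter(b, a) keeps it inside the interval, but per the abstract that still doesn't give you uniformity or spatial equidistributivity.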



