
In my numerical analysis class, numerical stability was a major theme [1]. It was eye-opening how quickly rounding/truncation errors can bite you in seemingly trivial situations!
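A classic example of the kind of thing that bites is catastrophic cancellation: subtracting two nearly equal numbers wipes out most of the significant digits. A tiny C illustration (numbers picked just to make the effect visible):

    #include <stdio.h>

    int main(void) {
        float a = 1.0000001f;  /* rounds to 1 + 2^-23 ~ 1.00000012 */
        float b = 1.0000000f;
        /* the true difference is 1e-7; the computed one is off by ~19% */
        printf("%.10f\n", a - b);  /* prints 0.0000001192 */
        return 0;
    }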

On the other hand, if I have a function that works on parameters in a specified range (say, 0.0 to 1.0) and returns results that only need to be correct to a specified accuracy, I would love to have the compiler do all the optimizations that allows without having to specify a compiler flag (doing that for, say, just a single function can be quite annoying!). Maybe approaches like Haskell's type inference will eventually produce significantly faster code because they can do things like this?
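For what it's worth, GCC can already scope this to a single function via its optimize attribute, though it is exactly the kind of per-function annotation that gets annoying. A minimal sketch, assuming a made-up function (approx_poly, its coefficients, and its tolerance are all illustrative):

    /* GCC-specific: allow fast-math transformations in this function only */
    __attribute__((optimize("fast-math")))
    float approx_poly(float x) {
        /* assumes 0.0f <= x <= 1.0f and tolerates a small absolute error */
        return 1.0f + x * (0.5f + 0.25f * x);
    }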

In a somewhat related case, it would be awesome if the compiler recognized properties like associativity, distributivity, and commutativity of composite functions and rearranged things optimally for performance. It's really nice to see so much development in languages and compilers at the moment (LLVM, JavaScript, functional languages like Haskell, etc.) :°)
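One reason compilers refuse to do this rearranging for floating point without an explicit flag: fp addition is not actually associative, so reordering changes results. A quick C demonstration:

    #include <stdio.h>

    int main(void) {
        float big = 1.0e8f, small = 1.0f;
        /* ulp(1e8f) is 8, so adding 1.0f to it is simply absorbed */
        printf("%f\n", (small + big) - big);  /* 0.000000 */
        printf("%f\n", small + (big - big));  /* 1.000000 */
        return 0;
    }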

[1] http://en.wikipedia.org/wiki/Numerical_stability



So you want something like GCC's -funsafe-math-optimizations flag.


David Buehler's mixed-point computations can do that, I believe.



