> I wonder if this cast is always correct in C [i.e. math.h], no matter the datatype and/or the number base?
Floating point arithmetic is deterministic. As long as it is implemented as specified, atan(1) has to give the floating point number which is the closest approximation to the real number pi/4 (in the current rounding mode). The multiplication by 4 means that precision can be lost, and potentially your result is no longer the closest possible approximation to pi.
This isn't true. The standard only recommends correct rounding but does not actually set any limits on acceptable error. Also, no OS-provided libm produces correctly rounded results for all inputs.
Page 507 for adherence to number formats. Page 517 for atan adhering to the IEEE 754 specification for the functions defined therein, which guarantees best possible results for individual operations.
Any C implementation where atan gives a result which is inconsistent with IEEE 754 specification does not adhere to the standard.
> also, no OS provided libm produces correctly rounded results for all inputs.
Every IEEE 754 conforming library does adhere to the best possible rounding guarantee. If you have any evidence to the contrary that would be a disaster and should be reported to the vendor of that library ASAP.
Can you provide some function and some input which violates the IEEE 754 guarantee together with the specific library and version? Or are you just making stuff up?
In the interests of moving this discussion in a positive direction, the comment you're replying to is correct. IEEE 754 doesn't specify correct rounding except for a small subset of elementary functions. In the 1985 version, this was the core +, -, *, /, and sqrt, but it was updated to include a few of the other functions when they were added. arctan is one of those functions which is not always correctly rounded due to the tablemaker's dilemma. If you read the latest standard (2019), they actually cite some of the published literature giving specific worst case examples for functions like arctan.
Even beyond transcendental functions, 754 isn't deterministic in practice because implementations have choices that aren't always equivalent. Using FMA vs separate multiplication and addition leads to different results in real programs, even though both methods are individually deterministic.
>arctan is one of those functions which is not always correctly rounded due to the tablemaker's dilemma.
But then it doesn't conform to the standard. It is pretty unambiguous on that point.
From Section 9.2:
"A conforming operation shall return results correctly rounded for the applicable rounding direction for all operands in its domain."
I do not see how two conforming implementations can differ in results.
>Using FMA vs separate multiplication and addition leads to different results in real programs, even though both methods are individually deterministic.
Obviously. I never claimed that the arithmetic was invariant under transformations which change floating point operations, but are equivalent for real numbers. That would be ridiculous.
Is there actually an example of two programs performing identical operations under the same environment that give different results where both implementations conform to the standard?
>Even beyond transcendental functions, 754 isn't deterministic in practice because implementations have choices that aren't always equivalent.
Could you give an example? Where are implementations allowed to differ? And are these cases relevant, in the sense that identical operations lead to differing results? Or do they just relate to error handling and signaling.
That section is recommended but not required for a conforming implementation:
> 9. Recommended operations
> Clause 5 completely specifies the operations required for all supported arithmetic formats. This clause specifies additional operations, recommended for all supported arithmetic formats.
>That section is recommended but not required for a conforming implementation:
Who cares? The C standard for math.h requires these functions to be present as specified. They are specified to round correctly, and the C standard requires them to be present as specified; therefore the C standard specifies them as present and correctly rounded. I literally quoted the relevant sections; there is no conforming C implementation which gives different results.
Any evidence whatsoever that this is caused by two differing implementations of tanh, both of which conform to the IEEE 754 standard?
Everyone is free to write their own tanh; it is totally irrelevant what numpy gives, unless there are calls to two standard-conforming tanh functions which for the same datatype produce different results.
> The C standard for math.h requires these functions to be present as specified. They are specified to round correctly, the C standard specifies them to be present as specified, therefore the C standard specifies them as present and correctly rounded. I literally quoted the relevant sections, there are no conforming C specification which give different results.
Forgive me, but I cannot see that in the document sections you point out. The closest I can see is F.10-3, on page 517, but my reading of that is that it only applies to the special cases (i.e. the values in Section 9.2.1), not the full domain.
In fact, my reading of F.10-10 (page 518) suggests that a conforming implementation does not even have to honor the rounding mode.
I'm not aware of any libm implementations that will guarantee correct rounding across all inputs for all types. I'm aware of a few libm's that will guarantee that for floats (e.g. rlibm: https://people.cs.rutgers.edu/~sn349/rlibm/ ), but these are not common.
I don't particularly want to read the standard today to quote chapter and verse, but it's generally understood in the wider community that correct rounding is not required by 754 outside a small group of core functions where it's practically reasonable to implement. This applies to everything from the 754 implementation in your CPU to compiler runtimes. Correct rounding is computationally infeasible without arbitrary precision arithmetic, which is what the major compilers use at compile time. If you're expecting it at any other time, I'm sorry to say that you'll always be disappointed.
I mean, maybe I am just an insane guy on the internet, but to me "correctly rounded" just sounds a bit different from "the implementor gets to decide how many correct bits he wants to provide".
We're thankfully in a world these days where all the relevant implementations are sane and reliable for most real usage, but a couple decades back that was very much the practical reality. Intel's x87 instruction set was infamous for this. Transcendentals like fsin would sometimes have fewer than a dozen bits correct and worse, the documentation on it was straight up wrong until Bruce Dawson on the chrome team filed a bug report.
(in summary, llvm-libc correctly rounds all functions, as it's explicitly borrowing from one of the correctly-rounded efforts; of the other implementations, Intel's library usually gets the closest, but not always).
In binary floating point, 4.0 = 1.0×2^2, so the mantissa of the multiplicand will stay the same (being multiplied by 1.0) and the exponent will be incremented by 2. Scaling by exact integer powers of 2 preserves the relative accuracy of the input so long as you stay in range. The increase in absolute error is inherent to the limited number of mantissa bits and not introduced by any rounding from the multiplication; there are no additional bits.
This is about the approximation to pi, not the approximation to float(atan(1))*4; the multiplication is exact (but irrelevant) for the latter. For the former you lose two bits of resolution, so you have a 25% chance of correctly rounding towards pi.