
Reusing symbols like +, *, or / to define operations that aren't the + or the / you're used to is pretty common in math. It's just notation.

At the end of the day, the / we have in programming has the same problem as this article's /: almost all programming languages return 5/2 = 2 when dividing integers, even though 2 * 2 is not 5. Division is not defined for all pairs of integers, but it's convenient to extend it when programming.
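For instance, here is the truncation the parent describes, as a quick Go sketch:

```go
package main

import "fmt"

func main() {
	// Integer division in Go truncates toward zero,
	// so (a/b)*b generally does not recover a.
	a, b := 5, 2
	fmt.Println(a / b)       // 2
	fmt.Println((a / b) * b) // 4, not 5
}
```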

So if some languages want to define 1/0 = 0, we really shouldn't be surprised that 0*0 is not 1; we already had the (a/b)*b != a problem all along!



> Reusing symbols like +, *, or / to define operations that aren't the + or the / you're used to is pretty common in math. It's just notation.

Reusing symbols in a different context is pretty common; taking a symbol that already has a broadly established meaning (in this case, that `a/b` is defined for elements of a field as multiplying `a` by the multiplicative inverse of `b`) and quietly redefining it in that same context is poor form and, frankly, a disingenuous argument.


I am a professor of algebra at a research university. I make a point of teaching my students that `a/b` is NOT the same as multiplying `a` by the multiplicative inverse of `b`.

The standard example is that we have a well-defined and useful notion of division in the ring Z/nZ for any positive integer n, even in cases where we "divide" by an element that has no multiplicative inverse. Easy example: take n = 8; then you can "divide" 4+nZ by 2+nZ just fine (and in fact turn Z/nZ into a Euclidean ring), even though 2+nZ is not a unit, i.e. admits no multiplicative inverse.
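The n = 8 case is easy to check by brute force (a throwaway Go sketch, nothing canonical): 2 has no inverse mod 8, yet 2x ≡ 4 (mod 8) is still solvable.

```go
package main

import "fmt"

func main() {
	const n = 8
	// 2 has no multiplicative inverse mod 8:
	// 2*x is always even, so it can never equal 1 mod 8.
	for x := 0; x < n; x++ {
		if (2*x)%n == 1 {
			fmt.Println("inverse of 2:", x) // never reached
		}
	}
	// Yet 2*x == 4 (mod 8) has solutions: x = 2 and x = 6 both work.
	for x := 0; x < n; x++ {
		if (2*x)%n == 4 {
			fmt.Println("2 *", x, "== 4 (mod 8)")
		}
	}
}
```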


That's nonsense. a/b is a float in Python 3, and even in other languages a/b gets closer to its actual value as a and b get bigger (the "limit", which is the basis of algebra). So the four operations in programming generally do agree with the foundations of algebra. But a/0=0 is 100% against algebra. And it's very unintuitive. It's basically saying zero is the same as infinity, and therefore all numbers are the same, so why bother having any numbers at all?


Floats don't have multiplicative inverses, and the floating point operations don't give us any of the mathematical structures we expect of numbers. Floating point division already abandons algebra for the sake of usefulness.


Knuth vol 2 has a nice discussion of floating point operations and shows how to reason about them. Wilkinson's classic "Rounding Errors in Algebraic Processes" (1966) also has a good discussion.


> even in other languages a/b gets closer to it's actual value as a and b get bigger (the "limit", which is the basis of Algebra)

This is not generally true. 5/2 = 2, 50/20 = 2, 500/200 = 2, and so on no matter how big the numbers get.


Yes, I meant when the result gets bigger. You get the idea.


What's the output of this Go program, without going to the playground link?

  print(math.MinInt / -1)
https://go.dev/play/p/Vy1kj0dEsqP


If you were to define a/0, the most logical choice would be a new special value, "Infinity". The second best choice would be the maximum supported value of the type of a (int, int64, etc.). Anything else would be stupid.


What if a is negative?


Same. Unless you want to differentiate -0 and +0 (which makes it more complicated), you cannot distinguish infinity from negative infinity.


IEEE floating point representation does both


John Conway can



