I think you either haven't thought about this or you did your math wrong.
You need 2^e + m + 1 bits, where e is the exponent width and m the fraction width. That can be more bits than fit in the cheap machine integer type you have lying around, but it's not that many in real terms.
Let's try a tiny one first: the "half-precision" or f16 type, with 5 bits of exponent, 10 bits of fraction, and 1 sign bit. We need 2^5 + 10 + 1 = 43 bits. That actually fits in the 64-bit signed integer type on a modern CPU.
Now let's try f64, the big daddy: 11 exponent bits, 52 fraction bits, 1 sign bit, so 2^11 + 52 + 1 = 2048 + 52 + 1 = 2101 bits total. As I said, it doesn't fit in our machine integer types, but it's much smaller than a kilobyte of RAM.
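A minimal sketch of the idea in Python (the function name `exact_sum` is mine, not from the thread): since every finite f64 is an integer multiple of 2^-1074, we can scale each input up to an integer and accumulate exactly. Python's arbitrary-precision ints stand in for the ~2101-bit fixed-point accumulator described above; a real implementation would use a fixed-width limb array instead.

```python
def exact_sum(xs):
    # Every finite IEEE-754 double equals num/den with den a power of
    # two dividing 2**1074, so x * 2**1074 is always an exact integer.
    acc = 0  # plays the role of the ~2101-bit superaccumulator
    for x in xs:
        num, den = x.as_integer_ratio()
        acc += num * (2**1074 // den)  # exact: no rounding ever occurs
    # One correctly rounded division at the very end.
    return acc / 2**1074
```

Because the accumulator is exact, the result is the correctly rounded sum regardless of input order, e.g. `exact_sum([1e308, 1.0, -1e308])` gives exactly `1.0` where a naive left-to-right sum loses the `1.0` entirely.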
Edited: I can't count, though it doesn't make a huge difference.
You also need some extra bits at the top so that it doesn't overflow (e.g., on an adversarial input filled with copies of the max finite value, followed by just as many copies of its negation, so that it sums to 0). The exact number depends on the maximum input length, but for arrays stored in addressable memory it adds up to no more than 64 or so.
Thanks, you're right about the carry headroom. I don't think you can actually reach 64 extra bits, but sure, let's say 64 extra bits if you want a general-purpose in-memory algorithm. It's not cheap, but we shouldn't be surprised it can be done.
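Putting the two posts together, the total accumulator width can be sketched as a small formula (the function name and the `max_inputs` parameter are mine, for illustration): the headroom needed to add n values without overflow is ceil(log2(n)) bits, since each addition can grow the magnitude by at most a factor of two.

```python
import math

def accumulator_bits(exp_bits, frac_bits, max_inputs):
    # 2**exp_bits positions for the radix point, plus the fraction
    # bits, plus a sign bit, plus ceil(log2(n)) headroom bits so that
    # n copies of the largest finite value cannot overflow.
    headroom = max(1, math.ceil(math.log2(max_inputs)))
    return 2**exp_bits + frac_bits + 1 + headroom
```

With `max_inputs = 2**64` (every byte of a 64-bit address space) this gives 2048 + 52 + 1 + 64 = 2165 bits for f64, matching the "no more than 64 or so" extra bits above.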
EDIT: Oh, and Erdős was the great collaborator. There is an Erdős number (Bacon and Ozzy too) which measures how close you are to him: if you co-authored a paper with Erdős you have an Erdős number of one; if you co-author a paper with someone with an Erdős number of one, then you have an Erdős number of two, and so on.
I think the Bacon (Kevin Bacon) number was the original, and there is also a Black Sabbath number, which relates to Ozzy Osbourne (may he rest in peace).
I also gather that a very few people have managed finite values of all three numbers. Feynman might be one of them (it's too late to check).