The article is (obviously) a joke. But it raises an interesting point about computation (namely, it's a small miracle that practically computable problems are usually those with polynomial-time solutions).
We might talk about polynomial time like that's reasonable, but few useful algorithms are worse than N^2 log N.
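To make the gap concrete, here is a rough illustration (the specific cutoffs are arbitrary, not from the thread): even the "worse" polynomial bounds like N^2 log N stay tractable long after any exponential bound has blown up.

```python
import math

def steps_poly(n):
    # N^2 log N, roughly the worst growth the comment calls "useful"
    return n * n * math.log2(n)

def steps_exp(n):
    # 2^N, for contrast
    return 2 ** n

for n in (10, 20, 50):
    print(n, f"{steps_poly(n):.0f}", steps_exp(n))
```

Already at N = 50 the polynomial count is a few hundred thousand steps while 2^N is past 10^15.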
After all, the main issue is not the form of the algorithm but how long f(N) actually takes, and all too often we see N > 1,000,000,000, which limits what's reasonable.
Yes, but that is for an N x N matrix. It is more useful to express the complexity in the size of the input, in which case matrix multiplication is N^1.5 or better. (As far as I know, the best known algorithm is pretty close to linear, but it's not known whether a linear one exists.)
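The change of variables is easy to check numerically (a small sketch, not from the thread): the textbook triple-loop algorithm does n^3 work on an n x n matrix, but the input has n^2 entries, so measured against input size the exponent is 1.5.

```python
import math

def naive_matmul_ops(n):
    # Textbook triple loop: n^3 multiply-adds on an n x n matrix
    return n ** 3

n = 64
input_size = n * n                # entries per matrix
exponent = math.log(naive_matmul_ops(n), input_size)
print(exponent)                   # exponent is 1.5: n^3 = (n^2)^1.5
```

The same translation turns any O(n^c) matrix algorithm into O(m^(c/2)) in the input size m, which is why sub-quadratic exponents in n come out "close to linear" in m.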
Matrix multiplication is a great example of this: there has been a lot of research into sparse matrix multiplication because N^2.71 is still fairly difficult.
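The point of the sparse approach is that the work scales with the nonzero structure rather than with n^3. A minimal sketch (dict-of-keys representation chosen for brevity; real code would use a CSR format as in scipy.sparse):

```python
from collections import defaultdict

def sparse_matmul(A, B):
    # A, B: dicts mapping (row, col) -> value, with zeros omitted.
    # Work is proportional to the matching nonzero pairs, not n^3.
    B_rows = defaultdict(list)
    for (i, j), v in B.items():
        B_rows[i].append((j, v))
    C = defaultdict(float)
    for (i, k), a in A.items():
        for j, b in B_rows[k]:
            C[(i, j)] += a * b
    return dict(C)

A = {(0, 0): 2.0, (1, 2): 3.0}
B = {(0, 1): 4.0, (2, 0): 5.0}
print(sparse_matmul(A, B))  # {(0, 1): 8.0, (1, 0): 15.0}
```

With only two nonzeros per matrix, this does two multiply-adds instead of the 27 a dense 3 x 3 triple loop would.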
Edit: There are a lot of known algorithms that would be extremely useful if they were faster.
I don't think it's that much of a miracle - Cobham's "The intrinsic computational difficulty of functions" covered it pretty well - or at least explained the intuition behind why it's a pretty sensible idea - and he (along with Jack Edmonds) pretty much defined the complexity class P.