> Due to the age of the routines, I personally think it’s prudent to minimize its usage.
Why? LAPACK is probably the most well-tested bit of numerical code out there, and must be among the most well-tested software in general. What is the technical justification, other than some NIH?
It's netlib's FORTRAN 77 LAPACK distribution (over 15 years old), mechanically translated into Common Lisp. LAPACK isn't just one thing; there are multiple distributions of it, and the reference distribution is versioned, frequently updated, and no longer written in FORTRAN 77. So it's a bit misleading to call this particular copy "well tested" on the strength of LAPACK's reputation; it's roughly like saying Python 2.5 is super well tested because half the programming world uses Python 3.x, Cython, Unladen Swallow, and/or Numba now.
The above quote suggests minimizing dependence on a mechanical translation of 15-year-old code that's used solely as a fallback to the native (albeit foreign, in the FFI sense) routines. There is already a way to load a full, native LAPACK library in MAGICL (including netlib's reference implementation, but also e.g. Apple's Accelerate framework), and that's always available precisely because of the quality of modern LAPACK distributions.
To be clear, the article discusses how ~10 lines of code (+ 3000 words of mathematical justification) can allow an existing LAPACK function that works on one data type to become useful for another data type. That hardly seems like NIH to me.
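For readers unfamiliar with this kind of trick, here is a hedged sketch of the general idea, in Python/NumPy rather than the article's Common Lisp, and not necessarily the article's exact construction: a routine that only handles real data (LAPACK's DGESV, via numpy.linalg.solve on real input) can be reused on complex data through the standard block embedding M = A + iB ↦ [[A, -B], [B, A]].

```python
# Sketch only: reusing a real-only LAPACK routine on complex data via the
# block embedding  M = A + iB  ->  [[A, -B],
#                                   [B,  A]].
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
c = rng.standard_normal(n)
d = rng.standard_normal(n)

M = A + 1j * B          # complex system  M z = c + i d
rhs = c + 1j * d

# Real 2n x 2n embedding and stacked right-hand side.
M_real = np.block([[A, -B],
                   [B,  A]])
rhs_real = np.concatenate([c, d])

# Solve with a real-only routine (DGESV under the hood), then
# reassemble the complex answer from the stacked real/imag parts.
xy = np.linalg.solve(M_real, rhs_real)
z = xy[:n] + 1j * xy[n:]

# Agrees with solving the complex system directly (ZGESV).
assert np.allclose(z, np.linalg.solve(M, rhs))
print(z)
```

The "mathematical justification" in such an argument is showing that the embedding is a ring homomorphism, so the real routine's arithmetic really does track the complex arithmetic; the code itself stays tiny.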
A more nuanced discussion is elsewhere in this thread.