…and is now being taught in combined “Formal Real Analysis”[1] courses to undergrads, and the Lean prover community has a joint project to formalize the proof of Fermat’s Last Theorem, which is a lot of work but is progressing. It’s odd to say there is no progress. When a Fields Medal winner is publishing Lean 4 formal proofs on GitHub[2] to go with one of his books, that looks like a lot of progress to me.
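For anyone who hasn’t seen Lean, a formal proof is just code that the compiler checks; a toy Lean 4 example (my own illustration, nothing from the FLT project):

    -- A trivially small machine-checked proof: if this compiles,
    -- the Lean kernel has verified that the statement holds.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

The FLT formalization is the same activity, just at vastly larger scale.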
Wouldn’t it make more sense to write the same functionality using a more performant, no-gc language? Aren’t competitors praised for their CLIs being faster for that reason?
With AI tooling, we are in the era where rapid iteration on product matters more than optimal runtime performance. Given that, implementing your AI tooling in a language that maximizes engineer productivity makes sense, and I believe GC does that.
JS/TS has a fundamental advantage: there is more open-source JS/TS than any other language, so LLMs trained on JS/TS have more to work with. Combine that with having the largest developer community (so more people are using LLMs to write JS/TS than any other language), and with people using it more because it works better, and the advantage compounds as you retrain on usage data.
One would expect that "AI tooling" is what enables the rapid iteration, and that you can use it with performant languages just as well. We already had "rapid iteration" with GC languages.
If "AI tooling" makes developers more productive regardless of language, then it's still more productive to use a more productive language. If JS is more productive than C++, then "N% more productive JS" is still more productive than "N% more productive C++", for all positive N.
One of the authors, Adrian, is a very interesting person. He got his PhD at 21 and started his CS studies at an age when his peers were starting high school. Knowing some of his previous achievements, I’d say his work deserves at least some curiosity.
Can this be used with a public cloud provider to speed up VM provisioning in CI/CD pipelines? I’m looking for ways to speed up app provisioning for e2e tests.
I think the post, while extensive, missed one important issue.
The fact that when we read others’ code, we don’t remember it or integrate it into our thinking as well as when we’re the authors. So mentoring “AI Juniors” provides less growth than doing the job ourselves, especially if it is mostly corrective actions.
Was hoping for a more data-driven diagnosis. The reality is quite different: smaller orgs can move faster than large ones. That is certainly not possible in areas that require huge CAPEX or OPEX, like AI, but in many other areas it happens often.
What do you guys use to profile whole CI/CD pipelines that involve building software, building (many) containers, running tests, running e2e tests, etc.? CI/CD can be a huge drag on the lead time of delivery teams; containerisation helped with one thing but prolonged another. Is there a way out of this performance drag?
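The best I’ve managed so far is wrapping each stage and logging wall-clock durations myself; a minimal TypeScript (Node) sketch of what I mean, where the stage names and commands are placeholders for your own pipeline:

    // Run each pipeline stage, record wall-clock seconds, and print a
    // summary table so the slowest stages stand out.
    import { execSync } from "node:child_process";

    const stages: Array<[name: string, cmd: string]> = [
      ["build", "make build"],                 // placeholder commands:
      ["containers", "docker compose build"],  // substitute your own
      ["tests", "npm test"],
      ["e2e-tests", "npm run e2e"],
    ];

    const timings: Record<string, number> = {};
    for (const [name, cmd] of stages) {
      const start = Date.now();
      execSync(cmd, { stdio: "inherit" }); // throws (and aborts) on failure
      timings[name] = (Date.now() - start) / 1000;
    }

    console.table(timings);

Beyond that, CI systems like GitHub Actions and GitLab CI already expose per-job durations, which is often enough to spot the dominant stage, but I haven’t found anything that profiles the whole pipeline end to end.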