That is why modern spellmakers have the tlateochihualapaztli, or the Metaspell, in which other spells can be cast and isolated from the outside world, and their effects recorded and carefully combed through for problems; testing of more advanced spells may also need the iztlacatlamatini, or the Fakespell, which can emulate the weave of other spells without actually casting them, letting other spells talk to it without knowledge of its illegitimacy.
The type problem is starting to be solved by flakes; the equivalent of Guix's `operating-system` type is the type `nixpkgs.lib.nixosSystem`.
To analyze a NixOS system configuration, I look for all modifications to the `services` attr (representing all the services of the system, e.g. `services.mpd.dir = "~/Music"`). If I want to know even more, I can look for all the `.enable` keys to see whether any hardware/program/etc. options have been enabled, and since the sources are easy to read, I can open the corresponding module file in Nixpkgs if I want to know what a particular option does. The default NixOS machine does literally nothing, so I don't have to worry about options being enabled behind my back.
Guix does look really cool, and I would try it if it had more support for nonfree software (I depend on some nonfree software in my day-to-day.)
I mean, as far as correctness is concerned, of course it works, but performance-wise it was a dismal failure in a raytracer I was toying around with, compared to the (traditionally shunned) analytic approach.
The issue, as far as I understand it, is that LuaJIT only handles traces of two shapes: linear code; or linear loop prologue followed by linear loop body. The conditions of any branches (that could not be determined statically) are evaluated and checked against their tracing-time values, but a failed check (“guard”) throws you out of the compiled trace and back into the interpreter. This includes normal termination of a loop and misspeculated types as well as explicit conditionals.
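For illustration, here's a toy example (mine, not from the LuaJIT docs) of the kind of code that maps onto those two trace shapes:

    -- Shape 1: straight-line code, which becomes a single linear trace.
    local function lerp(a, b, t) return a + (b - a) * t end

    -- Shape 2: a hot loop, which becomes loop prologue + loop body.
    -- As long as the types seen at runtime match what was recorded
    -- during tracing, every guard holds and execution stays compiled.
    local sum = 0
    for i = 1, 1e6 do
      sum = sum + lerp(0, 1, i * 1e-6)
    end
    print(sum)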
Of course, handling only these two patterns of control flow is too limiting, so if a guard fails too often the trace compiler can start a new trace from that exit and tie its other end back to the original trace if control returns there. Aside from handling things like polymorphic functions that get called with more than one combination of types, and predictable branches in loops that still get taken occasionally (but too rarely for unrolling), this also turns out to handle nested loops (as explained somewhere in the docs): the inner loop gets traced first as a loop; then the outer loop body gets traced as a linear block that ties the end of the inner one back to its beginning, going around from the middle of the outer loop to its end and then from the beginning of its next iteration back to the middle.
The catch is that register allocation or optimizations can’t see across trace boundaries, so those side traces get second-class treatment (in particular, only the inner loop gets subjected to loop-invariant code hoisting and strength reduction, as the outer ones are not viewed as loops). Additionally, the part that decides what to trace and how (is it a loop, is it hot, should it be unrolled, etc.) is an opaque pile of heuristics which is generally well tuned, but gets more confused and slower to converge as the nesting level increases.
Turns out having your innermost loop be rejection sampling, which usually terminates quickly (under the unrolling threshold) but with a varying number of iterations, in a raytracer that's already bound to have two or three levels of nested loops, plays merry hell with all of this. Occasionally LuaJIT couldn't eliminate the GC pressure in the now-“outer” loop and ran 5x slower; occasionally it just fell back to the interpreter and ran 20–100x slower. But generally I think that counts as a failure when the motivation is to avoid libm because it's slow.
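To make the loop shape concrete, here is a minimal sketch of the control flow involved (names and bounds made up; this is not the actual raytracer):

    local width, height, samples = 64, 64, 4

    -- Rejection sampling: usually exits within a couple of iterations,
    -- but the trip count varies per call, which is exactly what the
    -- tracer has trouble speculating on.
    local function sample_unit_disk()
      while true do
        local x = math.random() * 2 - 1
        local y = math.random() * 2 - 1
        if x * x + y * y <= 1 then return x, y end
      end
    end

    for py = 1, height do        -- the two or three nested loops any
      for px = 1, width do       -- raytracer already has
        for s = 1, samples do
          local dx, dy = sample_unit_disk()
          -- ... trace a ray through (px + dx, py + dy) ...
        end
      end
    end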
(For similar reasons, the small-vector library used there is almost entirely sad repetitive code along the lines of
    c.x = f(a.x, b.x)
    if n == 1 then return c end  -- n: component count of the operands
    c.y = f(a.y, b.y)
    if n == 2 then return c end
    ...
because that does not count against the thresholds when you have dozens of these operations per actual iteration.)
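Spelled out in full, one such operation might look like this (a hypothetical `vadd`; the real library generalizes over the operator):

    -- Componentwise addition for 1-4 component vectors; assumes each
    -- vector stores its component count in a field n.
    local function vadd(a, b)
      local c = { n = a.n }
      c.x = a.x + b.x
      if c.n == 1 then return c end
      c.y = a.y + b.y
      if c.n == 2 then return c end
      c.z = a.z + b.z
      if c.n == 3 then return c end
      c.w = a.w + b.w
      return c
    end

Each call is pure straight-line code with early exits, so the tracer sees short linear traces rather than loops that eat into the unrolling budget.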
The raytracer itself is not in a state I’d want to inflict on anybody. The vector library is just HLSL-style small vectors with subpar error reporting, there’s like half a dozen of those floating around, but if you want it, sure, go wild: http://ix.io/3ZQk (CC0). I’d only ask you to hold off on posting it to LuaRocks under that name because I want to do that myself eventually.
It can fall under any of these with minimal changes, so your question is not very meaningful. I'd say it is better suited for the application web, though, because you expect interactivity. The point here is that there are not many documents that actually need interactivity.
But that article is exactly what we want on the internet. It's the kind of interactive teaching demonstration that forward-looking educators have been thinking about for decades. It's not bloated or overdone; it doesn't take significantly more processing power; and it works far better than a traditional article ever could. This is what we desire the internet to be like.
It's because of articles like this one, or 3Blue1Brown videos, that I feel the idea of "the document web" is really a step backwards: the internet is not just a glorified transmission method for paper anymore. We have computers, which are able to do so much more than paper ever could. To not exploit them for their capabilities, simply out of fear of those capabilities being abused, is far worse. How much worse do you think https://thebookofshaders.com/ would be to learn from if it couldn't include live editable demonstrations?

To divide the web into "interactive" and "non-interactive" content is to remove the ability of creators such as the author of the Mechanical Watch article to add progressive interactivity to their documents, to remove the ability of learning resources to fully utilize the power of computers, and to remove the ability to add small fun things to sites, such as a Konami Code easter egg or https://bruno-simon.com/'s interactive website. And before you reply with something like "these would all go on the interactive web": my point is that dividing the web like this would not only be pointless but would also be ultimately harmful to the creation of things like these.
Don't get me wrong, I like interactive documents a lot, but they also take a lot of time and effort to produce and thus will remain a minority. Text greatly outweighs every other medium primarily because it is so easy to produce and distribute.
> they also take a lot of time and effort to produce
This is a case of bad tooling, not an intrinsic fact.
https://www.joshwcomeau.com/css/understanding-layout-algorit... is an article that could easily have been written without any interactive components. All the author did was replace traditional code blocks with embedded web playgrounds (e.g. by s/Code/Playground/g in an MDX file). Ta-da, the demonstrations are immediately interactive.
> and thus will remain a minority.
Not if we put in effort to encourage them. On the other hand, if we put in effort to _discourage_ them (say, by splitting the web into documents without interaction and apps with interaction), they will disappear completely. It is not worth sacrificing these creations simply to shut out a few bad actors. (Mind you, there are plenty of ways to act badly in non-interactive media too: image watermarks, advertisements embedded in videos, advertisements embedded in ASCII text...)
> All the author did was replace traditional code blocks with embedded web playgrounds (e.g. by s/Code/Playground/g in an MDX file).
Well, that's not surprising given that the article was about web technology (which powers the application web, of course!). Had it been about something else, I might have been convinced.
There is indeed a negative feedback loop going on here. But I think the cost of achieving interactivity has been decreasing over time (compare with, say, 1990s interactive CDs), and the gap is still very significant even under very favorable conditions (nowadays you can practically have interactivity anywhere, anytime!). I'm happy to be proven wrong, but for now I'm not optimistic about that.
Using print debugging/strace/valgrind/etc, you're looking at the evolution over time of particular components of program state. Using a debugger, you're looking at all components of program state at particular time slices. They enable different viewing angles. (https://mobile.twitter.com/ManishEarth/status/13870782220560...)
I personally find that both are good tools, but to know which you should be using, you need to think about which viewing angle you want: which components of state do you want to inspect, and which moments in time do you want to track? If the answer is "these specific state components / unknown times" (e.g. "when is my program accessing invalid memory?"), that's where print debugging comes in, with specializations depending on which particular state components you're looking at (strace or eBPF for I/O, valgrind for invalid memory accesses, etc.). If it's "unknown components / these specific times" (e.g. "where is my program setting HTTP response headers when a request occurs?"), then a debugger is a good idea.
However, what you said does have truth to it. Good programming practices generally center around the management of program state* (using types, specifications, whatever). The less well your program is designed in those terms, the more likely it is that you have no idea which state components you want to inspect, meaning you reach for a debugger first. And if it is designed well, you won't need a debugger nearly as often. But sometimes we have no choice, whether that's due to an inherently hard problem space or lots of bad code that we didn't write, so a debugger is necessary.
* I don't actually know if there are programming practices that center around managing moments in time. I can't even think how that would work, but I would be very interested to know if there are any :)
> * I don't actually know if there are programming practices that center around managing moments in time. I can't even think how that would work, but I would be very interested to know if there are any :)
There was a live, 3D, peer-to-peer interactive environment back in the early 2000s called Croquet that made use of the concept of "pseudo-time." It was built on top of Squeak Smalltalk (there are some descendants today, including croquet.io). The part that handled time management in a way you might find interesting was called TeaTime [1], built on the ideas of David Reed's thesis about pseudo-time [2]. If you are not familiar with these, you might want to check them out!
> But sometimes we have no choice, whether that's due to an inherently hard problem space or lots of bad code that we didn't write, so a debugger is necessary.
I don't disagree! The state-versus-time-slices framing of the mental model seems like it might be missing something small, but it sounds mostly on the mark? E.g. in dynamic languages my REPL is more of a debugger than any actual debugger could be.
> * I don't actually know if there are programming practices that center around managing moments in time. I can't even think how that would work, but I would be very interested to know if there are any :)
I think this depends on how you abstract control flow. The actor model comes to mind, as Smalltalk really doesn't have this notion of "time" in the same way: you debug live in such a system, and "time" is more a matter of which abstraction sent or received something (and, as such, may have failed). Similar arguments could be made for conditions/restarts in Lisp, perhaps even more strongly, since you can manage errors through conditions and then restart code (going back in time, so to speak) in a sort of live-debugging way. Not sure it rises quite to the level you were asking about, but that's the first thing that comes to mind.
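Lua (to stay in this thread's language) has nothing as powerful as Common Lisp's restarts, which can resume at the point where the error was signaled, but a rough sketch of the "handler repairs, computation reruns" flavor might look like this (hypothetical names, my own example):

    local function parse_number(s)
      local n = tonumber(s)
      if not n then error("not a number: " .. s, 0) end
      return n
    end

    -- A crude stand-in for a restart: on failure, let a handler repair
    -- the input and run the computation again. (Loops forever if the
    -- handler never actually fixes anything.)
    local function with_retry(f, arg, fixup)
      while true do
        local ok, result = pcall(f, arg)
        if ok then return result end
        arg = fixup(arg, result)
      end
    end

    print(with_retry(parse_number, "4x2",
                     function (s) return (s:gsub("%D", "")) end))
    --> 42

A real condition system would let the handler resume inside parse_number instead of rerunning it from the top, which is what makes the "going back in time" framing apt.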
Another thing is the movement towards async/await and cooperative coordination in programs. Even if you ignore concurrency, designing code so that coroutines cooperatively yield to one another (e.g. generators in Python) helps sort this out too. Basically, you make "time" in a program a function of control flow, by forcing an abstraction where you explicitly yield control. This relates to both the actor model and conditions/restarts in Lisp, so I feel like I'm pulling on the same thread there.
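For instance, a minimal Lua coroutine sketch (my example, made-up names) where each resume is one explicit "moment" in program time:

    local function ticker(label, n)
      return coroutine.wrap(function ()
        for i = 1, n do
          coroutine.yield(label .. " step " .. i)
        end
      end)
    end

    -- Interleave two cooperating coroutines: control (and therefore
    -- "time") only advances when we explicitly ask for the next step.
    local a, b = ticker("a", 2), ticker("b", 2)
    print(a())  --> a step 1
    print(b())  --> b step 1
    print(a())  --> a step 2
    print(b())  --> b step 2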
For Linux, look at the eBPF tools - they're really useful[0]. I've used them to troubleshoot annoying problems before, and they're far more practical to use than the usual `strace -f ./program` dance.
Troubleshooting system issues can also be done with SystemTap[1], which looks cool, but I haven't personally tried it yet.