I think you're actually more in agreement with the author than you think. He's not arguing against iterative development at all. Rather, he's arguing that the product should evolve quickly and iteratively, while the design system underlying that product should move slowly and carefully, only systematizing aspects of the design once they have a large number of motivating examples.
I think some of the confusion is a misunderstanding of what design systems (and each of the layers in the author's diagram) are. They're different from design. "Design" is "This button should be 56px wide, have 8px of padding, and be colored #4590ff". "Product" is "We are building an app to let users listen to podcasts, and we should show a list of recommended next podcasts with a button beside each to play, but default to autoplaying the next one if no user interaction happens." "Product research" is "Our users frequently listen to podcasts in the car, so they may need hands-free interaction and will want to listen to the next track without user intervention." "Visual brand" is "Across all of our products, we like to use a simple white background, with dashed-line dividers between list items and #4590ff primary call-to-action buttons, each of which has 8px of padding and 4px rounded corners." "Design systems" is "Across all our products, use a list whenever you have a collection of homogeneous items. If the list represents an audio multimedia track, use this music icon that we have uploaded to the general library. The currently playing audio track should have a moving audio signal display to indicate its selected status."
Notice the increasing level of abstraction across each of these categories. Design is rules for users. Design systems are rules for designers. And that's exactly why they should move slowly: you cannot build good rules until you have seen many instances of problems in the real world and have had a chance to gather data across many instances of the pattern.
In my experience, design should iterate ~daily. Product ~monthly. UXR ~quarterly. Visual brand ~annually. And design systems every 2-3 years. The differing cadence is a rough indication of how many examples of each lower level you need to build a general pattern; e.g., you may have 20 or so individual design decisions to make to ship a feature, you might ship 3 features from each UX insight, you might refresh the whole product's UI roughly each year as the market changes, and you need experience from 2-3 full visual refreshes to understand what sort of patterns should form guidance for the next generation of designers.
There's no other way to do it for this type of brain. I know because I have the same type of brain.
I spend 90% of my time formulating descriptions of the problem and the desired end state: hallucinating futures where the world is in a state that I either want it to be in or that somebody is asking me to build.
Once you know your final end state, you need to evaluate the current state of the things that must change in order to transition to it.

Once you have your S' and S respectively, the rest of the time is spent choosing between hallucinations based on each sub-component's likelihood of being able to move from S to S' within the time window.

So the process is basically trying to derive the transition function, and the sequencing of creating the systems and components required, to transition successfully from state S to state S'.

The more granularly and precisely you can define the systems at S and S', the easier it is to discover the likeliest pathway for the transitional variables, and also to discover gaps: places where systems that would be required for S' don't yet exist.

Said another way: treat everything, both existing and potential futures, as though it is (or sits within) a state machine that can be modeled. Your task is to understand the Markov process that would result in such a state, and then implement the things required to realize it.
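To make that concrete, here's a toy sketch of the mental model (all state and transition names are made up): model the world as a state machine and search for a sequence of transitions from the current state S to the desired state S'.

```python
from collections import deque

# Hypothetical state machine: state -> {transition name -> next state}.
TRANSITIONS = {
    "draft": {"write_spec": "spec_done"},
    "spec_done": {"build": "built", "cut_scope": "draft"},
    "built": {"ship": "shipped"},
}

def plan(start, goal):
    # Breadth-first search over the state machine: returns the shortest
    # sequence of transition names from `start` to `goal`, or None.
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for action, nxt in TRANSITIONS.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [action]))
    return None
```

A real version of this would weight transitions by likelihood and time cost, as described above, rather than treating them as equally easy.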
People really need to come up with better names. "Linear Programming" or "Integer Linear Programming" mean absolutely nothing.
Also, anything dealing with finding the minimum distance can be short-circuited by keeping the shortest distance found so far and not taking paths that exceed it. This is how approximate nearest neighbor works, and it can still speed up the exact solution. Finding full paths with short average distances first can also get you to shorter distances sooner.
You can also cluster points, knowing you probably don't want to jump between clusters multiple times.
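The pruning idea above can be sketched as a minimal branch-and-bound for TSP (the distance matrix here is made up): keep the best complete tour found so far and abandon any partial path that already costs more.

```python
# Hypothetical asymmetric distance matrix over 4 cities.
DIST = [
    [0, 2, 9, 10],
    [1, 0, 6, 4],
    [15, 7, 0, 8],
    [6, 3, 12, 0],
]

def tsp_branch_and_bound(dist):
    n = len(dist)
    best = [float("inf")]

    def search(path, cost):
        if cost >= best[0]:
            return  # prune: this partial path cannot beat the best tour
        if len(path) == n:
            # Close the tour back to the start; record it if it's better.
            best[0] = min(best[0], cost + dist[path[-1]][path[0]])
            return
        for city in range(n):
            if city not in path:
                search(path + [city], cost + dist[path[-1]][city])

    search([0], 0)
    return best[0]
```

The pruning never discards the optimum, because a partial path's cost only grows as cities are appended.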
Fibonacci recursion is a bad example for DP because it is obvious how to do that. You need to teach generative recursion, as pointed out by Shriram Krishnamurthi [1]. Once you've got the hang of generative recursion, DP is a space optimization on top of that.
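A sketch of the distinction, using coin change as the generative recursion (the coin values are made up): the subproblems here are generated by the choices the recursion makes, not read directly off the input, and adding memoization is what turns it into DP.

```python
from functools import lru_cache

# Hypothetical coin system where greedy fails (e.g. 8 = 4 + 4, not 5 + 1 + 1 + 1).
COINS = (1, 4, 5)

@lru_cache(maxsize=None)
def min_coins(amount):
    # Minimum number of coins needed to make `amount` exactly.
    if amount == 0:
        return 0
    candidates = [min_coins(amount - c) for c in COINS if c <= amount]
    return 1 + min(candidates)
```

Without the `lru_cache` line it's still a correct generative recursion, just exponentially slow; the cache is the space-for-time trade DP makes.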
Whenever I set up a new computer for older family members, despite it being Windows 11, I always install Open Shell [1] and RetroBar [2]. Between the two, I've made the operating system look very close to Windows XP visually, and they always appreciate it.
Sphinx [1] gets my vote. It's the docs system that powers most sites in the Python ecosystem so it probably looks familiar to you.
I call it a docs system rather than static site generator because the web is just one of many output targets it supports.
To tap into its full power you need to author in a markup that predates Markdown called reStructuredText (reST). It's very similar to Markdown (MD) so it's never bothered me, but I know some people get very annoyed at the "uncanny valley" between reST and MD. reST has some very powerful yet simple features; it perplexes me that these aren't adopted in other docs systems. For example, to cross-link you just do :ref:`target` where `target` is an ID for a section. At "compile-time" the ref is replaced with the section title text. If you remove that ID then the build fails. Always accurate internal links, in other words.
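The cross-linking described above looks like this in practice (the section name here is made up):

```rst
.. _playback-controls:

Playback controls
=================

Some section content.

Elsewhere in the docs, write :ref:`playback-controls` and Sphinx replaces
it with a link whose text is the section title, "Playback controls".
Rename or remove the label and the build fails instead of silently
producing a dead link.
```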
The extension system really works and there is quite a large ecosystem of extensions on PyPI for common tasks, such as generating a sitemap.
The documentation for Sphinx is ironically not great; not terrible but not great either. I eventually accomplish whatever I need to do but the sub-optimal docs make the research take a bit longer than it probably has to.
I have been a technical writer for 11 years and have used many SSGs over the years. There's no perfect SSG but Sphinx strikes the best balance between the common tradeoffs.
Alternatively, use instantdomainsearch.com (disclosure, I'm the CTO), which packages most of these things up in a fast web interface and also enables searching in a bunch of other TLDs.
If this wants to sell itself as a shell scripting language, it should very quickly advertise what it is that makes it superior to, say, bash for typical shell scripting tasks.
Shell scripts with bash are painful to the point that if I find myself writing more than around 10 lines of shell, I tend to stop and switch to Perl instead. But Perl hasn't been too popular lately and isn't ideal either, so I'm very much up for something better.
Here are some features I want from a shell scripting language:
* 100% reliable argument passing. That is, when I run `system("git", "clone", $url);` in Perl, I know with exact precision what arguments Git is going to get, and that no matter what weirdness $url contains, it'll be passed down as a single argument. Heck, make that mandatory.
* 100% reliable file iteration. I want to do a "for each file in this directory" in a manner that doesn't ever run into trouble with spaces, newlines or unusual characters.
* No length limits. If I'm processing 10K files, I don't want to run into the problem that the command line is too long.
* Excellent path parsing. Such as filename, basename, canonicalization, finding the file extension and "find the relative path between A and B".
* Good error handling and reporting
* Easy capture of stdout and stderr, either together or individually, as needed.
* Excellent process management. We're in 2022, FFS. We have 128 core CPUs. A modern shell scripting language should make it trivial to do something like: take these 50000 files, and feed them all through imagemagick, using every core available, while being able to report progress, record each failure, and abort the entire thing if needed.
* Excellent error reporting. I don't want things failing with "Command failed, aborted". I want things to fail with "Command 'git checkout https://....' exited with return code 3, and here's for good measure the stdout and stderr even if I redirected them somewhere".
* Give me helpers for common situations. E.g., "Recurse through this directory, while ignoring .git and vim backup files", or "Read this file into an array, splitting by newline", in a single line of code. It's tiresome to implement that kind of thing in every script I write. At the very least it should be simple and comfortable.
That's the kind of thing I care about for shell scripting. A better syntax is nice, but actually getting stuff done without having to work around gotchas and issues is what I'm looking for.
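For what it's worth, Python's standard library already covers several items on this wishlist. A rough sketch, assuming ImageMagick's `magick` CLI is the tool being driven (the resize flags and `.thumb.png` naming are illustrative only):

```python
import subprocess
import sys
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def wanted(path: Path) -> bool:
    # Helper: skip anything under .git and vim backup files while recursing.
    return ".git" not in path.parts and not path.name.endswith("~")

def run_checked(argv: list[str]) -> subprocess.CompletedProcess:
    # Arguments are passed as an exact list: no shell, no word splitting,
    # so even a weird filename or URL arrives as a single argument.
    result = subprocess.run(argv, capture_output=True, text=True)
    if result.returncode != 0:
        # Fail loudly: the exact command, its exit code, and its stderr.
        raise RuntimeError(
            f"{argv!r} exited with code {result.returncode}:\n{result.stderr}"
        )
    return result

def convert_one(src: Path) -> None:
    run_checked(["magick", str(src), "-resize", "256x256",
                 str(src.with_suffix(".thumb.png"))])

def convert_all(root: str) -> None:
    # Safe file iteration (spaces, newlines, unicode in names are fine),
    # plus parallelism across every available core, with progress output.
    files = [p for p in Path(root).rglob("*.png") if wanted(p)]
    with ProcessPoolExecutor() as pool:
        for i, _ in enumerate(pool.map(convert_one, files), 1):
            print(f"[{i}/{len(files)}] done", file=sys.stderr)
```

It's more ceremony than a one-liner, which is exactly the gap a better shell scripting language could close.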
It's actually quite interesting to compare best practices of people who have > ~5 years of experience vs. those who have less than that.
The latter say "Use package managers. Use SemVer. Automatically upgrade to the latest version when it comes out. Check in your package.json, but don't check in your node_modules."
The former say "Check any dependencies on third-party software into source control, and always build from source. If you use a different build system from the original package, write the appropriate files & tools to build it with your own build system. Use new versions only after they've been proven safe with your software. Budget significant time and manpower each time you need to upgrade the version of your dependencies."
The other interesting thing is that this distinction has been in force for at least 15 years. It's not something related to just the Node.js ecosystem, nor to tooling that was recently developed. I can recall being that junior developer in 2000-2002 saying "Let's just write a few shell scripts to run RPM and PEAR install, and everyone can run them to bring their local installations up to date", and being overruled with "No, we're checking this code into the repository with all the appropriate license files and build rules." I think it's really because version breakage is something that will hit you and cause a monetarily-significant loss about once every 5 years, and so until you've been burned by it or worked with someone who has, you'll always gravitate to the more convenient package-manager approach.