I think OODA is fundamentally different to Auftragstaktik.
Auftragstaktik describes a clear purpose/intent. For example: capture the bridge (but we don't care how you do it, since we can't foresee the specific circumstances).
OODA describes a process of decision making.
So, Auftragstaktik answers who decides what and why.
OODA answers how decisions are made and updated over time.
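To make the distinction concrete, here's a loose sketch in C (all names are illustrative, nothing doctrinal): the intent is fixed and handed down; OODA is the loop underneath that keeps re-deciding the "how" as observations change.

  /* Loose illustrative sketch: Auftragstaktik fixes the "what and why";
   * OODA is the loop that keeps updating the "how". */
  #include <stdbool.h>
  #include <stdio.h>

  typedef struct { int threat; } State; /* whatever the sensors report */
  typedef int Plan;

  static State observe(void)   { State s = {0}; /* read sensors */ return s; }
  static State orient(State s) { /* fit observations to context */ return s; }
  static Plan  decide(State s) { return s.threat > 0 ? 1 : 2; }
  static bool  act(Plan p)     { return p == 2; /* true once bridge taken */ }

  static void capture_the_bridge(void) { /* the intent: who/what/why */
      bool captured = false;
      while (!captured) {                /* OODA: how, updated over time */
          captured = act(decide(orient(observe())));
      }
      puts("bridge captured");
  }

  int main(void) { capture_the_bridge(); return 0; }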
SEEKING WORK - Data scientist, consulting & fractional leadership, US/remote worldwide, email in profile.
I'm a data scientist with 20+ years experience who enjoys gnarly, avant-garde problems. I saved a German automaker from lemon law recalls. I've worked with a major cloud vendor to predict when servers would fail, allowing them to load shed in time.
Some of the things I've done:
- Oil reservoir & well engineering - forecasting production.
- Automotive part failure prediction (lemon law recalls).
- Server fleet failure prediction allowing load shedding.
- Shipping piracy risk prediction - routing ships away from danger.
- Realtime routing (CVRP-PD-TW, shifts) for on-demand delivery.
- Legal entity and contract term extraction from documents.
- Wound identification & tissue classification.
- The usual LLM and agent stuff. (I'd love to work on effective executive functioning)
- Your nasty problem here.
I use the normal stacks you'd expect: Python, PyTorch, Spark/Ray, Jupyter/Marimo, AWS, Postgres, Mathematica, and whatever else is needed to get the job done. Ultimately it's about the problem, not the tools.
I have years of experience helping companies plan, prototype, and productionize sane data science solutions. Get in touch if you have a problem, my email is in my profile.
NB: Don't contact me if you're doing ads, gambling, or enshittification. I'd like to sleep at night.
Linux had been doing SMP for about 5 years by that point.
But you're right, OS resource limitations (file handles, PIDs, etc.) would be the real pain for you. One problem after another.
Now, the real question is: do you want to spend your engineering time on that? A small cluster running Erlang is probably better than a tiny number of finely tuned race-car boxen.
My recollection is fuzzy, but I remember having to recompile 2.4-ish kernels to enable SMP back in the day, which took hours... And I think it was buggy too.
Totally agree on many smaller boxes vs. one bigger box, especially for the proxying use case.
Since I might have people's attention for a moment (I can no longer edit my original post), might I suggest the King William's College quiz this Christmas?
>The paper carries a telling Latin motto: "Scire ubi aliquid invenire possis ea demum maxima pars eruditionis est" – "To know where you can find something is, after all, the greatest part of learning."
In the modern era it's acknowledged that people will use online search engines to find the answers, but hopefully we all learn something along the way.
This ASCII art reminds me of the good old days of the internet, when we would celebrate holidays and special days by sending these little marvels through our mail client, Pine, on IBM AIX Unix terminals.
Woah. I didn't know about that. I found it on asciiart.eu. This stuff makes me wish HN supported ANSI. We could have so much fun. (Also, PRE or preformatted text would be useful.)
I wouldn't be surprised if someone comes up with obfuscated C code that looks like a christmas tree and prints out wishes by the end of the day/season.
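Something like this, say (a quick sketch, only mildly obfuscated; it compiles as plain C and prints a greeting):

  #include <stdio.h>
            int
           main
          (void)
          {puts(
         "Merry "
        "Christm"
       "as and a"
      " happy ne"
     "w year to "
    "all of HN!"
    );return 0;}
        /*|*/
        /*|*/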
ChatGPT did much better, but I cannot paste it into this text box no matter how many times I try with different formatting to get the whitespace preserved. ChatGPT also could not figure out how to format it for pasting here.
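For what it's worth, HN does preserve whitespace: any text after a blank line that is indented by two or more spaces is reproduced verbatim. So a trivial filter like this sketch (pipe the art through it, then paste the output) usually does the trick:

  /* Prefix every line on stdin with two spaces; HN renders text
   * indented by 2+ spaces verbatim, preserving the whitespace. */
  #include <stdio.h>

  int main(void) {
      int c, at_line_start = 1;
      while ((c = getchar()) != EOF) {
          if (at_line_start && c != '\n')
              fputs("  ", stdout);
          putchar(c);
          at_line_start = (c == '\n');
      }
      return 0;
  }

E.g. cc indent.c -o indent && ./indent < art.txt (file names illustrative), then paste the result.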
It's tragic that a language as flexible and unopinionated as Perl is admittedly terrible for novice programmers, because Learning Perl is easily one of the greatest introductory programming books.
The innovation being shut down wasn't innovation toward making robot vacuum cleaners better. It was innovation directed toward military applications, like building robotic hands.
Exactly this. If they had been innovating in vacuum technology, then maybe this article would have a point. But they were building stuff for the military and for space, and there's a good reason investors wanted them to get out of that: it was sucking up money and not resulting in better vacuum cleaners.
Well, it's 2025, and we've just spent the better part of the year discussing the bitter lesson. It seems clear that solving the more general problem is key to innovation.
Hardware is not like software. A general-purpose humanoid cleaning robot will be superior to a robot vacuum, but it will always cost an order of magnitude more. This is different from software, where costs decrease exponentially and you can do the compute in the cloud.
I'm not sure advancements in AI and advancements in vacuum cleaners are at similar stages in terms of R&D. I'd be very wary of trying to apply lessons from one to the other.
What he's getting at is single-level storage. RAM isn't used for loading data and working on it; RAM is cache. The size of your disk defines the "size" of your system.
This existed in Lisp and Smalltalk systems. Since there's no disk/running-program split, you don't have to serialize your data. You just pass around Lisp sexprs or Smalltalk code/ASTs. No more sucking your data from Postgres over a straw, or between microservices, or ...
These systems are orders of magnitude smaller and simpler than what we've built today. I'd love to see them exist again.
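A minimal sketch of the flavor using POSIX mmap (the file name is illustrative): the file on disk is the live data structure, RAM only caches the pages you touch, and there is no load/save or serialization step.

  /* Single-level-storage sketch: the file *is* the object; RAM is cache.
   * POSIX only; "counter.dat" is an illustrative name. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void) {
      int fd = open("counter.dat", O_RDWR | O_CREAT, 0644);
      if (fd < 0) { perror("open"); return 1; }
      if (ftruncate(fd, sizeof(long)) < 0) { perror("ftruncate"); return 1; }

      /* Map the file into the address space: we mutate "disk" directly
       * through a pointer, with no read()/write() and no serialization. */
      long *counter = mmap(NULL, sizeof(long), PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
      if (counter == MAP_FAILED) { perror("mmap"); return 1; }

      ++*counter;                              /* persistent object, in place */
      printf("run count: %ld\n", *counter);

      msync(counter, sizeof(long), MS_SYNC);   /* flush the cached page */
      munmap(counter, sizeof(long));
      close(fd);
      return 0;
  }

Each run picks up where the last left off; Lisp and Smalltalk images take the same idea all the way up to the whole program.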
There has always been pressure to do so, but there are fundamental bottlenecks in performance when it comes to model size.
What I can think of is that there may be a push toward training for exclusively search-based rewards, so that the model isn't required to compress a large proportion of the internet into its weights. But this is likely to be much slower and come with initial performance costs that frontier model developers will not want to incur.
> so that the model isn't required to compress a large proportion of the internet into its weights.
The knowledge compressed into an LLM is a byproduct of training, not a goal. Training on internet data teaches the model to talk at all. The knowledge and ability to speak are intertwined.
I wonder if this maintains the natural-language capabilities, which are what make LLMs magic to me. There is probably some middle ground, but not having to know which expressions or idiomatic speech an LLM will understand is really powerful from a user-experience point of view.