
There's also Frans Osinga's book Science, Strategy and War, which covers John Boyd's work in detail.

I think OODA is fundamentally different to Auftragstaktik.

Auftragstaktik describes a clear purpose/intent. Like: capture the bridge (but we don't care how you do it, since we can't foresee the specific circumstances).

OODA describes a process of decision making.

So, Auftragstaktik answers who decides what and why.

OODA answers how decisions are made and updated over time.

They’re complementary but different.


John Boyd, describer of the “OODA” loop: https://en.wikipedia.org/wiki/OODA_loop

                                    .''.
           .''.             *''*    :_\/_:     .
          :_\/_:   .    .:.*_\/_*   : /\ :  .'.:.'.
      .''.: /\ : _\(/_  ':'* /\ *  : '..'.  -=:o:=-
     :_\/_:'.:::. /)\*''*  .|.* '.\'/.'_\(/_'.':'.'
     : /\ : :::::  '*_\/_* | |  -= o =- /)\    '  *
      '..'  ':::'   * /\ * |'|  .'/.\'.  '._____
          *        __*..* |  |     :      |.   |' .---"|
           _*   .-'   '-. |  |     .--'|  ||   | _|    |
        .-'|  _.|  |    ||   '-__  |   |  |    ||      |
        |' | |.    |    ||       | |   |  |    ||      |
     ___|  '-'     '    ""       '-'   '-.'    '`      |____
    jgs~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

SEEKING WORK - Data scientist, consulting & fractional leadership, US/remote worldwide, email in profile.

I'm a data scientist with 20+ years experience who enjoys gnarly, avant-garde problems. I saved a German automaker from lemon law recalls. I've worked with a major cloud vendor to predict when servers would fail, allowing them to load shed in time.

Some of the things I've done:

    - Oil reservoir & well engineering forecasting production.
    - Automotive part failure prediction (Lemon law recalls)
    - Server fleet failure prediction allowing load shedding.
    - Shipping piracy risk prediction - routing ships away from danger.
    - Realtime routing (CVRP-PD-TW, shifts) for on demand delivery.
    - Legal entity and contract term extraction from documents.
    - Wound identification & tissue classification.
    - The usual LLM and agent stuff. (I'd love to work on effective executive functioning)
    - Your nasty problem here.
I use the normal stacks you'd expect: Python, PyTorch, Spark/Ray, Jupyter/Marimo, AWS, Postgres, Mathematica, and whatever else is needed to get the job done. Ultimately it's about the problem, not the tools.

I have years of experience helping companies plan, prototype, and productionize sane data science solutions. Get in touch if you have a problem, my email is in my profile.

NB: Don't contact me if you're doing ads, gambling, or enshittification. I'd like to sleep at night.


Linux had been doing SMP for about 5 years by that point.

But you're right, OS resource limitations (file handles, PIDs, etc.) would be the real pain for you. One problem after another.

Now, the real question is: do you want to spend your engineering time on that? A small cluster running Erlang is probably better than a tiny number of finely tuned race-car boxen.


My recollection is fuzzy, but I remember having to recompile 2.4-ish kernels to enable SMP back in the day, which took hours... And I think it was buggy too.

Totally agree on many smaller boxes vs. one bigger box, especially for the proxying use case.


         *             ,
                      _/^\_
                     <     >
    *                 /.-.\         *
             *        `/&\`                   *
                     ,@.*;@,
                    /_o.I %_\    *
       *           (`'--:o(_@;
                  /`;--.,__ `')             *
                 ;@`o % O,*`'`&\
           *    (`'--)_@ ;o %'()\      *
                /`;--._`''--._O'@;
               /&*,()~o`;-.,_ `""`)
    *          /`,@ ;+& () o*`;-';\
              (`""--.,_0 +% @' &()\
              /-.,_    ``''--....-'`)  *
         *    /@%;o`:;'--,.__   __.'\
             ;*,&(); @ % &^;~`"`o;@();         *
             /(); o^~; & ().o@*&`;&%O\
       jgs   `"="==""==,,,.,="=="==="`
          __.----.(\-''#####---...___...-----._
        '`         \)_`"""""`
                .--' ')
              o(  )_-\
                `"""` `


Merry Christmas !

May your glühwein be hot and your cats purring :)


Since I might have people's attention for a moment (I can no longer edit my original post), might I suggest the King William's College quiz this Christmas?

https://kwc.im/wp-content/uploads/2025/12/GKP_2025_26.pdf


Thanks for posting. This was fun! Towards the end it gets really hard. Merry Christmas!


Towards the end?! I can't answer ANY of the questions!


You're not expected to know the answers without doing research. https://britbrief.co.uk/education/schools/king-williams-coll...

>The paper carries a telling Latin motto: "Scire ubi aliquid invenire possis ea demum maxima pars eruditionis est" – "To know where you can find something is, after all, the greatest part of learning."

In the modern era it is acknowledged that people will use online search engines to find the answers, but hopefully we all learn something along the way.


I know the answer to 1.5. And I know the story behind 1.6, but not the name of the town... That's it :)


How do you even know ANY of that??


This ASCII art reminds me of the good old days of the internet, when we would celebrate holidays and special days by sending these little marvels through our mail client, Pine, on IBM AIX Unix terminals.


That jgs is killing me


It's just that old ASCII and ANSI art is often signed by the author, which is something fun I appreciate. See also:

https://oldcompcz.github.io/jgs/joan_stark/xmas.html#treewit...

And of course, the BBS Documentary: Artscene episode:

https://www.youtube.com/watch?v=oQrBbm5ZMlo&list=PL7nj3G6Jpv...


Woah. I didn't know about that. I found it on asciiart.eu. This stuff makes me wish HN supported ANSI. We could have so much fun. (Also, PRE or preformatted text would be useful.)

Merry Christmas and a happy New Year, everyone!


> Text after a blank line that is indented by two or more spaces is reproduced verbatim. (This is intended for code.)

        .-"```"-.
       /_\ _ _ __\
      | /{` ` `  `}
      {} {_,_,_,_,}
         )/ a  a \(
        _(<o_()_o>)_          __
       { | /_)(_\ | }       _/  \
     /`{ \        / }`\   _| `  |
    /   { '.____.' }   \ {_`-._/
   /     {_,    ,_}     \/ `-._}
  |        `{--}`    \  /    /
  |    /    {  }     | `    /
  ;    |    {  }     |\    /
   \    \   {__}     | '..'
    \_.-'}==//\\====/
    {_.-'\==\\//====\
     |   ,)  ``     |
      \__|          |
       |     __     |
       | _   /\   _ |
       |     ||     |
       \ _ _ || _ _ /
       {` ` `}{` ` `}
       {_,_,_}{_,_,_}
        |    ||    |
        /-. _||_ .-\
   jgs /    /  \    \
      |___ /    \ ___|
      `"""`      `"""`


Glad I could share it!

https://news.ycombinator.com/formatdoc says:

> Text after a blank line that is indented by two or more spaces is reproduced verbatim. (This is intended for code.)

I assume that's how the top level comment posted the tree which is in a <pre><code> structure.


But we still miss out on the great ANSI colors.

I kinda do want to go back to old-school BBSes, with cool text-mode lightbar interfaces.


This guy was prolific! Wow



TIL! Thanks! :)


Ahhh of course it's a signature. Makes total sense now.

I thought it was just a straggler.


I thought this was Advent of Code! Merry Christmas!


Very nice. The introduction to recursion at university (which was around that time of year) was drawing a fractal Christmas tree.


Lots of great ASCII art by jgs


Thought it was obfuscated C code, then I realized it’s a tree or something


Pretty sure it's a regex to match email address strings.


Thank you kind stranger for a wonderful laugh on Christmas!


Challenge issued.

I wouldn't be surprised if someone comes up with obfuscated C code that looks like a Christmas tree and prints out wishes by the end of the day/season.

Merry Christmas!


I'll admit it does look a bit like an IOCCC [0] entry. Can someone please do it for us?

[0]: https://www.ioccc.org/


Let's ask an LLM as a test!


Gemini 3 Pro gave some rather unimpressive answers: https://gemini.google.com/share/3cbcbe1fd64c


ChatGPT did much better, but I cannot paste it into this text box no matter how many times I try different formatting to get the whitespace preserved. ChatGPT also could not figure out how to format it for pasting here.


Try four leading spaces?

    Test, is this monospaced?
    012345678901234567890123456789


A screenshot and a link to the resulting JPG?


I did recently try to use an LLM to make a twinkling Christmas tree quine in the style of qlobe. It didn't get close; maybe next year!


Perl time


Exactly, looks like Perl code to me. :)


It's tragic that a language as flexible and unopinionated as Perl is admittedly terrible for novice programmers, because Learning Perl is easily one of the greatest introductory programming books.


It's a red-green tree. Your task is to invert it in O(n) operations.


I thought it was unobfuscated Perl code. Merry Christmas, ya filthy animals.


That is probably true. But Roomba sucked in the early 2000s too. They never got better.


I believe the author's thesis is that if they had invested in innovation over a couple decades, the product probably would have sucked less.


Or perhaps would have sucked more where it needed to, and sucked less where it didn't.


It's a vacuum cleaner. All you want it to do is suck.


But not at navigation.


It does seem like that upon reading the article, but it’s not what the title of the article suggests.


The innovation being shut down wasn't innovation toward making robot vacuum cleaners better. It was innovation directed toward military applications like building robotic hands.


Exactly this. If they had been innovating in vacuum technology then maybe this article would have a point. But they were building stuff for the military and for space, and there's a good reason investors wanted them to get out of that because it was sucking up money and not resulting in better vacuum cleaners.


Well, it's 2025; we've just spent the better part of the year discussing the bitter lesson. It seems clear that solving the more general problem is key to innovation.


Hardware is not like software. A general-purpose humanoid cleaning robot will be superior to a robot vacuum, but it will always cost an order of magnitude more. This is different from software, where costs decrease exponentially and you can do the compute in the cloud.


I'm not sure advancements in AI and advancements in vacuum cleaners are at similar stages in terms of R&D. I'd be very wary of trying to apply lessons from one to the other.


And the commenter above is highlighting the article's hypothesis about why they never got better.


Yet they were about far more than just vacuums!

In the 2000s, no one was doing what they were doing.


Worthwhile for the hotel and book recommendations.

Obviously he has better food taste than I do, so those too. I will shit like a mink and love it.


What he's getting at is single-level storage. RAM isn't used for loading data and working on it; RAM is a cache. The size of your disk defines the "size" of your system.

This existed in Lisp and Smalltalk systems. Since there's no disk/running-program split, you don't have to serialize your data. You just pass around Lisp sexprs or Smalltalk code/ASTs. No more sucking your data from Postgres over a straw, or between microservices, or ...

These systems are orders of magnitude smaller and simpler than what we've built today. I'd love to see them exist again.
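Python never had quite the same image model, but for a taste of the flavour, here's a minimal sketch assuming the third-party ZODB library (transparent object persistence) as a rough stand-in for a Lisp/Smalltalk image:

    # A minimal sketch of the "image" flavour in Python, assuming the
    # third-party ZODB library as a rough stand-in: one persistent object
    # graph, no ORM, no explicit serialization layer in your own code.
    from ZODB import DB
    from ZODB.FileStorage import FileStorage
    import transaction

    db = DB(FileStorage("image.fs"))   # the file that backs the whole "image"
    root = db.open().root()            # the root of one persistent object graph

    root["customers"] = {"alice": {"orders": [42, 99]}}
    transaction.commit()               # the live objects simply persist

    # On a later run you reopen the same file and the objects are just there:
    print(root["customers"]["alice"]["orders"])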


And we have those French/English text corpora in the form of Canadian law. All laws in Canada at the federal level are written in English and French.

This was used to build the first modern language translation systems, testing them by going English -> French -> English, and in reverse.

You could do something similar here, understanding that your language is quite stilted legalese.

Edit: there might be other countries with similar rules in place that you could source test data from as well.
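As a rough illustration of that round-trip test, here's a hedged sketch where translate() is a hypothetical placeholder for whatever MT system you've built, and the score is a crude token-overlap stand-in for a proper metric like BLEU:

    # Sketch of the English -> French -> English round-trip check described
    # above. translate() is a hypothetical placeholder for whatever MT system
    # you built; the overlap score is a crude stand-in for a metric like BLEU.
    def round_trip_score(sentence, translate):
        french = translate(sentence, src="en", tgt="fr")
        back = translate(french, src="fr", tgt="en")
        orig_tokens = set(sentence.lower().split())
        back_tokens = set(back.lower().split())
        return len(orig_tokens & back_tokens) / max(len(orig_tokens), 1)

    # e.g. score sentences drawn from the bilingual federal statutes:
    # scores = [round_trip_score(s, my_translate) for s in english_sentences]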


Incredibly, I had not thought to use that data set.

Now I will. Thanks.


Belgian federal law is also written in Dutch, French and German, by the way.

But no English so you might not be interested.


Could this generate pressure to produce less memory-hungry models?


There has always been pressure to do so, but there are fundamental bottlenecks in performance when it comes to model size.

What I can think of is that there may be a push toward training for exclusively search-based rewards, so that the model isn't required to compress a large proportion of the internet into its weights. But this is likely to be much slower and to come with initial performance costs that frontier model developers will not want to incur.


> exclusively search-based rewards so that the model isn't required to compress a large proportion of the internet into their weights.

That just gave me an idea! I wonder how useful (and for what) a model would be if it was trained using a two-phase approach:

1) Put the training data through an embedding model to create a giant vector index of the entire Internet.

2) Train a transformer LLM, but instead of only utilising its weights, it can also do lookups against the index.

It's like an MoE where one (or more) of the experts is a fuzzy Google search.

The best thing is that adding up-to-date knowledge won’t require retraining the entire model!
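That's roughly what retrieval-augmented generation does today. A minimal sketch of the lookup step, assuming hypothetical embed() and generate() functions and a plain NumPy matrix as the "index of the internet":

    # Minimal sketch of the idea above (essentially retrieval-augmented
    # generation). embed() and generate() are hypothetical stand-ins for an
    # embedding model and an LLM; `index` is an (N, d) matrix of document
    # embeddings and `documents` the matching list of texts.
    import numpy as np

    def answer(query, index, documents, embed, generate, k=3):
        q = embed(query)                              # 1) embed the query
        sims = index @ q / (np.linalg.norm(index, axis=1)
                            * np.linalg.norm(q) + 1e-9)
        top = np.argsort(-sims)[:k]                   # 2) fuzzy lookup
        context = "\n\n".join(documents[i] for i in top)
        return generate("Context:\n" + context + "\n\nQuestion: " + query)

    # Adding up-to-date knowledge is appending rows to `index`, no retraining.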


Yeah that was my unspoken assumption. The pressure here results in an entirely different approach or model architecture.

If OpenAI is spending $500B, then someone can get ahead by spending $1B in a way that improves the model by >0.2%.

I bet there's a group or three that could improve results a lot more than 0.2% with $1B.


> so that the model isn't required to compress a large proportion of the internet into their weights.

The knowledge compressed into an LLM is a byproduct of training, not a goal. Training on internet data teaches the model to talk at all. The knowledge and ability to speak are intertwined.


I wonder if this maintains the natural-language capabilities, which are what make LLMs magic to me. There is probably some middle ground, but not having to know which expressions or idioms an LLM will understand is really powerful from a user-experience point of view.


Or maybe models that are much more task-focused? Like models that are trained on just math & coding?


Isn't that what the mixture-of-experts trick that all the big players use is? A bunch of smaller, tightly focused models.


Not exactly. MoE uses a router to select a subset of experts (feed-forward blocks) per token, rather than separate task-focused models. This makes them faster, but they still require the same amount of RAM, since every expert has to stay loaded.
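For a toy picture of that routing step (not any particular model's actual architecture), a NumPy sketch where only k of the experts run per token but all of them sit in memory:

    # Toy sketch of MoE routing: the router picks the top-k experts per token,
    # so compute per token shrinks, but every expert's weights still have to
    # be resident in memory.
    import numpy as np

    d, n_experts, k = 16, 8, 2
    experts = [np.random.randn(d, d) for _ in range(n_experts)]  # all in RAM
    router = np.random.randn(d, n_experts)

    def moe_layer(token):                        # token: shape (d,)
        logits = token @ router                  # score each expert
        topk = np.argsort(-logits)[:k]           # keep only k of n_experts
        w = np.exp(logits[topk]) / np.exp(logits[topk]).sum()
        return sum(wi * (token @ experts[i]) for wi, i in zip(w, topk))

    out = moe_layer(np.random.randn(d))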


Of course, and then watch those companies get reined in.


I'll counter with the Jaguar XK engine, in production for 43 years.

https://en.wikipedia.org/wiki/Jaguar_XK_engine

I assume the Americans will be by with a pushrod V8 soon.


Actually I'll give a V6 instead:

https://en.wikipedia.org/wiki/Buick_V6_engine

1961-2008.

