Hacker News | willvk's comments

Great to see someone thinking outside the box.


Not sure if it's been mentioned but 1831 on a keypad makes a triangle, or a prism, if you will.


This is the best and simplest description I have seen of parenting. Do not get emotional. Describe, explain, reason, rationalise. Beautiful. Such timeless advice.


Bingo. The best proven way to control anger is to train your mind to seek understanding of what's going on in those tense situations.

Like anything, handling anger takes practice.

When my kids get angry, I teach them to ask questions. "Why?!!" "Why didn't they let me play with them?" "Why does my sister always get to use my toys?!" "Why does it make me feel this way?!" "What can I do to prevent this in the future?!"

The more understanding you can gain during a confrontation, the better you train your mind to react that way rather than following our instincts. We are, by default, animals, and anger is part of our fight-or-flight response - but it's something we can reprogram with practice. ...especially if you start with a young, malleable mind.


Correct me if I'm wrong, but I don't think that protected mode existed in old Unix systems, and so there was no delineation between user space and kernel space as described.

Protected mode and the separation of kernel and user space only became available with the 80286 processor in 1982.

Refs:

https://en.wikipedia.org/wiki/User_space

https://en.wikipedia.org/wiki/Protected_mode


Virtual memory systems and protected memory certainly did exist before the 286, and Unix supported them well before 1982. Hell, the most popular CPU architecture for production Unix systems -- VAX -- was 32-bit and dates from the mid-1970s.


There is a world outside of Intel, especially back in the 80's and 90's...


And especially back in the '60s and '70s, when Unix and its predecessors were originally developed.


Happy to be wrong on this and good to know it did exist before x86. Thanks for the replies.


This is a very PC-centric way of looking at things.

The 68K processors did not have a concept of protected mode. They just had user mode and supervisor mode.[1]

[1]: https://retrocomputing.stackexchange.com/questions/2784/how-...



Sorry, but you are wrong.

Have a look at the architectures of the PDP-11, VAX, 68000 and many other CPUs.


By the way, an interesting story:

When Sun (?) wanted to implement their Unix with paging on the 68000 they had a problem: If you access a memory page that has been swapped out to disk, the OS has to (1) load the page from disk to RAM and (2) let the CPU repeat the instruction that caused the memory access. But the last step could not be properly done on the 68000 because it did not store its internal state completely when an (address) error happened (this was fixed in the 68010).

Their solution: Run two 68000 in parallel on the same code, one of them delayed by one instruction. When the first CPU triggers the page fault, the system can stop the second CPU before it reaches the instruction that caused the fault.


That's astounding. 68ks were not cheap. I assume the system then stalls for a couple of instructions to switch back to the first chip after the page fault completes? I have to assume they reached that solution after exhausting every other option.

On the other hand, this could also be used as a form of error checking. If CPU2 ever returns something different you know there was an error somewhere in the system and you can hard stop to prevent further data corruption.


> I have to assume they reached that solution after exhausting every other option

I assume that the 68000 was so powerful for its price (probably much cheaper than the existing mainframe CPUs) that it made this a viable solution. Or maybe the company had promised their customers a 68k-based Unix system (with the 68451 MMU) and the 68010 was delayed or too expensive and they had to find a quick solution? (I have no idea)

Btw, I was probably wrong about Sun being the company behind this. Apollo and MassComp have been mentioned in mailing lists.


IIRC they cost around $350 each back in the 80s. That's roughly $1000 in today's dollars.

A lot of companies wanted to use them instead of the Mickey Mouse Intel chips of the day, but the price point was too severe.


Speaking of which: http://www.os2museum.com/wp/the-nearly-ultimate-386-board/

The article describes a motherboard with both a soldered-on 386 CPU and a separate socket for another one. This was not a multiprocessor board; to make use of the socket you were supposed to disable the onboard CPU using a jumper.

But if you plugged in a similar enough CPU without setting the jumper, it appears that both CPUs ran at the same time. And since they had the same clock and they should have no difference in behavior, the system, at least as tested, ran fine.


I'm having a hard time understanding how this would actually work in practice.

CPUs act on data coming from e.g. the disk, memory, the network, etc. The results of instructions influence what the CPU will do next, especially so in the case of self-modifying code.

So, with a system that has a "chaser" CPU following a master CPU one instruction behind... how do you also impose a one-instruction delay on the CPU's view of the real world?!

Unless this is done, the chaser CPU is going to see real-world data and I/O that is out of sync with the instruction stream, isn't it?

I'm quite sure this problem was very elegantly solved, and I'm very curious+interested to find out what that solution was.


Hah, that is amazing and baffling at the same time.

Elsewhere, you unfortunately sometimes have to plainly emulate the instructions. VM86 mode on x86 is notorious for that; not so much when accessing memory, but for the vast set of privileged instructions that can trap into the monitor (e.g. CLI, IN, OUT, POPF...). The VM86 extensions ameliorated it only somewhat.

But VM86 also does not have much relevance anymore. It was a mode for running 16-bit real-mode software, and 64-bit x86 CPUs no longer support it when in long mode.


I didn't know you could do that with lea. That's pretty cool!


That's some very concise code golf! It would be interesting to see how Forth compiles this under the hood.


Normally Forth is interpreted. Compiled Forth code is unlikely to look particularly clean or elegant.

(If you like languages with a very high density of functionality to symbol count, you should look at APL and its successor J)
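For a rough idea of what "compiled" Forth traditionally looks like under the hood, here is a deliberately simplified Python sketch of threaded code, the classic Forth compilation model (the names and structure here are illustrative assumptions, not any particular Forth implementation): a colon definition compiles to a list of references to other words, and a tiny inner interpreter walks that list.

```python
# Primitive words: each operates on a shared data stack.
def dup(stack):
    stack.append(stack[-1])

def mul(stack):
    b, a = stack.pop(), stack.pop()
    stack.append(a * b)

# ": SQUARE DUP * ;" compiles to a "thread" of word references.
square = [dup, mul]

def run(thread, stack):
    """The inner interpreter: execute each word in the thread."""
    for word in thread:
        if isinstance(word, list):   # a compiled (colon) word
            run(word, stack)
        else:                        # a primitive
            word(stack)

s = [7]
run(square, s)
print(s)  # [49]
```

Real Forths use indirect or direct threading with machine-level execution tokens rather than Python lists, but the shape is the same: compiled code is mostly a sequence of addresses, which is why it rarely reads as "elegant" disassembly.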


I agree the structure of some of my explanations could have been a bit better. The book I was using as a basis was from the mid-90s so it would be a version of GDB from 20 years ago that requires the NOP.


Thanks for that. I've amended the diagram accordingly.


Thanks for that. I didn't intend for it to be a chronological list but will add AVX to it. Cheers.


Oh, okay, my bad!

And wow, Fibonacci using hand-rolled AVX :) that would definitely be a sight to see.

...although, I should explicitly point out, AVX is a set of vector streaming/processing instructions. Computing Fibonacci is inherently sequential: each term depends on the ones before it, with no lookahead.

Okay, so I googled "fibonacci avx". The results were quite a mess but some careful wading found me http://www.insideloop.io/blog/2014/10/14/arrays-part-iii-vec..., which states: "As a consequence, the following code that computes the sequence of Fibonacci can't get vectorized."

So, forgive my own (bullish!) naivete. AVX, and _possibly_ SIMD, SSE, etc _may not_ be applicable to/usable for Fibonacci computation.


Vector instructions are certainly usable for scalar computation, although it doesn't make much sense --- you basically just use one piece of one vector and throw away all the other results.

Fibonacci is a toy example but illustrates where the vector instructions do become useful: if you want to compute the n'th term of various Fibonacci-like sequences with different starting conditions but the same recurrence, then you could do them in parallel using those instructions.
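That idea can be sketched in plain Python (a real implementation would use SIMD intrinsics; the function name and seeds here are made up for illustration): several Fibonacci-like sequences share the recurrence x(n) = x(n-1) + x(n-2) but have different starting pairs, and all lanes advance in lockstep, one "vector add" per iteration.

```python
def nth_terms(seeds, n):
    """Advance several Fibonacci-like sequences in lockstep.

    Each list plays the role of one SIMD register: element k is
    lane k. `seeds` is a list of (x0, x1) starting pairs; returns
    the n-th term of every sequence.
    """
    a = [s[0] for s in seeds]  # lane-wise x(0)
    b = [s[1] for s in seeds]  # lane-wise x(1)
    for _ in range(n - 1):
        # one "vector add" per iteration: all lanes at once
        a, b = b, [x + y for x, y in zip(a, b)]
    return b

# Fibonacci (1,1), Lucas (2,1), and two arbitrary seeds
print(nth_terms([(1, 1), (2, 1), (0, 5), (3, 4)], 10))
# [89, 123, 275, 322]
```

With 4-wide vector registers this loop body is a single packed add, so four (or, with AVX2 and 32-bit terms, eight) sequences cost the same as one.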


Unrolling the computation of the series, it is possible to compute f(n) using only f(n-4) and f(n-8):

f(n) = 7*f(n-4)-f(n-8)

So, using 4-vector operations, you can compute 4 numbers in one step.
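A quick sanity check of that identity in plain Python (not vectorized; just verifying the recurrence and the 4-at-a-time stepping it enables):

```python
def fib(n):
    """Ordinary iterative Fibonacci, f(0)=0, f(1)=1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The unrolled identity holds for all n >= 8.
assert all(fib(n) == 7 * fib(n - 4) - fib(n - 8) for n in range(8, 40))

# 4-at-a-time: from the last eight known terms, compute the next
# four in one batch -- the pattern a 4-lane vector unit can exploit.
window = [fib(i) for i in range(8)]          # f(0)..f(7)
for _ in range(3):                           # three batched steps
    new = [7 * window[-4 + k] - window[-8 + k] for k in range(4)]
    window = window[4:] + new
print(window)                                # f(12)..f(19)
# [144, 233, 377, 610, 987, 1597, 2584, 4181]
```

Each batched step is one multiply-by-7 and one subtract across four lanes, exactly the shape of a packed multiply-subtract.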


That's some awesome feedback. Thanks so much. As I'm teaching myself asm I'm learning as I go. I'll incorporate it all into a revised version soon.


Quick suggestion to massively improve your writing for general consumption: strip out the passive voice where not absolutely necessary, e.g., "was decided". Not only is the active voice "I/we decided" generally much more interesting to read, it usually forces you to adopt a sentence structure with fewer words (easier to read). Your writing later in the article is much more engaging as you start to use the active voice.

And thanks for the article, I love reading interesting excursions in low level programming!


Thanks for reading, and thanks for the feedback! That's a good tip on the passive/active voice. I did actually think about standardising that but (unfortunately) let it slide; I'd planned to move from my initial thinking to a more instructional tone, but that's probably less effective and engaging for the reader the whole way through. Will remember that tip for my next post! Thanks again!


I mentioned this elsewhere, but just to make sure you see it, I also stumbled on https://codegolf.stackexchange.com/a/135618/15675, which may also be useful.


Thanks for that. Yep, I've seen that too.


Your article is actually pretty handy in terms of setting up the "workflow" for compilation and debugging from scratch. It's the sort of thing genuine beginners need in order to avoid getting tripped up.


Thank-you for your kind words about the post. That was what I was trying to do with it - to break down some of the nebulous fear that beginners have with assembly. Cheers!

