
Can't vendors making desktop/mobile-class CPUs detect the equivalent pattern and optimize it in microcode or silicon?

Or is that what we are trying to get away from?



Maybe, but it's a leap, IMO. The equivalent patterns are 3x as long, and they modify a ton of architecturally visible state for their intermediate results, which leaves more work for the fused instructions to do.
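
For a concrete picture of "3x as long", here's a rough C sketch (mine, purely illustrative) of one limb of a multi-precision add:

    #include <stdint.h>

    /* One limb of a multi-precision addition. On x86-64 the carry
       chain is a single ADC per limb; on RV64, which has no flags
       register, each carry must be recomputed with sltu, so the same
       limb costs roughly add + sltu + add + sltu + add. */
    uint64_t add_limb(uint64_t x, uint64_t y,
                      uint64_t carry_in, uint64_t *carry_out)
    {
        uint64_t sum = x + y;
        uint64_t c = (sum < x);    /* carry out of x + y */
        sum += carry_in;
        c += (sum < carry_in);     /* carry from adding carry_in */
        *carry_out = c;            /* always 0 or 1 */
        return sum;
    }

A fusing decoder would have to recognize all of those instructions, plus the intermediate register writes, as a single ADC.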

The complaint is valid, IMO, and these patterns would have shown up in the filtering process they used to choose ops if they had profiled JIT output too, rather than just what's in AOT code.


It can try... but you're basically trying to "decompile" or "compress" code back to a higher level, and that's neither easy nor efficient. If something relatively simple like ADC is difficult, think of something like an entire encryption/hash round, which competing CISC processors already have dedicated instructions for. Even if you do manage to make that work, there's still the matter of those extra instructions taking up valuable space in caches and memory bandwidth.

That's why, unlike a lot of RISC proponents, I don't think "RISC is the future"; I think a CISC with uop-based decoding will be more scalable and performant. Even ARM cores have moved a little in that direction.


Note that RISC-V has just had its cryptography extension finalized.


Classic CISC processors like the VAX had lots of memory-to-memory instructions, complex looping constructs, etc. Special ops that are register-to-register aren't anti-RISC.


> Can't vendors making desktop/mobile-class CPUs detect the equivalent pattern and optimize it in microcode or silicon?

The RISC-V stans keep saying that, but nobody has given a demo or shown benchmarks AFAIK, even under simulation. So it's just handwaving.

It's not only javascript, of course. Integer overflow in C is an error condition (undefined behaviour) that compilers usually don't try to trap. The -ftrapv option in gcc and clang enables trapping, but at some performance cost, so it's rarely used, and we get continuing bugs and vulnerabilities as a result. (Ada mandates trapping unless you enable an unsafe optimization, which is, um, enabled by default in GNAT.) RISC-V increases that performance cost considerably from what I can tell. That's the opposite of what we needed.
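
For reference, a minimal sketch of the two options in C (function names mine; __builtin_add_overflow is the gcc/clang intrinsic):

    #include <stdio.h>
    #include <stdlib.h>

    /* Plain signed add: UB on overflow. Building with -ftrapv makes
       gcc/clang emit a checked add that aborts instead. */
    int add_unchecked(int a, int b) {
        return a + b;
    }

    /* Explicit checking via the gcc/clang intrinsic. On a flags ISA
       this is one add plus a branch-on-overflow; on RISC-V the
       overflow condition has to be synthesized from several extra
       instructions, which is the added cost described above. */
    int add_checked(int a, int b) {
        int r;
        if (__builtin_add_overflow(a, b, &r)) {
            fprintf(stderr, "signed overflow\n");
            abort();
        }
        return r;
    }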

I'm no CPU architect, but I know CPUs are able to signal overflow in floating-point arithmetic, since IEEE 754 requires it. So I don't understand why they can't do the same for integers.
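
For what it's worth, the floating-point flag is even observable from portable C. A minimal sketch using <fenv.h>:

    #include <fenv.h>
    #include <stdio.h>

    /* IEEE 754 overflow raises a sticky, non-trapping-by-default
       exception flag, which C exposes via <fenv.h>. There is no
       analogous standard flag for integer overflow. */
    int main(void) {
        feclearexcept(FE_OVERFLOW);
        volatile double big = 1.7e308;
        volatile double r = big * 2.0;   /* overflows to +inf */
        if (fetestexcept(FE_OVERFLOW))
            printf("FE_OVERFLOW raised, r = %g\n", r);
        return 0;
    }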


Isn't the obvious solution to the problem of overflow to define the behavior, like pretty much all newer languages have done (presumably because they learned from the errors committed by C)?
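
A minimal sketch of what "defined behavior" looks like when retrofitted onto C (the cast back from unsigned is implementation-defined, but it wraps on every mainstream compiler):

    #include <stdint.h>

    /* Defined two's-complement wrapping add, the behavior most newer
       languages simply mandate. Unsigned arithmetic must wrap mod
       2^32, so only the final conversion back to int32_t is left to
       the implementation. */
    int32_t wrapping_add(int32_t a, int32_t b) {
        return (int32_t)((uint32_t)a + (uint32_t)b);
    }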


> we get continuing bugs and vulnerabilities as a result

Background reading: https://huonw.github.io/blog/2016/04/myths-and-legends-about...

If you only want safety, then trapping or not, signalling or not, does not matter at all. It is UB that causes safety problems, not the overflow itself. And RISC-V mandates how overflow is handled (integer adds simply wrap), so there is no UB at the ISA level.

Throwing on arithmetic overflow is a language choice. And Rust, at least, has decided that trapping on every arithmetic operation is not necessary for security (it panics on overflow in debug builds and wraps in release builds by default).

The only related problem with no overflow trapping is that dynamically typed languages need numeric type conversion on overflow. But TBH, if a numerical javascript program routinely generates 1.7E308, it's a terrible program that no one should care about.
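
For concreteness, the fast path a JS JIT emits for `a + b` on small integers looks roughly like this (hypothetical sketch, names mine):

    #include <stdint.h>

    /* Add two 32-bit "small ints", bailing out to the boxed-double
       slow path on overflow. Returns 1 on the fast path. On a flags
       ISA the check is a single branch; on RISC-V it's a longer
       synthesized sequence, which is the cost being debated here. */
    int jit_add_smallint(int32_t a, int32_t b, int32_t *out) {
        return !__builtin_add_overflow(a, b, out);
    }

    /* Slow path: deoptimize and redo the addition as IEEE doubles. */
    double jit_add_deopt(int32_t a, int32_t b) {
        return (double)a + (double)b;
    }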


Interestingly, the MIPS CPU traps on overflow for the add and sub instructions. You have to use the addu or subu instructions to get the usual wrapping behavior on overflow.

MIPS is kind of the spiritual ancestor of RISC-V.



