>2. How do you handle the semantic gap? LLMs operate in natural language/fuzzy logic space, while formal methods require precise specifications. What's the translation layer like?
From what I understood, this validates the correctness of the output, not that the output aligns with the user's goals, so there's still room for the LLM to get the goals wrong. It only validates the mathematical consistency between the output code and the formal specification (in this paper, within the ESBMC framework for C++ code).
So it's fairly tightly scoped in this case, but I think it points in the right direction for coding assistants, which usually get some language primitives wrong.
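For intuition, here's a minimal, hypothetical sketch of what "mathematical consistency between the output code and the formal specification" can look like in practice: the specification is written as an assertion, and a bounded model checker like ESBMC tries to prove that no execution can violate it. The function and values below are made up for illustration, not taken from the paper.

```cpp
// Hypothetical sketch of the kind of property an ESBMC-style bounded model
// checker verifies: the assert() encodes the formal specification, and the
// checker searches execution paths (up to a bound) for an input that breaks it.
#include <cassert>

// Stand-in for LLM-generated code: clamp a value into the range [lo, hi].
int clamp(int value, int lo, int hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

int main() {
    // In a real verification harness the inputs would be nondeterministic so
    // the checker reasons over every possible value; fixed values are used
    // here only to keep the sketch runnable as plain C++.
    int lo = 0, hi = 10, v = 42;

    int r = clamp(v, lo, hi);

    // The specification: assuming lo <= hi, the result always stays in range.
    // A model checker either proves this or produces a counterexample trace.
    assert(lo <= r && r <= hi);
    return 0;
}
```

Note this is exactly the gap mentioned above: the checker can prove the code satisfies the assertion, but not that the assertion captures what the user actually wanted.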
This is the kind of work a junior developer might deliver in their first weeks at a new job, and that's how it should be treated: good intentions, not really good quality.
AI coding needs someone behind it to steer it toward better output, and in some cases it gets there. But it still hasn't left the junior phase, and until it does, you still need a good developer to deliver good results.
No serious company would do anything equivalent to "hey junior dev, make me a Counter-Strike", so examples like these do far more harm than good: they give the impression of superpowers when this is really just the best these tools can do.
They're not thinking or reasoning or understanding. It's just amazing autocomplete. Humans not being able to keep themselves from extrapolating or gold rushing doesn't change that.
And/or, are the lower-parameter models straight up less effective than the giants? Why is anyone paying for Sonnet and Opus if Mixtral could do what they do?
But, for example, Zig as a language has prominent corporate support, and Mitchell Hashimoto is incredibly active and a billionaire. It feels like this would be a rational way to expand the usage of a language.
It's less illusion and fantasy and more code for "what I think." If you notice, it's only uttered by people who believe their own reasoning should be automatically accepted as truth. Ego leaves no room for doubt or embarrassment.
It really depends on the driver lottery. Do you have good driver support? Good, you'll have a mostly flawless experience.
Generic drivers? You either get weird CPU usage patterns or perfectly normal behavior. Maybe an update will break everything. Welcome to the Linux Driver Lottery.
It's really good nowadays, but sadly the issues remain the same, and it's not the Linux devs' fault either; it's just the manufacturers' lack of support.
If things felt corrupt before, why not improve the processes? Why keep doing the same thing, just with the companies the current administration likes best?
>Excluding interconnects, the SRAM and CCD should add up to less than 20µm thick. To accommodate such small and fragile components, AMD has added a bulky layer of dummy silicon at the top and the bottom for structural integrity.
The article doesn't answer questions like whether this is unusual, whether other CPUs have historically had a lot of dummy silicon, whether this is expensive, how it impacts the cost of production, or how it affects the complexity of production itself. It's what I'd expect from modern journalism, but not really what I expect from good journalism.
The way nearly every CPU, starting right from the 4004, has worked is that they take a silicon wafer that's about half a millimetre thick, do a lot of photolithography, etching, deposition and other steps to build the CPU on the top side of it, flip it upside down, and bond it to the package. Only the very surface layer of the silicon is active in any way; the rest of the bulk is used for structural support and for spreading heat laterally (necessary because heat is not evenly distributed at all, and the hot spots get very hot). Power and signals come from the package below, and heat is dissipated to the top.
The X3D chips change this: in them, two silicon dies are bonded together, right on top of each other. The lower die gets through-silicon vias built into it, so it can provide power and signals for the top one. In prior generations there was a normal CPU die on the bottom and the cache chip on top; for Zen 5, they reversed that.
A complication is that apparently the process they use to bond them requires the top chip to be thinned. This means it's structurally weak and that heat spreads worse laterally, which would be bad for top clocks. So they bond another, thicker piece of silicon on top of it.
"requires the top chip to be thinned" and then "bond a thicker piece of silicon on top of it." That doesn't seem very efficient to me but I'm sure they know what they are doing.
Yeah. I think that other CPUs also have a large chunk of 'dummy' silicon, except it's just the bulk of the wafer that the circuitry is manufactured on the surface of, instead of a separate part that's added after the wafer is ground down to allow for the connections through the back.
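To put the quoted numbers in rough perspective (taking the <20 µm active stack from the quote above and the roughly half-millimetre wafer thickness mentioned earlier; exact figures vary by process and product):

$$\frac{20\,\mu\text{m}}{500\,\mu\text{m}} \approx 4\%$$

so on a conventional ~0.5 mm die, active layers of that order account for at most a few percent of the thickness; the rest of the wafer is effectively the same "dummy" silicon, just not added as a separate piece.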