
>2. How do you handle the semantic gap? LLMs operate in natural language/fuzzy logic space, while formal methods require precise specifications. What's the translation layer like?

From what I understood, this validates the output's correctness, not its alignment with the user's goals, so there's still room for the LLM to get the goals wrong. It only validates the mathematical consistency between the output code and the formal specification (in this paper, within the ESBMC framework for C++ code).

So it's tightly scoped in this case, but I think it points in the right direction for coding assistants, which usually get some language primitives wrong.
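
For a feel of what that validation looks like, here's a minimal sketch. The function, the property, and the file name are mine, not the paper's; nondet_int is the model checker's convention for a symbolic "any possible input" value:

    // Toy harness in the spirit of the paper's setup: the assert() encodes the
    // formal spec, and ESBMC checks every (bounded) execution path against it.
    #include <cassert>
    #include <climits>

    int nondet_int();  // deliberately undefined: ESBMC treats it as symbolic

    // Hypothetical LLM output: addition that saturates instead of overflowing.
    int clamped_add(int a, int b) {
        if (a > 0 && b > INT_MAX - a) return INT_MAX;
        if (a < 0 && b < INT_MIN - a) return INT_MIN;
        return a + b;
    }

    int main() {
        int x = nondet_int();
        int y = nondet_int();
        int r = clamped_add(x, y);
        // The spec: for non-negative inputs the result never decreases,
        // i.e. saturation is the only deviation from plain addition.
        if (x >= 0 && y >= 0)
            assert(r >= x && r >= y);
        return 0;
    }

Something like `esbmc harness.cpp --overflow-check` then either reports the verification as successful or hands back a concrete counterexample trace (flags from memory; check `esbmc --help`).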


This is the kind of work a junior developer might deliver in their first weeks at a new job, so that's how it should be treated: good intentions, not really good quality.

AI coding needs someone behind it to steer it toward doing better, and in some cases it does. But it still hasn't left the junior phase, and until it does, there's still the need for a good developer to deliver good results.


No serious company would do anything equivalent to "hey Jr Dev, make me a Counter-Strike", so examples like these do far more harm than good: they give the impression of superpowers, but this is really just the best they can do.

They're not thinking or reasoning or understanding. It's just amazing autocomplete. Humans being unable to keep themselves from extrapolating or gold-rushing doesn't change that.


An options trader would bet on the volatility of the predictions rather than on the predictions themselves.
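
The toy version of that bet, with completely made-up numbers, is just scoring a forecaster by the dispersion of their predictions:

    // Sketch: sample standard deviation as a stand-in for "volatility
    // of the predictions". All inputs are invented for illustration.
    #include <cmath>
    #include <iostream>
    #include <vector>

    double volatility(const std::vector<double>& preds) {
        double mean = 0.0;
        for (double p : preds) mean += p;
        mean /= preds.size();
        double var = 0.0;
        for (double p : preds) var += (p - mean) * (p - mean);
        return std::sqrt(var / (preds.size() - 1));
    }

    int main() {
        std::vector<double> steady{0.60, 0.62, 0.58, 0.61};  // consistent forecaster
        std::vector<double> swingy{0.10, 0.95, 0.20, 0.90};  // wild swings
        std::cout << "steady: " << volatility(steady) << "\n"   // ~0.017
                  << "swingy: " << volatility(swingy) << "\n";  // ~0.45
        return 0;
    }

The trader doesn't care whether the prediction lands; they care how violently it moves.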


The answer to why most convenient solutions don't exist is money. There's no money in that.


No, it's because that's not how training an LLM works.


And/or the lower-parameter models are simply less effective than the giants? Why is anyone paying for Sonnet and Opus if Mixtral could do what they do?


But Zig, for example, is a language with prominent corporate support. And Mitchell Hashimoto is incredibly active, and a billionaire. It feels like this would be a rational way to expand a language's usage.


Absolutely. Common sense and critical thinking are less common than you may think.


That's in part because "common sense" is an illusion and fantasy.


It's less illusion and fantasy and more code for "what I think." If you notice, it's only uttered by people who believe their own reasoning should be automatically accepted as truth. Ego leaves no room for doubt or embarrassment.


It really depends on the driver lottery. Do you have good driver support? Great, you'll have a mostly flawless experience. Generic drivers? You either get weird CPU usage patterns or perfectly normal behavior. Maybe an update will break everything. Welcome to the Linux Driver Lottery.

It's really good nowadays, but sadly the issues remain the same, and it's not the Linux devs' fault either; it's the manufacturers' lack of support.


If things felt corrupt before, why not improve the processes? Why keep doing the same thing, just with the companies the current administration likes best?


Removing the process is the improvement. This is what we voted for: fewer processes, not more, and actual results.

Ask ChatGPT to compare and contrast Kamala Harris's effectiveness as "broadband czar" with SpaceX's Starlink.


Less process creates transparency gaps, which become opportunities for more corruption.


There is a balance between process and efficiency.

The idea that anyone removing any process is only doing it to open the door to corruption isn't a valid argument against efficiency.


Got it, so frame the question using language hinting that I'm looking for an anti-Harris response.


What do you think is the best way for a voter to decide whether Harris would be a good steward of taxpayer money?


Bleep bloop bleep bloop, looks like AI generated responses even reached Hacker News.


I think the answer is on the Blackmagic website:

>the world’s first advanced cinema camera designed to shoot for Apple Immersive Video

I think they are tapping early into an emerging "new" video format.


3D TV back at it again.


It says so in the article itself:

>Excluding interconnects, the SRAM and CCD should add up to less than 20µm thick. To accommodate such small and fragile components, AMD has added a bulky layer of dummy silicon at the top and the bottom for structural integrity.


The article doesn't answer questions like whether this is unusual, whether other CPUs have historically had a lot of dummy silicon, whether this is expensive, how it impacts the cost of production, or how it affects the complexity of production itself. It's what I'd expect from modern journalism, but not really what I expect from good journalism.


Traditionally, it's not needed.

The way nearly every CPU, starting right from the 4004, has worked is that they take a silicon wafer about half a millimeter thick, do a lot of photolithography, etching, deposition and other steps to build the CPU on the top side of it, flip it upside down, and bond it to the package. Only the very surface layer of the silicon is active in any way; the rest of the bulk is used for structural support and for spreading heat laterally (necessary because heat is not evenly distributed at all, and the hot spots get very hot). Power and signals come from the package below; heat is dissipated to the top.

The X3D chips change this, because they contain two silicon dies bonded together, right on top of each other. The lower die gets through-silicon vias built into it, so it can provide power and signals to the top one. In prior generations, the normal CPU die was on the bottom and the cache chip went on top. For Zen 5, they reversed that.

A complication is that the bonding process they use apparently requires the top chip to be thinned. That makes it structurally weak and spreads heat worse laterally, which would be bad for peak clocks. So they bond another, thicker piece of silicon on top of it.
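
To put rough numbers on it: taking the ~0.5mm wafer from above and the <20µm active stack from the article's quote, the active layers are on the order of 20/500 = 4% of a raw wafer's thickness. Even a traditional die is overwhelmingly structural silicon; the X3D stack just turns that bulk into a separately bonded slab.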


Well, that was kinda my point:

- traditional: bottom - actual IC; top - "dummy silicon" (part of the same chip).

- new: bottom - thin IC 1; middle - thin IC 2; top - "dummy silicon" (separate chip).


"requires the top chip to be thinned" and then "bond a thicker piece of silicon on top of it." That doesn't seem very efficient to me but I'm sure they know what they are doing.


Yes, people have found you can sand the die down by ~0.3mm on a 9900K if you're that desperate to improve temps by 2-3 °C: https://www.youtube.com/watch?v=O1ed_rBRb7Q&t=525s


Yeah. I think other CPUs also have a large chunk of 'dummy' silicon, except it's just the wafer the circuits are manufactured on the surface of, instead of a separate part added after the wafer is ground down to allow for the connections through the back.


I'd say it's load-bearing silicon. Like, literally.


yup: https://grapesjs.com/

Self-hostable too.


Thanks for sharing! This looks good.

