yosefk's comments | Hacker News

Seems like a case of snobbery on the part of these people. These are nice images, but not "high art," which I guess prompts some people to scoff at them.

Being critical of generic-looking murals doesn’t make someone a snob.

I searched for some pictures. The first couple I came across looked like the result of a prompt to an AI: "generate images of plastic honey bears with various outfits and/or accessories":

https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQajHzw...

https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRpoQbV...

There's AI slop, and then there's human slop.


Yeah I mean, they are cute little graphics and a fun character/brand, but I don’t exactly see how people consider this some masterful piece of artwork. I don’t live in SF, but I can imagine it gets old to see it everywhere.

It kinda does, friend.

The idea that someone is a snob because they dislike generic-looking artwork is a hilarious indicator of how far aesthetic discussion and standards have fallen. The word used to mean someone who looks down on the popular arts in favor of more traditional/expensive/sophisticated art.

Now apparently it means having any standards or metrics of evaluation, period. Either you think everything is equal aesthetically, or you’re a snob.

Thankfully this kind of empty opinion isn’t convincing many people these days.


You might not be a snob, but you sure as hell sound like one. It's okay when other people like simple things that you don't like.

Where did I say it’s not okay for people to like simple things I don’t like?

I just said having aesthetic opinions doesn’t make someone a snob.


[flagged]


I really don’t know how to reply to this.

I’m not “shaming someone’s work.” I said 1) they look like generic graphics, and 2) primarily, that someone isn’t a snob for disliking them, which is what the OP comment claimed.

Even then, analyzing a piece of artwork is called art criticism. It’s not exactly a new thing, nor is it some kind of personal attack.

But as I said above, the quality of aesthetic discussion has fallen so much that expressing any critical opinion, no matter how minor, is treated as some kind of shaming attack that indicates I have a personal problem or that I’m a snob. Which is a totally insane way to view the world.


Friend. Friend....

Snobbery is a spectrum. You might not perceive your words as snobbery, but I do. We just have a different opinion of where you fall on that snobbery line.


I'm a snob for good hn threads with substance, but this thread stinks.

Glad you could stop by to contribute! :)

"Ironically, among the four stages, the compiler (translation to assembly) is the most approachable one for an AI to build. It is mostly about pattern matching and rule application: take C constructs and map them to assembly patterns.

The assembler is harder than it looks. It needs to know the exact binary encoding of every instruction for the target architecture. x86-64 alone has thousands of instruction variants with complex encoding rules (REX prefixes, ModR/M bytes, SIB bytes, displacement sizes). Getting even one bit wrong means the CPU will do something completely unexpected.

The linker is arguably the hardest. It has to handle relocations, symbol resolution across multiple object files, different section types, position-independent code, thread-local storage, dynamic linking and format-specific details of ELF binaries. The Linux kernel linker script alone is hundreds of lines of layout directives that the linker must get exactly right."

I worked on compilers, assemblers and linkers, and this is almost exactly backwards


Exactly this. The linker threads the given blocks together, with fixups for position-independent code - that is what could be called rule application. The assembler is the pattern matching.
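
To make the "pattern matching" point concrete, here is a minimal sketch of how an assembler maps a couple of x86-64 instructions to bytes, REX prefix and ModR/M included. This is illustrative only (the register list is truncated and the instruction coverage is tiny), not anyone's actual assembler:

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Register numbers matching the x86-64 encoding order (truncated list).
    enum Reg : uint8_t { RAX = 0, RCX, RDX, RBX, RSP, RBP, RSI, RDI, R8, R9 };

    // REX prefix: 0100WRXB. W=1 selects 64-bit operand size,
    // R/B extend the reg/rm fields of the ModR/M byte.
    static uint8_t rex(bool w, bool r, bool b) {
        return 0x40 | (w << 3) | (r << 2) | b;
    }

    // mov reg, imm64  ->  REX.W + (0xB8 + reg) + 8-byte little-endian immediate
    static void mov_reg_imm64(std::vector<uint8_t>& out, Reg dst, uint64_t imm) {
        out.push_back(rex(true, false, dst >= R8));
        out.push_back(0xB8 + (dst & 7));
        for (int i = 0; i < 8; i++) out.push_back((imm >> (8 * i)) & 0xFF);
    }

    // add reg, reg  ->  REX.W + 0x01 + ModR/M (mod=11, reg=src, rm=dst)
    static void add_reg_reg(std::vector<uint8_t>& out, Reg dst, Reg src) {
        out.push_back(rex(true, src >= R8, dst >= R8));
        out.push_back(0x01);
        out.push_back(0xC0 | ((src & 7) << 3) | (dst & 7));
    }

    int main() {
        std::vector<uint8_t> code;
        mov_reg_imm64(code, RAX, 42);  // mov rax, 42
        add_reg_reg(code, RAX, RCX);   // add rax, rcx
        code.push_back(0xC3);          // ret
        for (uint8_t b : code) std::printf("%02x ", b);
        std::printf("\n");
    }

The real work is in the size of the tables (thousands of instruction forms), not in any single encoding, which is why filling them in is mechanical.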

This explanation confused me too:

  Each individual iteration: around 4x slower (register spilling)
  Cache pressure: around 2-3x additional penalty (instructions do not fit in L1/L2 cache)
  Combined over a billion iterations: 158,000x total slowdown
If each iteration is X times slower, then a billion iterations will also be X times slower. I wonder what is actually going on.
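
A rough sanity check, just multiplying the article's own figures:

    4x (register spilling) * 3x (cache pressure)  ~= 12x per iteration
    a billion iterations, each ~12x slower        -> ~12x overall, regardless of the count
    158,000x / 12x                                ~= 13,000x left unexplained by the listed causes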

Claude one-shot a basic x86 assembler + linker for me. Missing lots of instructions, yes, but that is a matter of filling in tables of data mechanically.

Supporting linker scripts is marginally harder, but as someone who has manually written compilers before, my experience is the exact opposite of yours.


I am inclined to agree with you... but, did CC produce a working linker as well as a working compiler?

I thought it was just the compiler that Anthropic produced.


Why would the correct output of a C compiler not work with a standard linker?

> Why would the correct output of a C compiler not work with a standard linker?

I feel it should for a specific platform/target, but I don't know if it did.

Writing a linker is still a lot of work, so if their original $20k cost of production did not include a linker I'd be less impressed.

Which raises the question: did CC also produce its own preprocessor, or did it just use one of the many free ones?


Thank you very much for your work. I think people envious of someone's compensation don't deserve a response

"Also, it [Claude Code] flickers" - it does, doesn't it? Why?.. Did it vibe code itself so badly that this is hopeless to fix?..

Because they target 60 fps refresh, with 11 of the 16 ms budget per frame being wasted by react itself.

They are locked in this naive, horrible framework that would be embarrassing to open source even if they had the permission to do it.


That's what they said, but as far as I can see it makes no sense at all. It's a console app. It's outputting to stdout, not a GPU buffer.

The whole point of react is to update the real browser DOM (or rather their custom ASCII backend, presumably, in this case) only when the content actually changes. When that happens, surely you'd spurt out some ASCII escape sequences to update the display. You're not constrained to do that in 16ms and you don't have a vsync signal you could synchronise to even if you wanted to. Synchronising to the display is something the tty implementation does. (On a different machine if you're using it over ssh!)

Given their own explanation of react -> ascii -> terminal, I can't see how they could possibly have ended up attempting to render every 16ms and flickering if they don't get it done in time.

I'm genuinely curious if anybody can make this make sense, because based on what I know of react and of graphics programming (which isn't nothing) my immediate reaction to that post was "that's... not how any of this works".


Claude code is written in react and uses Ink for rendering. "Ink provides the same component-based UI building experience that React offers in the browser, but for command-line apps. It uses Yoga to build Flexbox layouts in the terminal,"

https://github.com/vadimdemedes/ink


I figured they were doing something like Ink, but interesting to know that they're actually using Ink. Do you have any evidence that's the case?

It doesn't answer the question, though. Ink throttles to at most 30fps (not 60 as the 16ms quote would suggest, though the "at most" is far more important). That's done to prevent it churning out vast amounts of ASCII, preventing issues like [1], not as some sort of display sync behaviour where missing the frame deadline would be expected to cause tearing/jank (let alone flickering).

I don't mean to be combative here. There must be some real explanation for the flickering, and I'm curious to know what it is. Using Ink doesn't, on its own, explain it AFAICS.

Edit: I do see an issue about flickering on Ink [2]. If that's what's going on, the suggestion in one of the replies to use alternate screen sounds reasonable and nothing to do with having to render in 16ms. There are tons of TUI programs out there that manage to update without flickering.

[1] https://github.com/gatsbyjs/gatsby/issues/15505

[2] https://github.com/vadimdemedes/ink/issues/359


How about the ink homepage (same link as before), which lists Claude as the first entry under

Who's Using Ink?

    Claude Code - An agentic coding tool made by Anthropic.

Great, so probably a pretty straightforward fix, albeit in a dependency. Ink does indeed write ansiEscapes.clearTerminal [1], which does indeed "Clear the whole terminal, including scrollback buffer. (Not just the visible part of it)" [2]. (Edit: even the eraseLines here [4] will cause flicker.)

Using alternate screen might help, and is probably desirable anyway, but really the right approach is not to clear the screen (or erase lines) at all but just write out the lines and put a clear to end-of-line (ansiEscapes.eraseEndLine) at the end of each one, as described in [3]. That should be a pretty simple patch to Ink.

Likening this to a "small game engine" and claiming they need to render in 16ms is pretty funny. Perhaps they'll figure it out when this comment makes it into Claude's training data.

[1] https://github.com/vadimdemedes/ink/blob/e8b08e75cf272761d63...

[2] https://www.npmjs.com/package/ansi-escapes

[3] https://stackoverflow.com/a/71453783

[4] https://github.com/vadimdemedes/ink/blob/e8b08e75cf272761d63...
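
For concreteness, a minimal sketch of the redraw-in-place idea from [3], written here as plain C++ emitting ANSI escapes directly (this is just the terminal technique, not Ink or Claude Code source): move the cursor back to the top of the previously drawn block, rewrite each line followed by erase-to-end-of-line, and erase whatever is left below, instead of clearing first.

    #include <cstdio>
    #include <string>
    #include <vector>

    // Redraw a block of lines in place. The flicker comes from clearing the
    // screen and then repainting; this never clears, it only overwrites.
    // prev_lines is how many lines the previous frame occupied.
    void redraw(const std::vector<std::string>& lines, int prev_lines) {
        if (prev_lines > 0)
            std::printf("\x1b[%dA", prev_lines);  // cursor up to the top of the old frame
        for (const std::string& line : lines) {
            std::fputs(line.c_str(), stdout);
            std::fputs("\x1b[K\n", stdout);       // erase to end of line, then newline
        }
        std::fputs("\x1b[J", stdout);             // erase any leftover lines below the new frame
        std::fflush(stdout);
    }

    int main() {
        redraw({"frame 1, line A", "frame 1, line B"}, 0);
        redraw({"frame 2, only one line"}, 2);    // shorter frame; \x1b[J cleans up the rest
    }

Nothing here depends on a frame budget; you only emit bytes when content changes, and nothing ever goes blank in between.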


The Claude Code programmers are very open about the fact that they vibe code it.

I don't think they say they vibe code it, just that Claude writes 100% of the code.

Which oil producers get named and which get omitted on a given forum in these contexts is always interesting. On HN it is often SA or Russia, and almost never Qatar or Iran.


How dare you question the rigor of the venerable LLM peer review process! These are some of the most esteemed LLMs we are talking about here.


It's about formalization in Lean, not peer review


TFA explains how std::move is tricky to use and this is not a feature reserved for library writers


Of course it is not reserved for library writers - nothing is. But it is not a feature that application writers should worry about overmuch.


std::move is definitely there for optimizing application code and is often used there. Another silly thing you often see is people allocating something with a big sizeof on the stack and then std::moving it to the heap, as if it saves the copying
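
Roughly this pattern (a made-up minimal example; Blob is hypothetical, the shape of the code is what matters):

    #include <memory>
    #include <utility>
    #include <vector>

    // A type whose size comes from an inline array rather than from heap data
    // it owns. There is no pointer to steal, so moving it cannot be cheaper
    // than copying it: the bytes themselves have to travel either way.
    struct Blob {
        char data[1 << 20];  // 1 MiB stored inline
    };

    int main() {
        std::vector<std::unique_ptr<Blob>> blobs;

        Blob b{};  // 1 MiB on the stack
        // The std::move changes nothing: Blob's (implicit) move constructor can't
        // do anything smarter than copying the array, so the 1 MiB is copied into
        // the heap allocation anyway - after having lived on the stack first.
        blobs.push_back(std::make_unique<Blob>(std::move(b)));

        // What actually avoids the extra copy: construct it on the heap to begin with.
        blobs.push_back(std::make_unique<Blob>());
    }

If the goal was to avoid a big copy, the object needed to start out on the heap, or the type needed to own its storage through a pointer the way std::vector does.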


> another silly thing you often see is people allocating something with a big sizeof on the stack and then std::moving it to the heap, as if it saves the copying

never seen this - an example?


You could say the same thing about assemblers, compilers, garbage collection, higher-level languages, etc. In practice the effect has always been an increase in the height of the mountain of software that can be built before development grinds to a halt due to complexity. LLMs are no different


In my own experience (and from everything I’ve read), LLMs as they are today don’t help us as an industry build a higher mountain of software because they don’t help us deal with complexity — they only help us build the mountain faster.


I see this response a lot but I think it's self-contradictory. Building faster, understanding faster, refactoring faster — these do allow skilled developers to work on bigger things. When it takes you one minute instead of an hour to find the answer to a question about how something works, of course that lets you build something more complex.

Could you say more about what you think it would look like for LLMs to genuinely help us deal with complexity? I can think of some things: helping us write more and better tests, helping us produce fewer bugs, helping us get to the right abstractions faster, helping us write glue code so more systems can talk to each other, helping us port things to one stack so we don't have to maintain polyglot piles of stuff (or conversely helping us not worry about picking and choosing the best stuff from every language ecosystem).


> I see this response a lot but I think it's self-contradictory. Building faster, understanding faster, refactoring faster — these do allow skilled developers to work on bigger things. When it takes you one minute instead of an hour to find the answer to a question about how something works, of course that lets you build something more complex.

I partially agree. LLMs don't magically increase a human's mental capacity, but they do allow a given human to explore the search space of e.g. abstractions faster than they otherwise could before running out of time or patience.

But (to use GGP's metaphor) do LLMs increase the ultimate height of the software mountain at which complexity grinds everything to a halt?

To be more precise, this is the point at which the cost of changing the system gets prohibitively high, because any change you make will likely break something else. Progress becomes impossible.

Do current LLMs help us here? No, they don't. It's widely known that if you vibe code something, you'll pretty quickly hit a wall where any change you ask the LLM to make will break something else. To reliably make changes to a complex system, a human still needs to really grok what's going on.

Since the complexity ceiling is a function of human mental capacity, there are two ways to raise that ceiling:

1. Reduce cognitive load by building high-leverage abstractions and tools (e.g. compilers, SQL, HTTP)

2. Find a smarter person/machine to do the work (i.e. some future form of AI)

So while current LLMs might help us do #1 faster, they don't fundamentally alter the complexity landscape. Not yet.


Thanks for replying! I disagree that current LLMs can't help build tooling that improves rigor and lets you manage greater complexity. However, I agree that most people are not doing this. Some threads from a colleague on this topic:

https://bsky.app/profile/sunshowers.io/post/3mbcinl4eqc2q

https://bsky.app/profile/sunshowers.io/post/3mbftmohzdc2q

https://bsky.app/profile/sunshowers.io/post/3mbflladlss26


Rust HashSets are HashMaps with an empty type as the value type, but the compiler actually optimizes away the storage for the values, since that type is empty. Go doesn't bother to either define a set type like most languages do, or to optimize the map implementation with an empty type as the value type


The Chinese are ahead at too many things at this point to think they're only good at copying


And it is not like making a copy for cheaper isn't something that requires skill and innovation, or then iterating on that copy. Didn't Roomba just lose to these copies? If the West was truly so much more innovative and better, shouldn't they as a company be infinitely ahead still?


That depends heavily on where the cost saving came from. For a long time China made cheap copies with extremely cheap labor, though that may no longer be the case as it seems they're innovating on the manufacturing process these days.


I never said that, or that there's something wrong with copying. I just said the sentence implies copying. Which it does.

And in fact this "the Chinese only copy" meme is crap, as I point out in my last paragraph. Over the centuries the Chinese were the first at quite a few things.

But the sentence says what it says.

