
I'd much rather reduce the risk of mutation to the trunk, by having small easily reviewable commits direct to trunk.

It's less about reviewing commits from a year ago than about making change low-risk today. And small commits can easily be rolled back. The bigger the commit, the more likely the rollback will be entangled.

It's better to have partial features committed and in production, gated behind a feature flag, than to risk living in some long-lived branch.
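Something like this, as a toy sketch (the flag name and the flag lookup are invented for illustration, not any particular framework):

  // Toy sketch: a half-built feature merged to trunk but dark in
  // production; "rollback" is just not flipping the flag.
  import java.util.Set;

  public class CheckoutDemo {
      // In a real system this would come from a config/flag service.
      static final Set<String> ENABLED_FLAGS = Set.of(); // "new-checkout" off

      static String renderCheckout(String cart) {
          if (ENABLED_FLAGS.contains("new-checkout")) {
              return "new checkout for " + cart;  // partial feature, merged but dark
          }
          return "legacy checkout for " + cart;   // everyone still gets the old path
      }

      public static void main(String[] args) {
          System.out.println(renderCheckout("cart-42"));
      }
  }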


> I'd much rather reduce the risk of mutation to the trunk, by having small easily reviewable commits direct to trunk.

You're not addressing the problem. You're just wishing that the problem wouldn't happen as frequently as it does.

But that's like wishing that race conditions don't happen by making your allocations at a higher frequency.


I'm describing how Google works with teams with high release cadence, fwiw.

Also, your comment reads a little bit as a non sequitur.


I got the impression the author is female.

Have you ever noticed that magazines which target women also tend to have pictures of beautiful women on the cover? Same thing could be at play here.

The hard thing about coding isn't really the code. It's the data. Both data at rest and data flowing in and out of your system.

Vibe coding creates the illusion that code has become far more malleable. And it has, for greenfield, for a game, for a one-off stateless utility.

But most applications of significance work with a lot of data. Data resists the malleability you have with code. At scale, data is expensive to migrate and it's easy to make a mistake that loses data. With distribution, you may have to act at a distance: write code you hope will work with the data where it is, and follow careful migration patterns like dual writing, fallback reads, ongoing rewriting and so on.
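For the dual write / fallback read shape, roughly this (a minimal sketch; Store, MapStore and the key names are hypothetical, not any particular library):

  import java.util.HashMap;
  import java.util.Map;

  interface Store {
      String read(String key);
      void write(String key, String value);
  }

  class MapStore implements Store {
      private final Map<String, String> data = new HashMap<>();
      public String read(String key) { return data.get(key); }
      public void write(String key, String value) { data.put(key, value); }
  }

  public class MigrationSketch implements Store {
      private final Store oldStore;
      private final Store newStore;

      MigrationSketch(Store oldStore, Store newStore) {
          this.oldStore = oldStore;
          this.newStore = newStore;
      }

      public void write(String key, String value) {
          oldStore.write(key, value);   // dual write: keep both copies in sync
          newStore.write(key, value);
      }

      public String read(String key) {
          String value = newStore.read(key);   // prefer the new store
          if (value == null) {
              value = oldStore.read(key);      // fall back to the old one
              if (value != null) {
                  newStore.write(key, value);  // backfill: the "ongoing rewriting" step
              }
          }
          return value;
      }

      public static void main(String[] args) {
          Store old = new MapStore();
          old.write("user:1", "alice");  // data that so far exists only in the old store
          MigrationSketch store = new MigrationSketch(old, new MapStore());
          System.out.println(store.read("user:1"));  // falls back, then backfills
      }
  }

The code is the easy part; the hard part is operational, verifying that the two stores actually agree before you stop dual writing, which is exactly the kind of slow, stateful feedback loop that resists quick iteration.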

Distributed or privacy-gated data generates constraints that AI can't easily see, can't easily react to. AI thrives on quick feedback loops. Test-first works great. Testing in production only works when it's your hobby project.

In many ways, software businesses are gardeners of data. Data creates stickiness; when customers decide to take their data elsewhere, or create a new stock of data somewhere else, that's when they churn.

I'm not sure the unleashed masses would be happy to be such gardeners.

And there's a deeper point here, about sovereignty. Even if we get the magical data systems of the future, where the AI can do as you say, reliably, no matter how hard it is to execute: what if you tell it to do something irreversible? To drop a column, to combine separated data into one blob? The AI might advise you not to do it, but the AI can't actually fix the problem of bad judgement without removing your sovereignty. And that would be a very dangerous place to go; I would hope, and expect, that we don't go there.


Why is it crazy? Any rewrite that would be as flexible wrt mods would be shaped similarly.

Java garbage collection gets out of control when cramming 100+ poorly optimized mods together. The bedrock edition is great in theory but the proper mod API never appeared. Regardless, people have accomplished some really impressive stuff with commands, but it is an exercise in pain.

The other issue with Bedrock is that it's far from feature parity with Java. If these two things were hit, then Java could reasonably be retired. However, we are decades too late for it to be acceptable to introduce a breaking change to mod loading. So it's Java forever.


Java garbage collection is what's allowing those 100+ poorly optimized mods to be functional at the same time in the first place.

Games with robust modding will almost always feature a garbage-collected language, which is what's primarily used for the modding.

Consider this, if the mod interface was C/C++, do you think those poorly optimized mods could be trusted to also not leak memory?


>Consider this, if the mod interface was C/C++, do you think those poorly optimized mods could be trusted to also not leak memory?

Of course. Because they would fail loudly and would have to be fixed in order to run. Garbage collection is a crutch which lets broken things appear not broken.


Memory leaks very often don't fail loudly. Especially if they are slower leaks which don't immediately break the application.

A lot of the memory problems that you can see without a GC are hard to find and diagnose. Use after free, for example, very often appears safe. It only crashes or causes problems sometimes. Same for double free. And they are hard to diagnose because the problems they do create are often observed at a distance. Use after free will silently corrupt some bit of memory somewhere else; whatever trips over it might be completely unrelated.

It's the opposite of failing loudly.


> A lot of the memory problems that you can see without a GC are hard to find and diagnose

The nastiest leak I've ever seen in a C++ production system happened inside the allocator. We had a really hostile allocation pattern that forced the book-keeping structures inside the allocator to grow over time.


To be fair, I've seen something similar with the JVM, though it recovers. G1GC when it was first introduced would create these massive bookkeeping structures in order to run collections. We are talking about off-heap memory allocations of up to 20% of the JVM heap size.

It's since gotten a lot better with JVM updates, so much so that it's not a problem in Java 21 and 25.


> Consider this, if the mod interface was C/C++, do you think those poorly optimized mods could be trusted to also not leak memory?

Garbage collection does not solve memory leak problems. For example

- keeping a reference too long,

- much more subtle: having a reference to some object inside some closure

will also cause memory leaks in a garbage-collected language.
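A toy illustration of the closure case (names invented; the point is only that the GC cannot reclaim anything still reachable):

  import java.util.ArrayList;
  import java.util.List;

  public class ClosureLeakDemo {
      static final List<Runnable> listeners = new ArrayList<>();

      public static void main(String[] args) {
          byte[] hugeBuffer = new byte[64 * 1024 * 1024]; // 64 MB, needed only briefly
          // The lambda captures hugeBuffer; as long as the listener list
          // holds the lambda, those 64 MB can never be collected.
          listeners.add(() -> System.out.println("len: " + hugeBuffer.length));
          // hugeBuffer is "done" here, but it leaks until the listener is removed.
      }
  }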

The proper solution is to treat what you call "poorly optimized mods" as highly experimental (only those of very high quality can be treated differently).


> Garbage collection does not solve memory leak problems

It solves a class of memory leak problems which are much harder to address without the GC. Memory lifetimes.

It's true that you can still create an object that legitimately lives for the duration of the application, nothing solves that.

But what you can't do is allocate something on the heap and forget to free it. Or double free it. Or free it before the actual lifetime has finished.

Those are much trickier problems to solve which experienced C/C++ programmers trip over all the time. It's hard enough to have been the genesis of languages like Java and Rust.


I do wonder, then, how difficult it would be to mod games written in D.

I don't think D has a "must use GC" mode, so probably easy to hit a footgun. It's the footguns that make things hard (IMO).

There is no "must use GC" mode, as far as I'm aware, but the footguns you describe only exist if the programmers opt-out of the GC. It's somewhat similar to using JNI/FFM in Java: it's possible to escape the safety of the VM. Though it's much easier to do so in D.

I always had trouble running bedrock as a household server. Specifically it would stop accepting connections and required daily restarts. Java was much more reliable.

You're right. Hytale is certainly shaped similarly in that regard.

They have rate limits, but they also want to control the nozzle, and not all their users use all their allocation all the time.

In reality, heavy subscription users are subsidized by light subscription users. The rate limits aren't everything.

If agent harnesses other than Claude Code consume more tokens than average, or rather, if users of agent harnesses other than CC consume more tokens than average, well, Anthropic wouldn't be unhappy if those consumers had to pay more for their tokens.


> If agent harnesses other than Claude Code consume more tokens than average, or rather, if users of agent harnesses other than CC consume more tokens than average

Do they, though?


Have you not seen it any time you put any substantial bit of your own writing through an LLM, for advice?

I disagree pretty strongly with most of what an LLM suggests by way of rewriting. They're absolutely appalling writers. If you're looking for something beyond corporate safespeak or stylistic pastiche, they drain the blood out of everything.

The skin of their prose lacks the luminous translucency, the subsurface scattering, that separates the dead from the living.


The prompt I use for proof-reading has worked great for me so far:

  You are a proof reader for posts
  about to be published.

  1. Identify spelling mistakes
  and typos
  2. Identify grammar mistakes
  3. Watch out for repeated terms like
  "It was interesting that X, and it
  was interesting that Y"
  4. Spot any logical errors or
  factual mistakes
  5. Highlight weak arguments that
  could be strengthened
  6. Make sure there are no empty or
  placeholder links

> If you're looking for something beyond corporate safespeak

AI has been great for removing this stress. "Tell Joe no f'n way" in a professional tone and I can move on with my day.


If you tell me "no fucking way" by running it through an LLM, I will be far more pissed than if you had just sent me "no fucking way". At least in that case I know a human read and responded rather than thinking my email was just being processed by a damned robot.

Yeah but does it make sense to have invested all this money for this?

Lol no. Might be great for you as a consumer who is using these products for free. But expand the picture more.


> Yeah but does it make sense to have invested all this money for this?

No, but it's here. Why wouldn't I use it?


> If you're looking for something beyond corporate safespeak or stylistic pastiche, they drain the blood out of everything.

Strong agree, which is why I disagree with this OP point:

“Stage 2: Lexical flattening. Domain-specific jargon and high-precision technical terms are sacrificed for "accessibility." The model performs a statistical substitution, replacing a 1-of-10,000 token with a 1-of-100 synonym, effectively diluting the semantic density and specific gravity of the argument.”

I see enough jargon in everyday business email that, in the office, zero-shot LLM unspoolings can feel refreshing.

I have "avoid jargon and buzzwords" as one of very tiny tuners in my LLM prefs. I've found LLMs can shed corporate safespeak, or even add a touch of sparkle back to a corporate memo.

Otherwise very bright writers have been "polished" to remove all interestingness by pre-LLM corporate homogenization. Give them a prompt to yell at them for using 1-in-10 words instead of 1-in-10,000 "perplexity" and they can tune themselves back to conveying more with the same word count. Results… scintillate.


This is a good statement of what I suspect many of us have found when rejecting the rewriting advice of AIs. The "pointiness" of prose gets worn away, until it doesn't say much. Everything is softened. The distinctiveness of the human voice is converted into blandness. The AI even says its preferred rephrasing is "polished" - a term which specifically means the jaggedness has been removed.

But it's the jagged edges, the unorthodox and surprising prickly bits, that tear open a hole in the inattention of your reader, that actually get your ideas into their heads.


I think that mostly depends on how good a writer you are. A lot of people aren't, and the AI legitimately writes better. As in, the prose is easier to understand, free of obvious errors or ambiguities.

But then, the writing is also never great. I've tried a couple of times to get it to write in the style of a famous author, sometimes pasting in some example text to model the output on, but it never sounds right.


> I think that mostly depends on how good a writer you are. A lot of people aren't, and the AI legitimately writes better.

Even poor writers write with character. My dad misspells every 4th word when he texts me, but it’s unmistakably his voice. Endearingly so.

I would push back with passion that AI writes “legitimately” better, as it has no character except the smoothed mean of all internet voices. The millennial gray of prose.


Oh god no, trust me, I'm an academic. I'd rather read an AI essay than the stuff some of my students write.

AI averages everything out, so there's no character left.

Similar thing happens when something is designed by a committee. Good for an average use, but not really great for anything specific.


Haskell was a success story of design by committee (please correct me if I'm wrong).

A success story by what definition? I cannot judge Haskell as I don't know it well enough.

I should have added "usually". On average, when something is designed by a committee, the effect is like this, but not always. You don't have to take my word for it [1]. That kind of outcome is not guaranteed, and the result can be good in some cases. In the same way, AI-generated content can also sometimes have character.

[1] https://en.wikipedia.org/wiki/Design_by_committee


> A success story by what definition? I cannot judge Haskell as I don't know it well enough.

In the sense that it looks coherent and incorporates a lot of lessons learned over the decades of functional programming.

Design by committee usually fails either by being boring or by becoming a Frankenstein monster made of various contradictory opinions of committee members. Neither is the case with Haskell.

And the only bad design decision that I know of, namely not making Monad derive from Applicative, was corrected in a later release.


> A lot of people aren't, and the AI legitimately writes better.

It may write “objectively better”, but the very distinct feel of all AI generated prose makes it immediately recognizable as artificial and unbearable as a result.


It depends how you define "good writing", which is too often associated with "proper language", and by extension with proper breeding. It is a class marker.

People have a distinct voice when they write, including (perhaps even especially) those without formal training in writing. That this voice is grating to the eyes of a well-educated reader is a feature that says as much about the reader as it does about the writer.

Funnily enough, professional writers have long recognised this, as is shown by the never-ending list of authors who tried to capture certain linguistic styles in their work, particularly in American literature.

There are situations where you may want this class marker to be erased, because being associated with a certain social class can have a negative impact on your social prospects. But it remains that something is being lost in the process, and that something is the personality and identity of the writer.


I find most people can write way better than AI; they simply don't put in the effort.

Which is the real issue: we're flooding channels not designed for such low-effort submissions. AI slop is just SPAM in a different context.


You may be in a bubble of smart, educated people. Either way, one of the key ways to "put in the effort" is practice. People who haven't practiced often don't write well even if they're trying hard in the moment. Not even in terms of beautiful writing, just pure comprehensibility.

I may be in a bubble of smart people, but IMO AI is consistently far worse than many high school works I've read in terms of actual substance and coherent structure.

Of course, I've had arguments where people praise AI output, then I've literally pointed out dozens of mistakes, and they just kind of shrug, saying it's not important. So I acknowledge people judge writing very differently than I do. It just feels weird when I'd give something a 15% and someone else would happily slap on a B+.


you cannot write well if you do not read a lot (you need to develop taste). this disqualifies most people, myself included.

My experience has been

(ordered from best to worst)

1. Author using AI well

2. Author not using AI

3. Author using AI poorly

With the gap between 1 and 2 being driven by the underlying quality of the writer and how well they use AI. A really good writer sees marginal improvements and a really poor one can see vast improvements.


I am really conflicted about this because yes, I think that an LLM can be an OK writing aid in utilitarian settings. It's probably not going to teach you to write better, but if the goal is just to communicate an idea, an LLM can usually help the average person express it more clearly.

But the critical point is that you need to stay in control. And a lot of people just delegate the entire process to an LLM: "here's a thought I had, write a blog post about it", "write a design doc for a system that does X", "write a book about how AI changed my life". And then they ship it and then outsource the process of making sense of the output and catching errors to others.

It also results in the creation of content that, frankly, shouldn't exist because it has no reason to exist. The amount of online content that doesn't say anything at all has absolutely exploded in the past 2-3 years. Including a lot of LLM-generated think pieces about LLMs that grace the hallways of HN.


Even if they “stay in control and own the result”, it’s just tedious if all communication is in that same undifferentiated sanded-down language.

I think it’s essential to realize that AI is a tool for mainstream tasks like composing a standard email and not for the edges.

The edges are where interesting stuff happens. The boring part can be made more efficient. I don’t need to type boring emails, people who can’t articulate well will be elevated.

It’s the efficient popularization of the boring stuff. Not much else.


> The edges are where interesting stuff happens. The boring part can be made more efficient. I don’t need to type boring emails, people who can’t articulate well will be elevated.

I think that boring emails should not be written. What kind of boring emails do you NEED to write, but not WANT to write? Those are exactly the kind of email that SHOULD NOT be passed through an LLM.

If you need to say yes/no, you don't want to take the whole email conversation and let an LLM generate a story about why you said yes/no.

If you want to apply for leave, just make it minimal: "Hi <X>, I want to take leave from Y to Z. Thanks". You don't want to create 2 pages of justification for why you want to take this leave to see your family and friends.

In fact, for every LLM output, I want to see the input instead. What did they have in mind? If I have the input, I can ask LLM to generate 1 million outputs if I really want to read an elaboration. The input is what matters.

If I have the input, I can always generate an output. If I have the output, I don't know what the input was (i.e. the original intention).


when i pass my writings through ai the output is generally only marginally bigger than the input, and it derisks things a lot, making my prose a nice beige.

It contributes to making “standard” emails boring. I rather enjoy reading emails in each sender’s original voice. People who can’t articulate well aren’t elevated, instead they are perceived to be sending bland slop if they use LLMs to conceal that they can’t express themselves well.

I think it is also fairly similar to the kind of discourse a manager in pretty much any domain will produce.

He lacks (or has lost through disuse) technical expertise on the subject, so he uses more and more fuzzy words, leaky analogies, buzzwords.

This may be why AI-generated content has so much success among leaders and politicians.


Every group wants to label some outgroup as naively benefiting from AI. For programmers, apparently it's the pointy-haired bosses. For normies, it's the programmers.

Be careful of this kind of thinking, it's very satisfying but doesn't help you understand the world.


> But it's the jagged edges, the unorthodox and surprising prickly bits, that tear open a hole in the inattention of your reader, that actually get your ideas into their heads.

This brings to mind what I think is a great description of the process LLMs exert on prose: sanding.

It's an algorithmic trend towards the median; thus they are sanding down your words until they're a smooth average of their approximate neighbors.


Mediocrity as a Service

I liked mediocrity as a service better when it was fast food restaurants and music videos.

artificial mediocrity

No, but it's bad writing. It repeats information, it adds superfluous stuff, it doesn't produce more specific ways of saying things. You are making it sound like it's "too perfect" when it's bland, because it's artificial dumbness, not artificial intelligence.

Well said. In music, it's very similar. The jarring, often out of key tones are the ones that are the most memorable, the signatures that give a musical piece its uniqueness and sometimes even its emotional points. I don't think it's possible for AI to ever figure this out, because there's something about being human that is necessary to experiencing or even describing it. You cannot "algorithmize" the unspoken.

Bryan Cantrill referred to it as "normcore" on a podcast, and that's the perfect description.

I'm sure this can be corrected by AI companies.

The question is… why? What is the actual human benefit (not monetary)?

IME, in prose writing, arguing with an LLM can help a newer writer to gather 'the facts' (to help with research) and 'the objections to the facts' (same result) to anticipate an initial approach to the material. This can save a lot of organizational time. After which, the newer writer can more confidently approach topics in their own voice.

If AI wrote and thought better by default then I wouldn't have to read the AI slop my co-workers send me.

Just let my work have a soul, please.

That is NOT possible.

Why not?

Because even though at work it looks like you’re tasked with creating use values, you’re only there as long as the use values you create can be exchanged in the market for a profit. So every humane drive to genuinely improve your work will clash with the external conditions of your existence within that setting. You’re not there to serve people, create beautiful things, solve problems, nu-uh. You’re there to keep capital flowing. It’s soulless.

Unless you work in the public sector, non-profit or charity.

To think that “non-profit” work is actually non-profit work is just to not have grasped the nature of labor. You have to ask yourself: Am I producing use values for the satisfaction of human needs or am I working on making sure the appropriation of value extraction from the production of use values continues happening?

In some very extreme cases, such as in the Red Cross or reformist organizations, your job looks very clear, direct, and "soulful". You're directly helping desperate people. But why have people gotten into that situation? What is the downstream effect of having you help them? It's profit. It's always profit. You're salvaging humanity for parts to be bought and sold again. That doesn't make it dishonest work. It's just equally soulless.


Your argument appears to be that if you redefine all of humanity to be mere grist for a capitalist machine, you can then redefine any altruistic act as a measure to extract more profit.

Truly a feat of semantic legerdemain.


I don’t define anything. The truth is just that there’s no profit extraction without charity work. I’ve done lots of it. If you’ve done it, you know too.

As dark as it may seem to strip the romanticism out of what you call humanity, not only is there not a just salary for those who bear the weight of the machine, but there isn't even a salary per se.

If for you humanity is just doing seemingly nice guy work without question, call me a monster.


> The truth is just that there’s no profit extraction without charity work.

I'm not actually sure what you mean by this, so I can't really assess its truthiness

> not only is there not a just salary for those who bear the weight of the machine, but there isn't even a salary per se.

Or this - what do you mean?

>If for you humanity is just doing seemingly nice guy work without question, call me a monster.

Not even clear what you mean by this either.


My adversary has accused me of sophistry. As if I’m just a crafter of kaleidoscopes. I’m just giving back the compliment by calling out their romanticism.

Charity work can bring momentary fulfillment to a person. I'm not reducing humanity by situating it within the machine. You even have the right to reject the material proposition that charity work is a piece that composes the totality of the machine. But eventually all truth will be self-evident, so let's leave it to the reader.


I’m not your adversary, I’m just trying to understand your point.

Your original assertion was that 'you're only there as long as the use values you create can be exchanged in the market for a profit.'

When I suggested that non-profit or public sector jobs could certainly have soul, your responses were pretty incomprehensible.

Can you explain your point clearly and succinctly?


Because you’re aiding exploitation either way. It’s the same machine, just another part of it.

Right. So it's not just work - any good or altruistic act will, by definition, only act to stoke the machine.

It's certainly a way of thinking, I suppose


Incorrect. It’s mostly just work.

So if I carry out hip replacement surgery at my own cost, it's good?

But if the NHS pays me to carry out hip replacement surgery - funded from tax revenue, but free to the patient, it's bad?


This is not a moral judgement. It doesn't even matter which pocket the money is coming from.

Eh, it's not __that__ simple.

It is; just don't use a thing with no soul, like AI, if soul is what you're after.

The point is that he may not be using AI in any shape or form. Regardless, AI scrapes his work without explicit consent and then spits it back in "polished", soul-free form.

Great comment. It really is that simple.

If you give a smart AI these tools, it could get into it. But the personality would need to be tuned.

IME the Grok line are the smartest models that can be easily duped into thinking they're only role-playing an immoral scenario. Whatever safeguards it has, if it thinks what it's doing isn't real, it'll happily play along.

This is very useful in actual roleplay, but more dangerous when the tools are real.


I spend half my life donning a tin foil hat these days.

But I can't help but suspect this is a publicity stunt.


At least it isn't completely censored like Claude with the freak Amodei trying to be your dad or something.

Gemini is extremely steerable and will happily roleplay Skynet or similar.

> evidence at all that Anthropic or OpenAI is able to make money on inference yet.

The evidence is in third party inference costs for open source models.


That seq is probably supposed to be $(seq 0.05 0.05 0.5). Right now it's always 0.05.

Note that you can get random numbers straight from bash with $RANDOM. It's 15-bit (0 to 32767), but good enough here; this would get between 0.05 and 0.5: $(printf "0.%.4d\n" $((500 + RANDOM % 4501)))

