
Congratulations on the new adventure, Steve, and good luck!


What do you mean by “Pagemorphs”? A quick Google search suggests you’re the only person using this term so it’s hard to know what you’re recommending. I think it must mean e.g. https://turbo.hotwired.dev/handbook/page_refreshes?


Do you want another person (or yourself in the future) to be able to read your commits, in order, to get a clear account of what changed & why? If so, you should fix up those commits to address mistakes. If not, it doesn’t matter.


Not the OP but for me, no I don't actually.

My PR branches usually have a bunch of WIP commits, especially if I've worked on a PR across day boundaries. It's common for more complex PRs that I start down one path and then change to another, in which case a lot of the work that went into earlier commits is no longer relevant to the picture as a whole.

Once a PR has been submitted for review, I NEVER want to change previous commits and force push, because that breaks common tooling that teammates rely on to see what changed since their last review. When you do a force push, they now have to review the full PR because they can't be guaranteed exactly which lines changed, and your commit message for the old PR is now muddled.

Once the PR has been merged, I prefer it merged as a single squashed commit so it's reflective of the single atomic PR (because most of the intermediary commits have never actually mattered to debugging a bug caused by a PR).

And if I've already merged a commit to main, then I 100% don't want to rewrite the history of that other commit.

So personally I have never found the commit history of a PR branch useful enough that rewriting past commits was beneficial. The commit history of main is immensely useful, enough that you never want to rewrite that either.


> Once the PR has been merged, I prefer it merged as a single squashed commit so it's reflective of the single atomic PR (because most of the intermediary commits have never actually mattered to debugging a bug caused by a PR).

While working on a maintenance team, most of the projects we handled were on SVN, where we couldn't squash commits, and it has been a huge help enough times that I've turned against blind squashing in general. For example, once a bug was introduced during the end-of-work linting cleanup, and a couple of times after a code review suggestion. They were in rarely-triggered edge cases (one came up several years after the code was changed; others were only revealed after a change somewhere else exposed them), but because there was no squash happening afterwards it was easy to look at what should have been happening and quickly fix it.

By all means manually squash commits together to clean stuff up, but please keep the types of work separate. Especially once a merge request is opened, changes made from comments on it should not be squashed into the original work.


I wonder, based on your last comment, if this is just us talking past each other.

I try very hard to keep my PRs very focused on one complete unit of work at a time. So when the squash happens that single commit represents one type of change being made to the system.

So when going through history to pinpoint the cause of a bug, I can still see what logical change and unit of work caused it. I don't see the intermediary commits of that unit of work, but I have not personally gotten value out of that level of granularity (especially on team projects where each person's commit practices are different).

If I start working on one PR that begins to contain a refactor or change that makes sense to isolate, I'll make that its own PR, which will be squashed.


When you force push a PR, GitLab shows the changes since the last push, so it also depends on which forge you use. I could see that working less well on GitHub or simpler Git forges.


Yeah, I don't have much experience outside of GitHub for team projects, so maybe GitLab works better. GitHub just gives up and claims it can't give you a diff since the last review.


GitHub is basically worst in class here, it’s true. Some forges are slightly better, others are way better. It’s so sad because I like GitHub overall but this is a huge weakness of it.


Most of the bad modern Git practices summed up in one comment (one atomic, squashed comment).


It'd be more helpful if you explained what exactly is wrong and what you suggest doing instead.


In my experience, people who say "no I don't [care], actually" and who squash their PRs as a matter of policy do not care and cannot be convinced to care.

Like the GP said

> > If not, it doesn’t matter.

You might as well just conclude that it’s subjective since there’s no progress to be made.


It’s useful for me to see the mistake and the fix, as it is a good way to jog my memory about the “why” of things. Pristine commit history is not important to me.


Yes, but in that case, I want the fix of the original mistake to be done in a new commit.

Why?

Example #1:
- I am working on implementing API calls in the client, made 3 commits and opened a PR
- In the meantime, the BE team decides they screwed up and need to update the spec

If I now go and fix it in commit #1, I lose data. I lose both the version where the API call is in its original state and the record of what really happened, pretending everything is okay.

Example #2:
- I am writing a JVM implementation for our smart-lens
- In commit #2 I wrongly implement something, let's say garbage collection, and I release variables after they have 2 references due to a bug
- I am now 6 commits ahead and realise "oh shit wait I have a bug"

If I edit it inline in commit #2, I lose all the knowledge of what the bug was, what the fix is, what even happened or that there was a bug.

tldr: just do an interactive rebase
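For concreteness, a minimal sketch of that workflow in plain git (the SHA placeholder and commit messages are made up, not from the thread): record the fix as its own commit so the bug and its fix stay visible, and only fold it in later, if ever, with an interactive rebase.

  # record the fix as its own commit, referring to the commit it corrects
  git commit -m "Fix GC bug introduced in <sha-of-commit-2>"

  # or, if you do want it folded in eventually, mark it as a fixup now...
  git commit --fixup <sha-of-commit-2>

  # ...and squash it into place later with an interactive rebase
  git rebase -i --autosquash <sha-of-commit-2>~1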


From what I understand of how jj works (and I'm happy to be corrected because I'm still learning it), going back and editing those commits doesn't change the actual original "change": jj attaches a change ID to everything, and those original commits (once pushed) are immutable. So in theory, with jj, you should be able to see the original commit and the change that fixes it, and you can still couple them into a single commit without losing that change history.



Yes, that one! Thanks :)


Yes, it is.


This sounds intuitively true, but it’s a (very persistent) myth: https://www.viget.com/articles/just-use-double-quoted-ruby-s...


TIL. Thanks


I'm also not sure why it's intuitive at all. The article doesn't mention why: Ruby has a parse phase; it doesn't scan your string byte by byte every time it runs, looking for an interpolation. During parsing, those "foo#{bar}" literals are turned into something like 'foo'.dup << bar.to_s in the bytecode, but that happens only when the parser hits #{ in a double-quoted string; there's no penalty for it, and it happens just once. Only really old, naive interpreters parse source code each time a line is hit. (Or some weird ones, where interpolation can change semantics and argument count, like Tcl and Bash.)
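A quick way to convince yourself of this on CRuby (a rough sketch; it assumes RubyVM::InstructionSequence is available, i.e. MRI rather than JRuby or TruffleRuby):

  # Single- and double-quoted literals without interpolation compile identically.
  puts RubyVM::InstructionSequence.compile(%q('hello world')).disasm
  puts RubyVM::InstructionSequence.compile(%q("hello world")).disasm
  # Both disassemble to the same putstring instruction; the quote style
  # leaves no trace at runtime.

  # Interpolation is compiled into explicit concat instructions up front:
  puts RubyVM::InstructionSequence.compile(%q("hello #{name}")).disasm
  # => objtostring / concatstrings (names vary by Ruby version); nothing
  #    re-parses the string on each execution.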


The branding is incredibly confusing, but: Hotwire (https://hotwired.dev/) is a set of ideas about how to build dynamic web UI (“HTML over the wire”), and the Turbo, Stimulus and Native frameworks are complementary implementations of those ideas in JavaScript and native mobile code. You can use all, some or none of them to build a Hotwire-style app.

The three frameworks originated in Rails apps, so they have good Rails integrations, but there’s nothing Rails-specific about them and you can use them in any environment where HTML is sent from server to client, even a static web site.


Rails’ nightly CI already needed a fix: https://github.com/rails/rails/pull/52937


Here’s the PR. I don’t know whether it was discussed elsewhere. https://github.com/ruby/rdoc/pull/1157


That’s the joke.


If I call some place I've never heard of before, know nothing about, my first interaction with them on the phone shouldn't result in "Oh my god, these people seem like scammy used car salespeople!"

If your assertion is true, that it's a joke, it's going to backfire, because that call is the equivalent of what's happening here. I called, and the person on the other end ... thinking it a funny joke, did their best to convince me that they're scam artists.

That's what's happened here. I know nothing about this website, and this was my first impression. And no... my initial reaction isn't "Hmm. This website seems scammy and lame. Maybe I should spend my time investigating to determine if I'm right or wrong!". If I did that, I'd spend my entire life looking at scammy websites... I have better things to do.

Like I said, it's a shame to see this on what seems to be a reputable website. But I literally stopped reading and moved on to other things when I saw it. The website owner should take that into account.

(And indeed, I may be some small ratio, 2% of users, but it could be higher. It could be a lot higher. Or it could obviously be 0.2%. But putting a big "I'm a scam artist!" sign on a website is a bold move, and that first engagement is going to bite.)

Heck... if I was Google, any page with "One * trick" on it would be downranked.

TL;DR don't put a massive sign on your website that reads "I'm a scam artist, clickbait website!"


It pattern-matched "scam" so you classified it as "scam" and absolved yourself of doing any further thinking.

If something pattern-matches "legit" are you equally blase about sticking with your snap judgment and absolving yourself of doing any further thinking?


Snap judgement? I cite my phone call scenario, which this parallels.

Should I... what? Call back and see if they laugh and say "Oh no, we're not really used car salespeople, that was just a good joke!"? Why would I, or anyone, do that? Yet this is apparently a "snap judgement" and "not thinking" to you?

So why would I spend time trying to determine whether people who purposefully acted like scam artists and clickbait boneheads on their websites are actually playing a joke? What's in it for me? As I said, I'd have to do this for every single clickbait website.

I don't read clickbait websites, and I'm not going to take the time to see if it was all a big jolly joke.


It hardly requires a huge amount of investigation to see that's not a scam link. It literally has the blog author's name attached to it, along with a post date and a "read the full story" link that has the same web address as the blog. It's just a few seconds' work to see it's legit.


You're not fully getting it. I said with clarity that I know it's pointing back to his website. But any website with a clickbait title of "One small trick" or some such is a scammy, clickbaitish site.


Any negative aspect of media from the past can, and often will, be transformed into a positive trait in future media.

People embrace vinyl records in an age of digital music. They take photos with analog cameras even though everyone has a phone in their pocket. Musicians use the harsh artifacts of MP3 compression as creative effects in their music. The examples are countless, and they all emerge precisely when the media that once produced these unwanted artifacts becomes obsolete.

If you haven't noticed this shift, I suggest you learn to recognize it quickly. Otherwise, you might miss out on great content because it doesn't make it past your mental spam filter.

And if you don't want to adapt, that's fine too—just don't tell others how to manage their websites.


Nothing you cited has anything to do with emulating scam artists and clickbait boneheads, and trying to claim that acting like a clickbait artist is all the rage is invalid.

However, your commandment not to provide my opinion, predicated upon your own opinion, is the gold standard in ridiculousness.

Way over the line.

