Hacker News | purkka's comments

Greylisting is great until it delays your email login/signup verification codes for 20 minutes. Especially if they expire in 15.

I guess this only shows how email is used for entirely orthogonal purposes now.


I have an auto-whitelist for senders that handle greylisting properly, which means the first signup email is indeed delayed past usefulness, but the second works.

On rare occasions I get frustrated by this and am forced to log in via SSH to manually let a greylisted address through - though normally I'm not so time sensitive. My greylisting delay is only 5 minutes.


I tend to despise senders that believe email is always an effective real-time channel. Delays happen for all sorts of reasons, ranging from massive outages to scanning incoming emails for spam or malware (my corporate email is sloooow).

Greylisting has been so effective for my personal email that I don't mind waiting a bit on the rare occasion (by now, most senders are already recognized). And on the rare occasion I do get spam, it's been cathartic adding a rule to reject the sender with a quippy SMTP error. It's also been easy enough to just forward it to abuse@google.com, because it's almost always from Gmail.


Unless you whitelist the notification email, which I've had to do a few times.


Whitelisting doesn't work if one doesn't know the email domain name the service will use.

An Amazon verification email will be sent from "account-update@amazon.com". It's intuitive to predict "@amazon.com" so whitelisting works.

However, State Farm Insurance login verification codes are actually sent from "noreply@sfauthentication.com" instead of the expected "@statefarm.com".


Per the tweet linked in the article there were also random bans in addition to the ban feed shitposting.

https://x.com/KingGeorge/status/2004902566434668686


Copy of tweet:

>@KingGeorge

>Seems like R6 is completely fucked. It’s unreal how bad.

>Hackers have done the following.

>1. Banned + unbanned thousands of people.

>2. Taken over the ban feed can put anything.

>3. Gave everyone 2 billion credits + renown.

>4. Gave everyone every skin including dev skins.

>5:09 AM · Dec 27, 2025


Python has LiteralString for this exact purpose. It's only on the type checker level, but type checking should be part of most modern Python workflows anyway. I've seen DB libraries use this a lot for SQL parameters.

https://typing.python.org/en/latest/spec/literal.html#litera...


Beyond LiteralString, there are now also t-strings, introduced in Python 3.14, which ease writing templated strings without losing out on security. Java has something similar with the Template class, previewed in Java 21.


It does, especially at the scale of operating systems.

Bugs and vulnerabilities are always being found, with fewer and fewer people in the pool that might even theoretically want to pay for fixing them.

Also, hardware does deteriorate, and the same story applies to adding software support for whatever hardware is currently available.


> Bugs and vulnerabilities are always being found

none that weren't there from the beginning


Based on the various benchmarks linked here and in the OP, the name feels justifiable. "Mini" models tend to be a lot worse compared to the base model than this one seems to be.


> Are you aware that here you are arguing for criminal sanctions on the order of 10 years in prison, for writing a letter?

It's about writing a letter that can result in someone else receiving criminal sanctions on the order of 10 years in prison, when that someone might not have even written a letter.

Provably false is essential here.


> Provably false

As far as I can tell, nobody has offered (or likely can offer) proof of anything on either side and yet people are talking about long prison sentences for speech.


> Unlike GNU Emacs, JOVE does not support UTF-8.

If this is still true in the latest versions, I find it pretty amazing that something like this has been maintained all the way until 2023.


Well, that's basically a deal breaker in 2025.

But the real question is: Can it run evil mode?!


No. It lacks elisp. It offers some familiar keyboard shortcuts to appease your muscle memory, multiple buffers, and screen splits, but apparently not much more.


Still maintained. There was an update in May of this year.


ASCII is still adequate for a great many programming tasks, especially in the highly confined environments where JOVE can make sense.


Generally, yes: https://reproducible-builds.org/docs/timestamps/

Since the build is reproducible, it should not matter when it was built. If you want to trace a build back to its source, there are much better ways than a timestamp.


C compilers offer __DATE__ and __TIME__ macros, which expand to string constants that describe the date and time that the preprocessor was invoked. Any code using these would have different strings each time it was built, and would need to be modified. I can't think of a good reason for them to be used in an actual production program, but for whatever reason, they exist.


And that’s why GCC (among others) accepts SOURCE_DATE_EPOCH from the environment, and also has -Wdate-time. As for using __DATE__ or __TIME__ in code, I suspect that was more helpful in the age before ubiquitous source control and build IDs.


Source control only helps you if everything is committed. If you're, say, working on changes to the FreeBSD boot loader, you're probably not committing those changes every time you test something but it's very useful to know "this is the version I built ten minutes ago" vs "I just booted yesterday's version because I forgot to install the new code after I built it".


Versions built into the code are nice. I think the correct answer is to commit before the build proper starts (automatically, without changing your HEAD ref) and put that in there. Then you can check version control for the date information, but if someone else happens to add the same bytes to the same base commit, they also have the same version that you do. (Similarly, you can always make the date "XXXXXXXXXXXXXXXXXXXXXX" or something, and just replace the bytes with the actual date after the build as you deploy it.)

What I actually did at $LAST_JOB for dev tooling was to build in <commit sha> + <git diff | sha256> which is probably not amazingly reproducible, but at least you can ask "is the code I have right now what's running" which is all I needed.

Finally, there is probably enough flexibility in most build systems to pick between "reuse a cache artifact even if it has the wrong stamping metadata", "don't add any real information", and "spend an extra 45 cpu minutes on each build because I want $time baked into a module included by every other source file". I have successfully done all 3 with Bazel, for example.
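A minimal shell rendition of that "commit sha plus diff hash" stamp (the exact format and the throwaway demo repo are illustrative, not what the original tooling did):

```shell
# Demo in a throwaway repo.
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# Stamp = short commit hash plus a hash of whatever is uncommitted.
# A clean tree hashes the empty diff, so the stamp only changes when
# the tree is dirtied (or a new commit is made).
stamp="$(git rev-parse --short HEAD)-$(git diff HEAD | sha256sum | cut -c1-12)"
echo "build stamp: $stamp"
```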


> you're probably not committing those changes every time you test something

I’m not, but I really think I should be. As in, there should be a thing that saves the state of the tree every time I type `make`, without any thought on my part.

This is (assuming Git—or Mercurial, or another feature-equivalent VCS) not hard in theory: just take your tree’s current state and put it somewhere, like in a merge commit to refs/compiles/master if you’re on refs/heads/master, or in the reflog for a special “stash”-like “compiles” ref, or whatever you like.

The reason I’m not doing it already is that, as far as I can tell, Git makes it stupendously hard to take a dirty working tree and index, do some Git to them (as opposed to a second worktree using the same gitdir), then put things back exactly as they were. I mean, that’s what `git stash` is supposed to do, right?.. Except if you don’t have anything staged then (sometimes?..) after `git stash pop` everything goes staged; and if you’ve added new files with `git add -N` then `git stash` will either refuse to work, or succeed but in such a way that a later `git stash pop` will not mark these files staged (or that might be the behaviour for plain `git add` on new files?). Gods help you if you have dirty submodules, or a merge conflict you’ve fixed but forgot to actually commit.

My point is, this sounds like a problem somebody’s bound to have solved by now. Does anyone have any pointers? As things are now, I take a look at it every so often, then remember or rediscover the abovementioned awfulness and give up. (Similarly for making precommit hooks run against the correct tree state when not all changes are being committed.)
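One pointer: `git stash create` builds the stash commit and prints its hash without touching the worktree, the index, or the stash reflog, which sidesteps the whole `push`/`pop` round-trip. A sketch of the idea (the refs/compiles naming and the throwaway demo repo are illustrative):

```shell
# Demo in a throwaway repo with uncommitted work.
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"
echo "work in progress" > notes.txt
git add notes.txt

# `git stash create` records the current tree state as a dangling
# commit, leaving worktree, index, and stash list exactly as they were.
# An empty result means the tree was clean.
snapshot=$(git -c user.name=demo -c user.email=demo@example.com \
    stash create "pre-build snapshot")
[ -n "$snapshot" ] || snapshot=$(git rev-parse HEAD)

# Pin it under refs/compiles/<branch> so gc never reaps it.
git update-ref "refs/compiles/$(git rev-parse --abbrev-ref HEAD)" "$snapshot"
git status --short    # notes.txt is still staged, untouched
```

One caveat: like `git stash push` without `-u`, `stash create` skips untracked files, so genuinely new files need at least a `git add` first - which runs straight into the `add -N` quirks described above.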


An easy (ish) option here is to use autosquashing [1], which lets you create individual commits (saving your work - yay!) and then eventually clean em up into a single commit!

Eg

    git commit -am "Starting work on this important feature"

    # make some changes
    git add . && git commit --squash HEAD -m "I made a change"

Then once you’re all done, you can do an auto squash interactive rebase (`git rebase -i --autosquash`) and combine them all into your original change commit.

You can also use `git reset --soft $BRANCH_OR_COMMITTISH` to go back to an earlier commit but leave all changes (except maybe new files? Sigh) staged.

You also might check out `git reflog` to find commits you might’ve orphaned.

[1] https://thoughtbot.com/blog/autosquashing-git-commits


> If you're, say, working on changes to the FreeBSD boot loader, you're probably not committing those changes every time you test something

Whyever not? Does the FreeBSD boot loader not have a VCS or something?


A subtlety that may be lost: FreeBSD uses CVS, and so there isn't a way to commit locally while you're working, like with a DVCS.


FreeBSD hasn't used CVS since 2008.


Huh! So, before I posted this, I went to go double check, and found https://wiki.freebsd.org/VersionControl. What I missed was the (now obvious) banner saying

> The sections below are currently a historical reference covering FreeBSD's migration from CVS to Subversion.

My apologies! At the end of the day, the point still stands in that SVN isn't a DVCS and so you wouldn't want to be committing unfinished code though, correct?

(I suspect I got FreeBSD mixed up with OpenBSD in my head here, embarrassing.)


You could still use git-svn, but yeah, as another commenter wrote, I don't think a reproducible build is that useful when debugging; it should be fine to have an actual timestamp in the binaries.


Well yes, but we've actually migrated to Git now. ;-)


Welp! Egg on my face twice!


It's in the FreeBSD src tree. But we usually commit code once it's working...


Huh. If I was confident enough in a change to consider it worth doing an actual boot to test I'd certainly want to have it committed, to be able to track and go back to it. Even the broken parts of history are valuable IME.


Which is fine, you don't need to use a reproducible build for local dev and can just use the real timestamp.


Nobody cares about reproducibility of local development builds, so just limit your use of date/time to those and use a more appropriate build reference for release builds.


> I can't think of a good reason for them

I work on a product whose user interface in one place says something like “Copyright 2004-2025”. The second year there is generated from __DATE__, that way nobody has to do anything to keep it up to date.


I mean, you could do that, though it's sort of a lie. Something better would be using the date of the most recent commit, which would be both more accurate, as far as authorship goes, and actually deterministic.

Pipe something like this into your build system:

    git log -1 --pretty=format:%ad --date=format:%Y


Toolchains for reproducible software likely let you set these values, or ensure they are 1970-01-01 00:00:00


Nix sets everything to the epoch, although I believe Debian's approach is to just use the date of the newest file in the dsc tarballs.


Debian's approach is actually to use the date specified in the top entry in the debian/changelog file. That's more transparent and resilient than any mtime.


Nix can also set it to things other than 0; I think my favorite is to set it by the time of the commit from which you're building.


Which is also used when the contents of a derivation will be included in a zip file. The Unix epoch is about a decade older than the zip epoch.


Strangely enough, sometimes using the epoch can expose bugs in libraries (etc.) when running or building in a timezone west of Greenwich due to the negative time offset taking time "below" zero.


It's super nice to have timestamps as a quick way to know what program you're looking at.

Sticking it into --version output is helpful for knowing whether, for example, the Python binary you're looking at is actually the one you just built rather than something shadowing it.


The whole point of reproducible builds is that you don't need to rely on timestamps and similar information to know which binary you're looking at.


"Security through obscurity" can definitely be defined in a meaningful way.

The opposite of "bad security through obscurity" is using completely public and standard mechanisms/protocols/algorithms such as TLS, PGP or pin tumbler locks. The security then comes from the keys and other secrets, which are chosen from the space permitted by the mechanism with sufficient entropy or other desirable properties.

The line is drawn between obscuring the mechanism, which is designed to have measurable security properties (cryptographic strength, enumeration prevention, lock security pins), and obscuring the keys that are essentially just random hidden information.

Obscuring the mechanism provides some security as well, sure, but a public mechanism can be publicly verified to provide security based only on secret keys.


This is a Gmail-specific feature. I'd guess it's there for user convenience and some protection against typos (accidental or malicious).

https://support.google.com/mail/answer/7436150?hl=en

