Hacker News | flowerthoughts's comments

There's no -c on the command line, so I'm guessing this is starting fresh every iteration, unless claude(1) has changed the default lately.

Tying this into the Paul Krugman post about social media tech giants running the US [1], perhaps it's the US running the tech giants for mass surveillance? Especially of foreigners, of course.

1. https://news.ycombinator.com/item?id=46204100

Edit: Krugman


They have common interests.

Governments around the world criticise social media and tech giants, but they still work with them because they want the concentration of power that enables surveillance.


Of course. And it's not just surveillance, it's censorship and narrative shaping -- doing a convenient end-run around the first amendment's prohibition on government infringing speech.

Pretty sure this is why Microsoft acquired Skype.



It's the other way around: the US wants Israel as an asset and is currently fighting to keep it one, despite having been implicated in the accelerated genocide in Palestine, the crime of aggression, and so on. This is an extremely unpopular policy, and will only become more so over time as current young generations mature with the trauma of having spent years constantly exposed to graphic material from rather heinous atrocities, and to proud celebrations from the perpetrators. One of the few solutions to this problem that relevant US elites consider viable is to extend control over mass and social media, as recently and infamously voiced by Hillary Clinton, who says it outright at staged events.

In this context it is important to keep in mind that the bulk of members of the Zionist movement are Christians from modern denominations. The unnamed "they" you mention are these people, not to be conflated with Israeli Jews or Jewish Zionists in general, who are a minority influence in the movement.

I'm also fairly certain that the shaky old people chasing this control over communications actually believe that younger generations are more or less brainwashed by what they presumably see on screens, which is likely to be quite the psychological projection.


Layman thoughts: a photon cannot experience the universe, since from its own perspective it passes through it instantly. It seems to me the universal speed limit creates an observability barrier that is really fascinating. The question is what we're missing, because we're zipping through _something_ at the speed of light relative to it.

Lately, I've been wondering what evidence we have that the speed of the photon/light is really the universal speed limit, and not a very close fraction of it. I could find the argument that a photon must be massless, otherwise photons of different wavelengths would travel at different speeds. But that says nothing of the speed of a massless photon relative to maximum causality propagation speeds.


It does, though. Because it's massless, it needs to be going at either max speed or zero speed. And a zero-mass, zero-energy object is a pretty good working definition of "nothing", so photons must travel at the speed of causality, thus making it "the speed of light".

Thanks for the reply. That's still a theoretical reasoning. "Based on our current _models_, it must follow that c=c'." I can accept that. I guess part of a wider theoretical answer is that a photon is just an interaction in quantum fields, and that indicates there's nothing special about a photon that could limit its speed (as you imply.) What you're saying makes me think I should be looking for impediments for attaining speed, and it seems only (inertial) mass is that thing.

My question is if this part of the model has been validated experimentally somehow.

BTW, it seems odd calling a photon a zero-energy object.


Photons are zero-mass with some energy (and thus moving at max speed). I was trying to convey that the only way for something to be massless and not moving at the speed of light would be for it to be stationary (and thus zero-energy).
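The massless-must-move-at-c reasoning in this sub-thread can be made precise with the standard relativistic energy-momentum relation (textbook special relativity, not something stated elsewhere in the thread):

```latex
E^2 = (pc)^2 + (mc^2)^2, \qquad v = \frac{pc^2}{E}
% Setting m = 0 gives E = pc, hence v = pc^2/(pc) = c.
% The only massless alternative is p = 0, which forces E = 0: "nothing".
```

So a massless object with any energy at all moves at exactly c, and the m = 0, E = 0 case is the "zero speed, zero energy" non-object described above.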

You're joking, but that's probably the right strategy: make sure to enjoy things on both sides of the aisle, so you don't have to worry about which side adds, and which removes, years. And then don't fret about it.

Aside from that, I'd love to know how each of those items affects life quality. Living long is only a life goal up to a certain age, and from what I've seen around me, that age is very rarely 90.


Yeah, living to old age at the cost of overoptimizing every little thing in your life does not seem like a worthwhile endeavor. You'll only add years that won't be terribly useful or pleasant anyway, because everyone has to deal with some form of wear and tear regardless. At the very least, all the old people I know have to deal with some hearing and sight loss, and even when they are in decent physical shape, they seem to be hurting somewhere.

It feels like trying to be immortal, which is a bit of a folly.

Anyway, the other day I noticed that Warren Buffett is just retiring at the age of 94. The man has eaten McDonald's for breakfast for much of his life. Diet cannot be that big of a deal.

What those epidemiological studies reveal is that food associated with higher class makes you live longer, which is reverse causation, at best.


It seems fine that financial centers subsidise other regions. GP wasn't asking to ban building the data centers there, just to make it more expensive, because the delivery is more expensive.

I also don't see why the DMARC reporting would retry sending. If the receiver isn't receiving right away, surely it's okay to just drop that report to keep the queue small.

We already had 'low effort' mail queues (for things like password reset emails: these are retried 1/2/4/8 minutes apart and don't generate bounces, other than an API flag and a metrics record), to which we added 'least effort' for DMARC reports. Retry once, then forget about the entire thing other than incrementing a counter for the destination domain.
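The tiered queues described above can be sketched roughly like this. This is a minimal illustration, assuming hypothetical queue-class names ("normal", "low_effort", "least_effort"); the real system's schedules and naming are not given beyond the 1/2/4/8-minute and retry-once descriptions.

```python
from datetime import timedelta

# Hypothetical retry schedules mirroring the tiers described above:
# "low_effort" retries 1/2/4/8 minutes apart, "least_effort" retries once,
# "normal" uses a conventional long backoff.
RETRY_SCHEDULES = {
    "normal": [timedelta(minutes=30 * 2**n) for n in range(6)],
    "low_effort": [timedelta(minutes=m) for m in (1, 2, 4, 8)],
    "least_effort": [timedelta(minutes=1)],
}

def next_retry(queue_class: str, attempts_so_far: int):
    """Delay before the next attempt, or None to drop the message.

    On None, the caller would not bounce; it would just increment a
    failure counter for the destination domain, as described above.
    """
    schedule = RETRY_SCHEDULES[queue_class]
    if attempts_so_far >= len(schedule):
        return None
    return schedule[attempts_so_far]
```

For example, `next_retry("least_effort", 1)` returns None: one failed attempt is all a DMARC report gets before being forgotten.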

> retried 1/2/4/8 minutes apart

That's generally not very clever, as it will impose an unneeded burden on a receiving server which actually has temporary resource problems, and it will collide with greylisting, for example.

RFC 5321 states in section "4.5.4.1. Sending Strategy" that the retry interval should be at least 30 minutes, while the give-up time needs to be at least 4–5 days:

https://datatracker.ietf.org/doc/html/rfc5321#section-4.5.4


SHOULD is not MUST. These capitalized terms have very specific meanings in RFCs; see RFC 2119: https://datatracker.ietf.org/doc/html/rfc2119. SHOULD is:

3. SHOULD This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course.

MUST is a requirement. You left out the "however" part:

In general, the retry interval SHOULD be at least 30 minutes; however, more sophisticated and variable strategies will be beneficial when the SMTP client can determine the reason for non-delivery.

There's absolutely nothing wrong with a fine-tuned backoff. I am not saying the specific backoff discussed by GP is best, merely that 30 minutes is absolutely not a requirement, and is in fact discussed in tandem with the observation that "more sophisticated strategies" are actually beneficial.
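A "more sophisticated and variable strategy" in the RFC's sense might look something like this sketch: pick the next delay based on why the previous attempt was deferred. The reply-code handling here is an assumption for illustration, not a quote from any implementation.

```python
def backoff_minutes(attempt: int, smtp_code: int, smtp_text: str = "") -> int:
    """Minutes to wait before retry number `attempt` (0-based).

    Base schedule is 1/2/4/8/... minutes, but if the deferral looks like
    greylisting or resource pressure (421, or telltale reply text), back
    off further: hammering a struggling server every couple of minutes
    only adds to its burden.
    """
    base = 2 ** attempt
    text = smtp_text.lower()
    if smtp_code == 421 or "greylist" in text or "try again later" in text:
        return max(base, 15)
    return base
```

So a plain transient failure retries quickly, while a greylisting-style `450 greylisted, try again later` waits at least 15 minutes, which is usually past typical greylisting windows.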

The RFC does not agree with you. Quoting it only partially, as you have done, does not help.


> the retry interval should be at least 30 minutes, while the give-up time needs to be at least 4–5 days

RFCs have very little to do anymore with the realities of email delivery. And advocating for password reset emails to only be retried after 30 minutes (all while the user is manically mashing the 'resend link' button) and/or to be kept around for 5 days (while the link contained therein expires after an hour) doesn't either.


This replaces an anonymous token with a LetsEncrypt account identifier in DNS. As long as accounts are not 1:1 to humans, that seems fine. But I hope they keep the other challenges.

I really would have felt better with a random token that was tied to the account, rather than the account number itself. The CA side can of course decide to implement it either way, but all examples are about the account ID.
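One way the suggestion above could work, as a purely hypothetical sketch (this is not the construction in any ACME draft): the CA derives an opaque DNS label from a server-side secret and the account URI, so the published record binds to the account without exposing the account ID itself.

```python
import base64
import hashlib
import hmac

def dns_label_for_account(ca_secret: bytes, account_uri: str) -> str:
    """Hypothetical: opaque, stable DNS label bound to an ACME account.

    HMAC keyed with a CA-side secret means observers of DNS cannot map
    the label back to an account ID, while the CA can recompute and
    verify it. Truncated to 10 bytes and base32-encoded to stay
    DNS-label friendly (16 lowercase characters, no padding).
    """
    digest = hmac.new(ca_secret, account_uri.encode(), hashlib.sha256).digest()
    return base64.b32encode(digest[:10]).decode().lower().rstrip("=")
```

Whether derived like this or stored as a truly random per-account token, the point is the same: the DNS record need not carry the raw account number.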


That seems worth suggesting to the acme working group mailing list, if it hasn't already been discussed there.

I don't expect we'll ever remove the other validation methods, and certainly have no plans to do so.

There are pros and cons of various approaches.


Accounts are many-to-one with email addresses. Each of my servers has an individual account attached to the same email address.

Copy-paste KiCad snippets. Direct link: https://www.circuitsnips.com/

You're moving the goal posts. Your assertion was that viruses don't last long outside the body. GP shot down that argument. You have not refuted their argument.

Even without being that strict about the discussion, I think GP was making the point that viruses can survive for many days, so stating that "you'd only be exposed to viruses from people you already share a room (or even bed) with." is an argument that requires some elaboration.


This sounds useful, but the example of the feature rows reminds me how sad it is that CSS sometimes requires adding information about the document structure to make a layout work. In this case, the number of rows.


Ideally, we would have a way to align elements even when they don't share a parent. Or maybe a flex container that can have its layout mimic another flex container, so the distribution in them can line up. It seems there would be a lot of heuristics and edge cases to handle, though, to keep the DX simple.

