Hacker News | filleokus's comments

Someone was using Xray, proxying to my employer, and it was detected in our attack surface management tool (Censys). I had a quite stressful few minutes before I realised what was going on: "how the hell has our TLS cert leaked to some random VPS hoster in Vietnam!?".

Thankfully for my blood pressure, whoever had set it up had left some kind of management portal accessible on a random high port number and it contained some strings which led me back to the Xray project.


Yes!

And many CRDT implementations have already solved this for the styled-text domain (e.g. bold and italic can be additive, but color can't, etc.).

But something user-definable would be really useful.
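As a toy sketch of what "user-definable" could look like (a made-up policy table and merge rule, not how any particular CRDT library does it):

    # Each attribute value carries a Lamport-ish timestamp: (value, ts).
    # "additive" attributes keep the set bit from either side; exclusive ones
    # (like color) fall back to last-writer-wins.
    POLICIES = {"bold": "additive", "italic": "additive", "color": "last-writer-wins"}

    def merge_attrs(a: dict, b: dict) -> dict:
        """Merge two concurrent attribute maps for the same text span."""
        merged = dict(a)
        for key, (vb, tsb) in b.items():
            if key not in merged:
                merged[key] = (vb, tsb)
            elif POLICIES.get(key) == "additive":
                va, tsa = merged[key]
                merged[key] = (va or vb, max(tsa, tsb))
            else:  # last-writer-wins
                va, tsa = merged[key]
                merged[key] = (vb, tsb) if tsb > tsa else (va, tsa)
        return merged

    # One peer bolds the span, another recolors it; both edits survive the merge.
    print(merge_attrs({"bold": (True, 3)}, {"color": ("red", 5), "bold": (False, 2)}))
    # {'bold': (True, 3), 'color': ('red', 5)}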


I agree, this seems like straight up bad design from a security perspective.

But at the same time, as a customer of GitHub, I would prefer if GitHub made it harder for vendors like CodeRabbit to make mistakes like this.

If you have an app with access to more than 1M repos, it would make sense for GitHub to require a short-lived token to access a given repository and only allow the "master" private key to update the app info or whatever.

And/or maybe design mechanisms that only allow minting of these tokens for the repo whenever a certain action is run (i.e. not arbitrarily).

But at the end of the day, yes, it's impossible for GitHub to both allow users to grant full access to whatever app and at the same time ensure stuff like this doesn't happen.


The private key isn’t a key in the “API key” sense, it’s a key in the “public/private key pair” sense. It’s not sent to GitHub, and there’s no way for them to know whether the signing of the token used to make the call happened in a secure manner or not, because GitHub doesn’t receive the key as part of the request at all.


GH Apps already use short-lived tokens that can be scoped per repo. You mint a JWT using your private key and exchange it for an installation token via the API. Then you use that token and dispose of it. That's the only way to use GH Apps (User Access Tokens are the same thing, but require user interaction). Those tokens always expire.
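For reference, that flow is roughly the following (a sketch using PyJWT and requests; the app ID, installation ID and repo name are placeholders):

    import time

    import jwt       # PyJWT, with the cryptography extra for RS256
    import requests

    APP_ID = "12345"            # placeholder GitHub App ID
    INSTALLATION_ID = "67890"   # placeholder installation on some org/repo
    private_key = open("app-private-key.pem").read()

    # 1. Sign a short-lived JWT with the app's private key.
    #    GitHub never sees the key itself, only this signed assertion.
    now = int(time.time())
    app_jwt = jwt.encode(
        {"iat": now - 60, "exp": now + 9 * 60, "iss": APP_ID},
        private_key,
        algorithm="RS256",
    )

    # 2. Exchange it for an installation access token, optionally scoped down
    #    to specific repositories and permissions.
    resp = requests.post(
        f"https://api.github.com/app/installations/{INSTALLATION_ID}/access_tokens",
        headers={
            "Authorization": f"Bearer {app_jwt}",
            "Accept": "application/vnd.github+json",
        },
        json={"repositories": ["some-repo"], "permissions": {"contents": "read"}},
    )
    token = resp.json()["token"]  # expires after about an hour; mint a new one as needed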

I'd rather GitHub finally fix their registry to allow these GH Apps to push/pull with that instead of a PAT.


That's...literally the way it already works.

There is a master private key that mints expiring limited-use tokens.

The problem was leaking the master private key.


Spivak is saying that the DNS method is superior (i.e. you are agreeing, and I do too).

One reason I can think of for HTTP-01 / TLS-ALPN-01 is on-demand issuance, issuing the certificate when you get the request. Which might seem insane (and kinda is), but can be useful for e.g. crazy web-migration projects. If you have an enormous, deeply nested domain sprawl that is almost never used but needs to stay up for some reason, it can be quite handy.
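To make that concrete, on-demand issuance is basically "issue on first request and cache"; a rough sketch assuming a certbot setup with port 80 free (locking, rate limits and error handling all handwaved):

    import pathlib
    import subprocess

    LIVE = pathlib.Path("/etc/letsencrypt/live")  # certbot's default layout

    def ensure_cert(hostname: str) -> pathlib.Path:
        """Issue a cert via HTTP-01 the first time a hostname is requested."""
        cert = LIVE / hostname / "fullchain.pem"
        if not cert.exists():
            # Blocks the first request while the challenge completes; fine for a
            # rarely-hit domain sprawl, insane for anything with real traffic.
            subprocess.run(
                ["certbot", "certonly", "--standalone", "--non-interactive",
                 "--agree-tos", "-d", hostname],
                check=True,
            )
        return cert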

(Another reason, soon, is that HTTP-01 will be able to issue certs for IP addresses: https://letsencrypt.org/2025/07/01/issuing-our-first-ip-addr...)


Oh I totally misread the comment.

Nevermind, I agree!


The comment is strangely worded; I too had to read it a couple of times to understand what they meant.


I ride rental scooters almost 10k minutes per year and would really like to get my hands on my own ride data to plug it into something like this (or simpler) to find the optimal routes for my regular trips.

Google Maps (or others) works well for finding a reasonable route, but I can do better on my own. One-way streets where bikes are allowed to go the opposite way are sometimes missing, as are short desire paths connecting bikeways, crossings where it's safe to do an (illegal) right-on-red, etc.
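Even just re-weighting a graph with that local knowledge would go a long way; a toy sketch with networkx and made-up segments:

    import networkx as nx

    G = nx.DiGraph()
    # (from, to, minutes) -- made-up segments; in practice built from OSM plus ride data
    for u, v, minutes in [
        ("home", "main_road", 4.0),
        ("main_road", "office", 6.0),
        ("home", "bike_path", 3.0),
        ("bike_path", "office", 5.5),
    ]:
        G.add_edge(u, v, cost=minutes)

    # Local knowledge the map apps miss: a contraflow one-way that's legal for bikes.
    G.add_edge("home", "one_way_shortcut", cost=1.5)
    G.add_edge("one_way_shortcut", "office", cost=4.0)

    print(nx.shortest_path(G, "home", "office", weight="cost"))
    # ['home', 'one_way_shortcut', 'office']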

Tried a GDPR data request with Voi but got nothing back :( But I hope the data is somehow available to urban planners; I think it would be a great source of truth to use in tools like this.


Unfortunately it is not that easy to simulate traffic, especially not on city scale or larger.

The most important input to a traffic simulator such as this is the so-called "traffic demand", i.e. the routes that vehicles follow. Typically this is provided in the form of origin-destination matrices, but this data is not freely available.
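To make "traffic demand" concrete: an origin-destination matrix just counts trips from zone i to zone j per time slice, which the simulator then expands into individually routed vehicles. A toy sketch with made-up numbers:

    import numpy as np

    zones = ["north", "center", "south"]
    # od[i, j] = vehicle trips from zone i to zone j during the morning peak hour.
    # Made-up numbers; real ones come from surveys, tolling data, mobile-network
    # traces etc., and are rarely freely available.
    od = np.array([
        [  0, 800, 150],
        [300,   0, 400],
        [100, 600,   0],
    ])

    # A simulator expands each cell into individual departures spread over the hour.
    rng = np.random.default_rng(42)
    trips = [
        (origin, dest, float(t))
        for i, origin in enumerate(zones)
        for j, dest in enumerate(zones)
        for t in rng.uniform(0, 3600, size=od[i, j])
    ]
    trips.sort(key=lambda trip: trip[2])
    print(len(trips), "trips; earliest departure:", trips[0])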

Next up is the way in which traffic lights work. Reality is very hard to model here, again because the data is not freely available.

And then, due to numerous modeling errors (in vehicle density, in how real roads differ from e.g. OpenStreetMap's representation, and in how traffic behaves), the simulations are highly unrealistic unless one spends some time calibrating them.

It costs quite a lot of money to set up a realistic simulation, and most governments use commercial tooling that is easier to use, such as Vissim or Aimsun.


Many people with mosquito issues around here (Sweden) use something like https://www.clasohlson.com/se/Mosquito-Magnet/p/31-7190 which burns propane to produce CO2 to lure in mosquitoes and then sucks them in with a fan towards a metal grid that zaps them with electricity.

Non-poisonous and, from what I've heard, fairly effective. Not sure if these exist in the US?


I used one of these for two weeks. It killed many mosquitos, yes, but it killed far, far more non-mosquito pollinators which ultimately is not acceptable to me. If I didn't care about the other insects, I'd just spray my yard with poison and be done with it.

As always, YMMV


I tried one of these once. It was annoying, seemed kind of dangerous, and wasn't especially effective. This is much better:

https://us-shop.biogents.com/products/bg-mosquitaire-co2

Admittedly, it is more annoying to refill.

That being said, depending on your mosquito species, the Biogents chemical attractant may be even more effective than CO2.


We have one with a fan and a coated UV light that does something similar; it just sucks them in till they dehydrate/starve.

https://www.costco.com/dynatrap-1-acre-insect-and-mosquito-t...

Catches moths as well, so it's not as eco-friendly.


Does it actually attract mosquitoes? I didn't think they'd be attracted to UV light.


You usually use a chemical attractant insert along with the device. I have one of these. Success seems to be hit or miss. I've run one for a year and it seems to only accidentally catch mosquitos, and that's not a unique outcome. But then others seem to have massive success.


I'm going off-topic, but what's up with that font with the ugly square-bottomed lower-case g's? I've been seeing it everywhere. It's not good.


I thought it couldn't be that bad but wow, that's bad. The first font-family entry is Clas Ohlson Sans Web, so presumably a font developed for Clas Ohlson? Looking at samples online, J and to a lesser extent t are similarly hideous.


You might even say it's... Grotesk.


Very cool, where can I get one?


I have seen a similar thing at https://us.biogents.com/

Haven't used it so I can't comment on its effectiveness


My in-laws use a system like this, maybe the same one. It uses a CO2 tank rather than burning propane. He runs it 24/7 and he needs to refill the CO2 tank every 1-2 weeks iirc so it's definitely more expensive to run than the death bucket. I think it's a 20 pound tank like you'd see on a homebrew keg system. When he showed it to me, it had caught an absolutely nightmarish number of mosquitoes over the course of a couple days. Like maybe half a liter to a liter in volume squirming around in the net. It made me queasy honestly. I didn't notice anything like bees or butterflies in the trap but I didn't look very closely.


Don't worry, when you press them into a patty and fry 'em up, they're quite moreish.


How is that better than what the article describes? You need gas, electricity (outdoors!) and get constant fan noise.


I guess it depends a lot on your situation, but for OP's method to be effective you need to out-compete other breeding grounds in not only your backyard but also X feet/meters away (whatever distance mosquitoes typically fly to "hunt").

If there's a nice shallow pond on the property line 100 feet from your porch (or water-filled tires at the sloppy neighbour's, or whatever it might be), I seriously doubt the efficacy of the method in the article.

This thing would lure in any mosquitoes (and unfortunately other things, as per sibling comment) that fly in your backyard, wherever they come from.

For electricity: that also of course depends, but around here it's not uncommon to have an outlet on the outside of some garage or outbuilding or something. The product I linked has a 50-foot cord as well. The fan noise has not been noticeable at all when I've seen it.


I've been occasionally using Microsoft's RDP Client [0] on my iPhone with external keyboard + mouse with a usb-c cable into my external monitor (with a Logitech RF dongle connected to the back of it).

It worked okay; the mouse support is somewhat of a hack, but the keyboard works great.

The biggest annoyance was actually getting RDP to work satisfactorily on a Linux box with no external monitor plugged into it (a Hetzner box).

I thought someone would have created an app to run a browser on the external screen in full resolution, so I could skip RDP and use vscode server via the browser. But the only option seems to be infinitex2p [1], which is not available in the EU :(.

[0]: Which, in typical Microsoft idiotic fashion, semi-recently got renamed to "Windows App"...

[1]: https://x.com/infinitex2p


I've run vscode over ssh via tailscale before and it was pretty good. Nowadays I'm mostly connecting to a remote using rustdesk, however, which also requires a "dummy" HDMI plug to operate. The only thing it needs to make it perfect would be officially supported forwarded web-browsing windows in vscode. I wish Apple would actually let us use "our" USB-C as... USB-C.


Allianz have more than 150k employees with offices in 50+ countries. Not all of them need access to the CRM of course, but I think going back to on-prem is just asking for a different kind of trouble.

We don't have any details now, but I wouldn't be surprised if the cloud-based CRM provider didn't have a technically interesting weakness, but rather that some kind of social-engineering method was used.

If global companies like this instead had stuff running on-prem all around the world, technical vulnerabilities seem MORE likely to me, not less.

(Air gapping is of course possible, but in my experience, outside of the most security-sensitive areas the downsides are simply not acceptable. Or the "air gapping" is just the old "hard shell" / perimeter-based access model...)


> It’s pretty clear if you check github that Azure’s services and documentation are written by distributed teams with little coordination.

I've come to the same conclusion after dealing with (and reporting) jankiness in both the Azure (ARM) API and especially the CLI. [0] is a nice issue I look at every once in a while. I think an installed az cli is now 700 MB+ of Python code and several bundled Python versions...

[0]: https://github.com/Azure/azure-cli/issues/7387


Why do all these use Python? AWS, GCP, Azure, all three CLIs use Python; they're slow, bloated, heavy to install... what advantage does Python really offer here? You can't in any sensible way rely on it being installed (in your linked issue we see that they actually bundle it) so it's not even an 'easy' runtime.


Python takes up less than 16 MB on disk (python3.11-minimal + libpython3.11-stdlib on Debian) so whatever Microsoft did to make their Azure CLI package take up almost 700 MB, I don't think the language is the problem.


They bundle versioned API schemas....looooots of them.

It would be a garbage fire in any language


It might well be part of the problem. Certainly any language can be inefficient, especially in terms of size, if you don't pay attention (I've certainly found this with Go recently). But as I said, it's also slow (interpreting code, or dealing with cached versions of it), and it's not obvious to me why all three major cloud CLIs have chosen it over alternatives.


I don't understand the Python hate. What would they use instead?

Python is installed on most systems and easy to install when it's not. Only Azure is dumb enough to bundle it, and that was a complaint in the bug - there's no good reason to do so in this day and age.

The performance bottleneck in all three is usually the network communication: have you seen cases where the Python CLI app itself was using 100% of a CPU and slowing things down? I personally haven't.

Looking at the crazy way Azure packaged their CLI, it's hard to believe they weren't making it bloated on purpose.


> Python is installed on most systems (...)

Not on Windows.

And which Python are you talking about? I mean, Python 3 is forward compatible, but you're SoL if you have the bad luck of having an older interpreter installed and you want to run a script which uses a new construct.


I don't understand why Windows people are completely okay having to install all kinds of crazy service packs and Visual C++ runtimes anytime they install anything, but then having to install Python separately makes it a no-go.


A type-safe and memory-safe language, like Rust? Or their own C#, perhaps?


Python is a memory and typesafe language.

Also, AWS is 10 years older than Rust, and C# only runs on Windows (at least it certainly only did when AWS was created, and is laughably more difficult to get running on Linux or OSX than Python).


> Python is a ... typesafe language.

You're funny

> is laughably more difficult to get running on Linux or OSX than Python).

$(dotnet publish) <https://learn.microsoft.com/en-us/dotnet/core/deploying/sing...> is the way they solve that problem in modern .net

And, unlike the scripting language you mentioned, most of the languages on the CLR actually are statically typed
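For what it's worth, the quibble is mostly that Python's annotations are optional and not enforced at runtime; a tiny illustration:

    def double(x: int) -> int:
        return x * 2

    # The annotation is not enforced at runtime: this happily returns "oopsoops".
    print(double("oops"))

    # A static checker such as mypy flags it instead, with something like:
    #   error: Argument 1 to "double" has incompatible type "str"; expected "int"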


> $(dotnet publish) <https://learn.microsoft.com/en-us/dotnet/core/deploying/sing...> is the way they solve that problem in modern .net

That may work now, but it didn't exist when AWS was started.


It’s legitimately fun to see people gaining hope that something would happen about this and then losing hope, again and again. Thanks for the laugh.

This is how you can tell that people doing systems work aren’t running the SDK project. A gig-sized dependency for a few Python scripts is hard to swallow.


As mentioned in the thread and expanded on in the blog [0], Moxie is also against the whole idea of federation and multiple clients.

I think my perception has changed in the last ≈10 years, leaning more in Moxie's direction. It's hard enough to design something secure and usable; having to support all the different implementations under the sun means most federated approaches never reach mass adoption.

Even though it's not a one-to-one analog, I also think e.g. the lack of crypto agility in WireGuard was a very good decision, and the same goes for QUIC having explicit anti-ossification measures (e.g. encrypted headers). Giving enterprise middleboxes the chance to meddle in things just sets everyone else up for hurt.

[0]: https://signal.org/blog/the-ecosystem-is-moving/


I don't think it's a problem that they're against federation. I think federation is nice, but it has some clear trade-offs, and I don't feel like it's something Signal needs.

I don't even think they have to officially support third party clients or provide a stable API. I'd have no problem if they just occasionally made API changes which broke unofficial clients until their developers updated them.

But I really don't like that they're so openly hostile to the idea of other people "using their servers for free", with the threat of technical blocks and legal action which that implies. Especially not when their official client is as bad as it is. (Again, it's fucking blurry!)

