
inb4 "tailscale hurr durr",

if you are using tailscale already, with it set up as the DNS resolver,

you can set up NextDNS as the global resolver within tailscale[1];

i'm not sure exactly how much my latency's being affected, but am at something like 900k queries/mo and don't really notice it

[1] https://tailscale.com/kb/1218/nextdns
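
if you want to eyeball the latency yourself, here's a rough sketch with dnspython pointed at tailscale's MagicDNS stub resolver (100.100.100.100), which is what forwards to NextDNS; the hostnames are just examples:

    import time
    import dns.resolver  # pip install dnspython

    # tailscale's MagicDNS stub resolver; it forwards to whatever
    # global nameserver (e.g. NextDNS) is set in the admin console
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["100.100.100.100"]
    resolver.lifetime = 5.0

    for name in ["example.com", "news.ycombinator.com"]:
        start = time.perf_counter()
        answer = resolver.resolve(name, "A")
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{name}: {elapsed_ms:.1f} ms -> {[r.address for r in answer]}")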


one of the non-intrusive approaches i have for this [1] is kubenetmon[2], which uses a kernel feature called nf_conntrack_acct to keep byte/packet counters per (src, dst); rough sketch of what that exposes below.

it's not perfect [3] but gets the job done for me

[1] not as much "control" as it is "logging", of sorts; "especially when you just need to answer “what is my cluster talking to?”"

[2] https://github.com/ClickHouse/kubenetmon / https://clickhouse.com/blog/kubenetmon-open-sourced

[3] if you have a lot of short-lived containers, you're likely to run into something like this: https://github.com/ClickHouse/kubenetmon/issues/24

edit: clarifying [1]
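
for a rough idea of what nf_conntrack_acct gives you (this isn't how kubenetmon reads it, just a minimal sketch; needs root, the conntrack module loaded, and net.netfilter.nf_conntrack_acct=1):

    import re
    from collections import defaultdict

    # with net.netfilter.nf_conntrack_acct=1, each flow in
    # /proc/net/nf_conntrack carries packets=/bytes= counters
    # for both the original and the reply direction
    FLOW = re.compile(r"src=(\S+) dst=(\S+) .*?bytes=(\d+)")

    totals = defaultdict(int)  # (src, dst) -> bytes, original direction only
    with open("/proc/net/nf_conntrack") as f:
        for line in f:
            m = FLOW.search(line)  # first match = original direction
            if m:
                src, dst, nbytes = m.group(1), m.group(2), int(m.group(3))
                totals[(src, dst)] += nbytes

    for (src, dst), nbytes in sorted(totals.items(), key=lambda kv: -kv[1])[:20]:
        print(f"{src:>15} -> {dst:<15} {nbytes:>12} bytes")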


it's an ad, for what?

i do not see a product upsell anywhere.

if it's an ad for the author themselves, then it's a very good one.


At the end there's a form where you can get a "personalized report"; I have a feeling that'll advertise some kind of service, as is usually the case.


it'd be great if they could couple this with an SLA for GitHub Actions so we wouldn't have to end up paying as much...

(ofc, that'd only mean they stop updating the status page, so eh)


For what it's worth, they already fail to update the status page. We had an "outage" just this morning where jobs were waiting 10+ minutes for an available runner -- resolved after half an hour or so but nothing was ever posted

https://downdetector.com/status/github/


Last week (Sunday to Sunday) I had a repo running a lot of cron workflows 24/7. After like 4 or 5 days I exceeded the free limits (Pro plan) and so set up self hosted runners.

After like day 2 my workflows would take 10-15 minutes past their trigger time to show up and be queued. And switching to the self hosted runners didn't change that. Happens every time with every workflow, whether the workflow takes 10 seconds or 10 minutes.
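
If anyone wants to put a number on that lag: the Actions runs API exposes both created_at and run_started_at per run, so you can roughly approximate how long each run sat waiting for a runner. Sketch below - OWNER/REPO are placeholders and GITHUB_TOKEN is any token with read access to the repo:

    import json
    import os
    import urllib.request
    from datetime import datetime

    OWNER, REPO = "OWNER", "REPO"  # placeholders
    url = f"https://api.github.com/repos/{OWNER}/{REPO}/actions/runs?per_page=50"
    req = urllib.request.Request(url, headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    })

    with urllib.request.urlopen(req) as resp:
        runs = json.load(resp)["workflow_runs"]

    for run in runs:
        if not run.get("run_started_at"):
            continue  # nothing has picked the run up yet
        # queue delay ~= time between the run being created (trigger)
        # and a runner actually starting it
        created = datetime.fromisoformat(run["created_at"].replace("Z", "+00:00"))
        started = datetime.fromisoformat(run["run_started_at"].replace("Z", "+00:00"))
        delay = (started - created).total_seconds()
        print(f"{run['name'] or run['path']:<40} queued for {delay:>6.0f}s")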


I don't want to shit on the Code to Cloud team but they act a lot like an internal infrastructure team when they're a product team with paying customers


discussed 2 years ago,

https://news.ycombinator.com/item?id=38790597

4B If Statements (469 comments)


Meta: Yeah, this should have a "(2023)" tag in the title. Thanks.


> Hey! I asked AI for this code, do you think this will work? I think you should use it.

unfortunately this problem predates AI, and has been worsened by it.

i've seen one-file, in-memory hashmap proof-of-concept implementations requested to be integrated into semi-large, evolving codebases, with "it took me 1 day to build this, how long will it take to integrate" questions


discussed a couple days ago: https://news.ycombinator.com/item?id=46191993

AWS introduces Graviton5–the company's most powerful and efficient CPU (14 comments)


the custom error page is configurable at the domain (zone) level,

which sometimes gets annoying because branding can differ across subdomains.

https://developers.cloudflare.com/rules/custom-errors/edit-e...


> Error Pages do not apply to responses with an HTTP status code of 500, 501, 503, or 505. These exceptions help avoid issues with specific API endpoints and other web applications. You can still customize responses for these status codes using Custom Error Rules.

From that page ;)


previously discussed here: https://news.ycombinator.com/item?id=46064571

Migrating the main Zig repository from GitHub to Codeberg - 883 comments


Didn't know about codeberg and can't even access it... Is it https://codeberg.org/ ??


That is correct. It is down quite a bit. https://status.codeberg.org/status/codeberg


92% uptime? What do you do the other 8% of the time? Do you just invoke git push in a loop and leave your computer on?


You keep working since Git is decentralized.

You can also run a Forgejo instance (the software that powers Codeberg) locally - it is just a single binary that takes a minute to set up - and set up a local mirror of your Codeberg repo with code, issues, etc so you have access to your issues, wiki, etc until Codeberg is back up (though you'll have to update them manually later).
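
If you'd rather script the local mirror than click through the UI: Forgejo keeps a Gitea-compatible API, and Gitea's repos/migrate endpoint can create a pull mirror. Rough sketch, assuming the endpoint and field names haven't drifted from Gitea's - the URL, repo and token below are placeholders:

    import requests  # pip install requests

    FORGEJO = "http://localhost:3000"  # your local Forgejo instance
    TOKEN = "your-local-api-token"     # placeholder

    payload = {
        # pull-mirror the Codeberg repo; adjust owner/repo to yours
        "clone_addr": "https://codeberg.org/someuser/somerepo.git",
        "repo_name": "somerepo",
        "mirror": True,
        # migration extras; what gets pulled depends on the source service
        "service": "gitea",
        "issues": True,
        "wiki": True,
    }

    resp = requests.post(
        f"{FORGEJO}/api/v1/repos/migrate",
        json=payload,
        headers={"Authorization": f"token {TOKEN}"},
        timeout=60,
    )
    resp.raise_for_status()
    print("mirror created:", resp.json().get("full_name"))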


I hope Codeberg is able to scale up to this surge in interest, but

> it is just a single binary that takes a minute to set up - and set up a local mirror of your Codeberg repo with code, issues, etc so you have access to your issues, wiki, etc

is really cool! Having a local mirror also presumably gives you the means to build tools on top, to group and navigate and view them as best works for you, which could make that side of the process so much easier.

> you'll have to update them manually later

What does the manually part mean here? Just that you'll have to remember to do a `forgejo fetch` (or whatever equivalent) to sync it up?


As discussed elsewhere in this thread: They're under DDoS, and have been very public about this fact.


_if_ you're using ubuntu,

there's the CVE tracker you can use to ~argue~ establish that the versions you're using either aren't affected or have been patched (rough version-check sketch below).

https://ubuntu.com/security/cves

https://ubuntu.com/security/CVE-2023-28531
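
each CVE page lists the version a given release was fixed in; to show an auditor that the installed package is at or past that, something like this does the comparison (the fixed version below is a placeholder, copy the real one off the CVE page):

    import subprocess

    PACKAGE = "openssh-server"
    FIXED_VERSION = "1:9.0p1-1ubuntu8.5"  # placeholder; take this from the CVE page

    installed = subprocess.run(
        ["dpkg-query", "-W", "-f=${Version}", PACKAGE],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    # dpkg --compare-versions exits 0 when the relation holds
    patched = subprocess.run(
        ["dpkg", "--compare-versions", installed, "ge", FIXED_VERSION],
    ).returncode == 0

    print(f"{PACKAGE} {installed}: {'patched' if patched else 'NOT patched'} "
          f"(fixed in {FIXED_VERSION})")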


that said, we've also had the same auditor ask us to remove the openssh version from the banner you get when you telnet to the port (which, by RFC 4253, is not possible)

so ymmv
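
(for context: the "version upon telnet" is the RFC 4253 identification string the server sends as soon as you connect; roughly all the auditor's scanner does is this, with a placeholder host)

    import socket

    # the SSH server sends its identification string (RFC 4253, section 4.2)
    # before any key exchange, e.g. "SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13"
    HOST, PORT = "example.com", 22  # placeholder host

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        banner = sock.recv(256).decode("utf-8", errors="replace").strip()

    print(banner)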

