nevon's comments

Not stated in the most diplomatic way, but I do agree. Having used CDK (not cdktf) and now being forced back to Terraform feels like going back to the stone age. It is absolutely obvious to me that generating infrastructure definitions from a regular, testable language using all the same tools, techniques and distribution mechanisms that you use for all your other software development is the superior way. Being able to piggyback off of the vast ecosystem of Terraform providers was a really clever move, although I understand it led to some rough edges.
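
To make that concrete, here's a minimal sketch of what I mean, using CDK's Python flavor (the stack and bucket names are made up for illustration):

```python
from aws_cdk import App, Stack, assertions
from aws_cdk import aws_s3 as s3
from constructs import Construct

class LogStack(Stack):
    """An ordinary Python class: resources are just objects, so the usual
    packaging, review, and distribution tooling all applies."""

    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        s3.Bucket(self, "AccessLogs", versioned=True)

# And because it's a regular language, the synthesized template is
# unit-testable with the same test runner as the rest of your code:
template = assertions.Template.from_stack(LogStack(App(), "LogStack"))
template.has_resource_properties(
    "AWS::S3::Bucket",
    {"VersioningConfiguration": {"Status": "Enabled"}},
)
```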

Completely disagree. To me, and the OSI, none of those things other than redistribution and forking have anything to do with being open source or not. In fact, you could have a closed source project tick nearly all of those boxes, although that would indeed be very unusual.

I'm not sure if there is a term for what you are describing. Perhaps "community-driven project".


That's the fun part; neither you, nor the OSI, get to make that determination!

I suppose that's true, but it makes it quite hard to communicate specific concepts if everyone gets to come up with their own definition of existing terms. I'm aware that language evolves, but at least at the moment, expecting projects to be community-driven just because they describe themselves as open source will set you up for conflict if they are using the conventional definition of the term and don't also happen to want to run a community-driven project.

Do we work in the same company? That said, I really don't understand why everyone hates on Bitbucket. I really thought it was _fine_ from a user perspective. Now we're on GHE and I find it a sidegrade at best.

Now for the people who were operating Bitbucket, I'm sure it's a relief.


As a user, I found Bitbucket a lot weaker when it comes to searching and browsing code. Its Markdown formatting is also more limited for documentation, and the lack of Mermaid support in Markdown documents was shocking to see, considering that both of the primary competitors (GitHub and GitLab) have implemented it.

Don't send the client information about players they should not be able to see based on their current position.

How does it know what isn't visible? Can it handle glass? Frosted glass? Smoke? What if I can't see the player but I can see their shadow? What if I can't see them because they're behind me but I can hear their footsteps? What if I have 50ms ping and the player is invisible after turning a corner because the server hasn't realized I can see them yet?

To answer all those questions you either have to render the entire game on the server for every player (not possible) or make the checks conservative enough that cheaters still get a significant advantage.
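
To sketch what "conservative" means in practice (all names here are hypothetical, and a real engine would use precomputed visibility sets rather than a per-pair check like this):

```python
import math
from dataclasses import dataclass
from typing import Callable

@dataclass
class Player:
    x: float
    y: float
    max_speed: float  # units per second

AUDIBLE_RADIUS = 20.0  # assumption: footsteps are audible within 20 units

def should_replicate(
    viewer: Player,
    target: Player,
    rtt: float,  # viewer's round-trip time in seconds
    has_line_of_sight: Callable[[Player, Player, float], bool],
) -> bool:
    """Decide whether the server sends `target`'s state to `viewer`.

    Deliberately conservative: the check is padded by the distance the
    target could cover in one round trip, so players are revealed
    slightly *before* they round a corner. That early reveal is exactly
    the residual advantage a wallhacking cheater keeps.
    """
    margin = target.max_speed * rtt

    # Close range: shadows, footsteps, glass and smoke make precise
    # occlusion tests hopeless, so just replicate everything nearby.
    distance = math.hypot(viewer.x - target.x, viewer.y - target.y)
    if distance <= AUDIBLE_RADIUS + margin:
        return True

    # Otherwise, a padded line-of-sight test against static geometry.
    return has_line_of_sight(viewer, target, margin)
```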


GeForce NOW begs to differ.

I know, it's not the same thing, but it's IMHO the future of anticheat. We just need faster fiber networks.


Yeah. Stadia worked well in ideal conditions, so for people lucky enough to live that life, the technology's there.

I never understood why Google gave up so early on cloud gaming. Clearly it is the future; the infrastructure will need to develop, but the userbase can grow by the day.

I live somewhat remotely on an island group, and even though I have 500 Mbit fiber, my latency to the nearest GeForce NOW datacenter is 60-70 ms (which is my latency to most continental datacenters, so not Nvidia's fault). That makes it unplayable for e.g. Battlefield 6 (I tried, believe me), but I have been playing Fortnite (which is less aim-sensitive) for 100+ hours that way.


And under such a system, how do you stop people from abusing latency compensation to make their character appear out of thin air from the opponent's perspective, by fake-juking a corner to trick the netcode into not sending the initial trajectory of their peek?

Fortnite had this same issue when BR was first released. It was promptly fixed with more stringent checks after cheaters started abusing it.
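
For flavor, one plausible shape for such a check (a guess at the general technique, not Epic's actual implementation) is to add hysteresis so an entity can't be yanked away the instant line of sight breaks:

```python
import time

# (viewer_id, target_id) -> monotonic timestamp of last confirmed visibility
last_seen: dict[tuple[int, int], float] = {}

def replicate_with_grace(viewer_id: int, target_id: int,
                         visible_now: bool, rtt: float) -> bool:
    """Keep replicating a target for roughly one round trip after line
    of sight breaks, so fake-juking a corner can't withhold the start
    of a peek from the opponent's client."""
    now = time.monotonic()
    key = (viewer_id, target_id)
    if visible_now:
        last_seen[key] = now
        return True
    return now - last_seen.get(key, float("-inf")) <= rtt
```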

Fortnite has a fairly invasive, rootkit-level anti-cheat too, don't forget.

The invasive kernel rootkit came months after they fixed the netcode abuses.

Then how would the client know where to render the positional audio of their footsteps, for instance?

I'm coming from a place of complete ignorance here, so take my question as genuine and not as implying that this _should_ be an easy problem. But what exactly makes it so difficult to build a KVM that lets me connect two computers to two high-definition (2K in my case) monitors, along with some basic USB peripherals and audio components, and switch between them? Every single device I've found has had some drawback, like not supporting high refresh rates (144 Hz), not supporting Mac/Linux/Windows, only supporting audio output and not a microphone, not supporting Thunderbolt, or only supporting low resolutions.

Is it just that there's no market for it and that the cost of it would just be too high? If money was not an issue, would there still be technical reasons that this is impossible?


Because those signals are really high-speed, and the protocols are really complicated.

Doing the equivalent of "yank cable from PC 1, plug cable in PC 2" is just about doable at a reasonable price point. Anything more complicated either requires a bunch of expensive hard-to-source dedicated chips, or a bunch of hard-to-implement software solutions. Especially stuff like reliable keyboard-controlled switching or USB-C laptop connectivity is a nightmare.

In practice this means you either have to give up on features, or let the price balloon to unacceptable levels.


There already exist many implementations of this idea. CDK, Pulumi, and Winglang are the ones that come to mind as probably the most well known.


I have a simple solution for you: don't join a union if you don't want to be part of one.


That option isn't always available, at least in the US. Unless you live in a right-to-work state, you may be forced to join the union as a condition of employment.

Somehow this is seen as "more progressive."


There is a tension here.

Freeriding

If a union negotiates better conditions at a workplace, who should be subject to them? Everybody, of course IMO

But what of people who never paid union dues?

There is no nice, tidy solution to that tension, only messy ones that impinge on a freedom somewhere

It is worth unionising, voluntarily


Just to save someone 5 minutes of research: if you are using the EKS AMIs based on AL2023 or Bottlerocket, this is already done for you by pointing to an image on ECR. At least on Bottlerocket (I haven't checked AL2023), the image is baked into the AMI, so you don't even need to pull it from ECR.
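
If you want to verify this on one of your own nodes, a rough check (the config path is the AL2023 one; Bottlerocket configures containerd differently, and the exact image value is my recollection, so treat both as assumptions):

```python
# Run on the node itself. Looks for containerd's sandbox (pause) image.
with open("/etc/containerd/config.toml") as f:
    for line in f:
        if "sandbox_image" in line:
            print(line.strip())
# A local reference like sandbox_image = "localhost/kubernetes/pause"
# means pod-sandbox creation needs no registry pull at all.
```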


We removed the image registry dependency on AL2023 as well. :)

https://github.com/awslabs/amazon-eks-ami/pull/2000


Thank you, I was just about to task my team with figuring out how affected we are by this.


Used this the other day when, for whatever reason, GNOME's built-in Bluetooth GUI refused to connect to my headphones. Very nice and easy to use.


This is not exactly true. The AZ names are indeed randomized per account, and that is the identifier you see everywhere in the APIs. The difference now is that they also expose a mapping from AZ name (randomized) to AZ ID (not randomized), so that you can know that AZ A in one account is actually in the same datacenter as AZ B in a different account. This becomes quite relevant when you have systems spread across accounts but want the communication to stay zonal.
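
If you want to see the mapping for your own account, it's a single API call; a quick boto3 sketch (region chosen arbitrarily):

```python
import boto3

# ZoneName is the per-account randomized name; ZoneId is the stable
# identifier for the physical location. Compare the output across two
# accounts to see which names actually share a datacenter.
ec2 = boto3.client("ec2", region_name="eu-central-1")
for az in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(az["ZoneName"], "->", az["ZoneId"])
```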


You're both partially right. Some regions have a randomized AZ mapping; all regions launched since 2012 have a static mapping. https://docs.aws.amazon.com/global-infrastructure/latest/reg...


Oh wow, thanks for telling me this. I didn't know this differed between regions. I just checked some of my accounts, and indeed the mapping is stable between accounts for Frankfurt, for example, but not for Sydney.

