
Besides this being ineffective for the motivated, it might have a subtle antitrust effect.

As kids find alternative platforms, perhaps they will be vendor-locked to those instead of the Meta empire.


I think you're 180° backwards on that.

How many alternative platforms are there really going to be that can afford to develop and operate the legally mandated age-detection ML models?

Especially after the bureaucrats see that the law isn't working and start looking for scapegoats without massive legal teams to make an example of.


Why would they do that? There are plenty of platforms that simply won't care, and there's stuff like Mastodon et al.

The law tends to be pretty good at caring about you when you don't care about it.

This is underappreciated. The number of possible one-on-one conversations (edges) between n engineers (nodes) does not scale linearly; it grows quadratically, as n(n-1)/2.

https://en.wikipedia.org/wiki/Complete_graph
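
A quick sanity check in plain Python (team sizes here are arbitrary):

    from math import comb

    # Edges in the complete graph K_n: every pair of engineers is a
    # potential 1:1 conversation, so the count is n choose 2 = n(n-1)/2.
    for n in (5, 10, 50, 100):
        print(f"{n} engineers -> {comb(n, 2)} possible conversations")

    # Doubling the team from 50 to 100 roughly quadruples the edge
    # count (1225 -> 4950).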


As a former employee of state and local government, who walked away from both pensions, this was my takeaway.

At the beginning of a project, the government could spend above market for a great architect to lay down the data model and put some patterns in place, which below-market-rate staff could then maintain reasonably well, but there are rules and public pressure.

Interestingly, my local government hired Deloitte to build a serverless AWS-based application that could have been a simple CRUD app hosted on a medium EC2 instance. It cost $1.5 million and didn’t work, on top of hundreds of thousands per year in cloud costs.

It could have been a Django app with Celery, at a cost in the low thousands per year.
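
For a sense of scale, here is a single-file Django sketch of that kind of CRUD endpoint; the names are hypothetical, and the Celery wiring is omitted for brevity:

    # Hypothetical single-file Django app; a real deployment would add a
    # proper settings module, an ORM model, and a Celery worker for
    # background jobs.
    import django
    from django.conf import settings
    from django.http import JsonResponse
    from django.urls import path

    settings.configure(
        DEBUG=True,
        SECRET_KEY="dev-only",
        ROOT_URLCONF=__name__,
        ALLOWED_HOSTS=["*"],
    )
    django.setup()

    def get_record(request, record_id):
        # Stub: a real view would query the database via the ORM.
        return JsonResponse({"id": record_id, "status": "ok"})

    urlpatterns = [path("records/<int:record_id>/", get_record)]

    if __name__ == "__main__":
        from django.core.management import execute_from_command_line
        execute_from_command_line(["manage.py", "runserver"])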

It could even have been done with a succinct AWS serverless system.

But that’s not the kind of schmooze that impresses high-level stakeholders, themselves less familiar with good design patterns, and wins the contract.


The Supreme Court has weighed in on this with a little more nuance in its decision in Katz v. United States:

“What a person knowingly exposes to the public, even in his own home or office, is not a subject of Fourth Amendment protection. But what he seeks to preserve as private, even in an area accessible to the public, may be constitutionally protected.”

This “lack of privacy in public” absolutism would mean that certiorari would never be granted in these types of cases in the first place.

Reductionist at best, IMO

See also United States v. Jones and Carpenter v. United States.


I think this may be an example of Simpson's paradox:

https://en.m.wikipedia.org/wiki/Simpson's_paradox
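
The kidney-stone numbers from that article make it concrete: treatment A wins within each stone-size group, yet B wins in aggregate. A few lines of Python, with the success counts copied from the article:

    # (successes, total) per treatment and stone size, from the article.
    data = {
        "A": {"small": (81, 87), "large": (192, 263)},
        "B": {"small": (234, 270), "large": (55, 80)},
    }

    for treatment, groups in data.items():
        for size, (ok, total) in groups.items():
            print(f"{treatment} {size} stones: {ok/total:.0%}")
        ok = sum(s for s, _ in groups.values())
        total = sum(t for _, t in groups.values())
        print(f"{treatment} overall: {ok/total:.0%}")

    # A: 93% small, 73% large, 78% overall
    # B: 87% small, 69% large, 83% overall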


To add to this, engineers consider tradeoffs.

You might choose to add comments and let the logic unfold in a less succinct way in order to improve readability and understandability.

You might also consider your colleagues’ limited cognitive reserves, which could be better spent on more important issues.


100% this. I'm increasingly of the opinion that "ability to evaluate how often/extensively a chunk of logic may need to be changed in the future; and to design accordingly" is one of the most underrated skills in software development:

* If it's "write once, run a few times, discard", go ahead and throw together whatever you want

* If it's "write once, run a bazillion times at the very core of your logic with very few changes over time", optimize for efficiency over legibility

* If it's going to be written and rewritten and evolved and tweaked many times, optimize for readability and flexibility


I wonder if we are back to “who you know” because of a couple of factors:

1. The risk of a bad hire is high, and hiring through personal networks de-risks it

2. It facilitates more natural and spontaneous conversations, which for better or worse short-circuit a well-crafted, pre-planned anti-bias interview process that can be too rigid for either party to explore detail


Cognitive load is super important and should be optimised for. We all should have as our primary objective the taming of complexity.

I was surprised to find an anti-framework, anti-layering perspective here. The author makes good points: it’s costly to learn a framework, costly to break out of its established patterns, and costly when we tightly couple to a framework’s internals.

But the opposite is also true. Learning a framework may speed up development overall, letting developers lean on previous work. Well-designed frameworks make things easy to migrate, provided they are expressive enough and well abstracted. Frameworks prevent bad and non-idiomatic design choices and make things clear to any new coder familiar with the framework. They prevent a lot of glue, bad abstractions, cleverness, and non-performant code.

Layering has an indirection cost, which did not appeal to me at all as a less experienced developer, but I’ve learnt to appreciate a little layering because it makes it predictable where to look for the source of a bug. I find it saves time because the system has predictable places for business logic, serialisation, data models, etc.


I can unironically imagine legitimate use cases for this idea. I’d wager that many DBs could fit unnoticed into the data footprint of a modern SPA load.


Yes, probably a lot of storefronts could package up their entire inventory database in a comparatively small JSON file and avoid a lot of pagination and reloads. Regardless, my comment was, of course, intended as sarcasm.


Stream the DB to clients post-page-load and validate client requests against a cache on the server.
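
A minimal sketch of that flow, assuming a Flask server and a toy in-memory inventory (all names hypothetical):

    from flask import Flask, abort, jsonify

    app = Flask(__name__)

    # Toy server-side cache, the source of truth for writes; a real
    # system would back this with the actual database.
    INVENTORY = {"sku-1": {"name": "Widget", "price": 999, "stock": 3}}

    @app.get("/inventory.json")
    def snapshot():
        # Streamed to the client post-page-load so the SPA can browse locally.
        return jsonify(INVENTORY)

    @app.post("/order/<sku>")
    def order(sku):
        # Validate the client's (possibly stale) local view against the
        # server-side cache before committing the order.
        item = INVENTORY.get(sku)
        if item is None or item["stock"] < 1:
            abort(409)  # the client's snapshot was stale
        item["stock"] -= 1
        return jsonify(item)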


This is super difficult for me to parse. Could you please dumb it down for me?

