But he isn't honestly accounting for the cost of the salmon, even by his own rules ("Food that I don't use needs to be factored into my costs.", "if it goes to waste, I have to charge myself for it.").
The salmon, at $1.99/lb, costs $4.48 for 2.25 lbs (36 oz) of whole fish. From that, he's only able to extract 3 x 6.25 oz portions (18.75 oz total) of meat.
Despite consuming everything he could/would from $4.48 worth of fish, he only charges himself $0.77 * 3 = $2.31, ignoring the cost of the waste that he was required to purchase in order to obtain the meat.
IMO a fairer accounting for 3 equal servings would be $4.48/3 = $1.49 per serving. Nearly double.
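The two accounting methods can be sketched in a few lines of Python (prices and weights are from the comment above; the variable names are mine):

```python
# Prices and weights from the comment: $1.99/lb whole fish, 2.25 lb (36 oz),
# yielding 3 servings of 6.25 oz of meat each (18.75 oz total).
PRICE_PER_LB = 1.99
WHOLE_FISH_LB = 2.25
WHOLE_FISH_OZ = WHOLE_FISH_LB * 16          # 36 oz
SERVINGS = 3
MEAT_PER_SERVING_OZ = 6.25

whole_fish_cost = PRICE_PER_LB * WHOLE_FISH_LB          # ~$4.48
meat_oz = SERVINGS * MEAT_PER_SERVING_OZ                # 18.75 oz

# His accounting: charge only for the meat actually served.
meat_only_per_serving = (whole_fish_cost * meat_oz / WHOLE_FISH_OZ) / SERVINGS

# Fairer accounting: spread the whole purchase price over the servings.
whole_cost_per_serving = whole_fish_cost / SERVINGS

print(f"meat-only:  ${meat_only_per_serving:.2f}/serving")   # ~$0.78
print(f"whole-fish: ${whole_cost_per_serving:.2f}/serving")  # $1.49
```

The gap between the two numbers is exactly the cost of the inedible portion he had to buy to get the meat.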
I'm in a city well under 50,000 people not in the Americas nor Europe. The site gave a message that it was retrieving the data from OSM, then rendered the map faster than the browser would render a png. Very impressive.
I'm at 10 minutes now, for a town of <15k. Render time might depend more on total area than number of lines to draw. Update: gave up after 20min. Something might be wrong with the particular city.
Render time for Seattle, which has both area and density, is a blink of the eye. I think the time people are observing is loading the raw data from OpenStreetMap itself.
Because they cache the biggest cities in the world.
> To improve the performance of download, I indexed ~3,000 cities with population larger than 100,000 people and stored into a very simple protobuf format. The cities are stored into a cache in this github repository
About 2 seconds for me to load and draw Los Angeles. It’s definitely the load time/network latency, depending on where it’s loading from. This is amazing! I might use it for a custom map or something
Depends on the kid. Mine was a superstar sleeper from 2-6mo (8p-7a every night, predictable 90min-2h naps during the day), and then as he got older his napping took a nosedive -- shorter duration, lower quality, less predictable.
It sounds like you are assuming work = solo work. My day is about 25% meetings, my wife's is about 90% meetings. We can't participate in meetings with a baby in the room (let alone strapped to our body!) who might start screaming at any moment. Sure, we could fake it -- camera off, muted, jump out periodically to tend to the baby -- but then we're not really fully engaged in work.
Even if I had no meetings, I can't concentrate on solo work with a wiggly/screamy thing on me or in the same room. One of the biggest benefits of WFH for me is avoiding the distractions of the office. Babies are FAR more distracting than anything at the office.
What is there to "figure out"? Someone needs to look after the kid. If you as parents are unable due to full time work, you need to hire someone else to watch the kid (nanny, daycare, etc) or find a volunteer (extended family, friends, etc).
If you can't swing it financially, you have various choices -- Don't have kids, find higher-paying jobs, reduce expenses, or move closer to extended family/volunteers.
Nobody is "disinterested in supporting stay-at-home parenthood." On the contrary, the tax code is structured to give significant advantages to single-income (or at least lopsided-income) households over dual equivalent-income households.
It is rather absurd that one man working a city government IT job cannot support his family within said city without having to have his wife work (when she would rather raise children and keep the home instead). A few short decades ago this was an uncontroversial, common sentiment.
Nothing in your description sounds difficult to do in powershell. You can certainly output "many-things" from a part of the pipeline that takes "single-things" as input. Crawling files is a single command, then you can do whatever you want with each one in the next part of the pipeline - "map" from file info object to something else (e.g. custom object with filename, size, checksum, etc props) 1-1, multiplex each file into N output objects, buffer file inputs until some heuristic is met then emit outputs, etc.
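The same pipeline shape is easy to show in Python with generators (the comment is about PowerShell, where you'd chain Get-ChildItem into ForEach-Object; this is just an analogous sketch, and the stage names are mine):

```python
# One-to-many pipeline stages, analogous to a PowerShell pipeline:
# crawl emits files, describe maps each file 1-1 to a record,
# multiplex fans each record out into N output objects.
import hashlib
from pathlib import Path

def crawl(root):
    """Stage 1: emit one Path per file under root (like Get-ChildItem -Recurse)."""
    yield from (p for p in Path(root).rglob("*") if p.is_file())

def describe(paths):
    """Stage 2: 1-1 map from file to a record with name, size, and checksum."""
    for p in paths:
        yield {"name": p.name,
               "size": p.stat().st_size,
               "sha256": hashlib.sha256(p.read_bytes()).hexdigest()}

def multiplex(records, n=2):
    """Stage 3: fan out - emit n output objects per input record."""
    for r in records:
        for i in range(n):
            yield {**r, "copy": i}
```

Each stage consumes a stream and emits a stream, so a stage that takes single things can freely emit many things downstream, buffer until a heuristic is met, or drop items entirely.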
I described a time when a friend summarized this effect and how it has stuck with me ever since, which is the same effect described in the article? And saying that it's a bummer to think about? I didn't say it was original insight on anyone's part, just that it was a sticky thought when it was wrapped up in a small one-sentence explanation. You're right, there's no great insight in my comment; all it said was "yea this happens to me too all the time," but for some reason it got upvoted.
Unbelievable how meta the original comment is. Seems like many of the subsequent replies, too. Skim a few paragraphs and that's plenty, time to share your thoughts with the internet...
This passage comes to mind:
> we want so badly to be believed, to be seen as someone who knows stuff.
> Skim a few paragraphs and that's plenty, time to share your thoughts with the internet...
Some articles posted to HN are of poor enough quality that reading a few paragraphs is all they deserve. The false premise is usually evident from the first paragraphs, and there's no reason to read further (unless it's one of those articles where the author changes their mind through a long exposition).
> Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that".
Normally yes, but I think an exception is allowed in this case due to the irony of the person being oblivious to an article talking about people being oblivious.
For us old timers this is nothing new. For people who know what a 3-digit /. userid is, the meme of not reading the article is as old as internet news boards.
I usually find this author's posts pretty insightful, but this one was a miss for me. Just a rambling mishmash of ranting and humblebragging.
The main point (I think? Hard to extract a thesis from this one besides "algo interviews bad") doesn't even hold up: "People say algo interviews reduce costly algo issues in production, but these issues still happen in production, therefore algo interviews don't work/those people are wrong." This conclusion doesn't follow. Who's to say that there wouldn't be twice as many, or twice as severe, algo problems if companies didn't interview this way? I'm not asserting that's the case, but it's consistent with the data points provided.
Outside of official HR statements, perhaps, I don't think anybody pretends leetcode-style interviews are actually ideal for evaluating corporate software engineer candidates. Everyone knows they are deeply flawed and provide limited value. They persist because they seem to be the least worst option, all things considered.
I think this is a typo? Should be "fructose:glucose"