Hacker News | therealmocker's comments

I couldn't find a reference to an app on the linked page, could you share more details on the app you use?


re: #1, I’m curious how lifeguards stay attentive.


There are many levels to this.

Because constant attentiveness is inherently so challenging, most relevant authorities say a lifeguard should not be on duty for more than an hour at a time, with at least a 15-minute break between those hour shifts.

Even with those, minds are going to wander (or there will be distractions: people asking for directions, etc.), but there are multiple failsafes. There's generally more than one lifeguard; being distracted for a few seconds is not a failure (a lifeguard has to scan a significant area, so they don't have 100% awareness of 100% of their zone at once, meaning there's always a potentially significant delay between something going wrong for someone and a lifeguard seeing it); and the time to catastrophic failure is, contextually, significant.

The issue with self-driving cars, as they are currently set up, is that the makers say "the car will do everything", and it does, but they then say "however, the driver is still in control of the vehicle, so if a crash happens it was the driver's fault for not paying sufficient attention".

In the pilot case: there are phases of flight where the pilot is doing very little for extended periods, but those are all at altitude, where the time from "something went wrong" to "it is irrecoverable" (in non-aircraft-failure modes) is remarkably large (at least to me; my mental model was always 'something goes wrong, it's seconds to crashing' until I binged air crash documentaries, and even when the crew is trying, it takes a long time to get from cruising altitude to 0). There are also modes where the pilots must react immediately, whether they were distracted or focused on a completely different task, but those modes are all close to "this alert occurs -> reflexively do a specific action before you even know why".

Attentiveness is a real problem for long-haul train traffic, and multiple accidents have occurred because of its loss. Railroads have tried many things to prevent exactly the problem that self-driving cars introduce, and they simply do not work. At least for trains you can in principle (though the US seemingly does not) use safety cutoffs, so that a train not responding correctly to signals is halted automatically regardless of the engineers and operators. What companies frequently try instead is variations on dead-man's switches (similar to the eye tracking in "self driving" cars), but for the same reason that attentiveness is an issue over multiple hours of no operation, those switches get circumvented (brains don't like focusing on a single thing while not actually doing anything for hours, and muscles don't like holding a single stress point for hours).


You just sparked a memory of FILE_ID.DIZ, a standard file included in zip archives during the BBS era.

Pretty sure the .DIZ stood for “Description In Zip”


Could you share some examples of settings you customize with this tool? The description on the website seems fairly broad.


Read through the thread and didn't see any reference to the translation by Derek Lin; could you say why you selected that version?


It is in the linked post, not the comments.


What you are describing sounds like the usual backup strategies. Filesystem bugs that silently corrupt your data will also get synced and backed up.


> Filesystem bugs that silently corrupt your data will also get synced and backed up

This is a very easy problem to solve: don't do incremental backups. Or have N backups and rotate, which isn't as good but still gives you more time to notice. Hard drives are cheap.
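The rotate-N approach can be sketched in a few lines. This is illustrative only (the `rotate_backups` helper and the `dest.1` … `dest.N` slot naming are my assumptions, not any particular backup tool's convention): each run drops the oldest full copy, shifts the others down a slot, and writes a fresh full (non-incremental) copy into slot 1, so a silently corrupted source takes N runs to overwrite every good copy.

```python
import shutil
from pathlib import Path

def rotate_backups(source: Path, dest: Path, keep: int = 3) -> None:
    """Keep `keep` full copies of `source` as dest.1 (newest) .. dest.<keep> (oldest)."""
    # Drop the oldest copy so the last slot is free.
    oldest = Path(f"{dest}.{keep}")
    if oldest.exists():
        shutil.rmtree(oldest)
    # Shift each existing copy one slot older: dest.2 -> dest.3, dest.1 -> dest.2.
    for i in range(keep - 1, 0, -1):
        slot = Path(f"{dest}.{i}")
        if slot.exists():
            slot.rename(Path(f"{dest}.{i + 1}"))
    # Fresh, full (non-incremental) copy into slot 1.
    shutil.copytree(source, Path(f"{dest}.1"))
```

With keep=3 you always have the last three full snapshots, at the cost of 3x the disk, which is the "hard drives are cheap" trade-off.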


Still doesn't excuse a filesystem claimed to be designed for reliability from having shoddy development practices.


My guess: Microsoft wasn't excited about the company structure, with the for-profit portion subject to the non-profit mission. Microsoft/Altman structured the deal with OpenAI in a way that cements Microsoft's access regardless of the non-profit's wishes. Altman may not have shared those details with the board, and they freaked out and fired him. They didn't disclose to Microsoft ahead of time because Microsoft was part of the problem.


Check out what Mitchell Hashimoto (from HashiCorp) does w/ Nix on Mac for developer setup - https://www.youtube.com/watch?v=ubDMLoWz76U


What does AIoT stand for? It is used throughout the document. Something like "Artificial Intelligence of Things"?


Do you have a link? Never heard of cheddar.


I think skrebbel was parodying prox's reply as a commentary on drive-by comments about a seemingly orthogonal technology.

