Hacker News | zaitanz's comments

So this confuses me slightly and I am keen to know the advantage of using this. I work on projects that heavily use auto-differentiation for complex models. The models are defined by user input files at run-time, so the state and execution pathway of the model is unknown at compilation time. Would this help?

Given that all auto-differentiation is an approximation anyway, I've found the existing tooling (CppAD, ADMB, ADOL-C, Template Model Builder (TMB)) works fine. You don't need to come up with a differentiable problem or re-parameterize. Why would I pick this over any of those?


In `if (i > x)`, the derivative with respect to x is mathematically 0 everywhere it is defined. DiscoGrad gives you a useful smooth approximation that is non-zero and lets the function learn those conditional values.


The platforms I work on run solely on user-defined input files. For unit tests we have example configuration files stored in the application as strings (and #ifdef'd away for non-unit-test builds). Then we split the file loading so it can also load from memory (string/stream).

This works so well that we actually ship a library containing the unit tests alongside our binary, so the end user can run them locally without having to download or build anything. This has been super useful in identifying calculation differences between Linux/Windows/macOS and across processors. When a user reports an issue, we can ask them to run the unit tests (`./binary --test`) and send us the results.


If you have cron jobs/scheduled tasks running every day to try to renew the certificate (as recommended), then you'd have no issues with them revoking. Any certificate that is going to be revoked gets renewed before then; this is how LE works. They gave five days' notice, and during that period any certificate due to be revoked would have been renewed.

For multiple servers running the same domain, you can configure them all the same and they will get certificates fine. If required, they will get a new certificate from LE; if not, LE will provide the current certificate to the server. There may be a short time when the actual certificates on two servers differ, but both would still be valid. So there really shouldn't be any plumbing required. (Edit: this depends on you having a sensible way to load balance them. If you're just running IP round-robin then it's going to be difficult, but that is what scp and custom routes are for.)

I use LE for multiple domains, on multiple systems. Internal and external with no issue. I've even had certificates revoked by LE and it's never had any operational impact.


In general, we always start from the position that a developer machine is compromised. This is part of the zero-trust approach to security, and we work with defense in depth. If neither the developer machine nor the developer is trustworthy, how do we best protect our systems and client data?

As code moves toward production, it passes through multiple stage gates and steps.

- From a code perspective, we use dependency and code scanning (yarn audit, SonarCloud, SonarQube, etc.). SonarCloud has nice IDE integrations.

- Code is pushed and picked up by a pipeline, where further scans look for vulnerabilities/CVEs, etc. If any significant ones are found, the pipeline fails (yarn audit, SonarCloud, SonarQube, Palo Alto container scanner, Docker Bench, etc.)

- The pipeline deploys to test and runs automated checks

- Prior to a production deployment, the pipeline must be manually approved.

- Once in production, we use further scanning and monitoring (Security Hub/Centre, Tenable, SIEM)

Our developers have no direct ability to change the production systems in any way, but they can write code and commit to our Git repository as much as they want. Everything from that point on is automated (apart from the manual approvals).


Ahhh yes. The chicken farms where the chickens are flushed down a hole and automatically killed and cooked for you.


Also the more recent cow farms, where they are crammed in so tightly that they die. There are other cases where one mob is used to torture another, as in the case of creeper farms.

You might add a section about the forced sex work.


Oh, that's really interesting and an area I hadn't considered writing about. But forced breeding of animals and villagers is very much part of setting up the farms and villager professions.


This is very true. Once upon a time, -O3 would enable fast-math. We're very careful to use test models to verify the outputs of release binaries. Enabling fast-math, as you said, optimises the equations in a way that produces different results.


TBH, I had no idea this was a thing either. I was incredibly confused when I identified a single piece of code that was producing a variation in results.


Yes, this is technically true, but it manifests itself through use of the MinGW compiler. When I first noticed the problem, there was very little indication of the cause.

TBH, I credit finding the cause more to dumb luck than anything else. It took many hours.


OP here. There is a huge amount of scientific code written in C++. Just have a look through https://www.coin-or.org/

This code was specifically C++ because we had the intention of integrating with libraries like CppAD and ADOL-C.


OP here. Yeah, this only occurs on Windows with MinGW. Visual C++ and Clang do not have this issue, and neither did any of the Linux compilers I tested.

And yeah, you're right about the associativity mistake. Will correct this :)


I currently build Windows binaries via cross-compilation on Linux using gcc-mingw-w64; I assume that is affected by the bug?


Most likely. You can check with the pastebin code: https://pastebin.com/thTapSgn . The Local and Thread outputs should be the same.

