I have read many books. If you could only read one book about how to program in your life, I would say it should be this one: A Philosophy of Software Design by John Ousterhout. It is 10 times better than the next best book.

After 10 years in defense tech, watching missile attacks in Ukraine and the Middle East made it clear how little most people really get about air defense. So I'm building this simulator, which drops you into the operator's seat. You can test out different scenarios and build an air defense network against various types of threats (with stats from the real world). There are also Ukraine and Israel-Iran scenarios.

https://airdefense.dev/


Double-entry bookkeeping is very easy to understand once you ditch the ridiculous "credit" and "debit" terminology.

Essentially, the goal is to keep the accounting equation true at all times. The equation is: Equity = Assets - Liabilities. Eventually, earnings (Income - Expenses) will become part of equity, so splitting that out, you have: Equity + Income - Expenses = Assets - Liabilities. Rearranging to get rid of the minus signs you get: Equity + Income + Liabilities = Assets + Expenses. This equation must be true or something has gone wrong - like money appearing or disappearing out of nowhere. To keep it true at all times, it should be clear that any time you add money to an account on the left side of the equation (say, to an Income account), you must either add the same amount to an account on the other side or subtract the same amount from the same side.

For example, you sell a lemonade for $5. You add $5 to Sales (Income) and add $5 to Current Account (Assets).

The "credit" and "debit" terminology is ridiculous because their definitions swap around depending on which account you're talking about, which is an utterly absurd (mis)use of language and the main reason people find this confusing.


Haha, same. https://news.ycombinator.com/item?id=39994886

I kept digging and digging on a "sell some lemonade for $5" example, and ended up at:

  - $5 debit to cash (asset => debit means +5)
  - $5 credit to revenue (equity => credit means + 5)
  - $X debit to cost of goods sold (expense => debit means + X)
  - $X credit to inventory (asset => credit means - X)
A double-entry for the money, and a double-entry for the inventory, for a total of 4 entries.
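Treated as signed postings (debits positive, credits negative), the four legs do cancel out; a quick sanity check in Python, with a hypothetical cost X = $3:

    # The four legs of the lemonade sale; debits are +, credits are -.
    X = 3.0  # hypothetical cost of the lemonade sold
    entries = [
        ("Cash",               +5.0),  # debit
        ("Revenue",            -5.0),  # credit
        ("Cost of goods sold", +X),    # debit
        ("Inventory",          -X),    # credit
    ]
    assert sum(amount for _, amount in entries) == 0  # total debits == credits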

It's too complicated for me. I'd model it as a Sale{lemonade:1,price:$5} and be done with it. Nothing sums to zero and there's no "Equity + Income + Liabilities = Assets + Expenses" in my version.

But this is HN, and I think a lot of people would call my way of doing things "double" because it has both the lemonade and the money in it. So when I say I'm not sold on doing actual double-entry [https://news.ycombinator.com/item?id=42270721] I get sweet down-votes.


EMWM can do that without proprietary components.

or turn to sci-hub and annas-archive :)

Going off topic a bit, I've been reading a number of scholarly works on early Christianity over the past year. These include, "The Origin of Satan," "The Gnostic Gospels," "The Gospel of Mary Magdala," "The Passover Plot," "Jesus the Jew," "How Jesus Became God," and "From Jesus to Christ." To be clear, I am an atheist and a history nerd and I'm really enjoying the scholarship of these works.

I recently started reading works that argue against the historicity of Jesus Christ: "Salvation - From Ancient Judaism to Christianity Without a Historical Jesus," "The Jesus Puzzle: Did Christianity Begin with a Mythical Christ? Challenging the Existence of an Historical Jesus," and next up is, "On the Historicity of Jesus: Why We Might Have Reason for Doubt."

I have become largely convinced that the epistles of Paul and pseudo-Paul are writing not about a man who had recently lived, but about a being revealed to him/them in revelations from god (small "g"; remember, I'm an atheist). I won't litigate their arguments here, as I'd have to write blocks of text, but I have been persuaded that the book of Mark was likely an allegory and that it was only with time that it came to be taken literally.

The arrival of the kingdom of god kept getting pushed back: from "this generation will certainly not pass away until all these things have happened" (Mark), to coming soon after the destruction of the temple (Matthew, Luke), to (paraphrasing) "it's coming eventually, so trust in the Church" (John). The need to keep resetting believers' expectations created a need to keep reinterpreting Mark (hence why Jewish Christianity died out and mostly only Gentiles remained).

Anyway, it's a fun topic if you're a non-believer and won't get offended by the ideas presented. I'm enjoying it a lot and thought I'd share. Q is frequently cited in most of the above works and was my jumping-off point; it's the hypothesized "sayings" source, made up of quotes from Jesus, used in the gospels. The thing is, following the Nicene Creed, the variants of Christianity (of which there were at least three documented by ancient historians) were systematically wiped out. Were it not for the works found at Nag Hammadi, we'd have little to go on other than descriptions from Christian apologists; what little we have demonstrates the rich tapestry of alternative beliefs fighting for supremacy (even Paul fought the Jerusalem apostles, Peter, James, and John, on topics such as The Law and kosher foods).

It's just after 6am where I live and I just woke up. Please forgive typos and errors, as I don't have the leisure of properly proofing this comment before getting on with my day / job.


“Wizards of Armageddon” is one of the best books about anything ever. It makes articles like the one linked redundant.

Ultibo is pretty cool…

https://ultibo.org/

It’s a bare metal Free Pascal environment for Raspberry Pi.


> the way to calculate the price of an Option/Derivative hasn't changed in my understanding for 20/30 years

That's not true. It is true that the Black-Scholes model dates from the 1970s, but since then you have had:

- stochastic vol models

- jump diffusion

- local vol or Dupire models

- Lévy processes

- binomial pricing models

all of which came well after the initial model was derived.

Also, a lot of work has gone into calculating vols or prices far faster.

The industry has definitely changed a lot in the past 20 years.
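To make one of these concrete, here is a minimal sketch of the last item, a Cox-Ross-Rubinstein binomial tree for a European call, in Python (parameter values are illustrative only):

    import math

    def crr_call(S0, K, r, sigma, T, steps=500):
        """European call priced on a Cox-Ross-Rubinstein binomial tree."""
        dt = T / steps
        u = math.exp(sigma * math.sqrt(dt))   # up move factor
        d = 1 / u                             # down move factor
        p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
        disc = math.exp(-r * dt)
        # Payoffs at expiry for each terminal node of the tree.
        v = [max(S0 * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
        # Discount back through the tree one step at a time.
        for _ in range(steps):
            v = [disc * (p * v[j + 1] + (1 - p) * v[j]) for j in range(len(v) - 1)]
        return v[0]

    # Converges to the Black-Scholes price (~10.45) as steps grows:
    print(crr_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0))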


I think Veritasium made a really good video talking about some of the differential equations governing option pricing [1] which I found really fascinating. Patrick Boyle's video about Jim Simons' history is really interesting too [2].

Also, reading about Jim Simons, an already very successful mathematician, dropping everything to start a hedge fund and ending up extremely successful at the end of it was a bit of a wakeup call. Clearly this was an extremely smart dude (he was the chair of the math department at Stony Brook!), and so if this was interesting enough for someone like him, then it's probably something worth looking into.

I read through a book on basic trading strategies and I thought it was pretty interesting [3], though I've gone in a pretty different direction from what they taught.

[1] https://youtu.be/A5w-dEgIU1M

[2] https://youtu.be/xkbdZb0UPac

[3] https://www.amazon.com/Machine-Learning-Algorithmic-Trading-...


I used Claude Opus recently to help me build a website. I mocked up the design in Balsamiq, pasted a screenshot of it into Claude, and told it to write the HTML for it with the Twitter Bootstrap CSS framework.

The output got me 85% of the way there and meant I could focus on tweaks rather than having to bootstrap the structure from scratch, so it maybe saved 20-30 minutes? I'm not a frontend developer, but I've dabbled, and this definitely gave me a head start.

I also used it to ask some questions about how to do something in vue.js. These questions would have easily sucked up time doing google searches and getting a lot of low quality results.

The way I see these tools right now is as a useful but dumb assistant: if you know most of the stuff you're asking about but just don't have that final few percent, they're pretty good, and you can verify the output.


I do.

I started using Mathematica in middle school and continued from there. My initial use case was simply double-checking that I had done my math homework correctly: a lot of Solve, DSolve, FindInstance, Reduce, FullSimplify, etc. I did a lot of plotting to visualize things: not just functions of one variable, but parametric curves, inequalities, and functions of multiple variables.

When I studied linear algebra, I implemented Gaussian elimination myself as a learning exercise, and I was very proud of it. The nice thing was that although the algorithm worked on matrices containing known numbers, it automatically worked for matrices containing unknowns thanks to Mathematica's symbolic computation. When I studied basic image processing tasks like edge detection, it was again of great help.

When I got into personal investing, I did yet more calculations, using the FinancialData function to retrieve financial time series, and backtested many kinds of portfolios. When I got into trading options, it was of tremendous help for learning options from first principles: starting from log-normal distributions, implementing Black-Scholes modeling, and then implementing the option greeks (delta, gamma, theta, etc.) from scratch.

Even as a regular software engineer, when I need to work on algorithms, Mathematica is a great help for complexity analyses more sophisticated than interview-level big-O notation. I have even used it as a SAT solver or a linear programming solver in a pinch; I knew there were other tools, but they wouldn't be as nice as Mathematica, or they had higher learning curves than Mathematica's built-in documentation.
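As a flavor of that options exercise, here is the Black-Scholes call price plus a few greeks from scratch, in Python rather than Mathematica; this is a sketch of the standard closed forms, not the parent's code:

    import math

    def norm_cdf(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def norm_pdf(x):
        return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

    def bs_call(S, K, r, sigma, T):
        """Black-Scholes European call: price, delta, gamma, theta."""
        d1 = (math.log(S / K) + (r + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
        d2 = d1 - sigma * math.sqrt(T)
        price = S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
        delta = norm_cdf(d1)
        gamma = norm_pdf(d1) / (S * sigma * math.sqrt(T))
        theta = (-S * norm_pdf(d1) * sigma / (2 * math.sqrt(T))
                 - r * K * math.exp(-r * T) * norm_cdf(d2))
        return price, delta, gamma, theta

    print(bs_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0))
    # ~(10.45, 0.637, 0.019, -6.41)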


I had a startup a few years ago that was in the "eh, we've got some money left from our BigTech days, let's buy a lottery ticket that's also a master's degree" category.

And in late 2018, attention/transformers was quite the risky idea. We were trying to forecast price action in financial markets, and while it didn't work (I mean really, Ben), it smoked all the published stuff like DeepLOB.

It used learned embeddings of raw order books, passed through a little conv widget to smooth a bit, and then learned embeddings of order-book states, before passing them through bog-standard positional encoding and multi-head masked self-attention.

This actually worked great!

The thing that kills you is trying to reward-shape on the policy side to avoid getting eaten by taker fees; with artificially lowered fees, it's a broken ATM.
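Something in the spirit of that stack, a conv to smooth raw book snapshots, sinusoidal positional encoding, then causally masked multi-head self-attention, might look like the PyTorch sketch below. Every size and name here is my guess at an illustration, not the original model:

    import math
    import torch
    import torch.nn as nn

    class OrderBookTransformer(nn.Module):
        def __init__(self, book_dim=40, d_model=64, n_heads=4, n_layers=2):
            super().__init__()
            # Small conv over the time axis to smooth raw order-book snapshots.
            self.conv = nn.Conv1d(book_dim, d_model, kernel_size=5, padding=2)
            layer = nn.TransformerEncoderLayer(
                d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.head = nn.Linear(d_model, 3)  # e.g. down / flat / up

        def forward(self, book):  # book: (batch, time, book_dim)
            x = self.conv(book.transpose(1, 2)).transpose(1, 2)
            t, d = x.size(1), x.size(2)
            # Bog-standard sinusoidal positional encoding.
            pos = torch.arange(t).unsqueeze(1)
            div = torch.exp(torch.arange(0, d, 2) * (-math.log(10000.0) / d))
            pe = torch.zeros(t, d)
            pe[:, 0::2] = torch.sin(pos * div)
            pe[:, 1::2] = torch.cos(pos * div)
            x = x + pe
            # Causal mask: each step may attend only to itself and the past.
            mask = torch.triu(torch.ones(t, t, dtype=torch.bool), diagonal=1)
            x = self.encoder(x, mask=mask)
            return self.head(x[:, -1])  # predict from the final time step

    model = OrderBookTransformer()
    print(model(torch.randn(8, 100, 40)).shape)  # torch.Size([8, 3])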


Can't say I was surprised to see TSA near the top of the rankings https://microprediction.github.io/timeseries-elo-ratings/htm...

What we are missing is industry. People and politicians by themselves really can't take on the financial sector. They are the modern-day rent-collecting feudal lords.

The financial sector (banks/insurers/real estate/stocks/bonds) has a much larger say in the allocation of resources (the economy) than anyone else, including politicians and individual corporations.

The counterbalance is not just people. It has to come from people + industry. Western industries have basically been hollowed out thanks to financialization.

Even Google and Apple bow to the shareholder. The shareholder doesn't do any actual work, but sits on his ass demanding greater rewards even if it means fucking over others, or even the entire system. Same thing with real estate and insurance. So we end up with monopolies.

Now the system is fucked, because we have already peaked in terms of customers available to milk. Everyone is in debt to keep things afloat. The Fed balance sheet has jumped from $900 billion to $9 trillion in mostly junk debt. We already saw people and politicians fold during the 2008 GFC, when they had a real shot at change.

It's in the interest of Google, Apple, et al. to use their influence to get on the right side of this.


I don't know very much about HFT, and these books are old, so take this with a cow lick of salt, but I've seen these mentioned on a blog I found here on HN (by 'yummyfajitas), and I found them illuminating:

Algorithmic Trading & DMA by Barry Johnson

Trading and Exchanges: Market Microstructure for Practitioners by Larry Harris

You might also be interested in 'yummyfajitas blog:

https://www.chrisstucchio.com/blog/2012/hft_apology.html

If people have up to date replacements for these books, I'd be super interested.

If you keep asking around, people are gonna recommend Reminiscences of a Stock Operator. It's an enjoyable book, but keep in mind that it's basically the fictionalized biography of a problem gambler who happened to operate in the stock market; he traded recklessly and died destitute, by suicide. (When you reread it, you start to notice how he never mentions enjoying anything, or talks about the people in his life in more than passing.) Don't take it too seriously.


Mostly agree. The modern shell scripting environment is much more robust than 30 years ago, with ShellCheck and some sane defaults, as you say. I also find it pleasant, once you get over some of its quirks.

As for managing libraries, that's true, but you can certainly import and reuse some common util functions.

For example, this is at the top of most of my scripts:

    # Strict mode: -e exit on error, -E let functions/subshells inherit the
    # ERR trap, -u error on unset variables, -x trace each command,
    # -o pipefail make a pipeline fail if any command in it fails.
    set -eEuxo pipefail

    # Directory containing this script, with symlinks resolved.
    _scriptdir="$(dirname "$(readlink -f "${BASH_SOURCE[0]}")")"
    source "${_scriptdir}/lib.sh"
This loads `lib.sh` from a common directory where my shell scripts live, which has some logging and error handling functions, so it cuts down on repetition just like a programming language would.

I'm not a filesystem admin, but we at LLNL use OpenZFS as the storage layer for all of our Lustre file systems in production, including RAID-Z for resilience in each pool (on the order of 100 disks each), and have for most of a decade. That, combined with improvements in Lustre, has taken the rate of data loss, or of needing to clear large-scale shared file systems, down to nearly zero. There's a reason we spend as many engineer hours as we do maintaining it; it's worth it.

LLNL OpenZFS project: https://computing.llnl.gov/projects/openzfs

An old presentation from Intel with info on what was one of our bigger deployments in 2016 (~50 PB): https://www.intel.com/content/dam/www/public/us/en/documents...


https://robkhenderson.substack.com/p/status-symbols-and-the-...

Usually only the lower classes suffer for our luxury beliefs. This is an exception.


Implied probability of default over a given time span can be approximated with the equation P = 1 - e^((-S * t) / (1 - R)), where S is the CDS spread and R is the recovery rate.

The spread can be recovered using the inverse: S = ln(1 - P) * ((R - 1) / t)

Probabilities and rates are both expressed as percentages, not basis points. S is the spread, t is in years, and R is the recovery rate.
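Spelled out in Python (the 40% recovery rate below is just a common convention I am assuming, not part of the formula):

    import math

    def implied_default_prob(spread, t, recovery=0.40):
        """P = 1 - e^(-S*t / (1 - R)); spread as a decimal, t in years."""
        return 1.0 - math.exp(-spread * t / (1.0 - recovery))

    def implied_spread(p, t, recovery=0.40):
        """Inverse: S = ln(1 - P) * ((R - 1) / t)."""
        return math.log(1.0 - p) * (recovery - 1.0) / t

    # At a 551 bp spread (i.e. 5.51%) over a 1-year horizon:
    print(implied_default_prob(0.0551, 1.0))  # ~0.088, i.e. ~8.8%
    print(implied_spread(0.0877, 1.0))        # ~0.0551, recovering the spread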

Source: my notes from undergrad, and Options, Futures, and Other Derivatives (9th edition) by John C. Hull. Take all this with a grain of salt, as I am not a quant (but I am looking for a job!).

Doing some additional reading, there are some more precise approximations but they are less general.[1]

The last number I was able to pull up had the $CS CDS trading at 551 bp, up from 446 yesterday (an all-time high for $CS).

Lehman Bros hit 640 bp just days before its collapse.

[1] https://quant.stackexchange.com/questions/15986/how-to-compu...


For anyone who's interested in learning PyTorch, here's the best video course I was able to find:

https://www.youtube.com/playlist?list=PLZbbT5o_s2xrfNyHZsM6u...

They explain things incredibly well; the videos are easy to understand, engaging, and to the point. Highly recommend it to everyone!

I've also heard that Udacity has some good courses, but I can't vouch for those yet.


This is all you needed to know to short SVB a month ago: https://twitter.com/WatcherGuru/status/1634246217226919937

I remember listening to an episode of Odd Lots a couple of months back discussing how someone was using the Fed Discount Window.

https://podcasts.google.com/feed/aHR0cHM6Ly93d3cub21ueWNvbnR...

I'm not finance-savvy enough to determine if this fits, but I've been wondering if some kind of event might follow.


Johnny is correct and this most likely has to do with the 404 backlinks and accidental misuse of Google Analytics.

I recently did a comprehensive SEO audit for a web3 brand that saw a similar drastic drop in traffic. Here are some other things to keep in mind to rank better on Google.

1. One H1 per page

SEO experts agree that the best practice is to use one H1 per page. Headers help keep content structured for both Google and readers. If there are multiple H1s, convert them into proper H2-H4s to improve the content's hierarchy.

2. High-quality content of 1000+ words

According to Yoast, the chances of ranking in Google are higher if you write high-quality posts of 1000 words or more. If you have sparse content shorter than this, add a few more detailed sections.

3. Google PageSpeed score >90

Google PageSpeed says that improving performance increases rankings and user experience. The ideal Google PageSpeed score is >90. Make sure your pages take less than 2s to load, and ideally less than 500ms. Reduce JS and additional requests on important pages.

4. Add title tags and meta descriptions with the primary keyword

Google recommends matching H1 tags to title tags to prevent inaccurate article titles from showing up in search results. It's also best to include the primary keyword in the meta description.

5. Improve the primary blog page with multiple sections

Your blog's primary page is your chance to showcase your best content, not just an archive of your latest posts. Separate your posts into sections like best, classics, and latest. Add an H1 to make it clear to the audience what the blog is about, and an H2 subheadline to clarify.

By the way Johnny, you can see the full SEO audit on my Twitter[0].

If you're open to it, I'd love to do a full SEO audit for your blog. Let's get those numbers back to their original state. Please DM me on Twitter.

P.S. - I worked with a software company[1] to build out their docs using Docusaurus so I'm familiar with how it works.

[0]: https://twitter.com/dericksozo/status/1613171898430488578

[1]: https://medium.com/solidstateso/shortcat-documentation-case-...


Back in my day, wireless was a bug, not a feature!

On a completely different topic: after watching my first marriage degenerate and being asked by my wife to move out, I was fairly well mystified about what was going on. We'd read lots of books about marriage and communication, planned for it to be tough, intended to see it through... so what happened?

"The Seven Principles for Making Marriage Work" by John Gottman was a book that I read after we'd separated that first started to make sense of the dynamics of my first marriage, and what caused it to spiral out of control. Unfortunately by that point it was basically too late (as the book predicted, actually); but it certainly helped a lot in my second marriage, and has helped make sense of the dynamics of a lot of other relationships as well. Definitely recommended reading.


If you liked Stephen King's "On Writing", give William Zinsser's "On Writing Well" a read. It's not a memoir but more like a manual, addressing different types of writing from corporate to sports to academic.

If you want to have a solid understanding and need to do it in just a few hours, here are a few things to review.

- The Go programming language spec https://go.dev/ref/spec

- Effective Go https://go.dev/doc/effective_go

- Advanced Go concurrency patterns https://go.dev/talks/2013/advconc.slide#1

- Plus many more talks/slides https://go.dev/talks/


> For example, in Advances in Financial Machine Learning, the author discusses how to pick sensible thresholds and transform the data to convert the regression into a classification problem.

This is interesting! Standard wisdom says that you get more statistical power from predicting a continuous variable as such and then applying a threshold to the output, rather than trying to model the classification directly. Modeling the dichotomisation directly is equivalent to throwing a third of the data in the rubbish bin: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2972292/#__sec1...
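A toy harness for poking at that claim on synthetic data (my own illustration; the cited paper makes the statistical argument properly):

    # Compare regressing on the continuous target and thresholding the
    # prediction, versus classifying on labels that were thresholded up front.
    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 5))
    y_cont = X @ rng.normal(size=5) + rng.normal(scale=2.0, size=2000)
    y_bin = (y_cont > 0).astype(int)  # the dichotomized target

    X_tr, X_te, yc_tr, _, yb_tr, yb_te = train_test_split(
        X, y_cont, y_bin, test_size=0.5, random_state=0)

    # Regress on the continuous outcome, then threshold the prediction...
    reg_acc = ((LinearRegression().fit(X_tr, yc_tr).predict(X_te) > 0) == yb_te).mean()
    # ...versus fitting a classifier directly on the dichotomized labels.
    clf_acc = (LogisticRegression().fit(X_tr, yb_tr).predict(X_te) == yb_te).mean()
    print(f"regress-then-threshold: {reg_acc:.3f}  direct classifier: {clf_acc:.3f}")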

