This works in theory and somewhat in practice, but it is not as clean as people make it seem. Speaking as someone who has spent tens of thousands of dollars on Opus tokens and worktrees: it's just not that great. It works, but it's just, ugh, boring and super tedious, and at the end of it all you're still sitting around waiting for Claude to resolve merge conflicts.
So far, it’s relatively bug-free, well-written code that I’ve forked to work behind the walls of a hedge fund, and it works, but the reality is that it doesn’t provide anything that a few terminal windows and git worktrees can’t offer. Am I missing something?
You really need to add more features, because I struggle to find a compelling reason for advanced users to use it.
My curmudgeonly genius Q/Kdb+ programmer of a co-worker, who claims to be immune to the impact of LLMs, is going to be fucking pissed when he hears about Qython.
:D Well, I'm still building Qython, but if your colleague has some example code snippets they think are particularly difficult to translate, I'd love to take on the challenge!
Interesting. I use Opus exclusively (like $1000/day in tokens) via Claude Code. Do you really think Sonnet is better for programming? I’m not sure I agree, though I’d love to save $900/day by taking you up on it.
Genuine: how? I assume you're using something like cc-usage to get that value. $1k/day is tons. Would genuinely love to know how you're managing to keep the inference burning through that much a day, as I'd love to do the same, but even with 2-4 simultaneous sessions running fairly continuously, mostly on Opus for 10-12 hours a day, I get maybe $500/day. What's your workflow rig/setup look like to get you to that $1k velocity?
I use Vertex and work at a hedge fund. I just spam Claude Code Opus all day long. There’s not much to it, other than I sit in a chair for 12-16 hours and spam poor (actually, rich) Claude. I don’t use the cc-usage tool - I just look at my GCP bill :(
I'm not sure that it's "better". I still use Opus and it's better at coding, but it needs more steering to be less of a "Yes, you're EXACTLY right" every time I suggest a new solution path. Purely anecdotal, though.
I’ve been writing an equivalent of uv for the R language at work, and it’s quite daunting / unwieldy. I was feeling bad about it, but then I reminded myself that uv has hundreds of contributors and my project only has one.
I set out on a journey to learn quantitative trading based on signals. The first thing I did was study linear algebra, calculus, statistics, probability, and deep learning. Four months in, I’m on chapter 11 of a deep-learning time-series forecasting book, working through the problems when the author mentions that, to date (it has changed since), there hasn’t been a published paper proving deep learning works better than traditional statistical analysis—i.e., everything in scikit-learn. That sucked.
So I moved on to the next step. There’s an industry that generates hundreds of millions of dollars in ad revenue from blogs and videos on how to trade. For several months, I tested each of these strategies using a backtesting library, vectorbt. People had blogs and videos about using XGBoost and LSTM with other deep-learning libraries—every single one failed.
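To give a sense of the shape of those tests, here's a minimal vectorbt sketch of the kind of backtest I mean - a plain moving-average crossover. The ticker, date range, windows, and fee are arbitrary placeholders for illustration, not one of the strategies I actually tested:

```python
import vectorbt as vbt

# Pull daily closes (placeholder symbol and start date)
close = vbt.YFData.download("BTC-USD", start="2020-01-01").get("Close")

# Classic fast/slow moving-average crossover, which is what most
# blog/video "strategies" boil down to once you strip the marketing
fast_ma = vbt.MA.run(close, window=10)
slow_ma = vbt.MA.run(close, window=50)

entries = fast_ma.ma_crossed_above(slow_ma)
exits = fast_ma.ma_crossed_below(slow_ma)

# Simulate the signals with a small fee and look at the usual metrics
pf = vbt.Portfolio.from_signals(close, entries, exits, fees=0.001)
print(pf.stats())          # drawdown, win rate, Sharpe, etc.
print(pf.total_return())   # compare against simply buying and holding
```

Swap the entry/exit logic for whatever the blog or video is selling, run it across a few tickers and time windows, and you very quickly see whether the edge survives fees and out-of-sample data.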
There’s so much BS in the industry, and I got sucked into the rabbit hole. At least I’m honest enough not to take advantage of other people. Maybe I’ll start a YouTube channel where I backtest every strategy to show everyone that they all fail—and explain why.
Anything related to HFT or day trading is almost impossible in my experience as well. The big funds can do it, but I don't think retail has a chance with these approaches.
Have you researched trend or stage analysis? They're slower strategies and don't offer the get-rich-quick hype, but they have far better success when done well.
Yeah, I think all arbitrage setups of any kind are already being exploited by those with a hard-to-replicate edge (nanosecond execution/latency, very, very deep pockets - typically your large market makers like Citadel). Just think about it: Jim Simons (RIP) had been doing it for longer than most of us have been alive.
Pretty soon AI won’t be optional; it’ll be the only way to run a profitable quant shop.
It is quite possible that GPT-4.1 references Sean Carroll, either directly or by regurgitation.
> One sometimes hears the claim that the Big Bang was the beginning of both time and space; that to ask about spacetime “before the Big Bang” is like asking about land “north of the North Pole.”