Author here, welcoming feedback and ideas for how you'd approach challenges like these. It was a long road of operational tasks to get to a "quick" iteration loop in my debug UI, and I'll be ready to share that story once my new ranker is outperforming my old one. As of right now, the entire data + ranking operation runs on a few-years-old gaming PC with 24 GB of RAM and 12 cores.
The NMF model was fairly simple to get off the ground, thanks to the maturity of the deployment ecosystem surrounding it. Getting the Set Transformer to beat it in my internal benchmarks has been a journey; hopefully I'll share more later this year!
I only write about once a year, but in 2025, I started getting serious about using LLMs to make headway on some of my larger side projects, and the results are getting promising. Link here: https://derekrodriguez.dev/magic-the-gathering-is-full-of-in...
I would love to host an ultra-high-quality stream on my own web server, and then have that exact stream piped to YouTube Live via OBS. Is there an easy way to do that now?
YouTube likely won't support streaming 3440x1440 60 FPS video, and while Discord technically supports it, it usually compresses the footage fairly aggressively once it's sent up, so I'd like to host my own; it only needs to support a few people. I wouldn't mind hosting it myself so my friends and side-project partners can watch me code and play games in high quality.
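One way to sketch this (assuming nginx built with the nginx-rtmp-module; server URL and stream key below are placeholders): OBS pushes a single high-bitrate RTMP stream to your own box, friends pull that original stream directly, and nginx re-pushes a copy to YouTube Live.

```nginx
# nginx.conf fragment -- requires nginx compiled with nginx-rtmp-module.
rtmp {
    server {
        listen 1935;              # standard RTMP port
        application live {
            live on;
            # OBS streams to rtmp://your.server/live with some stream key;
            # friends play the same URL directly, at full quality.
            # Re-push a copy of the incoming stream to YouTube Live
            # (the stream key here is a placeholder):
            push rtmp://a.rtmp.youtube.com/live2/YOUR-STREAM-KEY;
        }
    }
}
```

The appeal of this layout is that only one upload leaves your machine; the server fans it out, so your friends' copy never goes through a third party's re-encode.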
I think if AI can help us modernize the current state of hardware verification, that would be an enormous boon to the tech industry.
Server class CPUs and GPUs are littered with side channels which are very difficult to “close”, even in hardened cloud VMs.
We haven’t verified “frontier performance” hardware down to the logic gate in quite some time. Prof. Margaret Martonosi’s lab and her students have spent quite some time on this challenge, and I am excited to see better, safer memory models out in the wild.
When I was in my 20s, I hit a point where I started looking back on my high school years and realized there were a small handful of teachers who had a very large influence on what I use as my "compass" for guiding me towards being the person I wanted to become as an adult.
One commonality among all of those teachers is that, decade(s) later, they seem to be mostly the same people, beliefs-wise and character-wise. It appeared that they had hit a point in their lives where they "figured it out" and anchored themselves on that point. I put the phrase in quotes because, as an adult, I know the statement is superficial, but that is certainly how it seemed when I was younger.
Circling back to the post: in my own lived experience, "men who mean what they say" became that way not necessarily through the sole virtue of honesty, but by guiding themselves with the same set of virtues (honesty included) for large portions of their lives. It was very easy to understand what mattered to them and what they believed in. As an adult at the end of my 20s, it is clear to me that if I want to become the person my younger self aspired to be, following my teachers' example means making an increasing share of my actions reflect the virtues that matter most to me.
But it is a learned process, not one passed down merely by being a person who has learned that lying is bad. By practicing actions that reflect your virtues, you also learn to avoid shallower "ends-justify-the-means" behavior (e.g., is it more important to NEVER tell a lie, even if speaking only in facts you know to be true creates more harm?)
Trying to build a webapp where I apply some recommender-systems knowledge to TCG deckbuilding. MtG in particular is suffering from product fatigue, and as someone who is both an MLE and a casual MtG player, it has been a fun challenge to apply my skills to a domain of interest.
That’s like having extra configuration-dependent ASLR on top!
If carefully implemented, the compiler and linker could omit code that the configuration doesn’t require. Wouldn’t less code mean better security and performance?
Formal methods = “this software cannot do things it shouldn’t do”: I have formally proven it ALWAYS EXECUTES THE WAY I CONSTRAINED IT TO.
Contrast with
Testing = “my tests prove that these specific inputs produce these specific outputs”
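To make the contrast concrete, here is a toy sketch in Lean 4 (the `clamp` function is hypothetical, not from anything above): the `#eval` checks a single input the way a test does, while the theorem constrains every possible input.

```lean
-- Hypothetical example: a clamp function.
def clamp (lo hi x : Nat) : Nat := min hi (max lo x)

-- A test exercises one input:
#eval clamp 0 10 99   -- 10

-- A proof constrains ALL inputs: the result never exceeds the upper bound.
theorem clamp_le (lo hi x : Nat) : clamp lo hi x ≤ hi :=
  Nat.min_le_left hi (max lo x)
```

No number of `#eval`-style spot checks rules out a bad input; the theorem does, but only within the abstraction the proof lives in, which is the catch discussed next.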
IME, formal methods struggle to make contact with reality because you only get their promise (“it always does what it is constrained to do”) when every abstraction underneath provides the same guarantee, and I wager most CPUs/GPUs aren’t verified down to the gate level these days.
It’s just faster to “trust” tests and get most of the benefit, and developing software faster is very important to capturing a market and accruing revenue if you are building your software for business reasons.
EDIT: My gate-level verification remark is a bit extreme, but the point applies to higher layers of the stack. The Linux kernel isn’t verified. Drivers are sometimes verified, but not often. There is an HN comment somewhere about building a filesystem in Coq: while the operations at the filesystem layer are provably correct, the kernel interfaces can still fail. The firmware still isn’t proven. The CPU it’s running on has undisclosed optimizations in its caches and load/store mechanisms which just aren’t proven, but which enabled it to beat the competition on benchmarks, driving sales.