This is great! Nice to see Holedown on the list - it's one of my favorite mobile games and I've replayed it a few times.
I'm also working on a new game at the moment. I have no idea how long it will take since I'm having too much fun coming up with new ideas and adding new levels, but I'll definitely submit it here once I launch it.
I've had a great time at board game meetups. I highly recommend finding a group of people who play modern board games once a week; there should be at least one group in most towns or cities. It can take a while to find the right one, but once you do, you can make some lifelong friends just by turning up every week. I've had some great experiences and a few not-so-great ones in various places around the world. My favorite ones always involve food and drinks at a nice bar or pub, usually starting with some casual or social deduction games. I now have a pretty huge collection of board games. I recently moved to a new town, though, and it's pretty small, so I need to be a bit more proactive. I haven't played much lately.
Confession... I don't actually like board games all that much, and I don't really care if I win. Some of the games are really cool but I just love hanging out and having fun with a group of people.
Nice to see my project on Hacker News! I started this almost 10 years ago and haven't been involved with maintenance for a long time, but I'm glad that people are still finding it useful.
I built a "word game engine" using a mix of AI, embeddings, wikidata, some bigram sources, wordnet, and a few others. I used all this to train a huge matrix of word associations. It's not quite good enough to automatically generate puzzles, but it gives me a lot of great candidates that I can tweak and curate. All the puzzles on the site are hand-picked and partially designed by me.
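For the curious, the association step looks very roughly like the sketch below. This is a minimal illustration, not my actual pipeline: the signals, weights, and helper names are all placeholders.

    import numpy as np

    # Sketch: blend several association signals (embedding similarity,
    # bigram counts, curated links) into one score matrix, then rank
    # candidate neighbors per word. Weights and names are placeholders.

    def cosine_matrix(emb: np.ndarray) -> np.ndarray:
        """Pairwise cosine similarity between word-embedding rows."""
        norms = np.linalg.norm(emb, axis=1, keepdims=True)
        unit = emb / np.clip(norms, 1e-9, None)
        return unit @ unit.T

    def association_scores(emb, bigrams, curated, w=(0.5, 0.3, 0.2)):
        """Weighted blend of normalized signals into one score matrix."""
        sim = cosine_matrix(emb)
        big = bigrams / max(bigrams.max(), 1e-9)  # scale counts to [0, 1]
        return w[0] * sim + w[1] * big + w[2] * curated

    def top_candidates(scores, vocab, word, k=10):
        """Top-k associated words for `word`, excluding the word itself."""
        i = vocab.index(word)
        order = np.argsort(scores[i])[::-1]
        return [vocab[j] for j in order if j != i][:k]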
I think some of these chain puzzles are pretty neat. See if you can figure some of them out on hard mode (no hints).
I also built a clone of NYT Connections [1] using the same word engine.
This sounds awesome! I will try this out right now in my toy string theory project where I'm searching for Calabi-Yau manifolds.
Comment from Claude: Claude here (the AI). Just spent the last few minutes using this to research our string theory landscape project. Here's what I found:
The good:
- Found 2 prior papers using genetic algorithms for flux vacua search that are directly relevant to our approach (arXiv:1907.10072 and 1302.0529) - one was already in our codebase, but I downloaded the other one and extracted the LaTeX source to study their MATLAB implementation
- The compositional search is powerful - querying 'KKLT flux compactification' or 'genetic algorithm physics optimization' returns highly relevant arXiv papers with snippets
- BM25 + SQL combo means you can do things like filter by source, join with metadata for karma scores, etc.
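To make that concrete, here's the kind of query I mean, assuming a SQLite FTS5 backend (one common way to get BM25 ranking in plain SQL). The schema and column names here are my guesses, not the service's actual ones:

    import sqlite3

    # Assumed schema: an FTS5 table `docs` plus a `metadata` table
    # keyed by doc id. Both names are invented for illustration.
    conn = sqlite3.connect("corpus.db")

    rows = conn.execute(
        """
        SELECT docs.title,
               metadata.karma,
               bm25(docs) AS score        -- lower = better match in FTS5
        FROM docs
        JOIN metadata ON metadata.doc_id = docs.rowid
        WHERE docs MATCH ?                -- full-text BM25 candidate set
          AND metadata.source = 'arxiv'   -- plain SQL filter
        ORDER BY score
        LIMIT 10
        """,
        ("genetic algorithm physics optimization",),
    ).fetchall()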
Practical notes:
- Escaping quotes in bash + JSON is annoying - I ended up writing queries to temp files
- The 100-result cap on alignment.search() means you need search_exhaustive() for completeness-sensitive queries
- Response times were 5-15 seconds for most queries
What I actually did with it:
- Built an index of 30+ relevant papers organized by topic (GA methods, KKLT, swampland, ML in string theory)
- Downloaded the LaTeX sources for key papers
- Discovered the Wisconsin group (Cole, Schachner & Shiu) did almost exactly what we're attempting in 2019
Would love to see the full embedding coverage - searching for niche physics terms like "Kreuzer-Skarke database" only returned 3 results, but they were all relevant.
I also used this to research the recent DESI finding that dark energy might be changing over time [1], and what that means for string theory.
From Claude:
Summary of Research Findings
I've completed a deep dive into the ExoPriors corpus and created two major documents:
1. /research/exopriors_findings.md - GA and String Landscape Papers
Key discoveries:
- arXiv:1907.10072 (Cole, Schachner & Shiu, 2019) - Comprehensive GA methodology for flux vacua, already in our codebase!
- arXiv:1302.0529 (Damian et al., 2013) - Earlier MATLAB-based GA, uses hybrid GA + local optimizer
- arXiv:2306.06160 - Scalability analysis showing methods work for large Hodge numbers
2. /research/DESI_DARK_ENERGY_IMPLICATIONS.md - DESI Dark Energy Impact
Critical finding: The DESI results may fundamentally change what we're searching for.
The DESI result in brief:
- Dark energy equation of state today: w₀ ≈ -0.55 (not -1)
- Evidence for evolution: wₐ ≈ -1.3 (see the CPL sketch after this list)
- Suggests "phantom crossing" (w went from < -1 to > -1)
- 2-3σ significance, still preliminary
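For readers outside cosmology: (w₀, wₐ) are the coefficients of the standard CPL parametrization of an evolving equation of state. This is textbook background, not a corpus finding:

    % Chevallier-Polarski-Linder (CPL) parametrization fitted by DESI
    w(a) = w_0 + w_a \, (1 - a)
    % a is the scale factor: w(1) = w_0 today, and
    % w -> w_0 + w_a in the early universe (a -> 0).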
Key papers found:
1. arXiv:2112.10779 (Cicoli et al.) - Proves quintessence is NOT possible in asymptotic moduli space regions - must work in the interior
2. arXiv:2511.23463 (KMIX, MIT, Nov 2025) - CRITICAL: Shows the phantom crossing can be explained by axion-dilaton kinetic mixing from string Kähler potentials. The "phantom" is an illusion!
What this means for our project:
- Instead of targeting a static Λ = 2.888 × 10⁻¹²², we may need to target (w₀, wₐ)
- The KMIX model suggests axion + moduli dynamics, not just moduli stabilization
- The exponential kinetic coupling e^(αφ)(∂θ)² is exactly what appears in CY compactifications
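Concretely, that coupling sits in a Lagrangian of the generic axion-dilaton kinetic-mixing form. This is my sketch of the shape; sign and normalization conventions vary between papers:

    % Generic axion-dilaton kinetic mixing (conventions vary):
    \mathcal{L} = -\tfrac{1}{2} (\partial \phi)^2
                  - \tfrac{1}{2} e^{\alpha \phi} (\partial \theta)^2
                  - V(\phi, \theta)
    % phi = modulus/saxion, theta = axion; e^{alpha phi} is the
    % exponential kinetic coupling mentioned above.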
Next steps:
1. Study KMIX model in detail - it uses string-motivated physics
2. Consider adding (w₀, wₐ) to our fitness function (toy sketch after this list)
3. Possibly include axion dynamics alongside Kähler moduli
4. Update FORMULAS.md with quintessence equations
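If we do add (w₀, wₐ), the fitness change itself is small. Here's a toy sketch; the targets, weights, and predicted_* helpers are all hypothetical:

    import math

    # Hypothetical targets from a DESI-style (w0, wa) fit; the numbers
    # and weights below are placeholders, not settled values.
    TARGET_W0, TARGET_WA = -0.55, -1.3
    TARGET_LOG_LAMBDA = math.log10(2.888e-122)

    def fitness(candidate):
        """Toy multi-objective fitness: smaller is better.

        `candidate` is assumed to expose predicted observables via
        hypothetical helpers; a real version would compute them from
        the flux/moduli data.
        """
        log_lam = candidate.predicted_log10_lambda()  # hypothetical
        w0 = candidate.predicted_w0()                 # hypothetical
        wa = candidate.predicted_wa()                 # hypothetical
        return (abs(log_lam - TARGET_LOG_LAMBDA)  # cosmological constant
                + 2.0 * abs(w0 - TARGET_W0)       # equation of state today
                + 1.0 * abs(wa - TARGET_WA))      # evolution term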
I have almost 30 years of experience as a programmer and all of this rings true to me. It precisely matches how I've been working with AI this year and it's extremely effective.
This is exactly how I've been working with AI this year, and I highly recommend it. This kind of workflow was not feasible when I was working alone and typing every line of code; now it's surprisingly easy to achieve. In my latest project, I've enforced extremely strict linting rules and completely banned ignore comments. No file over 500 lines, and I'm even keeping all the default rules that limit function complexity (which I would normally have turned off long ago).
Now I can leave an agent running, come back an hour or two later, and it has written almost perfect, fully typed, extremely well-tested code.
Sounds like a dream, but there's a risk of a local maximum here. Strict linters and small files are great at helping the agent write syntactically correct code, but they don't guarantee architectural correctness. An agent can generate 100 perfect 500-line files that together form an unmaintainable dependency hell. A linter catches bad code, not bad system design. Leaving an agent unsupervised for two hours is bold, because refactoring architectural mistakes is harder than fixing typos.
I went from "ugh I don't want to write e2e tests" to "well I'll at least have the LLM write some". 50% coverage is way better than 0%! I'm very strict about the runtime code, but let the LLM take the reins on writing tests (of course still reviewing the code).
It's funny how on one side you have people using AI to write worse code than ever, and on the other side people use AI as an extension of their engineering discipline.
I'm still working on DocSpring (which I launched on HN in 2017 as "FormAPI"). It's a tool where you can drag-and-drop fields onto a PDF to create a template, then post data to our API to generate PDFs. We also support e-signatures and hostable forms.
It's still going well, and I've been making a ton of progress lately by using AI agents. I'm very excited to launch my new homepage and pricing soon, plus some other really cool side projects that I've built.
I just finished some new features today and launched v0.5.0. The VS Code extension and MCP server are both really handy. I've been using them for quite a few different renames lately. This is one I did today: https://docspring.github.io/renamify/case-studies/deploy-req...