henwfan's comments | Hacker News

Thanks for buying it and mentioning this. You are right that the current checker is still too strict about matching my variable names and structure, and that is not where I want it to stay.

I am working on a Python validator that compares the parsed code as an AST instead of raw text, so you can use your own variable names and make small structural changes as long as the logic is the same. I am aiming to have the first version of this live by the end of December.
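
To make that concrete, here is a minimal sketch of the kind of normalisation I mean, using Python's ast module. The names here (NameNormalizer, same_logic) are illustrative only, not the actual validator, which will also need to handle things like function arguments and attributes:

    import ast

    class NameNormalizer(ast.NodeTransformer):
        # Rename every variable to a canonical placeholder (v0, v1, ...)
        # in order of first appearance, so naming differences disappear.
        def __init__(self):
            self.mapping = {}

        def visit_Name(self, node):
            node.id = self.mapping.setdefault(node.id, f"v{len(self.mapping)}")
            return node

    def same_logic(code_a, code_b):
        normalize = lambda src: ast.dump(NameNormalizer().visit(ast.parse(src)))
        return normalize(code_a) == normalize(code_b)

    print(same_logic("seen = {}", "lookup = {}"))  # True: only the name differs
    print(same_logic("x = a + b", "x = a - b"))    # False: the logic differs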

After that I want two clear ways to practice. One mode tracks the editorial solution for people who want to drill that exact version, while still allowing harmless differences like different names and small adjustments in structure. The other mode treats your own code as the reference and lets the objectives and feedback adapt to the way you wrote the solution, instead of holding everyone to one fixed template.

If you have thoughts on what would make the checker feel natural and fair for you, feel free to share them here and I will keep them in mind as I make these changes.


That is fair. I went with Google first because it let me ship the first version quickly, but for a tool aimed at developers, GitHub and simple email sign in make much more sense.

I am working on both and plan to let people move their account once they are live if they would prefer not to use Google here.


Good point, and that matches other feedback I am seeing.

You are right that in the current version the checker is still too literal about names and structure. In Two Sum, for example, it nudges you toward my map name instead of letting you use your own, which is not what I want to optimise for once you already know the idea.

The plan from here is to keep an editorial mode for people who want to follow the exact solution and add a more flexible mode that accepts your own names and structure as long as it is doing the same job. Over time the checker should recognise what you actually wrote and adapt its objectives and feedback to that, instead of forcing everyone into one naming scheme.
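
As a concrete example, the flexible mode should treat these two Two Sum solutions as doing the same job, even though every name and the loop style differ (illustrative code, not actual drill content):

    def two_sum_reference(nums, target):
        seen = {}
        for i, n in enumerate(nums):
            if target - n in seen:
                return [seen[target - n], i]
            seen[n] = i

    def two_sum_yours(nums, target):
        lookup = {}
        for idx in range(len(nums)):
            rest = target - nums[idx]
            if rest in lookup:
                return [lookup[rest], idx]
            lookup[nums[idx]] = idx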


Thanks for the kind words, and for taking the time to write concrete suggestions.

GitHub sign in is on the way. Right now it is Google only, but I am adding GitHub so it feels more natural for devs.

For languages, the drills are Python first. Java, C++ and JavaScript will be fully supported by the end of this year across all problems.

The site is dark by default today. A proper light and dark toggle is planned so people can pick what is more comfortable for longer sessions.

Really appreciate you trying it this early and sharing where you would like it to go.


Thanks for taking the time to try it and write this up.

You are right that the current check still leans too much toward my reference solution. It already ignores formatting and whitespace, but it is still quite literal about structure and identifiers, which nudges you toward writing my version instead of your own. There are many valid ways to express the same idea and I do not want to lock people into only mine.

Where I want to take it is two clear modes. One mode tracks the editorial solution for people who want to learn that exact version for an interview, while still allowing harmless changes like different variable names and small structural tweaks. Another mode is more flexible and is meant to accept your own code as long as it is doing the same job. Over time the checker should be able to recognise your solution and adapt its objectives and feedback to what you actually wrote, instead of pushing you into my template. It should care more about whether you applied the right logic under time pressure than whether you matched my phrasing.
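
For instance, the flexible mode should see these two loops as equivalent, even though one is a small structural tweak of the other (process here is just a hypothetical stand-in for the loop body):

    def walk_with_for(nums, process):
        # Reference structure: enumerate-based for loop.
        for i, n in enumerate(nums):
            process(i, n)

    def walk_with_while(nums, process):
        # Same job, different structure: manual index with a while loop.
        i = 0
        while i < len(nums):
            process(i, nums[i])
            i += 1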

There is also a small escape hatch already in the UI. If you completely blank or realise you have missed something, you can press the Stuck button to reveal the reference line and a short explanation, so you still move forward instead of getting blocked by one detail.

You are pushing exactly on the area I plan to invest in most. The first version is intentionally literal so the feedback is never vague, but the goal is for the checker to become more adaptive over time rather than rigid, so it can meet people where they are instead of forcing everyone through one exact solution.


> people who want to learn that exact version for an interview

What is the value in memorizing a specific solution line by line?


That is a good first approximation, but it is a bit more guided than a plain Anki deck. For each problem there is a structured study page and an interactive practice mode.

NeetCode 150 is a popular curated list of LeetCode problems that covers the core interview patterns people expect nowadays, like sliding window, two pointers, trees, graphs, and dynamic programming. I used that set as the base so you are not guessing which problems to focus on, and more problems and patterns are being added on top of that core set regularly.

On the study side, each problem has a consistent structure with the core idea, why that pattern applies, and a first principles walkthrough of the solution. On the practice side, the solution is broken into small steps. Each step has a clear objective in plain language, and you rebuild the code line by line by filling in the missing pieces. After you answer, you see a short first principles explanation tied to the line you just wrote, so you are actively recalling the logic instead of just reading notes.
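
To give a feel for it, a single step in a sliding window drill might look roughly like this (a made-up illustration, not actual site content):

    # Objective: slide the window one position right by adding the new
    # element and dropping the one that falls out of the window.
    def max_window_sum(nums, k):
        window = sum(nums[:k])
        best = window
        for i in range(k, len(nums)):
            window += nums[i] - nums[i - k]  # <- the line you rebuild here
            best = max(best, window)
        return best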

You can repeat problems and patterns as much as you want, mark problems as solved or unsolved, and filter by pattern so you can focus on the ones you struggle with most. There is not a full automatic review schedule yet. For now you choose what to review, and the goal is to use that progress data to track weak patterns, guide what you should drill next, and add more types of focused drills over time.


Thank you, I really appreciate you signing up.

I agree with you on pattern recognition. AlgoDrill is built around taking patterns people already understand and turning them into something their hands can write quickly under pressure. You rebuild the solution line by line with active recall, small objectives, and first principles explanations after each step, so it is more than just memorizing code.

You are also right about the language gap. Right now the drills are Python first, but I am already working on full support for JavaScript, Java, and C++ across all problems, and I will have all of those in by the end of this year. I want people to be able to practice in the language they actually use every day, so your comment helps a lot.


Another +1 for TypeScript from a new lifetime subscriber. Great site!


Nice comparison. It is pretty similar in spirit to the Woodpecker Method.

In chess you repeat the same positions until the patterns feel automatic. Here it is LeetCode problems. You keep seeing the same core patterns and rebuild the solution step by step. For each step and line there is a small objective first, and then a short first principles explanation after you answer, so you are not just memorizing code but training pattern recognition and understanding at the same time.


I mostly agree that the interview format itself is strange. I do not think people should be judged mainly on how many patterns they can recall on command.

The reality for a lot of candidates is that they still face rounds that look exactly like that, and they stress out even when they understand the ideas. I built this for that group, where the bottleneck is turning a pattern they already know into code under a clock. Each step in the drills is tied to a first principles explanation, so the focus is on the reasoning behind the pattern, not trivia.

