Whether it is a good or a bad interview question depends on how you use it. I've used a variant of this when interviewing CS PhD candidates. I don't expect anyone to know how to solve this immediately. If anyone did, I'd assume they'd seen it before, and ask something else.
What I want is to see how someone thinks about algorithmic problems. As they talk through what they're thinking, there are certain to be problems or limitations with their first suggestion. I'll probe on these and see whether they understand those issues as they're pointed out. I'll give hints and see how they assimilate new ideas and information. In the end, 75% of candidates will get to a reasonable solution, and 25% will get to an optimal one, given some hints along the way. I'm much less interested in whether they get to the optimal one, or even in how fast they get there, than in what happens during the discussion along the path to a solution. You learn a lot about how someone thinks from that. It's also useful to learn how someone communicates - communication is an essential part of a PhD - there's a lot of back-and-forth over possible research ideas, so seeing how someone presents their ideas is useful.
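For concreteness, here's a rough sketch of what "reasonable" versus "optimal" could look like. This assumes the question is the classic linked-list cycle-detection problem - an assumption on my part, since the exact question isn't stated here - written in Python:

    # Sketch only: assumes the interview question is linked-list
    # cycle detection, which isn't actually pinned down in this thread.

    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def has_cycle_first_attempt(head):
        # A typical "reasonable" first suggestion: remember every node
        # visited so far. O(n) time, but O(n) extra space.
        seen = set()
        node = head
        while node is not None:
            if node in seen:
                return True
            seen.add(node)
            node = node.next
        return False

    def has_cycle_optimal(head):
        # The "optimal" answer most candidates need hints to reach:
        # Floyd's tortoise and hare. O(n) time, O(1) extra space.
        slow = fast = head
        while fast is not None and fast.next is not None:
            slow = slow.next
            fast = fast.next.next
            if slow is fast:
                return True
        return False

The code itself is beside the point; what's interesting is the path a candidate takes from the first version to the second.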
In summary, such questions can work in an interview as the basis for a dialog, but they're useless unless the interviewer understands that this is what they're actually trying to achieve.
You're not getting any insight into the candidate's problem-solving ability in general, though. You may be getting insight into how they think about a specific class of algorithmic problems, and possibly (although it's unlikely) algorithmic problems in general.
What this interview question does is confirm your own biases regarding what "algorithmic thinking" is and little more.
Of course you're correct, and this is why such a question only forms a part of such an interview. I'm much more interested in what someone has built. We'll have a good discussion about any software they've built, and what they found interesting or difficult about that. Or basically anything where they showed creativity.
Still, in my area of CS, algorithmic thinking is an important aspect of the sort of systems we design, so this sort of question does help me build up a picture of the person's skills and approach to problems.
I have found, though, that there's a pretty good correlation between how someone goes about answering this question (the mental process they follow, not whether they can jump straight to a solution) and how interesting the systems they've previously built are. And, although my sample size is small, the people who did best on this question also did best over the next few years on the systems they built and analyzed during their PhD.
OK, I wholeheartedly agree with that. I was thinking, though, that discussing a problem such as the linked-list one would at least give one sample point on the candidate's ability to reason. In any case, what are you supposed to do if you're restricted to making your decision based on interviews?
It is a data point, but generally one of questionable value. Consider this: even academic settings give students multiple chances to demonstrate their true level of competence over the course of months, in a very constrained subject and a highly controlled setting. I don't think it's rational to believe a panel of interviewers can do it in a matter of hours.
I think interviews are basically just a way we fool ourselves into thinking that what is essentially random chance, biased by a self-selected set of applicants, is (relatively) objective.
I don't know that there is a good alternative. I am quite sure the status quo is thoroughly broken.
I've used a variant of this when interviewing CS PhD candidates.
If in fact it's for an actual PhD qualifying exam (or something similar), then this question might be OK.
The current crisis in interviewing is that it's become almost standard practice to ask questions like this (or its siblings: knapsack, outré graph-search or sorting questions, etc.) for what are basically run-of-the-mill API-monkey / grunt finance-programming / etc. jobs.
As if the message these companies intend to convey is: "Fuck, we have no idea how to assess these candidates, nor do we have the time. So if we just ask a few gee-whiz questions that 75% of them will fail, then that might be an indication that the other 25% just might be a little smarter. Or at least very good at cramming. Because after all, that's how we got through college, too."