I'm pretty cynical about how any code evaluation is done.
Amongst the many, many ways code evaluations fail, the worst and most typical IMO is the evaluator with an air of superiority: they mark down things they don't understand, think their own coding is that of an artistic genius, approach the task with zero science or rigor, and can't articulate anything concrete to back up the vague assertions coming out of the assessment "process".
I like giving a cursory glance at candidates' GitHub profiles, mostly to see what they're interested in, whether they participate in open source, things like that. The particular "quality" of the code is imho relatively irrelevant.
Besides normal open source contributions and semi-maintained side projects, my own GitHub, for example, is full of one-off scripts and weekend-project repos that are interesting to an average of N < 10 people per year; I still put those up because that N is larger than 0. You won't find the code quality your previous post asked for, or even any test code, in some of them. That doesn't mean I don't have any, just like there are great developers out there with zero presence on GitHub. It's just one of many signals in the hiring process; as long as I don't see a major red flag, the profile mostly tells me "this person works on interesting stuff in domain X / has similar interests to my other devs / has extensive experience with platform Y". I don't have time to look at fancy coverage badges in all 30+ repos of a profile, and there's plenty of time to talk about specifics later on. I treat GitHub profile links like any other link, say to a candidate's personal tech blog: the mere existence is nice and I'll give it a glance, but that's about it.
In most domains I've worked in, other parts of the CV and a candidate's public presence are far more informative, and shoehorning pseudo-scientific metrics about code or test quality onto candidates' GitHub profiles seems counterproductive. People have other things to do with their lives than maintaining recruitment-friendly GitHub profiles; expecting one is not much different from the odd practice of unpaid take-home exercises, imho.
Honestly, since we hire everyone on as a contractor first, I generally tend to believe that they can code if it’s on their resume and they’re otherwise a decent person and able to work in a team.
If someone doesn't work out, we just let them go after half a year. So far that hasn't actually been necessary.