>Candidates will be evaluated using a simple quantitative assessment of core competencies.
I've tried implementing that with a coding take-home task, with "mixed results".
On-paper great candidates often refused.
On-paper good candidates made mistakes because they completed it very quickly, while mediocre candidates invested 10x the time and produced a "better" objective result.
Comparing code turned out to be much less straightforward or objective than I'd anticipated: X handed in great, consistent, well-commented code, but didn't null-check inputs; Y covered all cases but left internal class implementation details public; and so on and so forth.
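To make that concrete, here's a hypothetical sketch of the two failure modes (Java; every name here is invented, not the candidates' actual code):

    import java.util.HashMap;
    import java.util.Map;

    // Candidate X: consistent, well-commented code, but inputs are never null-checked.
    class FormatterX {
        /** Formats a price in cents as dollars, e.g. 1050 -> "$10.50". */
        String format(Integer cents) {
            // Throws NullPointerException when cents is null.
            return String.format("$%d.%02d", cents / 100, cents % 100);
        }
    }

    // Candidate Y: covers the null case, but leaks internal state.
    class FormatterY {
        // Implementation detail left public: any caller can mutate the cache directly.
        public final Map<Integer, String> cache = new HashMap<>();

        String format(Integer cents) {
            if (cents == null) return "$0.00";
            return cache.computeIfAbsent(cents,
                    c -> String.format("$%d.%02d", c / 100, c % 100));
        }
    }

Neither submission is strictly better; which flaw matters more depends on the codebase, which is what makes "objective" scoring hard.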
I posit it's still the least bad way to evaluate a candidate. If a 'mediocre' candidate completes the task well, they have still completed the task well.
And, IME, 'great' candidates, even with storied resumes, can simply be good at self-advancement and job applications rather than actually competent at their job.
I don't care how someone looks on paper. Nor should you. Your process should optimize for people who are good at the job. You can look at all the negatives and try to guess "maybe that was a false negative!"... but who cares? What's your rate of true/false positives? That's the only data point you have; optimize for that.
In my case I got zero positives: not a single candidate completed the task perfectly. It's easy to build an accurate classifier if recall is ignored, but not hiring would be a failure for me.
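To unpack the classifier analogy, a minimal sketch with invented numbers:

    // If only 2 of 50 applicants are genuinely good hires, rejecting everyone
    // scores 96% accuracy (48/50 correct) while recall is 0%.
    class RejectAll {
        boolean hire(String candidate) {
            return false; // never a false positive, but never a hire either
        }
    }

Optimizing only for "no bad hires" converges on exactly this degenerate classifier.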