
If you're selecting the 10% or 5% or 1% that are interesting enough to bring in for an interview, what is a false negative?

I'm not going to claim that the ones selected are the best or that I'm missing ones that I would give a thumbs up to if I had the time to interview every candidate.

Instead, I'm looking for "are they applying to this job? Do they have some of the skills that the job is looking for, and have they claimed to use those particular skills in a past job or project? Do they have a history that demonstrates they're likely to stay around after becoming a positive contributor?"

I've got an Excel spreadsheet where I give a 1 to 5 score for each of the skills (1 being "no claim of the skill being used", 5 being "specifically claimed to use the skill in a current project or role"). The next column is "average tenure in months", and lastly there is a column for red flags or green flags. You will note that nothing in there asks about past company or school; considering school can get us into trouble with discrimination claims if the person studied overseas or went to an HBCU. If you want to call out something (say, the candidate spelled JavaScript three different ways on the same resume: 'java script', "JavaScript", and "java Script" with java lowercase and bold), that can go in the red flags.
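
For concreteness, here's a minimal sketch of what one row of that spreadsheet looks like as a data structure. The skill names, the example scores, and the candidate are hypothetical; only the 1-5 scale, the tenure column, and the flag columns come from the process described above.

    # Sketch of one screening-spreadsheet row as a data structure.
    # Skill names and the example candidate below are made up for illustration.
    from dataclasses import dataclass, field

    SKILL_SCALE = {
        1: "no claim of the skill being used",
        5: "specifically claimed to use the skill in a current project or role",
    }

    @dataclass
    class ResumeScore:
        candidate: str
        skills: dict[str, int]          # skill name -> 1..5 score
        avg_tenure_months: float        # average tenure across past roles
        red_flags: list[str] = field(default_factory=list)
        green_flags: list[str] = field(default_factory=list)

    row = ResumeScore(
        candidate="Candidate 042",
        skills={"Java": 4, "SQL": 3, "Kubernetes": 1},   # hypothetical skills
        avg_tenure_months=26.0,
        red_flags=["spelled JavaScript three different ways"],
    )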

From those lists (multiple people fill out the same spreadsheet for every resume), HR then selects the candidates to move to the next round of interviews.

If I go looking for code the candidate wrote, that can cause problems with discrimination suits at this level of the interview. If we look for blog posts or social media they've written, we open ourselves up to lawsuits and claims of biased hiring.

When it comes to code, we have a take-home test that is given to those who are selected. Giving the take-home up front has been met with "but you aren't paying me to do this" and other forms of refusal... and I'm not going to go through and review 100 submissions - I don't have the time for that.

For the code part of the interview, again, there is a rubric set down such that anyone reviewing the code should come to the same conclusion.
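
As a rough illustration of that kind of rubric (the criteria and point values here are hypothetical, not the actual ones we use), the idea is that every reviewer fills in the same checklist, so the totals should agree:

    # Hypothetical take-home rubric: fixed criteria with fixed point values.
    RUBRIC = {
        "solution compiles and runs":        2,
        "handles the specified edge cases":  2,
        "tests included and passing":        1,
        "readable, idiomatic code":          1,
    }

    def score_submission(checks: dict[str, bool]) -> int:
        """Sum the points for each rubric item the reviewer marked as met."""
        return sum(points for item, points in RUBRIC.items() if checks.get(item))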

And while it seems a bit impersonal, the key is that the same criteria are applied to everyone, and that if someone else were to watch a recording of the interview (this isn't done, but it's the hypothetical goal), they would come to the same conclusions based on that same rubric for evaluating the candidate.


