by traviskuhl on 11/5/21, 3:08 PM with 71 comments
by kasey_junk on 11/5/21, 3:41 PM
At that company (a ~200-engineer, privately held software company) we found a few things:
- In-person tests were less predictive than take-home tests.
- Tests that did not provide automated test cases as examples were less predictive than those that did (see the sketch after this list).
- There was virtually no predictive power in 'secret test cases' that we ran without providing them to the candidate.
- No other part of the interview pipeline was predictive at all. Not whiteboarding, not presenting, not personality interviews, not culture-fit testing, not credentials, not where experience came from, nothing. That held across all interviewers and candidates.
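To make the second finding concrete, here is a hypothetical sketch of what "providing automated test cases" can look like: the candidate receives runnable example tests alongside the take-home problem statement. The exercise, module, and function names here are illustrative, not from the comment above.

    # Hypothetical take-home scaffold: the candidate gets this file and
    # implements the stub; the example tests are runnable with pytest.

    def total_with_tax(cents: int, rate: float) -> int:
        """Return the order total in cents after applying a tax rate.
        (Stub: the candidate implements this.)"""
        raise NotImplementedError

    # Example test cases shipped to the candidate:
    def test_simple_order():
        # a $100.00 order at 8% tax should come to $108.00
        assert total_with_tax(10000, 0.08) == 10800

    def test_zero_rate():
        assert total_with_tax(5000, 0.0) == 5000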
A few caveats about this:
- This was before take-home testing had become widespread and many companies screwed it up. At the time we were doing this, it was seen as novel and interesting by candidates, not as just one more painful hoop to jump through.
- We never interviewed enough candidates to reach true statistical significance.
- False negatives were our biggest concern; they are extremely hard to measure (and potentially open you up to a lawsuit). The best we ended up doing was making our pipeline less selective to account for them. This did not seem to reduce employee quality.
In a more meta sense, that experience led me to believe that strict hiring pipelines are largely not useful. Bad candidates still get through and good candidates don't. Also, many other things have a far more outsized impact on productivity than whether a candidate was 'good'. It turns out humans do not produce at consistent levels all the time, and things outside what you can interview for (company process, employee health, life events, etc.) have far more impact on employee productivity than a candidate's 'score' at interview time.
by lbriner on 11/5/21, 3:42 PM
If we are recruiting a senior engineer, we would expect them to easily complete basic technical tests. If they are more junior, we might use the tests only as an indicator of their ability.
I don't particularly expect a strong correlation between how well someone does in the tests and their long-term value, since that value is made up of many things, only one of which is their ability in the tests.
by cap10morgan on 11/5/21, 3:51 PM
by psadri on 11/5/21, 5:45 PM
I have found that investing the time to correctly onboard new team members makes a huge difference. Correctly onboard an average or good hire and they go on to produce solid output and often thrive. On the other hand, you could have a great new hire who, because of no or poor onboarding, "sinks" instead of swims.
by vannevar on 11/5/21, 4:01 PM
by boldslogan on 11/5/21, 3:24 PM
Do you check on the applicants who were rejected based on their test and see where they ended up working? E.g., if you are a mid-tier startup that rejects someone who goes on to work at Amazon as a high-level engineer, do you mark that as a failure?
by daviddever23box on 11/5/21, 3:26 PM
Unless one's focus is research and development, there is a non-zero cost to training for production skills, so it's best to start with someone who understands the delivery process.
Linear metrics are probably less useful, since it will become rather obvious which employees are self-starting and work well with others, versus those who require external motivation or are staunch individualists.
by kqr on 11/5/21, 5:05 PM
The submitted question seems to brush over this aspect, but so far, when I've tried to evaluate interviewing techniques, this has been the primary obstacle: people just can't agree on what success means once someone is employed, so anything that tries to correlate interviewing with success will be equally junk.
by poulsbohemian on 11/5/21, 5:11 PM
I interviewed hundreds of technical people in my career, across dev, test, and ops skill sets. I saw limited correlation between tests and aptitude. If you talk to someone about a project they've done, you know pretty quickly:
1) Can they communicate technical ideas?
2) Can I develop a rapport with this person and work with them?
3) Do they understand what they built? Can they talk about the tradeoffs they made? Did they learn anything from the experience?
A FizzBuzz test isn't a terrible idea, but you also have to have an interviewer who understands how to administer it within the wider context of the interview. An interviewer who doesn't understand it themselves isn't qualified to administer it.
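For reference, FizzBuzz asks the candidate to print the numbers 1 through 100, substituting "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both; a minimal Python solution looks like this.

    # Canonical FizzBuzz: the point of the screen is that a working
    # programmer can produce this in a couple of minutes.
    for i in range(1, 101):
        if i % 15 == 0:      # multiple of both 3 and 5
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)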
by dreen on 11/5/21, 4:45 PM
by andrew_ on 11/5/21, 3:57 PM
by AnotherGoodName on 11/5/21, 4:22 PM
https://catonmat.net/programming-competitions-work-performan...
by xeromal on 11/5/21, 4:24 PM
by ipnon on 11/5/21, 4:23 PM
We have short, standardized, broad interviews. We look for what a candidate can add to the team rather than poking holes, and we're still trying to improve.
by Aeolun on 11/5/21, 4:42 PM
So far we’ve hired 7 decent and 3 great people. No truly bad people have made it through that pipeline yet.
I can’t say anything about why, and I’d be prejudiced in any case.
by nonameiguess on 11/5/21, 4:12 PM
Of course, it's really not possible to do this at the level of rigor expected of, say, clinical trials. Each new hire will know what type of interview you put them through, and there is no reliable way to prevent them from telling others.
by a_c on 11/5/21, 4:15 PM
On top of being hard to measure, the data points generated through hiring are just too few, and the data collection process is too long and subjective.
Just ask your team if they like the new hire and whether they can make progress together. Things like: Do you like working with the new hire? Is the new hire bringing new insights to the team? Are they easy to work with? Are they learning new things?
And most importantly: can the team let go of a mismatch fast enough? Overall, I would say measuring hiring is just not worth it.
by nitwit005 on 11/5/21, 3:49 PM
However, we do hire some contractors essentially without an interview, and it is fairly apparent that's a bad idea.
by maxgfaraday on 11/6/21, 10:14 AM