
Against recruitment panel interviews

In my first years as a group leader, I had the chance to interview PhD candidates in panels, as part of international calls for students. I quickly stopped interviewing students, but back then I was very surprised that the top candidates often proved less productive than those we had ranked as mediocre. How was this possible at all? Panels are unbiased, they combine multiple areas of expertise, they allow for critical discussion, so they should be able to pick the best candidates... right?

It turns out to be less surprising than I thought. Now a little more familiar with the dangers of panel interviews, I decided to see what our colleagues from academia have to say about them. This is of course where I should have started before interviewing PhD students, but better late than never. If you haven’t met them, let me introduce you to the flaws of the mighty recruitment panel interview...

1. The unproved efficiency of panels

A good place to start. Despite forty years of research, the benefit of recruitment panels over single evaluators is still debated. According to a review published in 2009:

Findings to date suggest that panel interviews might not necessarily provide the psychometric benefits expected, but could be important for reasons related to perceived fairness.

The only reliable way to improve an interview process is to add structure: ask all candidates the same questions, standardize the evaluation process, and give numeric scores to the items. In spite of repeated demonstrations that structured interviews are more effective, they have not gained much popularity. Most likely, recruiters feel that the human factor adds to their knowledge of the candidate, whereas the opposite is probably true.
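To make this concrete, here is a minimal sketch in Python of what a structured scoring sheet could look like. Everything in it is invented for illustration: the questions, the weights and the 1-5 scale are assumptions, not a validated instrument.

    # A hypothetical structured interview: the same questions, the same
    # weights and the same 1-5 scale for every candidate.
    QUESTIONS = [
        ("Describe a project you completed end to end.", 2.0),
        ("How would you analyze a dataset with missing values?", 1.5),
        ("Explain a difficult concept from your field.", 1.0),
    ]

    def total_score(scores):
        """Weighted sum of per-question scores, each on a 1-5 scale."""
        assert len(scores) == len(QUESTIONS)
        return sum(w * s for (_, w), s in zip(QUESTIONS, scores))

    # Example: two candidates scored on the same items.
    candidates = {"A": [4, 3, 5], "B": [5, 2, 3]}
    ranking = sorted(candidates, key=lambda c: total_score(candidates[c]),
                     reverse=True)
    print(ranking)  # candidates ranked by their structured score

The point is not the particular numbers but the mechanics: every candidate is measured on the same items, so personal impressions cannot silently reshuffle the ranking.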

2. The conformity of the group

In the famous conformity experiment of Solomon Asch, the subjects were asked to say which of three lines matches the length of a reference line. Only one person, the last to give his or her answer, was the real test subject. Everybody else was a stooge instructed to give the same wrong answer. About 25% of the test subjects were not swayed by the (wrong) majority opinion, but 75% were measurably influenced: either their judgement was altered by the crowd, or they did not speak up for it.

We all believe that we are among the 25% who would “resist”... which is why we are probably among the 75% who do not. It is not hard to remember one or two occasions when we did not speak up and felt regret afterwards. Two heads are better than one, but only if they are independent.

3. The inconsistency of experts

One of the most uncomfortable discoveries of clinical psychology is that statistics and simple decision rules beat the experts at forecasting. The work of Paul Meehl, published in 1954, has been confirmed and extended to many areas of decision making. The conclusions are always the same: formulas are more accurate than experts at almost everything.

Interviewers often evaluate the exact same CV in different ways, depending on the one they saw before. Computers and formulas are certainly too simple, but they are consistent. Humans, on the other hand, are very perceptive, but they are also overly influenced by their environment and their mood. As a result, their judgement has a random component that only degrades its accuracy (the short simulation at the end of this section illustrates the point). Worse, experts are confident in their fallible judgement. If you think that this cannot be true of scientists, perhaps this article published in 1978 will make you wonder.

(...) formal training in experimental design, teaching the logic of control groups and baseline predictions, and so on would seem to be a necessary but not sufficient condition. If sophisticated subjects, who are trained in these matters, make similar mistakes to those without training, the prospects for overcoming such tendencies is certainly disheartening.

In short, the degree of confidence of panel members has little to no predictive value for the professional success of the candidate.
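To see why a random component can only hurt, here is a toy simulation in Python. All the numbers are invented: future performance is generated from two observable scores plus luck, a formula combines the observables with fixed weights, and an “expert” uses the same information plus a random mood term.

    # Toy simulation: a consistent formula versus a noisy expert who
    # sees exactly the same information. All parameters are made up.
    import random

    random.seed(42)
    n = 1000
    grades = [random.gauss(0, 1) for _ in range(n)]   # observable 1
    sample = [random.gauss(0, 1) for _ in range(n)]   # observable 2
    # Future performance: part signal, part things nobody can observe.
    performance = [0.6 * g + 0.4 * s + random.gauss(0, 1)
                   for g, s in zip(grades, sample)]

    # The formula applies the same weights every time.
    formula = [0.6 * g + 0.4 * s for g, s in zip(grades, sample)]
    # The expert starts from the same information, plus mood noise.
    expert = [f + random.gauss(0, 0.5) for f in formula]

    def corr(x, y):
        """Pearson correlation, computed from scratch."""
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x)
        vy = sum((b - my) ** 2 for b in y)
        return cov / (vx * vy) ** 0.5

    print("formula vs performance:", round(corr(formula, performance), 2))
    print("expert  vs performance:", round(corr(expert, performance), 2))

The formula correlates better with performance, simply because the expert's extra term is noise: it carries no information about the candidate, so it can only dilute the signal.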

4. The foolishness of interviews

Finally, the most important point is that the interview itself is not a good recruitment tool. Even structured interviews are far from ideal. John Sullivan lists no fewer than 50 flaws of interviews, but his first item is for me the only one that matters.

Many things just can’t be measured accurately during an interview including: many technical skills, team skills, intelligence, attitude, and physical skills. Giving them a work sample or test is often superior.

That is precisely my opinion, and I would go one step further: a key to success is the environment you strive to create for your people, so why would you evaluate the candidates in another context? You do not know how people will perform before you see what they can do in your team, with your material, working on your projects.

Whatever recruitment method you use, you want it to answer the question “Will the candidate succeed with you?”. Trial periods, giving the candidates real problems to solve, and even going for beers with them will tell you more about this than an interview.

