Intelligence may be the best indicator of how a prospective employee will do in a job. But exactly how useful such testing is remains an area of contention -- and what does it all mean for an individual employer?
Last month, I discussed the new consensus that personality tests are a poor predictor of future job performance. If personality is being overemphasized, what is a good predictor -- what, if anything, is underemphasized?
A few years ago, a study, "HR Professionals' Beliefs about Effective Human Resource Practices: Correspondence between Research and Practice," published in Human Resource Management, compared the views of HR managers to expert opinion on a range of HR practices.
On some issues, the managers were pretty accurate -- for example, that leaders can be trained, that setting performance goals is more effective than telling people to "do their best," etc. The managers were least accurate around issues of employee selection.
Ironically, this is the topic around which the research community has the best information.
In particular, HR professionals tended to view attitudes, values and personality as being more important than intelligence in predicting job performance when, in fact, intelligence often is one of the better predictors of job performance.
How good a predictor is intelligence? That's where things get tricky.
There is general agreement that many of the factors typically used to predict job performance are not very useful, such as years of education and certain kinds of unstructured interviews. There is also agreement that the very best predictors are where one can observe a job candidate's behavior when performing tasks related to those required in the new job -- either in a previous job or through tests that can approximate those tasks, i.e., "work sample" tests.
Intelligence seems to be the best of everything else -- as well as one of the easiest, cheapest and most reliable to use.
The way in which predictors are judged begins with examining the correlation between intelligence scores and actual job performance.
Measures of intelligence are reasonably straightforward. Getting accurate measures of performance is far harder.
A big problem with much of the research dealing with the link between intelligence and performance is that there are few good sources of data. Most of the research relies on a small number of data points, where employers were able to follow up on employee performance after the tests. Many of these are now decades old, which should raise some concerns if we think the nature of work has changed.
In these studies, raw correlations of about .3 to .4 are common; various statistical corrections for data limitations bring them closer to .5.
To see how much of the actual variation in job performance can be explained using intelligence as a predictor, we square the correlation: if the correlation between intelligence and performance is .5, then intelligence explains about .25 of the actual variation in performance, or 25 percent.
Is that a lot or a little? If the measure explains 25 percent of the variation, that means 75 percent of the variation remains to be explained. But it is also considerably better than picking at random.
A new set of arguments about improved corrections for the data problems in these older studies suggests that the correlation could be as high as .8, which would represent quite a dramatic improvement in predictive power (explaining 64 percent of performance variation).
As you can see from the examples above, the practical significance of the results grows sharply as the correlations improve -- a correlation between a test and performance of .2 is interesting to academics, but because it explains only 4 percent of the variation in job performance, it isn't of much practical use.
An increase in the correlation with performance from .7 to .8, on the other hand, improves the predictive power by about 15 percentage points -- from 49 percent to 64 percent of variation explained -- which matters a great deal in practice. That would seem to make intelligence tests the "go-to" assessment for virtually all jobs.
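The correlation-to-variance arithmetic used throughout this discussion can be sketched in a few lines of Python (the correlation values are the ones quoted above; the function name is mine, not from any standard library):

```python
# The share of variation in job performance "explained" by a predictor
# is the square of its correlation with performance (r-squared).
def variance_explained(r):
    """Fraction of performance variation accounted for by a predictor
    whose correlation with job performance is r."""
    return r ** 2

# Correlations discussed in the text, from "academically interesting"
# (.2) to the most optimistic corrected estimate (.8):
for r in (0.2, 0.3, 0.4, 0.5, 0.7, 0.8):
    print(f"r = {r:.1f}  ->  explains {variance_explained(r):.0%} of variation")
```

Running this makes the practical stakes concrete: moving from r = .2 to r = .5 takes the variation explained from 4 percent to 25 percent, and moving from .7 to .8 adds another 15 percentage points.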
Not everyone is persuaded by the assumptions behind these new claims, however, and a highly technical debate ensued in the August 2007 issue of The Academy of Management Perspectives.
The snag for an individual employer is that while the corrections for data limitations may tell us something about the "true" correlation in a big sample, this won't be the case for a single employer, which would be limited to a smaller sample.
Supporters of intelligence testing, on the other hand, will likely point out that other assessments have the same data limitations.
What do we conclude from all this for an individual employer?
Intelligence probably is the best of the standardized assessments that one can use, although the potential adverse-impact effects of intelligence testing in the employment area remain an important hurdle.
The research suggests that intelligence testing is especially useful for more complicated jobs. But still, the kind of predictive power that an individual employer is likely to experience with these tests is probably not what they would hope -- correlations of .3 or perhaps .4, explaining 9 percent to 16 percent of the actual variation in performance.
Better than the other tests, but not enough to take the risk out of selection.
For many of us who were brought up on social science ideas from the 1970s and 1980s, the fact that intelligence is among the very best predictors of job performance seems surprising and a bit troubling because intelligence has such a strong genetic aspect to it.
Are we saying that genes at birth determine job performance a generation later? Well, yes, at least in part. At least, that seems to be the way research results have been heading over the past decade or so. But it is worth remembering that there is still a whole lot of explaining left to do.