The New York Times recently called into question some of the applications of the venerable Dartmouth Atlas of Health Care to healthcare savings. It's yet another reminder that data is often misused and misunderstood. Here's a primer to help HR leaders judge the value of various benefits programs.
I have tremendous respect for the word "significant." My admiration stems from my graduate-school adviser, who instilled in me that the word should be reserved for the phrase "statistically significant." Ever since, I have used it with precision.
And, as HR leaders, you should consider adopting my obsession with the careful use of research terms.
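To make the distinction concrete, here is a hedged illustration -- the employee counts and the ER-visit rate are invented -- of why an observed difference alone does not establish significance: two groups drawn from the very same underlying rate will still report different sample averages by chance.

```python
import random

random.seed(42)
TRUE_RATE = 0.10  # both groups share one true rate of, say, ER use


def sample_rate(n, rate):
    """Simulate n employees; return the observed fraction with an ER visit."""
    return sum(random.random() < rate for _ in range(n)) / n


group_a = sample_rate(200, TRUE_RATE)
group_b = sample_rate(200, TRUE_RATE)

# Chance alone makes the two observed rates unlikely to match, even though
# nothing real separates the groups -- which is why a raw gap between two
# numbers, by itself, proves nothing.
print(group_a, group_b)
```

A vendor quoting only the gap between two such numbers has told you nothing; a statistical test is what separates chance variation from a real effect.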
As I've looked at the deluge of benefits information that comes across my desk and through my e-mail with an HR executive's eyes, what often catches my attention is the haphazard use of statistics. I finally decided to put fingers to keyboard when a vendor recently touted a new process and substantiated its impact by comparing outcomes to an industry group's survey-reported average.
The reality, however, is that the vendor and its key competitors always exceeded that standard. The statistic was correct -- but it had nothing to do with the impact of the new product.
I'm not the only one concerned about metric misuse. As Clive Thompson writes in his May 2010 Wired column: "If you don't understand statistics, you don't know what's going on -- and you can't tell when you're being lied to."
And, in the May 16 New York Times Magazine, John Allen Paulos describes, in "Metric Mania," the importance of how data is counted and aggregated. Take his example of five-year disease survival rates.
"Suppose that whenever people contract [a] disease, they always get it in their mid-60s and live to the age of 75. ... An early screening program detects such people in their 60s. Because these people live to age 75, the five-year survival rate is 100 percent," Paulos writes.
"People in [a] second region are not screened and thus do not receive their diagnoses until symptoms develop in their early 70s," he continues, "but they, too, die at 75, so their five-year survival rate is 0 percent.
"The laissez-faire approach thus yields the same results as the universal screening program, yet if five-year survival were the criterion for effectiveness, universal screening would be deemed the best practice," Paulos concludes.
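Paulos's arithmetic can be sketched in a few lines of Python. The cohort ages below are invented for illustration: everyone dies at 75 no matter what, and only the age at diagnosis differs.

```python
def five_year_survival_rate(diagnosis_ages, death_age=75):
    """Fraction of patients still alive five years after diagnosis."""
    survivors = [age for age in diagnosis_ages if death_age - age >= 5]
    return len(survivors) / len(diagnosis_ages)


screened = [64, 65, 66, 65]    # early screening detects disease in the mid-60s
unscreened = [72, 71, 73, 72]  # diagnosis waits for symptoms in the early 70s

print(five_year_survival_rate(screened))    # 1.0 -- screening looks like a triumph
print(five_year_survival_rate(unscreened))  # 0.0 -- yet everyone died at the same age
```

The metric moved from 0 percent to 100 percent while not a single patient lived a day longer -- the choice of what to count did all the work.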
Given the desire to choose the best programs for employees, HR leaders need to be able to intelligently interpret the information presented by vendors.
Here are four basic points on research design to help you weed through the statistics:
1. Start with a question, a theory or even a hunch, which I would call a hypothesis.
Let's say your medical carrier tells you that your employees with asthma who are not regularly filling their prescriptions are driving up your emergency-room visits. Your question could be, "Would ER visits in this group go down if we paid for their asthma medications?"
2. Good research design includes both control and experimental groups. The control group serves as the baseline. If the experimental group gets the hoped-for results, but does no better than the control group, then the intervention/program was not responsible for the results.
In other words, the positive change would have happened anyway.
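A back-of-the-envelope way to see this, with invented ER-visit numbers: compare the change in the program group against the change in the control group, because only the difference between the two changes can be credited to the program.

```python
# Invented numbers: ER visits per 1,000 employees, before and after a program.
# The control group received no intervention; the program group did.

control_before, control_after = 62, 48   # improved with no program at all
program_before, program_after = 61, 47   # improved with the program

control_change = control_after - control_before   # -14
program_change = program_after - program_before   # -14

# Only the gap between the two changes is attributable to the program itself.
program_effect = program_change - control_change
print(program_effect)  # 0 -- the improvement would have happened anyway
```

A vendor who reports only the program group's before-and-after drop is claiming credit for the control group's improvement, too.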
In December 2008, Blue Cross and Blue Shield of Vermont released the results of a well-designed study on the impact of three different worksite wellness approaches compared with a control group. The researchers found that, contrary to predictions and despite receiving no services, "the control group saw real, positive changes ... [and] produced measurable health improvement on their own."
3. To reiterate Paulos's comment: How you count and aggregate data is important. For example, employers often ask disability carriers and third-party administrators about their Social Security Disability Insurance approval rate. Some employers believe this is a reflection of disability-claim-management capabilities.
Timothy Carney and Brett Albren of Crowe Paradis -- respectively, its managing director of Medicare Services and its managing partner/president -- explain, however, that the "misconceptions about SSDI approval rates are poignant."
"The overall SSDI approval rate for open and active [long-term disability] claims is about 90 percent for five years and beyond," Tim says. "The rate depends upon the period of time you measure -- whether it's from the date of disability, the first six to 12 months, or after the LTD approval, and can be further impacted by the actual level of SSDI award."
A Crowe Paradis study of 200 data points associated with an SSDI claim found that "only three were statistically significant in predicting approvals: the ICD-9 code (which is the claimant's diagnosis), date of birth and claimant's ZIP code. ZIP code is particularly interesting since it points out that there is no geographic uniformity in the adjudication process," Brett says. "The SSDI adjudication process varies across the country, and someone who lives in Houston may have a different experience or outcome than someone who lives in Boston."
Knowing the above, if you want to ask vendors a meaningful question about SSDI, you could make this inquiry: What is your SSDI approval rate for employees who have been out of work with back pain for 36 months, are between 40 and 55 years old and live in Orange County, Calif.? Please measure time out of work from the first day they went out of work.
4. Accept that study subjects -- read: your employees -- generally do not misrepresent themselves.
James Ruotolo, insurance fraud principal for SAS, says "the challenge with fraud is to quantify the scope of the problem." While the dollars associated with insurance fraud are large and tantalizing stories involving staged crimes are growing, the reality is that fraud is often associated with medical providers, not the employees.
"Only about three percent of disability claimants and three to five percent of workers' compensation claimants are being intentionally fraudulent," James says. "And most of the hard fraud in disability is associated with individual disability insurance, not group."
So, this is your statistics primer. We'll come back to this topic from time to time.
In the meantime, remember that people can tell you whatever anecdotes they'd like, but you have the ability to separate stories from statistically significant information by asking the right questions.
Carol Harnett is a widely respected consultant, speaker, writer and trendspotter in the fields of employee benefits, health and productivity management, health and performance innovation, and value-based health. Follow her on Twitter via @carolharnett.