The limited value of ‘statistical significance’ in the real world


Earlier this week I was working from home in the morning. I had the radio on in the background. My normal choice of aural wallpaper is BBC Radio 4. It’s often on, but I’m rarely ‘listening’ to it. I rely on the ‘cocktail party effect’ to pick up on anything vaguely relevant. In other words, listening to the radio for me is a bit like being at a gathering where there is a buzz of conversation in the background. In the main, what we ‘hear’ is filtered out, but if someone were to say something particularly relevant to us, such as our name, then it would tend to attract our attention. So, I am generally deaf to items on such matters as the spotting of a rare bird on a remote Scottish island and the minutiae of fiscal policy here in the UK, but when something about health or science pops up, I can suddenly be all ears.

So, earlier this week my attention was grabbed by an item on the debate about whether the time limit for termination of pregnancy should be dropped from 24 weeks (where it stands now) to 22 or 20 weeks. By the way, this blog article is not ethically, morally or religiously driven; it’s about science, or rather, the limitations of it.

One side of the argument here states that the abortion time limit should be brought down because babies can (and do) survive when born at an age lower than the current 24-week cut-off. Those opposing the change have generally used the argument that the ‘evidence’ shows that the survivability of infants born very prematurely has not changed in recent years. So, if 24 weeks was good enough when the limit was set, it is good enough now.

The obvious riposte to Gordon Brown’s (and others’) ‘scientifically-based’ argument is that there’s no reason to assume the abortion time limit is right just because the survivability of very premature infants has not changed. Maybe we got it ‘wrong’ the first time round, and there’s an argument for reviewing the limit.

One individual supporting a review was a woman who was interviewed on Radio 4 who, if I remember correctly, delivered a child at 22 weeks gestation. The child was, she said, left to die. However, because after 36 hours this child had not died, it was duly treated with medical care and survived. According to the mother, while the child was (naturally) a slow-starter, he had caught up and was leading the sort of life you’d expect ‘normal’ children to lead.

It was put to her by the interviewer that infant mortality statistics had not changed, so how could she justify her desire (as if it were not obvious) for the termination time limit to be reduced. What she said, and I’m doing this from memory, was, I think, very telling. She first of all suggested that we need to be a bit careful with statistics. She reiterated the point that children can survive at an age lower than the termination limit. She rounded this off by suggesting that while the statistics may not have changed significantly, for a child who may survive being born very prematurely the issue is very significant indeed.

I think she has a point. And this whole issue reminds me of just how easily we over-rely on the science and statistics. And examples of this, I think, are legion in the medical field.

For example, I have written before about the placebo response and its power in promoting healing. Some (for instance, academics who never go near real patients) dismiss the placebo response as an artefact, and something that is not ‘real’ like the effect you get with, say, a drug that has been ‘proven’ to be effective. My opinion is that if a treatment or approach helps someone, the mechanism behind the improvement is far less important than the fact that they have improved. But I suppose that’s one of the differences between academics and individuals who actually see patients with real problems and who are focused on actually helping people.

Another way science may not be of service to us concerns ‘statistical significance’. This tells us, supposedly, whether there’s some real effect or change going on, or whether it’s merely something that’s most likely due to chance. Statistical significance in scientific studies is denoted by what is known as the P (or probability) value. A value of less than 0.05 is generally regarded as denoting ‘statistical significance’.

Sounds fine so far. Except, I do feel compelled to point out that the choice of 0.05 as a cut-off is utterly arbitrary. It’s a value that the scientific community agree on. It’s a consensus; it’s not carved in stone like some irrefutable scientific truth. If the scientific community decided that 0.01 was going to be the cut-off, then fewer things would be ‘statistically significant’. If the limit was set at 0.1 then many more things would be deemed significant. When we understand this, we begin to see just how arbitrary a lot of scientific ‘findings’ really are.
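To make this concrete, here’s a small Python sketch. The test statistic (z = 2.2) is invented purely for illustration; the point is that the very same result flips between ‘significant’ and ‘not significant’ depending only on which arbitrary cut-off is chosen:

```python
import math

def p_value_two_sided(z: float) -> float:
    """Two-sided P value for a z statistic under the normal distribution."""
    return math.erfc(abs(z) / math.sqrt(2))

# A hypothetical study result: an observed effect with z = 2.2
p = p_value_two_sided(2.2)  # roughly 0.028

# The same P value judged against three different cut-offs
for cutoff in (0.01, 0.05, 0.1):
    verdict = "significant" if p < cutoff else "not significant"
    print(f"cutoff {cutoff}: p = {p:.3f} -> {verdict}")
```

With the conventional 0.05 cut-off the result is ‘significant’; had the community settled on 0.01 instead, the identical data would be dismissed as chance.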

An example of where statistical significance appears to have got in the way of a constructive debate on the subject is vaccination. Our Government here in the UK, most doctors (I suspect) and many commentators would have us believe that vaccination, including the measles, mumps and rubella vaccination (MMR) is ‘safe’. Many will not even entertain the thought that there may be a problem with MMR. They’ll quote the science (some of which is not of the highest quality anyway) in a way that gives the impression, very often, that there is NOTHING AT ALL to worry about.

An analogy may be useful here. Let’s imagine someone decided to do a big study on road safety. Let’s say they counted up the number of times someone, somewhere, crossed the road. And now, let’s imagine, they also count up the number of times someone gets run over (and hurt or killed) as a result of crossing the road. Now, I’m writing this on a plane and can’t even check if these statistics exist. But I think it’s reasonable to assume, that compared to the total number of road crossings, the number of people being knocked down is likely to be very small indeed.

Now imagine we applied some statistical ‘wizardry’ to this (with that arbitrary P value, remember). It’s not too difficult to imagine that one would turn up a result which shows: ‘crossing the road is not associated with a statistically significant increased risk of getting run over.’ Now, many doctors and scientists would interpret this finding as evidence that crossing the road is ‘safe’. However, we all know that while most of the time it is, sometimes it’s not.
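The arithmetic behind the analogy can be sketched quickly. Suppose (and these figures are invented purely for illustration) that crossing the road injures one person in every 500,000 crossings, and that our ‘big’ study observes 100,000 crossings:

```python
# Hypothetical figures, for illustration only
risk_per_crossing = 1 / 500_000   # one injury per 500,000 crossings
n_crossings = 100_000             # crossings observed in the study

# How many injuries would we expect to see?
expected_injuries = risk_per_crossing * n_crossings       # 0.2

# Chance the study records no injuries at all
p_zero_injuries = (1 - risk_per_crossing) ** n_crossings  # about 0.82

print(f"expected injuries: {expected_injuries}")
print(f"chance of observing zero injuries: {p_zero_injuries:.2f}")
```

Even a real, repeatable risk can easily leave no statistical footprint in a study of this size; ‘no significant association’ is not at all the same thing as ‘safe’.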

Now, getting run over has obvious after effects. Vaccination, on the other hand, may not. The effect, for instance, may be delayed. And also the changes can be more subtle than a broken leg, a ruptured spleen or death. Nevertheless, despite the protestations of some, there is a considerable body of people out there who believe (rightly or wrongly) that their child has been damaged by vaccination. And all too often these individuals are dismissed or patronised.

To get some indication of how some of these parents might feel, imagine for a moment turning up at hospital with your child who has been run over. When you get to casualty the attending doctor asks what happened to your child. You reply that they were run over crossing the road. Now imagine the doctor turns round and says, in a somewhat withering tone: ‘I don’t think so. Study after study shows that there’s no credible evidence that crossing the road can be harmful to human health.’

Whatever scientists and doctors sometimes contend, the fact remains: accidents can happen.

