Though I haven’t personally seen many of them, I have the impression that the ‘if you were told you had 24 hours left to live, what would you do with the time you had left?’ theme has been dealt with extensively in movies and pop culture. Now how about this related, if still quite different, question: “If you had the chance to be told at some point exactly when you were going to die, when would you like to get that information?”
Not at all? Ten seconds before? A week before? A year before? Right now?
Of course this question is somewhat related to the fact that more information sometimes makes us worse off rather than better off. Knowing is not always better than not knowing; this is a well-known result in e.g. health economics. However, the devil is in the details, and uncertainty sometimes deceives us, making us believe things that aren’t true (and perhaps making some states of uncertainty preferable to others). I’ll illustrate this in a model below. It’s a bit technical, but not too much, and you needn’t know any fancy math to understand what’s going on. It’s pretty basic, yet a lot of people get stuff like this wrong. The “|” symbol I’m using below should be read as ‘conditional on’; that ought to be (if anything) the only notation you haven’t seen before. So let’s set up the following model:
Say you have a genetic test that will determine with 99.9% certainty whether you have terrible disease X. More precisely, assume 1 out of 1,000 tests gives a false positive (makes you think you have the disease even though you don’t) and that no false negatives ever happen (everybody with the disease will be caught by the test; you will never get a negative test result if you have the disease). X is incurable and deadly; think of it as a ticking time bomb version of the worst disease you can imagine (you don’t know you have it before it gets very bad). Say the background incidence of X is 0.001% (1 out of 100,000 people get it). This is low enough that ‘ordinary people’ would never worry about having the disease (it isn’t hereditary, so family history gives no warning either). Say a public screening protocol is implemented using the test mentioned above. The screening protocol is implemented solely with the purpose of giving people more information about their health status, as no cure exists. Now what would happen?
Let’s say a guy gets a positive test result. What’s the probability that he has the disease? Well, we know that the test is 99.9% accurate, so it should be pretty high, right? Wrong. People familiar with Bayes’ Rule probably know what I’m getting at.
There are six relevant probabilities here:
P(X) = 0.00001 (1/100,000; the probability that a random test taker has the disease)
P(not X) = 1 – 1/100,000 = 0.99999 (the probability that a random test taker does not have the disease)
P(negative test | X) = 0 (the probability of a negative test given that you have the disease)
P(positive test | X) = 1 (the probability of a positive test given that you have the disease)
P(positive test | not X) = 0.001 (the probability of a positive test even though you don’t have the disease)
P(negative test | not X) = 0.999 (the probability of a negative test given that you don’t have the disease).
Now we calculate P(X | positive test), i.e. the probability that you actually have the disease given that you got a positive result. This is equal to
[ P(positive test | X) * P(X) ] / [ P(positive test | X)*P(X) + P(positive test | not X)*P(not X)] =
[ 1*0.00001 ] / [ 1*0.00001 + 0.001*0.99999 ] = 0.009901. Multiply by 100 and you realise that this amounts to less than 1%. The probability that you get a positive result but aren’t sick is of course 1 minus that number, so more than 99% of all people who test positive aren’t sick, even though the test we’re talking about is 99.9% accurate.
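The calculation above is easy to check in a few lines of code. This is just a sketch of the arithmetic in the model; the variable names are mine:

```python
# Bayes' Rule with the numbers assumed in the model above
p_x = 1 / 100_000          # P(X): background incidence of the disease
p_not_x = 1 - p_x          # P(not X)
p_pos_given_x = 1.0        # P(positive test | X): no false negatives
p_pos_given_not_x = 0.001  # P(positive test | not X): false positive rate

# P(X | positive) = P(pos|X)P(X) / [P(pos|X)P(X) + P(pos|not X)P(not X)]
numerator = p_pos_given_x * p_x
denominator = numerator + p_pos_given_not_x * p_not_x
p_x_given_pos = numerator / denominator

print(round(p_x_given_pos, 6))  # 0.009901 – less than 1%
```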
Some information just can’t be unlearned, and people are usually very bad at interpreting probabilities and dealing with numbers like these. Even a lot of doctors get this stuff wrong and may never have heard of Bayes’ Rule (or have forgotten all about it even if they have). Note that if the people who are actually sick would prefer to know in advance, even a blunt screening process like the one above makes them much better off; they have a far more accurate assessment of their probability of having the disease than they did before the screening, as their estimate of having X changes from 1/100,000 to ~1/100. The other side of the coin is that some people who aren’t sick will think they’re much more likely to be sick than they really are – perhaps some of them would have preferred never to have been screened.
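Another way to see the base-rate effect is to simulate the screening protocol directly. The population size and random seed below are arbitrary choices of mine, not part of the model:

```python
import random

# Monte Carlo sketch of the screening protocol described above
random.seed(0)
n = 1_000_000
positives = sick_positives = 0
for _ in range(n):
    sick = random.random() < 1 / 100_000        # background incidence
    positive = sick or random.random() < 0.001  # no false negatives; 0.1% false positives
    if positive:
        positives += 1
        if sick:
            sick_positives += 1

# The share of positives who are actually sick hovers around 1%,
# far below what the test's 99.9% accuracy might suggest.
print(sick_positives / positives)
```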
Note also, in the context of e.g. genetic testing, that adding information to an insurance market can sometimes make that market break down, because the uncertainty that made insurance a sensible move is no longer there. Insurance is about risk diversification, and if you take away the risk and replace it with certainties, well, there’s not much left. If a life insurance firm knows that you will die within the next 5 years with probability p, there will often exist some potential insurance contract making both the firm and you better off. But what’s the price (/premium) of such a contract if p is suddenly no longer uncertain, but rather equal to 1 or 0? What if it’s not about the probability of dying but rather the probability of getting a horrible disease, say, 40 years down the line? Same thing. Replacing uncertainty with certainty often also has distributional consequences for the parties involved (insurance always involves some element of (statistical) cost sharing across individuals).
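To make the insurance point concrete: a toy ‘actuarially fair’ premium is just the expected payout, p times the sum insured. The function name and the numbers below are purely illustrative, not something from an actual insurance contract:

```python
# Toy actuarially fair premium: the insurer's expected payout, ignoring
# loading, discounting and administration costs entirely.
def fair_premium(p_death: float, payout: float) -> float:
    return p_death * payout

print(fair_premium(0.02, 1_000_000))  # a 2% risk is cheap to insure
print(fair_premium(1.0, 1_000_000))   # certainty: the premium equals the payout
print(fair_premium(0.0, 1_000_000))   # certainty: no one needs the contract
```

With p equal to 1 or 0 the ‘premium’ degenerates into either the full payout or nothing, which is exactly why there is no mutually beneficial contract left once the uncertainty is gone.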
Going back to the beginning of the post, I find the “when would you prefer to get that information?” question much more interesting than the “how would you react once you’d already gotten it?” question. It is not, to my mind, an easier question to answer.