Clinical Epidemiology: The Essentials
The book is an introductory textbook about clinical epidemiology. I don’t think I’ve ever actually read an epidemiology textbook (a big part of why I wanted to read this one), but I’ve read a lot of other material on related matters, and I think it would be fair to say that this is not a new field to me. Most of the material was already familiar; in fact, I happen to know a lot more about many of the topics covered than what’s written in the book. Data analysis is data analysis: whether you’re performing survival analysis using Cox proportional hazards models on medical data or on unemployment data doesn’t really change all that much, except perhaps how to interpret the results. But I didn’t know that I knew beforehand, and the book did contain enough interesting observations along the way to keep me going; even though many of the things covered were things I’d read about before, I did read the book from cover to cover. I should also point out that I emphatically did not spend an equal amount of time on all of it, and that I went over some of it quite fast (I didn’t really need to be reminded what a p-value is…).
The book defines a lot of concepts and shows how they are connected. I’d categorize it as ‘methodological, but informal’, in the sense that there’s very little explicit math aside from what’s required to introduce and explain the variables of interest. Given the informal structure and the fact that it’s an introductory text, it’s natural that a few imprecise statements creep in here and there. There was in particular one paragraph in the middle of the book where I felt unsure what the authors were actually trying to say: they were basically arguing that you shouldn’t ever use a specific research method, described in a very vague manner, but I knew that if they were criticizing what I believed them to be criticizing, then they were overlooking/disregarding benefits of the method which should be traded off against the costs they emphasized. In a couple of places they make semi-questionable assumptions about e.g. attrition-rate dynamics and selection mechanisms related to things like compliance patterns – but given the nature of the book this is perfectly understandable and, I think, forgivable, as a person who’s completely new to the field probably shouldn’t worry too much about those kinds of details yet (you can easily end up confusing people more than you help them by adding too much complexity to an intro textbook). Most of the time the language is precise and to the point.
If you don’t know a lot about this stuff, I think this book is a good starting point: it provides a solid basis, and it’s well written. But it’s not detailed enough (/too informal), and it didn’t teach me enough new material for me to really get excited about it – I ended up giving it 3 stars on goodreads. To give you a sense of what the book is like, I’ve added a few quotes from it below, with my own comments italicized and bracketed:
“Although clinical distributions often resemble a normal distribution the resemblance is superficial. As one statistician (4) put it, “The experimental fact is that for most physiological variables the distribution is smooth, unimodal, and skewed, and that mean +/- 2 standard deviations does not cut off the desired 95%. We have no mathematical, statistical, or other theorems that enable us to predict the shape of the distributions of physiologic measurements.”
The shapes of clinical distributions differ from one another because many differences among people, other than random variation, contribute to distributions of clinical measurements. Therefore, if distributions of clinical measurements resemble normal curves, it is largely by accident. Even so, it is often assumed, as a matter of convenience […] that clinical measurements are “normally distributed”.”
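[The statistician’s point is easy to check yourself. Here’s a small toy simulation of my own (not from the book), using a lognormal distribution as a stand-in for a smooth, unimodal, right-skewed physiological variable, showing that mean ± 2 SD doesn’t cut off the desired 95% – and, worse, that the cutoffs are lopsided: nothing at all is flagged as “abnormally low”.]

```python
import random

random.seed(0)
# A smooth, unimodal, right-skewed distribution (lognormal), standing in
# for a typical physiological variable. My own toy illustration, not
# an example from the book.
xs = [random.lognormvariate(0, 1.5) for _ in range(100_000)]
mean = sum(xs) / len(xs)
sd = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5

low = sum(x < mean - 2 * sd for x in xs) / len(xs)   # flagged "abnormally low"
high = sum(x > mean + 2 * sd for x in xs) / len(xs)  # flagged "abnormally high"
inside = 1 - low - high
print(f"below mean-2SD: {low:.3f}, above mean+2SD: {high:.3f}, inside: {inside:.3f}")
```

[Because the distribution is right-skewed, mean − 2 SD falls below zero, so the lower cutoff flags nobody, and the interval contains about 98% of values rather than 95%.]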
“most distributions of clinical variables are not easily divided into “normal” and “abnormal.” They are not inherently dichotomous and do not display sharp breaks or two peaks that characterize normal and abnormal results. This is because disease is usually acquired by degrees, so there is a smooth transition from low to high values with increasing degrees of dysfunction. Laboratory tests reflecting organ failure, such as serum creatinine for kidney failure or ejection fraction for heart failure, behave in this way. Another reason why normals and abnormals are not seen as separate distributions is that even when people with and without a disease have substantially different frequency distributions, the distributions almost always overlap.”
“patients in clinical trials are usually a highly selected, biased sample of all patients with the condition of interest. As heterogeneity is restricted, the internal validity of the study is improved; in other words, there is less opportunity for differences in outcome that are not related to treatment itself. But exclusions come at the price of diminished generalizability.” [As RCTs’ (potentially) limited external validity is closely related to questions about compliance/adherence, this aspect was naturally covered extensively in Davies and Kermani – so it’s not exactly news to me. Nevertheless it’s an important aspect often overlooked when evaluating studies of this type, so I figured I should include a remark/quote on the topic here.]
“Lead time is the period of time between the detection of a medical condition by screening and when it ordinarily would be diagnosed because a patient experiences symptoms and seeks medical care […] The amount of lead time for a given disease depends on the biological rate of progression of the disease and how early the screening test can detect the disease. When lead time is very short […] treatment of medical conditions picked up on screening is likely to be no more effective than treatment after symptoms appear. On the other hand, when lead time is long […] treatment of the medical condition found on screening can be very effective. […] How can lead time cause biased results in a study of the efficacy of early treatment? […] because of screening, a disease is found earlier than it would have been after the patient developed symptoms. As a result, people who are diagnosed by screening for a deadly disease will, on average, survive longer from the time of diagnosis than people who are diagnosed after they get symptoms, even if early treatment is no more effective than treatment at the time of clinical presentation. In such a situation, screening would appear to help people live longer, spuriously improving survival rates when, in reality, they have been given not more “survival time” but more “disease time.”” [I knew about this phenomenon, but I didn’t know that it had a name. I do now.]
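[The mechanism is easy to simulate. In this toy sketch of mine (the numbers are all made up, purely for illustration), each patient’s death time is fixed in advance – screening changes nothing about the disease course, it only moves the diagnosis earlier. Survival measured from diagnosis still looks roughly twice as long in the screened group.]

```python
import random

random.seed(1)
n = 10_000
survival_from_symptoms = []
survival_from_screen = []
for _ in range(n):
    symptom_dx = random.uniform(5, 10)         # time of symptomatic diagnosis
    death = symptom_dx + random.uniform(1, 3)  # death time does NOT depend on screening
    lead_time = random.uniform(0, 4)           # how much earlier screening finds it
    screen_dx = symptom_dx - lead_time
    survival_from_symptoms.append(death - symptom_dx)
    survival_from_screen.append(death - screen_dx)

print(sum(survival_from_symptoms) / n)  # average "survival" without screening
print(sum(survival_from_screen) / n)    # looks longer -- but the gap is pure lead time
```

[The screened patients here gain exactly the average lead time in measured “survival”, and nothing else: more “disease time”, not more “survival time”, just as the book puts it.]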
“Length-time bias […] occurs because the proportion of slow-growing lesions diagnosed during screening is greater than the proportion of those diagnosed during usual medical care. [This is because the speed of tumor growth is related to the likelihood that a screening test will ‘be necessary’ to find the cancer – a fast-growing cancer will produce symptoms before the screening test/in between screening tests, whereas a slow-growing cancer will not.] As a result, length-time bias makes it seem that screening and early treatment are more effective than usual care.” [For those who don’t know, I should note that some similar problems may pop up in models applied in labour economics, e.g. when dealing with unemployment data. So the math/intuition behind problems such as these is not unknown to me – as mentioned in the beginning, data analysis is to some extent just data analysis, no matter which field of inquiry it’s being applied to.]
“To many people in the Western world, the suggestion that sticking needles into the body and twirling them can decrease pain and control vomiting seems biologically implausible. However, randomized controlled trials of acupuncture have found acupuncture effective.” [I include this quote here only because it cost the book one star in my evaluation. I was probably at around ~3.5 or so before reading this. After reading this, I was at ~2.5. I considered it a serious hit to the credibility of the authors, because to me it displayed a lack of critical thinking and signaled poor judgment. Here’s a Cochrane review on the matter (“We concluded that there was insufficient evidence to judge whether acupuncture is effective in relieving cancer-related pain in adults.”).]