“Compliance is the degree to which a patient is compliant with the instructions that are given by a healthcare professional and written on the medication label (for example, prescribed dose and time schedule).” (p.8 – I didn’t know that definition before reading the book so it made sense to me to start out with this quote, to make sure people are aware of what this book is about.)
It’s an interesting book with a lot of stuff I didn’t know and/or at the very least hadn’t thought about. A couple of the chapters were quite weak and I basically skipped most of chapter 6, which was written by a pharmaceutical marketing consultant who wrote about branding stuff which I couldn’t care less about – but most of the book was quite good. One of the chapters (chapter 8) very surprisingly included undocumented claims which were to some extent proven wrong in a previous chapter (chapter 3) – it seemed as if the authors of that chapter had not read the previous chapter in question. Here’s what they wrote at the very beginning of their chapter (chapter 8):
“Compliance is important. Better adherence to treatment regimes leads to less healthcare resource utilization overall, as fewer illness recurrence or medication errors leading to side-effects take place.” (p.109)
And here’s what Dr. Dyffrig Hughes told us in chapter 3:
From the studies evaluated, the direction and magnitude of the change in costs and consequences resulting from applying sensitivity analysis to the compliance rate was measured and taken as an indicator of the impact of non-compliance. There was consistency among studies, in that as compliance decreased (whatever the measure), the [health] benefits also decreased [...] There is no consistency, however, in the direction of change in costs resulting from changes in compliance [my bold, US] [...] Whilst some studies show that costs increase as compliance decreases, others showed the opposite trend. This difference did not appear to be related to the nature of the disease, the measure of non-compliance or the assumptions relating to the health benefits experienced by non-compliers.
And here’s even a figure illustrating this point:
A little more from chapter 3 on the same subject: “The economic evaluations described demonstrate that medical expenditures do not always increase because of poor compliance. However, the limitations in the methodology adopted in many of the studies would suggest that the reported changes in healthcare expenditure may not necessarily be observed in practice. It is difficult, therefore, to predict the true economic impact of non-compliance with drug therapy, particularly as evidence relating to discontinuers is often not reported. It is the case, however, that decisions on optimal treatments, based on economic criteria, are influenced by non-compliance [...] Health economic evaluations often fail to include non-compliance with medications. As a significant proportion of evaluations are based on efficacy trials, attention should be given to how their findings might be generalized. In particular, as poor compliance is one of the most important elements responsible for the differences that may exist between the effectiveness and efficacy of an intervention, greater consideration should be given to compliance when generalizing from the results of a controlled clinical trial. An optimal cost-effective treatment strategy chosen on the basis of efficacy data may not be so attractive once real-world compliance figures are taken into account.”
I don’t consider this to be an unforgivable error in a book like this with a lot of authors writing about different aspects of the problem, but it doesn’t help that the authors of chapter 8 repeat the claim that improved compliance will have cost-saving effects in their conclusion of the chapter as well, and at the very least it doesn’t make them look good to me (a more cautious and tentative approach in the introduction and the conclusion of the chapter would have suited me better). A good editor sh(/w)ould probably have caught something like this.
The efficacy/effectiveness difference he talks about relates to the fact that the results of randomized controlled trials (RCTs) could/should be considered estimates of the health effects related to something close to the ideal treatment scenario, whereas real world implementation (effectiveness) of the treatment in question will often provide patients a sometimes significantly lower health benefit in terms of average treatment effect (or similar metrics), because of differences in the composition of the two groups and the settings of the treatment protocols applied, among other things. RCTs often deliberately try to maximize compliance e.g. by excluding patients who are likely to be non-compliers, and that of course will lead to biased estimates if you apply such estimates to the total patient population. There are many variables affecting how big the potential difference between efficacy and effectiveness may be for a particular drug and they cover that stuff, as well as a lot of other stuff, in the book. Non-compliance rates are much bigger than I’d imagined, but there are a lot of reasons for this that I hadn’t considered. The fact that non-compliance is widespread can be inferred even from the definitions applied in clinical trials:
“ultimately it is the outcome that is important. This might not always require that all doses of a drug are taken. Indeed, in short-term efficacy clinical trials patients who take 80 per cent or more of their medication, based upon pill counts, are usually considered ‘compliant’.” (p.14)
You can fail to take one-fifth of the medicine and still be considered compliant. Indeed as Parkinson, Wei and McDonald put it in their chapter:
“As the reader of this chapter it might be informative to reflect on your own behaviour: can you honestly say that you have always complied fully with every tablet of every prescription and have always finished the course? A very few readers will say yes, with honesty. The reality is that nearly everyone is non-compliant; the variable is the degree of non-compliance.”
A few numbers from the book illustrating the extent of the problem:
“reports (for example, Sung et al., 1998) have suggested that only 37 percent of participants take greater than 90 per cent of all doses of statins over a two-year period. [...]
[Asthma:] When patients were aware of being monitored a majority (60 per cent) were fully compliant, but when unaware the majority had a compliance rate between 30 and 51 per cent (Yeung et al., 1994). [...]
Significant levels of non-redemption [of prescriptions], as seen in this study, have subsequently been confirmed within the large UK general practice databases such as GPRD where there is only about 90 per cent concordance between the prescriptions issued by the GP and those recorded as being redeemed at a pharmacy by the UK Prescription Pricing Authority (Rodriguez et al., 2000). [...]
Chapman et al. (2005) recently examined compliance with concomitant antihypertensive and lipid-lowering drug therapy in 8406 enrollees in a US managed care plan [...] Less than half of patients (44.7 per cent) were adherent with both therapies three months after medication initiation, a figure that decreased to 35.8 per cent at 12 months. [...]
Despite international clinical guidelines recommending lipid-lowering treatment in patients with clinically evident atherosclerotic vascular disease, study after study has documented low treatment rates in this high-risk patient population, thereby creating a clinical practice and public health dilemma (Fonarow and Watson, 2003).
Only about 30 per cent of patients with established CVD and raised serum lipids, and fewer than 10 per cent of individuals eligible for primary prevention, receive lipid-lowering therapy. Target total cholesterol concentrations are then achieved in fewer than 50 per cent of patients who do receive such treatment (Primatesta and Poulter, 2000).
Poor patient compliance to medication regimen is a major factor in the lack of success in treating hyperlipidaemia (Schedlbauer et al., 2004). All of the lipid-lowering drugs must be continued indefinitely; when they are stopped, plasma cholesterol concentrations generally return to pretreatment levels (Anon, 1998). [...]
Up to half of the patients treated for hypertension drop out of care entirely within a year of diagnosis (ibid. [WHO, 2003b], Flack et al., 1996). [...]
Non-compliance comes in many forms: depending on the disease area, as many as one in five patients fail to take the first step of collecting a prescription from the pharmacy. Many patients on short-term medications depart from recommended doses within a day or two of starting treatment. And many of those on longer-term medication may take a break from their medication or vary their dose depending on how they feel. A review of the evidence (Horne and Weinman, 1999) concluded that compliance overall is approximately 50 per cent but varies across different medication regimens, different illnesses and different treatment settings.”
A little more stuff from the book:
“Compliance depends on many factors, including the study population (better in educated compared to disadvantaged patients), type of intervention, duration of treatment, complexity of treatment, real or perceived side-effects and life circumstances (see Table 8.1). The reasons are often patient-specific, multifaceted and can change over time. Demographically, the very young, the very old, teenagers and those taking very complex treatment regimes are the least likely to comply. [...]
asymptomatic and chronic diseases needing long-term treatment [...] result in poorer compliance; and [...] the longer the remission in chronic diseases, the lower the compliance (Blackwell, 1976). [...] patient-controlled non-compliance was lower in treatment for diseases in which the relationship between non-compliance and recurrence is very clear, such as diabetes, compared to treatment for diseases in which this relationship is less clear [...] Of course, cognitive deficit, helplessness, poor motivation and withdrawal all lead to forgetfulness and passive or structural noncompliance (Gitlin et al., 1989; Shaw, 1986). [...] most non-compliance is intentional and results from conscious choices. [...]
As a rule, patients cannot be simply classified as compliers or non-compliers. Rather, the level of compliance ranges from patients who take every prescribed dose precisely as directed to those who never do, with the typical patient lying between these two extremes. The degree to which patients intend to comply with a regimen can be subdivided into patient-controlled and structural. Patient-controlled factors can be subdivided further into rational behaviour (as seen in patients with Parkinson’s disease who regulate their own dosing) and irrational behaviours (such as self-induced seizures). Structural factors are those beyond the patient’s control, such as impaired memory or difficulty accessing medication (Leppik, 1990). [...]
Compliance and adherence to therapy are complex issues with no obvious ‘one size fits all’ solution available. It appears that actively involving patients in treatment decisions, empowering patients with access to medical information and providing ongoing monitoring all contribute to improved compliance and adherence rates. The challenge for health services, however, is to provide these enhanced levels of support cost-effectively.”
The book is a few years old and sometimes you can tell. I was curious along the way about how much things have changed in the meantime. I’m guessing less than would have been optimal.
I should point out lastly that I have made a goodreads profile. I haven’t added a lot of books to my profile yet, but I may decide to use that site actively in the future. At goodreads I gave the book 3 stars, corresponding to an ‘I liked it’ evaluation.
i. Econometric methods for causal evaluation of education policies and practices: a non-technical guide. This one is ‘work-related’; in one of my courses I’m writing a paper and this working paper is one (of many) of the sources I’m planning on using. Most of the papers I work with are unfortunately not freely available online, which is part of why I haven’t linked to them here on the blog.
I should note that there are no equations in this paper, so you should focus on the words ‘a non-technical guide’ rather than the words ‘econometric methods’ in the title – I think this is a very readable paper for the non-expert as well. I should of course also note that I have worked with most of these methods in a lot more detail, and that without the math it’s very hard to understand the details and really know what’s going on e.g. when applying such methods – or related methods such as IV methods on panel data, a topic which was covered in another class just a few weeks ago but which is not covered in this paper.
This is a place to start if you want to know something about applied econometric methods, particularly if you want to know how they’re used in the field of educational economics, and especially if you don’t have a strong background in stats or math. It should be noted that some of the methods covered see widespread use in other areas of economics as well; IV is widely used, and the difference-in-differences estimator has seen a lot of applications in health economics.
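For readers who haven’t met it before, the core logic of the difference-in-differences estimator fits in a few lines: compare the before/after change in a treated group with the same change in an untreated control group, and take the difference of those two differences as the estimated treatment effect. A minimal sketch, with all the numbers made up purely for illustration:

```python
# Toy difference-in-differences calculation (all numbers hypothetical).
# The control group's before/after change proxies for the common time
# trend; subtracting it from the treated group's change isolates the
# treatment effect (under the parallel-trends assumption).
treated_before, treated_after = 10.0, 16.0
control_before, control_after = 9.0, 12.0

did = (treated_after - treated_before) - (control_after - control_before)
print(did)  # 3.0 -> estimated effect, net of the common trend
```

In practice this is implemented as a regression with group, period and interaction terms, which is also how the identifying assumptions get discussed in papers like the one above.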
ii. Regulating the Way to Obesity: Unintended Consequences of Limiting Sugary Drink Sizes. The law of unintended consequences strikes again.
You could argue with some of the assumptions made here (e.g. that prices (/oz) remain constant) but I’m not sure the findings are that sensitive to that assumption, and without an explicit model of the pricing mechanism at work it’s mostly guesswork anyway.
iii. A discussion about the neurobiology of memory. Razib Khan posted a short part of the video recently, so I decided to watch it today. A few relevant wikipedia links: Memory, Dead reckoning, Hebbian theory, Caenorhabditis elegans. I’m skeptical, but I agree with one commenter who put it this way: “I know darn well I’m too ignorant to decide whether Randy is possibly right, or almost certainly wrong — yet I found this interesting all the way through.” I also agree with another commenter who mentioned that it’d have been useful for Gallistel to go into details about the differences between short term and long term memory and how these differences relate to the problem at hand.
“An extensive body of prior research indicates an association between emotion and moral judgment. In the present study, we characterized the predictive power of specific aspects of emotional processing (e.g., empathic concern versus personal distress) for different kinds of moral responders (e.g., utilitarian versus non-utilitarian). Across three large independent participant samples, using three distinct pairs of moral scenarios, we observed a highly specific and consistent pattern of effects. First, moral judgment was uniquely associated with a measure of empathy but unrelated to any of the demographic or cultural variables tested, including age, gender, education, as well as differences in “moral knowledge” and religiosity. Second, within the complex domain of empathy, utilitarian judgment was consistently predicted only by empathic concern, an emotional component of empathic responding. In particular, participants who consistently delivered utilitarian responses for both personal and impersonal dilemmas showed significantly reduced empathic concern, relative to participants who delivered non-utilitarian responses for one or both dilemmas. By contrast, participants who consistently delivered non-utilitarian responses on both dilemmas did not score especially high on empathic concern or any other aspect of empathic responding.”
In case you were wondering, the difference hasn’t got anything to do with a difference in the ability to ‘see things from the other guy’s point of view’: “the current study demonstrates that utilitarian responders may be as capable at perspective taking as non-utilitarian responders. As such, utilitarian moral judgment appears to be specifically associated with a diminished affective reactivity to the emotions of others (empathic concern) that is independent of one’s ability for perspective taking”.
On a small sidenote, I’m not really sure I get the authors at all – one of the questions they ask in the paper’s last part is whether ‘utilitarians are simply antisocial?’ This is such a stupid way to frame this I don’t even know how to begin to respond; I mean, utilitarians make better decisions that save more lives, and that’s consistent with them being antisocial? I should think the ‘social’ thing to do would be to save as many lives as possible. Dead people aren’t very social, and when your actions cause more people to die they also decrease the scope for future social interaction.
v. Lastly, some Khan Academy videos:
(This one may be very hard to understand if you haven’t covered this stuff before, but I figured I might as well post it here. If you don’t know e.g. what myosin and actin are you probably won’t get much out of this video. If you don’t watch it, this part of what’s covered is probably the most important part to take away from it.)
It’s been a long time since I checked out the Brit Cruise information theory playlist, and I was happy to learn that he’s updated it and added some more stuff. I like the way he combines historical stuff with a ‘how does it actually work, and how did people realize that’s how it works’ approach – learning how people figured out stuff is to me sometimes just as fascinating as learning what they figured out:
(Relevant wikipedia links: Leyden jar, Electrostatic generator, Semaphore line. Cruise’s play with the cat and the amber may look funny, but there’s a point to it: “The Greek word for amber is ηλεκτρον (“elektron”) and is the origin of the word “electricity”.” – from the first link).
You can download the book here.
I generally prefer reading books offline but since I found the avax e-book collection, I’ve found it very difficult to argue myself out of reading books from that site (because the site has so many awesome books, including textbooks, available for free).
As for the specific book in question, I like it so far. If you’ve never opened a textbook about medicine, genetics, microbiology or similar stuff before in your life, it’ll probably be too technical for you to benefit much from it; I’d certainly have had an easier time reading chapter two on genetic diseases if I’d had a stronger background in biochemistry, and it’s not like this is a topic I’ve never dealt with before. Chapter 3, on disorders of the immune system, was even worse than chapter two. Microbiology, which is somewhat related to the field of immunology, is also a subject I’ve read about in the past, but that reading has been much more ‘fragmented’ and less systematic than my reading of e.g. the genetics literature (which is itself rather scattered and unsystematic, compared to my reading of the diabetes literature…). So even though, while reading the immunology part, I seemed to remember both having seen some of this stuff before in the textbooks and having touched upon some of the themes on Khan Academy and Wikipedia, I found some of it quite hard to read and understand. In the genetics section it helped a lot to be familiar with a lot of the key concepts (‘fitness’, ‘linkage disequilibria’, ‘stages of meiosis’, ‘genotype/phenotype’, ‘Mendelian inheritance’, ‘mutation and drift’, ‘fixation’, …) – I had no such systematic knowledge to rely on when reading about the immunology stuff.
All that said, there’s a lot of good stuff in this book and when you’re not reading a book like this in order to pass an exam you’re not as worried about missing some details – I don’t plan on understanding everything in this book and I feel fine about ‘mentally skipping sections’ which are very technical (meaning reading the words but not fully understanding what the words mean). It doesn’t seem likely to me that the added understanding I’d get from ‘looking up everything’ would add enough to my understanding of the material to justify the costs. I want to enjoy reading this, so I’ll read all the stuff but I won’t look up all the unknown stuff.
As can probably be inferred from the above comments, the book is much more technical than the sexual diseases book I covered a few days ago. I decided to include below a few examples of what this means by quoting a couple of passages from chapter three, on disorders of the immune system:
“Activated macrophages secrete proteolytic enzymes, active metabolites of oxygen (including superoxide anion and other oxygen radicals), arachidonic acid metabolites, cyclic adenosine monophosphate (cAMP), and cytokines such as interleukin-I (IL-I), IL-6, tumor necrosis factor (TNF), and IL-8, among others. Many tissue-specific cells are of macrophage lineage and function to process and present antigen (Langerhans’ cells, oligodendrocytes, etc.).” [...]
“Polymorphonuclear leukocytes (neutrophils) (PMNs) are granulocytic cells that originate in the bone marrow and circulate in blood and tissue. Their primary function is antigen-nonspecific phagocytosis and destruction of foreign particles and organisms. The presence of Fcγ receptors on the surface of neutrophils also facilitates the clearance of opsonized microbes through the reticuloendothelial system.” [...]
“in the airway inflammatory response in asthma, eosinophil-derived mediators of inflammation, including major basic protein (MBP), eosinophil-derived neurotoxin (EDN), eosinophil cationic protein (ECP) and lysophospholipase (LPL) are toxic to respiratory epithelium.”
Of course it’s not all like this but if you’re not okay with not always knowing more or less completely what’s going on, you should probably stay away from this book.
I only ever covered two of Steven Farmer’s lectures here on the blog, and back when I blogged them I didn’t watch all of the lectures. Recently I went through some old bookmarks and decided to have a go at that stuff again. He’s pretty good:
Completely unrelated but I figured I should mention it: Tomorrow’s the first day of the London Chess Classic tournament. This chess tournament is as good as it gets: the world’s three highest rated players are all playing, as is the World Champion and the world’s strongest female player. Last year the live commentary was provided mainly by IM Lawrence Trent and GM Daniel King. They did a splendid job, but this year the organizers have upped the ante and found some significantly stronger players to do the job: Nigel Short and David Howell. Both of those guys are former contestants in the tournament. As usual the tournament has an odd number of contestants, and the player with the bye round will join Short and Howell in the commentator box and give his/her views on the games as they proceed. I’ve been really impressed with the way the live commentary has been handled the last few years, and you can learn a lot by watching this stuff (here’s a direct link). The tournament has implemented a 3/1/0-rule (3 points for a win, 1 for a draw, 0 for a loss), so the number of ‘GM-draws’ is likely to be lower than it often is in these kinds of tournaments – the organizers want to incentivize the players to actually play interesting games, and in the past I think they’ve been successful. If you like chess, this is the place to be for the next one and a half weeks.
I’ve spent way too much money on books this autumn, and arguably too much time as well, so I’ve been feeling guilty about that. This guilty conscience has had as a consequence that I didn’t stock up on reading materials after I’d read the interesting stuff from the last amazon batch, something I usually do so that I always have a few books available that I’d potentially like to read if I find myself in the mood. I basically haven’t had many interesting unread books standing on my shelf, and so I haven’t read very much – a fact that I’ve also felt guilty about. Yesterday the contribution to my guilty conscience from not engaging in offline book-reading finally surpassed the contribution to it from spending money and time on reading too much ‘irrelevant stuff’ (i.e. non-exam-related-stuff), and so I ended up reading Ramachandran’s book.
Overall it’s better than Sacks, but I don’t really think that’s saying all that much. At least there aren’t any Wittgenstein quotes in this one (though there are Shakespeare quotes). I think this is the last book of this nature I’ll read – they’re too unsystematic, speculative and messy in their structure and I’d learn a lot more from just reading some chapters in a textbook like this (I probably won’t start out with that as I have this one standing on my shelf..). This is not to say that I didn’t learn anything from the book, and if you want to have a go at one of these easy-to-read introductory pop-sci neurology books, you can do worse (as I have realized). There’s less focus on patients and more focus on the specifics of the stuff that goes wrong and what those specifics tell us about how specific elements of the human brain work than in Sacks, and particularly important here is the fact that Ramachandran has included figures with illustrations of what the brain looks like and which structures are placed where, which was a big help to me during the reading.
I found the stuff on vision and how it works very interesting, so I’ll quote some stuff from that part of the book:
“The human brain contains multiple areas for processing images, each of which is composed of an intricate network of neurons that is specialized for extracting certain types of information from the image. [...] every act of perception, even something as simple as viewing a drawing of a cube, involves an act of judgment by the brain.
In making these judgments, the brain takes advantage of the fact that the world we live in is not chaotic and amorphous; it has stable physical properties. During evolution—and partly during childhood as a result of learning— these stable properties became incorporated into the visual areas of the brain as certain “assumptions” or hidden knowledge about the world that can be used to eliminate ambiguity in perception. For example, when a set of dots move in unison—like the spots on a leopard—they usually belong to a single object. So, any time you see a set of dots moving together, your visual system makes the reasonable inference that they’re not moving like this just by coincidence—that they probably are a single object. And therefore, that’s what you see.” [...]
“because of some quirk in our evolutionary history, each side of your brain sees the opposite half of the world (Figure 4.4). If you look straight ahead, the entire world on your left is mapped onto your right visual cortex and the world to the right of your center of gaze is mapped onto your left visual cortex. [...] this first map serves as a sorting and editorial office where redundant or useless information is discarded wholesale and certain defining attributes of the visual image—such as edges—are strongly emphasized. [...] This edited information is then relayed to an estimated thirty distinct visual areas in the human brain, each of which thus receives a complete and partial map of the visual world. [...] Why do we need thirty areas?6 We really don’t know the answer, but they appear to be highly specialized for extracting different attributes from the visual scene—color, depth, motion and the like. When one or more areas are selectively damaged, you are confronted with paradoxical mental states of the kind seen in a number of neurological patients. [...]
One of the most important principles in vision is that it tries to get away with as little processing as it can to get the job done. To economize on visual processing, the brain takes advantage of statistical regularities in the world—such as the fact that contours are generally continuous or that table surfaces are uniform—and these regularities are captured and wired into the machinery of the visual pathways early in visual processing. When you look at your desk, for instance, it seems likely that the visual system extracts information about its edges and creates a mental representation that resembles a cartoon sketch of the table (again, this initial extraction of edges occurs because your brain is mainly interested in regions of change, of abrupt discontinuity, at the edge of the desk, which is where the information is). The visual system might then apply surface interpolation to “fill in” the color and texture of the table, saying in effect, “Well, there’s this grainy stuff here; it must be the same grainy stuff all over.” This act of interpolation saves an enormous amount of computation; your brain can avoid the burden of scrutinizing every little section of the desk and can simply employ loose guesswork instead [...] what we call perception is really the end result of a dynamic interplay between sensory signals and high-level stored information about visual images from the past. Each time one of us encounters an object, the visual system begins a constant questioning process. Fragmentary evidence comes in and the higher centers say, “Hmmmmm, maybe this is an animal.” Our brains then pose a series of visual questions: as in a twenty questions game. Is it a mammal? A cat? What kind of cat? Tame? Wild? Big? Small? Black or white or tabby? The higher visual centers then project partial “best fit” answers back to lower visual areas including the primary visual cortex. 
In this manner, the impoverished image is progressively worked on and refined (with bits “filled in,” when appropriate). I think that these massive feed forward and feedback projections are in the business of conducting successive iterations that enable us to home in on the closest approximation to the truth.16 To overstate the argument deliberately, perhaps we are hallucinating all the time and what we call perception is arrived at by simply determining which hallucination best conforms to the current sensory input.”
When I include quotes like the ones above in the post, I feel that I also have to quote some different stuff in order to give you a more complete picture. Here’s one quote which says a lot: “Contrary to what many of my colleagues believe, the message preached by physicians like Deepak Chopra and Andrew Weil is not just New Age psychobabble. It contains important insights into the human organism—ones that deserve serious scientific scrutiny.” So, yeah… Fortunately that quote was on page 221 (if it had been on page 20, I would not have read the rest of the book). In all fairness, he calls for rigorous tests but he also writes that “We have no idea which ones (if any) [of the alternative 'medicine' interventions] work and which ones do not” – which is a problematic claim. ‘Alternative medicine’ is ‘alternative’ because it doesn’t work – when health interventions of one kind or another can be shown to work in controlled experiments, they stop being ‘alternative’ treatments; the stuff that works is just called medicine. I know that there are institutional obstacles at play that keep out treatment options which likely work but will never be profitable enough to justify trying to get through FDA approval, to take an example, but at least as a first approximation that’s how it works. You should probably also know, before you rush out to find the book, that I felt compelled to write words like ‘fool’ and ‘WTF’ in the margin on various occasions – the quote is not the only one of its kind. It’s safe to say that I very rarely do this when I read a book.
There are other resources than Khan Academy out there, so I thought I’d start out with a few remarks related to those. I’m about to start a course on coursera which I signed up for a long time ago, but I’m actually reconsidering now because I may not be able to find the time. If you don’t know about the site, go have a look around. A friend of mine also linked to this collection of videos from MIT on Electricity and Magnetism – looks very interesting. Anyway, a few Khan Academy videos below:
Just how sensitive blood flow is to vessel radius is an aspect I’d never given much thought, even though this is not exactly the first time I’ve done work on fluid dynamics (there’s also a largish section on that at Khan Academy) or the cardiovascular system. For some reason this video really made that link much more obvious to me, and these dynamics make it easier in my mind to understand why even relatively small changes in blood vessel composition over time can actually impede blood flow quite significantly and turn out to have rather large physiological effects. Math far more often than not helps me to think more clearly about stuff.
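To see why the radius matters so much, consider Poiseuille’s law (which I assume is roughly what the video covers): flow scales with the fourth power of the vessel radius. A few lines of Python make the point:

```python
# Poiseuille's law: volumetric flow Q is proportional to the fourth power
# of the vessel radius (Q = pi * dP * r**4 / (8 * mu * L)), holding the
# pressure gradient, viscosity and vessel length fixed.

def relative_flow(radius_fraction):
    """Flow relative to baseline when the radius shrinks to this fraction of its original size."""
    return radius_fraction ** 4

for narrowing in (0.9, 0.8, 0.5):
    print(f"radius reduced to {narrowing:.0%} -> flow reduced to {relative_flow(narrowing):.1%}")
```

A 20% narrowing thus cuts flow to roughly 40% of baseline – exactly the kind of non-linearity that makes ‘relatively small changes’ in vessel composition matter so much.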
Some other videos:
By Siddhartha Mukherjee. It is another one of the books I received in the mail Tuesday. I didn’t plan on reading it before this weekend, but ‘things didn’t go as planned.’ I started out with a few pages Tuesday evening and basically I just couldn’t stop reading. Now I’ve finished the book.
It’s gotten a lot of attention around the web, which was part of why I decided to have a go at it. The attention it has received is not undeserved. I’d have liked more data but then again I always want more data. Here are a few interesting observations from the book:
“In 1870, the per capita consumption in America was less than one cigarette per year. A mere thirty years later, Americans were consuming 3.5 billion cigarettes and 6 billion cigars every year. By 1953, the average annual consumption of cigarettes had reached thirty-five hundred per person. On average, an adult American smoked ten cigarettes every day, an average Englishman twelve, and a Scotsman nearly twenty. [...] between 1940 and 1944, the fraction of female smokers in the United States more than doubled, from 15 to 36 percent.”
If you assume they wouldn’t have started in case there hadn’t been a World War, you can probably add another few million people to the list of war casualties right there. How about later on? “By 1994, the per capita consumption of cigarettes in America had dropped for nearly twenty straight years (from 4,141 in 1974 to 2,500 in 1994), representing the most dramatic downturn in smoking rates in history.” But before that point the results of the changed smoking habits were not hard to observe in the data:
“Between 1970 and 1994, lung cancer deaths among women over the age of fifty-five had increased by 400 percent, more than the rise in the rates of breast and colon cancer combined. This exponential upswing in mortality had effaced nearly all gains in survival not just for lung cancer, but for all other types of cancer. [...] Lung cancer was still the single biggest killer among cancers, responsible for nearly one-fourth of all cancer deaths.”
A few other interesting bits:
“Prostate cancer represents a full third of all cancer incidence in men — sixfold that of leukemia and lymphoma. In autopsies of men over sixty years old, nearly one in every three specimens will bear some evidence of prostatic malignancy.” (but you already knew that first part, right?)
“Cisplatin was unforgettable in more than one sense. The drug provoked an unremitting nausea, a queasiness of such penetrating force and quality that had rarely been encountered in the history of medicine: on average, patients treated with the drug vomited twelve times a day.”
“The incidence of CML remains unchanged from the past: only a few thousand patients are diagnosed with this form of leukemia every year. But the prevalence of CML—the number of patients presently alive with the disease—has dramatically changed with the introduction of Gleevec [a new treatment option - there's much more about it in the book and the wikipedia article also covers this]. As of 2009, CML patients treated with Gleevec are expected to survive an average of thirty years after their diagnosis. Based on that survival figure, Hagop Kantarjian estimates that within the next decade, 250,000 people will be living with CML in America, all of them on targeted therapy. Druker’s drug will alter the national physiognomy of cancer, converting a once-rare disease [people just died of it in the past] into a relatively common one”
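As an aside, the arithmetic behind Kantarjian’s estimate is just the standard steady-state relation prevalence ≈ incidence × average disease duration. A rough back-of-the-envelope version in Python (the incidence number below is my own guess at what ‘a few thousand’ means, not a figure from the book):

```python
# Steady-state epidemiology: prevalence ~ incidence * average disease duration.
# The incidence figure below is my own reading of "a few thousand ... every
# year", not a number from the book.

annual_new_cases = 8000        # assumed US CML incidence (hypothetical)
avg_survival_years = 30        # quoted post-Gleevec average survival

steady_state_prevalence = annual_new_cases * avg_survival_years
print(f"~{steady_state_prevalence:,} people living with CML")
```

With those inputs you land in the same neighbourhood as the 250,000 figure quoted above.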
He doesn’t cover the economics of cancer and cancer treatment in much detail, and present problems and developments in this area are not covered at all. The postscript does, however, include an interview dealing with some of the stuff not covered in the book, and after reading that part I’m in a way glad he didn’t write about this stuff – when dealing with the question of the high costs of relatively recently discovered targeted therapies, he does not even mention the FDA’s role in driving up costs, which tells me that this is a subject he simply doesn’t know enough about to cover, at least at the present point in time. If you want to know more about the FDA’s role in driving up the costs of new medical treatments, including new cancer treatments, Megan McArdle has written about that stuff often, though I don’t have a specific link at hand; google is your friend.
The book is very USA-centric, but I didn’t consider that a big issue. It’s also ‘popular science’. That was initially a strong argument for not buying the book, but on the other hand the popular science aspect also means that the book is easy to read and won’t take you very long to get through even though the page count is significant.
It’s a wonderful read.
i. I started writing this post because I felt that I had to share this (click to view full size):
From abstrusegoose. But I decided that I might as well add a few other links as well.
ii. The Cochrane Collaboration has just published a new review article on ‘Pharmacotherapy for mild hypertension’ – it seems that the benefits of treatment are not as great as they have been made out to be. Via this Slate article.
iii. (From Razib Khan’s pinboard feed:) How “god” evolved.
iv. In case you haven’t seen it:
v. Voyage of the James Caird. I may have linked to this before, but I don’t think so.
“The voyage of the James Caird was an open boat journey from Elephant Island in the South Shetland Islands to South Georgia in the southern Atlantic Ocean, a distance of 800 nautical miles (1,500 km; 920 mi). Undertaken by Sir Ernest Shackleton and five companions, its objective was to obtain rescue for the main body of the Imperial Trans-Antarctic Expedition of 1914–17, trapped on Elephant Island after the loss of its ship Endurance. History has come to consider the James Caird’s voyage as one of the greatest small-boat journeys ever accomplished.”
Here’s an image:
1500 kilometres and 16 days in a boat like that. And don’t think the trip was over when they reached the shore; those of them who could still travel had 36 hours of continuous travel across the mountainous and glacier-covered island in front of them before they were able to reach their goal, an inhabited whaling station in Stromness.
vi. I haven’t read this, but I assume that it may be of interest to some of you: Intelligence – A Unifying Construct for the Social Sciences, by Richard Lynn and Tatu Vanhanen.
“Each year, the American Cancer Society estimates the numbers of new cancer cases and deaths expected in the United States in the current year and compiles the most recent data on cancer incidence, mortality, and survival based on incidence data from the National Cancer Institute, the Centers for Disease Control and Prevention, and the North American Association of Central Cancer Registries and mortality data from the National Center for Health Statistics. A total of 1,596,670 new cancer cases and 571,950 deaths from cancer are projected to occur in the United States in 2011. Overall cancer incidence rates were stable in men in the most recent time period after decreasing by 1.9% per year from 2001 to 2005; in women, incidence rates have been declining by 0.6% annually since 1998. Overall cancer death rates decreased in all racial/ethnic groups in both men and women from 1998 through 2007, with the exception of American Indian/Alaska Native women, in whom rates were stable. African American and Hispanic men showed the largest annual decreases in cancer death rates during this time period (2.6% and 2.5%, respectively). Lung cancer death rates showed a significant decline in women after continuously increasing since the 1930s. The reduction in the overall cancer death rates since 1990 in men and 1991 in women translates to the avoidance of about 898,000 deaths from cancer. However, this progress has not benefitted all segments of the population equally; cancer death rates for individuals with the least education are more than twice those of the most educated.”
Link to the publication. Some more data (click to view full size):
Sex differences matter a lot for some types of cancer. Breast cancer cases make up ~30% of all new cancer cases for women, and prostate cancer cases make up a similar proportion of new male cases. Note that it makes a lot of sense to report the ‘new cases’ metric rather than ‘total people afflicted’ if you want to know about the risk of getting the disease; some types of cancer are much more aggressive than others and death rates vary a lot, so if you looked at a metric like ‘people afflicted’, relatively harmless cancers (e.g. (some types of) prostate cancer) would in some sense be ‘overrepresented’. As the report puts it, “the lifetime probability of being diagnosed with an invasive cancer is higher for men (44%) than women (38%) [...] However, because of the earlier median age of diagnosis for breast cancer compared with other major cancers, women have a slightly higher probability of developing cancer before age 60 years.”
Looking just at the death rate, there’s some variation here, spanning from Utah’s very low death rate of just 135.7 (most likely due to environmental factors – low rates of smoking and drinking in particular) to Kentucky’s 216.5. The report mentions specifically later on that “lung cancer shows by far the largest geographic variation in cancer occurrence”, which I do not find surprising. Even though the two states mentioned have very different death rates, the large majority of states are in the 170+ range, so most inter-state differences aren’t that large, especially considering how many different factors impact a variable like this.
When looking at the next table remember to look at the actual percentages as the proportions given are only very rough measures. I think it’s interesting that they included the latter, but it’s probably not a bad idea; to a lot of math-challenged individuals such a fraction may convey significantly more information than do the percentage estimates, and the seemingly much greater degree of ‘precision’ of the probability estimates should not make us forget that these are in fact just that, estimates:
Again, recall that these are averages, and averaging can hide important variation in the data. For example, the 6-7% lifetime risk of lung and bronchial cancers is a measure which includes both heavy smokers and non-smokers. A smoker should rationally assume his risk to be significantly higher than that, and a non-smoker would probably get a more accurate risk assessment by assuming that his or her risk of getting that type of cancer is quite a bit lower than the full-sample estimate.
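To illustrate how such an average can decompose into group-specific risks (the smoker share and relative risk below are made-up numbers for illustration, not figures from the report):

```python
# Decomposing a full-sample average risk into group-specific risks.
# The smoker share and relative risk are invented, illustrative inputs;
# only the ~6.5% average is (roughly) the headline figure.

average_risk = 0.065    # full-sample lifetime risk of lung/bronchial cancer
smoker_share = 0.25     # hypothetical fraction of smokers
relative_risk = 15.0    # hypothetical smoker vs. non-smoker risk ratio

# average_risk = smoker_share * RR * p + (1 - smoker_share) * p, solve for p:
nonsmoker_risk = average_risk / (smoker_share * relative_risk + 1 - smoker_share)
smoker_risk = relative_risk * nonsmoker_risk

print(f"non-smoker lifetime risk: {nonsmoker_risk:.1%}")
print(f"smoker lifetime risk:     {smoker_risk:.1%}")
```

With these (invented) inputs the non-smoker’s risk comes out far below the headline figure and the smoker’s far above it – which is the point.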
“Cancer replaced heart disease as the leading cause of death among men and women aged younger than 85 years in 1999 (Fig. 6). The overall cancer death rate decreased by 1.9% per year from 2001 through 2007 in males and by 1.5% in females from 2002 through 2007, compared with smaller declines of 1.5% per year in males from 1993 through 2001 and 0.8% per year in females from 1994 through 2002 (Table 5). Notably, the lung cancer mortality rate in women has begun to decline for the first time in recorded history and more than a decade later than the decline began in men.”
There’s more at the link.
Wildenschild, Kjøller, Sabroe, Erlandsen and Heitmann have published a new study on this. Some stuff from the paper:
“In recent years, health care utilization has increased steadily. Data from Statistics Denmark show that the average number of consultations with a general practitioner increased from 7.2/year in 1999 to 8.0/year in 2005 for women and from 4.5/year to 5.3/year for men during the same period. [...] Concurrently, with the increased utilization of health care, the prevalence of obesity among those aged 16–99 years increased from 5.5% in 1987 to 11.4% in 2005 according to the DHIS. This rise in prevalence of obesity is in accordance with findings from other Danish studies [5,6] and with the development seen in other industrialized countries.[7,8]
Considering the higher incidence of somatic and psychological illness among obese people, it is conceivable that some of the increase in utilization of health care might be attributed to the increase in the prevalence of obesity. Studies examining the impact of the rising prevalence of obesity on the development of health care utilization are generally absent in previous literature, but several studies have shown an association between obesity per se and utilization of various types of health care.[9–22] [...]
The purpose of this study was therefore to examine the impact of the rising prevalence of obesity on utilization of health care in Denmark in 1987–2005. The hypothesis was that the prevalence of obesity would be associated with utilization of health care and thus that the rise in utilization could be partly attributed to the rise in the prevalence of obesity. Another purpose was to examine whether the utilization of health care of obese people has changed during the period.”
I found this paper when looking for data on Danish obesity, which is not as easy to find as you’d perhaps think (for one thing, Statistics Denmark doesn’t have any data on this at all). Even though it’s not easy to find data on this, I did manage to find a 2004 study along the way which is aptly named Major increase in prevalence of overweight and obesity between 1987 and 2001 among Danish adults. The abstract:
The aim of the study was to examine the secular trends in the prevalence of obesity (BMI ≥ 30.0 kg/m²) and overweight (25.0 ≤ BMI < 30.0 kg/m²) in Danish adults between 1987 and 2001.
RESEARCH METHODS AND PROCEDURES:
The study included self-reported weight and height of 10,094 men and 9897 women 16 to 98 years old, collected in a series of seven independent cross-sectional surveys. Prevalence and changes in prevalence of obesity and overweight stratified by sex and age groups were determined.
The prevalence of obesity more than doubled between 1987 and 2001, in men from 5.6% to 11.8% [odds ratio (OR) = 2.3, 95% confidence interval (CI) = 1.9 to 2.8, p < 0.0001] and in women from 5.4% to 12.5% (OR = 2.6, 95% CI = 2.1 to 3.2, p < 0.0001), with the largest increase among the 16- to 29-year-old subjects (men, from 0.8% to 7.5%, OR = 10.2, 95% CI = 4.1 to 25.3, p < 0.0001; women, from 1.4% to 9.0% OR = 7.0, 95% CI = 3.5 to 14.1, p < 0.0001). Between 1987 and 2001, the prevalence of overweight increased from 34% to 40% in men and from 17% to 27% in women.
The prevalence of overweight and obesity in Denmark has increased substantially between 1987 and 2001, particularly among young adults, a development that resembles that of other countries. There is clearly a need for early preventive efforts in childhood to limit the number of obesity-related complications in young adults."
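Incidentally, odds ratios like the ones quoted can be recomputed from the two prevalences alone, which makes for a nice sanity check. A quick Python sketch (the women’s figure comes out at ~2.5 rather than the quoted 2.6, presumably because the paper computed it from the raw counts):

```python
# An odds ratio can be recomputed from the two prevalences alone:
# OR = odds(2001) / odds(1987), where odds(p) = p / (1 - p).

def odds_ratio(p1, p0):
    """Odds ratio comparing prevalence p1 against baseline prevalence p0."""
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

print(round(odds_ratio(0.118, 0.056), 1))  # men: reproduces the quoted 2.3
print(round(odds_ratio(0.125, 0.054), 1))  # women: ~2.5, vs. the quoted 2.6
```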
Note that this is not the study mentioned in the original quote, but the numbers are nevertheless very similar; it's quite clear that the estimated change that has taken place is in this neighbourhood, at least when using data like these. Note also that these results most likely underestimate the true increase over time (the type of data they have to work with is one of the main weaknesses of the original study, and the authors don't attempt to hide that); self-reported data on stuff like this are notoriously unreliable and will always cause some bias. This review article found, according to the abstract (I couldn’t find a non-gated version online), that ‘The largest increase [in BMI over time] has been documented in studies based on objective data from total populations’, which is not surprising.
Back to the original study on health care utilization – what did they find?
“The increase in health care utilization that has occurred in recent years may in part be attributed to a rise in the prevalence of obesity. This increase is particularly seen among obese men. Health care utilization among obese women increased in 1987–2000 only and then leveled from 2000 to 2005. Including variables of obesity-related illness, such as hypertension, diabetes, and back problems, in the analyses suggested a varying significance of these conditions among the subsets of the sample but indicated that they may be at least part of the cause of the increased utilization among obese people. Among men, the association between BMI and health care utilization was dependent on age. Stratification according to age resulted in reduced statistical strength, and results were found to be significant only for those aged 45–64 years and borderline significant for those aged 25–44 and 65+ years. Among men aged 65+, the underweight had the largest probability of health care utilization, as opposed to the other age groups. This finding may be partly attributed to the presence of malignant illness in this group, indicating inverse causality”
These results make good sense to me, particularly that the strength of the association increases with age, though only up to a certain point (the weight-associated effect on utilization is higher for middle-aged than for young people, underweight individuals muddy the waters when looking at the elderly segment, and obesity may not be a super big issue if people actually get to reach old age in the first place); it will probably often take a few decades for obesity to cause significant health problems, so it makes sense that middle-aged obese people are more likely to ‘overutilize’ than people in the lower age brackets. I found this passage interesting:
“From 1987 to 2005, nonresponse to the DHIS increased from 20% in 1987 to 33% in 2005, with the largest increase in nonresponse occurring among those aged 16–24 and 25–44 years. Analyses on nonresponse by BMI to the DHIS in 2005 showed that more obese than normal weight people did not participate. This is in line with results from studies performed during the 1980s that indicated a greater nonresponse among obese people.[36,37] These findings imply that, over the years, nonresponse was generally larger among obese people compared with normal weight people, adding to an increasing underestimation of the prevalences of obesity in the study period. In addition, previous analyses on nonresponse in relation to health care utilization in the DHIS 2000 and 2005 have shown a positive association between nonresponse and health care utilization”
This of course leads to a conclusion which is less strong than it might have been, given better data (the lack of which is hard to blame the authors for):
“We found that the increased burden on the health care system was partly caused by obesity and a change toward an increase in health care use, particularly among obese men. It is likely that the present findings are underestimated due to a possible underestimation of weight, particularly among obese people with health problems, and potential differential selection caused by nonresponse among obese people with health problems.”
One final point, which is very important to remember when interpreting results like these, is that the increased utilization of health care resources related to increased rates of obesity is not fully explained by (‘standard’) obesity-related illnesses:
“In the present study, associations between obesity and health care utilization were found independent of hypertension, diabetes, and back problems; thus, these illnesses did not fully mediate associations. This is in line with previous findings that suggested that associations between obesity and health care utilization can only partly be attributed to obesity-related illness such as heart disease, hypertension, high cholesterol, diabetes, and arthritis.[14]”
Even if you don’t get diabetes or hypertension from being fat, you’ll still have to see the doctor more often than your friends who weigh less than you do.
“Water is essential for maintaining life on Earth but can also serve as a media for many pathogenic organisms, causing a high disease burden globally. However, how the global distribution of water-associated infectious pathogens/diseases looks like and how such distribution is related to possible social and environmental factors remain largely unknown. In this study, we compiled a database on distribution, biology, and epidemiology of water-associated infectious diseases and collected data on population density, annual accumulated temperature, surface water areas, average annual precipitation, and per capita GDP at the global scale. From the database we extracted reported outbreak events from 1991 to 2008 and developed models to explore the association between the distribution of these outbreaks and social and environmental factors. [...]
Worldwide, water-associated infectious diseases are a major cause of morbidity and mortality. A conservative estimate indicated that 4.0% of global deaths and 5.7% of the global disease burden (in DALYs) were attributable to a small subset of water, sanitation, and hygiene (WSH) related infectious diseases including diarrheal diseases, schistosomiasis, trachoma, ascariasis, trichuriasis, and hookworm infections. Although unknown, the actual disease burden attributable to water-associated pathogens is expected to be much higher. A total of 1415 species of microorganisms have been reported to be pathogenic, among which approximately 348 are water-associated, causing 115 infectious diseases. Yet, their distribution and associated factors at the global scale remain largely unexplored. [...]
The population density was shown to be a significant risk factor for reported outbreaks of all categories of water-associated infectious diseases and the probability of outbreak occurrence increased with the population density. The accumulated temperature was a significant risk factor for water-related diseases only. The analysis suggested that occurrence of water-washed diseases had significantly inverse relationship with surface water areas. Such inverse relationship was also observed between the average annual rainfall and water-borne diseases (including water-carried) and water-related diseases.”
From Global Distribution of Outbreaks of Water-Associated Infectious Diseases by Yang, LeJeune et al.
i. Control of fire by early humans. I read about this stuff in The Human Past as well, but as in so many other cases, Wikipedia actually has a lot of material if you care to look for it. Wikipedia’s treatment of this subject does not seem to be out of line with the evidence presented in THP; generally it seems to be the case that people knew how to make fire around 125,000–130,000 years ago, but it is not clear when/where this ability first evolved (THP sums it up like this: “Spreads of burned sediment, ash, and charcoal that almost certainly signal fireplaces are conspicuous in many sites occupied by the European Neanderthals and their near-modern African contemporaries after 130,000 years ago, and it is generally assumed that people everywhere after 130,000 years ago could make fire when they needed it. The question is when this ability evolved, or perhaps more precisely, whether a stage of full control followed on one when fire use was sporadic and opportunistic. This issue is difficult to address since sites older than 130,000 years ago are relatively rare and they are mostly open-air localities [...] caves are far more likely to preserve fossil fireplaces.” p.117. The problem is that at an open-air site, it’s much more difficult to tell whether the fire was made by humans or by natural processes.)
ii. Danish phonology. Some interesting aspects:
“Unlike the neighboring Mainland Scandinavian languages Swedish and Norwegian, the prosody of Danish does not have phonemic pitch. Stress is phonemic and distinguishes words like billigst [ˈb̥ilisd̥] “cheapest” and bilist [b̥iˈlisd̥] “car driver”. The main rules for the position of the stress are:
1. Inherited words are normally stressed on the first syllable.
2. The prefixes be-, for-, ge-, u- are unstressed, e.g. for’stå “understand”, be’tale “pay”, u’mulig “impossible” (NB there is also a stressed for- in nouns corresponding to the verbal prefix fore-).
3. In many compound adjectives, especially those ending in -ig and -lig, the stress is replaced from the first to the second syllable, e.g. vidt’løftig “circumstantial”, sand’synlig “probable”.
4. Words of French origin are stressed on the last syllable (except /ə/), e.g. renæ’ssance, mil’jø.
5. Words of Greek and Latin origin are stressed according to the Latin accent rules, i.e. stress on the penultimate if it is long or else on the antepenultimate, e.g. Ari’stoteles, Ho’rats.
6. The learned suffixes -aner, -ansk, -ance, -a/ens, -a/ent, -ere, -i, -ik, -ion, -itet, -ør are stressed, e.g. finge’rere, situa’tion, poli’tik, århusi’aner. The preceding syllable is stressed before the learned suffixes -isk, -iker, -or, e.g. po’lemisk, po’litiker, radi’ator. The suffix -or is stressed in the plural: radia’torer (colloquial: radi’atorer).
7. Verbs lose their stress (and stød, if any) in certain positions:
With an object without a definite or indefinite article: e.g. ’Jens ’spiser et ’barn [ˈjɛns ˈsb̥iːˀsɐ ed̥ ˈb̥ɑːˀn] “Jens eats a child” ~ ’Jens spiser ’børn [ˈjɛns sb̥isɐ ˈb̥ɶɐˀn] “Jens eats children”.
In a fixed phrase with an adverb or an adverbial: ’Helle ’sov ’længe [ˈhɛlə ˈsʌʊˀ ˈlɛŋə] “Helle slept for a long time” ~ ’Helle sov ’længe [ˈhɛlə sʌʊ ˈlɛŋə] “Helle slept late”.
Before the direction adverbs af, hen, hjem, ind, indad, ned, nedad, op, opad, over, ud, udad, under (but not the location adverbs henne, inde, nede, oppe, ovre, ude): e.g. han ’går ’ude på ’gaden [hæn ˈɡɒːˀ ˈuːð̪̩ pʰɔ ˈɡ̊æːð̪̩n] “he walks on the street” ~ han går ’ud på ’gaden [hæn ɡɒ ˈuð̪ˀ pʰɔ ˈɡ̊æːð̪̩n] “he walks into the street”.
The original pitch tone has been replaced by an opposition between syllables with and without the stød. The stød is not a separate phoneme, but a suprasegmental feature that may accompany certain syllables; those with a long vowel or that end with a voiced consonant.
The stød is phonemic since many words are kept apart on the basis of the presence or absence of the stød alone, e.g. løber “runner” [ˈløːb̥ɐ] ≠ løber “runs” [ˈløːˀb̥ɐ / ˈløʊ̯ˀɐ], ånden “breathing” [ˈʌnn̩] ≠ ånden “the spirit” [ˈʌnˀn̩].
It is impossible to predict the presence or absence of the stød; it has to be learned. However there are some main rules:
1. Original monosyllabic words have stød. Words that ended in consonant + r, l, n in Old Danish have the stød even though an anaptyctic vowel was later developed. The postposed definite article, which has become an inseparable part of the word, does not influence the word.
2. All umlauting plurals in -er (ODan. -r) have the stød, e.g. hænder [ˈhɛnˀɐ] “hands”.
3. Most presents from strong verbs (ODan. -r) have the stød, e.g. finder [ˈfenˀɐ] “finds”. Many of the presents of verbs with a preterite in -te have the stød as well (but not the presents of verbs with a preterite in -ede).
4. Monosyllabic words that originally ended in a short vowel + a single n, r, l, v, ð, g do not have the stød. However, when the definite suffix is added, the stød “returns”, e.g. ven [ˈʋɛn] ~ vennen [ˈʋɛnˀn̩] “friend”.
5. Stød is frequently avoided in words with the combinations rp, rt, rk, rs, e.g. vers [ˈʋæɐ̯s] “verse”, kort [ˈkʰɒːd̥] “card, map”/”short”.
6. Most (non-derived) words in -el, -er have the stød. Most words in -en do not have the stød. Nomina agentis in -er do not have the stød.
7. All words with the unstressed prefixes be-, for-, ge- have the stød.
8. There is stød in most compounds that have a replacement of the stress from first to the second syllable.
9. There is frequently the stød in the second part of compound verbs.
10. Monosyllables regularly lose the stød when they are the first part of a compound: mål [ˈmɔːˀl] “target, goal” ~ målmand [ˈmɔːlˌmænˀ] “goalkeeper”. The vowel is sometimes shortened: tag [ˈtˢæːˀ] “roof” ~ tagterrasse [ˈtˢɑʊ̯tˢaˌʁɑsə] ”roof terrace”
11. Words of Greek or Latin origin have the stød on a stressed antepenultimate syllable or a stressed last syllable. A stressed penultimate syllable has the stød if the word ends in -er.”
The non-verbal aspects of human interaction immensely increase the demands on the human brain to deal with complexity, in ways we don’t think about, but let’s not pretend that the verbal aspects are necessarily simple and easy to deal with. It’s very hard to remember how much you need to know and learn to master a human language unless you’re in the process of actively doing it.
iii. Borromean rings.
“In mathematics, the Borromean rings consist of three topological circles which are linked and form a Brunnian link, i.e., removing any ring results in two unlinked rings.”
They are weird, that’s what they are. Here’s an image from the article:
iv. Terminal velocity. From the article:
“In fluid dynamics an object is moving at its terminal velocity if its speed is constant due to the restraining force exerted by the fluid through which it is moving.
A free-falling object achieves its terminal velocity when the downward force of gravity (FG) equals the upward force of drag (Fd). This causes the net force on the object to be zero, resulting in an acceleration of zero.
As the object accelerates (usually downwards due to gravity), the drag force acting on the object increases, causing the acceleration to decrease. At a particular speed, the drag force produced will equal the object’s weight (mg). At this point the object ceases to accelerate altogether and continues falling at a constant speed called terminal velocity (also called settling velocity). An object moving downward with greater than terminal velocity (for example because it was thrown downwards or it fell from a thinner part of the atmosphere or it changed shape) will slow down until it reaches terminal velocity. [...]
The reason an object reaches a terminal velocity is that the drag force resisting motion is approximately proportional to the square of its speed. At low speeds, the drag is much less than the gravitational force and so the object accelerates. As it accelerates, the drag increases, until it equals the weight. Drag also depends on the projected area. This is why objects with a large projected area relative to mass, such as parachutes, have a lower terminal velocity than objects with a small projected area relative to mass, such as bullets.”
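The force balance described above translates directly into a formula for the terminal speed: setting drag equal to weight gives v = √(2mg/ρC<sub>d</sub>A). A quick sketch with rough, textbook-style numbers of my own (not from the article):

```python
import math

# At terminal velocity, drag (0.5 * rho * Cd * A * v**2) equals weight (m * g),
# so v_t = sqrt(2 * m * g / (rho * Cd * A)). Rough illustrative numbers,
# not taken from the article.

def terminal_velocity(mass, drag_coeff, area, rho=1.2, g=9.81):
    """Terminal speed in m/s for an object falling through air."""
    return math.sqrt(2 * mass * g / (rho * drag_coeff * area))

# Belly-down skydiver vs. the same skydiver under an open parachute:
# same weight, very different projected area and drag coefficient.
print(f"free fall:  {terminal_velocity(80, 1.0, 0.7):.0f} m/s")
print(f"parachute:  {terminal_velocity(80, 1.5, 25.0):.0f} m/s")
```

Same person, same gravity; the order-of-magnitude drop in terminal speed comes entirely from the larger projected area and drag coefficient, which is the parachute/bullet point from the quote.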
“Caspases, or cysteine-aspartic proteases or cysteine-dependent aspartate-directed proteases are a family of cysteine proteases that play essential roles in apoptosis (programmed cell death), necrosis, and inflammation.
Caspases are essential in cells for apoptosis, or programmed cell death, in development and most other stages of adult life, and have been termed “executioner” proteins for their roles in the cell. Some caspases are also required in the immune system for the maturation of lymphocytes. Failure of apoptosis is one of the main contributions to tumour development and autoimmune diseases; this, coupled with the unwanted apoptosis that occurs with ischemia or Alzheimer’s disease, has stimulated interest in caspases as potential therapeutic targets since they were discovered in the mid-1990s.”
vii. Darien scheme.
viii. Cinderella effect.
“The Cinderella effect is a term used by psychologists to describe the high incidence of stepchildren being physically abused, emotionally abused, sexually abused, neglected, murdered, or otherwise mistreated at the hands of their stepparents at significantly higher rates than at the hands of their genetic parents. It takes its name from the fairy tale character Cinderella, who in the story was cruelly mistreated by her stepmother and stepsisters.”
The article is messy and I mostly included it in this post to give you the above (most people don’t click the links anyway – which is fine!).
ix. Rotavirus. I remember reading a Danish article at some point about whether the vaccine against Rotavirus A should be part of a national vaccination programme, but I can’t remember where I read about it. Why would you want a vaccine? Well:
“Rotavirus is the most common cause of severe diarrhoea among infants and young children, and is one of several viruses that cause infections often called stomach flu, despite having no relation to influenza. It is a genus of double-stranded RNA virus in the family Reoviridae. By the age of five, nearly every child in the world has been infected with rotavirus at least once. However, with each infection, immunity develops, and subsequent infections are less severe; adults are rarely affected. There are five species of this virus, referred to as A, B, C, D, and E. Rotavirus A, the most common, causes more than 90% of infections in humans.
The virus is transmitted by the faecal-oral route. It infects and damages the cells that line the small intestine and causes gastroenteritis. Although rotavirus was discovered in 1973 and accounts for up to 50% of hospitalisations for severe diarrhoea in infants and children, its importance is still not widely known within the public health community, particularly in developing countries. In addition to its impact on human health, rotavirus also infects animals, and is a pathogen of livestock.
Rotavirus is usually an easily managed disease of childhood, but worldwide nearly 500,000 children under five years of age still die from rotavirus infection each year and almost two million more become severely ill. [...]
Rotavirus causes 37% of deaths attributable to diarrhoea and 5% of all deaths in children younger than five.”
I think perhaps the numbers of some of the sources in the article are incorrect or mixed up, presumably because new sources have been added at a later point – maybe I’ll go have a closer look and/or edit it later. Anyway, the last claim quoted above tempts me to add a ‘not in source given’ tag, because I could not see how that claim was supported by the cited article after searching the document and skimming it to figure out where the claim came from. Maybe I’ll do that later. The article linked to in that citation is on the economics of RV gastroenteritis and vaccination. On the other hand, this article (found through Scholar – maybe it’s also one of the sources in the wikipedia article, I haven’t looked) – Nosocomial rotavirus infection in European countries: a review of the epidemiology, severity and economic burden of hospital-acquired rotavirus disease – does support the claim in the wikipedia article:
“The data currently available on the epidemiology, severity and economic burden of nosocomial rotavirus (RV) infections in children younger than 5 years of age in the major European countries are reviewed. In most studies, RV was found to be the major etiologic agent of pediatric nosocomial diarrhea (31-87%), although the number of diarrhea cases associated with other virus infections (eg, noroviruses, astroviruses, adenoviruses) is increasing quickly and almost equals that caused by RVs. Nosocomial RV (NRV) infections are mainly associated with infants 0-5 months of age, whereas community-acquired RV disease is more prevalent in children 6-23 months of age. NRV infections are seasonal in most countries, occurring in winter; this coincides with the winter seasonal peak of other childhood virus infections (eg, respiratory syncytial virus and influenza viruses), thus placing a heavy burden on health infrastructures. A significant proportion (20-40%) of infections are asymptomatic, which contributes to the spread of the virus and might reduce the efficiency of prevention measures given as they are implemented too late. The absence of effective surveillance and of reporting of NRV infections in any of the 6 countries studied (France, Germany, Italy, Poland, Spain and the United Kingdom) results in severe underreporting of NRV cases in hospital databases and therefore in limited awareness of the importance of NRV disease at country level. The burden reported in the medical literature is potentially significant and includes temporary reduction in the quality of children’s lives, increased costs associated with the additional consumption of medical resources (increased length of hospital stay) and constraints on parents’/hospital staff’s professional lives.”
If you, like me, didn’t know what a nosocomial infection is, well, that’s just a hospital-acquired infection. Not all RV infections are nosocomial infections, of course.
The citations in the wikipedia article are also problematic because not all of them are direct citations; for instance, one of them leads to this article – Rotavirus Overview: The Pediatric Infectious Disease Journal – but that’s a secondary source for the claim. The primary source is a CDC report: “Centers for Disease Control and Prevention. Epidemiology and Prevention of Vaccine-Preventable Diseases. Atkinson W, Hamborsky J, McIntyre L, et al, eds. 10th ed. Washington, DC: Public Health Foundation; 2007:295-306.”
(Sorry for the lack of updates, this is a difficult time for me.)
A very good introductory lecture on pharmacology:
I decided to post some wikipedia links below to a few of the concepts he covers in the lecture (however, I’m pretty sure the lecture is the more efficient way to learn this stuff, at least the basics):
“Eight eligible trials were identified. We excluded a biased trial and included 600,000 women in the analyses. Three trials with adequate randomisation did not show a significant reduction in breast cancer mortality at 13 years (relative risk (RR) 0.90, 95% confidence interval (CI) 0.79 to 1.02); four trials with suboptimal randomisation showed a significant reduction in breast cancer mortality with an RR of 0.75 (95% CI 0.67 to 0.83). The RR for all seven trials combined was 0.81 (95% CI 0.74 to 0.87).
We found that breast cancer mortality was an unreliable outcome that was biased in favour of screening, mainly because of differential misclassification of cause of death. The trials with adequate randomisation did not find an effect of screening on cancer mortality, including breast cancer, after 10 years (RR 1.02, 95% CI 0.95 to 1.10) or on all-cause mortality after 13 years (RR 0.99, 95% CI 0.95 to 1.03).
Numbers of lumpectomies and mastectomies were significantly larger in the screened groups (RR 1.31, 95% CI 1.22 to 1.42) for the two adequately randomised trials that measured this outcome; the use of radiotherapy was similarly increased.
Screening is likely to reduce breast cancer mortality. As the effect was lowest in the adequately randomised trials, a reasonable estimate is a 15% reduction corresponding to an absolute risk reduction of 0.05%. Screening led to 30% overdiagnosis and overtreatment, or an absolute risk increase of 0.5%. This means that for every 2000 women invited for screening throughout 10 years, one will have her life prolonged and 10 healthy women, who would not have been diagnosed if there had not been screening, will be treated unnecessarily. Furthermore, more than 200 women will experience important psychological distress for many months because of false positive findings. It is thus not clear whether screening does more good than harm.”
From this review by Gøtzsche and Nielsen from The Nordic Cochrane Centre. Here’s a relatively recent press release from Cochrane (in Danish). Here’s a related article published a few days ago. By now, it seems that Gøtzsche thinks it is quite clear whether screening does more good than harm:
“I believe the time has come to realise that breast cancer screening programmes can no longer be justified,” Gøtzsche said.
Maybe there’s a way to modify the current screening programmes somewhat so that they include mainly/only relatively high-risk subpopulations – but identifying just who the high-risk individuals are is never easy, which is part of why screening programmes like these are undertaken in the first place. Either way, if the results reported above are ‘in the right ballpark’ a serious cost/benefit analysis should in my mind lead to a rejection of the current programme(s).
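Incidentally, the numbers in the review's conclusion are easy to verify; here's a quick sketch using only the figures quoted above (the 2000 invited women, the 15% relative reduction, and the stated absolute risk changes of 0.05% and 0.5%):

```python
n_invited = 2000           # women invited to screening over 10 years
rel_reduction = 0.15       # review's estimate: 15% relative mortality reduction
abs_reduction = 0.0005     # stated absolute risk reduction: 0.05%
abs_overdiagnosis = 0.005  # stated absolute risk increase: 0.5%

lives_prolonged = n_invited * abs_reduction        # one woman per 2000 invited
overtreated = n_invited * abs_overdiagnosis        # ten healthy women per 2000
implied_baseline = abs_reduction / rel_reduction   # implied 10-year breast cancer
                                                   # mortality risk (~0.33%)
```

The last line is just the two stated risks divided; the review itself does not report the baseline risk that way, so treat it as my back-of-the-envelope inference.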
“Background One-third of the world’s men are circumcised, but little is known about possible sexual consequences of male circumcision. In Denmark (∼5% circumcised), we examined associations of male circumcision with a range of sexual measures in both sexes.
Methods Participants in a national health survey (n = 5552) provided information about their own (men) or their spouse’s (women) circumcision status and details about their sex lives. Logistic regression-derived odds ratios (ORs) measured associations of circumcision status with sexual experiences and current difficulties with sexual desire, sexual needs fulfilment and sexual functioning.
Results Age at first intercourse, perceived importance of a good sex life and current sexual activity differed little between circumcised and uncircumcised men or between women with circumcised and uncircumcised spouses. However, circumcised men reported more partners and were more likely to report frequent orgasm difficulties after adjustment for potential confounding factors [11 vs 4%, ORadj = 3.26; 95% confidence interval (CI) 1.42–7.47], and women with circumcised spouses more often reported incomplete sexual needs fulfilment (38 vs 28%, ORadj = 2.09; 95% CI 1.05–4.16) and frequent sexual function difficulties overall (31 vs 22%, ORadj = 3.26; 95% CI 1.15–9.27), notably orgasm difficulties (19 vs 14%, ORadj = 2.66; 95% CI 1.07–6.66) and dyspareunia (12 vs 3%, ORadj = 8.45; 95% CI 3.01–23.74). Findings were stable in several robustness analyses, including one restricted to non-Jews and non-Moslems.
Conclusions Circumcision was associated with frequent orgasm difficulties in Danish men and with a range of frequent sexual difficulties in women, notably orgasm difficulties, dyspareunia and a sense of incomplete sexual needs fulfilment. Thorough examination of these matters in areas where male circumcision is more common is warranted.”
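As a quick sanity check on the abstract's figures, the crude (unadjusted) odds ratio implied by the reported 11% vs 4% orgasm-difficulty rates can be computed directly; it comes out close to, but not the same as, the confounder-adjusted ORadj of 3.26 the authors report:

```python
def odds_ratio(p1, p0):
    """Crude (unadjusted) odds ratio from two reported proportions."""
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

# 11% vs 4% frequent orgasm difficulties (circumcised vs uncircumcised men)
crude_or = odds_ratio(0.11, 0.04)   # roughly 3
```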
1. From Aerobic Exercise Capacity and Pulmonary Function in Athletes With and Without Type 1 Diabetes, by Komatsu et al. (link):
“In this study, we have shown that athletes with type 1 diabetes have a Vo2peak max [aerobic exercise capacity] similar to that of athletes without diabetes but a lower anaerobic threshold than that of athletes without diabetes.
In a previous study (6), we demonstrated that nonathletic type 1 diabetic patients have a lower Vo2peak max than healthy subjects. In the present study, we confirm these data in nonathletic type 1 diabetic patients, but the defect (low Vo2peak max) was not found in athletes with type 1 diabetes. These data are in accordance with a study (11) that compared 128 patients with long-duration type 1 diabetes and 36 healthy individuals. [...]
All of the individuals in this study went to heart rate max frequency during the test. However, the type 1 diabetes sedentary group had lower maximum heart rate than the control group, as expected. This was an interesting finding and one in accordance with our previous data (6) in which the diabetic group showed lower maximum frequency during exercise than normal control subjects. This defect could be corrected with regular exercise since the diabetic athlete was able to achieve the same maximum heart rate as a normal athlete.
In this study, we also found that FEV1 [volume that has been exhaled at the end of the first second of forced expiration] was decreased in type 1 diabetic athletes compared with other groups. [...] Abnormalities in lung elasticity behavior can be manifestations of widespread elastin and collagen abnormalities in type 1 diabetic patients (14). These alterations have been demonstrated in diabetes and are, in some respects, similar to those that occur during normal aging.”
I found the ‘lower anaerobic threshold’ finding particularly interesting, as this threshold can probably be considered a significant limiting factor when you run marathons, half-marathons and the like. If the threshold is lower, the inevitable buildup of lactic acid will start sooner, at a lower absolute activity level, meaning you simply can’t run as fast.
2. A follow-up on the All-Cause Mortality Trends in a Large Population-Based Cohort With Long-Standing Childhood-Onset Type 1 Diabetes study from ‘The Allegheny County Type 1 Diabetes Registry’, a previous version of which I’m pretty sure I’ve linked to before, has now been done, adding 9 more years of follow-up to the analysis. Here’s the link. Conclusions:
“Although survival has clearly improved, those with diabetes diagnosed most recently (1975–1979) still had a mortality rate 5.6 times higher than that seen in the general population, revealing a continuing need for improvements in treatment and care, particularly for women and African Americans with type 1 diabetes.” [...]
“Of note, now with a range of 28–43 years of type 1 diabetes duration, the risk of dying is 7 times higher than that of the local general population, with significant improvements in SMR [Standardized Mortality Ratios, US] for those with diabetes diagnosed most recently in this cohort.” [...] “This is the largest population-based type 1 diabetes cohort with at least 25 years of follow-up in the U.S. A recent population-based 20-year follow-up study in New Zealand showed the highest SMRs in individuals with type 1 diabetes diagnosed at age <30 (3.3 for men and 4.3 for women) (14). A nationwide Norwegian cohort with childhood-onset (age <15 years) type 1 diabetes recently reported SMRs of 3.9 (male) and 4.0 (female) after 20 years of follow-up (6).”
So what does this look like? The short version is this:
Graph number 3 directly above graphs the survival probability for the groups diagnosed during 1965-69, 1970-74 and 1975-79; as can be seen quite clearly, mortality is lower for the people diagnosed later in time, reflecting the progress that has taken place in treatment options and management of the disease. Note that these are not ‘historical figures’ – I was diagnosed in ’87, just 8 years after the last of these cutoffs.
The US is quite different from the other countries analyzed in a few respects, in particular when it comes to the outcomes of the females: “The respective male-to-female mortality RRs [rate ratios, US] for these studies are 1.23 in New Zealand, 2.26 in Norway, and 1.29 in the U.K compared with 0.80 for our study. The reason for this discrepancy is unclear, but it appears that female sex completely lost its general survival advantage in our diabetes population. [...] Women in our cohort die at a rate similar to that of men, a result warranting further exploration, as younger women die much less frequently than younger men in the general U.S. population.”
What about race, I hear you ask? Well: “Despite race being a significant predictor of mortality within the Allegheny County cohort (hazard ratio 3.2), no differences in SMR were seen by race, the African American SMR tending to be lower than the Caucasian SMR during follow-up (Fig. 2C). This seemingly contradictory result can be explained by the extremely high mortality rates seen in young African-Americans in the general population, particularly resulting from violent deaths (20).”
3. Changes in the Incidence of Lower Extremity Amputations in Individuals With and Without Diabetes in England Between 2004 and 2008, by Vamos et al. (link). From the study:
“RESEARCH DESIGN AND METHODS We identified all patients aged >16 years who underwent any nontraumatic amputation in England between 2004 and 2008 using national hospital activity data from all National Health Service hospitals. Age- and sex-specific incidence rates were calculated using the total diabetes population in England every year. To test for time trend, we fitted Poisson regression models.
RESULTS The absolute number of diabetes-related amputations increased by 14.7%, and the incidence decreased by 9.1%, from 27.5 to 25.0 per 10,000 people with diabetes, during the study period (P > 0.2 for both). The incidence of minor and major amputations did not significantly change (15.7–14.9 and 11.8–10.2 per 10,000 people with diabetes; P = 0.66 and P = 0.29, respectively). Poisson regression analysis showed no statistically significant change in diabetes-related amputation incidence over time (0.98 decrease per year [95% CI 0.93–1.02]; P = 0.12). Nondiabetes-related amputation incidence decreased from 13.6 to 11.9 per 100,000 people without diabetes (0.97 decrease by year [0.93–1.00]; P = 0.059). The relative risk of an individual with diabetes undergoing a lower extremity amputation was 20.3 in 2004 and 21.2 in 2008, compared with that of individuals without diabetes. [...]
In summary, in this study we found no evidence that the incidence of amputations has significantly decreased over the last 5 years among people with diabetes in England. In contrast to the results from regional studies in England, the population burden of amputations increased in people with diabetes at a time when both the number and incidence of amputations decreased in the aging general population. There is strong evidence to support the fact that much of this burden is preventable through existing interventions, and our findings highlight the need to further improve foot care for people with diabetes.”
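Note, by the way, that the diabetes and non-diabetes incidence rates in the abstract are quoted per 10,000 and per 100,000 people respectively; putting them on the same denominator roughly reproduces the relative risk of 21.2 reported for 2008:

```python
# 2008 rates quoted above - note the different denominators
rate_diabetes = 25.0 / 10_000       # amputations per person-year, with diabetes
rate_no_diabetes = 11.9 / 100_000   # amputations per person-year, without

rr_2008 = rate_diabetes / rate_no_diabetes   # close to the reported 21.2
```

The small discrepancy presumably reflects rounding in the published rates.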
By Simon P. Hardy. This is another one of those books from the Spring Sale at Stakbogladen (one of the university bookstores) that I got at one-fifth of the normal price or so.
Some excerpts from the book:
i) “The variation in size of micro-organisms is not unlimited. The efficiency with which the organism can accumulate nutrients and dispose of waste material through the cytoplasmic membrane will restrict expansion. Many key metabolites (e.g. oxygen) pass passively through the cell wall and cell membrane into the cytosol. The surface area to volume ratio (SA/V) is the limiting factor for the extent to which passively diffusing molecules penetrate the cytosol. [...] The physical packing of the nucleic acid [...] and cytoplasmic components such as polysomes into a bacterial cell will also limit the minimum size achievable.”
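The surface-area-to-volume point above is easy to illustrate: for a sphere, SA/V works out to 3/r, so the ratio shrinks as the cell grows, making passive diffusion ever less adequate. A quick Python sketch (the radii are purely illustrative):

```python
import math

def sphere_sa_to_v(r):
    """Surface-area-to-volume ratio of a sphere:
    (4*pi*r**2) / ((4/3)*pi*r**3), which simplifies to 3/r."""
    return (4 * math.pi * r**2) / ((4 / 3) * math.pi * r**3)

small_cell = sphere_sa_to_v(1.0)    # e.g. a ~1 unit radius bacterium
large_cell = sphere_sa_to_v(10.0)   # tenfold larger radius, tenfold smaller SA/V
```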
ii) “Organisms growing within a fixed volume of culture media with no additional media added to the culture, is called a batch culture and is a closed system. Growing bacteria is a standard procedure in microbiology laboratories, hence the growth curve obtained [described in the previous section] is described in many microbiology texts. It is, however, artificial. Bacteria do not grow in such a closed system in vivo. There will be tremendous variation in the conditions that organisms find themselves in… [...] When nutrients become scarce, bacteria not only reduce total metabolic activity but also synthesise proteins specifically designed to help the cell cope with starvation. In addition, bacteria alter cell wall structure so that the organism is more resistant to damaging chemicals. [...] The significance of the stationary phase has been largely overlooked in preference to studies of growth rate, but there are implications for the transmission of bacteria to new hosts. Many bacteria are spread via contaminated water supplies, where nutrients are low and the antibacterial agents (chlorine) are employed to reduce bacterial numbers. These conditions are exactly opposite to those used to test the antibacterial activity of chlorine in the laboratory (actively growing bacteria cultured in nutrient-rich media).”
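The log (exponential) phase of the batch-culture growth curve the book refers to is just repeated doubling; a minimal sketch (the starting count and doubling time are assumptions for illustration, not numbers from the book):

```python
def batch_count(n0, doubling_time, t):
    """Log-phase growth in a closed batch culture: N(t) = N0 * 2**(t / td)."""
    return n0 * 2 ** (t / doubling_time)

# Assumed illustration: 1000 cells, 20-minute doubling time, 2 hours in log phase
n_after_2h = batch_count(1000, 20, 120)   # six doublings, a 64-fold increase
```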
iii) “The bacteria that colonise or infect man are mesophiles; that is, have optimal growth temperatures between 20 and 40°C [...]. Thermophiles are those organisms that grow at elevated temperatures (a good example being those found in thermal lakes) and are not known to infect man. Psychrophiles are those that grow at reduced temperatures below 20°C. These labels are not mutually exclusive. Listeria monocytogenes, for example, is a mesophilic organism that can cause infections in man, but can grow at 4°C…
iv) “[Aerobes] are organisms that grow in the presence of atmospheric concentrations of oxygen. Strict aerobes will not grow in the absence of oxygen. [...] Microaerophiles need reduced concentrations of oxygen (reduced oxygen tension) in order to grow and will not grow in air nor in the complete absence of oxygen. [...] Obligate or strict anaerobes will not grow in the presence of very low concentrations of oxygen and many will also be killed. Different genera of anaerobic bacteria show a range of oxygen sensitivity. Anaerobes are unable to utilise oxygen for respiration and therefore will not grow in the presence of oxygen. Certain anaerobes may tolerate exposure to oxygen for a period, so a distinction has to be made between oxygen killing organisms and oxygen just inhibiting their growth.
Facultative anaerobes are organisms that will grow in air but can also grow in anaerobic conditions. [...] Facultative anaerobes will be the best equipped to deal with varying oxygen tensions, whereas the strict aerobes and strict anaerobes can be considered specialists that have adapted to particular gaseous environments.”
v) “Bacteria that live on and infect man grow best at pH 7 (neutral pH) and may be described as neutrophiles. Acidophiles prefer low pH, less than pH 6, whereas alkaliphiles grow at alkaline pH values over 8. Most organisms will tolerate a range of pH values that extend either side of their pH optimum as a bell-shaped curve (although the shape of the plot need not necessarily be symmetrical). The organisms will possess adaptive mechanisms with which to deal with the limits of tolerance, not least because the organisms themselves will force pH changes through their production of acids or bases as metabolic waste products.”
vi) “In contrast to the degree of detail that has been worked out concerning bacterial metabolic pathways, the routine culture of bacteria in laboratories is often little short of mysticism. The use of culture media with defined ingredients is rarely necessary when cheaper, undefined or semi-defined media will suffice. The media used to culture organisms must be considered highly artificial in comparison to the nutritional conditions encountered by organisms when growing in the natural environment, be that in the environment or on man.”
vii) “One difficulty that undermines our confidence in controlling bacteria and viruses is the definition of death in microbes. Death in bacteria and viruses is a retrospective diagnosis. A bacterium is defined as dead when it cannot be grown. If an organism fails to grow when cultured in a broth or on a plate, having previously been successfully cultivated, then we can say it is dead. Or we can say that we failed to grow the organism in the correct conditions. Because it is impossible to prove a negative event (absence of growth) there is always the worry that we have not killed the organism but simply failed to grow it. The reasons for organisms not growing in laboratory culture media are considerable… [...]
A large number of compounds including chemical disinfectants and antibiotics have been identified that can be used in controlling the multiplication of bacteria. Although the mode of action will differ between different compounds, it is helpful to distinguish between whether they act by killing bacteria, in which case they are termed bactericidal or only inhibit bacterial proliferation rather than kill the organism, when they are described as bacteriostatic. [...] When exposed to a lethal agent not all of the bacteria in the culture die immediately but, instead, a proportion of the total will be killed per unit time. [...] In other words, the more organisms there are, the more organisms are killed.”
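The 'a proportion of the total will be killed per unit time' point is first-order kinetics: survivor counts decay exponentially, which is why microbiologists plot kill curves on a log scale. A minimal sketch (the starting count and kill rate are assumed for illustration):

```python
import math

def survivors(n0, kill_rate, t):
    """First-order inactivation: a fixed fraction of cells dies per unit
    time, so the survivor count decays exponentially: N(t) = N0 * exp(-k*t)."""
    return n0 * math.exp(-kill_rate * t)

# Assumed illustration: 1e6 cells, rate chosen so 90% die each minute,
# i.e. one decimal (log10) reduction per minute.
k = math.log(10)
after_1_min = survivors(1e6, k, 1)   # about 1e5 left
after_3_min = survivors(1e6, k, 3)   # about 1e3 left
```

This is also why 'the more organisms there are, the more organisms are killed': the absolute number dying per minute is proportional to the number still alive.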
viii) “Most non-sporing bacteria (i.e. the vegetative form) are killed when heated to 60°C. Yeasts and fungi need temperatures over 80°C. Bacterial endospores, however, are only killed to any significant extent when held at temperatures above 100°C for over 5 minutes. Bacterial endospores, therefore, pose the greatest problems in obtaining sterility. Because water acts as a better conductor of heat than air, the transfer of energy into the microbe is achieved more efficiently when the organisms are heated in moist or wet conditions…”
And so on. Lots of good stuff here. Some of it is quite hard (probably in part because I don’t remember HS chemistry all that well – but also because a lot of background medical knowledge is assumed throughout the book, some of which I have obtained elsewhere – and some of which I have not…)
I’ve chosen to quote quite extensively from the piece (HT: Ed Yong), because I know a lot of you would miss out on all of it if I just posted a link and a short quote. It’s a very good piece, you should read all of it – and yes, there’s a lot more at the link – if you have any interest in this subject:
“The issue has become pressing, in recent years, for reasons of expense. The soaring cost of health care is the greatest threat to the country’s long-term solvency, and the terminally ill account for a lot of it. Twenty-five per cent of all Medicare spending is for the five per cent of patients who are in their final year of life, and most of that money goes for care in their last couple of months which is of little apparent benefit.
Spending on a disease like cancer tends to follow a particular pattern. There are high initial costs as the cancer is treated, and then, if all goes well, these costs taper off. Medical spending for a breast-cancer survivor, for instance, averaged an estimated fifty-four thousand dollars in 2003, the vast majority of it for the initial diagnostic testing, surgery, and, where necessary, radiation and chemotherapy. For a patient with a fatal version of the disease, though, the cost curve is U-shaped, rising again toward the end—to an average of sixty-three thousand dollars during the last six months of life with an incurable breast cancer. Our medical system is excellent at trying to stave off death with eight-thousand-dollar-a-month chemotherapy, three-thousand-dollar-a-day intensive care, five-thousand-dollar-an-hour surgery. But, ultimately, death comes, and no one is good at knowing when to stop.
For all but our most recent history, dying was typically a brief process. Whether the cause was childhood infection, difficult childbirth, heart attack, or pneumonia, the interval between recognizing that you had a life-threatening ailment and death was often just a matter of days or weeks. [...] These days, swift catastrophic illness is the exception; for most people, death comes only after long medical struggle with an incurable condition—advanced cancer, progressive organ failure (usually the heart, kidney, or liver), or the multiple debilities of very old age. In all such cases, death is certain, but the timing isn’t. So everyone struggles with this uncertainty—with how, and when, to accept that the battle is lost. As for last words, they hardly seem to exist anymore. Technology sustains our organs until we are well past the point of awareness and coherence. Besides, how do you attend to the thoughts and concerns of the dying when medicine has made it almost impossible to be sure who the dying even are? Is someone with terminal cancer, dementia, incurable congestive heart failure dying, exactly?
I once cared for a woman in her sixties who had severe chest and abdominal pain from a bowel obstruction that had ruptured her colon, caused her to have a heart attack, and put her into septic shock and renal failure. I performed an emergency operation to remove the damaged length of colon and give her a colostomy. A cardiologist stented her coronary arteries. We put her on dialysis, a ventilator, and intravenous feeding, and stabilized her. After a couple of weeks, though, it was clear that she was not going to get much better. The septic shock had left her with heart and respiratory failure as well as dry gangrene of her foot, which would have to be amputated. She had a large, open abdominal wound with leaking bowel contents, which would require twice-a-day cleaning and dressing for weeks in order to heal. She would not be able to eat. She would need a tracheotomy. Her kidneys were gone, and she would have to spend three days a week on a dialysis machine for the rest of her life.
She was unmarried and without children. So I sat with her sisters in the I.C.U. family room to talk about whether we should proceed with the amputation and the tracheotomy. “Is she dying?” one of the sisters asked me. I didn’t know how to answer the question. I wasn’t even sure what the word “dying” meant anymore. In the past few decades, medical science has rendered obsolete centuries of experience, tradition, and language about our mortality, and created a new difficulty for mankind: how to die.
I asked Marcoux what he hopes to accomplish for terminal lung-cancer patients when they first come to see him. “I’m thinking, Can I get them a pretty good year or two out of this?” he said. “Those are my expectations. For me, the long tail for a patient like her is three to four years.” But this is not what people want to hear. “They’re thinking ten to twenty years. You hear that time and time again. And I’d be the same way if I were in their shoes.”
You’d think doctors would be well equipped to navigate the shoals here, but at least two things get in the way. First, our own views may be unrealistic. A study led by the Harvard researcher Nicholas Christakis asked the doctors of almost five hundred terminally ill patients to estimate how long they thought their patient would survive, and then followed the patients. Sixty-three per cent of doctors overestimated survival time. Just seventeen per cent underestimated it. The average estimate was five hundred and thirty per cent too high. And, the better the doctors knew their patients, the more likely they were to err.
Second, we often avoid voicing even these sentiments. Studies find that although doctors usually tell patients when a cancer is not curable, most are reluctant to give a specific prognosis, even when pressed. More than forty per cent of oncologists report offering treatments that they believe are unlikely to work. In an era in which the relationship between patient and doctor is increasingly miscast in retail terms—“the customer is always right”—doctors are especially hesitant to trample on a patient’s expectations. You worry far more about being overly pessimistic than you do about being overly optimistic. And talking about dying is enormously fraught. When you have a patient like Sara Monopoli, the last thing you want to do is grapple with the truth. I know, because Marcoux wasn’t the only one avoiding that conversation with her. I was, too.
In 1985, the paleontologist and writer Stephen Jay Gould published an extraordinary essay entitled “The Median Isn’t the Message,” after he had been given a diagnosis, three years earlier, of abdominal mesothelioma, a rare and lethal cancer usually associated with asbestos exposure. He went to a medical library when he got the diagnosis and pulled out the latest scientific articles on the disease. “The literature couldn’t have been more brutally clear: mesothelioma is incurable, with a median survival of only eight months after discovery,” he wrote. The news was devastating. But then he began looking at the graphs of the patient-survival curves.
Gould was a naturalist, and more inclined to notice the variation around the curve’s middle point than the middle point itself. What the naturalist saw was remarkable variation. The patients were not clustered around the median survival but, instead, fanned out in both directions. Moreover, the curve was skewed to the right, with a long tail, however slender, of patients who lived many years longer than the eight-month median. This is where he found solace. He could imagine himself surviving far out in that long tail. And he did. Following surgery and experimental chemotherapy, he lived twenty more years before dying, in 2002, at the age of sixty, from a lung cancer that was unrelated to his original disease.
“It has become, in my view, a bit too trendy to regard the acceptance of death as something tantamount to intrinsic dignity,” he wrote in his 1985 essay. “Of course I agree with the preacher of Ecclesiastes that there is a time to love and a time to die—and when my skein runs out I hope to face the end calmly and in my own way. For most situations, however, I prefer the more martial view that death is the ultimate enemy—and I find nothing reproachable in those who rage mightily against the dying of the light.”
I think of Gould and his essay every time I have a patient with a terminal illness. There is almost always a long tail of possibility, however thin. What’s wrong with looking for it? Nothing, it seems to me, unless it means we have failed to prepare for the outcome that’s vastly more probable. The trouble is that we’ve built our medical system and culture around the long tail. We’ve created a multitrillion-dollar edifice for dispensing the medical equivalent of lottery tickets—and have only the rudiments of a system to prepare patients for the near-certainty that those tickets will not win. Hope is not a plan, but hope is our plan.”
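Gould’s statistical point — that the median of a right-skewed survival curve says very little about how far its tail reaches — is easy to sketch with synthetic numbers. The snippet below is purely illustrative, not data from the essay: I assume a log-normal distribution (a common right-skewed choice) calibrated so that median survival is eight months, and then look at how many simulated patients land far out in the tail.

```python
import math
import random
import statistics

random.seed(0)

# Illustrative sketch only: synthetic numbers, not Gould's actual data.
# For a log-normal distribution the median is exp(mu), so setting
# mu = ln(8) pins the median survival at 8 months.
MEDIAN = 8.0   # months
SIGMA = 1.0    # assumed spread; larger values mean a heavier right tail

samples = [random.lognormvariate(math.log(MEDIAN), SIGMA)
           for _ in range(100_000)]

median_est = statistics.median(samples)
# Fraction of simulated patients surviving past five years (60 months),
# despite the eight-month median.
five_year_survivors = sum(1 for s in samples if s > 60.0) / len(samples)

print(f"median survival: {median_est:.1f} months")
print(f"surviving past five years: {five_year_survivors:.1%}")
```

Under these (assumed) parameters roughly two per cent of simulated patients survive past five years even though half are gone within eight months — which is exactly the asymmetry Gould noticed: the left side of the curve is capped at zero, while the right side can stretch out for decades.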
There is NO single payor system that compensates physicians on health outcomes. None. There are a bunch that pay for ticking boxes (e.g. did you talk about smoking cessation, did you talk about weight loss, etc.) But not a damn one pays based on outcomes. Why, you ask? Because as sure as the sun rises I and every other general surgeon would IMMEDIATELY stop operating on 1. smokers, 2. the obese, and 3. diabetics on an elective basis. They talk a big game, but every time someone points that out to the powers that be they back down. Hell, even REPORTING outcomes has caused a drop in elective CABGs in NY, a rise in emergent ones and worse outcomes across the board.
William Bromberg, in a comment here. I don’t know whether that’s completely true, but I’m sure the fundamental problem is in no way limited to surgery, even if surgery is probably one of the fields where such a change in incentive structures would have the greatest impact. In general, if you compensate doctors based on whether their patients get better or not, it becomes harder to find (enough) doctors willing to treat the risky cases and the very sick.
Maybe the power of the mind isn’t as strong as some people think:
Hróbjartsson and Gøtzsche published a study in 2001 and a follow-up study in 2004 questioning the nature of the placebo effect. … They performed two meta-analyses involving 156 clinical trials in which an experimental drug or treatment protocol was compared to a placebo group and an untreated group, and … found that in studies with a binary outcome, meaning patients were classified as improved or not improved, the placebo group had no statistically significant improvement over the no-treatment group. Similarly, there was no significant placebo effect in studies in which objective outcomes (such as blood pressure) were measured by an independent observer. The placebo effect could only be documented in studies in which the outcomes (improvement or failure to improve) were reported by the subjects themselves.