David Friedman recently asked a related question on SSC (he asked why there are waiting lists for surgical procedures), and as I’d read some stuff about these topics in the past I decided I might as well answer his question. The answer turned out to be somewhat long and detailed, so I figured I might as well post some of it here as well. In a way my answer to David’s question provides belated coverage of a book I read last year, Appointment Planning in Outpatient Clinics and Diagnostic Facilities, which I have covered only in very limited detail here on the blog before (the third paragraph of this post is the only coverage of the book I’ve provided here).
Below I’ve tried to cover these topics in a manner which would make it unnecessary to also read David’s question and related comments.
The brief Springer publication Appointment Planning in Outpatient Clinics and Diagnostic Facilities has some basic stuff about operations research and queueing theory which is useful for making sense of resource allocation decisions made in the medical sector. I think this is the kind of stuff you’ll want to have a look at if you want to understand these things better.
There are many variables which are important here and which may help explain why waiting lists are common in the health care sector (it’s not just surgery). The quotes below are from the book:
“In a walk-in system, patients are seen without an appointment. […] The main advantage of walk-in systems is that access time is reduced to zero. […] A huge disadvantage of patients walking in, however, is that the usually strong fluctuating arrival stream can result in an overcrowded clinic, leading to long waiting times, high peaks in care provider’s working pressure, and patients leaving without treatment (blocking). On other moments of time the waiting room will be practically empty […] In regular appointment systems workload can be dispersed, although appointment planning is usually time consuming. A walk-in system is most suitable for clinics with short service times and multiple care providers, such as blood withdrawal facilities and pre-anesthesia check-ups for non-complex patients. If the service times are longer or the number of care providers is limited, the probability that patients experience a long waiting time becomes too high, and a regular appointment system would be justified”
“Sometimes it is impossible to provide walk-in service for all patients, for example when specific patients need to be prepared for their consultation, or if specific care providers are required, such as anesthesiologists [I noted in my reply to David that these remarks seem highly relevant for the surgery context]. Also, walk-in patients who experience a full waiting room upon arrival may choose to come back at a later point in time. To make sure that they do have access at that point, clinics usually give these patients an appointment. This combination of walk-in and appointment patients requires a specific appointment system that satisfies the following requirements:
1. The access time for appointment patients is below a certain threshold
2. The waiting time for walk-in patients is below a certain threshold
3. The number of walk-in patients who are sent away due to crowding is minimized
To satisfy these requirements, an appointment system should be developed to determine the optimal scheduling of appointments, not only on a day level but also on a week level. Developing such an appointment system is challenging from a mathematical perspective. […] Due to the high variability that is usually observed in healthcare settings, introducing stochasticity in the modeling process is very important to obtain valuable and reasonable results.”
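The book’s claim that walk-in works best for “short service times and multiple care providers” can be illustrated with the standard Erlang C formula for an M/M/c queue, which gives the probability that an arriving patient has to wait at all. The sketch below is my own illustration, not something from the book; the numbers are made up, and the only point is that at the same 80 % load per care provider, pooling more providers sharply reduces the chance of waiting:

```python
import math

def erlang_c_wait_prob(c, offered_load):
    # Erlang C: probability that an arrival must queue in an M/M/c system.
    # offered_load = lambda/mu (in "servers' worth" of work); must be < c.
    a = offered_load
    base = sum(a ** k / math.factorial(k) for k in range(c))
    queue_term = a ** c / math.factorial(c) * c / (c - a)
    return queue_term / (base + queue_term)

# Same 80 % load per care provider, increasing numbers of providers:
# the probability of having to wait falls as providers are pooled.
for c in (1, 2, 4, 8):
    print(c, round(erlang_c_wait_prob(c, 0.8 * c), 3))
```

With a single provider the waiting probability equals the load itself (80 %); with eight pooled providers at the same per-provider load it is far lower, which is one way of seeing why blood-withdrawal-type facilities can run on a walk-in basis while a single specialist cannot.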
“Most elective patients will ultimately evolve into semi-urgent or even urgent patients if treatment is extensively prolonged.” That’s ‘on the one hand’ – but of course there’s also the related ‘on the other hand’-observation that: “Quite often a long waiting list results in a decrease in demand”. Patients might get better on their own and/or decide it’s not worth the trouble to see a service provider – or they might deteriorate.
“Some planners tend to maintain separate waiting lists for each patient group. However, if capacity is shared among these groups, the waiting list should be considered as a whole as well. Allocating capacity per patient group usually results in inflexibility and poor performance”.
“mean waiting time increases with the load. When the load is low, a small increase therein has a minimal effect on the mean waiting time. However, when the load is high, a small increase has a tremendous effect on the mean waiting time. For instance, […] increasing the load from 50 to 55 % increases the waiting time by 10 %, but increasing the load from 90 to 95 % increases the waiting time by 100 % […] This explains why a minor change (for example, a small increase in the number of patients, a patient arriving in a bed or a wheelchair) can result in a major increase in waiting times as sometimes seen in outpatient clinics.”
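The numbers in that quote drop straight out of the textbook M/M/1 result that mean time in the system is W = 1/(μ(1 − ρ)), where ρ is the load. A minimal sketch (my own, with service rate μ normalized to 1) reproduces the book’s comparison almost exactly:

```python
def mean_wait(rho, mu=1.0):
    # M/M/1 mean time in system: W = 1 / (mu * (1 - rho)); requires rho < 1.
    if not 0 <= rho < 1:
        raise ValueError("stable queue requires 0 <= rho < 1")
    return 1.0 / (mu * (1.0 - rho))

for lo, hi in [(0.50, 0.55), (0.90, 0.95)]:
    rise = mean_wait(hi) / mean_wait(lo) - 1
    print(f"load {lo:.0%} -> {hi:.0%}: mean wait up {rise:.0%}")
# -> load 50% -> 55%: mean wait up 11%
# -> load 90% -> 95%: mean wait up 100%
```

The 50 → 55 % step gives an ~11 % increase (the book rounds to 10 %), while the 90 → 95 % step doubles the mean wait: the 1/(1 − ρ) factor blows up as ρ approaches one, which is exactly why small perturbations at high load have outsized effects.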
“One of the most important goals of this chapter is to show that it is impossible to use all capacity and at the same time maintain a short, manageable waiting list. A common mistake is to reason as follows:
Suppose total capacity is 100 appointments. Unused capacity is commonly used for urgent and inpatients, that can be called in last minute. 83 % of capacity is used, so there is on average 17 % of capacity available for urgent and inpatients. The urgent/inpatient demand is on average 20 appointments per day. Since 17 appointments are on average not used for elective patients, a surplus capacity of only three appointments is required to satisfy all patient demand.
Even though this is true on average, more urgent and inpatient capacity is required. This is due to the variation in the process; on certain days 100 % of capacity is required to satisfy elective patient demand, thus leaving no room for any other patients. Furthermore, since 17 slots are dedicated to urgent and inpatients, only 83 slots are available for elective patients, which means that ρ is again equal to 1, resulting in an uncontrollable waiting list.” [ρ represents the average proportion of time which the server/service provider is occupied – a key stability requirement is that ρ is smaller than one; if it is not, the length of the queue becomes unstable/explodes. See also this related link].
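The book’s point that the averages-based reasoning fails “due to the variation in the process” is easy to check with a small Monte Carlo sketch. This is my own illustration, not the authors’ calculation: I take their numbers (capacity 100, mean elective demand 83, mean urgent/inpatient demand 20) and, as an assumption, model both demand streams as independent Poisson draws:

```python
import math
import random

def poisson(lam, rng):
    # Knuth's method: multiply uniforms until the product drops below e^-lam.
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)
capacity, days = 100, 20_000
# Daily elective demand ~ Poisson(83), urgent/inpatient ~ Poisson(20),
# matching the averages in the book's example.
short_days = sum(
    1 for _ in range(days)
    if poisson(83, rng) + poisson(20, rng) > capacity
)
frac = short_days / days
print(f"fraction of days total demand exceeds capacity: {frac:.0%}")
```

Even though average total demand (103) exceeds capacity by only three slots, total demand exceeds capacity on well over half of all days under these assumptions, so a “surplus of three appointments” plainly does not make the problem go away.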
“The challenge is to make a trade-off between maintaining a waiting list which is of acceptable size and the amount of unused capacity. Since the focus in many healthcare facilities is on avoiding unused capacity, waiting lists tend to grow until “something has to be done.” Then, temporarily surplus capacity is deployed, which is usually more expensive than regular capacity […]. Even though waiting lists have a buffer function (i.e., by creating a reservoir of patients that can be planned when demand is low) it is unavoidable that, even in well-organized facilities, over a longer period of time not all capacity is used.”
I think one way to think about whether it makes sense to have a waiting list, or whether you can ‘just use the price variable’, is this: if it is possible for you as a provider to optimize over both the waiting time variable and the price variable (i.e., people demanding the service find some positive waiting time acceptable when it is combined with a non-zero price reduction), the result is always going to be at least as good as it would be if you could only optimize over price. Not including waiting time in the implicit pricing mechanism can thus be thought of as, in a sense, a weakly dominated strategy.
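The weak-dominance argument is almost mechanical: the price-only optimum corresponds to restricting the search to wait = 0, which is one point in the joint search space. A toy sketch makes this concrete; the payoff function below is entirely made up for illustration (demand falling in price and expected wait, plus an assumed “buffering” benefit from a modest queue), so only the inequality matters, not the numbers:

```python
import itertools

def payoff(price, wait):
    # Entirely hypothetical provider payoff: demand falls with price and
    # with expected wait, but a modest queue buffers demand variability
    # and so improves capacity utilization (benefit capped at wait = 2).
    demand = max(0.0, 10.0 - price - 0.5 * wait)
    utilization_gain = 3.0 * min(wait, 2.0)
    return price * demand + utilization_gain

prices = [p / 2 for p in range(21)]   # 0.0 .. 10.0
waits = [w / 2 for w in range(11)]    # 0.0 .. 5.0

best_price_only = max(payoff(p, 0.0) for p in prices)
best_joint = max(payoff(p, w) for p, w in itertools.product(prices, waits))
print(best_price_only, best_joint)   # joint optimum is at least as good
```

Since the joint grid contains every price-only strategy, best_joint ≥ best_price_only always holds; with the assumed buffering benefit it is strictly better, which is the case where tolerating some waiting genuinely pays.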
A lot of the planning stuff relates to how to handle variable demand, and input heterogeneities can be thought of as one of many parameters which may be important to take into account here; surgeons aren’t perfect substitutes. Perhaps neither are nurses, or different hospitals (relevant if you’re higher up in the decision-making hierarchy). An important aspect is the question of whether a surgeon (or a doctor, or a nurse…) might be doing other stuff instead of surgery during down-periods, and what the value of that other stuff might be. In the surgical context, not only is demand variable over time; many different inputs also need to be coordinated – you need a surgeon and a scrub nurse and an anesthesiologist. The sequential and interdependent nature of many medical procedures and inputs likely adds further complexity: whether a condition requires treatment or not, and/or which treatment may be required, may depend upon the results of a test which has to be analyzed before the treatment is started, so you can’t for example switch the order of test and treatment, or for that matter treat patient X based on patient Y’s test results; there’s some built-in inflexibility here at the outset. This type of thing also means there are more nodes in the network, and more places where things can go wrong, resulting in longer waiting times than planned.
I think the potential gains in terms of capacity utilization, risk reduction and increased flexibility to be derived from implementing waiting schemes of some kind in the surgery context would militate strongly against a model without waiting lists, and I think that the surgical field is far from unique in that respect in the context of medical care provision.
“Statistical considerations arise in virtually all areas of science and technology and, beyond these, in issues of public and private policy and in everyday life. While the detailed methods used vary greatly in the level of elaboration involved and often in the way they are described, there is a unity of ideas which gives statistics as a subject both its intellectual challenge and its importance […] In this book we have aimed to discuss the ideas involved in applying statistical methods to advance knowledge and understanding. It is a book not on statistical methods as such but, rather, on how these methods are to be deployed […] We are writing partly for those working as applied statisticians, partly for subject-matter specialists using statistical ideas extensively in their work and partly for masters and doctoral students of statistics concerned with the relationship between the detailed methods and theory they are studying and the effective application of these ideas. Our aim is to emphasize how statistical ideas may be deployed fruitfully rather than to describe the details of statistical techniques.”
I gave the book five stars, but as noted in my review on goodreads I’m not sure the word ‘amazing’ is really fitting – however the book had a lot of good stuff and very little for me to quibble about, so I figured it deserved a high rating. The book deals to a very large extent with topics which are in some sense common to pretty much all statistical analyses, regardless of the research context: formulation of research questions/hypotheses, data search, study designs, data analysis, and interpretation. The authors spend quite a few pages talking about hypothesis testing but none talking about statistical information criteria, a topic with which I’m at this point at least reasonably familiar; had I been slightly more critical I’d have subtracted a star for this omission, but I have the impression that I’m at times perhaps too hard on non-fiction books on goodreads, so I decided not to punish the book for it. Part of the reason why I gave the book five stars is also that I’ve sort of wanted to read a book like this one for a while; I think in some sense it’s the first of its kind I’ve read. I liked the way the book was structured.
Below I have added some observations from the book, as well as a few comments (I should note that I have had to leave out a lot of good stuff).
“When the data are very extensive, precision estimates calculated from simple standard statistical methods are likely to underestimate error substantially owing to the neglect of hidden correlations. A large amount of data is in no way synonymous with a large amount of information. In some settings at least, if a modest amount of poor quality data is likely to be modestly misleading, an extremely large amount of poor quality data may be extremely misleading.”
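The “hidden correlations” point can be made concrete with a small simulation of my own devising (not from the book): observations that share an unobserved cluster effect are positively correlated, so the naive i.i.d. standard error of the mean badly understates the actual sampling variability, no matter how large the sample. All numbers below are made up for illustration:

```python
import math
import random

rng = random.Random(7)

def clustered_sample(n_clusters=100, per_cluster=30):
    # Observations share a hidden cluster effect -> correlated, not i.i.d.
    xs = []
    for _ in range(n_clusters):
        cluster = rng.gauss(0, 1)
        xs.extend(cluster + rng.gauss(0, 1) for _ in range(per_cluster))
    return xs

sample_means, naive_ses = [], []
for _ in range(200):
    xs = clustered_sample()
    n, m = len(xs), sum(xs) / len(xs)
    s2 = sum((x - m) ** 2 for x in xs) / (n - 1)
    sample_means.append(m)
    naive_ses.append(math.sqrt(s2 / n))   # textbook i.i.d. formula

grand = sum(sample_means) / len(sample_means)
true_se = math.sqrt(sum((m - grand) ** 2 for m in sample_means)
                    / (len(sample_means) - 1))
ratio = true_se / (sum(naive_ses) / len(naive_ses))
print(f"actual SE of the mean is ~{ratio:.1f}x the naive i.i.d. estimate")
```

Under these assumptions the true sampling variability is several times the naive estimate, and collecting more observations per cluster barely helps – a direct illustration of “a large amount of data is in no way synonymous with a large amount of information.”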
“For studies of a new phenomenon it will usually be best to examine situations in which the phenomenon is likely to appear in the most striking form, even if this is in some sense artificial or not representative. This is in line with the well-known precept in mathematical research: study the issue in the simplest possible context that is not entirely trivial, and later generalize.”
“It often […] aids the interpretation of an observational study to consider the question: what would have been done in a comparable experiment?”
“An important and perhaps sometimes underemphasized issue in empirical prediction is that of stability. Especially when repeated application of the same method is envisaged, it is unlikely that the situations to be encountered will exactly mirror those involved in setting up the method. It may well be wise to use a procedure that works well over a range of conditions even if it is sub-optimal in the data used to set up the method.”
“Many investigations have the broad form of collecting similar data repeatedly, for example on different individuals. In this connection the notion of a unit of analysis is often helpful in clarifying an approach to the detailed analysis. Although this notion is more generally applicable, it is clearest in the context of randomized experiments. Here the unit of analysis is that smallest subdivision of the experimental material such that two distinct units might be randomized (randomly allocated) to different treatments. […] In general the unit of analysis may not be the same as the unit of interpretation, that is to say, the unit about which conclusions are to be drawn. The most difficult situation is when the unit of analysis is an aggregate of several units of interpretation, leading to the possibility of ecological bias, that is, a systematic difference between, say, the impact of explanatory variables at different levels of aggregation. […] it is important to identify the unit of analysis, which may be different in different parts of the analysis […] on the whole, limited detail is needed in examining the variation within the unit of analysis in question.”
The book briefly discusses issues pertaining to the scale of effort involved when thinking about appropriate study designs and how much/which data to gather for analysis, and notes that often associated costs are not quantified – rather a judgment call is made. An important related point is that e.g. in survey contexts response patterns will tend to depend upon the quantity of information requested; if you ask for too much, few people might reply (…and perhaps it’s also the case that it’s ‘the wrong people’ that reply? The authors don’t touch upon the potential selection bias issue, but it seems relevant). A few key observations from the book on this topic:
“the intrinsic quality of data, for example the response rates of surveys, may be degraded if too much is collected. […] sampling may give higher [data] quality than the study of a complete population of individuals. […] When researchers studied the effect of the expected length (10, 20 or 30 minutes) of a web-based questionnaire, they found that fewer potential respondents started and completed questionnaires expected to take longer (Galesic and Bosnjak, 2009). Furthermore, questions that appeared later in the questionnaire were given shorter and more uniform answers than questions that appeared near the start of the questionnaire.”
Not surprising, but certainly worth keeping in mind. Moving on…
“In general, while principal component analysis may be helpful in suggesting a base for interpretation and the formation of derived variables there is usually considerable arbitrariness involved in its use. This stems from the need to standardize the variables to comparable scales, typically by the use of correlation coefficients. This means that a variable that happens to have atypically small variability in the data will have a misleadingly depressed weight in the principal components.”
The book includes a few pages about the Berkson error model, which I’d never heard about. Wikipedia doesn’t have much about it, and I was debating how much to include about it here – I probably wouldn’t have done more than include the link if the wikipedia article actually covered the topic in any detail, but it doesn’t, and the model seemed important enough to merit a few words. The basic difference between the ‘classical error model’, i.e. the one everybody knows about, and the Berkson error model is that in the former case the measurement error is statistically independent of the true value of X, whereas in the latter case the measurement error is independent of the measured value; the authors note that this implies that the true values are more variable than the measured values in a Berkson error context. Berkson errors can happen, for example, in experimental contexts where the levels of a variable are pre-set by some target, such as in a medical context where a drug is supposed to be administered every X hours; the pre-set levels are then the measured values, and the true values may differ, e.g. if the nurse was late. I thought it important to mention this error model not only because the idea that you might encounter this sort of error-generating process was completely new to me, but also because there is no statistical test that can tell you whether the standard error model or a Berkson error model is the appropriate one; you need to be aware of the difference and think about which model works best, based on the nature of the measuring process.
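The variance relationship that distinguishes the two models is easy to demonstrate with a short simulation (my own sketch, with made-up numbers): under classical error the measured values are more variable than the truth, while under Berkson error – pre-set target values plus independent deviation – it is the true values that are more variable:

```python
import random

rng = random.Random(1)
n = 50_000

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Classical error: measured = true + noise, noise independent of the truth.
true_c = [rng.gauss(10, 2) for _ in range(n)]
meas_c = [t + rng.gauss(0, 1) for t in true_c]

# Berkson error: true = target + noise, noise independent of the measured
# (pre-set) value -- e.g. a nominal dosing interval vs. actual administration.
targets = [rng.choice([8.0, 10.0, 12.0]) for _ in range(n)]
true_b = [t + rng.gauss(0, 1) for t in targets]

print(var(meas_c) > var(true_c))   # classical: measured values more variable
print(var(true_b) > var(targets))  # Berkson: true values more variable
# -> True
# -> True
```

No test on the data itself distinguishes the two setups; you have to know, from the measuring process, whether the noise attaches to the truth or to the recorded value.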
Let’s move on to some quotes dealing with modeling:
“while it is appealing to use methods that are in a reasonable sense fully efficient, that is, extract all relevant information in the data, nevertheless any such notion is within the framework of an assumed model. Ideally, methods should have this efficiency property while preserving good behaviour (especially stability of interpretation) when the model is perturbed. Essentially a model translates a subject-matter question into a mathematical or statistical one and, if that translation is seriously defective, the analysis will address a wrong or inappropriate question […] The greatest difficulty with quasi-realistic models [as opposed to ‘toy models’] is likely to be that they require numerical specification of features for some of which there is very little or no empirical information. Sensitivity analysis is then particularly important.”
“Parametric models typically represent some notion of smoothness; their danger is that particular representations of that smoothness may have strong and unfortunate implications. This difficulty is covered for the most part by informal checking that the primary conclusions do not depend critically on the precise form of parametric representation. To some extent such considerations can be formalized but in the last analysis some element of judgement cannot be avoided. One general consideration that is sometimes helpful is the following. If an issue can be addressed nonparametrically then it will often be better to tackle it parametrically; however, if it cannot be resolved nonparametrically then it is usually dangerous to resolve it parametrically.”
“Once a model is formulated two types of question arise. How can the unknown parameters in the model best be estimated? Is there evidence that the model needs modification or indeed should be abandoned in favour of some different representation? The second question is to be interpreted not as asking whether the model is true [this is the wrong question to ask, as also emphasized by Burnham & Anderson] but whether there is clear evidence of a specific kind of departure implying a need to change the model so as to avoid distortion of the final conclusions. […] it is important in applications to understand the circumstances under which different methods give similar or different conclusions. In particular, if a more elaborate method gives an apparent improvement in precision, what are the assumptions on which that improvement is based? Are they reasonable? […] the hierarchical principle implies, […] with very rare exceptions, that models with interaction terms should include also the corresponding main effects. […] When considering two families of models, it is important to consider the possibilities that both families are adequate, that one is adequate and not the other and that neither family fits the data.” [Do incidentally recall that in the context of interactions, “the term interaction […] is in some ways a misnomer. There is no necessary implication of interaction in the physical sense or synergy in a biological context. Rather, interaction means a departure from additivity […] This is expressed most explicitly by the requirement that, apart from random fluctuations, the difference in outcome between any two levels of one factor is the same at all levels of the other factor. […] The most directly interpretable form of interaction, certainly not removable by [variable] transformation, is effect reversal.”]
“The p-value assesses the data […] via a comparison with that anticipated if H0 were true. If in two different situations the test of a relevant null hypothesis gives approximately the same p-value, it does not follow that the overall strengths of the evidence in favour of the relevant H0 are the same in the two cases.”
“There are […] two sources of uncertainty in observational studies that are not present in randomized experiments. The first is that the ordering of the variables may be inappropriate, a particular hazard in cross-sectional studies. […] if the data are tied to one time point then any presumption of causality relies on a working hypothesis as to whether the components are explanatory or responses. Any check on this can only be from sources external to the current data. […] The second source of uncertainty is that important explanatory variables affecting both the potential cause and the outcome may not be available. […] Retrospective explanations may be convincing if based on firmly established theory but otherwise need to be treated with special caution. It is well known in many fields that ingenious explanations can be constructed retrospectively for almost any finding.”
“The general issue of applying conclusions from aggregate data to specific individuals is essentially that of showing that the individual does not belong to a subaggregate for which a substantially different conclusion applies. In actuality this can at most be indirectly checked for specific subaggregates. […] It is not unknown in the literature to see conclusions such as that there are no treatment differences except for males aged over 80 years, living more than 50 km south of Birmingham and life-long supporters of Aston Villa football club, who show a dramatic improvement under some treatment T. Despite the undoubted importance of this particular subgroup, virtually always such conclusions would seem to be unjustified.” [I loved this example!]
The authors included a few interesting results from an undated Cochrane publication which I thought I should mention. The file-drawer effect is well known, but there are a few other interesting biases at play in a publication bias context. One is time-lag bias, which means that statistically significant results take less time to get published. Another is language bias; statistically significant results are more likely to be published in English publications. A third bias is multiple publication bias; it turns out that papers with statistically significant results are more likely to be published more than once. The last one mentioned is citation bias; papers with statistically significant results are more likely to be cited in the literature.
The authors include these observations in their concluding remarks: “The overriding general principle [in the context of applied statistics], difficult to achieve, is that there should be a seamless flow between statistical and subject-matter considerations. […] in principle seamlessness requires an individual statistician to have views on subject-matter interpretation and subject-matter specialists to be interested in issues of statistical analysis.”
As already mentioned this is a good book. It’s not long, and/but it’s worth reading if you’re in the target group.
ii. “The man who knows everyone’s job isn’t much good at his own.” (-ll-)
iii. “It is amazing what little harm doctors do when one considers all the opportunities they have” (Mark Twain, as quoted in the Oxford Handbook of Clinical Medicine, p.595).
iv. “The only problem with Islamic fundamentalism are the fundamentals of Islam.” (Sam Harris)
v. “[S]ome of the most terrible things in the world are done by people who think, genuinely think, that they’re doing it for the best” (Terry Pratchett, Snuff).
vi. “That was excellently observ’d, say I, when I read a Passage in an Author, where his Opinion agrees with mine. When we differ, there I pronounce him to be mistaken.” (Jonathan Swift)
vii. “Death is nature’s master stroke, albeit a cruel one, because it allows genotypes space to try on new phenotypes.” (Quote from the Oxford Handbook of Clinical Medicine, p.6)
viii. “The purpose of models is not to fit the data but to sharpen the questions.” (Samuel Karlin)
ix. “We may […] view set theory, and mathematics generally, in much the way in which we view theoretical portions of the natural sciences themselves; as comprising truths or hypotheses which are to be vindicated less by the pure light of reason than by the indirect systematic contribution which they make to the organizing of empirical data in the natural sciences.” (Quine)
x. “At root what is needed for scientific inquiry is just receptivity to data, skill in reasoning, and yearning for truth. Admittedly, ingenuity can help too.” (-ll-)
xi. “A statistician carefully assembles facts and figures for others who carefully misinterpret them.” (Quote from Mathematically Speaking – A Dictionary of Quotations, p.329. Only source given in the book is: “Quoted in Evan Esar, 20,000 Quips and Quotes“)
xii. “A knowledge of statistics is like a knowledge of foreign languages or of algebra; it may prove of use at any time under any circumstances.” (Quote from Mathematically Speaking – A Dictionary of Quotations, p. 328. The source provided is: “Elements of Statistics, Part I, Chapter I (p.4)”).
xiii. “We own to small faults to persuade others that we have not great ones.” (Rochefoucauld)
xiv. “There is more self-love than love in jealousy.” (-ll-)
xv. “We should not judge of a man’s merit by his great abilities, but by the use he makes of them.” (-ll-)
xvi. “We should gain more by letting the world see what we are than by trying to seem what we are not.” (-ll-)
xvii. “Put succinctly, a prospective study looks for the effects of causes whereas a retrospective study examines the causes of effects.” (Quote from p.49 of Principles of Applied Statistics, by Cox & Donnelly)
xviii. “… he who seeks for methods without having a definite problem in mind seeks for the most part in vain.” (David Hilbert)
xix. “Give every man thy ear, but few thy voice” (Shakespeare).
xx. “Often the fear of one evil leads us into a worse.” (Nicolas Boileau-Despréaux)
I didn’t finish this book and I didn’t have a lot of nice things to say about it in my review on goodreads, but as I did read roughly half of it and it seemed easy to blog, I figured I might as well cover it here.
I have added some observations from the book and a few comments below:
“While we know that every marriage brings not only promise but substantial risk, to date we know more about the harmful processes in relationships than we do about what makes them work”
“expressions of positivity, especially gratitude, promote relationship maintenance in intimate bonds”
“Baxter and Montgomery (1996) maintain that the closeness of a relationship may be determined by the extent to which the ‘self becomes’ or changes through participation in that relationship, suggesting that boundaries between ‘self’ and ‘other’ are more permeable and fluid in a close, intimate relationship. […] Not surprisingly, this collective sense of an ‘us’ appears to grow stronger with time and age with older couples demonstrating greater levels of we-ness than couples at middle-age”
“It has been demonstrated […] that affirmation by one’s partner that is in keeping with one’s own self-ideal, is associated with better relationship adjustment and stability […]. Moreover, if a spouse’s positive view of his or her mate is more favorable than the mate’s own view, and if the spouse tries to stabilize such positive impressions then, over time, the person’s negative self-view could begin to change for the better […] Perceiving one’s partner as responsive to one’s needs, goals, values and so forth has generally been associated with greater relationship satisfaction and personal well-being […]. The concomitant experience of feeling validated, understood and cared for […] would arguably be that much more imperative when one partner is in distress. Such responsiveness entails the ability “to discern non-verbal cues, and to ‘read between the lines’ about motivations, emotions, and experiences,” […] Being attuned and responsive to non-verbal and para-verbal cues, in turn, is conducive to couple coping because it enables well spouses to be appropriately supportive without having to be explicitly directed or asked.” (I recall thinking that the topic of ‘hidden support’ along these lines was a very important topic to keep in mind in the future when I first read about it. It’s covered in much more detail in one of the previous books I’ve read on related topics, though I can’t recall at the moment if it was in Vangelisti & Perlman, Hargie, or Regan).
“Sexual resilience […] is a term used to describe individuals or couples who are able to withstand, adapt, and find solutions to events and experiences that challenge their sexual relationship.[…] the most common challenges to sexuality include the birth of the first child […]; the onset of a physical or mental illness […]; an emotional blow to the relationship, such as betrayal or hurt; lack of relational intimacy, such as becoming absorbed by other priorities such as career; and changes associated with aging, such as vaginal dryness or erectile dysfunction. […] People who place a relatively low value on sex for physical pleasure and a relatively high value on sex for relational intimacy […] are motivated to engage in sexual activity primarily to feel emotionally close to their partner. […] these individuals may respond to sexual challenges with less distress than those who place a high value on sex for physical pleasure. On an individual level, they are not overly concerned about specific sexual dysfunctions, but are motivated to find alternative ways of continuing to be sexually intimate with their partner, which may or may not include intercourse. […] Facing sexual difficulties with a high value placed on sex for relational intimacy, with strong dyadic adjustment, and with effective and open communication skills primes a couple to respond well to sexual challenges. […] Acceptance, flexibility, and persistence are characteristics most commonly associated with couples who successfully negotiated the challenges to their sexual relationship. […] When the physical pleasure aspect of sex is viewed as an enjoyed, but not essential, component of sex, couples no longer need to rely on perfect sexual functioning to engage in satisfying sex. Physical pleasure can come to be seen as “icing on the cake”, while relational intimacy is the cake itself.”
“Overall findings from neuroimaging studies of resilience suggest that the brains of resilient people are better equipped to tamp down negative emotion. […] In studies of rodents […] and primates […], early stress has consistently been associated with impaired brain development. Specifically, chronic stress has been found to damage neurons and inhibit neurogenesis in the hippocampus and medial prefrontal cortex […]. Stress has the opposite effect on the amygdala, causing dendritic growth accompanied by increased anxiety and aggression […]. Human studies yield results that are consistent with animal studies.”
I won’t cover the human studies in detail, but for example studies looking at the brains of children raised under bad conditions in orphanages in Eastern Europe and Asia have found that children who were adopted early in life (i.e., who got away from the terrible conditions early on) had smaller amygdalae than children who were adopted later. The authors also note that smaller orbitofrontal volumes have been observed in physically abused children, with what is arguably (I’m not sure about the validity of the instrument applied) a dose-response relationship between the severity of abuse and the extent of the brain differences/changes observed, and that smaller hippocampal volumes have been noted in depressed women with a history of childhood maltreatment (their brains were compared with those of depressed women without such a history).
“The positive associations between social support and physical health may be due in large part to the effect of positive relationships on cortisol levels […]. The presence of close, supportive relationships have been associated with lower cortisol levels in adolescents […], middle class mothers of 2-year old children […], elderly widowed adults […], men and women aged 47–59 […], healthy men […], college students […], 18–36 year olds from the UCLA community […], parents expecting their first child […], and relationship partners […] Overall, studies on relationship quality and cortisol levels suggest that close supportive relationships play an important role in boosting resilience.”
“Sharpe (2000) offered an insightful, developmental approach to understanding mutuality in romantic relationships. She described mutuality as a result of “merging” that consists of several steps, which occur and often overlap in the lifetime of a relationship. With the progression of the relationship, the partners start to recognize differences that exist between them and try to incorporate them into their existing concept of relationship. Additionally, both partners search for “his or her own comfort level and balance between time together and time apart” […]. As merging progresses, partners are able to cultivate their existing commonalities and differences, as well as develop multiple ways of staying connected. In truly mutual couples, both partners respect and validate each other’s views, work together to accomplish common goals, and resolve their differences through compromise. Moreover, a critical achievement of mutuality is the internalization of the loving relationship.”
I gave the book two stars on goodreads. The contributors to this volume are from Brazil, Spain, Mexico, Japan, Turkey, Denmark, and the Czech Republic; the editor is from Taiwan. In most chapters you can tell that the first language of these authors is not English; the language is occasionally quite bad, although you can usually tell what the authors are trying to say.
The book is open access and you can read it here. I have included some quotes from the book below:
“It is estimated that men and women with depression are 20.9 and 27 times, respectively, more likely to commit suicide than those without depression (Briley & Lépine, 2011).” [Well, that’s one way to communicate risk… See also this comment].
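The relative-risk framing in that quote invites a back-of-the-envelope translation into absolute risk, which illustrates why reporting only the ratio is ‘one way to communicate risk’. A minimal sketch – note that the baseline rate below is an assumed illustrative figure, not a number from the book:

```python
# Relative risk alone says little about absolute risk. The baseline suicide
# rate here is an ASSUMPTION for illustration only (roughly 10 per 100,000
# per year); the 20.9 figure is the relative risk for men quoted from
# Briley & Lépine (2011).
baseline_annual_rate = 0.0001   # assumed baseline annual rate, non-depressed men
relative_risk_men = 20.9        # relative risk quoted in the book

absolute_rate_depressed = baseline_annual_rate * relative_risk_men
print(f"Assumed baseline rate: {baseline_annual_rate:.4%} per year")
print(f"Implied rate with depression: {absolute_rate_depressed:.3%} per year")
# Even a ~21-fold relative risk can correspond to a small absolute annual
# risk, which a bare risk ratio does not convey.
```

The point is not the particular numbers but the structure: multiplying a small baseline by a large ratio can still yield a small absolute figure.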
“depression is on average twice as common in women as in men (Bromet et al., 2011). […] sex differences have been observed in the prevalence of mental disorders as well as in responses to treatment […] When this [sexual] dimorphism is present [in rats, a common animal model], the drug effect is generally stronger in males than in females.”
“Several reports indicate that follicular stimulating and luteinizing hormones and estradiol oscillations are correlated with the onset or worsening of depression symptoms during early perimenopause […], when major depressive disorder incidence is 3-5 times higher than the male matched population of the same [age] […]. Several longitudinal studies that followed women across the menopausal transition indicate that the risk for significant depressive symptoms increases during the menopausal transition and then decreases in […] early postmenopause […] the impact of hormone oscillations during perimenopause transition may affect the serotonergic system function and increase vulnerability to develop depression.”
“The use of antidepressant drugs for treating patients with depression began in the late 1950s. Since then, many drugs with potential antidepressants have been made available and significant advances have been made in understanding their possible mechanisms of action […]. Only two classes of antidepressants were known until the 80’s: tricyclic antidepressants and monoamine oxidase inhibitors. Both, although effective, were nonspecific and caused numerous side effects […]. Over the past 20 years, new classes of antidepressants have been discovered: selective serotonin reuptake inhibitors, selective serotonin/norepinephrine reuptake inhibitors, serotonin reuptake inhibitors and alpha-2 antagonists, serotonin reuptake stimulants, selective norepinephrine reuptake inhibitors, selective dopamine reuptake inhibitors and alpha-2 adrenoceptor antagonists […] Neither the biological basis of depression […] nor the precise mechanism of antidepressant efficacy are completely understood […]. Indeed, antidepressants are widely prescribed for anxiety and disorders other than depression.”
“Taken together the TCAs and the MAO-Is can be considered to be non-selective or multidimensional drugs, comparable to a more or less rational polypharmacy at the receptor level. This is even when used as monotherapy in the acute therapy of major depression. The new generation of selective antidepressants (the selective serotonin reuptake inhibitors (SSRIs)), or the selective noradrenaline and serotonin reuptake inhibitors (SNRIs) have a selective mechanism of action, thus avoiding polypharmacy. However, the new generation antidepressants such as the SSRIs or SNRIs are less effective than the TCAs. […] The most selective second generation antidepressants have not proved in monotherapy to be more effective on the core symptoms of depression than the first generation TCAs or MAOIs. It is by their safety profiles, either in overdose or in terms of long term side effects, that the second generation antidepressants have outperformed the first generation.”
“Suicide is a serious global public health problem. Nearly 1 million individuals commit suicide every year. […] Suicide […] ranks among the top 10 causes of death in every country, and is one of the three leading causes of death in 15 to 35-year olds.”
“Considering patients that commit suicide, about half of them, at some point, had contact with psychiatric services, yet only a quarter had current or recent contact (Andersen et al., 2000; Lee et al., 2008). A study conducted by Gunnell & Frankel (1994) revealed that 20-25% of those committing suicide had contact with a health care professional in the week before death and 40% had such contact one month before death” (I’m assuming ‘things have changed’ during the last couple of decades, but it would be interesting to know how much they’ve changed).
“In cases of suicide by drug overdose, TCAs have the highest fatal toxicity, followed by serotonin and noradrenalin reuptake inhibitors (SNRIs), specific serotonergic antidepressants (NaSSA) and SSRIs […] SSRIs are considered to be less toxic than TCAs and MAOIs because they have an extended therapeutic window. The ingestion of up to 30 times its recommended daily dose produces little or no symptoms. The intake of 50 to 70 times the recommended daily dose can cause vomiting, mild depression of the CNS or tremors. Death rarely occurs, even at very high doses […] When we talk about suicide and suicide attempt with antidepressants overdose, we are referring mainly to women in their twenties – thirties who are suicide repeaters.”
“Physical pain is one of the most common somatic symptoms in patients that suffer depression and conversely, patients suffering from chronic pain of diverse origins are often depressed. […] While […] data strongly suggest that depression is linked to altered pain perception, pain management has received little attention to date in the field of psychiatric research […] The monoaminergic system influences both mood and pain […], and since many antidepressants modify properties of monoamines, these compounds may be effective in managing chronic pain of diverse origins in non-depressed patients and to alleviate pain in depressed patients. There are abundant evidences in support of the analgesic properties of tricyclic antidepressants (TCAs), particularly amitriptyline, and another TCA, duloxetine, has been approved as an analgesic for diabetic neuropathic pain. By contrast, there is only limited data regarding the analgesic properties of selective serotonin reuptake inhibitors (SSRIs) […]. In general, compounds with noradrenergic and serotonergic modes of action are more effective analgesics […], although the underlying mechanisms of action remain poorly understood […] While the utility of many antidepressant drugs in pain treatment is well established, it remains unclear whether antidepressants alleviate pain by acting on mood (emotional pain) or nociceptive transmission (sensorial pain). Indeed, in many cases, no correlation exists between the level of pain experienced by the patient and the effect of antidepressants on mood. […] Currently, TCAs (amitriptyline, nortriptiline, imipramine and clomipramine) are the most common antidepressants used in the treatment of neuropathic pain processes associated with diabetes, cancer, viral infections and nerve compression. […] TCAs appear to provide effective pain relief at lower doses than those required for their antidepressant effects, while medium to high doses of SNRIs are necessary to produce analgesia”. 
Do keep in mind here that in a neuropathy setting one should not expect to get anywhere near complete pain relief with these drugs – see also this post.
“Prevalence of a more or less severe depression is approximately double in patients with diabetes compared to a general population [for more on related topics, see incidentally this previous post of mine]. […] Diabetes as a primary disease is typically superimposed by depression as a reactive state. Depression is usually a result of exposure to psycho-social factors that are related to hardship caused by chronic disease. […] Several studies concerning comorbidity of type 1 diabetes and depression identified risk factors of depression development; chronic somatic comorbidity and polypharmacy, female gender, higher age, solitary life, lower than secondary education, lower financial status, cigarette smoking, obesity, diabetes complications and a higher glycosylated hemoglobin [Engum, 2005; Bell, 2005; Hermanns, 2005; Katon, 2004]”
Here are my first two posts about the book, which I have now finished. I gave the book three stars on goodreads, but I’m close to a four star rating and I may change my opinion later – overall it’s a pretty good book. I’ve read about many of the topics covered before, but there was also quite a bit of new stuff along the way; the book spans very widely as a whole, but despite this the level of coverage of individual topics is not bad – I actually think the structure of the book makes it more useful as a reference tool than McPhee et al. (…in terms of reference books which one might need to consult in order to make sense of medical tests and test results, I should of course add that no book can beat Newman & Kohn). I have tried to take this into account in the way I’ve been reading the book: I’ve tried to make frequent references in the margin to other relevant works going into more detail about specific topics whenever this seemed like it might be useful. If one does something along those lines systematically, a book like this one can become a really powerful tool – you get the short version with the most important information (…or at least what the authors considered to be the most important information) almost regardless of what topic you’re interested in – I should note in this context that the book has only very limited coverage of mental health topics, so this is one area where you definitely need to go elsewhere for semi-detailed coverage – and if you need more detail than what’s provided in the coverage, you’ll also know from your notes where to go next.
In my last post I talked a bit about which topics were covered in the various chapters in the book – I figured it might make sense here to list the remaining chapter titles in this post. After the (long) surgery chapter, the rest of the chapters deal with epidemiology (I thought this was a poor chapter and the authors definitely did not consider this topic to be particularly important; they spent only 12 pages on it), clinical chemistry (lab results, plasma proteins, topics like ‘what is hypo- and hypernatremia’, …), eponymous syndromes (a random collection of diseases, many of which are quite rare), radiology (MRI vs X-ray? When to use, or not use, contrast material? Etc.), ‘reference intervals etc.‘ (the ‘etc.’ part covered drug therapeutic ranges for some commonly used drugs, as well as some important drug interactions – note to self: The effects of antidiabetic drugs are increased by alcohol, beta-blockers, bezafibrate, and MAOIs, and are decreased by contraceptive steroids, corticosteroids, diazoxide, diuretics, and possibly also lithium), practical procedures (I was considering skipping this chapter because I’m never going to be asked to e.g. insert a chest drain and knowing how to do it seems to be of limited benefit to me, but I figured I might as well read it anyway; there were some details about what can go wrong in the context of specific procedures and what should be done when this happens, and this seemed like valuable information. Also, did you know that “There is no evidence that lying flat post procedure prevents headache” in the context of lumbar punctures? I didn’t, and a lot of doctors probably also don’t. You can actually go even further than that: “Despite years of anecdotal advice to the contrary, none of the following has ever been shown to be a risk factor [for post-LP headache]: position during or after the procedure; hydration status before, during, or after; amount of CSF removed; immediate activity or rest post-LP.”), and emergencies.
In this post I won’t cover specific chapters of the book in any detail; rather I’ll talk about a few specific topics and observations I could be bothered to write some stuff about here. Let’s start with some uplifting news about the topic of liver tumours: Most of these (~90%) are secondary (i.e. metastatic) tumours with an excellent prognosis (“Often <6 months”). No, wait just a minute… Nope, you definitely do not want cancer cells to migrate to your liver. Primary tumours, the most common cause of which is hepatitis B infection (…they say in that part of the coverage – but elsewhere in the book they observe that “alcohol is the prime cause of any liver disease”), also don’t have great outcomes, especially not if you don’t get a new liver: “Resecting solitary tumours <3cm across ↑3yr survival to 59% from 13%; but ~50% have recurrence by 3yrs. Liver transplant gives a 5yr survival rate of 70%.” It should be noted in a disease impact context that this type of cancer is far more common in areas of the world with poorly developed health care systems, like Africa and China.
Alcoholism is another cause of liver tumours. In the book they include the observation that the lifetime prevalence of alcoholism is around 10% for men and 4% for women, but such numbers are of course close to being completely meaningless almost regardless of where they’re coming from. Alcoholism is dangerous; in cases with established cirrhosis, roughly half (52%) of the people who do not stop drinking will be dead within 5 years, compared with 23% of those who do stop. Excessive alcohol consumption can cause alcoholic hepatitis; “[m]ild episodes hardly affect mortality”, but in severe cases half will be dead within a month, and in general 40% of people admitted to hospital for alcoholic hepatitis will be dead within one year of admission. Alcohol can cause portal hypertension (80% of cases in the UK are caused by cirrhosis), which may lead to the development of abnormal blood vessels, e.g. in the oesophagus, which have a tendency to bleed, sometimes fatally. Roughly 30% of cirrhotics with varices bleed, and rebleeding is common: “After a 1st variceal bleed, 60% rebleed within 1yr” and “40% of rebleeders die of complications.” Alcoholism can kill you in a variety of different ways (acute poisonings and accidents should probably be included here as well), and many alcoholics don’t survive long enough to develop cancer.
As mentioned in the first post about the book, acute kidney injury is common in a hospital setting. In the following I’ve added a few more observations about renal disease. “Renal pain is usually a dull ache, constant and in the loin.” But renal disease doesn’t always cause pain, and in general: “There is often a poor correlation between symptoms and severity of renal disease. Progression [in chronic disease] may be so insidious that patients attribute symptoms to age or a minor illnesses. […] Serious renal failure may cause no symptoms at all.” The authors note that odd chronic symptoms like fatigue should not be dismissed without considering a renal function test first. The book has a nice brief overview of the pathophysiology of diabetic nephropathy – this part is slightly technical, but I decided to include it here anyway before moving on to a different topic:
“Early on, glomerular and tubular hypertrophy occur, increasing GFR [glomerular filtration rate, an indicator variable used to assess kidney function] transiently, but ongoing damage from advanced glycosylation end-products (AGE—caused by non-enzymatic glycosylation of proteins from chronic hyperglycaemia) triggers more destructive disease. These AGE trigger an inflammatory response leading to deposition of type IV collagen and mesangial expansion, eventually leading to arterial hyalinization, thickening of the mesangium and glomerular basement membrane and nodular glomerulosclerosis (Kimmelstiel–Wilson lesions). Progression generally occurs in four stages:
1 GFR elevated: early in disease renal blood flow increases, increasing the GFR and leading to microalbuminuria. […]
2 Glomerular hyperfiltration: in the next 5–10yrs mesangial expansion gradually occurs and hyperfiltration at the glomerulus is seen without microalbuminuria.
3 Microalbuminuria: as soon as this is detected it indicates progression of disease, GFR may be raised or normal. This lasts another 5–10yrs.
4 Nephropathy: GFR begins to decline and proteinuria increases.
Patients with type 2 DM may present at the later stages having had undetected hyperglycaemia for many years before diagnosis.”
Vitamin B12 deficiency is quite common; the authors note that it occurs in up to 15% of older people. Severe B12 deficiency is not the sort of thing which will merely leave you feeling ‘a bit under the weather’ – it can lead to permanent brain damage and damage to the spinal cord. “Vitamin B12 is found in meat, fish, and dairy products, but not in plants.” It’s important to note that “foods of non-animal origin contain no B12 unless fortified or contain bacteria.” The wiki article incidentally includes even higher prevalence estimates than the one included in the book (“It is estimated to occur in about 6% of those under the age of 60 and 20% of those over the age of 60. Rates may be as high as 80% in parts of Africa and Asia.”) – this vitamin deficiency is common, and if severe it can have devastating consequences.
On bleeding disorders: “After injury, 3 processes halt bleeding: vasoconstriction, gap-plugging by platelets, and the coagulation cascade […]. Disorders of haemostasis fall into these 3 groups. The pattern of bleeding is important — vascular and platelet disorders lead to prolonged bleeding from cuts, bleeding into the skin (eg easy bruising and purpura), and bleeding from mucous membranes (eg epistaxis [nose bleeds], bleeding from gums, menorrhagia). Coagulation disorders cause delayed bleeding into joints and muscle.” An important observation in the context of major bleeds is incidentally this: “Blood should only be given if strictly necessary and there is no alternative. Outcomes are often worse after a transfusion.” The book has some good chapters about the leukaemias, but they’re relatively rare diseases and some of them are depressing (e.g. acute myeloid leukaemia: according to the book coverage death occurs in ~2 months if untreated, and roughly four out of five treated patients are dead within 3 years) so I won’t talk a lot about them. One thing I found somewhat interesting about the blood disorders covered in the book is actually how rare they are, all things considered: “every day each of us makes 175 billion red cells, 70 billion granulocytes, and 175 billion platelets”. There are lots of opportunities for things to go wrong here…
Some ways to prevent traveller’s diarrhea: “If in doubt, boil all water. Chlorination is OK, but doesn’t kill amoebic cysts (get tablets from pharmacies). Filter water before purifying. Distinguish between simple gravity filters and water purifiers (which also attempt to sterilize chemically). […] avoid surface water and intermittent tap supplies. In Africa assume that all unbottled water is unsafe. With bottled water, ensure the rim is clean & dry. Avoid ice. […] Avoid salads and peel your own fruit. If you cannot wash your hands, discard the part of the food that you are holding […] Hot, well-cooked food is best (>70°C for 2min is no guarantee; many pathogens survive boiling for 5min, but few last 15min)”
An important observation related to this book’s coverage about how to control hospital acquired infection: “Cleaning hospitals: Routine cleaning is necessary to ensure that the hospital is visibly clean and free from dust and soiling. 90% of microorganisms are present within ‘visible dirt’, and the purpose of routine cleaning is to eliminate this dirt. Neither soap nor detergents have antimicrobial activity, and the cleaning process depends essentially on mechanical action.”
Falciparum malaria causes one million deaths/year, according to the book, and mortality is close to 100% in untreated severe malaria – treatment reduces this number to 15-20%. Malaria in returning travellers is not particularly common, but there are a couple thousand cases in the UK each year. Malaria prophylaxis does not give full protection, and “[t]here is no good protection for parts of SE Asia.” Multidrug resistance is common.
Here’s my first post about the book. I’ve read roughly 75% of the book at this point (~650 pages). The chapters I’ve read so far have dealt with the topics of: ‘thinking about medicine’ (an introductory chapter), ‘history and examination’, cardiovascular medicine, chest medicine, endocrinology, gastroenterology, renal medicine, haematology, infectious diseases, neurology, oncology and palliative care, rheumatology, and surgery (this last one is a long chapter – ~100 pages – which I have not yet finished). In my first post I (…mostly? I can’t recall if I included one or two observations made later in the coverage as well…) talked about observations included in the first 140 pages of the book, which relate only to the first three topics mentioned above; the chapter about chest medicine starts at page 154. In this post I’ll move on and discuss stuff covered in the chapters about cardiovascular medicine, chest medicine, and endocrinology.
In the previous post I talked a little bit about heart failure, acute coronary syndromes and a few related topics, but there’s a lot more stuff in the chapter about cardiovascular medicine and I figured I should add a few more observations – so let’s talk about aortic stenosis. The most common cause is ‘senile calcification’. The authors state that one should think of aortic stenosis in any elderly person with chest pain, shortness of breath during exercise (exertional dyspnoea), and fainting episodes (syncope). Symptomatic aortic stenosis tends to be bad news; “If symptomatic, prognosis is poor without surgery: 2–3yr survival if angina/syncope; 1–2yr if cardiac failure. If moderate-to-severe and treated medically, mortality can be as high as 50% at 2yrs”. Surgery can improve the prognosis quite substantially; they note elsewhere in the coverage that a xenograft (e.g. porcine) aortic valve replacement can last (“may require replacement at…”) 8-10 years, whereas a mechanical valve lasts even longer than that. It should also be noted in that context, though, that the latter type requires life-long anticoagulation, whereas the former only requires this if there is atrial fibrillation.
Next: Infective endocarditis. Half of all cases of endocarditis occur on normal heart valves; the presentation in that case is one of acute heart failure. So this is one of those cases where your heart can be fine one day, and not many days later it’s toast and you’ll die unless you get treatment (often you’ll die even if you do get treatment, as mortality is quite high: “Mortality: 5–50% (related to age and embolic events)”; mortality also depends on which organism we’re dealing with: “30% with staphs [S. Aureus]; 14% if bowel organisms; 6% if sensitive streptococci.”). Multiple risk factors are known, but some of those are not easily preventable (renal failure, dermatitis, organ transplantation…); don’t be an IV drug (ab)user, and try to avoid getting (type 2) diabetes. The authors note that: “There is no proven association between having an interventional procedure (dental or non-dental) and the development of IE”, and: “Antibiotic prophylaxis solely to prevent IE is not recommended”.
Speaking of terrible things that can go wrong with your heart for no good reason, hypertrophic cardiomyopathy (HCM) is the leading cause of sudden cardiac death in young people, with an estimated prevalence of 1 in 500. “Sudden death may be the first manifestation of HCM in many patients”. Yeah…
The next chapter in the book, as mentioned, covers chest medicine. At the beginning of the chapter there’s some stuff about what the lungs look like and some stuff about how to figure out whether they’re working or not, or why they’re not working – I won’t talk about that here, but I would note that lung problems can relate to stuff besides ‘just’ lack of oxygen; they can for example also relate to retention of carbon dioxide and the associated acidosis. In general I won’t talk much about this chapter’s coverage, as I’m aware that I have covered many of the topics included in the book before here on the blog in other posts. It should perhaps be noted that whereas the chapter has two pages about lung tumours and two pages about COPD, it has 6 pages about pneumonia; this is still a very important disease and a major killer. Approximately one in five (the number 21% is included in the book) patients with pneumonia in a hospital setting die. Though it should perhaps also be observed that maybe one reason why more stuff is not included about lung cancer in that chapter is that this disease is just depressing and doctors can’t really do all that much. Carcinoma of the bronchus makes up ~19% of all cancers and 27% of cancer deaths in the UK. In terms of prognosis, non-small cell lung cancer has a 50% 2-year mortality in cases where the cancer had not spread at presentation and a 90% 2-year mortality in cases with spread. And that’s ‘the one you would prefer’: Small cell lung cancer is worse, as small cell tumours “are nearly always disseminated at presentation” – here the untreated median survival is 3 months, increasing to 1-1.5 years if treated. The authors note that only 5% (of all cases, including both types) are ‘cured’ (they presumably use those citation marks for a reason).
Malignant mesothelioma, a cancer strongly linked to asbestos exposure most often developing in the pleura, incidentally also has a terrible prognosis (“<2 years”, “Often the diagnosis is only made post-mortem”), however it is relatively rare, with only about ~650 deaths in the UK per year.
5-8% of people in the UK have asthma; I was surprised the number was that high. Most people who get it during childhood either grow out of it or suffer much less as adults, but on the other hand there are also many people who develop chronic asthma late in life. In 2009 approximately 1000 people in the UK died of asthma – unless this number is a big underestimate, it would seem to me that asthma is, at least in terms of mortality, a relatively mild disease (if 5% of the UK population has asthma, that’s 3 million people – and 1000 deaths among 3 million people is not a lot, especially not considering that half of those deaths were in people above the age of 65). COPD is incidentally another respiratory disease which is more common than I had thought; they note that the estimated prevalence in people above the age of 40 in the UK is 10-20%.
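The parenthetical arithmetic above can be spelled out explicitly. A minimal sketch – the UK population figure below is a rough assumption of mine; the prevalence and death counts are the numbers given in the book/post:

```python
# Back-of-the-envelope asthma mortality rate. The population figure is a
# rough ASSUMPTION; prevalence (5%, lower bound of the quoted 5-8% range)
# and the ~1000 deaths in 2009 are the figures cited in the post.
uk_population = 60_000_000      # rough UK population (assumption)
prevalence = 0.05               # lower bound of the quoted 5-8% range
annual_deaths = 1000            # approximate UK asthma deaths in 2009

asthmatics = uk_population * prevalence
annual_mortality = annual_deaths / asthmatics
print(f"~{asthmatics / 1e6:.0f} million people with asthma")
print(f"Annual asthma mortality among them: {annual_mortality:.4%}")
# Roughly 0.03% per year – consistent with the point that asthma mortality
# is low relative to the disease's prevalence.
```

Using the upper bound of the prevalence range (8%) would push the implied rate even lower, so the qualitative conclusion doesn’t hinge on which end of the range you pick.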
The endocrinology chapter has 10 pages about diabetes, and I won’t talk much about that coverage here as I’ve talked about many of these things before on the blog – however a few observations are worth including and discussing here. The authors note that 4% of all pregnancies are complicated by diabetes, with the large majority of cases (3.5%) being new-onset gestational diabetes. In a way the 0.5% could be considered ‘good news’, because they reflect the fact that outcomes have improved so much that a female diabetic can actually carry a child to term without risking her own life or running a major risk that the fetus dies (“As late as 1980, physicians were still counseling diabetic women to avoid pregnancy” – link). But the 3.5%? That’s not good: “All forms [of diabetes] carry an increased risk to mother and foetus: miscarriage, pre-term labour, pre-eclampsia, congenital malformations, macrosomia, and a worsening of diabetic complications”. I’m not fully convinced this statement is actually completely correct, but there’s no doubt that diabetes during pregnancy is not particularly desirable. As to which part of the statement I’m uncertain about, I think gestational diabetes ‘ought to’ have somewhat different effects than type 1, especially in the context of congenital malformations. Based on my understanding of these things, gestational diabetes should be less likely to cause congenital malformations than type 1 diabetes in the mother; diabetes-related congenital malformations tend to develop very early in pregnancy (for details, see the link above), whereas gestational diabetes is closely related to hormonal changes and changing metabolic demands which develop over time during the pregnancy.
Hormonal changes which occur during pregnancy play a key role in the pathogenesis of gestational diabetes: these changes significantly increase insulin resistance, which is what causes some non-diabetic women to become diabetic during pregnancy; the same processes incidentally also cause the insulin requirements of diabetic pregnant women to increase a lot during pregnancy. You’d expect the inherently diabetogenic hormonal and metabolic processes of pregnancy to play a much smaller role at the beginning of the pregnancy than they do later on, especially as women who develop gestational diabetes would likely be able to compensate early in pregnancy, when the increased metabolic demands are much less severe than they are later on. So I’d expect the risk contribution from ‘classic gestational diabetes’ to be larger in the case of macrosomia than in the case of neural tube defects, where type 1s should probably be expected to dominate – a sort of ‘gestational diabetics don’t develop diabetes early enough in pregnancy for the diabetes to be very likely to have much impact on organogenesis’ argument. This is admittedly not a literature I’m intimately familiar with and maybe I’m wrong, but from my reading of their diabetes-related coverage I sort of feel like the authors shouldn’t be expected to be intimately familiar with the literature either, and I’m definitely not taking their views on these sorts of topics to be correct ‘by default’ at this point.
This NHS site/page incidentally seems to support my take on this, as it’s clear that the first occasion for even testing for gestational diabetes is at week 8-12, which is actually after a substantial proportion of diabetes-related organ damage would already be expected to have occurred in the type 1 diabetes context (“There is an increased prevalence of congenital anomalies and spontaneous abortions in diabetic women who are in poor glycemic control during the period of fetal organogenesis, which is nearly complete by 7 wk postconception.” – Sperling et al., see again the link provided above. Note that that entire textbook is almost exclusively about type 1 diabetes, so ‘diabetes’ in the context of that quote equals T1DM), and a glucose tolerance test/screen does not in this setting take place until weeks 24-28.
The two main modifiable risk factors in the context of gestational diabetes are weight and maternal age at pregnancy; the risk of developing gestational diabetes increases with weight and is higher in women above the age of 25. One other sex/gender-related observation to make in the context of diabetes is incidentally that female diabetics are at much higher risk of cardiovascular disease than are non-diabetic females: “DM [diabetes mellitus] removes the vascular advantage conferred by the female sex”. Relatedly, “MI is 4-fold commoner in DM and is more likely to be ‘silent’. Stroke is twice as common.” On a different topic in which I’ve been interested they provided an observation which did not help much: “The role of aspirin prophylaxis […] is uncertain in DM with hypertension.”
They argue in the section about thyroid function tests (p. 209) that people with diabetes mellitus should be screened for abnormalities in thyroid function on the annual review; I’m not actually sure this is done in Denmark and I think it’s not – the DDD annual reports I’ve read have not included this variable, and if it is done I know for a fact that doctors do not report the results to the patient. I’m almost certain they neglected to include a ‘type 1’ in that recommendation, because it makes close to zero sense to screen type 2 diabetics for comorbid autoimmune conditions, and I’d say I’m probably also a little skeptical, though much less skeptical, about annual screenings of all type 1s being potentially cost-effective. Given that autoimmune comorbidities (e.g. Graves’ disease and Hashimoto’s) are much more common in women than in men and that they often present in middle-aged individuals (and given that they’re more common in people who develop diabetes relatively late, unlike me – see Sperling) I would assume I’m relatively low risk and that it would probably not make sense to screen someone like me annually from a cost-benefit/cost-effectiveness perspective; but it might make sense to ask the endocrinologist at my next review about how this stuff is actually being done in Denmark, if only to satisfy my own curiosity. Annual screening of *female*, *type 1* diabetics *above (e.g.) the age of 30* might be a great idea and perhaps less restrictive criteria than that can also be justified relatively easily, but this is an altogether very different recommendation from the suggestion that you should screen all diabetics annually for thyroid problems, which is what they recommend in the book – I guess you can add this one to the list of problems I have with the authors’ coverage of diabetes-related topics (see also my comments in the previous post). 
The sex- and age-distinction is likely much less important than the ‘type’ restriction and maybe you can justify screening all type 1 diabetics (For example: “Hypothyroid or hyperthyroid AITD [autoimmune thyroid disease] has been observed in 10–24% of patients with type 1 diabetes” – Sperling. Base rates are important here: Type 1 diabetes is rare, and Graves’ disease is rare, but if the same HLA mutation causes both in many cases then the population prevalence is not informative about the risk an individual with diabetes and an HLA mutation has of developing Graves’) – but most diabetics are not type 1 diabetics, and it doesn’t make sense to screen a large number of people without autoimmune disease for autoimmune comorbidities they’re unlikely to have (autoimmunity in diabetes is complicated – see the last part of this comment for a few observations of interest on that topic – but it’s not that complicated; most type 2 diabetics are not sick because of autoimmunity-related disease processes, and type 2 diabetics make up the great majority of people with diabetes mellitus in all patient populations around the world). All this being said, it is worth keeping in mind that despite overt thyroid disease being relatively rare in general, subclinical hypothyroidism is common in middle-aged and elderly individuals (“~10% of those >55yrs”); and the authors recommend treating people in this category who also have DM because they are more likely to develop overt disease (…again it probably makes sense to add a ‘T1’ in front of that DM).
Smoking is sexy, right? (Or at least it used to be…). And alcohol makes other people look sexy, right? In a way I find it a little amusing that alcohol and smoking are nevertheless two of the three big organic causes of erectile dysfunction (the third is diabetes).
How much better does it feel to have sex, compared to how it feels to masturbate? No, they don’t ask that question in the book (leave that to me…) but they do provide part of the answer because actually there are ways to quantify this, sort of: “The prolactin increase (♂ and ♀) after coitus is ~400% greater than after masturbation; post-orgasmic prolactin is part of a feedback loop decreasing arousal by inhibiting central dopaminergic processes. The size of post-orgasmic prolactin increase is a neurohormonal index of sexual satisfaction.”
i. Two lectures from the Institute for Advanced Studies:
The IAS has recently uploaded a large number of lectures on youtube, and the ones I post here are a few of those where you can actually tell from the title what the lecture is about; I find it outright weird that these people don’t include the topic covered in the lecture in their lecture titles.
As for the video above, as usual for the IAS videos it’s annoying that you can’t hear the questions asked by the audience, but the sound quality of this video is at least quite a bit better than the sound quality of the video below (which has a couple of really annoying sequences, in particular around the 15-16 minute mark (it gets better), where the image is also causing problems, and in the last couple of minutes of the Q&A things are also not exactly optimal as the lecturer leaves the area covered by the camera in order to write something on the blackboard – but you don’t know what he’s writing and you can’t see the lecturer, because the camera isn’t following him). I found most of the above lecture easier to follow than I did the lecture posted below, though in either case you’ll probably not understand all of it unless you’re an astrophysicist – you definitely won’t in the case of the latter lecture. I found it helpful to look up a few topics along the way, e.g. the wiki articles about the virial theorem (/also dealing with virial mass/radius), active galactic nucleus (this is the ‘AGN’ she refers to repeatedly), and the Tully–Fisher relation.
Given how many questions are asked along the way it’s really annoying that in most cases you can’t hear what people are asking about – this is definitely an area where there’s room for improvement in the context of the IAS videos. The lecture was not easy to follow but I figured along the way that I understood enough of it to make it worth watching the lecture to the end (though I’d say you’ll not miss much if you stop after the lecture – around the 1.05 hours mark – and skip the subsequent Q&A). I’ve relatively recently read about related topics, e.g. pulsar formation and wave- and fluid dynamics, and if I had not I probably would not have watched this lecture to the end.
ii. A vocabulary.com update. I’m slowly working my way up to the ‘Running Dictionary’ rank (I’m only a walking dictionary at this point); here’s some stuff from my progress page:
I recently learned from a note added to a list that I’ve actually learned a very large proportion of all words available on vocabulary.com, which probably also means that I may have been too harsh on the word selection algorithm in past posts here on the blog; if there aren’t (/m)any new words left to learn it should not be surprising that the algorithm presents me with words I’ve already mastered, and it’s not the algorithm’s fault that there aren’t more words available for me to learn (well, it is to the extent that you’re of the opinion that questions should be automatically created by the algorithm as well, but I don’t think we’re quite there yet at this point). The aforementioned note was added in June, and here’s the important part: “there are words on your list that Vocabulary.com can’t teach yet. Vocabulary.com can teach over 12,000 words, but sadly, these aren’t among them”. ‘Over 12,000’ – and I’ve mastered 11,300. When the proportion of mastered words is this high, not only will the default random word algorithm mostly present you with questions related to words you’ve already mastered; but it actually also starts to get hard to find lists with many words you’ve not already mastered – I’ll often load lists with one hundred words and then realize that I’ve mastered every word on the list. This is annoying if you have a desire to continually be presented with both new words as well as old ones. Unless vocabulary.com increases the rate with which they add new words I’ll run out of new words to learn, and if that happens I’m sure it’ll be much more difficult for me to find motivation to use the site.
With all that stuff out of the way, if you’re not a regular user of the site I should note – again – that it’s an excellent resource if you desire to increase your vocabulary. Below is a list of words I’ve encountered on the site in recent weeks(/months?):
Copacetic, frumpy, elision, termagant, harridan, quondam, funambulist, phantasmagoria, eyelet, cachinnate, wilt, quidnunc, flocculent, galoot, frangible, prevaricate, clarion, trivet, noisome, revenant, myrmidon (I have included this word once before in a post of this type, but it is in my opinion a very nice word with which more people should be familiar…), debenture, teeter, tart, satiny, romp, auricular, terpsichorean, poultice, ululation, fusty, tangy, honorarium, eyas, bumptious, muckraker, bayou, hobble, omphaloskepsis, extemporize, virago, rarefaction, flibbertigibbet, finagle, emollient.
iii. I don’t think I’d do things exactly the way she’s suggesting here, but the general idea/approach seems to me appealing enough for it to be worth at least keeping in mind if I ever decide to start dating/looking for a partner.
iv. Some wikipedia links:
Tarrare (featured). A man with odd eating habits and an interesting employment history (“Dr. Courville was keen to continue his investigations into Tarrare’s eating habits and digestive system, and approached General Alexandre de Beauharnais with a suggestion that Tarrare’s unusual abilities and behaviour could be put to military use. A document was placed inside a wooden box which was in turn fed to Tarrare. Two days later, the box was retrieved from his excrement, with the document still in legible condition. Courville proposed to de Beauharnais that Tarrare could thus serve as a military courier, carrying documents securely through enemy territory with no risk of their being found if he were searched.” Yeah…).
1740 Batavia massacre (featured).
v. I am also fun.
“We wrote this book not because we know so much, but because we know we remember so little…the problem is not simply the quantity of information, but the diversity of places from which it is dispensed. Trailing eagerly behind the surgeon, the student is admonished never to forget alcohol withdrawal as a cause of post-operative confusion. The scrap of paper on which this is written spends a month in the pocket before being lost for ever in the laundry. At different times, and in inconvenient places, a number of other causes may be presented to the student. Not only are these causes and aphorisms never brought together, but when, as a surgical house officer, the former student faces a confused patient, none is to hand.”
‘But now you don’t need to look for those scraps of paper anymore because we’ve collected all that information right here, in this book,’ the authors would argue. Or at least some of the important information is included here (despite this being a 900+ page textbook, many books on subtopics covered in the book are much longer than that; for example the Holmes et al. textbook dealing only with sexually transmitted diseases is more than twice as long as this one. Of course a book with that kind of page count will only ever be a ‘handbook’ to someone with acromegaly…).
Anyway, I’m currently reading this book and I figured I should probably talk about a few of the observations made in the book here, to make them easier to remember later on. The book is intended to be used as a reference work for doctors so in a way trying to remember stuff written in it is a strange thing to do – the point of the book is after all that you don’t need to remember all that stuff – but I would prefer to remember some of the things written in this book and this’ll be easier to do if I write about them here on the blog, instead of just ‘keeping them hidden in the book’, so to speak.
I’m assuming nobody reading along here is planning on reading this book so I wasn’t sure how much sense it would make to add impressions about the way it’s written etc. here, but I decided to note down a few things on these topics anyway. I have noted along the way that the authors sometimes include comments about a condition which they only cover later in the same chapter, and this has bothered me a few times; on the other hand I’m well aware that when you’re trying to write a book where it’s supposed to be easy to look things up quickly you need to make some key decisions here and there which will be likely to negatively impact the reading experience of people who, like me, read the book from cover to cover. Most chapters are structured a bit the same way the ‘[Topic X] At a glance…’ textbooks I’ve read in the past were (Medical Statistics at a Glance, Nutrition at a Glance, The Endocrine System at a Glance); the chapters vary in length (for example there are roughly 70 pages about cardiovascular medicine, 40 pages about endocrinology, 50 pages about gastroenterology, and 30 pages about renal medicine) but they generally seem to be structured in much the same way; the chapters are segmented – many chapter segments are two-page segments, which were also predominant in the At a glance texts – and each segment deals with a specific topic in some detail, with details about many aspects of the disease/condition in question, such as information about e.g. incidence/prevalence, risk factors, some notes on pathophysiology, presentation/symptoms/signs, diagnostics (tests to perform, key symptoms to keep in mind, etc.), treatment options (drugs/surgery/etc.?, dosage, indications/contraindications, side effects, drug interactions, etc.), potential complications, and prognostic information.
Not all chapters are structured in the ‘two-page-segments’ way even though this seems to be the baseline structure in many contexts; it’s clear that they’ve given some thought as to how best to present the information included in the coverage. I recall from the At a glance texts that I occasionally thought that the structure felt unnatural, and that they seemed to have committed to a suboptimal coverage format in the specific context – I have not thought along such lines while reading this book, which to me is a sign that they’ve handled these things well. Deviation from the default format occurs e.g. in the chapter on cardiovascular medicine, which has quite a few successive pages on which various types of ECG abnormalities are illustrated (I looked at that stuff and I like to think that I understand this stuff better than I used to now, but I must admit that this was one of the sections of this book into which I did not put a lot of effort, as it in some sense felt like ‘irrelevant knowledge’ – so don’t expect me to be able to tell a right bundle branch block from an acute anterior myocardial infarction on an ECG without having access to this book…). It’s perhaps important to point out that despite the condensed structure of the book the coverage is reasonably detailed; this is not a book with two pages about ‘heart disease’, it’s a book with two pages about rheumatic fever, two pages about right heart valve disease, two pages about infective endocarditis, two pages about broad complex tachycardia, etc. And many of the pages include a lot of information. I have read textbooks dealing with many of the topics they cover and this is also not my first general ‘clinical medicine’ text (that was McPhee et al.), but I’m learning new stuff from the book even about topics with which I’m familiar, which is really nice. It’s a pretty good book so far, even if it’s not perfect; I’m probably at a four star rating at the moment.
In the parts to follow I’ll talk about some of the observations included in the book which I figured might be worth repeating here.
The first observation: They note in the book that 80% of people above the age of 85 years (in Britain) live at home and that 70% of those people can manage stairs; they argue in the same context that any deterioration in an elderly patient should be considered to be from treatable disease until proven otherwise (i.e., the default should not be to say that ‘that’s probably just ageing’).
“Unintentional weight loss should always ring alarm bells”.
A diabetic is probably well-advised to be aware of some of the signs of peripheral arterial disease. These include loss of hair, pallor, shiny skin, cyanosis (bluish discoloration of the skin), dry skin, scaling, deformed toenails, and lowered skin temperature.
“Normally 400-1300mL of gas is expelled PR in 8-20 discrete (or indiscrete) episodes per day. […] most patients with ‘flatulence’ have no GI disease. Air swallowing (aerophagy) is the main cause of flatus; here N2 is the chief gas. If flatus is mostly methane, N2 and CO2, then fermentation by bowel bacteria is the cause, and reducing carbohydrate intake (eg less lactose and wheat) may help.”
If there are red blood cells in the urine, this is due to cancer or glomerulonephritis (let’s not go into details here – we’ll just call this one ‘kidney disease’ for now) until proven otherwise. Painless visible haematuria (blood in the urine) usually equals bladder cancer – it’s definitely a symptom one should have a talk with a doctor about. The book does not mention this, but it’s important to keep in mind however that red/brownish urine is not always due to blood in the urine; it can also be caused by drugs and vegetable dyes (link). I was very surprised about this one in the context of ways to prevent UTIs: “There is no evidence that post-coital voiding, or pre-voiding, or advice on wiping patterns in females is of benefit.” Drinking more water and drinking cranberry or lingonberry juice daily works/lowers risk.
Kidney function is often impaired in people who are hospitalized, with acute kidney injury (AKI) occurring in up to 18% of hospital patients. It’s an important risk factor for mortality. Mortality can be very high in people with AKI, for example people admitted with burns who develop AKI have an 80% mortality rate, and with trauma/surgery it’s 60%. Up to 30% of cases are preventable, and preventable causes include medications (continuing medications as usual e.g. after surgery can be catastrophic, and some of the drugs that can cause kidney problems are drugs people take regularly for chronic conditions such as high blood pressure or diabetes (metformin in particular)) and contrast material used in CT scans and procedures. Kidney function is incidentally often also (chronically) impaired in old people, most of whom have no symptoms; “many elderly people fall into CKD [chronic kidney disease] stage 3 but have little or no progression over many years.” Symptoms of chronic kidney disease will usually not present until stage four is reached, but if onset of kidney failure is slow even people in the later stages may remain asymptomatic. The authors question whether it makes sense to label the old people in stage 3 with an illness; I’m not sure I completely agree (lowered kidney function increases cardiovascular risk, and some of those people may want to address this, if possible), but I’d certainly agree with the position that there’s a risk of overdiagnosis here.
A few more observations about kidneys. The chief cause of death from renal failure is cardiovascular disease, and in the first two stages of chronic kidney disease, the risk of dying from cardiovascular disease is higher than the risk of ever reaching stage 5, end-stage-renal-failure. Blood pressure control is very important in kidney disease as the authors argue that even a small drop in blood pressure may save significant kidney function. The causal link between BP and kidney disease goes both ways: “Hypertension often causes renal problems […] and most renal diseases can cause hypertension”. Once people require renal replacement therapy (RRT) such as haemodialysis mortality is high: Annual mortality is ~20%, mainly due to cardiovascular disease. The authors talk a little bit about diabetes and kidney disease in the book and among other things include the following observations:
“Diabetes is best viewed as a vascular disease with the kidney as one of its chief targets for end-organ damage. The single most important intervention in the long-term care of DM is the control of BP, to protect the heart, the brain, and the kidney. Renal damage may be preventable with good BP and glycaemic control.
In type 1 DM nephropathy is rare in the first 5yrs, after 10yrs annual incidence rises to a peak at 15yrs, then falls again. Those who have not developed nephropathy at 35yrs are unlikely to do so. In type 2 DM around 10% have nephropathy at diagnosis and up to half will go on to develop it over the next 20yrs. 20% of people with type 2 DM will develop ESRF.”
I was surprised by the observation above that “Those who have not developed nephropathy at 35yrs are unlikely to do so”, and I’m not sure I’d agree with the authors about that. The incidence of diabetes-related nephropathy peaks after a diabetes duration of 10-20 years and declines thereafter, but it doesn’t go to zero: “The risk for the development of diabetic nephropathy is low in a normoalbuminuric patient with diabetes’ duration of greater than 30 years. Patients who have no proteinuria after 20-25 years have a risk of developing overt renal disease of only approximately 1% per year.” (link). I’d note that a risk of 1% per year translates to a roughly 25% risk of developing overt renal disease over a 30 year time-frame, and that diabetics with the disease might not agree that a risk of that magnitude means that they are ‘unlikely’ to develop nephropathy, even if the annual risk is not high. Even if the annual risk were only half of that, 0.5%, the cumulative risk over a 30 year period would still be 14%, or roughly one in seven – are people with risks of that magnitude really ‘unlikely’ to develop nephropathy? This is certainly arguable. Many type 1 diabetics are diagnosed in childhood (peak incidence is in the early teenage years) and they can expect to live significantly longer than 20-25 years with the disease – if you disregard the ‘tail risk’ here, you seem in my opinion to be likely to neglect a substantial proportion of the total risk. This is incidentally not the only part of the book where I take issue with their coverage of topics related to diabetes, elsewhere in the book they note that:
“People who improve and maintain their fitness live longer […] Avoiding obesity helps too, but weight loss per se is only useful in reducing cardiovascular risk and the risk of developing diabetes when combined with regular exercise.”
Whereas in the case of nephropathy you can sort of argue about the language being imprecise and/or words meaning different things to different people, here things are a bit more clear because this is just plain WRONG. See e.g. Rana et al. (“Obesity and physical inactivity independently contribute to the development of type 2 diabetes; however, the magnitude of risk contributed by obesity is much greater than that imparted by lack of physical activity”). This is in my opinion the sort of error you should not find in a medical textbook.
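Incidentally, the nephropathy numbers a couple of paragraphs up are easy to check yourself. A minimal sketch, assuming a constant annual risk that is independent from year to year (which is of course a simplification of how real risk accumulates, but it’s the assumption behind the back-of-the-envelope figures I gave):

```python
def cumulative_risk(annual_risk: float, years: int) -> float:
    """Probability of at least one event over `years` years, given a
    constant, independent annual risk: 1 - (1 - p)**n."""
    return 1 - (1 - annual_risk) ** years

# The quote's 1%/yr figure over a 30 year horizon:
print(round(cumulative_risk(0.01, 30), 3))   # prints 0.26 - roughly 25%

# Half that annual risk still accumulates substantially:
print(round(cumulative_risk(0.005, 30), 3))  # prints 0.14 - roughly one in seven
```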
Moving on to other parts of the coverage, let’s talk about angina. There are two types of angina – stable and unstable angina. Stable angina is induced by effort and relieved by rest. Unstable angina is angina of increasing severity or frequency, and it occurs at rest or minimal exertion. Unstable angina requires hospital admission and urgent treatment as it dramatically increases the risk of myocardial infarction. Some more stuff on related topics from the book:
“ACS [acute coronary syndrome] includes unstable angina and evolving MI [myocardial infarction], which share a common underlying pathology—plaque rupture, thrombosis, and inflammation”. Symptoms are: “Acute central chest pain, lasting >20min, often associated with nausea, sweatiness, dyspnoea [shortness of breath], palpitations [awareness of your heart beat]. May present without chest pain (‘silent’ infarct), eg in the elderly or diabetics. In such patients, presentations may include: syncope [fainting], pulmonary oedema, epigastric pain and vomiting, […] acute confusional state, stroke, and diabetic hyperglycaemic states.”
The two key questions to ask in the context of ACS are whether troponin (a cardiac enzyme) levels are elevated and whether there is ST-segment elevation. If there’s no ST-segment elevation and symptoms settle without a rise in troponin levels -> no myocardial damage (that’s the best case scenario – the alternatives are not as great…). In ACS, many deaths occur very soon after symptoms present; 50% of deaths occur within two hours of symptom onset. “Up to 7% die before discharge.” Some MI complications have very high associated mortalities, for example a ventricular septal defect following an MI implies a 50% mortality rate during the first week alone.
Heart failure is a state in which the cardiac output is inadequate for the requirements of the body. It’s actually not that uncommon; the prevalence is 1-3% of the general population, increasing to roughly 10% “among elderly patients”. 25-50% die within 5 years of diagnosis, and if admission is needed the five year mortality rises to 75%.
Hypertension is a major risk factor for stroke and MI and according to the authors causes ~50% of all vascular deaths. Aside from malignant hypertension, which is relatively rare, hypertension is usually asymptomatic; the authors note specifically that “Headache is no more common than in the general population.” Isolated systolic hypertension, the most common form of hypertension, affects more than half of all people above the age of 60. “It is not benign: doubles risk of MI, triples risk of CVA [cerebrovascular accident, i.e. stroke].” The authors argue that: “Almost any adult over 50 would benefit from [antihypertensives], whatever their starting BP.” I think that’s downplaying the potential side effects of treatment, but it’s obvious that many people might benefit from treatment. Steps you can take to lower your BP without using medications according to the authors include: reducing alcohol and salt intake, increasing exercise, reducing weight if obese, stopping smoking, and a low-fat diet. They talk quite a bit about the different medications used to treat hypertension – I won’t cover that stuff in much detail, but I thought it was worth including the observation that ACE-inhibitors may be the 1st choice option in diabetics (especially if there’s renal involvement). On a related note, beta-blockers and thiazides may both increase the risk of new-onset diabetes.
Here’s my first post about the book. As I mentioned in that post, I figured I should limit detailed coverage to the parts of the book dealing with stuff related to diabetic/metabolic neuropathies. There’s a chapter specifically about ‘diabetic and uraemic neuropathies’ in the book and most of the coverage below relates to content covered in that chapter, but I have also included some related observations from other parts of the book as they seemed relevant.
It is noted in the book’s coverage that diabetes is the commonest cause of neuropathy in industrialized countries. There are many ways in which diabetes can affect the nervous system, and not all diabetes-related neuropathies affect peripheral nerves. Apart from distal symmetric polyneuropathy, which can probably in this context be thought of as ‘classic diabetic neuropathy’, focal or multifocal involvement of the peripheral nervous system is also common, and so is autonomic neuropathy. Diabetics are also at increased risk of inflammatory neuropathies such as CIDP – chronic inflammatory demyelinating polyneuropathy (about which the book also has a chapter). Late stage complications of diabetes usually relate to some extent to vessel wall abnormalities and their effects, and the blood vessels supplying the peripheral nerves can be affected just like all other blood vessels; in that context it is of interest to note that the author mentions elsewhere in the book that “tissue ischaemia is more likely to be symptomatic in nerves than in most other organs”. According to the author there isn’t really a great way to classify all the various manifestations of diabetic neuropathy, but most of them fall into one of three groups – distal symmetrical sensorimotor (length-dependent) polyneuropathy (DSSP); autonomic neuropathy; and focal- and multifocal neuropathy. The first one of these is by far the most common, and it is predominantly a sensory neuropathy (‘can you feel this?’ ‘does this hurt?’ ‘Is this water hot or cold?’ – as opposed to motor neuropathy: ‘can you move your arm?’) with no motor deficit.
Neuropathies in diabetics are common – how common? The author notes that the prevalence in several population-based surveys has been found to be around 30% “in studies using restrictive definitions”. The author does not mention this, but given that diabetic neuropathy usually has an insidious onset and given that diabetes-related sensory neuropathy “can be totally asymptomatic”, survey-based measures are if anything likely to underestimate prevalence. Risk increases with age and duration of diabetes; the prevalence of diabetic peripheral neuropathy is more than 50% in type 1 diabetics above the age of 60.
DSSP may lead to numbness, burning feet, a pins and needles sensation and piercing/stabbing pain in affected limbs. The ‘symmetric’ part of the abbreviation means that it usually affects both sides of the body, instead of e.g. just one foot or hand. The length-dependence mentioned in the parenthesis earlier relates in a way to the pathophysiological process. The axons of the peripheral nervous system lack ribosomes, and this means that essential proteins and enzymes needed in distal regions of the nervous system need to be transported great distances through the axons – which again means that neurons with long axons are particularly vulnerable to toxic or metabolic disturbances (introducing a length-dependence aspect in terms of which nerves are affected) which may lead to so-called dying-back axonal degeneration. The sensory loss can be restricted to the toes, extend over the feet, or it can migrate even further up the limbs – when sensory loss extends above the knee, signs and symptoms of nerve damage will usually also be observed in the fingers/hands/forearms. In generalized neuropathies a distinction can be made in terms of which type of nerve fibres are predominantly involved. When small fibres are most affected, sensory effects relating to pain- and temperature perception predominate, whereas light touch, position and vibratory senses are relatively preserved; on the other hand abnormalities of proprioception and sensitivity to light touch, often accompanied by motor deficits, will predominate if larger myelinated fibres are involved. DSSP is a small fibre neuropathy.
One of the ‘problems’ in diabetic neuropathy is actually that whereas sensation is affected, motor function often is not. This might be considered much better than the alternative, but unimpaired motor function actually relates closely to how damage often occurs. Wounds/ulcers developing on the soles of the feet (plantar ulcers) are very common in conditions in which there is sensation loss but no motor involvement/loss of strength; people with absent pain sensation will not know when their feet get hurt, e.g. because of a stone in the shoe or other forms of micro-trauma, but they’re still able to walk around relatively unimpaired and the absence of protective sensation in the limbs can thus lead to overuse of joints and accidental self-injury. A substantial proportion of diabetics with peripheral neuropathy also have lower limb ischaemia from peripheral artery disease, which further increases risk, but even in the absence of ischaemia things can go very wrong (for more details, see Edmonds, Foster, and Sanders – I should perhaps warn that the picture in that link is not a great appetite-stimulant). Of course one related problem here is that you can’t just stop moving around in order to avoid these problems once you’re aware that you have peripheral sensory neuropathy; inactivity will lead to muscle atrophy and ischaemia, and that’s not good for your feet either. The neuropathy may not ‘just’ lead to ulcers, but may also lead to the foot becoming deformed – the incidence of neuroarthropathy is approximately 2%/year in diabetics with peripheral neuropathy. Foot deformity is sometimes of acute onset and may be completely painless, despite leading to (painless) fractures and disorganization of joints. In the context of ulcers it is important that foot ulcers often take a *very* long time to heal, and so they provide excellent entry points for bacteria which among other things can cause chronic osteomyelitis (infection and inflammation of the bone and bone marrow). 
Pronounced motor involvement is as mentioned often absent in DSSP, but it does sometimes occur, usually at a late stage.
The author notes repeatedly in the text that peripheral neuropathy is sometimes the presenting symptom in type 2 diabetes, and I thought I should include that observation here as well. The high blood glucose may not be what leads the patient to see a doctor – sometimes the fact that he can no longer feel his toes is. At that point the nerve damage which has already occurred will of course usually be irreversible.
When the autonomic nervous system is affected (this is called diabetic autonomic neuropathy, DAN), this can lead to a variety of different symptoms. Effects of orthostatic hypotension (OH) are frequent complaints; blackouts, faintness and dizziness or visual obscuration on standing are not always due to side effects of blood pressure medications. The author notes that OH can be aggravated by tricyclic antidepressants, which are often used for treating chronic neuropathic pain (diabetics with autonomic nervous system involvement will often have peripheral neuropathy as well, which is sometimes painful). Neurogenic male impotence seems to be “extremely common”; it involves the absence of an erection at any time under any circumstances. The bladder may also be involved, which can lead to increased intervals between voiding and to residual urine in the bladder after voiding, which in turn can lead to UTIs. It is noted that retrograde ejaculation is frequent in people with bladder atony. The gastrointestinal system can be affected as well; this is often asymptomatic, but may lead to diarrhea and constipation causing weight loss and malnutrition. Associated diarrhea may be accompanied by fecal incontinence. DAN can lead to hypoglycemia unawareness, making glycemic control more difficult to accomplish. Sweating disorders are common in the feet. When a limb is affected by neuropathy it may lose its ability to sweat, and this may lead to other parts of the body (e.g. the head or upper trunk) engaging in ‘compensatory sweating’ to maintain temperature control. Abnormal pupil responses, e.g. in the form of reduced light reflexes and constricted pupils (miosis), are common in diabetics.
Focal (one nerve) and occasionally also multi-focal (more than one nerve) neuropathic syndromes also occur in the diabetic setting. The book spends quite a bit of time talking about what different nerves do and what happens when they stop working, so it’s hard to paint a broad picture of how these types of problems may present – it all depends on which nerve(s) is (are) affected. Usually in the setting of these disorders the long-term prognosis is good, or at least better than in the setting of DSSP; nerve damage is often not permanent. It seems that in terms of cranial nerve involvement, oculomotor nerve palsies are the most common, but these are still quite rare, affecting 1-2% of diabetics. Symptoms are rapid-onset pain followed by double vision, and “spontaneous and complete recovery invariably occurs within 2-3 months” – I would like to note that as far as diabetes complications go, this is probably about as good as it gets… In so-called proximal diabetic neuropathy (PDN), another type of mononeuropathy/focal neuropathy, the thighs are involved, with numbness or pain, often of a burning character which is worse at night, as well as muscle wasting. That syndrome progresses over weeks or months, after which the condition usually stabilizes and the pain improves, though residual muscle weakness seems to be common. Unlike in the case of DSSP, deficits in PDN are usually asymmetric, and both motor involvement and gradual recovery are common – it’s important to note in this context that DSSP virtually never improves spontaneously and often has a progressive course. Multi-focal neuropathies affect only a small proportion of diabetics, and in terms of outcome patterns they might be said to lie somewhere in between mononeuropathies and DSSP; outcomes are better than in the case of DSSP, but long-term sequelae are common.
Diabetics are at increased risk of developing pressure palsies in general. According to the author carpal tunnel syndrome occurs in 12% of diabetic patients, and “the incidence of ulnar neuropathy due to microlesions at the elbow level is high”.
In diabetics with renal failure caused by diabetic nephropathy (or, presumably, renal failure caused by other things as well, but most diabetics with kidney failure will have diabetic nephropathy), neuropathy is common and often severe. Renal failure impairs nerve function and is responsible for sometimes severe motor deficits in these patients. “Recovery from motor deficits is usually good after kidney transplant”. Carpal tunnel syndrome is very common in patients on long-term dialysis; 20-50% of patients dialysed for 10 years or more are reported to have it. The presence of neuropathy in renal patients is closely related to renal function: the lower the renal function, the more likely neurological symptoms become.
As you’ll learn from this book, a lot of things can cause peripheral neuropathies – and so the author notes that “In focal neuropathy occurring in diabetic patients, a neuropathy of another origin must always be excluded.” It’s not always diabetes, and sometimes missing the true cause can be a really bad thing; for example cancer-associated paraneoplastic syndromes are often associated with neuropathy (“paraneoplastic syndromes affect the PNS [Peripheral Nervous System] in up to one third of patients with solid tumors”), and so missing ‘the true cause’ in the case of a focal neuropathy may mean missing a growing tumour.
In terms of treatment options, “There is no specific treatment for distal symmetric polyneuropathy.” Complications can be treated and ideally prevented, but we have no drugs whose primary effect is to stop the nerves from dying. Treatment of autonomic neuropathy mostly relates to treating symptoms, in particular symptomatic OH. Treatment of proximal diabetic neuropathy, which is often very painful, relates only to pain management. Multifocal diabetic neuropathy can be treated with corticosteroids, which minimize inflammation.
Due to how common diabetic neuropathy is, most controlled studies on treatment options for neuropathic pain have involved patients with distal diabetic polyneuropathy. Various treatment options exist in the context of peripheral neuropathies, including antidepressants, antiepileptic drugs and opioids, as well as topical patches. In general pharmacological treatments will not provide anywhere near complete pain relief: “For patients receiving pharmacological treatment, the average pain reduction is about 20-30%, and only 20-35% of patients will achieve at least a 50% pain reduction with available drugs. […] often only partial pain relief from neuropathic pain can be expected, and […] sensory deficits are unlikely to respond to treatment.” Treatment of neuropathic pain is often a trial-and-error process.
i. “A topologist is someone who can’t tell the difference between his ass and a hole in the ground, but who can tell the difference between his ass and two holes in the ground.” (Source unknown, quote from the book Mathematically Speaking)
ii. “If you’re trying to choose between two theories and one gives you an excuse for being lazy, the other one is probably right.” (Paul Graham)
iii. “Research is the process of going up alleys to see if they are blind.” (Marston Bates)
iv. “Common sense is not really so common.” (Antoine Arnauld)
v. “Discovery consists of seeing what everybody has seen and thinking what nobody has thought.” (Albert von Szent-Györgyi)
vi. “All screening programmes do harm; some do good as well, and, of these, some do more good than harm at reasonable cost” (Sir Muir Gray, as quoted in the book Simply Rational: Decisionmaking in the Real World, by Gerd Gigerenzer).
vii. “according to rationality norms requiring only internal coherence, one can be perfectly consistent, and yet wrong about everything” (Gerd Gigerenzer, ibid.)
viii. “The rightness of a thing isn’t determined by the amount of courage it takes.” (Mary Renault)
ix. “Boredom on social occasions is an inescapable hazard for the over-educated”. (Susan Howatch)
x. “Life deserves laughter, hence people laugh at it.” (Henryk Sienkiewicz)
xi. “A man is the sum of his misfortunes. One day you’d think misfortune would get tired, but then time is your misfortune.” (William Faulkner)
xii. “Man knows so little about his fellows. In his eyes all men or women act upon what he believes would motivate him if he were mad enough to do what the other man or woman is doing.” (-ll-)
xiii. “True opinions can prevail only if the facts to which they refer are known; if they are not known, false ideas are just as effective as true ones, if not a little more effective.” (Walter Lippmann)
xiv. “Anyone can be heroic from time to time, but a gentleman is something which you have to be all the time. Which isn’t easy.” (Luigi Pirandello)
xv. “Our lies reveal as much about us as our truths.” (J. M. Coetzee)
xvi. “An honorable man will not be bullied by a hypothesis.” (Bergen Evans)
xvii. “Error is a hardy plant; it flourisheth in every soil”. (Martin Tupper)
xviii. “Well-timed silence hath more eloquence than speech.” (-ll-)
xix. “If a mistake is not a stepping stone, it is a mistake.” (Eli Siegel)
xx. “Mathematicians are a kind of Frenchman. They translate into their own language whatever is said to them and forthwith the thing is utterly changed.” (Goethe)
“This monograph introduces, defines, exemplifies, and characterizes hope that is directed toward others rather than toward the self. […] Because vicarious hope remains a relatively neglected topic within hope theory and research, the current work aims to provide, for the first time, a robust conceptualization of other-oriented hope, and to review and critically examine existing literature on other-oriented hope.”
I really should be blogging more interesting books here instead, such as e.g. Gigerenzer’s book, but this one is easy to blog.
I’ll make this post short, but I do want to make sure no-one misses this crucial point, which is the most important observation in the context of this book: The book is a terrible book. Given that I’ve already shared (some of) my negative views about the book on goodreads I won’t go into all the many reasons why you probably shouldn’t read it here as well; instead I’ll share below a few observations from the book which might be of interest to some of the people reading along.
“Whereas other-interest encapsulates a broad and generalized orientation toward valuing, recognizing, facilitating, promoting, and celebrating positive outcomes for others that have occurred in the past or present, or that may occur in the future, other-oriented hope cleaves that portion of other-interest specific to the harbouring of future-oriented hope for others and (where possible) attendant strivings toward meeting those ends. […] Other-oriented hope is viewed as a specific form of other-interest, one in which we reveal our interest in the welfare of others by apportioning some of our future-oriented mental imaginings to others’ welfare in addition to our own, more self-focused, hope. […] we define other-oriented hope as future-oriented belief, desire, and mental imagining surrounding a valued outcome of another person that is uncertain but possible. […] The dimensions emphasized by Novotny (1989) within an illness context are that hope: is future-oriented; involves active engagement; is an inner resource; reflects possibility; is relational; and concerns issues of importance.”
“Schrank et al. (2010) factor analyzed 60-items taken from three existing hope scales. Four dimensions of hope arose, labelled trust and confidence (e.g., goal striving, positive past experience), positive future orientation (e.g., looking forward, making plans), social relations and personal value (e.g., feeling loved and needed), and lack of perspective (e.g., feeling trapped, becoming uninvolved). […] In the most influential psychological perspective on hope, […] Snyder and colleagues posit that hope is “a positive motivational state that is based on an interactively derived sense of successful (a) agency (goal-directed energy), and (b) pathways (planning to meet goals)” […]. According to this view, hope-agency beliefs provide motivation to pursue valued goals, and hope-pathways beliefs provide plausible routes to meet those goals. […] hope is most often construed as an emotion or as an emotion-based coping process.”
“Lapierre et al. (1993) report that wishes for others is a more frequent category among relatively younger elderly participants and among non-impaired relative to impaired participants. The authors suggest that less healthy individuals (i.e., relatively older and impaired) are more self-focused in their aspirations, emphasizing such fundamental goals as preserving their health. […] Herth identified changes [in hope patterns] as a function of age and impairment level of respondents, with those older than 80 and experiencing mild to moderate impairment being more likely to harbour hope focused on others compared to those who were higher functioning. Moreover, those living in long-term care facilities with moderate to severe impairment directed their hope almost entirely toward others. […] [research] strongly points to the element of vulnerability in another person as a situational influence on other-oriented hope. Learning about others’ vulnerability likely triggers compassion or empathy which, in turn, elicits other-oriented hope. […] In addition to other-oriented hope occurring in response to another’s vulnerability, vicarious hope appears also to be triggered by one’s own vulnerability. […] In related work, Hollis et al. (2007) discuss borrowed hope; for those with no hope, others who have hope for them can be impactful, because hope can be viewed as ‘contagious’.”
“Similar to recognized drawbacks or risks of self-oriented hope, other-oriented hope may be associated with a failure to accept things the way they are, frustration upon hope being dashed, risk taking, or the failure to limit losses […] There is also an opportunity-cost to other-oriented hope: Time spent hoping for another is time not spent generating, contemplating, or acting toward either one’s own hope or to yet other people’s hope. […] There may be costs to the recipient of other-oriented hope in the form of feeling coerced or controlled by others whose vicarious hope is not shared by the recipient. Therefore, some forms of other-oriented hope may reveal only the desired outcomes of the hoping agent as opposed to the person to whom the hope applies. In the classic example, a parent’s hope for a child may not be hope that is held by the child him- or herself, and therefore may be experienced as a significant source of undue pressure and stress by the child. Such coercive hope is, in turn, likely to be harmful to the relationship between the person harbouring the other-oriented hope and the target of that hope. […] In an extreme form, other-oriented hope bears resemblance to other-oriented perfectionism. Hewitt and Flett (2004) argue that perfectionism can be directed toward the self or others. In the former case, perfectionism involves expectations placed upon oneself for unreasonably high performance, whereas in the latter case, perfectionism involves expecting others to uphold an unreasonably high standard and expressing criticism when others fail to meet this expectation. It is possible that other-oriented hope occasionally takes the form of other-oriented expectations for perfection. 
For example, a parent may hope that a child performs well in school, but this could take the form of an overly demanding standard of achievement that is difficult or impossible for the child to attain, creating distress in the child’s life and conflict within the parent-child relationship.”
“McGeer (2004) argues for responsive hope being an optimal point between wishful hope, on the one hand (i.e., desire but too little agency, as in wishful thinking) and willful hope, on the other hand (desire but too much agency, as in an incautious or unrealistic pursuit of one’s dreams). To expand on McGeer’s views, responsive other-oriented hope would fall between wishful other-oriented hope, on the one hand (i.e., desires aimed at others but divorced from an action-orientation toward the fulfillment of such desires), and willful other-oriented hope, on the other hand (i.e., desire for, and overzealous facilitation of, others’ future outcomes, ignoring whether such actions are in the other’s best interest or are endorsed by the other). […] Like self-oriented hope, other-oriented hope can be contested and, in extreme instances, such hope may impede coping, such as by encouraging ongoing denial among family members of the objective circumstances faced by their loved one. Hoping against hope for others may, at times, be more costly than beneficial.”
“Acceptance toward others may be exhibited through not judging others, being tolerant of others who are perceived as different than oneself, being willing to engage with others, and not avoiding others who might be predicted to displease us or upset us. It would appear, therefore, that acceptance, like hope, can be directed toward the self or toward others. Interestingly, acceptance of the self and acceptance of others are included, respectively, in measures of psychological well-being and social well-being (Keyes 2005), suggesting that both self-acceptance and other-acceptance are considered key aspects of psychological health.”
“Davis and Asliturk (2011) review research showing that a realistic orientation toward future outcomes, in which one considers both positive and negative possibilities, is associated with coping more effectively with adversity.”
“Weis and Speridakos (2011) conducted a meta-analysis on 27 studies that employed strategies to enhance hope among both mental health clients and community members. They reported modest effects of such psychotherapy on measures of hope and life satisfaction, but not on measures of psychological distress. The authors caution that effects were relatively small in comparison to other psychoeducational or psychotherapeutic interventions.”
As I’ve observed many times before, a wordpress blog like mine is not a particularly nice place to cover mathematical topics involving equations and lots of Greek letters, so the coverage below will be more or less purely conceptual; don’t take this to mean that the book doesn’t contain formulas – some parts of it are dense with equations.
That of course makes the book hard to blog, for other reasons than just the fact that the equations are typographically hard to deal with. In general it’s hard to talk about the content of a book like this one without going into a lot of details outlining how you get from A to B to C – usually you’re only really interested in C, but you need A and B to make sense of C. At this point I’ve sort of concluded that when covering books like this one I’ll only cover some of the main themes which are easy to discuss in a blog post, and skip (potentially important) points which might also be of interest if they’re difficult to discuss in a small amount of space, which is unfortunately often the case. I should perhaps observe that although I noted in my goodreads review that in a way there was a bit too much philosophy and a bit too little statistics in the coverage for my taste, you should definitely not take that objection to mean that this book is full of fluff; a lot of the philosophical stuff is ‘formal logic’-type material and related comments, and the book in general is quite dense. As I also noted in the goodreads review I didn’t read this book as carefully as I might have done – for example I skipped a couple of the technical proofs because they didn’t seem to be worth the effort – and I’d probably need to read it again to fully understand some of the minor points made in the more technical parts of the coverage. That is of course a related reason why I don’t cover the book in a great amount of detail here: it’s hard work just to read the damn thing, and talking about the technical stuff in detail here as well would definitely be overkill, even if it would surely make me understand the material better.
I have added some observations from the coverage below. I’ve tried to clarify beforehand which question/topic the quote in question deals with, to ease reading/understanding of the topics covered.
On how statistical methods are related to experimental science:
“statistical methods have aims similar to the process of experimental science. But statistics is not itself an experimental science, it consists of models of how to do experimental science. Statistical theory is a logical — mostly mathematical — discipline; its findings are not subject to experimental test. […] The primary sense in which statistical theory is a science is that it guides and explains statistical methods. A sharpened statement of the purpose of this book is to provide explanations of the senses in which some statistical methods provide scientific evidence.”
On mathematics and axiomatic systems (the book goes into much more detail than this):
“It is not sufficiently appreciated that a link is needed between mathematics and methods. Mathematics is not about the world until it is interpreted and then it is only about models of the world […]. No contradiction is introduced by either interpreting the same theory in different ways or by modeling the same concept by different theories. […] In general, a primitive undefined term is said to be interpreted when a meaning is assigned to it and when all such terms are interpreted we have an interpretation of the axiomatic system. It makes no sense to ask which is the correct interpretation of an axiom system. This is a primary strength of the axiomatic method; we can use it to organize and structure our thoughts and knowledge by simultaneously and economically treating all interpretations of an axiom system. It is also a weakness in that failure to define or interpret terms leads to much confusion about the implications of theory for application.”
It’s all about models:
“The scientific method of theory checking is to compare predictions deduced from a theoretical model with observations on nature. Thus science must predict what happens in nature but it need not explain why. […] whether experiment is consistent with theory is relative to accuracy and purpose. All theories are simplifications of reality and hence no theory will be expected to be a perfect predictor. Theories of statistical inference become relevant to scientific process at precisely this point. […] Scientific method is a practice developed to deal with experiments on nature. Probability theory is a deductive study of the properties of models of such experiments. All of the theorems of probability are results about models of experiments.”
But given a frequentist interpretation you can test your statistical theories with the real world, right? Right? Well…
“How might we check the long run stability of relative frequency? If we are to compare mathematical theory with experiment then only finite sequences can be observed. But for the Bernoulli case, the event that frequency approaches probability is stochastically independent of any sequence of finite length. […] Long-run stability of relative frequency cannot be checked experimentally. There are neither theoretical nor empirical guarantees that, a priori, one can recognize experiments performed under uniform conditions and that under these circumstances one will obtain stable frequencies.” [related link]
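The point about finite sequences can be illustrated with a small simulation (my own sketch, not from the book): every run of Bernoulli trials we can actually perform is finite, so each run yields only an estimate of the underlying probability, and nothing observable in any finite prefix can certify that the relative frequency converges.

```python
import random

random.seed(0)

def relative_frequency(p, n):
    """Relative frequency of successes in n simulated Bernoulli(p) trials."""
    return sum(random.random() < p for _ in range(n)) / n

# We can only ever observe finite sequences; each run gives an estimate
# near p, but the limiting behaviour itself is never observed, and the
# estimates from repeated finite experiments disagree with one another.
estimates = [relative_frequency(0.5, 1000) for _ in range(5)]
print(estimates)
```

Of course the simulation itself presupposes the probability model it is sampling from, which is precisely the author's point: the long-run stability is a property of the model, not something the finite data can verify.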
What should we expect to get out of mathematical and statistical theories of inference?
“What can we expect of a theory of statistical inference? We can expect an internally consistent explanation of why certain conclusions follow from certain data. The theory will not be about inductive rationality but about a model of inductive rationality. Statisticians are used to thinking that they apply their logic to models of the physical world; less common is the realization that their logic itself is only a model. Explanation will be in terms of introduced concepts which do not exist in nature. Properties of the concepts will be derived from assumptions which merely seem reasonable. This is the only sense in which the axioms of any mathematical theory are true […] We can expect these concepts, assumptions, and properties to be intuitive but, unlike natural science, they cannot be checked by experiment. Different people have different ideas about what “seems reasonable,” so we can expect different explanations and different properties. We should not be surprised if the theorems of two different theories of statistical evidence differ. If two models had no different properties then they would be different versions of the same model […] We should not expect to achieve, by mathematics alone, a single coherent theory of inference, for mathematical truth is conditional and the assumptions are not “self-evident.” Faith in a set of assumptions would be needed to achieve a single coherent theory.”
On disagreements about the nature of statistical evidence:
“The context of this section is that there is disagreement among experts about the nature of statistical evidence and consequently much use of one formulation to criticize another. Neyman (1950) maintains that, from his behavioral hypothesis testing point of view, Fisherian significance tests do not express evidence. Royall (1997) employs the “law” of likelihood to criticize hypothesis as well as significance testing. Pratt (1965), Berger and Selke (1987), Berger and Berry (1988), and Casella and Berger (1987) employ Bayesian theory to criticize sampling theory. […] Critics assume that their findings are about evidence, but they are at most about models of evidence. Many theoretical statistical criticisms, when stated in terms of evidence, have the following outline: According to model A, evidence satisfies proposition P. But according to model B, which is correct since it is derived from “self-evident truths,” P is not true. Now evidence can’t be two different ways so, since B is right, A must be wrong. Note that the argument is symmetric: since A appears “self-evident” (to adherents of A) B must be wrong. But both conclusions are invalid since evidence can be modeled in different ways, perhaps useful in different contexts and for different purposes. From the observation that P is a theorem of A but not of B, all we can properly conclude is that A and B are different models of evidence. […] The common practice of using one theory of inference to critique another is a misleading activity.”
Is mathematics a science?
“Is mathematics a science? It is certainly systematized knowledge much concerned with structure, but then so is history. Does it employ the scientific method? Well, partly; hypothesis and deduction are the essence of mathematics and the search for counter examples is a mathematical counterpart of experimentation; but the question is not put to nature. Is mathematics about nature? In part. The hypotheses of most mathematics are suggested by some natural primitive concept, for it is difficult to think of interesting hypotheses concerning nonsense syllables and to check their consistency. However, it often happens that as a mathematical subject matures it tends to evolve away from the original concept which motivated it. Mathematics in its purest form is probably not natural science since it lacks the experimental aspect. Art is sometimes defined to be creative work displaying form, beauty and unusual perception. By this definition pure mathematics is clearly an art. On the other hand, applied mathematics, taking its hypotheses from real world concepts, is an attempt to describe nature. Applied mathematics, without regard to experimental verification, is in fact largely the “conditional truth” portion of science. If a body of applied mathematics has survived experimental test to become trustworthy belief then it is the essence of natural science.”
Then what about statistics – is statistics a science?
“Statisticians can and do make contributions to subject matter fields such as physics, and demography but statistical theory and methods proper, distinguished from their findings, are not like physics in that they are not about nature. […] Applied statistics is natural science but the findings are about the subject matter field not statistical theory or method. […] Statistical theory helps with how to do natural science but it is not itself a natural science.”
I should note that I am, and have for a long time been, in broad agreement with the author’s remarks on the nature of science and mathematics above. Popper, among many others, discussed this topic a long time ago, e.g. in The Logic of Scientific Discovery, and I’ve been of the opinion that ‘pure’ mathematics is not science – but rather ‘something else’, which doesn’t mean it’s not useful – for probably a decade. I’ve had a harder time coming to terms with how precisely to deal with statistics in terms of these things, and in that context the book has been conceptually helpful.
Below I’ve added a few links to other stuff also covered in the book:
Radon-Nikodym theorem. (not covered in the book, but the necessity of using ‘a Radon-Nikodym derivative’ to obtain an answer to a question being asked was remarked upon at one point, and I had no clue what he was talking about – it seems that the stuff in the link was what he was talking about).
A very specific and relevant link: Berger and Wolpert (1984). The stuff about Birnbaum’s argument covered from p.24 (p.40) and forward is covered in some detail in the book. The author is critical of the model and explains in the book in some detail why that is. See also: On the foundations of statistical inference (Birnbaum, 1962).
This one was mostly review for me, but there was also some new stuff, and it was a ‘sort of okay’ lecture even if I was highly skeptical about a few of the points covered. I was debating whether to even post the lecture on account of those points of contention, but I figured that by adding a few remarks below I could justify doing it. So below are a few skeptical comments relating to content covered in the lecture:
a) Around 28-29 minutes in he mentions that the cutoff for hypertension in diabetics is a systolic pressure above 130. Opinions definitely differ here, both about the diagnostic cutoff and about treatment targets; in the annual report from the Danish Diabetes Database they follow up on whether hospitals and other medical decision-making units are following guidelines (I’ve talked about the data on the blog, e.g. here), and the BP criterion currently evaluated is whether diabetics with systolic BP above 140 receive antihypertensive treatment. This recent Cochrane review concluded that: “At the present time, evidence from randomized trials does not support blood pressure targets lower than the standard targets in people with elevated blood pressure and diabetes” and noted that: “The effect of SBP targets on mortality was compatible with both a reduction and increase in risk […] Trying to achieve the ‘lower’ SBP target was associated with a significant increase in the number of other serious adverse events”.
b) Whether retinopathy screenings should be conducted yearly or biennially is also contested – this is not mentioned in the lecture, but I sort of figure maybe it should have been. There’s some evidence that annual screening is better (see e.g. this recent review), but the evidence base is not great and clinical outcomes do not seem to differ much in general; as noted in the review, “Observational and economic modelling studies in low-risk patients show little difference in clinical outcomes between screening intervals of 1 year or 2 years”. Stratifying based on risk seems desirable from a cost-effectiveness standpoint, but how to stratify optimally does not yet seem completely clear.
c) The Somogyi phenomenon is highly contested, and I was very surprised by his coverage of this topic – ‘he’s a doctor lecturing on this topic, he should know better’. As the wiki notes: “Although this theory is well known among clinicians and individuals with diabetes, there is little scientific evidence to support it.” I’m highly skeptical, and I seriously question the advice of lowering insulin in the context of morning hyperglycemia. As observed in Cryer’s text: “there is now considerable evidence against the Somogyi hypothesis (Guillod et al. 2007); morning hyperglycemia is the result of insulin lack, not post-hypoglycemic insulin resistance (Havlin and Cryer 1987; Tordjman et al. 1987; Hirsch et al. 1990). There is a dawn phenomenon—a growth hormone–mediated increase in the nighttime to morning plasma glucose concentration (Campbell et al. 1985)—but its magnitude is small (Periello et al. 1991).”
I decided not to embed this lecture in the post mainly because the resolution is so low that a substantial proportion of the visual content is frankly unintelligible; I figured this would bother others more than it did me, and that a semi-satisfactory compromise in terms of coverage would be to link to the lecture but not embed it here. You can hear what the lecturer is saying, which was enough for me, but you can’t make out stuff like effect differences, p-values, or many of the details in the graphic illustrations. Despite its title on youtube, the lecture actually mainly consists of a brief overview of pharmacological treatment options for diabetes.
If you want to skip the introduction, the first talk/lecture starts around 5 minutes and 30 seconds into the video. Note that despite the long running time of this video the lectures themselves only take about 50 minutes in total; the rest of it is post-lecture Q&A and discussion.
I figured I ought to blog this book at some point, and today I decided to take out the time to do it. This is the second book by Darwin I’ve read – for blog content dealing with Darwin’s book The Voyage of the Beagle, see these posts. The two books are somewhat different; Beagle is sort of a travel book written by a scientist who decided to write down his observations during his travels, whereas Origin is a sort of popular-science research treatise – for more details on Beagle, see the posts linked above. If you plan on reading both the way I did, I think you should aim to read them in the order they were written.
I did not rate the book on goodreads because I could not think of a fair way to rate it; it’s a unique and very important contribution to the history of science, but how do you weigh the other dimensions? I decided not to try. Some of the people reviewing the book on goodreads call it ‘dry’ or ‘dense’, but I’d say that I found it quite easy to read compared to quite a few of the other books I’ve been reading this year, and it doesn’t actually take that long to read; I read a quite substantial proportion of the book during a one-day trip to Copenhagen and back. The book can be read by most literate people living in the 21st century – you do not need to know any evolutionary biology to read it – but that said, how you read the book will to some extent depend upon how much you know about the topics about which Darwin theorizes. I had a conversation with my brother about the book a short while after I’d read it, and I recall noting during that conversation that in my opinion one would probably get more out of the book if one has at least some knowledge of geology (for example some knowledge about the history of the theory of continental drift – this book was written long before the theory of plate tectonics was developed), paleontology, Mendel’s laws/genetics/the modern synthesis and modern evolutionary thought, ecology and ethology, etc. Whether or not you actually do ‘get more out of the book’ if you already know some stuff about these topics is perhaps an open question, but I think a case can certainly be made that someone who already knows a bit about evolution and related topics will read this book in a different manner than someone who knows very little about them.
I should perhaps in this context point out to people new to this blog that even though I hardly consider myself an expert on these sorts of topics, I have nevertheless read quite a bit of stuff about them in the past – books like this, this, this, this, this, this, this, this, this, this, this, this, this, this, and this one – so I was reading the book mainly from the vantage point of someone at least somewhat familiar both with many of the basic ideas and with a lot of the refinements people have added to the science of biology since Darwin’s time. One of the things my knowledge of modern biology and related topics had not prepared me for was how moronic some of the ideas of Darwin’s critics were at the time, and how stupid some of the implicit alternatives were; this is actually part of the fun of reading this book. There was a lot of stuff back then which even many of the people presumably held in high regard really had no clue about, and even outrageously idiotic ideas were seemingly taken quite seriously by people involved in the debate. I assume that biologists still to this day have to spend quite a bit of time and effort dealing with ignorant idiots (see also this), but back in Darwin’s day such people were presumably taken seriously to a much greater extent even among people in the scientific community, if indeed they were not themselves part of the scientific community.
Darwin was not right about everything, and there’s a lot of stuff modern biologists know which he had no idea about, so naturally some mistaken ideas made their way into Origin as well; for example, the idea of the inheritance of acquired characteristics (Lamarckian inheritance) occasionally pops up and is implicitly defended in the book as a credible complement to natural selection, as also noted in Oliver Francis’ afterword. On a general note it seems that Darwin did a better job convincing people of the importance of the concept of evolution than he did convincing them that the relevant mechanism behind evolution was natural selection; at least that’s what’s argued in wiki’s featured article on the history of evolutionary thought (to which I have linked before here on the blog).
Darwin emphasizes more than once in the book that evolution is a very slow process which takes a lot of time (for example: “I do believe that natural selection will always act very slowly, often only at long intervals of time, and generally on only a very few of the inhabitants of the same region at the same time”, p.123), and arguably this is also something about which he is part right/part wrong because the speed with which natural selection ‘makes itself felt’ depends upon a variety of factors, and it can be really quite fast in some contexts (see e.g. this and some of the topics covered in books like this one); though you can appreciate why he held the views he did on that topic.
A big problem confronted by Darwin was that he didn’t know how genes work; in a sense the whole topic of the ‘mechanics of the whole thing’ – the ‘nuts and bolts’ – was more or less a black box to him. (I have included a few quotes which indirectly relate to this problem in my coverage of the book below; as can be inferred from those quotes, Darwin wasn’t completely clueless, but he might have benefited greatly from a chat with Gregor Mendel…) In a way a really interesting thing about the book is how plausible the theory of natural selection is made out to be despite this problem, which is blatantly obvious to the modern reader. Darwin was incidentally well aware there was a problem; just six pages into the first chapter of the book he observes frankly that: “The laws governing inheritance are quite unknown”. Some of the quotes below, e.g. on reciprocal crosses, illustrate that he was sort of scratching the surface, but in the book he never does more than that.
Below I have added some quotes from the book.
“Certainly no clear line of demarcation has as yet been drawn between species and sub-species […]; or, again, between sub-species and well-marked varieties, or between lesser varieties and individual differences. These differences blend into each other in an insensible series; and a series impresses the mind with the idea of an actual passage. […] I look at individual differences, though of small interest to the systematist, as of high importance […], as being the first step towards such slight varieties as are barely thought worth recording in works on natural history. And I look at varieties which are in any degree more distinct and permanent, as steps leading to more strongly marked and more permanent varieties; and at these latter, as leading to sub-species, and to species. […] I attribute the passage of a variety, from a state in which it differs very slightly from its parent to one in which it differs more, to the action of natural selection in accumulating […] differences of structure in certain definite directions. Hence I believe a well-marked variety may be justly called an incipient species […] I look at the term species as one arbitrarily given, for the sake of convenience, to a set of individuals closely resembling each other, and that it does not essentially differ from the term variety, which is given to less distinct and more fluctuating forms. The term variety, again, in comparison with mere individual differences, is also applied arbitrarily, and for mere convenience’ sake. […] the species of large genera present a strong analogy with varieties. And we can clearly understand these analogies, if species have once existed as varieties, and have thus originated: whereas, these analogies are utterly inexplicable if each species has been independently created.”
“Owing to [the] struggle for life, any variation, however slight and from whatever cause proceeding, if it be in any degree profitable to an individual of any species, in its infinitely complex relations to other organic beings and to external nature, will tend to the preservation of that individual, and will generally be inherited by its offspring. The offspring, also, will thus have a better chance of surviving, for, of the many individuals of any species which are periodically born, but a small number can survive. I have called this principle, by which each slight variation, if useful, is preserved, by the term of Natural Selection, in order to mark its relation to man’s power of selection. We have seen that man by selection can certainly produce great results, and can adapt organic beings to his own uses, through the accumulation of slight but useful variations, given to him by the hand of Nature. But Natural Selection, as we shall hereafter see, is a power incessantly ready for action, and is as immeasurably superior to man’s feeble efforts, as the works of Nature are to those of Art. […] In looking at Nature, it is most necessary to keep the foregoing considerations always in mind – never to forget that every single organic being around us may be said to be striving to the utmost to increase in numbers; that each lives by a struggle at some period of its life; that heavy destruction inevitably falls either on the young or old, during each generation or at recurrent intervals. Lighten any check, mitigate the destruction ever so little, and the number of the species will almost instantaneously increase to any amount. The face of Nature may be compared to a yielding surface, with ten thousand sharp wedges packed close together and driven inwards by incessant blows, sometimes one wedge being struck, and then another with greater force. 
[…] A corollary of the highest importance may be deduced from the foregoing remarks, namely, that the structure of every organic being is related, in the most essential yet often hidden manner, to that of all other organic beings, with which it comes into competition for food or residence, or from which it has to escape, or on which it preys.”
“Under nature, the slightest difference of structure or constitution may well turn the nicely-balanced scale in the struggle for life, and so be preserved. How fleeting are the wishes and efforts of man! how short his time! And consequently how poor will his products be, compared with those accumulated by nature during whole geological periods. […] It may be said that natural selection is daily and hourly scrutinising, throughout the world, every variation, even the slightest; rejecting that which is bad, preserving and adding up all that is good; silently and insensibly working, whenever and wherever opportunity offers, at the improvement of each organic being in relation to its organic and inorganic conditions of life. We see nothing of these slow changes in progress, until the hand of time has marked the long lapses of ages, and then so imperfect is our view into long past geological ages, that we only see that the forms of life are now different from what they formerly were.”
“I have collected so large a body of facts, showing, in accordance with the almost universal belief of breeders, that with animals and plants a cross between different varieties, or between individuals of the same variety but of another strain, gives vigour and fertility to the offspring; and on the other hand, that close interbreeding diminishes vigour and fertility; that these facts alone incline me to believe that it is a general law of nature (utterly ignorant though we be of the meaning of the law) that no organic being self-fertilises itself for an eternity of generations; but that a cross with another individual is occasionally perhaps at very long intervals — indispensable. […] in many organic beings, a cross between two individuals is an obvious necessity for each birth; in many others it occurs perhaps only at long intervals; but in none, as I suspect, can self-fertilisation go on for perpetuity.”
“as new species in the course of time are formed through natural selection, others will become rarer and rarer, and finally extinct. The forms which stand in closest competition with those undergoing modification and improvement, will naturally suffer most. […] Whatever the cause may be of each slight difference in the offspring from their parents – and a cause for each must exist – it is the steady accumulation, through natural selection, of such differences, when beneficial to the individual, which gives rise to all the more important modifications of structure, by which the innumerable beings on the face of this earth are enabled to struggle with each other, and the best adapted to survive.”
“Natural selection, as has just been remarked, leads to divergence of character and to much extinction of the less improved and intermediate forms of life. On these principles, I believe, the nature of the affinities of all organic beings may be explained. It is a truly wonderful fact – the wonder of which we are apt to overlook from familiarity – that all animals and all plants throughout all time and space should be related to each other in group subordinate to group, in the manner which we everywhere behold – namely, varieties of the same species most closely related together, species of the same genus less closely and unequally related together, forming sections and sub-genera, species of distinct genera much less closely related, and genera related in different degrees, forming sub-families, families, orders, sub-classes, and classes. The several subordinate groups in any class cannot be ranked in a single file, but seem rather to be clustered round points, and these round other points, and so on in almost endless cycles. On the view that each species has been independently created, I can see no explanation of this great fact in the classification of all organic beings; but, to the best of my judgment, it is explained through inheritance and the complex action of natural selection, entailing extinction and divergence of character […] The affinities of all the beings of the same class have sometimes been represented by a great tree. I believe this simile largely speaks the truth. The green and budding twigs may represent existing species; and those produced during each former year may represent the long succession of extinct species. At each period of growth all the growing twigs have tried to branch out on all sides, and to overtop and kill the surrounding twigs and branches, in the same manner as species and groups of species have tried to overmaster other species in the great battle for life. 
The limbs divided into great branches, and these into lesser and lesser branches, were themselves once, when the tree was small, budding twigs; and this connexion of the former and present buds by ramifying branches may well represent the classification of all extinct and living species in groups subordinate to groups. Of the many twigs which flourished when the tree was a mere bush, only two or three, now grown into great branches, yet survive and bear all the other branches; so with the species which lived during long-past geological periods, very few now have living and modified descendants. From the first growth of the tree, many a limb and branch has decayed and dropped off; and these lost branches of various sizes may represent those whole orders, families, and genera which have now no living representatives, and which are known to us only from having been found in a fossil state. As we here and there see a thin straggling branch springing from a fork low down in a tree, and which by some chance has been favoured and is still alive on its summit, so we occasionally see an animal like the Ornithorhynchus or Lepidosiren, which in some small degree connects by its affinities two large branches of life, and which has apparently been saved from fatal competition by having inhabited a protected station. As buds give rise by growth to fresh buds, and these, if vigorous, branch out and overtop on all sides many a feebler branch, so by generation I believe it has been with the great Tree of Life, which fills with its dead and broken branches the crust of the earth, and covers the surface with its ever branching and beautiful ramifications.”
“No one has been able to point out what kind, or what amount, of difference in any recognisable character is sufficient to prevent two species crossing. It can be shown that plants most widely different in habit and general appearance, and having strongly marked differences in every part of the flower, even in the pollen, in the fruit, and in the cotyledons, can be crossed. […] By a reciprocal cross between two species, I mean the case, for instance, of a stallion-horse being first crossed with a female-ass, and then a male-ass with a mare: these two species may then be said to have been reciprocally crossed. There is often the widest possible difference in the facility of making reciprocal crosses. Such cases are highly important, for they prove that the capacity in any two species to cross is often completely independent of their systematic affinity, or of any recognisable difference in their whole organisation. On the other hand, these cases clearly show that the capacity for crossing is connected with constitutional differences imperceptible by us, and confined to the reproductive system. […] fertility in the hybrid is independent of its external resemblance to either pure parent. […] The foregoing rules and facts […] appear to me clearly to indicate that the sterility both of first crosses and of hybrids is simply incidental or dependent on unknown differences, chiefly in the reproductive systems, of the species which are crossed. […] Laying aside the question of fertility and sterility, in all other respects there seems to be a general and close similarity in the offspring of crossed species, and of crossed varieties. If we look at species as having been specially created, and at varieties as having been produced by secondary laws, this similarity would be an astonishing fact. But it harmonizes perfectly with the view that there is no essential distinction between species and varieties. 
[…] the facts briefly given in this chapter do not seem to me opposed to, but even rather to support the view, that there is no fundamental distinction between species and varieties.”
“Believing, from reasons before alluded to, that our continents have long remained in nearly the same relative position, though subjected to large, but partial oscillations of level, I am strongly inclined to…” (…’probably get some things wrong…’, US)
“In considering the distribution of organic beings over the face of the globe, the first great fact which strikes us is, that neither the similarity nor the dissimilarity of the inhabitants of various regions can be accounted for by their climatal and other physical conditions. Of late, almost every author who has studied the subject has come to this conclusion. […] A second great fact which strikes us in our general review is, that barriers of any kind, or obstacles to free migration, are related in a close and important manner to the differences between the productions of various regions. […] A third great fact, partly included in the foregoing statements, is the affinity of the productions of the same continent or sea, though the species themselves are distinct at different points and stations. It is a law of the widest generality, and every continent offers innumerable instances. Nevertheless the naturalist in travelling, for instance, from north to south never fails to be struck by the manner in which successive groups of beings, specifically distinct, yet clearly related, replace each other. […] We see in these facts some deep organic bond, prevailing throughout space and time, over the same areas of land and water, and independent of their physical conditions. The naturalist must feel little curiosity, who is not led to inquire what this bond is. This bond, on my theory, is simply inheritance […] The dissimilarity of the inhabitants of different regions may be attributed to modification through natural selection, and in a quite subordinate degree to the direct influence of different physical conditions. 
The degree of dissimilarity will depend on the migration of the more dominant forms of life from one region into another having been effected with more or less ease, at periods more or less remote; on the nature and number of the former immigrants; and on their action and reaction, in their mutual struggles for life; the relation of organism to organism being, as I have already often remarked, the most important of all relations. Thus the high importance of barriers comes into play by checking migration; as does time for the slow process of modification through natural selection. […] On this principle of inheritance with modification, we can understand how it is that sections of genera, whole genera, and even families are confined to the same areas, as is so commonly and notoriously the case.”
“the natural system is founded on descent with modification […] and […] all true classification is genealogical; […] community of descent is the hidden bond which naturalists have been unconsciously seeking, […] not some unknown plan or creation, or the enunciation of general propositions, and the mere putting together and separating objects more or less alike.”
This is a book full of quotes on the topic of mathematics. As is always the case for books full of quotations, most of the quotes in this book aren’t very good, but occasionally you come across a quote or two that enable you to justify reading on. I’ll likely include some of the good/interesting quotes in the book in future ‘quotes’ posts. Below I’ve added some sample quotes from the book. I’ve read roughly three-fifths of the book so far and I’m currently hovering around a two-star rating on goodreads.
“Since authors seldom, if ever, say what they mean, the following glossary is offered to neophytes in mathematical research to help them understand the language that surrounds the formulas …
ANALOGUE. This is an a. of: I have to have some excuse for publishing it.
APPLICATIONS. This is of interest in a.: I have to have some excuse for publishing it.
COMPLETE. The proof is now c.: I can’t finish it. […]
DIFFICULT. This problem is d.: I don’t know the answer. (Cf. Trivial)
GENERALITY. Without loss of g.: I have done an easy special case. […]
INTERESTING. X’s paper is i.: I don’t understand it.
KNOWN. This is a k. result but I reproduce the proof for convenience of the reader: My paper isn’t long enough. […]
NEW. This was proved by X but the following n. proof may present points of interest: I can’t understand X.
NOTATION. To simplify the n.: It is too much trouble to change now.
OBSERVED. It will be o. that: I hope you have not noticed that.
OBVIOUS. It is o.: I can’t prove it.
READER. The details may be left to the r.: I can’t do it. […]
STRAIGHTFORWARD. By a s. computation: I lost my notes.
TRIVIAL. This problem is t.: I know the answer (Cf. Difficult).
WELL-KNOWN. The result is w.: I can’t find the reference.” (Pétard, H. [Pondiczery, E.S.]).
Here are a few quotes similar to the ones above, provided by a different, unknown source:
“BRIEFLY: I’m running out of time, so I’ll just write and talk faster. […]
HE’S ONE OF THE GREAT LIVING MATHEMATICIANS: He’s written 5 papers and I’ve read 2 of them. […]
I’VE HEARD SO MUCH ABOUT YOU: Stalling a minute may give me time to recall who you are. […]
QUANTIFY: I can’t find anything wrong with your proof except that it won’t work if x is a moon of Jupiter (popular in applied math courses). […]
SKETCH OF A PROOF: I couldn’t verify all the details, so I’ll break it down into the parts I couldn’t prove.
YOUR TALK WAS VERY INTERESTING: I can’t think of anything to say about your talk.” (‘Unknown’)
“Mathematics is neither a description of nature nor an explanation of its operation; it is not concerned with physical motion or with the metaphysical generation of quantities. It is merely the symbolic logic of possible relations, and as such is concerned with neither approximate nor absolute truth, but only with hypothetical truth. That is, mathematics determines which conclusions will follow logically from given premises. The conjunction of mathematics and philosophy, or of mathematics and science is frequently of great service in suggesting new problems and points of view.” (Carl Boyer)
“It’s the nature of mathematics to pose more problems than it can solve.” (Ivars Peterson)
“the social scientist who lacks a mathematical mind and regards a mathematical formula as a magic recipe, rather than as the formulation of a supposition, does not hold forth much promise. A mathematical formula is never more than a precise statement. It must not be made into a Procrustean bed […] The chief merit of mathematization is that it compels us to become conscious of what we are assuming.” (Bertrand de Jouvenel)
“As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.” (Albert Einstein)
“[Mathematics] includes much that will neither hurt one who does not know it nor help one who does.” (J. B. Mencke)
“Pure mathematics consists entirely of asseverations to the extent that, if such and such a proposition is true of anything, then such and such another proposition is true of anything. It is essential not to discuss whether the first proposition is really true, and not to mention what the anything is, of which it is supposed to be true … If our hypothesis is about anything, and not about some one or more particular things, then our deductions constitute mathematics. Thus mathematics may be defined as the subject in which we never know what we are talking about, nor whether what we are saying is true.” (Bertrand Russell)
“Mathematical rigor is like clothing; in its style it ought to suit the occasion, and it diminishes comfort and restricts freedom of movement if it is either too loose or too tight.” (G. F. Simmons).
“at a great distance from its empirical source, or after much “abstract” inbreeding, a mathematical subject is in danger of degeneration. At the inception the style is usually classical; when it shows signs of becoming baroque, then the danger signal is up … In any event, whenever this stage is reached, the only remedy seems to me to be the rejuvenating return to the source: the reinjection of more or less directly empirical ideas.” (John von Neumann)
I could theoretically write a lot of posts about this handbook, but I’m probably not going to do that. As I’ve mentioned before I own a physical copy of this book, and blogging physical books is a pain in the neck compared to blogging e-books – this is one of the main reasons why I’m only now starting to blog the book, despite having finished it some time ago.
The book is a 600+ page handbook (752 pages if you include glossary, index etc.), and it has 16 chapters on various topics. Though I’m far from sure, I’d estimate that I have spent something like 50 hours on the book so far – 3 hours per chapter on average – and that’s just for ‘reading the pages’, so to speak; if I do decide to blog this book in any amount of detail, the amount of time spent on the material in there will go up quite a bit.
So what’s the book about – what is ‘cognitive psychology’? Here are a few remarks on these topics from the preface and the first chapter:
“the leading contemporary approach to human cognition involves studying the brain as well as behaviour. We have used the term “cognitive psychology” in the title of this book to refer to this approach, which forms the basis for our coverage of human cognition. Note, however, that the term “cognitive neuroscience” is often used to describe this approach. […] Note that the distinction between cognitive psychology and cognitive neuroscience is often blurred – the term “cognitive psychology” can be used in a broader sense to include cognitive neuroscience. Indeed, it is in that broader sense that it is used in the title of this book.”
The first chapter – about ‘approaches to human cognition’ – is a bit dense, but I decided to talk a little about it anyway because it seemed like a good way to give you some idea about what the book is about and which sort of content you’ll encounter in it. In the chapter the authors outline four different approaches to human cognition and talk about each of these in a bit of detail. Experimental cognitive psychology is an approach which basically limits itself to behavioural evidence. What they term cognitive neuroscience is an approach using evidence from both behaviour and the brain (that can be accomplished by having people do stuff while their brain activity is being monitored). Cognitive neuropsychology is an approach where you try to use data from brain-damaged individuals to help understand how normal cognition works. The last approach, computational cognitive science, I recently dealt with in the Science of Reading handbook – this approach involves constructing computational models to understand/simulate specific aspects of human cognition. All four approaches are used throughout the book to obtain a greater understanding of the topics covered.
The introductory chapter also gives the reader some information about what the brain looks like and how it’s structured, adds some comments about distinctions between various forms of processing, such as bottom-up processing and top-down processing and serial processing and parallel processing, and adds information about common techniques used to study brain activity in neuroscience (single-unit recording, event-related potentials, positron emission tomography, fMRI, efMRI, magnetoencephalography, and transcranial magnetic stimulation). I don’t want to go too much into the specifics of all those topics here, but I should note that I was unaware of the existence of TMS (transcranial magnetic stimulation) research methodologies, and that it sounds like an interesting approach; basically, researchers using this method apply magnetic pulses to briefly disrupt the functioning of some area of the brain and then evaluate performance on cognitive tasks carried out while that brain area is disrupted – if people perform more poorly on a given task when the area is disrupted by the magnetic field, it may indicate that the area is involved in that task. For various reasons it’s not unproblematic to interpret the results of TMS research and there are various limitations to the application of this method, but this is experimental manipulation of a kind I’d basically assumed did not exist in this field before I started reading the book.
It’s noted in the first chapter that: “much research in cognitive psychology suffers from a relative lack of ecological validity […] and paradigm specificity (findings do not generalise from one paradigm to others). The same limitations apply to cognitive neuroscience since cognitive neuroscientists generally use tasks previously developed by cognitive psychologists. Indeed, the problem of ecological validity may be greater in cognitive neuroscience.” In the context of cognitive neuropsychology, there are also various problems which I’m reasonably sure I’ve talked about here before – for example brain damage is rarely conveniently localized to just one brain area the researcher happens to be interested in, and the use of compensatory strategies by individuals with brain damage may cause problems with interpretation. Small sample sizes and large patient heterogeneities within these samples also do not help. As for the last approach, computational cognitive science, the problems mentioned are probably mostly the ones you’d expect; the models developed are rarely used to make new predictions because they’re often too general to really make them at all easy to evaluate one way or the other (lots of free parameters you can fit however you like), and despite their complexity they tend to ignore a lot of presumably highly relevant details.
The above was an outline of some stuff covered in the first chapter. The book as mentioned has 16 chapters. ‘Part 1’ deals with visual perception and attention – there’s a lot of stuff about that kind of thing in the book, almost 200 pages – and includes chapters about ‘basic processes in visual perception’, ‘object and face recognition’, ‘perception, motion, and action’, and ‘attention and performance’. Part 2 deals with memory, including chapters about ‘learning, memory, and forgetting’, ‘long-term memory systems’ and ‘everyday memory’. That part I found interesting and I hope I’ll manage to find the time to cover some of that stuff here later on. Part 3 deals with language and includes chapters about ‘reading and speech perception’, ‘language comprehension’, and ‘language production’. I recall wondering a long time ago on this blog if people doing research on those kinds of topics distinguished between language production and language comprehension; it’s pretty obvious that they do. Part 4 deals with ‘thinking and reasoning’ and includes chapters about ‘problem solving and expertise’, ‘judgment and decision making’, and ‘inductive and deductive reasoning’. Interestingly the first of these chapters talks quite a bit about chess, because chess expertise is one of the areas researchers have studied when looking into the topic of expertise. I may decide to talk about these things later on, but I’m not sure I’ll cover that part in much detail because Gigerenzer (whose research the authors discuss in chapter 13) covers some related topics in his book Simply Rational, which I’m currently reading, and I frankly like his coverage better (I should perhaps clarify in light of the previous remarks that Gigerenzer does not cover chess, but rather talks about other topics also covered in that section – the coverage overlap relates to Gigerenzer’s work on heuristics).
The last part of the book has a chapter on cognition and emotion and a chapter about consciousness.
In each chapter, the authors start out by outlining some key features/distinctions of interest. They talk about what the specific theory/hypothesis is about, then about the research results, then give their own evaluation of the research, and conclude the coverage by outlining some limitations of the available research. Multiple topics are covered this way – presentation, research, evaluation, limitations – in each chapter, and when multiple competing hypotheses/approaches have been presented the evaluations will highlight strengths and weaknesses of each approach. Along the way you’ll encounter boxes at the bottom of the pages with bolded ‘key terms’ and definitions of those terms, as well as figures and tables with research results and illustrations of brain areas involved; key terms are also bolded in the text, so even if you don’t completely destroy the book by painting all over the pages with highlighters of different colours the way I do, it should be reasonably easy to navigate the content on a second reading. Usually the research on a given topic will be divided into sections if multiple approaches have been used to elucidate the problems of interest; so there’ll be one section dealing with cognitive neuropsychology research, and another section about the cognitive neuroscience results. All chapters end with a brief outline of key terms/models/approaches encountered in the chapter and some of the main results discussed. The book is well structured. Coverage is in my opinion a bit superficial, which is one of the main reasons why I only gave the book three stars, and the authors are not always as skeptical as I’d have liked them to be – I did not always agree with the conclusions they drew from the research they discussed in the chapters, and occasionally I think they miss alternative explanations or misinterpret what the data is telling us.
Some of the theoretical approaches they discuss in the text I frankly considered (next to) worthless and a waste of time. It’s been a while since I finished the book and of course I don’t recall details as well as I’d like, but from what I remember, and what I’ve gathered from briefly skimming it again while writing this post, it’s far from a terrible book, and on a general note it covers some interesting stuff – we’ll see how much of it I’ll manage to talk about here on the blog in the time to come. Regardless of how much more time I’ll be able to devote to the book here, this post should at least have given you some idea about which topics are covered in the book and how they’re covered.
i. “If we keep an open mind, too much is likely to fall into it.” (Natalie Clifford Barney)
ii. “The advantage of love at first sight is that it delays a second sight.” (-ll-)
iii. “They used to call it the ‘Great War’. But I’ll be damned if I could tell you what was so ‘great’ about it. They also called it ‘the war to end all wars’…’cause they figured it was so big and awful that the world’d just have to come to its senses and make damn sure we never fought another one ever again.
That woulda been a helluva nice story.
But the truth’s got an ugly way of killin’ nice stories.” (Max Brooks)
iv. “Bromidic though it may sound, some questions don’t have answers, which is a terribly difficult lesson to learn.” (Katharine Graham)
v. “Cynicism is an unpleasant way of saying the truth.” (Lillian Hellman)
vi. “Lonely people, in talking to each other can make each other lonelier.” (-ll-)
vii. “When they [Hugh Walpole and Arnold Bennett] had gone, Plum [P. G. Wodehouse] and Guy [Guy Bolton] looked at each other with that glassy expression in their eyes which visiting literary men so often induce. They were feeling a little faint.
‘These authors!’ said Guy […Bolton, the author].
‘One really ought to meet them only in their books’, said Plum.” (quote from the book ‘Bring on the Girls’, written by Wodehouse and Bolton… The humour in this book is delightfully ‘meta’ at times. See also my review of the book here).
viii. “Illness must be considered to be as natural as health.” (William Saroyan)
ix. “An age is called Dark not because the light fails to shine, but because people refuse to see it.” (James Michener)
x. “I am terrified of restrictive religious doctrine, having learned from history that when men who adhere to any form of it are in control, common men like me are in peril.” (-ll-)
xi. “You can safely assume you’ve created God in your own image when it turns out that God hates all the same people you do.” (Anne Lamott)
xii. “People don’t ever seem to realise that doing what’s right’s no guarantee against misfortune.” (William McFee)
xiii. “If once a man indulges himself in murder, very soon he comes to think little of robbing; and from robbing he comes next to drinking and Sabbath-breaking, and from that to incivility and procrastination. Once begun upon this downward path, you never know where you are to stop. Many a man has dated his ruin from some murder or other that perhaps he thought little of at the time.” (Thomas De Quincey)
xiv. “In many walks of life, a conscience is a more expensive encumbrance than a wife or a carriage.” (-ll-)
xv. “A promise is binding in the inverse ratio of the numbers to whom it is made.” (-ll-)
xvi. “No safety without risk, and what you risk reveals what you value.” (Jeanette Winterson)
xvii. “When was the last time you looked at anything, solely, and concentratedly, and for its own sake? Ordinary life passes in a near blur. If we go to the theatre or the cinema, the images before us change constantly, and there is the distraction of language. Our loved ones are so well known to us that there is no need to look at them, and one of the gentle jokes of married life is that we do not.” (-ll-)
xviii. “Because we don’t know when we will die, we get to think of life as an inexhaustible well. Yet everything happens only a certain number of times, and a very small number really. How many more times will you remember a certain afternoon of your childhood, some afternoon that is so deeply a part of your being that you can’t even conceive of your life without it? Perhaps four or five times more, perhaps not even that. How many more times will you watch the full moon rise? Perhaps twenty. And yet it all seems limitless.” (Paul Bowles)
xix. “Praise out of season, or tactlessly bestowed, can freeze the heart as much as blame.” (Pearl S. Buck)
xx. “You cannot make yourself feel something you do not feel, but you can make yourself do right in spite of your feelings.” (-ll-).
This will be my last post about the book. Yesterday I finished reading Darwin’s Origin of Species, which was my 100th book this year (here’s the list), but I can’t face blogging that book at the moment so coverage of that one will have to wait a bit.
In my second post about this book I had originally planned to cover chapter 7 – ‘Analysing costs’ – but as I didn’t want to spend too much time on that post I ended up cutting it short. Because of this omission, some themes discussed below are closely related to stuff covered in the second post, whereas most of the remaining material – specifically the material from chapters 8, 9 and 10 – deals with decision analytic modelling, a quite different topic; in other words the coverage will be slightly more fragmented and less structured than I’d have liked, but there’s not really much to do about that (it doesn’t help in this respect that I decided not to cover chapter 8, but covering that as well was out of the question).
I’ll start with coverage of some of the things they talk about in chapter 7, which as mentioned deals with how to analyze costs in a cost-effectiveness analysis context. They observe in the chapter that health cost data are often skewed to the right, for several reasons (costs incurred by an individual cannot be negative; for many patients the costs may be zero; some study participants may require much more care than the rest, creating a long tail). One way to address skewness is to use the median instead of the mean as the variable of interest, but a problem with this approach is that the median is not as useful to policy-makers as the mean: multiplying the mean by the size of the population of interest gives a good estimate of the total costs of an intervention, whereas the median is not very useful for arriving at such an estimate. Transforming the data and analyzing the transformed data is another way to deal with skewness, but the use of transformations in cost-effectiveness analysis has been questioned for a variety of reasons discussed in the chapter (to give a couple of examples, data transformation methods perform badly if inappropriate transformations are used, and many transformations cannot be used if there are data points with zero costs in the data, which is very common). Of the non-parametric methods aimed at dealing with skewness they discuss a variety of tests which are rarely used, as well as the bootstrap, the latter being one approach which has gained widespread use.
They observe in the context of the bootstrap that “it has increasingly been recognized that the conditions the bootstrap requires to produce reliable parameter estimates are not fundamentally different from the conditions required by parametric methods” and note in a later chapter (chapter 11) that: “it is not clear that bootstrap results in the presence of severe skewness are likely to be any more or less valid than parametric results […] bootstrap and parametric methods both rely on sufficient sample sizes and are likely to be valid or invalid in similar circumstances. Instead, interest in the bootstrap has increasingly focused on its usefulness in dealing simultaneously with issues such as censoring, missing data, multiple statistics of interest such as costs and effects, and non-normality.” Going back to the coverage in chapter 7, in the context of skewness they also briefly touch upon the potential use of a GLM framework to address this problem.
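To make the bootstrap idea concrete, here’s a minimal Python sketch of a percentile bootstrap confidence interval for the mean of a right-skewed cost variable. To be clear, the cost figures and the `bootstrap_mean_ci` helper below are invented for illustration – they’re not from the book:

```python
import random
import statistics

random.seed(0)

# Hypothetical right-skewed cost data: several zero-cost patients,
# a few very expensive ones creating a long right tail (all made up).
costs = [0, 0, 0, 120, 150, 200, 250, 300, 450, 600, 800, 1200, 5000, 14000]

def bootstrap_mean_ci(data, n_boot=5000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean."""
    boot_means = []
    for _ in range(n_boot):
        resample = random.choices(data, k=len(data))  # sample with replacement
        boot_means.append(statistics.mean(resample))
    boot_means.sort()
    lo = boot_means[int((alpha / 2) * n_boot)]
    hi = boot_means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

lo, hi = bootstrap_mean_ci(costs)
print(f"mean cost: {statistics.mean(costs):.0f}, 95% CI: ({lo:.0f}, {hi:.0f})")
```

Because the resampling respects the skewness of the original data, the resulting interval is typically asymmetric around the mean, unlike a normal-approximation interval – though, as the quote above notes, this doesn’t exempt the bootstrap from sample-size requirements.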
Data is often missing in cost datasets. Some parts of their coverage of these topics were to me largely a review of stuff already covered in Bartholomew. Data can be missing for different reasons and through different mechanisms; one distinction is among data missing completely at random (MCAR), missing at random (MAR) (“missing data are correlated in an observable way with the mechanism that generates the cost, i.e. after adjusting the data for observable differences between complete and missing cases, the cost for those with missing data is the same, except for random variation, as for those with complete data”), and not missing at random (NMAR); the last type is also called non-ignorably missing data, and if you have that sort of data the implication is that the costs of those in the observed and unobserved groups differ in unpredictable ways, and if you ignore the process that drives these differences you’ll probably end up with a biased estimator. Another way to distinguish between different types of missing data is to look at patterns within the dataset, where you have:
“*univariate missingness – a single variable in a dataset is causing a problem through missing values, while the remaining variables contain complete information
*unit non-response – no data are recorded for any of the variables for some patients
*monotone missing – caused, for example, by drop-out in panel or longitudinal studies, resulting in variables observed up to a certain time point or wave but not beyond that
*multivariate missing – also called item non-response or general missingness, where some but not all of the variables are missing for some of the subjects.”
The authors note that the most common types of missingness in cost information analyses are the latter two. They discuss some techniques for dealing with missing data, such as complete-case analysis, available-case analysis, and imputation, but I won’t go into the details here. In the last parts of the chapter they talk a little bit about censoring, which can be viewed as a specific type of missing data, and ways to deal with it. Censoring happens when follow-up information on some subjects is not available for the full duration of interest, which may be caused e.g. by attrition (people dropping out of the trial), or insufficient follow-up (the final date of follow-up might be set before all patients reach the endpoint of interest, e.g. death). The two most common methods for dealing with censored cost data are the Kaplan-Meier sample average (-KMSA) estimator and the inverse probability weighting (-IPW) estimator, both of which are non-parametric interval methods. “Comparisons of the IPW and KMSA estimators have shown that they both perform well over different levels of censoring […], and both are considered reasonable approaches for dealing with censoring.” One difference between the two is that the KMSA, unlike the IPW, is not appropriate for dealing with censoring due to attrition unless the attrition is MCAR (and it almost never is), because the KM estimator, and by extension the KMSA estimator, assumes that censoring is independent of the event of interest.
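The MAR definition quoted above can be illustrated with a small simulation sketch (all numbers invented, not from the book): costs depend on an observed severity indicator, and so does the probability of a cost record being missing – but given severity, missingness doesn’t depend on the cost itself. A complete-case mean is then biased, while adjusting for the observable difference between complete and missing cases recovers (approximately) the true mean:

```python
import random
import statistics

random.seed(1)

# Hypothetical MAR setup: true cost depends on an observed severity flag,
# and the probability of the cost record being missing also depends only
# on severity (severe patients drop out more often).
n = 20000
patients = []
for _ in range(n):
    severe = random.random() < 0.3                        # 30% severe
    cost = random.gauss(5000 if severe else 1000, 200)
    missing = random.random() < (0.6 if severe else 0.1)
    patients.append((severe, cost, missing))

true_mean = statistics.mean(c for _, c, _ in patients)
complete_case_mean = statistics.mean(c for _, c, m in patients if not m)

# Adjust for the observable difference: estimate the mean within each
# severity stratum from complete cases, then reweight by the full-sample
# stratum shares.
def stratum_mean(flag):
    return statistics.mean(c for s, c, m in patients if s == flag and not m)

p_severe = sum(1 for s, _, _ in patients if s) / n
adjusted_mean = p_severe * stratum_mean(True) + (1 - p_severe) * stratum_mean(False)

print(f"true {true_mean:.0f}, complete-case {complete_case_mean:.0f}, "
      f"adjusted {adjusted_mean:.0f}")
```

Here the complete-case mean is biased downwards (the expensive severe patients are under-represented among complete cases), whereas the severity-adjusted estimate lands close to the true mean – exactly the “after adjusting the data for observable differences” property the MAR definition describes. Under NMAR no such observable adjustment would fix the bias.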
The focus in chapter 8 is on decision tree models, and I decided to skip that chapter as most of it is known stuff which I felt no need to review here (do remember that I to a large extent use this blog as an extended memory, so I’m not only(/mainly?) writing this stuff for other people..). Chapter 9 deals with Markov models, and I’ll talk a little bit about those in the following.
“Markov models analyse uncertain processes over time. They are suited to decisions where the timing of events is important and when events may happen more than once, and therefore they are appropriate where the strategies being evaluated are of a sequential or repetitive nature. Whereas decision trees model uncertain events at chance nodes, Markov models differ in modelling uncertain events as transitions between health states. In particular, Markov models are suited to modelling long-term outcomes, where costs and effects are spread over a long period of time. Therefore Markov models are particularly suited to chronic diseases or situations where events are likely to recur over time […] Over the last decade there has been an increase in the use of Markov models for conducting economic evaluations in a health-care setting […]
A Markov model comprises a finite set of health states in which an individual can be found. The states are such that in any given time interval, the individual will be in only one health state. All individuals in a particular health state have identical characteristics. The number and nature of the states are governed by the decisions problem. […] Markov models are concerned with transitions during a series of cycles consisting of short time intervals. The model is run for several cycles, and patients move between states or remain in the same state between cycles […] Movements between states are defined by transition probabilities which can be time dependent or constant over time. All individuals within a given health state are assumed to be identical, and this leads to a limitation of Markov models in that the transition probabilities only depend on the current health state and not on past health states […the process is memoryless…] – this is known as the Markovian assumption”.
They note that in order to build and analyze a Markov model, you need to do the following: *define states and allowable transitions [for example from ‘non-dead’ to ‘dead’ is okay, but going the other way is, well… For a Markov process to end, you need at least one state that cannot be left after it has been reached, and such states are termed ‘absorbing states’], *specify initial conditions in terms of starting probabilities/initial distribution of patients, *specify transition probabilities, *specify a cycle length, *set a stopping rule, *determine rewards, *implement discounting if required, *analyse and evaluate the model, and *explore uncertainties. They talk about each step in more detail in the book, but I won’t go too much into this.
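The steps above can be sketched as a minimal cohort-level Markov model in Python. The three states, transition probabilities, rewards and discount rate below are all made-up illustration values, not from the book:

```python
# States: 0 = well, 1 = sick, 2 = dead (absorbing). Each row of P gives the
# transition probabilities out of one state; rows sum to 1.
P = [
    [0.85, 0.10, 0.05],  # from well
    [0.00, 0.80, 0.20],  # from sick (no recovery in this toy model)
    [0.00, 0.00, 1.00],  # dead is absorbing: it cannot be left
]
cost_per_cycle = [100.0, 2000.0, 0.0]  # rewards: cost per cycle in each state
qaly_per_cycle = [1.0, 0.6, 0.0]       # rewards: quality-adjusted life-years
discount = 0.035                       # annual discount rate, 1-year cycles

cohort = [1.0, 0.0, 0.0]               # initial distribution: everyone well
total_cost = total_qaly = 0.0

for cycle in range(50):                # stopping rule: fixed 50-cycle horizon
    df = 1.0 / (1.0 + discount) ** cycle  # discount factor for this cycle
    total_cost += df * sum(p * c for p, c in zip(cohort, cost_per_cycle))
    total_qaly += df * sum(p * q for p, q in zip(cohort, qaly_per_cycle))
    # Transition: new state membership = current distribution times P.
    cohort = [sum(cohort[i] * P[i][j] for i in range(3)) for j in range(3)]

print(f"expected discounted cost {total_cost:.0f}, QALYs {total_qaly:.2f}")
```

Note how the Markovian assumption shows up in the code: the update uses only the current `cohort` vector and the fixed matrix `P`, so how long anyone has been sick is invisible to the model – which is precisely the limitation mentioned in the quote above.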
Markov models may be governed by transitions that are either constant over time or time-dependent. In a Markov chain transition probabilities are constant over time, whereas in a Markov process transition probabilities vary over time (/from cycle to cycle). In a simple Markov model the baseline assumption is that transitions only occur once in each cycle and usually the transition is modelled as taking place either at the beginning or the end of cycles, but in reality transitions can take place at any point in time during the cycle. One way to deal with the problem of misidentification (people assumed to be in one health state throughout the cycle even though they’ve transferred to another health state during the cycle) is to use half-cycle corrections, in which an assumption is made that on average state transitions occur halfway through the cycle, instead of at the beginning or the end of a cycle. They note that: “the important principle with the half-cycle correction is not when the transitions occur, but when state membership (i.e. the proportion of the cohort in that state) is counted. The longer the cycle length, the more important it may be to use half-cycle corrections.” When state transitions are assumed to take place may influence factors such as cost discounting (if the cycle is long, it can be important to get the state transition timing reasonably right).
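A tiny sketch of the half-cycle correction, using an invented membership trace for a single state: counting membership as the average of the start-of-cycle and end-of-cycle values is equivalent to assuming transitions occur, on average, halfway through each cycle.

```python
# Hypothetical proportion of the cohort in one state, recorded at each
# cycle boundary (made-up numbers: membership decays by 20% per cycle).
membership = [1.0, 0.8, 0.64, 0.512, 0.4096, 0.32768]

# Uncorrected: count membership at the start of each cycle.
uncorrected = sum(membership[:-1])

# Half-cycle corrected: count the average of start and end membership,
# i.e. assume transitions happen halfway through the cycle on average.
half_cycle = sum((a + b) / 2 for a, b in zip(membership[:-1], membership[1:]))

print(f"uncorrected {uncorrected:.4f} vs half-cycle corrected {half_cycle:.4f}")
```

The start-of-cycle count overstates total time spent in a shrinking state (and an end-of-cycle count would understate it); the corrected figure lies in between, and the gap grows with the cycle length, in line with the quote above.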
When time dependency is introduced into the model, there are in general two types of time dependencies that impact on transition probabilities in the models. One is time dependency depending on the number of cycles since the start of the model (this is e.g. dealing with how transition probabilities depend on factors like age), whereas the other, which is more difficult to implement, deals with state dependence (curiously they don’t use these two words, but I’ve worked with state dependence models before in labour economics and this is what we’re dealing with here); i.e. here the transition probability will depend upon how long you’ve been in a given state.
Below I mostly discuss stuff covered in chapter 10, however I also include a few observations from the final chapter, chapter 11 (on ‘Presenting cost-effectiveness results’). Chapter 10 deals with how to represent uncertainty in decision analytic models. This is an important topic because as noted later in the book, “The primary objective of economic evaluation should not be hypothesis testing, but rather the estimation of the central parameter of interest—the incremental cost-effectiveness ratio—along with appropriate representation of the uncertainty surrounding that estimate.” In chapter 10 a distinction is made between variability, heterogeneity, and uncertainty. Variability has also been termed first-order uncertainty or stochastic uncertainty, and pertains to variation observed when recording information on resource use or outcomes within a homogenous sample of individuals. Heterogeneity relates to differences between patients which can be explained, at least in part. They distinguish between two types of uncertainty, structural uncertainty – dealing with decisions and assumptions made about the structure of the model – and parameter uncertainty, which of course relates to the precision of the parameters estimated. After briefly talking about ways to deal with these, they talk about sensitivity analysis.
“Sensitivity analysis involves varying parameter estimates across a range and seeing how this impacts on the model’s results. […] The simplest form is a one-way analysis where each parameter estimate is varied independently and singly to observe the impact on the model results. […] One-way sensitivity analysis can give some insight into the factors influencing the results, and may provide a validity check to assess what happens when particular variables take extreme values. However, it is likely to grossly underestimate overall uncertainty, and ignores correlation between parameters.”
Multi-way sensitivity analysis is a more refined approach, in which more than one parameter estimate is varied – this is sometimes termed scenario analysis. A different approach is threshold analysis, where one attempts to identify the critical value of one or more variables at which the conclusion/decision changes. All of these approaches are deterministic approaches, and they are not without problems. “They fail to take account of the joint parameter uncertainty and correlation between parameters, and rather than providing the decision-maker with a useful indication of the likelihood of a result, they simply provide a range of results associated with varying one or more input estimates.” So of course an alternative has been developed, namely probabilistic sensitivity analysis (-PSA), which started to be used in health economic decision analyses already in the mid-1980s.
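A one-way sensitivity analysis can be sketched in a few lines. The `net_benefit` ‘model’ and all parameter values and ranges below are invented for illustration; the point is only the mechanics of varying one input at a time while holding the others at their base-case values:

```python
# Toy model: incremental net monetary benefit of a hypothetical treatment.
def net_benefit(p_response=0.4, extra_cost=2500.0, qaly_gain=0.8,
                threshold=20000.0):
    return threshold * p_response * qaly_gain - extra_cost

base = net_benefit()  # all parameters at base-case values

# One-way analysis: vary each parameter singly over a (made-up) range.
for name, low, high in [("p_response", 0.3, 0.5),
                        ("extra_cost", 1500.0, 3500.0),
                        ("qaly_gain", 0.6, 1.0)]:
    nb_low = net_benefit(**{name: low})
    nb_high = net_benefit(**{name: high})
    print(f"{name}: net benefit {min(nb_low, nb_high):.0f} "
          f"to {max(nb_low, nb_high):.0f} (base {base:.0f})")
```

As the quote notes, this gives a range per parameter but no probability statement, and because only one input moves at a time it ignores joint uncertainty and any correlation between parameters – which is what motivates the probabilistic approach.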
“PSA permits the joint uncertainty across all the parameters in the model to be addressed at the same time. It involves sampling model parameter values from distributions imposed on variables in the model. […] The types of distribution imposed are dependent on the nature of the input parameters [but] decision analytic models for the purpose of economic evaluation tend to use homogenous types of input parameters, namely costs, life-years, QALYs, probabilities, and relative treatment effects, and consequently the number of distributions that are frequently used, such as the beta, gamma, and log-normal distributions, is relatively small. […] Uncertainty is then propagated through the model by randomly selecting values from these distributions for each model parameter using Monte Carlo simulation“.
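A minimal PSA sketch along these lines, using the standard library’s beta and gamma samplers: each Monte Carlo draw samples the uncertain inputs from their distributions and pushes them through a (here deliberately trivial) model. The model structure, the distribution parameters, and the £20,000/QALY threshold are all assumptions for illustration, not values from the book:

```python
import random
import statistics

random.seed(2)

threshold = 20000.0          # hypothetical willingness to pay per QALY
n_sims = 10000
net_benefits = []

for _ in range(n_sims):
    # Probability parameter: beta distribution (bounded on [0, 1]).
    p_response = random.betavariate(40, 60)
    # Cost parameter: gamma distribution (non-negative, right-skewed);
    # gammavariate(shape, scale) has mean shape * scale = 2500 here.
    extra_cost = random.gammavariate(25.0, 100.0)
    # Effect parameter: gamma with mean 16 * 0.05 = 0.8 QALYs.
    qaly_gain_if_response = random.gammavariate(16.0, 0.05)

    inc_qalys = p_response * qaly_gain_if_response
    net_benefits.append(threshold * inc_qalys - extra_cost)

prob_cost_effective = sum(nb > 0 for nb in net_benefits) / n_sims
print(f"mean incremental net benefit {statistics.mean(net_benefits):.0f}, "
      f"P(cost-effective at threshold) = {prob_cost_effective:.2f}")
```

Unlike the deterministic approaches, the output is a distribution of results reflecting the joint uncertainty across all inputs, from which one can read off a probability that the intervention is cost-effective at a given threshold.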