David Friedman recently asked a related question on SSC (he asked why there are waiting lists for surgical procedures), and as I’d read some stuff about these topics in the past I decided to answer his question. The answer turned out to be somewhat long and detailed, so I figured I might as well post some of it here as well. In a way my answer to David’s question provides belated coverage of a book I read last year, Appointment Planning in Outpatient Clinics and Diagnostic Facilities, which I have covered only in very limited detail on the blog before (the third paragraph of this post is the only coverage of the book I’ve provided here).
Below I’ve tried to cover these topics in a manner which would make it unnecessary to also read David’s question and related comments.
The brief Springer publication Appointment Planning in Outpatient Clinics and Diagnostic Facilities has some basic stuff about operations research and queueing theory which is useful for making sense of resource allocation decisions made in the medical sector. I think this is the kind of stuff you’ll want to have a look at if you want to understand these things better.
There are many variables which are important here and which may help explain why waiting lists are common in the health care sector (it’s not just surgery). The quotes below are from the book:
“In a walk-in system, patients are seen without an appointment. […] The main advantage of walk-in systems is that access time is reduced to zero. […] A huge disadvantage of patients walking in, however, is that the usually strong fluctuating arrival stream can result in an overcrowded clinic, leading to long waiting times, high peaks in care provider’s working pressure, and patients leaving without treatment (blocking). On other moments of time the waiting room will be practically empty […] In regular appointment systems workload can be dispersed, although appointment planning is usually time consuming. A walk-in system is most suitable for clinics with short service times and multiple care providers, such as blood withdrawal facilities and pre-anesthesia check-ups for non-complex patients. If the service times are longer or the number of care providers is limited, the probability that patients experience a long waiting time becomes too high, and a regular appointment system would be justified”
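The point about short service times and multiple care providers can be made quantitative with the Erlang C formula, which gives the probability that an arriving patient has to wait at all in an M/M/c queue. A minimal sketch (the 90 % utilisation figures and the scenario below are my own illustrative numbers, not the book’s):

```python
from math import factorial

def erlang_c(servers, offered_load):
    """Probability that an arrival must wait in an M/M/c queue (Erlang C).

    offered_load is lambda/mu, i.e. arrival rate times mean service time."""
    a, c = offered_load, servers
    if a >= c:
        return 1.0  # unstable regime: effectively everyone waits
    top = (a ** c / factorial(c)) * c / (c - a)
    bottom = sum(a ** k / factorial(k) for k in range(c)) + top
    return top / bottom

# Same 90 % utilisation, different scale: pooling many providers makes
# walk-in feasible, a single slow provider does not.
print(erlang_c(1, 0.9))    # single provider: 90 % of walk-ins must wait
print(erlang_c(10, 9.0))   # ten pooled providers: far fewer must wait
```

This is one way to see why blood-withdrawal facilities (many servers, short services) tolerate walk-in while a single specialist with long consultations does not.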
“Sometimes it is impossible to provide walk-in service for all patients, for example when specific patients need to be prepared for their consultation, or if specific care providers are required, such as anesthesiologists [I noted in my reply to David that these remarks seem highly relevant for the surgery context]. Also, walk-in patients who experience a full waiting room upon arrival may choose to come back at a later point in time. To make sure that they do have access at that point, clinics usually give these patients an appointment. This combination of walk-in and appointment patients requires a specific appointment system that satisfies the following requirements:
1. The access time for appointment patients is below a certain threshold
2. The waiting time for walk-in patients is below a certain threshold
3. The number of walk-in patients who are sent away due to crowding is minimized
To satisfy these requirements, an appointment system should be developed to determine the optimal scheduling of appointments, not only on a day level but also on a week level. Developing such an appointment system is challenging from a mathematical perspective. […] Due to the high variability that is usually observed in healthcare settings, introducing stochasticity in the modeling process is very important to obtain valuable and reasonable results.”
“Most elective patients will ultimately evolve into semi-urgent or even urgent patients if treatment is extensively prolonged.” That’s ‘on the one hand’ – but of course there’s also the related ‘on the other hand’-observation that: “Quite often a long waiting list results in a decrease in demand”. Patients might get better on their own and/or decide it’s not worth the trouble to see a service provider – or they might deteriorate.
“Some planners tend to maintain separate waiting lists for each patient group. However, if capacity is shared among these groups, the waiting list should be considered as a whole as well. Allocating capacity per patient group usually results in inflexibility and poor performance”.
“mean waiting time increases with the load. When the load is low, a small increase therein has a minimal effect on the mean waiting time. However, when the load is high, a small increase has a tremendous effect on the mean waiting time. For instance, […] increasing the load from 50 to 55 % increases the waiting time by 10 %, but increasing the load from 90 to 95 % increases the waiting time by 100 % […] This explains why a minor change (for example, a small increase in the number of patients, a patient arriving in a bed or a wheelchair) can result in a major increase in waiting times as sometimes seen in outpatient clinics.”
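The numbers in that passage follow directly from the M/M/1 formula for mean time in system, W = 1/(μ(1 − ρ)); a quick check, with the service rate normalised to 1 (my own normalisation, and note the book rounds the first increase to 10 %):

```python
def mean_sojourn_time(load, service_rate=1.0):
    """Mean time in system for an M/M/1 queue: W = 1 / (mu * (1 - rho))."""
    if not 0.0 <= load < 1.0:
        raise ValueError("load must be in [0, 1) for a stable queue")
    return 1.0 / (service_rate * (1.0 - load))

for lo, hi in [(0.50, 0.55), (0.90, 0.95)]:
    increase = mean_sojourn_time(hi) / mean_sojourn_time(lo) - 1.0
    print(f"load {lo:.0%} -> {hi:.0%}: mean time in system up {increase:.0%}")
# load 50% -> 55%: up 11%; load 90% -> 95%: up 100%
```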
“One of the most important goals of this chapter is to show that it is impossible to use all capacity and at the same time maintain a short, manageable waiting list. A common mistake is to reason as follows:
Suppose total capacity is 100 appointments. Unused capacity is commonly used for urgent and inpatients, that can be called in last minute. 83 % of capacity is used, so there is on average 17 % of capacity available for urgent and inpatients. The urgent/inpatient demand is on average 20 appointments per day. Since 17 appointments are on average not used for elective patients, a surplus capacity of only three appointments is required to satisfy all patient demand.
Even though this is true on average, more urgent and inpatient capacity is required. This is due to the variation in the process; on certain days 100 % of capacity is required to satisfy elective patient demand, thus leaving no room for any other patients. Furthermore, since 17 slots are dedicated to urgent and inpatients, only 83 slots are available for elective patients, which means that ρ is again equal to 1, resulting in an uncontrollable waiting list.” [ρ represents the average proportion of time which the server/service provider is occupied – a key stability requirement is that ρ is smaller than one; if it is not, the length of the queue becomes unstable/explodes. See also this related link].
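The instability at ρ = 1 is easy to reproduce in a toy simulation of the book’s numerical example: 83 elective slots against a mean elective demand of 83 per day. The normal approximation to Poisson demand, the horizon, and the slack figure of 92 slots are my own simplifications, not the book’s:

```python
import random

def simulate_backlog(daily_capacity, demand_mean=83.0, days=5000, seed=1):
    """Average elective waiting-list length over the simulation horizon,
    with daily demand fluctuating around demand_mean (normal approximation
    to Poisson demand, so the standard deviation is sqrt(demand_mean))."""
    rng = random.Random(seed)
    backlog, total = 0.0, 0.0
    for _ in range(days):
        demand = max(0.0, rng.gauss(demand_mean, demand_mean ** 0.5))
        backlog = max(0.0, backlog + demand - daily_capacity)
        total += backlog
    return total / days

# rho = 1: capacity exactly matches mean demand, and the list keeps growing.
print(simulate_backlog(daily_capacity=83))
# rho ~ 0.9: a little slack capacity keeps the list short and stable.
print(simulate_backlog(daily_capacity=92))
```

With capacity equal to mean demand the waiting list behaves like a random walk with no drift, so it wanders off to arbitrarily large values; even modest slack turns it into a stable queue.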
“The challenge is to make a trade-off between maintaining a waiting list which is of acceptable size and the amount of unused capacity. Since the focus in many healthcare facilities is on avoiding unused capacity, waiting lists tend to grow until “something has to be done.” Then, temporarily surplus capacity is deployed, which is usually more expensive than regular capacity […]. Even though waiting lists have a buffer function (i.e., by creating a reservoir of patients that can be planned when demand is low) it is unavoidable that, even in well-organized facilities, over a longer period of time not all capacity is used.”
I think one way to think about whether it makes sense to have a waiting list, or whether you can ‘just use the price variable’, is this: if you as a provider can optimize over both the waiting-time variable and the price variable (i.e., people demanding the service find some positive waiting time acceptable when it is combined with a non-zero price reduction), the result is always going to be at least as good as what you could get by optimizing over price alone. Not including waiting time in the implicit pricing mechanism can thus be thought of as, in a sense, a weakly dominated strategy.
A lot of the planning literature deals with how to handle variable demand, and input heterogeneity is one of many parameters which may be important to take into account in that context; surgeons aren’t perfect substitutes, and perhaps neither are nurses, or different hospitals (relevant if you’re higher up in the decision-making hierarchy). An important related question is whether a surgeon (or a doctor, or a nurse…) might be doing other stuff instead of surgery during down-periods, and what the value of that other stuff might be. In the surgical context demand is not only variable over time; many different inputs also need to be coordinated – you need a surgeon and a scrub nurse and an anesthesiologist. The sequential and interdependent nature of many medical procedures and inputs likely adds further complexity: whether a condition requires treatment, and which treatment may be required, may depend upon the results of a test which has to be analyzed before treatment is started, so you can’t, for example, switch the order of test and treatment, or treat patient X based on patient Y’s test results; there’s some built-in inflexibility here at the outset. This also means there are more nodes in the network, and more places where things can go wrong, resulting in longer waiting times than planned.
I think the potential gains in terms of capacity utilization, risk reduction and increased flexibility to be derived from implementing waiting schemes of some kind in the surgery context militate strongly against a model without waiting lists, and I think the surgical field is far from unique in that respect in the context of medical care provision.
“Statistical considerations arise in virtually all areas of science and technology and, beyond these, in issues of public and private policy and in everyday life. While the detailed methods used vary greatly in the level of elaboration involved and often in the way they are described, there is a unity of ideas which gives statistics as a subject both its intellectual challenge and its importance […] In this book we have aimed to discuss the ideas involved in applying statistical methods to advance knowledge and understanding. It is a book not on statistical methods as such but, rather, on how these methods are to be deployed […] We are writing partly for those working as applied statisticians, partly for subject-matter specialists using statistical ideas extensively in their work and partly for masters and doctoral students of statistics concerned with the relationship between the detailed methods and theory they are studying and the effective application of these ideas. Our aim is to emphasize how statistical ideas may be deployed fruitfully rather than to describe the details of statistical techniques.”
I gave the book five stars, but as noted in my review on goodreads I’m not sure the word ‘amazing’ is really fitting – however the book had a lot of good stuff and it had very little stuff for me to quibble about, so I figured it deserved a high rating. The book deals to a very large extent with topics which are in some sense common to pretty much all statistical analyses, regardless of the research context; formulation of research questions/hypotheses, data search, study designs, data analysis, and interpretation. The authors spend quite a few pages talking about hypothesis testing but on the other hand no pages talking about statistical information criteria, a topic with which I’m at this point at least reasonably familiar, and I figure if I had been slightly more critical I’d have subtracted a star for this omission – however I have the impression that I’m at times perhaps too hard on non-fiction books on goodreads so I decided not to punish the book for this omission. Part of the reason why I gave the book five stars is also that I’ve sort of wanted to read a book like this one for a while; I think in some sense it’s the first one of its kind I’ve read. I liked the way the book was structured.
Below I have added some observations from the book, as well as a few comments (I should note that I have had to leave out a lot of good stuff).
“When the data are very extensive, precision estimates calculated from simple standard statistical methods are likely to underestimate error substantially owing to the neglect of hidden correlations. A large amount of data is in no way synonymous with a large amount of information. In some settings at least, if a modest amount of poor quality data is likely to be modestly misleading, an extremely large amount of poor quality data may be extremely misleading.”
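The ‘hidden correlations’ point is easy to illustrate by simulating clustered data: treating 10,000 correlated observations as independent makes the standard error of the mean look several times smaller than it really is. The cluster structure and all parameter values below are my own illustration, not the authors’:

```python
import random
import statistics

def naive_vs_cluster_se(n_clusters=200, per_cluster=50, seed=7):
    """Standard error of the grand mean, computed two ways, for data with a
    shared within-cluster component (the 'hidden correlation')."""
    rng = random.Random(seed)
    all_obs, cluster_means = [], []
    for _ in range(n_clusters):
        u = rng.gauss(0.0, 1.0)                                # shared cluster effect
        obs = [u + rng.gauss(0.0, 1.0) for _ in range(per_cluster)]
        all_obs.extend(obs)
        cluster_means.append(statistics.mean(obs))
    naive = statistics.stdev(all_obs) / len(all_obs) ** 0.5    # pretends independence
    clustered = statistics.stdev(cluster_means) / n_clusters ** 0.5
    return naive, clustered

naive, clustered = naive_vs_cluster_se()
print(naive, clustered)   # the naive SE is far too small
```

The effective sample size here is closer to the number of clusters than to the number of observations, which is the sense in which a large amount of data need not be a large amount of information.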
“For studies of a new phenomenon it will usually be best to examine situations in which the phenomenon is likely to appear in the most striking form, even if this is in some sense artificial or not representative. This is in line with the well-known precept in mathematical research: study the issue in the simplest possible context that is not entirely trivial, and later generalize.”
“It often […] aids the interpretation of an observational study to consider the question: what would have been done in a comparable experiment?”
“An important and perhaps sometimes underemphasized issue in empirical prediction is that of stability. Especially when repeated application of the same method is envisaged, it is unlikely that the situations to be encountered will exactly mirror those involved in setting up the method. It may well be wise to use a procedure that works well over a range of conditions even if it is sub-optimal in the data used to set up the method.”
“Many investigations have the broad form of collecting similar data repeatedly, for example on different individuals. In this connection the notion of a unit of analysis is often helpful in clarifying an approach to the detailed analysis. Although this notion is more generally applicable, it is clearest in the context of randomized experiments. Here the unit of analysis is that smallest subdivision of the experimental material such that two distinct units might be randomized (randomly allocated) to different treatments. […] In general the unit of analysis may not be the same as the unit of interpretation, that is to say, the unit about which conclusions are to drawn. The most difficult situation is when the unit of analysis is an aggregate of several units of interpretation, leading to the possibility of ecological bias, that is, a systematic difference between, say, the impact of explanatory variables at different levels of aggregation. […] it is important to identify the unit of analysis, which may be different in different parts of the analysis […] on the whole, limited detail is needed in examining the variation within the unit of analysis in question.”
The book briefly discusses issues pertaining to the scale of effort involved when thinking about appropriate study designs and how much/which data to gather for analysis, and notes that often associated costs are not quantified – rather a judgment call is made. An important related point is that e.g. in survey contexts response patterns will tend to depend upon the quantity of information requested; if you ask for too much, few people might reply (…and perhaps it’s also the case that it’s ‘the wrong people’ that reply? The authors don’t touch upon the potential selection bias issue, but it seems relevant). A few key observations from the book on this topic:
“the intrinsic quality of data, for example the response rates of surveys, may be degraded if too much is collected. […] sampling may give higher [data] quality than the study of a complete population of individuals. […] When researchers studied the effect of the expected length (10, 20 or 30 minutes) of a web-based questionnaire, they found that fewer potential respondents started and completed questionnaires expected to take longer (Galesic and Bosnjak, 2009). Furthermore, questions that appeared later in the questionnaire were given shorter and more uniform answers than questions that appeared near the start of the questionnaire.”
Not surprising, but certainly worth keeping in mind. Moving on…
“In general, while principal component analysis may be helpful in suggesting a base for interpretation and the formation of derived variables there is usually considerable arbitrariness involved in its use. This stems from the need to standardize the variables to comparable scales, typically by the use of correlation coefficients. This means that a variable that happens to have atypically small variability in the data will have a misleadingly depressed weight in the principal components.”
The book includes a few pages about the Berkson error model, which I’d never heard about. Wikipedia doesn’t have much on it, and I was debating how much to include here; if the wikipedia article had covered the topic in any detail I probably wouldn’t have done more than add the link, but it doesn’t, and the model seemed important enough to write a few words about. The basic difference between the ‘classical error model’, i.e. the one everybody knows about, and the Berkson error model is that in the former the measurement error is statistically independent of the true value of X, whereas in the latter the measurement error is independent of the measured value; the authors note that this implies that in a Berkson error context the true values are more variable than the measured values. Berkson errors can occur e.g. in experimental contexts where the levels of a variable are pre-set at some target, for example in a medical context where a drug is supposed to be administered every X hours; the pre-set levels are then the measured values, and the true values may differ, e.g. if the nurse was late. I thought this error model worth mentioning not only because it was a completely new idea to me that you might encounter this sort of error-generating process, but also because there is no statistical test that can tell you whether the standard error model or a Berkson error model is the more appropriate one; you need to be aware of the difference and think about which model fits best, based on the nature of the measuring process.
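The variance implication is straightforward to check by simulation. A minimal sketch of both error models (the dose-target numbers are purely illustrative):

```python
import random
import statistics

rng = random.Random(42)
# Hypothetical pre-set dose targets (illustrative numbers only).
target = [rng.gauss(10.0, 2.0) for _ in range(20000)]

# Classical error: measurement = truth + noise, with the noise independent of
# the truth, so the measured values are MORE variable than the true values.
truth_classical = target
measured_classical = [t + rng.gauss(0.0, 1.0) for t in truth_classical]

# Berkson error: truth = pre-set (measured) value + noise, with the noise
# independent of the measured value (e.g. the nurse administers the drug
# late), so the TRUE values are more variable than the measured values.
measured_berkson = target
truth_berkson = [m + rng.gauss(0.0, 1.0) for m in measured_berkson]

print(statistics.variance(measured_classical) > statistics.variance(truth_classical))  # True
print(statistics.variance(truth_berkson) > statistics.variance(measured_berkson))      # True
```

Note that the two data sets can look identical; which model applies is determined by how the measurements were generated, not by any feature of the numbers themselves, which is exactly why no statistical test can distinguish them.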
Let’s move on to some quotes dealing with modeling:
“while it is appealing to use methods that are in a reasonable sense fully efficient, that is, extract all relevant information in the data, nevertheless any such notion is within the framework of an assumed model. Ideally, methods should have this efficiency property while preserving good behaviour (especially stability of interpretation) when the model is perturbed. Essentially a model translates a subject-matter question into a mathematical or statistical one and, if that translation is seriously defective, the analysis will address a wrong or inappropriate question […] The greatest difficulty with quasi-realistic models [as opposed to ‘toy models’] is likely to be that they require numerical specification of features for some of which there is very little or no empirical information. Sensitivity analysis is then particularly important.”
“Parametric models typically represent some notion of smoothness; their danger is that particular representations of that smoothness may have strong and unfortunate implications. This difficulty is covered for the most part by informal checking that the primary conclusions do not depend critically on the precise form of parametric representation. To some extent such considerations can be formalized but in the last analysis some element of judgement cannot be avoided. One general consideration that is sometimes helpful is the following. If an issue can be addressed nonparametrically then it will often be better to tackle it parametrically; however, if it cannot be resolved nonparametrically then it is usually dangerous to resolve it parametrically.”
“Once a model is formulated two types of question arise. How can the unknown parameters in the model best be estimated? Is there evidence that the model needs modification or indeed should be abandoned in favour of some different representation? The second question is to be interpreted not as asking whether the model is true [this is the wrong question to ask, as also emphasized by Burnham & Anderson] but whether there is clear evidence of a specific kind of departure implying a need to change the model so as to avoid distortion of the final conclusions. […] it is important in applications to understand the circumstances under which different methods give similar or different conclusions. In particular, if a more elaborate method gives an apparent improvement in precision, what are the assumptions on which that improvement is based? Are they reasonable? […] the hierarchical principle implies, […] with very rare exceptions, that models with interaction terms should include also the corresponding main effects. […] When considering two families of models, it is important to consider the possibilities that both families are adequate, that one is adequate and not the other and that neither family fits the data.” [Do incidentally recall that in the context of interactions, “the term interaction […] is in some ways a misnomer. There is no necessary implication of interaction in the physical sense or synergy in a biological context. Rather, interaction means a departure from additivity […] This is expressed most explicitly by the requirement that, apart from random fluctuations, the difference in outcome between any two levels of one factor is the same at all levels of the other factor. […] The most directly interpretable form of interaction, certainly not removable by [variable] transformation, is effect reversal.”]
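The ‘departure from additivity’ definition of interaction has a simple numerical counterpart: in a 2×2 layout the interaction is the difference-in-differences of the cell means, and effect reversal makes it large. The cell values below are made up for illustration:

```python
# Cell means for a 2x2 factorial layout: keys are (level of A, level of B).
additive = {('a0', 'b0'): 1.0, ('a0', 'b1'): 3.0,
            ('a1', 'b0'): 2.0, ('a1', 'b1'): 4.0}   # B adds +2 at both A-levels
reversal = {('a0', 'b0'): 1.0, ('a0', 'b1'): 3.0,
            ('a1', 'b0'): 4.0, ('a1', 'b1'): 2.0}   # B-effect: +2 at a0, -2 at a1

def interaction(cells):
    """Difference-in-differences of the cell means; zero means additivity."""
    return ((cells[('a1', 'b1')] - cells[('a1', 'b0')])
            - (cells[('a0', 'b1')] - cells[('a0', 'b0')]))

print(interaction(additive))   # 0.0  -> no interaction: effects are additive
print(interaction(reversal))   # -4.0 -> effect reversal, the clearest interaction
```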
“The p-value assesses the data […] via a comparison with that anticipated if H0 were true. If in two different situations the test of a relevant null hypothesis gives approximately the same p-value, it does not follow that the overall strengths of the evidence in favour of the relevant H0 are the same in the two cases.”
“There are […] two sources of uncertainty in observational studies that are not present in randomized experiments. The first is that the ordering of the variables may be inappropriate, a particular hazard in cross-sectional studies. […] if the data are tied to one time point then any presumption of causality relies on a working hypothesis as to whether the components are explanatory or responses. Any check on this can only be from sources external to the current data. […] The second source of uncertainty is that important explanatory variables affecting both the potential cause and the outcome may not be available. […] Retrospective explanations may be convincing if based on firmly established theory but otherwise need to be treated with special caution. It is well known in many fields that ingenious explanations can be constructed retrospectively for almost any finding.”
“The general issue of applying conclusions from aggregate data to specific individuals is essentially that of showing that the individual does not belong to a subaggregate for which a substantially different conclusion applies. In actuality this can at most be indirectly checked for specific subaggregates. […] It is not unknown in the literature to see conclusions such as that there are no treatment differences except for males aged over 80 years, living more than 50 km south of Birmingham and life-long supporters of Aston Villa football club, who show a dramatic improvement under some treatment T. Despite the undoubted importance of this particular subgroup, virtually always such conclusions would seem to be unjustified.” [I loved this example!]
The authors included a few interesting results from an undated Cochrane publication which I thought I should mention. The file-drawer effect is well known, but there are a few other interesting biases at play in a publication bias context. One is time-lag bias, which means that statistically significant results take less time to get published. Another is language bias; statistically significant results are more likely to be published in English publications. A third bias is multiple publication bias; it turns out that papers with statistically significant results are more likely to be published more than once. The last one mentioned is citation bias; papers with statistically significant results are more likely to be cited in the literature.
The authors include these observations in their concluding remarks: “The overriding general principle [in the context of applied statistics], difficult to achieve, is that there should be a seamless flow between statistical and subject-matter considerations. […] in principle seamlessness requires an individual statistician to have views on subject-matter interpretation and subject-matter specialists to be interested in issues of statistical analysis.”
As already mentioned this is a good book. It’s not long, and/but it’s worth reading if you’re in the target group.
ii. “The man who knows everyone’s job isn’t much good at his own.” (-ll-)
iii. “It is amazing what little harm doctors do when one considers all the opportunities they have” (Mark Twain, as quoted in the Oxford Handbook of Clinical Medicine, p.595).
iv. “A first-rate theory predicts; a second-rate theory forbids and a third-rate theory explains after the event.” (Aleksander Isaakovich Kitaigorodski)
v. “[S]ome of the most terrible things in the world are done by people who think, genuinely think, that they’re doing it for the best” (Terry Pratchett, Snuff).
vi. “That was excellently observ’d, say I, when I read a Passage in an Author, where his Opinion agrees with mine. When we differ, there I pronounce him to be mistaken.” (Jonathan Swift)
vii. “Death is nature’s master stroke, albeit a cruel one, because it allows genotypes space to try on new phenotypes.” (Quote from the Oxford Handbook of Clinical Medicine, p.6)
viii. “The purpose of models is not to fit the data but to sharpen the questions.” (Samuel Karlin)
ix. “We may […] view set theory, and mathematics generally, in much the way in which we view theoretical portions of the natural sciences themselves; as comprising truths or hypotheses which are to be vindicated less by the pure light of reason than by the indirect systematic contribution which they make to the organizing of empirical data in the natural sciences.” (Quine)
x. “At root what is needed for scientific inquiry is just receptivity to data, skill in reasoning, and yearning for truth. Admittedly, ingenuity can help too.” (-ll-)
xi. “A statistician carefully assembles facts and figures for others who carefully misinterpret them.” (Quote from Mathematically Speaking – A Dictionary of Quotations, p.329. Only source given in the book is: “Quoted in Evan Esar, 20,000 Quips and Quotes“)
xii. “A knowledge of statistics is like a knowledge of foreign languages or of algebra; it may prove of use at any time under any circumstances.” (Quote from Mathematically Speaking – A Dictionary of Quotations, p. 328. The source provided is: “Elements of Statistics, Part I, Chapter I (p.4)”).
xiii. “We own to small faults to persuade others that we have not great ones.” (Rochefoucauld)
xiv. “There is more self-love than love in jealousy.” (-ll-)
xv. “We should not judge of a man’s merit by his great abilities, but by the use he makes of them.” (-ll-)
xvi. “We should gain more by letting the world see what we are than by trying to seem what we are not.” (-ll-)
xvii. “Put succinctly, a prospective study looks for the effects of causes whereas a retrospective study examines the causes of effects.” (Quote from p.49 of Principles of Applied Statistics, by Cox & Donnelly)
xviii. “… he who seeks for methods without having a definite problem in mind seeks for the most part in vain.” (David Hilbert)
xix. “Give every man thy ear, but few thy voice” (Shakespeare).
xx. “Often the fear of one evil leads us into a worse.” (Nicolas Boileau-Despréaux)
I didn’t finish this book and I didn’t have a lot of nice things to say about it in my review on goodreads, but as I did read roughly half of it and it seemed easy to blog, I figured I might as well cover it here.
I have added some observations from the book and a few comments below:
“While we know that every marriage brings not only promise but substantial risk, to date we know more about the harmful processes in relationships than we do about what makes them work”
“expressions of positivity, especially gratitude, promote relationship maintenance in intimate bonds”
“Baxter and Montgomery (1996) maintain that the closeness of a relationship may be determined by the extent to which the ‘self becomes’ or changes through participation in that relationship, suggesting that boundaries between ‘self’ and ‘other’ are more permeable and fluid in a close, intimate relationship. […] Not surprisingly, this collective sense of an ‘us’ appears to grow stronger with time and age with older couples demonstrating greater levels of we-ness than couples at middle-age”
“It has been demonstrated […] that affirmation by one’s partner that is in keeping with one’s own self-ideal, is associated with better relationship adjustment and stability […]. Moreover, if a spouse’s positive view of his or her mate is more favorable than the mate’s own view, and if the spouse tries to stabilize such positive impressions then, over time, the person’s negative self-view could begin to change for the better […] Perceiving one’s partner as responsive to one’s needs, goals, values and so forth has generally been associated with greater relationship satisfaction and personal well-being […]. The concomitant experience of feeling validated, understood and cared for […] would arguably be that much more imperative when one partner is in distress. Such responsiveness entails the ability “to discern non-verbal cues, and to ‘read between the lines’ about motivations, emotions, and experiences,” […] Being attuned and responsive to non-verbal and para-verbal cues, in turn, is conducive to couple coping because it enables well spouses to be appropriately supportive without having to be explicitly directed or asked.” (I recall thinking that the topic of ‘hidden support’ along these lines was a very important topic to keep in mind in the future when I first read about it. It’s covered in much more detail in one of the previous books I’ve read on related topics, though I can’t recall at the moment if it was in Vangelisti & Perlman, Hargie, or Regan).
“Sexual resilience […] is a term used to describe individuals or couples who are able to withstand, adapt, and find solutions to events and experiences that challenge their sexual relationship.[…] the most common challenges to sexuality include the birth of the first child […]; the onset of a physical or mental illness […]; an emotional blow to the relationship, such as betrayal or hurt; lack of relational intimacy, such as becoming absorbed by other priorities such as career; and changes associated with aging, such as vaginal dryness or erectile dysfunction. […] People who place a relatively low value on sex for physical pleasure and a relatively high value on sex for relational intimacy […] are motivated to engage in sexual activity primarily to feel emotionally close to their partner. […] these individuals may respond to sexual challenges with less distress than those who place a high value on sex for physical pleasure. On an individual level, they are not overly concerned about specific sexual dysfunctions, but are motivated to find alternative ways of continuing to be sexually intimate with their partner, which may or may not include intercourse. […] Facing sexual difficulties with a high value placed on sex for relational intimacy, with strong dyadic adjustment, and with effective and open communication skills primes a couple to respond well to sexual challenges. […] Acceptance, flexibility, and persistence are characteristics most commonly associated with couples who successfully negotiated the challenges to their sexual relationship. […] When the physical pleasure aspect of sex is viewed as an enjoyed, but not essential, component of sex, couples no longer need to rely on perfect sexual functioning to engage in satisfying sex. Physical pleasure can come to be seen as “icing on the cake”, while relational intimacy is the cake itself.”
“Overall findings from neuroimaging studies of resilience suggest that the brains of resilient people are better equipped to tamp down negative emotion. […] In studies of rodents […] and primates […], early stress has consistently been associated with impaired brain development. Specifically, chronic stress has been found to damage neurons and inhibit neurogenesis in the hippocampus and medial prefrontal cortex […]. Stress has the opposite effect on the amygdala, causing dendritic growth accompanied by increased anxiety and aggression […]. Human studies yield results that are consistent with animal studies.”
I won’t cover the human studies in detail, but for example people have found when looking at the brains of children raised under bad conditions in orphanages in Eastern Europe and Asia that children who were adopted early in life (i.e., got away from the terrible conditions early on) had smaller amygdalae than children who were adopted later. They also note that smaller orbitofrontal volumes have been observed in physically abused children, with what is arguably (I’m not sure about the validity of the instrument applied) a dose-response relationship between the severity of abuse and the extent of the brain differences/changes observed, and smaller hippocampal volumes have been noted in depressed women with a history of childhood maltreatment (their brains were compared with the brains of depressed women without such a history).
“The positive associations between social support and physical health may be due in large part to the effect of positive relationships on cortisol levels […]. The presence of close, supportive relationships have been associated with lower cortisol levels in adolescents […], middle class mothers of 2-year old children […], elderly widowed adults […], men and women aged 47–59 […], healthy men […], college students […], 18–36 year olds from the UCLA community […], parents expecting their first child […], and relationship partners […] Overall, studies on relationship quality and cortisol levels suggest that close supportive relationships play an important role in boosting resilience.”
“Sharpe (2000) offered an insightful, developmental approach to understanding mutuality in romantic relationships. She described mutuality as a result of “merging” that consists of several steps, which occur and often overlap in the lifetime of a relationship. With the progression of the relationship, the partners start to recognize differences that exist between them and try to incorporate them into their existing concept of relationship. Additionally, both partners search for “his or her own comfort level and balance between time together and time apart” […]. As merging progresses, partners are able to cultivate their existing commonalities and differences, as well as develop multiple ways of staying connected. In truly mutual couples, both partners respect and validate each other’s views, work together to accomplish common goals, and resolve their differences through compromise. Moreover, a critical achievement of mutuality is the internalization of the loving relationship.”
I gave the book two stars on goodreads. The contributors to this volume are from Brazil, Spain, Mexico, Japan, Turkey, Denmark, and the Czech Republic; the editor is from Taiwan. In most chapters you can tell that the first language of these authors is not English; the language is occasionally quite bad, although you can usually tell what the authors are trying to say.
The book is open access and you can read it here. I have included some quotes from the book below:
“It is estimated that men and women with depression are 20.9 and 27 times, respectively, more likely to commit suicide than those without depression (Briley & Lépine, 2011).” [Well, that’s one way to communicate risk… See also this comment].
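To illustrate the risk-communication point: a large relative risk like ‘27 times more likely’ tells you very little on its own, because the implied absolute risk depends entirely on the baseline rate. A minimal sketch (the baseline rate below is a made-up placeholder, not a figure from the book):

```python
# Illustration (not from the book): converting a relative risk into an
# absolute risk requires a baseline rate, which the quoted statistic omits.
def absolute_risk(baseline_rate: float, relative_risk: float) -> float:
    """Approximate absolute risk implied by multiplying a baseline rate
    by a relative risk (a rough first-order approximation)."""
    return baseline_rate * relative_risk

baseline = 10 / 100_000          # hypothetical annual suicide rate, non-depressed
rr_men, rr_women = 20.9, 27.0    # relative risks quoted in the book

print(f"men:   {absolute_risk(baseline, rr_men):.4%} per year")
print(f"women: {absolute_risk(baseline, rr_women):.4%} per year")
```

Even a 27-fold relative risk applied to a small baseline yields an absolute annual risk well under one percent, which is why presenting relative risks alone is ‘one way to communicate risk’, but not a very informative one.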
“depression is on average twice as common in women as in men (Bromet et al., 2011). […] sex differences have been observed in the prevalence of mental disorders as well as in responses to treatment […] When this [sexual] dimorphism is present [in rats, a common animal model], the drug effect is generally stronger in males than in females.”
“Several reports indicate that follicular stimulating and luteinizing hormones and estradiol oscillations are correlated with the onset or worsening of depression symptoms during early perimenopause […], when major depressive disorder incidence is 3-5 times higher than the male matched population of the same [age] […]. Several longitudinal studies that followed women across the menopausal transition indicate that the risk for significant depressive symptoms increases during the menopausal transition and then decreases in […] early postmenopause […] the impact of hormone oscillations during perimenopause transition may affect the serotonergic system function and increase vulnerability to develop depression.”
“The use of antidepressant drugs for treating patients with depression began in the late 1950s. Since then, many drugs with potential antidepressants have been made available and significant advances have been made in understanding their possible mechanisms of action […]. Only two classes of antidepressants were known until the 80’s: tricyclic antidepressants and monoamine oxidase inhibitors. Both, although effective, were nonspecific and caused numerous side effects […]. Over the past 20 years, new classes of antidepressants have been discovered: selective serotonin reuptake inhibitors, selective serotonin/norepinephrine reuptake inhibitors, serotonin reuptake inhibitors and alpha-2 antagonists, serotonin reuptake stimulants, selective norepinephrine reuptake inhibitors, selective dopamine reuptake inhibitors and alpha-2 adrenoceptor antagonists […] Neither the biological basis of depression […] nor the precise mechanism of antidepressant efficacy are completely understood […]. Indeed, antidepressants are widely prescribed for anxiety and disorders other than depression.”
“Taken together the TCAs and the MAO-Is can be considered to be non-selective or multidimensional drugs, comparable to a more or less rational polypharmacy at the receptor level. This is even when used as monotherapy in the acute therapy of major depression. The new generation of selective antidepressants (the selective serotonin reuptake inhibitors (SSRIs)), or the selective noradrenaline and serotonin reuptake inhibitors (SNRIs) have a selective mechanism of action, thus avoiding polypharmacy. However, the new generation antidepressants such as the SSRIs or SNRIs are less effective than the TCAs. […] The most selective second generation antidepressants have not proved in monotherapy to be more effective on the core symptoms of depression than the first generation TCAs or MAOIs. It is by their safety profiles, either in overdose or in terms of long term side effects, that the second generation antidepressants have outperformed the first generation.”
“Suicide is a serious global public health problem. Nearly 1 million individuals commit suicide every year. […] Suicide […] ranks among the top 10 causes of death in every country, and is one of the three leading causes of death in 15 to 35-year olds.”
“Considering patients that commit suicide, about half of them, at some point, had contact with psychiatric services, yet only a quarter had current or recent contact (Andersen et al., 2000; Lee et al., 2008). A study conducted by Gunnell & Frankel (1994) revealed that 20-25% of those committing suicide had contact with a health care professional in the week before death and 40% had such contact one month before death” (I’m assuming ‘things have changed’ during the last couple of decades, but it would be interesting to know how much they’ve changed).
“In cases of suicide by drug overdose, TCAs have the highest fatal toxicity, followed by serotonin and noradrenalin reuptake inhibitors (SNRIs), specific serotonergic antidepressants (NaSSA) and SSRIs […] SSRIs are considered to be less toxic than TCAs and MAOIs because they have an extended therapeutic window. The ingestion of up to 30 times its recommended daily dose produces little or no symptoms. The intake of 50 to 70 times the recommended daily dose can cause vomiting, mild depression of the CNS or tremors. Death rarely occurs, even at very high doses […] When we talk about suicide and suicide attempt with antidepressants overdose, we are referring mainly to women in their twenties – thirties who are suicide repeaters.”
“Physical pain is one of the most common somatic symptoms in patients that suffer depression and conversely, patients suffering from chronic pain of diverse origins are often depressed. […] While […] data strongly suggest that depression is linked to altered pain perception, pain management has received little attention to date in the field of psychiatric research […] The monoaminergic system influences both mood and pain […], and since many antidepressants modify properties of monoamines, these compounds may be effective in managing chronic pain of diverse origins in non-depressed patients and to alleviate pain in depressed patients. There are abundant evidences in support of the analgesic properties of tricyclic antidepressants (TCAs), particularly amitriptyline, and another TCA, duloxetine, has been approved as an analgesic for diabetic neuropathic pain. By contrast, there is only limited data regarding the analgesic properties of selective serotonin reuptake inhibitors (SSRIs) […]. In general, compounds with noradrenergic and serotonergic modes of action are more effective analgesics […], although the underlying mechanisms of action remain poorly understood […] While the utility of many antidepressant drugs in pain treatment is well established, it remains unclear whether antidepressants alleviate pain by acting on mood (emotional pain) or nociceptive transmission (sensorial pain). Indeed, in many cases, no correlation exists between the level of pain experienced by the patient and the effect of antidepressants on mood. […] Currently, TCAs (amitriptyline, nortriptiline, imipramine and clomipramine) are the most common antidepressants used in the treatment of neuropathic pain processes associated with diabetes, cancer, viral infections and nerve compression. […] TCAs appear to provide effective pain relief at lower doses than those required for their antidepressant effects, while medium to high doses of SNRIs are necessary to produce analgesia”. 
Do keep in mind here that in a neuropathy setting one should not expect to get anywhere near complete pain relief with these drugs – see also this post.
“Prevalence of a more or less severe depression is approximately double in patients with diabetes compared to a general population [for more on related topics, see incidentally this previous post of mine]. […] Diabetes as a primary disease is typically superimposed by depression as a reactive state. Depression is usually a result of exposure to psycho-social factors that are related to hardship caused by chronic disease. […] Several studies concerning comorbidity of type 1 diabetes and depression identified risk factors of depression development; chronic somatic comorbidity and polypharmacy, female gender, higher age, solitary life, lower than secondary education, lower financial status, cigarette smoking, obesity, diabetes complications and a higher glycosylated hemoglobin [Engum, 2005; Bell, 2005; Hermanns, 2005; Katon, 2004]”
Here are my first two posts about the book, which I have now finished. I gave the book three stars on goodreads, but I’m close to a four-star rating and I may change my opinion later – overall it’s a pretty good book. I’ve read about many of the topics covered before, but there was also quite a bit of new stuff along the way. The book spans very widely, but despite this the level of coverage of individual topics is not bad – I actually think the structure of the book makes it more useful as a reference tool than McPhee et al. (in terms of reference books one might need in order to make sense of medical tests and test results, I should of course add that no book can beat Newman & Kohn). I have tried to take this into account in the way I’ve been reading the book: I’ve made frequent references in the margin to other relevant works going into more detail about specific topics whenever this seemed like it might be useful, and I think if one does something along those lines systematically a book like this one can become a really powerful tool. You get the short version with the most important information (…or at least what the authors considered to be the most important information) almost regardless of what topic you’re interested in – I should note in this context that the book has only very limited coverage of mental health topics, so this is one area where you definitely need to go elsewhere for semi-detailed coverage – and if you need more detail than what’s provided in the coverage you’ll also know from your notes where to go next.
In my last post I talked a bit about which topics were covered in the various chapters in the book – I figured it might make sense here to list the remaining chapter titles in this post. After the (long) surgery chapter, the rest of the chapters deal with epidemiology (I thought this was a poor chapter and the authors definitely did not consider this topic to be particularly important; they spent only 12 pages on it), clinical chemistry (lab results, plasma proteins, topics like ‘what is hypo- and hypernatremia’, …), eponymous syndromes (a random collection of diseases, many of which are quite rare), radiology (MRI vs X-ray? When to use, or not use, contrast material? Etc.), ‘reference intervals etc.‘ (the ‘etc.’ part covered drug therapeutic ranges for some commonly used drugs, as well as some important drug interactions – note to self: The effects of antidiabetic drugs are increased by alcohol, beta-blockers, bezafibrate, and MAOIs, and are decreased by contraceptive steroids, corticosteroids, diazoxide, diuretics, and possibly also lithium), practical procedures (I was considering skipping this chapter because I’m never going to be asked to e.g. insert a chest drain and knowing how to do it seems to be of limited benefit to me, but I figured I might as well read it anyway; there were some details about what can go wrong in the context of specific procedures and what should be done when this happens, and this seemed like valuable information. Also, did you know that “There is no evidence that lying flat post procedure prevents headache” in the context of lumbar punctures? I didn’t, and a lot of doctors probably also don’t. You can actually go even further than that: “Despite years of anecdotal advice to the contrary, none of the following has ever been shown to be a risk factor [for post-LP headache]: position during or after the procedure; hydration status before, during, or after; amount of CSF removed; immediate activity or rest post-LP.”), and emergencies.
In this post I won’t cover specific chapters of the book in any detail, rather I’ll talk about a few specific topics and observations I could be bothered to write some stuff about here. Let’s start with some uplifting news about the topic of liver tumours: Most of these (~90%) are secondary (i.e. metastatic) tumours with an excellent prognosis (“Often <6 months”). No, wait just a minute… Nope, you definitely do not want cancer cells to migrate to your liver. Primary tumours, the most common cause of which is hepatitis B infection (…they say in that part of the coverage – but elsewhere in the book they observe that “alcohol is the prime cause of any liver disease”), also don’t have great outcomes, especially not if you don’t get a new liver: “Resecting solitary tumours <3cm across ↑3yr survival to 59% from 13%; but ~50% have recurrence by 3yrs. Liver transplant gives a 5yr survival rate of 70%.” It should be noted in a disease impact context that this type of cancer is far more common in areas of the world with poorly developed health care systems like Africa and China.
Alcoholism is another cause of liver tumours. In the book they include the observation that the lifetime prevalence of alcoholism is around 10% for men and 4% for women, but such numbers are of course close to being completely meaningless almost regardless of where they’re coming from. Alcoholism is dangerous; in cases with established cirrhosis roughly half (52%) of people who do not stop drinking will be dead within 5 years, whereas this is also the case for 23% of the people who do stop drinking. Excessive alcohol consumption can cause alcoholic hepatitis; “[m]ild episodes hardly affect mortality” but in severe cases half will be dead in a month, and in general 40% of people admitted to the hospital for alcoholic hepatitis will be dead within one year of admission. Alcohol can cause portal hypertension (80% of cases are caused by cirrhosis in the UK), which may lead to the development of abnormal blood vessels e.g. in the oesophagus which will have a tendency to cause bleeding, which can be fatal. Roughly 30% of cirrhotics with varices bleed, and rebleeding is common: “After a 1st variceal bleed, 60% rebleed within 1yr” and “40% of rebleeders die of complications.” Alcoholism can kill you in a variety of different ways (acute poisonings and accidents should probably be included here as well), and many don’t survive long enough to develop cancer.
As mentioned in the first post about the book, acute kidney injury is common in a hospital setting. In the following I’ve added a few more observations about renal disease. “Renal pain is usually a dull ache, constant and in the loin.” But renal disease doesn’t always cause pain, and in general: “There is often a poor correlation between symptoms and severity of renal disease. Progression [in chronic disease] may be so insidious that patients attribute symptoms to age or a minor illnesses. […] Serious renal failure may cause no symptoms at all.” The authors note that odd chronic symptoms like fatigue should not be dismissed without considering a renal function test first. The book has a nice brief overview of the pathophysiology of diabetic nephropathy – this part is slightly technical, but I decided to include it here anyway before moving on to a different topic:
“Early on, glomerular and tubular hypertrophy occur, increasing GFR [glomerular filtration rate, an indicator variable used to assess kidney function] transiently, but ongoing damage from advanced glycosylation end-products (AGE—caused by non-enzymatic glycosylation of proteins from chronic hyperglycaemia) triggers more destructive disease. These AGE trigger an inflammatory response leading to deposition of type IV collagen and mesangial expansion, eventually leading to arterial hyalinization, thickening of the mesangium and glomerular basement membrane and nodular glomerulosclerosis (Kimmelstiel–Wilson lesions). Progression generally occurs in four stages:
1 GFR elevated: early in disease renal blood flow increases, increasing the GFR and leading to microalbuminuria. […]
2 Glomerular hyperfiltration: in the next 5–10yrs mesangial expansion gradually occurs and hyperfiltration at the glomerulus is seen without microalbuminuria.
3 Microalbuminuria: as soon as this is detected it indicates progression of disease, GFR may be raised or normal. This lasts another 5–10yrs.
4 Nephropathy: GFR begins to decline and proteinuria increases.
Patients with type 2 DM may present at the later stages having had undetected hyperglycaemia for many years before diagnosis.”
Vitamin B12 deficiency is quite common; the authors note that it occurs in up to 15% of older people. Severe B12 deficiency is not the sort of thing which will lead to you feeling ‘a bit under the weather’ – it can lead to permanent brain damage and damage to the spinal cord. “Vitamin B12 is found in meat, fish, and dairy products, but not in plants.” It’s important to note that “foods of non-animal origin contain no B12 unless fortified or contain bacteria.” The wiki article incidentally includes even higher prevalence estimates (“It is estimated to occur in about 6% of those under the age of 60 and 20% of those over the age of 60. Rates may be as high as 80% in parts of Africa and Asia.”) than the one included in the book – this vitamin deficiency is common, and if severe it can have devastating consequences.
On bleeding disorders: “After injury, 3 processes halt bleeding: vasoconstriction, gap-plugging by platelets, and the coagulation cascade […]. Disorders of haemostasis fall into these 3 groups. The pattern of bleeding is important — vascular and platelet disorders lead to prolonged bleeding from cuts, bleeding into the skin (eg easy bruising and purpura), and bleeding from mucous membranes (eg epistaxis [nose bleeds], bleeding from gums, menorrhagia). Coagulation disorders cause delayed bleeding into joints and muscle.” An important observation in the context of major bleeds is incidentally this: “Blood should only be given if strictly necessary and there is no alternative. Outcomes are often worse after a transfusion.” The book has some good chapters about the leukaemias, but they’re relatively rare diseases and some of them are depressing (e.g. acute myeloid leukaemia: according to the book coverage death occurs in ~2 months if untreated, and roughly four out of five treated patients are dead within 3 years) so I won’t talk a lot about them. One thing I found somewhat interesting about the blood disorders covered in the book is actually how rare they are, all things considered: “every day each of us makes 175 billion red cells, 70 billion granulocytes, and 175 billion platelets”. There are lots of opportunities for things to go wrong here…
Some ways to prevent traveller’s diarrhea: “If in doubt, boil all water. Chlorination is OK, but doesn’t kill amoebic cysts (get tablets from pharmacies). Filter water before purifying. Distinguish between simple gravity filters and water purifiers (which also attempt to sterilize chemically). […] avoid surface water and intermittent tap supplies. In Africa assume that all unbottled water is unsafe. With bottled water, ensure the rim is clean & dry. Avoid ice. […] Avoid salads and peel your own fruit. If you cannot wash your hands, discard the part of the food that you are holding […] Hot, well-cooked food is best (>70°C for 2min is no guarantee; many pathogens survive boiling for 5min, but few last 15min)”
An important observation related to this book’s coverage about how to control hospital acquired infection: “Cleaning hospitals: Routine cleaning is necessary to ensure that the hospital is visibly clean and free from dust and soiling. 90% of microorganisms are present within ‘visible dirt’, and the purpose of routine cleaning is to eliminate this dirt. Neither soap nor detergents have antimicrobial activity, and the cleaning process depends essentially on mechanical action.”
Falciparum malaria causes one million deaths/year, according to the book, and mortality is close to 100% in untreated severe malaria – treatment reduces this number to 15-20%. Malaria in returning travellers is not particularly common, but there are a couple thousand cases in the UK each year. Malaria prophylaxis does not give full protection, and “[t]here is no good protection for parts of SE Asia.” Multidrug resistance is common.
Here’s my first post about the book. I’ve read roughly 75% of the book at this point (~650 pages). The chapters I’ve read so far have dealt with the topics of: ‘thinking about medicine’ (an introductory chapter), ‘history and examination’, cardiovascular medicine, chest medicine, endocrinology, gastroenterology, renal medicine, haematology, infectious diseases, neurology, oncology and palliative care, rheumatology, and surgery (this last one is a long chapter – ~100 pages – which I have not yet finished). In my first post I (…mostly? I can’t recall if I included one or two observations made later in the coverage as well…) talked about observations included in the first 140 pages of the book, which relate only to the first three topics mentioned above; the chapter about chest medicine starts at page 154. In this post I’ll move on and discuss stuff covered in the chapters about cardiovascular medicine, chest medicine, and endocrinology.
In the previous post I talked a little bit about heart failure, acute coronary syndromes and a few related topics, but there’s a lot more stuff in the chapter about cardiovascular medicine and I figured I should add a few more observations – so let’s talk about aortic stenosis. The most common cause is ‘senile calcification’. The authors state that one should think of aortic stenosis in any elderly person with problems of chest pain, shortness of breath during exercise (exertional dyspnoea), and fainting episodes (syncope). Symptomatic aortic stenosis tends to be bad news; “If symptomatic, prognosis is poor without surgery: 2–3yr survival if angina/syncope; 1–2yr if cardiac failure. If moderate-to-severe and treated medically, mortality can be as high as 50% at 2yrs”. Surgery can improve the prognosis quite substantially; they note elsewhere in the coverage that a xenograft (e.g. from a pig) aortic valve replacement can last (“may require replacement at…”) 8–10 years, whereas a mechanical valve lasts even longer than that. Though it should also be noted in that context that the latter type requires life-long anticoagulation, whereas the former only requires this if there is atrial fibrillation.
Next: Infective endocarditis. Half of all cases of endocarditis occur on normal heart valves; the presentation in that case is one of acute heart failure. So this is one of those cases where your heart can be fine one day, and not many days later it’s toast and you’ll die unless you get treatment (often you’ll die even if you do get treatment, as mortality is quite high: “Mortality: 5–50% (related to age and embolic events)”; mortality also relates to which organism we’re dealing with: “30% with staphs [S. Aureus]; 14% if bowel organisms; 6% if sensitive streptococci.”). Multiple risk factors are known, but some of those are not easily preventable (renal failure, dermatitis, organ transplantation…); don’t be an IV drug (ab)user, and try to avoid getting (type 2) diabetes. The authors note that: “There is no proven association between having an interventional procedure (dental or non-dental) and the development of IE”, and: “Antibiotic prophylaxis solely to prevent IE is not recommended”.
Speaking of terrible things that can go wrong with your heart for no good reason, hypertrophic cardiomyopathy (HCM) is the leading cause of sudden cardiac death in young people, with an estimated prevalence of 1 in 500. “Sudden death may be the first manifestation of HCM in many patients”. Yeah…
The next chapter in the book as mentioned covers chest medicine. At the beginning of the chapter there’s some stuff about what the lungs look like and some stuff about how to figure out whether they’re working or not, or why they’re not working – I won’t talk about that here, but I would note that lung problems can relate to stuff besides ‘just’ lack of oxygen; they can also for example be related to retention of carbon dioxide and associated acidosis. In general I won’t talk much about this chapter’s coverage as I’m aware that I have covered many of the topics included in the book before here on the blog in other posts. It should perhaps be noted that whereas the chapter has two pages about lung tumours and two pages about COPD, it has 6 pages about pneumonia; this is still a very important disease and a major killer. Approximately one in five (the number 21% is included in the book) patients with pneumonia in a hospital setting die. Though it should perhaps also be observed that maybe one reason why more stuff is not included about lung cancer in that chapter is that this disease is just depressing and doctors can’t really do all that much. Carcinomas of the bronchus make up ~19% of all cancers and 27% of cancer deaths in the UK. In terms of prognosis, non-small cell lung cancer has a 50% 2-year mortality in cases where the cancer had not spread at presentation and a 90% 2-year mortality in cases with spread. That’s ‘the one you would prefer’: Small cell lung cancer is worse, as small cell tumours “are nearly always disseminated at presentation” – here the untreated median survival is 3 months, increasing to 1–1.5 years if treated. The authors note that only 5% (of all cases, including both types) are ‘cured’ (they presumably use those citation marks for a reason).
Malignant mesothelioma, a cancer strongly linked to asbestos exposure most often developing in the pleura, incidentally also has a terrible prognosis (“<2 years”, “Often the diagnosis is only made post-mortem”), however it is relatively rare, with only about ~650 deaths in the UK per year.
5-8% of people in the UK have asthma; I was surprised the number was that high. Most people who get it during childhood either grow out of it or suffer much less as adults, but on the other hand there are also many people who develop chronic asthma late in life. In 2009 approximately 1000 people in the UK died of asthma – unless this number is a big underestimate, it would seem to me that asthma at least in terms of mortality is a relatively mild disease (if 5% of the UK population has asthma, that’s 3 million people – and 1000 deaths among 3 million people is not a lot, especially not considering that half of those deaths were in people above the age of 65). COPD is incidentally another respiratory disease which is more common than I had thought; they note that the estimated prevalence in people above the age of 40 in the UK is 10-20%.
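The back-of-the-envelope reasoning above can be made explicit (the population figure below is a rough assumption on my part, not a number from the book):

```python
# Back-of-the-envelope check of the asthma mortality reasoning in the text.
# The UK population figure is an approximation; prevalence and deaths are
# the numbers discussed above.
uk_population = 60_000_000   # rough UK population around 2009 (assumption)
prevalence = 0.05            # lower bound of the 5-8% prevalence quoted
deaths_2009 = 1000           # approximate asthma deaths in the UK, 2009

asthmatics = uk_population * prevalence
mortality_per_100k = deaths_2009 / asthmatics * 100_000
print(f"{asthmatics:,.0f} asthmatics; ~{mortality_per_100k:.0f} deaths "
      f"per 100,000 asthmatics per year")
```

Roughly 33 deaths per 100,000 asthmatics per year – which is the sense in which asthma, in mortality terms at least, looks like a comparatively mild disease.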
The endocrinology chapter has 10 pages about diabetes, and I won’t talk much about that coverage here as I’ve talked about many of these things before on the blog – however a few observations are worth including and discussing here. The authors note that 4% of all pregnancies are complicated by diabetes, with the large majority of cases (3.5%) being new-onset gestational diabetes. In a way the 0.5% could be considered ‘good news’ because they reflect the fact that outcomes have improved so much that a female diabetic can actually carry a child to term without risking her own life or running a major risk that the fetus dies (“As late as 1980, physicians were still counseling diabetic women to avoid pregnancy” – link). But the 3.5%? That’s not good: “All forms [of diabetes] carry an increased risk to mother and foetus: miscarriage, pre-term labour, pre-eclampsia, congenital malformations, macrosomia, and a worsening of diabetic complications”. I’m not fully convinced this statement is actually completely correct, but there’s no doubt that diabetes during pregnancy is not particularly desirable. As to which part of the statement I’m uncertain about, I think gestational diabetes ‘ought to’ have somewhat different effects than type 1, especially in the context of congenital malformations. Based on my understanding of these things, gestational diabetes should be less likely to cause congenital malformations than type 1 diabetes in the mother; diabetes-related congenital malformations tend to happen/develop very early in pregnancy (for details, see the link above), whereas gestational diabetes is closely related to hormonal changes and changing metabolic demands which happen over time during pregnancy.
Hormonal changes which occur during pregnancy play a key role in the pathogenesis of gestational diabetes, as the hormonal changes in general increase insulin resistance significantly, which is what causes some non-diabetic women to become diabetic during pregnancy; these same processes incidentally also cause the insulin demands of diabetic pregnant women to increase a lot during pregnancy. You’d expect the inherently diabetogenic hormonal and metabolic processes which happen in pregnancy to play a much smaller role in the beginning of the pregnancy than they do later on, especially as women who develop gestational diabetes during their pregnancy would be likely to be able to compensate early in pregnancy, where the increased metabolic demands are much less severe than they are later on. So I’d expect the risk contribution from ‘classic gestational diabetes’ to be larger in the case of macrosomia than in the case of neural tube defects, where type 1s should probably be expected to dominate – a sort of ‘gestational diabetics don’t develop diabetes early enough in pregnancy for the diabetes to be very likely to have much impact on organogenesis’-argument. This is admittedly not a literature I’m intimately familiar with and maybe I’m wrong, but from my reading of their diabetes-related coverage I sort of feel like the authors shouldn’t be expected to be intimately familiar with the literature either, and I’m definitely not taking their views on these sorts of topics to be correct ‘by default’ at this point.
This NHS site/page incidentally seems to support my take on this, as it’s clear that the first occasion for even testing for gestational diabetes is at weeks 8-12, which is actually after a substantial proportion of diabetes-related organ damage would already be expected to have occurred in the type 1 diabetes context (“There is an increased prevalence of congenital anomalies and spontaneous abortions in diabetic women who are in poor glycemic control during the period of fetal organogenesis, which is nearly complete by 7 wk postconception.” – Sperling et al., see again the link provided above. Note that that entire textbook is almost exclusively about type 1 diabetes, so ‘diabetes’ in the context of that quote equals T1DM), and a glucose tolerance test/screen does not in this setting take place until weeks 24-28.
The two main modifiable risk factors in the context of gestational diabetes are weight and age at pregnancy; the risk of developing gestational diabetes increases with weight and is higher in women above the age of 25. One other sex/gender-related observation to make in the context of diabetes is incidentally that female diabetics are at much higher risk of cardiovascular disease than are non-diabetic females: “DM [diabetes mellitus] removes the vascular advantage conferred by the female sex”. Relatedly, “MI is 4-fold commoner in DM and is more likely to be ‘silent’. Stroke is twice as common.” On a different topic in which I’ve been interested they provided an observation which did not help much: “The role of aspirin prophylaxis […] is uncertain in DM with hypertension.”
They argue in the section about thyroid function tests (p. 209) that people with diabetes mellitus should be screened for abnormalities in thyroid function at the annual review; I’m not actually sure this is done in Denmark, and I think it’s not – the DDD annual reports I’ve read have not included this variable, and if it is done I know for a fact that doctors do not report the results to the patient. I’m almost certain they neglected to include a ‘type 1’ in that recommendation, because it makes close to zero sense to screen type 2 diabetics for comorbid autoimmune conditions, and I’d say I’m also a little skeptical, though much less so, about annual screening of all type 1s being cost-effective. Given that autoimmune comorbidities (e.g. Graves’ disease and Hashimoto’s) are much more common in women than in men and that they often present in middle-aged individuals (and given that they’re more common in people who develop diabetes relatively late, unlike me – see Sperling) I would assume I’m relatively low risk and that it would probably not make sense to screen someone like me annually from a cost-benefit/cost-effectiveness perspective; but it might make sense to ask the endocrinologist at my next review about how this stuff is actually being done in Denmark, if only to satisfy my own curiosity. Annual screening of *female*, *type 1* diabetics *above (e.g.) the age of 30* might be a great idea, and perhaps less restrictive criteria than that can also be justified relatively easily, but this is an altogether very different recommendation from the suggestion that you should screen all diabetics annually for thyroid problems, which is what they recommend in the book – I guess you can add this one to the list of problems I have with the authors’ coverage of diabetes-related topics (see also my comments in the previous post).
The sex- and age-distinction is likely much less important than the ‘type’ restriction, and maybe you can justify screening all type 1 diabetics (for example: “Hypothyroid or hyperthyroid AITD [autoimmune thyroid disease] has been observed in 10–24% of patients with type 1 diabetes” – Sperling. Base rates are important here: type 1 diabetes is rare, and Graves’ disease is rare, but if the same HLA haplotype predisposes to both in many cases then the population prevalence is not informative about the risk an individual with diabetes and such a haplotype has of developing Graves’) – but most diabetics are not type 1 diabetics, and it doesn’t make sense to screen a large number of people without autoimmune disease for autoimmune comorbidities they’re unlikely to have (autoimmunity in diabetes is complicated – see the last part of this comment for a few observations of interest on that topic – but it’s not that complicated; most type 2 diabetics are not sick because of autoimmunity-related disease processes, and type 2 diabetics make up the great majority of people with diabetes mellitus in all patient populations around the world). All this being said, it is worth keeping in mind that despite overt thyroid disease being relatively rare in general, subclinical hypothyroidism is common in middle-aged and elderly individuals (“~10% of those >55yrs”); and the authors recommend treating people in this category who also have DM because they are more likely to develop overt disease (…again it probably makes sense to add a ‘T1’ in front of that DM).
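The base-rate point can be made concrete with a toy calculation. Every number below is an illustrative assumption (the population size, the type 1 share, the background AITD rate) except the 10–24% AITD range in type 1 diabetics, which is the figure quoted from Sperling above:

```python
# Back-of-the-envelope sketch of the base-rate argument for restricting
# thyroid screening to type 1 diabetics. All inputs are illustrative
# assumptions except aitd_in_t1, which uses the 10-24% range quoted
# from Sperling.

n_diabetics = 100_000   # hypothetical diabetic patient population
t1_share = 0.07         # assumed: type 1s are roughly 5-10% of all diabetics
aitd_in_t1 = 0.15       # assumed midpoint of the quoted 10-24% range
aitd_in_t2 = 0.02       # assumed: overt AITD at roughly background rates

n_t1 = int(n_diabetics * t1_share)
n_t2 = n_diabetics - n_t1

# Expected positives per test under the two screening policies:
yield_all = (n_t1 * aitd_in_t1 + n_t2 * aitd_in_t2) / n_diabetics
yield_t1 = aitd_in_t1  # screening only the type 1s

print(f"screen all diabetics: {yield_all:.1%} of tests find a case")
print(f"screen type 1s only:  {yield_t1:.1%} of tests find a case")
print(f"per-test yield ratio: {yield_t1 / yield_all:.1f}x")
```

With these assumed numbers, dropping the type 2s raises the per-test yield roughly five-fold while cutting the number of tests by more than 90% – which is the cost-effectiveness intuition behind insisting on a ‘type 1’ qualifier in the screening recommendation. The qualitative conclusion survives reasonable changes to the assumed inputs as long as AITD is several times more prevalent among type 1s than type 2s.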
Smoking is sexy, right? (Or at least it used to be…). And alcohol makes other people look sexy, right? In a way I find it a little amusing that alcohol and smoking are nevertheless two of the three big organic causes of erectile dysfunction (the third is diabetes).
How much better does it feel to have sex, compared to how it feels to masturbate? No, they don’t ask that question in the book (leave that to me…) but they do provide part of the answer because actually there are ways to quantify this, sort of: “The prolactin increase (♂ and ♀) after coitus is ~400% greater than after masturbation; post-orgasmic prolactin is part of a feedback loop decreasing arousal by inhibiting central dopaminergic processes. The size of post-orgasmic prolactin increase is a neurohormonal index of sexual satisfaction.”