My working assumption while reading part two of the book was that I would not be covering that part of the book in much detail here, because it would simply be too much work to make such posts legible to the readership of this blog. While writing this post, however, it occurred to me that since almost nobody reads along here anyway (I’m not complaining, mind you – this is how I like it these days), the main beneficiary of my blog posts will always be myself, which led to the related observation that I should not limit my coverage of interesting stuff here simply because some hypothetical and probably nonexistent readership out there might not be able to follow it. So when I started writing this post I was working under the assumption that it would be my last post about the book, but I now feel sure that if I find the time I’ll add at least one more post about the book’s statistics coverage. On a related note, I am explicitly making the observation here that this post was written for my benefit, not yours. You can read it if you like, or not, but it was not really written for you.
I have added bold in a few places to emphasize key concepts and observations from the quoted paragraphs, and to make the post easier for me to navigate later (all the italics below are, on the other hand, those of the authors of the book).
“Biodemography is a multidisciplinary branch of science that unites under its umbrella various analytic approaches aimed at integrating biological knowledge and methods and traditional demographic analyses to shed more light on variability in mortality and health across populations and between individuals. Biodemography of aging is a special subfield of biodemography that focuses on understanding the impact of processes related to aging on health and longevity.”
“Mortality rates as a function of age are a cornerstone of many demographic analyses. The longitudinal age trajectories of biomarkers add a new dimension to the traditional demographic analyses: the mortality rate becomes a function of not only age but also of these biomarkers (with additional dependence on a set of sociodemographic variables). Such analyses should incorporate dynamic characteristics of trajectories of biomarkers to evaluate their impact on mortality or other outcomes of interest. Traditional analyses using baseline values of biomarkers (e.g., Cox proportional hazards or logistic regression models) do not take into account these dynamics. One approach to the evaluation of the impact of biomarkers on mortality rates is to use the Cox proportional hazards model with time-dependent covariates; this approach is used extensively in various applications and is available in all popular statistical packages. In such a model, the biomarker is considered a time-dependent covariate of the hazard rate and the corresponding regression parameter is estimated along with standard errors to make statistical inference on the direction and the significance of the effect of the biomarker on the outcome of interest (e.g., mortality). However, the choice of the analytic approach should not be governed exclusively by its simplicity or convenience of application. It is essential to consider whether the method gives meaningful and interpretable results relevant to the research agenda. In the particular case of biodemographic analyses, the Cox proportional hazards model with time-dependent covariates is not the best choice.”
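For my own future reference: the time-dependent-covariate Cox approach mentioned above requires the data in so-called counting-process (start, stop] format, one row per measurement interval. A minimal sketch of that transformation, with hypothetical field names of my own choosing:

```python
def to_counting_process(subject_id, visits, end_time, died):
    """Expand one subject's visit history into (start, stop] rows suitable for a
    time-dependent Cox model. `visits` is a list of (age, biomarker_value) pairs
    sorted by age; `end_time` is the age at death or censoring."""
    rows = []
    for i, (age, value) in enumerate(visits):
        stop = visits[i + 1][0] if i + 1 < len(visits) else end_time
        rows.append({
            "id": subject_id,
            "start": age,        # interval start (exclusive)
            "stop": stop,        # interval end (inclusive)
            "biomarker": value,  # last observed value, carried forward
            "event": int(died and i == len(visits) - 1),  # event only in final row
        })
    return rows
```

Each row carries the most recent biomarker value forward until the next examination; treating these “raw” carried-forward values as the true covariate path is, as the authors argue below, exactly where measurement error and sparse examinations cause trouble.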
“Longitudinal studies of aging present special methodological challenges due to inherent characteristics of the data that need to be addressed in order to avoid biased inference. The challenges are related to the fact that the populations under study (aging individuals) experience substantial dropout rates related to death or poor health and often have co-morbid conditions related to the disease of interest. The standard assumption made in longitudinal analyses (although usually not explicitly mentioned in publications) is that dropout (e.g., death) is not associated with the outcome of interest. While this can be safely assumed in many general longitudinal studies (where, e.g., the main causes of dropout might be the administrative end of the study or moving out of the study area, which are presumably not related to the studied outcomes), the very nature of the longitudinal outcomes (e.g., measurements of some physiological biomarkers) analyzed in a longitudinal study of aging assumes that they are (at least hypothetically) related to the process of aging. Because the process of aging leads to the development of diseases and, eventually, death, in longitudinal studies of aging an assumption of non-association of the reason for dropout and the outcome of interest is, at best, risky, and usually is wrong. As an illustration, we found that the average trajectories of different physiological indices of individuals dying at earlier ages markedly deviate from those of long-lived individuals, both in the entire Framingham original cohort […] and also among carriers of specific alleles […] In such a situation, panel compositional changes due to attrition affect the averaging procedure and modify the averages in the total sample. Furthermore, biomarkers are subject to measurement error and random biological variability. They are usually collected intermittently at examination times which may be sparse and typically biomarkers are not observed at event times. 
It is well known in the statistical literature that ignoring measurement errors and biological variation in such variables and using their observed “raw” values as time-dependent covariates in a Cox regression model may lead to biased estimates and incorrect inferences […] Standard methods of survival analysis such as the Cox proportional hazards model (Cox 1972) with time-dependent covariates should be avoided in analyses of biomarkers measured with errors because they can lead to biased estimates.”
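The bias the authors warn about is easiest to see in the simpler linear-regression analogue (regression dilution): noise in the covariate attenuates the estimated effect toward zero. A quick self-contained simulation, with made-up parameter values:

```python
import random
random.seed(42)

n = 20000
true_beta = 1.0

# True covariate, a noisy measurement of it, and an outcome driven by the truth
x_true = [random.gauss(0.0, 1.0) for _ in range(n)]
x_obs = [xi + random.gauss(0.0, 1.0) for xi in x_true]  # measurement error, var = 1
y = [true_beta * xi + random.gauss(0.0, 0.5) for xi in x_true]

def ols_slope(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

beta_clean = ols_slope(x_true, y)  # close to the true value 1.0
beta_noisy = ols_slope(x_obs, y)   # attenuated toward var_x/(var_x+var_err) * beta = 0.5
```

The Cox case is more involved than OLS, but the mechanism is the same: the “raw” observed biomarker understates (and can otherwise distort) the effect of the true underlying value.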
“Statistical methods aimed at analyses of time-to-event data jointly with longitudinal measurements have become known in the mainstream biostatistical literature as “joint models for longitudinal and time-to-event data” (“survival” or “failure time” are often used interchangeably with “time-to-event”) or simply “joint models.” This is an active and fruitful area of biostatistics with an explosive growth in recent years. […] The standard joint model consists of two parts, the first representing the dynamics of longitudinal data (which is referred to as the “longitudinal sub-model”) and the second one modeling survival or, generally, time-to-event data (which is referred to as the “survival sub-model”). […] Numerous extensions of this basic model have appeared in the joint modeling literature in recent decades, providing great flexibility in applications to a wide range of practical problems. […] The standard parameterization of the joint model (11.2) assumes that the risk of the event at age t depends on the current “true” value of the longitudinal biomarker at this age. While this is a reasonable assumption in general, it may be argued that additional dynamic characteristics of the longitudinal trajectory can also play a role in the risk of death or onset of a disease. For example, if two individuals at the same age have exactly the same level of some biomarker at this age, but the trajectory for the first individual increases faster with age than that of the second one, then the first individual can have worse survival chances for subsequent years. […] Therefore, extensions of the basic parameterization of joint models allowing for dependence of the risk of an event on such dynamic characteristics of the longitudinal trajectory can provide additional opportunities for comprehensive analyses of relationships between the risks and longitudinal trajectories. Several authors have considered such extended models. 
[…] joint models are computationally intensive and are sometimes prone to convergence problems [however such] models provide more efficient estimates of the effect of a covariate […] on the time-to-event outcome in the case in which there is […] an effect of the covariate on the longitudinal trajectory of a biomarker. This means that analyses of longitudinal and time-to-event data in joint models may require smaller sample sizes to achieve comparable statistical power with analyses based on time-to-event data alone (Chen et al. 2011).”
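The two sub-models are easiest (for me) to keep straight in terms of the data-generating process a standard joint model assumes. The sketch below simulates that process directly, with arbitrary parameter values: each subject gets a random linear biomarker trajectory (longitudinal sub-model), and the instantaneous hazard depends on the current true biomarker value (survival sub-model). It also reproduces the book's point about slopes: subjects whose trajectories rise faster die earlier on average.

```python
import math
import random
random.seed(1)

def simulate_subject(h0=0.01, alpha=0.5, dt=0.05, t_max=50.0):
    # Longitudinal sub-model: true biomarker m(t) = b0 + b1*t, random effects per subject
    b0 = random.gauss(0.0, 1.0)   # random intercept
    b1 = random.gauss(0.1, 0.05)  # random slope
    t = 0.0
    while t < t_max:
        m = b0 + b1 * t                    # current "true" biomarker value
        hazard = h0 * math.exp(alpha * m)  # survival sub-model: hazard depends on m(t)
        if random.random() < hazard * dt:  # event occurs in (t, t+dt]?
            return t, b1
        t += dt
    return t_max, b1                       # administratively censored at t_max

subjects = [simulate_subject() for _ in range(2000)]
slopes = sorted(b1 for _, b1 in subjects)
median_slope = slopes[len(slopes) // 2]
fast = [t for t, b1 in subjects if b1 > median_slope]   # steeper trajectories
slow = [t for t, b1 in subjects if b1 <= median_slope]  # flatter trajectories
mean_fast = sum(fast) / len(fast)
mean_slow = sum(slow) / len(slow)
```

Fitting such a model means estimating the random-effects distribution and the link parameter alpha jointly, which is where the computational burden (and the efficiency gain) comes from.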
“To be useful as a tool for biodemographers and gerontologists who seek biological explanations for observed processes, models of longitudinal data should be based on realistic assumptions and reflect relevant knowledge accumulated in the field. An example is the shape of the risk functions. Epidemiological studies show that the conditional hazards of health and survival events considered as functions of risk factors often have U- or J-shapes […], so a model of aging-related changes should incorporate this information. In addition, risk variables, and, what is very important, their effects on the risks of corresponding health and survival events, experience aging-related changes and these can differ among individuals. […] An important class of models for joint analyses of longitudinal and time-to-event data incorporating a stochastic process for description of longitudinal measurements uses an epidemiologically-justified assumption of a quadratic hazard (i.e., U-shaped in general and J-shaped for variables that can take values only on one side of the U-curve) considered as a function of physiological variables. Quadratic hazard models have been developed and intensively applied in studies of human longitudinal data”.
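The quadratic (U-shaped) hazard idea is simple enough to write down directly. A one-dimensional sketch, with a hypothetical age-dependent “optimal” biomarker value f_opt(t) and invented numbers:

```python
def quadratic_hazard(y, t, mu0, f_opt, q):
    """U-shaped risk as a function of a physiological variable y at age t:
    minimal at the age-specific optimum f_opt(t), rising on both sides.
    mu0(t) is the baseline (residual) hazard; q > 0 controls the curvature."""
    dev = y - f_opt(t)
    return mu0(t) + q * dev * dev

# Example with made-up numbers: a blood-pressure-like variable whose
# "optimal" value drifts upward with age
mu0 = lambda t: 0.001 * 1.08 ** (t - 40)  # Gompertz-like baseline hazard
f_opt = lambda t: 110.0 + 0.3 * (t - 40)  # hypothetical age-specific norm
```

For a variable that can only take values on one side of the optimum, only one arm of the U is realized, which gives the J-shape the authors mention.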
“Various approaches to statistical model building and data analysis that incorporate unobserved heterogeneity are ubiquitous in different scientific disciplines. Unobserved heterogeneity in models of health and survival outcomes can arise because there may be relevant risk factors affecting an outcome of interest that are either unknown or not measured in the data. Frailty models introduce the concept of unobserved heterogeneity in survival analysis for time-to-event data. […] Individual age trajectories of biomarkers can differ due to various observed as well as unobserved (and unknown) factors and such individual differences propagate to differences in risks of related time-to-event outcomes such as the onset of a disease or death. […] The joint analysis of longitudinal and time-to-event data is the realm of a special area of biostatistics named “joint models for longitudinal and time-to-event data” or simply “joint models” […] Approaches that incorporate heterogeneity in populations through random variables with continuous distributions (as in the standard joint models and their extensions […]) assume that the risks of events and longitudinal trajectories follow similar patterns for all individuals in a population (e.g., that biomarkers change linearly with age for all individuals). Although such homogeneity in patterns can be justifiable for some applications, generally this is a rather strict assumption […] A population under study may consist of subpopulations with distinct patterns of longitudinal trajectories of biomarkers that can also have different effects on the time-to-event outcome in each subpopulation. When such subpopulations can be defined on the base of observed covariate(s), one can perform stratified analyses applying different models for each subpopulation. 
However, observed covariates may not capture the entire heterogeneity in the population in which case it may be useful to conceive of the population as consisting of latent subpopulations defined by unobserved characteristics. Special methodological approaches are necessary to accommodate such hidden heterogeneity. Within the joint modeling framework, a special class of models, joint latent class models, was developed to account for such heterogeneity […] The joint latent class model has three components. First, it is assumed that a population consists of a fixed number of (latent) subpopulations. The latent class indicator represents the latent class membership and the probability of belonging to the latent class is specified by a multinomial logistic regression function of observed covariates. It is assumed that individuals from different latent classes have different patterns of longitudinal trajectories of biomarkers and different risks of event. The key assumption of the model is conditional independence of the biomarker and the time-to-events given the latent classes. Then the class-specific models for the longitudinal and time-to-event outcomes constitute the second and third component of the model thus completing its specification. […] the latent class stochastic process model […] provides a useful tool for dealing with unobserved heterogeneity in joint analyses of longitudinal and time-to-event outcomes and taking into account hidden components of aging in their joint influence on health and longevity. This approach is also helpful for sensitivity analyses in applications of the original stochastic process model. We recommend starting the analyses with the original stochastic process model and estimating the model ignoring possible hidden heterogeneity in the population. 
Then the latent class stochastic process model can be applied to test hypotheses about the presence of hidden heterogeneity in the data in order to appropriately adjust the conclusions if a latent structure is revealed.”
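The first component of the joint latent class model — the multinomial logistic class-membership probabilities — can be sketched as follows (the class count and covariate weights here are of course arbitrary placeholders):

```python
import math

def class_probabilities(x, weights):
    """P(class k | covariates x) via multinomial logistic regression:
    proportional to exp(w_k . x). `weights` holds one weight vector per latent class."""
    scores = [sum(w * xi for w, xi in zip(wk, x)) for wk in weights]
    m = max(scores)  # subtract the max to stabilize the softmax numerically
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Three hypothetical latent classes; covariates x = (1, age_at_baseline, sex)
weights = [(0.0, 0.00, 0.0),    # reference class
           (1.0, -0.02, 0.3),
           (-2.0, 0.05, -0.1)]
probs = class_probabilities((1.0, 65.0, 1.0), weights)
```

Conditional on the class drawn from these probabilities, the class-specific longitudinal and survival sub-models apply, and biomarker trajectory and event time are assumed independent given the class.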
“The longitudinal genetic-demographic model (or the genetic-demographic model for longitudinal data) […] combines three sources of information in the likelihood function: (1) follow-up data on survival (or, generally, on some time-to-event) for genotyped individuals; (2) (cross-sectional) information on ages at biospecimen collection for genotyped individuals; and (3) follow-up data on survival for non-genotyped individuals. […] Such joint analyses of genotyped and non-genotyped individuals can result in substantial improvements in statistical power and accuracy of estimates compared to analyses of the genotyped subsample alone if the proportion of non-genotyped participants is large. Situations in which genetic information cannot be collected for all participants of longitudinal studies are not uncommon. They can arise for several reasons: (1) the longitudinal study may have started some time before genotyping was added to the study design so that some initially participating individuals dropped out of the study (i.e., died or were lost to follow-up) by the time of genetic data collection; (2) budget constraints prohibit obtaining genetic information for the entire sample; (3) some participants refuse to provide samples for genetic analyses. Nevertheless, even when genotyped individuals constitute a majority of the sample or the entire sample, application of such an approach is still beneficial […] The genetic stochastic process model […] adds a new dimension to genetic biodemographic analyses, combining information on longitudinal measurements of biomarkers available for participants of a longitudinal study with follow-up data and genetic information. 
Such joint analyses of different sources of information collected in both genotyped and non-genotyped individuals allow for more efficient use of the research potential of longitudinal data which otherwise remains underused when only genotyped individuals or only subsets of available information (e.g., only follow-up data on genotyped individuals) are involved in analyses. Similar to the longitudinal genetic-demographic model […], the benefits of combining data on genotyped and non-genotyped individuals in the genetic SPM come from the presence of common parameters describing characteristics of the model for genotyped and non-genotyped subsamples of the data. This takes into account the knowledge that the non-genotyped subsample is a mixture of carriers and non-carriers of the same alleles or genotypes represented in the genotyped subsample and applies the ideas of heterogeneity analyses […] When the non-genotyped subsample is substantially larger than the genotyped subsample, these joint analyses can lead to a noticeable increase in the power of statistical estimates of genetic parameters compared to estimates based only on information from the genotyped subsample. This approach is applicable not only to genetic data but to any discrete time-independent variable that is observed only for a subsample of individuals in a longitudinal study.”
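The way the non-genotyped subsample enters the likelihood — as a mixture of carriers and non-carriers sharing the genotyped subsample's parameters — can be sketched like this; the per-genotype likelihood functions are placeholders:

```python
import math

def subject_loglik(data, genotype, carrier_freq, lik_carrier, lik_noncarrier):
    """Log-likelihood contribution of one subject. Genotyped subjects use their
    observed genotype; non-genotyped subjects (genotype=None) contribute a
    mixture weighted by the population carrier frequency."""
    if genotype == "carrier":
        return math.log(lik_carrier(data))
    if genotype == "noncarrier":
        return math.log(lik_noncarrier(data))
    return math.log(carrier_freq * lik_carrier(data)
                    + (1.0 - carrier_freq) * lik_noncarrier(data))
```

Because both subsamples feed the same carrier and non-carrier parameters, a large non-genotyped subsample sharpens the genetic estimates, which is the power gain the authors describe.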
“Despite an existing tradition of interpreting differences in the shapes or parameters of the mortality rates (survival functions) resulting from the effects of exposure to different conditions or other interventions in terms of characteristics of individual aging, this practice has to be used with care. This is because such characteristics are difficult to interpret in terms of properties of external and internal processes affecting the chances of death. An important question then is: What kind of mortality model has to be developed to obtain parameters that are biologically interpretable? The purpose of this chapter is to describe an approach to mortality modeling that represents mortality rates in terms of parameters of physiological changes and declining health status accompanying the process of aging in humans. […] A traditional (demographic) description of changes in individual health/survival status is performed using a continuous-time random Markov process with a finite number of states, and age-dependent transition intensity functions (transitions rates). Transitions to the absorbing state are associated with death, and the corresponding transition intensity is a mortality rate. Although such a description characterizes connections between health and mortality, it does not allow for studying factors and mechanisms involved in the aging-related health decline. Numerous epidemiological studies provide compelling evidence that health transition rates are influenced by a number of factors. Some of them are fixed at the time of birth […]. Others experience stochastic changes over the life course […] The presence of such randomly changing influential factors violates the Markov assumption, and makes the description of aging-related changes in health status more complicated. 
[…] The age dynamics of influential factors (e.g., physiological variables) in connection with mortality risks has been described using a stochastic process model of human mortality and aging […]. Recent extensions of this model have been used in analyses of longitudinal data on aging, health, and longevity, collected in the Framingham Heart Study […] This model and its extensions are described in terms of a Markov stochastic process satisfying a diffusion-type stochastic differential equation. The stochastic process is stopped at random times associated with individuals’ deaths. […] When an individual’s health status is taken into account, the coefficients of the stochastic differential equations become dependent on values of the jumping process. This dependence violates the Markov assumption and renders the conditional Gaussian property invalid. So the description of this (continuously changing) component of aging-related changes in the body also becomes more complicated. Since studying age trajectories of physiological states in connection with changes in health status and mortality would provide more realistic scenarios for analyses of available longitudinal data, it would be a good idea to find an appropriate mathematical description of the joint evolution of these interdependent processes in aging organisms. For this purpose, we propose a comprehensive model of human aging, health, and mortality in which the Markov assumption is fulfilled by a two-component stochastic process consisting of jumping and continuously changing processes. The jumping component is used to describe relatively fast changes in health status occurring at random times, and the continuous component describes relatively slow stochastic age-related changes of individual physiological states. 
[…] The use of stochastic differential equations for random continuously changing covariates has been studied intensively in the analysis of longitudinal data […] Such a description is convenient since it captures the feedback mechanism typical of biological systems reflecting regular aging-related changes and takes into account the presence of random noise affecting individual trajectories. It also captures the dynamic connections between aging-related changes in health and physiological states, which are important in many applications.”
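The continuous component described above — a diffusion-type SDE with feedback toward an age-dependent norm — is essentially an Ornstein–Uhlenbeck process tracking a moving target, and can be simulated with the Euler–Maruyama scheme. All parameter values below are invented for illustration:

```python
import math
import random
random.seed(0)

def simulate_trajectory(y0=120.0, a=0.3, b=2.0, dt=0.1, t_max=40.0,
                        f=lambda t: 120.0 + 0.5 * t):
    """Euler-Maruyama for dY = a*(f(t) - Y) dt + b dW: the feedback term
    (strength a) pulls Y toward the age-dependent norm f(t); b scales the
    random noise affecting the individual trajectory."""
    t, y = 0.0, y0
    path = [(t, y)]
    while t < t_max - 1e-9:
        y += a * (f(t) - y) * dt + b * math.sqrt(dt) * random.gauss(0.0, 1.0)
        t += dt
        path.append((t, y))
    return path

# Averaged over many runs, the endpoint tracks the norm f(t_max),
# lagging slightly behind the moving target (by about slope/a)
finals = [simulate_trajectory()[-1][1] for _ in range(200)]
mean_final = sum(finals) / len(finals)
```

In the book's full model the coefficients additionally depend on the jumping health-status process, which is what breaks the simple Markov/Gaussian structure of this one-component version.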
The links above are to topics I looked up while reading the second half of the book. The first link is quite relevant to the book’s coverage, as a comprehensive longitudinal Grade of Membership (GoM) model is covered in chapter 17. Relatedly, chapter 18 covers linear latent structure (LLS) models, and as observed in the book, LLS is a generalization of GoM. As should be obvious from the nature of the links, some of the stuff included in the second half of the text is highly technical, and I’ll readily admit I was not fully able to understand all the details included in the coverage of chapters 17 and 18 in particular. On account of the technical nature of the coverage in Part 2 I’m not sure I’ll cover the second half of the book in much detail, though I probably shall devote at least one more post to some of those topics, as they were quite interesting even if some of the details were difficult to follow.
I have almost finished the book at this point, and I have already decided both to give the book five stars and to include it on my list of favorite books on goodreads; it’s really well written, and it consistently provides highly detailed coverage of very high quality. As I also noted in the first post about the book, the authors have given readability aspects some thought, and I am sure most readers would learn quite a bit from this text even if they were to skip some of the more technical chapters. The main body of Part 2 of the book, the subtitle of which is ‘Statistical Modeling of Aging, Health, and Longevity’, is, however, probably not worth the effort of reading unless you have a solid background in statistics.
This post includes some observations and quotes from the last chapters of the book’s Part 1.
“The proportion of older adults in the U.S. population is growing. This raises important questions about the increasing prevalence of aging-related diseases, multimorbidity issues, and disability among the elderly population. […] In 2009, 46.3 million people were covered by Medicare: 38.7 million of them were aged 65 years and older, and 7.6 million were disabled […]. By 2031, when the baby-boomer generation will be completely enrolled, Medicare is expected to reach 77 million individuals […]. Because the Medicare program covers 95 % of the nation’s aged population […], the prediction of future Medicare costs based on these data can be an important source of health care planning.”
“Three essential components (which could be also referred as sub-models) need to be developed to construct a modern model of forecasting of population health and associated medical costs: (i) a model of medical cost projections conditional on each health state in the model, (ii) health state projections, and (iii) a description of the distribution of initial health states of a cohort to be projected […] In making medical cost projections, two major effects should be taken into account: the dynamics of the medical costs during the time periods comprising the date of onset of chronic diseases and the increase of medical costs during the last years of life. In this chapter, we investigate and model the first of these two effects. […] the approach developed in this chapter generalizes the approach known as “life tables with covariates” […], resulting in a new family of forecasting models with covariates such as comorbidity indexes or medical costs. In sum, this chapter develops a model of the relationships between individual cost trajectories following the onset of aging-related chronic diseases. […] The underlying methodological idea is to aggregate the health state information into a single (or several) covariate(s) that can be determinative in predicting the risk of a health event (e.g., disease incidence) and whose dynamics could be represented by the model assumptions. An advantage of such an approach is its substantial reduction of the degrees of freedom compared with existing forecasting models (e.g., the FEM model, Goldman and RAND Corporation 2004). 
[…] We found that the time patterns of medical cost trajectories were similar for all diseases considered and can be described in terms of four components having the meanings of (i) the pre-diagnosis cost associated with initial comorbidity represented by medical expenditures, (ii) the cost peak associated with the onset of each disease, (iii) the decline/reduction in medical expenditures after the disease onset, and (iv) the difference between post- and pre-diagnosis cost levels associated with an acquired comorbidity. The description of the trajectories was formalized by a model which explicitly involves four parameters reflecting these four components.”
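The four-component cost shape is easy to write down as a function of follow-up time. All the dollar figures and the decline rate below are invented placeholders; the book estimates such parameters separately per disease:

```python
import math

def cost_trajectory(t, onset, pre=500.0, peak=4000.0, decline=0.8,
                    comorbidity_shift=300.0):
    """Monthly medical cost as a function of follow-up time t (months):
    (i) pre-diagnosis level `pre`; (ii) cost peak `peak` at disease onset;
    (iii) exponential decline after onset at rate `decline`;
    (iv) settles at a post-diagnosis level `pre + comorbidity_shift`."""
    if t < onset:
        return pre
    post = pre + comorbidity_shift  # component (iv): acquired-comorbidity level
    return post + (peak - post) * math.exp(-decline * (t - onset))
```

The four parameters (pre, peak, decline, comorbidity_shift) correspond one-to-one to the four components the authors list.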
As I noted earlier in my coverage of the book, I don’t think the model above fully captures all relevant cost contributions of the diseases included, as the follow-up period was too short to capture all the relevant costs belonging in model component (iv). This is definitely a problem in the context of diabetes. But then again nothing in theory stops people from combining the model above with other models that are better at dealing with the excess costs associated with long-term complications of chronic diseases, and the model results were intriguing even if the model likely underperforms in a few specific disease contexts.
“Models of medical cost projections usually are based on regression models estimated with the majority of independent predictors describing demographic status of the individual, patient’s health state, and level of functional limitations, as well as their interactions […]. If the health states needs to be described by a number of simultaneously manifested diseases, then detailed stratification over the categorized variables or use of multivariate regression models allows for a better description of the health states. However, it can result in an abundance of model parameters to be estimated. One way to overcome these difficulties is to use an approach in which the model components are demographically-based aggregated characteristics that mimic the effects of specific states. The model developed in this chapter is an example of such an approach: the use of a comorbidity index rather than of a set of correlated categorical regressor variables to represent the health state allows for an essential reduction in the degrees of freedom of the problem.”
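The aggregation idea — replacing many correlated disease indicators with one index — is the same move made by Charlson-type comorbidity scores. A toy version with invented weights (not the Charlson weights):

```python
def comorbidity_index(conditions, weights=None):
    """Collapse a set of disease indicators into a single aggregated covariate.
    The default weights are invented placeholders for illustration only."""
    default = {"diabetes": 1, "chf": 2, "copd": 1, "cancer": 2, "renal": 2}
    w = weights if weights is not None else default
    return sum(w.get(c, 1) for c in conditions)  # unlisted conditions count as 1
```

A hazard or cost regression then uses this single covariate instead of the full cross-classification of disease indicators, which is the reduction in degrees of freedom the authors describe.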
“Unlike mortality, the onset time of chronic disease is difficult to define with high precision due to the large variety of disease-specific criteria for onset/incident case identification […] there is always some arbitrariness in defining the date of chronic disease onset, and a unified definition of date of onset is necessary for population studies with a long-term follow-up.”
“Individual age trajectories of physiological indices are the product of a complicated interplay among genetic and non-genetic (environmental, behavioral, stochastic) factors that influence the human body during the course of aging. Accordingly, they may differ substantially among individuals in a cohort. Despite this fact, the average age trajectories for the same index follow remarkable regularities. […] some indices tend to change monotonically with age: the level of blood glucose (BG) increases almost monotonically; pulse pressure (PP) increases from age 40 until age 85, then levels off and shows a tendency to decline only at later ages. The age trajectories of other indices are non-monotonic: they tend to increase first and then decline. Body mass index (BMI) increases up to about age 70 and then declines, diastolic blood pressure (DBP) increases until age 55–60 and then declines, systolic blood pressure (SBP) increases until age 75 and then declines, serum cholesterol (SCH) increases until age 50 in males and age 70 in females and then declines, ventricular rate (VR) increases until age 55 in males and age 45 in females and then declines. With small variations, these general patterns are similar in males and females. The shapes of the age-trajectories of the physiological variables also appear to be similar for different genotypes. […] The effects of these physiological indices on mortality risk were studied in Yashin et al. (2006), who found that the effects are gender and age specific. They also found that the dynamic properties of the individual age trajectories of physiological indices may differ dramatically from one individual to the next.”
“An increase in the mortality rate with age is traditionally associated with the process of aging. This influence is mediated by aging-associated changes in thousands of biological and physiological variables, some of which have been measured in aging studies. The fact that the age trajectories of some of these variables differ among individuals with short and long life spans and healthy life spans indicates that dynamic properties of the indices affect life history traits. Our analyses of the FHS data clearly demonstrate that the values of physiological indices at age 40 are significant contributors both to life span and healthy life span […] suggesting that normalizing these variables around age 40 is important for preventing age-associated morbidity and mortality later in life. […] results [also] suggest that keeping physiological indices stable over the years of life could be as important as their normalizing around age 40.”
“The results […] indicate that, in the quest of identifying longevity genes, it may be important to look for candidate genes with pleiotropic effects on more than one dynamic characteristic of the age-trajectory of a physiological variable, such as genes that may influence both the initial value of a trait (intercept) and the rates of its changes over age (slopes). […] Our results indicate that the dynamic characteristics of age-related changes in physiological variables are important predictors of morbidity and mortality risks in aging individuals. […] We showed that the initial value (intercept), the rate of changes (slope), and the variability of a physiological index, in the age interval 40–60 years, significantly influenced both mortality risk and onset of unhealthy life at ages 60+ in our analyses of the Framingham Heart Study data. That is, these dynamic characteristics may serve as good predictors of late life morbidity and mortality risks. The results also suggest that physiological changes taking place in the organism in middle life may affect longevity through promoting or preventing diseases of old age. For non-monotonically changing indices, we found that having a later age at the peak value of the index […], a lower peak value […], a slower rate of decline in the index at older ages […], and less variability in the index over time, can be beneficial for longevity. Also, the dynamic characteristics of the physiological indices were, overall, associated with mortality risk more significantly than with onset of unhealthy life.”
“Decades of studies of candidate genes show that they are not linked to aging-related traits in a straightforward manner […]. Recent genome-wide association studies (GWAS) have reached fundamentally the same conclusion by showing that the traits in late life likely are controlled by a relatively large number of common genetic variants […]. Further, GWAS often show that the detected associations are of tiny effect […] the weak effect of genes on traits in late life can be not only because they confer small risks having small penetrance but because they confer large risks but in a complex fashion […] In this chapter, we consider several examples of complex modes of gene actions, including genetic tradeoffs, antagonistic genetic effects on the same traits at different ages, and variable genetic effects on lifespan. The analyses focus on the APOE common polymorphism. […] The analyses reported in this chapter suggest that the e4 allele can be protective against cancer with a more pronounced role in men. This protective effect is more characteristic of cancers at older ages and it holds in both the parental and offspring generations of the FHS participants. Unlike cancer, the effect of the e4 allele on risks of CVD is more pronounced in women. […] [The] results […] explicitly show that the same allele can change its role on risks of CVD in an antagonistic fashion from detrimental in women with onsets at younger ages to protective in women with onsets at older ages. […] e4 allele carriers have worse survival compared to non-e4 carriers in each cohort. […] Sex stratification shows sexual dimorphism in the effect of the e4 allele on survival […] with the e4 female carriers, particularly, being more exposed to worse survival. […] The results of these analyses provide two important insights into the role of genes in lifespan. First, they provide evidence on the key role of aging-related processes in genetic susceptibility to lifespan. 
For example, taking into account the specifics of aging-related processes gains 18 % in estimates of the RRs and five orders of magnitude in significance in the same sample of women […] without additional investments in increasing sample sizes and new genotyping. The second is that a detailed study of the role of aging-related processes in estimates of the effects of genes on lifespan (and healthspan) helps in detecting more homogeneous [high risk] sub-samples”.
“The aging of populations in developed countries requires effective strategies to extend healthspan. A promising solution could be to yield insights into the genetic predispositions for endophenotypes, diseases, well-being, and survival. It was thought that genome-wide association studies (GWAS) would be a major breakthrough in this endeavor. Various genetic association studies including GWAS assume that there should be a deterministic (unconditional) genetic component in such complex phenotypes. However, the idea of unconditional contributions of genes to these phenotypes faces serious difficulties which stem from the lack of direct evolutionary selection against or in favor of such phenotypes. In fact, evolutionary constraints imply that genes should be linked to age-related phenotypes in a complex manner through different mechanisms specific for given periods of life. Accordingly, the linkage between genes and these traits should be strongly modulated by age-related processes in a changing environment, i.e., by the individuals’ life course. The inherent sensitivity of genetic mechanisms of complex health traits to the life course will be a key concern as long as genetic discoveries continue to be aimed at improving human health.”
“Despite the common understanding that age is a risk factor of not just one but a large portion of human diseases in late life, each specific disease is typically considered as a stand-alone trait. Independence of diseases was a plausible hypothesis in the era of infectious diseases caused by different strains of microbes. Unlike those diseases, the exact etiology and precursors of diseases in late life are still elusive. It is clear, however, that the origin of these diseases differs from that of infectious diseases and that age-related diseases reflect a complicated interplay among ontogenetic changes, senescence processes, and damages from exposures to environmental hazards. Studies of the determinants of diseases in late life provide insights into a number of risk factors, apart from age, that are common for the development of many health pathologies. The presence of such common risk factors makes chronic diseases and hence risks of their occurrence interdependent. This means that the results of many calculations using the assumption of disease independence should be used with care. Chapter 4 argued that disregarding potential dependence among diseases may seriously bias estimates of potential gains in life expectancy attributable to the control or elimination of a specific disease and that the results of the process of coping with a specific disease will depend on the disease elimination strategy, which may affect mortality risks from other diseases.”
Lately I’ve been reading some of George MacDonald Fraser’s Flashman books, which have been quite enjoyable reads in general; I’m reading the books in the order in which the events in the books supposedly took place, not in the order in which the books were published. A large number of the words included below are words I encountered in the first three of the books I read (i.e. Flashman, Royal Flash, and Flashman’s Lady); at that point the post already included a large number of words (roughly 120), so I saw no need to add additional words from the other books in the series as well. I have reviewed a few of the Flashman books I’ve read on goodreads here, here, and here.
All the quotes included in this post are from The Faber Book of Aphorisms, which I am currently reading.
i. “It is never any good dwelling on good-bys. It is not the being together that it prolongs, it is the parting.” (Elizabeth Bibesco)
ii. “Good manners are made up of petty sacrifices.” (Ralph Waldo Emerson)
iii. “One learns taciturnity best among people without it, and loquacity among the taciturn.” (Jean Paul Richter)
iv. “A man never reveals his character more vividly than when portraying the character of another.” (-ll-)
v. “That we seldom repent of talking too little and very often of talking too much is a … maxim that everybody knows and nobody practices.” (Jean de La Bruyère)
vi. “Never trust a man who speaks well of everybody.” (John Churton Collins)
vii. “People not used to the world … are unskillful enough to show what they have sense enough not to tell.” (Philip Dormer Stanhope, 4th Earl of Chesterfield)
viii. “To most men, experience is like the stern lights of a ship, which illumine only the track it has passed.” (Samuel Taylor Coleridge)
ix. “Those who know the least obey the best.” (George Farquhar)
x. “Monkeys are superior to men in this: when a monkey looks into a mirror, he sees a monkey.” (Malcolm de Chazal)
xi. “It can be shown that a mathematical web of some kind can be woven about any universe containing several objects. The fact that our universe lends itself to mathematical treatment is not a fact of any great philosophical significance.” (Bertrand Russell)
xii. “You can change your faith without changing gods, and vice versa.” (Stanisław Jerzy Lec)
xiii. “Religion is the masterpiece of the art of animal training, for it trains people as to how they shall think.” (Arthur Schopenhauer)
xiv. “The vanity of being known to be trusted with a secret is generally one of the chief motives to disclose it.” (Samuel Johnson)
xv. “No man is exempt from saying silly things; the mischief is to say them deliberately.” (Michel de Montaigne)
xvi. “Many promising reconciliations have broken down because, while both parties came prepared to forgive, neither party came prepared to be forgiven.” (Charles Williams)
xvii. “Ambition is pitiless. Any merit that it cannot use it finds despicable.” (Joseph Joubert)
xviii. “Experience is the name everyone gives to his mistakes.” (Oscar Wilde)
xix. “Nothing is enough to the man for whom enough is too little.” (Epicurus)
xx. “To measure up to all that is demanded of him, a man must overestimate his capacities.” (Johann Wolfgang von Goethe)
Over the last couple of weeks I’ve been reading James Herriot’s books, and yesterday I finished the last one in the series. The five books (or 8, if you’re British – see the wiki…) I read – I skipped the ‘dog stories’ publication on that list because that book is just a collection of stories included in the other books – contain almost 2500 pages (2479, according to the goodreads page counts for the editions I’ve been reading), and they also contain quite a few unfamiliar/nice words and expressions, many of which are included below. If you’re curious about the Herriot books you can read my goodreads reviews of the books here, here (very short), here, and here (I didn’t review The Lord God Made Them All).
i. “Fraud and falsehood only dread examination. Truth invites it.” (Thomas Cooper)
ii. “However well equipped our language, it can never be forearmed against all possible cases that may arise and call for description: fact is richer than diction.” (J. L. Austin)
iii. “There is no loneliness like the loneliness of crowds, especially to those who are unaccustomed to them.” (H. Rider Haggard)
iv. “All men are moral. Only their neighbors are not.” (John Steinbeck)
v. “The unfortunate thing is that, because wishes sometimes come true, the agony of hoping is perpetuated.” (Marguerite Cleenewerck de Crayencour)
vi. “All cruel people describe themselves as paragons of frankness.” (Tennessee Williams)
vii. “If you do not have the capacity for happiness with a little money, great wealth will not bring it to you.” (William Feather)
viii. “Anyone who can think clearly can write clearly. But neither is easy.” (-ll-)
ix. “No one’s reputation is quite what he himself perceives it ought to be.” (Christopher Vokes)
x. “[T]he question is not how to avoid procrastination, but how to procrastinate well. There are three variants of procrastination, depending on what you do instead of working on something: you could work on (a) nothing, (b) something less important, or (c) something more important. That last type, I’d argue, is good procrastination.” (Paul Graham)
xi. “At every period of history, people have believed things that were just ridiculous, and believed them so strongly that you risked ostracism or even violence by saying otherwise. If our own time were any different, that would be remarkable. As far as I can tell it isn’t.” (-ll-)
xii. “There can be no doubt that the knowledge of logic is of considerable practical importance for everyone who desires to think and infer correctly.” (Alfred Tarski)
xiii. “Logic and truth are two very different things, but they often look the same to the mind that’s performing the logic.” (Theodore Sturgeon)
xiv. “I don’t like it; I can’t approve of it; I have always thought it most regrettable that earnest and ethical Thinkers like ourselves should go scuttling through space in this undignified manner. Is it seemly that I, at my age, should be hurled with my books of reference, and bed-clothes, and hot-water bottle, across the sky at the unthinkable rate of nineteen miles a second? As I say, I don’t at all like it.” (Logan Pearsall Smith, All Trivia).
xv. “That we should practice what we preach is generally admitted; but anyone who preaches what he and his hearers practise must incur the gravest moral disapprobation.” (-ll-)
xvi. “Our names are labels, plainly printed on the bottled essence of our past behaviour.” (-ll-)
xvii. “It’s an odd thing about this Universe that though we all disagree with each other, we are all of us always in the right.” (-ll-)
xviii. “Those who say everything is pleasant and everyone delightful, come to the awful fate of believing what they say.” (-ll-)
xix. “He who goes against the fashion is himself its slave.” (-ll-)
xx. “When I read in the Times about India and all its problems and populations; when I look at the letters in large type of important personages, and find myself face to face with the Questions, Movements, and great Activities of the Age, ‘Where do I come in?’ I ask uneasily.
Then in the great Times-reflected world I find the corner where I play my humble but necessary part. For I am one of the unpraised, unrewarded millions without whom Statistics would be a bankrupt science. It is we who are born, who marry, who die, in constant ratios; who regularly lose so many umbrellas, post just so many unaddressed letters every year. And there are enthusiasts among us, Heroes who, without the least thought of their own convenience, allow omnibuses to run over them, or throw themselves, great-heartedly, month by month, in fixed numbers, from London bridges.” (-ll-)
In my first post about the book I included a few general remarks about the book and what it’s about. In this post I’ll continue my coverage of the book, starting with a few quotes from and observations related to the content in chapter 4 (‘Evidence for Dependence Among Diseases‘).
“To compare the effects of public health policies on a population’s characteristics, researchers commonly estimate potential gains in life expectancy that would result from eradication or reduction of selected causes of death. For example, Keyfitz (1977) estimated that eradication of cancer would result in 2.265 years of increase in male life expectancy at birth (or by 3 % compared to its 1964 level). Lemaire (2005) found that the potential gain in the U.S. life expectancy from cancer eradication would not exceed 3 years for both genders. Conti et al. (1999) calculated that the potential gain in life expectancy from cancer eradication in Italy would be 3.84 years for males and 2.77 years for females. […] All these calculations assumed independence between cancer and other causes of death. […] for today’s populations in developed countries, where deaths from chronic non-communicable diseases are in the lead, this assumption might no longer be valid. An important feature of such chronic diseases is that they often develop in clusters manifesting positive correlations with each other. The conventional view is that, in a case of such dependence, the effect of cancer eradication on life expectancy would be even smaller.”
I think the great majority of people, if asked, would assume that the beneficial effect of hypothetical cancer eradication on human life expectancy would be much larger than this – but that’s just an impression. I’ve seen estimates like these before, so I was not surprised, but I think many people would be if they knew this. A very large number of people die as a result of developing cancer today, but the truth of the matter is that if they hadn’t died from cancer they’d have died anyway, and on average probably not all that much later. I linked to Richard Alexander’s comments on this topic in my last post about the book, and his observations apply here as well, so I thought I might as well add the relevant quote from the book:
“In the course of working against senescence, selection will tend to remove, one by one, the most frequent sources of mortality as a result of senescence. Whenever a single cause of mortality, such as a particular malfunction of any vital organ, becomes the predominant cause of mortality, then selection will more effectively reduce the significance of that particular defect (meaning those who lack it will outreproduce) until some other achieves greater relative significance. […] the result will be that all organs and systems will tend to deteriorate together. […] The point is that as we age, and as senescence proceeds, large numbers of potential sources of mortality tend to lurk ever more malevolently just “below the surface,”so that, unfortunately, the odds are very high against any dramatic lengthening of the maximum human lifetime through technology.”
Remove one cause of death and there are plenty of others standing in line behind it. We already knew that; two hundred years ago one out of every four deaths in England was the result of tuberculosis, but developing treatments for tuberculosis and other infectious diseases did not mean that English people stopped dying; these days they just die from cardiovascular disease and cancer instead. Do note in the context of that quote that Alexander is talking about the maximum human lifetime, not average life expectancy; again, we know and have known for a long time that human technology can have a dramatic effect on the latter variable. Of course a shift in one distribution is likely to have spill-over effects on the other (if more people are alive at the age of 70, the pool of people who may live on to reach e.g. 100 years is also larger, even if the mortality rate for the 70-100 year old group did not change); the point is just that these effects are secondary effects and are likely to be marginal at best.
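The kind of cause-elimination calculation referenced above (a Keyfitz-style cause-deleted life table, assuming independence between causes of death) can be sketched in a few lines. The hazard numbers below are invented for illustration and are not from the book; the point is simply that because the remaining hazards rise steeply with age, deleting even a major cause buys only a few extra years of life expectancy:

```python
import math

def life_expectancy(hazards, width=10.0):
    """Life expectancy at birth from piecewise-constant hazards over equal age bands."""
    e, surv = 0.0, 1.0
    for m in hazards:
        p = math.exp(-m * width)                       # probability of surviving the band
        e += surv * ((1 - p) / m if m > 0 else width)  # person-years lived in the band
        surv *= p
    return e

# Hypothetical all-cause hazards (deaths per person-year) for ten 10-year age
# bands (ages 0-9, 10-19, ..., 90-99), and an equally hypothetical share of
# those deaths attributable to cancer in each band.
all_cause   = [0.001, 0.0005, 0.001, 0.002, 0.004, 0.008, 0.016, 0.035, 0.08, 0.18]
cancer_frac = [0.05, 0.10, 0.15, 0.25, 0.35, 0.35, 0.30, 0.25, 0.15, 0.08]

baseline = life_expectancy(all_cause)
# 'Eradicating' cancer under the independence assumption simply removes its
# share of the hazard in each band; all other hazards are left unchanged.
deleted = life_expectancy([m * (1 - f) for m, f in zip(all_cause, cancer_frac)])
print(f"e0 baseline: {baseline:.1f}y, cancer deleted: {deleted:.1f}y, "
      f"gain: {deleted - baseline:.1f}y")
```

With these made-up numbers the gain comes out at roughly three years – in the same ballpark as the published estimates quoted above, despite cancer accounting for a quarter to a third of all deaths in middle and old age in the toy data.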
Anyway, some more stuff from the chapter. Just like the previous chapter in the book did, this one also includes analyses of very large data sets:
“The Multiple Cause of Death (MCD) data files contain information about underlying and secondary causes of death in the U.S. during 1968–2010. In total, they include more than 65 million individual death certificate records. […] we used data for the period 1979–2004.”
There’s some formal modelling stuff in the chapter which I won’t go into in detail here; this is the chapter in which I encountered the comment about ‘the multivariate lognormal frailty model’ I included in my first post about the book. One of the things the chapter looks at is the joint frequencies of deaths from cancer and other fatal diseases; it turns out that there are multiple diseases that are negatively related with cancer as a cause of death when you look at the population-level data mentioned above. The chapter goes into some of the biological mechanisms which may help explain why these associations look the way they do, and I’ll quote a little from that part of the coverage. A key idea here is (as always..?) that there are tradeoffs at play; some genetic variants may help protect you against e.g. cancer, but at the same time increase the risk of other diseases for the same reason that they protect you against cancer. In the context of the relationship between cancer deaths and deaths from other diseases they note in the conclusion that: “One potential biological mechanism underlying the negative correlation among cancer and other diseases could be related to the differential role of apoptosis in the development of these diseases.” The chapter covers that stuff in significantly more detail, and I decided to add some observations from the chapter on these topics below:
“Studying the role of the p53 gene in the connection between cancer and cellular aging, Campisi (2002, 2003) suggested that longevity may depend on a balance between tumor suppression and tissue renewal mechanisms. […] Although the mechanism by which p53 regulates lifespan remains to be determined, […] findings highlight the possibility that careful manipulation of p53 activity during adult life may result in beneficial effects on healthy lifespan. Other tumor suppressor genes are also involved in regulation of longevity. […] In humans, Dumont et al. (2003) demonstrated that a replacement of arginine (Arg) by proline (Pro) at position 72 of human p53 decreases its ability to initiate apoptosis, suggesting that these variants may differently affect longevity and vulnerability to cancer. Van Heemst et al. (2005) showed that individuals with the Pro/Pro genotype of p53 corresponding to reduced apoptosis in cells had significantly increased overall survival (by 41%) despite a more than twofold increased proportion of cancer deaths at ages 85+, together with a decreased proportion of deaths from senescence related causes such as COPD, fractures, renal failure, dementia, and senility. It was suggested that human p53 may protect against cancer but at a cost of longevity. […] Other biological factors may also play opposing roles in cancer and aging and thus contribute to respective trade-offs […]. E.g., higher levels of IGF-1 [have been] linked to both cancer and attenuation of phenotypes of physical senescence, such as frailty, sarcopenia, muscle atrophy, and heart failure, as well as to better muscle regeneration”.
“The connection between cancer and longevity may potentially be mediated by trade-offs between cancer and other diseases which do not necessarily involve any basic mechanism of aging per se. In humans, it could result, for example, from trade-offs between vulnerabilities to cancer and AD, or to cancer and CVD […] There may be several biological mechanisms underlying the negative correlation among cancer and these diseases. One can be related to the differential role of apoptosis in their development. For instance, in stroke, the number of dying neurons following brain ischemia (and thus probability of paralysis or death) may be less in the case of a downregulated apoptosis. As for cancer, the downregulated apoptosis may, conversely, mean a higher risk of the disease because more cells may survive damage associated with malignant transformation. […] Also, the role of the apoptosis may be different or even opposite in the development of cancer and Alzheimer’s disease (AD). Indeed, suppressed apoptosis is a hallmark of cancer, while increased apoptosis is a typical feature of AD […]. If so, then chronically upregulated apoptosis (e.g., due to a genetic polymorphism) may potentially be protective against cancer, but be deleterious in relation to AD. […] Increased longevity can be associated not only with increased but also with decreased chances of cancer. […] The most popular to-date “anti-aging” intervention, caloric restriction, often results in increased maximal life span along with reduced tumor incidence in laboratory rodents […] Because the rate of apoptosis was significantly and consistently higher in food restricted mice regardless of age, James et al. (1998) suggested that caloric restriction may have a cancer-protective effect primarily due to the upregulated apoptosis in these mice.”
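The apoptosis trade-off described in the quote can be caricatured in a toy competing-risks simulation (all hazard numbers are invented for illustration; this is not the book’s model): individuals with chronically upregulated apoptosis are less likely to die of cancer and more likely to die of the neurodegenerative cause, which produces exactly the kind of negative association between causes of death discussed above.

```python
import random

random.seed(42)

def cause_of_death(a):
    """a in [0, 1] is a hypothetical apoptosis-propensity score. Upregulated
    apoptosis suppresses the cancer hazard (damaged cells are cleared) but
    raises the 'AD-like' hazard (more cells lost after injury)."""
    h_cancer = 0.030 * (1.0 - a)
    h_neuro = 0.010 + 0.025 * a
    # With competing exponential risks, the probability that death is due to
    # cancer is cancer's share of the total hazard.
    return "cancer" if random.random() < h_cancer / (h_cancer + h_neuro) else "neuro"

low = [cause_of_death(0.2) for _ in range(10_000)]   # downregulated apoptosis
high = [cause_of_death(0.8) for _ in range(10_000)]  # upregulated apoptosis
print("cancer share, low apoptosis :", low.count("cancer") / len(low))   # ~0.62
print("cancer share, high apoptosis:", high.count("cancer") / len(high)) # ~0.17
```

In population data this mechanism would show up as a deficit of cancer deaths among people dying of the ‘AD-like’ cause and vice versa – a negative correlation between the two causes of death, even though nothing in the model makes the diseases causally protective of one another.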
Below I’ll discuss content covered in chapter 5, which deals with ‘Factors That May Increase Vulnerability to Cancer and Longevity in Modern Human Populations’. I’ll start out with a few quotes:
“Currently, the overall cancer incidence rate (age-adjusted) in the less developed world is roughly half that seen in the more developed world […] For countries with similar levels of economic development but different climate and ethnic characteristics […], the cancer rate patterns look much more similar than for the countries that share the same geographic location, climate, and ethnic distribution, but differ in the level of economic development […]. This suggests that different countries may share common factors linked to economic prosperity that could be primarily responsible for the modern increases in overall cancer risk. […] Population aging (increases in the proportion of older people) may […] partly explain the rise in the global cancer burden […]; however, it cannot explain increases in age-specific cancer incidence rates over time […]. Improved diagnostics and elevated exposures to carcinogens may explain increases in rates for selected cancer sites, but they cannot fully explain the increase in the overall cancer risk, nor incidence rate trends for most individual cancers (Jemal et al. 2008, 2013).”
“[W]e propose that the association between the overall cancer risk and the economic progress and spread of the Western lifestyle could in part be explained by the higher proportion of individuals more susceptible to cancer in the populations of developed countries, and discuss several mechanisms of such an increase in the proportion of the vulnerable. […] mechanisms include but are not limited to: (i) Improved survival of frail individuals. […] (ii) Avoiding or reducing traditional exposures. Excessive disinfection and hygiene typical of the developed world can diminish exposure to some factors that were abundant in the past […] Insufficiently or improperly trained immune systems may be less capable of resisting cancer. (iii) Burden of novel exposures. Some new medicines, cleaning agents, foods, etc., that are not carcinogenic themselves may still affect the natural ways of processing carcinogens in the body, and through this increase a person’s susceptibility to established carcinogens. [If this one sounds implausible to you, I’ll remind you that drug metabolism is complicated – US] […] (iv) Some of the factors linked to economic prosperity and the Western lifestyle (e.g., delayed childbirth and food enriched with growth factors) may antagonistically influence aging and cancer risk.”
They provide detailed coverage of all of these mechanisms in the chapter, below I have included a few select observations from that part of the coverage.
“There was a dramatic decline in infant and childhood mortality in developed countries during the last century. For example, the infant mortality rate in the United States was about 6 % of live births in 1935, 3 % in 1950, 1.3 % in 1980, and 0.6 % in 2010. That is, it declined tenfold over the course of 75 years […] Because almost all children (including those with immunity deficiencies) survive, the proportion of the children who are inherently more vulnerable could be higher in the more developed countries. This is consistent with a typically higher proportion of children with chronic inflammatory immune disorders such as asthma and allergy in the populations of developed countries compared to less developed ones […] Over-reduction of such traditional exposures may result in an insufficiently/improperly trained immune system early in life, which could make it less able to resist diseases, including cancer later in life […] There is accumulating evidence of the important role of these effects in cancer risk. […] A number of studies have connected excessive disinfection and lack of antigenic stimulation (especially in childhood) of the immune system in Westernized communities with increased risks of both chronic inflammatory diseases and cancer […] The IARC data on migrants to Israel […] allow for comparison of the age trajectories of cancer incidence rates between adult Jews who live in Israel but were born in other countries […] [These data] show that Jews born in less developed regions (Africa and Asia) have overall lower cancer risk than those born in the more developed regions (Europe and America). The discrepancy is unlikely to be due to differences in cancer diagnostics because at the moment of diagnosis all these people were citizens of the same country with the same standard of medical care. 
These results suggest that surviving childhood and growing up in a less developed country with diverse environmental exposures might help form resistance to cancer that lasts even after moving to a high risk country.”
I won’t go much into the ‘burden of novel exposures’ part, but I should note that exposures that may be relevant include factors like paracetamol use and antibiotics for treatment of H. pylori. Paracetamol is not considered carcinogenic by the IARC, but we know from animal studies that if you give rats paracetamol and then expose them to an established carcinogen (with the straightforward name N-nitrosoethyl-N-hydroxyethylamine), the number of rats developing kidney cancer goes up. In the context of H. pylori, we know that these bacteria may cause stomach cancer, but when you treat rats with metronidazole (which is used to treat H. pylori) and expose them to an established carcinogen, they’re more likely to develop colon cancer. The link between colon cancer and antibiotics use has been noted in other contexts as well; decreased microbial diversity after antibiotics use may lead to suppression of the bifidobacteria and promotion of E. coli in the colon, the metabolic products of which may lead to increased cancer risk. Over time an increase in colon cancer risk and a decrease in stomach cancer risk has been observed in developed societies, and aside from changes in diet another factor which may play a role is population-wide exposure to antibiotics. Colon and stomach cancers are incidentally not the only ones of interest in this particular context; it has also been found that exposure to chloramphenicol, a broad-spectrum antibiotic used since the 1940s, increases the risk of lymphoma in mice when the mice are exposed to a known carcinogen, despite the drug itself again not being clearly carcinogenic on its own.
Many new exposures aside from antibiotics are of course relevant. Two other drug-related ones worth mentioning are hormone replacement therapy and contraceptives. HRT is not as commonly used today as it was in the past, but to give some idea of the scope here, half of all women in the US aged 50-65 are estimated to have been on HRT at the peak of its use, around the turn of the millennium, and HRT is assumed to be partly responsible for the higher incidence of hormone-related cancers observed in female populations living in developed countries. It’s of some note that the use of HRT dropped dramatically shortly after this peak (from 61 million prescriptions in 2001 to 21 million in 2004), and that the incidence of estrogen-receptor positive cancers subsequently dropped. As for oral contraceptives, these have been in use since the 1960s, and combined hormonal contraceptives are known to increase the risk of liver- and breast cancer, while seemingly also having a protective effect against endometrial cancer and ovarian cancer. The authors speculate that some of the cancer incidence changes observed in the US during the latter half of the last century, with a decline in female endometrial and ovarian cancer combined with an increase in breast- and liver cancer, could in part be related to widespread use of these drugs. An estimated 10% of all women of reproductive age worldwide, and 16% of those living in the US, use combined hormonal contraceptives. In the context of the protective effect of the drugs, it should perhaps be noted that endometrial cancer in particular is strongly linked to obesity, so if you are not overweight you are relatively low-risk.
Many ‘exposures’ in a cancer context are not drug-related. For example, women in Western societies tend to go into menopause at a later age, and later menopause has been associated with hormone-related cancers; but again the picture is not clear in terms of how the variable affects longevity, considering that later menopause has also been linked to increased longevity in several large studies. In those studies the women did have higher mortality from the hormone-related cancers, but on the other hand they were less likely to die from some of the other causes, such as pneumonia, influenza, and falls. Age at childbirth is also a variable where there are significant differences between developed and developing countries, and this variable may also be relevant to cancer incidence, as it has been linked to breast cancer and melanoma; in one study women who first gave birth after the age of 35 had a 40% increased risk of breast cancer compared to mothers who gave birth before the age of 20 (good luck ‘controlling for everything’ in a context like that, but…), and in a meta-analysis the relative risk for melanoma was 1.47 for women in the oldest age group at first birth, compared to the youngest (again, good luck controlling for everything, but at least it’s not just one study). Lest you think this literature only deals with women, it’s also been found that parental age seems to be linked to cancers in the offspring (higher parental age -> higher cancer risk in the offspring), though the effect sizes are not mentioned in the coverage.
Here’s what they conclude at the end of the chapter:
“Some of the factors associated with economic prosperity and a Western lifestyle may influence both aging and vulnerability to cancer, sometimes oppositely. Current evidence supports a possibility of trade-offs between cancer and aging-related phenotypes […], which could be influenced by delayed reproduction and exposures to growth factors […]. The latter may be particularly beneficial at very old age. This is because the higher levels of growth factors may attenuate some phenotypes of physical senescence, such as decline in regenerative and healing ability, sarcopenia, frailty, elderly fractures and heart failure due to muscle atrophy. They may also increase the body’s vulnerability to cancer, e.g., through growth promoting and anti-apoptotic effects […]. The increase in vulnerability to cancer due to growth factors can be compatible with extreme longevity because cancer is a major contributor to mortality mainly before age 85, while senescence-related causes (such as physical frailty) become major contributors to mortality at oldest old ages (85+). In this situation, the impact of growth factors on vulnerability to death could be more deleterious in middle-to-old life (~before 85) and more beneficial at older ages (85+).
The complex relationships between aging, cancer, and longevity are challenging. This complexity warns against simplified approaches to extending longevity without taking into account the possible trade-offs between phenotypes of physical aging and various health disorders, as well as the differential impacts of such tradeoffs on mortality risks at different ages (e.g., Ukraintseva and Yashin 2003a; Yashin et al. 2009; Ukraintseva et al. 2010, 2016).”
This is my second and last post about the book, which will include some quotes from the second half of the book, as well as some comments.
“Different countries have adopted very different health care financing systems. In fact, it is arguable that the arrangements for financing of health care are more variable between different countries than the financing of any other good or service. […] The mechanisms adopted to deal with moral hazard are similar in all systems, whilst the mechanisms adopted to deal with adverse selection and incomplete coverage are very different. Compulsory insurance is used by social insurance and taxation [schemes] to combat adverse selection and incomplete coverage. Private insurance relies instead on experience rating to address adverse selection and a mix of retrospective reimbursement and selective contracting and vertical integration to deal with incomplete coverage.”
I have mentioned this before here on the blog (and elsewhere), but it is worth reiterating because you sometimes encounter people who do not know this: there are some problems you’ll have to face when you’re dealing with insurance markets which will be there regardless of which entity is in charge of the insurance scheme. It doesn’t matter whether your insurance system is government-based or whether the government is not involved in the insurance scheme at all; moral hazard will be there either way as a potential problem, and you’re going to have to deal with it somehow. In econ 101 you tend to learn that ‘markets are great’, but this is one of those problems which will not go away through privatization.
On top of the common problems faced by all insurers/insurance systems, different types of systems will also tend to face a different mix of potential problems, some of which are likely to merit special attention in the specific setting in question. Some problems tend to be much more common in some settings than in others, which means that to some extent, when you’re deciding on what might be ‘the best’ institutional setup, part of what you’re deciding on is which problem you are most concerned about addressing. In an evaluation context it should also be pointed out that the fact that most systems are mixes of different systems, rather than ‘pure’ systems, means that evaluation problems tend to be harder than they might otherwise have been. To add to this complexity, as noted above the ways insurers deal with the same problem may differ across institutional setups, which is worth keeping in mind when performance is evaluated (i.e., the fact that country A has included in its insurance system a feature X intended to address problem Q does not mean that country B, which has not included X in its system, does not attempt to address problem Q; B may just be using feature Y instead of feature X to do so).
Chapter 7 of the book deals with Equity in health care, and although I don’t want to cover that chapter in any detail, I did find a few observations from the text worth including in this post:
“In the 1930s, only 43% of the [UK] population were covered by the national insurance scheme, mainly men in manual and low-paid occupations, and covered only for GP services. Around 21 million people were not covered by any health insurance, and faced potentially catastrophic expenditure should they become ill.”
“The literature on equity in the finance of health care has focused largely on the extent to which health care is financed according to ability to pay, and in particular on whether people with different levels of income make […] different payments, which is a vertical equity concern. Much less attention has been paid to horizontal equity, which considers the extent to which people with the same income make the same payments. […] There is horizontal inequity if people with the same ability to pay for health care, for example the same income, pay different amounts for it. […] tax-based payments and social health insurance payments tend to have less horizontal inequity than private health insurance payments and direct out-of-pocket payments. […] there are many concepts of equity that could be pursued; these are limited only by our capacity to think about the different ways in which resources could be allocated. It is unsurprising therefore that so many concepts of equity are discussed in the literature.”
Chapter 8 is about ‘Health care labour markets’. Again I won’t cover the chapter in much detail – people interested in such topics might like to have a look at this paper, which from a brief skim looks like it covers a few of the topics also discussed in the chapter – but I did want to include a few data points:
“[S]alaries and wages paid to health care workers account for a substantial component of total health expenditure: the average country devotes over 40% of its government-funded health expenditure to paying its health workforce […], though there are regional variations [from ~30% in Africa to ~50% in the US and the Middle East – the data source is WHO, and the numbers are from 2006]. […] The WHO estimates there are around 59 million paid health workers worldwide […], around nine workers for every 1 000 population, with around two-thirds of the total providing health care and one third working in a non-clinical capacity.”
The last few chapters of the book mostly cover topics I have dealt with before in more detail – for example, most topics covered here which are also covered in Gray et al. are covered in much more detail in the latter book, which is natural as this text is mostly an introductory undergraduate text whereas the Gray et al. text is not (the latter book was based on material taught in a course called ‘Advanced Methods of Cost-Effectiveness Analysis’) – or topics in which I’m not actually all that interested (e.g. things like ‘extra-welfarism’). Below I have added some quotes from the remaining chapters. I apologize in advance for repeating myself, given that I probably covered a lot of this stuff back when I covered Gray et al., but on the other hand I read that book a while ago anyway:
“Simply providing information on costs and benefits is in itself not evaluative. Rather, in economic evaluation this information is structured in such a way as to enable alternative uses of resources to be judged. There are many criteria that might be used for such judgements. […] The criteria that are the focus of economic analysis are efficiency and equity […] in practice efficiency is dealt with far more often and with greater attention to precise numerical estimates. […] In publicly provided health programmes, market forces might be weak or there might be none at all. Economic evaluation is largely concerned with measuring efficiency in areas where there is public involvement and there are no markets to generate the kind of information – for example, prices and profits – that enable us to judge this. […] The question of how costs and benefits are to be measured and weighed against each other is obviously a fundamental issue, and indeed forms the main body of work on the topic. The answers to this question are often pragmatic, but they also have very strong guides from theory.”
“[M]any support economic evaluation as a useful technique even where it falls short of being a full cost–benefit analysis [‘CBA’ – US], as it provides at least some useful information. A partial cost–benefit analysis usually means that some aspects of cost or benefit have been identified but not valued, and the usefulness of the information depends on whether we believe that if the missing elements were to be valued they would alter the balance of costs and benefits. […] A special case of a partial economic evaluation is where costs are valued but benefits are not. […] This kind of partial efficiency is dealt with by a different type of economic evaluation known as cost-effectiveness analysis (CEA). […] One rationale for CEA is that whilst costs are usually measured in terms of money, it may be much more difficult to measure benefits that way. […] Cost-effectiveness analysis tries to identify where more benefit can be produced at the same cost or a lower cost can be achieved for the same benefit. […] there are many cases where we may wish to compare alternatives in which neither benefits nor costs are held constant. In this case, a cost-effectiveness ratio (CER) – the cost per unit of output or effect – is calculated to compare the alternatives, with the implication that the lower the CER the better. […] CBA seeks to answer whether or not a particular output is worth the cost. CEA seeks to answer the question of which among two or more alternatives provides the most output for a given cost, or the lowest cost for a given output. CBA therefore asks whether or not we should do things, while CEA asks what is the best way to do things that are worth doing.”
“The major preoccupation of economic evaluation in health care has been measurement of costs and benefits – what should be measured and how it should be measured – rather than the aims of the analysis. […] techniques such as CBA and CEA are […] defined by measurement rather than economic theory. […] much of the economic evaluation literature gives the label cost-minimisation analysis to what was traditionally called CEA, and specifically restricts the term CEA to choices between alternatives that have similar types of effects but differing levels of effect and costs. […] It can be difficult to specify what the appropriate measure of effect is in CEA. […] care is […] required to ensure that whichever measure of effect is chosen does not mislead or bias the analysis – for example, if one intervention is better at preventing non-fatal heart attacks but is worse at preventing fatal attacks, the choice of effect measure will be crucial.”
“[Health] indicators are usually measures of the value of health, although not usually expressed in money terms. As a result, a third important type of economic evaluation has arisen, called cost–utility analysis (CUA). […] the health measure usually used in CUA is gains in quality-adjusted life years […] it is essentially a composite measure of gains in life expectancy and health-related quality of life. […] the most commonly used practice in CUA is to use the QALY and moreover to assume that each QALY is worth the same irrespective of who gains it and by what route. […] Similarly, CBA in practice focuses on sums of benefits compared to sums of costs, not on the distribution of these between people with different characteristics. It also does not usually take account of whether society places different weights on benefits experienced by different people; for example, there is evidence that many people would prefer health services to put a higher priority on improving the health of younger rather than older people (Tsuchiya et al., 2003).”
“Because CEA does not give a direct comparison between the value of effects and costs, decision rules are far more complex than for CBA and are bounded by restrictions on their applicability. The problem arises when the alternatives being appraised do not have equal costs or benefits, but instead there is a trade-off: the greater benefit that one of the alternatives has is achieved at a higher cost [this is not a rare occurrence, to put it mildly…]. The key problem is how that trade-off is to be represented, and how it can then be interpreted; essentially, encapsulating cost-effectiveness in a single index that can unambiguously be interpreted for decision-making purposes.”
“Although cost-effectiveness analysis can be very useful, its essential inability to help in the kind of choices that cost–benefit analysis allows – an absolute recommendation for a particular activity rather than one contingent on a comparison with alternatives – has proved such a strong limitation that means have been sought to overcome it. The key to this has been the cost-effectiveness threshold or ceiling ratio, which is essentially a level of the CER that any intervention must meet if it is to be regarded as cost-effective. It can also be interpreted as the decision maker’s willingness to pay for a unit of effectiveness. […] One of the problems with this kind of approach is that it is no longer consistent with the conventional aim of CEA. Except under special conditions, it is not consistent with output maximisation constrained by a budget. […] It is useful to distinguish between a comparator that is essentially ‘do nothing about the problem […]’ and one that is ‘another way of doing something about that problem’. The CER that arises from the second of these is […] an incremental cost-effectiveness ratio (ICER) […] in most cases the ICER is the correct measure to use. […] A problem [with using ICERs] is that if only the ICER is evaluated, it must be assumed that the alternative used in the comparator is itself cost-effective; if it is not, the ICER may mislead.”
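The CER/ICER logic in the quoted passage is easy to illustrate with a few lines of code. This is a minimal sketch with entirely made-up costs, QALY gains, and threshold – none of the numbers come from the book – but it shows why the ICER against the relevant alternative, rather than the CER against ‘do nothing’, is usually the correct measure:

```python
# Hypothetical illustration of CER vs. ICER; all numbers are invented.
# Costs in some currency unit, effects in QALYs gained per patient.

def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# 'Do nothing' baseline vs. two hypothetical interventions A and B.
cost_nothing, effect_nothing = 0.0, 0.0
cost_a, effect_a = 10_000.0, 2.0   # intervention A
cost_b, effect_b = 18_000.0, 3.0   # intervention B: more benefit at higher cost

# CERs against 'do nothing':
cer_a = icer(cost_a, effect_a, cost_nothing, effect_nothing)  # 5,000 per QALY
cer_b = icer(cost_b, effect_b, cost_nothing, effect_nothing)  # 6,000 per QALY

# ICER of B against A (the comparison that matters if A is itself cost-effective):
icer_b_vs_a = icer(cost_b, effect_b, cost_a, effect_a)        # 8,000 per QALY

# With a hypothetical ceiling ratio (willingness to pay) of 7,000 per QALY,
# B looks cost-effective versus doing nothing (6,000 < 7,000) but not versus
# A (8,000 > 7,000) - evaluating only the CER against 'do nothing' would mislead.
print(cer_a, cer_b, icer_b_vs_a)
```

Note how this also illustrates the caveat at the end of the quote: the ICER of B versus A is only meaningful if A is itself a cost-effective comparator.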
“The basis of economic costing is […] quite distinct from accounting or financial cost approaches. The process of costing involves three steps: (1) identify and describe the changes in resource use, both increases and decreases, that are associated with the options to be evaluated; (2) quantify those changes in resource use in physical units; and (3) value those resources. […] many markets are not fully competitive. For example, the wages paid to doctors may be a reflection of the lobbying power of medical associations or restrictions to licensing, rather than the value of their skills […] The prices of drugs may reflect the effect of government regulations on licensing, pricing and intellectual property. Deviations of price from opportunity cost may arise from factors such as imperfect competition […] or from distortions to markets created by government interventions. Where these are known, prices should be adjusted […] In practice, such adjustments are difficult to make and would rely on good information on the underlying costs of production, which is often not available. Further, where the perspective is that of the health service, there is an argument for not adjusting prices, on the grounds that the prevailing prices, even if inefficient, are those they must pay and are relevant to their budget. […] Where prices are used, it is important to consider whether the option being evaluated will, if implemented, result in price changes. […] Valuing resource use becomes still more difficult in cases where there are no markets. This includes the value of patients’ time in seeking and receiving care or of caregivers’ time in providing informal supportive care. The latter can be an important element of costs and […] may be particularly important in the evaluation of health care options that rely on such inputs.”
“[A]lthough the emphasis in economic evaluation is on marginal changes in costs and benefits, the available data frequently relate to average costs […] There are two issues with using average cost data. First, the addition to or reduction in costs from increased or decreased resource use may be higher, lower or the same as the average cost. Unfortunately, knowing what the relationship is between average and marginal cost requires information on the latter – the absence of which is the reason average costs are used! Secondly, average cost data obscure potentially important issues with respect to the technical efficiency of providers. If average costs are derived in one setting, for example a hospital, this assumes that the hospital is using the optimal combination of inputs. If average costs are derived from multiple settings, they will include a variety of underlying production technologies and a variety of underlying levels of production efficiency. Average costs are therefore less than ideal, because they comprise a ‘black box’ of underlying cost and production decisions. […] Approaches to costing fall into two broad types: macro- or ‘top-down’ costing, and micro- or ‘bottom-up’ costing […] distinguished largely on the basis of the level of disaggregation […] A top-down approach may involve using pre-existing data on total or average costs and apportioning these in some way to the options being evaluated. […] In contrast, a bottom-up approach identifies, quantifies and values resources in a disaggregated way, so that each element of costs is estimated individually and they are summed up at the end. […] The separation of top-down and bottom-up costing approaches is not always clear. For example, often top-down studies are used to calculate unit costs, which are then combined with resource use data in bottom-up studies.”
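The bottom-up vs. top-down distinction in the quote above can be sketched in a few lines. All resource items, quantities, and prices below are invented for illustration; the point is just the difference in structure between the two approaches:

```python
# Hypothetical sketch of the two costing approaches; all numbers are invented.

# Bottom-up: identify resource items, quantify use in physical units,
# value each at a unit cost, then sum.
resource_use = {          # physical units per patient
    "gp_visits": 3,
    "nurse_hours": 2.5,
    "drug_doses": 28,
}
unit_costs = {            # value per unit, in some currency
    "gp_visits": 40.0,
    "nurse_hours": 25.0,
    "drug_doses": 1.5,
}
bottom_up = sum(resource_use[k] * unit_costs[k] for k in resource_use)
# 3*40 + 2.5*25 + 28*1.5 = 224.5 per patient

# Top-down: take a pre-existing aggregate figure and apportion it,
# here crudely by patient numbers.
total_clinic_cost = 1_000_000.0
programme_patients = 4_000
all_patients = 20_000
top_down = (total_clinic_cost * programme_patients / all_patients) / programme_patients
# = 50.0 per patient: an average that hides the underlying production decisions

print(bottom_up, top_down)
```

The top-down figure is exactly the kind of ‘black box’ average cost the quote warns about: it tells you nothing about which resources drive it or whether the provider is producing efficiently.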
“Health care programmes can affect both length and quality of life; these in turn interact with both current and future health care use, relating both to the condition of interest and to other conditions. Weinstein and Stason (1977) argue that the cost of ‘saving’ life in one way should include the future costs to the health service of death from other causes. […] In practice, different analysts respond to this issue in different ways: examples may be found of economic evaluations of mammography screening that do […] and do not […] incorporate future health care costs. Methodological differences of this sort reduce the ability to make valid comparisons between results. In practical terms, this issue is a matter of researcher discretion”.
The stuff included in the last paragraph above is closely linked to material covered in the biodemography text I’m currently reading, and I expect to cover related topics in some detail in the future here on the blog. Below are a few final observations from the book about discounting:
“It is generally accepted that future costs should be discounted in an economic evaluation and, in CBA, it is also relatively non-controversial that benefits, in monetary terms, should also be discounted. In contrast, there is considerable debate surrounding the issue of whether to discount health outcomes such as QALYs, and what the appropriate discount rate is. […] The debate […] concentrates on the issue of whether people have a time preference for receiving health benefits now rather than in the future in the same way that they might have a time preference for gaining monetary benefits now rather than later in life. Arguments both for and against this view are plausible, and the issue is currently unresolved. […] The effect of not discounting health benefits is to improve the cost-effectiveness of all health care programmes that have benefits beyond the current time period, because not discounting increases the magnitude of the health benefits. But as well as affecting the apparent cost-effectiveness of programmes relative to some benchmark or threshold, the choice of whether to discount will also affect the cost-effectiveness of different health care programmes relative to each other […] Discounting health benefits tends to make those health care programmes with benefits realised mostly in the future, such as prevention, less cost-effective relative to those with benefits realised mostly in the present, such as cure.”
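The effect of discounting on prevention vs. cure described in the quote is easy to see with a toy calculation. The benefit streams and the 3.5% rate below are my own hypothetical choices, not numbers from the book:

```python
# Hypothetical sketch: discounting future health benefits (QALYs).

def present_value(benefits_by_year, rate):
    """Sum of yearly benefits discounted back to year 0 at the given annual rate."""
    return sum(b / (1 + rate) ** t for t, b in enumerate(benefits_by_year))

# Both programmes deliver 10 QALYs in total (invented numbers):
cure = [10] + [0] * 29            # all benefit realised immediately (year 0)
prevention = [0] * 29 + [10]      # all benefit realised in year 29

undiscounted = (present_value(cure, 0.0), present_value(prevention, 0.0))
discounted = (present_value(cure, 0.035), present_value(prevention, 0.035))

# Undiscounted, the two programmes tie at 10 QALYs each. At 3.5% the
# prevention benefit shrinks to 10 / 1.035**29, roughly 3.7 QALYs, so with
# equal costs the cure programme now looks far more cost-effective - and
# whether health benefits should be discounted at all is, as the quote
# notes, still debated.
print(undiscounted, discounted)
```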
Here’s one of my previous posts in the series about the book. In this post I’ll cover material dealing with two acute hyperglycemia-related diabetic complications (DKA and HHS – see below…) as well as multiple topics related to diabetes and stroke. I’ll start out with a few quotes from the book about DKA and HHS:
“DKA [diabetic ketoacidosis] is defined by a triad of hyperglycemia, ketosis, and acidemia and occurs in the absolute or near-absolute absence of insulin. […] DKA accounts for the bulk of morbidity and mortality in children with T1DM. National population-based studies estimate DKA mortality at 0.15% in the United States (4), 0.18–0.25% in Canada (4, 5), and 0.31% in the United Kingdom (6). […] Rates reach 25–67% in those who are newly diagnosed (4, 8, 9). The rates are higher in younger children […] The risk of DKA among patients with pre-existing diabetes is 1–10% per person annually […] DKA can present with mild-to-severe symptoms. […] polyuria and polydipsia […] patients may present with signs of dehydration, such as tachycardia and dry mucous membranes. […] Vomiting, abdominal pain, malaise, and weight loss are common presenting symptoms […] Signs related to the ketoacidotic state include hyperventilation with deep breathing (Kussmaul’s respiration) which is a compensatory respiratory response to an underlying metabolic acidosis. Acetonemia may cause a fruity odor to the breath. […] Elevated glucose levels are almost always present; however, euglycemic DKA has been described (19). Anion-gap metabolic acidosis is the hallmark of this condition and is caused by elevated ketone bodies.”
“Clinically significant cerebral edema occurs in approximately 1% of patients with diabetic ketoacidosis […] DKA-related cerebral edema may represent a continuum. Mild forms resulting in subtle edema may result in modest mental status abnormalities whereas the most severe manifestations result in overt cerebral injury. […] Cerebral edema typically presents 4–12 h after the treatment for DKA is started (28, 29), but can occur at any time. […] Increased intracranial pressure with cerebral edema has been recognized as the leading cause of morbidity and mortality in pediatric patients with DKA (59). Mortality from DKA-related cerebral edema in children is high, up to 90% […] and accounts for 60–90% of the mortality seen in DKA […] many patients are left with major neurological deficits (28, 31, 35).”
“The hyperosmolar hyperglycemic state (HHS) is also an acute complication that may occur in patients with diabetes mellitus. It is seen primarily in patients with T2DM and has previously been referred to as “hyperglycemic hyperosmolar non-ketotic coma” or “hyperglycemic hyperosmolar non-ketotic state” (13). HHS is marked by profound dehydration and hyperglycemia and often by some degree of neurological impairment. The term hyperglycemic hyperosmolar state is used because (1) ketosis may be present and (2) there may be varying degrees of altered sensorium besides coma (13). Like DKA, the basic underlying disorder is inadequate circulating insulin, but there is often enough insulin to inhibit free fatty acid mobilization and ketoacidosis. […] Up to 20% of patients diagnosed with HHS do not have a previous history of diabetes mellitus (14). […] Kitabchi et al. estimated the rate of hospital admissions due to HHS to be lower than DKA, accounting for less than 1% of all primary diabetic admissions (13). […] Glucose levels rise in the setting of relative insulin deficiency. The low levels of circulating insulin prevent lipolysis, ketogenesis, and ketoacidosis (62) but are unable to suppress hyperglycemia, glucosuria, and water losses. […] HHS typically presents with one or more precipitating factors, similar to DKA. […] Acute infections […] account for approximately 32–50% of precipitating causes (13). […] The mortality rates for HHS vary between 10 and 20% (14, 93).”
It should perhaps be noted explicitly that the mortality rates for these complications are particularly high in either very young individuals (DKA) or elderly individuals (HHS) who might have multiple comorbidities. Relatedly, HHS often develops acutely specifically in settings where the precipitating factor is something really unpleasant like pneumonia or a cardiovascular event, so a high-ish mortality rate is perhaps not that surprising. Nor is it surprising that very young brains are particularly vulnerable in the context of DKA (I already discussed some of the research on these matters in some detail in an earlier post about this book).
This post to some extent covered the topic of ‘stroke in general’; however, I wanted to include here some more data specifically on the diabetes-related aspects of this topic. Here’s a quote to start off with:
“DM [Diabetes Mellitus] has been consistently shown to represent a strong independent risk factor of ischemic stroke. […] The contribution of hyperglycemia to increased stroke risk is not proven. […] the relationship between hyperglycemia and stroke remains subject of debate. In this respect, the association between hyperglycemia and cerebrovascular disease is established less strongly than the association between hyperglycemia and coronary heart disease. […] The course of stroke in patients with DM is characterized by higher mortality, more severe disability, and higher recurrence rate […] It is now well accepted that the risk of stroke in individuals with DM is equal to that of individuals with a history of myocardial infarction or stroke, but no DM (24–26). This was confirmed in a recently published large retrospective study which enrolled all inhabitants of Denmark (more than 3 million people out of whom 71,802 patients with DM) and were followed-up for 5 years. In men without DM the incidence of stroke was 2.5 in those without and 7.8% in those with prior myocardial infarction, whereas in patients with DM it was 9.6 in those without and 27.4% in those with history of myocardial infarction. In women the numbers were 2.5, 9.0, 10.0, and 14.2%, respectively (22).”
That study incidentally is very nice for me in particular to know about, given that I am a Danish diabetic. I do not here face any of the usual tiresome questions about ‘external validity’ and issues pertaining to ‘extrapolating out of sample’ – not only is it quite likely I’ve actually looked at some of the data used in that analysis myself, I also know that I am almost certainly one of the people included in the analysis. Of course you need other data as well to assess risk (e.g. age, see the previously linked post), but this is pretty clean as far as it goes. Moving on…
“The number of deaths from stroke attributable to DM is highest in low-and-middle-income countries […] the relative risk conveyed by DM is greater in younger subjects […] It is not well known whether type 1 or type 2 DM affects stroke risk differently. […] In the large cohort of women enrolled in the Nurses’ Health Study (116,316 women followed for up to 26 years) it was shown that the incidence of total stroke was fourfold higher in women with type 1 DM and twofold higher among women with type 2 DM than for non-diabetic women (33). […] The impact of DM duration as a stroke risk factor has not been clearly defined. […] In this context it is important to note that the actual duration of type 2 DM is difficult to determine precisely […and more generally: “the date of onset of a certain chronic disease is a quantity which is not defined as precisely as mortality“, as Yashin et al. put it – I also talked about this topic in my previous post, but it’s important when you’re looking at these sorts of things and is worth reiterating – US]. […] Traditional risk factors for stroke such as arterial hypertension, dyslipidemia, atrial fibrillation, heart failure, and previous myocardial infarction are more common in people with DM […]. However, the impact of DM on stroke is not just due to the higher prevalence of these risk factors, as the risk of mortality and morbidity remains over twofold increased after correcting for these factors (4, 37). […] It is informative to distinguish between factors that are non-specific and specific to DM. DM-specific factors, including chronic hyperglycemia, DM duration, DM type and complications, and insulin resistance, may contribute to an elevated stroke risk either by amplification of the harmful effect of other “classical” non-specific risk factors, such as hypertension, or by acting independently.”
More than a few variables are known to impact stroke risk, but the fact that many of the risk factors are related to each other (‘fat people often also have high blood pressure’) makes it hard to figure out which variables are most important, how they interact with each other, etc. One might in that context perhaps conceptualize the metabolic syndrome (MS) as a sort of indicator variable indicating whether a relatively common set of such related potential risk factors of interest is present or not – it is worth noting in that context that the authors include in the text the observation that: “it is yet uncertain if the whole concept of the MS entails more than its individual components. The clustering of risk factors complicates the assessment of the contribution of individual components to the risk of vascular events, as well as assessment of synergistic or interacting effects.” MS confers a two- to threefold increased stroke risk, depending on the definition and the population analyzed, so there’s definitely some relevant stuff included in that box; but in the context of developing new treatment options and better assessing risk it might be helpful to – to put it simplistically – know whether variable X is significantly more important than variable Y (and how the variables interact, etc.). But this sort of information is hard to get.
There’s more than one type of stroke, and the way diabetes modifies the risk of various stroke types is not completely clear:
“Most studies have consistently shown that DM is an important risk factor for ischemic stroke, while the incidence of hemorrhagic stroke in subjects with DM does not seem to be increased. Consequently, the ratio of ischemic to hemorrhagic stroke is higher in patients with DM than in those stroke patients without DM [recall the base rates I’ve mentioned before in the coverage of this book: 80% of strokes are ischemic strokes in Western countries, and 15% hemorrhagic] […] The data regarding an association between DM and the risk of hemorrhagic stroke are quite conflicting. In most series no increased risk of cerebral hemorrhage was found (10, 101), and in the Copenhagen Stroke Registry, hemorrhagic stroke was even six times less frequent in diabetic patients than in non-diabetic subjects (102). […] However, in another prospective population-based study DM was associated with an increased risk of primary intracerebral hemorrhage (103). […] The significance of DM as a risk factor of hemorrhagic stroke could differ depending on ethnicity of subjects or type of DM. In the large Nurses’ Health Study type 1 DM increased the risk of hemorrhagic stroke by 3.8 times while type 2 DM did not increase such a risk (96). […] It is yet unclear if DM predominantly predisposes to either large or small vessel ischemic stroke. Nevertheless, lacunar stroke (small, less than 15mm in diameter infarction, cyst-like, frequently multiple) is considered to be the typical type of stroke in diabetic subjects (105–107), and DM may be present in up to 28–43% of patients with cerebral lacunar infarction (108–110).”
The Danish results mentioned above might not be as useful to me as they were before if the type is important, because the majority of the diabetics included were type 2 diabetics. I know from personal experience that it is difficult to type-identify diabetics using the Danish registry data available if you want to work with population-level data, and any scheme attempting this will be subject to potentially large misidentification problems. Some subgroups can presumably be correctly identified using diagnostic codes, but a very large number of individuals will be left out of the analyses if you only rely on identification strategies where you’re (at least reasonably?) certain about the type. I’ve worked on these identification problems during my graduate work, so perhaps a few more things are worth mentioning here. In the context of diabetic subgroup analyses, misidentification is in general a much larger problem for type 1 results than for type 2 results; unless the study design takes the large prevalence difference between the two conditions into account, the type 1 sample will be much smaller than the type 2 sample in pretty much all analytical contexts, so a small number of misidentified type 2 individuals can have a large impact on the results of the type 1 sample. Type 1s misidentified as type 2 individuals are in general to be expected to be a much smaller problem in terms of the validity of the type 2 analysis; misidentification of that type will cause a loss of power in the type 1 subgroup analysis, which is already low to start with (and it’ll also make the type 1 subgroup analysis even more vulnerable to misidentified type 2s), but it won’t change the results of the type 2 subgroup analysis in any significant way.
Relatedly, even if enough type 2 patients are misidentified to cause problems with the interpretation of the type 1 subgroup analysis, this would not on its own be a good reason to doubt the results of the type 2 subgroup analysis. Another thing worth noting is that misidentification will tend to lead to ‘mixing’, i.e. it will make the subgroup results look more similar than they really are. So when outcomes do differ between the type 1 and the type 2 individuals, this might be taken as an indicator that something potentially interesting is going on, because most analyses will suffer from some level of misidentification, which will tend to reduce the power of tests of group differences.
What about stroke outcomes? A few observations on that topic were included above, but the book has a lot more to say about it:
“DM is an independent risk factor of death from stroke […]. Tuomilehto et al. (35) calculated that 16% of all stroke mortality in men and 33% in women could be directly attributed to DM. Patients with DM have higher hospital and long-term stroke mortality, more pronounced residual neurological deficits, and more severe disability after acute cerebrovascular accidents […]. The 1-year mortality rate, for example, was twofold higher in diabetic patients compared to non-diabetic subjects (50% vs. 25%) […]. Only 20% of people with DM survive over 5 years after the first stroke and half of these patients die within the first year (36, 128). […] The mechanisms underlying the worse outcome of stroke in diabetic subjects are not fully understood. […] Regarding prevention of stroke in patients with DM, it may be less relevant than in non-DM subjects to distinguish between primary and secondary prevention as all patients with DM are considered to be high-risk subjects regardless of the history of cerebrovascular accidents or the presence of clinical and subclinical vascular lesions. […] The influence of the mode of antihyperglycemic treatment on the risk of stroke is uncertain.”
Control of blood pressure is very important in the diabetic setting:
“There are no doubts that there is a linear relation between elevated systolic blood pressure and the risk of stroke, both in people with or without DM. […] Although DM and arterial hypertension represent significant independent risk factors for stroke if they co-occur in the same patient the risk increases dramatically. A prospective study of almost 50 thousand subjects in Finland followed up for 19 years revealed that the hazard ratio for stroke incidence was 1.4, 2.0, 2.5, 3.5, and 4.5 and for stroke mortality was 1.5, 2.6, 3.1, 5.6, and 9.3, respectively, in subjects with an isolated modestly elevated blood pressure (systolic 140–159/diastolic 90–94 mmHg), isolated more severe hypertension (systolic >159 mmHg, diastolic >94 mmHg, or use of antihypertensive drugs), with isolated DM only, with both DM and modestly elevated blood pressure, and with both DM and more severe hypertension, relative to subjects without either of the risk factors (168). […] it remains unclear whether some classes of antihypertensive agents provide a stronger protection against stroke in diabetic patients than others. […] effective antihypertensive treatment is highly beneficial for reduction of stroke risk in diabetic patients, but the advantages of any particular class of antihypertensive medications are not substantially proven.”
Treatment of dyslipidemia is also very important, but here it does seem to matter how you treat it:
“It seems that the beneficial effect of statins is dose-dependent. The lower the LDL level that is achieved the stronger the cardiovascular protection. […] Recently, the results of the meta-analysis of 14 randomized trials of statins in 18,686 patients with DM had been published. It was calculated that statins use in diabetic patients can result in a 21% reduction of the risk of any stroke per 1 mmol/l reduction of LDL achieved […] There is no evidence from trials that supports efficacy of fibrates for stroke prevention in diabetic patients. […] No reduction of stroke risk by fibrates was shown also in a meta-analysis of eight trials enrolling 12,249 patients with type 2 DM (204).”
“Significant reductions in stroke risk in diabetic patients receiving antiplatelet therapy were found in large-scale controlled trials (205). It appears that based on the high incidence of stroke and prevalence of stroke risk factors in the diabetic population the benefits of routine aspirin use for primary and secondary stroke prevention outweigh its potential risk of hemorrhagic stroke especially in patients older than 30 years having at least one additional risk factor (206). […] both guidelines issued by the AHA/ADA or the ESC/EASD on the prevention of cardiovascular disease in patients with DM support the use of aspirin in a dose of 50–325 mg daily for the primary prevention of stroke in subjects older than 40 years of age and additional risk factors, such as DM […] The newer antiplatelet agent, clopidogrel, was more efficacious in prevention of ischemic stroke than aspirin with greater risk reduction in the diabetic cohort especially in those treated with insulin compared to non-diabetics in CAPRIE trial (209). However, the combination of aspirin and clopidogrel does not appear to be more efficacious and safe compared to clopidogrel or aspirin alone”.
When you treat all risk factors aggressively, it turns out that the elevated stroke risk can be substantially reduced. Again the data on this stuff is from Denmark:
“Gaede et al. (216) have shown in the Steno 2 study that intensive multifactorial intervention aimed at correction of hyperglycemia, hypertension, dyslipidemia, and microalbuminuria along with aspirin use resulted in a reduction of cardiovascular morbidity including non-fatal stroke […] recently the results of the extended 13.3 years follow-up of this study were presented and the reduction of cardiovascular mortality by 57% and morbidity by 59% along with the reduction of the number of non-fatal stroke (6 vs. 30 events) in intensively treated group was convincingly demonstrated (217). Antihypertensive, hypolipidemic treatment, use of aspirin should thus be recommended as either primary or secondary prevention of stroke for patients with DM.”
“The goal of this monograph is to show how questions about the connections between and among aging, health, and longevity can be addressed using the wealth of available accumulated knowledge in the field, the large volumes of genetic and non-genetic data collected in longitudinal studies, and advanced biodemographic models and analytic methods. […] This monograph visualizes aging-related changes in physiological variables and survival probabilities, describes methods, and summarizes the results of analyses of longitudinal data on aging, health, and longevity in humans performed by the group of researchers in the Biodemography of Aging Research Unit (BARU) at Duke University during the past decade. […] the focus of this monograph is studying dynamic relationships between aging, health, and longevity characteristics […] our focus on biodemography/biomedical demography meant that we needed to have an interdisciplinary and multidisciplinary biodemographic perspective spanning the fields of actuarial science, biology, economics, epidemiology, genetics, health services research, mathematics, probability, and statistics, among others.”
The quotes above are from the book’s preface. In case this aspect was not clear from the comments above, this is the kind of book where you’ll randomly encounter sentences like these:
“The simplest model describing negative correlations between competing risks is the multivariate lognormal frailty model. We illustrate the properties of such model for the bivariate case.”
“The time-to-event sub-model specifies the latent class-specific expressions for the hazard rates conditional on the vector of biomarkers Yt and the vector of observed covariates X …”
…which means that some parts of the book are really hard to blog; it simply takes more effort to deal with this stuff here than it’s worth. As a result, my coverage of the book will not provide a remotely ‘balanced view’ of the topics covered in it; I’ll skip a lot of the technical stuff because I don’t think it makes much sense to cover the specific models and algorithms included in the book in detail here. However, I should also emphasize that although the book is in general not an easy read, it’s hard to read because ‘this stuff is complicated’, not because the authors are not trying. The authors in fact make it clear already in the preface that some chapters are easier to read than others, and that some chapters are deliberately written as ‘guideposts and way-stations’, as they put it, in order to make it easier for the reader to find the stuff in which he or she is most interested (“the interested reader can focus directly on the chapters/sections of greatest interest without having to read the entire volume”) – they have definitely given readability aspects some thought, and I very much like the book so far; it’s full of great stuff and it’s very well written.
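To at least gesture at what a sentence like the frailty one above is saying: below is a minimal simulation sketch – entirely mine, with made-up parameter values, not anything taken from the book – of a bivariate lognormal frailty setup, in which negatively correlated individual frailties induce negative dependence between the latent times of two competing risks.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Bivariate normal log-frailties with negative correlation (rho = -0.5);
# exponentiating gives lognormal frailties z1, z2 for the two risks.
rho, var = -0.5, 0.5
cov = [[var, rho * var], [rho * var, var]]
log_z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
z1, z2 = np.exp(log_z[:, 0]), np.exp(log_z[:, 1])

# Conditional on frailty, each risk's event time is exponential with a
# baseline rate scaled by the individual frailty (proportional hazards).
t1 = rng.exponential(1.0 / (0.01 * z1))
t2 = rng.exponential(1.0 / (0.01 * z2))

# The negative frailty correlation shows up as negative dependence
# between the two latent event times.
corr = np.corrcoef(np.log(t1), np.log(t2))[0, 1]
print(corr)
```

The printed correlation comes out clearly negative (though attenuated relative to rho, because the exponential noise dilutes it) – which is the mechanism the quoted sentence is referring to.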
I have had occasion to question a few of the observations they’ve made; for example, I was a bit skeptical about some of the conclusions they drew in chapter 6 (‘Medical Cost Trajectories and Onset of Age-Associated Diseases’), though this was related to what some would certainly consider minor details. In the chapter they describe a model of medical cost trajectories where the post-diagnosis follow-up period is 20 months. This is in my view much too short a follow-up period for drawing conclusions about medical cost trajectories in the context of type 2 diabetes, one of the diseases included in the model; I know this because I’m intimately familiar with the literature on that topic. You need to look 7–10 years ahead to get a proper sense of how this variable develops over time, and it really is highly relevant to include those later years: if you do not, you may miss a large proportion of the total cost, given that a substantial share of the total cost of diabetes relates to complications which tend to take some years to develop. If your cost analysis is based on a follow-up period as short as that of this model, you may also, on a related note, draw faulty conclusions about which medical procedures and subsidies are sensible/cost-effective for these patients, because highly adherent patients may be significantly more expensive in a short-run analysis like this one (they show up to their medical appointments and take their medications…) but much cheaper in the long run (…because they take their medications they don’t go blind or develop kidney failure). But as I say, it’s a minor point; this was one condition out of 20 included in the analysis they present, and if they’d addressed all the things that pedants like me might take issue with, the book would be twice as long and it would likely no longer be readable.
Relatedly, the model they discuss in that chapter is far from unsalvageable; it’s just that one of the components of interest – ‘the difference between post- and pre-diagnosis cost levels associated with an acquired comorbidity’ – is, in the case of at least one disease, highly unlikely to be correct (given the authors’ interpretation of the variable), because there is relevant stuff the model does not include. I found the model quite interesting despite the shortcomings, and the results were definitely surprising. (No, the above does not in my opinion count as an example of coverage of a ‘specific model […] in detail’. Or maybe it does, but I included no equations. On reflection I probably can’t promise much more than that; sometimes the details are interesting…)
Anyway, below I’ve added some quotes from the first few chapters of the book and a few remarks along the way.
“The genetics of aging, longevity, and mortality has become the subject of intensive analyses […]. However, most estimates of genetic effects on longevity in GWAS have not reached genome-wide statistical significance (after applying the Bonferroni correction for multiple testing) and many findings remain non-replicated. Possible reasons for slow progress in this field include the lack of a biologically-based conceptual framework that would drive development of statistical models and methods for genetic analyses of data [here I was reminded of Burnham & Anderson’s coverage, in particular their criticism of mindless ‘Let the computer find out’-strategies – the authors of that chapter seem to share their skepticism…], the presence of hidden genetic heterogeneity, the collective influence of many genetic factors (each with small effects), the effects of rare alleles, and epigenetic effects, as well as molecular biological mechanisms regulating cellular functions. […] Decades of studies of candidate genes show that they are not linked to aging-related traits in a straightforward fashion (Finch and Tanzi 1997; Martin 2007). Recent genome-wide association studies (GWAS) have supported this finding by showing that the traits in late life are likely controlled by a relatively large number of common genetic variants […]. Further, GWAS often show that the detected associations are of tiny size (Stranger et al. 2011).”
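As an aside, the ‘genome-wide statistical significance (after applying the Bonferroni correction for multiple testing)’ bit is easy to make concrete. A back-of-the-envelope sketch – the one-million-variants figure is the conventional rough number used to motivate the standard threshold, not something taken from the book:

```python
from statistics import NormalDist

# Conventional GWAS setting: roughly one million independent common
# variants tested, family-wise error rate held at 0.05.
m = 1_000_000
alpha = 0.05

# Bonferroni-corrected per-test threshold: 0.05 / 1e6 = 5e-8, the usual
# 'genome-wide significance' level.
threshold = alpha / m

# Two-sided z-score a single variant must reach to clear that threshold
# (comes out around 5.45 standard deviations).
z = NormalDist().inv_cdf(1 - threshold / 2)
print(threshold, round(z, 2))
```

A z-score of ~5.45 is a very high bar for effects that are individually tiny, which is one concrete reason so few associations replicate at genome-wide significance.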
I think this ties in well with what I’ve previously read on these and related topics – see e.g. the second-last paragraph quoted in my coverage of Richard Alexander’s book, or some of the remarks included in Roberts et al. Anyway, moving on:
“It is well known from epidemiology that values of variables describing physiological states at a given age are associated with human morbidity and mortality risks. Much less well known are the facts that not only the values of these variables at a given age, but also characteristics of their dynamic behavior during the life course are also associated with health and survival outcomes. This chapter [chapter 8 in the book, US] shows that, for monotonically changing variables, the value at age 40 (intercept), the rate of change (slope), and the variability of a physiological variable, at ages 40–60, significantly influence both health-span and longevity after age 60. For non-monotonically changing variables, the age at maximum, the maximum value, the rate of decline after reaching the maximum (right slope), and the variability in the variable over the life course may influence health-span and longevity. This indicates that such characteristics can be important targets for preventive measures aiming to postpone onsets of complex diseases and increase longevity.”
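The trajectory characteristics mentioned – the value at age 40 (intercept), the rate of change (slope), and the variability – are straightforward to extract from an individual’s longitudinal measurements. A toy sketch with invented numbers (this is my illustration, not the book’s data or estimation method):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single individual: systolic blood pressure measured every
# two years at ages 40-60 (values are made up for illustration).
ages = np.arange(40, 61, 2)
sbp = 120 + 0.8 * (ages - 40) + rng.normal(0, 4, ages.size)

# Least-squares line in (age - 40), so the intercept is the fitted value
# at age 40 and the slope is the rate of change per year.
slope, intercept = np.polyfit(ages - 40, sbp, 1)

# Variability: scatter of the measurements around the fitted trajectory.
residual_sd = np.std(sbp - (intercept + slope * (ages - 40)))

print(round(intercept, 1), round(slope, 2), round(residual_sd, 1))
```

In the book’s framework, each of these three per-individual summaries (and their analogues for non-monotonic variables) then enters the analysis as a predictor of health-span and longevity after age 60.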
The chapter from which the quotes in the next two paragraphs are taken was completely filled with data from the Framingham Heart Study, and it was hard for me to know what to include here and what to leave out – so you should probably just consider the stuff I’ve included below as samples of the sort of observations included in that part of the coverage.
“To mediate the influence of internal or external factors on lifespan, physiological variables have to show associations with risks of disease and death at different age intervals, or directly with lifespan. For many physiological variables, such associations have been established in epidemiological studies. These include body mass index (BMI), diastolic blood pressure (DBP), systolic blood pressure (SBP), pulse pressure (PP), blood glucose (BG), serum cholesterol (SCH), hematocrit (H), and ventricular rate (VR). […] the connection between BMI and mortality risk is generally J-shaped […] Although all age patterns of physiological indices are non-monotonic functions of age, blood glucose (BG) and pulse pressure (PP) can be well approximated by monotonically increasing functions for both genders. […] the average values of body mass index (BMI) increase with age (up to age 55 for males and 65 for females), and then decline for both sexes. These values do not change much between ages 50 and 70 for males and between ages 60 and 70 for females. […] Except for blood glucose, all average age trajectories of physiological indices differ between males and females. Statistical analysis confirms the significance of these differences. In particular, after age 35 the female BMI increases faster than that of males. […] [When comparing women with less than or equal to 11 years of education [‘LE’] to women with 12 or more years of education [HE]:] The average values of BG for both groups are about the same until age 45. Then the BG curve for the LE females becomes higher than that of the HE females until age 85 where the curves intersect. […] The average values of BMI in the LE group are substantially higher than those among the HE group over the entire age interval. […] The average values of BG for the HE and LE males are very similar […] However, the differences between groups are much smaller than for females.”
They also in the chapter compared individuals with short life-spans [‘SL’, died before the age of 75] and those with long life-spans [‘LL’, 100 longest-living individuals in the relevant sample] to see if the variables/trajectories looked different. They did, for example: “trajectories for the LL females are substantially different from those for the SL females in all eight indices. Specifically, the average values of BG are higher and increase faster in the SL females. The entire age trajectory of BMI for the LL females is shifted to the right […] The average values of DBP [diastolic blood pressure, US] among the SL females are higher […] A particularly notable observation is the shift of the entire age trajectory of BMI for the LL males and females to the right (towards an older age), as compared with the SL group, and achieving its maximum at a later age. Such a pattern is markedly different from that for healthy and unhealthy individuals. The latter is mostly characterized by the higher values of BMI for the unhealthy people, while it has similar ages at maximum for both the healthy and unhealthy groups. […] Physiological aging changes usually develop in the presence of other factors affecting physiological dynamics and morbidity/mortality risks. Among these other factors are year of birth, gender, education, income, occupation, smoking, and alcohol use. An important limitation of most longitudinal studies is the lack of information regarding external disturbances affecting individuals in their day-today life.”
I incidentally noted while reading that chapter that a relevant variable ‘lurking in the shadows’ in the context of the male and female BMI trajectories might be changing smoking habits over time. I have not looked at US data on this topic, but I do know that the smoking patterns of Danish males and females during the latter half of the last century were markedly different and changed quite dramatically in just a few decades: a lot more males than females smoked in the ’60s, whereas the proportions of male and female smokers today are much more similar, because a lot of males have given up smoking (I refer Danish readers to this blog post which I wrote some years ago on these topics). The authors of the chapter do incidentally look a little at data on smokers, and they observe that smokers’ BMIs are lower than non-smokers’, and that the smokers’ BMI curve (displaying the relationship between BMI and age) grows at a slower rate than the BMI curve of non-smokers (that this was to be expected is perhaps less clear, at least to me – the authors don’t interpret these specific numbers, they just report them).
“To better address the challenge of “healthy aging” and to reduce economic burdens of aging-related diseases, key factors driving the onset and progression of diseases in older adults must be identified and evaluated. An identification of disease-specific age patterns with sufficient precision requires large databases that include various age-specific population groups. Collections of such datasets are costly and require long periods of time. That is why few studies have investigated disease-specific age patterns among older U.S. adults and there is limited knowledge of factors impacting these patterns. […] Information collected in U.S. Medicare Files of Service Use (MFSU) for the entire Medicare-eligible population of older U.S. adults can serve as an example of observational administrative data that can be used for analysis of disease-specific age patterns. […] In this chapter, we focus on a series of epidemiologic and biodemographic characteristics that can be studied using MFSU.”
“Two datasets capable of generating national level estimates for older U.S. adults are the Surveillance, Epidemiology, and End Results (SEER) Registry data linked to MFSU (SEER-M) and the National Long Term Care Survey (NLTCS), also linked to MFSU (NLTCS-M). […] The SEER-M data are the primary dataset analyzed in this chapter. The expanded SEER registry covers approximately 26 % of the U.S. population. In total, the Medicare records for 2,154,598 individuals are available in SEER-M […] For the majority of persons, we have continuous records of Medicare services use from 1991 (or from the time the person reached age 65 after 1990) to his/her death. […] The NLTCS-M data contain two of the six waves of the NLTCS: namely, the cohorts of years 1994 and 1999. […] In total, 34,077 individuals were followed-up between 1994 and 1999. These individuals were given the detailed NLTCS interview […] which has information on risk factors. More than 200 variables were selected”
In short, these data sets are very large, and contain a lot of information. Here are some results/data:
“Among studied diseases, incidence rates of Alzheimer’s disease, stroke, and heart failure increased with age, while the rates of lung and breast cancers, angina pectoris, diabetes, asthma, emphysema, arthritis, and goiter became lower at advanced ages. [..] Several types of age-patterns of disease incidence could be described. The first was a monotonic increase until age 85–95, with a subsequent slowing down, leveling off, and decline at age 100. This pattern was observed for myocardial infarction, stroke, heart failure, ulcer, and Alzheimer’s disease. The second type had an earlier-age maximum and a more symmetric shape (i.e., an inverted U-shape) which was observed for lung and colon cancers, Parkinson’s disease, and renal failure. The majority of diseases (e.g., prostate cancer, asthma, and diabetes mellitus among them) demonstrated a third shape: a monotonic decline with age or a decline after a short period of increased rates. […] The occurrence of age-patterns with a maximum and, especially, with a monotonic decline contradicts the hypothesis that the risk of geriatric diseases correlates with an accumulation of adverse health events […]. Two processes could be operative in the generation of such shapes. First, they could be attributed to the effect of selection […] when frail individuals do not survive to advanced ages. This approach is popular in cancer modeling […] The second explanation could be related to the possibility of under-diagnosis of certain chronic diseases at advanced ages (due to both less pronounced disease symptoms and infrequent doctor’s office visits); however, that possibility cannot be assessed with the available data […this is because the data sets are based on Medicare claims – US]”
“The most detailed U.S. data on cancer incidence come from the SEER Registry […] about 60 % of malignancies are diagnosed in persons aged 65+ years old […] In the U.S., the estimated percent of cancer patients alive after being diagnosed with cancer (in 2008, by current age) was 13 % for those aged 65–69, 25 % for ages 70–79, and 22 % for ages 80+ years old (compared with 40 % of those aged younger than 65 years old) […] Diabetes affects about 21 % of the U.S. population aged 65+ years old (McDonald et al. 2009). However, while more is known about the prevalence of diabetes, the incidence of this disease among older adults is less studied. […] [In multiple previous studies] the incidence rates of diabetes decreased with age for both males and females. In the present study, we find similar patterns […] The prevalence of asthma among the U.S. population aged 65+ years old in the mid-2000s was as high as 7 % […] older patients are more likely to be underdiagnosed, untreated, and hospitalized due to asthma than individuals younger than age 65 […] asthma incidence rates have been shown to decrease with age […] This trend of declining asthma incidence with age is in agreement with our results.”
“The prevalence and incidence of Alzheimer’s disease increase exponentially with age, with the most notable rise occurring through the seventh and eight decades of life (Reitz et al. 2011). […] whereas dementia incidence continues to increase beyond age 85, the rate of increase slows down [which] suggests that dementia diagnosed at advanced ages might be related not to the aging process per se, but associated with age-related risk factors […] Approximately 1–2 % of the population aged 65+ and up to 3–5 % aged 85+ years old suffer from Parkinson’s disease […] There are few studies of Parkinson’s disease incidence, especially in the oldest old, and its age patterns at advanced ages remain controversial”.
“One disadvantage of large administrative databases is that certain factors can produce systematic over/underestimation of the number of diagnosed diseases or of identification of the age at disease onset. One reason for such uncertainties is an incorrect date of disease onset. Other sources are latent disenrollment and the effects of study design. […] the date of onset of a certain chronic disease is a quantity which is not defined as precisely as mortality. This uncertainty makes difficult the construction of a unified definition of the date of onset appropriate for population studies.”
“[W]e investigated the phenomenon of multimorbidity in the U.S. elderly population by analyzing mutual dependence in disease risks, i.e., we calculated disease risks for individuals with specific pre-existing conditions […]. In total, 420 pairs of diseases were analyzed. […] For each pair, we calculated age patterns of unconditional incidence rates of the diseases, conditional rates of the second (later manifested) disease for individuals after onset of the first (earlier manifested) disease, and the hazard ratio of development of the subsequent disease in the presence (or not) of the first disease. […] three groups of interrelations were identified: (i) diseases whose risk became much higher when patients had a certain pre-existing (earlier diagnosed) disease; (ii) diseases whose risk became lower than in the general population when patients had certain pre-existing conditions […] and (iii) diseases for which “two-tail” effects were observed: i.e., when the effects are significant for both orders of disease precedence; both effects can be direct (either one of the diseases from a disease pair increases the risk of the other disease), inverse (either one of the diseases from a disease pair decreases the risk of the other disease), or controversial (one disease increases the risk of the other, but the other disease decreases the risk of the first disease from the disease pair). In general, the majority of disease pairs with increased risk of the later diagnosed disease in both orders of precedence were those in which both the pre-existing and later occurring diseases were cancers, and also when both diseases were of the same organ. […] Generally, the effect of dependence between risks of two diseases diminishes with advancing age. […] Identifying mutual relationships in age-associated disease risks is extremely important since they indicate that development of […] diseases may involve common biological mechanisms.”
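The conditional-rate calculation described above is, at its core, a comparison of incidence rates of the later disease with and without the pre-existing condition. A toy illustration with invented counts (not the study’s numbers):

```python
# Hypothetical counts for a disease pair (A, B): incidence of B in the
# general population vs among individuals with a prior diagnosis of A.
# All four numbers are made up for illustration.
events_overall, pyears_overall = 1200, 400_000   # events / person-years
events_after_a, pyears_after_a = 90, 12_000

rate_overall = events_overall / pyears_overall   # per person-year
rate_after_a = events_after_a / pyears_after_a

# Ratio > 1: pre-existing A is associated with increased risk of B,
# i.e. group (i) in the quoted classification; < 1 would be group (ii).
ratio = rate_after_a / rate_overall
print(round(ratio, 2))  # 2.5
```

Computing this ratio in both orders of precedence (B after A, and A after B) is what lets the authors distinguish their one-tail effects from the ‘two-tail’ group (iii).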
“in population cohorts, trends in prevalence result from combinations of trends in incidence, population at risk, recovery, and patients’ survival rates. Trends in the rates for one disease also may depend on trends in concurrent diseases, e.g., increasing survival from CHD contributes to an increase in the cancer incidence rate if the individuals who survived were initially susceptible to both diseases.”
Here’s the first post about the book. I finished it a while ago, but I recently realized that I had not completed my intended coverage of the book here on the blog back then, and as some of the book’s material sort-of-kind-of relates to material encountered in a book I’m currently reading (Biodemography of Aging), I decided I might as well finish my coverage now, in order to review some things I might have forgotten in the meantime, by providing coverage here of some of the material from the second half of the book. It’s a nice book with some interesting observations, but as I also pointed out in my first post, it is definitely not an easy read. Below I have included some observations from the book’s second half.
“The aged lung is characterised by airspace enlargement similar to, but not identical with acquired emphysema. Such tissue damage is detected even in non-smokers above 50 years of age as the septa of the lung alveoli are destroyed and the enlarged alveolar structures result in a decreased surface for gas exchange […] Additional problems are that surfactant production decreases with age, increasing the effort needed to expand the lungs during inhalation in the already reduced thoracic cavity volume where the weakened muscles are unable to thoroughly ventilate. […] As ageing is associated with respiratory muscle strength reduction, coughing becomes difficult making it progressively challenging to eliminate inhaled particles, pollens, microbes, etc. Additionally, ciliary beat frequency (CBF) slows down with age impairing the lungs’ first line of defence: mucociliary clearance, as the cilia can no longer repel invading microorganisms and particles. Consequently e.g. bacteria can more easily colonise the airways leading to infections that are frequent in the pulmonary tract of the older adult.”
“With age there are dramatic changes in neutrophil function, including reduced chemotaxis, phagocytosis and bactericidal mechanisms […] reduced bactericidal function will predispose to infection but the reduced chemotaxis also has consequences for lung tissue as this results in increased tissue bystander damage from neutrophil elastases released during migration […] It is currently accepted that alterations in pulmonary PPAR profile, more precisely loss of PPARγ activity, can lead to inflammation, allergy, asthma, COPD, emphysema, fibrosis, and cancer […]. Since it has been reported that PPARγ activity decreases with age, this provides a possible explanation for the increasing incidence of these lung diseases and conditions in older individuals.”
“Age is an important risk factor for cancer and subjects aged over 60 also have a higher risk of comorbidities. Approximately 50 % of neoplasms occur in patients older than 70 years […] a major concern for poor prognosis is with cancer patients over 70–75 years. These patients have a lower functional reserve, a higher risk of toxicity after chemotherapy, and an increased risk of infection and renal complications that lead to a poor quality of life. […] [Whereas] there is a difference in organs with higher cancer incidence in developed versus developing countries [,] incidence increases with ageing almost irrespective of country […] The findings from Surveillance, Epidemiology and End Results Program [SEER – incidentally I likely shall at some point discuss this one in much more detail, as the aforementioned biodemography textbook covers this data in a lot of detail.. – US] show that almost a third of all cancers are diagnosed after the age of 75 years and 70 % of cancer-related deaths occur after the age of 65 years. […] The traditional clinical trial focus is on younger and healthier patients, i.e. with few or no co-morbidities. These restrictions have resulted in a lack of data about the optimal treatment for older patients and a poor evidence base for therapeutic decisions. […] In the older patient, neutropenia, anemia, mucositis, cardiomyopathy and neuropathy — the toxic effects of chemotherapy — are more pronounced […] The correction of comorbidities and malnutrition can lead to greater safety in the prescription of chemotherapy […] Immunosenescence is a general classification for changes occurring in the immune system during the ageing process, as the distribution and function of cells involved in innate and adaptive immunity are impaired or remodelled […] Immunosenescence is considered a major contributor to cancer development in aged individuals”.
“Dementia and age-related vision loss are major causes of disability in our ageing population and it is estimated that a third of people aged over 75 are affected. […] age is the largest risk factor for the development of neurodegenerative diseases […] older patients with comorbidities such as atherosclerosis, type II diabetes or those suffering from repeated or chronic systemic bacterial and viral infections show earlier onset and progression of clinical symptoms […] analysis of post-mortem brain tissue from healthy older individuals has provided evidence that the presence of misfolded proteins alone does not correlate with cognitive decline and dementia, implying that additional factors are critical for neural dysfunction. We now know that innate immune genes and life-style contribute to the onset and progression of age-related neuronal dysfunction, suggesting that chronic activation of the immune system plays a key role in the underlying mechanisms that lead to irreversible tissue damage in the CNS. […] Collectively these studies provide evidence for a critical role of inflammation in the pathogenesis of a range of neurodegenerative diseases, but the factors that drive or initiate inflammation remain largely elusive.”
“The effect of infection, mimicked experimentally by administration of bacterial lipopolysaccharide (LPS) has revealed that immune to brain communication is a critical component of a host organism’s response to infection and a collection of behavioural and metabolic adaptations are initiated over the course of the infection with the purpose of restricting the spread of a pathogen, optimising conditions for a successful immune response and preventing the spread of infection to other organisms. These behaviours are mediated by an innate immune response and have been termed ‘sickness behaviours’ and include depression, reduced appetite, anhedonia, social withdrawal, reduced locomotor activity, hyperalgesia, reduced motivation, cognitive impairment and reduced memory encoding and recall […]. Metabolic adaptation to infection include fever, altered dietary intake and reduction in the bioavailability of nutrients that may facilitate the growth of a pathogen such as iron and zinc. These behavioural and metabolic adaptions are evolutionary highly conserved and also occur in humans”.
“Sickness behaviour and transient microglial activation are beneficial for individuals with a normal, healthy CNS, but in the ageing or diseased brain the response to peripheral infection can be detrimental and increases the rate of cognitive decline. Aged rodents exhibit exaggerated sickness and prolonged neuroinflammation in response to systemic infection […] Older people who contract a bacterial or viral infection or experience trauma postoperatively, also show exaggerated neuroinflammatory responses and are prone to develop delirium, a condition which results in a severe short term cognitive decline and a long term decline in brain function […] Collectively these studies demonstrate that peripheral inflammation can increase the accumulation of two neuropathological hallmarks of AD, further strengthening the hypothesis that inflammation i[s] involved in the underlying pathology. […] Studies from our own laboratory have shown that AD patients with mild cognitive impairment show a fivefold increased rate of cognitive decline when contracting a systemic urinary tract or respiratory tract infection […] Apart from bacterial infection, chronic viral infections have also been linked to increased incidence of neurodegeneration, including cytomegalovirus (CMV). This virus is ubiquitously distributed in the human population, and along with other age-related diseases such as cardiovascular disease and cancer, has been associated with increased risk of developing vascular dementia and AD [66, 67].”
“Frailty is associated with changes to the immune system, importantly the presence of a pro-inflammatory environment and changes to both the innate and adaptive immune system. Some of these changes have been demonstrated to be present before the clinical features of frailty are apparent suggesting the presence of potentially modifiable mechanistic pathways. To date, exercise programme interventions have shown promise in the reversal of frailty and related physical characteristics, but there is no current evidence for successful pharmacological intervention in frailty. […] In practice, acute illness in a frail person results in a disproportionate change in a frail person’s functional ability when faced with a relatively minor physiological stressor, associated with a prolonged recovery time […] Specialist hospital services such as surgery, hip fractures and oncology have now begun to recognise frailty as an important predictor of mortality and morbidity.”
I should probably mention here that this is another area where there’s an overlap between this book and the biodemography text I’m currently reading; chapter 7 of the latter text is about ‘Indices of Cumulative Deficits’ and covers this kind of stuff in a lot more detail than does this one, including e.g. detailed coverage of relevant statistical properties of one such index. Anyway, back to the coverage:
“Population based studies have demonstrated that the incidence of infection and subsequent mortality is higher in populations of frail people. […] The prevalence of pneumonia in a nursing home population is 30 times higher than the general population [39, 40]. […] The limited data available demonstrates that frailty is associated with a state of chronic inflammation. There is also evidence that inflammageing predates a diagnosis of frailty suggesting a causative role. […] A small number of studies have demonstrated a dysregulation of the innate immune system in frailty. Frail adults have raised white cell and neutrophil count. […] High white cell count can predict frailty at a ten year follow up. […] A recent meta-analysis and four individual systematic reviews have found beneficial evidence of exercise programmes on selected physical and functional ability […] exercise interventions may have no positive effect in operationally defined frail individuals. […] To date there is no clear evidence that pharmacological interventions improve or ameliorate frailty.”
“[A]s we get older the time and intensity at which we exercise is severely reduced. Physical inactivity now accounts for a considerable proportion of age-related disease and mortality. […] Regular exercise has been shown to improve neutrophil microbicidal functions which reduce the risk of infectious disease. Exercise participation is also associated with increased immune cell telomere length, and may be related to improved vaccine responses. The anti-inflammatory effect of regular exercise and negative energy balance is evident by reduced inflammatory immune cell signatures and lower inflammatory cytokine concentrations. […] Reduced physical activity is associated with a positive energy balance leading to increased adiposity and subsequently systemic inflammation. […] Elevated neutrophil counts accompany increased inflammation with age and the increased ratio of neutrophils to lymphocytes is associated with many age-related diseases including cancer. Compared to more active individuals, less active and overweight individuals have higher circulating neutrophil counts. […] little is known about the intensity, duration and type of exercise which can provide benefits to neutrophil function. […] it remains unclear whether exercise and physical activity can override the effects of NK cell dysfunction in the old. […] A considerable number of studies have assessed the effects of acute and chronic exercise on measures of T-cell immunesenescence including T cell subsets, phenotype, proliferation, cytokine production, chemotaxis, and co-stimulatory capacity. […] Taken together exercise appears to promote an anti-inflammatory response which is mediated by altered adipocyte function and improved energy metabolism leading to suppression of pro-inflammatory cytokine production in immune cells.”
“This book is written to provide […] a useful balance of theoretical treatment, description of empirical analyses and breadth of content for use in undergraduate modules in health economics for economics students, and for students taking a health economics module as part of their postgraduate training. Although we are writing from a UK perspective, we have attempted to make the book as relevant internationally as possible by drawing on examples, case studies and boxed highlights, not just from the UK, but from a wide range of countries”
I’m currently reading this book. The coverage has been somewhat disappointing because it’s mostly an undergraduate text which has so far mainly been covering concepts and ideas I’m already familiar with, but it’s not terrible – just okay-ish. I have added some observations from the first half of the book below.
“Health economics is the application of economic theory, models and empirical techniques to the analysis of decision making by people, health care providers and governments with respect to health and health care. […] Health economics has evolved into a highly specialised field, drawing on related disciplines including epidemiology, statistics, psychology, sociology, operations research and mathematics […] health economics is not shorthand for health care economics. […] Health economics studies not only the provision of health care, but also how this impacts on patients’ health. Other means by which health can be improved are also of interest, as are the determinants of ill-health. Health economics studies not only how health care affects population health, but also the effects of education, housing, unemployment and lifestyles.”
“Economic analyses have been used to explain the rise in obesity. […] The studies show that reasons for the rise in obesity include: *Technological innovation in food production and transportation that has reduced the cost of food preparation […] *Agricultural innovation and falling food prices that has led to an expansion in food supply […] *A decline in physical activity, both at home and at work […] *An increase in the number of fast-food outlets, resulting in changes to the relative prices of meals […]. *A reduction in the prevalence of smoking, which leads to increases in weight (Chou et al., 2004).”
“[T]he evidence is that ageing is in reality a relatively small factor in rising health care costs. The popular view is known as the ‘expansion of morbidity’ hypothesis. Gruenberg (1977) suggested that the decline in mortality that has led to an increase in the number of older people is because fewer people die from illnesses that they have, rather than because disease incidence and prevalence are lower. Lower mortality is therefore accompanied by greater morbidity and disability. However, Fries (1980) suggested an alternative hypothesis, ‘compression of morbidity’. Lower mortality rates are due to better health amongst the population, so people not only live longer, they are in better health when old. […] Zweifel et al. (1999) examined the hypothesis that the main determinant of high health care costs amongst older people is not the time since they were born, but the time until they die. Their results, confirmed by many subsequent studies, is that proximity to death does indeed explain higher health care costs better than age per se. Seshamani and Gray (2004) estimated that in the UK this is a factor up to 15 years before death, and annual costs increase tenfold during the last 5 years of life. The consensus is that ageing per se contributes little to the continuing rise in health expenditures that all countries face. Much more important drivers are improved quality of care, access to care, and more expensive new technology.”
“The difference between AC [average cost] and MC [marginal cost] is very important in applied health economics. Very often data are available on the average cost of health care services but not on their marginal cost. However, using average costs as if they were marginal costs may mislead. For example, hospital costs will be reduced by schemes that allow some patients to be treated in the community rather than being admitted. Given data on total costs of inpatient stays, it is possible to calculate an average cost per patient. It is tempting to conclude that avoiding an admission will reduce costs by that amount. However, the average includes patients with different levels of illness severity, and the more severe the illness the more costly they will be to treat. Less severely ill patients are most likely to be suitable for treatment in the community, so MC will be lower than AC. Such schemes will therefore produce a lower cost reduction than the estimate of AC suggests.
A problem with multi-product cost functions is that it is not possible to define meaningfully what the AC of a particular product is. If different products share some inputs, the costs of those inputs cannot be solely attributed to any one of them. […] In practice, when multi-product organisations such as hospitals calculate costs for particular products, they use accounting rules to share out the costs of all inputs and calculate average not marginal costs.”
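The AC/MC distinction in the quote above can be made concrete with a small worked example. All the numbers below are invented for illustration; the point is only the mechanism the book describes: the diverted patients are the mild ones, so the true (marginal) saving per avoided admission is well below the average cost.

```python
# Hypothetical illustration (all figures invented): why treating average
# cost (AC) as if it were marginal cost (MC) overstates the savings from
# diverting admissions to community care.

# Suppose a ward treats 100 patients: 40 severe at 8,000 each and
# 60 mild at 2,000 each.
severe_n, severe_cost = 40, 8_000
mild_n, mild_cost = 60, 2_000

total_cost = severe_n * severe_cost + mild_n * mild_cost
average_cost = total_cost / (severe_n + mild_n)  # AC per admission

# A community-care scheme diverts 10 of the *mild* patients, so the cost
# actually avoided per admission is the mild-patient cost, not the AC.
avoided = 10
naive_saving = avoided * average_cost  # what AC-based reasoning predicts
actual_saving = avoided * mild_cost    # what is actually saved (≈ MC)

print(average_cost)   # 4400.0
print(naive_saving)   # 44000.0 predicted
print(actual_saving)  # 20000.0 realised
```

The AC-based estimate more than doubles the realised saving here, which is exactly the direction of bias the quoted passage warns about.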
“Studies of economies of scale in the health sector do not give a consistent and generalisable picture. […] studies of scope economies [also] do not show any consistent and generalisable picture. […] The impact of hospital ownership type on a range of key outcomes is generally ambiguous, with different studies yielding conflicting results. […] The association between hospital ownership and patient outcomes is unclear. The evidence is mixed and inconclusive regarding the impact of hospital ownership on access to care, morbidity, mortality, and adverse events.”
“Public goods are goods that are consumed jointly by all consumers. The strict economics definition of a public good is that they have two characteristics. The first is non-rivalry. This means that the consumption of a good or service by one person does not prevent anyone else from consuming it. Non-rival goods therefore have large marginal external benefits, which make them socially very desirable but privately unprofitable to provide. Examples of nonrival goods are street lighting and pavements. The second is non-excludability. This means that it is not possible to provide a good or service to one person without letting others also consume it. […] This may lead to a free-rider problem, in which people are unwilling to pay for goods and services that are of value to them. […] Note the distinction between public goods, which are goods and services that are non-rival and non-excludable, and publicly provided goods, which are goods or services that are provided by the government for any reason. […] Most health care products and services are not public goods because they are both rival and excludable. […] However, some health care, particularly public health programmes, does have public good properties.”
“[H]ealth care is typically consumed under conditions of uncertainty with respect to the timing of health care expenditure […] and the amount of expenditure on health care that is required […] The usual solution to such problems is insurance. […] Adverse selection exists when exactly the wrong people, from the point of view of the insurance provider, choose to buy insurance: those with high risks. […] Those who are most likely to buy health insurance are those who have a relatively high probability of becoming ill and maybe also incur greater costs than the average when they are ill. […] Adverse selection arises because of the asymmetry of information between insured and insurer. […] Two approaches are adopted to prevent adverse selection. The first is experience rating, where the insurance provider sets a different insurance premium for different risk groups. Those who apply for health insurance might be asked to undergo a medical examination and
to disclose any relevant facts concerning their risk status. […] There are two problems with this approach. First, the cost of acquiring the appropriate information may be high. […] Secondly, it might encourage insurance providers to ‘cherry pick’ people, only choosing to provide insurance to the low risk. This may mean that high-risk people are unable to obtain health insurance at all. […] The second approach is to make health insurance compulsory. […] The problem with this is that low-risk people effectively subsidise the health insurance payments of those with higher risks, which may be regarded […] as inequitable.”
“Health insurance changes the economic incentives facing both the consumers and the providers of health care. One manifestation of these changes is the existence of moral hazard. This is a phenomenon common to all forms of insurance. The suggestion is that when people are insured against risks and their consequences, they are less careful about minimising them. […] Moral hazard arises when it is possible to alter the probability of the insured event, […] or the size of the insured loss […] The extent of the problem depends on the price elasticity of demand […] Three main mechanisms can be used to reduce moral hazard. The first is co-insurance. Many insurance policies require that when an event occurs the insured shares the insured loss […] with the insurer. The co-insurance rate is the percentage of the insured loss that is paid by the insured. The co-payment is the amount that they pay. […] The second is deductibles. A deductible is an amount of money the insured pays when a claim is made irrespective of co-insurance. The insurer will not pay the insured loss unless the deductible is paid by the insured. […] The third is no-claims bonuses. These are payments made by insurers to discourage claims. They usually take the form of reduced insurance premiums in the next period. […] No-claims bonuses typically discourage insurance claims where the payout by the insurer is small.”
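How a deductible and a co-insurance rate combine to split an insured loss can be sketched in a few lines. The parameter values are invented; the splitting rule is the simple textbook one (deductible first, then a fixed share of the remainder), which is one common arrangement rather than the only one.

```python
# Minimal sketch (parameter values invented): splitting an insured loss
# between insured and insurer under a deductible plus co-insurance.

def out_of_pocket(loss, deductible, coinsurance_rate):
    """Insured pays the deductible first, then the co-insurance rate
    (their share) of whatever loss remains; the insurer pays the rest."""
    if loss <= deductible:
        return loss  # below the deductible the insured bears the full loss
    copayment = coinsurance_rate * (loss - deductible)  # the co-payment
    return deductible + copayment

# A 1,000 insured loss with a 200 deductible and a 20% co-insurance rate:
paid_by_insured = out_of_pocket(1_000, 200, 0.20)
print(paid_by_insured)          # 360.0 borne by the insured
print(1_000 - paid_by_insured)  # 640.0 borne by the insurer
```

Both mechanisms leave the insured facing a positive marginal price for care, which is precisely how they blunt the moral-hazard incentive the quote describes.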
“The method of reimbursement relates to the way in which health care providers are paid for the services they provide. It is useful to distinguish between reimbursement methods, because they can affect the quantity and quality of health care. […] Retrospective reimbursement at full cost means that hospitals receive payment in full for all health care expenditures incurred in some pre-specified period of time. Reimbursement is retrospective in the sense that not only are hospitals paid after they have provided treatment, but also in that the size of the payment is determined after treatment is provided. […] Which model is used depends on whether hospitals are reimbursed for actual costs incurred, or on a fee-for-service (FFS) basis. […] Since hospital income [in these models] depends on the actual costs incurred (actual costs model) or on the volume of services provided (FFS model) there are few incentives to minimise costs. […] Prospective reimbursement implies that payments are agreed in advance and are not directly related to the actual costs incurred. […] incentives to reduce costs are greater, but payers may need to monitor the quality of care provided and access to services. If the hospital receives the same income regardless of quality, there is a financial incentive to provide low-quality care […] The problem from the point of view of the third-party payer is how best to monitor the activities of health care providers, and how to encourage them to act in a mutually beneficial way. This problem might be reduced if health care providers and third-party payers are linked in some way so that they share common goals. […] Integration between third-party payers and health care providers is a key feature of managed care.”
One of the prospective reimbursement models applied today may be of particular interest to Danes, as the DRG system is a big part of the financial model of the Danish health care system – so I’ve added a few details about this type of system below:
“An example of prospectively set costs per case is the diagnostic-related groups (DRG) pricing scheme introduced into the Medicare system in the USA in 1984, and subsequently used in a number of other countries […] Under this scheme, DRG payments are based on average costs per case in each diagnostic group derived from a sample of hospitals. […] Predicted effects of the DRG pricing scheme are cost shifting, patient shifting and DRG creep. Cost shifting and patient shifting are ways of circumventing the cost-minimising effects of DRG pricing by shifting patients or some of the services provided to patients out of the DRG pricing scheme and into other parts of the system not covered by DRG pricing. For example, instead of being provided on an inpatient basis, treatment might be provided on an outpatient basis where it is reimbursed retrospectively. DRG creep arises when hospitals classify cases into DRGs that carry a higher payment, indicating that they are more complicated than they really are. This might arise, for instance, when cases have multiple diagnoses.”
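The core incentive in the DRG scheme is easy to see with a toy calculation. The tariffs and costs below are invented, but the mechanism is the one described in the quote: payment is a fixed tariff per diagnostic group regardless of actual cost, so the hospital keeps any surplus and bears any loss.

```python
# Illustrative sketch (tariffs and costs invented): under prospective DRG
# pricing the hospital is paid a fixed tariff per diagnostic group, so it
# gains when its actual cost falls below the tariff and loses when it
# exceeds the tariff -- the incentive behind cost minimisation, patient
# shifting, and DRG creep.

drg_tariff = {"hip_replacement": 7_000, "pneumonia": 3_500}

def hospital_margin(drg, actual_cost):
    """Surplus (or loss, if negative) the hospital retains on one case."""
    return drg_tariff[drg] - actual_cost

print(hospital_margin("pneumonia", 3_000))  # 500: hospital keeps the surplus
print(hospital_margin("pneumonia", 4_200))  # -700: hospital bears the loss
```

DRG creep is then simply the temptation to code a case into a group with a higher tariff; patient shifting moves the case out of the DRG scheme altogether, into retrospectively reimbursed settings such as outpatient care.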
I liked the book. Below I have added some sample observations from the book, as well as a collection of links to various topics covered/mentioned in the book.
“To make a variety of rocks, there needs to be a variety of minerals. The Earth has shown a capacity for making an increasing variety of minerals throughout its existence. Life has helped in this [but] [e]ven a dead planet […] can evolve a fine array of minerals and rocks. This is done simply by stretching out the composition of the original homogeneous magma. […] Such stretching of composition would have happened as the magma ocean of the earliest […] Earth cooled and began to solidify at the surface, forming the first crust of this new planet — and the starting point, one might say, of our planet’s rock cycle. When magma cools sufficiently to start to solidify, the first crystals that form do not have the same composition as the overall magma. In a magma of ‘primordial Earth’ type, the first common mineral to form was probably olivine, an iron-and-magnesium-rich silicate. This is a dense mineral, and so it tends to sink. As a consequence the remaining magma becomes richer in elements such as calcium and aluminium. From this, at temperatures of around 1,000°C, the mineral plagioclase feldspar would then crystallize, in a calcium-rich variety termed anorthite. This mineral, being significantly less dense than olivine, would tend to rise to the top of the cooling magma. On the Moon, itself cooling and solidifying after its fiery birth, layers of anorthite crystals several kilometres thick built up as the rock — anorthosite — of that body’s primordial crust. This anorthosite now forms the Moon’s ancient highlands, subsequently pulverized by countless meteorite impacts. This rock type can be found on Earth, too, particularly within ancient terrains. […] Was the Earth’s first surface rock also anorthosite? 
Probably—but we do not know for sure, as the Earth, a thoroughly active planet throughout its existence, has consumed and obliterated nearly all of the crust that formed in the first several hundred million years of its existence, in a mysterious interval of time that we now call the Hadean Eon. […] The earliest rocks that we know of date from the succeeding Archean Eon.”
“Where plates are pulled apart, then pressure is released at depth, above the ever-opening tectonic rift, for instance beneath the mid-ocean ridge that runs down the centre of the Atlantic Ocean. The pressure release from this crustal stretching triggers decompression melting in the rocks at depth. These deep rocks — peridotite — are dense, being rich in the iron- and magnesium-bearing mineral olivine. Heated to the point at which melting just begins, so that the melt fraction makes up only a few percentage points of the total, those melt droplets are enriched in silica and aluminium relative to the original peridotite. The melt will have a composition such that, when it cools and crystallizes, it will largely be made up of crystals of plagioclase feldspar together with pyroxene. Add a little more silica and quartz begins to appear. With less silica, olivine crystallizes instead of quartz.
The resulting rock is basalt. If there was anything like a universal rock of rocky planet surfaces, it is basalt. On Earth it makes up almost all of the ocean floor bedrock — in other words, the ocean crust, that is, the surface layer, some 10 km thick. Below, there is a boundary called the Mohorovičič Discontinuity (or ‘Moho’ for short)[…]. The Moho separates the crust from the dense peridotitic mantle rock that makes up the bulk of the lithosphere. […] Basalt makes up most of the surface of Venus, Mercury, and Mars […]. On the Moon, the ‘mare’ (‘seas’) are not of water but of basalt. Basalt, or something like it, will certainly be present in large amounts on the surfaces of rocky exoplanets, once we are able to bring them into close enough focus to work out their geology. […] At any one time, ocean floor basalts are the most common rock type on our planet’s surface. But any individual piece of ocean floor is, geologically, only temporary. It is the fate of almost all ocean crust — islands, plateaux, and all — to be destroyed within ocean trenches, sliding down into the Earth along subduction zones, to be recycled within the mantle. From that destruction […] there arise the rocks that make up the most durable component of the Earth’s surface: the continents.”
“Basaltic magmas are a common starting point for many other kinds of igneous rocks, through the mechanism of fractional crystallization […]. Remove the early-formed crystals from the melt, and the remaining melt will evolve chemically, usually in the direction of increasing proportions of silica and aluminium, and decreasing amounts of iron and magnesium. These magmas will therefore produce intermediate rocks such as andesites and diorites in the finely and coarsely crystalline varieties, respectively; and then more evolved silica-rich rocks such as rhyolites (fine), microgranites (medium), and granites (coarse). […] Granites themselves can evolve a little further, especially at the late stages of crystallization of large bodies of granite magma. The final magmas are often water-rich ones that contain many of the incompatible elements (such as thorium, uranium, and lithium), so called because they are difficult to fit within the molecular frameworks of the common igneous minerals. From these final ‘sweated-out’ magmas there can crystallize a coarsely crystalline rock known as pegmatite — famous because it contains a wide variety of minerals (of the ~4,500 minerals officially recognized on Earth […] some 500 have been recognized in pegmatites).”
“The less oxygen there is [at the area of deposition], the more the organic matter is preserved into the rock record, and it is where the seawater itself, by the sea floor, has little or no oxygen that some of the great carbon stores form. As animals cannot live in these conditions, organic-rich mud can accumulate quietly and undisturbed, layer by layer, here and there entombing the skeleton of some larger planktonic organism that has fallen in from the sunlit, oxygenated waters high above. It is these kinds of sediments that […] generate[d] the oil and gas that currently power our civilization. […] If sedimentary layers have not been buried too deeply, they can remain as soft muds or loose sands for millions of years — sometimes even for hundreds of millions of years. However, most buried sedimentary layers, sooner or later, harden and turn into rock, under the combined effects of increasing heat and pressure (as they become buried ever deeper under subsequent layers of sediment) and of changes in chemical environment. […] As rocks become buried ever deeper, they become progressively changed. At some stage, they begin to change their character and depart from the condition of sedimentary strata. At this point, usually beginning several kilometres below the surface, buried igneous rocks begin to transform too. The process of metamorphism has started, and may progress until those original strata become quite unrecognizable.”
“Frozen water is a mineral, and this mineral can make up a rock, both on Earth and, very commonly, on distant planets, moons, and comets […]. On Earth today, there are large deposits of ice strata on the cold polar regions of Antarctica and Greenland, with smaller amounts in mountain glaciers […]. These ice strata, the compressed remains of annual snowfalls, have simply piled up, one above the other, over time; on Antarctica, they reach almost 5 km in thickness and at their base are about a million years old. […] The ice cannot pile up for ever, however: as the pressure builds up it begins to behave plastically and to slowly flow downslope, eventually melting or, on reaching the sea, breaking off as icebergs. As the ice mass moves, it scrapes away at the underlying rock and soil, shearing these together to form a mixed deposit of mud, sand, pebbles, and characteristic striated (ice-scratched) cobbles and boulders […] termed a glacial till. Glacial tills, if found in the ancient rock record (where, hardened, they are referred to as tillites), are a sure clue to the former presence of ice.”
“At first approximation, the mantle is made of solid rock and is not […] a seething mass of magma that the fragile crust threatens to founder into. This solidity is maintained despite temperatures that, towards the base of the mantle, are of the order of 3,000°C — temperatures that would very easily melt rock at the surface. It is the immense pressures deep in the Earth, increasing more or less in step with temperature, that keep the mantle rock in solid form. In more detail, the solid rock of the mantle may include greater or lesser (but usually lesser) amounts of melted material, which locally can gather to produce magma chambers […] Nevertheless, the mantle rock is not solid in the sense that we might imagine at the surface: it is mobile, and much of it is slowly moving plastically, taking long journeys that, over many millions of years, may encompass the entire thickness of the mantle (the kinds of speeds estimated are comparable to those at which tectonic plates move, of a few centimetres a year). These are the movements that drive plate tectonics and that, in turn, are driven by the variation in temperature (and therefore density) from the contact region with the hot core, to the cooler regions of the upper mantle.”
“The outer core will not transmit certain types of seismic waves, which indicates that it is molten. […] Even farther into the interior, at the heart of the Earth, this metal magma becomes rock once more, albeit a rock that is mostly crystalline iron and nickel. However, it was not always so. The core used to be liquid throughout and then, some time ago, it began to crystallize into iron-nickel rock. Quite when this happened has been widely debated, with estimates ranging from over three billion years ago to about half a billion years ago. The inner core has now grown to something like 2,400 km across. Even allowing for the huge spans of geological time involved, this implies estimated rates of solidification that are impressive in real time — of some thousands of tons of molten metal crystallizing into solid form per second.”
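The book's "thousands of tons per second" figure can be checked with a back-of-envelope calculation. All inputs below are rough assumptions on my part (the quote itself gives only the 2,400 km diameter, and notes that the age of the inner core is debated): an inner-core density of roughly 13,000 kg/m³ and a growth period of about a billion years, i.e. somewhere in the middle of the 0.5–3 billion year range mentioned.

```python
# Back-of-envelope check (input figures are rough assumptions) of the claim
# that the inner core's growth implies thousands of tonnes of molten metal
# crystallizing per second.
import math

radius_m = 2_400e3 / 2        # inner core ~2,400 km across (from the quote)
density = 13_000              # kg/m^3, assumed rough inner-core density
age_s = 1.0e9 * 3.156e7       # assumed ~1 billion years of growth, in seconds

volume = (4 / 3) * math.pi * radius_m ** 3   # sphere volume, m^3
mass_kg = volume * density
tonnes_per_second = mass_kg / age_s / 1_000

print(round(tonnes_per_second))  # on the order of a few thousand tonnes/s
```

With these inputs the answer comes out around three thousand tonnes per second, consistent with the book's "some thousands of tons" phrasing; taking the older end of the age range would proportionally lower it, the younger end raise it.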
“Rocks are made out of minerals, and those minerals are not a constant of the universe. A little like biological organisms, they have evolved and diversified through time. As the minerals have evolved, so have the rocks that they make up. […] The pattern of evolution of minerals was vividly outlined by Robert Hazen and his colleagues in what is now a classic paper published in 2008. They noted that in the depths of outer space, interstellar dust, as analysed by the astronomers’ spectroscopes, seems to be built of only about a dozen minerals […] Their component elements were forged in supernova explosions, and these minerals condensed among the matter and radiation that streamed out from these stellar outbursts. […] the number of minerals on the new Earth [shortly after formation was] about 500 (while the smaller, largely dry Moon has about 350). Plate tectonics began, with its attendant processes of subduction, mountain building, and metamorphism. The number of minerals rose to about 1,500 on a planet that may still have been biologically dead. […] The origin and spread of life at first did little to increase the number of mineral species, but once oxygen-producing photosynthesis started, then there was a great leap in mineral diversity as, for each mineral, various forms of oxide and hydroxide could crystallize. After this step, about two and a half billion years ago, there were over 4,000 minerals, most of them vanishingly rare. Since then, there may have been a slight increase in their numbers, associated with such events as the appearance and radiation of metazoan animals and plants […] Humans have begun to modify the chemistry and mineralogy of the Earth’s surface, and this has included the manufacture of many new types of mineral. […] Human-made minerals are produced in laboratories and factories around the world, with many new forms appearing every year. 
[…] Materials sciences databases now being compiled suggest that more than 50,000 solid, inorganic, crystalline species have been created in the laboratory.”
Some links of interest:
Rock. Presolar grains. Silicate minerals. Silicon–oxygen tetrahedron. Quartz. Olivine. Feldspar. Mica. Jean-Baptiste Biot. Meteoritics. Achondrite/Chondrite/Chondrule. Carbonaceous chondrite. Iron–nickel alloy. Widmanstätten pattern. Giant-impact hypothesis (in the book this is not framed as a hypothesis nor is it explicitly referred to as the GIH; it’s just taken to be the correct account of what happened back then – US). Alfred Wegener. Arthur Holmes. Plate tectonics. Lithosphere. Asthenosphere. Fractional Melting (couldn’t find a wiki link about this exact topic; the MIT link is quite technical – sorry). Hotspot (geology). Fractional crystallization. Metastability. Devitrification. Porphyry (geology). Phenocryst. Thin section. Neptunism. Pyroclastic flow. Ignimbrite. Pumice. Igneous rock. Sedimentary rock. Weathering. Slab (geology). Clay minerals. Conglomerate (geology). Breccia. Aeolian processes. Hummocky cross-stratification. Ralph Alger Bagnold. Montmorillonite. Limestone. Ooid. Carbonate platform. Turbidite. Desert varnish. Evaporite. Law of Superposition. Stratigraphy. Pressure solution. Compaction (geology). Recrystallization (geology). Cleavage (geology). Phyllite. Aluminosilicate. Gneiss. Rock cycle. Ultramafic rock. Serpentinite. Pressure-Temperature-time paths. Hornfels. Impactite. Ophiolite. Xenolith. Kimberlite. Transition zone (Earth). Mantle convection. Mantle plume. Core–mantle boundary. Post-perovskite. Earth’s inner core. Inge Lehmann. Stromatolites. Banded iron formations. Microbial mat. Quorum sensing. Cambrian explosion. Bioturbation. Biostratigraphy. Coral reef. Radiolaria. Carbonate compensation depth. Paleosol. Bone bed. Coprolite. Allan Hills 84001. Tharsis. Pedestal crater. Mineraloid. Concrete.
“A recent study estimated that 234 million surgical procedures requiring anaesthesia are performed worldwide annually. Anaesthesia is the largest hospital specialty in the UK, with over 12,000 practising anaesthetists […] In this book, I give a short account of the historical background of anaesthetic practice, a review of anaesthetic equipment, techniques, and medications, and a discussion of how they work. The risks and side effects of anaesthetics will be covered, and some of the subspecialties of anaesthetic practice will be explored.”
I liked the book, and I gave it three stars on goodreads; I was closer to four stars than two. Below I have added a few sample observations from the book, as well as what turned out to be a quite considerable number of links (more than 60, by a brief count) to topics/people/etc. discussed or mentioned in the text. I decided to spend a bit more time finding relevant links than I’ve previously done when writing link-heavy posts, so in this post I have not limited myself to wikipedia articles and e.g. also link directly to primary literature discussed in the coverage. The links provided are, as usual, meant to be indicators of which kind of stuff is covered in the book, rather than an alternative to the book; some of the wikipedia articles in particular I assume are not very good (the main point of a link to a wikipedia article of questionable quality should probably be taken to be an indication that I consider ‘awareness of the existence of concept X’ to be of interest/importance also to people who have not read this book, even if no great resource on the topic was immediately at hand to me).
Sample observations from the book:
“[G]eneral anaesthesia is not sleep. In physiological terms, the two states are very dissimilar. The term general anaesthesia refers to the state of unconsciousness which is deliberately produced by the action of drugs on the patient. Local anaesthesia (and its related terms) refers to the numbness produced in a part of the body by deliberate interruption of nerve function; this is typically achieved without affecting consciousness. […] The purpose of inhaling ether vapour [in the past] was so that surgery would be painless, not so that unconsciousness would necessarily be produced. However, unconsciousness and immobility soon came to be considered desirable attributes […] For almost a century, lying still was the only reliable sign of adequate anaesthesia.”
“The experience of pain triggers powerful emotional consequences, including fear, anger, and anxiety. A reasonable word for the emotional response to pain is ‘suffering’. Pain also triggers the formation of memories which remind us to avoid potentially painful experiences in the future. The intensity of pain perception and suffering also depends on the mental state of the subject at the time, and the relationship between pain, memory, and emotion is subtle and complex. […] The effects of adrenaline are responsible for the appearance of someone in pain: pale, sweating, trembling, with a rapid heart rate and breathing. Additionally, a hormonal storm is activated, readying the body to respond to damage and fight infection. This is known as the stress response. […] Those responses may be abolished by an analgesic such as morphine, which will counteract all those changes. For this reason, it is routine to use analgesic drugs in addition to anaesthetic ones. […] Typical anaesthetic agents are poor at suppressing the stress response, but analgesics like morphine are very effective. […] The hormonal stress response can be shown to be harmful, especially to those who are already ill. For example, the increase in blood coagulability which evolved to reduce blood loss as a result of injury makes the patient more likely to suffer a deep venous thrombosis in the leg veins.”
“If we monitor the EEG of someone under general anaesthesia, certain identifiable changes to the signal occur. In general, the frequency spectrum of the signal slows. […] Next, the overall power of the signal diminishes. In very deep general anaesthesia, short periods of electrical silence, known as burst suppression, can be observed. Finally, the overall randomness of the signal, its entropy, decreases. In short, the EEG of someone who is anaesthetized looks completely different from someone who is awake. […] Depth of anaesthesia is no longer considered to be a linear concept […] since it is clear that anaesthesia is not a single process. It is now believed that the two most important components of anaesthesia are unconsciousness and suppression of the stress response. These can be represented on a three-dimensional diagram called a response surface. [Here’s incidentally a recent review paper on related topics, US]”
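As a sketch of what "the entropy of the signal decreases" means in practice, here is a minimal spectral-entropy estimate. The estimator and the synthetic signals are my illustrative assumptions, not the algorithms used in clinical depth-of-anaesthesia monitors:

```python
# Illustrative spectral-entropy contrast: a single dominant rhythm (a crude
# stand-in for a deeply anaesthetized, highly ordered EEG) has lower entropy
# than broadband noise (a crude stand-in for the irregular awake EEG).
import numpy as np

def spectral_entropy(x):
    """Shannon entropy of the normalized power spectrum of x."""
    power = np.abs(np.fft.rfft(x)) ** 2
    p = power / power.sum()
    p = p[p > 0]                      # avoid log(0) on empty bins
    return float(-(p * np.log(p)).sum())

n = 1024
t = np.arange(n)
ordered = np.sin(2 * np.pi * 64 * t / n)               # one dominant frequency
irregular = np.random.default_rng(0).standard_normal(n)  # broadband activity

e_ordered = spectral_entropy(ordered)
e_irregular = spectral_entropy(irregular)
print(f"ordered: {e_ordered:.2f}  irregular: {e_irregular:.2f}")
```

The ordered signal concentrates its power in one frequency bin and so has near-zero spectral entropy; the noise spreads its power across all bins and scores much higher, mirroring the awake-versus-anaesthetized contrast the passage describes.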
“Before the widespread advent of anaesthesia, there were very few painkilling options available. […] Alcohol was commonly given as a means of enhancing the patient’s courage prior to surgery, but alcohol has almost no effect on pain perception. […] For many centuries, opium was the only effective pain-relieving substance known. […] For general anaesthesia to be discovered, certain prerequisites were required. On the one hand, the idea that surgery without pain was achievable had to be accepted as possible. Despite tantalizing clues from history, this idea took a long time to catch on. The few workers who pursued this idea were often openly ridiculed. On the other, an agent had to be discovered that was potent enough to render a patient suitably unconscious to tolerate surgery, but not so potent that overdose (hence accidental death) was too likely. This agent also needed to be easy to produce, tolerable for the patient, and easy enough for untrained people to administer. The herbal candidates (opium, mandrake) were too unreliable or dangerous. The next reasonable candidate, and every agent since, was provided by the proliferating science of chemistry.”
“Inducing anaesthesia by intravenous injection is substantially quicker than the inhalational method. Inhalational induction may take several minutes, while intravenous induction happens in the time it takes for the blood to travel from the needle to the brain (30 to 60 seconds). The main benefit of this is not convenience or comfort but patient safety. […] It was soon discovered that the ideal balance is to induce anaesthesia intravenously, but switch to an inhalational agent […] to keep the patient anaesthetized during the operation. The template of an intravenous induction followed by maintenance with an inhalational agent is still widely used today. […] Most of the drawbacks of volatile agents disappear when the patient is already anaesthetized [and] volatile agents have several advantages for maintenance. First, they are predictable in their effects. Second, they can be conveniently administered in known quantities. Third, the concentration delivered or exhaled by the patient can be easily and reliably measured. Finally, at steady state, the concentration of volatile agent in the patient’s expired air is a close reflection of its concentration in the patient’s brain. This gives the anaesthetist a reliable way of ensuring that enough anaesthetic is present to ensure the patient remains anaesthetized.”
“All current volatile agents are colourless liquids that evaporate into a vapour which produces general anaesthesia when inhaled. All are chemically stable, which means they are non-flammable, and not likely to break down or be metabolized to poisonous products. What distinguishes them from each other are their specific properties: potency, speed of onset, and smell. Potency of an inhalational agent is expressed as MAC, the minimum alveolar concentration required to keep 50% of adults unmoving in response to a standard surgical skin incision. MAC as a concept was introduced […] in 1963, and has proven to be a very useful way of comparing potencies of different anaesthetic agents. […] MAC correlates with observed depth of anaesthesia. It has been known for over a century that potency correlates very highly with lipid solubility; that is, the more soluble an agent is in lipid […], the more potent an anaesthetic it is. This is known as the Meyer-Overton correlation […] Speed of onset is inversely proportional to water solubility. The less soluble in water, the more rapidly an agent will take effect. […] Where immobility is produced at around 1.0 MAC, amnesia is produced at a much lower dose, typically 0.25 MAC, and unconsciousness at around 0.5 MAC. Therefore, a patient may move in response to a surgical stimulus without either being conscious of the stimulus, or remembering it afterwards.”
“The most useful way to estimate the body’s physiological reserve is to assess the patient’s tolerance for exercise. Exercise is a good model of the surgical stress response. The greater the patient’s tolerance for exercise, the better the perioperative outcome is likely to be […] For a smoker who is unable to quit, stopping for even a couple of days before the operation improves outcome. […] Dying ‘on the table’ during surgery is very unusual. Patients who die following surgery usually do so during convalescence, their weakened state making them susceptible to complications such as wound breakdown, chest infections, deep venous thrombosis, and pressure sores.”
“Mechanical ventilation is based on the principle of intermittent positive pressure ventilation (IPPV), gas being ‘blown’ into the patient’s lungs from the machine. […] Inflating a patient’s lungs is a delicate process. Healthy lung tissue is fragile, and can easily be damaged by overdistension (barotrauma). While healthy lung tissue is light and spongy, and easily inflated, diseased lung tissue may be heavy and waterlogged and difficult to inflate, and therefore may collapse, allowing blood to pass through it without exchanging any gases (this is known as shunt). Simply applying higher pressures may not be the answer: this may just overdistend adjacent areas of healthier lung. The ventilator must therefore provide a series of breaths whose volume and pressure are very closely controlled. Every aspect of a mechanical breath may now be adjusted by the anaesthetist: the volume, the pressure, the frequency, and the ratio of inspiratory time to expiratory time are only the basic factors.”
“All anaesthetic drugs are poisons. Remember that in achieving a state of anaesthesia you intend to poison someone, but not kill them – so give as little as possible. [Introductory quote to a chapter, from an Anaesthetics textbook – US] […] Other cells besides neurons use action potentials as the basis of cellular signalling. For example, the synchronized contraction of heart muscle is performed using action potentials, and action potentials are transmitted from nerves to skeletal muscle at the neuromuscular junction to initiate movement. Local anaesthetic drugs are therefore toxic to the heart and brain. In the heart, local anaesthetic drugs interfere with normal contraction, eventually stopping the heart. In the brain, toxicity causes seizures and coma. To avoid toxicity, the total dose is carefully limited”.
Links of interest:
Arthur Ernest Guedel.
Henry Hill Hickman.
William Thomas Green Morton.
James Young Simpson.
Joseph Thomas Clover.
Principles of Total Intravenous Anaesthesia (TIVA).
Laryngeal mask airway.
Gate control theory of pain.
Hartmann’s solution (…what this is called seems to depend on whom you ask, but it’s called Hartmann’s solution in the book…).
Epidural nerve block.
Intensive care medicine.
Bjørn Aage Ibsen.
Pearse et al. (results of paper briefly discussed in the book).
Awareness under anaesthesia (skip the first page).
Pollard et al. (2007).
Postoperative nausea and vomiting.
Postoperative cognitive dysfunction.
Monk et al. (2008).
(Smbc, second one here. There were a lot of relevant ones to choose from – this one also seems ‘relevant’. And this one. And this one. This one? This one? This one? Maybe this one? In the end I decided to only include the two comics displayed above, but you should be aware of the others…)
The book is a bit dated; it was published before the LHC even started operations. But it’s a decent read. I can’t say I liked it as much as the other books in the series which I recently covered, on galaxies and the laws of thermodynamics, mostly because this book is a bit more pop-science-y than those, so the level of coverage was at times a little disappointing by comparison – but that said the book is far from terrible, I learned a lot, and I can imagine the author faced a very difficult task.
Below I have added a few observations from the book and some links to articles about some key concepts and things mentioned/covered in the book.
“[T]oday we view the collisions between high-energy particles as a means of studying the phenomena that ruled when the universe was newly born. We can study how matter was created and discover what varieties there were. From this we can construct the story of how the material universe has developed from that original hot cauldron to the cool conditions here on Earth today, where matter is made from electrons, without need for muons and taus, and where the seeds of atomic nuclei are just the up and down quarks, without need for strange or charming stuff.
In very broad terms, this is the story of what has happened. The matter that was born in the hot Big Bang consisted of quarks and particles like the electron. As concerns the quarks, the strange, charm, bottom, and top varieties are highly unstable, and died out within a fraction of a second, the weak force converting them into their more stable progeny, the up and down varieties which survive within us today. A similar story took place for the electron and its heavier versions, the muon and tau. This latter pair are also unstable and died out, courtesy of the weak force, leaving the electron as survivor. In the process of these decays, lots of neutrinos and electromagnetic radiation were also produced, which continue to swarm throughout the universe some 14 billion years later.
The up and down quarks and the electrons were the survivors while the universe was still very young and hot. As it cooled, the quarks were stuck to one another, forming protons and neutrons. The mutual gravitational attraction among these particles gathered them into large clouds that were primaeval stars. As they bumped into one another in the heart of these stars, the protons and neutrons built up the seeds of heavier elements. Some stars became unstable and exploded, ejecting these atomic nuclei into space, where they trapped electrons to form atoms of matter as we know it. […] What we can now do in experiments is in effect reverse the process and observe matter change back into its original primaeval forms.”
“A fully grown human is a bit less than two metres tall. […] to set the scale I will take humans to be about 1 metre in ‘order of magnitude’ […yet another smbc comic springs to mind here…] […] Then, going to the large scales of astronomy, we have the radius of the Earth, some 10⁷ m […]; that of the Sun is 10⁹ m; our orbit around the Sun is 10¹¹ m […] note that the relative sizes of the Earth, Sun, and our orbit are factors of about 100. […] Whereas the atom is typically 10⁻¹⁰ m across, its central nucleus measures only about 10⁻¹⁴ to 10⁻¹⁵ m. So beware the oft-quoted analogy that atoms are like miniature solar systems with the ‘planetary electrons’ encircling the ‘nuclear sun’. The real solar system has a factor 1/100 between our orbit and the size of the central Sun; the atom is far emptier, with 1/10,000 as the corresponding ratio between the extent of its central nucleus and the radius of the atom. And this emptiness continues. Individual protons and neutrons are about 10⁻¹⁵ m in diameter […] the relative size of quark to proton is some 1/10,000 (at most!). The same is true for the ‘planetary’ electron relative to the proton ‘sun’: 1/10,000 rather than the ‘mere’ 1/100 of the real solar system. So the world within the atom is incredibly empty.”
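The emptiness comparison is plain arithmetic on the order-of-magnitude sizes the passage quotes:

```python
# Ratios from the passage's order-of-magnitude sizes (all in metres).
sun_radius, earth_orbit = 1e9, 1e11
nucleus_size, atom_size = 1e-14, 1e-10

solar_ratio = sun_radius / earth_orbit      # 1/100: the real solar system
atomic_ratio = nucleus_size / atom_size     # 1/10,000: the atom, far emptier
print(f"Sun/orbit: {solar_ratio}, nucleus/atom: {atomic_ratio}")
```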
“Our inability to see atoms has to do with the fact that light acts like a wave and waves do not scatter easily from small objects. To see a thing, the wavelength of the beam must be smaller than that thing is. Therefore, to see molecules or atoms needs illuminations whose wavelengths are similar to or smaller than them. Light waves, like those our eyes are sensitive to, have wavelength about 10⁻⁷ m […]. This is still a thousand times bigger than the size of an atom. […] To have any chance of seeing molecules and atoms we need light with wavelengths much shorter than these. [And so we move into the world of X-ray crystallography and particle accelerators] […] To probe deep within atoms we need a source of very short wavelength. […] the technique is to use the basic particles […], such as electrons and protons, and speed them in electric fields. The higher their speed, the greater their energy and momentum and the shorter their associated wavelength. So beams of high-energy particles can resolve things as small as atoms.”
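The energy-wavelength tradeoff can be made concrete with the de Broglie relation λ = h/p. The non-relativistic estimate below uses standard constants (not given in the book) and shows that quite modest electron energies already reach atomic scales:

```python
# Energy of an electron whose de Broglie wavelength matches an atom:
# lambda = h / p  ->  E = h^2 / (2 m lambda^2)   (non-relativistic estimate).
# Constants are standard values, not from the book.
h = 6.626e-34         # Planck constant, J s
m_e = 9.109e-31       # electron mass, kg
eV = 1.602e-19        # joules per electronvolt

wavelength = 1e-10    # ~ one atomic diameter, metres
energy_eV = h ** 2 / (2 * m_e * wavelength ** 2) / eV
print(f"~{energy_eV:.0f} eV electrons already resolve atomic scales")
```

Probing the far smaller structures inside protons then demands wavelengths (and hence energies) many orders of magnitude greater, which is why accelerators are needed at all.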
“About 400 billion neutrinos from the Sun pass through each one of us each second.”
“For a century beams of particles have been used to reveal the inner structure of atoms. These have progressed from naturally occurring alpha and beta particles, courtesy of natural radioactivity, through cosmic rays to intense beams of electrons, protons, and other particles at modern accelerators. […] Different particles probe matter in complementary ways. It has been by combining the information from [the] various approaches that our present rich picture has emerged. […] It was the desire to replicate the cosmic rays under controlled conditions that led to modern high-energy physics at accelerators. […] Electrically charged particles are accelerated by electric forces. Apply enough electric force to an electron, say, and it will go faster and faster in a straight line […] Under the influence of a magnetic field, the path of a charged particle will curve. By using electric fields to speed them, and magnetic fields to bend their trajectory, we can steer particles round circles over and over again. This is the basic idea behind huge rings, such as the 27-km-long accelerator at CERN in Geneva. […] our ability to learn about the origins and nature of matter have depended upon advances on two fronts: the construction of ever more powerful accelerators, and the development of sophisticated means of recording the collisions.”
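The magnetic-steering idea can be sanity-checked with the bending-radius formula r = p/(qB). The beam energy and dipole field below are assumed round public figures for the LHC, not values from the book:

```python
# Bending radius of a charged particle in a magnetic field: r = p / (qB).
# Beam energy and dipole field are assumed round public figures for the LHC.
beam_energy_eV = 7e12     # 7 TeV protons
B_tesla = 8.33            # dipole field strength
q = 1.602e-19             # proton charge, coulombs
c = 2.998e8               # speed of light, m/s

p = beam_energy_eV * q / c        # ultra-relativistic, so p ~ E/c
r = p / (q * B_tesla)
print(f"bending radius ~{r:,.0f} m")
```

The resulting radius of a few kilometres is consistent with the dipole bending sections fitting inside a ring 27 km around (the rest of the circumference being straight sections and other magnets).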
Weak interaction (‘good article’).
Resonance (particle physics).
Particle accelerator/Cyclotron/Synchrotron/Linear particle accelerator.
Sudbury Neutrino Observatory.
W and Z bosons.
Electroweak interaction (/theory).
Charm (quantum number).
Inverse beta decay.
Below is a list of books I’ve read in 2017.
The letters ‘f’, ‘nf’, and ‘m’ in the parentheses indicate which type of book it was; ‘f’ refers to ‘fiction’ books, ‘nf’ to ‘non-fiction’ books, and the ‘m’ category covers ‘miscellaneous’ books. The numbers in the parentheses correspond to the goodreads ratings I thought the books deserved.
As usual I’ll try to update the post regularly throughout the year.
1. Brief Candles (3, f). Manning Coles.
5. All Clear (5, f). Connie Willis.
6. The Laws of Thermodynamics: A Very Short Introduction (4, nf. Oxford University Press). Blog coverage here.
13. Kai Lung’s Golden Hours (4, f). Ernest Bramah.
17. All Trivia – A collection of reflections & aphorisms (2, m). Logan Pearsall Smith. Short goodreads review here.
19. Kai Lung Beneath the Mulberry-Tree (4, f). Ernest Bramah.
22. The Winds of Marble Arch and Other Stories (f). Connie Willis. Many of the comments that applied to the book above (see my review link) also apply here (in part because a substantial number of stories are in fact included in both books).
29. The Lord God Made Them All (4, m). James Herriot.
33. Royal Flash (4, f). George MacDonald Fraser.
36. Flash for Freedom! (3, f). George MacDonald Fraser.
37. Flashman and the Redskins (4, f). George MacDonald Fraser.
38. Biodemography of Aging: Determinants of Healthy Life Span and Longevity (5, nf. Springer). Long, takes a lot of work. I added this book to my list of favorite books on goodreads. Blog coverage here, here, and here.
“Among the hundreds of laws that describe the universe, there lurks a mighty handful. These are the laws of thermodynamics, which summarize the properties of energy and its transformation from one form to another. […] The mighty handful consists of four laws, with the numbering starting inconveniently at zero and ending at three. The first two laws (the ‘zeroth’ and the ‘first’) introduce two familiar but nevertheless enigmatic properties, the temperature and the energy. The third of the four (the ‘second law’) introduces what many take to be an even more elusive property, the entropy […] The second law is one of the all-time great laws of science […]. The fourth of the laws (the ‘third law’) has a more technical role, but rounds out the structure of the subject and both enables and foils its applications.”
“Classical thermodynamics is the part of thermodynamics that emerged during the nineteenth century before everyone was fully convinced about the reality of atoms, and concerns relationships between bulk properties. You can do classical thermodynamics even if you don’t believe in atoms. Towards the end of the nineteenth century, when most scientists accepted that atoms were real and not just an accounting device, there emerged the version of thermodynamics called statistical thermodynamics, which sought to account for the bulk properties of matter in terms of its constituent atoms. The ‘statistical’ part of the name comes from the fact that in the discussion of bulk properties we don’t need to think about the behaviour of individual atoms but we do need to think about the average behaviour of myriad atoms. […] In short, whereas dynamics deals with the behaviour of individual bodies, thermodynamics deals with the average behaviour of vast numbers of them.”
“In everyday language, heat is both a noun and a verb. Heat flows; we heat. In thermodynamics heat is not an entity or even a form of energy: heat is a mode of transfer of energy. It is not a form of energy, or a fluid of some kind, or anything of any kind. Heat is the transfer of energy by virtue of a temperature difference. Heat is the name of a process, not the name of an entity.”
“The supply of 1J of energy as heat to 1 g of water results in an increase in temperature of about 0.2°C. Substances with a high heat capacity (water is an example) require a larger amount of heat to bring about a given rise in temperature than those with a small heat capacity (air is an example). In formal thermodynamics, the conditions under which heating takes place must be specified. For instance, if the heating takes place under conditions of constant pressure with the sample free to expand, then some of the energy supplied as heat goes into expanding the sample and therefore to doing work. Less energy remains in the sample, so its temperature rises less than when it is constrained to have a constant volume, and therefore we report that its heat capacity is higher. The difference between heat capacities of a system at constant volume and at constant pressure is of most practical significance for gases, which undergo large changes in volume as they are heated in vessels that are able to expand.”
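The quoted 0.2°C figure is easy to verify. The specific heat capacity of water used below is the standard value, assumed rather than taken from the book:

```python
# Check of the quoted figure: 1 J of heat into 1 g of water.
# The specific heat of water is the standard value, not given in the book.
specific_heat = 4.18    # J per gram per kelvin, liquid water near room temp
heat_J = 1.0
mass_g = 1.0

delta_T = heat_J / (mass_g * specific_heat)
print(f"temperature rise: {delta_T:.2f} deg C")   # ~0.24, i.e. 'about 0.2'
```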
“Heat capacities vary with temperature. An important experimental observation […] is that the heat capacity of every substance falls to zero when the temperature is reduced towards absolute zero (T = 0). A very small heat capacity implies that even a tiny transfer of heat to a system results in a significant rise in temperature, which is one of the problems associated with achieving very low temperatures when even a small leakage of heat into a sample can have a serious effect on the temperature”.
“A crude restatement of Clausius’s statement is that refrigerators don’t work unless you turn them on.”
“The Gibbs energy is of the greatest importance in chemistry and in the field of bioenergetics, the study of energy utilization in biology. Most processes in chemistry and biology occur at constant temperature and pressure, and so to decide whether they are spontaneous and able to produce non-expansion work we need to consider the Gibbs energy. […] Our bodies live off Gibbs energy. Many of the processes that constitute life are non-spontaneous reactions, which is why we decompose and putrefy when we die and these life-sustaining reactions no longer continue. […] In biology a very important ‘heavy weight’ reaction involves the molecule adenosine triphosphate (ATP). […] When a terminal phosphate group is snipped off by reaction with water […], to form adenosine diphosphate (ADP), there is a substantial decrease in Gibbs energy, arising in part from the increase in entropy when the group is liberated from the chain. Enzymes in the body make use of this change in Gibbs energy […] to bring about the linking of amino acids, and gradually build a protein molecule. It takes the effort of about three ATP molecules to link two amino acids together, so the construction of a typical protein of about 150 amino acid groups needs the energy released by about 450 ATP molecules. […] The ADP molecules, the husks of dead ATP molecules, are too valuable just to discard. They are converted back into ATP molecules by coupling to reactions that release even more Gibbs energy […] and which reattach a phosphate group to each one. These heavy-weight reactions are the reactions of metabolism of the food that we need to ingest regularly.”
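The protein-synthesis arithmetic is straightforward to reproduce. The n−1 bond count below is my assumption about the counting; the authors' round figure of 450 presumably just treats it as 150 links:

```python
# The book's figures: ~3 ATP per amino-acid link, ~150 amino acids per protein.
amino_acids = 150
atp_per_link = 3

links = amino_acids - 1             # n residues are joined by n - 1 peptide bonds
atp_needed = atp_per_link * links   # i.e. 'about 450 ATP molecules'
print(atp_needed)
```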
Links of interest below – the stuff covered in the links is the sort of stuff covered in this book:
Laws of thermodynamics (article includes links to many other articles of interest, including links to each of the laws mentioned above).
Intensive and extensive properties.
Conservation of energy.
Microscopic view of heat.
Reversible process (thermodynamics).
Coefficient of performance.
Helmholtz free energy.
Gibbs free energy.
I have added some observations from the book below, as well as some links covering people/ideas/stuff discussed/mentioned in the book.
“On average, out of every 100 newly born star systems, 60 are binaries and 40 are triples. Solitary stars like the Sun are later ejected from triple systems formed in this way.”
“…any object will become a black hole if it is sufficiently compressed. For any mass, there is a critical radius, called the Schwarzschild radius, for which this occurs. For the Sun, the Schwarzschild radius is just under 3 km; for the Earth, it is just under 1 cm. In either case, if the entire mass of the object were squeezed within the appropriate Schwarzschild radius it would become a black hole.”
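The quoted radii follow from the Schwarzschild formula r_s = 2GM/c². The constants and masses below are standard values, not taken from the book:

```python
# The quoted radii, recomputed from the Schwarzschild formula r_s = 2GM/c^2.
# Constants and masses are standard values, not taken from the book.
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8           # speed of light, m/s

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c ** 2

r_sun = schwarzschild_radius(1.989e30)     # ~2.95 km: 'just under 3 km'
r_earth = schwarzschild_radius(5.972e24)   # ~0.89 cm: 'just under 1 cm'
print(f"Sun: {r_sun / 1000:.2f} km, Earth: {r_earth * 100:.2f} cm")
```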
“It only became possible to study the centre of our Galaxy when radio telescopes and other instruments that do not rely on visible light became available. There is a great deal of dust in the plane of the Milky Way […] This blocks out visible light. But longer wavelengths penetrate the dust more easily. That is why sunsets are red – short wavelength (blue) light is scattered out of the line of sight by dust in the atmosphere, while the longer wavelength red light gets through to your eyes. So our understanding of the galactic centre is largely based on infrared and radio observations.”
“there is strong evidence that the Milky Way Galaxy is a completely ordinary disc galaxy, a typical representative of its class. Since that is the case, it means that we can confidently use our inside knowledge of the structure and evolution of our own Galaxy, based on close-up observations, to help our understanding of the origin and nature of disc galaxies in general. We do not occupy a special place in the Universe; but this was only finally established at the end of the 20th century. […] in the decades following Hubble’s first measurements of the cosmological distance scale, the Milky Way still seemed like a special place. Hubble’s calculation of the distance scale implied that other galaxies are relatively close to our Galaxy, and so they would not have to be very big to appear as large as they do on the sky; the Milky Way seemed to be by far the largest galaxy in the Universe. We now know that Hubble was wrong. […] the value he initially found for the Hubble Constant was about seven times bigger than the value accepted today. In other words, all the extragalactic distances Hubble inferred were seven times too small. But this was not realized overnight. The cosmological distance scale was only revised slowly, over many decades, as observations improved and one error after another was corrected. […] The importance of determining the cosmological distance scale accurately, more than half a century after Hubble’s pioneering work, was still so great that it was a primary justification for the existence of the Hubble Space Telescope (HST).”
“The key point to grasp […] is that the expansion described by [Einstein’s] equations is an expansion of space as time passes. The cosmological redshift is not a Doppler effect caused by galaxies moving outward through space, as if fleeing from the site of some great explosion, but occurs because the space between the galaxies is stretching. So the spaces between galaxies increase while light is on its way from one galaxy to another. This stretches the light waves to longer wavelengths, which means shifting them towards the red end of the spectrum. […] The second key point about the universal expansion is that it does not have a centre. There is nothing special about the fact that we observe galaxies receding with redshifts proportional to their distances from the Milky Way. […] whichever galaxy you happen to be sitting in, you will see the same thing – redshift proportional to distance.”
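The "no centre" point can be checked with a toy one-dimensional model: stretch all separations uniformly, and every observer, whichever galaxy they sit in, infers the same speed-distance proportionality:

```python
# Toy 1-D 'expanding universe': galaxies on a line, every separation grows
# by the same factor in one time step. Whichever galaxy you call home,
# recession speed comes out proportional to distance.
positions = [0.0, 1.0, 2.0, 3.0, 4.0]        # arbitrary units
factor = 1.1                                 # uniform expansion per step

new_positions = [x * factor for x in positions]

hubble_ratios = set()
for home in range(len(positions)):
    for i in range(len(positions)):
        distance = positions[i] - positions[home]
        if distance == 0:
            continue
        speed = (new_positions[i] - new_positions[home]) - distance
        hubble_ratios.add(round(speed / distance, 10))

print(hubble_ratios)    # a single value: the same 'Hubble constant' for all
```

No galaxy in the list is special: uniform stretching alone produces a redshift-proportional-to-distance law from every vantage point, which is the passage's second key point.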
“The age of the Universe is determined by studying some of the largest things in the Universe, clusters of galaxies, and analysing their behaviour using the general theory of relativity. Our understanding of how stars work, from which we calculate their ages, comes from studying some of the smallest things in the Universe, the nuclei of atoms, and using the other great theory of 20th-century physics, quantum mechanics, to calculate how nuclei fuse with one another to release the energy that keeps stars shining. The fact that the two ages agree with one another, and that the ages of the oldest stars are just a little bit less than the age of the Universe, is one of the most compelling reasons to think that the whole of 20th-century physics works and provides a good description of the world around us, from the very small scale to the very large scale.”
“Planets are small objects orbiting a large central mass, and the gravity of the Sun dominates their motion. Because of this, the speed with which a planet moves […] is inversely proportional to the square root of its distance from the centre of the Solar System. Jupiter is farther from the Sun than we are, so it moves more slowly in its orbit than the Earth, as well as having a larger orbit. But all the stars in the disc of a galaxy move at the same speed. Stars farther out from the centre still have bigger orbits, so they still take longer to complete one circuit of the galaxy. But they are all travelling at essentially the same orbital speed through space.”
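For the Keplerian case described above, the relation is v = √(GM/r), i.e. orbital speed falls off as the square root of distance. Using Earth's orbital speed (roughly 29.8 km/s at 1 AU) as a reference and Jupiter's distance of roughly 5.2 AU, a quick check:

```python
import math

def kepler_speed(r_au):
    """Orbital speed around the Sun, v = sqrt(GM/r), expressed relative to
    Earth's speed of ~29.8 km/s at 1 AU."""
    v_earth = 29.8  # km/s at 1 AU
    return v_earth / math.sqrt(r_au)

print(round(kepler_speed(1.0), 1))  # 29.8 km/s: Earth
print(round(kepler_speed(5.2), 1))  # ~13.1 km/s: Jupiter moves more slowly, as the quote says
# In a galaxy disc, by contrast, the observed rotation curve is roughly flat:
# v(r) ~ constant, which is the classic evidence for dark matter halos.
```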
“The importance of studying objects at great distances across the Universe is that when we look at an object that is, say, 10 billion light years away, we see it by light which left it 10 billion years ago. This is the ‘look back time’, and it means that telescopes are in a sense time machines, showing us what the Universe was like when it was younger. The light from a distant galaxy is old, in the sense that it has been a long time on its journey; but the galaxy we see using that light is a young galaxy. […] For distant objects, because light has taken a long time on its journey to us, the Universe has expanded significantly while the light was on its way. […] This raises problems defining exactly what you mean by the ‘present distance’ to a remote galaxy”
“Among the many advantages that photographic and electronic recording methods have over the human eye, the most fundamental is that the longer they look, the more they see. Human eyes essentially give us a real-time view of our surroundings, and allow us to see things – such as stars – that are brighter than a certain limit. If an object is too faint to see, once your eyes have adapted to the dark no amount of staring in its direction will make it visible. But the detectors attached to modern telescopes keep on adding up the light from faint sources as long as they are pointing at them. A longer exposure will reveal fainter objects than a short exposure does, as the photons (particles of light) from the source fall on the detector one by one and the total gradually grows.”
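The 'longer they look, the more they see' point can be made quantitative: the expected photon count grows linearly with exposure time, while Poisson (photon-counting) noise grows only as its square root, so the signal-to-noise ratio improves as √t. A sketch with a made-up source flux:

```python
import math

def detected_photons(flux_photons_per_s, exposure_s):
    """A detector integrates: expected counts grow linearly with exposure time."""
    return flux_photons_per_s * exposure_s

def snr(flux_photons_per_s, exposure_s):
    """For Poisson (photon-counting) noise, noise ~ sqrt(N), so SNR = N/sqrt(N) = sqrt(N)."""
    n = detected_photons(flux_photons_per_s, exposure_s)
    return n / math.sqrt(n)

faint = 0.04  # photons per second from a faint source (invented number)
print(round(snr(faint, 100), 2))    # 2.0 -- lost in the noise in a short exposure
print(round(snr(faint, 10000), 2))  # 20.0 -- a 100x longer exposure gives 10x the SNR
```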
“Nobody can be quite sure where the supermassive black holes at the hearts of galaxies today came from, but it seems at least possible that […] merging of black holes left over from the first generation of stars [in the universe] began the process by which supermassive black holes, feeding off the matter surrounding them, formed. […] It seems very unlikely that supermassive black holes formed first and then galaxies grew around them; they must have formed together, in a process sometimes referred to as co-evolution, from the seeds provided by the original black holes of a few hundred solar masses and the raw materials of the dense clouds of baryons in the knots in the filamentary structure. […] About one in a hundred of the galaxies seen at low redshifts are actively involved in the late stages of mergers, but these processes take so little time, compared with the age of the Universe, that the statistics imply that about half of all the galaxies visible nearby are the result of mergers between similarly sized galaxies in the past seven or eight billion years. Disc galaxies like the Milky Way seem themselves to have been built up from smaller sub-units, starting out with the spheroid and adding bits and pieces as time passed. […] there were many more small galaxies when the Universe was young than we see around us today. This is exactly what we would expect if many of the small galaxies have either grown larger through mergers or been swallowed up by larger galaxies.”
Links of interest:
Galaxy (‘featured article’).
The Great Debate.
Henrietta Swan Leavitt (‘good article’).
Ejnar Hertzsprung. (Before reading this book, I had no idea one of the people behind the famous Hertzsprung–Russell diagram was a Dane. I blame my physics teachers. I was probably told this by one of them, but if the guy in question had been a better teacher, I’d have listened, and I’d have known this.)
Globular cluster (‘featured article’).
Redshift (‘featured article’).
Refracting telescope/Reflecting telescope.
General relativity (‘featured article’).
The Big Bang theory (‘featured article’).
Age of the universe.
Type Ia supernova.
Cosmic microwave background.
Cold dark matter.
Active galactic nucleus.
Hubble Ultra-Deep Field.
Ultimate fate of the universe.
Some quotes from the book below.
“Tests that are used in clinical neuropsychology in most cases examine one or more aspects of cognitive domains, which are theoretical constructs in which a multitude of cognitive processes are involved. […] By definition, a subdivision in cognitive domains is arbitrary, and many different classifications exist. […] for a test to be recommended, several criteria must be met. First, a test must have adequate reliability: the test must yield similar outcomes when applied over multiple test sessions, i.e., have good test–retest reliability. […] Furthermore, the interobserver reliability is important, in that the test must have a standardized assessment procedure and is scored in the same manner by different examiners. Second, the test must have adequate validity. Here, different forms of validity are important. Content validity is established by expert raters with respect to item formulation, item selection, etc. Construct validity refers to the underlying theoretical construct that the test is assumed to measure. To assess construct validity, both convergent and divergent validities are important. Convergent validity refers to the amount of agreement between a given test and other tests that measure the same function. In turn, a test with a good divergent validity correlates minimally with tests that measure other cognitive functions. Moreover, predictive validity (or criterion validity) is related to the degree of correlation between the test score and an external criterion, for example, the correlation between a cognitive test and functional status. […] it should be stressed that cognitive tests alone cannot be used as ultimate proof for organic brain damage, but should be used in combination with more direct measures of cerebral abnormalities, such as neuroimaging.”
“Intelligence is a theoretically ill-defined construct. In general, it refers to the ability to think in an abstract manner and solve new problems. Typically, two forms of intelligence are distinguished, crystallized intelligence (academic skills and knowledge that one has acquired during schooling) and fluid intelligence (the ability to solve new problems). Crystallized intelligence is better preserved in patients with brain disease than fluid intelligence (3). […] From a neuropsychological viewpoint, the concept of intelligence as a unitary construct (often referred to as g-factor) does not provide valuable information, since deficits in specific cognitive functions may be averaged out in the total IQ score. Thus, in most neuropsychological studies, intelligence tests are included because of specific subtests that are assumed to measure specific cognitive functions, and the performance profile is analyzed rather than considering the IQ measure as a compound score in isolation.”
“Attention is a concept that in general relates to the selection of relevant information from our environment and the suppression of irrelevant information (selective or “focused” attention), the ability to shift attention between tasks (divided attention), and to maintain a state of alertness to incoming stimuli over longer periods of time (concentration and vigilance). Many different structures in the human brain are involved in attentional processing and, consequently, disorders in attention occur frequently after brain disease or damage (21). […] Speed of information processing is not a localized cognitive function, but depends greatly on the integrity of the cerebral network as a whole, the subcortical white matter and the interhemispheric and intrahemispheric connections. It is one of the cognitive functions that clearly declines with age and it is highly susceptible to brain disease or dysfunction of any kind.”
“The Mini-Mental State Examination (MMSE) is a screening instrument that has been developed to determine whether older adults have cognitive impairments […] numerous studies have shown that the MMSE has poor sensitivity and specificity, as well as a low test–retest reliability […] the MMSE has been developed to determine cognitive decline that is typical for Alzheimer’s dementia, but has been found less useful in determining cognitive decline in nondemented patients (44) or in patients with other forms of dementia. This is important since odds ratios for both vascular dementia and Alzheimer’s dementia are increased in diabetes (45). Notwithstanding this increased risk, most patients with diabetes have subtle cognitive deficits (46, 47) that may easily go undetected using gross screening instruments such as the MMSE. For research in diabetes a high sensitivity is thus especially important. […] ceiling effects in test performance often result in a lack of sensitivity. Subtle impairments are easily missed, resulting in a high proportion of false-negative cases […] In general, tests should be cognitively demanding to avoid ceiling effects in patients with mild cognitive dysfunction. […] sensitive domains such as speed of information processing, (working) memory, attention, and executive function should be examined thoroughly in diabetes patients, whereas other domains such as language, motor function, and perception are less likely to be affected. Intelligence should always be taken into account, and confounding factors such as mood, emotional distress, and coping are crucial for the interpretation of the neuropsychological test results.”
“The life-time risk of any dementia has been estimated to be more than 1 in 5 for women and 1 in 6 for men (2). Worldwide, about 24 million people have dementia, with 4.6 million new cases of dementia every year (3). […] Dementia can be caused by various underlying diseases, the most common of which is Alzheimer’s disease (AD) accounting for roughly 70% of cases in the elderly. The second most common cause of dementia is vascular dementia (VaD), accounting for 16% of cases. Other, less common, causes include dementia with Lewy bodies (DLB) and frontotemporal lobar degeneration (FTLD). […] It is estimated that both the incidence and the prevalence [of AD] double with every 5-year increase in age. Other risk factors for AD include female sex and vascular risk factors, such as diabetes, hypercholesterolaemia and hypertension […] In contrast with AD, progression of cognitive deficits [in VaD] is mostly stepwise and with an acute or subacute onset. […] it is clear that cerebrovascular disease is one of the major causes of cognitive decline. Vascular risk factors such as diabetes mellitus and hypertension have been recognized as risk factors for VaD […] Although pure vascular dementia is rare, cerebrovascular pathology is frequently observed on MRI and in pathological studies of patients clinically diagnosed with AD […] Evidence exists that AD and cerebrovascular pathology act synergistically (60).”
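The quoted 'doubling with every 5-year increase in age' is simple exponential growth in age; a toy sketch (the reference age of 65 is my own illustrative choice, not a figure from the chapter):

```python
def relative_prevalence(age, reference_age=65, doubling_years=5):
    """Prevalence relative to the reference age, doubling every 5 years of age."""
    return 2 ** ((age - reference_age) / doubling_years)

# Relative to age 65, prevalence at 75 is 4x, and at 85 it is 16x:
print(relative_prevalence(75))  # 4.0
print(relative_prevalence(85))  # 16.0
```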
“In type 1 diabetes the annual prevalence of severe hypoglycemia (requiring help for recovery) is 30–40% while the annual incidence varies depending on the duration of diabetes. In insulin-treated type 2 diabetes, the frequency is lower but increases with duration of insulin therapy. […] In normal health, blood glucose is maintained within a very narrow range […] The functioning of the brain is optimal within this range; cognitive function rapidly becomes impaired when the blood glucose falls below 3.0 mmol/l (54 mg/dl) (3). Similarly, but much less dramatically, cognitive function deteriorates when the brain is exposed to high glucose concentrations” (I did not know the latter for certain, but I certainly have had my suspicions for a long time).
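The two glucose units in the quote are related by a fixed conversion factor: glucose has a molar mass of roughly 180 g/mol, so 1 mmol/l corresponds to 18 mg/dl. A quick check of the quoted threshold:

```python
MGDL_PER_MMOLL = 18.0  # glucose: 1 mmol/l = 18 mg/dl (molar mass ~180 g/mol)

def mmol_to_mgdl(mmol_l):
    """Convert a blood glucose concentration from mmol/l to mg/dl."""
    return mmol_l * MGDL_PER_MMOLL

def mgdl_to_mmol(mg_dl):
    """Convert a blood glucose concentration from mg/dl to mmol/l."""
    return mg_dl / MGDL_PER_MMOLL

# The impairment threshold from the quote:
print(mmol_to_mgdl(3.0))  # 54.0 mg/dl, matching the quoted 54 mg/dl
```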
“When exogenous insulin is injected into a non-diabetic adult human, peripheral tissues such as skeletal muscle and adipose tissue rapidly take up glucose, while hepatic glucose output is suppressed. This causes blood glucose to fall and triggers a series of counterregulatory events to counteract the actions of insulin; this prevents a progressive decline in blood glucose and subsequently reverses the hypoglycemia. In people with insulin-treated diabetes, many of the homeostatic mechanisms that regulate blood glucose are either absent or deficient. [If you’re looking for more details on these topics, it should perhaps be noted here that Philip Cryer’s book on these topics is very nice and informative]. […] The initial endocrine response to a fall in blood glucose in non-diabetic humans is the suppression of endogenous insulin secretion. This is followed by the secretion of the principal counterregulatory hormones, glucagon and epinephrine (adrenaline) (5). Cortisol and growth hormone also contribute, but have greater importance in promoting recovery during exposure to prolonged hypoglycemia […] Activation of the peripheral sympathetic nervous system and the adrenal glands provokes the release of a copious quantity of catecholamines, epinephrine, and norepinephrine […] Glucagon is secreted from the alpha cells of the pancreatic islets, apparently in response to localized neuroglycopenia and independent of central neural control. […] The large amounts of catecholamines that are secreted in response to hypoglycemia exert other powerful physiological effects that are unrelated to counterregulation. These include major hemodynamic actions with direct effects on the heart and blood pressure. […] regional blood flow changes occur during hypoglycemia that encourages the transport of substrates to the liver for gluconeogenesis and simultaneously of glucose to the brain. 
Organs that have no role in the response to acute stress, such as the spleen and kidneys, are temporarily under-perfused. The mobilisation and activation of white blood cells are accompanied by hemorheological effects, promoting increased viscosity, coagulation, and fibrinolysis and may influence endothelial function (6). In normal health these acute physiological changes probably exert no harmful effects, but may acquire pathological significance in people with diabetes of long duration.”
“The more complex and attention-demanding cognitive tasks, and those that require speeded responses are more affected by hypoglycemia than simple tasks or those that do not require any time restraint (3). The overall speed of response of the brain in making decisions is slowed, yet for many tasks, accuracy is preserved at the expense of speed (8, 9). Many aspects of mental performance become impaired when blood glucose falls below 3.0 mmol/l […] Recovery of cognitive function does not occur immediately after the blood glucose returns to normal, but in some cognitive domains may be delayed for 60 min or more (3), which is of practical importance to the performance of tasks that require complex cognitive functions, such as driving. […] [the] major changes that occur during hypoglycemia – counterregulatory hormone secretion, symptom generation, and cognitive dysfunction – occur as components of a hierarchy of responses, each being triggered as the blood glucose falls to its glycemic threshold. […] In nondiabetic individuals, the glycemic thresholds are fixed and reproducible (10), but in people with diabetes, these thresholds are dynamic and plastic, and can be modified by external factors such as glycemic control or exposure to preceding (antecedent) hypoglycemia (11). Changes in the glycemic thresholds for the responses to hypoglycemia underlie the effects of the acquired hypoglycemia syndromes that can develop in people with insulin-treated diabetes […] the incidence of severe hypoglycemia in people with insulin-treated type 2 diabetes increases steadily with duration of insulin therapy […], as pancreatic beta-cell failure develops. 
The under-recognized risk of severe hypoglycemia in insulin-treated type 2 diabetes is of great practical importance as this group is numerically much larger than people with type 1 diabetes and encompasses many older, and some very elderly, people who may be exposed to much greater danger because they often have co-morbidities such as macrovascular disease, osteoporosis, and general frailty.”
“Hypoglycemia occurs when a mismatch develops between the plasma concentrations of glucose and insulin, particularly when the latter is inappropriately high, which is common during the night. Hypoglycemia can result when too much insulin is injected relative to oral intake of carbohydrate or when a meal is missed or delayed after insulin has been administered. Strenuous exercise can precipitate hypoglycemia through accelerated absorption of insulin and depletion of muscle glycogen stores. Alcohol enhances the risk of prolonged hypoglycemia by inhibiting hepatic gluconeogenesis, but the hypoglycemia may be delayed for several hours. Errors of dosage or timing of insulin administration are common, and there are few conditions where the efficacy of the treatment can be influenced by so many extraneous factors. The time–action profiles of different insulins can be modified by factors such as the ambient temperature or the site and depth of injection and the person with diabetes has to constantly try to balance insulin requirement with diet and exercise. It is therefore not surprising that hypoglycemia occurs so frequently. […] The lower the median blood glucose during the day, the greater the frequency of symptomatic and biochemical hypoglycemia […] Strict glycemic control can […] induce the acquired hypoglycemia syndromes, impaired awareness of hypoglycemia (a major risk factor for severe hypoglycemia), and counterregulatory hormonal deficiencies (which interfere with blood glucose recovery). […] Severe hypoglycemia is more common at the extremes of age – in very young children and in elderly people. […] In type 1 diabetes the frequency of severe hypoglycemia increases with duration of diabetes (12), while in type 2 diabetes it is associated with increasing duration of insulin treatment (18). […] Around one quarter of all episodes of severe hypoglycemia result in coma […] In 10% of episodes of severe hypoglycemia affecting people with type 1 diabetes and around 30% of those in people with insulin-treated type 2 diabetes, the assistance of the emergency medical services is required (23). However, most episodes (both mild and severe) are treated in the community, and few people require admission to hospital.”
“Severe hypoglycemia is potentially dangerous and has a significant mortality and morbidity, particularly in older people with insulin-treated diabetes who often have premature macrovascular disease. The hemodynamic effects of autonomic stimulation may provoke acute vascular events such as myocardial ischemia and infarction, cardiac failure, cerebral ischemia, and stroke (6). In clinical practice the cardiovascular and cerebrovascular consequences of hypoglycemia are frequently overlooked because the role of hypoglycemia in precipitating the vascular event is missed. […] The profuse secretion of catecholamines in response to hypoglycemia provokes a fall in plasma potassium and causes electrocardiographic (ECG) changes, which in some individuals may provoke a cardiac arrhythmia […]. A possible mechanism that has been observed with ECG recordings during hypoglycemia is prolongation of the QT interval […]. Hypoglycemia-induced arrhythmias during sleep have been implicated as the cause of the “dead in bed” syndrome that is recognized in young people with type 1 diabetes (40). […] Total cerebral blood flow is increased during acute hypoglycemia while regional blood flow within the brain is altered acutely. Blood flow increases in the frontal cortex, presumably as a protective compensatory mechanism to enhance the supply of available glucose to the most vulnerable part of the brain. These regional vascular changes become permanent in people who are exposed to recurrent severe hypoglycemia and in those with impaired awareness of hypoglycemia, and are then present during normoglycemia (41). This probably represents an adaptive response of the brain to recurrent exposure to neuroglycopenia. However, these permanent hypoglycemia-induced changes in regional cerebral blood flow may encourage localized neuronal ischemia, particularly if the cerebral circulation is already compromised by the development of cerebrovascular disease associated with diabetes. 
[…] Hypoglycemia-induced EEG changes can persist for days or become permanent, particularly after recurrent severe hypoglycemia”.
“In the large British Diabetic Association Cohort Study of people who had developed type 1 diabetes before the age of 30, acute metabolic complications of diabetes were the greatest single cause of excess death under the age of 30; hypoglycemia was the cause of death in 18% of males and 6% of females in the 20–49 age group (47).”
“[The] syndromes of counterregulatory hormonal deficiencies and impaired awareness of hypoglycemia (IAH) develop over a period of years and ultimately affect a substantial proportion of people with type 1 diabetes and a lesser number with insulin-treated type 2 diabetes. They are considered to be components of hypoglycemia-associated autonomic failure (HAAF), through down-regulation of the central mechanisms within the brain that would normally activate glucoregulatory responses to hypoglycemia, including the release of counterregulatory hormones and the generation of warning symptoms (48). […] The glucagon secretory response to hypoglycemia becomes diminished or absent within a few years of the onset of insulin-deficient diabetes. With glucagon deficiency alone, blood glucose recovery from hypoglycemia is not noticeably affected because the secretion of epinephrine maintains counterregulation. However, almost half of those who have type 1 diabetes of 20 years duration have evidence of impairment of both glucagon and epinephrine in response to hypoglycemia (49); this seriously delays blood glucose recovery and allows progression to more severe and prolonged hypoglycemia when exposed to low blood glucose. People with type 1 diabetes who have these combined counterregulatory hormonal deficiencies have a 25-fold higher risk of experiencing severe hypoglycemia if they are subjected to intensive insulin therapy compared with those who have lost their glucagon response but have retained epinephrine secretion […] Impaired awareness is not an “all or none” phenomenon. “Partial” impairment of awareness may develop, with the individual being aware of some episodes of hypoglycemia but not others (53). Alternatively, the intensity or number of symptoms may be reduced, and neuroglycopenic symptoms predominate. 
[…] total absence of any symptoms, albeit subtle, is very uncommon […] IAH affects 20–25% of patients with type 1 diabetes (11, 55) and less than 10% with type 2 diabetes (24), becomes more prevalent with increasing duration of diabetes (12) […], and predisposes the patient to a sixfold higher risk of severe hypoglycemia than people who retain normal awareness (56). When IAH is associated with strict glycemic control during intensive insulin therapy or has followed episodes of recurrent severe hypoglycemia, it may be reversible by relaxing glycemic control or by avoiding further hypoglycemia (11), but in many patients with type 1 diabetes of long duration, it appears to be a permanent defect. […] The modern management of diabetes strives to achieve strict glycemic control using intensive therapy to avoid or minimize the long-term complications of diabetes; this strategy tends to increase the risk of hypoglycemia and promotes development of the acquired hypoglycemia syndromes.”
Below I have posted a list of the 156 books I read to completion in 2016, as well as links to blog posts covering the books and to the reviews of the books I’ve written on goodreads. At the bottom of the post I have also added the 7 books I did not finish this year, along with some related links and comments. This is unlikely to be the final edition of the post, as I’ll continue to add links and comments in 2017 if/when I blog about or review books mentioned below.
As I also mentioned earlier in the year, I have been reading a lot of fiction this year and not enough non-fiction. Regarding the ‘technical aspects’ of the list below, as usual the letters ‘f’ and ‘nf.’ in the parentheses correspond to ‘fiction’ and ‘non-fiction’, respectively, whereas the ‘m’ category covers ‘miscellaneous’ books. The numbers in the parentheses correspond to the goodreads ratings I thought the books deserved.
I did a brief count of the books on the list and concluded that it includes 30 books categorized as non-fiction, 20 books in the miscellaneous category, and 106 books categorized as fiction. As usual, non-fiction works published by Springer make up a substantial proportion of the non-fiction books I read (20%), with another 20% accounted for by Oxford University Press, Princeton University Press and Wiley/Wiley-Blackwell. Some of the authors in the fiction category have featured on these lists previously (Christie, Wodehouse, Bryson), but other names are new: Dick Francis (39 books), Tom Sharpe (16 books), David Sedaris (7 books), Mario Puzo (4 books), Gerald Durrell (3 books), and Connie Willis (3 books).
I shared my ‘year in books’ on goodreads, and that link includes a few summary stats as well as cover images of the books (annoyingly, a large-ish proportion of the non-fiction books are missing cover pictures, but it’s a neat visualization tool even so). With 156 books finished this year I read almost exactly 3 books per week on average, and the goodreads tools also tell me that I read 47,281 pages during the year. As I don’t believe goodreads includes the page counts of partially read books in that tool, this is probably a slight underestimate, but it’s in that neighbourhood anyway; it corresponds to ~130 pages per day on average (129.5) throughout the year, or roughly 900 pages per week. The average length of the books I finished was 309 pages, again according to goodreads.
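For what it’s worth, the arithmetic above checks out:

```python
books_finished = 156
pages_read = 47281  # goodreads total for the year

print(round(books_finished / 52, 1))  # 3.0 books per week
print(round(pages_read / 365, 1))     # 129.5 pages per day
print(round(pages_read / 365 * 7))    # 907, i.e. roughly 900 pages per week
```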
Since I started blogging, I have published roughly 500 posts about books I’ve read – in fact, I realized while writing this post that the next post I publish here in the ‘books’ category will be number 500 in that category. As should be obvious from the list below, as a rule I do not cover fiction books on this blog, aside from quote posts where I may occasionally include a few quotes from books I’ve read (I decided early on not to include links to such posts on lists like these, as that would be too much work). On the topic of quotes I should add, for readers not already aware of this, that I recently decided to move/copy a large number of quotes from this site to goodreads, and that I now update my goodreads quote collection more frequently than the one on this blog; at this point, my quote collection on goodreads includes 1347 quotes. For a few more details about this aspect of the goodreads site, see this post.
Both Dick Francis and Connie Willis were introduced to me by the SSC commentariat, and that link includes a lot of other author recommendations which might be of interest to you. I should perhaps also note before moving on to the list that I have recently added a not-insignificant number of books to my list of favourite books on goodreads, having (retrospectively) slightly modified my implicit selection criteria for the list. Previously, if a book had taught me a lot but I did not give it a five-star rating, or I figured it wasn’t at least very close to perfect, it wasn’t going to get anywhere near my list of favourite books. I recently figured that I should also include books which had taught me a lot or changed my way of looking at the world, even if they were not very close to perfect in most respects. I’m still not quite sure what the best categorization approach is, but as of now the list includes some books which did not feature on it in the near past. I mention the list explicitly here because people perusing a list like the one below are presumably in part looking for good books to read, and my inclusion of a book on that list can still be taken as a qualified recommendation.
1. 4.50 from Paddington (4, f). Agatha Christie.
3. Hickory Dickory Dock (3, f). Agatha Christie.
6. A Caribbean Mystery (3, f). Agatha Christie.
7. A Rulebook for Arguments (Hackett Student Handbooks) (1, nf. Hackett Publishing). Very short goodreads review here.
8. The Clocks (2, f). Agatha Christie.
15. By the Pricking of My Thumbs (2, f). Agatha Christie.
16. The Godfather (4, f). Mario Puzo.
21. A Few Quick Ones (3, f). P. G. Wodehouse.
22. Ice in the Bedroom (4, f). P. G. Wodehouse.
24. The Secret of Chimneys (2, f). Agatha Christie.
26. Something Fishy (3, f). P. G. Wodehouse.
27. Do Butlers Burgle Banks? (3, f). P. G. Wodehouse.
28. The Mirror Crack’d from Side to Side (1, f). Agatha Christie. Boring story, almost didn’t finish it.
29. Frozen Assets (4, f). P. G. Wodehouse.
30. A Cooperative Species: Human Reciprocity and Its Evolution (5, nf. Princeton University Press). Goodreads review here. Blog coverage here.
31. If I Were You (4, f). P. G. Wodehouse.
32. On the Shortness of Life (nf.). Seneca the Younger.
33. Barmy in Wonderland (3, f). P. G. Wodehouse.
38. Company for Henry (4, f). P. G. Wodehouse.
39. Bachelors Anonymous (5, f). P. G. Wodehouse. A short book, but very funny.
40. The Second World War (5, nf.). Winston Churchill. Very long; the book is a thousand-page abridgement of six volumes written by Churchill. Blog coverage here, here, here, and here. I added this book to my list of favourite books on goodreads.
41. The Old Reliable (3, f). P. G. Wodehouse.
42. Performing Flea (4, m). P. G. Wodehouse, William Townend.
45. The Road to Little Dribbling: Adventures of an American in Britain (3, m). Bill Bryson.
46. Bryson’s Dictionary of Troublesome Words: A Writer’s Guide to Getting It Right (3, nf.). Bill Bryson. Goodreads review here.
48. Shakespeare: The World as Stage (2, m). Bill Bryson.
50. The Sicilian (3, f). Mario Puzo.
53. Pre-Industrial Societies: Anatomy of the Pre-Modern World (5, nf. Oneworld Publications). Goodreads review here. I added this book to my list of favourite books on goodreads.
56. Aunts Aren’t Gentlemen (3, f). P. G. Wodehouse.
57. What If?: Serious Scientific Answers to Absurd Hypothetical Questions (2, m). Randall Munroe. Short goodreads review here.
61. Wilt In Nowhere (3, f). Tom Sharpe.
63. Monstrous Regiment (3, f). Terry Pratchett.
67. Vintage Stuff (2, f). Tom Sharpe.
70. Suicide Prevention and New Technologies: Evidence Based Practice (1, nf. Palgrave Macmillan). Long(-ish) goodreads review here.
72. Diabetes and the Metabolic Syndrome in Mental Health (2, nf. Lippincott Williams & Wilkins). Goodreads review here. Blog coverage here and here.
77. The Great Pursuit (3, f). Tom Sharpe.
78. Riotous Assembly (4, f). Tom Sharpe.
79. Indecent Exposure (3, f). Tom Sharpe.
87. Naked (3, m). David Sedaris.
91. When You Are Engulfed in Flames (3, m). David Sedaris.
93. Poor Richard’s Almanack (m). Benjamin Franklin.
97. The Garden of the Gods (3, m). Gerald Durrell.
100. The Thirteen Problems (2, f). Agatha Christie.
101. Dead Cert (4, f). Dick Francis.
102. Nerve (3, f). Dick Francis.
103. For Kicks (3, f). Dick Francis.
104. Odds Against (3, f). Dick Francis.
108. Blood Sport (3, f). Dick Francis.
110. Forfeit (2, f). Dick Francis.
117. High Stakes (4, f). Dick Francis.
118. In the Frame (3, f). Dick Francis.
119. Knockdown (3, f). Dick Francis.
121. Managing Diabetic Nephropathies in Clinical Practice (4, nf. Springer). Very short goodreads review here. Blog coverage here.
129. Proof (2, f). Dick Francis.
130. Break In (3, f). Dick Francis.
131. Integrated Diabetes Care: A Multidisciplinary Approach (4, nf. Springer). Goodreads review here. Blog coverage here and here.
136. Longshot (4, f). Dick Francis.
141. Decider (3, f). Dick Francis.
142. Essential Microbiology and Hygiene for Food Professionals (2, nf. CRC Press). Short goodreads review here.
143. Wild Horses (2, f). Dick Francis.
144. Come to Grief (4, f). Dick Francis.
145. To the Hilt (2, f). Dick Francis.
147. Second Wind (2, f). Dick Francis.
149. Under Orders (4, f). Dick Francis.
Books I did not finish:
Raising Steam (?, f). Terry Pratchett. These days I mostly use Pratchett’s books as a treat; the few remaining books in the Discworld series which I have yet to read are books I feel I have to earn the right to read. I started reading this one because I felt terrible at the time, but after a hundred pages or so I decided that I had not in fact earned the right to read it yet, and so I put it away again. Unlike the two books above, I do not consider this book to be bad – that is not why I didn’t finish it.
Anna Karenina (?, f). Tolstoy. As I pointed out in my short review, “so far (I stopped around page 140) it’s been a story about miserable Russians, and I can’t read that kind of stuff right now.” Again, I would not say this book is bad, but I could not read that kind of stuff at the time.
The Language Instinct: How the Mind Creates Language (nf., Harper Perennial Modern Classics). Pinker’s book may be one of the last popular science books I’ll read, at least for a while – I find that I simply can’t read this kind of book anymore. (This is annoying, because I also bought Jonathan Haidt’s The Righteous Mind this year, and I worry that I’ll never be able to read that book, despite the content being at least somewhat interesting, simply on account of the way it is likely to be written.) As I noted while reading the book, “I’ve realized by now that I’ve probably at this point grown to strongly dislike reading popular science books. I’ve disliked other PS books I’ve read in the semi-near past as well, but I always figured I had specific reasons for disliking a particular book. At this point it seems like it’s a general thing. I don’t like these books any more. Too imprecise language, claims are consistently way too strong, etc., etc..” My reading experience of Pinker’s book was definitely not improved by the fact that I have previously read textbooks on closely related topics (Eysenck and Keane, Snowling et al.).
Physiology at a Glance (?, Wiley-Blackwell). ‘Too much work, considering the pay-off’ would probably be the short version of why I didn’t finish this one – but this should not be taken as an indication that the book is bad. Despite the words ‘at a glance’ in the title, each short chapter (2 pages) in this book roughly matches the amount of material usually covered in an academic lecture (this is the general structure of the ‘at a glance’ books), which means that the book takes quite a bit more work than the limited page count might indicate. The fact that I knew many of the things covered didn’t make the book much faster to read than it otherwise might have been; it still took a lot of time and effort to digest the material. I’m sure there’s some stuff in the book which I don’t know and stuff I’ve forgotten, and I did learn some new things from the chapters I did read, so I’m conflicted about whether or not to pick it up again later – it may be worth it at some point. However, back when I was reading it I decided in the end to just put the book away and read something else instead. If you’re looking for a dense and to-the-point introduction to physiology/anatomy, I’m sure you could do a lot worse than this book.
100 Endgames You Must Know: Vital Lessons for Every Chess Player (?, nf. New in Chess). If I just wanted to be able to say that I had ‘read’ this book, I would have finished it a long time ago, but this is not the sort of book you just ‘read’. The positions covered need to be studied and analyzed in detail, played out, and perhaps reviewed (depending on how ambitious you are about your chess). I’m more than half-way through (p. 140 or so), but I rarely feel like working on this stuff, as it’s more fun to play chess than to systematically improve at it in the manner this book demands. It’s a great endgame book, but it takes a lot of work.