My working assumption while reading part two of the book was that I would not cover it in much detail here, because it would simply be too much work to make such posts legible to the readership of this blog. While writing this post, however, it occurred to me that since almost nobody reads along here anyway (I’m not complaining, mind you – this is how I like it these days), the main beneficiary of my blog posts will always be myself, which led to the related observation that I should not limit my coverage of interesting stuff here simply because some hypothetical and probably nonexistent readership out there might not be able to follow it. So although I started out writing this post under the assumption that it would be my last about the book, I now feel sure that if I find the time I’ll add at least one more post about the book’s statistics coverage. On a related note, I am explicitly making the observation here that this post was written for my benefit, not yours. You can read it if you like, or not, but it was not really written for you.
I have added bold in a few places to emphasize key concepts and observations from the quoted paragraphs and to make the post easier for me to navigate later (all the italics below are, on the other hand, those of the authors of the book).
“Biodemography is a multidisciplinary branch of science that unites under its umbrella various analytic approaches aimed at integrating biological knowledge and methods and traditional demographic analyses to shed more light on variability in mortality and health across populations and between individuals. Biodemography of aging is a special subfield of biodemography that focuses on understanding the impact of processes related to aging on health and longevity.”
“Mortality rates as a function of age are a cornerstone of many demographic analyses. The longitudinal age trajectories of biomarkers add a new dimension to the traditional demographic analyses: the mortality rate becomes a function of not only age but also of these biomarkers (with additional dependence on a set of sociodemographic variables). Such analyses should incorporate dynamic characteristics of trajectories of biomarkers to evaluate their impact on mortality or other outcomes of interest. Traditional analyses using baseline values of biomarkers (e.g., Cox proportional hazards or logistic regression models) do not take into account these dynamics. One approach to the evaluation of the impact of biomarkers on mortality rates is to use the Cox proportional hazards model with time-dependent covariates; this approach is used extensively in various applications and is available in all popular statistical packages. In such a model, the biomarker is considered a time-dependent covariate of the hazard rate and the corresponding regression parameter is estimated along with standard errors to make statistical inference on the direction and the significance of the effect of the biomarker on the outcome of interest (e.g., mortality). However, the choice of the analytic approach should not be governed exclusively by its simplicity or convenience of application. It is essential to consider whether the method gives meaningful and interpretable results relevant to the research agenda. In the particular case of biodemographic analyses, the Cox proportional hazards model with time-dependent covariates is not the best choice.”
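For my own later reference, the model being criticized here can be written down compactly; this is the standard textbook notation, not the book’s own equation numbering:

```latex
% Cox proportional hazards model with a time-dependent covariate x(t)
h(t \mid x(t)) = h_0(t) \, \exp\{\beta \, x(t)\}
```

Here \(h_0(t)\) is the unspecified baseline hazard and \(\beta\) is estimated from the partial likelihood; plugging the raw measured biomarker values in as \(x(t)\) is precisely the practice the authors go on to object to.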
“Longitudinal studies of aging present special methodological challenges due to inherent characteristics of the data that need to be addressed in order to avoid biased inference. The challenges are related to the fact that the populations under study (aging individuals) experience substantial dropout rates related to death or poor health and often have co-morbid conditions related to the disease of interest. The standard assumption made in longitudinal analyses (although usually not explicitly mentioned in publications) is that dropout (e.g., death) is not associated with the outcome of interest. While this can be safely assumed in many general longitudinal studies (where, e.g., the main causes of dropout might be the administrative end of the study or moving out of the study area, which are presumably not related to the studied outcomes), the very nature of the longitudinal outcomes (e.g., measurements of some physiological biomarkers) analyzed in a longitudinal study of aging assumes that they are (at least hypothetically) related to the process of aging. Because the process of aging leads to the development of diseases and, eventually, death, in longitudinal studies of aging an assumption of non-association of the reason for dropout and the outcome of interest is, at best, risky, and usually is wrong. As an illustration, we found that the average trajectories of different physiological indices of individuals dying at earlier ages markedly deviate from those of long-lived individuals, both in the entire Framingham original cohort […] and also among carriers of specific alleles […] In such a situation, panel compositional changes due to attrition affect the averaging procedure and modify the averages in the total sample. Furthermore, biomarkers are subject to measurement error and random biological variability. They are usually collected intermittently at examination times which may be sparse and typically biomarkers are not observed at event times. 
It is well known in the statistical literature that ignoring measurement errors and biological variation in such variables and using their observed “raw” values as time-dependent covariates in a Cox regression model may lead to biased estimates and incorrect inferences […] Standard methods of survival analysis such as the Cox proportional hazards model (Cox 1972) with time-dependent covariates should be avoided in analyses of biomarkers measured with errors because they can lead to biased estimates.”
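The attenuation the authors warn about is easy to demonstrate with the simplest errors-in-variables setup. The simulation below is my own sketch (plain linear regression rather than a Cox model, and all parameter values invented), but the mechanism is the same one at work in the survival setting: regressing the outcome on a noisy version of the true covariate shrinks the estimated effect toward zero.

```python
import random

random.seed(42)
n = 20_000
beta = 1.0                    # true effect of the biomarker on the outcome
sigma_x, sigma_e = 1.0, 1.0   # SD of the true biomarker and of its measurement error

x  = [random.gauss(0.0, sigma_x) for _ in range(n)]
y  = [beta * xi + random.gauss(0.0, 0.5) for xi in x]   # outcome driven by the TRUE value
xo = [xi + random.gauss(0.0, sigma_e) for xi in x]      # what we actually observe

def ols_slope(xs, ys):
    """Ordinary least squares slope of ys on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

b_true = ols_slope(x, y)    # close to the true beta = 1.0
b_obs  = ols_slope(xo, y)   # attenuated toward beta * sigma_x^2/(sigma_x^2 + sigma_e^2) = 0.5
```

With equal biomarker and error variances the "reliability ratio" is 0.5, so the naive analysis recovers only about half of the true effect; the direction of the bias is always toward zero in this simple setup.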
“Statistical methods aimed at analyses of time-to-event data jointly with longitudinal measurements have become known in the mainstream biostatistical literature as “joint models for longitudinal and time-to-event data” (“survival” or “failure time” are often used interchangeably with “time-to-event”) or simply “joint models.” This is an active and fruitful area of biostatistics with an explosive growth in recent years. […] The standard joint model consists of two parts, the first representing the dynamics of longitudinal data (which is referred to as the “longitudinal sub-model”) and the second one modeling survival or, generally, time-to-event data (which is referred to as the “survival sub-model”). […] Numerous extensions of this basic model have appeared in the joint modeling literature in recent decades, providing great flexibility in applications to a wide range of practical problems. […] The standard parameterization of the joint model (11.2) assumes that the risk of the event at age t depends on the current “true” value of the longitudinal biomarker at this age. While this is a reasonable assumption in general, it may be argued that additional dynamic characteristics of the longitudinal trajectory can also play a role in the risk of death or onset of a disease. For example, if two individuals at the same age have exactly the same level of some biomarker at this age, but the trajectory for the first individual increases faster with age than that of the second one, then the first individual can have worse survival chances for subsequent years. […] Therefore, extensions of the basic parameterization of joint models allowing for dependence of the risk of an event on such dynamic characteristics of the longitudinal trajectory can provide additional opportunities for comprehensive analyses of relationships between the risks and longitudinal trajectories. Several authors have considered such extended models. 
[…] joint models are computationally intensive and are sometimes prone to convergence problems [however such] models provide more efficient estimates of the effect of a covariate […] on the time-to-event outcome in the case in which there is […] an effect of the covariate on the longitudinal trajectory of a biomarker. This means that analyses of longitudinal and time-to-event data in joint models may require smaller sample sizes to achieve comparable statistical power with analyses based on time-to-event data alone (Chen et al. 2011).”
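In the standard notation of the joint-modeling literature (my transcription, with symbols chosen for illustration rather than copied from the book’s equation (11.2)), the two sub-models and the slope extension discussed above look like this:

```latex
% longitudinal sub-model: observed biomarker = smooth "true" trajectory + error
y_i(t_{ij}) = m_i(t_{ij}) + \varepsilon_{ij}, \qquad \varepsilon_{ij} \sim N(0, \sigma^2)

% survival sub-model: the hazard depends on the current true value m_i(t)
h_i(t) = h_0(t) \, \exp\{\gamma^{\top} w_i + \alpha \, m_i(t)\}

% extension: the slope m_i'(t) enters the hazard as well, so two individuals
% with the same current value but different rates of change get different risks
h_i(t) = h_0(t) \, \exp\{\gamma^{\top} w_i + \alpha_1 \, m_i(t) + \alpha_2 \, m_i'(t)\}
```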
“To be useful as a tool for biodemographers and gerontologists who seek biological explanations for observed processes, models of longitudinal data should be based on realistic assumptions and reflect relevant knowledge accumulated in the field. An example is the shape of the risk functions. Epidemiological studies show that the conditional hazards of health and survival events considered as functions of risk factors often have U- or J-shapes […], so a model of aging-related changes should incorporate this information. In addition, risk variables, and, what is very important, their effects on the risks of corresponding health and survival events, experience aging-related changes and these can differ among individuals. […] An important class of models for joint analyses of longitudinal and time-to-event data incorporating a stochastic process for description of longitudinal measurements uses an epidemiologically-justified assumption of a quadratic hazard (i.e., U-shaped in general and J-shaped for variables that can take values only on one side of the U-curve) considered as a function of physiological variables. Quadratic hazard models have been developed and intensively applied in studies of human longitudinal data”.
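A minimal sketch of what a quadratic (U-shaped) hazard looks like as a function of a physiological variable; the functional form follows the description above, but every number in it is invented for illustration:

```python
def quadratic_hazard(x, t, mu0=0.01, q=0.002, f=lambda t: 120.0 - 0.1 * t):
    """U-shaped hazard: minimal at the age-specific 'optimal' value f(t).

    x   : current value of the physiological variable (e.g. systolic BP)
    t   : age
    mu0 : baseline hazard at the optimum
    q   : curvature (how steeply risk grows away from the optimum)
    f   : age trajectory of the optimal value (illustrative numbers only)
    """
    return mu0 + q * (x - f(t)) ** 2

# risk is lowest near the optimum and rises on BOTH sides (the U-shape)
at_opt = quadratic_hazard(115.0, 50)   # f(50) = 115, so this sits at the minimum
below  = quadratic_hazard(95.0, 50)
above  = quadratic_hazard(135.0, 50)
```

A J-shape, as mentioned in the quote, is just what this same curve looks like for a variable that in practice only takes values on one side of the optimum.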
“Various approaches to statistical model building and data analysis that incorporate unobserved heterogeneity are ubiquitous in different scientific disciplines. Unobserved heterogeneity in models of health and survival outcomes can arise because there may be relevant risk factors affecting an outcome of interest that are either unknown or not measured in the data. Frailty models introduce the concept of unobserved heterogeneity in survival analysis for time-to-event data. […] Individual age trajectories of biomarkers can differ due to various observed as well as unobserved (and unknown) factors and such individual differences propagate to differences in risks of related time-to-event outcomes such as the onset of a disease or death. […] The joint analysis of longitudinal and time-to-event data is the realm of a special area of biostatistics named “joint models for longitudinal and time-to-event data” or simply “joint models” […] Approaches that incorporate heterogeneity in populations through random variables with continuous distributions (as in the standard joint models and their extensions […]) assume that the risks of events and longitudinal trajectories follow similar patterns for all individuals in a population (e.g., that biomarkers change linearly with age for all individuals). Although such homogeneity in patterns can be justifiable for some applications, generally this is a rather strict assumption […] A population under study may consist of subpopulations with distinct patterns of longitudinal trajectories of biomarkers that can also have different effects on the time-to-event outcome in each subpopulation. When such subpopulations can be defined on the base of observed covariate(s), one can perform stratified analyses applying different models for each subpopulation. 
However, observed covariates may not capture the entire heterogeneity in the population in which case it may be useful to conceive of the population as consisting of latent subpopulations defined by unobserved characteristics. Special methodological approaches are necessary to accommodate such hidden heterogeneity. Within the joint modeling framework, a special class of models, joint latent class models, was developed to account for such heterogeneity […] The joint latent class model has three components. First, it is assumed that a population consists of a fixed number of (latent) subpopulations. The latent class indicator represents the latent class membership and the probability of belonging to the latent class is specified by a multinomial logistic regression function of observed covariates. It is assumed that individuals from different latent classes have different patterns of longitudinal trajectories of biomarkers and different risks of event. The key assumption of the model is conditional independence of the biomarker and the time-to-events given the latent classes. Then the class-specific models for the longitudinal and time-to-event outcomes constitute the second and third component of the model thus completing its specification. […] the latent class stochastic process model […] provides a useful tool for dealing with unobserved heterogeneity in joint analyses of longitudinal and time-to-event outcomes and taking into account hidden components of aging in their joint influence on health and longevity. This approach is also helpful for sensitivity analyses in applications of the original stochastic process model. We recommend starting the analyses with the original stochastic process model and estimating the model ignoring possible hidden heterogeneity in the population. 
Then the latent class stochastic process model can be applied to test hypotheses about the presence of hidden heterogeneity in the data in order to appropriately adjust the conclusions if a latent structure is revealed.”
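The three components of the joint latent class model translate into code quite directly. The sketch below (with made-up covariates and coefficients) shows the multinomial-logistic class membership and the way the conditional-independence assumption makes the individual likelihood a simple mixture over classes:

```python
import math

def class_probs(covariates, coefs):
    """P(class = g | covariates) via multinomial logistic regression.
    coefs holds one coefficient vector per latent class; the last class
    is the reference. All numbers used here are made up for illustration."""
    scores = [sum(c * x for c, x in zip(cs, covariates)) for cs in coefs]
    scores.append(0.0)                       # reference class
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def individual_likelihood(pis, long_liks, surv_liks):
    """Key assumption: GIVEN the latent class, the biomarker trajectory and
    the time-to-event are independent, so within each class the longitudinal
    and survival contributions simply multiply; then mix over classes."""
    return sum(p * l * s for p, l, s in zip(pis, long_liks, surv_liks))

# two non-reference classes, covariates = (intercept, age); all values invented
pis = class_probs([1.0, 65.0], [[0.5, -0.01], [-1.0, 0.02]])
```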
“The longitudinal genetic-demographic model (or the genetic-demographic model for longitudinal data) […] combines three sources of information in the likelihood function: (1) follow-up data on survival (or, generally, on some time-to-event) for genotyped individuals; (2) (cross-sectional) information on ages at biospecimen collection for genotyped individuals; and (3) follow-up data on survival for non-genotyped individuals. […] Such joint analyses of genotyped and non-genotyped individuals can result in substantial improvements in statistical power and accuracy of estimates compared to analyses of the genotyped subsample alone if the proportion of non-genotyped participants is large. Situations in which genetic information cannot be collected for all participants of longitudinal studies are not uncommon. They can arise for several reasons: (1) the longitudinal study may have started some time before genotyping was added to the study design so that some initially participating individuals dropped out of the study (i.e., died or were lost to follow-up) by the time of genetic data collection; (2) budget constraints prohibit obtaining genetic information for the entire sample; (3) some participants refuse to provide samples for genetic analyses. Nevertheless, even when genotyped individuals constitute a majority of the sample or the entire sample, application of such an approach is still beneficial […] The genetic stochastic process model […] adds a new dimension to genetic biodemographic analyses, combining information on longitudinal measurements of biomarkers available for participants of a longitudinal study with follow-up data and genetic information. 
Such joint analyses of different sources of information collected in both genotyped and non-genotyped individuals allow for more efficient use of the research potential of longitudinal data which otherwise remains underused when only genotyped individuals or only subsets of available information (e.g., only follow-up data on genotyped individuals) are involved in analyses. Similar to the longitudinal genetic-demographic model […], the benefits of combining data on genotyped and non-genotyped individuals in the genetic SPM come from the presence of common parameters describing characteristics of the model for genotyped and non-genotyped subsamples of the data. This takes into account the knowledge that the non-genotyped subsample is a mixture of carriers and non-carriers of the same alleles or genotypes represented in the genotyped subsample and applies the ideas of heterogeneity analyses […] When the non-genotyped subsample is substantially larger than the genotyped subsample, these joint analyses can lead to a noticeable increase in the power of statistical estimates of genetic parameters compared to estimates based only on information from the genotyped subsample. This approach is applicable not only to genetic data but to any discrete time-independent variable that is observed only for a subsample of individuals in a longitudinal study.”
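The way shared parameters tie the two subsamples together is easiest to see in the likelihood itself. Below is a toy version with exponential survival models, where every rate and carrier frequency is made up; the structure, though, matches the description above: genotyped individuals contribute genotype-specific terms, while non-genotyped individuals contribute a carrier/non-carrier mixture over the same models.

```python
import math

def exp_lik(rate):
    """Exponential survival density for an observed death at time d['t']."""
    return lambda d: rate * math.exp(-rate * d["t"])

def loglik(genotyped, non_genotyped, lik_c, lik_n, p_carrier):
    """Shared-parameter log-likelihood over both subsamples."""
    ll = 0.0
    for d in genotyped:                       # genotype observed directly
        ll += math.log((lik_c if d["carrier"] else lik_n)(d))
    for d in non_genotyped:                   # genotype unobserved: mixture over
        ll += math.log(p_carrier * lik_c(d)   # the SAME carrier/non-carrier models
                       + (1 - p_carrier) * lik_n(d))
    return ll

lik_c, lik_n = exp_lik(0.05), exp_lik(0.02)   # carriers die faster (invented rates)
ll = loglik([{"carrier": True, "t": 10.0}],
            [{"t": 30.0}], lik_c, lik_n, p_carrier=0.2)
```

Because `lik_c` and `lik_n` appear in both loops, the non-genotyped subsample genuinely informs the estimates of the genotype-specific parameters, which is where the power gain the authors describe comes from.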
“Despite an existing tradition of interpreting differences in the shapes or parameters of the mortality rates (survival functions) resulting from the effects of exposure to different conditions or other interventions in terms of characteristics of individual aging, this practice has to be used with care. This is because such characteristics are difficult to interpret in terms of properties of external and internal processes affecting the chances of death. An important question then is: What kind of mortality model has to be developed to obtain parameters that are biologically interpretable? The purpose of this chapter is to describe an approach to mortality modeling that represents mortality rates in terms of parameters of physiological changes and declining health status accompanying the process of aging in humans. […] A traditional (demographic) description of changes in individual health/survival status is performed using a continuous-time random Markov process with a finite number of states, and age-dependent transition intensity functions (transitions rates). Transitions to the absorbing state are associated with death, and the corresponding transition intensity is a mortality rate. Although such a description characterizes connections between health and mortality, it does not allow for studying factors and mechanisms involved in the aging-related health decline. Numerous epidemiological studies provide compelling evidence that health transition rates are influenced by a number of factors. Some of them are fixed at the time of birth […]. Others experience stochastic changes over the life course […] The presence of such randomly changing influential factors violates the Markov assumption, and makes the description of aging-related changes in health status more complicated. 
[…] The age dynamics of influential factors (e.g., physiological variables) in connection with mortality risks has been described using a stochastic process model of human mortality and aging […]. Recent extensions of this model have been used in analyses of longitudinal data on aging, health, and longevity, collected in the Framingham Heart Study […] This model and its extensions are described in terms of a Markov stochastic process satisfying a diffusion-type stochastic differential equation. The stochastic process is stopped at random times associated with individuals’ deaths. […] When an individual’s health status is taken into account, the coefficients of the stochastic differential equations become dependent on values of the jumping process. This dependence violates the Markov assumption and renders the conditional Gaussian property invalid. So the description of this (continuously changing) component of aging-related changes in the body also becomes more complicated. Since studying age trajectories of physiological states in connection with changes in health status and mortality would provide more realistic scenarios for analyses of available longitudinal data, it would be a good idea to find an appropriate mathematical description of the joint evolution of these interdependent processes in aging organisms. For this purpose, we propose a comprehensive model of human aging, health, and mortality in which the Markov assumption is fulfilled by a two-component stochastic process consisting of jumping and continuously changing processes. The jumping component is used to describe relatively fast changes in health status occurring at random times, and the continuous component describes relatively slow stochastic age-related changes of individual physiological states. 
[…] The use of stochastic differential equations for random continuously changing covariates has been studied intensively in the analysis of longitudinal data […] Such a description is convenient since it captures the feedback mechanism typical of biological systems reflecting regular aging-related changes and takes into account the presence of random noise affecting individual trajectories. It also captures the dynamic connections between aging-related changes in health and physiological states, which are important in many applications.”
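A diffusion-type stochastic differential equation with the feedback (mean-reversion) property described above can be simulated with a basic Euler–Maruyama scheme. This is my own sketch with invented parameter values, mimicking a physiological variable pulled toward an age-dependent norm f(t) while being buffeted by random noise:

```python
import random

def simulate_trajectory(y0=120.0, t0=40.0, t1=80.0, dt=0.1, a=0.3, b=2.0,
                        f=lambda t: 110.0 + 0.5 * (t - 40.0)):
    """Euler–Maruyama discretization of  dY = a*(f(t) - Y) dt + b dW.

    The drift term a*(f(t) - Y) is the feedback mechanism: Y is pulled back
    toward the age-dependent norm f(t). The b dW term is random biological
    noise. All parameter values are illustrative, not estimates.
    """
    n = int(round((t1 - t0) / dt))
    y, path = y0, [(t0, y0)]
    for i in range(n):
        t = t0 + i * dt
        y += a * (f(t) - y) * dt + b * random.gauss(0.0, dt ** 0.5)
        path.append((t0 + (i + 1) * dt, y))
    return path

random.seed(7)
path = simulate_trajectory()   # one simulated trajectory from age 40 to 80
```

In the full model the coefficients would additionally depend on the jumping health-status process, which, as the quote notes, is exactly what breaks the Markov property of the continuous component alone.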
The links above are to topics I looked up while reading the second half of the book. The first link is quite relevant to the book’s coverage, as a comprehensive longitudinal Grade of Membership (GoM) model is covered in chapter 17. Relatedly, chapter 18 covers linear latent structure (LLS) models, and as observed in the book, LLS is a generalization of GoM. As should be obvious from the nature of the links, some of the stuff included in the second half of the text is highly technical, and I’ll readily admit I was not fully able to understand all the details of the coverage of chapters 17 and 18 in particular. On account of the technical nature of the coverage in Part 2 I’m not sure I’ll cover the second half of the book in much detail, though I shall probably devote at least one more post to some of those topics, as they were quite interesting even if some of the details were difficult to follow.
I have almost finished the book at this point, and I have already decided both to give the book five stars and to include it on my list of favorite books on goodreads; it’s really well written, and it consistently provides highly detailed coverage of very high quality. As I also noted in the first post about the book, the authors have given readability aspects some thought, and I am sure most readers would learn quite a bit from this text even if they were to skip some of the more technical chapters. The main body of Part 2 of the book, the subtitle of which is ‘Statistical Modeling of Aging, Health, and Longevity’, is however probably not worth the effort of reading unless you have a solid background in statistics.
This post includes some observations and quotes from the last chapters of the book’s Part 1.
“The proportion of older adults in the U.S. population is growing. This raises important questions about the increasing prevalence of aging-related diseases, multimorbidity issues, and disability among the elderly population. […] In 2009, 46.3 million people were covered by Medicare: 38.7 million of them were aged 65 years and older, and 7.6 million were disabled […]. By 2031, when the baby-boomer generation will be completely enrolled, Medicare is expected to reach 77 million individuals […]. Because the Medicare program covers 95 % of the nation’s aged population […], the prediction of future Medicare costs based on these data can be an important source of health care planning.”
“Three essential components (which could be also referred as sub-models) need to be developed to construct a modern model of forecasting of population health and associated medical costs: (i) a model of medical cost projections conditional on each health state in the model, (ii) health state projections, and (iii) a description of the distribution of initial health states of a cohort to be projected […] In making medical cost projections, two major effects should be taken into account: the dynamics of the medical costs during the time periods comprising the date of onset of chronic diseases and the increase of medical costs during the last years of life. In this chapter, we investigate and model the first of these two effects. […] the approach developed in this chapter generalizes the approach known as “life tables with covariates” […], resulting in a new family of forecasting models with covariates such as comorbidity indexes or medical costs. In sum, this chapter develops a model of the relationships between individual cost trajectories following the onset of aging-related chronic diseases. […] The underlying methodological idea is to aggregate the health state information into a single (or several) covariate(s) that can be determinative in predicting the risk of a health event (e.g., disease incidence) and whose dynamics could be represented by the model assumptions. An advantage of such an approach is its substantial reduction of the degrees of freedom compared with existing forecasting models (e.g., the FEM model, Goldman and RAND Corporation 2004). 
[…] We found that the time patterns of medical cost trajectories were similar for all diseases considered and can be described in terms of four components having the meanings of (i) the pre-diagnosis cost associated with initial comorbidity represented by medical expenditures, (ii) the cost peak associated with the onset of each disease, (iii) the decline/reduction in medical expenditures after the disease onset, and (iv) the difference between post- and pre-diagnosis cost levels associated with an acquired comorbidity. The description of the trajectories was formalized by a model which explicitly involves four parameters reflecting these four components.”
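The four-component cost trajectory described above can be sketched as a simple parametric function of time since onset. The functional form (a jump at onset followed by exponential decline toward a new, higher plateau) and all dollar figures are my own illustrative choices, not the book’s fitted model:

```python
import math

def cost_trajectory(m, m_onset, pre=500.0, peak=4000.0,
                    post_minus_pre=300.0, decline_rate=0.5):
    """Monthly medical cost as a function of month m, for onset at m_onset.

    The four components from the text (all values here are invented):
      (i)   pre-diagnosis level `pre` (initial comorbidity)
      (ii)  a cost peak of height `peak` at the month of onset
      (iii) an exponential decline in costs after onset
      (iv)  a post-diagnosis plateau `post_minus_pre` above the old level
            (the acquired comorbidity)
    """
    if m < m_onset:
        return pre
    new_level = pre + post_minus_pre
    return new_level + (peak - new_level) * math.exp(-decline_rate * (m - m_onset))

before   = cost_trajectory(5, 12)    # pre-diagnosis level
at_onset = cost_trajectory(12, 12)   # the peak
later    = cost_trajectory(36, 12)   # has settled near the new, higher plateau
```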
As I noted earlier in my coverage of the book, I don’t think the model above fully captures all relevant cost contributions of the diseases included, as the follow-up period was too short to capture all the costs that belong in component (iv) of the model. This is definitely a problem in the context of diabetes. But then again nothing in theory stops people from combining the model above with other models that are better at dealing with the excess costs associated with long-term complications of chronic diseases, and the model results were intriguing even if the model likely underperforms in a few specific disease contexts.
“Models of medical cost projections usually are based on regression models estimated with the majority of independent predictors describing demographic status of the individual, patient’s health state, and level of functional limitations, as well as their interactions […]. If the health states needs to be described by a number of simultaneously manifested diseases, then detailed stratification over the categorized variables or use of multivariate regression models allows for a better description of the health states. However, it can result in an abundance of model parameters to be estimated. One way to overcome these difficulties is to use an approach in which the model components are demographically-based aggregated characteristics that mimic the effects of specific states. The model developed in this chapter is an example of such an approach: the use of a comorbidity index rather than of a set of correlated categorical regressor variables to represent the health state allows for an essential reduction in the degrees of freedom of the problem.”
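The dimension-reduction idea is the same one behind Charlson-style comorbidity indices: instead of one dummy variable (and parameter) per disease plus interactions, the health state enters the regression as a single weighted sum. The weights below are invented for illustration (the real Charlson index has its own published weights):

```python
# hypothetical Charlson-style weights: many correlated disease indicators
# are collapsed into one scalar regressor, which is the "reduction in
# degrees of freedom" the authors describe
WEIGHTS = {"diabetes": 1, "chf": 1, "copd": 1,
           "cancer": 2, "metastatic_cancer": 6}

def comorbidity_index(conditions):
    """Sum the weights of a patient's recorded conditions (0 for unlisted ones)."""
    return sum(WEIGHTS.get(c, 0) for c in set(conditions))

score = comorbidity_index(["diabetes", "cancer", "flu"])  # 1 + 2 + 0 = 3
```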
“Unlike mortality, the onset time of chronic disease is difficult to define with high precision due to the large variety of disease-specific criteria for onset/incident case identification […] there is always some arbitrariness in defining the date of chronic disease onset, and a unified definition of date of onset is necessary for population studies with a long-term follow-up.”
“Individual age trajectories of physiological indices are the product of a complicated interplay among genetic and non-genetic (environmental, behavioral, stochastic) factors that influence the human body during the course of aging. Accordingly, they may differ substantially among individuals in a cohort. Despite this fact, the average age trajectories for the same index follow remarkable regularities. […] some indices tend to change monotonically with age: the level of blood glucose (BG) increases almost monotonically; pulse pressure (PP) increases from age 40 until age 85, then levels off and shows a tendency to decline only at later ages. The age trajectories of other indices are non-monotonic: they tend to increase first and then decline. Body mass index (BMI) increases up to about age 70 and then declines, diastolic blood pressure (DBP) increases until age 55–60 and then declines, systolic blood pressure (SBP) increases until age 75 and then declines, serum cholesterol (SCH) increases until age 50 in males and age 70 in females and then declines, ventricular rate (VR) increases until age 55 in males and age 45 in females and then declines. With small variations, these general patterns are similar in males and females. The shapes of the age-trajectories of the physiological variables also appear to be similar for different genotypes. […] The effects of these physiological indices on mortality risk were studied in Yashin et al. (2006), who found that the effects are gender and age specific. They also found that the dynamic properties of the individual age trajectories of physiological indices may differ dramatically from one individual to the next.”
“An increase in the mortality rate with age is traditionally associated with the process of aging. This influence is mediated by aging-associated changes in thousands of biological and physiological variables, some of which have been measured in aging studies. The fact that the age trajectories of some of these variables differ among individuals with short and long life spans and healthy life spans indicates that dynamic properties of the indices affect life history traits. Our analyses of the FHS data clearly demonstrate that the values of physiological indices at age 40 are significant contributors both to life span and healthy life span […] suggesting that normalizing these variables around age 40 is important for preventing age-associated morbidity and mortality later in life. […] results [also] suggest that keeping physiological indices stable over the years of life could be as important as their normalizing around age 40.”
“The results […] indicate that, in the quest of identifying longevity genes, it may be important to look for candidate genes with pleiotropic effects on more than one dynamic characteristic of the age-trajectory of a physiological variable, such as genes that may influence both the initial value of a trait (intercept) and the rates of its changes over age (slopes). […] Our results indicate that the dynamic characteristics of age-related changes in physiological variables are important predictors of morbidity and mortality risks in aging individuals. […] We showed that the initial value (intercept), the rate of changes (slope), and the variability of a physiological index, in the age interval 40–60 years, significantly influenced both mortality risk and onset of unhealthy life at ages 60+ in our analyses of the Framingham Heart Study data. That is, these dynamic characteristics may serve as good predictors of late life morbidity and mortality risks. The results also suggest that physiological changes taking place in the organism in middle life may affect longevity through promoting or preventing diseases of old age. For non-monotonically changing indices, we found that having a later age at the peak value of the index […], a lower peak value […], a slower rate of decline in the index at older ages […], and less variability in the index over time, can be beneficial for longevity. Also, the dynamic characteristics of the physiological indices were, overall, associated with mortality risk more significantly than with onset of unhealthy life.”
“Decades of studies of candidate genes show that they are not linked to aging-related traits in a straightforward manner […]. Recent genome-wide association studies (GWAS) have reached fundamentally the same conclusion by showing that the traits in late life likely are controlled by a relatively large number of common genetic variants […]. Further, GWAS often show that the detected associations are of tiny effect […] the weak effect of genes on traits in late life can be not only because they confer small risks having small penetrance but because they confer large risks but in a complex fashion […] In this chapter, we consider several examples of complex modes of gene actions, including genetic tradeoffs, antagonistic genetic effects on the same traits at different ages, and variable genetic effects on lifespan. The analyses focus on the APOE common polymorphism. […] The analyses reported in this chapter suggest that the e4 allele can be protective against cancer with a more pronounced role in men. This protective effect is more characteristic of cancers at older ages and it holds in both the parental and offspring generations of the FHS participants. Unlike cancer, the effect of the e4 allele on risks of CVD is more pronounced in women. […] [The] results […] explicitly show that the same allele can change its role on risks of CVD in an antagonistic fashion from detrimental in women with onsets at younger ages to protective in women with onsets at older ages. […] e4 allele carriers have worse survival compared to non-e4 carriers in each cohort. […] Sex stratification shows sexual dimorphism in the effect of the e4 allele on survival […] with the e4 female carriers, particularly, being more exposed to worse survival. […] The results of these analyses provide two important insights into the role of genes in lifespan. First, they provide evidence on the key role of aging-related processes in genetic susceptibility to lifespan. 
For example, taking into account the specifics of aging-related processes gains 18 % in estimates of the RRs and five orders of magnitude in significance in the same sample of women […] without additional investments in increasing sample sizes and new genotyping. The second is that a detailed study of the role of aging-related processes in estimates of the effects of genes on lifespan (and healthspan) helps in detecting more homogeneous [high risk] sub-samples”.
“The aging of populations in developed countries requires effective strategies to extend healthspan. A promising solution could be to yield insights into the genetic predispositions for endophenotypes, diseases, well-being, and survival. It was thought that genome-wide association studies (GWAS) would be a major breakthrough in this endeavor. Various genetic association studies including GWAS assume that there should be a deterministic (unconditional) genetic component in such complex phenotypes. However, the idea of unconditional contributions of genes to these phenotypes faces serious difficulties which stem from the lack of direct evolutionary selection against or in favor of such phenotypes. In fact, evolutionary constraints imply that genes should be linked to age-related phenotypes in a complex manner through different mechanisms specific for given periods of life. Accordingly, the linkage between genes and these traits should be strongly modulated by age-related processes in a changing environment, i.e., by the individuals’ life course. The inherent sensitivity of genetic mechanisms of complex health traits to the life course will be a key concern as long as genetic discoveries continue to be aimed at improving human health.”
“Despite the common understanding that age is a risk factor of not just one but a large portion of human diseases in late life, each specific disease is typically considered as a stand-alone trait. Independence of diseases was a plausible hypothesis in the era of infectious diseases caused by different strains of microbes. Unlike those diseases, the exact etiology and precursors of diseases in late life are still elusive. It is clear, however, that the origin of these diseases differs from that of infectious diseases and that age-related diseases reflect a complicated interplay among ontogenetic changes, senescence processes, and damages from exposures to environmental hazards. Studies of the determinants of diseases in late life provide insights into a number of risk factors, apart from age, that are common for the development of many health pathologies. The presence of such common risk factors makes chronic diseases and hence risks of their occurrence interdependent. This means that the results of many calculations using the assumption of disease independence should be used with care. Chapter 4 argued that disregarding potential dependence among diseases may seriously bias estimates of potential gains in life expectancy attributable to the control or elimination of a specific disease and that the results of the process of coping with a specific disease will depend on the disease elimination strategy, which may affect mortality risks from other diseases.”
“The goal of this monograph is to show how questions about the connections between and among aging, health, and longevity can be addressed using the wealth of available accumulated knowledge in the field, the large volumes of genetic and non-genetic data collected in longitudinal studies, and advanced biodemographic models and analytic methods. […] This monograph visualizes aging-related changes in physiological variables and survival probabilities, describes methods, and summarizes the results of analyses of longitudinal data on aging, health, and longevity in humans performed by the group of researchers in the Biodemography of Aging Research Unit (BARU) at Duke University during the past decade. […] the focus of this monograph is studying dynamic relationships between aging, health, and longevity characteristics […] our focus on biodemography/biomedical demography meant that we needed to have an interdisciplinary and multidisciplinary biodemographic perspective spanning the fields of actuarial science, biology, economics, epidemiology, genetics, health services research, mathematics, probability, and statistics, among others.”
The quotes above are from the book’s preface. In case this aspect was not clear from the comments above, this is the kind of book where you’ll randomly encounter sentences like these:
“The simplest model describing negative correlations between competing risks is the multivariate lognormal frailty model. We illustrate the properties of such model for the bivariate case.”
“The time-to-event sub-model specifies the latent class-specific expressions for the hazard rates conditional on the vector of biomarkers Yt and the vector of observed covariates X …”
…which means that some parts of the book are really hard to blog; it simply takes more effort to deal with this stuff here than it’s worth. As a result of this my coverage of the book will not provide a remotely ‘balanced view’ of the topics covered in it; I’ll skip a lot of the technical stuff because I don’t think it makes much sense to cover specific models and algorithms included in the book in detail here. However I should probably also emphasize while on this topic that although the book is in general not an easy read, it’s hard to read because ‘this stuff is complicated’, not because the authors are not trying. The authors in fact make it clear already in the preface that some chapters are easier to read than others and that some chapters are actually deliberately written as ‘guideposts and way-stations’, as they put it, in order to make it easier for the reader to find the stuff in which he or she is most interested (“the interested reader can focus directly on the chapters/sections of greatest interest without having to read the entire volume”) – they have definitely given readability aspects some thought, and I very much like the book so far; it’s full of great stuff and it’s very well written.
I have had occasion to question a few of the observations they’ve made; for example I was a bit skeptical about a few of the conclusions they drew in chapter 6 (‘Medical Cost Trajectories and Onset of Age-Associated Diseases’), but this was related to what some would certainly consider to be minor details. In the chapter they describe a model of medical cost trajectories where the post-diagnosis follow-up period is 20 months; this is in my view much too short a follow-up period to draw conclusions about medical cost trajectories in the context of type 2 diabetes, one of the diseases included in the model, which I know because I’m intimately familiar with the literature on that topic; you need to look 7-10 years ahead to get a proper sense of how this variable develops over time – and it really is highly relevant to include those later years, because if you do not you may miss out on a large proportion of the total cost, given that a substantial proportion of the total cost of diabetes relates to complications which tend to take some years to develop. If your cost analysis is based on a follow-up period as short as that model’s, you may also on a related note draw faulty conclusions about which medical procedures and subsidies are sensible/cost-effective in the setting of these patients, because highly adherent patients may be significantly more expensive in a short-run analysis like this one (they show up to their medical appointments and take their medications…) but much cheaper in the long run (…because they take their medications they don’t go blind or develop kidney failure). But as I say, it’s a minor point – this was one condition out of 20 included in the analysis they present, and if they’d addressed all the things that pedants like me might take issue with, the book would be twice as long and it would likely no longer be readable.
Relatedly, the model they discuss in that chapter is far from unsalvageable; it’s just that one of the components of interest – ‘the difference between post- and pre-diagnosis cost levels associated with an acquired comorbidity’ – in the case of at least one disease is highly unlikely to be correct (given the authors’ interpretation of the variable), because there’s some stuff of relevance which the model does not include. I found the model quite interesting, despite the shortcomings, and the results were definitely surprising. (No, the above does not in my opinion count as an example of coverage of a ‘specific model […] in detail’. Or maybe it does, but I included no equations. On reflection I probably can’t promise much more than that, sometimes the details are interesting…)
Anyway, below I’ve added some quotes from the first few chapters of the book and a few remarks along the way.
“The genetics of aging, longevity, and mortality has become the subject of intensive analyses […]. However, most estimates of genetic effects on longevity in GWAS have not reached genome-wide statistical significance (after applying the Bonferroni correction for multiple testing) and many findings remain non-replicated. Possible reasons for slow progress in this field include the lack of a biologically-based conceptual framework that would drive development of statistical models and methods for genetic analyses of data [here I was reminded of Burnham & Anderson’s coverage, in particular their criticism of mindless ‘Let the computer find out’-strategies – the authors of that chapter seem to share their skepticism…], the presence of hidden genetic heterogeneity, the collective influence of many genetic factors (each with small effects), the effects of rare alleles, and epigenetic effects, as well as molecular biological mechanisms regulating cellular functions. […] Decades of studies of candidate genes show that they are not linked to aging-related traits in a straightforward fashion (Finch and Tanzi 1997; Martin 2007). Recent genome-wide association studies (GWAS) have supported this finding by showing that the traits in late life are likely controlled by a relatively large number of common genetic variants […]. Further, GWAS often show that the detected associations are of tiny size (Stranger et al. 2011).”
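As an aside, the ‘genome-wide statistical significance (after applying the Bonferroni correction)’ mentioned in the quote is less mysterious than it may sound; it is just the usual significance level divided by the number of tests performed. A minimal Python sketch (my own illustration, not from the book – the one-million-tests figure is the conventional assumption behind the usual 5×10⁻⁸ GWAS threshold, not a number the authors give):

```python
# Bonferroni correction for multiple testing, GWAS-style.
# With alpha = 0.05 and ~1 million (assumed) independent tests,
# the per-test significance threshold becomes 5e-8.
alpha = 0.05
n_tests = 1_000_000
threshold = alpha / n_tests  # the conventional 'genome-wide significance' level

def genome_wide_significant(p_value: float) -> bool:
    """True if p_value survives the Bonferroni-corrected threshold."""
    return p_value < threshold

print(threshold)
print(genome_wide_significant(3e-9))   # a hit this strong survives correction
print(genome_wide_significant(1e-6))   # 'significant' at 0.05, but not genome-wide
```

This also makes it obvious why so few associations reach genome-wide significance: an effect has to be very strong, or the sample very large, for a p-value to get below 5×10⁻⁸.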
I think this ties in well with what I’ve previously read on these and related topics – see e.g. the second-last paragraph quoted in my coverage of Richard Alexander’s book, or some of the remarks included in Roberts et al. Anyway, moving on:
“It is well known from epidemiology that values of variables describing physiological states at a given age are associated with human morbidity and mortality risks. Much less well known are the facts that not only the values of these variables at a given age, but also characteristics of their dynamic behavior during the life course are also associated with health and survival outcomes. This chapter [chapter 8 in the book, US] shows that, for monotonically changing variables, the value at age 40 (intercept), the rate of change (slope), and the variability of a physiological variable, at ages 40–60, significantly influence both health-span and longevity after age 60. For non-monotonically changing variables, the age at maximum, the maximum value, the rate of decline after reaching the maximum (right slope), and the variability in the variable over the life course may influence health-span and longevity. This indicates that such characteristics can be important targets for preventive measures aiming to postpone onsets of complex diseases and increase longevity.”
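To make the ‘dynamic characteristics’ concrete: the intercept, slope, and variability the authors refer to can be extracted from an individual’s measurement series with a simple least-squares fit before being entered into a hazard model. A stdlib-only Python sketch (my own toy illustration – the function name, the anchoring at age 40, and the glucose-like values are all mine, not the book’s):

```python
import statistics

def trajectory_features(ages, values, anchor_age=40):
    """Fit value = a + b*(age - anchor_age) by ordinary least squares and
    return (intercept at anchor_age, slope per year, residual variability)."""
    x = [a - anchor_age for a in ages]
    n = len(x)
    mx, my = sum(x) / n, sum(values) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, values))
    slope = sxy / sxx
    intercept = my - slope * mx
    # Variability: spread of the measurements around the fitted line.
    residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, values)]
    variability = statistics.pstdev(residuals)
    return intercept, slope, variability

# Hypothetical measurements of a monotonically increasing index at ages 40-60:
ages = [40, 45, 50, 55, 60]
values = [80.0, 84.0, 87.0, 91.0, 95.0]
print(trajectory_features(ages, values))
```

The point of the chapter is then that all three of these numbers – not just the baseline value a Cox regression would use – carry information about subsequent morbidity and mortality risk.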
The chapter from which the quotes in the next two paragraphs are taken was completely filled with data from the Framingham Heart Study, and it was hard for me to know what to include here and what to leave out – so you should probably just consider the stuff I’ve included below as samples of the sort of observations included in that part of the coverage.
“To mediate the influence of internal or external factors on lifespan, physiological variables have to show associations with risks of disease and death at different age intervals, or directly with lifespan. For many physiological variables, such associations have been established in epidemiological studies. These include body mass index (BMI), diastolic blood pressure (DBP), systolic blood pressure (SBP), pulse pressure (PP), blood glucose (BG), serum cholesterol (SCH), hematocrit (H), and ventricular rate (VR). […] the connection between BMI and mortality risk is generally J-shaped […] Although all age patterns of physiological indices are non-monotonic functions of age, blood glucose (BG) and pulse pressure (PP) can be well approximated by monotonically increasing functions for both genders. […] the average values of body mass index (BMI) increase with age (up to age 55 for males and 65 for females), and then decline for both sexes. These values do not change much between ages 50 and 70 for males and between ages 60 and 70 for females. […] Except for blood glucose, all average age trajectories of physiological indices differ between males and females. Statistical analysis confirms the significance of these differences. In particular, after age 35 the female BMI increases faster than that of males. […] [When comparing women with less than or equal to 11 years of education [‘LE’] to women with 12 or more years of education [HE]:] The average values of BG for both groups are about the same until age 45. Then the BG curve for the LE females becomes higher than that of the HE females until age 85 where the curves intersect. […] The average values of BMI in the LE group are substantially higher than those among the HE group over the entire age interval. […] The average values of BG for the HE and LE males are very similar […] However, the differences between groups are much smaller than for females.”
They also in the chapter compared individuals with short life-spans [‘SL’, died before the age of 75] and those with long life-spans [‘LL’, 100 longest-living individuals in the relevant sample] to see if the variables/trajectories looked different. They did, for example: “trajectories for the LL females are substantially different from those for the SL females in all eight indices. Specifically, the average values of BG are higher and increase faster in the SL females. The entire age trajectory of BMI for the LL females is shifted to the right […] The average values of DBP [diastolic blood pressure, US] among the SL females are higher […] A particularly notable observation is the shift of the entire age trajectory of BMI for the LL males and females to the right (towards an older age), as compared with the SL group, and achieving its maximum at a later age. Such a pattern is markedly different from that for healthy and unhealthy individuals. The latter is mostly characterized by the higher values of BMI for the unhealthy people, while it has similar ages at maximum for both the healthy and unhealthy groups. […] Physiological aging changes usually develop in the presence of other factors affecting physiological dynamics and morbidity/mortality risks. Among these other factors are year of birth, gender, education, income, occupation, smoking, and alcohol use. An important limitation of most longitudinal studies is the lack of information regarding external disturbances affecting individuals in their day-to-day life.”
I incidentally noted while I was reading that chapter that a relevant variable ‘lurking in the shadows’ in the context of the male and female BMI trajectories might be changing smoking habits over time; I have not looked at US data on this topic, but I do know that the smoking patterns of Danish males and females during the latter half of the last century were markedly different and changed really quite dramatically in just a few decades; a lot more males than females smoked in the 1960s, whereas the proportions of male and female smokers today are much more similar, because a lot of males have given up smoking (I refer Danish readers to this blog post which I wrote some years ago on these topics). The authors of the chapter incidentally do look a little at data on smokers and they observe that smokers’ BMI is lower than non-smokers’ (not surprising), and that the smokers’ BMI curve (displaying the relationship between BMI and age) grows at a slower rate than the BMI curve of non-smokers (that this was to be expected is perhaps less clear, at least to me – the authors don’t interpret these specific numbers, they just report them).
“To better address the challenge of “healthy aging” and to reduce economic burdens of aging-related diseases, key factors driving the onset and progression of diseases in older adults must be identified and evaluated. An identification of disease-specific age patterns with sufficient precision requires large databases that include various age-specific population groups. Collections of such datasets are costly and require long periods of time. That is why few studies have investigated disease-specific age patterns among older U.S. adults and there is limited knowledge of factors impacting these patterns. […] Information collected in U.S. Medicare Files of Service Use (MFSU) for the entire Medicare-eligible population of older U.S. adults can serve as an example of observational administrative data that can be used for analysis of disease-specific age patterns. […] In this chapter, we focus on a series of epidemiologic and biodemographic characteristics that can be studied using MFSU.”
“Two datasets capable of generating national level estimates for older U.S. adults are the Surveillance, Epidemiology, and End Results (SEER) Registry data linked to MFSU (SEER-M) and the National Long Term Care Survey (NLTCS), also linked to MFSU (NLTCS-M). […] The SEER-M data are the primary dataset analyzed in this chapter. The expanded SEER registry covers approximately 26 % of the U.S. population. In total, the Medicare records for 2,154,598 individuals are available in SEER-M […] For the majority of persons, we have continuous records of Medicare services use from 1991 (or from the time the person reached age 65 after 1990) to his/her death. […] The NLTCS-M data contain two of the six waves of the NLTCS: namely, the cohorts of years 1994 and 1999. […] In total, 34,077 individuals were followed-up between 1994 and 1999. These individuals were given the detailed NLTCS interview […] which has information on risk factors. More than 200 variables were selected”
In short, these data sets are very large, and contain a lot of information. Here are some results/data:
“Among studied diseases, incidence rates of Alzheimer’s disease, stroke, and heart failure increased with age, while the rates of lung and breast cancers, angina pectoris, diabetes, asthma, emphysema, arthritis, and goiter became lower at advanced ages. […] Several types of age-patterns of disease incidence could be described. The first was a monotonic increase until age 85–95, with a subsequent slowing down, leveling off, and decline at age 100. This pattern was observed for myocardial infarction, stroke, heart failure, ulcer, and Alzheimer’s disease. The second type had an earlier-age maximum and a more symmetric shape (i.e., an inverted U-shape) which was observed for lung and colon cancers, Parkinson’s disease, and renal failure. The majority of diseases (e.g., prostate cancer, asthma, and diabetes mellitus among them) demonstrated a third shape: a monotonic decline with age or a decline after a short period of increased rates. […] The occurrence of age-patterns with a maximum and, especially, with a monotonic decline contradicts the hypothesis that the risk of geriatric diseases correlates with an accumulation of adverse health events […]. Two processes could be operative in the generation of such shapes. First, they could be attributed to the effect of selection […] when frail individuals do not survive to advanced ages. This approach is popular in cancer modeling […] The second explanation could be related to the possibility of under-diagnosis of certain chronic diseases at advanced ages (due to both less pronounced disease symptoms and infrequent doctor’s office visits); however, that possibility cannot be assessed with the available data […this is because the data sets are based on Medicare claims – US]”
“The most detailed U.S. data on cancer incidence come from the SEER Registry […] about 60 % of malignancies are diagnosed in persons aged 65+ years old […] In the U.S., the estimated percent of cancer patients alive after being diagnosed with cancer (in 2008, by current age) was 13 % for those aged 65–69, 25 % for ages 70–79, and 22 % for ages 80+ years old (compared with 40 % of those aged younger than 65 years old) […] Diabetes affects about 21 % of the U.S. population aged 65+ years old (McDonald et al. 2009). However, while more is known about the prevalence of diabetes, the incidence of this disease among older adults is less studied. […] [In multiple previous studies] the incidence rates of diabetes decreased with age for both males and females. In the present study, we find similar patterns […] The prevalence of asthma among the U.S. population aged 65+ years old in the mid-2000s was as high as 7 % […] older patients are more likely to be underdiagnosed, untreated, and hospitalized due to asthma than individuals younger than age 65 […] asthma incidence rates have been shown to decrease with age […] This trend of declining asthma incidence with age is in agreement with our results.”
“The prevalence and incidence of Alzheimer’s disease increase exponentially with age, with the most notable rise occurring through the seventh and eight decades of life (Reitz et al. 2011). […] whereas dementia incidence continues to increase beyond age 85, the rate of increase slows down [which] suggests that dementia diagnosed at advanced ages might be related not to the aging process per se, but associated with age-related risk factors […] Approximately 1–2 % of the population aged 65+ and up to 3–5 % aged 85+ years old suffer from Parkinson’s disease […] There are few studies of Parkinson’s disease incidence, especially in the oldest old, and its age patterns at advanced ages remain controversial”.
“One disadvantage of large administrative databases is that certain factors can produce systematic over/underestimation of the number of diagnosed diseases or of identification of the age at disease onset. One reason for such uncertainties is an incorrect date of disease onset. Other sources are latent disenrollment and the effects of study design. […] the date of onset of a certain chronic disease is a quantity which is not defined as precisely as mortality. This uncertainty makes difficult the construction of a unified definition of the date of onset appropriate for population studies.”
“[W]e investigated the phenomenon of multimorbidity in the U.S. elderly population by analyzing mutual dependence in disease risks, i.e., we calculated disease risks for individuals with specific pre-existing conditions […]. In total, 420 pairs of diseases were analyzed. […] For each pair, we calculated age patterns of unconditional incidence rates of the diseases, conditional rates of the second (later manifested) disease for individuals after onset of the first (earlier manifested) disease, and the hazard ratio of development of the subsequent disease in the presence (or not) of the first disease. […] three groups of interrelations were identified: (i) diseases whose risk became much higher when patients had a certain pre-existing (earlier diagnosed) disease; (ii) diseases whose risk became lower than in the general population when patients had certain pre-existing conditions […] and (iii) diseases for which “two-tail” effects were observed: i.e., when the effects are significant for both orders of disease precedence; both effects can be direct (either one of the diseases from a disease pair increases the risk of the other disease), inverse (either one of the diseases from a disease pair decreases the risk of the other disease), or controversial (one disease increases the risk of the other, but the other disease decreases the risk of the first disease from the disease pair). In general, the majority of disease pairs with increased risk of the later diagnosed disease in both orders of precedence were those in which both the pre-existing and later occurring diseases were cancers, and also when both diseases were of the same organ. […] Generally, the effect of dependence between risks of two diseases diminishes with advancing age. […] Identifying mutual relationships in age-associated disease risks is extremely important since they indicate that development of […] diseases may involve common biological mechanisms.”
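The basic logic of these pairwise analyses is easy to illustrate: compare the risk of the later-manifested disease among people with and without the earlier-manifested one. A toy Python sketch with invented counts (my own crude, unadjusted relative-risk illustration – not the age-specific conditional incidence rates and hazard ratios the authors actually compute from the Medicare data):

```python
def relative_risk(cases_exposed, at_risk_exposed,
                  cases_unexposed, at_risk_unexposed):
    """Risk of the later disease among those with a pre-existing disease,
    divided by the risk among those without it (crude, unadjusted)."""
    risk_exposed = cases_exposed / at_risk_exposed
    risk_unexposed = cases_unexposed / at_risk_unexposed
    return risk_exposed / risk_unexposed

# Invented illustration: 30 of 500 people with disease A later develop
# disease B, versus 40 of 2000 people without disease A.
rr = relative_risk(30, 500, 40, 2000)
print(rr)
```

An RR above 1 corresponds to the authors’ group (i), an RR below 1 to group (ii), and the ‘two-tail’ group (iii) is what you get when you compute this quantity for both orders of disease precedence and both come out significant.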
“in population cohorts, trends in prevalence result from combinations of trends in incidence, population at risk, recovery, and patients’ survival rates. Trends in the rates for one disease also may depend on trends in concurrent diseases, e.g., increasing survival from CHD contributes to an increase in the cancer incidence rate if the individuals who survived were initially susceptible to both diseases.”
Here’s the first post about the book. I finished it a while ago but I recently realized I had not completed my intended coverage of the book here on the blog back then, and as some of the book’s material sort-of-kind-of relates to material encountered in a book I’m currently reading (Biodemography of Aging) I decided I might as well finish my coverage of the book now in order to review some things I might have forgot in the meantime, by providing coverage here of some of the material covered in the second half of the book. It’s a nice book with some interesting observations, but as I also pointed out in my first post it is definitely not an easy read. Below I have included some observations from the book’s second half.
“The aged lung is characterised by airspace enlargement similar to, but not identical with acquired emphysema. Such tissue damage is detected even in non-smokers above 50 years of age as the septa of the lung alveoli are destroyed and the enlarged alveolar structures result in a decreased surface for gas exchange […] Additional problems are that surfactant production decreases with age, increasing the effort needed to expand the lungs during inhalation in the already reduced thoracic cavity volume where the weakened muscles are unable to thoroughly ventilate. […] As ageing is associated with respiratory muscle strength reduction, coughing becomes difficult making it progressively challenging to eliminate inhaled particles, pollens, microbes, etc. Additionally, ciliary beat frequency (CBF) slows down with age impairing the lungs’ first line of defence: mucociliary clearance, as the cilia can no longer repel invading microorganisms and particles. Consequently e.g. bacteria can more easily colonise the airways leading to infections that are frequent in the pulmonary tract of the older adult.”
“With age there are dramatic changes in neutrophil function, including reduced chemotaxis, phagocytosis and bactericidal mechanisms […] reduced bactericidal function will predispose to infection but the reduced chemotaxis also has consequences for lung tissue as this results in increased tissue bystander damage from neutrophil elastases released during migration […] It is currently accepted that alterations in pulmonary PPAR profile, more precisely loss of PPARγ activity, can lead to inflammation, allergy, asthma, COPD, emphysema, fibrosis, and cancer […]. Since it has been reported that PPARγ activity decreases with age, this provides a possible explanation for the increasing incidence of these lung diseases and conditions in older individuals.”
“Age is an important risk factor for cancer and subjects aged over 60 also have a higher risk of comorbidities. Approximately 50 % of neoplasms occur in patients older than 70 years […] a major concern for poor prognosis is with cancer patients over 70–75 years. These patients have a lower functional reserve, a higher risk of toxicity after chemotherapy, and an increased risk of infection and renal complications that lead to a poor quality of life. […] [Whereas] there is a difference in organs with higher cancer incidence in developed versus developing countries [,] incidence increases with ageing almost irrespective of country […] The findings from Surveillance, Epidemiology and End Results Program [SEER – incidentally I likely shall at some point discuss this one in much more detail, as the aforementioned biodemography textbook covers this data in a lot of detail… – US] show that almost a third of all cancers are diagnosed after the age of 75 years and 70 % of cancer-related deaths occur after the age of 65 years. […] The traditional clinical trial focus is on younger and healthier patients, i.e. with few or no co-morbidities. These restrictions have resulted in a lack of data about the optimal treatment for older patients and a poor evidence base for therapeutic decisions. […] In the older patient, neutropenia, anemia, mucositis, cardiomyopathy and neuropathy — the toxic effects of chemotherapy — are more pronounced […] The correction of comorbidities and malnutrition can lead to greater safety in the prescription of chemotherapy […] Immunosenescence is a general classification for changes occurring in the immune system during the ageing process, as the distribution and function of cells involved in innate and adaptive immunity are impaired or remodelled […] Immunosenescence is considered a major contributor to cancer development in aged individuals”.
“Dementia and age-related vision loss are major causes of disability in our ageing population and it is estimated that a third of people aged over 75 are affected. […] age is the largest risk factor for the development of neurodegenerative diseases […] older patients with comorbidities such as atherosclerosis, type II diabetes or those suffering from repeated or chronic systemic bacterial and viral infections show earlier onset and progression of clinical symptoms […] analysis of post-mortem brain tissue from healthy older individuals has provided evidence that the presence of misfolded proteins alone does not correlate with cognitive decline and dementia, implying that additional factors are critical for neural dysfunction. We now know that innate immune genes and life-style contribute to the onset and progression of age-related neuronal dysfunction, suggesting that chronic activation of the immune system plays a key role in the underlying mechanisms that lead to irreversible tissue damage in the CNS. […] Collectively these studies provide evidence for a critical role of inflammation in the pathogenesis of a range of neurodegenerative diseases, but the factors that drive or initiate inflammation remain largely elusive.”
“The effect of infection, mimicked experimentally by administration of bacterial lipopolysaccharide (LPS) has revealed that immune to brain communication is a critical component of a host organism’s response to infection and a collection of behavioural and metabolic adaptations are initiated over the course of the infection with the purpose of restricting the spread of a pathogen, optimising conditions for a successful immune response and preventing the spread of infection to other organisms. These behaviours are mediated by an innate immune response and have been termed ‘sickness behaviours’ and include depression, reduced appetite, anhedonia, social withdrawal, reduced locomotor activity, hyperalgesia, reduced motivation, cognitive impairment and reduced memory encoding and recall […]. Metabolic adaptation to infection include fever, altered dietary intake and reduction in the bioavailability of nutrients that may facilitate the growth of a pathogen such as iron and zinc. These behavioural and metabolic adaptions are evolutionary highly conserved and also occur in humans”.
“Sickness behaviour and transient microglial activation are beneficial for individuals with a normal, healthy CNS, but in the ageing or diseased brain the response to peripheral infection can be detrimental and increases the rate of cognitive decline. Aged rodents exhibit exaggerated sickness and prolonged neuroinflammation in response to systemic infection […] Older people who contract a bacterial or viral infection or experience trauma postoperatively, also show exaggerated neuroinflammatory responses and are prone to develop delirium, a condition which results in a severe short term cognitive decline and a long term decline in brain function […] Collectively these studies demonstrate that peripheral inflammation can increase the accumulation of two neuropathological hallmarks of AD, further strengthening the hypothesis that inflammation i[s] involved in the underlying pathology. […] Studies from our own laboratory have shown that AD patients with mild cognitive impairment show a fivefold increased rate of cognitive decline when contracting a systemic urinary tract or respiratory tract infection […] Apart from bacterial infection, chronic viral infections have also been linked to increased incidence of neurodegeneration, including cytomegalovirus (CMV). This virus is ubiquitously distributed in the human population, and along with other age-related diseases such as cardiovascular disease and cancer, has been associated with increased risk of developing vascular dementia and AD [66, 67].”
“Frailty is associated with changes to the immune system, importantly the presence of a pro-inflammatory environment and changes to both the innate and adaptive immune system. Some of these changes have been demonstrated to be present before the clinical features of frailty are apparent suggesting the presence of potentially modifiable mechanistic pathways. To date, exercise programme interventions have shown promise in the reversal of frailty and related physical characteristics, but there is no current evidence for successful pharmacological intervention in frailty. […] In practice, acute illness in a frail person results in a disproportionate change in a frail person’s functional ability when faced with a relatively minor physiological stressor, associated with a prolonged recovery time […] Specialist hospital services such as surgery, hip fractures and oncology have now begun to recognise frailty as an important predictor of mortality and morbidity.”
I should probably mention here that this is another area where there’s an overlap between this book and the biodemography text I’m currently reading; chapter 7 of the latter text is about ‘Indices of Cumulative Deficits’ and covers this kind of stuff in a lot more detail than does this one, including e.g. detailed coverage of relevant statistical properties of one such index. Anyway, back to the coverage:
“Population based studies have demonstrated that the incidence of infection and subsequent mortality is higher in populations of frail people. […] The prevalence of pneumonia in a nursing home population is 30 times higher than the general population [39, 40]. […] The limited data available demonstrates that frailty is associated with a state of chronic inflammation. There is also evidence that inflammageing predates a diagnosis of frailty suggesting a causative role. […] A small number of studies have demonstrated a dysregulation of the innate immune system in frailty. Frail adults have raised white cell and neutrophil count. […] High white cell count can predict frailty at a ten year follow up. […] A recent meta-analysis and four individual systematic reviews have found beneficial evidence of exercise programmes on selected physical and functional ability […] exercise interventions may have no positive effect in operationally defined frail individuals. […] To date there is no clear evidence that pharmacological interventions improve or ameliorate frailty.”
“[A]s we get older the time and intensity at which we exercise is severely reduced. Physical inactivity now accounts for a considerable proportion of age-related disease and mortality. […] Regular exercise has been shown to improve neutrophil microbicidal functions which reduce the risk of infectious disease. Exercise participation is also associated with increased immune cell telomere length, and may be related to improved vaccine responses. The anti-inflammatory effect of regular exercise and negative energy balance is evident by reduced inflammatory immune cell signatures and lower inflammatory cytokine concentrations. […] Reduced physical activity is associated with a positive energy balance leading to increased adiposity and subsequently systemic inflammation. […] Elevated neutrophil counts accompany increased inflammation with age and the increased ratio of neutrophils to lymphocytes is associated with many age-related diseases including cancer. Compared to more active individuals, less active and overweight individuals have higher circulating neutrophil counts. […] little is known about the intensity, duration and type of exercise which can provide benefits to neutrophil function. […] it remains unclear whether exercise and physical activity can override the effects of NK cell dysfunction in the old. […] A considerable number of studies have assessed the effects of acute and chronic exercise on measures of T-cell immunesenescence including T cell subsets, phenotype, proliferation, cytokine production, chemotaxis, and co-stimulatory capacity. […] Taken together exercise appears to promote an anti-inflammatory response which is mediated by altered adipocyte function and improved energy metabolism leading to suppression of pro-inflammatory cytokine production in immune cells.”
I liked the book. Below I have added some sample observations from it, as well as a collection of links to various topics covered or mentioned in it.
“To make a variety of rocks, there needs to be a variety of minerals. The Earth has shown a capacity for making an increasing variety of minerals throughout its existence. Life has helped in this [but] [e]ven a dead planet […] can evolve a fine array of minerals and rocks. This is done simply by stretching out the composition of the original homogeneous magma. […] Such stretching of composition would have happened as the magma ocean of the earliest […] Earth cooled and began to solidify at the surface, forming the first crust of this new planet — and the starting point, one might say, of our planet’s rock cycle. When magma cools sufficiently to start to solidify, the first crystals that form do not have the same composition as the overall magma. In a magma of ‘primordial Earth’ type, the first common mineral to form was probably olivine, an iron-and-magnesium-rich silicate. This is a dense mineral, and so it tends to sink. As a consequence the remaining magma becomes richer in elements such as calcium and aluminium. From this, at temperatures of around 1,000°C, the mineral plagioclase feldspar would then crystallize, in a calcium-rich variety termed anorthite. This mineral, being significantly less dense than olivine, would tend to rise to the top of the cooling magma. On the Moon, itself cooling and solidifying after its fiery birth, layers of anorthite crystals several kilometres thick built up as the rock — anorthosite — of that body’s primordial crust. This anorthosite now forms the Moon’s ancient highlands, subsequently pulverized by countless meteorite impacts. This rock type can be found on Earth, too, particularly within ancient terrains. […] Was the Earth’s first surface rock also anorthosite? 
Probably—but we do not know for sure, as the Earth, a thoroughly active planet throughout its existence, has consumed and obliterated nearly all of the crust that formed in the first several hundred million years of its existence, in a mysterious interval of time that we now call the Hadean Eon. […] The earliest rocks that we know of date from the succeeding Archean Eon.”
“Where plates are pulled apart, then pressure is released at depth, above the ever-opening tectonic rift, for instance beneath the mid-ocean ridge that runs down the centre of the Atlantic Ocean. The pressure release from this crustal stretching triggers decompression melting in the rocks at depth. These deep rocks — peridotite — are dense, being rich in the iron- and magnesium-bearing mineral olivine. Heated to the point at which melting just begins, so that the melt fraction makes up only a few percentage points of the total, those melt droplets are enriched in silica and aluminium relative to the original peridotite. The melt will have a composition such that, when it cools and crystallizes, it will largely be made up of crystals of plagioclase feldspar together with pyroxene. Add a little more silica and quartz begins to appear. With less silica, olivine crystallizes instead of quartz.
The resulting rock is basalt. If there was anything like a universal rock of rocky planet surfaces, it is basalt. On Earth it makes up almost all of the ocean floor bedrock — in other words, the ocean crust, that is, the surface layer, some 10 km thick. Below, there is a boundary called the Mohorovičić Discontinuity (or ‘Moho’ for short)[…]. The Moho separates the crust from the dense peridotitic mantle rock that makes up the bulk of the lithosphere. […] Basalt makes up most of the surface of Venus, Mercury, and Mars […]. On the Moon, the ‘mare’ (‘seas’) are not of water but of basalt. Basalt, or something like it, will certainly be present in large amounts on the surfaces of rocky exoplanets, once we are able to bring them into close enough focus to work out their geology. […] At any one time, ocean floor basalts are the most common rock type on our planet’s surface. But any individual piece of ocean floor is, geologically, only temporary. It is the fate of almost all ocean crust — islands, plateaux, and all — to be destroyed within ocean trenches, sliding down into the Earth along subduction zones, to be recycled within the mantle. From that destruction […] there arise the rocks that make up the most durable component of the Earth’s surface: the continents.”
“Basaltic magmas are a common starting point for many other kinds of igneous rocks, through the mechanism of fractional crystallization […]. Remove the early-formed crystals from the melt, and the remaining melt will evolve chemically, usually in the direction of increasing proportions of silica and aluminium, and decreasing amounts of iron and magnesium. These magmas will therefore produce intermediate rocks such as andesites and diorites in the finely and coarsely crystalline varieties, respectively; and then more evolved silica-rich rocks such as rhyolites (fine), microgranites (medium), and granites (coarse). […] Granites themselves can evolve a little further, especially at the late stages of crystallization of large bodies of granite magma. The final magmas are often water-rich ones that contain many of the incompatible elements (such as thorium, uranium, and lithium), so called because they are difficult to fit within the molecular frameworks of the common igneous minerals. From these final ‘sweated-out’ magmas there can crystallize a coarsely crystalline rock known as pegmatite — famous because it contains a wide variety of minerals (of the ~4,500 minerals officially recognized on Earth […] some 500 have been recognized in pegmatites).”
“The less oxygen there is [at the area of deposition], the more the organic matter is preserved into the rock record, and it is where the seawater itself, by the sea floor, has little or no oxygen that some of the great carbon stores form. As animals cannot live in these conditions, organic-rich mud can accumulate quietly and undisturbed, layer by layer, here and there entombing the skeleton of some larger planktonic organism that has fallen in from the sunlit, oxygenated waters high above. It is these kinds of sediments that […] generate[d] the oil and gas that currently power our civilization. […] If sedimentary layers have not been buried too deeply, they can remain as soft muds or loose sands for millions of years — sometimes even for hundreds of millions of years. However, most buried sedimentary layers, sooner or later, harden and turn into rock, under the combined effects of increasing heat and pressure (as they become buried ever deeper under subsequent layers of sediment) and of changes in chemical environment. […] As rocks become buried ever deeper, they become progressively changed. At some stage, they begin to change their character and depart from the condition of sedimentary strata. At this point, usually beginning several kilometres below the surface, buried igneous rocks begin to transform too. The process of metamorphism has started, and may progress until those original strata become quite unrecognizable.”
“Frozen water is a mineral, and this mineral can make up a rock, both on Earth and, very commonly, on distant planets, moons, and comets […]. On Earth today, there are large deposits of ice strata on the cold polar regions of Antarctica and Greenland, with smaller amounts in mountain glaciers […]. These ice strata, the compressed remains of annual snowfalls, have simply piled up, one above the other, over time; on Antarctica, they reach almost 5 km in thickness and at their base are about a million years old. […] The ice cannot pile up for ever, however: as the pressure builds up it begins to behave plastically and to slowly flow downslope, eventually melting or, on reaching the sea, breaking off as icebergs. As the ice mass moves, it scrapes away at the underlying rock and soil, shearing these together to form a mixed deposit of mud, sand, pebbles, and characteristic striated (ice-scratched) cobbles and boulders […] termed a glacial till. Glacial tills, if found in the ancient rock record (where, hardened, they are referred to as tillites), are a sure clue to the former presence of ice.”
“At first approximation, the mantle is made of solid rock and is not […] a seething mass of magma that the fragile crust threatens to founder into. This solidity is maintained despite temperatures that, towards the base of the mantle, are of the order of 3,000°C — temperatures that would very easily melt rock at the surface. It is the immense pressures deep in the Earth, increasing more or less in step with temperature, that keep the mantle rock in solid form. In more detail, the solid rock of the mantle may include greater or lesser (but usually lesser) amounts of melted material, which locally can gather to produce magma chambers […] Nevertheless, the mantle rock is not solid in the sense that we might imagine at the surface: it is mobile, and much of it is slowly moving plastically, taking long journeys that, over many millions of years, may encompass the entire thickness of the mantle (the kinds of speeds estimated are comparable to those at which tectonic plates move, of a few centimetres a year). These are the movements that drive plate tectonics and that, in turn, are driven by the variation in temperature (and therefore density) from the contact region with the hot core, to the cooler regions of the upper mantle.”
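The quoted figures are easy to check with trivial arithmetic. A back-of-envelope sketch (the ~2,900 km mantle thickness is a standard value I’m supplying, not from the book; the 3 cm/year speed is taken from the quote’s “a few centimetres a year”):

```python
# How long would a mantle parcel moving at "a few centimetres a year"
# take to traverse the whole mantle?
# Assumed values: mantle thickness ~2,900 km (standard estimate),
# speed 3 cm/year (from the quote).

mantle_thickness_m = 2.9e6   # ~2,900 km
speed_m_per_year = 0.03      # 3 cm/year

transit_time_years = mantle_thickness_m / speed_m_per_year
print(f"{transit_time_years / 1e6:.0f} million years")
```

This comes out at roughly a hundred million years for a full traverse, consistent with the quote’s “many millions of years”.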
“The outer core will not transmit certain types of seismic waves, which indicates that it is molten. […] Even farther into the interior, at the heart of the Earth, this metal magma becomes rock once more, albeit a rock that is mostly crystalline iron and nickel. However, it was not always so. The core used to be liquid throughout and then, some time ago, it began to crystallize into iron-nickel rock. Quite when this happened has been widely debated, with estimates ranging from over three billion years ago to about half a billion years ago. The inner core has now grown to something like 2,400 km across. Even allowing for the huge spans of geological time involved, this implies estimated rates of solidification that are impressive in real time — of some thousands of tons of molten metal crystallizing into solid form per second.”
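The “thousands of tons per second” claim is also easy to sanity-check. A hedged back-of-envelope calculation, using an inner-core radius of ~1,200 km (half the quoted 2,400 km diameter) and a mean density of ~12,900 kg/m³ (a standard estimate I’m supplying, not from the book), spread over the quoted age range of half a billion to three billion years:

```python
import math

# Sanity check on the quoted solidification rate of the inner core.
# Assumptions: inner-core radius ~1,200 km (half the quoted diameter);
# mean density ~12,900 kg/m^3 (standard estimate, not from the book).

radius_m = 1.2e6
density_kg_per_m3 = 1.29e4
seconds_per_year = 3.156e7

# Mass of a solid sphere of that radius and density.
mass_kg = (4 / 3) * math.pi * radius_m**3 * density_kg_per_m3

# Average crystallization rate over each end of the quoted age range.
for age_in_gyr in (0.5, 3.0):
    age_s = age_in_gyr * 1e9 * seconds_per_year
    rate_tonnes_per_s = mass_kg / age_s / 1000
    print(f"{age_in_gyr} Gyr: {rate_tonnes_per_s:,.0f} tonnes/s")
```

This gives roughly 6,000 tonnes per second for the youngest age estimate and roughly 1,000 for the oldest, so “thousands of tons of molten metal crystallizing into solid form per second” does check out.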
“Rocks are made out of minerals, and those minerals are not a constant of the universe. A little like biological organisms, they have evolved and diversified through time. As the minerals have evolved, so have the rocks that they make up. […] The pattern of evolution of minerals was vividly outlined by Robert Hazen and his colleagues in what is now a classic paper published in 2008. They noted that in the depths of outer space, interstellar dust, as analysed by the astronomers’ spectroscopes, seems to be built of only about a dozen minerals […] Their component elements were forged in supernova explosions, and these minerals condensed among the matter and radiation that streamed out from these stellar outbursts. […] the number of minerals on the new Earth [shortly after formation was] about 500 (while the smaller, largely dry Moon has about 350). Plate tectonics began, with its attendant processes of subduction, mountain building, and metamorphism. The number of minerals rose to about 1,500 on a planet that may still have been biologically dead. […] The origin and spread of life at first did little to increase the number of mineral species, but once oxygen-producing photosynthesis started, then there was a great leap in mineral diversity as, for each mineral, various forms of oxide and hydroxide could crystallize. After this step, about two and a half billion years ago, there were over 4,000 minerals, most of them vanishingly rare. Since then, there may have been a slight increase in their numbers, associated with such events as the appearance and radiation of metazoan animals and plants […] Humans have begun to modify the chemistry and mineralogy of the Earth’s surface, and this has included the manufacture of many new types of mineral. […] Human-made minerals are produced in laboratories and factories around the world, with many new forms appearing every year. 
[…] Materials sciences databases now being compiled suggest that more than 50,000 solid, inorganic, crystalline species have been created in the laboratory.”
Some links of interest:
Rock. Presolar grains. Silicate minerals. Silicon–oxygen tetrahedron. Quartz. Olivine. Feldspar. Mica. Jean-Baptiste Biot. Meteoritics. Achondrite/Chondrite/Chondrule. Carbonaceous chondrite. Iron–nickel alloy. Widmanstätten pattern. Giant-impact hypothesis (in the book this is not framed as a hypothesis nor is it explicitly referred to as the GIH; it’s just taken to be the correct account of what happened back then – US). Alfred Wegener. Arthur Holmes. Plate tectonics. Lithosphere. Asthenosphere. Fractional Melting (couldn’t find a wiki link about this exact topic; the MIT link is quite technical – sorry). Hotspot (geology). Fractional crystallization. Metastability. Devitrification. Porphyry (geology). Phenocryst. Thin section. Neptunism. Pyroclastic flow. Ignimbrite. Pumice. Igneous rock. Sedimentary rock. Weathering. Slab (geology). Clay minerals. Conglomerate (geology). Breccia. Aeolian processes. Hummocky cross-stratification. Ralph Alger Bagnold. Montmorillonite. Limestone. Ooid. Carbonate platform. Turbidite. Desert varnish. Evaporite. Law of Superposition. Stratigraphy. Pressure solution. Compaction (geology). Recrystallization (geology). Cleavage (geology). Phyllite. Aluminosilicate. Gneiss. Rock cycle. Ultramafic rock. Serpentinite. Pressure-Temperature-time paths. Hornfels. Impactite. Ophiolite. Xenolith. Kimberlite. Transition zone (Earth). Mantle convection. Mantle plume. Core–mantle boundary. Post-perovskite. Earth’s inner core. Inge Lehmann. Stromatolites. Banded iron formations. Microbial mat. Quorum sensing. Cambrian explosion. Bioturbation. Biostratigraphy. Coral reef. Radiolaria. Carbonate compensation depth. Paleosol. Bone bed. Coprolite. Allan Hills 84001. Tharsis. Pedestal crater. Mineraloid. Concrete.
I have quoted from the book before, but I decided that this book deserves to be blogged in more detail. I’m close to finishing the book at this point (it’s definitely taken longer than it should have), and I’ll probably give it 5 stars on goodreads; I might also add it to my list of favourite books on the site. In this post I’ve added some quotes and ideas from the book, and a few comments. Before going any further I should note that it’s frankly impossible to cover anywhere near all the ideas covered in the book here on the blog, so if you’re even remotely interested in these kinds of things you really should pick up a copy of the book and read all of it.
“I believe that something crucial has been missing from all of the great debates of history, among philosophers, politicians, theologians, and thinkers from other and diverse backgrounds, on the issues of morality, ethics, justice, right and wrong. […] those who have tried to analyze morality have failed to treat the human traits that underlie moral behavior as outcomes of evolution […] for many conflicts of interest, compromises and enforceable contracts represent the only real solutions. Appeals to morality, I will argue, are simply the invoking of such compromises and contracts in particular ways. […] the process of natural selection that has given rise to all forms of life, including humans, operates such that success has always been relative. One consequence is that organisms resulting from the long-term cumulative effects of selection are expected to resist efforts to reveal their interests fully to others, and also efforts to place limits on their striving or to decide for them when their interests are being “fully” satisfied. These are all reasons why we should expect no “terminus” – ever – to debates on moral and ethical issues.” (these comments I also included in the quotes post to which I link at the beginning, but I thought it was worth including them in this post as well even so – US).
“I am convinced that biology can never offer […] easy or direct answers to the questions of what is right and wrong. I explicitly reject the attitude that whatever biology tells us is so is also what ought to be (David Hume’s so-called “naturalistic fallacy”) […] there are within biology no magic solutions to moral problems. […] Knowledge of the human background in organic evolution can [however] provide a deeper self-understanding by an increasing proportion of the world’s population; self-understanding that I believe can contribute to answering the serious questions of social living.”
“If there had been no recent discoveries in biology that provided new ways of looking at the concept of moral systems, then I would be optimistic indeed to believe that I could say much that is new. But there have been such discoveries. […] The central point in these writings [Hamilton, Williams, Trivers, Cavalli-Sforza, Feldman, Dawkins, Wilson, etc. – US] […] is that natural selection has apparently been maximizing the survival by reproduction of genes, as they have been defined by evolutionists, and that, with respect to the activities of individuals, this includes effects on copies of their genes, even copies located in other individuals. In other words, we are evidently evolved not only to aid the genetic materials in our own bodies, by creating and assisting descendants, but also to assist, by nepotism, copies of our genes that reside in collateral (nondescendant) relatives. […] ethics, morality, human conduct, and the human psyche are to be understood only if societies are seen as collections of individuals seeking their own self-interests […] In some respects these ideas run contrary to what people have believed and been taught about morality and human values: I suspect that nearly all humans believe it is a normal part of the functioning of every human individual now and then to assist someone else in the realization of that person’s own interests to the actual net expense of those of the altruist. What [the above-mentioned writings] tells us is that, despite our intuitions, there is not a shred of evidence to support this view of beneficence, and a great deal of convincing theory suggests that any such view will eventually be judged false. This implies that we will have to start all over again to describe and understand ourselves, in terms alien to our intuitions […] It is […] a goal of this book to contribute to this redescription and new understanding, and especially to discuss why our intuitions should have misinformed us.”
“Social behavior evolves as a succession of ploys and counterploys, and for humans these ploys are used, not only among individuals within social groups, but between and among small and large groups of up to hundreds of millions of individuals. The value of an evolutionary approach to human sociality is thus not to determine the limits of our actions so that we can abide by them. Rather, it is to examine our life strategies so that we can change them when we wish, as a result of understanding them. […] my use of the word biology in no way implies that moral systems have some kind of explicit genetic background, are genetically determined, or cannot be altered by adjusting the social environment. […] I mean simply to suggest that if we wish to understand those aspects of our behavior commonly regarded as involving morality or ethics, it will help to reconsider our behavior as a product of evolution by natural selection. The principal reason for this suggestion is that natural selection operates according to general principles which make its effects highly predictive, even with respect to traits and circumstances that have not yet been analyzed […] I am interested […] not in determining what is moral and immoral, in the sense of what people ought to be doing, but in elucidating the natural history of ethics and morality – in discovering how and why humans initiated and developed the ideas we have about right and wrong.”
I should perhaps mention here that sort-of-kind-of related stuff is covered in Aureli et al. (see e.g. this link), and that some parts of that book will probably make you understand Alexander’s ideas a lot better even if perhaps he didn’t read those specific authors – mainly because it gets a lot easier to imagine the sort of mechanisms which might be at play here if you’ve read this sort of literature. Here’s one relevant quote from the coverage of that book, which also deals with the question Alexander discusses above (and in a lot more detail throughout his book), namely where our morality comes from:
“we make two fundamental assertions regarding the evolution of morality: (1) there are specific types of behavior demonstrated by both human and nonhuman primates that hint at a shared evolutionary background to morality; and (2) there are theoretical and actual connections between morality and conflict resolution in both nonhuman primates and human development. […] the transition from nonmoral or premoral to moral is more gradual than commonly assumed. No magic point appears in either evolutionary history or human development at which morality suddenly comes into existence. In both early childhood and in animals closely related to us, we can recognize behaviors (and, in the case of children, judgments) that are essential building blocks of the morality of the human adult. […] the decision making and emotions underlying moral judgments are generated within the individual rather than being simply imposed by society. They are a product of evolution, an integrated part of the human genetic makeup, that makes the child construct a moral perspective through interactions with other members of its species. […] Much research has shown that children acquire morality through a social-cognitive process; children make connections between acts and consequences. Through a gradual process, children develop concepts of justice, fairness, and equality, and they apply these concepts to concrete everyday situations […] we assert that emotions such as empathy and sympathy provide an experiential basis by which children construct moral judgments. Emotional reactions from others, such as distress or crying, provide experiential information that children use to judge whether an act is right or wrong […] when a child hits another child, a crying response provides emotional information about the nature of the act, and this information enables the child, in part, to determine whether and why the transgression is wrong. 
Therefore, recognizing signs of distress in another person may be a basic requirement of the moral judgment process. The fact that responses to distress in another have been documented both in infancy and in the nonhuman primate literature provides initial support for the idea that these types of moral-like experiences are common to children and nonhuman primates.”
Alexander’s coverage is quite different from that found in Aureli et al., but some of the contributors to the latter work deal with similar questions to the ones in which he’s interested, using approaches not employed in Alexander’s book – so this is another place to look if you’re interested in these topics. Ullmann-Margalit’s The Emergence of Norms is also worth mentioning. Part of the reason why I mention these books here is incidentally that they’re not talked about in Alexander’s coverage (for very natural reasons, I should add, in the case of the former book at least; Natural Conflict Resolution was published more than a decade after Alexander wrote his book…).
“In the hierarchy of explanatory principles governing the traits of living organisms, evolutionary reductionism – the development of principles from the evolutionary process – tends to subsume all other kinds. Proximate-cause reductionism (or reduction by dissection) sometimes advances our understanding of the whole phenomena. […] When evolutionary reduction becomes trivial in the study of life it is for a reason different from incompleteness; rather, it is because the breadth of the generalization distances it too significantly from the particular problem that may be at hand. […] the greatest weakness of reduction by generalization is not that it is likely to be trivial but that errors are probable through unjustified leaps from hypothesis to conclusion […] Critics such as Gould and Lewontin […] do not discuss the facts that (a) all students of human behavior (not just those who take evolution into account) run the risk of leaping unwarrantedly from hypothesis to conclusion and (b) just-so stories were no less prevalent and hypothesis-testing no more prevalent in studies of human behavior before evolutionary biologists began to participate. […] I believe that failure by biologists and others to distinguish proximate- or partial-cause and evolutionary- or ultimate-cause reductionism […] is in some part responsible for the current chasm between the social and the biological sciences and the resistance to so-called biological approaches to understanding humans. […] Both approaches are essential to progress in biology and the social sciences, and it would be helpful if their relationship, and that of their respective practitioners, were not seen as adversarial.”
“Humans are not accustomed to dealing with their own strategies of life as if they had been tuned by natural selection. […] People are not generally aware of what their lifetimes have been evolved to accomplish, and, even if they are roughly aware of this, they do not easily accept that their everyday activities are in any sense means to that end. […] The theory of lifetimes most widely accepted among biologists is that individuals have evolved to maximize the likelihood of survival of not themselves, but their genes, and that they do this by reproducing and tending in various ways offspring and other carriers of their own genes […] In this theory, survival of the individual – and its growth, development, and learning – are proximate mechanisms of reproductive success, which is a proximate mechanism of genic survival. Only the genes have evolved to survive. […] To say that we are evolved to serve the interests of our genes in no way suggests that we are obliged to serve them. […] Evolution is surely most deterministic for those still unaware of it. If this argument is correct, it may be the first to carry us from is to ought, i.e., if we desire to be the conscious masters of our own fates, and if conscious effort in that direction is the most likely vehicle of survival and happiness, then we ought to study evolution.”
“People are sometimes comfortable with the notion that certain activities can be labeled as “purely cultural” because they also believe that there are behaviors that can be labeled “purely genetic.” Neither is true: the environment contributes to the expression of all behaviors, and culture is best described as part of the environment.”
“Happiness and its anticipation are […] proximate mechanisms that lead us to perform and repeat acts that in the environments of history, at least, would have led to greater reproductive success.”
“The remarkable difference between the patterns of senescence in semelparous (one-time breeding) and iteroparous (repeat-breeding) organisms is probably one of the best simple demonstrations of the central significance of reproduction in the individual’s lifetime. How, otherwise, could we explain the fact that those who reproduce but once, like salmon and soybeans, tend to die suddenly right afterward, while those like ourselves who have residual reproductive possibilities after the initial reproductive act decline or senesce gradually? […] once an organism has completed all possibilities of reproducing (through both offspring production and assistance, and helping other relatives), then selection can no longer affect its survival: any physiological or other breakdown that destroys it may persist and even spread if it is genetically linked to a trait that is expressed earlier and is reproductively beneficial. […] selection continually works against senescence, but is just never able to defeat it entirely. […] senescence leads to a generalized deterioration rather than one owing to a single effect or a few effects […] In the course of working against senescence, selection will tend to remove, one by one, the most frequent sources of mortality as a result of senescence. Whenever a single cause of mortality, such as a particular malfunction of any vital organ, becomes the predominant cause of mortality, then selection will more effectively reduce the significance of that particular defect (meaning those who lack it will outreproduce) until some other achieves greater relative significance. […] the result will be that all organs and systems will tend to deteriorate together. 
[…] The point is that as we age, and as senescence proceeds, large numbers of potential sources of mortality tend to lurk ever more malevolently just “below the surface,” so that, unfortunately, the odds are very high against any dramatic lengthening of the maximum human lifetime through technology. […] natural selection maximizes the likelihood of genetic survival, which is incompatible with eliminating senescence. […] Senescence, and the finiteness of lifetimes, have evolved as incidental effects […] Organisms compete for genetic survival and the winners (in evolutionary terms) are those who sacrifice their phenotypes (selves) earlier when this results in greater reproduction.”
“altruism appears to diminish with decreasing degree of relatedness in sexual species whenever it is studied – in humans as well as nonhuman species”
I recently read Nick Middleton’s short publication on this topic and decided it was worth blogging here. I gave the publication 3 stars on goodreads; you can read my goodreads review of the book here.
In this post I’ll quote a bit from the book and add some details I thought were interesting.
“None of [the] approaches to desert definition is foolproof. All have their advantages and drawbacks. However, each approach delivers […] global map[s] of deserts and semi-deserts that [are] broadly similar […] Roughly, deserts cover about one-quarter of our planet’s land area, and semi-deserts another quarter.”
“High temperatures and a paucity of rainfall are two aspects of climate that many people routinely associate with deserts […] However, desert climates also embrace other extremes. Many arid zones experience freezing temperatures and snowfall is commonplace, particularly in those situated outside the tropics. […] For much of the time, desert skies are cloud-free, meaning deserts receive larger amounts of sunshine than any other natural environment. […] Most of the water vapour in the world’s atmosphere is supplied by evaporation from the oceans, so the more remote a location is from this source the more likely it is that any moisture in the air will have been lost by precipitation before it reaches continental interiors. The deserts of Central Asia illustrate this principle well: most of the moisture in the air is lost before it reaches the heart of the continent […] A clear distinction can be made between deserts in continental interiors and those on their coastal margins when it comes to the range of temperatures experienced. Oceans tend to exert a moderating influence on temperature, reducing extremes, so the greatest ranges of temperature are found far from the sea while coastal deserts experience a much more limited range. […] Freezing temperatures occur particularly in the mid-latitude deserts, but by no means exclusively so. […] snowfall occurs at the Algerian oasis towns of Ouargla and Ghardaia, in the northern Sahara, as often as once every 10 years on average.”
“[One] characteristic of rainfall in deserts is its variability from year to year which in many respects makes annual average statistics seem like nonsense. A very arid desert area may go for several years with no rain at all […]. It may then receive a whole ‘average’ year’s rainfall in just one storm […] Rainfall in deserts is also typically very variable in space as well as time. Hence, desert rainfall is frequently described as being ‘spotty’. This spottiness occurs because desert storms are often convective, raining in a relatively small area, perhaps just a few kilometres across. […] Climates can vary over a wide range of spatial scales […] Changes in temperature, wind, relative humidity, and other elements of climate can be detected over short distances, and this variability on a small scale creates distinctive climates in small areas. These are microclimates, different in some way from the conditions prevailing over the surrounding area as a whole. At the smallest scale, the shade given by an individual plant can be described as a microclimate. Over larger distances, the surface temperature of the sand in a dune will frequently be significantly different from a nearby dry salt lake because of the different properties of the two types of surface. […] Microclimates are important because they exert a critical control over all sorts of phenomena. These include areas suitable for plant and animal communities to develop, the ways in which rocks are broken down, and the speed at which these processes occur.”
“The level of temperature prevailing when precipitation occurs is important for an area’s water balance and its degree of aridity. A rainy season that occurs during the warm summer months, when evaporation is greatest, makes for a climate that is more arid than if precipitation is distributed more evenly throughout the year.”
“The extremely arid conditions of today[‘s Sahara Desert] have prevailed for only a few thousand years. There is lots of evidence to suggest that the Sahara was lush, nearly completely covered with grasses and shrubs, with many lakes that supported antelope, giraffe, elephant, hippopotamus, crocodile, and human populations in regions that today have almost no measurable precipitation. This ‘African Humid Period’ began around 15,000 years ago and came to an end around 10,000 years later. […] Globally, at the height of the most recent glacial period some 18,000 years ago, almost 50% of the land area between 30°N and 30°S was covered by two vast belts of sand, often called ‘sand seas’. Today, about 10% of this area is covered by sand seas. […] Around one-third of the Arabian subcontinent is covered by sandy deserts”.
“Much of the drainage in deserts is internal, as in Central Asia. Their rivers never reach the sea, but take water to interior basins. […] Salt is a common constituent of desert soils. The generally low levels of rainfall means that salts are seldom washed away through soils and therefore tend to accumulate in certain parts of the landscape. Large amounts of common salt (sodium chloride, or halite), which is very soluble in water, are found in some hyper-arid deserts.”
“Many deserts are very rich in rare and unique species thanks to their evolution in relative geographical isolation. Many of these plants and animals have adapted in remarkable ways to deal with the aridity and extremes of temperature. Indeed, some of these adaptations contribute to the apparent lifelessness of deserts simply because a good way to avoid some of the harsh conditions is to hide. Some small creatures spend hot days burrowed beneath the soil surface. In a similar way, certain desert plants spend most of the year and much of their lives dormant, as seeds waiting for the right conditions, brought on by a burst of rainfall. Given that desert rainstorms can be very variable in time and in space, many activities in the desert ecosystem occur only sporadically, as pulses of activity driven by the occasional cloudburst. […] The general scarcity of water is the most important, though by no means the only, environmental challenge faced by desert organisms. Limited supplies of food and nutrients, friable soils, high levels of solar radiation, high daytime temperatures, and the large diurnal temperature range are other challenges posed by desert conditions. These conditions are not always distributed evenly across a desert landscape, and the existence of more benign microenvironments is particularly important for desert plants and animals. Patches of terrain that are more biologically productive than their surroundings occur in even the most arid desert, geographical patterns caused by many factors, not only the simple availability of water.”
A small side note here: The book includes brief coverage of things like crassulacean acid metabolism and related topics covered in much more detail in Beer et al. I’m not going to go into that stuff here, as it was in my opinion covered much better in the latter book (some people might disagree, but they would at least have to admit that the coverage in Beer et al. is far more comprehensive than Middleton’s is in this book). There are quite a few other topics from the book which I have not covered in this post, but I mention this one in particular partly because I thought it was a good example underscoring how this book really is just a very brief introduction; you can write book chapters, if not books, about some of the topics Middleton devotes a couple of paragraphs to, which is only to be expected given the nature and range of the publication’s coverage.
Plants aren’t ‘smart’ given any conventional definition of the word, but as I’ve talked about before here on the blog (e.g. here) when you look closer at the way they grow and ‘behave’ over the very long term, some of the things they do are actually at the very least ‘not really all that stupid’:
“The seeds of annuals germinate only when enough water is available to support the entire life cycle. Germinating after just a brief shower could be fatal, so mechanisms have developed for seeds to respond solely when sufficient water is available. Seeds germinate only when their protective seed coats have been broken down, allowing water to enter the seed and growth to begin. The seed coats of many desert species contain chemicals that repel water. These compounds are washed away by large amounts of water, but a short shower will not generate enough to remove all the water-repelling chemicals. Other species have very thick seed coats that are gradually worn away physically by abrasion as moving water knocks the seeds against stones and pebbles.”
What about animals? One thing I learned from this publication is that it turns out that being a mammal will, all else equal, definitely not give you a competitive edge in a hot desert environment:
“The need to conserve water is important to all creatures that live in hot deserts, but for mammals it is particularly crucial. In all environments mammals typically maintain a core body temperature of around 37–38°C, and those inhabiting most non-desert regions face the challenge of keeping their body temperature above the temperature of their environmental surrounds. In hot deserts, where environmental temperatures substantially exceed the body temperature on a regular basis, mammals face the reverse challenge. The only mechanism that will move heat out of an animal’s body against a temperature gradient is the evaporation of water, so maintenance of the core body temperature requires use of the resource that is by definition scarce in drylands.”
Humans? What about them?
“Certain aspects of a traditional mobile lifestyle have changed significantly for some groups of nomadic peoples. Herders in the Gobi desert in Mongolia pursue a way of life that in many ways has changed little since the times of the greatest of all nomadic leaders, Chinggis Khan, 750 years ago. They herd the same animals, eat the same foods, wear the same clothes, and still live in round felt-covered tents, traditional dwellings known in Mongolian as gers. Yet many gers now have a set of solar panels on the roof that powers a car battery, allowing an electric light to extend the day inside the tent. Some also have a television set.” (these remarks incidentally somehow reminded me of this brilliant Gary Larson cartoon)
“People have constructed dams to manage water resources in arid regions for thousands of years. One of the oldest was the Marib dam in Yemen, built about 3,000 years ago. Although this structure was designed to control water from flash floods, rather than for storage, the diverted flow was used to irrigate cropland. […] Although groundwater has been exploited for desert farmland using hand-dug underground channels for a very long time, the discovery of reserves of groundwater much deeper below some deserts has led to agricultural use on much larger scales in recent times. These deep groundwater reserves tend to be non-renewable, having built up during previous climatic periods of greater rainfall. Use of this fossil water has in many areas resulted in its rapid depletion.”
“Significant human impacts are thought to have a very long history in some deserts. One possible explanation for the paucity of rainfall in the interior of Australia is that early humans severely modified the landscape through their use of fire. Aboriginal people have used fire extensively in Central Australia for more than 20,000 years, particularly as an aid to hunting, but also for many other purposes, from clearing passages to producing smoke signals and promoting the growth of preferred plants. The theory suggests that regular burning converted the semi-arid zone’s mosaic of trees, shrubs, and grassland into the desert scrub seen today. This gradual change in the vegetation could have resulted in less moisture from plants reaching the atmosphere and hence the long-term desertification of the continent.” (I had never heard about this theory before, and so I of course have no idea if it’s correct or not – but it’s an interesting idea).
“It has been said that if a drug has no side effects, then it is unlikely to work. Drug therapy labours under the fundamental problem that usually every single cell in the body has to be treated just to exert a beneficial effect on a small group of cells, perhaps in one tissue. Although drug-targeting technology is improving rapidly, most of us who take an oral dose are still faced with the problem that the vast majority of our cells are being unnecessarily exposed to an agent that at best will have no effect, but at worst will exert many unwanted effects. Essentially, all drug treatment is really a compromise between positive and negative effects in the patient. […] This book is intended to provide a basic grounding in human drug metabolism, although it is useful if the reader has some knowledge of biochemistry, physiology and pharmacology from other sources. In addition, a qualitative understanding of chemistry can illuminate many facets of drug metabolism and toxicity. Although chemistry can be intimidating, I have tried to make the chemical aspects of drug metabolism as user-friendly as possible.”
I’m currently reading this book. To say that it is ‘useful if the reader has some knowledge’ of the topics mentioned is putting it mildly; I’d say it’s mandatory – my advice would be to stay far away from this book if you know nothing of pharmacology, biochemistry, and physiology. I know enough to follow most of the coverage, at least in terms of the big-picture stuff, but some of the biochemistry details I frankly have been unable to follow; I could probably understand all of it if I were willing to look up every unfamiliar word and concept, but I’m not willing to spend the time to do that. It should also be mentioned that the book is very well written, in the sense that it is perfectly possible to follow the basic outline of what’s going on without understanding every detail, so I don’t feel that the coverage in any way discourages me from reading the book the way I am – the significance of that hydrogen bond in the diagram will probably become apparent to you later, and even if it doesn’t you’ll probably manage.
In terms of general remarks about the book, a key point worth making early on is that the book is very dense and contains a lot of interesting stuff. I find it hard at the moment to justify devoting time to blogging, but if that were not the case I’d probably feel tempted to cover this book in a lot of detail, with multiple posts delving into specific fascinating aspects of the coverage. Despite this being a book where I don’t really understand everything that’s going on all the time, I’m definitely at a five-star rating at the moment, and I’ve read close to two-thirds of it at this point.
A few quotes:
“The process of drug development weeds out agents [or at least tries to weed out agents… – US] that have seriously negative actions and usually releases onto the market drugs that may have a profile of side effects, but these are relatively minor within a set concentration range where the drug’s pharmacological action is most effective. This range, or ‘therapeutic window’ is rather variable, but it will give some indication of the most ‘efficient’ drug concentration. This effectively means the most beneficial pharmacodynamic effects for the minimum side effects.”
If the dose is too low, you have a case of drug failure: the drug doesn’t work. If the dose is too high, you experience toxicity. Both outcomes are problematic, but they manifest in different ways: drug failure is usually a gradual process (“Therapeutic drug failure is usually a gradual process, where the time frame may be days before the problem is detected”), whereas toxicity may be of very rapid onset (hours).
“To some extent, every patient has a unique therapeutic window for each drug they take, as there is such huge variation in our pharmacodynamic drug sensitivities. This book is concerned with what systems influence how long a drug stays in our bodies. […] [The therapeutic index] has been defined as the ratio between the lethal or toxic dose and the effective dose that shows the normal range of pharmacological effect. In practice, a drug […] is listed as having a narrow TI if there is less than a twofold difference between the lethal and effective doses, or a twofold difference in the minimum toxic and minimum effective concentrations. Back in the 1960s, many drugs in common use had narrow TIs […] that could be toxic at relatively low levels. Over the last 30 years, the drug industry has aimed to replace this type of drug with agents with much higher TIs. […] However, there are many drugs […] which remain in use that have narrow or relatively narrow TIs”.
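The arithmetic behind the narrow-TI classification quoted above is simple. A minimal sketch, using purely hypothetical dose values rather than real drug data:

```python
# Therapeutic index (TI): ratio of the toxic (or lethal) dose to the
# effective dose. Per the definition quoted above, a drug is "narrow TI"
# if there is less than a twofold difference between the two.
# All doses below are hypothetical, for illustration only.

def therapeutic_index(toxic_dose, effective_dose):
    """Return the TI as the toxic/effective dose ratio."""
    return toxic_dose / effective_dose

def is_narrow_ti(toxic_dose, effective_dose):
    """Narrow TI: less than a twofold gap between toxic and effective dose."""
    return therapeutic_index(toxic_dose, effective_dose) < 2.0

# A drug effective at 100 mg and toxic at 150 mg has TI = 1.5 (narrow);
# one toxic only at 500 mg has TI = 5 (comfortable margin).
print(therapeutic_index(150, 100))  # 1.5
print(is_narrow_ti(150, 100))       # True
print(is_narrow_ti(500, 100))       # False
```

The point of the book’s historical remark follows directly from the ratio: for a TI of 1.5, ordinary between-patient variation in metabolism can easily push a standard dose across the toxic threshold.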
“metabolites are usually removed from the cell faster than the parent drug”
“The kidneys are mostly responsible for […] removal, known as elimination. The kidneys cannot filter large chemical entities like proteins, but they can remove the majority of smaller chemicals, depending on size, charge and water solubility. […] the kidney is a lipophilic (oil-loving) organ […] So the kidney is not efficient at eliminating lipophilic chemicals. One of the major roles of the liver is to use biotransforming enzymes to ensure that lipophilic agents are made water soluble enough to be cleared by the kidney. So the liver has an essential but indirect role in clearance, in that it must extract the drug from the circulation, biotransform (metabolize) it, then return the water-soluble product to the blood for the kidney to remove. The liver can also actively clear or physically remove its metabolic products from the circulation by excreting them in bile, where they travel through the gut to be eliminated in faeces.”
“Cell structures eventually settled around the format we see now, a largely aqueous cytoplasm bounded by a predominantly lipophilic protective membrane. Although the membrane does prevent entry and exit of many potential toxins, it is no barrier to other lipophilic molecules. If these molecules are highly lipophilic, they will passively diffuse into and become trapped in the membrane. If they are slightly less lipophilic, they will pass through it into the organism. So aside from ‘housekeeping’ enzyme systems, some enzymatic protection would have been needed against invading molecules from the immediate environment. […] the majority of living organisms including ourselves now possess some form of effective biotransformational enzyme capability which can detoxify and eliminate most hydrocarbons and related molecules. This capability has been effectively ‘stolen’ from bacteria over millions of years. The main biotransformational protection against aromatic hydrocarbons is a series of enzymes so named as they absorb UV light at 450 nm when reduced and bound to carbon monoxide. These specialized enzymes were termed cytochrome P450 monooxygenases or sometimes oxido-reductases. They are often referred to as ‘CYPs’ or ‘P450s’. […] All the CYPs accomplish their functions using the same basic mechanism, but each enzyme is adapted to dismantle particular groups of chemical structures. It is a testament to millions of years of ‘research and development’ in the evolution of CYPs, that perhaps 50,000 or more man-made chemical entities enter the environment for the first time every year and the vast majority can be oxidized by at least one form of CYP. […] To date, nearly 60 human CYPs have been identified […] It is likely that hundreds more CYP-mediated endogenous functions remain to be discovered. […] CYPs belong to a group of enzymes which all have similar core structures and modes of operation.
[…] Their importance to us is underlined by their key role in more than 75 per cent of all drug biotransformations.”
I would add a note here that a very large proportion of this book is, perhaps unsurprisingly in view of the above, about those CYPs; how they work, what exactly it is that they do, which different kinds there are and what roles they play in the metabolism of specific drugs and chemical compounds, variation in gene expression across individuals and across populations in the context of specific CYPs and how such variation may relate to differences in drug metabolism, etc.
“Drugs often parallel endogenous molecules in their oil solubility, although many are considerably more lipophilic than these molecules. Generally, drugs, and xenobiotic compounds, have to be fairly oil soluble or they would not be absorbed from the GI tract. Once absorbed these molecules could change both the structure and function of living systems and their oil solubility makes these molecules rather ‘elusive’, in the sense that they can enter and leave cells according to their concentration and are temporarily beyond the control of the living system. This problem is compounded by the difficulty encountered by living systems in the removal of lipophilic molecules. […] even after the kidney removes them from blood by filtering them, the lipophilicity of drugs, toxins and endogenous steroids means that as soon as they enter the collecting tubules, they can immediately return to the tissue of the tubules, as this is more oil-rich than the aqueous urine. So the majority of lipophilic molecules can be filtered dozens of times and only low levels are actually excreted. In addition, very high lipophilicity molecules like some insecticides and fire retardants might never leave adipose tissue at all […] This means that for lipophilic agents:
* the more lipophilic they are, the more these agents are trapped in membranes, affecting fluidity and causing disruption at high levels;
* if they are hormones, they can exert an irreversible effect on tissues that is outside normal physiological control;
* if they are toxic, they can potentially damage endogenous structures;
* if they are drugs, they are also free to cause any pharmacological effect for a considerable period of time.”
“A sculptor was once asked how he would go about sculpting an elephant from a block of stone. His response was ‘knock off all the bits that did not look like an elephant’. Similarly, drug-metabolizing CYPs have one main imperative, to make molecules more water-soluble. Every aspect of their structure and function, their position in the liver, their initial selection of substrate, binding, substrate orientation and catalytic cycling, is intended to accomplish this deceptively simple aim.”
“The use of therapeutic drugs is a constant battle to pharmacologically influence a system that is actively undermining the drugs’ effects by removing them as fast as possible. The processes of oxidative and conjugative metabolism, in concert with efflux pump systems, act to clear a variety of chemicals from the body into the urine or faeces, in the most rapid and efficient manner. The systems that manage these processes also sense and detect increases in certain lipophilic substances and this boosts the metabolic capability to respond to the increased load.”
“The aim of drug therapy is to provide a stable, predictable pharmacological effect that can be adjusted to the needs of the individual patient for as long as is deemed clinically necessary. The physician may start drug therapy at a dosage that is decided on the basis of previous clinical experience and standard recommendations. At some point, the dosage might be increased if the desired effects were not forthcoming, or reduced if side effects are intolerable to the patient. This adjustment of dosage can be much easier in drugs that have a directly measurable response, such as a change in clotting time. However, in some drugs, this adjustment process can take longer to achieve than others, as the pharmacological effect, once attained, is gradually lost over a period of days. The dosage must be escalated to regain the original effect, sometimes several times, until the patient is stable on the dosage. In some cases, after some weeks of taking the drug, the initial pharmacological effect seen in the first few days now requires up to eight times the initial dosage to reproduce. It thus takes a significant period of time to create a stable pharmacological effect on a constant dose. In the same patients, if another drug is added to the regimen, it may not have any effect at all. In other patients, sudden withdrawal of perhaps only one drug in a regimen might lead to a gradual but serious intensification of the other drug’s side effects.”
“acceleration of drug metabolism as a response to the presence of certain drugs is known as ‘enzyme induction’ and drugs which cause it are often referred to as ‘inducers’ of drug metabolism. The process can be defined as: ‘An adaptive increase in the metabolizing capacity of a tissue’; this means that a drug or chemical is capable of inducing an increase in the transcription and translation of specific CYP isoforms, which are often (although not always) the most efficient metabolizers of that chemical. […] A new drug is generally regarded as an inducer if it produces a change in drug clearance which is equal to or greater than 40 per cent of an established potent inducer, usually taken as rifampicin. […] inducers are usually (but not always) lipophilic, contain aromatic groups and consequently, if they were not oxidized, they would be very persistent in living systems. CYP enzymes have evolved to oxidize this very type of agent; indeed, an elaborate and very effective system has also evolved to modulate the degree of CYP oxidation of these agents, so it is clear that living systems regard inducers as a particular threat among lipophilic agents in general. The process of induction is dynamic and closely controlled. The adaptive increase is constantly matched to the level of exposure to the drug, from very minor almost undetectable increases in CYP protein synthesis, all the way to a maximum enzyme synthesis that leads to the clearance of grammes of a chemical per day. Once exposure to the drug or toxin ceases, the adaptive increase in metabolizing capacity will subside gradually to the previous low level, usually within a time period of a few days. This varies according to the individual and the drug. […] it is clear there is almost limitless capacity for variation in terms of the basic pre-set responsiveness of the system as well as its susceptibility to different inducers and groups of inducers. 
Indeed, induction in different patients has been observed to differ by more than 20-fold.”
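The ‘40 per cent of rifampicin’ rule of thumb quoted above is just a comparison of clearance changes against a reference inducer. A minimal sketch of that check, with entirely hypothetical clearance values:

```python
# Rule-of-thumb inducer classification, as described in the quote above:
# a new drug counts as an inducer if it changes clearance of a probe drug
# by at least 40% of the change produced by a potent reference inducer,
# usually rifampicin. Clearance values below are hypothetical.

def clearance_change(baseline, induced):
    """Fractional increase in probe-drug clearance after co-administration."""
    return (induced - baseline) / baseline

def is_inducer(baseline, with_candidate, with_rifampicin, threshold=0.40):
    """Flag the candidate as an inducer if its effect reaches `threshold`
    times rifampicin's effect on the same probe drug."""
    ref_effect = clearance_change(baseline, with_rifampicin)
    cand_effect = clearance_change(baseline, with_candidate)
    return cand_effect >= threshold * ref_effect

# Suppose rifampicin triples probe clearance (10 -> 30 L/h, a +200% change).
# A candidate raising it to 19 L/h (+90%) exceeds 40% of that (80%), so it
# is flagged; one raising it to 15 L/h (+50%) is not.
print(is_inducer(10, 19, 30))  # True
print(is_inducer(10, 15, 30))  # False
```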
This one I added mostly because I didn’t know this and thought it was worth including here because it would make it easier for me to remember later (i.e., not because I figured other people might find this interesting):
“CYP2E1 is very sensitive to diet, even becoming induced by high fat/low carbohydrate intakes. Surprisingly, starvation and diabetes also promote CYP2E1 functionality. Insulin levels fall during diet restriction, starvation and in diabetes and the formation of functional 2E1 is suppressed by insulin, so these conditions promote the increase of 2E1 metabolic capability. One of the consequences of diabetes and starvation is the major shift from glucose to fatty acid/triglyceride oxidation, of which some of the by-products are small, hydrophilic and potentially toxic ‘ketone bodies’. These agents can cause a CNS intoxicating effect which is seen in diabetics who are very hypoglycaemic: they may appear ‘drunk’ and their breath will smell as if they had been drinking.”
A more general related point which may be of more interest to other people reading along here is that this is far from the only CYP which is sensitive to diet, and that diet-mediated effects may be very significant. I may go into this in more detail in a later post. Note that grapefruit is a major potentially problematic dietary component in many drug contexts:
“Although patients have been heroically consuming grapefruit juice for their health for decades, it took until the late 1980s before its effects on drug clearance were noted and several more years before it was realized that there could be a major problem with drug interactions […] The most noteworthy feature of the effect of grapefruit juice is its potency from a single ‘dose’ which coincides with a typical single breakfast intake of the juice, say around 200–300 ml. Studies with CYP3A substrates such as midazolam have shown that it can take up to three days before the effects wear off, which is consistent with the synthesis of new enzyme. […] there are a number of drugs that are subject to a very high gut wall component to their ‘first-pass’ metabolism […]; these include midazolam, terfenadine, lovastatin, simvastatin and astemizole. Their gut CYP clearance is so high that if the juice inhibits it, the concentration reaching the liver can increase six- or sevenfold. If the liver normally only extracts a relatively minor proportion of the parent agent, then plasma levels of such drugs increase dramatically towards toxicity […] the inhibitor effects of grapefruit juice in high first-pass drugs is particularly clinically relevant as it can occur after one exposure of the juice.”
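The ‘six- or sevenfold’ figure in the quote follows from simple arithmetic on the fraction of an oral dose that escapes gut-wall metabolism. A minimal sketch, where the escape fractions (not taken from the book) are hypothetical illustrative values:

```python
# Why inhibiting gut-wall CYP3A can raise exposure six- or sevenfold:
# the amount of drug reaching the liver scales with the fraction of the
# absorbed dose that escapes gut-wall ("first-pass") metabolism.
# The fractions below are hypothetical, chosen to reproduce the
# six- to sevenfold figure mentioned in the quote.

def fraction_reaching_liver(f_absorbed, f_escaping_gut_wall):
    """Fraction of an oral dose reaching the portal circulation."""
    return f_absorbed * f_escaping_gut_wall

# A high-gut-extraction drug: suppose only ~15% normally escapes gut CYP3A.
normal = fraction_reaching_liver(1.0, 0.15)
# With gut CYP3A inhibited by grapefruit juice, suppose ~100% escapes.
inhibited = fraction_reaching_liver(1.0, 1.0)

print(round(inhibited / normal, 1))  # 6.7 -- i.e. ~six- to sevenfold more drug
```

Note that this only covers the gut-wall component; as the quote says, the effect on plasma levels is dramatic precisely for drugs where the liver extracts little of what subsequently arrives.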
It may sound funny, but there are two pages in this book about the effects of grapefruit juice, including a list of ‘Drugs that should not be taken with grapefruit juice’. Grapefruit is a well-known so-called mechanism-based inhibitor, and it may impact the metabolism of a lot of different drugs. It is far from the only known dietary component which may cause problems in a drug metabolism context – for example “cranberry juice has been known for some time as an inhibitor of warfarin metabolism”. On a general note the author remarks that: “There are hundreds of fruit preparations available that have been specifically marketed for their […] antioxidant capacities, such as purple grape, pomegranate, blueberry and acai juices. […] As they all contain large numbers of diverse phenolics and are pharmacologically active, they should be consumed with some caution during drug therapy.”
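The ‘six- or sevenfold’ figure in the quote above can be sanity-checked with a little first-pass arithmetic: oral bioavailability is roughly the product of the fractions escaping gut-wall and hepatic extraction, so knocking out a large gut-wall component multiplies systemic exposure. The extraction ratios below are my own illustrative assumptions, not numbers from the book:

```python
def oral_bioavailability(gut_extraction: float, hepatic_extraction: float) -> float:
    """Fraction of an oral dose reaching the systemic circulation,
    approximated as the product of the fractions escaping gut-wall
    and hepatic first-pass extraction."""
    return (1 - gut_extraction) * (1 - hepatic_extraction)

# Hypothetical midazolam-like drug: very high gut-wall extraction (85%),
# modest hepatic extraction (20%) -- illustrative values only.
f_normal = oral_bioavailability(0.85, 0.2)      # 0.15 * 0.8 = 0.12
# Grapefruit juice inhibits most of the gut CYP3A (gut extraction falls to 10%):
f_inhibited = oral_bioavailability(0.10, 0.2)   # 0.90 * 0.8 = 0.72
fold_increase = f_inhibited / f_normal          # sixfold rise in exposure
```

With these (assumed) numbers the systemic exposure rises sixfold even though the liver’s share of the extraction is unchanged, which is the mechanism the quote describes.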
i. Some new words I’ve encountered (not all of them are from vocabulary.com, but many of them are):
Uxoricide, persnickety, logy, philoprogenitive, impassive, hagiography, gunwale, flounce, vivify, pelage, irredentism, pertinacity, callipygous, valetudinarian, recrudesce, adjuration, epistolary, dandle, picaresque, humdinger, newel, lightsome, lunette, inflect, misoneism, cormorant, immanence, parvenu, sconce, acquisitiveness, lingual, Macaronic, divot, mettlesome, logomachy, raffish, marginalia, omnifarious, tatter, licit.
ii. A lecture:
I got annoyed a few times by the fact that you can’t tell where he’s pointing when he’s talking about the slides, which makes the lecture harder to follow than it ought to be, but it’s still an interesting lecture.
iii. Facts about Dihydrogen Monoxide. Includes coverage of important neglected topics such as ‘What is the link between Dihydrogen Monoxide and school violence?’ After reading the article, I am frankly outraged that this stuff’s still legal!
iv. Some wikipedia links of interest:
“Steganography […] is the practice of concealing a file, message, image, or video within another file, message, image, or video. The word steganography combines the Greek words steganos (στεγανός), meaning “covered, concealed, or protected”, and graphein (γράφειν) meaning “writing”. […] Generally, the hidden messages appear to be (or be part of) something else: images, articles, shopping lists, or some other cover text. For example, the hidden message may be in invisible ink between the visible lines of a private letter. Some implementations of steganography that lack a shared secret are forms of security through obscurity, whereas key-dependent steganographic schemes adhere to Kerckhoffs’s principle.
The advantage of steganography over cryptography alone is that the intended secret message does not attract attention to itself as an object of scrutiny. Plainly visible encrypted messages—no matter how unbreakable—arouse interest, and may in themselves be incriminating in countries where encryption is illegal. Thus, whereas cryptography is the practice of protecting the contents of a message alone, steganography is concerned with concealing the fact that a secret message is being sent, as well as concealing the contents of the message.”
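The core idea in the quote — a hidden message riding inside an innocuous-looking carrier — can be sketched in a few lines. Below is a toy least-significant-bit scheme over raw bytes; this is a simplification of my own (real implementations typically hide data in image pixels and involve a shared key, per the Kerckhoffs’s-principle point above):

```python
def embed_lsb(cover: bytes, message: bytes) -> bytes:
    """Hide `message` in the least significant bits of `cover`,
    one message bit per cover byte (most significant bit first)."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract_lsb(stego: bytes, n_bytes: int) -> bytes:
    """Recover `n_bytes` of hidden message from the low bits of `stego`."""
    bits = [b & 1 for b in stego[:8 * n_bytes]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )
```

Each cover byte changes by at most 1, so the carrier looks essentially unmodified — which is exactly the ‘does not attract attention to itself’ advantage the quote describes.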
H. H. Holmes. A really nice guy.
“Herman Webster Mudgett (May 16, 1861 – May 7, 1896), better known under the name of Dr. Henry Howard Holmes or more commonly just H. H. Holmes, was one of the first documented serial killers in the modern sense of the term. In Chicago, at the time of the 1893 World’s Columbian Exposition, Holmes opened a hotel which he had designed and built for himself specifically with murder in mind, and which was the location of many of his murders. While he confessed to 27 murders, of which nine were confirmed, his actual body count could be up to 200. He brought an unknown number of his victims to his World’s Fair Hotel, located about 3 miles (4.8 km) west of the fair, which was held in Jackson Park. Besides being a serial killer, H. H. Holmes was also a successful con artist and a bigamist. […]
Holmes purchased an empty lot across from the drugstore where he built his three-story, block-long hotel building. Because of its enormous structure, local people dubbed it “The Castle”. The building was 162 feet long and 50 feet wide. […] The ground floor of the Castle contained Holmes’ own relocated drugstore and various shops, while the upper two floors contained his personal office and a labyrinth of rooms with doorways opening to brick walls, oddly-angled hallways, stairways leading to nowhere, doors that could only be opened from the outside and a host of other strange and deceptive constructions. Holmes was constantly firing and hiring different workers during the construction of the Castle, claiming that “they were doing incompetent work.” His actual reason was to ensure that he was the only one who fully understood the design of the building.”
“The Minnesota Starvation Experiment […] was a clinical study performed at the University of Minnesota between November 19, 1944 and December 20, 1945. The investigation was designed to determine the physiological and psychological effects of severe and prolonged dietary restriction and the effectiveness of dietary rehabilitation strategies.
The motivation of the study was twofold: First, to produce a definitive treatise on the subject of human starvation based on a laboratory simulation of severe famine and, second, to use the scientific results produced to guide the Allied relief assistance to famine victims in Europe and Asia at the end of World War II. It was recognized early in 1944 that millions of people were in grave danger of mass famine as a result of the conflict, and information was needed regarding the effects of semi-starvation—and the impact of various rehabilitation strategies—if postwar relief efforts were to be effective.”
“most of the subjects experienced periods of severe emotional distress and depression. There were extreme reactions to the psychological effects during the experiment including self-mutilation (one subject amputated three fingers of his hand with an axe, though the subject was unsure if he had done so intentionally or accidentally). Participants exhibited a preoccupation with food, both during the starvation period and the rehabilitation phase. Sexual interest was drastically reduced, and the volunteers showed signs of social withdrawal and isolation. […] One of the crucial observations of the Minnesota Starvation Experiment […] is that the physical effects of the induced semi-starvation during the study closely approximate the conditions experienced by people with a range of eating disorders such as anorexia nervosa and bulimia nervosa.”
Post-vasectomy pain syndrome. Vasectomy reversal is a risk people probably know about, but this one seems to also be worth being aware of if one is considering having a vasectomy.
Transport in the Soviet Union (‘good article’). A few observations from the article:
“By the mid-1970s, only eight percent of the Soviet population owned a car. […] From 1924 to 1971 the USSR produced 1 million vehicles […] By 1975 only 8 percent of rural households owned a car. […] Growth of motor vehicles had increased by 224 percent in the 1980s, while hardcore surfaced roads only increased by 64 percent. […] By the 1980s Soviet railways had become the most intensively used in the world. Most Soviet citizens did not own private transport, and if they did, it was difficult to drive long distances due to the poor conditions of many roads. […] Road transport played a minor role in the Soviet economy, compared to domestic rail transport or First World road transport. According to historian Martin Crouch, road traffic of goods and passengers combined was only 14 percent of the volume of rail transport. It was only late in its existence that the Soviet authorities put emphasis on road construction and maintenance […] Road transport as a whole lagged far behind that of rail transport; the average distance moved by motor transport in 1982 was 16.4 kilometres (10.2 mi), while the average for railway transport was 930 km per ton and 435 km per ton for water freight. In 1982 there was a threefold increase in investment since 1960 in motor freight transport, and more than a thirtyfold increase since 1940.”
i. “The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data.” (John Tukey)
ii. “Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise.” (-ll-)
iii. “They who can no longer unlearn have lost the power to learn.” (John Lancaster Spalding)
iv. “If there are but few who interest thee, why shouldst thou be disappointed if but few find thee interesting?” (-ll-)
v. “Since the mass of mankind are too ignorant or too indolent to think seriously, if majorities are right it is by accident.” (-ll-)
vi. “As they are the bravest who require no witnesses to their deeds of daring, so they are the best who do right without thinking whether or not it shall be known.” (-ll-)
vii. “Perfection is beyond our reach, but they who earnestly strive to become perfect, acquire excellences and virtues of which the multitude have no conception.” (-ll-)
viii. “We are made ridiculous less by our defects than by the affectation of qualities which are not ours.” (-ll-)
ix. “If thy words are wise, they will not seem so to the foolish: if they are deep the shallow will not appreciate them. Think not highly of thyself, then, when thou art praised by many.” (-ll-)
x. “Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.” (George E. P. Box)
xi. “Intense ultraviolet (UV) radiation from the young Sun acted on the atmosphere to form small amounts of very many gases. Most of these dissolved easily in water, and fell out in rain, making Earth’s surface water rich in carbon compounds. […] the most important chemical of all may have been cyanide (HCN). It would have formed easily in the upper atmosphere from solar radiation and meteorite impact, then dissolved in raindrops. Today it is broken down almost at once by oxygen, but early in Earth’s history it built up at low concentrations in lakes and oceans. Cyanide is a basic building block for more complex organic molecules such as amino acids and nucleic acid bases. Life probably evolved in chemical conditions that would kill us instantly!” (Richard Cowen, History of Life, p.8)
xii. “Dinosaurs dominated land communities for 100 million years, and it was only after dinosaurs disappeared that mammals became dominant. It’s difficult to avoid the suspicion that dinosaurs were in some way competitively superior to mammals and confined them to small body size and ecological insignificance. […] Dinosaurs dominated many guilds in the Cretaceous, including that of large browsers. […] in terms of their reconstructed behavior […] dinosaurs should be compared not with living reptiles, but with living mammals and birds. […] By the end of the Cretaceous there were mammals with varied sets of genes but muted variation in morphology. […] All Mesozoic mammals were small. Mammals with small bodies can play only a limited number of ecological roles, mainly insectivores and omnivores. But when dinosaurs disappeared at the end of the Cretaceous, some of the Paleocene mammals quickly evolved to take over many of their ecological roles” (ibid., pp. 145, 154, 222, 227-228)
xiii. “To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of.” (Ronald Fisher)
xiv. “Ideas are incestuous.” (Howard Raiffa)
xv. “Game theory […] deals only with the way in which ultrasmart, all-knowing people should behave in competitive situations, and has little to say to Mr. X as he confronts the morass of his problem.” (-ll-)
xvi. “One of the principal objects of theoretical research is to find the point of view from which the subject appears in the greatest simplicity.” (Josiah Willard Gibbs)
xvii. “Nothing is as dangerous as an ignorant friend; a wise enemy is to be preferred.” (Jean de La Fontaine)
xviii. “Humility is a virtue all preach, none practice; and yet everybody is content to hear.” (John Selden)
xix. “Few men make themselves masters of the things they write or speak.” (-ll-)
xx. “Wise men say nothing in dangerous times.” (-ll-)
I figured I ought to blog this book at some point, and today I decided to take the time to do it. This is the second book by Darwin I’ve read – for blog content dealing with Darwin’s book The Voyage of the Beagle, see these posts. The two books are somewhat different; Beagle is sort of a travel book written by a scientist who decided to write down his observations during his travels, whereas Origin is a sort of popular-science research treatise – for more details on Beagle, see the posts linked above. If you plan on reading both the way I did, I think you should aim to read them in the order they were written.
I did not rate the book on goodreads because I could not think of a fair way to rate the book; it’s a unique and very important contribution to the history of science, but how do you weigh the other dimensions? I decided not to try. Some of the people reviewing the book on goodreads call the book ‘dry’ or ‘dense’, but I’d say that I found the book quite easy to read compared to quite a few of the other books I’ve been reading this year and it doesn’t actually take that long to read; thus I read a quite substantial proportion of the book during a one day trip to Copenhagen and back. The book can be read by most literate people living in the 21st century – you do not need to know any evolutionary biology to read this book – but that said, how you read the book will to some extent depend upon how much you know about the topics about which Darwin theorizes in his book. I had a conversation with my brother about the book a short while after I’d read it, and I recall noting during that conversation that in my opinion one would probably get more out of reading this book if one has at least some knowledge of geology (for example some knowledge about the history of the theory of continental drift – this book was written long before the theory of plate tectonics was developed), paleontology, Mendel’s laws/genetics/the modern synthesis and modern evolutionary thought, ecology and ethology, etc. Whether or not you actually do ‘get more out of the book’ if you already know some stuff about the topics about which Darwin speaks is perhaps an open question, but I think a case can certainly be made that someone who already knows a bit about evolution and related topics will read this book in a different manner than will someone who knows very little about these topics. 
I should perhaps in this context point out to people new to this blog that even though I hardly consider myself an expert on these sorts of topics, I have nevertheless read quite a bit of stuff about those things in the past – books like this, this, this, this, this, this, this, this, this, this, this, this, this, this, and this one – so I was reading the book perhaps mainly from the vantage point of someone at least somewhat familiar both with many of the basic ideas and with a lot of the refinements of these ideas that people have added to the science of biology since Darwin’s time. One of the things my knowledge of modern biology and related topics had not prepared me for was how moronic some of the ideas of Darwin’s critics were at the time and how stupid some of the implicit alternatives were, and this is actually part of the fun of reading this book; there was a lot of stuff back then about which even many of the people who were presumably held in high regard really had no clue, and even outrageously idiotic ideas were seemingly taken quite seriously by people involved in the debate. I assume that biologists still to this day have to spend quite a bit of time and effort dealing with ignorant idiots (see also this), but back in Darwin’s day these people were presumably to a much greater extent taken seriously even among people in the scientific community, if indeed they were not themselves part of the scientific community.
Darwin was not right about everything and there’s a lot of stuff that modern biologists know which he had no idea about, so naturally some mistaken ideas made their way into Origin as well; for example the idea of the inheritance of acquired characteristics (Lamarckian inheritance) occasionally pops up and is implicitly defended in the book as a credible complement to natural selection, as also noted in Oliver Francis’ afterword to the book. On a general note it seems that Darwin did a better job convincing people about the importance of the concept of evolution than he did convincing people that the relevant mechanism behind evolution was natural selection; at least that’s what’s argued in wiki’s featured article on the history of evolutionary thought (to which I have linked before here on the blog).
Darwin emphasizes more than once in the book that evolution is a very slow process which takes a lot of time (for example: “I do believe that natural selection will always act very slowly, often only at long intervals of time, and generally on only a very few of the inhabitants of the same region at the same time”, p.123), and arguably this is also something about which he is part right/part wrong because the speed with which natural selection ‘makes itself felt’ depends upon a variety of factors, and it can be really quite fast in some contexts (see e.g. this and some of the topics covered in books like this one); though you can appreciate why he held the views he did on that topic.
A big problem confronted by Darwin was that he didn’t know how genes work, so in a sense the whole topic of the ‘mechanics of the whole thing’ – the ‘nuts and bolts’ – was more or less a black box to him (I have included a few quotes which indirectly relate to this problem in my coverage of the book below; as can be inferred from those quotes Darwin wasn’t completely clueless, but he might have benefited greatly from a chat with Gregor Mendel…) – in a way a really interesting thing about the book is how plausible the theory of natural selection is made out to be despite this blatantly obvious (at least to the modern reader) problem. Darwin was incidentally well aware there was a problem; just 6 pages into the first chapter of the book he observes frankly that: “The laws governing inheritance are quite unknown”. Some of the quotes below, e.g. on reciprocal crosses, illustrate that he was sort of scratching the surface, but in the book he never does more than that.
Below I have added some quotes from the book.
“Certainly no clear line of demarcation has as yet been drawn between species and sub-species […]; or, again, between sub-species and well-marked varieties, or between lesser varieties and individual differences. These differences blend into each other in an insensible series; and a series impresses the mind with the idea of an actual passage. […] I look at individual differences, though of small interest to the systematist, as of high importance […], as being the first step towards such slight varieties as are barely thought worth recording in works on natural history. And I look at varieties which are in any degree more distinct and permanent, as steps leading to more strongly marked and more permanent varieties; and at these latter, as leading to sub-species, and to species. […] I attribute the passage of a variety, from a state in which it differs very slightly from its parent to one in which it differs more, to the action of natural selection in accumulating […] differences of structure in certain definite directions. Hence I believe a well-marked variety may be justly called an incipient species […] I look at the term species as one arbitrarily given, for the sake of convenience, to a set of individuals closely resembling each other, and that it does not essentially differ from the term variety, which is given to less distinct and more fluctuating forms. The term variety, again, in comparison with mere individual differences, is also applied arbitrarily, and for mere convenience’ sake. […] the species of large genera present a strong analogy with varieties. And we can clearly understand these analogies, if species have once existed as varieties, and have thus originated: whereas, these analogies are utterly inexplicable if each species has been independently created.”
“Owing to [the] struggle for life, any variation, however slight and from whatever cause proceeding, if it be in any degree profitable to an individual of any species, in its infinitely complex relations to other organic beings and to external nature, will tend to the preservation of that individual, and will generally be inherited by its offspring. The offspring, also, will thus have a better chance of surviving, for, of the many individuals of any species which are periodically born, but a small number can survive. I have called this principle, by which each slight variation, if useful, is preserved, by the term of Natural Selection, in order to mark its relation to man’s power of selection. We have seen that man by selection can certainly produce great results, and can adapt organic beings to his own uses, through the accumulation of slight but useful variations, given to him by the hand of Nature. But Natural Selection, as we shall hereafter see, is a power incessantly ready for action, and is as immeasurably superior to man’s feeble efforts, as the works of Nature are to those of Art. […] In looking at Nature, it is most necessary to keep the foregoing considerations always in mind – never to forget that every single organic being around us may be said to be striving to the utmost to increase in numbers; that each lives by a struggle at some period of its life; that heavy destruction inevitably falls either on the young or old, during each generation or at recurrent intervals. Lighten any check, mitigate the destruction ever so little, and the number of the species will almost instantaneously increase to any amount. The face of Nature may be compared to a yielding surface, with ten thousand sharp wedges packed close together and driven inwards by incessant blows, sometimes one wedge being struck, and then another with greater force. 
[…] A corollary of the highest importance may be deduced from the foregoing remarks, namely, that the structure of every organic being is related, in the most essential yet often hidden manner, to that of all other organic beings, with which it comes into competition for food or residence, or from which it has to escape, or on which it preys.”
“Under nature, the slightest difference of structure or constitution may well turn the nicely-balanced scale in the struggle for life, and so be preserved. How fleeting are the wishes and efforts of man! how short his time! And consequently how poor will his products be, compared with those accumulated by nature during whole geological periods. […] It may be said that natural selection is daily and hourly scrutinising, throughout the world, every variation, even the slightest; rejecting that which is bad, preserving and adding up all that is good; silently and insensibly working, whenever and wherever opportunity offers, at the improvement of each organic being in relation to its organic and inorganic conditions of life. We see nothing of these slow changes in progress, until the hand of time has marked the long lapses of ages, and then so imperfect is our view into long past geological ages, that we only see that the forms of life are now different from what they formerly were.”
“I have collected so large a body of facts, showing, in accordance with the almost universal belief of breeders, that with animals and plants a cross between different varieties, or between individuals of the same variety but of another strain, gives vigour and fertility to the offspring; and on the other hand, that close interbreeding diminishes vigour and fertility; that these facts alone incline me to believe that it is a general law of nature (utterly ignorant though we be of the meaning of the law) that no organic being self-fertilises itself for an eternity of generations; but that a cross with another individual is occasionally perhaps at very long intervals — indispensable. […] in many organic beings, a cross between two individuals is an obvious necessity for each birth; in many others it occurs perhaps only at long intervals; but in none, as I suspect, can self-fertilisation go on for perpetuity.”
“as new species in the course of time are formed through natural selection, others will become rarer and rarer, and finally extinct. The forms which stand in closest competition with those undergoing modification and improvement, will naturally suffer most. […] Whatever the cause may be of each slight difference in the offspring from their parents – and a cause for each must exist – it is the steady accumulation, through natural selection, of such differences, when beneficial to the individual, which gives rise to all the more important modifications of structure, by which the innumerable beings on the face of this earth are enabled to struggle with each other, and the best adapted to survive.”
“Natural selection, as has just been remarked, leads to divergence of character and to much extinction of the less improved and intermediate forms of life. On these principles, I believe, the nature of the affinities of all organic beings may be explained. It is a truly wonderful fact – the wonder of which we are apt to overlook from familiarity – that all animals and all plants throughout all time and space should be related to each other in group subordinate to group, in the manner which we everywhere behold – namely, varieties of the same species most closely related together, species of the same genus less closely and unequally related together, forming sections and sub-genera, species of distinct genera much less closely related, and genera related in different degrees, forming sub-families, families, orders, sub-classes, and classes. The several subordinate groups in any class cannot be ranked in a single file, but seem rather to be clustered round points, and these round other points, and so on in almost endless cycles. On the view that each species has been independently created, I can see no explanation of this great fact in the classification of all organic beings; but, to the best of my judgment, it is explained through inheritance and the complex action of natural selection, entailing extinction and divergence of character […] The affinities of all the beings of the same class have sometimes been represented by a great tree. I believe this simile largely speaks the truth. The green and budding twigs may represent existing species; and those produced during each former year may represent the long succession of extinct species. At each period of growth all the growing twigs have tried to branch out on all sides, and to overtop and kill the surrounding twigs and branches, in the same manner as species and groups of species have tried to overmaster other species in the great battle for life. 
The limbs divided into great branches, and these into lesser and lesser branches, were themselves once, when the tree was small, budding twigs; and this connexion of the former and present buds by ramifying branches may well represent the classification of all extinct and living species in groups subordinate to groups. Of the many twigs which flourished when the tree was a mere bush, only two or three, now grown into great branches, yet survive and bear all the other branches; so with the species which lived during long-past geological periods, very few now have living and modified descendants. From the first growth of the tree, many a limb and branch has decayed and dropped off; and these lost branches of various sizes may represent those whole orders, families, and genera which have now no living representatives, and which are known to us only from having been found in a fossil state. As we here and there see a thin straggling branch springing from a fork low down in a tree, and which by some chance has been favoured and is still alive on its summit, so we occasionally see an animal like the Ornithorhynchus or Lepidosiren, which in some small degree connects by its affinities two large branches of life, and which has apparently been saved from fatal competition by having inhabited a protected station. As buds give rise by growth to fresh buds, and these, if vigorous, branch out and overtop on all sides many a feebler branch, so by generation I believe it has been with the great Tree of Life, which fills with its dead and broken branches the crust of the earth, and covers the surface with its ever branching and beautiful ramifications.”
“No one has been able to point out what kind, or what amount, of difference in any recognisable character is sufficient to prevent two species crossing. It can be shown that plants most widely different in habit and general appearance, and having strongly marked differences in every part of the flower, even in the pollen, in the fruit, and in the cotyledons, can be crossed. […] By a reciprocal cross between two species, I mean the case, for instance, of a stallion-horse being first crossed with a female-ass, and then a male-ass with a mare: these two species may then be said to have been reciprocally crossed. There is often the widest possible difference in the facility of making reciprocal crosses. Such cases are highly important, for they prove that the capacity in any two species to cross is often completely independent of their systematic affinity, or of any recognisable difference in their whole organisation. On the other hand, these cases clearly show that the capacity for crossing is connected with constitutional differences imperceptible by us, and confined to the reproductive system. […] fertility in the hybrid is independent of its external resemblance to either pure parent. […] The foregoing rules and facts […] appear to me clearly to indicate that the sterility both of first crosses and of hybrids is simply incidental or dependent on unknown differences, chiefly in the reproductive systems, of the species which are crossed. […] Laying aside the question of fertility and sterility, in all other respects there seems to be a general and close similarity in the offspring of crossed species, and of crossed varieties. If we look at species as having been specially created, and at varieties as having been produced by secondary laws, this similarity would be an astonishing fact. But it harmonizes perfectly with the view that there is no essential distinction between species and varieties. 
[…] the facts briefly given in this chapter do not seem to me opposed to, but even rather to support the view, that there is no fundamental distinction between species and varieties.”
“Believing, from reasons before alluded to, that our continents have long remained in nearly the same relative position, though subjected to large, but partial oscillations of level, I am strongly inclined to…” (…’probably get some things wrong…’, US)
“In considering the distribution of organic beings over the face of the globe, the first great fact which strikes us is, that neither the similarity nor the dissimilarity of the inhabitants of various regions can be accounted for by their climatal and other physical conditions. Of late, almost every author who has studied the subject has come to this conclusion. […] A second great fact which strikes us in our general review is, that barriers of any kind, or obstacles to free migration, are related in a close and important manner to the differences between the productions of various regions. […] A third great fact, partly included in the foregoing statements, is the affinity of the productions of the same continent or sea, though the species themselves are distinct at different points and stations. It is a law of the widest generality, and every continent offers innumerable instances. Nevertheless the naturalist in travelling, for instance, from north to south never fails to be struck by the manner in which successive groups of beings, specifically distinct, yet clearly related, replace each other. […] We see in these facts some deep organic bond, prevailing throughout space and time, over the same areas of land and water, and independent of their physical conditions. The naturalist must feel little curiosity, who is not led to inquire what this bond is. This bond, on my theory, is simply inheritance […] The dissimilarity of the inhabitants of different regions may be attributed to modification through natural selection, and in a quite subordinate degree to the direct influence of different physical conditions. 
The degree of dissimilarity will depend on the migration of the more dominant forms of life from one region into another having been effected with more or less ease, at periods more or less remote; on the nature and number of the former immigrants; and on their action and reaction, in their mutual struggles for life; the relation of organism to organism being, as I have already often remarked, the most important of all relations. Thus the high importance of barriers comes into play by checking migration; as does time for the slow process of modification through natural selection. […] On this principle of inheritance with modification, we can understand how it is that sections of genera, whole genera, and even families are confined to the same areas, as is so commonly and notoriously the case.”
“the natural system is founded on descent with modification […] and […] all true classification is genealogical; […] community of descent is the hidden bond which naturalists have been unconsciously seeking, […] not some unknown plan or creation, or the enunciation of general propositions, and the mere putting together and separating objects more or less alike.”
Here’s my first post about the book. I’d probably have liked the book better if I hadn’t read the Cognitive Psychology text before this one, as knowledge from that book has made me think a few times in specific contexts that ‘that’s a bit more complicated than you’re making it out to be’ – as I also mentioned in the first post, the book is a bit too popular science-y for my taste. I have been reading other books in the last few days – for example I started reading Darwin a couple of days ago – and so I haven’t really spent much time on this one since my first post; however I have read the first 10 chapters (out of 14) by now, and below I’ve added a few observations from the chapters in the middle.
“In 1958, in a now-legendary, perhaps infamous experiment, the psychologist Harry Harlow of the University of Wisconsin removed newborn rhesus monkeys from their mothers. He presented these newborns instead with two surrogates, one made of wire and one made of cloth […]. Either stand-in could be rigged with a milk bottle, but regardless of which “mother” provided food, infant monkeys spent most of their time clinging to the one made of cloth, running to it immediately when startled or upset. They visited the wire mother only when that surrogate provided food, and then, only for as long as it took to feed.2
Harlow found that monkeys deprived of tactile comfort showed significant delays in their progress, both mentally and emotionally. Those deprived of tactile comfort and also raised in isolation from other monkeys developed additional behavioral aberrations, often severe, from which they never recovered. Even after they had rejoined the troop, these deprived monkeys would sit alone and rock back and forth. They were overly aggressive with their playmates, and later in life they remained unable to form normal attachments. They were, in fact, socially inept — a deficiency that extended down into the most basic biological behaviors. If a socially deprived female was approached by a normal male during the time when hormones made her sexually receptive, she would squat on the floor rather than present her hindquarters. When a previously isolated male approached a receptive female, he would clasp her head instead of her hindquarters, then engage in pelvic thrusts. […] Females raised in isolation became either incompetent or abusive mothers. Even monkeys raised in cages where they could see, smell, and hear — but not touch — other monkeys developed what the neuroscientist Mary Carlson has called an “autistic-like syndrome,” with excessive grooming, self-clasping, social withdrawal, and rocking. As Carlson told a reporter, “You were not really a monkey unless you were raised in an interactive monkey environment.””
In the authors’ coverage of oxytocin’s various roles in human and animal social interaction they’re laying it on a bit thick, in my opinion, and the less-than-skeptical coverage there makes me somewhat skeptical of their coverage of mirror neurons as well, partly on account of stuff like this. However, I decided to add a little of the coverage of this topic anyway:
“In the 1980s the neurophysiologist Giacomo Rizzolatti began experimenting with macaque monkeys, running electrodes directly into their brains and giving them various objects to handle. The wiring was so precise that it allowed Rizzolatti and his colleagues to identify the specific monkey neurons that were activated at any moment.
When the monkeys carried out an action, such as reaching for a peanut, an area in the premotor cortex called F5 would fire […]. But then the scientists noticed something quite unexpected. When one of the researchers picked up a peanut to hand it to the monkey, those same motor neurons in the monkey’s brain fired. It was as if the animal itself had picked up the peanut. Likewise, the same neurons that fired when the monkey put a peanut in its mouth would fire when the monkey watched a researcher put a peanut in his mouth. […] Rizzolatti gave these structures the name “mirror neurons.” They fire even when the critical point of the action—the person’s hand grasping the peanut, for instance — is hidden from view behind some object, provided that the monkey knows there is a peanut back there. Even simply hearing the action — a peanut shell being cracked — can trigger the response. In all these instances, it is the goal rather than the observed action itself that is being mirrored in the monkey’s neural response. […] Rizzolatti and his colleagues confirmed the role of goals […] by performing brain scans while people watched humans, monkeys, and dogs opening and closing their jaws as if biting. Then they repeated the scans while the study subjects watched humans speak, monkeys smack their lips, and dogs bark.9 When the participants watched any of the three species carrying out the biting motion, the same areas of their brains were activated that activate when humans themselves bite. That is, observing actions that could reasonably be performed by humans, even when the performers were monkeys or dogs, activated the appropriate portion of the mirror neuron system in the human brain. […] the mirror neuron system isn’t simply “monkey see, monkey do,” or even “human see, human do.” It functions to give the observing individual knowledge of the observed action from a “personal” perspective. 
This “personal” understanding of others’ actions, it appears, promotes our understanding of and resonance with others.”
“In a study of how people monitor social cues, when researchers gave participants facts related to interpersonal or collective social ties presented in a diary format, those who were lonely remembered a greater proportion of this information than did those who were not lonely. Feeling lonely increases a person’s attentiveness to social cues just as being hungry increases a person’s attentiveness to food cues.28 […] They [later] presented images of twenty-four male and female faces depicting four emotions — anger, fear, happiness, and sadness — in two modes, high intensity and low intensity. The faces appeared individually for only one second, during which participants had to judge the emotional timbre. The higher the participants’ level of loneliness, the less accurate their interpretation of the facial expressions.”
“As we try to determine the meaning of events around us, we humans are not particularly good at knowing the causes of our own feelings or behavior. We overestimate our own strengths and underestimate our faults. We overestimate the importance of our contribution to group activities, the pervasiveness of our beliefs within the wider population, and the likelihood that an event we desire will occur.3 At the same time we underestimate the contribution of others, as well as the likelihood that risks in the world apply to us. Events that unfold unexpectedly are not reasoned about as much as they are rationalized, and the act of remembering itself […] is far more of a biased reconstruction than an accurate recollection of events. […] Amid all the standard distortions we engage in, […] loneliness also sets us apart by making us more fragile, negative, and self-critical. […] One of the distinguishing characteristics of people who have become chronically lonely is the perception that they are doomed to social failure, with little if any control over external circumstances. Awash in pessimism, and feeling the need to protect themselves at every turn, they tend to withdraw, or to rely on the passive forms of coping under stress […] The social strategy that loneliness induces — high in social avoidance, low in social approach — also predicts future loneliness. The cynical worldview induced by loneliness, which consists of alienation and little faith in others, in turn, has been shown to contribute to actual social rejection. This is how feeling lonely creates self-fulfilling prophecies. 
If you maintain a subjective sense of rejection long enough, over time you are far more likely to confront the actual social rejection that you dread.8 […] In an effort to protect themselves against disappointment and the pain of rejection, the lonely can come up with endless numbers of reasons why a particular effort to reach out will be pointless, or why a particular relationship will never work. This may help explain why, when we’re feeling lonely, we undermine ourselves by assuming that we lack social skills that in fact, we do have available.”
“Because the emotional system that governs human self-preservation was built for a primitive environment and simple, direct dangers, it can be extremely naïve. It is impressionable and prefers shallow, social, and anecdotal information to abstract data. […] A sense of isolation can make [humans] feel unsafe. When we feel unsafe, we do the same thing a hunter-gatherer on the plains of Africa would do — we scan the horizon for threats. And just like a hunter-gatherer hearing an ominous sound in the brush, the lonely person too often assumes the worst, tightens up, and goes into the psychological equivalent of a protective crouch.”
“One might expect that a lonely person, hungry to fulfill unmet social needs, would be very accepting of a new acquaintance, just as a famished person might take pleasure in food that was not perfectly prepared or her favorite item on the menu. However, when people feel lonely they are actually far less accepting of potential new friends than when they feel socially contented.17 Studies show that lonely undergraduates hold more negative perceptions of their roommates than do their nonlonely peers.”
This is not a very ‘meaty’ post, but it’s been a long time since I had one of these and I figured it was time for another one. As always links and comments are welcome.
i. The unbearable accuracy of stereotypes. A long time ago I made a mental note to read this paper, but I’ve been busy with other things since. Today I skimmed it and decided that it looks interesting enough to give it a detailed read later. Some remarks from the summary towards the end of the paper:
“The scientific evidence provides more evidence of accuracy than of inaccuracy in social stereotypes. The most appropriate generalization based on the evidence is that people’s beliefs about groups are usually moderately to highly accurate, and are occasionally highly inaccurate. […] This pattern of empirical support for moderate to high stereotype accuracy is not unique to any particular target or perceiver group. Accuracy has been found with racial and ethnic groups, gender, occupations, and college groups. […] The pattern of moderate to high stereotype accuracy is not unique to any particular research team or methodology. […] This pattern of moderate to high stereotype accuracy is not unique to the substance of the stereotype belief. It occurs for stereotypes regarding personality traits, demographic characteristics, achievement, attitudes, and behavior. […] The strong form of the exaggeration hypothesis – either defining stereotypes as exaggerations or as claiming that stereotypes usually lead to exaggeration – is not supported by data. Exaggeration does sometimes occur, but it does not appear to occur much more frequently than does accuracy or underestimation, and may even occur less frequently.”
ii. I’ve spent approximately 150 hours on vocabulary.com altogether at this point (having ‘mastered’ ~10,200 words in the process). A few words I’ve recently encountered on the site: Nescience (note to self: if someone calls you ‘nescient’ during a conversation, in many contexts that’ll be an insult, not a compliment) (Related note to self: I should find myself some smarter enemies, who use words like ‘nescient’…), eristic, carrel, oleaginous, decal, gable, epigone, armoire, chalet, cashmere, arrogate, ovine.
iv. A while back I posted a few comments on SSC and I figured I might as well link to them here (at least it’ll make it easier for me to find them later on). Here is where I posted a few comments on a recent study dealing with Ramadan-related IQ effects, a topic which I’ve covered here on the blog before, and here I discuss some of the benefits of not having low self-esteem.
On a completely unrelated note, today I left a comment in a reddit thread about ‘Books That Challenged You / Made You See the World Differently’ which may also be of interest to readers of this blog. I realized while writing the comment that this question is probably getting more and more difficult for me to answer as time goes by. It really all depends upon what part of the world you want to see in a different light – which aspects you’re most interested in. For people wondering about where the books about mathematics and statistics were in that comment (I do like to think these fields play some role in terms of ‘how I see the world’), I wasn’t really sure which book to include on such topics, if any; I can’t think of any single math or stats textbook that’s dramatically changed the way I thought about the world – to the extent that my knowledge about these topics has changed how I think about the world, it’s been a long drawn-out process.
People who care the least bit about such things probably already know that a really strong tournament is currently being played in St. Louis, the so-called Sinquefield Cup, so I’m not going to talk about that here (for resources and relevant links, go here).
I talked about the strong rating pools on ICC not too long ago, but one thing I did not mention back then is that yes, I also occasionally win against some of those grandmasters the rating pool throws at me – at least I’ve by now won a few bullet games against GMs. I’m aware that for many ‘serious chess players’ bullet ‘doesn’t really count’ because the time dimension is much more important than it is in other chess settings, but to people who think skill doesn’t matter much in bullet I’d say: have a match with Hikaru Nakamura and see how well you do against him (if you’re interested in how that might turn out, see e.g. this video – and keep in mind that at the beginning of the video Nakamura had already won 8 games in a row, out of 8, against his opponent, who incidentally is not exactly a beginner). The skill-sets required in bullet and in classical time control games do not overlap perfectly, but when I started playing bullet online I quickly realized that good players need very little time to completely outplay people who just play random moves (fast). Below I have posted a screencap I took while kibitzing a game of one of my former opponents, an anonymous GM from Germany, against whom I currently have a score of 2.5/6 – two wins, one draw, and three losses (see the ‘My score vs CPE’ box).
I like to think of a score like this as at least some kind of accomplishment, though admittedly perhaps not a very big one.
Also in chess-related news, I’m currently reading Jesús de la Villa’s 100 Endgames book, which Christof Sielecki has said some very nice things about. A lot of the stuff I’ve encountered so far is stuff I’ve seen before, positions I’ve already encountered and worked on, endgame principles I’m familiar with, etc., but not all of it is known stuff and I really like the structure of the book. There are a lot of pages left, and as it is I’m planning to read this book from cover to cover, which is something I usually do not do when I read chess books (few people do, judging from various comments I’ve seen people make in all kinds of different contexts).
Lastly, a lecture:
This will be my last post about the book. After having spent a few hours on the post I started to realize the post would become very long if I were to cover all the remaining chapters, and so in the end I decided not to discuss material from chapter 12 (‘How some marine plants modify the environment for other organisms’) here, even though I actually thought some of that stuff was quite interesting. I may decide to talk briefly about some of the stuff in that chapter in another blogpost later on (but most likely I won’t). For a few general remarks about the book, see my second post about it.
Some stuff from the last half of the book below:
“The light reactions of marine plants are similar to those of terrestrial plants […], except that pigments other than chlorophylls a and b and carotenoids may be involved in the capturing of light […] and that special arrangements between the two photosystems may be different […]. Similarly, the CO2-fixation and -reduction reactions are also basically the same in terrestrial and marine plants. Perhaps one should put this the other way around: Terrestrial-plant photosynthesis is similar to marine-plant photosynthesis, which is not surprising since plants have evolved in the oceans for 3.4 billion years and their descendants on land for only 350–400 million years. […] In underwater marine environments, the accessibility to CO2 is low mainly because of the low diffusivity of solutes in liquid media, and for CO2 this is exacerbated by today’s low […] ambient CO2 concentrations. Therefore, there is a need for a CCM also in marine plants […] CCMs in cyanobacteria are highly active and accumulation factors (the internal vs. external CO2 concentrations ratio) can be of the order of 800–900 […] CCMs in eukaryotic microalgae are not as effective at raising internal CO2 concentrations as are those in cyanobacteria, but […] microalgal CCMs result in CO2 accumulation factors as high as 180 […] CCMs are present in almost all marine plants. These CCMs are based mainly on various forms of HCO3− [bicarbonate] utilisation, and may raise the intrachloroplast (or, in cyanobacteria, intracellular or intra-carboxysome) CO2 to several-fold that of seawater. Thus, Rubisco is in effect often saturated by CO2, and photorespiration is therefore often absent or limited in marine plants.”
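The accumulation factors quoted above translate into striking absolute concentrations. A quick back-of-the-envelope sketch (the ~13 μM figure for CO2 in air-equilibrated seawater is my own assumption, not a number from the book):

```python
# Internal CO2 concentrations implied by the quoted CCM accumulation factors.
seawater_co2_um = 13.0  # assumed air-equilibrated seawater CO2, micromolar

accumulation_factors = {
    "cyanobacteria": 850,          # mid-range of the quoted 800-900
    "eukaryotic microalgae": 180,  # the quoted upper value
}

for organism, factor in accumulation_factors.items():
    internal_mm = seawater_co2_um * factor / 1000  # convert uM to mM
    print(f"{organism}: internal CO2 ~ {internal_mm:.1f} mM")
```

Even the weaker eukaryotic CCMs imply CO2 concentrations around Rubisco in the millimolar range, which helps explain why photorespiration is often absent or limited in marine plants.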
“we view the main difference in photosynthesis between marine and terrestrial plants as the former’s ability to acquire Ci [inorganic carbon] (in most cases HCO3−) from the external medium and concentrate it intracellularly in order to optimise their photosynthetic rates or, in some cases, to be able to photosynthesise at all. […] CO2 dissolved in seawater is, under air-equilibrated conditions and given today’s seawater pH, in equilibrium with a >100 times higher concentration of HCO3−, and it is therefore not surprising that most marine plants utilise the latter Ci form for their photosynthetic needs. […] any plant that utilises bulk HCO3− from seawater must convert it to CO2 somewhere along its path to Rubisco. This can be done in different ways by different plants and under different conditions”
“The conclusion that macroalgae use HCO3− stems largely from results of experiments in which concentrations of CO2 and HCO3− were altered (chiefly by altering the pH of the seawater) while measuring photosynthetic rates, or where the plants themselves withdrew these Ci forms as they photosynthesised in a closed system as manifested by a pH increase (so-called pH-drift experiments) […] The reason that the pH in the surrounding seawater increases as plants photosynthesise is first that CO2 is in equilibrium with carbonic acid (H2CO3), and so the acidity decreases (i.e. pH rises) as CO2 is used up. At higher pH values (above ∼9), when all the CO2 is used up, then a decrease in HCO3− concentrations will also result in increased pH since the alkalinity is maintained by the formation of OH− […] some algae can also give off OH− to the seawater medium in exchange for HCO3− uptake, bringing the pH up even further (to >10).”
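The pH-dependence of the CO2/HCO3− balance underlying these pH-drift experiments can be sketched with a simple equilibrium calculation. The pK1 value below is an assumed round number for the apparent first dissociation constant of carbonic acid in seawater (it varies with temperature and salinity); the relation itself is just the Henderson-Hasselbalch equation:

```python
# Equilibrium speciation sketch: how the [HCO3-]/[CO2] ratio grows with pH.
PK1 = 6.0  # assumed round value for the apparent pK1 in seawater at ~25 C

def bicarbonate_to_co2_ratio(ph):
    """Equilibrium [HCO3-]/[CO2] ratio at a given pH.

    Henderson-Hasselbalch: ratio = 10 ** (pH - pK1)."""
    return 10 ** (ph - PK1)

for ph in (7.0, 8.1, 9.0):
    print(f"pH {ph}: [HCO3-]/[CO2] ~ {bicarbonate_to_co2_ratio(ph):.0f}")
```

At typical seawater pH (~8.1) the ratio comes out above 100, consistent with the “>100 times higher concentration of HCO3−” quoted earlier; as photosynthesis consumes CO2 and the pH climbs past ~9, essentially all remaining Ci is HCO3− (and, at still higher pH, carbonate).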
“Carbonic anhydrase (CA) is a ubiquitous enzyme, found in all organisms investigated so far (from bacteria, through plants, to mammals such as ourselves). This may be seen as remarkable, since its only function is to catalyse the inter-conversion between CO2 and HCO3− in the reaction CO2 + H2O ↔ H2CO3; we can exchange the latter Ci form to HCO3− since this is spontaneously formed by H2CO3 and is present at a much higher equilibrium concentration than the latter. Without CA, the equilibrium between CO2 and HCO3− is a slow process […], but in the presence of CA the reaction becomes virtually instantaneous. Since CO2 and HCO3− generate different pH values of a solution, one of the roles of CA is to regulate intracellular pH […] another […] function is to convert HCO3− to CO2 somewhere en route towards the latter’s final fixation by Rubisco.”
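How slow is the uncatalysed interconversion the quote refers to? A rough calculation with ballpark literature values (both rate constants below are my assumptions, not figures from the book):

```python
import math

# Rough literature ballpark values (assumed, not from the book):
k_uncatalysed = 0.037  # s^-1, approx. rate constant for CO2 hydration at ~25 C
k_cat_ca = 1.0e6       # s^-1, approx. turnover number of a fast carbonic anhydrase

# Half-life of the uncatalysed first-order hydration reaction.
half_life_s = math.log(2) / k_uncatalysed
print(f"uncatalysed CO2 hydration half-life: ~{half_life_s:.0f} s")
print(f"per-site speed-up with CA: roughly {k_cat_ca / k_uncatalysed:.0e}-fold")
```

A half-life on the order of tens of seconds is glacial relative to photosynthetic CO2 demand, which is why the “virtually instantaneous” catalysis by CA matters so much for Ci supply to Rubisco.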
“with very few […] exceptions, marine macrophytes are not C4 plants. Also, while a CAM-like [Crassulacean acid metabolism-like, see my previous post about the book for details] feature of nightly uptake of Ci may complement that of the day in some brown algal kelps, this is an exception […] rather than a rule for macroalgae in general. Thus, virtually no marine macroalgae are C4 or CAM plants, and instead their CCMs are dependent on HCO3− utilization, which brings about high concentrations of CO2 in the vicinity of Rubisco. In Ulva, this type of CCM causes the intra-cellular CO2 concentration to be some 200 μM, i.e. ∼15 times higher than that in seawater.”
“deposition of calcium carbonate (CaCO3) as either calcite or aragonite in marine organisms […] can occur within the cells, but for macroalgae it usually occurs outside of the cell membranes, i.e. in the cell walls or other intercellular spaces. The calcification (i.e. CaCO3 formation) can sometimes continue in darkness, but is normally greatly stimulated in light and follows the rate of photosynthesis. During photosynthesis, the uptake of CO2 will lower the total amount of dissolved inorganic carbon (Ci) and, thus, increase the pH in the seawater surrounding the cells, thereby increasing the saturation state of CaCO3. This, in turn, favours calcification […]. Conversely, it has been suggested that calcification might enhance the photosynthetic rate by increasing the rate of conversion of HCO3− to CO2 by lowering the pH. Respiration will reduce calcification rates when released CO2 increases Ci and/but lowers intercellular pH.”
“photosynthesis is most efficient at very low irradiances and increasingly inefficient as irradiances increase. This is most easily understood if we regard ‘efficiency’ as being dependent on quantum yield: At low ambient irradiances (the light that causes photosynthesis is also called ‘actinic’ light), almost all the photon energy conveyed through the antennae will result in electron flow through (or charge separation at) the reaction centres of photosystem II […]. Another way to put this is that the chances for energy funneled through the antennae to encounter an oxidised (or ‘open’) reaction centre are very high. Consequently, almost all of the photons emitted by the modulated measuring light will be consumed in photosynthesis, and very little of that photon energy will be used for generating fluorescence […] the higher the ambient (or actinic) light, the less efficient is photosynthesis (quantum yields are lower), and the less likely it is for photon energy funnelled through the antennae (including those from the measuring light) to find an open reaction centre, and so the fluorescence generated by the latter light increases […] Alpha (α), which is a measure of the maximal photosynthetic efficiency (or quantum yield, i.e. photosynthetic output per photons received, or absorbed […] by a specific leaf/thallus area), is high in low-light plants because pigment levels (or pigment densities per surface area) are high. In other words, under low-irradiance conditions where few photons are available, the probability that they will all be absorbed is higher in plants with a high density of photosynthetic pigments (or larger ‘antennae’ […]). In yet other words, efficient photon absorption is particularly important at low irradiances, where the higher concentration of pigments potentially optimises photosynthesis in low-light plants. 
In high-irradiance environments, where photons are plentiful, their efficient absorption becomes less important, and instead it is reactions downstream of the light reactions that become important in the performance of optimal rates of photosynthesis. The CO2-fixing capability of the enzyme Rubisco, which we have indicated as a bottleneck for the entire photosynthetic apparatus at high irradiances, is indeed generally higher in high-light than in low-light plants because of its higher concentration in the former. So, at high irradiances where the photon flux is not limiting to photosynthetic rates, the activity of Rubisco within the CO2-fixation and -reduction part of photosynthesis becomes limiting, but is optimised in high-light plants by up-regulation of its formation. […] photosynthetic responses have often been explained in terms of adaptation to low light being brought about by alterations in either the number of ‘photosynthetic units’ or their size […] There are good examples of both strategies occurring in different species of algae”.
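The trade-off described in these two paragraphs, pigment investment setting the initial slope α and Rubisco capacity setting the light-saturated ceiling, can be illustrated with a standard photosynthesis-irradiance model. I’m using the Jassby-Platt tanh formulation here, a common choice in the literature but my own addition, and the parameter values are purely hypothetical:

```python
import math

def photosynthesis_rate(irradiance, alpha, p_max):
    """Jassby-Platt P-I curve: P = Pmax * tanh(alpha * I / Pmax).

    alpha is the initial slope (a proxy for quantum efficiency / pigment
    investment); p_max is the light-saturated rate (the Rubisco-limited
    ceiling)."""
    return p_max * math.tanh(alpha * irradiance / p_max)

# Hypothetical parameter pairs: a low-light plant (high alpha, low Pmax)
# versus a high-light plant (low alpha, high Pmax).
low_light = dict(alpha=0.05, p_max=5.0)
high_light = dict(alpha=0.02, p_max=20.0)

for label, params in (("low-light", low_light), ("high-light", high_light)):
    for irr in (10, 100, 1000):
        rate = photosynthesis_rate(irr, **params)
        print(f"{label} plant at I={irr}: P={rate:.2f}")
```

Running this, the high-α plant fixes more carbon at low irradiance while the high-Pmax plant wins at saturating irradiance, which mirrors the pigments-versus-Rubisco investment logic of the quote.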
“In general, photoinhibition can be defined as the lowering of photosynthetic rates at high irradiances. This is mainly due to the rapid (sometimes within minutes) degradation of […] the D1 protein. […] there are defense mechanisms [in plants] that divert excess light energy to processes different from photosynthesis; these processes thus cause a downregulation of the entire photosynthetic process while protecting the photosynthetic machinery from excess photons that could cause damage. One such process is the xanthophyll cycle. […] It has […] been suggested that the activity of the CCM in marine plants […] can be a source of energy dissipation. If CO2 levels are raised inside the cells to improve Rubisco activity, some of that CO2 can potentially leak out of the cells, and so raising the net energy cost of CO2 accumulation and, thus, using up large amounts of energy […]. Indirect evidence for this comes from experiments in which CCM activity is down-regulated by elevated CO2”
“Photoinhibition is often divided into dynamic and chronic types, i.e. the former is quickly remedied (e.g. during the day[…]) while the latter is more persistent (e.g. over seasons […] the mechanisms for down-regulating photosynthesis by diverting photon energies and the reducing power of electrons away from the photosynthetic systems, including the possibility of detoxifying oxygen radicals, is important in high-light plants (that experience high irradiances during midday) as well as in those plants that do see significant fluctuations in irradiance throughout the day (e.g. intertidal benthic plants). While low-light plants may lack those systems of down-regulation, one must remember that they do not live in environments of high irradiances, and so seldom or never experience high irradiances. […] If plants had a mind, one could say that it was worth it for them to invest in pigments, but unnecessary to invest in high amounts of Rubisco, when growing under low-light conditions, and necessary for high-light growing plants to invest in Rubisco, but not in pigments. Evolution has, of course, shaped these responses”.
“shallow-growing corals […] show two types of photoinhibition: a dynamic type that remedies itself at the end of each day and a more chronic type that persists over longer time periods. […] Bleaching of corals occurs when they expel their zooxanthellae to the surrounding water, after which they either die or acquire new zooxanthellae of other types (or clades) that are better adapted to the changes in the environment that caused the bleaching. […] Active Ci acquisition mechanisms, whether based on localised active H+ extrusion and acidification and enhanced CO2 supply, or on active transport of HCO3−, are all energy requiring. As a consequence it is not surprising that the CCM activity is decreased at lower light levels […] a whole spectrum of light-responses can be found in seagrasses, and those are often in co-ordinance with the average daily irradiances where they grow. […] The function of chloroplast clumping in Halophila stipulacea appears to be protection of the chloroplasts from high irradiances. Thus, a few peripheral chloroplasts ‘sacrifice’ themselves for the good of many others within the clump that will be exposed to lower irradiances. […] While water is an effective filter of UV radiation (UVR)2, many marine organisms are sensitive to UVR and have devised ways to protect themselves against this harmful radiation. These ways include the production of UV-filtering compounds called mycosporine-like amino acids (MAAs), which is common also in seagrasses”.
“Many algae and seagrasses grow in the intertidal and are, accordingly, exposed to air during various parts of the day. On the one hand, this makes them amenable to using atmospheric CO2, the diffusion rate of which is some 10 000 times higher in air than in water. […] desiccation is […] the big drawback when growing in the intertidal, and excessive desiccation will lead to death. When some of the green macroalgae left the seas and formed terrestrial plants some 400 million years ago (the latter of which then ‘invaded’ Earth), there was a need for measures to evolve that on the one side ensured a water supply to the above-ground parts of the plants (i.e. roots1) and, on the other, hindered the water entering the plants to evaporate (i.e. a water-impermeable cuticle). Macroalgae lack those barriers against losing intracellular water, and are thus more prone to desiccation, the rate of which depends on external factors such as heat and humidity and internal factors such as thallus thickness. […] the mechanisms of desiccation tolerance in macroalgae is not well understood on the cellular level […] there seems to be a general correlation between the sensitivity of the photosynthetic apparatus (more than the respiratory one) to desiccation and the occurrence of macroalgae along a vertical gradient in the intertidal: the less sensitive (i.e. the more tolerant), the higher up the algae can grow. This is especially true if the sensitivity to desiccation is measured as a function of the ability to regain photosynthetic rates following rehydration during re-submergence. While this correlation exists, the mechanism of protecting the photosynthetic system against desiccation is largely unknown”.
As pointed out in the review, ‘it’s really mostly a biochemistry text.’ At least there’s a lot of that stuff in there (‘it gets better towards the end’, would be one way to put it – the last chapters deal mostly with other topics, such as measurement and brief notes on some not-particularly-well-explored ecological dynamics of potential interest), and if you don’t want to read a book which deals in some detail with topics and concepts like alkalinity, crassulacean acid metabolism, photophosphorylation, photosynthetic reaction centres, the Calvin cycle (also known straightforwardly as the ‘reductive pentose phosphate cycle’…), enzymes with names like Ribulose-1,5-bisphosphate carboxylase/oxygenase (‘RuBisCO’ among friends…) and phosphoenolpyruvate carboxylase (‘PEP-case’ among friends…), mycosporine-like amino acids, 4,4′-Diisothiocyanatostilbene-2,2′-disulfonic acid (‘DIDS’ among friends), phosphoenolpyruvate, photorespiration, carbonic anhydrase, C4 carbon fixation, the cytochrome b6f complex, … – well, you should definitely not read this book. If you do feel like reading about these sorts of things, having a look at the book seems to me a better idea than reading the wiki articles.
I’m not a biochemist, but I could follow a great deal of what was going on in this book, which is perhaps a good indication of how well written it is. This stuff is interesting and complicated, and the authors cover most of it quite well. The book contains far too much material for it to make sense to cover all of it here, but I do want to highlight some more of it, so I’ve added some quotes below.
“Water velocities are central to marine photosynthetic organisms because they affect the transport of nutrients such as Ci [inorganic carbon] towards the photosynthesising cells, as well as the removal of by-products such as excess O2 during the day. Such bulk transport is especially important in aquatic media since diffusion rates there are typically some 10 000 times lower than in air […] It has been established that increasing current velocities will increase photosynthetic rates and, thus, productivity of macrophytes as long as they do not disrupt the thalli of macroalgae or the leaves of seagrasses”.
“Photosynthesis is the process by which the energy of light is used in order to form energy-rich organic compounds from low-energy inorganic compounds. In doing so, electrons from water (H2O) reduce carbon dioxide (CO2) to carbohydrates. […] The process of photosynthesis can conveniently be separated into two parts: the ‘photo’ part in which light energy is converted into chemical energy bound in the molecule ATP and reducing power is formed as NADPH [another friend with a long name], and the ‘synthesis’ part in which that ATP and NADPH are used in order to reduce CO2 to sugars […]. The ‘photo’ part of photosynthesis is, for obvious reasons, also called its light reactions while the ‘synthesis’ part can be termed CO2-fixation and -reduction, or the Calvin cycle after one of its discoverers; this part also used to be called the ‘dark reactions’ [or light-independent reactions] of photosynthesis because it can proceed in vitro (= outside the living cell, e.g. in a test-tube) in darkness provided that ATP and NADPH are added artificially. […] ATP and NADPH are the energy source and reducing power, respectively, formed by the light reactions, that are subsequently used in order to reduce carbon dioxide (CO2) to sugars (synonymous with carbohydrates) in the Calvin cycle. Molecular oxygen (O2) is formed as a by-product of photosynthesis.”
“In photosynthetic bacteria (such as the cyanobacteria), the light reactions are located at the plasma membrane and internal membranes derived as invaginations of the plasma membrane. […] most of the CO2-fixing enzyme ribulose-bisphosphate carboxylase/oxygenase […] is here located in structures termed carboxysomes. […] In all other plants (including algae), however, the entire process of photosynthesis takes place within intracellular compartments called chloroplasts which, as the name suggests, are chlorophyll-containing plastids (plastids are those compartments in cells that are associated with photosynthesis).”
“Photosynthesis can be seen as a process in which part of the radiant energy from sunlight is ‘harvested’ by plants in order to supply chemical energy for growth. The first step in such light harvesting is the absorption of photons by photosynthetic pigments. The photosynthetic pigments are special in that they not only convert the energy of absorbed photons to heat (as do most other pigments), but largely convert photon energy into a flow of electrons; the latter is ultimately used to provide chemical energy to reduce CO2 to carbohydrates. […] Pigments are substances that can absorb different wavelengths selectively and so appear as the colour of those photons that are less well absorbed (and, therefore, are reflected, or transmitted, back to our eyes). (An object is black if all photons are absorbed, and white if none are absorbed.) In plants and animals, the pigment molecules within the cells and their organelles thus give them certain colours. The green colour of many plant parts is due to the selective absorption of chlorophylls […], while other substances give colour to, e.g. flowers or fruits. […] Chlorophyll is a major photosynthetic pigment, and chlorophyll a is present in all plants, including all algae and the cyanobacteria. […] The molecular sub-structure of the chlorophyll’s ‘head’ makes it absorb mainly blue and red light […], while green photons are hardly absorbed but, rather, reflected back to our eyes […] so that chlorophyll-containing plant parts look green. […] In addition to chlorophyll a, all plants contain carotenoids […] All these accessory pigments act to fill in the ‘green window’ generated by the chlorophylls’ non-absorbance in that band […] and, thus, broaden the spectrum of light that can be utilized […] beyond that absorbed by chlorophyll.”
“Photosynthesis is principally a redox process in which carbon dioxide (CO2) is reduced to carbohydrates (or, in a shorter word, sugars) by electrons derived from water. […] since water has an energy level (or redox potential) that is much lower than that of sugar, or, more precisely, than that of the compound that finally reduces CO2 to sugars (i.e. NADPH), it follows that energy must be expended in the process; this energy stems from the photons of light. […] Redox reactions are those reactions in which one compound, B, becomes reduced by receiving electrons from another compound, A, the latter then becomes oxidised by donating the electrons to B. The reduction of B can only occur if the electron-donating compound A has a higher energy level, or […] has a redox potential that is higher, or more negative in terms of electron volts, than that of compound B. The redox potential, or reduction potential, […] can thus be seen as a measure of the ease by which a compound can become reduced […] the greater the difference in redox potential between compounds B and A, the greater the tendency that B will be reduced by A. In photosynthesis, the redox potential of the compound that finally reduces CO2, i.e. NADPH, is more negative than that from which the electrons for this reduction stems, i.e. H2O, and the entire process can therefore not occur spontaneously. Instead, light energy is used in order to boost electrons from H2O through intermediary compounds to such high redox potentials that they can, eventually, be used for CO2 reduction. In essence, then, the light reactions of photosynthesis describe how photon energy is used to boost electrons from H2O to an energy level (or redox potential) high (or negative) enough to reduce CO2 to sugars.”
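Since the quoted passage is quantitative at heart, a quick back-of-the-envelope check may help. Using standard textbook midpoint potentials (my numbers, not the book’s), lifting one electron from water to NADPH spans roughly 1.14 eV, which can be compared with the energy carried by the red photons chlorophyll a absorbs:

```python
# Back-of-the-envelope sketch, using textbook midpoint potentials (not from the book):
# lifting one electron from the O2/H2O couple to the NADP+/NADPH couple spans
# ~1.14 eV, while a red photon at 680 nm carries hc/lambda ~ 1.82 eV.
H2O_POTENTIAL_V = 0.82      # O2/H2O couple, midpoint potential (V)
NADPH_POTENTIAL_V = -0.32   # NADP+/NADPH couple, midpoint potential (V)
EV_NM = 1239.84             # hc expressed in eV*nm

gap_ev = H2O_POTENTIAL_V - NADPH_POTENTIAL_V   # energy per electron, in eV
photon_ev = EV_NM / 680                        # red photon absorbed by chlorophyll a

print(f"redox gap per electron: {gap_ev:.2f} eV")
print(f"680 nm photon energy:   {photon_ev:.2f} eV")
```

In the actual light reactions each electron is boosted twice (once per photosystem), so there is energy to spare for the intermediate steps along the way.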
“Fluorescence in general is the generation of light (emission of photons) from the energy released during de-excitation of matter previously excited by electromagnetic energy. In photosynthesis, fluorescence occurs as electrons of chlorophyll undergo de-excitation, i.e. return to the original orbital from which they were knocked out by photons. […] there is an inverse (or negative) correlation between fluorescence yield (i.e. the amount of fluorescence generated per photons absorbed by chlorophyll) and photosynthetic yield (i.e. the amount of photosynthesis performed per photons similarly absorbed).”
“In some cases, more photon energy is received by a plant than can be used for photosynthesis, and this can lead to photo-inhibition or photo-damage […]. Therefore, many plants exposed to high irradiances possess ways of dissipating such excess light energy, the most well known of which is the xanthophyll cycle. In principle, energy is shuttled between various carotenoids collectively called xanthophylls and is, in the process, dissipated as heat.”
“In order to ‘fix’ CO2 (= incorporate it into organic matter within the cell) and reduce it to sugars, the NADPH and ATP formed in the light reactions are used in a series of chemical reactions that take place in the stroma of the chloroplasts (or, in prokaryotic autotrophs such as cyanobacteria, the cytoplasm of the cells); each reaction is catalysed by its specific enzyme, and the bottleneck for the production of carbohydrates is often considered to be the enzyme involved in its first step, i.e. the fixation of CO2 [this enzyme is RubisCO] […] These CO2-fixation and -reduction reactions are known as the Calvin cycle […] or the C3 cycle […] The latter name stems from the fact that the first stable product of CO2 fixation in the cycle is a 3-carbon compound called phosphoglyceric acid (PGA): Carbon dioxide in the stroma is fixed onto a 5-carbon sugar called ribulose-bisphosphate (RuBP) in order to form 2 molecules of PGA […] It should be noted that this reaction does not produce a reduced, energy-rich, carbon compound, but is only the first, ‘CO2– fixing’, step of the Calvin cycle. In subsequent steps, PGA is energized by the ATP formed through photophosphorylation and is reduced by NADPH […] to form a 3-carbon phosphorylated sugar […] here denoted simply as triose phosphate (TP); these reactions can be called the CO2-reduction step of the Calvin cycle […] 1/6 of the TPs formed leave the cycle while 5/6 are needed in order to re-form RuBP molecules in what we can call the regeneration part of the cycle […]; it is this recycling of most of the final product of the Calvin cycle (i.e. TP) to re-form RuBP that lends it to be called a biochemical ‘cycle’ rather than a pathway.”
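The 1/6-versus-5/6 bookkeeping in the passage above is easy to verify with a little carbon arithmetic over three turns of the cycle (a sketch; the stoichiometry follows the quoted description, the variable names are mine):

```python
# Carbon bookkeeping for three turns of the Calvin cycle, as described above:
# 3 CO2 are fixed onto 3 RuBP (C5) to give 6 PGA (C3), which are reduced to
# 6 TP (C3); 1 TP (1/6) is exported while the other 5 (5/6) regenerate RuBP.
CO2_FIXED = 3
RUBP_IN = 3            # C5 acceptor molecules consumed
PGA = 2 * RUBP_IN      # each carboxylation of one RuBP yields 2 PGA
TP = PGA               # each PGA is energized (ATP) and reduced (NADPH) to one TP
TP_EXPORTED = TP // 6  # the 1/6 that leaves the cycle as product
TP_RECYCLED = TP - TP_EXPORTED

carbons_in = CO2_FIXED * 1 + RUBP_IN * 5
carbons_out = TP_EXPORTED * 3 + TP_RECYCLED * 3
assert carbons_in == carbons_out == 18       # carbon is conserved
assert TP_RECYCLED * 3 == RUBP_IN * 5        # 5 TP (15 C) re-form 3 RuBP (15 C)
print(f"{TP} TP formed, {TP_EXPORTED} exported, {TP_RECYCLED} recycled to RuBP")
```

The last assertion is the reason it is a ‘cycle’ rather than a pathway: the 15 carbons of the five recycled triose phosphates exactly rebuild the three C5 acceptors.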
“Rubisco […] not only functions as a carboxylase, but […] also acts as an oxygenase […] When Rubisco reacts with oxygen instead of CO2, only 1 molecule of PGA is formed together with 1 molecule of the 2-carbon compound phosphoglycolate […] Not only is there no gain in organic carbon by this reaction, but CO2 is actually lost in the further metabolism of phosphoglycolate, which comprises a series of reactions termed photorespiration […] While photorespiration is a complex process […] it is also an apparently wasteful one […] and it is not known why this process has evolved in plants altogether. […] Photorespiration can reduce the net photosynthetic production by up to 25%.”
“Because of Rubisco’s low affinity to CO2 as compared with the low atmospheric, and even lower intracellular, CO2 concentration […], systems have evolved in some plants by which CO2 can be concentrated at the vicinity of this enzyme; these systems are accordingly termed CO2 concentrating mechanisms (CCM). For terrestrial plants, this need for concentrating CO2 is exacerbated in those that grow in hot and/or arid areas where water needs to be saved by partly or fully closing stomata during the day, thus restricting also the influx of CO2 from an already CO2-limiting atmosphere. Two such CCMs exist in terrestrial plants: the C4 cycle and the Crassulacean acid metabolism (CAM) pathway. […] The C 4 cycle is called so because the first stable product of CO2-fixation is not the 3-carbon compound PGA (as in the Calvin cycle) but, rather, malic acid (often referred to by its anion malate) or aspartic acid (or its anion aspartate), both of which are 4-carbon compounds. […] C4 [terrestrial] plants are […] more common in areas of high temperature, especially when accompanied with scarce rains, than in areas with higher rainfall […] While atmospheric CO2 is fixed […] via the C4 cycle, it should be noted that this biochemical cycle cannot reduce CO2 to high energy containing sugars […] since the Calvin cycle is the only biochemical system that can reduce CO2 to energy-rich carbohydrates in plants, it follows that the CO2 initially fixed by the C4 cycle […] is finally reduced via the Calvin cycle also in C4 plants. In summary, the C 4 cycle can be viewed as being an additional CO2 sequesterer, or a biochemical CO2 ‘pump’, that concentrates CO2 for the rather inefficient enzyme Rubisco in C4 plants that grow under conditions where the CO2 supply is extremely limited because partly closed stomata restrict its influx into the photosynthesising cells.”
“Crassulacean acid metabolism (CAM) is similar to the C 4 cycle in that atmospheric CO2 […] is initially fixed via PEP-case into the 4-carbon compound malate. However, this fixation is carried out during the night […] The ecological advantage behind CAM metabolism is that a CAM plant can grow, or at least survive, under prolonged (sometimes months) conditions of severe water stress. […] CAM plants are typical of the desert flora, and include most cacti. […] The principal difference between C 4 and CAM metabolism is that in C4 plants the initial fixation of atmospheric CO2 and its final fixation and reduction in the Calvin cycle is separated in space (between mesophyll and bundle-sheath cells) while in CAM plants the two processes are separated in time (between the initial fixation of CO2 during the night and its re-fixation and reduction during the day).”
i. Motte-and-bailey castle (‘good article’).
“A motte-and-bailey castle is a fortification with a wooden or stone keep situated on a raised earthwork called a motte, accompanied by an enclosed courtyard, or bailey, surrounded by a protective ditch and palisade. Relatively easy to build with unskilled, often forced labour, but still militarily formidable, these castles were built across northern Europe from the 10th century onwards, spreading from Normandy and Anjou in France, into the Holy Roman Empire in the 11th century. The Normans introduced the design into England and Wales following their invasion in 1066. Motte-and-bailey castles were adopted in Scotland, Ireland, the Low Countries and Denmark in the 12th and 13th centuries. By the end of the 13th century, the design was largely superseded by alternative forms of fortification, but the earthworks remain a prominent feature in many countries. […]
Various methods were used to build mottes. Where a natural hill could be used, scarping could produce a motte without the need to create an artificial mound, but more commonly much of the motte would have to be constructed by hand. Four methods existed for building a mound and a tower: the mound could either be built first, and a tower placed on top of it; the tower could alternatively be built on the original ground surface and then buried within the mound; the tower could potentially be built on the original ground surface and then partially buried within the mound, the buried part forming a cellar beneath; or the tower could be built first, and the mound added later.
Regardless of the sequencing, artificial mottes had to be built by piling up earth; this work was undertaken by hand, using wooden shovels and hand-barrows, possibly with picks as well in the later periods. Larger mottes took disproportionately more effort to build than their smaller equivalents, because of the volumes of earth involved. The largest mottes in England, such as Thetford, are estimated to have required up to 24,000 man-days of work; smaller ones required perhaps as little as 1,000. […] Taking into account estimates of the likely available manpower during the period, historians estimate that the larger mottes might have taken between four and nine months to build. This contrasted favourably with stone keeps of the period, which typically took up to ten years to build. Very little skilled labour was required to build motte and bailey castles, which made them very attractive propositions if forced peasant labour was available, as was the case after the Norman invasion of England. […]
The type of soil would make a difference to the design of the motte, as clay soils could support a steeper motte, whilst sandier soils meant that a motte would need a more gentle incline. Where available, layers of different sorts of earth, such as clay, gravel and chalk, would be used alternatively to build in strength to the design. Layers of turf could also be added to stabilise the motte as it was built up, or a core of stones placed as the heart of the structure to provide strength. Similar issues applied to the defensive ditches, where designers found that the wider the ditch was dug, the deeper and steeper the sides of the scarp could be, making it more defensive. […]
Although motte-and-bailey castles are the best known castle design, they were not always the most numerous in any given area. A popular alternative was the ringwork castle, involving a palisade being built on top of a raised earth rampart, protected by a ditch. The choice of motte and bailey or ringwork was partially driven by terrain, as mottes were typically built on low ground, and on deeper clay and alluvial soils. Another factor may have been speed, as ringworks were faster to build than mottes. Some ringwork castles were later converted into motte-and-bailey designs, by filling in the centre of the ringwork to produce a flat-topped motte. […]
In England, William invaded from Normandy in 1066, resulting in three phases of castle building in England, around 80% of which were in the motte-and-bailey pattern. […] around 741 motte-and-bailey castles [were built] in England and Wales alone. […] Many motte-and-bailey castles were occupied relatively briefly and in England many were being abandoned by the 12th century, and others neglected and allowed to lapse into disrepair. In the Low Countries and Germany, a similar transition occurred in the 13th and 14th centuries. […] One factor was the introduction of stone into castle building. The earliest stone castles had emerged in the 10th century […] Although wood was a more powerful defensive material than was once thought, stone became increasingly popular for military and symbolic reasons.”
ii. Battle of Midway (featured). Lots of good stuff in there. One aspect I had not been aware of beforehand was that Allied codebreakers (I was quite familiar with the work of Turing and others at Bletchley Park) also played a key role here:
“Admiral Nimitz had one priceless advantage: cryptanalysts had partially broken the Japanese Navy’s JN-25b code. Since the early spring of 1942, the US had been decoding messages stating that there would soon be an operation at objective “AF”. It was not known where “AF” was, but Commander Joseph J. Rochefort and his team at Station HYPO were able to confirm that it was Midway; Captain Wilfred Holmes devised a ruse of telling the base at Midway (by secure undersea cable) to broadcast an uncoded radio message stating that Midway’s water purification system had broken down. Within 24 hours, the code breakers picked up a Japanese message that “AF was short on water.” HYPO was also able to determine the date of the attack as either 4 or 5 June, and to provide Nimitz with a complete IJN order of battle. Japan had a new codebook, but its introduction had been delayed, enabling HYPO to read messages for several crucial days; the new code, which had not yet been cracked, came into use shortly before the attack began, but the important breaks had already been made.[nb 8]
As a result, the Americans entered the battle with a very good picture of where, when, and in what strength the Japanese would appear. Nimitz knew that the Japanese had negated their numerical advantage by dividing their ships into four separate task groups, all too widely separated to be able to support each other.[nb 9] […] The Japanese, by contrast, remained almost totally unaware of their opponent’s true strength and dispositions even after the battle began. […] Four Japanese aircraft carriers — Akagi, Kaga, Soryu and Hiryu, all part of the six-carrier force that had attacked Pearl Harbor six months earlier — and a heavy cruiser were sunk at a cost of the carrier Yorktown and a destroyer. After Midway and the exhausting attrition of the Solomon Islands campaign, Japan’s capacity to replace its losses in materiel (particularly aircraft carriers) and men (especially well-trained pilots) rapidly became insufficient to cope with mounting casualties, while the United States’ massive industrial capabilities made American losses far easier to bear. […] The Battle of Midway has often been called “the turning point of the Pacific”. However, the Japanese continued to try to secure more strategic territory in the South Pacific, and the U.S. did not move from a state of naval parity to one of increasing supremacy until after several more months of hard combat. Thus, although Midway was the Allies’ first major victory against the Japanese, it did not radically change the course of the war. Rather, it was the cumulative effects of the battles of Coral Sea and Midway that reduced Japan’s ability to undertake major offensives.”
One thing which really strikes you (well, struck me) when reading this stuff is how incredibly capital-intensive the war at sea really was; this was one of the most important sea battles of the Second World War, yet the total Japanese death toll at Midway was just 3,057. To put that number into perspective, it is significantly smaller than the average number of people killed each day in Stalingrad (according to one estimate, the Soviets alone suffered 478,741 killed or missing during those roughly 5 months (~150 days), which comes out at roughly 3000/day).
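The per-day comparison is simple enough to spell out (figures as quoted above):

```python
# The Midway/Stalingrad comparison made above, spelled out.
soviet_killed_missing = 478_741   # one estimate, Soviet side alone, ~5 months
days = 150                        # roughly 5 months
midway_japanese_dead = 3_057      # total Japanese death toll at Midway

per_day = soviet_killed_missing / days
print(f"Stalingrad, Soviet losses: ~{per_day:.0f} killed/missing per day")
print(f"Midway, Japanese deaths in total: {midway_japanese_dead}")
```

So one of the decisive naval battles of the war cost fewer lives, on the losing side, than a single average day cost just one side at Stalingrad.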
iii. History of time-keeping devices (featured). ‘Exactly what it says on the tin’, as they’d say on TV Tropes.
It took a long time to get from where we were to where we are today; the horologists of the past faced a lot of problems you’ve most likely never even thought about. What do you do, for example, if your ingenious water clock has trouble keeping time because variation in water temperature causes issues? Well, you use mercury instead of water, of course! (“Since Yi Xing’s clock was a water clock, it was affected by temperature variations. That problem was solved in 976 by Zhang Sixun by replacing the water with mercury, which remains liquid down to −39 °C (−38 °F).”).
iv. Microbial metabolism.
“Microbial metabolism is the means by which a microbe obtains the energy and nutrients (e.g. carbon) it needs to live and reproduce. Microbes use many different types of metabolic strategies and species can often be differentiated from each other based on metabolic characteristics. The specific metabolic properties of a microbe are the major factors in determining that microbe’s ecological niche, and often allow for that microbe to be useful in industrial processes or responsible for biogeochemical cycles. […]
All microbial metabolisms can be arranged according to three principles:
1. How the organism obtains carbon for synthesising cell mass:
- autotrophic – carbon is obtained from carbon dioxide (CO2)
- heterotrophic – carbon is obtained from organic compounds
- mixotrophic – carbon is obtained from both organic compounds and by fixing carbon dioxide
2. How the organism obtains reducing equivalents used either in energy conservation or in biosynthetic reactions:
- lithotrophic – reducing equivalents are obtained from inorganic compounds
- organotrophic – reducing equivalents are obtained from organic compounds
3. How the organism obtains energy for living and growing:
- chemotrophic – energy is obtained from external chemical compounds
- phototrophic – energy is obtained from light
In practice, these terms are almost freely combined. […] Most microbes are heterotrophic (more precisely chemoorganoheterotrophic), using organic compounds as both carbon and energy sources. […] Heterotrophic microbes are extremely abundant in nature and are responsible for the breakdown of large organic polymers such as cellulose, chitin or lignin which are generally indigestible to larger animals. Generally, the breakdown of large polymers to carbon dioxide (mineralization) requires several different organisms, with one breaking down the polymer into its constituent monomers, one able to use the monomers and excreting simpler waste compounds as by-products, and one able to use the excreted wastes. There are many variations on this theme, as different organisms are able to degrade different polymers and secrete different waste products. […]
Biochemically, prokaryotic heterotrophic metabolism is much more versatile than that of eukaryotic organisms, although many prokaryotes share the most basic metabolic models with eukaryotes, e. g. using glycolysis (also called EMP pathway) for sugar metabolism and the citric acid cycle to degrade acetate, producing energy in the form of ATP and reducing power in the form of NADH or quinols. These basic pathways are well conserved because they are also involved in biosynthesis of many conserved building blocks needed for cell growth (sometimes in reverse direction). However, many bacteria and archaea utilize alternative metabolic pathways other than glycolysis and the citric acid cycle. […] The metabolic diversity and ability of prokaryotes to use a large variety of organic compounds arises from the much deeper evolutionary history and diversity of prokaryotes, as compared to eukaryotes. […]
Many microbes (phototrophs) are capable of using light as a source of energy to produce ATP and organic compounds such as carbohydrates, lipids, and proteins. Of these, algae are particularly significant because they are oxygenic, using water as an electron donor for electron transfer during photosynthesis. Phototrophic bacteria are found in the phyla Cyanobacteria, Chlorobi, Proteobacteria, Chloroflexi, and Firmicutes. Along with plants these microbes are responsible for all biological generation of oxygen gas on Earth. […] As befits the large diversity of photosynthetic bacteria, there are many different mechanisms by which light is converted into energy for metabolism. All photosynthetic organisms locate their photosynthetic reaction centers within a membrane, which may be invaginations of the cytoplasmic membrane (Proteobacteria), thylakoid membranes (Cyanobacteria), specialized antenna structures called chlorosomes (Green sulfur and non-sulfur bacteria), or the cytoplasmic membrane itself (heliobacteria). Different photosynthetic bacteria also contain different photosynthetic pigments, such as chlorophylls and carotenoids, allowing them to take advantage of different portions of the electromagnetic spectrum and thereby inhabit different niches. Some groups of organisms contain more specialized light-harvesting structures (e.g. phycobilisomes in Cyanobacteria and chlorosomes in Green sulfur and non-sulfur bacteria), allowing for increased efficiency in light utilization. […]
Most photosynthetic microbes are autotrophic, fixing carbon dioxide via the Calvin cycle. Some photosynthetic bacteria (e.g. Chloroflexus) are photoheterotrophs, meaning that they use organic carbon compounds as a carbon source for growth. Some photosynthetic organisms also fix nitrogen […] Nitrogen is an element required for growth by all biological systems. While extremely common (80% by volume) in the atmosphere, dinitrogen gas (N2) is generally biologically inaccessible due to its high activation energy. Throughout all of nature, only specialized bacteria and Archaea are capable of nitrogen fixation, converting dinitrogen gas into ammonia (NH3), which is easily assimilated by all organisms. These prokaryotes, therefore, are very important ecologically and are often essential for the survival of entire ecosystems. This is especially true in the ocean, where nitrogen-fixing cyanobacteria are often the only sources of fixed nitrogen, and in soils, where specialized symbioses exist between legumes and their nitrogen-fixing partners to provide the nitrogen needed by these plants for growth.
Nitrogen fixation can be found distributed throughout nearly all bacterial lineages and physiological classes but is not a universal property. Because the enzyme nitrogenase, responsible for nitrogen fixation, is very sensitive to oxygen which will inhibit it irreversibly, all nitrogen-fixing organisms must possess some mechanism to keep the concentration of oxygen low. […] The production and activity of nitrogenases is very highly regulated, both because nitrogen fixation is an extremely energetically expensive process (16–24 ATP are used per N2 fixed) and due to the extreme sensitivity of the nitrogenase to oxygen.” (A lot of the stuff above was of course for me either review or closely related to stuff I’ve already read in the coverage provided in Beer et al., a book I’ve talked about before here on the blog).
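The three-axis classification quoted above composes almost mechanically into the long compound terms microbiologists use. A toy sketch of that composition (the function and key names are mine, purely illustrative):

```python
# Illustrative sketch: the three classification axes quoted above combine
# into one compound term, e.g. 'chemo' + 'organo' + 'hetero' + 'troph'.
def metabolism_term(energy, electrons, carbon):
    energy_prefix = {"chemical": "chemo", "light": "photo"}[energy]
    electron_prefix = {"inorganic": "litho", "organic": "organo"}[electrons]
    carbon_prefix = {"CO2": "auto", "organic": "hetero", "both": "mixo"}[carbon]
    return energy_prefix + electron_prefix + carbon_prefix + "troph"

# Most microbes, per the article:
print(metabolism_term("chemical", "organic", "organic"))
# Oxygenic photosynthesizers such as cyanobacteria:
print(metabolism_term("light", "inorganic", "CO2"))
```

The first call yields ‘chemoorganoheterotroph’, the article’s ‘more precise’ label for most microbes; the second yields ‘photolithoautotroph’.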
v. Uranium (featured). It’s hard to know what to include here as the article has a lot of stuff, but I found this part in particular, well, interesting:
“During the Cold War between the Soviet Union and the United States, huge stockpiles of uranium were amassed and tens of thousands of nuclear weapons were created using enriched uranium and plutonium made from uranium. Since the break-up of the Soviet Union in 1991, an estimated 600 short tons (540 metric tons) of highly enriched weapons grade uranium (enough to make 40,000 nuclear warheads) have been stored in often inadequately guarded facilities in the Russian Federation and several other former Soviet states. Police in Asia, Europe, and South America on at least 16 occasions from 1993 to 2005 have intercepted shipments of smuggled bomb-grade uranium or plutonium, most of which was from ex-Soviet sources. From 1993 to 2005 the Material Protection, Control, and Accounting Program, operated by the federal government of the United States, spent approximately US $550 million to help safeguard uranium and plutonium stockpiles in Russia. This money was used for improvements and security enhancements at research and storage facilities. Scientific American reported in February 2006 that in some of the facilities security consisted of chain link fences which were in severe states of disrepair. According to an interview from the article, one facility had been storing samples of enriched (weapons grade) uranium in a broom closet before the improvement project; another had been keeping track of its stock of nuclear warheads using index cards kept in a shoe box.”
Some other observations from the article below:
“Uranium is a naturally occurring element that can be found in low levels within all rock, soil, and water. Uranium is the 51st element in order of abundance in the Earth’s crust. Uranium is also the highest-numbered element to be found naturally in significant quantities on Earth and is almost always found combined with other elements. Along with all elements having atomic weights higher than that of iron, it is only naturally formed in supernovae. The decay of uranium, thorium, and potassium-40 in the Earth’s mantle is thought to be the main source of heat that keeps the outer core liquid and drives mantle convection, which in turn drives plate tectonics. […]
Natural uranium consists of three major isotopes: uranium-238 (99.28% natural abundance), uranium-235 (0.71%), and uranium-234 (0.0054%). […] Uranium-238 is the most stable isotope of uranium, with a half-life of about 4.468×109 years, roughly the age of the Earth. Uranium-235 has a half-life of about 7.13×108 years, and uranium-234 has a half-life of about 2.48×105 years. For natural uranium, about 49% of its alpha rays are emitted by each of 238U atom, and also 49% by 234U (since the latter is formed from the former) and about 2.0% of them by the 235U. When the Earth was young, probably about one-fifth of its uranium was uranium-235, but the percentage of 234U was probably much lower than this. […]
Worldwide production of U3O8 (yellowcake) in 2013 amounted to 70,015 tonnes, of which 22,451 t (32%) was mined in Kazakhstan. Other important uranium mining countries are Canada (9,331 t), Australia (6,350 t), Niger (4,518 t), Namibia (4,323 t) and Russia (3,135 t). […] Australia has 31% of the world’s known uranium ore reserves and the world’s largest single uranium deposit, located at the Olympic Dam Mine in South Australia. There is a significant reserve of uranium in Bakouma a sub-prefecture in the prefecture of Mbomou in Central African Republic. […] Uranium deposits seem to be log-normal distributed. There is a 300-fold increase in the amount of uranium recoverable for each tenfold decrease in ore grade. In other words, there is little high grade ore and proportionately much more low grade ore available.”
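The quoted activity shares (~49%/49%/2%) can in fact be recovered from the abundances and half-lives given in the article, since an isotope’s activity is proportional to N·ln 2/t½ (a sketch; variable names are mine):

```python
import math

# Recovering the quoted alpha-activity shares of natural uranium from the
# abundances and half-lives given above: activity = N * ln(2) / t_half.
isotopes = {          # name: (natural abundance, half-life in years)
    "U-238": (0.9928, 4.468e9),
    "U-235": (0.0071, 7.13e8),
    "U-234": (0.000054, 2.48e5),
}
activity = {name: n * math.log(2) / t for name, (n, t) in isotopes.items()}
total = sum(activity.values())
for name, a in activity.items():
    print(f"{name}: {100 * a / total:.1f}% of natural uranium's activity")
```

Despite its tiny abundance, 234U contributes nearly half the activity because its half-life is some 18,000 times shorter than that of 238U.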
vi. Radiocarbon dating (featured).
Radiocarbon dating (also referred to as carbon dating or carbon-14 dating) is a method of determining the age of an object containing organic material by using the properties of radiocarbon (14C), a radioactive isotope of carbon. The method was invented by Willard Libby in the late 1940s and soon became a standard tool for archaeologists. Libby received the Nobel Prize for his work in 1960. The radiocarbon dating method is based on the fact that radiocarbon is constantly being created in the atmosphere by the interaction of cosmic rays with atmospheric nitrogen. The resulting radiocarbon combines with atmospheric oxygen to form radioactive carbon dioxide, which is incorporated into plants by photosynthesis; animals then acquire 14C by eating the plants. When the animal or plant dies, it stops exchanging carbon with its environment, and from that point onwards the amount of 14C it contains begins to reduce as the 14C undergoes radioactive decay. Measuring the amount of 14C in a sample from a dead plant or animal such as a piece of wood or a fragment of bone provides information that can be used to calculate when the animal or plant died. The older a sample is, the less 14C there is to be detected, and because the half-life of 14C (the period of time after which half of a given sample will have decayed) is about 5,730 years, the oldest dates that can be reliably measured by radiocarbon dating are around 50,000 years ago, although special preparation methods occasionally permit dating of older samples.
The idea behind radiocarbon dating is straightforward, but years of work were required to develop the technique to the point where accurate dates could be obtained. […]
The development of radiocarbon dating has had a profound impact on archaeology. In addition to permitting more accurate dating within archaeological sites than did previous methods, it allows comparison of dates of events across great distances. Histories of archaeology often refer to its impact as the “radiocarbon revolution”.”
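As an aside, the decay arithmetic behind the method is simple enough to sketch in a few lines of Python. The helper function below is my own illustrative construction using the 5,730-year half-life quoted above; real radiocarbon dating additionally requires the calibration corrections the article describes:

```python
from math import log

T_HALF = 5730.0  # years, half-life of 14C (as quoted above)

def radiocarbon_age(fraction_remaining):
    """Years since death, given the fraction of the original 14C left.

    N(t) = N0 * 2**(-t / T_HALF), so t = T_HALF * log2(N0 / N).
    """
    return T_HALF / log(2) * log(1.0 / fraction_remaining)

print(round(radiocarbon_age(0.5)))    # 5730 -- one half-life
print(round(radiocarbon_age(0.25)))   # 11460 -- two half-lives
print(round(radiocarbon_age(0.002)))  # ~51,000 -- near the practical limit
```

The last line also shows why ~50,000 years is roughly the ceiling: by then only a fraction of a percent of the original 14C remains, which is hard to measure against background.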
I’ve read about these topics before in a textbook setting (e.g. here), but/and I should note that the article provides quite detailed coverage and I think most people will encounter some new information by having a look at it even if they’re superficially familiar with this topic. The article has a lot of stuff about e.g. ‘what you need to correct for’, which some of you might find interesting.
vii. Raccoon (featured). One interesting observation from the article:
“One aspect of raccoon behavior is so well known that it gives the animal part of its scientific name, Procyon lotor; “lotor” is neo-Latin for “washer”. In the wild, raccoons often dabble for underwater food near the shore-line. They then often pick up the food item with their front paws to examine it and rub the item, sometimes to remove unwanted parts. This gives the appearance of the raccoon “washing” the food. The tactile sensitivity of raccoons’ paws is increased if this rubbing action is performed underwater, since the water softens the hard layer covering the paws. However, the behavior observed in captive raccoons in which they carry their food to water to “wash” or douse it before eating has not been observed in the wild. Naturalist Georges-Louis Leclerc, Comte de Buffon, believed that raccoons do not have adequate saliva production to moisten food thereby necessitating dousing, but this hypothesis is now considered to be incorrect. Captive raccoons douse their food more frequently when a watering hole with a layout similar to a stream is not farther away than 3 m (10 ft). The widely accepted theory is that dousing in captive raccoons is a fixed action pattern from the dabbling behavior performed when foraging at shores for aquatic foods. This is supported by the observation that aquatic foods are doused more frequently. Cleaning dirty food does not seem to be a reason for “washing”. Experts have cast doubt on the veracity of observations of wild raccoons dousing food.”
And here’s another interesting set of observations:
“In Germany—where the raccoon is called the Waschbär (literally, “wash-bear” or “washing bear”) due to its habit of “dousing” food in water—two pairs of pet raccoons were released into the German countryside at the Edersee reservoir in the north of Hesse in April 1934 by a forester upon request of their owner, a poultry farmer. He released them two weeks before receiving permission from the Prussian hunting office to “enrich the fauna.” Several prior attempts to introduce raccoons in Germany were not successful. A second population was established in eastern Germany in 1945 when 25 raccoons escaped from a fur farm at Wolfshagen, east of Berlin, after an air strike. The two populations are parasitologically distinguishable: 70% of the raccoons of the Hessian population are infected with the roundworm Baylisascaris procyonis, but none of the Brandenburgian population has the parasite. The estimated number of raccoons was 285 animals in the Hessian region in 1956, over 20,000 animals in the Hessian region in 1970 and between 200,000 and 400,000 animals in the whole of Germany in 2008. By 2012 it was estimated that Germany now had more than a million raccoons.”
I’m currently reading this book. Below some observations from part 1.
“The term autotroph is usually associated with the photosynthesising plants (including algae and cyanobacteria) and heterotroph with animals and some other groups of organisms that need to be provided high-energy containing organic foods (e.g. the fungi and many bacteria). However, many exceptions exist: Some plants are parasitic and may be devoid of chlorophyll and, thus, lack photosynthesis altogether, and some animals contain chloroplasts or photosynthesising algae or cyanobacteria and may function, in part, autotrophically; some corals rely on the photosynthetic algae within their bodies to the extent that they don’t have to eat at all […] If some plants are heterotrophic and some animals autotrophic, what then differentiates plants from animals? It is usually said that what differs the two groups is the absence (animals) or presence (plants) of a cell wall. The cell wall is deposited outside the cell membrane in plants, and forms a type of exo-skeleton made of polysaccharides (e.g. cellulose or agar in some red algae, or silica in the case of diatoms) that renders rigidity to plant cells and to the whole plant.”
“For the autotrophs, […] there was an advantage if they could live close to the shores where inorganic nutrient concentrations were higher (because of mineral-rich runoffs from land) than in the upper water layer of off-shore locations. However, living closer to shore also meant greater effects of wave action, which would alter, e.g. the light availability […]. Under such conditions, there would be an advantage to be able to stay put in the seawater, and under those conditions it is thought that filamentous photosynthetic organisms were formed from autotrophic cells (ca. 650 million years ago), which eventually resulted in macroalgae (some 450 million years ago) featuring holdfast tissues that could adhere them to rocky substrates. […] Very briefly now, the green macroalgae were the ancestors of terrestrial plants, which started to invade land ca. 400 million years ago (followed by the animals).”
“Marine ‘plants’ (= all photoautotrophic organisms of the seas) can be divided into phytoplankton (‘drifters’, mostly unicellular) and phytobenthos (connected to the bottom, mostly multicellular/macroscopic).
The phytoplankton can be divided into cyanobacteria (prokaryotic) and microalgae (eukaryotic) […]. The phytobenthos can be divided into macroalgae and seagrasses (marine angiosperms, which invaded the shallow seas some 90 million years ago). The micro- and macro-algae are divided into larger groups as based largely on their pigment composition [e.g. ‘red algae‘, ‘brown algae‘, …]
There are some 150 currently recognised species of marine cyanobacteria, ∼20 000 species of eukaryotic microalgae, several thousand species of macroalgae and 50(!) species of seagrasses. Altogether these marine plants are accountable for approximately half of Earth’s photosynthetic (or primary) production.
The abiotic factors that are conducive to photosynthesis and plant growth in the marine environment differ from those of terrestrial environments mainly with regard to light and inorganic carbon (Ci) sources. Light is strongly attenuated in the marine environment by absorption and scatter […] While terrestrial plants rely on atmospheric CO2 for their photosynthesis, marine plants utilise largely the >100 times higher concentration of HCO3− as the main Ci source for their photosynthetic needs. Nutrients other than CO2 that may limit plant growth in the marine environment include nitrogen (N), phosphorus (P), iron (Fe) and, for the diatoms, silica (Si).”
“The conversion of the plentiful atmospheric N2 gas (∼78% in air) into bio-available N-rich cellular constituents is a fundamental process that sustains life on Earth. For unknown reasons this process is restricted to selected representatives among the prokaryotes: archaea and bacteria. N2 fixing organisms, also termed diazotrophs (dia = two; azo = nitrogen), are globally wide-spread in terrestrial and aquatic environments, from polar regions to hot deserts, although their abundance varies widely. [Why is nitrogen important, I hear you ask? Well, when you hear the word ‘nitrogen’ in biology texts, think ‘protein’ – “Because nitrogen is relatively easy to measure and protein is not, protein content is often estimated by assaying organic nitrogen, which comprises from 15 to 18% of plant proteins” (Herrera et al. – see this post).] […] Cyanobacteria dominate marine diazotrophs and occupy large segments of marine open waters […] sustained N2 fixation […] is a highly energy-demanding process. […] in all diazotrophs, the nitrogenase enzyme complex […] of marine cyanobacteria requires high Fe levels […] Another key nutrient is phosphorus […] which has a great impact on growth and N2 fixation in marine cyanobacteria. […] Recent model-based estimates of N2 fixation suggest that unicellular cyanobacteria contribute significantly to global ocean N budgets.”
“For convenience, we often divide the phytoplankton into different size classes, the pico-phytoplankton (0.2–2 μm effective cell diameter, ECD); the nanophytoplankton (2–20 μm ECD) and the microphytoplankton (20–200 μm ECD). […] most of the major marine microalgal groups are found in all three size classes […] a 2010 paper estimates that these plants utilise 46 Gt carbon yearly, which can be divided into 15 Gt for the microphytoplankton, 20 Gt for the nanophytoplankton and 11 Gt for the picophytoplankton. Thus, the very small (nano- + pico-forms) of phytoplankton (including cyanobacterial forms) contribute 2/3 of the overall planktonic production (which, again, constitutes about half of the global production).”
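A trivial check of the quoted split, using only the numbers given in the quote:

```python
# Annual carbon fixation (Gt/year) by phytoplankton size class,
# per the 2010 estimate quoted above:
micro, nano, pico = 15, 20, 11

total = micro + nano + pico
small_forms = nano + pico
print(total)                         # 46 Gt per year, as quoted
print(f"{small_forms / total:.0%}")  # 67% -- the "2/3" from the small forms
```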
“Many primarily non-photosynthetic organisms have developed symbioses with microalgae and cyanobacteria; these photosynthetic intruders are here referred to as photosymbionts. […] Most photosymbionts are endosymbiotic (living within the host) […] In almost all cases, these micro-algae are in symbiosis with invertebrates. Here the alga provides the animal with organic products of photosynthesis, while the invertebrate host can supply CO2 and other inorganic nutrients including nitrogen and phosphorus to the alga […]. In cases where cyanobacteria form the photosymbiont, their ‘caloric’ nutritional value is more questionable, and they may instead produce toxins that deter other animals from eating the host […] Many reef-building […] corals contain symbiotic zooxanthellae within the digestive cavity of their polyps, and in general corals that have symbiotic algae grow much faster than those without them. […] The loss of zooxanthellae from the host is known as coral bleaching […] Certain sea slugs contain functional chloroplasts that were ingested (but not digested) as part of larger algae […]. After digesting the rest of the alga, these chloroplasts are imbedded within the slugs’ digestive tract in a process called kleptoplasty (the ‘stealing’ of plastids). Even though this is not a true symbiosis (the chloroplasts are not organisms and do not gain anything from the association), the photosynthetic activity aids in the nutrition of the slugs for up to several months, thus either complementing their nutrition or carrying them through periods when food is scarce or absent.”
“90–100 million years ago, when there was a rise in seawater levels, some of the grasses that grew close to the seashores found themselves submerged in seawater. One piece of evidence that supports [the] terrestrial origin [of marine angiosperms] can be seen in the fact that residues of stomata can be found at the base of the leaves. In terrestrial plants, the stomata restrict water loss from the leaves, but since seagrasses are principally submerged in a liquid medium, the stomata became absent in the bulk parts of the leaves. These marine angiosperms, or seagrasses, thus evolved from those coastal grasses that successfully managed to adapt to being submerged in saline waters. Another theory has it that the ancestors of seagrasses were freshwater plants that, therefore, only had to adapt to water of a higher salinity. In both cases, the seagrasses exemplify a successful readaptation to marine life […] While there may exist some 20 000 or more species of macroalgae […], there are only some 50 species of seagrasses, most of which are found in tropical seas. […] the ability to extract nutrients from the sediment renders the seagrasses at an advantage over (the root-less) macroalgae in nutrient-poor waters. […] one of the basic differences in habitat utilisation between macroalgae and seagrasses is that the former usually grow on rocky substrates where they are held in place by their holdfasts, while seagrasses inhabit softer sediments where they are held in place by their root systems. Unlike macroalgae, where the whole plant surface is photosynthetically active, large proportions of seagrass plants are comprised of the non-photosynthetic roots and rhizomes. […] This means […] that seagrasses need more light in order to survive than do many algae […] marine plants usually contain less structural tissues than their terrestrial counterparts”.
“if we define ‘visible light’ as the electromagnetic wave upon which those energy-containing particles called quanta ‘ride’ that cause vision in higher animals (those quanta are also called photons) and compare it with light that causes photosynthesis, we find, interestingly, that the two processes use approximately the same wavelengths: While mammals largely use the 380–750 nm (nm = 10⁻⁹ m) wavelength band for vision, plants use the 400–700-nm band for photosynthesis; the latter is therefore also termed photosynthetically active radiation (PAR) […] If a student asks “but how come that animals and plants use almost identical wavelengths of radiation for so very different purposes?”, my answer is “sorry, but we don’t have the time to discuss that now”, meaning that while I think it has to do with too high and too low quantum energies below and above those wavelengths, I really don’t know.”
“energy (E) of a photon is inversely proportional to its wavelength […] a blue photon of 400 nm wavelength contains almost double the energy of a red one of 700 nm, while the photons of PAR between those two extremes carry decreasing energies as wavelengths increase. Accordingly, low-energy photons (i.e. of high wavelengths, e.g. those of reddish light) are absorbed to a greater extent by water molecules along a depth gradient than are photons of higher energy (i.e. lower wavelengths, e.g. bluish light), and so the latter penetrate deeper down in clear oceanic waters […] In water, the spectral distribution of PAR reaching a plant is different from that on land. This is because water not only attenuates the light intensity (or, more correctly, the photon flux, or irradiance […]), but, as mentioned above and detailed below, the attenuation with depth is wavelength dependent; therefore, plants living in the oceans will receive different spectra of light dependent on depth […] The two main characteristics of seawater that determine the quantity and quality of the irradiance penetrating to a certain depth are absorption and scatter. […] Light absorption in the oceans is a property of the water molecules, which absorb photons according to their energy […] Thus, red photons of low energy are more readily absorbed than, e.g. blue ones; only <1% of the incident red photons (calculated for 650 nm) penetrate to 20 m depth in clear waters while some 60% of the blue photons (450 nm) remain at that depth. […] Scatter […] is mainly caused by particles suspended in the water column (rather than by the water molecules themselves, although they too scatter light a little). Unlike absorption, scatter affects short-wavelength photons more than long-wavelength ones […] in turbid waters, photons of decreasing wavelengths are increasingly scattered. 
Since water molecules are naturally also present, they absorb the higher wavelengths, and the colours penetrating deepest in turbid waters are those between the highly scattered blue and highly absorbed red, e.g. green. The greenish colour of many coastal waters is therefore often due not only to the presence of chlorophyll-containing phytoplankton, but because, again, reddish photons are absorbed, bluish photons are scattered, and the midspectrum (i.e. green) fills the bulk part of the water column.”
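Two of the quantitative claims in the quote are easy to verify with a little Python: the blue/red photon energy ratio (since E = hc/λ, the ratio is just the inverse ratio of the wavelengths), and the attenuation coefficients implied by the quoted clear-water penetration figures, assuming simple exponential (Beer–Lambert) attenuation. A sketch, using only the quoted numbers:

```python
from math import exp, log

# A 400 nm photon vs a 700 nm photon: E = h*c/wavelength, so the
# energy ratio is simply the inverse wavelength ratio.
print(700 / 400)  # 1.75 -- "almost double the energy"

def attenuation_coeff(fraction_at_depth, depth_m):
    """Beer-Lambert: fraction = exp(-K*depth), so K = -ln(fraction)/depth."""
    return -log(fraction_at_depth) / depth_m

# Quoted clear-water figures at 20 m depth:
k_red = attenuation_coeff(0.01, 20)   # 650 nm: <1% remains -> K ~ 0.23 per m
k_blue = attenuation_coeff(0.60, 20)  # 450 nm: ~60% remains -> K ~ 0.026 per m

# Extrapolating with the same coefficient, the blue fraction left at 100 m:
print(f"{exp(-k_blue * 100):.0%}")  # ~8%
```

The roughly tenfold difference between the two coefficients is the whole story of why clear ocean water looks blue and why deep-growing marine plants live in an essentially blue-green light field.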
“the open ocean, several kilometres or miles from the shore, almost always appears as blue. The reason for this is that in unpolluted, particle-free, waters, the preferential absorption of long-wavelength (low-energy) photons is what mainly determines the spectral distribution of light attenuation. Thus, short-wavelength (high-energy) bluish photons penetrate deepest and ‘fill up’ the bulk of the water column with their colour. Since water molecules also scatter a small proportion of those photons […], it follows that these largely water-penetrating photons are eventually also reflected back to our eyes. Or, in other words, out of the very low scattering in clear oceanic waters, the photons available to be scattered and, thus, reflected to our eyes, are mainly the bluish ones, and that is why the clear deep oceans look blue. (It is often said that the oceans are blue because the blue sky is reflected by the water surface. However, sailors will testify to the truism that the oceans are also deep blue in heavily overcast weathers, and so that explanation of the general blueness of the oceans is not valid.)”
“Although marine plants can be found in a wide range of temperature regimes, from the tropics to polar regions, the large bodies of water that are the environment for most marine plants have relatively constant temperatures, at least on a day-to-day basis. […] For marine plants that are found in intertidal regions, however, temperature variation during a single day can be very high as the plants find themselves alternately exposed to air […] Marine plants from tropical and temperate regions tend to have distinct temperature ranges for growth […] and growth optima. […] among most temperate species of microalgae, temperature optima for growth are in the range 18–25 °C, while some Antarctic diatoms show optima at 4–6 °C with no growth above a critical temperature of 7–12 °C. By contrast, some tropical diatoms will not grow below 15–17 °C. Similar responses are found in macroalgae and seagrasses. However, although some marine plants have a restricted temperature range for growth (so-called stenothermal species; steno = narrow and thermal relates to temperature), most show some growth over a broad range of temperatures and can be considered eurythermal (eury = wide).”
Sorry for the infrequent updates. I realized blogging Wodehouse books takes more time than I’d imagined, so posting this sort of stuff is probably a better idea.
“On the first day of the evacuation, only 7,669 men were evacuated, but by the end of the eighth day, a total of 338,226 soldiers had been rescued by a hastily assembled fleet of over 800 boats. Many of the troops were able to embark from the harbour’s protective mole onto 39 British destroyers and other large ships, while others had to wade out from the beaches, waiting for hours in the shoulder-deep water. Some were ferried from the beaches to the larger ships by the famous little ships of Dunkirk, a flotilla of hundreds of merchant marine boats, fishing boats, pleasure craft, and lifeboats called into service for the emergency. The BEF lost 68,000 soldiers during the French campaign and had to abandon nearly all of their tanks, vehicles, and other equipment.”
One way to make sense of the scale of the operations here is to compare them with the naval activities on D-day four years later. The British evacuated more people from France during three consecutive days in 1940 (30th and 31st of May, and 1st of June) than the Allies (Americans and British combined) landed on D-day four years later, and the British evacuated roughly as many people on the 31st of May (68,014) as they landed by sea on D-day (75,215). Here’s a part of the story I did not know:
“Three British divisions and a host of logistic and labour troops were cut off to the south of the Somme by the German “race to the sea”. At the end of May, a further two divisions began moving to France with the hope of establishing a Second BEF. The majority of the 51st (Highland) Division was forced to surrender on 12 June, but almost 192,000 Allied personnel, 144,000 of them British, were evacuated through various French ports from 15–25 June under the codename Operation Ariel. […] More than 100,000 evacuated French troops were quickly and efficiently shuttled to camps in various parts of southwestern England, where they were temporarily lodged before being repatriated. British ships ferried French troops to Brest, Cherbourg, and other ports in Normandy and Brittany, although only about half of the repatriated troops were deployed against the Germans before the surrender of France. For many French soldiers, the Dunkirk evacuation represented only a few weeks’ delay before being killed or captured by the German army after their return to France.”
ii. A pretty awesome display by the current world chess champion:
If you feel the same way I do about Maurice Ashley, you’ll probably want to skip the first few minutes of this video. Don’t miss the games, though – this is great stuff. Do keep in mind when watching this video that the clock is a really important part of this event; other players in the past have played a lot more people at the same time while blindfolded than Carlsen does here – “Although not a full-time chess professional [Najdorf] was one of the world’s leading chess players in the 1950s and 1960s and he excelled in playing blindfold chess: he broke the world record twice, by playing blindfold 40 games in Rosario, 1943, and 45 in São Paulo, 1947, becoming the world blindfold chess champion” (link) – but a game clock changes things a lot. A few comments and discussion here.
In very slightly related news, I recently got in my first win against a grandmaster in a bullet game on the ICC.
iii. Gastric-brooding frog.
“The genus was unique because it contained the only two known frog species that incubated the prejuvenile stages of their offspring in the stomach of the mother. […] What makes these frogs unique among all frog species is their form of parental care. Following external fertilization by the male, the female would take the eggs or embryos into her mouth and swallow them. […] Eggs found in females measured up to 5.1 mm in diameter and had large yolk supplies. These large supplies are common among species that live entirely off yolk during their development. Most female frogs had around 40 ripe eggs, almost double that of the number of juveniles ever found in the stomach (21–26). This means one of two things, that the female fails to swallow all the eggs or the first few eggs to be swallowed are digested. […] During the period that the offspring were present in the stomach the frog would not eat. […] The birth process was widely spaced and may have occurred over a period of as long as a week. However, if disturbed the female may regurgitate all the young frogs in a single act of propulsive vomiting.”
Fascinating creatures… Unfortunately they’re no longer around (they’re classified as extinct).
iv. I’m sort of conflicted about what to think about this:
“Epidemiological studies show that patients with type-2-diabetes (T2DM) and individuals with a diabetes-independent elevation in blood glucose have an increased risk for developing dementia, specifically dementia due to Alzheimer’s disease (AD). These observations suggest that abnormal glucose metabolism likely plays a role in some aspects of AD pathogenesis, leading us to investigate the link between aberrant glucose metabolism, T2DM, and AD in murine models. […] Recent epidemiological studies demonstrate that individuals with type-2 diabetes (T2DM) are 2–4 times more likely to develop AD (3–5), individuals with elevated blood glucose levels are at an increased risk to develop dementia (5), and those with elevated blood glucose levels have a more rapid conversion from mild cognitive impairment (MCI) to AD (6), suggesting that disrupted glucose homeostasis could play a […] causal role in AD pathogenesis. Although several prominent features of T2DM, including increased insulin resistance and decreased insulin production, are at the forefront of AD research (7–10), questions regarding the effects of elevated blood glucose independent of insulin resistance on AD pathology remain largely unexplored. In order to investigate the potential role of glucose metabolism in AD, we combined glucose clamps and in vivo microdialysis as a method to measure changes in brain metabolites in awake, freely moving mice during a hyperglycemic challenge. Our findings suggest that acute hyperglycemia raises interstitial fluid (ISF) Aβ levels by altering neuronal activity, which increases Aβ production. […] Since extracellular Aβ, and subsequently tau, aggregate in a concentration-dependent manner during the preclinical period of AD while individuals are cognitively normal (27), our findings suggest that repeated episodes of transient hyperglycemia, such as those found in T2DM, could both initiate and accelerate plaque accumulation. 
Thus, the correlation between hyperglycemia and increased ISF Aβ provides one potential explanation for the increased risk of AD and dementia in T2DM patients or individuals with elevated blood glucose levels. In addition, our work suggests that KATP channels within the hippocampus act as metabolic sensors and couple alterations in glucose concentrations with changes in electrical activity and extracellular Aβ levels. Not only does this offer one mechanistic explanation for the epidemiological link between T2DM and AD, but it also provides a potential therapeutic target for AD. Given that FDA-approved drugs already exist for the modulation of KATP channels and previous work demonstrates the benefits of sulfonylureas for treating animal models of AD (26), the identification of these channels as a link between hyperglycemia and AD pathology creates an avenue for translational research in AD.”
Why am I conflicted? Well, on the one hand it’s nice to know that they’re making progress in terms of figuring out why people get Alzheimer’s and potential therapeutic targets are being identified. On the other hand this – “our findings suggest that repeated episodes of transient hyperglycemia […] could both initiate and accelerate plaque accumulation” – is bad news if you’re a type 1 diabetic (I’d much rather have them identify risk factors to which I’m not exposed).
v. I recently noticed that Khan Academy has put up some videos about diabetes. From the few I’ve had a look at, they don’t seem to contain much stuff I don’t already know, so I’m not sure I’ll explore this playlist in any more detail, but I figured I might as well share a few of the videos here; the first one is about the pathophysiology of type 1 diabetes and the second one’s about diabetic nephropathy (kidney disease):
vi. On Being the Right Size, by J. B. S. Haldane. A neat little text. A few quotes:
“To the mouse and any smaller animal [gravity] presents practically no dangers. You can drop a mouse down a thousand-yard mine shaft; and, on arriving at the bottom, it gets a slight shock and walks away, provided that the ground is fairly soft. A rat is killed, a man is broken, a horse splashes. For the resistance presented to movement by the air is proportional to the surface of the moving object. Divide an animal’s length, breadth, and height each by ten; its weight is reduced to a thousandth, but its surface only to a hundredth. So the resistance to falling in the case of the small animal is relatively ten times greater than the driving force.
An insect, therefore, is not afraid of gravity; it can fall without danger, and can cling to the ceiling with remarkably little trouble. It can go in for elegant and fantastic forms of support like that of the daddy-longlegs. But there is a force which is as formidable to an insect as gravitation to a mammal. This is surface tension. A man coming out of a bath carries with him a film of water of about one-fiftieth of an inch in thickness. This weighs roughly a pound. A wet mouse has to carry about its own weight of water. A wet fly has to lift many times its own weight and, as everyone knows, a fly once wetted by water or any other liquid is in a very serious position indeed. An insect going for a drink is in as great danger as a man leaning out over a precipice in search of food. If it once falls into the grip of the surface tension of the water—that is to say, gets wet—it is likely to remain so until it drowns. A few insects, such as water-beetles, contrive to be unwettable; the majority keep well away from their drink by means of a long proboscis. […]
It is an elementary principle of aeronautics that the minimum speed needed to keep an aeroplane of a given shape in the air varies as the square root of its length. If its linear dimensions are increased four times, it must fly twice as fast. Now the power needed for the minimum speed increases more rapidly than the weight of the machine. So the larger aeroplane, which weighs sixty-four times as much as the smaller, needs one hundred and twenty-eight times its horsepower to keep up. Applying the same principle to the birds, we find that the limit to their size is soon reached. An angel whose muscles developed no more power weight for weight than those of an eagle or a pigeon would require a breast projecting for about four feet to house the muscles engaged in working its wings, while to economize in weight, its legs would have to be reduced to mere stilts. Actually a large bird such as an eagle or kite does not keep in the air mainly by moving its wings. It is generally to be seen soaring, that is to say balanced on a rising column of air. And even soaring becomes more and more difficult with increasing size. Were this not the case eagles might be as large as tigers and as formidable to man as hostile aeroplanes.
But it is time that we pass to some of the advantages of size. One of the most obvious is that it enables one to keep warm. All warmblooded animals at rest lose the same amount of heat from a unit area of skin, for which purpose they need a food-supply proportional to their surface and not to their weight. Five thousand mice weigh as much as a man. Their combined surface and food or oxygen consumption are about seventeen times a man’s. In fact a mouse eats about one quarter its own weight of food every day, which is mainly used in keeping it warm. For the same reason small animals cannot live in cold countries. In the arctic regions there are no reptiles or amphibians, and no small mammals. The smallest mammal in Spitzbergen is the fox. The small birds fly away in winter, while the insects die, though their eggs can survive six months or more of frost. The most successful mammals are bears, seals, and walruses.” [I think he’s a bit too categorical in his statements here and this topic is more contested today than it probably was when he wrote his text – see wikipedia’s coverage of Bergmann’s rule].
i. Lock (water transport). Zumerchik and Danver’s book covered this kind of stuff as well, sort of, and I figured that since I’m not going to blog the book – for reasons provided in my goodreads review here – I might as well add a link or two here instead. The words ‘sort of’ above are justified, in my opinion, because the book’s coverage is so horrid that you’d never even know what a lock is used for from reading it; you’d need to look that up elsewhere.
On a related note there’s a lot of stuff in that book about the history of water transport etc. which you probably won’t get from these articles, but having a look here will give you some idea about which sorts of topics many of the chapters of the book are dealing with. Also, stuff like this and this. The book’s coverage of the latter topic is incidentally much, much more detailed than that wiki article, and the article – as well as many other articles about related topics (economic history, etc.) on the wiki, to the extent that they even exist – could clearly be improved greatly by adding content from books like this one. However I’m not going to be the guy doing that.
ii. Congruence (geometry).
iii. Geography and ecology of the Everglades. I’d note that this is a topic which seems to be reasonably well covered on wikipedia; there’s for example also a ‘good article’ on the Everglades and a featured article about the Everglades National Park. A few quotes and observations from the article:
“The geography and ecology of the Everglades involve the complex elements affecting the natural environment throughout the southern region of the U.S. state of Florida. Before drainage, the Everglades were an interwoven mesh of marshes and prairies covering 4,000 square miles (10,000 km2). […] Although sawgrass and sloughs are the enduring geographical icons of the Everglades, other ecosystems are just as vital, and the borders marking them are subtle or nonexistent. Pinelands and tropical hardwood hammocks are located throughout the sloughs; the trees, rooted in soil inches above the peat, marl, or water, support a variety of wildlife. The oldest and tallest trees are cypresses, whose roots are specially adapted to grow underwater for months at a time.”
“A vast marshland could only have been formed due to the underlying rock formations in southern Florida. The floor of the Everglades formed between 25 million and 2 million years ago when the Florida peninsula was a shallow sea floor. The peninsula has been covered by sea water at least seven times since the earliest bedrock formation. […] At only 5,000 years of age, the Everglades is a young region in geological terms. Its ecosystems are in constant flux as a result of the interplay of three factors: the type and amount of water present, the geology of the region, and the frequency and severity of fires. […] Water is the dominant element in the Everglades, and it shapes the land, vegetation, and animal life of South Florida. The South Florida climate was once arid and semi-arid, interspersed with wet periods. Between 10,000 and 20,000 years ago, sea levels rose, submerging portions of the Florida peninsula and causing the water table to rise. Fresh water saturated the limestone, eroding some of it and creating springs and sinkholes. The abundance of fresh water allowed new vegetation to take root, and through evaporation formed thunderstorms. Limestone was dissolved by the slightly acidic rainwater. The limestone wore away, and groundwater came into contact with the surface, creating a massive wetland ecosystem. […] Only two seasons exist in the Everglades: wet (May to November) and dry (December to April). […] The Everglades are unique; no other wetland system in the world is nourished primarily from the atmosphere. […] Average annual rainfall in the Everglades is approximately 62 inches (160 cm), though fluctuations of precipitation are normal.”
“Between 1871 and 2003, 40 tropical cyclones struck the Everglades, usually every one to three years.”
“Islands of trees featuring dense temperate or tropical trees are called tropical hardwood hammocks. They may rise between 1 and 3 feet (0.30 and 0.91 m) above water level in freshwater sloughs, sawgrass prairies, or pineland. These islands illustrate the difficulty of characterizing the climate of the Everglades as tropical or subtropical. Hammocks in the northern portion of the Everglades consist of more temperate plant species, but closer to Florida Bay the trees are tropical and smaller shrubs are more prevalent. […] Islands vary in size, but most range between 1 and 10 acres (0.40 and 4.05 ha); the water slowly flowing around them limits their size and gives them a teardrop appearance from above. The height of the trees is limited by factors such as frost, lightning, and wind: the majority of trees in hammocks grow no higher than 55 feet (17 m). […] There are more than 50 varieties of tree snails in the Everglades; the color patterns and designs unique to single islands may be a result of the isolation of certain hammocks. […] An estimated 11,000 species of seed-bearing plants and 400 species of land or water vertebrates live in the Everglades, but slight variations in water levels affect many organisms and reshape land formations.”
“Because much of the coast and inner estuaries are built by mangroves—and there is no border between the coastal marshes and the bay—the ecosystems in Florida Bay are considered part of the Everglades. […] Sea grasses stabilize sea beds and protect shorelines from erosion by absorbing energy from waves. […] Sea floor patterns of Florida Bay are formed by currents and winds. However, since 1932, sea levels have been rising at a rate of 1 foot (0.30 m) per 100 years. Though mangroves serve to build and stabilize the coastline, seas may be rising more rapidly than the trees are able to build.”
iv. Chang and Eng Bunker. Not a long article, but interesting:
“Chang (Chinese: 昌; pinyin: Chāng; Thai: จัน, Jan, rtgs: Chan) and Eng (Chinese: 恩; pinyin: Ēn; Thai: อิน In) Bunker (May 11, 1811 – January 17, 1874) were Thai-American conjoined twin brothers whose condition and birthplace became the basis for the term “Siamese twins”.”
I loved some of the implicit assumptions in this article: “Determined to live as normal a life as they could, Chang and Eng settled on their small plantation and bought slaves to do the work they could not do themselves. […] Chang and Adelaide [his wife] would become the parents of eleven children. Eng and Sarah [‘the other wife’] had ten.”
A ‘normal life’ indeed… The women the twins married were incidentally sisters who ended up disliking each other (I can’t imagine why…).
v. Genie (feral child). This is a very long article, and you should be warned that many parts of it may not be pleasant to read. From the article:
“Genie (born 1957) is the pseudonym of a feral child who was the victim of extraordinarily severe abuse, neglect and social isolation. Her circumstances are prominently recorded in the annals of abnormal child psychology. When Genie was a baby her father decided that she was severely mentally retarded, causing him to dislike her and withhold as much care and attention as possible. Around the time she reached the age of 20 months Genie’s father decided to keep her as socially isolated as possible, so from that point until she reached 13 years, 7 months, he kept her locked alone in a room. During this time he almost always strapped her to a child’s toilet or bound her in a crib with her arms and legs completely immobilized, forbade anyone from interacting with her, and left her severely malnourished. The extent of Genie’s isolation prevented her from being exposed to any significant amount of speech, and as a result she did not acquire language during childhood. Her abuse came to the attention of Los Angeles child welfare authorities on November 4, 1970.
In the first several years after Genie’s early life and circumstances came to light, psychologists, linguists and other scientists focused a great deal of attention on Genie’s case, seeing in her near-total isolation an opportunity to study many aspects of human development. […] In early January 1978 Genie’s mother suddenly decided to forbid all of the scientists except for one from having any contact with Genie, and all testing and scientific observations of her immediately ceased. Most of the scientists who studied and worked with Genie have not seen her since this time. The only post-1977 updates on Genie and her whereabouts are personal observations or secondary accounts of them, and all are spaced several years apart. […]
Genie’s father had an extremely low tolerance for noise, to the point of refusing to have a working television or radio in the house. Due to this, the only sounds Genie ever heard from her parents or brother on a regular basis were noises when they used the bathroom. Although Genie’s mother claimed that Genie had been able to hear other people talking in the house, her father almost never allowed his wife or son to speak and viciously beat them if he heard them talking without permission. They were particularly forbidden to speak to or around Genie, so what conversations they had were therefore always very quiet and out of Genie’s earshot, preventing her from being exposed to any meaningful language besides her father’s occasional swearing. […] Genie’s father fed Genie as little as possible and refused to give her solid food […]
In late October 1970, Genie’s mother and father had a violent argument in which she threatened to leave if she could not call her parents. He eventually relented, and later that day Genie’s mother was able to get herself and Genie away from her husband while he was out of the house […] She and Genie went to live with her parents in Monterey Park. Around three weeks later, on November 4, after being told to seek disability benefits for the blind, Genie’s mother decided to do so in nearby Temple City, California and brought Genie along with her.
On account of her near-blindness, instead of the disabilities benefits office Genie’s mother accidentally entered the general social services office next door. The social worker who greeted them instantly sensed something was not right when she first saw Genie and was shocked to learn Genie’s true age was 13, having estimated from her appearance and demeanor that she was around 6 or 7 and possibly autistic. She notified her supervisor, and after questioning Genie’s mother and confirming Genie’s age they immediately contacted the police. […]
Upon admission to Children’s Hospital, Genie was extremely pale and grossly malnourished. She was severely undersized and underweight for her age, standing 4 ft 6 in (1.37 m) and weighing only 59 pounds (27 kg) […] Genie’s gross motor skills were extremely weak; she could not stand up straight nor fully straighten any of her limbs. Her movements were very hesitant and unsteady, and her characteristic “bunny walk”, in which she held her hands in front of her like claws, suggested extreme difficulty with sensory processing and an inability to integrate visual and tactile information. She had very little endurance, only able to engage in any physical activity for brief periods of time. […]
Despite tests conducted shortly after her admission which determined Genie had normal vision in both eyes, she could not focus them on anything more than 10 feet (3 m) away, which corresponded to the dimensions of the room she was kept in. She was also completely incontinent, and gave no response whatsoever to extreme temperatures. As Genie never ate solid food as a child she was completely unable to chew and had very severe dysphagia, completely unable to swallow any solid or even soft food and barely able to swallow liquids. Because of this she would hold anything which she could not swallow in her mouth until her saliva broke it down, and if this took too long she would spit it out and mash it with her fingers. She constantly salivated and spat, and continually sniffed and blew her nose on anything that happened to be nearby.
Genie’s behavior was typically highly anti-social, and proved extremely difficult for others to control. She had no sense of personal property, frequently pointing to or simply taking something she wanted from someone else, and did not have any situational awareness whatsoever, acting on any of her impulses regardless of the setting. […] Doctors found it extremely difficult to test Genie’s mental age, but on two attempts they found Genie scored at the level of a 13-month-old. […] When upset Genie would wildly spit, blow her nose into her clothing, rub mucus all over her body, frequently urinate, and scratch and strike herself. These tantrums were usually the only times Genie was at all demonstrative in her behavior. […] Genie clearly distinguished speaking from other environmental sounds, but she remained almost completely silent and was almost entirely unresponsive to speech. When she did vocalize, it was always extremely soft and devoid of tone. Hospital staff initially thought that the responsiveness she did show to them meant she understood what they were saying, but later determined that she was instead responding to nonverbal signals that accompanied their speaking. […] Linguists later determined that in January 1971, two months after her admission, Genie only showed understanding of a few names and about 15–20 words. Upon hearing any of these, she invariably responded to them as if they had been spoken in isolation. Hospital staff concluded that her active vocabulary at that time consisted of just two short phrases, “stop it” and “no more”. Beyond negative commands, and possibly intonation indicating a question, she showed no understanding of any grammar whatsoever. […] Genie had a great deal of difficulty learning to count in sequential order. During Genie’s stay with the Riglers, the scientists spent a great deal of time attempting to teach her to count. 
She did not start to do so at all until late 1972, and when she did her efforts were extremely deliberate and laborious. By 1975 she could only count up to 7, which even then remained very difficult for her.”
“From January 1978 until 1993, Genie moved through a series of at least four additional foster homes and institutions. In some of these locations she was further physically abused and harassed to extreme degrees, and her development continued to regress. […] Genie is a ward of the state of California, and is living in an undisclosed location in the Los Angeles area. In May 2008, ABC News reported that someone who spoke under condition of anonymity had hired a private investigator who located Genie in 2000. She was reportedly living a relatively simple lifestyle in a small private facility for mentally underdeveloped adults, and appeared to be happy. Although she only spoke a few words, she could still communicate fairly well in sign language.”
“The extinction of the arboreal primates and the reduction or extinction of several browsing groups […] are strong evidence for the retreat of the forests during the early Oligocene and their replacement by open woodlands or even drier biotopes. […] Among the most distinctive species to enter Europe after the “Grande Coupure” were the first true rhinoceroses [which] achieved a high diversity and were going to characterize the mammalian faunas of Europe for millions of years, until the extinction of the last woolly rhinos during the late Pleistocene. […] the evolution of this group produced the largest terrestrial mammals of any time. The giant Paraceratherium […] was 6 m tall at the shoulders and had a 1.5-m-long skull […]. The males of this animal weighed around 15 tons, while the females were somewhat smaller, about 10 tons.” [Wikipedia has a featured article about these things here].
“One of the most significant features of the early Oligocene small-mammal communities was the first entry of lagomorphs into Europe. The lagomorphs — that is, the order of mammals that includes today’s hares and rabbits — originated very early on the Asian continent and from there colonized North America. The presence of the Turgai Strait prevented this group from entering Europe during the Eocene. […] the most characteristic immigrants during the early Oligocene were the cricetids of the genus Atavocricetodon. The cricetids are today represented in Europe by hamsters, reduced to three or four species […] These cricetids are typical inhabitants of the cold steppes of eastern Europe and Central Asia, and their limited representation in today’s European ecosystems does not reflect their importance in the history of the Cenozoic mammalian faunas of Eurasia. After its first entry following the “Grande Coupure,” this group experienced extraordinary success, diversifying into several genera and species. Even more significantly, the cricetids gave rise to the rodent groups that were going to be dominant during the Pliocene and Pleistocene — that is, the murids (the family of mice and rats) and arvicolids (the family of voles). […] In addition, new carnivore families, like the nimravids, appeared […]. The nimravids were once regarded as true felids (the family that includes today’s big and small cats) because of their similar dental and cranial adaptations. […] one of the more distinctive attributes of the nimravids was their long, laterally flattened upper canines, which were similar to those of the Miocene and Pliocene saber-toothed cats […]. 
However, most of these features have proved to be the result of a similar adaptation to hypercarnivorism, and the nimravids are now placed in a separate family of early carnivores whose evolution paralleled that of the large saber-toothed felids.” [Actually some of the nimravids were in some sense ‘even more sabertoothed’ than the (‘true’) saber-toothed cats which came later: “Although [the nimravid] Eusmilus bidentatus was no larger than a modern lynx, the adaptations for gape seen on its skull and mandible are more advanced than in any of the felid sabertooths of the European Pliocene and Pleistocene.”]
“About 30 million years ago, a new glacial phase began, and for 4 million years Antarctica was subjected to multiple glaciation episodes. The global sea level experienced the largest lowering in the whole Cenozoic, dropping by about 150 m […]. A possible explanation for this new glacial event lies in the final opening of the Drake Passage between Antarctica and South America, which led to the completion of a fully circumpolar circulation and impeded any heat exchange between Antarctic waters and the warmer equatorial waters. A second, perhaps complementary cause for this glacial pulse is probably related to the final opening of the seaway between Greenland and Norway. The cold Arctic waters, largely isolated since the Mesozoic, spread at this time into the North Atlantic. The main effect of this cooling was a new extension of the dry landscapes on the European and western Asian lands. For instance, we know from pollen evidence that a desert vegetation was dominant in the Levant during the late Oligocene and earliest Miocene […] This glacial event led to the extinction of several forms that had persisted from the Eocene”.
“Among the carnivores, the late Oligocene saw the decline and local extinction of the large nimravids [Key word: local. They came back to Europe later during the early Miocene, and “the nimravids maintained a remarkable stability throughout the Miocene, probably in relation to a low speciation rate”]. In contrast, the group of archaic feloids that had arisen during the early Oligocene […] continued its evolution into the late Oligocene and diversified into a number of genera […] The other group of large carnivores that spread during the late Oligocene were the “bear-dog” amphicyonids, which from that time on became quite diverse, with many different ecological adaptations. […] The late Oligocene saw, in addition to the bearlike amphicyonids, the spread of the first true ursids […]. The members of this genus did not have the massive body dimensions of today’s bears but were medium-size omnivores […] Another group of carnivores that spread successfully during the late Oligocene were the mustelids, the family that includes today’s martens, badgers, skunks, and otters. […] In contrast to these successes, the creodonts of the genus Hyaenodon, which had survived all periods of crisis since the Eocene, declined during the late Oligocene. The last Hyaenodon in Europe was recorded at the end of the Oligocene […], and did not survive into the Miocene. This was the end in Europe of a long-lived group of successful carnivorans that had filled the large-predator guild for millions of years. However, as with other Oligocene groups, […] the hyaenodonts persisted in Africa and, from there, made a short incursion into Europe during the early Miocene”.
“After a gradual warming during the late Oligocene, global temperatures reached a climatic optimum during the early Miocene […] Shallow seas covered several nearshore areas in Europe […] as a consequence of a general sea-level rise. A broad connection was established between the Indian Ocean and both the Mediterranean and Paratethys Seas […] Widespread warm-water faunas including tropical fishes and nautiloids have been found, indicating conditions similar to those of the present-day Guinea Gulf, with mean surface-water temperatures around 25 to 27°C. Important reef formations bounded most of the shallow-water Mediterranean basins. […] Reef-building corals that today inhabit the Great Barrier Reef within a temperature range of 19 to 28°C became well established on North Island, New Zealand […] The early Miocene climate was warm and humid, indicating tropical conditions […]. Rich, extensive woodlands with varied kinds of plants developed in different parts of southern Europe […] The climatic optimum of the early Miocene also led to a maximum development of mangroves. These subtropical floras extended as far north as eastern Siberia and Kamchatka”.
“Despite the climatic stability of the early Miocene, an important tectonic event disrupted the evolution of the Eurasian faunas during this epoch. About 19 million years ago, the graben system along the Red Sea Fault, active in the south since the late Oligocene, opened further […] Consequently, the Arabian plate rotated counterclockwise and collided with the Anatolian plate. The marine gateway from the Mediterranean toward the Indo-Pacific closed, and a continental migration bridge (known as the Gomphothere Bridge) between Eurasia and Africa came into existence. This event had enormous consequences for the further evolution of the terrestrial faunas of Eurasia and Africa. Since the late Eocene, Africa had evolved in isolation, developing its own autochthonous fauna. Part of this fauna consisted of a number of endemic Oligocene survivors, such as anthracotheres, hyaenodonts, and primates, for which Africa had acted as a refuge […] The first evidence of an African–Eurasian exchange was the presence of the anthracothere Brachyodus in a number of early Miocene sites in Europe […] a second dispersal event from Africa, that of the gomphothere and deinothere proboscideans, had much more lasting effects. […] Today we can easily identify any proboscidean by its long proboscis and tusks. However, the primitive proboscideans from the African Eocene had a completely different appearance and are hardly recognizable as the ancestors of today’s elephants. Instead, they were hippolike semiamphibious ungulates with massive, elongated bodies supported by rather short legs. […] The first proboscideans entering Europe were the so-called gomphotheres […] which dispersed worldwide during the early Miocene from Africa to Europe, Asia, and North America […]. Gomphotherium was the size of an Indian elephant, about 2.5 m high at the withers. Its skull and dentition, however, were different from those of modern elephants. 
Gomphotherium’s skull was long […] and displayed not two but four tusks, one pair in the upper jaw and the other pair at the end of the lower jaw. […] Shortly after the entry of Gomphotherium and Zygolophodon [a second group of mastodons], a third proboscidean group, the deinotheres, successfully settled in Eurasia. Unlike the previous genera, the deinotheres were not elephantoids but represented a different, now totally extinct kind of proboscidean.”
“The dispersal of not only the African proboscideans but also many eastern immigrants contributed to a significant increase in the diversity of the impoverished early Miocene terrestrial biotas. The entry of this set of immigrants probably led to the extinction of a number of late Oligocene and early Miocene survivors, such as tapirids, anthracotherids, and primitive suids [pigs] and moschoids. In addition to the events that affected the Middle East area, sea-level fluctuations enabled short-lived mammal exchanges across the Bering Strait between Eurasia and North America, permitting the arrival of the browsing horse Anchitherium in Eurasia […] Widely used for biostratigraphic purposes, the dispersal of Anchitherium was the first of a number of similar isolated events undergone by North American equids that entered Eurasia and rapidly spread on this continental area.”
“A new marine transgression, known as the Langhian Transgression, characterized the beginning of the middle Miocene, affecting the circum-Mediterranean area. Consequently, the seaway to the Indo-Pacific reopened for a short time, restoring the circum-equatorial warm-water circulation. […] tropical conditions became established as far north as Poland in marine coastal and open-sea waters. After the optimal conditions of the early Miocene, the middle Miocene was a period of global oceanic reorganization, representing a major change in the climatic evolution of the Cenozoic. Before this process began, high-latitude paleoclimatic conditions were generally warm although oscillating, but they rapidly cooled thereafter, leading to an abrupt high-latitude cooling event at about 14.5 million years ago […] Increased production of cold, deep Antarctic waters caused the extinction of several oceanic benthic foraminifers that had persisted from the late Oligocene–early Miocene and promoted a significant evolutionary turnover of the oceanic assemblages from about 16 to 14 million years ago […] This middle Miocene cooling was associated with a major growth of the Eastern Antarctic Ice Sheets (EAIS) […] Middle Miocene polar cooling and east Antarctic ice growth had severe effects on middle- to low-latitude terrestrial environments. There was a climatic trend to cooler winters and decreased summer rainfall. Seasonal, summer-drought-adapted schlerophyllous vegetation progressively evolved and spread geographically during the Miocene, replacing the laurophyllous evergreen forests that were adapted to moist, subtropical and tropical conditions with temperate winters and abundant summer rainfalls […] These effects were clearly seen in a wide area to the south of the Paratethys Sea, extending from eastern Europe to western Asia. 
According to the ideas of the American paleontologist Ray Bernor, this region, known as the Greek-Iranian (or sub-Paratethyan) Province, acted as a woodland environmental “hub” for a corridor of open habitats that extended from northwestern Africa eastward across Arabia into Afghanistan, north into the eastern Mediterranean area, and northeast into northern China. The Greek-Iranian Province records the first evidence of open woodlands in which a number of large, progressive open-country mammals—such as hyaenids, thick-enameled hominoids, bovids, and giraffids — diversified and dispersed into eastern Africa and southwestern Asia […] the peculiar biotope developed in the Greek-Iranian Province acted as the background from which the African savannas evolved during the Pliocene and Pleistocene.”
“The most outstanding effect of the Middle Miocene Event is seen among the herbivorous community, which showed a trend toward developing larger body sizes, more-hypsodont teeth, and more-elongated distal limb segments […]. Increasing body size in herbivores is related to a higher ingestion of fibrous and low-quality vegetation. Browsers and grazers have to be large because they need long stomachs and intestines to process a large quantity of low-energy food (this is why they have to eat almost continuously). Because of the mechanism of rumination, ruminants are the only herbivores that can escape this rule and subsist at small sizes. Increasing hypsodonty and high-crowned teeth are directly related to the ingestion of more-abrasive vegetation […] Finally, the elongation of the distal limb segments is related to increasing cursoriality. The origin of cursoriality can be linked to the expansion of the home range in open, low-productive habitats. […] At the taxonomic level, this habitat change in the low latitudes involved the rapid adaptive radiation of woodland ruminants (bovids and giraffids). […] Gazelles dispersed into Europe at this time from their possible Afro-Arabian origins […] Not only gazelles but also the giraffids experienced a wide adaptive radiation into Africa after their dispersal from Asia. […] Among the suids [pigs], the listriodontines evolved in a peculiar way in northern Africa, leading to giant forms such as Kubanochoerus, with a weight of about 500 kg, which in some species may have reached 800 kg.”