The Biology of Moral Systems (II)

There are several really great books I have read ‘recently’ which I have either not blogged at all or not blogged in anywhere near the amount of detail they deserve; Alexander’s book is one of them. I hope to get rid of some of the backlog soon. You can read my first post about the book here, and it might be a good idea to do so, as I won’t be revisiting material covered in that post here. In this post I have added some quotes from, and comments related to, the book’s second chapter, ‘A Biological View of Morality’.

“Moral systems are systems of indirect reciprocity. They exist because confluences of interest within groups are used to deal with conflicts of interest between groups. Indirect reciprocity develops because interactions are repeated, or flow among a society’s members, and because information about subsequent interactions can be gleaned from observing the reciprocal interactions of others.
To establish moral rules is to impose rewards and punishments (typically assistance and ostracism, respectively) to control social acts that, respectively, help or hurt others. To be regarded as moral, a rule typically must represent widespread opinion, reflecting the fact that it must apply with a certain degree of indiscriminateness.”

“Moral philosophers have not treated the beneficence of humans as a part, somehow, of their selfishness; yet, as Trivers (1971) suggested, the biologist’s view of lifetimes leads directly to this argument. In other words, the normally expressed beneficence, or altruism, of parenthood and nepotism and the temporary altruism (or social investment) of reciprocity are expected to result in greater returns than their alternatives.
If biologists are correct, all that philosophers refer to as altruistic or utilitarian behavior by individuals will actually represent either the temporary altruism (phenotypic beneficence or social investment) of indirect somatic effort [‘Direct somatic effort refers to self-help that involves no other persons. Indirect somatic effort involves reciprocity, which may be direct or indirect. Returns from direct and indirect reciprocity may be immediate or delayed’ – Alexander spends some pages classifying human effort in terms of such ‘atoms of sociality’, which are useful devices for analytical purposes, but I decided not to cover that stuff in detail here – US] or direct and indirect nepotism. The exceptions are what might be called evolutionary mistakes or accidents that result in unreciprocated or “genetic” altruism, deleterious to both the phenotype and genotype of the altruist; such mistakes can occur in all of the above categories” [I should point out that Boyd and Richerson’s book Not by Genes Alone – another great book which I hope to blog soon – is worth having a look at if after reading Alexander’s book you think that he does not cover the topic of how and why such mistakes might happen in the amount of detail it deserves; they also cover related topics in some detail, from a different angle – US]

“It is my impression that many moral philosophers do not approach the problem of morality and ethics as if it arose as an effort to resolve conflicts of interests. Their involvement in conflicts of interest seems to come about obliquely through discussions of individuals’ views with respect to moral behavior, or their proximate feelings about morality – almost as if questions about conflicts of interest arise only because we operate under moral systems, rather than vice versa.”

“The problem, in developing a theory of moral systems that is consistent with evolutionary theory from biology, is in accounting for the altruism of moral behavior in genetically selfish terms. I believe this can be done by interpreting moral systems as systems of indirect reciprocity.
I regard indirect reciprocity as a consequence of direct reciprocity occurring in the presence of interested audiences – groups of individuals who continually evaluate the members of their society as possible future interactants from whom they would like to gain more than they lose […] Even in directly reciprocal interactions […] net losses to self […] may be the actual aim of one or even both individuals, if they are being scrutinized by others who are likely to engage either individual subsequently in reciprocity of greater significance than that occurring in the scrutinized acts. […] Systems of indirect reciprocity, and therefore moral systems, are social systems structured around the importance of status. The concept of status implies that an individual’s privileges, or its access to resources, are controlled in part by how others collectively think of him (hence, treat him) as a result of past interactions (including observations of interactions with others). […] The consequences of indirect reciprocity […] include the concomitant spread of altruism (as social investment genetically valuable to the altruist), rules, and efforts to cheat […]. I would not contend that we always carry out cost-benefit analyses on these issues deliberately or consciously. I do, however, contend that such analyses occur, sometimes consciously, sometimes not, and that we are evolved to be exceedingly accurate and quick at making them […] [A] conscience [is what] I have interpreted (Alexander, 1979a) as the “still small voice that tells us how far we can go in serving our own interests without incurring intolerable risks.””

“The long-term existence of complex patterns of indirect reciprocity […] seems to favor the evolution of keen abilities to (1) make one’s self seem more beneficent than is the case; and (2) influence others to be beneficent in such fashions as to be deleterious to themselves and beneficial to the moralizer, e.g. to lead others to (a) invest too much, (b) invest wrongly in the moralizer or his relatives and friends, or (c) invest indiscriminately on a larger scale than would otherwise be the case. According to this view, individuals are expected to parade the idea of much beneficence, and even of indiscriminate altruism as beneficial, so as to encourage people in general to engage in increasing amounts of social investment whether or not it is beneficial to their interests. […] They may also be expected to depress the fitness of competitors by identifying them, deceptively or not, as reciprocity cheaters (in other words, to moralize and gossip); to internalize rules or evolve the ability to acquire a conscience, interpreted […] as the ability to use our own judgment to serve our own interests; and to self-deceive and display false sincerity as defenses against detection of cheating and attributions of deliberateness in cheating […] Everyone will wish to appear more beneficent than he is. There are two reasons: (1) this appearance, if credible, is more likely to lead to direct social rewards than its alternatives; (2) it is also more likely to encourage others to be more beneficent.”

“Consciousness and related aspects of the human psyche (self-awareness, self-reflection, foresight, planning, purpose, conscience, free will, etc.) are here hypothesized to represent a system for competing with other humans for status, resources, and eventually reproductive success. More specifically, the collection of these attributes is viewed as a means of seeing ourselves and our life situations as others see us and our life situations – most particularly in ways that will cause (the most and the most important of) them to continue to interact with us in fashions that will benefit us and seem to benefit them.
Consciousness, then, is a game of life in which the participants are trying to comprehend what is in one another’s minds before, and more effectively than, it can be done in reverse.”

“Provided with a means of relegating our deceptions to the subconsciousness […] false sincerity becomes easier and detection more difficult. There are reasons for believing that one does not need to know his own personal interests consciously in order to serve them as much as he needs to know the interests of others to thwart them. […] I have suggested that consciousness is a way of making our social behavior so unpredictable as to allow us to outmaneuver others; and that we press into subconsciousness (as opposed to forgetting) those things that remain useful to us but would be detrimental to us if others knew about them, and on which we are continually tested and would have to lie deliberately if they remained in our conscious mind […] Conscious concealment of interests, or disavowal, is deliberate deception, considered more reprehensible than anything not conscious. Indeed, if one does not know consciously what his interests are, he cannot, in some sense, be accused of deception even though he may be using an evolved ability of self-deception to deceive others. So it is not always – maybe not usually – in our evolutionary or surrogate-evolutionary interests to make them conscious […] If people can be fooled […] then there will be continual selection for becoming better at fooling others […]. This may include causing them to think that it will be best for them to help you when it is not. This ploy works because of the thin line everybody must continually tread with respect to not showing selfishness. If some people are self-destructively beneficent (i.e., make altruistic mistakes), and if people often cannot tell if one is such a mistake-maker, it might be profitable even to try to convince others that one is such a mistake-maker so as to be accepted as a cooperator or so that the other will be beneficent in expectation of large returns (through “mistakes”) later. 
[…] Reciprocity may work this way because it is grounded evolutionarily in nepotism, appropriate dispensing of nepotism (as well as reciprocity) depends upon learning, and the wrong things can be learned. [Boyd and Richerson talk about this particular aspect, the learning part, in much more detail in their books – US] Self-deception, then, may not be a pathological or detrimental trait, at least in most people most of the time. Rather, it may have evolved as a way to deceive others.”

“The only time that utilitarianism (promoting the greatest good to the greatest number) is predicted by evolutionary theory is when the interests of the group (the “greatest number”) and the individual coincide, and in such cases utilitarianism is not really altruistic in either the biologists’ or the philosophers’ sense of the term. […] If Kohlberg means to imply that a significant proportion of the populace of the world either implicitly or explicitly favors a system in which everyone (including himself) behaves so as to bring the greatest good to the greatest number, then I simply believe that he is wrong. If he supposes that only a relatively few – particularly moral philosophers and some others like them – have achieved this “stage,” then I also doubt the hypothesis. I accept that many people are aware of this concept of utility, that a small minority may advocate it, and that an even smaller minority may actually believe that they behave according to it. I speculate, however, that with a few inadvertent or accidental exceptions, no one actually follows this precept. I see the concept as having its main utility as a goal towards which one may exhort others to aspire, and towards which one may behave as if (or talk as if) aspiring, while actually practicing complex forms of self-interest.”

“Generally speaking, the bigger the group, the more complex the social organization, and the greater the group’s unity of purpose the more limited is individual entrepreneurship.”

“The function or raison d’etre [sic] of moral systems is evidently to provide the unity required to enable the group to compete successfully with other human groups. […] the argument that human evolution has been guided to some large extent by intergroup competition and aggression […] is central to the theory of morality presented here”.

June 29, 2017 Posted by | Books, Biology, Philosophy, Evolutionary biology, Anthropology, Genetics

Harnessing phenotypic heterogeneity to design better therapies

Unlike many of the IAS lectures I’ve recently blogged, this one is new – it was uploaded earlier this week. I have to say that I was very surprised – and disappointed – that the treatment strategy discussed in the lecture had not already been analyzed in a lot of detail and implemented in clinical practice some time ago. Why would you not expect the composition of cancer cell subtypes in the tumour microenvironment to change when you start treatment? In any setting where a subgroup of cancer cells responds differently to treatment than ‘the average’, that would seem to me to be the expected outcome. And concepts such as drug holidays and dose adjustments as treatment responses to evolving drug resistance/treatment failure seem like such obvious approaches to try out here (…the immunologists dealing with HIV infection have been studying such things for decades). I guess ‘better late than never’.

A few papers mentioned/discussed in the lecture:

Impact of Metabolic Heterogeneity on Tumor Growth, Invasion, and Treatment Outcomes.
Adaptive vs continuous cancer therapy: Exploiting space and trade-offs in drug scheduling.
Exploiting evolutionary principles to prolong tumor control in preclinical models of breast cancer.

June 11, 2017 Posted by | Cancer/oncology, Genetics, Immunology, Lectures, Mathematics, Medicine, Studies

Standing on the Shoulders of Mice: Aging T-cells

Most of the lecture is not about mice, but rather about stuff like this and this (both papers are covered in the lecture). I’ve read about related topics before (see e.g. this), but if you haven’t, some parts of the lecture will probably be too technical for you to follow.

May 3, 2017 Posted by | Cancer/oncology, Cardiology, Genetics, Immunology, Lectures, Medicine, Papers

Biodemography of aging (IV)

My working assumption as I was reading part two of the book was that I would not be covering that part in much detail here, because it would simply be too much work to make such posts legible to the readership of this blog. While writing this post, however, it occurred to me that given that almost nobody reads along here anyway (I’m not complaining, mind you – this is how I like it these days), the main beneficiary of my blog posts will always be myself, which led to the related observation that I should not limit my coverage of interesting stuff here simply because some hypothetical and probably nonexistent readership out there might not be able to follow it. So when I started out writing this post I was working under the assumption that it would be my last post about the book, but I now feel sure that if I find the time I’ll add at least one more post about the book’s statistics coverage. On a related note, I am explicitly making the observation here that this post was written for my benefit, not yours. You can read it if you like, or not, but it was not really written for you.

I have added bold in a few places to emphasize key concepts and observations from the quoted paragraphs, and to make the post easier for me to navigate later (the italics below, on the other hand, are all those of the authors of the book).

“Biodemography is a multidisciplinary branch of science that unites under its umbrella various analytic approaches aimed at integrating biological knowledge and methods and traditional demographic analyses to shed more light on variability in mortality and health across populations and between individuals. Biodemography of aging is a special subfield of biodemography that focuses on understanding the impact of processes related to aging on health and longevity.”

“Mortality rates as a function of age are a cornerstone of many demographic analyses. The longitudinal age trajectories of biomarkers add a new dimension to the traditional demographic analyses: the mortality rate becomes a function of not only age but also of these biomarkers (with additional dependence on a set of sociodemographic variables). Such analyses should incorporate dynamic characteristics of trajectories of biomarkers to evaluate their impact on mortality or other outcomes of interest. Traditional analyses using baseline values of biomarkers (e.g., Cox proportional hazards or logistic regression models) do not take into account these dynamics. One approach to the evaluation of the impact of biomarkers on mortality rates is to use the Cox proportional hazards model with time-dependent covariates; this approach is used extensively in various applications and is available in all popular statistical packages. In such a model, the biomarker is considered a time-dependent covariate of the hazard rate and the corresponding regression parameter is estimated along with standard errors to make statistical inference on the direction and the significance of the effect of the biomarker on the outcome of interest (e.g., mortality). However, the choice of the analytic approach should not be governed exclusively by its simplicity or convenience of application. It is essential to consider whether the method gives meaningful and interpretable results relevant to the research agenda. In the particular case of biodemographic analyses, the Cox proportional hazards model with time-dependent covariates is not the best choice.”
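The mechanics of the approach criticized here are simple enough to sketch. Below is my own toy implementation (not code from the book) of the partial log-likelihood of a Cox model with a time-dependent covariate, using made-up data in the standard counting-process (‘start-stop’) format:

```python
import math

# counting-process ('start-stop') records: (start, stop, x, event),
# where x is the biomarker value over the interval and event=1 marks
# a death at `stop`. Three hypothetical individuals, A, B and C:
records = [
    (0, 2, 1.0, 0), (2, 5, 1.4, 1),                  # A: dies at t=5
    (0, 3, 0.5, 0), (3, 5, 0.6, 0), (5, 8, 0.9, 1),  # B: dies at t=8
    (0, 4, 0.2, 0), (4, 9, 0.3, 0),                  # C: censored at t=9
]

def cox_partial_loglik(beta):
    """Partial log-likelihood of a Cox model with a time-dependent covariate:
    at each event time, the dying individual's exp(beta*x) is compared with
    the sum of exp(beta*x) over everyone still at risk at that time."""
    ll = 0.0
    for (s, t, x, ev) in records:
        if not ev:
            continue
        denom = sum(math.exp(beta * xr)
                    for (sr, tr, xr, er) in records if sr < t <= tr)
        ll += beta * x - math.log(denom)
    return ll

print(cox_partial_loglik(0.0))  # -log(3) - log(2): risk sets of size 3 and 2
```

Maximizing this function over beta gives the estimate the quote refers to; the problem the authors point to arises when the x values in the records are error-prone measurements rather than the true biomarker values.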

“Longitudinal studies of aging present special methodological challenges due to inherent characteristics of the data that need to be addressed in order to avoid biased inference. The challenges are related to the fact that the populations under study (aging individuals) experience substantial dropout rates related to death or poor health and often have co-morbid conditions related to the disease of interest. The standard assumption made in longitudinal analyses (although usually not explicitly mentioned in publications) is that dropout (e.g., death) is not associated with the outcome of interest. While this can be safely assumed in many general longitudinal studies (where, e.g., the main causes of dropout might be the administrative end of the study or moving out of the study area, which are presumably not related to the studied outcomes), the very nature of the longitudinal outcomes (e.g., measurements of some physiological biomarkers) analyzed in a longitudinal study of aging assumes that they are (at least hypothetically) related to the process of aging. Because the process of aging leads to the development of diseases and, eventually, death, in longitudinal studies of aging an assumption of non-association of the reason for dropout and the outcome of interest is, at best, risky, and usually is wrong. As an illustration, we found that the average trajectories of different physiological indices of individuals dying at earlier ages markedly deviate from those of long-lived individuals, both in the entire Framingham original cohort […] and also among carriers of specific alleles […] In such a situation, panel compositional changes due to attrition affect the averaging procedure and modify the averages in the total sample. Furthermore, biomarkers are subject to measurement error and random biological variability. They are usually collected intermittently at examination times which may be sparse and typically biomarkers are not observed at event times. 
It is well known in the statistical literature that ignoring measurement errors and biological variation in such variables and using their observed “raw” values as time-dependent covariates in a Cox regression model may lead to biased estimates and incorrect inferences […] Standard methods of survival analysis such as the Cox proportional hazards model (Cox 1972) with time-dependent covariates should be avoided in analyses of biomarkers measured with errors because they can lead to biased estimates.”
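A toy illustration of why this matters (my own sketch, not from the book): regressing an outcome on an error-prone version of a covariate attenuates the estimated effect towards zero, by roughly the factor var(true)/(var(true) + var(noise)):

```python
import random

random.seed(1)

n = 20000
sigma_noise = 0.5  # measurement-error SD (invented for illustration)

x = [random.gauss(0.0, 1.0) for _ in range(n)]         # true biomarker
w = [xi + random.gauss(0.0, sigma_noise) for xi in x]  # observed 'raw' value
y = [xi + random.gauss(0.0, 0.2) for xi in x]          # outcome driven by truth

def ols_slope(u, v):
    """Simple-regression slope, cov(u, v) / var(u), in closed form."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    return cov / sum((a - mu) ** 2 for a in u)

b_true = ols_slope(x, y)  # close to the true effect, 1
b_obs = ols_slope(w, y)   # attenuated towards 1/(1 + 0.5**2) = 0.8
print(b_true, b_obs)
```

The same attenuation mechanism operates when the ‘regression’ is a Cox model and the noisy biomarker enters as a time-dependent covariate.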

“Statistical methods aimed at analyses of time-to-event data jointly with longitudinal measurements have become known in the mainstream biostatistical literature as “joint models for longitudinal and time-to-event data” (“survival” or “failure time” are often used interchangeably with “time-to-event”) or simply “joint models.” This is an active and fruitful area of biostatistics with an explosive growth in recent years. […] The standard joint model consists of two parts, the first representing the dynamics of longitudinal data (which is referred to as the “longitudinal sub-model”) and the second one modeling survival or, generally, time-to-event data (which is referred to as the “survival sub-model”). […] Numerous extensions of this basic model have appeared in the joint modeling literature in recent decades, providing great flexibility in applications to a wide range of practical problems. […] The standard parameterization of the joint model (11.2) assumes that the risk of the event at age t depends on the current “true” value of the longitudinal biomarker at this age. While this is a reasonable assumption in general, it may be argued that additional dynamic characteristics of the longitudinal trajectory can also play a role in the risk of death or onset of a disease. For example, if two individuals at the same age have exactly the same level of some biomarker at this age, but the trajectory for the first individual increases faster with age than that of the second one, then the first individual can have worse survival chances for subsequent years. […] Therefore, extensions of the basic parameterization of joint models allowing for dependence of the risk of an event on such dynamic characteristics of the longitudinal trajectory can provide additional opportunities for comprehensive analyses of relationships between the risks and longitudinal trajectories. Several authors have considered such extended models. 
[…] joint models are computationally intensive and are sometimes prone to convergence problems [however such] models provide more efficient estimates of the effect of a covariate […] on the time-to-event outcome in the case in which there is […] an effect of the covariate on the longitudinal trajectory of a biomarker. This means that analyses of longitudinal and time-to-event data in joint models may require smaller sample sizes to achieve comparable statistical power with analyses based on time-to-event data alone (Chen et al. 2011).”
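The basic logic of the standard joint model is easy to simulate (my own toy sketch with invented parameters, not code from the book): each individual gets a linear ‘true’ biomarker trajectory m(t) = a + b·t (the longitudinal sub-model), and the hazard at age t depends on the current true value, h(t) = h0·exp(α·m(t)) (the survival sub-model). Individuals whose biomarker rises faster should then die earlier on average:

```python
import math, random

random.seed(2)

H0, ALPHA, DT, TMAX = 0.01, 0.8, 0.1, 100.0  # invented parameter values

def simulate_death_age(a, b):
    """Step through life in increments of DT; at each step the death
    probability is 1 - exp(-h(t)*DT), with the hazard h(t) driven by the
    current value of the (noise-free) biomarker trajectory m(t) = a + b*t."""
    t = 0.0
    while t < TMAX:
        h = H0 * math.exp(ALPHA * (a + b * t))
        if random.random() < 1.0 - math.exp(-h * DT):
            return t
        t += DT
    return TMAX  # administratively censored at TMAX

slow = [simulate_death_age(0.0, 0.02) for _ in range(1000)]
fast = [simulate_death_age(0.0, 0.08) for _ in range(1000)]
mean_slow = sum(slow) / len(slow)
mean_fast = sum(fast) / len(fast)
print(mean_slow, mean_fast)  # the fast-rising group dies earlier on average
```

Fitting such a model to data (rather than simulating from it) is what the joint-modeling machinery discussed in the chapter does; the extension mentioned in the quote would let the hazard depend on the slope b as well as on the current level of m(t).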

“To be useful as a tool for biodemographers and gerontologists who seek biological explanations for observed processes, models of longitudinal data should be based on realistic assumptions and reflect relevant knowledge accumulated in the field. An example is the shape of the risk functions. Epidemiological studies show that the conditional hazards of health and survival events considered as functions of risk factors often have U- or J-shapes […], so a model of aging-related changes should incorporate this information. In addition, risk variables, and, what is very important, their effects on the risks of corresponding health and survival events, experience aging-related changes and these can differ among individuals. […] An important class of models for joint analyses of longitudinal and time-to-event data incorporating a stochastic process for description of longitudinal measurements uses an epidemiologically-justified assumption of a quadratic hazard (i.e., U-shaped in general and J-shaped for variables that can take values only on one side of the U-curve) considered as a function of physiological variables. Quadratic hazard models have been developed and intensively applied in studies of human longitudinal data”.
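The quadratic-hazard assumption is easy to state explicitly (my own sketch; the optimum and parameter values are made up for illustration): mortality risk is lowest near some physiological optimum and increases quadratically as the biomarker deviates from it in either direction:

```python
def quadratic_hazard(x, mu0=0.005, x_opt=120.0, b=1e-5):
    """U-shaped hazard: a baseline risk mu0 plus a quadratic penalty for
    deviating from a physiological 'optimum' x_opt (think systolic blood
    pressure); all parameter values here are invented for illustration."""
    return mu0 + b * (x - x_opt) ** 2

# risk is minimal at the optimum and rises on both sides of it:
print(quadratic_hazard(120.0), quadratic_hazard(90.0), quadratic_hazard(160.0))
```

For variables that only take values on one side of the optimum, the same function restricted to that side gives the J-shape mentioned in the quote.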

“Various approaches to statistical model building and data analysis that incorporate unobserved heterogeneity are ubiquitous in different scientific disciplines. Unobserved heterogeneity in models of health and survival outcomes can arise because there may be relevant risk factors affecting an outcome of interest that are either unknown or not measured in the data. Frailty models introduce the concept of unobserved heterogeneity in survival analysis for time-to-event data. […] Individual age trajectories of biomarkers can differ due to various observed as well as unobserved (and unknown) factors and such individual differences propagate to differences in risks of related time-to-event outcomes such as the onset of a disease or death. […] The joint analysis of longitudinal and time-to-event data is the realm of a special area of biostatistics named “joint models for longitudinal and time-to-event data” or simply “joint models” […] Approaches that incorporate heterogeneity in populations through random variables with continuous distributions (as in the standard joint models and their extensions […]) assume that the risks of events and longitudinal trajectories follow similar patterns for all individuals in a population (e.g., that biomarkers change linearly with age for all individuals). Although such homogeneity in patterns can be justifiable for some applications, generally this is a rather strict assumption […] A population under study may consist of subpopulations with distinct patterns of longitudinal trajectories of biomarkers that can also have different effects on the time-to-event outcome in each subpopulation. When such subpopulations can be defined on the base of observed covariate(s), one can perform stratified analyses applying different models for each subpopulation. 
However, observed covariates may not capture the entire heterogeneity in the population in which case it may be useful to conceive of the population as consisting of latent subpopulations defined by unobserved characteristics. Special methodological approaches are necessary to accommodate such hidden heterogeneity. Within the joint modeling framework, a special class of models, joint latent class models, was developed to account for such heterogeneity […] The joint latent class model has three components. First, it is assumed that a population consists of a fixed number of (latent) subpopulations. The latent class indicator represents the latent class membership and the probability of belonging to the latent class is specified by a multinomial logistic regression function of observed covariates. It is assumed that individuals from different latent classes have different patterns of longitudinal trajectories of biomarkers and different risks of event. The key assumption of the model is conditional independence of the biomarker and the time-to-events given the latent classes. Then the class-specific models for the longitudinal and time-to-event outcomes constitute the second and third component of the model thus completing its specification. […] the latent class stochastic process model […] provides a useful tool for dealing with unobserved heterogeneity in joint analyses of longitudinal and time-to-event outcomes and taking into account hidden components of aging in their joint influence on health and longevity. This approach is also helpful for sensitivity analyses in applications of the original stochastic process model. We recommend starting the analyses with the original stochastic process model and estimating the model ignoring possible hidden heterogeneity in the population. 
Then the latent class stochastic process model can be applied to test hypotheses about the presence of hidden heterogeneity in the data in order to appropriately adjust the conclusions if a latent structure is revealed.”
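The three components of the joint latent class model can be sketched in a few lines (my own toy version with two classes and made-up parameters): a multinomial (here simply binary logistic) model for class membership given observed covariates, plus class-specific longitudinal and survival sub-models:

```python
import math

def class_prob(z, beta0=-0.5, beta1=1.2):
    """Component 1: P(class 1 | covariate z), a logistic (two-class
    multinomial) regression; coefficients are invented."""
    return 1.0 / (1.0 + math.exp(-(beta0 + beta1 * z)))

# Component 2: class-specific longitudinal sub-models, m(t)
trajectory = {0: lambda t: 1.0 + 0.01 * t,   # 'slow decline' class
              1: lambda t: 1.0 + 0.05 * t}   # 'fast decline' class

# Component 3: class-specific survival sub-models, hazard at age t
hazard = {0: lambda t: 0.005 * math.exp(0.02 * t),
          1: lambda t: 0.005 * math.exp(0.05 * t)}

# For an individual with covariate z = 1.0, the marginal mean biomarker
# at age 70 mixes the class-specific trajectories:
p1 = class_prob(1.0)
m_marginal = (1.0 - p1) * trajectory[0](70) + p1 * trajectory[1](70)
print(p1, m_marginal)
```

The key conditional-independence assumption from the quote corresponds here to the biomarker and the event time sharing no information beyond the class label: given the class, trajectory and hazard are evaluated separately.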

“The longitudinal genetic-demographic model (or the genetic-demographic model for longitudinal data) […] combines three sources of information in the likelihood function: (1) follow-up data on survival (or, generally, on some time-to-event) for genotyped individuals; (2) (cross-sectional) information on ages at biospecimen collection for genotyped individuals; and (3) follow-up data on survival for non-genotyped individuals. […] Such joint analyses of genotyped and non-genotyped individuals can result in substantial improvements in statistical power and accuracy of estimates compared to analyses of the genotyped subsample alone if the proportion of non-genotyped participants is large. Situations in which genetic information cannot be collected for all participants of longitudinal studies are not uncommon. They can arise for several reasons: (1) the longitudinal study may have started some time before genotyping was added to the study design so that some initially participating individuals dropped out of the study (i.e., died or were lost to follow-up) by the time of genetic data collection; (2) budget constraints prohibit obtaining genetic information for the entire sample; (3) some participants refuse to provide samples for genetic analyses. Nevertheless, even when genotyped individuals constitute a majority of the sample or the entire sample, application of such an approach is still beneficial […] The genetic stochastic process model […] adds a new dimension to genetic biodemographic analyses, combining information on longitudinal measurements of biomarkers available for participants of a longitudinal study with follow-up data and genetic information.
Such joint analyses of different sources of information collected in both genotyped and non-genotyped individuals allow for more efficient use of the research potential of longitudinal data which otherwise remains underused when only genotyped individuals or only subsets of available information (e.g., only follow-up data on genotyped individuals) are involved in analyses. Similar to the longitudinal genetic-demographic model […], the benefits of combining data on genotyped and non-genotyped individuals in the genetic SPM come from the presence of common parameters describing characteristics of the model for genotyped and non-genotyped subsamples of the data. This takes into account the knowledge that the non-genotyped subsample is a mixture of carriers and non-carriers of the same alleles or genotypes represented in the genotyped subsample and applies the ideas of heterogeneity analyses […] When the non-genotyped subsample is substantially larger than the genotyped subsample, these joint analyses can lead to a noticeable increase in the power of statistical estimates of genetic parameters compared to estimates based only on information from the genotyped subsample. This approach is applicable not only to genetic data but to any discrete time-independent variable that is observed only for a subsample of individuals in a longitudinal study.”
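The way non-genotyped individuals enter the likelihood can be illustrated with a toy example (mine, not the book’s; exponential survival, made-up rates and carrier frequency): a genotyped individual contributes the likelihood for his own genotype, whereas a non-genotyped individual contributes a carrier/non-carrier mixture weighted by the population carrier frequency:

```python
import math

P_CARRIER = 0.3                    # population carrier frequency (invented)
RATE = {True: 0.04, False: 0.02}   # carrier / non-carrier mortality rates

def lik_exponential(t, rate, died=True):
    """Exponential-survival likelihood contribution: the density if the
    individual died at age t, the survival probability if censored."""
    return rate * math.exp(-rate * t) if died else math.exp(-rate * t)

def lik_individual(t, genotype=None, died=True):
    """Genotyped individuals (genotype True/False) use their own rate;
    non-genotyped ones (genotype None) contribute a mixture weighted by
    the carrier frequency."""
    if genotype is not None:
        return lik_exponential(t, RATE[genotype], died)
    return (P_CARRIER * lik_exponential(t, RATE[True], died)
            + (1.0 - P_CARRIER) * lik_exponential(t, RATE[False], died))

l_carrier = lik_individual(60.0, genotype=True)
l_noncarrier = lik_individual(60.0, genotype=False)
l_unknown = lik_individual(60.0)   # lies between the two pure contributions
print(l_carrier, l_noncarrier, l_unknown)
```

Because the rates and the carrier frequency appear in both the genotyped and the non-genotyped contributions, a large non-genotyped subsample sharpens the estimates of the genetic parameters, which is the point of the quoted passage.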

“Despite an existing tradition of interpreting differences in the shapes or parameters of the mortality rates (survival functions) resulting from the effects of exposure to different conditions or other interventions in terms of characteristics of individual aging, this practice has to be used with care. This is because such characteristics are difficult to interpret in terms of properties of external and internal processes affecting the chances of death. An important question then is: What kind of mortality model has to be developed to obtain parameters that are biologically interpretable? The purpose of this chapter is to describe an approach to mortality modeling that represents mortality rates in terms of parameters of physiological changes and declining health status accompanying the process of aging in humans. […] A traditional (demographic) description of changes in individual health/survival status is performed using a continuous-time random Markov process with a finite number of states, and age-dependent transition intensity functions (transition rates). Transitions to the absorbing state are associated with death, and the corresponding transition intensity is a mortality rate. Although such a description characterizes connections between health and mortality, it does not allow for studying factors and mechanisms involved in the aging-related health decline. Numerous epidemiological studies provide compelling evidence that health transition rates are influenced by a number of factors. Some of them are fixed at the time of birth […]. Others experience stochastic changes over the life course […] The presence of such randomly changing influential factors violates the Markov assumption, and makes the description of aging-related changes in health status more complicated.
[…] The age dynamics of influential factors (e.g., physiological variables) in connection with mortality risks has been described using a stochastic process model of human mortality and aging […]. Recent extensions of this model have been used in analyses of longitudinal data on aging, health, and longevity, collected in the Framingham Heart Study […] This model and its extensions are described in terms of a Markov stochastic process satisfying a diffusion-type stochastic differential equation. The stochastic process is stopped at random times associated with individuals’ deaths. […] When an individual’s health status is taken into account, the coefficients of the stochastic differential equations become dependent on values of the jumping process. This dependence violates the Markov assumption and renders the conditional Gaussian property invalid. So the description of this (continuously changing) component of aging-related changes in the body also becomes more complicated. Since studying age trajectories of physiological states in connection with changes in health status and mortality would provide more realistic scenarios for analyses of available longitudinal data, it would be a good idea to find an appropriate mathematical description of the joint evolution of these interdependent processes in aging organisms. For this purpose, we propose a comprehensive model of human aging, health, and mortality in which the Markov assumption is fulfilled by a two-component stochastic process consisting of jumping and continuously changing processes. The jumping component is used to describe relatively fast changes in health status occurring at random times, and the continuous component describes relatively slow stochastic age-related changes of individual physiological states. 
[…] The use of stochastic differential equations for random continuously changing covariates has been studied intensively in the analysis of longitudinal data […] Such a description is convenient since it captures the feedback mechanism typical of biological systems reflecting regular aging-related changes and takes into account the presence of random noise affecting individual trajectories. It also captures the dynamic connections between aging-related changes in health and physiological states, which are important in many applications.”
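A minimal Euler–Maruyama sketch of the kind of diffusion described in the quote: dY = a·(f(t) − Y)·dt + b·dW, where the drift pulls the trajectory back toward an age-dependent "target" f(t) (the feedback mechanism) and b·dW is the random noise disturbing individual trajectories. The quadratic f(t) and all coefficients are my own inventions, loosely mimicking a blood-pressure-like variable:

```python
import math
import random

random.seed(3)

def f(t):
    # invented age-dependent target: rises through midlife, declines later
    return 120 + 0.5 * t - 0.004 * t ** 2

def trajectory(y0=118.0, t0=40.0, t1=90.0, dt=0.1, a=0.3, b=2.0):
    """Euler-Maruyama discretization of dY = a*(f(t) - Y) dt + b dW."""
    y, t, path = y0, t0, []
    while t < t1:
        dw = random.gauss(0.0, math.sqrt(dt))   # Wiener increment
        y += a * (f(t) - y) * dt + b * dw       # feedback drift + noise
        t += dt
        path.append((t, y))
    return path

path = trajectory()
y_end = path[-1][1]
print(round(y_end, 1))
```

The feedback coefficient a controls how tightly an individual trajectory tracks the regular age pattern; b controls the random scatter around it — the two features the quote singles out as convenient about this description.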

April 23, 2017 Posted by | Biology, Books, Demographics, Genetics, Mathematics, Statistics | Leave a comment

Biodemography of aging (III)

Latent class representation of the Grade of Membership model.
Singular value decomposition.
Affine space.
Lebesgue measure.
General linear position.

The links above are links to topics I looked up while reading the second half of the book. The first link is quite relevant to the book’s coverage as a comprehensive longitudinal Grade of Membership (-GoM) model is covered in chapter 17. Relatedly, chapter 18 covers linear latent structure (-LLS) models, and as observed in the book LLS is a generalization of GoM. As should be obvious from the nature of the links some of the stuff included in the second half of the text is highly technical, and I’ll readily admit I was not fully able to understand all the details included in the coverage of chapters 17 and 18 in particular. On account of the technical nature of the coverage in Part 2 I’m not sure I’ll cover the second half of the book in much detail, though I probably shall devote at least one more post to some of those topics, as they were quite interesting even if some of the details were difficult to follow.

I have almost finished the book at this point, and I have already decided both to give the book five stars and to include it on my list of favorite books on goodreads; it’s really well written, and it provides consistently detailed coverage of very high quality. As I also noted in the first post about the book, the authors have given readability aspects some thought, and I am sure most readers would learn quite a bit from this text even if they were to skip some of the more technical chapters. The main body of Part 2 of the book, the subtitle of which is ‘Statistical Modeling of Aging, Health, and Longevity’, is however probably not worth the effort of reading unless you have a solid background in statistics.

This post includes some observations and quotes from the last chapters of the book’s Part 1.

“The proportion of older adults in the U.S. population is growing. This raises important questions about the increasing prevalence of aging-related diseases, multimorbidity issues, and disability among the elderly population. […] In 2009, 46.3 million people were covered by Medicare: 38.7 million of them were aged 65 years and older, and 7.6 million were disabled […]. By 2031, when the baby-boomer generation will be completely enrolled, Medicare is expected to reach 77 million individuals […]. Because the Medicare program covers 95 % of the nation’s aged population […], the prediction of future Medicare costs based on these data can be an important source of health care planning.”

“Three essential components (which could also be referred to as sub-models) need to be developed to construct a modern model of forecasting of population health and associated medical costs: (i) a model of medical cost projections conditional on each health state in the model, (ii) health state projections, and (iii) a description of the distribution of initial health states of a cohort to be projected […] In making medical cost projections, two major effects should be taken into account: the dynamics of the medical costs during the time periods comprising the date of onset of chronic diseases and the increase of medical costs during the last years of life. In this chapter, we investigate and model the first of these two effects. […] the approach developed in this chapter generalizes the approach known as “life tables with covariates” […], resulting in a new family of forecasting models with covariates such as comorbidity indexes or medical costs. In sum, this chapter develops a model of the relationships between individual cost trajectories following the onset of aging-related chronic diseases. […] The underlying methodological idea is to aggregate the health state information into a single (or several) covariate(s) that can be determinative in predicting the risk of a health event (e.g., disease incidence) and whose dynamics could be represented by the model assumptions. An advantage of such an approach is its substantial reduction of the degrees of freedom compared with existing forecasting models (e.g., the FEM model, Goldman and RAND Corporation 2004).
[…] We found that the time patterns of medical cost trajectories were similar for all diseases considered and can be described in terms of four components having the meanings of (i) the pre-diagnosis cost associated with initial comorbidity represented by medical expenditures, (ii) the cost peak associated with the onset of each disease, (iii) the decline/reduction in medical expenditures after the disease onset, and (iv) the difference between post- and pre-diagnosis cost levels associated with an acquired comorbidity. The description of the trajectories was formalized by a model which explicitly involves four parameters reflecting these four components.”
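A toy parameterization of the four components just listed (the dollar figures and the exponential-decay form of the post-onset decline are my own inventions, not the authors' estimated model) might look like this:

```python
import math

# Invented illustration of the four cost-trajectory components:
# m = months relative to the month of disease onset (m = 0 at diagnosis).
def monthly_cost(m,
                 pre=500.0,      # (i)   pre-diagnosis comorbidity cost level
                 peak=8000.0,    # (ii)  cost peak at disease onset
                 decline=0.5,    # (iii) per-month decay rate of the peak
                 delta=300.0):   # (iv)  post- minus pre-diagnosis level
    if m < 0:
        return pre
    return pre + delta + (peak - pre - delta) * math.exp(-decline * m)

assert monthly_cost(-6) == 500.0      # stable pre-diagnosis cost
assert monthly_cost(0) == 8000.0      # peak in the onset month
# long after onset, costs settle at the new, higher comorbidity level:
assert abs(monthly_cost(36) - 800.0) < 1.0
print([round(monthly_cost(m)) for m in (-3, 0, 3, 12, 36)])
```

Each of the four parameters maps one-to-one onto a component of the description quoted above, which is what makes the fitted parameters directly interpretable.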

As I noted earlier in my coverage of the book, I don’t think the model above fully captures all relevant cost contributions of the diseases included, as the follow-up period was too short to capture all relevant costs to be included in the part iv model component. This is definitely a problem in the context of diabetes. But then again nothing in theory stops people from combining the model above with other models which are better at dealing with the excess costs associated with long-term complications of chronic diseases, and the model results were intriguing even if the model likely underperforms in a few specific disease contexts.

Moving on…

“Models of medical cost projections usually are based on regression models estimated with the majority of independent predictors describing demographic status of the individual, patient’s health state, and level of functional limitations, as well as their interactions […]. If the health state needs to be described by a number of simultaneously manifested diseases, then detailed stratification over the categorized variables or use of multivariate regression models allows for a better description of the health states. However, it can result in an abundance of model parameters to be estimated. One way to overcome these difficulties is to use an approach in which the model components are demographically-based aggregated characteristics that mimic the effects of specific states. The model developed in this chapter is an example of such an approach: the use of a comorbidity index rather than of a set of correlated categorical regressor variables to represent the health state allows for an essential reduction in the degrees of freedom of the problem.”
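The aggregation idea can be illustrated in a couple of lines; the weights below are loosely Charlson-style guesses of my own, not the index the chapter actually uses:

```python
# Instead of one regressor (plus interactions) per disease, collapse the
# health state into a single weighted index.  Weights here are invented.
WEIGHTS = {"diabetes": 1, "chf": 1, "copd": 1,
           "renal_disease": 2, "metastatic_cancer": 6}

def comorbidity_index(diagnoses):
    """Sum the weights of the diseases present; unknown codes count zero."""
    return sum(WEIGHTS.get(d, 0) for d in diagnoses)

# One covariate now summarizes 2**len(WEIGHTS) possible disease combinations:
patient = ["diabetes", "renal_disease"]
print(comorbidity_index(patient))   # -> 3
```

With five diseases a fully stratified model would need up to 2⁵ = 32 cells (more with interactions); the index replaces them with a single degree of freedom, which is exactly the trade-off the quote describes.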

“Unlike mortality, the onset time of chronic disease is difficult to define with high precision due to the large variety of disease-specific criteria for onset/incident case identification […] there is always some arbitrariness in defining the date of chronic disease onset, and a unified definition of date of onset is necessary for population studies with a long-term follow-up.”

“Individual age trajectories of physiological indices are the product of a complicated interplay among genetic and non-genetic (environmental, behavioral, stochastic) factors that influence the human body during the course of aging. Accordingly, they may differ substantially among individuals in a cohort. Despite this fact, the average age trajectories for the same index follow remarkable regularities. […] some indices tend to change monotonically with age: the level of blood glucose (BG) increases almost monotonically; pulse pressure (PP) increases from age 40 until age 85, then levels off and shows a tendency to decline only at later ages. The age trajectories of other indices are non-monotonic: they tend to increase first and then decline. Body mass index (BMI) increases up to about age 70 and then declines, diastolic blood pressure (DBP) increases until age 55–60 and then declines, systolic blood pressure (SBP) increases until age 75 and then declines, serum cholesterol (SCH) increases until age 50 in males and age 70 in females and then declines, ventricular rate (VR) increases until age 55 in males and age 45 in females and then declines. With small variations, these general patterns are similar in males and females. The shapes of the age-trajectories of the physiological variables also appear to be similar for different genotypes. […] The effects of these physiological indices on mortality risk were studied in Yashin et al. (2006), who found that the effects are gender and age specific. They also found that the dynamic properties of the individual age trajectories of physiological indices may differ dramatically from one individual to the next.”

“An increase in the mortality rate with age is traditionally associated with the process of aging. This influence is mediated by aging-associated changes in thousands of biological and physiological variables, some of which have been measured in aging studies. The fact that the age trajectories of some of these variables differ among individuals with short and long life spans and healthy life spans indicates that dynamic properties of the indices affect life history traits. Our analyses of the FHS data clearly demonstrate that the values of physiological indices at age 40 are significant contributors both to life span and healthy life span […] suggesting that normalizing these variables around age 40 is important for preventing age-associated morbidity and mortality later in life. […] results [also] suggest that keeping physiological indices stable over the years of life could be as important as their normalizing around age 40.”

“The results […] indicate that, in the quest of identifying longevity genes, it may be important to look for candidate genes with pleiotropic effects on more than one dynamic characteristic of the age-trajectory of a physiological variable, such as genes that may influence both the initial value of a trait (intercept) and the rates of its changes over age (slopes). […] Our results indicate that the dynamic characteristics of age-related changes in physiological variables are important predictors of morbidity and mortality risks in aging individuals. […] We showed that the initial value (intercept), the rate of changes (slope), and the variability of a physiological index, in the age interval 40–60 years, significantly influenced both mortality risk and onset of unhealthy life at ages 60+ in our analyses of the Framingham Heart Study data. That is, these dynamic characteristics may serve as good predictors of late life morbidity and mortality risks. The results also suggest that physiological changes taking place in the organism in middle life may affect longevity through promoting or preventing diseases of old age. For non-monotonically changing indices, we found that having a later age at the peak value of the index […], a lower peak value […], a slower rate of decline in the index at older ages […], and less variability in the index over time, can be beneficial for longevity. Also, the dynamic characteristics of the physiological indices were, overall, associated with mortality risk more significantly than with onset of unhealthy life.”

“Decades of studies of candidate genes show that they are not linked to aging-related traits in a straightforward manner […]. Recent genome-wide association studies (GWAS) have reached fundamentally the same conclusion by showing that the traits in late life likely are controlled by a relatively large number of common genetic variants […]. Further, GWAS often show that the detected associations are of tiny effect […] the weak effect of genes on traits in late life can be not only because they confer small risks having small penetrance but because they confer large risks but in a complex fashion […] In this chapter, we consider several examples of complex modes of gene actions, including genetic tradeoffs, antagonistic genetic effects on the same traits at different ages, and variable genetic effects on lifespan. The analyses focus on the APOE common polymorphism. […] The analyses reported in this chapter suggest that the e4 allele can be protective against cancer with a more pronounced role in men. This protective effect is more characteristic of cancers at older ages and it holds in both the parental and offspring generations of the FHS participants. Unlike cancer, the effect of the e4 allele on risks of CVD is more pronounced in women. […] [The] results […] explicitly show that the same allele can change its role on risks of CVD in an antagonistic fashion from detrimental in women with onsets at younger ages to protective in women with onsets at older ages. […] e4 allele carriers have worse survival compared to non-e4 carriers in each cohort. […] Sex stratification shows sexual dimorphism in the effect of the e4 allele on survival […] with the e4 female carriers, particularly, being more exposed to worse survival. […] The results of these analyses provide two important insights into the role of genes in lifespan. First, they provide evidence on the key role of aging-related processes in genetic susceptibility to lifespan. 
For example, taking into account the specifics of aging-related processes gains 18 % in estimates of the RRs and five orders of magnitude in significance in the same sample of women […] without additional investments in increasing sample sizes and new genotyping. The second is that a detailed study of the role of aging-related processes in estimates of the effects of genes on lifespan (and healthspan) helps in detecting more homogeneous [high risk] sub-samples”.

“The aging of populations in developed countries requires effective strategies to extend healthspan. A promising solution could be to yield insights into the genetic predispositions for endophenotypes, diseases, well-being, and survival. It was thought that genome-wide association studies (GWAS) would be a major breakthrough in this endeavor. Various genetic association studies including GWAS assume that there should be a deterministic (unconditional) genetic component in such complex phenotypes. However, the idea of unconditional contributions of genes to these phenotypes faces serious difficulties which stem from the lack of direct evolutionary selection against or in favor of such phenotypes. In fact, evolutionary constraints imply that genes should be linked to age-related phenotypes in a complex manner through different mechanisms specific for given periods of life. Accordingly, the linkage between genes and these traits should be strongly modulated by age-related processes in a changing environment, i.e., by the individuals’ life course. The inherent sensitivity of genetic mechanisms of complex health traits to the life course will be a key concern as long as genetic discoveries continue to be aimed at improving human health.”

“Despite the common understanding that age is a risk factor of not just one but a large portion of human diseases in late life, each specific disease is typically considered as a stand-alone trait. Independence of diseases was a plausible hypothesis in the era of infectious diseases caused by different strains of microbes. Unlike those diseases, the exact etiology and precursors of diseases in late life are still elusive. It is clear, however, that the origin of these diseases differs from that of infectious diseases and that age-related diseases reflect a complicated interplay among ontogenetic changes, senescence processes, and damages from exposures to environmental hazards. Studies of the determinants of diseases in late life provide insights into a number of risk factors, apart from age, that are common for the development of many health pathologies. The presence of such common risk factors makes chronic diseases and hence risks of their occurrence interdependent. This means that the results of many calculations using the assumption of disease independence should be used with care. Chapter 4 argued that disregarding potential dependence among diseases may seriously bias estimates of potential gains in life expectancy attributable to the control or elimination of a specific disease and that the results of the process of coping with a specific disease will depend on the disease elimination strategy, which may affect mortality risks from other diseases.”

April 17, 2017 Posted by | Biology, Books, Cancer/oncology, Demographics, Economics, Epidemiology, Genetics, Medicine, Statistics | Leave a comment

Biodemography of aging (II)

In my first post about the book I included a few general remarks about the book and what it’s about. In this post I’ll continue my coverage of the book, starting with a few quotes from and observations related to the content in chapter 4 (‘Evidence for Dependence Among Diseases’).

“To compare the effects of public health policies on a population’s characteristics, researchers commonly estimate potential gains in life expectancy that would result from eradication or reduction of selected causes of death. For example, Keyfitz (1977) estimated that eradication of cancer would result in 2.265 years of increase in male life expectancy at birth (or by 3 % compared to its 1964 level). Lemaire (2005) found that the potential gain in the U.S. life expectancy from cancer eradication would not exceed 3 years for both genders. Conti et al. (1999) calculated that the potential gain in life expectancy from cancer eradication in Italy would be 3.84 years for males and 2.77 years for females. […] All these calculations assumed independence between cancer and other causes of death. […] for today’s populations in developed countries, where deaths from chronic non-communicable diseases are in the lead, this assumption might no longer be valid. An important feature of such chronic diseases is that they often develop in clusters manifesting positive correlations with each other. The conventional view is that, in a case of such dependence, the effect of cancer eradication on life expectancy would be even smaller.”

I think the great majority of people you asked would have assumed that the beneficial effect of hypothetical cancer eradication in humans on human life expectancy would be much larger than this, but that’s just an impression. I’ve seen estimates like these before, so I was not surprised – but I think many people would be if they knew this. A very large number of people die as a result of developing cancer today, but the truth of the matter is that if they hadn’t died from cancer they’d have died anyway, and on average probably not really all that much later. I linked to Richard Alexander’s comments on this topic in my last post about the book, and again his observations apply so I thought I might as well add the relevant quote from the book here:

“In the course of working against senescence, selection will tend to remove, one by one, the most frequent sources of mortality as a result of senescence. Whenever a single cause of mortality, such as a particular malfunction of any vital organ, becomes the predominant cause of mortality, then selection will more effectively reduce the significance of that particular defect (meaning those who lack it will outreproduce) until some other achieves greater relative significance. […] the result will be that all organs and systems will tend to deteriorate together. […] The point is that as we age, and as senescence proceeds, large numbers of potential sources of mortality tend to lurk ever more malevolently just “below the surface,” so that, unfortunately, the odds are very high against any dramatic lengthening of the maximum human lifetime through technology.”

Remove one cause of death and there are plenty of others standing in line behind it. We already knew that; two hundred years ago one out of every four deaths in England was the result of tuberculosis, but developing treatments for tuberculosis and other infectious diseases did not mean that English people stopped dying; these days they just die from cardiovascular disease and cancer instead. Do note in the context of that quote that Alexander is talking about the maximum human lifetime, not average life expectancy; again, we know and have known for a long time that human technology can have a dramatic effect on the latter variable. Of course a shift in one distribution is likely to have spill-over effects on the other (if more people are alive at age 70, the pool of people who might live on to reach e.g. 100 years is larger, even if the mortality rate for the 70-100 year old group did not change); the point is just that these are secondary effects, and they are likely to be marginal at best.
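The small gains quoted above (~2–3 years from cancer eradication) are easy to reproduce with a back-of-the-envelope cause-deleted life table under the same independence assumption. The Gompertz hazard and the flat 15% cancer share of mortality below are invented round numbers, not figures from the book:

```python
import math

def life_expectancy(hazard, dt=0.01, horizon=120.0):
    """Numerically integrate the survival curve implied by a hazard."""
    e, surv, t = 0.0, 1.0, 0.0
    while t < horizon:
        e += surv * dt
        surv *= math.exp(-hazard(t) * dt)
        t += dt
    return e

def total(t):
    return 0.0001 * math.exp(0.09 * t)    # all-cause Gompertz hazard

def without_cancer(t):
    return 0.85 * total(t)                # delete a constant 15% cancer share

e_total = life_expectancy(total)
gain = life_expectancy(without_cancer) - e_total
print(round(e_total, 1), round(gain, 1))
```

For a Gompertz hazard with slope b = 0.09, removing a constant 15% share just shifts the survival curve right by ln(1/0.85)/b ≈ 1.8 years — the same order of magnitude as the Keyfitz, Lemaire and Conti et al. figures quoted above, and all of this still assumes the deleted cause is independent of the rest.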

Anyway, some more stuff from the chapter. Just like the previous chapter in the book did, this one also includes analyses of very large data sets:

“The Multiple Cause of Death (MCD) data files contain information about underlying and secondary causes of death in the U.S. during 1968–2010. In total, they include more than 65 million individual death certificate records. […] we used data for the period 1979–2004.”

There’s some formal modelling stuff in the chapter which I won’t go into in detail here; this is the chapter in which I encountered the comment about ‘the multivariate lognormal frailty model’ I included in my first post about the book. One of the things the chapter looks at is the joint frequencies of deaths from cancer and other fatal diseases; it turns out that there are multiple diseases that are negatively associated with cancer as a cause of death when you look at the population-level data mentioned above. The chapter goes into some of the biological mechanisms which may help explain why these associations look the way they do, and I’ll quote a little from that part of the coverage. A key idea here is (as always..?) that there are tradeoffs at play; some genetic variants may help protect you against e.g. cancer, but at the same time increase the risk of other diseases for the same reason that they protect you against cancer. In the context of the relationship between cancer deaths and deaths from other diseases they note in the conclusion that: “One potential biological mechanism underlying the negative correlation among cancer and other diseases could be related to the differential role of apoptosis in the development of these diseases.” The chapter covers that stuff in significantly more detail, and I decided to add some observations from the chapter on these topics below:
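How a shared frailty term can induce dependence between causes of death is easy to sketch. This is a two-cause toy version of my own (invented rates and frailty spread), not the chapter's multivariate lognormal frailty model itself: two latent exponential death times whose rates are scaled by correlated lognormal frailties, so the sign of the frailty correlation carries over to the death times.

```python
import math
import random

random.seed(4)

SIGMA, RATE = 0.7, 0.01   # invented frailty spread and baseline hazards

def correlated_frailties(rho):
    """Bivariate lognormal frailties; underlying normals have correlation rho."""
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
    return math.exp(SIGMA * z1), math.exp(SIGMA * z2)

def latent_times(rho):
    f1, f2 = correlated_frailties(rho)
    return random.expovariate(RATE * f1), random.expovariate(RATE * f2)

def pearson(pairs):
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sxy / (sx * sy)

n = 20000
r_neg = pearson([latent_times(-0.5) for _ in range(n)])
r_pos = pearson([latent_times(+0.5) for _ in range(n)])
print(round(r_neg, 3), round(r_pos, 3))
```

A negative frailty correlation produces negatively correlated death times from the two causes — the qualitative pattern (e.g. cancer vs. other diseases) that the population-level MCD analyses pick up.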

“Studying the role of the p53 gene in the connection between cancer and cellular aging, Campisi (2002, 2003) suggested that longevity may depend on a balance between tumor suppression and tissue renewal mechanisms. […] Although the mechanism by which p53 regulates lifespan remains to be determined, […] findings highlight the possibility that careful manipulation of p53 activity during adult life may result in beneficial effects on healthy lifespan. Other tumor suppressor genes are also involved in regulation of longevity. […] In humans, Dumont et al. (2003) demonstrated that a replacement of arginine (Arg) by proline (Pro) at position 72 of human p53 decreases its ability to initiate apoptosis, suggesting that these variants may differently affect longevity and vulnerability to cancer. Van Heemst et al. (2005) showed that individuals with the Pro/Pro genotype of p53 corresponding to reduced apoptosis in cells had significantly increased overall survival (by 41%) despite a more than twofold increased proportion of cancer deaths at ages 85+, together with a decreased proportion of deaths from senescence related causes such as COPD, fractures, renal failure, dementia, and senility. It was suggested that human p53 may protect against cancer but at a cost of longevity. […] Other biological factors may also play opposing roles in cancer and aging and thus contribute to respective trade-offs […]. E.g., higher levels of IGF-1 [have been] linked to both cancer and attenuation of phenotypes of physical senescence, such as frailty, sarcopenia, muscle atrophy, and heart failure, as well as to better muscle regeneration”.

“The connection between cancer and longevity may potentially be mediated by trade-offs between cancer and other diseases which do not necessarily involve any basic mechanism of aging per se. In humans, it could result, for example, from trade-offs between vulnerabilities to cancer and AD, or to cancer and CVD […] There may be several biological mechanisms underlying the negative correlation among cancer and these diseases. One can be related to the differential role of apoptosis in their development. For instance, in stroke, the number of dying neurons following brain ischemia (and thus probability of paralysis or death) may be less in the case of a downregulated apoptosis. As for cancer, the downregulated apoptosis may, conversely, mean a higher risk of the disease because more cells may survive damage associated with malignant transformation. […] Also, the role of the apoptosis may be different or even opposite in the development of cancer and Alzheimer’s disease (AD). Indeed, suppressed apoptosis is a hallmark of cancer, while increased apoptosis is a typical feature of AD […]. If so, then chronically upregulated apoptosis (e.g., due to a genetic polymorphism) may potentially be protective against cancer, but be deleterious in relation to AD. […] Increased longevity can be associated not only with increased but also with decreased chances of cancer. […] The most popular to-date “anti-aging” intervention, caloric restriction, often results in increased maximal life span along with reduced tumor incidence in laboratory rodents […] Because the rate of apoptosis was significantly and consistently higher in food restricted mice regardless of age, James et al. (1998) suggested that caloric restriction may have a cancer-protective effect primarily due to the upregulated apoptosis in these mice.”

Below I’ll discuss content covered in chapter 5, which deals with ‘Factors That May Increase Vulnerability to Cancer and Longevity in Modern Human Populations’. I’ll start out with a few quotes:

“Currently, the overall cancer incidence rate (age-adjusted) in the less developed world is roughly half that seen in the more developed world […] For countries with similar levels of economic development but different climate and ethnic characteristics […], the cancer rate patterns look much more similar than for the countries that share the same geographic location, climate, and ethnic distribution, but differ in the level of economic development […]. This suggests that different countries may share common factors linked to economic prosperity that could be primarily responsible for the modern increases in overall cancer risk. […] Population aging (increases in the proportion of older people) may […] partly explain the rise in the global cancer burden […]; however, it cannot explain increases in age-specific cancer incidence rates over time […]. Improved diagnostics and elevated exposures to carcinogens may explain increases in rates for selected cancer sites, but they cannot fully explain the increase in the overall cancer risk, nor incidence rate trends for most individual cancers (Jemal et al. 2008, 2013).”

“[W]e propose that the association between the overall cancer risk and the economic progress and spread of the Western lifestyle could in part be explained by the higher proportion of individuals more susceptible to cancer in the populations of developed countries, and discuss several mechanisms of such an increase in the proportion of the vulnerable. […] mechanisms include but are not limited to: (i) Improved survival of frail individuals. […] (ii) Avoiding or reducing traditional exposures. Excessive disinfection and hygiene typical of the developed world can diminish exposure to some factors that were abundant in the past […] Insufficiently or improperly trained immune systems may be less capable of resisting cancer. (iii) Burden of novel exposures. Some new medicines, cleaning agents, foods, etc., that are not carcinogenic themselves may still affect the natural ways of processing carcinogens in the body, and through this increase a person’s susceptibility to established carcinogens. [If this one sounds implausible to you, I’ll remind you that drug metabolism is complicated – US] […] (iv) Some of the factors linked to economic prosperity and the Western lifestyle (e.g., delayed childbirth and food enriched with growth factors) may antagonistically influence aging and cancer risk.”

They provide detailed coverage of all of these mechanisms in the chapter, below I have included a few select observations from that part of the coverage.

“There was a dramatic decline in infant and childhood mortality in developed countries during the last century. For example, the infant mortality rate in the United States was about 6 % of live births in 1935, 3 % in 1950, 1.3 % in 1980, and 0.6 % in 2010. That is, it declined tenfold over the course of 75 years […] Because almost all children (including those with immunity deficiencies) survive, the proportion of the children who are inherently more vulnerable could be higher in the more developed countries. This is consistent with a typically higher proportion of children with chronic inflammatory immune disorders such as asthma and allergy in the populations of developed countries compared to less developed ones […] Over-reduction of such traditional exposures may result in an insufficiently/improperly trained immune system early in life, which could make it less able to resist diseases, including cancer later in life […] There is accumulating evidence of the important role of these effects in cancer risk. […] A number of studies have connected excessive disinfection and lack of antigenic stimulation (especially in childhood) of the immune system in Westernized communities with increased risks of both chronic inflammatory diseases and cancer […] The IARC data on migrants to Israel […] allow for comparison of the age trajectories of cancer incidence rates between adult Jews who live in Israel but were born in other countries […] [These data] show that Jews born in less developed regions (Africa and Asia) have overall lower cancer risk than those born in the more developed regions (Europe and America).  The discrepancy is unlikely to be due to differences in cancer diagnostics because at the moment of diagnosis all these people were citizens of the same country with the same standard of medical care. 
These results suggest that surviving childhood and growing up in a less developed country with diverse environmental exposures might help form resistance to cancer that lasts even after moving to a high risk country.”

I won’t go much into the ‘burden of novel exposures’ part, but I should note that exposures that may be relevant include factors like paracetamol use and antibiotics for treatment of H. pylori. Paracetamol is not considered carcinogenic by the IARC, but we know from animal studies that if you give rats paracetamol and then expose them to an established carcinogen (with the straightforward name N-nitrosoethyl-N-hydroxyethylamine), the number of rats developing kidney cancer goes up. In the context of H. pylori, we know that the bacterium may cause stomach cancer, but when you treat rats with metronidazole (which is used to treat H. pylori) and expose them to an established carcinogen, they’re more likely to develop colon cancer. The link between colon cancer and antibiotic use has been noted in other contexts as well; decreased microbial diversity after antibiotic use may lead to suppression of the bifidobacteria and promotion of E. coli in the colon, the metabolic products of which may lead to increased cancer risk. Over time an increase in colon cancer risk and a decrease in stomach cancer risk have been observed in developed societies, but aside from changes in diet another factor which may play a role is population-wide exposure to antibiotics. Colon and stomach cancers are incidentally not the only ones of interest in this particular context; it has also been found that exposure to chloramphenicol, a broad-spectrum antibiotic used since the 1940s, increases the risk of lymphoma in mice when the mice are exposed to a known carcinogen, despite the drug itself again not being clearly carcinogenic on its own.

Many new exposures aside from antibiotics are of course relevant. Two other drug-related ones that might be worth mentioning are hormone replacement therapy and contraceptives. HRT is not as commonly used today as it was in the past, but to give some idea of the scope here, half of all women in the US aged 50-65 are estimated to have been on HRT at the peak of its use, around the turn of the millennium, and HRT is assumed to be partly responsible for the higher incidence of hormone-related cancers observed in female populations living in developed countries. It’s of some note that the use of HRT dropped dramatically shortly after this peak (from 61 million prescriptions in 2001 to 21 million in 2004), and that the incidence of estrogen-receptor positive cancers subsequently dropped. As for oral contraceptives, these have been in use since the 1960s, and combined hormonal contraceptives are known to increase the risk of liver and breast cancer, while seemingly also having a protective effect against endometrial cancer and ovarian cancer. The authors speculate that some of the cancer incidence changes observed in the US during the latter half of the last century, with a decline in female endometrial and ovarian cancer combined with an increase in breast and liver cancer, could in part be related to widespread use of these drugs. An estimated 10% of all women of reproductive age worldwide, and 16% of those living in the US, are currently using combined hormonal contraceptives. In the context of the protective effect of the drugs, it should perhaps be noted that endometrial cancer in particular is strongly linked to obesity, so if you are not overweight you are relatively low-risk.

Many ‘exposures’ in a cancer context are not drug-related. For example, women in Western societies tend to go into menopause at a higher age, and higher age at menopause has been associated with hormone-related cancers; but again the picture is not clear in terms of how the variable affects longevity, considering that later menopause has also been linked to increased longevity in several large studies. In those studies the women did have higher mortality from the hormone-related cancers, but on the other hand they were less likely to die from some of the other causes, such as pneumonia, influenza, and falls. Age at childbirth is also a variable where there are significant differences between developed and developing countries, and this variable may also be relevant to cancer incidence as it has been linked to breast cancer and melanoma; in one study women who first gave birth after the age of 35 had a 40% increased risk of breast cancer compared to mothers who gave birth before the age of 20 (good luck ‘controlling for everything’ in a context like that, but…), and in a meta-analysis the relative risk for melanoma was 1.47 for women in the oldest age-at-first-birth group, compared to the youngest (again, good luck controlling for everything, but at least it’s not just one study). Lest you think this literature only deals with women, it’s also been found that parental age seems to be linked to cancers in the offspring (higher parental age -> higher cancer risk in the offspring), though the effect sizes are not mentioned in the coverage.

Here’s what they conclude at the end of the chapter:

“Some of the factors associated with economic prosperity and a Western lifestyle may influence both aging and vulnerability to cancer, sometimes oppositely. Current evidence supports a possibility of trade-offs between cancer and aging-related phenotypes […], which could be influenced by delayed reproduction and exposures to growth factors […]. The latter may be particularly beneficial at very old age. This is because the higher levels of growth factors may attenuate some phenotypes of physical senescence, such as decline in regenerative and healing ability, sarcopenia, frailty, elderly fractures and heart failure due to muscle atrophy. They may also increase the body’s vulnerability to cancer, e.g., through growth promoting and anti-apoptotic effects […]. The increase in vulnerability to cancer due to growth factors can be compatible with extreme longevity because cancer is a major contributor to mortality mainly before age 85, while senescence-related causes (such as physical frailty) become major contributors to mortality at oldest old ages (85+). In this situation, the impact of growth factors on vulnerability to death could be more deleterious in middle-to-old life (~before 85) and more beneficial at older ages (85+).

The complex relationships between aging, cancer, and longevity are challenging. This complexity warns against simplified approaches to extending longevity without taking into account the possible trade-offs between phenotypes of physical aging and various health disorders, as well as the differential impacts of such tradeoffs on mortality risks at different ages (e.g., Ukraintseva and Yashin 2003a; Yashin et al. 2009; Ukraintseva et al. 2010, 2016).”

March 7, 2017 Posted by | Books, Cancer/oncology, Epidemiology, Genetics, Immunology, Medicine, Pharmacology | Leave a comment

Biodemography of aging (I)

“The goal of this monograph is to show how questions about the connections between and among aging, health, and longevity can be addressed using the wealth of available accumulated knowledge in the field, the large volumes of genetic and non-genetic data collected in longitudinal studies, and advanced biodemographic models and analytic methods. […] This monograph visualizes aging-related changes in physiological variables and survival probabilities, describes methods, and summarizes the results of analyses of longitudinal data on aging, health, and longevity in humans performed by the group of researchers in the Biodemography of Aging Research Unit (BARU) at Duke University during the past decade. […] the focus of this monograph is studying dynamic relationships between aging, health, and longevity characteristics […] our focus on biodemography/biomedical demography meant that we needed to have an interdisciplinary and multidisciplinary biodemographic perspective spanning the fields of actuarial science, biology, economics, epidemiology, genetics, health services research, mathematics, probability, and statistics, among others.”

The quotes above are from the book’s preface. In case this aspect was not clear from the comments above, this is the kind of book where you’ll randomly encounter sentences like these:

“The simplest model describing negative correlations between competing risks is the multivariate lognormal frailty model. We illustrate the properties of such model for the bivariate case.”

“The time-to-event sub-model specifies the latent class-specific expressions for the hazard rates conditional on the vector of biomarkers Yt and the vector of observed covariates X …”

…which means that some parts of the book are really hard to blog; it simply takes more effort to deal with this stuff here than it’s worth. As a result of this my coverage of the book will not provide a remotely ‘balanced view’ of the topics covered in it; I’ll skip a lot of the technical stuff because I don’t think it makes much sense to cover specific models and algorithms included in the book in detail here. However I should probably also emphasize while on this topic that although the book is in general not an easy read, it’s hard to read because ‘this stuff is complicated’, not because the authors are not trying. The authors in fact make it clear already in the preface that some chapters are easier to read than others and that some chapters are actually deliberately written as ‘guideposts and way-stations’, as they put it, in order to make it easier for the reader to find the stuff in which he or she is most interested (“the interested reader can focus directly on the chapters/sections of greatest interest without having to read the entire volume”) – they have definitely given readability aspects some thought, and I very much like the book so far; it’s full of great stuff and it’s very well written.

I have had occasion to question a few of the observations they’ve made, for example I was a bit skeptical about a few of the conclusions they drew in chapter 6 (‘Medical Cost Trajectories and Onset of Age-Associated Diseases’), but this was related to what some would certainly consider to be minor details. In the chapter they describe a model of medical cost trajectories where the post-diagnosis follow-up period is 20 months; this is in my view much too short a follow-up period to draw conclusions about medical cost trajectories in the context of type 2 diabetes, one of the diseases included in the model, which I know because I’m intimately familiar with the literature on that topic; you need to look 7-10 years ahead to get a proper sense of how this variable develops over time – and it really is highly relevant to include those later years, because if you do not you may miss out on a large proportion of the total cost, given that a substantial proportion of the total cost of diabetes relates to complications which tend to take some years to develop. If your cost analysis is based on a follow-up period as short as that of that model, you may also, on a related note, draw faulty conclusions about which medical procedures and subsidies are sensible/cost-effective in the setting of these patients, because highly adherent patients may be significantly more expensive in a short-run analysis like this one (they show up to their medical appointments and take their medications…) but much cheaper in the long run (…because they take their medications they don’t go blind or develop kidney failure). But as I say, it’s a minor point – this was one condition out of 20 included in the analysis they present, and if they’d addressed all the things that pedants like me might take issue with, the book would be twice as long and it would likely no longer be readable.
Relatedly, the model they discuss in that chapter is far from unsalvageable; it’s just that one of the components of interest –  ‘the difference between post- and pre-diagnosis cost levels associated with an acquired comorbidity’ – in the case of at least one disease is highly unlikely to be correct (given the authors’ interpretation of the variable), because there’s some stuff of relevance which the model does not include. I found the model quite interesting, despite the shortcomings, and the results were definitely surprising. (No, the above does not in my opinion count as an example of coverage of a ‘specific model […] in detail’. Or maybe it does, but I included no equations. On reflection I probably can’t promise much more than that, sometimes the details are interesting…)

Anyway, below I’ve added some quotes from the first few chapters of the book and a few remarks along the way.

“The genetics of aging, longevity, and mortality has become the subject of intensive analyses […]. However, most estimates of genetic effects on longevity in GWAS have not reached genome-wide statistical significance (after applying the Bonferroni correction for multiple testing) and many findings remain non-replicated. Possible reasons for slow progress in this field include the lack of a biologically-based conceptual framework that would drive development of statistical models and methods for genetic analyses of data [here I was reminded of Burnham & Anderson’s coverage, in particular their criticism of mindless ‘Let the computer find out’-strategies – the authors of that chapter seem to share their skepticism…], the presence of hidden genetic heterogeneity, the collective influence of many genetic factors (each with small effects), the effects of rare alleles, and epigenetic effects, as well as molecular biological mechanisms regulating cellular functions. […] Decades of studies of candidate genes show that they are not linked to aging-related traits in a straightforward fashion (Finch and Tanzi 1997; Martin 2007). Recent genome-wide association studies (GWAS) have supported this finding by showing that the traits in late life are likely controlled by a relatively large number of common genetic variants […]. Further, GWAS often show that the detected associations are of tiny size (Stranger et al. 2011).”

I think this ties in well with what I’ve previously read on these and related topics – see e.g. the second-last paragraph quoted in my coverage of Richard Alexander’s book, or some of the remarks included in Roberts et al. Anyway, moving on:

“It is well known from epidemiology that values of variables describing physiological states at a given age are associated with human morbidity and mortality risks. Much less well known are the facts that not only the values of these variables at a given age, but also characteristics of their dynamic behavior during the life course are also associated with health and survival outcomes. This chapter [chapter 8 in the book, US] shows that, for monotonically changing variables, the value at age 40 (intercept), the rate of change (slope), and the variability of a physiological variable, at ages 40–60, significantly influence both health-span and longevity after age 60. For non-monotonically changing variables, the age at maximum, the maximum value, the rate of decline after reaching the maximum (right slope), and the variability in the variable over the life course may influence health-span and longevity. This indicates that such characteristics can be important targets for preventive measures aiming to postpone onsets of complex diseases and increase longevity.”

The chapter from which the quotes in the next two paragraphs are taken was completely filled with data from the Framingham Heart Study, and it was hard for me to know what to include here and what to leave out – so you should probably just consider the stuff I’ve included below as samples of the sort of observations included in that part of the coverage.

“To mediate the influence of internal or external factors on lifespan, physiological variables have to show associations with risks of disease and death at different age intervals, or directly with lifespan. For many physiological variables, such associations have been established in epidemiological studies. These include body mass index (BMI), diastolic blood pressure (DBP), systolic blood pressure (SBP), pulse pressure (PP), blood glucose (BG), serum cholesterol (SCH), hematocrit (H), and ventricular rate (VR). […] the connection between BMI and mortality risk is generally J-shaped […] Although all age patterns of physiological indices are non-monotonic functions of age, blood glucose (BG) and pulse pressure (PP) can be well approximated by monotonically increasing functions for both genders. […] the average values of body mass index (BMI) increase with age (up to age 55 for males and 65 for females), and then decline for both sexes. These values do not change much between ages 50 and 70 for males and between ages 60 and 70 for females. […] Except for blood glucose, all average age trajectories of physiological indices differ between males and females. Statistical analysis confirms the significance of these differences. In particular, after age 35 the female BMI increases faster than that of males. […] [When comparing women with less than or equal to 11 years of education [‘LE’] to women with 12 or more years of education [‘HE’]:] The average values of BG for both groups are about the same until age 45. Then the BG curve for the LE females becomes higher than that of the HE females until age 85 where the curves intersect. […] The average values of BMI in the LE group are substantially higher than those among the HE group over the entire age interval. […] The average values of BG for the HE and LE males are very similar […] However, the differences between groups are much smaller than for females.”

They also in the chapter compared individuals with short life-spans [‘SL’, died before the age of 75] and those with long life-spans [‘LL’, 100 longest-living individuals in the relevant sample] to see if the variables/trajectories looked different. They did, for example: “trajectories for the LL females are substantially different from those for the SL females in all eight indices. Specifically, the average values of BG are higher and increase faster in the SL females. The entire age trajectory of BMI for the LL females is shifted to the right […] The average values of DBP [diastolic blood pressure, US] among the SL females are higher […] A particularly notable observation is the shift of the entire age trajectory of BMI for the LL males and females to the right (towards an older age), as compared with the SL group, and achieving its maximum at a later age. Such a pattern is markedly different from that for healthy and unhealthy individuals. The latter is mostly characterized by the higher values of BMI for the unhealthy people, while it has similar ages at maximum for both the healthy and unhealthy groups. […] Physiological aging changes usually develop in the presence of other factors affecting physiological dynamics and morbidity/mortality risks. Among these other factors are year of birth, gender, education, income, occupation, smoking, and alcohol use. An important limitation of most longitudinal studies is the lack of information regarding external disturbances affecting individuals in their day-to-day life.”

I incidentally noted while I was reading that chapter that a relevant variable ‘lurking in the shadows’ in the context of the male and female BMI trajectories might be changing smoking habits over time; I have not looked at US data on this topic, but I do know that the smoking patterns of Danish males and females during the latter half of the last century were markedly different and changed really quite dramatically in just a few decades; a lot more males than females smoked in the 1960s, whereas the proportions of male and female smokers today are much more similar, because a lot of males have given up smoking (I refer Danish readers to this blog post which I wrote some years ago on these topics). The authors of the chapter incidentally do look a little at data on smokers, and they observe that smokers’ BMI is lower than non-smokers’ (not surprising), and that the smokers’ BMI curve (displaying the relationship between BMI and age) grows at a slower rate than the BMI curve of non-smokers (that this was to be expected is perhaps less clear, at least to me – the authors don’t interpret these specific numbers, they just report them).

The next chapter is one of the chapters in the book dealing with the SEER data I also mentioned not long ago in the context of my coverage of Bueno et al. Some sample quotes from that chapter below:

“To better address the challenge of “healthy aging” and to reduce economic burdens of aging-related diseases, key factors driving the onset and progression of diseases in older adults must be identified and evaluated. An identification of disease-specific age patterns with sufficient precision requires large databases that include various age-specific population groups. Collections of such datasets are costly and require long periods of time. That is why few studies have investigated disease-specific age patterns among older U.S. adults and there is limited knowledge of factors impacting these patterns. […] Information collected in U.S. Medicare Files of Service Use (MFSU) for the entire Medicare-eligible population of older U.S. adults can serve as an example of observational administrative data that can be used for analysis of disease-specific age patterns. […] In this chapter, we focus on a series of epidemiologic and biodemographic characteristics that can be studied using MFSU.”

“Two datasets capable of generating national level estimates for older U.S. adults are the Surveillance, Epidemiology, and End Results (SEER) Registry data linked to MFSU (SEER-M) and the National Long Term Care Survey (NLTCS), also linked to MFSU (NLTCS-M). […] The SEER-M data are the primary dataset analyzed in this chapter. The expanded SEER registry covers approximately 26 % of the U.S. population. In total, the Medicare records for 2,154,598 individuals are available in SEER-M […] For the majority of persons, we have continuous records of Medicare services use from 1991 (or from the time the person reached age 65 after 1990) to his/her death. […] The NLTCS-M data contain two of the six waves of the NLTCS: namely, the cohorts of years 1994 and 1999. […] In total, 34,077 individuals were followed-up between 1994 and 1999. These individuals were given the detailed NLTCS interview […] which has information on risk factors. More than 200 variables were selected”

In short, these data sets are very large, and contain a lot of information. Here are some results/data:

“Among studied diseases, incidence rates of Alzheimer’s disease, stroke, and heart failure increased with age, while the rates of lung and breast cancers, angina pectoris, diabetes, asthma, emphysema, arthritis, and goiter became lower at advanced ages. [..] Several types of age-patterns of disease incidence could be described. The first was a monotonic increase until age 85–95, with a subsequent slowing down, leveling off, and decline at age 100. This pattern was observed for myocardial infarction, stroke, heart failure, ulcer, and Alzheimer’s disease. The second type had an earlier-age maximum and a more symmetric shape (i.e., an inverted U-shape) which was observed for lung and colon cancers, Parkinson’s disease, and renal failure. The majority of diseases (e.g., prostate cancer, asthma, and diabetes mellitus among them) demonstrated a third shape: a monotonic decline with age or a decline after a short period of increased rates. […] The occurrence of age-patterns with a maximum and, especially, with a monotonic decline contradicts the hypothesis that the risk of geriatric diseases correlates with an accumulation of adverse health events […]. Two processes could be operative in the generation of such shapes. First, they could be attributed to the effect of selection […] when frail individuals do not survive to advanced ages. This approach is popular in cancer modeling […] The second explanation could be related to the possibility of under-diagnosis of certain chronic diseases at advanced ages (due to both less pronounced disease symptoms and infrequent doctor’s office visits); however, that possibility cannot be assessed with the available data […this is because the data sets are based on Medicare claims – US]”
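The incidence rates behind such age patterns are, at bottom, conceptually simple – newly diagnosed cases divided by person-years at risk within each age group. A toy calculation with invented numbers (illustrating the first pattern above, a rise with age that levels off at the oldest ages):

```python
# Hypothetical counts of newly diagnosed cases and person-years at risk
# by age group; all numbers are made up for illustration.
cases = {"65-74": 520, "75-84": 980, "85-94": 710, "95+": 60}
person_years = {"65-74": 200_000, "75-84": 150_000, "85-94": 60_000, "95+": 5_000}

# Incidence rate per 1,000 person-years in each age group.
rates = {g: 1_000 * cases[g] / person_years[g] for g in cases}
for group, rate in rates.items():
    print(f"{group}: {rate:.1f} new cases per 1,000 person-years")
```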

“The most detailed U.S. data on cancer incidence come from the SEER Registry […] about 60 % of malignancies are diagnosed in persons aged 65+ years old […] In the U.S., the estimated percent of cancer patients alive after being diagnosed with cancer (in 2008, by current age) was 13 % for those aged 65–69, 25 % for ages 70–79, and 22 % for ages 80+ years old (compared with 40 % of those aged younger than 65 years old) […] Diabetes affects about 21 % of the U.S. population aged 65+ years old (McDonald et al. 2009). However, while more is known about the prevalence of diabetes, the incidence of this disease among older adults is less studied. […] [In multiple previous studies] the incidence rates of diabetes decreased with age for both males and females. In the present study, we find similar patterns […] The prevalence of asthma among the U.S. population aged 65+ years old in the mid-2000s was as high as 7 % […] older patients are more likely to be underdiagnosed, untreated, and hospitalized due to asthma than individuals younger than age 65 […] asthma incidence rates have been shown to decrease with age […] This trend of declining asthma incidence with age is in agreement with our results.”

“The prevalence and incidence of Alzheimer’s disease increase exponentially with age, with the most notable rise occurring through the seventh and eighth decades of life (Reitz et al. 2011). […] whereas dementia incidence continues to increase beyond age 85, the rate of increase slows down [which] suggests that dementia diagnosed at advanced ages might be related not to the aging process per se, but associated with age-related risk factors […] Approximately 1–2 % of the population aged 65+ and up to 3–5 % aged 85+ years old suffer from Parkinson’s disease […] There are few studies of Parkinson’s disease incidence, especially in the oldest old, and its age patterns at advanced ages remain controversial”.

“One disadvantage of large administrative databases is that certain factors can produce systematic over/underestimation of the number of diagnosed diseases or of identification of the age at disease onset. One reason for such uncertainties is an incorrect date of disease onset. Other sources are latent disenrollment and the effects of study design. […] the date of onset of a certain chronic disease is a quantity which is not defined as precisely as mortality. This uncertainty makes difficult the construction of a unified definition of the date of onset appropriate for population studies.”

“[W]e investigated the phenomenon of multimorbidity in the U.S. elderly population by analyzing mutual dependence in disease risks, i.e., we calculated disease risks for individuals with specific pre-existing conditions […]. In total, 420 pairs of diseases were analyzed. […] For each pair, we calculated age patterns of unconditional incidence rates of the diseases, conditional rates of the second (later manifested) disease for individuals after onset of the first (earlier manifested) disease, and the hazard ratio of development of the subsequent disease in the presence (or not) of the first disease. […] three groups of interrelations were identified: (i) diseases whose risk became much higher when patients had a certain pre-existing (earlier diagnosed) disease; (ii) diseases whose risk became lower than in the general population when patients had certain pre-existing conditions […] and (iii) diseases for which “two-tail” effects were observed: i.e., when the effects are significant for both orders of disease precedence; both effects can be direct (either one of the diseases from a disease pair increases the risk of the other disease), inverse (either one of the diseases from a disease pair decreases the risk of the other disease), or controversial (one disease increases the risk of the other, but the other disease decreases the risk of the first disease from the disease pair). In general, the majority of disease pairs with increased risk of the later diagnosed disease in both orders of precedence were those in which both the pre-existing and later occurring diseases were cancers, and also when both diseases were of the same organ. […] Generally, the effect of dependence between risks of two diseases diminishes with advancing age. […] Identifying mutual relationships in age-associated disease risks is extremely important since they indicate that development of […] diseases may involve common biological mechanisms.”
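The basic quantity behind such pairwise comparisons can be illustrated with a crude rate ratio – the incidence of a later disease B among people with vs. without a pre-existing disease A. (The book’s analysis of course uses proper time-to-event hazard models rather than this simple aggregate; all the numbers below are invented.)

```python
# Invented counts: new cases of disease B and person-years at risk,
# split by whether disease A was diagnosed earlier.
cases_b_with_a, py_with_a = 240, 40_000          # 6.0 per 1,000 PY
cases_b_without_a, py_without_a = 900, 300_000   # 3.0 per 1,000 PY

rate_with_a = cases_b_with_a / py_with_a
rate_without_a = cases_b_without_a / py_without_a
rate_ratio = rate_with_a / rate_without_a
print(f"crude rate ratio of B given prior A: {rate_ratio:.2f}")
```

A ratio above 1 corresponds to the first group of interrelations above (pre-existing A raising the risk of B), below 1 to the second; running the comparison in both orders of precedence gives the ‘two-tail’ cases.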

“in population cohorts, trends in prevalence result from combinations of trends in incidence, population at risk, recovery, and patients’ survival rates. Trends in the rates for one disease also may depend on trends in concurrent diseases, e.g., increasing survival from CHD contributes to an increase in the cancer incidence rate if the individuals who survived were initially susceptible to both diseases.”
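The point about prevalence is worth a toy illustration: hold incidence constant and improve survival among the prevalent, and prevalence rises all by itself. A minimal sketch (a crude discrete-time accounting identity with invented numbers, not a model from the book):

```python
# Each year: a constant inflow of new cases, and a fixed fraction of the
# prevalent pool exits (through death or recovery). Run to steady state.
def steady_prevalence(new_cases_per_year, exit_rate, years=200):
    prevalent = 0.0
    for _ in range(years):
        prevalent += new_cases_per_year - exit_rate * prevalent
    return prevalent

before = steady_prevalence(1_000, exit_rate=0.20)  # poorer survival
after = steady_prevalence(1_000, exit_rate=0.10)   # better survival
print(f"prevalent cases, 20% annual exit: {before:,.0f}")
print(f"prevalent cases, 10% annual exit: {after:,.0f}")
```

Steady-state prevalence is inflow divided by exit rate, so halving the exit rate doubles the prevalent pool with no change in incidence at all.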

March 1, 2017 Posted by | Biology, Books, Cancer/oncology, Cardiology, Demographics, Diabetes, Epidemiology, Genetics, Medicine, Nephrology, Neurology | Leave a comment

Random stuff

i. Fire works a little differently than people imagine. A great ask-science comment. See also AugustusFink-nottle’s comment in the same thread.


iii. I was very conflicted about whether to link to this because I haven’t actually spent any time looking at it myself so I don’t know if it’s any good, but according to somebody (?) who linked to it on SSC the people behind this stuff have academic backgrounds in evolutionary biology, which is something at least (whether you think this is a good thing or not will probably depend greatly on your opinion of evolutionary biologists, but I’ve definitely learned a lot more about human mating patterns, partner interaction patterns, etc. from evolutionary biologists than I have from personal experience, so I’m probably in the ‘they-sometimes-have-interesting-ideas-about-these-topics-and-those-ideas-may-not-be-terrible’-camp). I figure these guys are much more application-oriented than were some of the previous sources I’ve read on related topics, such as e.g. Kappeler et al. I add the link mostly so that if I in five years time have a stroke that obliterates most of my decision-making skills, causing me to decide that entering the dating market might be a good idea, I’ll have some idea where it might make sense to start.

iv. Stereotype (In)Accuracy in Perceptions of Groups and Individuals.

“Are stereotypes accurate or inaccurate? We summarize evidence that stereotype accuracy is one of the largest and most replicable findings in social psychology. We address controversies in this literature, including the long-standing and continuing but unjustified emphasis on stereotype inaccuracy, how to define and assess stereotype accuracy, and whether stereotypic (vs. individuating) information can be used rationally in person perception. We conclude with suggestions for building theory and for future directions of stereotype (in)accuracy research.”

A few quotes from the paper:

“Demographic stereotypes are accurate. Research has consistently shown moderate to high levels of correspondence accuracy for demographic (e.g., race/ethnicity, gender) stereotypes […]. Nearly all accuracy correlations for consensual stereotypes about race/ethnicity and gender exceed .50 (compared to only 5% of social psychological findings; Richard, Bond, & Stokes-Zoota, 2003). […] Rather than being based in cultural myths, the shared component of stereotypes is often highly accurate. This pattern cannot be easily explained by motivational or social-constructionist theories of stereotypes and probably reflects a “wisdom of crowds” effect […] personal stereotypes are also quite accurate, with correspondence accuracy for roughly half exceeding r = .50.”

“We found 34 published studies of racial-, ethnic-, and gender-stereotype accuracy. Although not every study examined discrepancy scores, when they did, a plurality or majority of all consensual stereotype judgments were accurate. […] In these 34 studies, when stereotypes were inaccurate, there was more evidence of underestimating than overestimating actual demographic group differences […] Research assessing the accuracy of miscellaneous other stereotypes (e.g., about occupations, college majors, sororities, etc.) has generally found accuracy levels comparable to those for demographic stereotypes”

“A common claim […] is that even though many stereotypes accurately capture group means, they are still not accurate because group means cannot describe every individual group member. […] If people were rational, they would use stereotypes to judge individual targets when they lack information about targets’ unique personal characteristics (i.e., individuating information), when the stereotype itself is highly diagnostic (i.e., highly informative regarding the judgment), and when available individuating information is ambiguous or incompletely useful. People’s judgments robustly conform to rational predictions. In the rare situations in which a stereotype is highly diagnostic, people rely on it (e.g., Crawford, Jussim, Madon, Cain, & Stevens, 2011). When highly diagnostic individuating information is available, people overwhelmingly rely on it (Kunda & Thagard, 1996; effect size averaging r = .70). Stereotype biases average no higher than r = .10 (Jussim, 2012) but reach r = .25 in the absence of individuating information (Kunda & Thagard, 1996). The more diagnostic individuating information people have, the less they stereotype (Crawford et al., 2011; Krueger & Rothbart, 1988). Thus, people do not indiscriminately apply their stereotypes to all individual members of stereotyped groups.” (Funder incidentally talked about this stuff as well in his book Personality Judgment).
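‘Correspondence accuracy’ in this literature is essentially the correlation between judged and actual group attributes. A minimal sketch of what an accuracy correlation of that sort looks like computationally, using entirely made-up numbers (the data and the helper function are just illustrative, not from the paper):

```python
# Correspondence accuracy, roughly: the Pearson correlation between
# what judges believe about group means and the measured group means.
def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

perceived = [0.30, 0.55, 0.20, 0.70]  # judged group means (hypothetical)
actual    = [0.28, 0.60, 0.25, 0.65]  # measured group means (hypothetical)

print(round(pearson_r(perceived, actual), 2))  # 0.98 — high correspondence accuracy
```

Against this sort of benchmark, the paper’s claim is that real stereotype judgments routinely produce correlations above .50.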

One thing worth mentioning in the context of stereotypes is that if you look at stuff like crime data – which sadly not many people do – and you stratify based on stuff like country of origin, then the sub-group differences you observe tend to be very large. These differences are not on the order of something like 10%, which is probably the sort of difference which could easily be ignored without major consequences; some subgroup differences can easily amount to one or two orders of magnitude. The differences are in some contexts so large as to make it downright idiotic to assume there are no differences. To give an example, in Germany the probability that a random person, about whom you know nothing, has been a suspect in a thievery case is 22% if that random person happens to be of Algerian extraction, whereas it’s only 0.27% if you’re dealing with an immigrant from China. Roughly one in 13 of those Algerians have also been involved in a case of bodily harm, which is the case for fewer than one in 400 of the Chinese immigrants.
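To make the ‘orders of magnitude’ point concrete, here is the back-of-the-envelope arithmetic on the German figures just cited (only the two rates quoted above go in; everything else is simple division):

```python
# Ratio of suspect rates for the two groups cited above.
algerian_thievery = 0.22     # 22% suspected in a thievery case
chinese_thievery  = 0.0027   # 0.27%

ratio = algerian_thievery / chinese_thievery
print(f"Thievery suspect rate ratio: {ratio:.0f}x")  # ~81x, i.e. roughly two orders of magnitude

algerian_bodily_harm = 1 / 13
chinese_bodily_harm  = 1 / 400
harm_ratio = algerian_bodily_harm / chinese_bodily_harm
print(f"Bodily harm rate ratio: {harm_ratio:.0f}x")  # ~31x
```

A 10% difference would be a ratio of 1.1; these are ratios of 30–80.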

v. Assessing Immigrant Integration in Sweden after the May 2013 Riots. Some data from the article:

“Today, about one-fifth of Sweden’s population has an immigrant background, defined as those who were either born abroad or born in Sweden to two immigrant parents. The foreign born comprised 15.4 percent of the Swedish population in 2012, up from 11.3 percent in 2000 and 9.2 percent in 1990 […] Of the estimated 331,975 asylum applicants registered in EU countries in 2012, 43,865 (or 13 percent) were in Sweden. […] More than half of these applications were from Syrians, Somalis, Afghanis, Serbians, and Eritreans. […] One town of about 80,000 people, Södertälje, since the mid-2000s has taken in more Iraqi refugees than the United States and Canada combined.”

“Coupled with […] macroeconomic changes, the largely humanitarian nature of immigrant arrivals since the 1970s has posed challenges of labor market integration for Sweden, as refugees often arrive with low levels of education and transferable skills […] high unemployment rates have disproportionately affected immigrant communities in Sweden. In 2009-10, Sweden had the highest gap between native and immigrant employment rates among OECD countries. Approximately 63 percent of immigrants were employed compared to 76 percent of the native-born population. This 13 percentage-point gap is significantly greater than the OECD average […] Explanations for the gap include less work experience and domestic formal qualifications such as language skills among immigrants […] Among recent immigrants, defined as those who have been in the country for less than five years, the employment rate differed from that of the native born by more than 27 percentage points. In 2011, the Swedish newspaper Dagens Nyheter reported that 35 percent of the unemployed registered at the Swedish Public Employment Service were foreign born, up from 22 percent in 2005.”

“As immigrant populations have grown, Sweden has experienced a persistent level of segregation — among the highest in Western Europe. In 2008, 60 percent of native Swedes lived in areas where the majority of the population was also Swedish, and 20 percent lived in areas that were virtually 100 percent Swedish. In contrast, 20 percent of Sweden’s foreign born lived in areas where more than 40 percent of the population was also foreign born.”

vi. Book recommendations. Or rather, author recommendations. A while back I asked ‘the people of SSC’ if they knew of any fiction authors I hadn’t read yet who were both funny and easy to read. I got a lot of good suggestions, and the roughly 20 Dick Francis novels I’ve read during the fall were a direct consequence of that thread.

vii. On the genetic structure of Denmark.

viii. Religious Fundamentalism and Hostility against Out-groups: A Comparison of Muslims and Christians in Western Europe.

“On the basis of an original survey among native Christians and Muslims of Turkish and Moroccan origin in Germany, France, the Netherlands, Belgium, Austria and Sweden, this paper investigates four research questions comparing native Christians to Muslim immigrants: (1) the extent of religious fundamentalism; (2) its socio-economic determinants; (3) whether it can be distinguished from other indicators of religiosity; and (4) its relationship to hostility towards out-groups (homosexuals, Jews, the West, and Muslims). The results indicate that religious fundamentalist attitudes are much more widespread among Sunnite Muslims than among native Christians, even after controlling for the different demographic and socio-economic compositions of these groups. […] Fundamentalist believers […] show very high levels of out-group hostility, especially among Muslims.”

ix. Portal: Dinosaurs. It would have been so incredibly awesome to have had access to this kind of stuff back when I was a child. The portal includes links to articles with names like ‘Bone Wars’ – what’s not to like? Again, awesome!

x. “you can’t determine if something is truly random from observations alone. You can only determine if something is not truly random.” (link) An important insight well expressed.
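The point generalizes to statistical testing: a test can reject randomness, but passing a test proves nothing, since a fully deterministic generator can pass it too. A minimal sketch (the frequency check and its tolerance are just illustrative, not any standard test suite):

```python
# A simple frequency test can show a sequence is NOT random, but passing it
# never establishes true randomness: Python's PRNG is completely determined
# by its seed, yet it sails through the check.
import random
from collections import Counter

def frequency_check(bits, tolerance=0.03):
    """Reject (return False) if the share of 1s strays too far from 1/2."""
    counts = Counter(bits)
    share_of_ones = counts[1] / len(bits)
    return abs(share_of_ones - 0.5) <= tolerance

random.seed(42)
pseudo = [random.randint(0, 1) for _ in range(10_000)]  # deterministic, yet...
print(frequency_check(pseudo))                # True: the test can't tell

obviously_patterned = [0] * 10_000
print(frequency_check(obviously_patterned))   # False: non-randomness detected
```

The asymmetry is exactly the one in the quote: a failed test is informative, a passed test is not.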

xi. Chessprogramming. If you’re interested in having a look at how chess programs work, this is a neat resource. The wiki contains lots of links with information on specific sub-topics of interest. Also chess-related: The World Championship match between Carlsen and Karjakin has started. To the extent that I’ll be following the live coverage, I’ll be following Svidler et al.’s coverage on chess24. Robin van Kampen and Eric Hansen – both 2600+ elo GMs – did quite well yesterday, in my opinion.

xii. Justified by More Than Logos Alone (Razib Khan).

“Very few are Roman Catholic because they have read Aquinas’ Five Ways. Rather, they are Roman Catholic, in order of necessity, because God aligns with their deep intuitions, basic cognitive needs in terms of cosmological coherency, and because the church serves as an avenue for socialization and repetitive ritual which binds individuals to the greater whole. People do not believe in Catholicism as often as they are born Catholics, and the Catholic religion is rather well fitted to a range of predispositions to the typical human.”

November 12, 2016 Posted by | Books, Chemistry, Chess, Data, dating, Demographics, Genetics, Geography, immigration, Paleontology, Papers, Physics, Psychology, Random stuff, Religion | Leave a comment

Role of Biomarkers in Medicine

“The use of biomarkers in basic and clinical research has become routine in many areas of medicine. They are accepted as molecular signatures that have been well characterized and repeatedly shown to be capable of predicting relevant disease states or clinical outcomes. In Role of Biomarkers in Medicine, expert researchers in their individual field have reviewed many biomarkers or potential biomarkers in various types of diseases. The topics address numerous aspects of medicine, demonstrating the current conceptual status of biomarkers as clinical tools and as surrogate endpoints in clinical research.”

The above quote is from the preface of the book. Here’s my goodreads review. I have read about biomarkers before – for previous posts on this topic, see this link. I added the link in part because the coverage provided in this book is, in my opinion, generally of somewhat lower quality than the coverage provided in some of the other books I’ve read on these topics. However, the fact that the book is not amazing should probably not keep me from sharing some observations of interest from it, which I have done in this post.

“we suggest more precise studies to establish the exact role of this hormone […] additional studies are necessary […] there are conflicting results […] require further investigation […] more intervention studies with long-term follow-up are required. […] further studies need to be conducted […] further research is needed” (There are a lot of comments like these in the book; I figured I should include a few in my coverage…)

“Cancer biomarkers (CB) are biomolecules produced either by the tumor cells or by other cells of the body in response to the tumor, and CB could be used as screening/early detection tool of cancer, diagnostic, prognostic, or predictor for the overall outcome of a patient. Moreover, cancer biomarkers may identify subpopulations of patients who are most likely to respond to a given therapy […] Unfortunately, […] only very few CB have been approved by the FDA as diagnostic or prognostic cancer markers […] 25 years ago, the clinical usefulness of CB was limited to be an effective tool for patient’s prognosis, surveillance, and therapy monitoring. […] CB have [since] been reported to be used also for screening of general population or risk groups, for differential diagnosis, and for clinical staging or stratification of cancer patients. Additionally, CB are used to estimate tumor burden and to substitute for a clinical endpoint and/or to measure clinical benefit, harm or lack of benefit, or harm [4, 18, 30]. Among commonly utilized biomarkers in clinical practice are PSA, AFP, CA125, and CEA.”

“Bladder cancer (BC) is the second most common malignancy in the urologic field. Preoperative predictive biomarkers of cancer progression and prognosis are imperative for optimizing […] treatment for patients with BC. […] Approximately 75–85% of BC cases are diagnosed as nonmuscle-invasive bladder cancer (NMIBC) […] NMIBC has a tendency to recur (50–70%) and may progress (10–20%) to a higher grade and/or muscle-invasive BC (MIBC) in time, which can lead to high cancer-specific mortality [2]. Histological tumor grade is one of the clinical factors associated with outcomes of patients with NMIBC. High-grade NMIBC generally exhibits more aggressive behavior than low-grade NMIBC, and it increases the risk of a poorer prognosis […] Cystoscopy and urine cytology are commonly used techniques for the diagnosis and surveillance of BC. Cystoscopy can identify […] most papillary and solid lesions, but this is highly invasive […] urine cytology is limited by examiner experience and low sensitivity. For these reasons, some tumor markers have been investigated […], but their sensitivity and specificity are limited [5] and they are unable to predict the clinical outcome of BC patients. […] Numerous efforts have been made to identify tumor markers. […] However, a serum marker that can serve as a reliable detection marker for BC has yet to be identified.”

“Endometrial cancer (EmCa) is the most common type of gynecological cancer. EmCa is the fourth most common cancer in the United States, which has been linked to increased incidence of obesity. […] there are no reliable biomarker tests for early detection of EmCa and treatment effectiveness. […] Approximately 75% of women with EmCa are postmenopausal; the most common symptom is postmenopausal bleeding […] Approximately 15% of women diagnosed with EmCa are younger than 50 years of age, while 5% are diagnosed before the age of 40 [29]. […] Roughly, half of the EmCa cases are linked to obesity. Obese women are four times more likely to develop EmCa when compared to normal weight women […] Obese individuals oftentimes exhibit resistance to leptin and show high levels of the adipokine in blood, which is known as leptin resistance […] prolonged exposure of leptin damages the hypothalamus causing it to become insensitive to the effects of leptin […] Evidence shows that leptin is an important pro-inflammatory, pro-angiogenic, and mitogenic factor for cancer. Leptin produced by cancer cells acts in an autocrine and paracrine manner to promote tumor cell proliferation, migration and invasion, pro-inflammation, and angiogenesis [58, 70]. High levels of leptin […] are associated with metastasis and decreased survival rates in breast cancer patients [58]. […] Metabolic syndrome including obesity, hypertension, insulin resistance, diabetes, and dyslipidemia increase the risk of developing multiple malignancies, particularly EmCa [30]. Younger women diagnosed with EmCa are usually obese, and their carcinomas show a well-differentiated histology [20].”

“Normally, tumor suppressor genes act to inhibit or arrest cell proliferation and tumor development [37]. However, when mutated, tumor suppressors become inactive, thus permitting tumor growth. For example, mutations in p53 have been determined in various cancers such as breast, colon, lung, endometrium, leukemias, and carcinomas of many tissues. These p53 mutations are found in approximately 50% of all cancers [38]. Roughly 10–20% of endometrial carcinomas exhibit p53 mutations [37]. […] overexpression of mutated tumor suppressor p53 has been associated with Type II EmCa (poor histologic grade, non-endometrioid histology, advanced stage, and poor survival).”

“Increasing data indicate that oxidative stress is involved in the development of DR [diabetic retinopathy] [16–19]. The retina has a high content of polyunsaturated fatty acids and has the highest oxygen uptake and glucose oxidation relative to any other tissue. This phenomenon renders the retina more susceptible to oxidative stress [20]. […] Since long-term exposure to oxidative stress is strongly implicated in the pathogenesis of diabetic complications, polymorphic genes of detoxifying enzymes may be involved in the development of DR. […] A meta-analysis comprising 17 studies, including type 1 and type 2 diabetic patients from different ethnic origins, implied that the C (Ala) allele of the C47T polymorphism in the MnSOD gene had a significant protective effect against microvascular complications (DR and diabetic nephropathy) […] In the development of DR, superoxide levels are elevated in the retina, antioxidant defense system is compromised, MnSOD is inhibited, and mitochondria are swollen and dysfunctional [77,87–90]. Overexpression of MnSOD protects [against] diabetes-induced mitochondrial damage and the development of DR [19,91].”

“Continuous high level of blood glucose in diabetes damages micro and macro blood vessels throughout the body by altering the endothelial cell lining of the blood vessels […] Diabetes threatens vision, and patients with diabetes develop cataracts at an earlier age and are nearly twice as likely to get glaucoma compared to non-diabetic[s] [3]. More than 75% of patients who have had diabetes mellitus for more than 20 years will develop diabetic retinopathy (DR) [4]. […] DR is a slow progressive retinal disease and occurs as a consequence of longstanding accumulated functional and structural impairment of the retina by diabetes. It is a multifactorial condition arising from the complex interplay between biochemical and metabolic abnormalities occurring in all cells of the retina. DR has been classically regarded as a microangiopathy of the retina, involving changes in the vascular wall leading to capillary occlusion and thereby retinal ischemia and leakage. And more recently, the neural defects in the retina are also being appreciated […]. Recently, various clinical investigators [have detected] neuronal dysfunction at very early stages of diabetes and numerous abnormalities in the retina can be identified even before the vascular pathology appears [76, 77], thus suggesting a direct effect of diabetes on the neural retina. […] An emerging issue in DR research is the focus on the mechanistic link between chronic low-grade inflammation and angiogenesis. Recent evidence has revealed that extracellular high-mobility group box-1 (HMGB1) protein acts as a potent proinflammatory cytokine that triggers inflammation and recruits leukocytes to the site of tissue damage, and exhibits angiogenic effects. The expression of HMGB1 is upregulated in epiretinal membranes and vitreous fluid from patients with proliferative DR and in the diabetic retina.
[…] HMGB1 may be a potential biomarker [for diabetic retinopathy] […] early blockade of HMGB1 may be an effective strategy to prevent the progression of DR.”

“High blood pressure is one of the leading risk factors for global mortality and is estimated to have caused 9.4 million deaths in 2010. A meta‐analysis which includes 1 million individuals has indicated that death from both CHD [coronary heart disease] and stroke increase progressively and linearly from BP levels as low as 115 mmHg systolic and 75 mmHg diastolic upwards [138]. The WHO [has] pointed out that a “reduction in systolic blood pressure of 10 mmHg is associated with a 22% reduction in coronary heart disease, 41% reduction in stroke in randomized trials, and a 41–46% reduction in cardiometabolic mortality in epidemiological studies” [139].”

“Several reproducible studies have ascertained that individuals with autism demonstrate an abnormal brain 5-HT system […] peripheral alterations in the 5-HT system may be an important marker of central abnormalities in autism. […] In a recent study, Carminati et al. [129] tested the therapeutic efficacy of venlafaxine, an antidepressant drug that inhibits the reuptake of 5-HT, and [found] that venlafaxine at a low dose [resulted in] a substantial improvement in repetitive behaviors, restricted interests, social impairment, communication, and language. Venlafaxine probably acts via serotonergic mechanisms […] OT [Oxytocin]-related studies in autism have repeatedly reported lower blood OT level in autistic patients compared to age- and gender-matched control subjects […] autistic patients demonstrate an altered neuroinflammatory response throughout their lives; they also show increased astrocyte and microglia inflammatory response in the cortex and the cerebellum [47, 48].”

November 3, 2016 Posted by | autism, Books, Cancer/oncology, Cardiology, Diabetes, Epidemiology, Genetics, Immunology, Medicine, Neurology, Pharmacology | Leave a comment

The Biology of Moral Systems (I)

I have quoted from the book before, but I decided that this book deserves to be blogged in more detail. I’m close to finishing the book at this point (it’s definitely taken longer than it should have), and I’ll probably give it 5 stars on goodreads; I might also add it to my list of favourite books on the site. In this post I’ve added some quotes and ideas from the book, and a few comments. Before going any further I should note that it’s frankly impossible to cover anywhere near all the ideas covered in the book here on the blog, so if you’re even remotely interested in these kinds of things you really should pick up a copy of the book and read all of it.

“I believe that something crucial has been missing from all of the great debates of history, among philosophers, politicians, theologians, and thinkers from other and diverse backgrounds, on the issues of morality, ethics, justice, right and wrong. […] those who have tried to analyze morality have failed to treat the human traits that underlie moral behavior as outcomes of evolution […] for many conflicts of interest, compromises and enforceable contracts represent the only real solutions. Appeals to morality, I will argue, are simply the invoking of such compromises and contracts in particular ways. […] the process of natural selection that has given rise to all forms of life, including humans, operates such that success has always been relative. One consequence is that organisms resulting from the long-term cumulative effects of selection are expected to resist efforts to reveal their interests fully to others, and also efforts to place limits on their striving or to decide for them when their interests are being “fully” satisfied. These are all reasons why we should expect no “terminus” – ever – to debates on moral and ethical issues.” (these comments I also included in the quotes post to which I link at the beginning, but I thought it was worth including them in this post as well even so – US).

“I am convinced that biology can never offer […] easy or direct answers to the questions of what is right and wrong. I explicitly reject the attitude that whatever biology tells us is so is also what ought to be (David Hume’s so-called “naturalistic fallacy”) […] there are within biology no magic solutions to moral problems. […] Knowledge of the human background in organic evolution can [however] provide a deeper self-understanding by an increasing proportion of the world’s population; self-understanding that I believe can contribute to answering the serious questions of social living.”

“If there had been no recent discoveries in biology that provided new ways of looking at the concept of moral systems, then I would be optimistic indeed to believe that I could say much that is new. But there have been such discoveries. […] The central point in these writings [Hamilton, Williams, Trivers, Cavalli-Sforza, Feldman, Dawkins, Wilson, etc. – US] […] is that natural selection has apparently been maximizing the survival by reproduction of genes, as they have been defined by evolutionists, and that, with respect to the activities of individuals, this includes effects on copies of their genes, even copies located in other individuals. In other words, we are evidently evolved not only to aid the genetic materials in our own bodies, by creating and assisting descendants, but also to assist, by nepotism, copies of our genes that reside in collateral (nondescendant) relatives. […] ethics, morality, human conduct, and the human psyche are to be understood only if societies are seen as collections of individuals seeking their own self-interests […] In some respects these ideas run contrary to what people have believed and been taught about morality and human values: I suspect that nearly all humans believe it is a normal part of the functioning of every human individual now and then to assist someone else in the realization of that person’s own interests to the actual net expense of those of the altruist. What [the above-mentioned writings] tells us is that, despite our intuitions, there is not a shred of evidence to support this view of beneficence, and a great deal of convincing theory suggests that any such view will eventually be judged false. This implies that we will have to start all over again to describe and understand ourselves, in terms alien to our intuitions […] It is […] a goal of this book to contribute to this redescription and new understanding, and especially to discuss why our intuitions should have misinformed us.”

“Social behavior evolves as a succession of ploys and counterploys, and for humans these ploys are used, not only among individuals within social groups, but between and among small and large groups of up to hundreds of millions of individuals. The value of an evolutionary approach to human sociality is thus not to determine the limits of our actions so that we can abide by them. Rather, it is to examine our life strategies so that we can change them when we wish, as a result of understanding them. […] my use of the word biology in no way implies that moral systems have some kind of explicit genetic background, are genetically determined, or cannot be altered by adjusting the social environment. […] I mean simply to suggest that if we wish to understand those aspects of our behavior commonly regarded as involving morality or ethics, it will help to reconsider our behavior as a product of evolution by natural selection. The principal reason for this suggestion is that natural selection operates according to general principles which make its effects highly predictive, even with respect to traits and circumstances that have not yet been analyzed […] I am interested […] not in determining what is moral and immoral, in the sense of what people ought to be doing, but in elucidating the natural history of ethics and morality – in discovering how and why humans initiated and developed the ideas we have about right and wrong.”

I should perhaps mention here that sort-of-kind-of related stuff is covered in Aureli et al. (see e.g. this link), and that some parts of that book will probably make you understand Alexander’s ideas a lot better even if perhaps he didn’t read those specific authors – mainly because it gets a lot easier to imagine the sort of mechanisms which might be at play here if you’ve read this sort of literature. Here’s one relevant quote from the coverage of that book, which also deals with the question Alexander discusses above and in a lot more detail throughout his book, namely the question of where our morality comes from:

“we make two fundamental assertions regarding the evolution of morality: (1) there are specific types of behavior demonstrated by both human and nonhuman primates that hint at a shared evolutionary background to morality; and (2) there are theoretical and actual connections between morality and conflict resolution in both nonhuman primates and human development. […] the transition from nonmoral or premoral to moral is more gradual than commonly assumed. No magic point appears in either evolutionary history or human development at which morality suddenly comes into existence. In both early childhood and in animals closely related to us, we can recognize behaviors (and, in the case of children, judgments) that are essential building blocks of the morality of the human adult. […] the decision making and emotions underlying moral judgments are generated within the individual rather than being simply imposed by society. They are a product of evolution, an integrated part of the human genetic makeup, that makes the child construct a moral perspective through interactions with other members of its species. […] Much research has shown that children acquire morality through a social-cognitive process; children make connections between acts and consequences. Through a gradual process, children develop concepts of justice, fairness, and equality, and they apply these concepts to concrete everyday situations […] we assert that emotions such as empathy and sympathy provide an experiential basis by which children construct moral judgments. Emotional reactions from others, such as distress or crying, provide experiential information that children use to judge whether an act is right or wrong […] when a child hits another child, a crying response provides emotional information about the nature of the act, and this information enables the child, in part, to determine whether and why the transgression is wrong. 
Therefore, recognizing signs of distress in another person may be a basic requirement of the moral judgment process. The fact that responses to distress in another have been documented both in infancy and in the nonhuman primate literature provides initial support for the idea that these types of moral-like experiences are common to children and nonhuman primates.”

Alexander’s coverage is quite different from that found in Aureli et al., but some of the contributors to the latter work deal with similar questions to the ones in which he’s interested, using approaches not employed in Alexander’s book – so this is another place to look if you’re interested in these topics. Margalit’s The Emergence of Norms is also worth mentioning. Part of the reason why I mention these books here is incidentally that they’re not talked about in Alexander’s coverage (for very natural reasons, I should add, in the case of the former book at least; Natural Conflict Resolution was published more than a decade after Alexander wrote his book…).

“In the hierarchy of explanatory principles governing the traits of living organisms, evolutionary reductionism – the development of principles from the evolutionary process – tends to subsume all other kinds. Proximate-cause reductionism (or reduction by dissection) sometimes advances our understanding of the whole phenomena. […] When evolutionary reduction becomes trivial in the study of life it is for a reason different from incompleteness; rather, it is because the breadth of the generalization distances it too significantly from the particular problem that may be at hand. […] the greatest weakness of reduction by generalization is not that it is likely to be trivial but that errors are probable through unjustified leaps from hypothesis to conclusion […] Critics such as Gould and Lewontin […] do not discuss the facts that (a) all students of human behavior (not just those who take evolution into account) run the risk of leaping unwarrantedly from hypothesis to conclusion and (b) just-so stories were no less prevalent and hypothesis-testing no more prevalent in studies of human behavior before evolutionary biologists began to participate. […] I believe that failure by biologists and others to distinguish proximate- or partial-cause and evolutionary- or ultimate-cause reductionism […] is in some part responsible for the current chasm between the social and the biological sciences and the resistance to so-called biological approaches to understanding humans. […] Both approaches are essential to progress in biology and the social sciences, and it would be helpful if their relationship, and that of their respective practitioners, were not seen as adversarial.”

(Relatedly, love is motivationally prior to sugar. This one also seems relevant, though in a different way).

“Humans are not accustomed to dealing with their own strategies of life as if they had been tuned by natural selection. […] People are not generally aware of what their lifetimes have been evolved to accomplish, and, even if they are roughly aware of this, they do not easily accept that their everyday activities are in any sense means to that end. […] The theory of lifetimes most widely accepted among biologists is that individuals have evolved to maximize the likelihood of survival of not themselves, but their genes, and that they do this by reproducing and tending in various ways offspring and other carriers of their own genes […] In this theory, survival of the individual – and its growth, development, and learning – are proximate mechanisms of reproductive success, which is a proximate mechanism of genic survival. Only the genes have evolved to survive. […] To say that we are evolved to serve the interests of our genes in no way suggests that we are obliged to serve them. […] Evolution is surely most deterministic for those still unaware of it. If this argument is correct, it may be the first to carry us from is to ought, i.e., if we desire to be the conscious masters of our own fates, and if conscious effort in that direction is the most likely vehicle of survival and happiness, then we ought to study evolution.”

“People are sometimes comfortable with the notion that certain activities can be labeled as “purely cultural” because they also believe that there are behaviors that can be labeled “purely genetic.” Neither is true: the environment contributes to the expression of all behaviors, and culture is best described as part of the environment.”

“Happiness and its anticipation are […] proximate mechanisms that lead us to perform and repeat acts that in the environments of history, at least, would have led to greater reproductive success.”

“The remarkable difference between the patterns of senescence in semelparous (one-time breeding) and iteroparous (repeat-breeding) organisms is probably one of the best simple demonstrations of the central significance of reproduction in the individual’s lifetime. How, otherwise, could we explain the fact that those who reproduce but once, like salmon and soybeans, tend to die suddenly right afterward, while those like ourselves who have residual reproductive possibilities after the initial reproductive act decline or senesce gradually? […] once an organism has completed all possibilities of reproducing (through both offspring production and assistance, and helping other relatives), then selection can no longer affect its survival: any physiological or other breakdown that destroys it may persist and even spread if it is genetically linked to a trait that is expressed earlier and is reproductively beneficial. […] selection continually works against senescence, but is just never able to defeat it entirely. […] senescence leads to a generalized deterioration rather than one owing to a single effect or a few effects […] In the course of working against senescence, selection will tend to remove, one by one, the most frequent sources of mortality as a result of senescence. Whenever a single cause of mortality, such as a particular malfunction of any vital organ, becomes the predominant cause of mortality, then selection will more effectively reduce the significance of that particular defect (meaning those who lack it will outreproduce) until some other achieves greater relative significance. […] the result will be that all organs and systems will tend to deteriorate together. 
[…] The point is that as we age, and as senescence proceeds, large numbers of potential sources of mortality tend to lurk ever more malevolently just “below the surface,” so that, unfortunately, the odds are very high against any dramatic lengthening of the maximum human lifetime through technology. […] natural selection maximizes the likelihood of genetic survival, which is incompatible with eliminating senescence. […] Senescence, and the finiteness of lifetimes, have evolved as incidental effects […] Organisms compete for genetic survival and the winners (in evolutionary terms) are those who sacrifice their phenotypes (selves) earlier when this results in greater reproduction.”
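Alexander's argument here — that selection cannot act on defects expressed only after reproduction is complete — can be made concrete with a toy model. This is my own illustration, not from the book: hypothetical survival and fecundity schedules, with the strength of selection on survival at age a taken (Hamilton-style) as the reproduction still expected at ages ≥ a, weighted by the chance of being alive then.

```python
# Toy model (my illustration, not from the book) of the declining force of
# selection with age. Hypothetical schedules:
p = [0.9] * 8                 # probability of surviving each age class
m = [0, 0, 2, 2, 1, 0, 0, 0]  # offspring produced at each age

def force_of_selection(a):
    """Reproduction expected at ages >= a, weighted by survivorship from birth."""
    total, survivorship = 0.0, 1.0
    for x in range(len(m)):
        if x >= a:
            total += survivorship * m[x]
        survivorship *= p[x]
    return total

force = [force_of_selection(a) for a in range(len(m))]
print([round(f, 3) for f in force])

assert force[0] == force[2]     # constant before reproduction begins
assert force[2] > force[4] > 0  # then declines through the reproductive span
assert force[5] == 0.0          # zero once reproduction is complete: a defect
                                # expressed only here is invisible to selection
```

The last assertion is exactly the point in the quote: any breakdown whose expression is confined to post-reproductive ages contributes nothing to the quantity selection maximizes, so it can persist or spread.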

“altruism appears to diminish with decreasing degree of relatedness in sexual species whenever it is studied – in humans as well as nonhuman species”
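The pattern described in that quote is the qualitative prediction of Hamilton's rule, which Alexander's framework presupposes though it is not spelled out in this form here: altruism is favoured when r·b > c, with r the genetic relatedness, b the benefit to the recipient and c the cost to the actor. A minimal sketch with hypothetical benefit and cost numbers:

```python
# Hamilton's rule (the standard formalization behind the quoted prediction,
# not a formula from the book): altruism is favoured when r * b > c.

def altruism_favoured(r, b, c):
    return r * b > c

b, c = 4.0, 1.0  # hypothetical benefit and cost, in offspring equivalents
for r in (0.5, 0.25, 0.125, 0.03125):  # full sib, half-sib/nephew, cousin, ...
    print(r, altruism_favoured(r, b, c))

assert altruism_favoured(0.5, b, c)        # favoured toward full siblings
assert not altruism_favoured(0.125, b, c)  # not toward first cousins: 0.125*4 < 1
```

As r falls, an ever larger benefit-to-cost ratio is needed, which is why altruism is expected to thin out with decreasing relatedness.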

October 5, 2016 Posted by | Anthropology, Biology, Books, Evolutionary biology, Genetics, Philosophy |

Human Drug Metabolism (III)

This is my third post about this book. You can read my previous posts here and here. In this post I have covered material from chapter 7, dealing with ‘factors affecting drug metabolism’.

“Data from animal studies in one country are usually comparable with that of another, provided the animal species and strain are the same. This provides a consistent picture of the basic pharmacological and toxicological actions of a candidate drug in a living organism […] it has been obvious since animal testing began that there would be large differences in the way a drug might perform in man compared with animal species […]. Unfortunately, there is no experimental model yet designed that can not only consider human biochemistry and physiology, but also the effects of age, smoking, legal and illegal drug usage, gender, diet, environment, disease and finally genetic variation. Indeed, many clinical studies have revealed enormous differences in drug clearance and pharmacological effect even in age, sex and ethnically matched individuals. In effect, this means that the first year or so of a drug’s clinical life is a vast, but monitored experiment, involving hundreds of thousands of patients and there is no guarantee of success.”

“Most biotransformational polymorphisms that might potentially cause a problem clinically are due to an inability of those with defective enzymes to remove the drug from the system. Drug failure can occur if the agent is administered as a pro-drug and requires some metabolic conversion to an active metabolite. Drug accumulation can lead to unpleasant side effects and loss of patient tolerance for the agent. […] Overall, there are a large number of factors that can influence drug metabolism, either by increasing clearance to cause drug failure, or by preventing clearance to lead to toxicity. In the real world, it is often impossible to delineate the different conflicting factors which result in net changes in drug clearance which cause a drug to fall out of, or climb above, the therapeutic window. It may only be possible clinically in many cases to try to change what appears to be the major cause to bring about a resolution of the situation to restore curative and non-toxic drug levels.”

“Most population studies of human polymorphisms list the allelic frequency, that is, how many of an ethnic group contain the alleles in question. […] The actual haplotypes in the population, that is, which individuals express which combinations of alleles, are not the same as the population allelic frequency. […] If an SNP or a combination of SNPs is a fairly mild defect in the enzyme when it is homozygously expressed, then the heterozygotes will show little impairment and the polymorphism may be clinically irrelevant. With other SNPs, the enzyme produced may be completely non-functional. Homozygotes will be virtually unable to clear the drug and heterozygotes will show impairment also. There are also smaller populations of UMs, or ultra-rapid metabolizers, which may have a feature of their enzyme which either makes it super efficient or expressed in abnormally high amounts. […] Phenotyping will group patients in very broad EMs [extensive metabolizers], IMs [intermediate metabolizers] or PM [poor metabolizers] categories, but will be unable to distinguish between heterozygous and homozygous EMs. Although genotyping may be very helpful in dosage estimation in the initiation of therapy, there is no substitute for the normal process of therapeutic monitoring, which is effectively phenotyping the individual in the real world in terms of maximizing response and minimizing toxicity.”
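The allele-frequency/haplotype distinction in that quote is worth making concrete. Under random mating (Hardy-Weinberg proportions — my illustration, not the book's), a defective allele at frequency q partitions the population into the EM/IM/PM-style groups as follows:

```python
# Sketch (mine, not from the book): an allelic frequency alone does not say
# how many individuals are homozygous (PM-like) versus heterozygous (IM-like).
# Under Hardy-Weinberg, a defective allele at frequency q gives:

def genotype_frequencies(q):
    p = 1.0 - q
    return {"homozygous_normal": p * p,     # extensive metabolizers (EM)
            "heterozygous": 2 * p * q,      # carriers, partial impairment
            "homozygous_defective": q * q}  # poor metabolizers (PM)

freqs = genotype_frequencies(0.3)  # hypothetical defective-allele frequency
print(freqs)

assert abs(freqs["homozygous_defective"] - 0.09) < 1e-9  # 30% allele -> only 9% PM
assert abs(sum(freqs.values()) - 1.0) < 1e-9
```

So a 30 per cent allelic frequency translates into only 9 per cent poor metabolizers but 42 per cent heterozygotes — which is why phenotyping and genotyping give different pictures of the same population.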

“it is clear that there is a vast amount of genetic variation across humanity in terms of biotransformational capability and so the idea that in therapeutics, ‘one size fits all’ is not only outdated, but fabulously naïve. […] Unfortunately, detecting and responding successfully to human biotransformational polymorphisms has proved to be extremely problematic. In terms of polymorphism detection, this area is a classic illustration of how the exploration of the human genome with powerful molecular biological tools may unearth many apparently marked polymorphic defects that may not necessarily translate into a measurable clinical impact in terms of efficacy and toxicity. In reality, many more scientists have the opportunity to discover and publish such polymorphisms in vitro, than there are clinical scientists, resources and indeed cooperative volunteers or patients in sufficient quantity to determine practical clinical relevance.”

“the CYP3A group (chromosome 7) metabolize around half of all drugs […] variation in the metabolism of CYP3A substrates […] can be up to ten-fold in terms of drug clearances and up to 90-fold in liver protein expression. […] It is likely that the full extent of the variation in CYP3A4 is still to be discovered […] While it is thought that CYP3A4 is not subject to an obvious major polymorphism, CYP3A5 definitely is. […] *3/*3 individuals form no serviceable CYP3A5. Functional CYP3A5 is found in around 20 per cent of Caucasians, half of Chinese/Japanese, 70 per cent of Hispanics and more than 80 per cent of African Americans.”
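A small back-of-the-envelope exercise (my arithmetic, assuming Hardy-Weinberg proportions, which the book does not invoke here): if a fraction f of a population expresses functional CYP3A5, the non-functional *3/*3 homozygotes make up 1 − f, so the *3 allele frequency is √(1 − f).

```python
import math

# Sketch (my arithmetic, not the book's): back-calculating the *3 allele
# frequency from the quoted fractions of functional-CYP3A5 expressers,
# assuming Hardy-Weinberg proportions.

def star3_allele_frequency(fraction_functional):
    return math.sqrt(1.0 - fraction_functional)

# Functional-expresser fractions quoted in the text:
populations = {"Caucasian": 0.20, "Chinese/Japanese": 0.50,
               "Hispanic": 0.70, "African American": 0.80}
for name, f in populations.items():
    print(name, round(star3_allele_frequency(f), 2))

assert abs(star3_allele_frequency(0.20) - 0.894) < 0.001  # *3 very common in Caucasians
assert star3_allele_frequency(0.80) < star3_allele_frequency(0.20)
```

The striking implication is that the *3 allele is at roughly 89 per cent frequency in Caucasians — the non-functional variant is the common one.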

“A particularly dangerous polymorphism clinically was identified in the 1980s for one of the methyltransferases. The endogenous role of S-methylating thiopurine S-methyltransferase (TPMT) is not that clear, but […] [t]hese drugs are […] effective in some childhood leukaemias […] TPMT highlights the genotyping/phenotyping issue mentioned earlier in the management of patients with polymorphisms. Genotyping will reveal the level of TPMT expression that should be expected in the otherwise healthy patient. However, there are many factors which impact day-to-day TPMT expression during thiopurine therapy. […] Hence, what might be predicted from a genotype test may bear little resemblance to how the enzyme is performing on a particular day in a treatment cycle. So clinically, it is preferred to test actual TPMT activity.”

“Understanding of sulphonation and its roles in endogenous as well as xenobiotic metabolism is not as advanced compared with that of CYPs; however, the role of SULTs in the activation of carcinogens is becoming more apparent. One of the major influences on SULT activity is their polymorphic nature; in the case of one of the most important toxicologically relevant SULTs, SULT1A1, this isoform exists as three variants, SULT1A1*1 (wild-type), SULT1A1*2 and SULT1A1*3. The *1 variant allele is found in the majority of Caucasians (around 65 per cent), whilst the *2 variant differs only in the exchange of one amino acid for another. This single amino acid change has profound effects on the stability and catalytic activity of the isoform. The *2 variant is found in approximately 32 per cent of Caucasians and catalytically faulty […] About 9 in 10 Chinese people have the *1 allele and about 8 per cent have allele *2. About half of African-Americans have *1 and a third have *2. Interestingly, there is a *3 which is rare in most races but accounts for more than 22 per cent of African Americans. There is also considerable variation in SULT2A1 and SULT2B1, which are the major hydroxysteroid sulphators in the body, which may have implications for sex steroid and cholesterol handling. […] from the cancer-risk viewpoint, a highly active SULT1A1 *1 is usually an advantage in that it usually removes reactive species rapidly as stable sulphates. With some agents it is problematic as certain carcinogens such as acetylfluorene are indirectly activated to reactive species by SULTs. In addition, protective dietary flavonoids […] are also rapidly cleared by SULT1A1 *1, so there is a combination of production of toxins and loss of protective dietary agents. In terms of carcinogenesis risk, SULT1A1*2 could be a liability as potentially damaging substrates such as electrophilic toxins cannot be cleared rapidly. 
However, in some circumstances the *2 allele can be rather protective as […] it also allows protective agents [to] remain in tissues for longer periods. The combinations are endless and so it is often extremely difficult to predict risks of carcinogenicity for individuals and toxin exposures.”

“GSTs are polymorphic and much research has been directed at linking increased predisposition to cytotoxicity and carcinogenicity with defective GST phenotypes. Active wild-type GSTMu-1 is found in around 60 per cent of Caucasians, but a non-functional version of the isoform is found in the remainder. […] GST-M1 null (non-functional alleles) can predispose to risks of prostate abnormalities and GST Pi is subject to several SNPs and many attempts have been made to link these SNPs with the consequences of failure to detoxify reactive species, such as the risk of lung cancer. […] Carcinogenesis may be due to a complex mix of factors, where different enzyme expression and activities may combine with particular reactive species from specific parent xenobiotics that lead to DNA damage only in certain individuals. Resolving specific risk factors may be extremely difficult in such circumstances. […] in cancer chemotherapy, there is evidence that the presence of GST-M1 and GST-T1 null (non-functional) alleles predisposes children to a six-fold higher level of adverse events usually seen with antineoplastic drugs, such as bone marrow damage, nephrotoxicity and neurotoxicity.”

“The effects of age on drug clearance and metabolism have been known since the 1950s, but they have been extensively investigated in the last 20 or so years. It is now generally accepted that at the extremes of life, neonatal and geriatric, drug clearance can be significantly different from the rest of humanity. In general, neonates, i.e. those less than four weeks old, cannot clear certain agents due to immaturity of drug metabolizing systems. Those over retirement age cannot clear the drugs due to loss of efficiency in their metabolizing systems. Either way, the net result can be toxicity due to drug accumulation. […] It seems that the inability of older people to clear drugs is not necessarily related to the efficacy of their CYP-mediated oxidations, which are often not much different from that of younger individuals. Studies with the major CYPs in vitro have revealed that CYP2D6 is unaffected by age, as are most other CYPs, with the exception of CYP1A2, which does decline in activity in the elderly. […] In general, there is little significant change in the inducibility in most CYPs, or in the capability of conjugation systems in vitro. […] there are significant changes in the liver itself, as it decreases in mass and its blood flow is reduced as we age. This occurs at the rate of around 0.5–1.5 per cent per year, so by the time we hit 60–70, we may have up to a 40 per cent decline in liver blood flow compared with a 30-year-old. Other factors include gradual decline in renal function, increased fat deposits and reduction in gut blood flow, which affects absorption. […] The problem arises that the drug’s bioavailability increases due to lack of first-pass clearance; this means that from a standard dose, blood levels can be considerably higher than would be expected in a 40-year-old. This can be a serious problem in drugs with a narrow TI, such as antiarrhythmics. 
In addition, average doses of warfarin required to provide therapeutic anticoagulation in the elderly are less than half those required for younger people. The person’s lifelong smoking and drinking habits, as well as older individuals’ sometimes erratic diet also complicate this situation. Among the drugs cleared more slowly in older people are antipsychotics, paracetamol, antidepressants, benzodiazepines, warfarin, beta-blockers and indomethacin.”
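The quoted liver blood flow figures are easy to check with a quick compounding calculation (my arithmetic, not the book's): a 0.5–1.5 per cent annual decline from age 30 onward.

```python
# Quick check of the quoted numbers: annual decline in liver blood flow,
# compounded from age 30 to the 60-70 range.

def remaining_flow(annual_decline, years):
    return (1.0 - annual_decline) ** years

# At the upper quoted rate (1.5 %/year) over 35 years (age 30 -> 65):
decline = 1.0 - remaining_flow(0.015, 35)
print(round(decline * 100))  # roughly the "up to 40 per cent" the text quotes

assert 0.35 < decline < 0.45
# At the lower quoted rate (0.5 %/year) the loss is much smaller:
assert 1.0 - remaining_flow(0.005, 35) < 0.20
```

So the "up to 40 per cent" figure corresponds to the upper end of the quoted annual rate; at the lower end the loss over the same span is closer to 16 per cent.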

“Thousands of polyphenols are found in plants, vegetables, fruit, as well as tea, coffee, wine and fruit juices. […] Flavonoids such as quercetin and fisetin are excellent substrates for COMT, so competitively inhibiting the metabolism of endogenous catecholamine and catechol oestrogens. Quercetin and other polyphenols are found in various foods such as soy (genestein) and they are potent inhibitors of SULT1A1 which sulphate endogenous oestrogens, so potentiating the effects of oestrogens in the body. Many of these flavonoids and isoflavonoids are manufactured and sold as cancer preventative agents; however, it is more likely that their elevation of oestrogen levels may have the opposite effect in the long term. It is also likely that various polyphenols influence other endogenous substrates of sulphotransferases, such as thyroid hormones and various catecholamines. It is gradually becoming apparent that polyphenols can induce UGTs; indeed, it would be surprising if they did not. […] Overall, it is likely that there are a large number of polyphenols that are potent modulators of CYPs and conjugative enzymes. […] It is clear that diet can substantially modulate biotransformation […] As to the effects on prescription drugs, […] abrupt changes in a person’s diet may significantly alter the clearance of drugs and lead to loss of efficacy or toxicity.”

“In general, experimental or ‘probe’ drugs […] which are used to study the activities of a number of CYPs, are metabolized more quickly by women than men. This is allowing for differences in weight, fat distribution (body mass index) and volume of distribution […] It appears that CYP expression is linked to growth hormone (GH) and about the same amount is secreted over 24 hours in both sexes. In animals the pattern of release of the hormone is crucial to the effects on the CYPs; in females, GH is secreted in small but more or less continuous pulses, while males secrete large pulses, then periods of no secretion. The system is thought to be similar in humans. […] Little is known of the effects of the menopause and hormone replacement, where steroid metabolism changes dramatically. It is highly likely that these events could have profound effects on female drug clearance. […] females in general are more susceptible to drug adverse reactions than males, especially hepatotoxic effects.”

“For those chronically dependent on ethanol their CYP2E1 levels can be ten-fold higher than non-drinkers and they would clear CYP2E1 substrates extremely quickly if they chose to be sober for a period of time. This may lead to the accumulation of metabolites of the substrates. It is apparent that alcoholics who are sober can suffer paracetamol (acetaminophen)-induced liver toxicity at overdoses of around half that of non-drinkers, which is due to CYP2E1 induction. […] the vast variation in ADH [alcohol dehydrogenase] catalytic activity across the human race is mainly due to just a few SNPs that profoundly change the efficiency of the isoforms. ADH1B/*1 is the most effective variant and is the ADH wild-type […] Part of a ‘successful’ career as an alcoholic depends on possessing the ADH1B/*1 isoform. The other defective isoforms are found in low frequencies in alcoholics and cirrhotics. […] in the vast majority of individuals, whatever their variant of ADH, they are able to process acetaldehyde to acetate and water, as the consequences of failing to do this are severe. With ALDH, the wild-type and gold standard is ALDH2*1/*1, which has the highest activity of all these isoforms and is the second essential component for an alcoholic career. […] the variant ALDH2*1/*2 has less than a quarter of the wild-type’s capacity and is found predominantly in Eastern races. The variant ALDH2*2/*2 is completely useless and renders the individuals very sensitive to acetaldehyde poisoning, although the toxin is removed eventually by ALDH1A1 which does not seem to be affected by polymorphisms. In a survey of 1300 Japanese alcoholics, there was nobody at all with the ALDH2*2/*2 variant. […] Women are much more vulnerable to ethanol damage and on average die in half the time it generally takes for a male alcoholic to drink himself to death.
Women drink much less than men also – one study indicated that a group of women consumed about 14,000 drinks to induce cirrhosis, whilst men required more than 44,000 to achieve the same effect. Ethanol distributes in total body water only, so in women their greater fat content means that blood ethanol levels are higher than men of similar weight and age.”
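The body-water point in that last sentence is the basis of the standard Widmark relation, which the book does not spell out here. A minimal sketch (my illustration, with hypothetical dose, weight and typical Widmark body-water fractions): peak blood ethanol is roughly dose divided by the body water the dose distributes into.

```python
# Widmark-style sketch (mine, not the book's formula) of why the same dose
# gives a higher blood ethanol level in women: ethanol distributes in total
# body water, so peak concentration is about dose / (r * body_weight), where
# r is the body-water fraction (the "Widmark factor").

def peak_bac_g_per_l(dose_g, weight_kg, r):
    return dose_g / (r * weight_kg)

dose = 20.0    # grams of ethanol (roughly two standard drinks) - hypothetical
weight = 70.0  # same body weight for both individuals - hypothetical
r_male, r_female = 0.68, 0.55  # typical Widmark factors

bac_m = peak_bac_g_per_l(dose, weight, r_male)
bac_f = peak_bac_g_per_l(dose, weight, r_female)
print(round(bac_m, 2), round(bac_f, 2))

assert bac_f > bac_m  # higher blood level in the woman at equal weight and dose
```

With greater fat content, r is smaller, the water compartment is smaller, and the same dose produces a higher concentration — no difference in metabolism required.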

September 15, 2016 Posted by | Books, Cancer/oncology, Genetics, Medicine, Pharmacology |

A few lectures

The sound quality of this lecture is not completely optimal – there’s a recurring echo popping up now and then which I found slightly annoying – but this should not keep you from watching the lecture. It’s quite a good lecture, and very accessible – I don’t really think you even need to know anything about genetics to follow most of what he’s talking about here; as far as I can tell it’s a lecture intended for people who don’t really know much about population genetics. He introduces key concepts as they are needed and he does not go much into the technical details which might cause people trouble (this of course also makes the lecture somewhat superficial, but you can’t get everything). If you’re the sort of person who wants details not included in the lecture, you’re probably already reading e.g. Razib Khan (who incidentally recently blogged/criticized a paper not too dissimilar from the one discussed in the lecture, dealing with South Asia)…

I must admit that I actually didn’t like this lecture very much, but I figured I might as well include it in this post anyway.

I found some of the questions included and some aspects of the coverage a bit ‘too basic’ for my taste, but other readers interested in chess may like Anna’s approach better; like Krause’s lecture I think it’s an accessible one, despite the fact that it actually covers many lines in quite a bit of detail. It’s a long lecture but I don’t think you necessarily need to watch all of it in one go (…or at all?) – the analysis of the second game, the Kortschnoj-Gheorghiu game, starts around 45 minutes in, so that might for example be a good place to include a break, if a break is required.

February 1, 2016 Posted by | Anthropology, Archaeology, Chess, Computer science, Evolutionary biology, Genetics, History, Lectures |

The Origin of Species

I figured I ought to blog this book at some point, and today I decided to take the time to do it. This is the second book by Darwin I’ve read – for blog content dealing with Darwin’s book The Voyage of the Beagle, see these posts. The two books are somewhat different; Beagle is sort of a travel book written by a scientist who decided to write down his observations during his travels, whereas Origin is a sort of popular-science research treatise – for more details on Beagle, see the posts linked above. If you plan on reading both the way I did, I think you should aim to read them in the order they were written.

I did not rate the book on goodreads because I could not think of a fair way to rate the book; it’s a unique and very important contribution to the history of science, but how do you weigh the other dimensions? I decided not to try. Some of the people reviewing the book on goodreads call the book ‘dry’ or ‘dense’, but I’d say that I found the book quite easy to read compared to quite a few of the other books I’ve been reading this year, and it doesn’t actually take that long to read; indeed I read quite a substantial proportion of the book during a one-day trip to Copenhagen and back. The book can be read by most literate people living in the 21st century – you do not need to know any evolutionary biology to read this book – but that said, how you read the book will to some extent depend upon how much you know about the topics about which Darwin theorizes in his book. I had a conversation with my brother about the book a short while after I’d read it, and I recall noting during that conversation that in my opinion one would probably get more out of reading this book if one has at least some knowledge of geology (for example some knowledge about the history of the theory of continental drift – this book was written long before the theory of plate tectonics was developed), paleontology, Mendel’s laws/genetics/the modern synthesis and modern evolutionary thought, ecology and ethology, etc. Whether or not you actually do ‘get more out of the book’ if you already know some stuff about the topics about which Darwin speaks is perhaps an open question, but I think a case can certainly be made that someone who already knows a bit about evolution and related topics will read this book in a different manner than will someone who knows very little about these topics.
I should perhaps in this context point out to people new to this blog that even though I hardly consider myself an expert on these sorts of topics, I have nevertheless read quite a bit of stuff about those things in the past – books like this, this, this, this, this, this, this, this, this, this, this, this, this, this, and this one – so I was reading the book perhaps mainly from the vantage point of someone at least somewhat familiar both with many of the basic ideas and with a lot of the refinements of these ideas that people have added to the science of biology since Darwin’s time. One of the things my knowledge of modern biology and related topics had not prepared me for was how moronic some of the ideas of Darwin’s critics were at the time and how stupid some of the implicit alternatives were, and this is actually part of the fun of reading this book; there was a lot of stuff back then which even many of the people presumably held in high regard really had no clue about, and even outrageously idiotic ideas were seemingly taken quite seriously by people involved in the debate. I assume that biologists still to this day have to spend quite a bit of time and effort dealing with ignorant idiots (see also this), but back in Darwin’s day these people were presumably to a much greater extent taken seriously even among people in the scientific community, if indeed they were not themselves part of the scientific community.

Darwin was not right about everything and there’s a lot of stuff that modern biologists know which he had no idea about, so naturally some mistaken ideas made their way into Origin as well; for example the idea of the inheritance of acquired characteristics (Lamarckian inheritance) occasionally pops up and is implicitly defended in the book as a credible complement to natural selection, as also noted in Oliver Francis’ afterword to the book. On a general note it seems that Darwin did a better job convincing people about the importance of the concept of evolution than he did convincing people that the relevant mechanism behind evolution was natural selection; at least that’s what’s argued in wiki’s featured article on the history of evolutionary thought (to which I have linked before here on the blog).

Darwin emphasizes more than once in the book that evolution is a very slow process which takes a lot of time (for example: “I do believe that natural selection will always act very slowly, often only at long intervals of time, and generally on only a very few of the inhabitants of the same region at the same time”, p.123), and arguably this is also something about which he is part right/part wrong because the speed with which natural selection ‘makes itself felt’ depends upon a variety of factors, and it can be really quite fast in some contexts (see e.g. this and some of the topics covered in books like this one); though you can appreciate why he held the views he did on that topic.
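The part-right/part-wrong verdict can be made quantitative with a textbook toy simulation (mine, obviously not Darwin's): for a favoured haploid allele with selective advantage s, the standard per-generation recursion is p' = p(1 + s) / (1 + ps). Weak selection takes thousands of generations to carry a rare allele to near-fixation; strong selection takes only dozens.

```python
# Toy simulation (my illustration): generations needed for a favoured allele
# to go from rare to common under haploid selection, p' = p(1+s) / (1 + p*s).

def generations_to_reach(p0, target, s):
    p, gens = p0, 0
    while p < target:
        p = p * (1 + s) / (1 + p * s)
        gens += 1
    return gens

print(generations_to_reach(0.01, 0.99, 0.001))  # weak selection: thousands of generations
print(generations_to_reach(0.01, 0.99, 0.5))    # strong selection: a few dozen

assert generations_to_reach(0.01, 0.99, 0.001) > 1000
assert generations_to_reach(0.01, 0.99, 0.5) < 50
```

So Darwin's "always very slowly" holds for the small selective advantages he mostly had in mind, but breaks down under the strong selection seen in cases like antibiotic or pesticide resistance.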

A big problem confronted by Darwin was that he didn’t know how genes work, so in a sense the whole topic of the ‘mechanics of the whole thing’ – the ‘nuts and bolts’ – was more or less a black box to him (I have included a few quotes which indirectly relate to this problem in my coverage of the book below; as can be inferred from those quotes Darwin wasn’t completely clueless, but he might have benefited greatly from a chat with Gregor Mendel…) – in a way a really interesting thing about the book is how plausible the theory of natural selection is made out to be despite this blatantly obvious (at least to the modern reader) problem. Darwin was incidentally well aware there was a problem; just 6 pages into the first chapter of the book he observes frankly that: “The laws governing inheritance are quite unknown”. Some of the quotes below, e.g. on reciprocal crosses, illustrate that he was sort of scratching the surface, but in the book he never does more than that.

Below I have added some quotes from the book.

“Certainly no clear line of demarcation has as yet been drawn between species and sub-species […]; or, again, between sub-species and well-marked varieties, or between lesser varieties and individual differences. These differences blend into each other in an insensible series; and a series impresses the mind with the idea of an actual passage. […] I look at individual differences, though of small interest to the systematist, as of high importance […], as being the first step towards such slight varieties as are barely thought worth recording in works on natural history. And I look at varieties which are in any degree more distinct and permanent, as steps leading to more strongly marked and more permanent varieties; and at these latter, as leading to sub-species, and to species. […] I attribute the passage of a variety, from a state in which it differs very slightly from its parent to one in which it differs more, to the action of natural selection in accumulating […] differences of structure in certain definite directions. Hence I believe a well-marked variety may be justly called an incipient species […] I look at the term species as one arbitrarily given, for the sake of convenience, to a set of individuals closely resembling each other, and that it does not essentially differ from the term variety, which is given to less distinct and more fluctuating forms. The term variety, again, in comparison with mere individual differences, is also applied arbitrarily, and for mere convenience’ sake. […] the species of large genera present a strong analogy with varieties. And we can clearly understand these analogies, if species have once existed as varieties, and have thus originated: whereas, these analogies are utterly inexplicable if each species has been independently created.”

“Owing to [the] struggle for life, any variation, however slight and from whatever cause proceeding, if it be in any degree profitable to an individual of any species, in its infinitely complex relations to other organic beings and to external nature, will tend to the preservation of that individual, and will generally be inherited by its offspring. The offspring, also, will thus have a better chance of surviving, for, of the many individuals of any species which are periodically born, but a small number can survive. I have called this principle, by which each slight variation, if useful, is preserved, by the term of Natural Selection, in order to mark its relation to man’s power of selection. We have seen that man by selection can certainly produce great results, and can adapt organic beings to his own uses, through the accumulation of slight but useful variations, given to him by the hand of Nature. But Natural Selection, as we shall hereafter see, is a power incessantly ready for action, and is as immeasurably superior to man’s feeble efforts, as the works of Nature are to those of Art. […] In looking at Nature, it is most necessary to keep the foregoing considerations always in mind – never to forget that every single organic being around us may be said to be striving to the utmost to increase in numbers; that each lives by a struggle at some period of its life; that heavy destruction inevitably falls either on the young or old, during each generation or at recurrent intervals. Lighten any check, mitigate the destruction ever so little, and the number of the species will almost instantaneously increase to any amount. The face of Nature may be compared to a yielding surface, with ten thousand sharp wedges packed close together and driven inwards by incessant blows, sometimes one wedge being struck, and then another with greater force. 
[…] A corollary of the highest importance may be deduced from the foregoing remarks, namely, that the structure of every organic being is related, in the most essential yet often hidden manner, to that of all other organic beings, with which it comes into competition for food or residence, or from which it has to escape, or on which it preys.”

“Under nature, the slightest difference of structure or constitution may well turn the nicely-balanced scale in the struggle for life, and so be preserved. How fleeting are the wishes and efforts of man! how short his time! And consequently how poor will his products be, compared with those accumulated by nature during whole geological periods. […] It may be said that natural selection is daily and hourly scrutinising, throughout the world, every variation, even the slightest; rejecting that which is bad, preserving and adding up all that is good; silently and insensibly working, whenever and wherever opportunity offers, at the improvement of each organic being in relation to its organic and inorganic conditions of life. We see nothing of these slow changes in progress, until the hand of time has marked the long lapses of ages, and then so imperfect is our view into long past geological ages, that we only see that the forms of life are now different from what they formerly were.”

“I have collected so large a body of facts, showing, in accordance with the almost universal belief of breeders, that with animals and plants a cross between different varieties, or between individuals of the same variety but of another strain, gives vigour and fertility to the offspring; and on the other hand, that close interbreeding diminishes vigour and fertility; that these facts alone incline me to believe that it is a general law of nature (utterly ignorant though we be of the meaning of the law) that no organic being self-fertilises itself for an eternity of generations; but that a cross with another individual is occasionally perhaps at very long intervals — indispensable. […] in many organic beings, a cross between two individuals is an obvious necessity for each birth; in many others it occurs perhaps only at long intervals; but in none, as I suspect, can self-fertilisation go on for perpetuity.”

“as new species in the course of time are formed through natural selection, others will become rarer and rarer, and finally extinct. The forms which stand in closest competition with those undergoing modification and improvement, will naturally suffer most. […] Whatever the cause may be of each slight difference in the offspring from their parents – and a cause for each must exist – it is the steady accumulation, through natural selection, of such differences, when beneficial to the individual, which gives rise to all the more important modifications of structure, by which the innumerable beings on the face of this earth are enabled to struggle with each other, and the best adapted to survive.”

“Natural selection, as has just been remarked, leads to divergence of character and to much extinction of the less improved and intermediate forms of life. On these principles, I believe, the nature of the affinities of all organic beings may be explained. It is a truly wonderful fact – the wonder of which we are apt to overlook from familiarity – that all animals and all plants throughout all time and space should be related to each other in group subordinate to group, in the manner which we everywhere behold – namely, varieties of the same species most closely related together, species of the same genus less closely and unequally related together, forming sections and sub-genera, species of distinct genera much less closely related, and genera related in different degrees, forming sub-families, families, orders, sub-classes, and classes. The several subordinate groups in any class cannot be ranked in a single file, but seem rather to be clustered round points, and these round other points, and so on in almost endless cycles. On the view that each species has been independently created, I can see no explanation of this great fact in the classification of all organic beings; but, to the best of my judgment, it is explained through inheritance and the complex action of natural selection, entailing extinction and divergence of character […] The affinities of all the beings of the same class have sometimes been represented by a great tree. I believe this simile largely speaks the truth. The green and budding twigs may represent existing species; and those produced during each former year may represent the long succession of extinct species. At each period of growth all the growing twigs have tried to branch out on all sides, and to overtop and kill the surrounding twigs and branches, in the same manner as species and groups of species have tried to overmaster other species in the great battle for life. 
The limbs divided into great branches, and these into lesser and lesser branches, were themselves once, when the tree was small, budding twigs; and this connexion of the former and present buds by ramifying branches may well represent the classification of all extinct and living species in groups subordinate to groups. Of the many twigs which flourished when the tree was a mere bush, only two or three, now grown into great branches, yet survive and bear all the other branches; so with the species which lived during long-past geological periods, very few now have living and modified descendants. From the first growth of the tree, many a limb and branch has decayed and dropped off; and these lost branches of various sizes may represent those whole orders, families, and genera which have now no living representatives, and which are known to us only from having been found in a fossil state. As we here and there see a thin straggling branch springing from a fork low down in a tree, and which by some chance has been favoured and is still alive on its summit, so we occasionally see an animal like the Ornithorhynchus or Lepidosiren, which in some small degree connects by its affinities two large branches of life, and which has apparently been saved from fatal competition by having inhabited a protected station. As buds give rise by growth to fresh buds, and these, if vigorous, branch out and overtop on all sides many a feebler branch, so by generation I believe it has been with the great Tree of Life, which fills with its dead and broken branches the crust of the earth, and covers the surface with its ever branching and beautiful ramifications.”

“No one has been able to point out what kind, or what amount, of difference in any recognisable character is sufficient to prevent two species crossing. It can be shown that plants most widely different in habit and general appearance, and having strongly marked differences in every part of the flower, even in the pollen, in the fruit, and in the cotyledons, can be crossed. […] By a reciprocal cross between two species, I mean the case, for instance, of a stallion-horse being first crossed with a female-ass, and then a male-ass with a mare: these two species may then be said to have been reciprocally crossed. There is often the widest possible difference in the facility of making reciprocal crosses. Such cases are highly important, for they prove that the capacity in any two species to cross is often completely independent of their systematic affinity, or of any recognisable difference in their whole organisation. On the other hand, these cases clearly show that the capacity for crossing is connected with constitutional differences imperceptible by us, and confined to the reproductive system. […] fertility in the hybrid is independent of its external resemblance to either pure parent. […] The foregoing rules and facts […] appear to me clearly to indicate that the sterility both of first crosses and of hybrids is simply incidental or dependent on unknown differences, chiefly in the reproductive systems, of the species which are crossed. […] Laying aside the question of fertility and sterility, in all other respects there seems to be a general and close similarity in the offspring of crossed species, and of crossed varieties. If we look at species as having been specially created, and at varieties as having been produced by secondary laws, this similarity would be an astonishing fact. But it harmonizes perfectly with the view that there is no essential distinction between species and varieties. 
[…] the facts briefly given in this chapter do not seem to me opposed to, but even rather to support the view, that there is no fundamental distinction between species and varieties.”

“Believing, from reasons before alluded to, that our continents have long remained in nearly the same relative position, though subjected to large, but partial oscillations of level, I am strongly inclined to…” (…’probably get some things wrong…’, US)

“In considering the distribution of organic beings over the face of the globe, the first great fact which strikes us is, that neither the similarity nor the dissimilarity of the inhabitants of various regions can be accounted for by their climatal and other physical conditions. Of late, almost every author who has studied the subject has come to this conclusion. […] A second great fact which strikes us in our general review is, that barriers of any kind, or obstacles to free migration, are related in a close and important manner to the differences between the productions of various regions. […] A third great fact, partly included in the foregoing statements, is the affinity of the productions of the same continent or sea, though the species themselves are distinct at different points and stations. It is a law of the widest generality, and every continent offers innumerable instances. Nevertheless the naturalist in travelling, for instance, from north to south never fails to be struck by the manner in which successive groups of beings, specifically distinct, yet clearly related, replace each other. […] We see in these facts some deep organic bond, prevailing throughout space and time, over the same areas of land and water, and independent of their physical conditions. The naturalist must feel little curiosity, who is not led to inquire what this bond is.  This bond, on my theory, is simply inheritance […] The dissimilarity of the inhabitants of different regions may be attributed to modification through natural selection, and in a quite subordinate degree to the direct influence of different physical conditions. 
The degree of dissimilarity will depend on the migration of the more dominant forms of life from one region into another having been effected with more or less ease, at periods more or less remote; on the nature and number of the former immigrants; and on their action and reaction, in their mutual struggles for life; the relation of organism to organism being, as I have already often remarked, the most important of all relations. Thus the high importance of barriers comes into play by checking migration; as does time for the slow process of modification through natural selection. […] On this principle of inheritance with modification, we can understand how it is that sections of genera, whole genera, and even families are confined to the same areas, as is so commonly and notoriously the case.”

“the natural system is founded on descent with modification […] and […] all true classification is genealogical; […] community of descent is the hidden bond which naturalists have been unconsciously seeking, […] not some unknown plan or creation, or the enunciation of general propositions, and the mere putting together and separating objects more or less alike.”

September 27, 2015 Posted by | Biology, Books, Botany, Evolutionary biology, Genetics, Geology, Zoology |

An Introduction to Tropical Rain Forests (III)

This will be my last post about the book. I’ve included some observations from the second half of the book below.

“In the present chapter we look at […] time scales of a few years to a few centuries, up to the life spans of one or a few generations of trees. Change is examined in the context of development and disintegration of the forest canopy, the forest growth cycle […] There seems to be a general model of forest dynamics which holds in many different biomes, albeit with local variants. […] Two spatial scales of canopy dynamics can be distinguished: patch disturbance, which involves one or a few trees, and community-wide disturbance. Patch disturbance is sometimes called ‘forest gap-phase dynamics’ and since about the mid-1970s has been one of the main interests of forest scientists in many parts of the world.”

“Species differ in the microclimate in which they successfully regenerate. […] the microclimates within a rain forest […] are mainly determined by size of the canopy gap. The microclimate above the forest canopy, which is similar to that in a large clearing, is substantially different from that near the floor below mature phase forest. […] Outside, wind speeds during the day are higher, as is air temperature, while relative humidity is lower. […] The light climate within a forest is complex. There are four components, skylight coming through canopy holes, direct sunlight, seen as sunflecks on the forest floor, light transmitted through leaves, and light reflected from leaves, trunks and other surfaces. […] Both the quantity and quality of light reaching the plant is known to be of profound importance in the mechanisms of gap-phase dynamics […] The waveband 400 to 700 nm (which is approximately the visual spectrum) is utilized for photosynthesis and is known as photosynthetically active radiation or PAR. The forest floor only receives up to c. 2 per cent of the PAR incident on the forest canopy […] In addition to reduction in quantity of PAR within the forest canopy, PAR also changes in quality with a shift in the ratio of red to far-red wavelengths […] the temporal pattern of sunfleck distribution through the day […] is of importance, not just the daily total PAR. […] The role of irradiance in seedling growth and release is easy to observe and has been much investigated. By contrast, little attention has been given to the potential role of plant mineral nutrients. […] So far, nutrients seem unimportant compared to radiation. […] Overall the shade/nutrient interaction story remains unresolved. One part of the picture is likely to be that there is no response to nutrients in dark conditions where irradiance is limiting, but a response at higher irradiances.”

“Canopy gaps have an aerial microclimate like that above the forest but the smaller the gap the less different it is from the forest interior […] Gaps were at first regarded as having a microclimate varying with their size, to be contrasted with closed-forest microclimate. But this is a simplification. […] gaps are neither homogeneous holes nor are they sharply bounded. Within a gap the microclimate is most extreme towards the centre and changes outwards to the physical gap edge and beyond […] The larger the gap the more extreme the microclimate of its centre. […] there is much more variability between small gaps than large ones in microclimate [and] gap size is a poor surrogate measure of microclimate, most markedly over short periods.”

“tree species differ in the amount of solar radiation required for their regeneration. […] Ecologists and foresters continue to engage in vigorous debate as to whether species along [the] spectrum of light climates can be divided into clear, separate groups. […] some strong light-demanders require full light for both seed germination and seedling establishment. These are the pioneer species, set apart from all others by these two features.[168] By contrast, all other species have the capacity to germinate and establish below canopy shade. These may be called climax species. They are able to perpetuate in the same place, but are an extremely diverse group. […] Pioneer species germinate and establish in a gap after its creation […] They grow fast […] Below the canopy seedlings of climax species establish and, as the pioneer canopy breaks up after the death of individual trees, these climax species are ‘released’ […] and grow up as a second growth cycle. Succession has occurred as a group of climax species replaces the group of pioneer species.[…] Climax species as a group […] perpetuate themselves in situ, there is no directional change in species composition. This is called cyclic regeneration or replacement. In a small gap, pre-existing climax seedlings are released. In a large gap pioneers, which appear after gap creation, form the next forest growth cycle. One of the puzzles which remains unsolved is what determines gap-switch size. […] In all tropical rain forest floras there are fewer pioneer than climax species, and they mostly belong to a few families […] The most species-rich forested landscape will be one that includes both patches of secondary forest recovering from a big disturbance and consisting of pioneers, and also patches of primary forest composed of climax species.”

“Rain forest silviculture is the manipulation of the forest to favour species and thereby to enhance its value to humans. […] Timber properties, whether heavy or light, dark or pale, durable or not, are strongly correlated with growth rate and thus to the extent to which the species is light-demanding […]. Thus, the ecological basis of natural forest silviculture is the manipulation of the forest canopy. The biological principle of silviculture is that by controlling canopy gap size it is possible to influence species composition of the next growth cycle. The bigger the gaps the more fast-growing light-demanders will be favoured. This concept has been known in continental Europe since at least the twelfth century. […] The silvicultural systems that have been applied to tropical rain forests belong to one of two kinds: the polycyclic and monocyclic systems, respectively […]. As the name implies, polycyclic systems are based on the repeated removal of selected trees in a continuing series of felling cycles, whose length is less than the time it takes the tree to mature [rotation age]. The aim is to remove trees before they begin to deteriorate from old age […] extraction on a polycyclic system tends to result in the formation of scattered small gaps in the forest canopy. By contrast, monocyclic systems remove all saleable trees at a single operation, and the length of the cycle more or less equals the rotation age of the trees. Except in those cases where there are few saleable trees, damage to the forest is more drastic than under a polycyclic system, the canopy is more extensively destroyed, and bigger gaps are formed. […] the two kinds of system will tend to favour shade-bearing and light-demanding species, respectively, but the extent of the difference will depend on how many trees are felled at each cycle in a polycyclic system.
[…] Low intensity selective logging on a polycyclic system closely mimics the natural processes of forest dynamics and scarcely alters the composition. Monocyclic silvicultural systems, and polycyclic systems with many stems felled per hectare, shift species composition […] The amount of damage to the forest depends more on how many trees are felled than on timber volume extracted. It is commonly the case that for every tree removed for timber (logged) a second tree is totally smashed and a third tree receives damage from which it will recover”

“The essence of shifting agriculture (sometimes called swidden agriculture) is to fell a patch of forest, allow it to dry to the point where it will burn well, and then to set it on fire. The plant mineral nutrients are thereby mobilized and become available to plants in the ash. One or two fast-maturing crops of staple food species are grown […]. Yields then fall and the patch is abandoned to allow secondary forest to grow. Longer-lived species, such as chilli […] and fruit trees, and some root crops such as cassava […] are planted with the staples and continue to yield in the first years of the fallow period. Besides fruit and root crops the bush fallow, as it is often called, provides firewood, medicines, and building materials. After a minimum of 7 to 10 years the cycle can be repeated. There are many variants. Shifting agriculture was invented independently in all parts of the tropical world[253] and has proved sustainable over many centuries. […] It is now realized that shifting agriculture, as traditionally practised, is a sustainable low-input form of cultivation which can continue indefinitely on the infertile soils underlying most tropical rain forest […], provided the carrying capacity of the land is not exceeded. […] Shifting agriculture has the limitation that it can usually only support 10-20 persons per km² […] because at any one time only c. 10 per cent of the area is under cultivation. It breaks down if either the bush fallow period is excessively shortened or if the period of cultivation is extended for too long, either of which is likely to occur if population increases and a land shortage develops. There is, however, another mode of shifting agriculture which is totally destructive […]. Farmers fell and burn the forest and grow crops on the released nutrients for several years in succession, continuing until coppicing potential and the soil seed bank are exhausted, pernicious weeds invade, and soil nutrients are seriously depleted.
They then move on to a new patch of virgin forest. This is happening, for example, in parts of western Amazonia […] Replacement of forests by agriculture totally destroys them. If farmland is abandoned it is likely to take several centuries before all signs of forest succession have disappeared, and species-rich, structurally complex primary forest restored […] Agriculture is the main purpose for which rain forests are cleared. There are several major kinds of agriculture and their impact varies from place to place. Important detail is lost by pan-tropical generalization.”
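The carrying-capacity arithmetic in the passage above is simple enough to sketch. Everything here except the c. 10 per cent cultivated fraction and the 10-20 persons per km² range is an illustrative assumption of mine, not a figure from the book:

```python
# Back-of-the-envelope sketch of the shifting-agriculture carrying-capacity
# logic. Parameter values below are illustrative assumptions.

def cultivated_fraction(crop_years, fallow_years):
    """Fraction of the landscape under cultivation at any one time,
    assuming patches cycle uniformly through cropping + fallow."""
    return crop_years / (crop_years + fallow_years)

def carrying_capacity(crop_years, fallow_years, persons_per_cultivated_km2):
    """Persons per km2 the whole landscape can support."""
    return cultivated_fraction(crop_years, fallow_years) * persons_per_cultivated_km2

# Two crop years followed by an 18-year bush fallow -> 10% cultivated:
frac = cultivated_fraction(2, 18)
# If one cultivated km2 feeds ~150 people (an assumed yield figure),
# the landscape as a whole supports ~15 persons per km2:
cap = carrying_capacity(2, 18, 150)
```

The result lands inside the book's 10-20 persons per km² range, and the sketch also shows why shortening the fallow only helps temporarily: raising the cultivated fraction is exactly the "breakdown" condition the quote describes.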

“The mixed cultivation of trees and crops, agroforestry […], makes use of nutrient cycling by trees, as does shifting agriculture. Trees act as pumps, bringing nutrients into the superficial layers of the soil where shallow-rooted herbaceous crops can utilize them. […] Early research led to the belief that nearly all the mineral nutrients in tropical rain forests are in the above-ground biomass and, despite much evidence to the contrary, this view is still sometimes expressed. [However] the popular belief that most of the nutrients of a tropical rain forest are in the biomass is seldom true.”

“Given a rich regional flora, forests are particularly favourable for the co-existence of many species in the same community, because they provide many different niches. […] The forest provides a whole array of different internal microclimates, both horizontally and vertically [recall this related observation from McMenamin & McMenamin: “One aspect of the environment that controls the number and types of organisms living in the environment is called its dimensionality […]. Two-dimensional (or Dimension 2) environments tend to be flat, whereas three-dimensional environments (Dimension 3) have, to a greater or lesser degree, a third dimension. This third dimension can be either in an upward or a downward direction, or a combination of both directions.” Additional dimensions add additional opportunities for specialization.] […] The same processes operate in all forests but forests have different degrees of complexity in canopy structure and differ in the number of species that occupy the many facets of what may be termed the ‘regeneration niche’. […] one-to-one specialization between a single plant and animal species as a factor of species richness exists only in a few cases […] Guilds of insects specialized to feed on (and where necessary detoxify) particular families or similar families of plants […] is a looser and commoner form of co-evolution and plays a more substantial role in the packing together of numerous sympatric species […] Browsing pressure (‘pest pressure’) of herbivores […] may be one factor that sometimes prevents any single species from attaining dominance, and acts to maintain species richness. In a similar manner dense seedling populations below a parent tree are often thinned out by disease or herbivory […] and this also therefore contributes to the prevention of single species dominance.”

“An important difference of tropical rain forests from others is the occurrence of locally endemic species […]. This is one component of their species richness on the extensive scale. It means that in different places a particular niche may be occupied by different species which never compete because they never meet. It has the consequence that species are likely to become extinct when a rain forest is reduced in extent, more so than in other forest biomes. […] the main reasons why some tropical rain forests are extremely rich in species results from firstly, a long stable climatic history without episodes of extinction, in an equable environment, and in which there is no ‘climatic sieve’ to eliminate some species. Secondly, a forest canopy provides large numbers of spatial and temporal niches […] Thirdly, richness results from interactions with animals, mainly as pollinators, dispersers, or pests. Some of these factors underlie species richness in other biomes also. […] The overall effect of all of humankind’s many different impacts on tropical rain forests is to diminish the numerous dimensions of species richness. Not only does man destroy species, he also simplifies the ecosystems the remaining species inhabit.”

“the claim sometimes made that rain forests contain enormous numbers of drugs just awaiting exploitation does not survive critical examination.[319] Reality is more complex, and there are serious difficulties in developing an economic case for biodiversity conservation based on undiscovered pharmaceuticals. […] The cessation of logging is [likewise] not a realistic option, as too much money is at stake for both the nations and individuals involved.”

“Animal geneticists have given considerable thought to the question of how many individuals are necessary to maintain the full genetic integrity of a species in perpetuity.[425] Much has been learned from zoos. A simple but extremely crude rule-of-thumb is that a minimum population of 50 breeding adults maintains fitness in the short term, thus preserving a species ‘frozen’ at one instant of time. To prevent continual loss of genetic diversity (‘genetic erosion’) over the long term […] requires a big population, and a minimum of 500 breeding adults has been suggested to be necessary. This 50/500 rule is only a very rough approximation and can differ widely between species. […] Most difficult to conserve are animals (or indeed plants too) that live at very low population density (e.g. hornbills, tapir, and top carnivores, such as jaguar and tiger), or that have large territories (e.g. gaur, elephant) […] Increasingly in the future, tropical rain forest will only remain as fragments. […] There is a problem that such fragments may break the 50/500 rule […] and contain too few individuals of a species for its long-term genetic integrity. Species that occur at low density are especially vulnerable to genetic erosion, to chance extinction when numbers fall […], or to inbreeding depression. In particular, many trees live several centuries and may be persisting today but unable to breed, so the species is ‘living but dead’, doomed to extinction. […] small forest remnants may be too small to support certain species and this may have repercussions on other components of the ecosystem. […] Besides reduction in area, forest fragmentation also increases the proportion of edge relative to interior […] and if the fragments are surrounded by open land this will result in a change of microclimate.”
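The "genetic erosion" the 50/500 rule is meant to guard against can be made concrete with the standard population-genetics drift formula: an ideal population of effective size Ne is expected to retain a fraction (1 − 1/(2Ne))^t of its initial heterozygosity after t generations. A minimal sketch (the function name and the 100-generation horizon are my own illustrative choices, not the book's):

```python
def heterozygosity_retained(Ne, generations):
    """Expected fraction of initial heterozygosity remaining after the
    given number of generations of genetic drift in an ideal population
    of effective size Ne: (1 - 1/(2*Ne))**t."""
    return (1 - 1 / (2 * Ne)) ** generations

# The 50/500 contrast over, say, 100 generations:
small = heterozygosity_retained(50, 100)   # ~0.37: substantial erosion
large = heterozygosity_retained(500, 100)  # ~0.90: most diversity kept
```

This makes the quote's point visible: at Ne = 50 most heterozygosity is gone within a century of generations, while at Ne = 500 roughly 90 per cent survives, which is why 50 only "freezes" a species in the short term. As the book stresses, the rule is a crude approximation and real values differ widely between species.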


September 23, 2014 Posted by | Biology, Books, Botany, Ecology, Evolutionary biology, Genetics, Geography |

Aging – Facts and Theories (Interdisciplinary Topics in Gerontology, Vol. 39) (I)

I said some not particularly nice things about this book in my goodreads review. While writing this post I actually started to seriously question my rating, and to ask myself whether I should make some adjustments to the review as well. Given the amount of blogging-worthy stuff included in the first part of the book, in combination with the fact that I have yet to cover a couple of chapters I liked, I thought it might be unfair of me to give this publication one star, all things considered. But I'm not sure I'll change my rating. This part of my review stands: "if I don't give this publication one star it's really hard for me to imagine when I'd ever do that in the context of an academic publication." The baseline requirements are set at a high level when you're dealing with publications like these, and the rating should reflect that.

Below I have added some stuff from the first chapters of the book, and a few comments of my own.

“There is now strong evidence that telomere length is reduced during the serial subculture of human fibroblasts [11–13]. At some point before or during the formation of these cells, the enzyme telomerase, which maintains telomere length, is lost. It has therefore been suggested that commitment is in fact the loss of telomerase [14]. From this point, the cells can divide a given number of times, until the loss of DNA from the ends of chromosomes results in the cessation of cell division. The commitment theory proposes that this number of divisions, the parameter M, is constant, or close to constant.”
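The commitment model quoted above can be caricatured in a few lines. This is a toy sketch, not the book's model: the function name and all numerical values are my own made-up (if order-of-magnitude-plausible) assumptions. The point is only that once telomerase is lost, a fixed loss per division makes the division count M a simple quotient, and hence roughly constant for given parameters:

```python
def divisions_until_arrest(initial_bp, critical_bp, loss_per_division_bp):
    """Toy version of the commitment idea: after telomerase loss a fixed
    number of telomeric base pairs disappears per division, and the cell
    arrests once telomeres reach a critical length. The division count M
    is then fixed by the three parameters."""
    return (initial_bp - critical_bp) // loss_per_division_bp

# Assumed numbers: ~10 kb starting telomeres, arrest near ~5 kb,
# ~100 bp lost per division -> M of a few dozen divisions.
M = divisions_until_arrest(10_000, 5_000, 100)
```

This is the sense in which the theory proposes M is "constant, or close to constant"; the chapter's later criticisms of equating telomere shortening with organismal aging apply to exactly this picture.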

I believe some aspects of this line of work are well known also to people not working in the field, though I may be mistaken; I assume many people curious about these things have heard about stuff like the Hayflick limit. The book has quite a bit of stuff on research dealing with these things, and below are some critical remarks from the book on this topic:

“[Work on human fibroblasts has] led to the concept that the hallmark of cellular aging is the postmitotic cell, the so called senescent cell, which would be one of the causes of the organism’s aging. However, there is no evidence showing that the human organism ages because somatic cells lose the potential to divide. In fact, the assumption goes against all the experiments that have tested the capacity of fibroblasts obtained from old donors to divide. It is obvious that in old age our cells are still capable of proliferating; what takes place is a deregulation in the proliferative response rather than the absence of the capacity to divide [14]. […] Unfortunately, the term cell senescence [has been] generalized and now encompasses any case in which a cell enters an irreversible growth arrest due to a variety of causes. Sometimes quite harsh treatments [have been] used to induce growth arrest [in research], which not surprisingly [have induced] DNA damage and arrest of cell division. Since some investigators believe that the terminal postmitotic stage of proliferating cell populations is originated by oxidative stress, hydrogen peroxide is one of those molecules used to induce the ‘senescent phenotype’ […] [A problem is] that in the absence of detailed molecular data on what constitutes normal aging, it is difficult to decide whether the changes [which have been] reported reflect mechanisms underlying normal cellular aging/senescence or rather produce a mimic of cellular aging/senescence by quite different pathways. […] Now that the term cell senescence is well established to designate a postmitotic state, it is difficult to change the usage; however, it should be used only operationally without any connection with aging of the organism.”

“One modification [of DNA methylation during replicative aging] has particularly drawn the attention of gerontologists; it concerned the shortening of chromosome ends, the telomeres. One can still see claims in the literature that this is one of the main causes of aging. […] Since some immortal cell lines express the enzyme telomerase and develop the capacity to reconstitute telomeres after replication, the link between telomere integrity and replication potential seemed established; investigators were quick to equate telomere shortening with proliferation and aging. The reasoning was based on the syllogism: the number of potential divisions decreases with aging, telomeres are shortened during cell proliferation, hence aging is a function of telomere shortening. The syllogism is unjustified because the major and minor propositions have not been ascertained in comparative gerontology studies.
The erosion of telomeres through division is not universal. In humans, the division in vitro of normal keratinocytes [53, 54], cardiomyocytes [55] and astrocytes [56] is independent of telomere size. Results obtained with normal in vivo and in vitro lymphocytes vary with the laboratory and the methodology. […] In humans, telomere lengths [in one study] did not show a clear correlation with tissue renewal times in vivo […]; moreover, the rate of telomere loss slows throughout the human life span [63]. Fibroblasts from patients with Werner’s syndrome, which have a shorter life span than those of normal age-matched control donors, do not have shorter telomeres than control cells [64]. […] There are other caveats concerning the relationship between telomere shortening and proliferation. Human fibroblasts maintained in the presence of 3% oxygen instead of the usual concentration of 20% have an increased proliferation potential but have shorter telomeres [23]. Radiation-induced senescence-like growth arrest is independent of telomere shortening [68]. […] Telomere biology seems to vary with the species in a way unrelated to aging and to the respective cell proliferation life span in vitro. In non-human primates such as rhesus monkey, Japanese monkey, crab-eating monkey, chimpanzee, and orangutan, TRF [Terminal Restriction Fragment] length was more than double that of human somatic tissues [69]. […] Hamster embryonic fibroblasts express telomerase throughout their replicative life span and the average telomere length does not decrease [73]. Long telomeres, fast cell replicative aging in vitro and short longevity are found in either wild or inbred laboratory mice. […] In summary, it seems that the regulation of the length of telomeres has no implications for aging.”

Note that one needs to be careful about which conclusions to draw here. Hayflick’s finding seems to be correct:

“Most scientists who worked according to the guidelines published by Hayflick could reproduce his results. During the 2nd part of the last century, cell and tissue culture methods became standardized, and culture dishes and media became commercially available. This largely contributed to the interlaboratory standardization of culture methodology and settled to a large extent the controversies. Hayflick’s paradigm, stating that normal nontransformed cells cannot duplicate indefinitely in culture unless transformed into malignant cells, is now largely accepted.”

And I think the point is not that Hayflick’s finding was wrong, but rather that the relevance of this finding to how organisms actually age has been called into question. There’s probably a theoretical limit on cell division, but the limit matters little for the way organisms age because organisms die before such considerations ever come into play – this was my reading of what some of the authors thought about these things. Whether or not I’ve misunderstood the authors, this at least seems to me like a sensible way to interpret the many different findings in this field.

Below I have added some coverage from a chapter on ideas about aging derived from evolutionary biology – again I assume some of these ideas may already be familiar to people reading along, as such ideas are not exactly unknown, but even so there may be some new stuff included in this part of the coverage as well:

“for any species, there is a compromise between longevity and the other life history traits. Some species, like elephants, need to live long to thrive or simply survive while for other ones, like mice, a high lifespan is superfluous. […] any theory of aging needs to take into account that longevity, like the other life history traits, does not evolve freely, and any hypothesis positing that longevity of human beings could reach astronomic values simply ignores basic concepts in biology.”

“For most of species, life in the wild is rather short because of the threats encountered by animals, with the consequence that many of them die at a younger age than they would in a protected environment. The basic consequence is that all events occurring beyond some age are of no importance to the species and to most of animals […] in animals, deleterious mutations expressed at young age are selected against, because they impair reproduction. […] By contrast, if the mutations are expressed at old age only, they will not be selected against because their bearers have already transmitted their genes to the next generation […] Mutations only expressed at old age will have no real effect on animals because most of them are already dead at this age […]. If the whole process is repeated for many mutations and generations, one can easily understand that deleterious mutations will accumulate at old age, i.e. animals will be loaded with many health problems if they reach old age. This theory [is] called the theory of the accumulation of mutations at old age […] Williams [added to this theory the idea that] some alleles could have favorable effects at young age and deleterious ones at old age […] [the idea being that such] alleles could be selected due to their positive effects at young age, despite the negative effects at old age. […] Williams’ theory is […] called the antagonistic pleiotropy theory of aging. […] for the time being, the evidence in favor of the hypothesis of the antagonistic pleiotropy theory of aging appears to be limited. In addition, because only a very few genes have been found to be associated with aging or longevity, it is difficult to show that mutations accumulate with age, as postulated by the theory of the accumulation of mutations at old age. However, it would be going too far to claim that genes with negative effects at old age (and possibly positive effects at young age) do not exist. 
For instance, a high growth hormone level is a risk for premature mortality in middle-aged men [50], while a too low level at young age impairs growth. This increased growth hormone level is probably not linked to a severe mutation in a single gene”
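
The declining force of selection at older ages, which underlies both the mutation-accumulation and the antagonistic pleiotropy arguments quoted above, can be illustrated with a toy calculation. This is my own sketch, not from the book; the mortality rate, the maximum age, and the assumption of constant fecundity at all ages are all made-up simplifications:

```python
# Toy sketch (my own, not from the book): the force of selection against a
# deleterious mutation declines with the age at which it is first expressed,
# assuming constant extrinsic mortality and constant fecundity at all ages.

def selection_weight(age, mortality=0.2, max_age=30):
    """Fraction of expected lifetime reproduction occurring at `age` or later,
    i.e. how much of its bearer's fitness a mutation first expressed at `age`
    can still affect."""
    survival = [(1 - mortality) ** a for a in range(max_age)]  # survivorship curve
    total = sum(survival)        # expected lifetime reproductive output
    late = sum(survival[age:])   # the part at risk from a late-acting mutation
    return late / total

for age in (0, 5, 10, 20):
    print(age, round(selection_weight(age), 4))
```

With 20% extrinsic mortality per time step, a mutation expressed at age 20 affects only about 1% of expected lifetime reproduction and is thus nearly invisible to selection, which is why such mutations can accumulate; an allele trading a small benefit at age 0 against a large cost at age 20 comes out ahead on the same arithmetic, which is Williams’ antagonistic pleiotropy point.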

“The third classic evolutionary theory of aging, not contradictory with the two previous ones [mentioned in the paragraph above], is the disposable soma theory championed by Kirkwood [23] who argued that the germ line is immortal but soma is disposable, hence the name of the theory. This author emphasized that, because life is short in the wild, it is useless to invest more energy in body maintenance mechanisms (immunity, cellular repair processes, and so on) than needed to provide the expected lifespan in the wild, and thus soma is disposable. Investing more would be a waste and this energy would be better used in reproduction. […] it would be very dogmatic to consider that the whole aging process could be explained by relying on these theories, and the mainstream idea is that some parts of the aging process can be explained by these theories, but that they cannot explain all features of aging.”

“According to the previously described theories, aging is the result of either the accumulation of mutations at old age [21] , negative effects at old age of alleles with positive effects at young age [22], or a compromise between maintenance processes and reproduction [23]. Thus, they have a common theoretical background: genes do not program aging as they program development. Aging occurs because there is an age beyond which the probability to survive in the wild is very low. Beyond this age, no maintenance process can be selected during the course of evolution, simply because most of animals living in the wild are already dead. As no maintenance process has been selected, it is inevitable that the organism will be more and more unable to resist the various threats (e.g. diseases, molecular damage) and to remain as efficient as it was at young age (cognition, physical ability, and so on), and this aging process will be observed if animals live in protected environments. Therefore, it is unneeded to make the hypothesis that genes actively promote aging, simply because aging, contrary to development, can occur without the existence of such genes. If we would nevertheless accept that such genes do exist, it would imply that, as for all the other genes, mutants would also exist: some animals could escape the ‘aging program’ and these ever young animals would be potentially immortal. These mutants would thus only die of external causes, such as accidents, and could reproduce when the other animals are already dead. 
Due to this selective advantage, these mutants would become very common in a few generations and most of animals would be potentially immortal: such mutants have never been observed […] Aging is not due to genes actively programming aging but […] to the deleterious effects at old age of some genes that have not been selected to display these effects […] If aging is not programmed by genes, the basic consequence is that claims that studies of nematode mutants with extended longevity could allow ‘to think ageing as a disease that can be cured, or at least postponed’ [26] are not warranted. These mutants live longer because for this species increasing longevity is an appropriate strategy when food is scarce […], but it is an illusion to think that a gene governing aging has been discovered […], and there is no reason to be ‘so excited about the prospect of searching for – and finding – the causes of ageing and maybe even the fountain of youth itself’ [26].”

One more observation which, although unrelated to the above coverage, seemed to me relevant to include in this post as well:

“A remodeling at the cellular level resulting from a different equilibrium between cell compartments, the decline of one leading to overexpression of another, […] contributes to the aging syndrome […] In the skin [for example], the loss of elasticity and increased wrinkling are the result of the rearrangements in the relative proportion of the molecular and cellular constituents. […] There is [also] a causal relationship between changes in vascular compliance and the evolution of the collagen/elastin ratio, and the proportion of endothelial and smooth muscle cells. […] During aging, elastin is degraded and the collagen/elastin ratio increases. As a result, the elastic recoil of the vessel wall decreases […] The results from the investigation of aging of mitotic cells suggest that the evolution of several cell compartments through division constitutes a developmental process where cells are modified with […] consequent repercussions on cell function and cell interactions; this remodeling creates a drift that contributes to aging and senescence of the organism.”

September 6, 2014 Posted by | Biology, Books, Evolutionary biology, Genetics, Medicine | Leave a comment

Evolution and the Levels of Selection

After I’d read the book I googled the author and came across this lecture, which is actually a really nice introduction to many of the ideas also covered in the book:

The stuff covered during the last five minutes or so of the talk is not in the book – there’s no political theory or similar in there – but most of the other stuff is. The book is somewhat more theoretical than the lecture; there’s no stuff about vampire bats in there. It probably also goes without saying that the coverage in the book provides a lot more detail than does the lecture, which only really scratches the surface; the analytical level is quite a bit higher in the book.

The book is in my opinion an example of really good philosophy of science. I liked the book a lot, it’s really nicely written and the author seems to be a very precise and careful writer and thinker. There are pretty much no superfluous pages in the book, which also means that I’ve actually been a bit conflicted about how to blog it, because it seemed impossible to go over all those ideas in just a blog post or two. I suggest you watch the lecture; if you like the lecture and/or want to know more about the ideas presented there, you’ll want to read this book.

The book includes some equations here and there, but nothing you shouldn’t be able to handle. Some really important ideas in the book are not mentioned in the lecture, but this is natural given the format – there’s only so much stuff you can pack into one lecture. For example in any two-level setting including ‘particles’ and ‘collectives’, the question arises of how to even define collective (/’group’) fitness. One might define it as “the average or total fitness of its constituent particles; so the fittest collective is the one that contributes most offspring particles to future generations of particles.” Or one might define it as “the number of offspring collectives it leaves; so the fittest collective is the one that contributes the most offspring collectives to future generations of collectives.” The distinction between these two conceptualizations of collective fitness actually is really important in some analytical contexts, and this is definitely a distinction worth keeping in mind.
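
The distinction can be made concrete with a minimal numeric sketch – my own construction, not from the book, and the particle fitnesses and offspring-collective counts below are made-up numbers:

```python
# Minimal sketch of the two definitions of collective fitness discussed above.
# All numbers are invented for illustration.

collectives = {
    # name: (fitnesses of constituent particles, number of offspring collectives)
    "A": ([5, 5, 5, 5], 1),
    "B": ([2, 2, 2, 2], 3),
}

def fitness1(particle_fitnesses):
    """Definition 1: average fitness of the constituent particles."""
    return sum(particle_fitnesses) / len(particle_fitnesses)

def fitness2(offspring_collectives):
    """Definition 2: number of offspring collectives produced."""
    return offspring_collectives

for name, (particles, kids) in collectives.items():
    print(name, fitness1(particles), fitness2(kids))

# Under definition 1, A is the fitter collective (5 > 2); under definition 2,
# B is (3 > 1) -- the two notions can rank the same collectives in opposite
# orders, which is why the distinction matters analytically.
```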

I may cover the book in more detail later, but for now I’ll limit coverage to the comments above and to the lecture. In my opinion it’s a really nice book, I gave it five stars on goodreads.

August 9, 2014 Posted by | Biology, Books, Evolutionary biology, Genetics, Lectures | 12 Comments

Open Thread

Some random observations and some links:

i. I’ve written about diabetic hypoglycemia before – I even blogged a book on the topic just a few weeks ago. So I’ll keep this short. Here’s the key observation from the post to which I link: “Hypoglycemia causes functional brain failure that is corrected in the vast majority of instances after the plasma glucose concentration is raised”.

Functional brain failure is pretty much what it sounds like – the brain stops working. The point I want to make here is that hypoglycemia can strike pretty much at any point in time, including when I’m doing stuff like blogging or commenting. I sometimes develop hypoglycemia while deeply engrossed in some intellectual activity, like reading, writing or chess, in part because in those situations I have a tendency to forget to listen to my body’s signals – perhaps I forget to eat because this stuff is really much more interesting than food, perhaps I don’t really care that I should probably take a blood test now because I’d really much rather just finish this book chapter/chess game/blogpost/whatever. That happens. When it happens while I’m blogging, what comes out the other end may look funny. I occasionally write stuff that’s incoherent and stupid. Sometimes the explanation is simple: I’m an idiot. Sometimes other things play a role as well.

This is a variable you cannot observe, but which I have a lot of information about. It’s a variable I’d like readers of this blog to at least be aware of.

ii. Maxwell wrote this post, which you should consider reading. I won’t pretend to have good reasons/justifications for disliking people I conceive of as arrogant, but I do want to note that I do this and always have. Arrogance is a trait I dislike immensely.

iii. Over the last few days I’ve been reading Okasha’s great book Evolution and the Levels of Selection (I’ve almost finished it and I expect to blog it tomorrow) – so of course when Zach Weiner came up with this joke yesterday, I laughed. Loudly:



(Click to view full size. The comic of course has almost nothing to do with the content of the book, but I’ll take any excuse I can get for blogging that comic…)

iv. The Feynman Lectures on Physics. Available to you, online, free of charge. Stuff like this sometimes makes me think we live in a very nice world at this point.

But then I read posts/watch videos like this one and I’m reminded that things are, complicated.

v. A few Khan Academy lectures:

August 8, 2014 Posted by | Genetics, History, Khan Academy, Lectures, Medicine, Personal, Physics | 9 Comments

Autism Spectrum Disorder (I)


I recently realized that I had actually never read a textbook like this on this topic. I did get some reading materials back when I got diagnosed so it’s not like I’ve never read anything about the stuff (and there was a lot of verbal information back then as well), but as mentioned I haven’t read a text on the topic. It was actually due to the old reading materials in question that I ended up deciding to read this book; I was looking for some other stuff the other day and I ended up perusing some of these materials (which I hadn’t seen in years), and I figured I should probably go read a book on the topic. Now I am.

The book is sort of okay. There are various complaints one might make, the most important one of which, in the context of me reading the book, is perhaps that children with autism-spectrum disorders grow up and become adults, and adults prefer to read chapters about adult stuff, not stuff about e.g. how to teach the preschooler with the diagnosis social skills. I’ve read roughly half the book, and there’s not in my opinion been enough stuff about the adult setting at this point. Another complaint is that I, as usual, am somewhat mistrustful when guys like these talk about the conclusions to be drawn from some types of empirical evidence; the coverage has in my opinion been of a decidedly mixed quality in terms of the stuff dealing with behavioural interventions, in the sense that on the one hand they at one point quite frankly acknowledge that the evidence is sparse and of poor quality, while on the other hand they later on seem to become very excited about a longitudinal study and start drawing big conclusions from that single study – which would be sort of fine, I like longitudinal studies, if not for the fact that the study was based on 6 (!) individuals. Similar things happen elsewhere in that part of the coverage – potential power issues are never mentioned in the book, at least they have not been so far – you find yourself reading about a ‘seminal’ study on 19 individuals, and then you move on to their comments about how there have been several other studies supporting those findings, including a study looking closer at 9 of the individuals involved in the original study. Sometimes it’s hard to know what to think, especially in the situations where the only people evaluating the interventions are the people who came up with them in the first place – this doesn’t seem like a particularly smart way to conduct business, though in some parts of psychology it seems to be more or less standard practice.

The stuff on behavioural interventions has in my opinion been some of the weakest stuff in the book so far, which is why I have not talked about this stuff in my coverage below. Some of the proposed interventions are incredibly expensive, and there’s probably a good reason why such things are usually not covered by public health care systems; however, the authors do not really seem to consider economic aspects to be all that important, except to the extent that economic factors unfortunately restrict access to all these nice things we could do for these children. They’re aware that parents may not be able to afford the treatment options which are recommended at this point by people who would benefit from these treatment options being more widely used, but they don’t seem to be aware of the existence of things like cost-effectiveness analyses. It’s one thing to argue that there may be developmental gains to be achieved by early childhood interventions (I’ve previously done work in educational economics and I can tell you that it is a common finding in this literature that you can improve outcomes by throwing lots of money and attention at young children – a finding which should perhaps not be super surprising…), it’s quite another thing to argue that the specific interventions contemplated are cost-effective. To be fair, cost-effectiveness is incredibly hard to evaluate when you’re contemplating interventions which may have effects lasting basically the rest of the life of the individual and the intervention is supposed to take place during the first years of a child’s life, but in my opinion you sort of need to at least pretend to try to address this aspect somehow; if you don’t, you’re quite likely to end up in a situation where it seems as if you’re acting as if there’s no (societal) budget constraint, and the authors of this book seem to me to move very close to this position at various points in the coverage.

I knew very little (nothing?) about autism-spectrum disorders before I got the diagnosis – I got diagnosed very late, in my adulthood. It’s sort of funny how you can miss important stuff like this without even knowing, and in a way it relates to a point which came up in my recent post on ethics, specifically the point that ‘bad’ people tend to think they are ‘good’ people, or at least no worse than average. How much do you really know about how good other people are at, say, interpreting nonverbal social signals? Would withdrawal from social interaction make the comparison easier or harder? If you don’t really engage in the normal patterns of non-verbal information exchanges, e.g. eye contact exchanges, during social situations, how are you to know that important information is contained in such exchanges? Individuals seem to make assumptions about these things to a large extent based on what they know themselves (about themselves?), and if you have limitations in these areas it may be difficult to figure out that this is the case; another apt analogy might be children who need glasses early on in their lives – we screen for vision impairment in young children in part because young children don’t know, and may never on their own realize, that the world is not supposed to be blurry, and that you’re actually supposed to be able to see all the letters written down on the blackboard.

I thought I should make one thing clear before moving on to the main text, a point particularly relevant considering the comic which I decided to start out with; which is that incompetence should not be equated with/interpreted as malicious intent. It seems to me that many people conceive of people with autism-spectrum disorders as inconsiderate jerks who don’t have a clue – I’ve seen quite smart people state relatively similar things in the past. I dislike the ‘jerk’-model because I try to be thoughtful and considerate when interacting with others, and when these people think that way I feel that they’re devaluing the work I put into this stuff. One important problem which is sort of hard to figure out how to deal with is that I’m well aware that the more thoughtful and considerate I am (…or is it: ‘try to be?’) during social encounters, the more taxing the social interactions may become, and taxing social interactions lead to social isolation and withdrawal. Coming up with a good equilibrium level of effort is not an easy task, and I think one needs to address aspects like these before making strong judgments about things like the jerkishness of specific behaviours. In a way people with social anxiety have similar concerns which other people also cannot observe (in this case it would be excessive amounts of thinking during social situations about whether they are doing stuff right now that may mean that they’ll get rejected by others, which then leads to oversensitivity to clues of rejection, leading to social avoidance because of perceived rejection). Of course people with autism-spectrum disorders may be anxious as well, as also mentioned in the coverage below. 
The level of self-awareness varies a lot in people with autism-spectrum disorders, but people with relatively high levels of self-awareness may certainly face some constraints and tradeoffs which are not immediately obvious to the outsider and which may actually be assumed by neurotypicals to be absent, given the diagnosis.

The textbook answered one question I’d been thinking about a few times without ever worrying enough about it to actually seek out an answer, which is the question of what the recent diagnostic changes might mean, given that I have a diagnosis which by now has been ‘retired’. It turns out that I was diagnosed with what in the textbook are considered to be the ‘gold-standard tools’, which means that this remark related to the recent diagnostic changes that have taken place seems to answer the question: “The DSM 5 noted that “Individuals with a well-established DSM-IV diagnosis of” “Asperger’s disorder” “should be given the diagnosis of autism spectrum disorder””. I’m not going to ‘ask’ for a ‘new’ diagnosis (/a ‘translation’ of my diagnosis) (and quite aside from what other people like to call this stuff, I like the word ‘eccentric’ a lot better than the word ‘autistic’…), but it’s nice to know which recommendations are being made in this area. Some of the quotes below also relate a bit to these aspects.

I’ve added some quotes from the book below.

“Autism is a developmental neurobiological disorder characterized by severe and pervasive impairments in reciprocal social interaction skills and communication skills (verbal and nonverbal), and by restricted, repetitive, and stereotyped behavior, interests, and activities. […] Autism and autistic stem from the Greek word autos, meaning “self.” The term autism originally referred to a basic disturbance, an extreme withdrawal of oneself from social life, or aloneness. […] The critical point in the scientific history of autism was in 1943, when Leo Kanner published Autistic Disturbances of Affective Contact, a groundbreaking paper that described the symptoms of 11 children presenting similar behaviors that had not been previously recognized. […] Based on Kanner’s terminology, autism was considered for years a psychosis, and child psychiatrists were using “childhood schizophrenia” and “child psychosis” in autism as “interchangeable diagnoses.” […] A parallel line of inquiry to that of Kanner and Eisenberg is represented by the work of Hans Asperger.”

“In Autism and Pervasive Developmental Disorders, Fred Volkmar and Catherine Lord (2004) distinguished important points of differentiation and similarities between Kanner’s and Asperger’s descriptions. […] In concluding their comparison of Kanner’s and Asperger’s descriptions, Volkmar and Lord pondered whether, despite the relevant differences, it was “scientifically and clinically helpful to classify individuals with these traits into separate categories of autism or Asperger’s disorder, or whether it would be better to treat them as parts of a greater continuum.” The utility of the “greater continuum” has led to the category of autism spectrum disorder to be proposed for DSM-5. […] As a result of [various findings] and the lack of reliability in the community in making distinctions among the ASDs [Autism-Spectrum Disorders] [for example: “Variations in clinical severity among ASD cases are not valid indices of differences in pathophysiology or etiology”], the Fifth Edition of the Diagnostic and Statistical Manual (DSM-5) proposes to collapse all of these clinical syndromes into a single diagnosis of “autism spectrum disorder.” Although this revision is appropriate for community diagnosis, and thus the allocation of clinical and support services, research studies will continue to rely on research diagnostic instruments like the Autism Diagnostic Interview (ADI) and the Autism Diagnostic Observation Schedule (ADOS) [these were both part of my work-up, US] to make categorical distinctions between “autism and not autism” and “autism and autism spectrum disorder” (which includes Asperger’s disorder and PDDNOS [Pervasive Developmental Disorder Not Otherwise Specified]). These distinctions have played a vital role in advancing our understanding of the behavioral and neural profile of ASD over the past two decades”

“Recent studies and reports from the Centers for Disease Control […] have shown an increase in the prevalence of children diagnosed with an ASD to one in 110 […] The reported increase is thought to be attributable to several factors. First, there have been changes in diagnostic practices […] Second, there is greater public awareness of ASD and more case-finding […] Finally, there has been a tendency to diagnose many children with intellectual disability as PDD. […] no evidence currently exists to support any association between ASD and a specific environmental exposure. […] Numerous studies have failed to demonstrate a causal relationship between immunizations, particularly thimerosal-containing vaccines, and ASD […] The CDC (2009) reports the median age for a diagnosis of ASD to be between 4.5 and 5.5 years. […] the ASD diagnosis is four times more common in boys than in girls.”

“The essential features of Asperger’s disorder are severe and sustained impairment in social interaction (criterion A); and the development of restricted, repetitive patterns of behavior, interests, and activities (criterion B); which must cause clinically significant impairment in functioning (criterion C). There are no clinically significant delays in language (criterion D) or cognitive development (criterion E).”

“ASD (excluding Asperger’s disorder) has early language and communication impairment. […] almost two thirds of individuals with ASD also have ID [intellectual disability] […] 15%–20% of cases of ASD are now linked to genetic or chromosomal abnormalities […] Fragile X Syndrome (FXS) [is] the most common identifiable cause of ASD and the most common inheritable cause of ID. […] Thirty percent of individuals with FXS demonstrate characteristics of ASD.” [In some other conditions penetrance is even higher – examples that could be mentioned are 15q duplication and Timothy Syndrome, but prevalence is lower in these cases and especially in the latter case some might argue that the autism is the least of that child’s problems..]

“Challenging behaviors [in individuals with ASD] may reflect pain that is not communicated verbally […] Challenging behaviors may [also] reflect the child’s difficulty with communication, changes, new places, new situations, new experiences, new sounds, new smells, and new people” [I wonder if you can spot a pattern here in terms of what these children (/people) don’t like? I think an important distinction here is to be made between curiosity and the desire to try out new things. I’m often, hesitant, about trying out new things, yet I’m also quite curious about a lot of things. Be careful which categories you apply here and how they may impact your thinking… In a related vein:] “Insistence on sameness and difficulty with change are common symptoms of an ASD. These behaviors should not typically be considered a behavior done to exert control over others.”

“Psychiatric comorbidity is now acknowledged as quite common in ASD [and] psychiatric comorbidity increases the level of impairment […] There is a handful of questionnaires [aiming at spotting psychiatric comorbidities] that have been developed specifically for use in developmentally disordered or ASD populations. […] none of the measures has the level of research support possessed by questionnaires used in other branches of psychiatry. The vast majority of these instruments have just one study behind their development, or have been studied only by the developer of the instrument. […] one of the main challenges in diagnosing psychiatric disorders in individuals with ASD is the possibility of different presenting symptoms and difficulty in differentiating impairment related to the underlying ASD from impairment due to a separate condition. […] While we do not want to miss true comorbid diagnoses, over-diagnosing comorbidity can be equally harmful. […] Mood disorders, such as depression and bipolar disorder, in ASD have recently begun to receive a great deal of attention […] there are many potential psychosocial stressors that could be possible triggers. For example, higher-functioning individuals who are aware of their deficits and badly desire friends, but lack success in this area, are at particular risk. […] Although there is little research on emotion regulation in ASD, there is clear evidence that emotion regulation is highly variable and often problematic in this population, regardless of psychiatric comorbidity […] Therefore, particularly for mood disorders, it is imperative to consider baseline functioning and not over-diagnose mood disorders when the concern may be more temperamental in nature.”

“Anxiety is considered by some to be the most common comorbid psychiatric concern in ASD […]. The DSM-IV-TR notes that individuals with ASD might have unusual fear reactions, and it is also not uncommon for there to be a general tendency toward anxiety for many individuals with ASD. […] There are many aspects of having an ASD that may lead to this increased risk for anxiety, to the degree that some consider anxiety and the social impairment in ASD to have a bidirectional relationship […] An increase in self-awareness is considered a risk factor for higher anxiety; therefore, anxiety is typically thought of as more common among individuals with ASD who have higher intellectual abilities, and older children, adolescents, and adults.”

“autism may be conceptualized as a disorder of complex information processing resulting from disordered development of the connectivity of cortical systems (e.g., failure of cortical systems specialization) […] approximately 15%–20% of infants with an older sibling diagnosed with autism will ultimately be diagnosable with ASD by three to four years of age. […] [Findings from longitudinal sibling studies] do not support the view that autism is primarily a social-communicative disorder and instead suggest that autism disrupts multiple aspects of development rather simultaneously. […] When both elementary and higher-order abilities in many domains are assessed, it becomes evident that deficits exist in several domains not considered to be integral parts of the autism syndrome, including aspects of the sensory-perceptual, motor, and memory domains. Furthermore, there are enhanced skills and impaired abilities within the same domains as deficits (e.g., memory, language, abstraction). […] Causal explanations for ASD must account for the comprehensive pattern of both deficits and intact aspects of the disorder both within and across multiple domains. […] There is no single primary deficit or triad of deficits, brain regions, or neural systems causing autism. […] Rather, autism broadly affects many abilities at the same time and systematically from its earliest presentation and throughout life. […] This pattern [can] be characterized overall as reflecting a disorder of complex or integrative information processing, which results from altered development of cerebral cortical connectivity in ASD. […] Just as the infant sibling studies have clearly demonstrated, studies of children and adults with autism have also demonstrated a broad but selective profile of deficits and intact or enhanced abilities that all reflect a relationship to information-processing demands. 
[…] it is likely that genes affecting signaling pathways that regulate neuronal organization are strongly implicated in the etiology of autism.”

“ASD is now conceptualized as a developmental neurobiological disorder affecting elaboration of the forebrain circuitry that underlies the abilities most unique to human beings. […] Wiring the brain requires that neurons proliferate, acquire the correct identities, migrate to the appropriate locations, extend axons, and make guidance decisions with a high degree of spatial and temporal fidelity. Converging evidence indicates that more than one of these processes may be altered in various combinations to produce the heterogeneous phenotypes observed in ASD. […] Studies examining head circumference (HC) and brain volume (BV) in individuals with ASD have demonstrated altered brain growth trajectories across the lifespan. […]
• Up to 70% of infants with ASD exhibit abnormally accelerated brain growth in the first year of life. Approximately 20% to 25% of infants in this subset actually meet formal criteria for macrocephaly (i.e., HC of 2.0 standard deviations above the mean) in the first year.
• BV is significantly larger by two to four years of life, and some children meet criteria for megalencephaly (i.e., BV 2.5 S.D. above mean).
• The first two years of life are usually a period of rapid brain growth in infants as neurons undergo significant postnatal growth in cell size and elaboration (actually overproduction) of axons, synapses, and dendrites. It is possible that this process is exaggerated somehow in at least a subset of ASD.
• Whatever the neurobiological basis, abnormal growth rates in ASD tend to decline significantly after the initial acceleration, causing an apparent “normalization” of BV by adolescence or early adulthood. […]
At the time of maximal brain growth in very early childhood, cerebral gray matter (GM) and white matter (WM) are both increased […] The frontal cortical GM and WM show the most enlargement, followed by the temporal lobe GM and WM and the parietal GM.”
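As an aside, the macrocephaly cutoff quoted above (head circumference at least 2 standard deviations above the mean) makes it easy to quantify how unusual the ASD figures are: under a normal distribution only ~2% of typical infants would exceed that cutoff, so the quoted 20–25% corresponds to roughly a tenfold enrichment. A quick sketch of that standard normal-tail arithmetic (my own illustration, not from the book):

```python
from math import erf, sqrt

def normal_tail_above(z: float) -> float:
    """P(Z > z) for a standard normal variable."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# Macrocephaly cutoff quoted above: head circumference >= 2 SD above the mean.
baseline = normal_tail_above(2.0)  # expected fraction in a typical population
observed = 0.225                   # midpoint of the quoted 20-25% in the ASD subset

print(f"expected under normality: {baseline:.1%}")  # ~2.3%
print(f"quoted in ASD subset:     {observed:.0%}")
print(f"enrichment factor:        {observed / baseline:.0f}x")
```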

“Thus far, [fMRI and fcMRI] studies have identified underconnectivity with the frontal cortex as a specific characteristic of the altered connectivity in autism, and this characteristic is present across the same wide range of domains of complex information processing that are affected in the disorder, including social, language, executive, and motor processes. […] measures of functional connectivity between specific areas have been shown to reliably predict the degree of impairment in specific domains among those diagnosed with autism. For instance, individuals with poorer social functioning measured by the ADI-R show lower functional connectivity between frontal and parietal cortices. These findings gave rise to the underconnectivity theory in autism, which now has sufficient support that it is accepted as a central feature of the pathophysiology of autism […] Results from these studies are consistent with the notion that autism is a disorder of distributed neural systems (e.g., the connections between structures rather than the structures themselves). […] Diffusion-weighted imaging measures the direction and speed of microscopic water movement in the brain, allowing inferences about the microstructure of the tissue that constrains such movement. These studies have consistently found reduced structural integrity of white matter in adults with ASD, indicating reduced anatomical connectivity […] like measures of functional connectivity, measures of anatomical connectivity derived from diffusion imaging have been shown to reliably predict symptom severity among individuals with autism.”

“In thinking about the genetic basis of autism, it is important to contrast syndromic (or complex) and non-syndromic (or idiopathic/essential) ASD. […] Syndromic ASD includes identifiable autism syndromes with known genetic causes, such as tuberous sclerosis complex, Fragile-X syndrome, Rett syndrome, and Smith-Magenis syndrome.
• Syndromic ASD is associated with a relatively higher propensity for dysmorphic features (including anatomical brain abnormalities), intellectual disability (ID), seizures, and female sex (sex ratios are almost equal).
• Syndromic ASD is also associated with a higher frequency of chromosomal abnormalities in general, many of which have been identified […]. However, it is not yet clear for many of these syndromes which features are typical of autism and which are unique.
Non-syndromic ASD is also called idiopathic autism and consists of cases with and without identifiable micro-deletions or duplications to the DNA. […] individuals with idiopathic ASD are more likely to be male, with sex ratios approximately 1:4 (F:M) but approaching 1:7 in milder cases.”

“Overall, approximately 10% of children being evaluated for ASD are found to have an identified medical condition with a known genetic lesion such as Fragile X or tuberous sclerosis. An additional 10% or more have an identifiable chromosomal structural abnormality or copy number variation associated with ASD. […] Recent genome-wide scans using microarray technology have demonstrated a substantial role for small chromosomal deletions or duplications (i.e., copy number variation or CNV) in the etiology of ASD. […] There is [however still] considerable debate concerning the genetic architecture underlying […] the majority of idiopathic autism. Arguments can be made for either the effects of single, but rare Mendelian causes (for which documented CNVs are presumably the tip of the iceberg) or the interaction of numerous common, but low-risk alleles. Genetic linkage and association studies have been traditionally employed to address the latter model, but have failed to consistently identify susceptibility loci.” [An important point I should perhaps make before finishing this post is that if the incidence/prevalence of a condition is increasing fast in a population, which seems to be the case here, such an increase is in general considered unlikely to be the result of genetic changes at the population level alone – that type of pattern is usually indicative of environmental factors playing an important role. It may well be that the ‘average cause’ is different from the ‘marginal cause’, and it may be a good idea to be careful about which tools one uses to explain base rates and growth rates.
It might be argued that increased assortative mating among nerds in Silicon Valley has increased incidence locally (I’m quite sure I’ve seen this exact argument before…) and I’m not saying this may not be the case, but if close to one percent of the American population get diagnosed, what goes on in Silicon Valley probably isn’t super relevant one way or the other – only roughly 1% of the population live in that area altogether. Even if you were to argue that a similar process is going on everywhere else in the country, it sort of strains belief that ‘something else’ is not going on as well].
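A back-of-the-envelope calculation illustrates the Silicon Valley point: a region holding ~1% of the population cannot move the national prevalence much, no matter what happens locally. The numbers below are purely illustrative, not data from any of the sources quoted here:

```python
# Back-of-the-envelope check: how much can a local increase in one
# region move the national prevalence?
national_prevalence = 0.01  # ~1% of US children diagnosed with ASD
region_pop_share = 0.01     # region holding ~1% of the population

for local_multiplier in (2, 5, 10):
    # prevalence elsewhere stays at baseline; the region's is multiplied
    blended = (1 - region_pop_share) * national_prevalence \
              + region_pop_share * local_multiplier * national_prevalence
    print(f"{local_multiplier}x local rate -> national prevalence {blended:.4%}")
# Even a 10x local rate only lifts the national figure from 1.00% to ~1.09%.
```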

July 25, 2014 Posted by | autism, Books, Epidemiology, Genetics, Medicine, Neurology, Personal, Psychology | Leave a comment

Sexually Transmitted Diseases (4th edition) (IV)

Here’s a link to a previous post about the book, which includes links to the first two posts I wrote about it.

I was not super impressed with the coverage in part three, although there was a lot of interesting stuff there as well. However, the level of coverage and amount of detail included is high in parts four and five. There were a lot of details which evaded me in some of the recent chapters, but I also learned a great deal. There’s quite a lot of coverage of various ‘related topics’ (microbiology, biochemistry, immunology, oncology) in the parts of the book I’ve read recently, and like many other medical texts this book will help you realize that many things you had thought of as unrelated actually are connected in various interesting ways. It’s worth noting that given how many aspects of these things the book covers (again, 2000+ pages…) you actually get to know a lot of stuff about a lot of other things besides just ‘classic STDs’. It turns out that in Jamaica and Trinidad, over 70% of all lymphoid malignancies are attributable to exposure to a specific retrovirus most people probably haven’t heard about, HTLV-1 (prevalence is also high in other parts of the world, e.g. southern Japan). I didn’t expect to learn this from a book about sexually transmitted diseases, but there we are.

I hope that I’ve picked out stuff from this part of the coverage which is also intelligible to people who haven’t read the 95+% of those chapters which I didn’t quote (I always like feedback on such aspects).

“At the simplest level, infection of a cell by a virus or bacterium may lead to cell death. In the case of viruses, specific disease syndromes may be caused by destruction of certain subsets of cells that express essential differentiated functions. A classic example of this is the development of AIDS following HIV-1 mediated depletion of the CD4 lymphocyte population. Virus-induced cell death may result from one or more specific mechanisms. Many viruses express specific proteins that have as their major function the induction of a blockade in normal host cell metabolism (cellular translation and transcription) such that the metabolic machinery of the cell is subverted preferentially to viral replication. For obvious reasons, the expression of such proteins is usually highly toxic to the cell. Cellular destruction or “direct cytopathic effect” is considered responsible for the disease manifestations of many lytic viruses, including, for example, HSV and poliovirus. On the other hand, many cells may respond to the presence of an invading virus by the induction of apoptosis and the initiation of programmed cell death. Some viruses appear to have evolved mechanisms to prevent or delay apoptosis, thus potentially prolonging productive infection and maximizing replication. For example, HSV-1 infection induces apoptosis at multiple metabolic checkpoints but has also evolved mechanisms to block apoptosis at each point.28 Importantly, the inhibition of apoptosis by HSV-1 also prevents apoptosis induced by virus-specific cytotoxic T lymphocytes, thereby conferring on the infected cell a certain measure of resistance to the host’s cell-mediated immune responses.29

However, many viruses are not intrinsically cytopathic. HBV is a prime example, as many infected HBsAg carriers are asymptomatic and without overt evidence of active liver disease. Despite this, such carriers may be very infectious […] The presence or absence of liver disease is largely determined by the T-cell response to the virus.30 Thus, chronic hepatitis B results from a relatively vigorous but unsuccessful attempt on the part of the host to eliminate the infection. […] chronic liver inflammation and the occurrence of hepatocellular carcinoma reflect the immune response to the virus, rather than specific virus effects. Similar indirect mechanisms may contribute to the progressive immune destruction of infected CD4-positive lymphocytes in patients with HIV-1 infection.

Some bacterial disease processes may also be due largely to immunopathologic responses. For instance, there is substantial evidence that complications of genital chlamydia infections (salpingitis, Reiter’s syndrome) are correlated with and may be owing to stimulation of antibodies against a heat-shock protein (hsp60).33,34 […] In contrast, gonococcal tissue damage appears to be caused by the direct toxic effects of lipid A and peptidoglycan fragments”

“Some viruses are capable of altering differentiated cellular functions, resulting in the production of disease by mechanisms that do not exist among bacteria. A prime example is the altered cellular growth that follows infections by molluscum contagiosum virus (MCV) […]. A more extreme example is the proliferation of epithelial cells that is induced by infection with HPVs. HPV-related epithelial malignancies and cellular transformation are related to the expression of two specific HPV proteins, the E6 and E7 oncoproteins, by high-risk HPV subtypes.22 These proteins interact with p53 and pRb, both promoting cellular proliferation and cell survival. Oncogenic transformation is usually associated with high-level expression of E7 from integrated HPV DNA. The Kaposi’s sarcoma-associated herpes virus (KSHV) also expresses a number of proteins that mimic important host regulators of cellular proliferation and survival […] Expression of these proteins may result in deregulation of cell growth, with changes in the cellular morphology and/or acquisition of the ability of the cells to form colonies in soft agar, changes that are indicative of transformation.

On the other hand, hepatocellular cancers occurring in the context of chronic viral hepatitis are likely to have an alternative explanation. Although it is possible that integration of HBV DNA may be responsible for altered cellular growth control in some hepatitis B-associated cases, liver cancer in this setting may be primarily immunopathogenic.30,32 Chronic inflammation accompanied by oxidative stress and cellular DNA damage are likely to pla[y] important roles.”

“The human immunodeficiency viruses (HIV-1 and HIV-2) and the simian immunodeficiency viruses (SIV) (with a subscript indicating the species of origin) are members of the lentivirus genus of the Retroviridae family, commonly called retroviruses. […] Retroviruses are divided into two subfamilies: Orthoretrovirinae and Spumaretrovirinae […] The spumaretroviruses have distinctive features of their replication cycle that require this more distant classification. They have been isolated from primates, but not humans, and are not associated with any known disease. The orthoretroviruses are divided into six genera and represent viruses that infect snakes, fish, birds, and mammals. […] Human infections occur with viruses from two of these genera. The Deltaretrovirus genus includes human T-cell leukemia virus type I (HTLV-I), the causative agent of adult T-cell leukemia,5, 6, 7 and human T-cell leukemia virus type II (HTLV-II), which is not known to be associated with any disease syndrome. HTLV-I is also associated with another syndrome called HTLV-associated myelopathy (HAM). HTLV-I and HTLV-II are related to viruses found in primates and more distantly related to bovine leukemia virus. The lentivirus genus includes HIV-18 and HIV-29 as well as viruses found in a variety of mammals ranging from primates to sheep. Viruses within these different genera vary widely in the diseases they cause and the mechanisms of disease induction, in contrast to the many common features of their replication cycle. […] In its DNA form the viral genome is inserted into the host genome […]. This step in the virus life cycle has important implications for several features of virus-host interactions. For example, viral DNA that integrates into the genome of a cell but is not expressed becomes silently carried in the descendants of that cell.
When this happens in a germline cell, or in the cell of an early embryo that becomes a germline cell, this copy of viral DNA becomes a linked physical part of the host genome, is present in every cell in the body, and is passed on to subsequent generations. Such a genetic element is called an endogenous retrovirus. Most of the elements that become fixed are defective, as there is probably a strong selective pressure against elements that can activate to produce infectious virus. Thus, they represent an archive within the host genome of previous waves of retroviral infections. In fact, the human genome carries a record of retroviral infections over the last 40 million years of primate evolution. These are viruses that we do not recognize as active in the human population at present but are represented by 110,000 genomic inserts of gammaretroviruses, 10,000 inserts of betaretroviruses, and 80,000 inserts of a genus that may be distantly related to spumaretroviruses or may represent an uncharacterized lineage.10 Most of these elements contain large deletions; however, if these deletions had been retained, our genomes would be 40% endogenous retroviruses by mass and outnumber our normal genes 7 to 1.”

“Most histories of retroviruses start with the dramatic discovery by Peyton Rous in 1911 that a virus, Rous sarcoma virus (RSV), could cause cancer. […] The isolation of other tumor-causing retroviruses followed and in time it became apparent that there were two broad classes of agents: one class of viruses caused cancer after a long latency period […], while the other class caused tumors that appeared rapidly […]. We now know that the acutely transforming retroviruses carry a cell-derived oncogene that is responsible for the transforming activity,14 while the slowly transforming retroviruses act by the chance integration of viral DNA near these cellular oncogenes in the host genome to induce their expression and promote tumor formation.15,16 Importantly, many of these same genes can be mutated or overexpressed in human cancers, and the proteins they encode are now the targets of new generations of specific antitumor therapies […] One can confidently surmise that the remnants of the beta- and gammaretroviruses littered in our genomes had such oncogenic effects when they were active. Ironically, for the active human retroviruses, HTLV-I causes tumors by a different but still poorly understood mechanism, and HIV is involved in tumor formation only indirectly through immune suppression. […] There are two fundamental differences between lentiviruses and most other retroviruses: Lentiviruses do not cause cancer [directly…] and they establish chronic infections that result in a long incubation period followed by a chronic symptomatic disease. The “slow” (lenti is Latin for slow), chronic nature of these viral infections was first appreciated for a disease of sheep called maedi-visna (maedi = labored breathing, visna = paralysis and wasting).”

“Using the current sequence diversity in the HIV-1 population, the 1959 sequence, and estimates of the rate of sequence change per year, it has been possible to suggest that the cross-species transmission event that gave rise to the M group of HIV-1 occurred early in the twentieth century.38 If we accept that SIVcpz [HIV in chimps…] has entered the human population three times in the last century (the three groups N, O, and M), then it follows that this virus likely has been transmitted to humans any number of times over the last 10,000 years. Only in the last century have the human institutions of large cities and efficient transportation corridors given these transmission events access to a human environment that could support an epidemic.”
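The dating argument in the quote is a molecular-clock estimate. A highly simplified sketch of the logic, with made-up parameter values of roughly the right order of magnitude for HIV-1 (not the actual values used in the cited study):

```python
# Simplified molecular-clock estimate: if two lineages differ at a fraction d
# of sites and each lineage accumulates substitutions at rate mu per site per
# year, the time back to their common ancestor is roughly t = d / (2 * mu).
def divergence_time(pairwise_distance: float, rate_per_site_per_year: float) -> float:
    return pairwise_distance / (2.0 * rate_per_site_per_year)

# Hypothetical numbers of roughly the right order of magnitude for HIV-1:
d = 0.15    # ~15% pairwise nucleotide divergence within group M
mu = 0.001  # ~1e-3 substitutions/site/year (HIV evolves very fast)
print(f"estimated TMRCA: ~{divergence_time(d, mu):.0f} years before sampling")
# With sequences sampled around the year 2000, ~75 years puts the M-group
# ancestor in the early twentieth century, consistent with the quoted estimate.
```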

“Over 100 herpesviruses have been identified, with at least eight infecting humans [I had no idea there were that many of them, and I had no clue some of the ones mentioned were actually herpes viruses…]. All human herpesviruses are well adapted to their natural host, being endemic in all human populations studied and carried by a significant fraction of persons in each population. The human herpesviruses include herpes simplex viruses types 1 and 2 (HSV-1 and HSV-2), varicella-zoster virus (VZV), Epstein-Barr virus (EBV), cytomegalovirus (CMV), human herpesvirus 6 (HHV-6), human herpesvirus 7 (HHV-7), and human herpesvirus 8 (HHV-8) or Kaposi’s sarcoma (KS)-associated herpesvirus. Disease caused by human herpesviruses tends to be relatively mild and self-limited in immunocompetent persons, although severe and quite unusual disease can be seen with immunosuppression. […] all herpesviruses share biologic traits. These include expression of a large number of viral enzymes, assembly of the nucleocapsid in the cell nucleus, cytopathic effects on the cell during productive infection, and ability to establish latent infections in an infected host.”

“Vaccine development poses great challenges in the case of herpesviruses because recovery from natural disease is not associated with elimination of virus and does not always protect against another episode of disease.
Live-attenuated, killed, and recombinant subunit herpesvirus vaccines have all been studied. Whole-virus vaccines have the advantage of exposing the immune system to all viral antigens. Live-attenuated vaccines have tended to produce longer-lasting immunity than killed preparations. However, live-attenuated herpesvirus vaccines may be capable of establishing latent infections. The risks are not clear and there is concern that vaccine recipients who subsequently become immunosuppressed may develop disease caused by reactivated virus. Two avirulent HSV strains have been shown to generate lethal recombinants in mice.127 Thus, recombination between an attenuated vaccine strain and a superinfecting wild-type strain could occur. Because several herpesviruses have been associated with malignancies in humans, the long-term safety of any live-attenuated vaccine needs careful study.”

“In the most recent data from NHANES, the prevalence of HSV-1 appears to have fallen slightly from 62% in the years 1988-1994 to 57.7% in the years 1999-2004 in the general population.30 In Western Europe, the prevalence of HSV-1 infection in young adults remains 10-20% higher than that in the United States.31 In STD clinics in the United States, about 60% of attendees have HSV-1 antibodies. In Asia and Africa, HSV-1 infection remains almost universal […] The cumulative lifetime incidence of HSV-2 reaches 25% in white women, 20% in white men, 80% in African American women and 60% in African American men […] Transmission of HSV between sexual partners has been addressed most often in prospective studies of serologically discordant couples, i.e., in couples in whom one partner has and the other does not have HSV-2. Longitudinal studies of such couples have shown that the transmission rate varies from 3% to 12% per year. […] Unlike other STDs, persons usually acquire genital HSV-1 and genital HSV-2 in the context of a steady rather than casual relationship.91 Women have higher rates of acquisition than men; in one study the attack rate among seronegative women approached 30% per year.88 […] Subclinical or asymptomatic viral shedding is an important aspect of the clinical and epidemiologic understanding of genital herpes, as most episodes of sexual and vertical transmission appear to occur during such shedding. […] the risk of HSV transmission is likely similar regardless of the presence of lesions, supporting the epidemiologic observation that most HSV is acquired from asymptomatic partners. […] Subclinical HSV reactivation is highest in the first year after acquisition of infection. During this time period, HSV can be detected from genital sites by PCR on a mean of 25-30% of days […]. This is about 1.5 times higher than patients sampled later in their disease course.”
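To get a feel for what the quoted 3–12% annual transmission rates imply over the course of a relationship, one can compound them under the simplifying (and surely wrong in detail) assumption of a constant, independent annual risk — my own illustration, not a calculation from the book:

```python
# If the annual transmission risk in a discordant couple is r and roughly
# constant, the cumulative risk over n years is 1 - (1 - r)**n. Real risk
# varies with shedding, condom use, and suppressive therapy.
def cumulative_risk(annual_risk: float, years: int) -> float:
    return 1.0 - (1.0 - annual_risk) ** years

for r in (0.03, 0.12):  # the 3-12%/year range quoted above
    print(f"annual risk {r:.0%}: 5-year risk {cumulative_risk(r, 5):.1%}, "
          f"10-year risk {cumulative_risk(r, 10):.1%}")
```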

“The major morbidity of recurrent genital herpes is its frequent reactivation rate. Most likely, all HSV-2 seropositive persons reactivate HSV-2 in the genital region. Moreover, because of the extensive area innervated by the sacral nerve root ganglia, reactivation of HSV-2 is widespread over a large anatomic area.

A prospective study of 457 patients with documented first-episode genital herpes infection has shown that 90% of patients with genital HSV-2 developed recurrences in the first 12 months of infection.93 The median recurrence rate was 0.33 recurrences/month. Most patients experienced multiple clinical reactivations. After primary HSV-2 infection, 38% of patients had at least 6 recurrences and 20% had more than 10 recurrences in the first year of infection. Men had slightly more frequent recurrences than women, median 5 per year compared with 4 recurrences per year [it’s important to note that the recurrence rate is substantial even in patients on suppressive therapy: “About 25% of persons on suppressive therapy will develop a breakthrough recurrence each 3-month period”] […] Recently, long-term cohort studies indicate that the frequency of symptomatic recurrences gradually decreases over time. In the initial years of infection, reported recurrence rate decreases by a median of 1 recurrence per year. […] subclinical shedding episodes account for one-third to one-half of the total episodes of HSV reactivation as measured by viral isolation and for 50-75% of reactivations as measured by PCR. […] Rather than regarding HSV-2 as a predominantly silent infection with occasional clinical outbreaks with marked viral shedding, HSV is a dynamic infection, with very frequent reactivation, mostly subclinical, and active effort on the part of the immune system of the host is required to control mucosal viral replication. 
[…] Immunocompromised patients have frequent and prolonged mucocutaneous HSV infections.226, 227, 228 Over 70% of renal and bone marrow transplant recipients who have serologic evidence of HSV infection reactivate HSV infection clinically within the first month after transplantation […] Recurrent genital herpes in immunosuppressed patients often results in the development of large numbers of vesicles which coalesce into extensive deep, often necrotic, ulcerative lesions.228 […] about 70% of HIV-infected persons in the developed world and 95% in the developing world have HSV-2 antibody. […] The epidemiologic interactions between HIV and HSV-2 have led to calculation of potential population-level impact of these intersecting epidemics. […] The population attributable risk will depend on the prevalence of HSV-2 in the population at risk; at 50% HSV-2 prevalence, common among MSM [Men who have Sex with Men, US], or African Americans in the United States, or general population in sub-Saharan Africa, 35% of HIV infections will be attributable to HSV-2. […] the risk of transmitting HSV [from the mother] to the neonate is 30-50% in women with newly acquired HSV [during the last part of the pregnancy] versus <1% in women with established infection.” [This is relevant not only because herpes sucks, but also because it sucks even more when a newborn child gets it].
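The population attributable risk mentioned in the quote can be computed with Levin's standard formula, PAR = p(RR − 1)/(1 + p(RR − 1)), where p is the exposure prevalence and RR the relative risk. The relative risk value below is my own assumption (roughly the ~2-fold increase in HIV acquisition risk commonly reported for HSV-2 seropositives), not a number taken from the book:

```python
# Levin's population attributable risk: PAR = p*(RR-1) / (1 + p*(RR-1)),
# where p is exposure prevalence and RR the relative risk.
def attributable_fraction(prevalence: float, relative_risk: float) -> float:
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

rr = 2.1  # assumed relative risk of HIV acquisition given HSV-2 seropositivity
for p in (0.2, 0.5, 0.7):
    print(f"HSV-2 prevalence {p:.0%}: PAR = {attributable_fraction(p, rr):.0%}")
# At ~50% prevalence this lands near the 35% figure quoted above.
```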

“More than 50% of individuals in most populations throughout the world demonstrate serological evidence of prior CMV infection.6 The coevolution with and adaptation to its human host over millions of years may account for the observation that in most cases, CMV infection causes few if any symptoms.5 However, in immunocompromised individuals, primary infection or reactivation of latent virus can be life-threatening. As well, congenital infections are common and can result in serious lifelong sequelae. […] Although CMV does not typically come to medical attention as a result of genital tract lesions or disease, it can be transmitted sexually and has important consequences for the sexually active, child-bearing population. […] As with many viruses that cause chronic infection, CMV seems to have coevolved with humans to a balanced state in which the virus persists but generally causes little clinical illness. The host’s innate and adaptive immune responses are usually successful at limiting CMV infection as is evident by the clear association of immune system dysfunction with CMV disease. In the absence of prophylactic antiviral treatment, CMV often reactivates in seropositive individuals who undergo hematopoietic stem cell transplantation (HSCT).41 Immunosuppression resulting from drugs used to treat cancer and autoimmune disorders, and from impaired T-cell function that occurs with advanced AIDS, is also associated with reactivation of CMV. 
[…] The development of primary CMV infection has been noted in up to 79% of liver transplants and 58% of kidney or heart transplants in which the donor is seropositive and the recipient is seronegative.134,135 In the setting of HSCT, several studies have documented that CMV seropositivity of the recipient results in significantly increased overall posttransplant mortality compared to CMV seronegative recipients with a seronegative donor.136 When the recipient is CMV seronegative, overall mortality is increased when the donor is seropositive compared to the situation where the donor is seronegative.137 […] the transplant recipient is at particularly high risk of CMV reactivation during periods of potent immunosuppression that accompany graft rejection or graft-versus-host disease.”

July 7, 2014 Posted by | Books, Epidemiology, Evolutionary biology, Genetics, Immunology, Infectious disease, Medicine, Microbiology | Leave a comment

A few lectures

I had trouble following this, but I thought it was an interesting lecture anyway. The sound drops out a couple of times for very brief periods (a few seconds, but still irritating), and a few other times it’s a bit difficult to tell what he’s saying because he speaks very fast. The guy who controls the camera occasionally forgets to follow him around, which is annoying. But aside from these small problems it’s a good lecture. Here are some links I found helpful along the way (some were more helpful than others…) while watching the lecture: Duality (projective geometry), Euler characteristic, Big O notation, configuration (geometry), cubic curve, algebraic geometry of projective spaces, Cayley–Bacharach theorem.

I liked most of the lecture, but I agree with Razib Khan’s assessment that: “there may not have been a gene which made humanity, but a subtle complex of numerous genetic and cultural changes which transitioned at a critical point”. Based on his comments towards the end of the lecture, it seems that Pääbo thinks along different lines. It seems to me that the story about the origin and evolution of culture(s) is complex and multidimensional, and to tell the story of how humans got from flint axes to airplanes you need a lot more than to identify a few SNPs. I’d be very surprised if we can ‘narrow it down’ as much as Pääbo seems to assume we might be able to.

This lecture is much less technical than the first two – it’s a rather light and data-poor lecture, but I did find it worth watching.

April 22, 2014 Posted by | Genetics, History, Lectures, Mathematics, Medicine | Leave a comment