How Species Interact

There are multiple reasons why I have not covered Arditi and Ginzburg’s book before, but none of them are related to the quality of the book’s coverage. It’s a really nice book. However, the coverage is somewhat technical and model-focused, which makes it harder to blog than other kinds of books. Also, the version of the book I read was a hardcover ‘paper book’ version, and ‘paper books’ take a lot more work for me to cover than do e-books.

I should probably get it out of the way here at the start of the post that if you’re interested in ecology, predator-prey dynamics, etc., this book is a book you would be well advised to read; or, if you don’t read the book, you should at least familiarize yourself with the ideas therein, e.g. through having a look at some of Arditi & Ginzburg’s articles on these topics. I should however note that I don’t actually think skipping the book and having a look at some articles instead will necessarily be a labour-saving strategy; the book is not particularly long and it’s to the point, so although it’s not a particularly easy read, their case for ratio dependence is actually somewhat easy to follow – if you take the effort – in the sense that how the different related ideas and observations are linked is quite likely better expounded upon in the book than in their articles. They presumably wrote the book precisely in order to provide a concise yet coherent overview.

I have had some trouble figuring out how to cover this book, and I’m still not quite sure what might be/have been the best approach; when covering technical books I’ll often skip a lot of detail and math and try to stick to what might be termed ‘the main ideas’ when quoting from such books, but there’s a clear limit as to how many of the technical details included in a book like this it is possible to skip if you still want to actually talk about the stuff covered in the work, and this sometimes makes blogging such books awkward. These authors spend a lot of effort talking about how different ecological models work and which sort of conclusions these different models may lead to in different contexts, and this kind of stuff is a very big part of the book. I’m not sure if you strictly need to have read an ecology textbook or two before you read this one in order to be able to follow the coverage, but I know that I personally derived some benefit from having read Gurney & Nisbet’s ecology text in the past and I did look up stuff in that book a few times along the way, e.g. when reminding myself what a Holling type 2 functional response is and how models with such a functional response pattern behave. ‘In theory’ I assume one might argue that you could theoretically look up all the relevant concepts along the way without any background knowledge of ecology – assuming you have a decent understanding of basic calculus/differential equations, linear algebra, equilibrium dynamics, etc. (…systems analysis?
It’s hard for me to know and outline exactly which sources I’ve read in the past which helped make this book easier to read than it otherwise would have been, but suffice it to say that if you look at the page count and think that this will be a quick/easy read, it will be that only if you’ve read more than a few books on ‘related topics’, broadly defined, in the past), but I wouldn’t advise reading the book if all you know is high school math – the book will be incomprehensible to you, and you won’t make it. I ended up concluding that it would simply be too much work to try to make this post ‘easy’ to read for people who are unfamiliar with these topics and have not read the book, so although I’ve hardly gone out of my way to make the coverage hard to follow, the blog coverage that is to follow is mainly for my own benefit.

First a few relevant links, then some quotes and comments.

Lotka–Volterra equations.
Ecosystem model.
Arditi–Ginzburg equations. (Yep, these equations are named after the authors of this book).
Nicholson–Bailey model.
Functional response.
Monod equation.
Rosenzweig-MacArthur predator-prey model.
Trophic cascade.
Underestimation of mutual interference of predators.
Coupling in predator-prey dynamics: Ratio Dependence.
Michaelis–Menten kinetics.
Trophic level.
Advection–diffusion equation.
Paradox of enrichment. [Two quotes from the book: “actual systems do not behave as Rosenzweig’s model predicts” + “When ecologists have looked for evidence of the paradox of enrichment in natural and laboratory systems, they often find none and typically present arguments about why it was not observed”]
Predator interference emerging from trophotaxis in predator–prey systems: An individual-based approach.
Directed movement of predators and the emergence of density dependence in predator-prey models.

“Ratio-dependent predation is now covered in major textbooks as an alternative to the standard prey-dependent view […]. One of this book’s messages is that the two simple extreme theories, prey dependence and ratio dependence, are not the only alternatives: they are the ends of a spectrum. There are ecological domains in which one view works better than the other, with an intermediate view also being a possible case. […] Our years of work spent on the subject have led us to the conclusion that, although prey dependence might conceivably be obtained in laboratory settings, the common case occurring in nature lies close to the ratio-dependent end. We believe that the latter, instead of the prey-dependent end, can be viewed as the “null model of predation.” […] we propose the gradual interference model, a specific form of predator-dependent functional response that is approximately prey dependent (as in the standard theory) at low consumer abundances and approximately ratio dependent at high abundances. […] When density is low, consumers do not interfere and prey dependence works (as in the standard theory). When consumer density is sufficiently high, interference causes ratio dependence to emerge. In the intermediate densities, predator-dependent models describe partial interference.”

“Studies of food chains are on the edge of two domains of ecology: population and community ecology. The properties of food chains are determined by the nature of their basic link, the interaction of two species, a consumer and its resource, a predator and its prey. The study of this basic link of the chain is part of population ecology while the more complex food webs belong to community ecology. This is one of the main reasons why understanding the dynamics of predation is important for many ecologists working at different scales.”

“We have named predator-dependent the functional responses of the form g = g(N,P), where the predator density P acts (in addition to N [prey abundance, US]) as an independent variable to determine the per capita kill rate […] predator-dependent functional response models have one more parameter than the prey-dependent or the ratio-dependent models. […] The main interest that we see in these intermediate models is that the additional parameter can provide a way to quantify the position of a specific predator-prey pair of species along a spectrum with prey dependence at one end and ratio dependence at the other end:

g(N) ← g(N,P) → g(N/P)   (1.21)

In the Hassell-Varley and Arditi-Akçakaya models […] the mutual interference parameter m plays the role of a cursor along this spectrum, from m = 0 for prey dependence to m = 1 for ratio dependence. Note that this theory does not exclude that strong interference goes “beyond ratio dependence,” with m > 1. This is also called overcompensation. […] In this book, rather than being interested in the interference parameters per se, we use predator-dependent models to determine, either parametrically or nonparametrically, which of the ends of the spectrum (1.21) better describes predator-prey systems in general.”
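To make the interference spectrum concrete, here’s a small sketch (not from the book – I’m using the standard Hassell-Varley modification of a Holling type II response, and the parameter values are made up) showing how m moves the functional response from prey dependence to ratio dependence:

```python
def functional_response(N, P, a=1.0, h=0.1, m=0.0):
    """Per capita kill rate g(N, P): a Holling type II response whose
    attack rate a is scaled by P**(-m) (Hassell-Varley interference),
    which simplifies to g = a*N / (P**m + a*h*N).
    m = 0 gives prey dependence g(N); m = 1 gives ratio dependence g(N/P)."""
    return a * N / (P**m + a * h * N)

# m = 0: the response ignores predator density entirely.
assert functional_response(10, 100, m=0.0) == functional_response(10, 1, m=0.0)

# m = 1: only the ratio N/P matters, so doubling both abundances
# leaves the per capita kill rate unchanged.
assert abs(functional_response(20, 10, m=1.0)
           - functional_response(10, 5, m=1.0)) < 1e-12

# Intermediate m: interference lowers the kill rate as predator
# density grows, but less than proportionally.
assert functional_response(50, 4, m=0.5) < functional_response(50, 2, m=0.5)
```

With m > 1 (‘overcompensation’) the same expression simply scales the attack rate down faster than predator density grows.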

“[T]he fundamental problem of the Lotka-Volterra and the Rosenzweig-MacArthur dynamic models lies in the functional response and in the fact that this mathematical function is assumed not to depend on consumer density. Since this function measures the number of prey captured per consumer per unit time, it is a quantity that should be accessible to observation. This variable could be apprehended either on the fast behavioral time scale or on the slow demographic time scale. These two approaches need not necessarily reveal the same properties: […] a given species could display a prey-dependent response on the fast scale and a predator-dependent response on the slow scale. The reason is that, on a very short scale, each predator individually may “feel” virtually alone in the environment and react only to the prey that it encounters. On the long scale, the predators are more likely to be affected by the presence of conspecifics, even without direct encounters. In the demographic context of this book, it is the long time scale that is relevant. […] if predator dependence is detected on the fast scale, then it can be inferred that it must be present on the slow scale; if predator dependence is not detected on the fast scale, it cannot be inferred that it is absent on the slow scale.”

Some related thoughts. A different way to think about this – which they don’t mention in the book, but which sprang to mind as I was reading it – is to think about this stuff in terms of a formal predator territorial overlap model and then asking yourself this question: Assume there’s zero territorial overlap – does this fact mean that the existence of conspecifics does not matter? The answer is of course no. The sizes of the individual patches/territories may be greatly influenced by the predator density even in such a context. Also, the territorial area available to potential offspring (certainly a fitness-relevant parameter) may be greatly influenced by the number of competitors inhabiting the surrounding territories. In relation to the last part of the quote it’s easy to see that in a model with significant territorial overlap you don’t need direct behavioural interaction among predators for the overlap to be relevant; even if two bears never meet, if one of them eats a fawn the other one would have come across two days later, well, such indirect influences may be important for prey availability. Of course as prey tend to be mobile, even if predator territories are static and non-overlapping in a geographic sense, they might not be in a functional sense. Moving on…

“In [chapter 2 we] attempted to assess the presence and the intensity of interference in all functional response data sets that we could gather in the literature. Each set must be trivariate, with estimates of the prey consumed at different values of prey density and different values of predator densities. Such data sets are not very abundant because most functional response experiments present in the literature are simply bivariate, with variations of the prey density only, often with a single predator individual, ignoring the fact that predator density can have an influence. This results from the usual presentation of functional responses in textbooks, which […] focus only on the influence of prey density.
Among the data sets that we analyzed, we did not find a single one in which the predator density did not have a significant effect. This is a powerful empirical argument against prey dependence. Most systems lie somewhere on the continuum between prey dependence (m=0) and ratio dependence (m=1). However, they do not appear to be equally distributed. The empirical evidence provided in this chapter suggests that they tend to accumulate closer to the ratio-dependent end than to the prey-dependent end.”
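As an aside, it’s easy to see in outline how m can be estimated from such trivariate data. Here is a noise-free toy version (my own sketch, not the book’s actual estimation procedure, and the parameter values are made up): invert the type II response to recover the effective attack rate at each predator density, then regress its logarithm on log P – the slope is −m.

```python
import math

a, h, m_true = 1.0, 0.05, 0.7   # illustrative "true" parameters
N = 50.0                         # prey density held fixed

# Synthetic per capita kill rates at several predator densities,
# generated from the Hassell-Varley type II response
# g = a*N / (P**m + a*h*N).
Ps = [1.0, 2.0, 4.0, 8.0, 16.0]
gs = [a * N / (P**m_true + a * h * N) for P in Ps]

# Invert g = a'N/(1 + a'hN) to recover the effective attack rate
# a' = g / (N * (1 - g*h)), which equals a * P**(-m).
xs = [math.log(P) for P in Ps]
ys = [math.log(g / (N * (1.0 - g * h))) for g in gs]

# Least-squares slope of log a' on log P is -m.
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
m_est = -slope
print(round(m_est, 3))  # recovers 0.7 on this noise-free data
```

With real data the kill rates are noisy and the regression only gives an estimate of m, but the logic is the same.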

“Equilibrium properties result from the balanced predator-prey equations and contain elements of the underlying dynamic model. For this reason, the response of equilibria to a change in model parameters can inform us about the structure of the underlying equations. To check the appropriateness of the ratio-dependent versus prey-dependent views, we consider the theoretical equilibrium consequences of the two contrasting assumptions and compare them with the evidence from nature. […] According to the standard prey-dependent theory, in reference to [an] increase in primary production, the responses of the populations strongly depend on their level and on the total number of trophic levels. The last, top level always responds proportionally to F [primary input]. The next to the last level always remains constant: it is insensitive to enrichment at the bottom because it is perfectly controled [sic] by the last level. The first, primary producer level increases if the chain length has an odd number of levels, but declines (or stays constant with a Lotka-Volterra model) in the case of an even number of levels. According to the ratio-dependent theory, all levels increase proportionally, independently of how many levels are present. The present purpose of this chapter is to show that the second alternative is confirmed by natural data and that the strange predictions of the prey-dependent theory are unsupported.”
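The contrast is visible even in the simplest two-level models. The sketch below uses a logistic prey with linear functional responses – much cruder than the models analyzed in the book, and the parameter values are made up – but it reproduces the qualitative point: under prey dependence the prey equilibrium is pinned by the predator’s zero-growth condition and ignores enrichment, whereas under ratio dependence both levels scale proportionally with the carrying capacity K.

```python
def prey_dependent_eq(K, r=1.0, a=0.5, e=0.5, mu=0.1):
    """Interior equilibrium of dN/dt = r*N*(1-N/K) - a*N*P,
    dP/dt = e*a*N*P - mu*P (Lotka-Volterra with logistic prey)."""
    N = mu / (e * a)               # set by dP/dt = 0: independent of K
    P = (r / a) * (1.0 - N / K)    # the predator absorbs all enrichment
    return N, P

def ratio_dependent_eq(K, r=1.0, alpha=0.3, e=0.5, mu=0.1):
    """Interior equilibrium of dN/dt = r*N*(1-N/K) - alpha*N,
    dP/dt = e*alpha*N - mu*P (linear ratio-dependent consumption:
    per capita kill rate alpha*N/P, so total consumption is alpha*N)."""
    N = K * (1.0 - alpha / r)      # proportional to K
    P = e * alpha * N / mu         # also proportional to K
    return N, P

N1, P1 = prey_dependent_eq(K=1.0)
N2, P2 = prey_dependent_eq(K=2.0)
print(N2 / N1)           # 1.0 -- prey level does not respond to enrichment

M1, Q1 = ratio_dependent_eq(K=1.0)
M2, Q2 = ratio_dependent_eq(K=2.0)
print(M2 / M1, Q2 / Q1)  # 2.0 2.0 -- both levels double with K
```

With more levels and saturating responses the prey-dependent predictions become the odd alternating pattern described in the quote, but this two-level toy already shows the basic disagreement.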

“If top predators are eliminated or reduced in abundance, models predict that the sequential lower trophic levels must respond by changes of alternating signs. For example, in a three-level system of plants-herbivores-predators, the reduction of predators leads to the increase of herbivores and the consequential reduction in plant abundance. This response is commonly called the trophic cascade. In a four-level system, the bottom level will increase in response to harvesting at the top. These predicted responses are quite intuitive and are, in fact, true for both short-term and long-term responses, irrespective of the theory one employs. […] A number of excellent reviews have summarized and meta-analyzed large amounts of data on trophic cascades in food chains […] In general, the cascading reaction is strongest in lakes, followed by marine systems, and weakest in terrestrial systems. […] Any theory that claims to describe the trophic chain equilibria has to produce such cascading when top predators are reduced or eliminated. It is well known that the standard prey-dependent theory supports this view of top-down cascading. It is not widely appreciated that top-down cascading is likewise a property of ratio-dependent trophic chains. […] It is [only] for equilibrial responses to enrichment at the bottom that predictions are strikingly different according to the two theories”.

As the book does spend a little time on this, I should perhaps briefly interject here that the above paragraph should not be taken to indicate that the two types of models provide identical predictions in the top-down cascading context in all cases; both predict cascading, but there are nevertheless some subtle differences between the models here as well. Some of these differences are, however, quite hard to test.

“[T]he traditional Lotka-Volterra interaction term […] is nothing other than the law of mass action of chemistry. It assumes that predator and prey individuals encounter each other randomly in the same way that molecules interact in a chemical solution. Other prey-dependent models, like Holling’s, derive from the same idea. […] an ecological system can only be described by such a model if conspecifics do not interfere with each other and if the system is sufficiently homogeneous […] we will demonstrate that spatial heterogeneity, be it in the form of a prey refuge or in the form of predator clusters, leads to emergence of gradual interference or of ratio dependence when the functional response is observed at the population level. […] We present two mechanistic individual-based models that illustrate how, with gradually increasing predator density and gradually increasing predator clustering, interference can become gradually stronger. Thus, a given biological system, prey dependent at low predator density, can gradually become ratio dependent at high predator density. […] ratio dependence is a simple way of summarizing the effects induced by spatial heterogeneity, while the prey dependent [models] (e.g., Lotka-Volterra) is more appropriate in homogeneous environments.”

“[W]e consider that a good model of interacting species must be fundamentally invariant to a proportional change of all abundances in the system. […] Allowing interacting populations to expand in balanced exponential growth makes the laws of ecology invariant with respect to multiplying interacting abundances by the same constant, so that only ratios matter. […] scaling invariance is required if we wish to preserve the possibility of joint exponential growth of an interacting pair. […] a ratio-dependent model allows for joint exponential growth. […] Neither the standard prey-dependent models nor the more general predator-dependent models allow for balanced growth. […] In our view, communities must be expected to expand exponentially in the presence of unlimited resources. Of course, limiting factors ultimately stop this expansion just as they do for a single species. With our view, it is the limiting resources that stop the joint expansion of the interacting populations; it is not directly due to the interactions themselves. This partitioning of the causes is a major simplification that traditional theory implies only in the case of a single species.”
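Their invariance requirement can be stated mechanically: the right-hand side of the model should be homogeneous of degree one in the abundances, i.e. f(cN, cP) = c·f(N, P), so that multiplying both populations by a constant merely rescales the flow and balanced exponential growth remains possible. A small numerical check (my own sketch, with made-up parameters), comparing a mass-action Lotka-Volterra term with an Arditi-Ginzburg response that depends only on the ratio N/P:

```python
r, alpha, h, e, mu = 1.0, 1.0, 0.1, 0.5, 0.2

def lotka_volterra(N, P):
    """Mass-action interaction term alpha*N*P: degree 2 in the abundances."""
    return (r * N - alpha * N * P, e * alpha * N * P - mu * P)

def ratio_dependent(N, P):
    """Arditi-Ginzburg response g(N/P) = alpha*(N/P)/(1 + alpha*h*(N/P));
    the whole vector field is homogeneous of degree 1, so only ratios matter."""
    g = alpha * (N / P) / (1.0 + alpha * h * (N / P))
    return (r * N - g * P, e * g * P - mu * P)

def homogeneous_deg1(f, N=3.0, P=2.0, c=5.0, tol=1e-9):
    """Check f(c*N, c*P) == c * f(N, P) at a sample point."""
    fN, fP = f(N, P)
    gN, gP = f(c * N, c * P)
    return abs(gN - c * fN) < tol and abs(gP - c * fP) < tol

print(homogeneous_deg1(ratio_dependent))  # True: scaling both abundances rescales the flow
print(homogeneous_deg1(lotka_volterra))   # False: the N*P term breaks the scaling
```

This is exactly why the ratio-dependent pair can grow exponentially together in the absence of limiting resources while the mass-action pair cannot.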

August 1, 2017 Posted by | Biology, Books, Chemistry, Ecology, Mathematics, Studies

Melanoma therapeutic strategies that select against resistance

A short lecture, but interesting:

If you’re not an oncologist, these two links in particular might be helpful to have a look at before you start out: BRAF (gene) & Myc. A very substantial proportion of the talk is devoted to math and stats methodology (which some people will find interesting and others …will not).

July 3, 2017 Posted by | Biology, Cancer/oncology, Genetics, Lectures, Mathematics, Medicine, Statistics

The Antarctic

“A very poor book with poor coverage, mostly about politics and history (and a long collection of names of treaties and organizations). I would definitely not have finished it if it were much longer than it is.”

That was what I wrote about the book in my goodreads review. I was strongly debating whether or not to blog it at all, but in the end I settled for some very lazy coverage, consisting only of links to content covered in the book. I include these links mainly to have at least some chance of remembering later on which kinds of things the book dealt with.

If you’re interested enough in the Antarctic to read a book about it, read Scott’s Last Expedition instead of this one (here’s my goodreads review of Scott).


Antarctica (featured).
Antarctic Convergence.
Antarctic Circle.
Southern Ocean.
Antarctic Circumpolar Current.
West Antarctic Ice Sheet.
East Antarctic Ice Sheet.
McMurdo Dry Valleys.
Patagonian toothfish.
Antarctic krill.
Fabian Gottlieb von Bellingshausen.
Edward Bransfield.
James Clark Ross.
United States Exploring Expedition.
Heroic Age of Antarctic Exploration (featured).
Nimrod Expedition (featured).
Roald Amundsen.
Wilhelm Filchner.
Japanese Antarctic Expedition.
Terra Nova Expedition (featured).
Lincoln Ellsworth.
British Graham Land expedition.
German Antarctic Expedition (1938–1939).
Operation Highjump.
Operation Windmill.
Operation Deep Freeze.
Commonwealth Trans-Antarctic Expedition.
Caroline Mikkelsen.
International Association of Antarctica Tour Operators.
Territorial claims in Antarctica.
International Geophysical Year.
Antarctic Treaty System.
Operation Tabarin.
Scientific Committee on Antarctic Research.
United Nations Convention on the Law of the Sea.
Convention on the Continental Shelf.
Council of Managers of National Antarctic Programs.
British Antarctic Survey.
International Polar Year.
Antarctic ozone hole.
Gamburtsev Mountain Range.
Pine Island Glacier (‘good article’).
Census of Antarctic Marine Life.
Lake Ellsworth Consortium.
Antarctic fur seal.
Southern elephant seal.
Grytviken (whaling-related).
International Convention for the Regulation of Whaling.
International Whaling Commission.
Ocean Drilling Program.
Convention on the Regulation of Antarctic Mineral Resource Activities.
Agreement on the Conservation of Albatrosses and Petrels.

July 3, 2017 Posted by | Biology, Books, Geography, Geology, History, Wikipedia

The Biology of Moral Systems (II)

There are multiple really great books I have read ‘recently’ and which I have either not blogged at all, or not blogged in anywhere near the amount of detail they deserve; Alexander’s book is one of those books. I hope to get rid of some of the backlog soon. You can read my first post about the book here, and it might be a good idea to do so as I won’t allude to material covered in the first post here. In this post I have added some quotes from and comments related to the book’s second chapter, ‘A Biological View of Morality’.

“Moral systems are systems of indirect reciprocity. They exist because confluences of interest within groups are used to deal with conflicts of interest between groups. Indirect reciprocity develops because interactions are repeated, or flow among a society’s members, and because information about subsequent interactions can be gleaned from observing the reciprocal interactions of others.
To establish moral rules is to impose rewards and punishments (typically assistance and ostracism, respectively) to control social acts that, respectively, help or hurt others. To be regarded as moral, a rule typically must represent widespread opinion, reflecting the fact that it must apply with a certain degree of indiscriminateness.”

“Moral philosophers have not treated the beneficence of humans as a part, somehow, of their selfishness; yet, as Trivers (1971) suggested, the biologist’s view of lifetimes leads directly to this argument. In other words, the normally expressed beneficence, or altruism, of parenthood and nepotism and the temporary altruism (or social investment) of reciprocity are expected to result in greater returns than their alternatives.
If biologists are correct, all that philosophers refer to as altruistic or utilitarian behavior by individuals will actually represent either the temporary altruism (phenotypic beneficence or social investment) of indirect somatic effort [‘Direct somatic effort refers to self-help that involves no other persons. Indirect somatic effort involves reciprocity, which may be direct or indirect. Returns from direct and indirect reciprocity may be immediate or delayed’ – Alexander spends some pages classifying human effort in terms of such ‘atoms of sociality’, which are useful devices for analytical purposes, but I decided not to cover that stuff in detail here – US] or direct and indirect nepotism. The exceptions are what might be called evolutionary mistakes or accidents that result in unreciprocated or “genetic” altruism, deleterious to both the phenotype and genotype of the altruist; such mistakes can occur in all of the above categories” [I should point out that Boyd and Richerson’s book Not by Genes Alone – another great book which I hope to blog soon – is worth having a look at if after reading Alexander’s book you think that he does not cover the topic of how and why such mistakes might happen in the amount of detail it deserves; they also cover related topics in some detail, from a different angle – US]

“It is my impression that many moral philosophers do not approach the problem of morality and ethics as if it arose as an effort to resolve conflicts of interests. Their involvement in conflicts of interest seems to come about obliquely through discussions of individuals’ views with respect to moral behavior, or their proximate feelings about morality – almost as if questions about conflicts of interest arise only because we operate under moral systems, rather than vice versa.”

“The problem, in developing a theory of moral systems that is consistent with evolutionary theory from biology, is in accounting for the altruism of moral behavior in genetically selfish terms. I believe this can be done by interpreting moral systems as systems of indirect reciprocity.
I regard indirect reciprocity as a consequence of direct reciprocity occurring in the presence of interested audiences – groups of individuals who continually evaluate the members of their society as possible future interactants from whom they would like to gain more than they lose […] Even in directly reciprocal interactions […] net losses to self […] may be the actual aim of one or even both individuals, if they are being scrutinized by others who are likely to engage either individual subsequently in reciprocity of greater significance than that occurring in the scrutinized acts. […] Systems of indirect reciprocity, and therefore moral systems, are social systems structured around the importance of status. The concept of status implies that an individual’s privileges, or its access to resources, are controlled in part by how others collectively think of him (hence, treat him) as a result of past interactions (including observations of interactions with others). […] The consequences of indirect reciprocity […] include the concomitant spread of altruism (as social investment genetically valuable to the altruist), rules, and efforts to cheat […]. I would not contend that we always carry out cost-benefit analyses on these issues deliberately or consciously. I do, however, contend that such analyses occur, sometimes consciously, sometimes not, and that we are evolved to be exceedingly accurate and quick at making them […] [A] conscience [is what] I have interpreted (Alexander, 1979a) as the “still small voice that tells us how far we can go in serving our own interests without incurring intolerable risks.””

“The long-term existence of complex patterns of indirect reciprocity […] seems to favor the evolution of keen abilities to (1) make one’s self seem more beneficent than is the case; and (2) influence others to be beneficent in such fashions as to be deleterious to themselves and beneficial to the moralizer, e.g. to lead others to (a) invest too much, (b) invest wrongly in the moralizer or his relatives and friends, or (c) invest indiscriminately on a larger scale than would otherwise be the case. According to this view, individuals are expected to parade the idea of much beneficence, and even of indiscriminate altruism as beneficial, so as to encourage people in general to engage in increasing amounts of social investment whether or not it is beneficial to their interests. […] They may also be expected to depress the fitness of competitors by identifying them, deceptively or not, as reciprocity cheaters (in other words, to moralize and gossip); to internalize rules or evolve the ability to acquire a conscience, interpreted […] as the ability to use our own judgment to serve our own interests; and to self-deceive and display false sincerity as defenses against detection of cheating and attributions of deliberateness in cheating […] Everyone will wish to appear more beneficent than he is. There are two reasons: (1) this appearance, if credible, is more likely to lead to direct social rewards than its alternatives; (2) it is also more likely to encourage others to be more beneficent.”

“Consciousness and related aspects of the human psyche (self-awareness, self-reflection, foresight, planning, purpose, conscience, free will, etc.) are here hypothesized to represent a system for competing with other humans for status, resources, and eventually reproductive success. More specifically, the collection of these attributes is viewed as a means of seeing ourselves and our life situations as others see us and our life situations – most particularly in ways that will cause (the most and the most important of) them to continue to interact with us in fashions that will benefit us and seem to benefit them.
Consciousness, then, is a game of life in which the participants are trying to comprehend what is in one another’s minds before, and more effectively than, it can be done in reverse.”

“Provided with a means of relegating our deceptions to the subconsciousness […] false sincerity becomes easier and detection more difficult. There are reasons for believing that one does not need to know his own personal interests consciously in order to serve them as much as he needs to know the interests of others to thwart them. […] I have suggested that consciousness is a way of making our social behavior so unpredictable as to allow us to outmaneuver others; and that we press into subconsciousness (as opposed to forgetting) those things that remain useful to us but would be detrimental to us if others knew about them, and on which we are continually tested and would have to lie deliberately if they remained in our conscious mind […] Conscious concealment of interests, or disavowal, is deliberate deception, considered more reprehensible than anything not conscious. Indeed, if one does not know consciously what his interests are, he cannot, in some sense, be accused of deception even though he may be using an evolved ability of self-deception to deceive others. So it is not always – maybe not usually – in our evolutionary or surrogate-evolutionary interests to make them conscious […] If people can be fooled […] then there will be continual selection for becoming better at fooling others […]. This may include causing them to think that it will be best for them to help you when it is not. This ploy works because of the thin line everybody must continually tread with respect to not showing selfishness. If some people are self-destructively beneficent (i.e., make altruistic mistakes), and if people often cannot tell if one is such a mistake-maker, it might be profitable even to try to convince others that one is such a mistake-maker so as to be accepted as a cooperator or so that the other will be beneficent in expectation of large returns (through “mistakes”) later. 
[…] Reciprocity may work this way because it is grounded evolutionarily in nepotism, appropriate dispensing of nepotism (as well as reciprocity) depends upon learning, and the wrong things can be learned. [Boyd and Richerson talk about this particular aspect, the learning part, in much more detail in their books – US] Self-deception, then, may not be a pathological or detrimental trait, at least in most people most of the time. Rather, it may have evolved as a way to deceive others.”

“The only time that utilitarianism (promoting the greatest good to the greatest number) is predicted by evolutionary theory is when the interests of the group (the “greatest number”) and the individual coincide, and in such cases utilitarianism is not really altruistic in either the biologists’ or the philosophers’ sense of the term. […] If Kohlberg means to imply that a significant proportion of the populace of the world either implicitly or explicitly favors a system in which everyone (including himself) behaves so as to bring the greatest good to the greatest number, then I simply believe that he is wrong. If he supposes that only a relatively few – particularly moral philosophers and some others like them – have achieved this “stage,” then I also doubt the hypothesis. I accept that many people are aware of this concept of utility, that a small minority may advocate it, and that an even smaller minority may actually believe that they behave according to it. I speculate, however, that with a few inadvertent or accidental exceptions, no one actually follows this precept. I see the concept as having its main utility as a goal towards which one may exhort others to aspire, and towards which one may behave as if (or talk as if) aspiring, while actually practicing complex forms of self-interest.”

“Generally speaking, the bigger the group, the more complex the social organization, and the greater the group’s unity of purpose the more limited is individual entrepreneurship.”

“The function or raison d’etre [sic] of moral systems is evidently to provide the unity required to enable the group to compete successfully with other human groups. […] the argument that human evolution has been guided to some large extent by intergroup competition and aggression […] is central to the theory of morality presented here”.

June 29, 2017 Posted by | Anthropology, Biology, Books, Evolutionary biology, Genetics, Philosophy


(The Pestalozzi quotes below are from The Education of Man, a short and poor aphorism collection I cannot possibly recommend despite the inclusion of quotes from it in this post.)

i. “Only a good conscience always gives man the courage to handle his affairs straightforwardly, openly and without evasion.” (Johann Heinrich Pestalozzi)

ii. “An intimate relationship in its full power is always a source of human wisdom and strength in relationships less intimate.” (-ll-)

iii. “Whoever is unwilling to help himself can be helped by no one.” (-ll-)

iv. “He who has filled his pockets in the service of injustice will have little good to say on behalf of justice.” (-ll-)

v. “It is Man’s fate that no one knows the truth alone; we all possess it, but it is divided up among us. He who learns from one man only, will never learn what the others know.” (-ll-)

vi. “No scoundrel is so wicked that he cannot at some point truthfully reprove some honest man” (-ll-)

vii. “The man too keenly aware of his good reputation is likely to have a bad one.” (-ll-)

viii. “Many words make an excuse anything but convincing.” (-ll-)

ix. “Fashions are usually seen in their true perspective only when they have gone out of fashion.” (-ll-)

x. “A thing that nobody looks for is seldom found.” (-ll-)

xi. “Many discoveries must have been stillborn or smothered at birth. We know only those which survived.” (William Ian Beardmore Beveridge)

xii. “Time is the most valuable thing a man can spend.” (Theophrastus)

xiii. “The only man who makes no mistakes is the man who never does anything.” (Theodore Roosevelt)

xiv. “It is hard to fail, but it is worse never to have tried to succeed.” (-ll-)

xv. “From their appearance in the Triassic until the end of the Cretaceous, a span of 140 million years, mammals remained small and inconspicuous while all the ecological roles of large terrestrial herbivores and carnivores were monopolized by dinosaurs; mammals did not begin to radiate and produce large species until after the dinosaurs had already become extinct at the end of the Cretaceous. One is forced to conclude that dinosaurs were competitively superior to mammals as large land vertebrates.” (Robert T. Bakker)

xvi. “Plants and plant-eaters co-evolved. And plants aren’t the passive partners in the chain of terrestrial life. […] A birch tree doesn’t feel cosmic fulfillment when a moose munches its leaves; the tree species, in fact, evolves to fight the moose, to keep the animal’s munching lips away from vulnerable young leaves and twigs. In the final analysis, the merciless hand of natural selection will favor the birch genes that make the tree less and less palatable to the moose in generation after generation. No plant species could survive for long by offering itself as unprotected fodder.” (-ll-)

xvii. “… if you look at crocodiles today, they aren’t really representative of what the lineage of crocodiles look like. Crocodiles are represented by about 23 species, plus or minus a couple. Along that lineage the more primitive members weren’t aquatic. A lot of them were bipedal, a lot of them looked like little dinosaurs. Some were armored, others had no teeth. They were all fully terrestrial. So this is just the last vestige of that radiation that we’re seeing. And the ancestor of both dinosaurs and crocodiles would have, to the untrained eye, looked much more like a dinosaur.” (Mark Norell)

xviii. “If we are to understand the interactions of a large number of agents, we must first be able to describe the capabilities of individual agents.” (John Henry Holland)

xix. “Evolution continually innovates, but at each level it conserves the elements that are recombined to yield the innovations.” (-ll-)

xx. “Model building is the art of selecting those aspects of a process that are relevant to the question being asked. […] High science depends on this art.” (-ll-)

June 19, 2017 Posted by | Biology, Books, Botany, Evolutionary biology, Paleontology, Quotes/aphorisms

Imported Plant Diseases

I found myself debating whether or not I should read Lewis, Petrovskii, and Potts’ text The Mathematics Behind Biological Invasions a while back, but in the end I decided that it would simply be too much work to justify the potential payoff – so instead of reading the book, I decided to just watch the above lecture and leave it at that. This lecture is definitely a very poor textbook substitute, and I was strongly debating whether or not to blog it because it just isn’t very good; the level of coverage is very low. Which is sad, because some of the diseases discussed in the lecture – like e.g. wheat leaf rust – are really important and worth knowing about. One of the important points made in the lecture is that in the context of potential epidemics, it can be difficult to know when and how to intervene because of the uncertainty involved; early action may be the more efficient choice in terms of resource use, but the earlier you intervene, the less certain the intervention payoff will be and the less you’ll know about stuff like transmission patterns (…would outbreak X ever really have spread very wide if we had not intervened? We don’t observe the counterfactual…). Such aspects are of course not only relevant to plant diseases, and the lecture also contains other basic insights from epidemiology which apply to other types of disease – but if you’ve ever opened a basic epidemiology text you’ll know all these things already.

May 22, 2017 Posted by | Biology, Botany, Ecology, Epidemiology, Lectures

Biodemography of aging (IV)

My working assumption as I was reading part two of the book was that I would not be covering that part of the book in much detail here, because it would simply be too much work to make such posts legible to the readership of this blog. Later, however, while writing this post, it occurred to me that given that almost nobody reads along here anyway (I’m not complaining, mind you – this is how I like it these days), the main beneficiary of my blog posts will always be myself, which led to the related observation/notion that I should not be limiting my coverage of interesting stuff here simply because some hypothetical and probably nonexistent readership out there might not be able to follow the coverage. So when I started out writing this post I was working under the assumption that it would be my last post about the book, but I now feel sure that if I find the time I’ll add at least one more post about the book’s statistics coverage. On a related note I am explicitly making the observation here that this post was written for my benefit, not yours. You can read it if you like, or not, but it was not really written for you.

I have added bold in a few places to emphasize key concepts and observations from the quoted paragraphs, and in order to make the post easier for me to navigate later (all the italics below are, on the other hand, those of the authors of the book).

“Biodemography is a multidisciplinary branch of science that unites under its umbrella various analytic approaches aimed at integrating biological knowledge and methods and traditional demographic analyses to shed more light on variability in mortality and health across populations and between individuals. Biodemography of aging is a special subfield of biodemography that focuses on understanding the impact of processes related to aging on health and longevity.”

“Mortality rates as a function of age are a cornerstone of many demographic analyses. The longitudinal age trajectories of biomarkers add a new dimension to the traditional demographic analyses: the mortality rate becomes a function of not only age but also of these biomarkers (with additional dependence on a set of sociodemographic variables). Such analyses should incorporate dynamic characteristics of trajectories of biomarkers to evaluate their impact on mortality or other outcomes of interest. Traditional analyses using baseline values of biomarkers (e.g., Cox proportional hazards or logistic regression models) do not take into account these dynamics. One approach to the evaluation of the impact of biomarkers on mortality rates is to use the Cox proportional hazards model with time-dependent covariates; this approach is used extensively in various applications and is available in all popular statistical packages. In such a model, the biomarker is considered a time-dependent covariate of the hazard rate and the corresponding regression parameter is estimated along with standard errors to make statistical inference on the direction and the significance of the effect of the biomarker on the outcome of interest (e.g., mortality). However, the choice of the analytic approach should not be governed exclusively by its simplicity or convenience of application. It is essential to consider whether the method gives meaningful and interpretable results relevant to the research agenda. In the particular case of biodemographic analyses, the Cox proportional hazards model with time-dependent covariates is not the best choice.”
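To make the time-dependent-covariate setup above a bit more concrete, here's a small sketch of my own (not from the book): fitting such a model in practice typically means expanding each subject's follow-up into a "long" counting-process format, one row per interval between examinations, with the biomarker value carried forward. All field names and values below are invented.

```python
# One hypothetical subject, examined at ages 50, 55 and 60, dying at 62.
# Each examination opens an interval (start, stop]; the event indicator
# is 1 only on the interval in which the death occurs.
exams = {"id": 1, "ages": [50, 55, 60], "sbp": [128, 135, 151], "death_age": 62}

rows = []
stops = exams["ages"][1:] + [exams["death_age"]]
for start, stop, sbp in zip(exams["ages"], stops, exams["sbp"]):
    rows.append({"id": exams["id"], "start": start, "stop": stop,
                 "sbp": sbp, "event": int(stop == exams["death_age"])})

for r in rows:
    print(r)
```

A table like `rows` is what the "time-dependent covariate" interfaces of the popular survival packages expect as input.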

“Longitudinal studies of aging present special methodological challenges due to inherent characteristics of the data that need to be addressed in order to avoid biased inference. The challenges are related to the fact that the populations under study (aging individuals) experience substantial dropout rates related to death or poor health and often have co-morbid conditions related to the disease of interest. The standard assumption made in longitudinal analyses (although usually not explicitly mentioned in publications) is that dropout (e.g., death) is not associated with the outcome of interest. While this can be safely assumed in many general longitudinal studies (where, e.g., the main causes of dropout might be the administrative end of the study or moving out of the study area, which are presumably not related to the studied outcomes), the very nature of the longitudinal outcomes (e.g., measurements of some physiological biomarkers) analyzed in a longitudinal study of aging assumes that they are (at least hypothetically) related to the process of aging. Because the process of aging leads to the development of diseases and, eventually, death, in longitudinal studies of aging an assumption of non-association of the reason for dropout and the outcome of interest is, at best, risky, and usually is wrong. As an illustration, we found that the average trajectories of different physiological indices of individuals dying at earlier ages markedly deviate from those of long-lived individuals, both in the entire Framingham original cohort […] and also among carriers of specific alleles […] In such a situation, panel compositional changes due to attrition affect the averaging procedure and modify the averages in the total sample. Furthermore, biomarkers are subject to measurement error and random biological variability. They are usually collected intermittently at examination times which may be sparse and typically biomarkers are not observed at event times. 
It is well known in the statistical literature that ignoring measurement errors and biological variation in such variables and using their observed “raw” values as time-dependent covariates in a Cox regression model may lead to biased estimates and incorrect inferences […] Standard methods of survival analysis such as the Cox proportional hazards model (Cox 1972) with time-dependent covariates should be avoided in analyses of biomarkers measured with errors because they can lead to biased estimates.”
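The attenuation bias mentioned here is easy to reproduce in a toy simulation of my own (this is a generic illustration, not the book's model): survival times are generated from an exponential hazard depending on a true biomarker, and the log-hazard coefficient is then estimated using either the true value or an error-contaminated measurement. The noisy-covariate estimate shrinks toward zero.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20000
beta_true = 1.0

x = rng.normal(0.0, 1.0, n)            # true biomarker value
x_obs = x + rng.normal(0.0, 1.0, n)    # value measured with error

# Exponential survival times with hazard exp(beta_true * x):
t = rng.exponential(1.0 / np.exp(beta_true * x))

def mle_beta(cov):
    """Grid-search MLE of beta in the exponential hazard exp(beta * cov)."""
    grid = np.linspace(-2.0, 2.0, 401)
    loglik = [np.sum(b * cov - t * np.exp(b * cov)) for b in grid]
    return grid[int(np.argmax(loglik))]

b_true_cov = mle_beta(x)       # close to 1.0
b_noisy_cov = mle_beta(x_obs)  # attenuated toward 0
print(b_true_cov, b_noisy_cov)
```

With equal signal and noise variances, as here, roughly half the effect disappears, which is the sort of bias the authors warn against.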

“Statistical methods aimed at analyses of time-to-event data jointly with longitudinal measurements have become known in the mainstream biostatistical literature as “joint models for longitudinal and time-to-event data” (“survival” or “failure time” are often used interchangeably with “time-to-event”) or simply “joint models.” This is an active and fruitful area of biostatistics with an explosive growth in recent years. […] The standard joint model consists of two parts, the first representing the dynamics of longitudinal data (which is referred to as the “longitudinal sub-model”) and the second one modeling survival or, generally, time-to-event data (which is referred to as the “survival sub-model”). […] Numerous extensions of this basic model have appeared in the joint modeling literature in recent decades, providing great flexibility in applications to a wide range of practical problems. […] The standard parameterization of the joint model (11.2) assumes that the risk of the event at age t depends on the current “true” value of the longitudinal biomarker at this age. While this is a reasonable assumption in general, it may be argued that additional dynamic characteristics of the longitudinal trajectory can also play a role in the risk of death or onset of a disease. For example, if two individuals at the same age have exactly the same level of some biomarker at this age, but the trajectory for the first individual increases faster with age than that of the second one, then the first individual can have worse survival chances for subsequent years. […] Therefore, extensions of the basic parameterization of joint models allowing for dependence of the risk of an event on such dynamic characteristics of the longitudinal trajectory can provide additional opportunities for comprehensive analyses of relationships between the risks and longitudinal trajectories. Several authors have considered such extended models. 
[…] joint models are computationally intensive and are sometimes prone to convergence problems [however such] models provide more efficient estimates of the effect of a covariate […] on the time-to-event outcome in the case in which there is […] an effect of the covariate on the longitudinal trajectory of a biomarker. This means that analyses of longitudinal and time-to-event data in joint models may require smaller sample sizes to achieve comparable statistical power with analyses based on time-to-event data alone (Chen et al. 2011).”
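The two-individuals-same-level-different-slope example from the quote can be simulated directly. The sketch below is mine, not the authors' joint model: each subject has a random-intercept/random-slope biomarker trajectory, and the hazard at each age depends on the current true value; all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_subject(lam0=0.01, alpha=0.05, dt=0.2, t_max=100.0):
    """One subject: biomarker m(t) = b0 + b1 * t with random intercept
    and slope; the hazard at age t depends on the current value m(t)."""
    b0 = rng.normal(100.0, 10.0)
    b1 = rng.normal(0.5, 0.2)
    t = 0.0
    while t < t_max:
        hazard = lam0 * np.exp(alpha * (b0 + b1 * t - 100.0))
        if rng.random() < hazard * dt:   # event in (t, t + dt]?
            return t, b1
        t += dt
    return t_max, b1                     # censored at t_max

sims = [simulate_subject() for _ in range(1000)]
times = np.array([s[0] for s in sims])
slopes = np.array([s[1] for s in sims])
corr = np.corrcoef(slopes, times)[0, 1]
print(corr)  # negative: faster-rising trajectories tend to die earlier
```

A Cox model using only the baseline value of the biomarker would be blind to the slope effect driving this correlation, which is exactly the motivation for the extended parameterizations the authors discuss.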

“To be useful as a tool for biodemographers and gerontologists who seek biological explanations for observed processes, models of longitudinal data should be based on realistic assumptions and reflect relevant knowledge accumulated in the field. An example is the shape of the risk functions. Epidemiological studies show that the conditional hazards of health and survival events considered as functions of risk factors often have U- or J-shapes […], so a model of aging-related changes should incorporate this information. In addition, risk variables, and, what is very important, their effects on the risks of corresponding health and survival events, experience aging-related changes and these can differ among individuals. […] An important class of models for joint analyses of longitudinal and time-to-event data incorporating a stochastic process for description of longitudinal measurements uses an epidemiologically-justified assumption of a quadratic hazard (i.e., U-shaped in general and J-shaped for variables that can take values only on one side of the U-curve) considered as a function of physiological variables. Quadratic hazard models have been developed and intensively applied in studies of human longitudinal data”.
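The U-shaped hazard idea is simple enough to write down in a couple of lines; this is my generic sketch of the functional form, with made-up parameter values, not the book's fitted model:

```python
import numpy as np

def quadratic_hazard(x, mu0=0.001, x_opt=120.0, q=1e-6):
    """U-shaped mortality risk as a function of a physiological variable x:
    lowest at the 'optimal' value x_opt, rising on both sides of it."""
    return mu0 + q * (x - x_opt) ** 2

sbp = np.array([80.0, 100.0, 120.0, 140.0, 160.0])  # e.g. systolic blood pressure
risk = quadratic_hazard(sbp)
print(risk)  # minimal at 120, symmetric increase away from it
```

For a variable that can only take values on one side of the optimum, the same function restricted to that side gives the J-shape the authors mention.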

“Various approaches to statistical model building and data analysis that incorporate unobserved heterogeneity are ubiquitous in different scientific disciplines. Unobserved heterogeneity in models of health and survival outcomes can arise because there may be relevant risk factors affecting an outcome of interest that are either unknown or not measured in the data. Frailty models introduce the concept of unobserved heterogeneity in survival analysis for time-to-event data. […] Individual age trajectories of biomarkers can differ due to various observed as well as unobserved (and unknown) factors and such individual differences propagate to differences in risks of related time-to-event outcomes such as the onset of a disease or death. […] The joint analysis of longitudinal and time-to-event data is the realm of a special area of biostatistics named “joint models for longitudinal and time-to-event data” or simply “joint models” […] Approaches that incorporate heterogeneity in populations through random variables with continuous distributions (as in the standard joint models and their extensions […]) assume that the risks of events and longitudinal trajectories follow similar patterns for all individuals in a population (e.g., that biomarkers change linearly with age for all individuals). Although such homogeneity in patterns can be justifiable for some applications, generally this is a rather strict assumption […] A population under study may consist of subpopulations with distinct patterns of longitudinal trajectories of biomarkers that can also have different effects on the time-to-event outcome in each subpopulation. When such subpopulations can be defined on the base of observed covariate(s), one can perform stratified analyses applying different models for each subpopulation. 
However, observed covariates may not capture the entire heterogeneity in the population in which case it may be useful to conceive of the population as consisting of latent subpopulations defined by unobserved characteristics. Special methodological approaches are necessary to accommodate such hidden heterogeneity. Within the joint modeling framework, a special class of models, joint latent class models, was developed to account for such heterogeneity […] The joint latent class model has three components. First, it is assumed that a population consists of a fixed number of (latent) subpopulations. The latent class indicator represents the latent class membership and the probability of belonging to the latent class is specified by a multinomial logistic regression function of observed covariates. It is assumed that individuals from different latent classes have different patterns of longitudinal trajectories of biomarkers and different risks of event. The key assumption of the model is conditional independence of the biomarker and the time-to-events given the latent classes. Then the class-specific models for the longitudinal and time-to-event outcomes constitute the second and third component of the model thus completing its specification. […] the latent class stochastic process model […] provides a useful tool for dealing with unobserved heterogeneity in joint analyses of longitudinal and time-to-event outcomes and taking into account hidden components of aging in their joint influence on health and longevity. This approach is also helpful for sensitivity analyses in applications of the original stochastic process model. We recommend starting the analyses with the original stochastic process model and estimating the model ignoring possible hidden heterogeneity in the population. 
Then the latent class stochastic process model can be applied to test hypotheses about the presence of hidden heterogeneity in the data in order to appropriately adjust the conclusions if a latent structure is revealed.”
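The class-membership component described above (multinomial logistic regression of latent-class probability on observed covariates) can be made concrete with a few lines. This is a generic sketch of mine; all coefficient values and covariates are invented, not the book's specification.

```python
import numpy as np

def class_probabilities(covariates, coefs):
    """Multinomial-logistic probabilities of latent-class membership.
    Class 0 is the reference class; `coefs` has one row of
    (intercept, slope, ...) per non-reference class."""
    x = np.concatenate(([1.0], covariates))      # prepend intercept term
    logits = np.concatenate(([0.0], coefs @ x))  # reference logit fixed at 0
    expl = np.exp(logits - logits.max())         # numerically stable softmax
    return expl / expl.sum()

# Two hypothetical covariates (say, sex and years of education), three classes:
coefs = np.array([[0.2, -0.5, 0.1],
                  [-1.0, 0.3, 0.4]])
p = class_probabilities(np.array([1.0, 12.0]), coefs)
print(p, p.sum())  # probabilities over the three latent classes, summing to 1
```

Given these membership probabilities, the class-specific longitudinal and survival sub-models then complete the joint latent class model, per the quoted description.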

“The longitudinal genetic-demographic model (or the genetic-demographic model for longitudinal data) […] combines three sources of information in the likelihood function: (1) follow-up data on survival (or, generally, on some time-to-event) for genotyped individuals; (2) (cross-sectional) information on ages at biospecimen collection for genotyped individuals; and (3) follow-up data on survival for non-genotyped individuals. […] Such joint analyses of genotyped and non-genotyped individuals can result in substantial improvements in statistical power and accuracy of estimates compared to analyses of the genotyped subsample alone if the proportion of non-genotyped participants is large. Situations in which genetic information cannot be collected for all participants of longitudinal studies are not uncommon. They can arise for several reasons: (1) the longitudinal study may have started some time before genotyping was added to the study design so that some initially participating individuals dropped out of the study (i.e., died or were lost to follow-up) by the time of genetic data collection; (2) budget constraints prohibit obtaining genetic information for the entire sample; (3) some participants refuse to provide samples for genetic analyses. Nevertheless, even when genotyped individuals constitute a majority of the sample or the entire sample, application of such an approach is still beneficial […] The genetic stochastic process model […] adds a new dimension to genetic biodemographic analyses, combining information on longitudinal measurements of biomarkers available for participants of a longitudinal study with follow-up data and genetic information.
Such joint analyses of different sources of information collected in both genotyped and non-genotyped individuals allow for more efficient use of the research potential of longitudinal data which otherwise remains underused when only genotyped individuals or only subsets of available information (e.g., only follow-up data on genotyped individuals) are involved in analyses. Similar to the longitudinal genetic-demographic model […], the benefits of combining data on genotyped and non-genotyped individuals in the genetic SPM come from the presence of common parameters describing characteristics of the model for genotyped and non-genotyped subsamples of the data. This takes into account the knowledge that the non-genotyped subsample is a mixture of carriers and non-carriers of the same alleles or genotypes represented in the genotyped subsample and applies the ideas of heterogeneity analyses […] When the non-genotyped subsample is substantially larger than the genotyped subsample, these joint analyses can lead to a noticeable increase in the power of statistical estimates of genetic parameters compared to estimates based only on information from the genotyped subsample. This approach is applicable not only to genetic data but to any discrete time-independent variable that is observed only for a subsample of individuals in a longitudinal study.”
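The way non-genotyped individuals enter the likelihood as a carrier/non-carrier mixture can be sketched in a heavily stripped-down form (my toy version: exponential survival, no censoring, known carrier frequency – none of which matches the book's actual model):

```python
import numpy as np

rng = np.random.default_rng(1)

def neg_loglik(params, t_geno, g, t_nogeno, p_carrier=0.3):
    """Negative log-likelihood combining genotyped and non-genotyped subjects.
    Carrier hazard lam_c, non-carrier hazard lam_n; genotyped subjects
    contribute their own density, non-genotyped ones a mixture density."""
    lam_c, lam_n = params
    dens = lambda lam, t: lam * np.exp(-lam * t)
    ll = np.sum(np.log(np.where(g == 1, dens(lam_c, t_geno), dens(lam_n, t_geno))))
    ll += np.sum(np.log(p_carrier * dens(lam_c, t_nogeno)
                        + (1 - p_carrier) * dens(lam_n, t_nogeno)))
    return -ll

# Simulated data: a small genotyped subsample, a larger non-genotyped one.
g = (rng.random(500) < 0.3).astype(int)
t_geno = rng.exponential(np.where(g == 1, 1 / 0.05, 1 / 0.02))
carrier2 = rng.random(2000) < 0.3
t_nogeno = rng.exponential(np.where(carrier2, 1 / 0.05, 1 / 0.02))

print(neg_loglik((0.05, 0.02), t_geno, g, t_nogeno),
      neg_loglik((0.02, 0.05), t_geno, g, t_nogeno))
```

The key point survives the simplification: the non-genotyped subsample contributes information about the same `lam_c` and `lam_n` through the mixture term, which is why adding it sharpens the genetic estimates.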

“Despite an existing tradition of interpreting differences in the shapes or parameters of the mortality rates (survival functions) resulting from the effects of exposure to different conditions or other interventions in terms of characteristics of individual aging, this practice has to be used with care. This is because such characteristics are difficult to interpret in terms of properties of external and internal processes affecting the chances of death. An important question then is: What kind of mortality model has to be developed to obtain parameters that are biologically interpretable? The purpose of this chapter is to describe an approach to mortality modeling that represents mortality rates in terms of parameters of physiological changes and declining health status accompanying the process of aging in humans. […] A traditional (demographic) description of changes in individual health/survival status is performed using a continuous-time random Markov process with a finite number of states, and age-dependent transition intensity functions (transitions rates). Transitions to the absorbing state are associated with death, and the corresponding transition intensity is a mortality rate. Although such a description characterizes connections between health and mortality, it does not allow for studying factors and mechanisms involved in the aging-related health decline. Numerous epidemiological studies provide compelling evidence that health transition rates are influenced by a number of factors. Some of them are fixed at the time of birth […]. Others experience stochastic changes over the life course […] The presence of such randomly changing influential factors violates the Markov assumption, and makes the description of aging-related changes in health status more complicated. 
[…] The age dynamics of influential factors (e.g., physiological variables) in connection with mortality risks has been described using a stochastic process model of human mortality and aging […]. Recent extensions of this model have been used in analyses of longitudinal data on aging, health, and longevity, collected in the Framingham Heart Study […] This model and its extensions are described in terms of a Markov stochastic process satisfying a diffusion-type stochastic differential equation. The stochastic process is stopped at random times associated with individuals’ deaths. […] When an individual’s health status is taken into account, the coefficients of the stochastic differential equations become dependent on values of the jumping process. This dependence violates the Markov assumption and renders the conditional Gaussian property invalid. So the description of this (continuously changing) component of aging-related changes in the body also becomes more complicated. Since studying age trajectories of physiological states in connection with changes in health status and mortality would provide more realistic scenarios for analyses of available longitudinal data, it would be a good idea to find an appropriate mathematical description of the joint evolution of these interdependent processes in aging organisms. For this purpose, we propose a comprehensive model of human aging, health, and mortality in which the Markov assumption is fulfilled by a two-component stochastic process consisting of jumping and continuously changing processes. The jumping component is used to describe relatively fast changes in health status occurring at random times, and the continuous component describes relatively slow stochastic age-related changes of individual physiological states. 
[…] The use of stochastic differential equations for random continuously changing covariates has been studied intensively in the analysis of longitudinal data […] Such a description is convenient since it captures the feedback mechanism typical of biological systems reflecting regular aging-related changes and takes into account the presence of random noise affecting individual trajectories. It also captures the dynamic connections between aging-related changes in health and physiological states, which are important in many applications.”
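The two-component construction (a continuous physiological process plus a jumping health-state process) can be illustrated with a crude Euler-Maruyama simulation. This is my own invented toy dynamic, not the authors' model: the continuous variable drifts toward a "norm" that shifts when the health state jumps, and the jump rate grows with age.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(t_max=50.0, dt=0.05):
    """Two-component sketch: continuous state x follows an SDE whose drift
    pulls it toward a norm f(h) depending on the discrete health state h;
    h jumps (healthy -> ill) at a random, age-dependent rate."""
    x, h = 0.0, 0
    a, sigma = 0.2, 0.5
    t, path = 0.0, []
    while t < t_max:
        f_h = 0.0 if h == 0 else 2.0      # the physiological 'norm' shifts after illness
        x += -a * (x - f_h) * dt + sigma * np.sqrt(dt) * rng.normal()
        if h == 0 and rng.random() < 0.02 * np.exp(0.05 * t) * dt:
            h = 1                          # health-state jump at a random age
        path.append((t, x, h))
        t += dt
    return path

path = simulate()
print(len(path), path[-1][1:])
```

A mortality hazard depending on both `x` and `h` would then stop the process at a random death time, as in the quoted description; I have left that out to keep the sketch short.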

April 23, 2017 Posted by | Biology, Books, Demographics, Genetics, Mathematics, Statistics

Biodemography of aging (III)

Latent class representation of the Grade of Membership model.
Singular value decomposition.
Affine space.
Lebesgue measure.
General linear position.

The links above are links to topics I looked up while reading the second half of the book. The first link is quite relevant to the book’s coverage as a comprehensive longitudinal Grade of Membership (-GoM) model is covered in chapter 17. Relatedly, chapter 18 covers linear latent structure (-LLS) models, and as observed in the book LLS is a generalization of GoM. As should be obvious from the nature of the links some of the stuff included in the second half of the text is highly technical, and I’ll readily admit I was not fully able to understand all the details included in the coverage of chapters 17 and 18 in particular. On account of the technical nature of the coverage in Part 2 I’m not sure I’ll cover the second half of the book in much detail, though I probably shall devote at least one more post to some of those topics, as they were quite interesting even if some of the details were difficult to follow.

I have almost finished the book at this point, and I have already decided to both give the book five stars and include it on my list of favorite books on goodreads; it’s really well written, and it provides consistently highly detailed coverage of very high quality. As I also noted in the first post about the book the authors have given readability aspects some thought, and I am sure most readers would learn quite a bit from this text even if they were to skip some of the more technical chapters. The main body of Part 2 of the book, the subtitle of which is ‘Statistical Modeling of Aging, Health, and Longevity’, is however probably in general not worth the effort of reading unless you have a solid background in statistics.

This post includes some observations and quotes from the last chapters of the book’s Part 1.

“The proportion of older adults in the U.S. population is growing. This raises important questions about the increasing prevalence of aging-related diseases, multimorbidity issues, and disability among the elderly population. […] In 2009, 46.3 million people were covered by Medicare: 38.7 million of them were aged 65 years and older, and 7.6 million were disabled […]. By 2031, when the baby-boomer generation will be completely enrolled, Medicare is expected to reach 77 million individuals […]. Because the Medicare program covers 95 % of the nation’s aged population […], the prediction of future Medicare costs based on these data can be an important source of health care planning.”

“Three essential components (which could be also referred as sub-models) need to be developed to construct a modern model of forecasting of population health and associated medical costs: (i) a model of medical cost projections conditional on each health state in the model, (ii) health state projections, and (iii) a description of the distribution of initial health states of a cohort to be projected […] In making medical cost projections, two major effects should be taken into account: the dynamics of the medical costs during the time periods comprising the date of onset of chronic diseases and the increase of medical costs during the last years of life. In this chapter, we investigate and model the first of these two effects. […] the approach developed in this chapter generalizes the approach known as “life tables with covariates” […], resulting in a new family of forecasting models with covariates such as comorbidity indexes or medical costs. In sum, this chapter develops a model of the relationships between individual cost trajectories following the onset of aging-related chronic diseases. […] The underlying methodological idea is to aggregate the health state information into a single (or several) covariate(s) that can be determinative in predicting the risk of a health event (e.g., disease incidence) and whose dynamics could be represented by the model assumptions. An advantage of such an approach is its substantial reduction of the degrees of freedom compared with existing forecasting models  (e.g., the FEM model, Goldman and RAND Corporation 2004). 
[…] We found that the time patterns of medical cost trajectories were similar for all diseases considered and can be described in terms of four components having the meanings of (i) the pre-diagnosis cost associated with initial comorbidity represented by medical expenditures, (ii) the cost peak associated with the onset of each disease, (iii) the decline/reduction in medical expenditures after the disease onset, and (iv) the difference between post- and pre-diagnosis cost levels associated with an acquired comorbidity. The description of the trajectories was formalized by a model which explicitly involves four parameters reflecting these four components.”
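The four-component cost trajectory is easy to write down as a function; the shape below follows the quoted description, but the functional form (exponential decline) and all parameter values are my own invention for illustration:

```python
import math

def cost_trajectory(m, pre=500.0, peak=8000.0, post=900.0, decline=6.0):
    """Monthly medical cost around disease onset at month m = 0, built from
    the four components: (i) pre-diagnosis level, (ii) peak at onset,
    (iii) decline after onset, (iv) a post-diagnosis level above the
    pre-diagnosis one."""
    if m < 0:
        return pre                                        # component (i)
    return post + (peak - post) * math.exp(-m / decline)  # (ii)-(iv)

print([round(cost_trajectory(m)) for m in (-6, 0, 6, 24)])
```

The long-run difference `post - pre` is the acquired-comorbidity component (iv), and it is this component that a short follow-up period will tend to underestimate.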

As I noted earlier in my coverage of the book, I don’t think the model above fully captures all relevant cost contributions of the diseases included, as the follow-up period was too short to capture all the relevant costs to be included in the part (iv) model component. This is definitely a problem in the context of diabetes. But then again nothing in theory stops people from combining the model above with other models which are better at dealing with the excess costs associated with long-term complications of chronic diseases, and the model results were intriguing even if the model likely underperforms in a few specific disease contexts.
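To make the four components concrete, here is a toy piecewise sketch of such a cost trajectory – my own illustration with invented parameter values, not the authors’ actual estimation model:

```python
import numpy as np

def cost_trajectory(months_since_onset, pre_cost, peak, post_decline_rate, acquired_delta):
    """Toy piecewise model of monthly medical costs around disease onset.

    (i)   pre_cost: pre-diagnosis cost level (initial comorbidity)
    (ii)  peak: extra cost spike at the month of onset
    (iii) post_decline_rate: how quickly the post-onset spike decays
    (iv)  acquired_delta: permanent difference between post- and
          pre-diagnosis cost levels (acquired comorbidity)
    """
    t = np.asarray(months_since_onset, dtype=float)
    costs = np.full_like(t, pre_cost)
    after = t >= 0
    # spike at onset, decaying toward the new (higher) post-diagnosis baseline
    costs[after] = (pre_cost + acquired_delta
                    + peak * np.exp(-post_decline_rate * t[after]))
    return costs

# Invented monthly costs in $, 12 months before to 24 months after onset:
t = np.arange(-12, 24)
c = cost_trajectory(t, pre_cost=500, peak=4000, post_decline_rate=0.4,
                    acquired_delta=300)
```

Note that with a short follow-up window this kind of model will, as discussed above, attribute to component (iv) only the costs that have materialized by the end of the window.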

Moving on…

“Models of medical cost projections usually are based on regression models estimated with the majority of independent predictors describing demographic status of the individual, patient’s health state, and level of functional limitations, as well as their interactions […]. If the health states needs to be described by a number of simultaneously manifested diseases, then detailed stratification over the categorized variables or use of multivariate regression models allows for a better description of the health states. However, it can result in an abundance of model parameters to be estimated. One way to overcome these difficulties is to use an approach in which the model components are demographically-based aggregated characteristics that mimic the effects of specific states. The model developed in this chapter is an example of such an approach: the use of a comorbidity index rather than of a set of correlated categorical regressor variables to represent the health state allows for an essential reduction in the degrees of freedom of the problem.”

“Unlike mortality, the onset time of chronic disease is difficult to define with high precision due to the large variety of disease-specific criteria for onset/incident case identification […] there is always some arbitrariness in defining the date of chronic disease onset, and a unified definition of date of onset is necessary for population studies with a long-term follow-up.”

“Individual age trajectories of physiological indices are the product of a complicated interplay among genetic and non-genetic (environmental, behavioral, stochastic) factors that influence the human body during the course of aging. Accordingly, they may differ substantially among individuals in a cohort. Despite this fact, the average age trajectories for the same index follow remarkable regularities. […] some indices tend to change monotonically with age: the level of blood glucose (BG) increases almost monotonically; pulse pressure (PP) increases from age 40 until age 85, then levels off and shows a tendency to decline only at later ages. The age trajectories of other indices are non-monotonic: they tend to increase first and then decline. Body mass index (BMI) increases up to about age 70 and then declines, diastolic blood pressure (DBP) increases until age 55–60 and then declines, systolic blood pressure (SBP) increases until age 75 and then declines, serum cholesterol (SCH) increases until age 50 in males and age 70 in females and then declines, ventricular rate (VR) increases until age 55 in males and age 45 in females and then declines. With small variations, these general patterns are similar in males and females. The shapes of the age-trajectories of the physiological variables also appear to be similar for different genotypes. […] The effects of these physiological indices on mortality risk were studied in Yashin et al. (2006), who found that the effects are gender and age specific. They also found that the dynamic properties of the individual age trajectories of physiological indices may differ dramatically from one individual to the next.”

“An increase in the mortality rate with age is traditionally associated with the process of aging. This influence is mediated by aging-associated changes in thousands of biological and physiological variables, some of which have been measured in aging studies. The fact that the age trajectories of some of these variables differ among individuals with short and long life spans and healthy life spans indicates that dynamic properties of the indices affect life history traits. Our analyses of the FHS data clearly demonstrate that the values of physiological indices at age 40 are significant contributors both to life span and healthy life span […] suggesting that normalizing these variables around age 40 is important for preventing age-associated morbidity and mortality later in life. […] results [also] suggest that keeping physiological indices stable over the years of life could be as important as their normalizing around age 40.”

“The results […] indicate that, in the quest of identifying longevity genes, it may be important to look for candidate genes with pleiotropic effects on more than one dynamic characteristic of the age-trajectory of a physiological variable, such as genes that may influence both the initial value of a trait (intercept) and the rates of its changes over age (slopes). […] Our results indicate that the dynamic characteristics of age-related changes in physiological variables are important predictors of morbidity and mortality risks in aging individuals. […] We showed that the initial value (intercept), the rate of changes (slope), and the variability of a physiological index, in the age interval 40–60 years, significantly influenced both mortality risk and onset of unhealthy life at ages 60+ in our analyses of the Framingham Heart Study data. That is, these dynamic characteristics may serve as good predictors of late life morbidity and mortality risks. The results also suggest that physiological changes taking place in the organism in middle life may affect longevity through promoting or preventing diseases of old age. For non-monotonically changing indices, we found that having a later age at the peak value of the index […], a lower peak value […], a slower rate of decline in the index at older ages […], and less variability in the index over time, can be beneficial for longevity. Also, the dynamic characteristics of the physiological indices were, overall, associated with mortality risk more significantly than with onset of unhealthy life.”

“Decades of studies of candidate genes show that they are not linked to aging-related traits in a straightforward manner […]. Recent genome-wide association studies (GWAS) have reached fundamentally the same conclusion by showing that the traits in late life likely are controlled by a relatively large number of common genetic variants […]. Further, GWAS often show that the detected associations are of tiny effect […] the weak effect of genes on traits in late life can be not only because they confer small risks having small penetrance but because they confer large risks but in a complex fashion […] In this chapter, we consider several examples of complex modes of gene actions, including genetic tradeoffs, antagonistic genetic effects on the same traits at different ages, and variable genetic effects on lifespan. The analyses focus on the APOE common polymorphism. […] The analyses reported in this chapter suggest that the e4 allele can be protective against cancer with a more pronounced role in men. This protective effect is more characteristic of cancers at older ages and it holds in both the parental and offspring generations of the FHS participants. Unlike cancer, the effect of the e4 allele on risks of CVD is more pronounced in women. […] [The] results […] explicitly show that the same allele can change its role on risks of CVD in an antagonistic fashion from detrimental in women with onsets at younger ages to protective in women with onsets at older ages. […] e4 allele carriers have worse survival compared to non-e4 carriers in each cohort. […] Sex stratification shows sexual dimorphism in the effect of the e4 allele on survival […] with the e4 female carriers, particularly, being more exposed to worse survival. […] The results of these analyses provide two important insights into the role of genes in lifespan. First, they provide evidence on the key role of aging-related processes in genetic susceptibility to lifespan. 
For example, taking into account the specifics of aging-related processes gains 18 % in estimates of the RRs and five orders of magnitude in significance in the same sample of women […] without additional investments in increasing sample sizes and new genotyping. The second is that a detailed study of the role of aging-related processes in estimates of the effects of genes on lifespan (and healthspan) helps in detecting more homogeneous [high risk] sub-samples”.

“The aging of populations in developed countries requires effective strategies to extend healthspan. A promising solution could be to yield insights into the genetic predispositions for endophenotypes, diseases, well-being, and survival. It was thought that genome-wide association studies (GWAS) would be a major breakthrough in this endeavor. Various genetic association studies including GWAS assume that there should be a deterministic (unconditional) genetic component in such complex phenotypes. However, the idea of unconditional contributions of genes to these phenotypes faces serious difficulties which stem from the lack of direct evolutionary selection against or in favor of such phenotypes. In fact, evolutionary constraints imply that genes should be linked to age-related phenotypes in a complex manner through different mechanisms specific for given periods of life. Accordingly, the linkage between genes and these traits should be strongly modulated by age-related processes in a changing environment, i.e., by the individuals’ life course. The inherent sensitivity of genetic mechanisms of complex health traits to the life course will be a key concern as long as genetic discoveries continue to be aimed at improving human health.”

“Despite the common understanding that age is a risk factor of not just one but a large portion of human diseases in late life, each specific disease is typically considered as a stand-alone trait. Independence of diseases was a plausible hypothesis in the era of infectious diseases caused by different strains of microbes. Unlike those diseases, the exact etiology and precursors of diseases in late life are still elusive. It is clear, however, that the origin of these diseases differs from that of infectious diseases and that age-related diseases reflect a complicated interplay among ontogenetic changes, senescence processes, and damages from exposures to environmental hazards. Studies of the determinants of diseases in late life provide insights into a number of risk factors, apart from age, that are common for the development of many health pathologies. The presence of such common risk factors makes chronic diseases and hence risks of their occurrence interdependent. This means that the results of many calculations using the assumption of disease independence should be used with care. Chapter 4 argued that disregarding potential dependence among diseases may seriously bias estimates of potential gains in life expectancy attributable to the control or elimination of a specific disease and that the results of the process of coping with a specific disease will depend on the disease elimination strategy, which may affect mortality risks from other diseases.”

April 17, 2017 Posted by | Biology, Books, Cancer/oncology, Demographics, Economics, Epidemiology, Genetics, Medicine, Statistics | Leave a comment

Biodemography of aging (I)

“The goal of this monograph is to show how questions about the connections between and among aging, health, and longevity can be addressed using the wealth of available accumulated knowledge in the field, the large volumes of genetic and non-genetic data collected in longitudinal studies, and advanced biodemographic models and analytic methods. […] This monograph visualizes aging-related changes in physiological variables and survival probabilities, describes methods, and summarizes the results of analyses of longitudinal data on aging, health, and longevity in humans performed by the group of researchers in the Biodemography of Aging Research Unit (BARU) at Duke University during the past decade. […] the focus of this monograph is studying dynamic relationships between aging, health, and longevity characteristics […] our focus on biodemography/biomedical demography meant that we needed to have an interdisciplinary and multidisciplinary biodemographic perspective spanning the fields of actuarial science, biology, economics, epidemiology, genetics, health services research, mathematics, probability, and statistics, among others.”

The quotes above are from the book’s preface. In case this aspect was not clear from the comments above, this is the kind of book where you’ll randomly encounter sentences like these:

“The simplest model describing negative correlations between competing risks is the multivariate lognormal frailty model. We illustrate the properties of such model for the bivariate case.”

“The time-to-event sub-model specifies the latent class-specific expressions for the hazard rates conditional on the vector of biomarkers Yt and the vector of observed covariates X …”

…which means that some parts of the book are really hard to blog; it simply takes more effort to deal with this stuff here than it’s worth. As a result, my coverage of the book will not provide a remotely ‘balanced view’ of the topics covered in it; I’ll skip a lot of the technical stuff, because I don’t think it makes much sense to cover the specific models and algorithms included in the book in detail here. However, I should also emphasize while on this topic that although the book is in general not an easy read, it’s hard to read because ‘this stuff is complicated’, not because the authors are not trying. The authors in fact make it clear already in the preface that some chapters are easier to read than others, and that some chapters are deliberately written as ‘guideposts and way-stations’, as they put it, in order to make it easier for the reader to find the stuff in which he or she is most interested (“the interested reader can focus directly on the chapters/sections of greatest interest without having to read the entire volume“) – they have definitely given readability aspects some thought, and I very much like the book so far; it’s full of great stuff and it’s very well written.

I have had occasion to question a few of the observations they’ve made; for example, I was a bit skeptical about a few of the conclusions they drew in chapter 6 (‘Medical Cost Trajectories and Onset of Age-Associated Diseases’), but this was related to what some would certainly consider to be minor details. In the chapter they describe a model of medical cost trajectories where the post-diagnosis follow-up period is 20 months; this is in my view much too short a follow-up period to draw conclusions about medical cost trajectories in the context of type 2 diabetes, one of the diseases included in the model – I know this because I’m intimately familiar with the literature on that topic. You need to look 7-10 years ahead to get a proper sense of how this variable develops over time, and it really is highly relevant to include those later years: if you do not, you may miss out on a large proportion of the total cost, given that a substantial proportion of the total cost of diabetes relates to complications which tend to take some years to develop. If your cost analysis is based on a follow-up period as short as that of this model, you may also, on a related note, draw faulty conclusions about which medical procedures and subsidies are sensible/cost-effective in the setting of these patients, because highly adherent patients may be significantly more expensive in a short-run analysis like this one (they show up to their medical appointments and take their medications…) but much cheaper in the long run (…because they take their medications they don’t go blind or develop kidney failure). But as I say, it’s a minor point – this was one condition out of 20 included in the analysis they present, and if they’d addressed all the things that pedants like me might take issue with, the book would be twice as long and it would likely no longer be readable.
Relatedly, the model they discuss in that chapter is far from unsalvageable; it’s just that one of the components of interest – ‘the difference between post- and pre-diagnosis cost levels associated with an acquired comorbidity’ – in the case of at least one disease is highly unlikely to be correct (given the authors’ interpretation of the variable), because there’s some stuff of relevance which the model does not include. I found the model quite interesting, despite the shortcomings, and the results were definitely surprising. (No, the above does not in my opinion count as an example of coverage of a ‘specific model […] in detail’. Or maybe it does, but I included no equations. On reflection I probably can’t promise much more than that, sometimes the details are interesting…)

Anyway, below I’ve added some quotes from the first few chapters of the book and a few remarks along the way.

“The genetics of aging, longevity, and mortality has become the subject of intensive analyses […]. However, most estimates of genetic effects on longevity in GWAS have not reached genome-wide statistical significance (after applying the Bonferroni correction for multiple testing) and many findings remain non-replicated. Possible reasons for slow progress in this field include the lack of a biologically-based conceptual framework that would drive development of statistical models and methods for genetic analyses of data [here I was reminded of Burnham & Anderson’s coverage, in particular their criticism of mindless ‘Let the computer find out’-strategies – the authors of that chapter seem to share their skepticism…], the presence of hidden genetic heterogeneity, the collective influence of many genetic factors (each with small effects), the effects of rare alleles, and epigenetic effects, as well as molecular biological mechanisms regulating cellular functions. […] Decades of studies of candidate genes show that they are not linked to aging-related traits in a straightforward fashion (Finch and Tanzi 1997; Martin 2007). Recent genome-wide association studies (GWAS) have supported this finding by showing that the traits in late life are likely controlled by a relatively large number of common genetic variants […]. Further, GWAS often show that the detected associations are of tiny size (Stranger et al. 2011).”

I think this ties in well with what I’ve previously read on these and related topics – see e.g. the second-last paragraph quoted in my coverage of Richard Alexander’s book, or some of the remarks included in Roberts et al. Anyway, moving on:
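As an aside on the ‘genome-wide statistical significance’ threshold mentioned in the quote above: it is just a Bonferroni correction of the usual 0.05 level for roughly a million independent tests, which is where the conventional 5×10⁻⁸ cutoff comes from. A minimal sketch (the exact number of tests varies by study; one million is the conventional round figure):

```python
alpha = 0.05
n_tests = 1_000_000  # conventional round figure for independent common variants tested
genome_wide_threshold = alpha / n_tests  # the familiar 5e-8 cutoff

def is_genome_wide_significant(p_value, n_tests=1_000_000, alpha=0.05):
    """Bonferroni-corrected significance check of the kind used in GWAS."""
    return p_value < alpha / n_tests
```

With tiny per-variant effect sizes, very few associations clear such a stringent threshold unless sample sizes are enormous – which is consistent with the authors’ remarks about slow progress and non-replication.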

“It is well known from epidemiology that values of variables describing physiological states at a given age are associated with human morbidity and mortality risks. Much less well known are the facts that not only the values of these variables at a given age, but also characteristics of their dynamic behavior during the life course are also associated with health and survival outcomes. This chapter [chapter 8 in the book, US] shows that, for monotonically changing variables, the value at age 40 (intercept), the rate of change (slope), and the variability of a physiological variable, at ages 40–60, significantly influence both health-span and longevity after age 60. For non-monotonically changing variables, the age at maximum, the maximum value, the rate of decline after reaching the maximum (right slope), and the variability in the variable over the life course may influence health-span and longevity. This indicates that such characteristics can be important targets for preventive measures aiming to postpone onsets of complex diseases and increase longevity.”
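The three dynamic characteristics mentioned – the value at age 40 (intercept), the rate of change (slope), and the variability – can be extracted from an individual’s measurements with an ordinary least-squares fit. Below is a minimal sketch on simulated data; all numbers are invented for illustration and are not from the FHS:

```python
import numpy as np

def trajectory_features(ages, values):
    """Summarize an individual's physiological trajectory over ages 40-60 by
    three characteristics: value at age 40 (intercept), rate of change
    (slope), and residual variability around the linear trend."""
    ages = np.asarray(ages, dtype=float)
    values = np.asarray(values, dtype=float)
    # least-squares line, parameterized so the intercept is the value at age 40
    slope, intercept_at_40 = np.polyfit(ages - 40, values, 1)
    residuals = values - (intercept_at_40 + slope * (ages - 40))
    variability = residuals.std(ddof=1)
    return intercept_at_40, slope, variability

# Simulated blood-glucose-like measurements at ages 40, 42, ..., 60
# (true intercept 85, true slope 0.6 per year, noise sd 2 -- all invented):
rng = np.random.default_rng(0)
ages = np.arange(40, 61, 2)
bg = 85 + 0.6 * (ages - 40) + rng.normal(0, 2, ages.size)
intercept, slope, var = trajectory_features(ages, bg)
```

The chapter's point is then that these fitted quantities, computed per individual, serve as predictors of health-span and longevity after age 60.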

The chapter from which the quotes in the next two paragraphs are taken was completely filled with data from the Framingham Heart Study, and it was hard for me to know what to include here and what to leave out – so you should probably just consider the stuff I’ve included below as samples of the sort of observations included in that part of the coverage.

“To mediate the influence of internal or external factors on lifespan, physiological variables have to show associations with risks of disease and death at different age intervals, or directly with lifespan. For many physiological variables, such associations have been established in epidemiological studies. These include body mass index (BMI), diastolic blood pressure (DBP), systolic blood pressure (SBP), pulse pressure (PP), blood glucose (BG), serum cholesterol (SCH), hematocrit (H), and ventricular rate (VR). […] the connection between BMI and mortality risk is generally J-shaped […] Although all age patterns of physiological indices are non-monotonic functions of age, blood glucose (BG) and pulse pressure (PP) can be well approximated by monotonically increasing functions for both genders. […] the average values of body mass index (BMI) increase with age (up to age 55 for males and 65 for females), and then decline for both sexes. These values do not change much between ages 50 and 70 for males and between ages 60 and 70 for females. […] Except for blood glucose, all average age trajectories of physiological indices differ between males and females. Statistical analysis confirms the significance of these differences. In particular, after age 35 the female BMI increases faster than that of males. […] [When comparing women with less than or equal to 11 years of education [‘LE’] to women with 12 or more years of education [HE]:] The average values of BG for both groups are about the same until age 45. Then the BG curve for the LE females becomes higher than that of the HE females until age 85 where the curves intersect. […] The average values of BMI in the LE group are substantially higher than those among the HE group over the entire age interval. […] The average values of BG for the HE and LE males are very similar […] However, the differences between groups are much smaller than for females.”

They also in the chapter compared individuals with short life-spans [‘SL’, died before the age of 75] and those with long life-spans [‘LL’, 100 longest-living individuals in the relevant sample] to see if the variables/trajectories looked different. They did, for example: “trajectories for the LL females are substantially different from those for the SL females in all eight indices. Specifically, the average values of BG are higher and increase faster in the SL females. The entire age trajectory of BMI for the LL females is shifted to the right […] The average values of DBP [diastolic blood pressure, US] among the SL females are higher […] A particularly notable observation is the shift of the entire age trajectory of BMI for the LL males and females to the right (towards an older age), as compared with the SL group, and achieving its maximum at a later age. Such a pattern is markedly different from that for healthy and unhealthy individuals. The latter is mostly characterized by the higher values of BMI for the unhealthy people, while it has similar ages at maximum for both the healthy and unhealthy groups. […] Physiological aging changes usually develop in the presence of other factors affecting physiological dynamics and morbidity/mortality risks. Among these other factors are year of birth, gender, education, income, occupation, smoking, and alcohol use. An important limitation of most longitudinal studies is the lack of information regarding external disturbances affecting individuals in their day-to-day life.”

I incidentally noted while I was reading that chapter that a relevant variable ‘lurking in the shadows’ in the context of the male and female BMI trajectories might be changing smoking habits over time; I have not looked at US data on this topic, but I do know that the smoking patterns of Danish males and females during the latter half of the last century were markedly different and changed quite dramatically in just a few decades; a lot more males than females smoked in the 1960s, whereas the proportions of male and female smokers today are much more similar, because a lot of males have given up smoking (I refer Danish readers to this blog post which I wrote some years ago on these topics). The authors of the chapter incidentally do look a little at data on smokers, and they observe that smokers’ BMI is lower than that of non-smokers (not surprising), and that the smokers’ BMI curve (displaying the relationship between BMI and age) grows at a slower rate than that of non-smokers (that this was to be expected is perhaps less clear, at least to me – the authors don’t interpret these specific numbers, they just report them).

The next chapter is one of the chapters in the book dealing with the SEER data I also mentioned not long ago in the context of my coverage of Bueno et al. Some sample quotes from that chapter below:

“To better address the challenge of “healthy aging” and to reduce economic burdens of aging-related diseases, key factors driving the onset and progression of diseases in older adults must be identified and evaluated. An identification of disease-specific age patterns with sufficient precision requires large databases that include various age-specific population groups. Collections of such datasets are costly and require long periods of time. That is why few studies have investigated disease-specific age patterns among older U.S. adults and there is limited knowledge of factors impacting these patterns. […] Information collected in U.S. Medicare Files of Service Use (MFSU) for the entire Medicare-eligible population of older U.S. adults can serve as an example of observational administrative data that can be used for analysis of disease-specific age patterns. […] In this chapter, we focus on a series of epidemiologic and biodemographic characteristics that can be studied using MFSU.”

“Two datasets capable of generating national level estimates for older U.S. adults are the Surveillance, Epidemiology, and End Results (SEER) Registry data linked to MFSU (SEER-M) and the National Long Term Care Survey (NLTCS), also linked to MFSU (NLTCS-M). […] The SEER-M data are the primary dataset analyzed in this chapter. The expanded SEER registry covers approximately 26 % of the U.S. population. In total, the Medicare records for 2,154,598 individuals are available in SEER-M […] For the majority of persons, we have continuous records of Medicare services use from 1991 (or from the time the person reached age 65 after 1990) to his/her death. […] The NLTCS-M data contain two of the six waves of the NLTCS: namely, the cohorts of years 1994 and 1999. […] In total, 34,077 individuals were followed-up between 1994 and 1999. These individuals were given the detailed NLTCS interview […] which has information on risk factors. More than 200 variables were selected”

In short, these data sets are very large, and contain a lot of information. Here are some results/data:

“Among studied diseases, incidence rates of Alzheimer’s disease, stroke, and heart failure increased with age, while the rates of lung and breast cancers, angina pectoris, diabetes, asthma, emphysema, arthritis, and goiter became lower at advanced ages. […] Several types of age-patterns of disease incidence could be described. The first was a monotonic increase until age 85–95, with a subsequent slowing down, leveling off, and decline at age 100. This pattern was observed for myocardial infarction, stroke, heart failure, ulcer, and Alzheimer’s disease. The second type had an earlier-age maximum and a more symmetric shape (i.e., an inverted U-shape) which was observed for lung and colon cancers, Parkinson’s disease, and renal failure. The majority of diseases (e.g., prostate cancer, asthma, and diabetes mellitus among them) demonstrated a third shape: a monotonic decline with age or a decline after a short period of increased rates. […] The occurrence of age-patterns with a maximum and, especially, with a monotonic decline contradicts the hypothesis that the risk of geriatric diseases correlates with an accumulation of adverse health events […]. Two processes could be operative in the generation of such shapes. First, they could be attributed to the effect of selection […] when frail individuals do not survive to advanced ages. This approach is popular in cancer modeling […] The second explanation could be related to the possibility of under-diagnosis of certain chronic diseases at advanced ages (due to both less pronounced disease symptoms and infrequent doctor’s office visits); however, that possibility cannot be assessed with the available data […this is because the data sets are based on Medicare claims – US]”
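The three shapes described in the quote can be told apart with a crude heuristic based on where the rate peaks over the observed age range; the incidence figures below are invented purely for illustration, not taken from the SEER-M data:

```python
def classify_age_pattern(rates):
    """Crudely label an age pattern of incidence rates as one of the three
    shapes described above. Toy heuristic based on where the maximum falls
    within the observed age range."""
    peak = rates.index(max(rates))
    last = len(rates) - 1
    if peak == 0:
        return "declining"     # e.g. asthma, diabetes at ages 65+
    if peak >= 0.8 * last:
        return "increasing"    # e.g. heart failure, Alzheimer's disease
    return "inverted-U"        # e.g. lung cancer, Parkinson's disease

# Invented incidence rates (per 1000) at ages 65, 70, ..., 100:
heart_failure = [10, 14, 20, 28, 38, 45, 48, 47]
lung_cancer   = [3, 5, 7, 8, 7, 5, 3, 2]
asthma        = [6, 5, 4, 3, 3, 2, 2, 1]
```

Real analyses would of course fit smooth hazard models rather than eyeball the raw maximum, but the taxonomy is the same.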

“The most detailed U.S. data on cancer incidence come from the SEER Registry […] about 60 % of malignancies are diagnosed in persons aged 65+ years old […] In the U.S., the estimated percent of cancer patients alive after being diagnosed with cancer (in 2008, by current age) was 13 % for those aged 65–69, 25 % for ages 70–79, and 22 % for ages 80+ years old (compared with 40 % of those aged younger than 65 years old) […] Diabetes affects about 21 % of the U.S. population aged 65+ years old (McDonald et al. 2009). However, while more is known about the prevalence of diabetes, the incidence of this disease among older adults is less studied. […] [In multiple previous studies] the incidence rates of diabetes decreased with age for both males and females. In the present study, we find similar patterns […] The prevalence of asthma among the U.S. population aged 65+ years old in the mid-2000s was as high as 7 % […] older patients are more likely to be underdiagnosed, untreated, and hospitalized due to asthma than individuals younger than age 65 […] asthma incidence rates have been shown to decrease with age […] This trend of declining asthma incidence with age is in agreement with our results.”

“The prevalence and incidence of Alzheimer’s disease increase exponentially with age, with the most notable rise occurring through the seventh and eighth decades of life (Reitz et al. 2011). […] whereas dementia incidence continues to increase beyond age 85, the rate of increase slows down [which] suggests that dementia diagnosed at advanced ages might be related not to the aging process per se, but associated with age-related risk factors […] Approximately 1–2 % of the population aged 65+ and up to 3–5 % aged 85+ years old suffer from Parkinson’s disease […] There are few studies of Parkinson’s disease incidence, especially in the oldest old, and its age patterns at advanced ages remain controversial”.

“One disadvantage of large administrative databases is that certain factors can produce systematic over/underestimation of the number of diagnosed diseases or of identification of the age at disease onset. One reason for such uncertainties is an incorrect date of disease onset. Other sources are latent disenrollment and the effects of study design. […] the date of onset of a certain chronic disease is a quantity which is not defined as precisely as mortality. This uncertainty makes difficult the construction of a unified definition of the date of onset appropriate for population studies.”

“[W]e investigated the phenomenon of multimorbidity in the U.S. elderly population by analyzing mutual dependence in disease risks, i.e., we calculated disease risks for individuals with specific pre-existing conditions […]. In total, 420 pairs of diseases were analyzed. […] For each pair, we calculated age patterns of unconditional incidence rates of the diseases, conditional rates of the second (later manifested) disease for individuals after onset of the first (earlier manifested) disease, and the hazard ratio of development of the subsequent disease in the presence (or not) of the first disease. […] three groups of interrelations were identified: (i) diseases whose risk became much higher when patients had a certain pre-existing (earlier diagnosed) disease; (ii) diseases whose risk became lower than in the general population when patients had certain pre-existing conditions […] and (iii) diseases for which “two-tail” effects were observed: i.e., when the effects are significant for both orders of disease precedence; both effects can be direct (either one of the diseases from a disease pair increases the risk of the other disease), inverse (either one of the diseases from a disease pair decreases the risk of the other disease), or controversial (one disease increases the risk of the other, but the other disease decreases the risk of the first disease from the disease pair). In general, the majority of disease pairs with increased risk of the later diagnosed disease in both orders of precedence were those in which both the pre-existing and later occurring diseases were cancers, and also when both diseases were of the same organ. […] Generally, the effect of dependence between risks of two diseases diminishes with advancing age. […] Identifying mutual relationships in age-associated disease risks is extremely important since they indicate that development of […] diseases may involve common biological mechanisms.”
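In its crudest form, the conditional-risk comparison described above boils down to an incidence rate ratio for the later disease given presence versus absence of the earlier one. A hedged sketch with invented counts follows – the chapter’s actual analyses estimate age-specific conditional rates and hazard ratios, not this simple pooled ratio:

```python
def rate_ratio(cases_exposed, py_exposed, cases_unexposed, py_unexposed):
    """Incidence rate ratio of a later disease B for individuals with
    vs. without a pre-existing disease A (person-year denominators).
    A crude stand-in for the hazard ratios estimated in the chapter."""
    rate_with = cases_exposed / py_exposed
    rate_without = cases_unexposed / py_unexposed
    return rate_with / rate_without

# Invented counts: 120 onsets of B in 10,000 person-years among people
# with prior A, vs. 300 onsets in 50,000 person-years without prior A:
rr = rate_ratio(120, 10_000, 300, 50_000)
```

A ratio above 1 would put the pair in group (i) above, below 1 in group (ii); computing it for both orders of precedence identifies the ‘two-tail’ pairs of group (iii).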

“in population cohorts, trends in prevalence result from combinations of trends in incidence, population at risk, recovery, and patients’ survival rates. Trends in the rates for one disease also may depend on trends in concurrent diseases, e.g., increasing survival from CHD contributes to an increase in the cancer incidence rate if the individuals who survived were initially susceptible to both diseases.”
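
The point about prevalence trends being a composite outcome is easy to illustrate with a minimal discrete-time sketch (my own formulation with made-up rates, not a model from the book): each year a fraction of the disease-free population develops the disease, while some prevalent cases recover and some die.

```python
# Minimal discrete-time sketch of how prevalence emerges from incidence,
# recovery, and survival (illustrative rates, not empirical ones).

def step(prevalent, population, incidence, recovery, mortality):
    new_cases = incidence * (population - prevalent)
    resolved = (recovery + mortality) * prevalent
    return prevalent + new_cases - resolved

# With fixed rates, prevalence approaches a steady state where inflow
# (new cases) balances outflow (recoveries plus deaths).
p = 0.0
for _ in range(200):
    p = step(p, 1.0, incidence=0.01, recovery=0.02, mortality=0.03)
print(round(p, 4))  # ~0.01/(0.01+0.02+0.03) = 0.1667

# Lower mortality (better survival), same incidence: prevalence rises.
q = 0.0
for _ in range(200):
    q = step(q, 1.0, incidence=0.01, recovery=0.02, mortality=0.01)
print(round(q, 4))  # approaches 0.01/(0.01+0.02+0.01) = 0.25
```

Note how improving survival alone raises steady-state prevalence even though incidence is unchanged – one of the combinations of trends the quote describes.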

March 1, 2017 Posted by | Biology, Books, Cancer/oncology, Cardiology, Demographics, Diabetes, Epidemiology, Genetics, Medicine, Nephrology, Neurology | Leave a comment

The Ageing Immune System and Health (II)

Here’s the first post about the book. I finished it a while ago, but I recently realized that I had not completed my intended coverage of it here on the blog, and as some of the book’s material sort-of-kind-of relates to material encountered in a book I’m currently reading (Biodemography of Aging) I decided I might as well finish my coverage now, in order to review some things I might have forgotten in the meantime. It’s a nice book with some interesting observations, but as I also pointed out in my first post it is definitely not an easy read. Below I have included some observations from the book’s second half.


“The aged lung is characterised by airspace enlargement similar to, but not identical with acquired emphysema [4]. Such tissue damage is detected even in non-smokers above 50 years of age as the septa of the lung alveoli are destroyed and the enlarged alveolar structures result in a decreased surface for gas exchange […] Additional problems are that surfactant production decreases with age [6] increasing the effort needed to expand the lungs during inhalation in the already reduced thoracic cavity volume where the weakened muscles are unable to thoroughly ventilate. […] As ageing is associated with respiratory muscle strength reduction, coughing becomes difficult making it progressively challenging to eliminate inhaled particles, pollens, microbes, etc. Additionally, ciliary beat frequency (CBF) slows down with age impairing the lungs’ first line of defence: mucociliary clearance [9] as the cilia can no longer repel invading microorganisms and particles. Consequently e.g. bacteria can more easily colonise the airways leading to infections that are frequent in the pulmonary tract of the older adult.”

“With age there are dramatic changes in neutrophil function, including reduced chemotaxis, phagocytosis and bactericidal mechanisms […] reduced bactericidal function will predispose to infection but the reduced chemotaxis also has consequences for lung tissue as this results in increased tissue bystander damage from neutrophil elastases released during migration […] It is currently accepted that alterations in pulmonary PPAR profile, more precisely loss of PPARγ activity, can lead to inflammation, allergy, asthma, COPD, emphysema, fibrosis, and cancer […]. Since it has been reported that PPARγ activity decreases with age, this provides a possible explanation for the increasing incidence of these lung diseases and conditions in older individuals [6].”


“Age is an important risk factor for cancer and subjects aged over 60 also have a higher risk of comorbidities. Approximately 50 % of neoplasms occur in patients older than 70 years […] a major concern for poor prognosis is with cancer patients over 70–75 years. These patients have a lower functional reserve, a higher risk of toxicity after chemotherapy, and an increased risk of infection and renal complications that lead to a poor quality of life. […] [Whereas] there is a difference in organs with higher cancer incidence in developed versus developing countries [,] incidence increases with ageing almost irrespective of country […] The findings from Surveillance, Epidemiology and End Results Program [SEER – incidentally I likely shall at some point discuss this one in much more detail, as the aforementioned biodemography textbook covers this data in a lot of detail. – US] [6] show that almost a third of all cancer are diagnosed after the age of 75 years and 70 % of cancer-related deaths occur after the age of 65 years. […] The traditional clinical trial focus is on younger and healthier patient, i.e. with few or no co-morbidities. These restrictions have resulted in a lack of data about the optimal treatment for older patients [7] and a poor evidence base for therapeutic decisions. […] In the older patient, neutropenia, anemia, mucositis, cardiomyopathy and neuropathy — the toxic effects of chemotherapy — are more pronounced […] The correction of comorbidities and malnutrition can lead to greater safety in the prescription of chemotherapy […] Immunosenescence is a general classification for changes occurring in the immune system during the ageing process, as the distribution and function of cells involved in innate and adaptive immunity are impaired or remodelled […] Immunosenescence is considered a major contributor to cancer development in aged individuals”.

Neurodegenerative diseases:

“Dementia and age-related vision loss are major causes of disability in our ageing population and it is estimated that a third of people aged over 75 are affected. […] age is the largest risk factor for the development of neurodegenerative diseases […] older patients with comorbidities such as atherosclerosis, type II diabetes or those suffering from repeated or chronic systemic bacterial and viral infections show earlier onset and progression of clinical symptoms […] analysis of post-mortem brain tissue from healthy older individuals has provided evidence that the presence of misfolded proteins alone does not correlate with cognitive decline and dementia, implying that additional factors are critical for neural dysfunction. We now know that innate immune genes and life-style contribute to the onset and progression of age-related neuronal dysfunction, suggesting that chronic activation of the immune system plays a key role in the underlying mechanisms that lead to irreversible tissue damage in the CNS. […] Collectively these studies provide evidence for a critical role of inflammation in the pathogenesis of a range of neurodegenerative diseases, but the factors that drive or initiate inflammation remain largely elusive.”

“The effect of infection, mimicked experimentally by administration of bacterial lipopolysaccharide (LPS) has revealed that immune to brain communication is a critical component of a host organism’s response to infection and a collection of behavioural and metabolic adaptations are initiated over the course of the infection with the purpose of restricting the spread of a pathogen, optimising conditions for a successful immune response and preventing the spread of infection to other organisms [10]. These behaviours are mediated by an innate immune response and have been termed ‘sickness behaviours’ and include depression, reduced appetite, anhedonia, social withdrawal, reduced locomotor activity, hyperalgesia, reduced motivation, cognitive impairment and reduced memory encoding and recall […]. Metabolic adaptation to infection include fever, altered dietary intake and reduction in the bioavailability of nutrients that may facilitate the growth of a pathogen such as iron and zinc [10]. These behavioural and metabolic adaptions are evolutionary highly conserved and also occur in humans”.

“Sickness behaviour and transient microglial activation are beneficial for individuals with a normal, healthy CNS, but in the ageing or diseased brain the response to peripheral infection can be detrimental and increases the rate of cognitive decline. Aged rodents exhibit exaggerated sickness and prolonged neuroinflammation in response to systemic infection […] Older people who contract a bacterial or viral infection or experience trauma postoperatively, also show exaggerated neuroinflammatory responses and are prone to develop delirium, a condition which results in a severe short term cognitive decline and a long term decline in brain function […] Collectively these studies demonstrate that peripheral inflammation can increase the accumulation of two neuropathological hallmarks of AD, further strengthening the hypothesis that inflammation i[s] involved in the underlying pathology. […] Studies from our own laboratory have shown that AD patients with mild cognitive impairment show a fivefold increased rate of cognitive decline when contracting a systemic urinary tract or respiratory tract infection […] Apart from bacterial infection, chronic viral infections have also been linked to increased incidence of neurodegeneration, including cytomegalovirus (CMV). This virus is ubiquitously distributed in the human population, and along with other age-related diseases such as cardiovascular disease and cancer, has been associated with increased risk of developing vascular dementia and AD [66, 67].”


“Frailty is associated with changes to the immune system, importantly the presence of a pro-inflammatory environment and changes to both the innate and adaptive immune system. Some of these changes have been demonstrated to be present before the clinical features of frailty are apparent suggesting the presence of potentially modifiable mechanistic pathways. To date, exercise programme interventions have shown promise in the reversal of frailty and related physical characteristics, but there is no current evidence for successful pharmacological intervention in frailty. […] In practice, acute illness in a frail person results in a disproportionate change in a frail person’s functional ability when faced with a relatively minor physiological stressor, associated with a prolonged recovery time […] Specialist hospital services such as surgery [15], hip fractures [16] and oncology [17] have now begun to recognise frailty as an important predictor of mortality and morbidity.”

I should probably mention here that this is another area where there’s an overlap between this book and the biodemography text I’m currently reading; chapter 7 of the latter text is about ‘Indices of Cumulative Deficits’ and covers this kind of stuff in a lot more detail than does this one, including e.g. detailed coverage of relevant statistical properties of one such index. Anyway, back to the coverage:

“Population based studies have demonstrated that the incidence of infection and subsequent mortality is higher in populations of frail people. […] The prevalence of pneumonia in a nursing home population is 30 times higher than the general population [39, 40]. […] The limited data available demonstrates that frailty is associated with a state of chronic inflammation. There is also evidence that inflammageing predates a diagnosis of frailty suggesting a causative role. […] A small number of studies have demonstrated a dysregulation of the innate immune system in frailty. Frail adults have raised white cell and neutrophil count. […] High white cell count can predict frailty at a ten year follow up [70]. […] A recent meta-analysis and four individual systematic reviews have found beneficial evidence of exercise programmes on selected physical and functional ability […] exercise interventions may have no positive effect in operationally defined frail individuals. […] To date there is no clear evidence that pharmacological interventions improve or ameliorate frailty.”


“[A]s we get older the time and intensity at which we exercise is severely reduced. Physical inactivity now accounts for a considerable proportion of age-related disease and mortality. […] Regular exercise has been shown to improve neutrophil microbicidal functions which reduce the risk of infectious disease. Exercise participation is also associated with increased immune cell telomere length, and may be related to improved vaccine responses. The anti-inflammatory effect of regular exercise and negative energy balance is evident by reduced inflammatory immune cell signatures and lower inflammatory cytokine concentrations. […] Reduced physical activity is associated with a positive energy balance leading to increased adiposity and subsequently systemic inflammation [5]. […] Elevated neutrophil counts accompany increased inflammation with age and the increased ratio of neutrophils to lymphocytes is associated with many age-related diseases including cancer [7]. Compared to more active individuals, less active and overweight individuals have higher circulating neutrophil counts [8]. […] little is known about the intensity, duration and type of exercise which can provide benefits to neutrophil function. […] it remains unclear whether exercise and physical activity can override the effects of NK cell dysfunction in the old. […] A considerable number of studies have assessed the effects of acute and chronic exercise on measures of T-cell immunesenescence including T cell subsets, phenotype, proliferation, cytokine production, chemotaxis, and co-stimulatory capacity. […] Taken together exercise appears to promote an anti-inflammatory response which is mediated by altered adipocyte function and improved energy metabolism leading to suppression of pro-inflammatory cytokine production in immune cells.”

February 24, 2017 Posted by | Biology, Books, Cancer/oncology, Epidemiology, Immunology, Medicine, Neurology | Leave a comment

Rocks: A very short introduction

I liked the book. Below I have added some sample observations from the book, as well as a collection of links to various topics covered/mentioned in the book.

“To make a variety of rocks, there needs to be a variety of minerals. The Earth has shown a capacity for making an increasing variety of minerals throughout its existence. Life has helped in this [but] [e]ven a dead planet […] can evolve a fine array of minerals and rocks. This is done simply by stretching out the composition of the original homogeneous magma. […] Such stretching of composition would have happened as the magma ocean of the earliest […] Earth cooled and began to solidify at the surface, forming the first crust of this new planet — and the starting point, one might say, of our planet’s rock cycle. When magma cools sufficiently to start to solidify, the first crystals that form do not have the same composition as the overall magma. In a magma of ‘primordial Earth’ type, the first common mineral to form was probably olivine, an iron-and-magnesium-rich silicate. This is a dense mineral, and so it tends to sink. As a consequence the remaining magma becomes richer in elements such as calcium and aluminium. From this, at temperatures of around 1,000°C, the mineral plagioclase feldspar would then crystallize, in a calcium-rich variety termed anorthite. This mineral, being significantly less dense than olivine, would tend to rise to the top of the cooling magma. On the Moon, itself cooling and solidifying after its fiery birth, layers of anorthite crystals several kilometres thick built up as the rock — anorthosite — of that body’s primordial crust. This anorthosite now forms the Moon’s ancient highlands, subsequently pulverized by countless meteorite impacts. This rock type can be found on Earth, too, particularly within ancient terrains. […] Was the Earth’s first surface rock also anorthosite? 
Probably—but we do not know for sure, as the Earth, a thoroughly active planet throughout its existence, has consumed and obliterated nearly all of the crust that formed in the first several hundred million years of its existence, in a mysterious interval of time that we now call the Hadean Eon. […] The earliest rocks that we know of date from the succeeding Archean Eon.”

“Where plates are pulled apart, then pressure is released at depth, above the ever-opening tectonic rift, for instance beneath the mid-ocean ridge that runs down the centre of the Atlantic Ocean. The pressure release from this crustal stretching triggers decompression melting in the rocks at depth. These deep rocks — peridotite — are dense, being rich in the iron- and magnesium-bearing mineral olivine. Heated to the point at which melting just begins, so that the melt fraction makes up only a few percentage points of the total, those melt droplets are enriched in silica and aluminium relative to the original peridotite. The melt will have a composition such that, when it cools and crystallizes, it will largely be made up of crystals of plagioclase feldspar together with pyroxene. Add a little more silica and quartz begins to appear. With less silica, olivine crystallizes instead of quartz.

The resulting rock is basalt. If there was anything like a universal rock of rocky planet surfaces, it is basalt. On Earth it makes up almost all of the ocean floor bedrock — in other words, the ocean crust, that is, the surface layer, some 10 km thick. Below, there is a boundary called the Mohorovičič Discontinuity (or ‘Moho’ for short)[…]. The Moho separates the crust from the dense peridotitic mantle rock that makes up the bulk of the lithosphere. […] Basalt makes up most of the surface of Venus, Mercury, and Mars […]. On the Moon, the ‘mare’ (‘seas’) are not of water but of basalt. Basalt, or something like it, will certainly be present in large amounts on the surfaces of rocky exoplanets, once we are able to bring them into close enough focus to work out their geology. […] At any one time, ocean floor basalts are the most common rock type on our planet’s surface. But any individual piece of ocean floor is, geologically, only temporary. It is the fate of almost all ocean crust — islands, plateaux, and all — to be destroyed within ocean trenches, sliding down into the Earth along subduction zones, to be recycled within the mantle. From that destruction […] there arise the rocks that make up the most durable component of the Earth’s surface: the continents.”

“Basaltic magmas are a common starting point for many other kinds of igneous rocks, through the mechanism of fractional crystallization […]. Remove the early-formed crystals from the melt, and the remaining melt will evolve chemically, usually in the direction of increasing proportions of silica and aluminium, and decreasing amounts of iron and magnesium. These magmas will therefore produce intermediate rocks such as andesites and diorites in the finely and coarsely crystalline varieties, respectively; and then more evolved silica-rich rocks such as rhyolites (fine), microgranites (medium), and granites (coarse). […] Granites themselves can evolve a little further, especially at the late stages of crystallization of large bodies of granite magma. The final magmas are often water-rich ones that contain many of the incompatible elements (such as thorium, uranium, and lithium), so called because they are difficult to fit within the molecular frameworks of the common igneous minerals. From these final ‘sweated-out’ magmas there can crystallize a coarsely crystalline rock known as pegmatite — famous because it contains a wide variety of minerals (of the ~4,500 minerals officially recognized on Earth […] some 500 have been recognized in pegmatites).”
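
The mass balance behind this kind of chemical evolution of a melt can be sketched with the standard Rayleigh fractionation equation, C_melt = C0 * F^(D-1) – the equation is textbook petrology rather than something quoted from the book, and the partition coefficients below are purely illustrative:

```python
# Rayleigh fractional crystallization: C_melt = C0 * F**(D - 1), where F is
# the fraction of melt remaining and D the bulk crystal/melt partition
# coefficient for an element. (Standard petrology relation; the D values
# below are illustrative, not taken from the book.)

def residual_melt_concentration(c0, f_melt, d_bulk):
    return c0 * f_melt ** (d_bulk - 1.0)

# Compatible elements (D > 1, e.g. Mg going into olivine) are stripped
# from the melt as crystals are removed; incompatible ones (D < 1, e.g.
# Th, U, Li) become ever more concentrated in the residual liquid.
for f in (1.0, 0.5, 0.1, 0.01):
    compatible   = residual_melt_concentration(1.0, f, d_bulk=5.0)
    incompatible = residual_melt_concentration(1.0, f, d_bulk=0.01)
    print(f, round(compatible, 4), round(incompatible, 2))
```

With D well above 1 the element is stripped from the melt almost immediately; with D well below 1 it is enriched roughly in proportion to 1/F, which is why incompatible elements such as thorium, uranium, and lithium pile up in the last few per cent of residual, pegmatite-forming liquid.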

“The less oxygen there is [at the area of deposition], the more the organic matter is preserved into the rock record, and it is where the seawater itself, by the sea floor, has little or no oxygen that some of the great carbon stores form. As animals cannot live in these conditions, organic-rich mud can accumulate quietly and undisturbed, layer by layer, here and there entombing the skeleton of some larger planktonic organism that has fallen in from the sunlit, oxygenated waters high above. It is these kinds of sediments that […] generate[d] the oil and gas that currently power our civilization. […] If sedimentary layers have not been buried too deeply, they can remain as soft muds or loose sands for millions of years — sometimes even for hundreds of millions of years. However, most buried sedimentary layers, sooner or later, harden and turn into rock, under the combined effects of increasing heat and pressure (as they become buried ever deeper under subsequent layers of sediment) and of changes in chemical environment. […] As rocks become buried ever deeper, they become progressively changed. At some stage, they begin to change their character and depart from the condition of sedimentary strata. At this point, usually beginning several kilometres below the surface, buried igneous rocks begin to transform too. The process of metamorphism has started, and may progress until those original strata become quite unrecognizable.”

“Frozen water is a mineral, and this mineral can make up a rock, both on Earth and, very commonly, on distant planets, moons, and comets […]. On Earth today, there are large deposits of ice strata on the cold polar regions of Antarctica and Greenland, with smaller amounts in mountain glaciers […]. These ice strata, the compressed remains of annual snowfalls, have simply piled up, one above the other, over time; on Antarctica, they reach almost 5 km in thickness and at their base are about a million years old. […] The ice cannot pile up for ever, however: as the pressure builds up it begins to behave plastically and to slowly flow downslope, eventually melting or, on reaching the sea, breaking off as icebergs. As the ice mass moves, it scrapes away at the underlying rock and soil, shearing these together to form a mixed deposit of mud, sand, pebbles, and characteristic striated (ice-scratched) cobbles and boulders […] termed a glacial till. Glacial tills, if found in the ancient rock record (where, hardened, they are referred to as tillites), are a sure clue to the former presence of ice.”

“At first approximation, the mantle is made of solid rock and is not […] a seething mass of magma that the fragile crust threatens to founder into. This solidity is maintained despite temperatures that, towards the base of the mantle, are of the order of 3,000°C — temperatures that would very easily melt rock at the surface. It is the immense pressures deep in the Earth, increasing more or less in step with temperature, that keep the mantle rock in solid form. In more detail, the solid rock of the mantle may include greater or lesser (but usually lesser) amounts of melted material, which locally can gather to produce magma chambers […] Nevertheless, the mantle rock is not solid in the sense that we might imagine at the surface: it is mobile, and much of it is slowly moving plastically, taking long journeys that, over many millions of years, may encompass the entire thickness of the mantle (the kinds of speeds estimated are comparable to those at which tectonic plates move, of a few centimetres a year). These are the movements that drive plate tectonics and that, in turn, are driven by the variation in temperature (and therefore density) from the contact region with the hot core, to the cooler regions of the upper mantle.”
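
A quick sanity check (my arithmetic, not the book’s) of how those centimetres-per-year speeds square with journeys spanning the whole mantle:

```python
# Back-of-the-envelope: at plate-tectonic speeds of a few cm/yr, how long
# does it take material to traverse the ~2,900 km thickness of the mantle?

mantle_thickness_km = 2_900
speed_cm_per_yr = 3.0

years = mantle_thickness_km * 1e5 / speed_cm_per_yr  # 1 km = 1e5 cm
print(f"{years / 1e6:.0f} million years")  # about 97 million years
```

I.e. on the order of a hundred million years for a single crossing – nicely consistent with ‘long journeys that, over many millions of years, may encompass the entire thickness of the mantle’.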

“The outer core will not transmit certain types of seismic waves, which indicates that it is molten. […] Even farther into the interior, at the heart of the Earth, this metal magma becomes rock once more, albeit a rock that is mostly crystalline iron and nickel. However, it was not always so. The core used to be liquid throughout and then, some time ago, it began to crystallize into iron-nickel rock. Quite when this happened has been widely debated, with estimates ranging from over three billion years ago to about half a billion years ago. The inner core has now grown to something like 2,400 km across. Even allowing for the huge spans of geological time involved, this implies estimated rates of solidification that are impressive in real time — of some thousands of tons of molten metal crystallizing into solid form per second.”
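
That closing solidification-rate claim is easy to check with rough numbers (the density value and the timing assumptions below are mine, not the book’s):

```python
# Rough check of the "thousands of tons per second" figure for inner-core
# growth (my own assumed density and age range, not the book's numbers).
import math

radius_m = 2_400e3 / 2   # inner core ~2,400 km across
density = 12_900.0       # kg/m^3, rough mean inner-core density

volume = (4.0 / 3.0) * math.pi * radius_m ** 3
mass_kg = density * volume  # total mass crystallized so far

seconds_per_year = 3.156e7
for age_gyr in (3.0, 1.0, 0.5):  # the quoted range of age estimates
    rate_tons_per_s = mass_kg / (age_gyr * 1e9 * seconds_per_year) / 1000.0
    print(f"{age_gyr} Gyr old -> {rate_tons_per_s:,.0f} t/s")
```

Depending on whether you spread the crystallization over three billion or half a billion years, the rate comes out at very roughly 1,000–6,000 tonnes per second – i.e. ‘some thousands of tons of molten metal crystallizing into solid form per second’, as the quote says.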

“Rocks are made out of minerals, and those minerals are not a constant of the universe. A little like biological organisms, they have evolved and diversified through time. As the minerals have evolved, so have the rocks that they make up. […] The pattern of evolution of minerals was vividly outlined by Robert Hazen and his colleagues in what is now a classic paper published in 2008. They noted that in the depths of outer space, interstellar dust, as analysed by the astronomers’ spectroscopes, seems to be built of only about a dozen minerals […] Their component elements were forged in supernova explosions, and these minerals condensed among the matter and radiation that streamed out from these stellar outbursts. […] the number of minerals on the new Earth [shortly after formation was] about 500 (while the smaller, largely dry Moon has about 350). Plate tectonics began, with its attendant processes of subduction, mountain building, and metamorphism. The number of minerals rose to about 1,500 on a planet that may still have been biologically dead. […] The origin and spread of life at first did little to increase the number of mineral species, but once oxygen-producing photosynthesis started, then there was a great leap in mineral diversity as, for each mineral, various forms of oxide and hydroxide could crystallize. After this step, about two and a half billion years ago, there were over 4,000 minerals, most of them vanishingly rare. Since then, there may have been a slight increase in their numbers, associated with such events as the appearance and radiation of metazoan animals and plants […] Humans have begun to modify the chemistry and mineralogy of the Earth’s surface, and this has included the manufacture of many new types of mineral. […] Human-made minerals are produced in laboratories and factories around the world, with many new forms appearing every year. 
[…] Materials sciences databases now being compiled suggest that more than 50,000 solid, inorganic, crystalline species have been created in the laboratory.”

Some links of interest:

Rock. Presolar grains. Silicate minerals. Silicon–oxygen tetrahedron. Quartz. Olivine. Feldspar. Mica. Jean-Baptiste Biot. Meteoritics. Achondrite/Chondrite/Chondrule. Carbonaceous chondrite. Iron–nickel alloy. Widmanstätten pattern. Giant-impact hypothesis (in the book this is not framed as a hypothesis nor is it explicitly referred to as the GIH; it’s just taken to be the correct account of what happened back then – US). Alfred Wegener. Arthur Holmes. Plate tectonics. Lithosphere. Asthenosphere. Fractional Melting (couldn’t find a wiki link about this exact topic; the MIT link is quite technical – sorry). Hotspot (geology). Fractional crystallization. Metastability. Devitrification. Porphyry (geology). Phenocryst. Thin section. Neptunism. Pyroclastic flow. Ignimbrite. Pumice. Igneous rock. Sedimentary rock. Weathering. Slab (geology). Clay minerals. Conglomerate (geology). Breccia. Aeolian processes. Hummocky cross-stratification. Ralph Alger Bagnold. Montmorillonite. Limestone. Ooid. Carbonate platform. Turbidite. Desert varnish. Evaporite. Law of Superposition. Stratigraphy. Pressure solution. Compaction (geology). Recrystallization (geology). Cleavage (geology). Phyllite. Aluminosilicate. Gneiss. Rock cycle. Ultramafic rock. Serpentinite. Pressure-Temperature-time paths. Hornfels. Impactite. Ophiolite. Xenolith. Kimberlite. Transition zone (Earth). Mantle convection. Mantle plume. Core–mantle boundary. Post-perovskite. Earth’s inner core. Inge Lehmann. Stromatolites. Banded iron formations. Microbial mat. Quorum sensing. Cambrian explosion. Bioturbation. Biostratigraphy. Coral reef. Radiolaria. Carbonate compensation depth. Paleosol. Bone bed. Coprolite. Allan Hills 84001. Tharsis. Pedestal crater. Mineraloid. Concrete.

February 19, 2017 Posted by | Biology, Books, Geology | Leave a comment

The Biology of Moral Systems (I)

I have quoted from the book before, but I decided that this book deserves to be blogged in more detail. I’m close to finishing the book at this point (it’s definitely taken longer than it should have), and I’ll probably give it 5 stars on goodreads; I might also add it to my list of favourite books on the site. In this post I’ve added some quotes and ideas from the book, and a few comments. Before going any further I should note that it’s frankly impossible to cover anywhere near all the ideas covered in the book here on the blog, so if you’re even remotely interested in these kinds of things you really should pick up a copy of the book and read all of it.

“I believe that something crucial has been missing from all of the great debates of history, among philosophers, politicians, theologians, and thinkers from other and diverse backgrounds, on the issues of morality, ethics, justice, right and wrong. […] those who have tried to analyze morality have failed to treat the human traits that underlie moral behavior as outcomes of evolution […] for many conflicts of interest, compromises and enforceable contracts represent the only real solutions. Appeals to morality, I will argue, are simply the invoking of such compromises and contracts in particular ways. […] the process of natural selection that has given rise to all forms of life, including humans, operates such that success has always been relative. One consequence is that organisms resulting from the long-term cumulative effects of selection are expected to resist efforts to reveal their interests fully to others, and also efforts to place limits on their striving or to decide for them when their interests are being “fully” satisfied. These are all reasons why we should expect no “terminus” – ever – to debates on moral and ethical issues.” (these comments I also included in the quotes post to which I link at the beginning, but I thought it was worth including them in this post as well even so – US).

“I am convinced that biology can never offer […] easy or direct answers to the questions of what is right and wrong. I explicitly reject the attitude that whatever biology tells us is so is also what ought to be (David Hume’s so-called “naturalistic fallacy”) […] there are within biology no magic solutions to moral problems. […] Knowledge of the human background in organic evolution can [however] provide a deeper self-understanding by an increasing proportion of the world’s population; self-understanding that I believe can contribute to answering the serious questions of social living.”

“If there had been no recent discoveries in biology that provided new ways of looking at the concept of moral systems, then I would be optimistic indeed to believe that I could say much that is new. But there have been such discoveries. […] The central point in these writings [Hamilton, Williams, Trivers, Cavalli-Sforza, Feldman, Dawkins, Wilson, etc. – US] […] is that natural selection has apparently been maximizing the survival by reproduction of genes, as they have been defined by evolutionists, and that, with respect to the activities of individuals, this includes effects on copies of their genes, even copies located in other individuals. In other words, we are evidently evolved not only to aid the genetic materials in our own bodies, by creating and assisting descendants, but also to assist, by nepotism, copies of our genes that reside in collateral (nondescendant) relatives. […] ethics, morality, human conduct, and the human psyche are to be understood only if societies are seen as collections of individuals seeking their own self-interests […] In some respects these ideas run contrary to what people have believed and been taught about morality and human values: I suspect that nearly all humans believe it is a normal part of the functioning of every human individual now and then to assist someone else in the realization of that person’s own interests to the actual net expense of those of the altruist. What [the above-mentioned writings] tells us is that, despite our intuitions, there is not a shred of evidence to support this view of beneficence, and a great deal of convincing theory suggests that any such view will eventually be judged false. This implies that we will have to start all over again to describe and understand ourselves, in terms alien to our intuitions […] It is […] a goal of this book to contribute to this redescription and new understanding, and especially to discuss why our intuitions should have misinformed us.”

“Social behavior evolves as a succession of ploys and counterploys, and for humans these ploys are used, not only among individuals within social groups, but between and among small and large groups of up to hundreds of millions of individuals. The value of an evolutionary approach to human sociality is thus not to determine the limits of our actions so that we can abide by them. Rather, it is to examine our life strategies so that we can change them when we wish, as a result of understanding them. […] my use of the word biology in no way implies that moral systems have some kind of explicit genetic background, are genetically determined, or cannot be altered by adjusting the social environment. […] I mean simply to suggest that if we wish to understand those aspects of our behavior commonly regarded as involving morality or ethics, it will help to reconsider our behavior as a product of evolution by natural selection. The principal reason for this suggestion is that natural selection operates according to general principles which make its effects highly predictive, even with respect to traits and circumstances that have not yet been analyzed […] I am interested […] not in determining what is moral and immoral, in the sense of what people ought to be doing, but in elucidating the natural history of ethics and morality – in discovering how and why humans initiated and developed the ideas we have about right and wrong.”

I should perhaps mention here that sort-of-kind-of related stuff is covered in Aureli et al. (see e.g. this link), and that some parts of that book will probably make you understand Alexander's ideas a lot better, even though he didn't read those specific authors – mainly because it gets a lot easier to imagine the sort of mechanisms which might be at play here if you've read this sort of literature. Here's one relevant quote from my coverage of that book, which also deals with the question Alexander discusses above, and in a lot more detail throughout his book, namely where our morality comes from:

“we make two fundamental assertions regarding the evolution of morality: (1) there are specific types of behavior demonstrated by both human and nonhuman primates that hint at a shared evolutionary background to morality; and (2) there are theoretical and actual connections between morality and conflict resolution in both nonhuman primates and human development. […] the transition from nonmoral or premoral to moral is more gradual than commonly assumed. No magic point appears in either evolutionary history or human development at which morality suddenly comes into existence. In both early childhood and in animals closely related to us, we can recognize behaviors (and, in the case of children, judgments) that are essential building blocks of the morality of the human adult. […] the decision making and emotions underlying moral judgments are generated within the individual rather than being simply imposed by society. They are a product of evolution, an integrated part of the human genetic makeup, that makes the child construct a moral perspective through interactions with other members of its species. […] Much research has shown that children acquire morality through a social-cognitive process; children make connections between acts and consequences. Through a gradual process, children develop concepts of justice, fairness, and equality, and they apply these concepts to concrete everyday situations […] we assert that emotions such as empathy and sympathy provide an experiential basis by which children construct moral judgments. Emotional reactions from others, such as distress or crying, provide experiential information that children use to judge whether an act is right or wrong […] when a child hits another child, a crying response provides emotional information about the nature of the act, and this information enables the child, in part, to determine whether and why the transgression is wrong. 
Therefore, recognizing signs of distress in another person may be a basic requirement of the moral judgment process. The fact that responses to distress in another have been documented both in infancy and in the nonhuman primate literature provides initial support for the idea that these types of moral-like experiences are common to children and nonhuman primates.”

Alexander’s coverage is quite different from that found in Aureli et al., but some of the contributors to the latter work deal with similar questions to the ones in which he’s interested, using approaches not employed in Alexander’s book – so this is another place to look if you’re interested in these topics. Ullmann-Margalit’s The Emergence of Norms is also worth mentioning. Part of the reason why I mention these books here is incidentally that they’re not talked about in Alexander’s coverage (for very natural reasons, I should add, in the case of the former book at least; Natural Conflict Resolution was published more than a decade after Alexander wrote his book…).

“In the hierarchy of explanatory principles governing the traits of living organisms, evolutionary reductionism – the development of principles from the evolutionary process – tends to subsume all other kinds. Proximate-cause reductionism (or reduction by dissection) sometimes advances our understanding of the whole phenomena. […] When evolutionary reduction becomes trivial in the study of life it is for a reason different from incompleteness; rather, it is because the breadth of the generalization distances it too significantly from the particular problem that may be at hand. […] the greatest weakness of reduction by generalization is not that it is likely to be trivial but that errors are probable through unjustified leaps from hypothesis to conclusion […] Critics such as Gould and Lewontin […] do not discuss the facts that (a) all students of human behavior (not just those who take evolution into account) run the risk of leaping unwarrantedly from hypothesis to conclusion and (b) just-so stories were no less prevalent and hypothesis-testing no more prevalent in studies of human behavior before evolutionary biologists began to participate. […] I believe that failure by biologists and others to distinguish proximate- or partial-cause and evolutionary- or ultimate-cause reductionism […] is in some part responsible for the current chasm between the social and the biological sciences and the resistance to so-called biological approaches to understanding humans. […] Both approaches are essential to progress in biology and the social sciences, and it would be helpful if their relationship, and that of their respective practitioners, were not seen as adversarial.”

(Relatedly, love is motivationally prior to sugar. This one also seems relevant, though in a different way).

“Humans are not accustomed to dealing with their own strategies of life as if they had been tuned by natural selection. […] People are not generally aware of what their lifetimes have been evolved to accomplish, and, even if they are roughly aware of this, they do not easily accept that their everyday activities are in any sense means to that end. […] The theory of lifetimes most widely accepted among biologists is that individuals have evolved to maximize the likelihood of survival of not themselves, but their genes, and that they do this by reproducing and tending in various ways offspring and other carriers of their own genes […] In this theory, survival of the individual – and its growth, development, and learning – are proximate mechanisms of reproductive success, which is a proximate mechanism of genic survival. Only the genes have evolved to survive. […] To say that we are evolved to serve the interests of our genes in no way suggests that we are obliged to serve them. […] Evolution is surely most deterministic for those still unaware of it. If this argument is correct, it may be the first to carry us from is to ought, i.e., if we desire to be the conscious masters of our own fates, and if conscious effort in that direction is the most likely vehicle of survival and happiness, then we ought to study evolution.”

“People are sometimes comfortable with the notion that certain activities can be labeled as “purely cultural” because they also believe that there are behaviors that can be labeled “purely genetic.” Neither is true: the environment contributes to the expression of all behaviors, and culture is best described as part of the environment.”

“Happiness and its anticipation are […] proximate mechanisms that lead us to perform and repeat acts that in the environments of history, at least, would have led to greater reproductive success.”

“The remarkable difference between the patterns of senescence in semelparous (one-time breeding) and iteroparous (repeat-breeding) organisms is probably one of the best simple demonstrations of the central significance of reproduction in the individual’s lifetime. How, otherwise, could we explain the fact that those who reproduce but once, like salmon and soybeans, tend to die suddenly right afterward, while those like ourselves who have residual reproductive possibilities after the initial reproductive act decline or senesce gradually? […] once an organism has completed all possibilities of reproducing (through both offspring production and assistance, and helping other relatives), then selection can no longer affect its survival: any physiological or other breakdown that destroys it may persist and even spread if it is genetically linked to a trait that is expressed earlier and is reproductively beneficial. […] selection continually works against senescence, but is just never able to defeat it entirely. […] senescence leads to a generalized deterioration rather than one owing to a single effect or a few effects […] In the course of working against senescence, selection will tend to remove, one by one, the most frequent sources of mortality as a result of senescence. Whenever a single cause of mortality, such as a particular malfunction of any vital organ, becomes the predominant cause of mortality, then selection will more effectively reduce the significance of that particular defect (meaning those who lack it will outreproduce) until some other achieves greater relative significance. […] the result will be that all organs and systems will tend to deteriorate together. 
[…] The point is that as we age, and as senescence proceeds, large numbers of potential sources of mortality tend to lurk ever more malevolently just “below the surface,” so that, unfortunately, the odds are very high against any dramatic lengthening of the maximum human lifetime through technology. […] natural selection maximizes the likelihood of genetic survival, which is incompatible with eliminating senescence. […] Senescence, and the finiteness of lifetimes, have evolved as incidental effects […] Organisms compete for genetic survival and the winners (in evolutionary terms) are those who sacrifice their phenotypes (selves) earlier when this results in greater reproduction.”

“altruism appears to diminish with decreasing degree of relatedness in sexual species whenever it is studied – in humans as well as nonhuman species”

October 5, 2016 Posted by | Anthropology, Biology, Books, Evolutionary biology, Genetics, Philosophy |


I recently read Nick Middleton’s short publication on this topic and decided it was worth blogging here. I gave the publication 3 stars on goodreads; you can read my goodreads review here.

In this post I’ll quote a bit from the book and add some details I thought were interesting.

“None of [the] approaches to desert definition is foolproof. All have their advantages and drawbacks. However, each approach delivers […] global map[s] of deserts and semi-deserts that [are] broadly similar […] Roughly, deserts cover about one-quarter of our planet’s land area, and semi-deserts another quarter.”

“High temperatures and a paucity of rainfall are two aspects of climate that many people routinely associate with deserts […] However, desert climates also embrace other extremes. Many arid zones experience freezing temperatures and snowfall is commonplace, particularly in those situated outside the tropics. […] For much of the time, desert skies are cloud-free, meaning deserts receive larger amounts of sunshine than any other natural environment. […] Most of the water vapour in the world’s atmosphere is supplied by evaporation from the oceans, so the more remote a location is from this source the more likely it is that any moisture in the air will have been lost by precipitation before it reaches continental interiors. The deserts of Central Asia illustrate this principle well: most of the moisture in the air is lost before it reaches the heart of the continent […] A clear distinction can be made between deserts in continental interiors and those on their coastal margins when it comes to the range of temperatures experienced. Oceans tend to exert a moderating influence on temperature, reducing extremes, so the greatest ranges of temperature are found far from the sea while coastal deserts experience a much more limited range. […] Freezing temperatures occur particularly in the mid-latitude deserts, but by no means exclusively so. […] snowfall occurs at the Algerian oasis towns of Ouagla and Ghardaia, in the northern Sahara, as often as once every 10 years on average.”

“[One] characteristic of rainfall in deserts is its variability from year to year which in many respects makes annual average statistics seem like nonsense. A very arid desert area may go for several years with no rain at all […]. It may then receive a whole ‘average’ year’s rainfall in just one storm […] Rainfall in deserts is also typically very variable in space as well as time. Hence, desert rainfall is frequently described as being ‘spotty’. This spottiness occurs because desert storms are often convective, raining in a relatively small area, perhaps just a few kilometres across. […] Climates can vary over a wide range of spatial scales […] Changes in temperature, wind, relative humidity, and other elements of climate can be detected over short distances, and this variability on a small scale creates distinctive climates in small areas. These are microclimates, different in some way from the conditions prevailing over the surrounding area as a whole. At the smallest scale, the shade given by an individual plant can be described as a microclimate. Over larger distances, the surface temperature of the sand in a dune will frequently be significantly different from a nearby dry salt lake because of the different properties of the two types of surface. […] Microclimates are important because they exert a critical control over all sorts of phenomena. These include areas suitable for plant and animal communities to develop, the ways in which rocks are broken down, and the speed at which these processes occur.”

“The level of temperature prevailing when precipitation occurs is important for an area’s water balance and its degree of aridity. A rainy season that occurs during the warm summer months, when evaporation is greatest, makes for a climate that is more arid than if precipitation is distributed more evenly throughout the year.”

“The extremely arid conditions of today[‘s Sahara Desert] have prevailed for only a few thousand years. There is lots of evidence to suggest that the Sahara was lush, nearly completely covered with grasses and shrubs, with many lakes that supported antelope, giraffe, elephant, hippopotamus, crocodile, and human populations in regions that today have almost no measurable precipitation. This ‘African Humid Period’ began around 15,000 years ago and came to an end around 10,000 years later. […] Globally, at the height of the most recent glacial period some 18,000 years ago, almost 50% of the land area between 30°N and 30°S was covered by two vast belts of sand, often called ‘sand seas’. Today, about 10% of this area is covered by sand seas. […] Around one-third of the Arabian subcontinent is covered by sandy deserts”.

“Much of the drainage in deserts is internal, as in Central Asia. Their rivers never reach the sea, but take water to interior basins. […] Salt is a common constituent of desert soils. The generally low levels of rainfall means that salts are seldom washed away through soils and therefore tend to accumulate in certain parts of the landscape. Large amounts of common salt (sodium chloride, or halite), which is very soluble in water, are found in some hyper-arid deserts.”

“Many deserts are very rich in rare and unique species thanks to their evolution in relative geographical isolation. Many of these plants and animals have adapted in remarkable ways to deal with the aridity and extremes of temperature. Indeed, some of these adaptations contribute to the apparent lifelessness of deserts simply because a good way to avoid some of the harsh conditions is to hide. Some small creatures spend hot days burrowed beneath the soil surface. In a similar way, certain desert plants spend most of the year and much of their lives dormant, as seeds waiting for the right conditions, brought on by a burst of rainfall. Given that desert rainstorms can be very variable in time and in space, many activities in the desert ecosystem occur only sporadically, as pulses of activity driven by the occasional cloudburst. […] The general scarcity of water is the most important, though by no means the only, environmental challenge faced by desert organisms. Limited supplies of food and nutrients, friable soils, high levels of solar radiation, high daytime temperatures, and the large diurnal temperature range are other challenges posed by desert conditions. These conditions are not always distributed evenly across a desert landscape, and the existence of more benign microenvironments is particularly important for desert plants and animals. Patches of terrain that are more biologically productive than their surroundings occur in even the most arid desert, geographical patterns caused by many factors, not only the simple availability of water.”

A small side note here: the book includes brief coverage of things like crassulacean acid metabolism and related topics covered in much more detail in Beer et al. I’m not going to go into that stuff here, as in my opinion it was much better covered in the latter book (some people might disagree, but even they would have to admit that the coverage in Beer et al. is much more comprehensive than Middleton’s coverage in this book). There are quite a few other topics in the book which I did not cover here in the post, but I mention this one in particular because I thought it was a good example of how this book really is just a very brief introduction; you could write book chapters, if not books, about some of the topics Middleton devotes a couple of paragraphs to, which is to be expected given the nature and range of coverage of the publication.

Plants aren’t ‘smart’ given any conventional definition of the word, but as I’ve talked about before here on the blog (e.g. here) when you look closer at the way they grow and ‘behave’ over the very long term, some of the things they do are actually at the very least ‘not really all that stupid’:

“The seeds of annuals germinate only when enough water is available to support the entire life cycle. Germinating after just a brief shower could be fatal, so mechanisms have developed for seeds to respond solely when sufficient water is available. Seeds germinate only when their protective seed coats have been broken down, allowing water to enter the seed and growth to begin. The seed coats of many desert species contain chemicals that repel water. These compounds are washed away by large amounts of water, but a short shower will not generate enough to remove all the water-repelling chemicals. Other species have very thick seed coats that are gradually worn away physically by abrasion as moving water knocks the seeds against stones and pebbles.”

What about animals? One thing I learned from this publication is that it turns out that being a mammal will, all else equal, definitely not give you a competitive edge in a hot desert environment:

“The need to conserve water is important to all creatures that live in hot deserts, but for mammals it is particularly crucial. In all environments mammals typically maintain a core body temperature of around 37–38°C, and those inhabiting most non-desert regions face the challenge of keeping their body temperature above the temperature of their environmental surrounds. In hot deserts, where environmental temperatures substantially exceed the body temperature on a regular basis, mammals face the reverse challenge. The only mechanism that will move heat out of an animal’s body against a temperature gradient is the evaporation of water, so maintenance of the core body temperature requires use of the resource that is by definition scarce in drylands.”

Humans? What about them?

“Certain aspects of a traditional mobile lifestyle have changed significantly for some groups of nomadic peoples. Herders in the Gobi desert in Mongolia pursue a way of life that in many ways has changed little since the times of the greatest of all nomadic leaders, Chinggis Khan, 750 years ago. They herd the same animals, eat the same foods, wear the same clothes, and still live in round felt-covered tents, traditional dwellings known in Mongolian as gers. Yet many gers now have a set of solar panels on the roof that powers a car battery, allowing an electric light to extend the day inside the tent. Some also have a television set.” (these remarks incidentally somehow reminded me of this brilliant Gary Larson cartoon)

“People have constructed dams to manage water resources in arid regions for thousands of years. One of the oldest was the Marib dam in Yemen, built about 3,000 years ago. Although this structure was designed to control water from flash floods, rather than for storage, the diverted flow was used to irrigate cropland. […] Although groundwater has been exploited for desert farmland using hand-dug underground channels for a very long time, the discovery of reserves of groundwater much deeper below some deserts has led to agricultural use on much larger scales in recent times. These deep groundwater reserves tend to be non-renewable, having built up during previous climatic periods of greater rainfall. Use of this fossil water has in many areas resulted in its rapid depletion.”

“Significant human impacts are thought to have a very long history in some deserts. One possible explanation for the paucity of rainfall in the interior of Australia is that early humans severely modified the landscape through their use of fire. Aboriginal people have used fire extensively in Central Australia for more than 20,000 years, particularly as an aid to hunting, but also for many other purposes, from clearing passages to producing smoke signals and promoting the growth of preferred plants. The theory suggests that regular burning converted the semi-arid zone’s mosaic of trees, shrubs, and grassland into the desert scrub seen today. This gradual change in the vegetation could have resulted in less moisture from plants reaching the atmosphere and hence the long-term desertification of the continent.” (I had never heard about this theory before, and so I of course have no idea if it’s correct or not – but it’s an interesting idea).

A few wikipedia links of interest:
Karakum Canal.
Atacama Desert.
Salar de Uyuni.
Taklamakan Desert.
Dust Bowl.
Namib Desert.

August 27, 2016 Posted by | Anthropology, Biology, Books, Botany, Ecology, Engineering, Geography, Zoology |

Human Drug Metabolism (I)

“It has been said that if a drug has no side effects, then it is unlikely to work. Drug therapy labours under the fundamental problem that usually every single cell in the body has to be treated just to exert a beneficial effect on a small group of cells, perhaps in one tissue. Although drug-targeting technology is improving rapidly, most of us who take an oral dose are still faced with the problem that the vast majority of our cells are being unnecessarily exposed to an agent that at best will have no effect, but at worst will exert many unwanted effects. Essentially, all drug treatment is really a compromise between positive and negative effects in the patient. […] This book is intended to provide a basic grounding in human drug metabolism, although it is useful if the reader has some knowledge of biochemistry, physiology and pharmacology from other sources. In addition, a qualitative understanding of chemistry can illuminate many facets of drug metabolism and toxicity. Although chemistry can be intimidating, I have tried to make the chemical aspects of drug metabolism as user-friendly as possible.”

I’m currently reading this book. To say that it is ‘useful if the reader has some knowledge’ of the topics mentioned is putting it mildly; I’d say such knowledge is mandatory – my advice would be to stay far away from this book if you know nothing of pharmacology, biochemistry, and physiology. I know enough to follow most of the coverage, at least in terms of the big-picture stuff, but some of the biochemistry details I frankly have been unable to follow; I could probably understand all of it if I were willing to look up every unfamiliar word and concept along the way, but I’m not willing to spend the time to do that. It should also be mentioned that the book is very well written, in the sense that it is perfectly possible to read it and follow the basic outline of what’s going on without necessarily understanding all the details, so I don’t feel that the coverage in any way discourages me from reading the book the way I am; the significance of that hydrogen bond in the diagram will probably become apparent to you later, and even if it doesn’t you’ll probably manage.

In terms of general remarks about the book, a key point to mention early on is that it is very dense and full of interesting material. I find it hard at the moment to justify devoting time to blogging, but if that were not the case I’d probably be tempted to cover this book in a lot of detail, with multiple posts delving into specific fascinating aspects of the coverage. Despite this being a book where I don’t really understand everything that’s going on all the time, I’m definitely at a five-star rating at the moment, having read close to two-thirds of it at this point.

A few quotes:

“The process of drug development weeds out agents [or at least tries to weed out agents… – US] that have seriously negative actions and usually releases onto the market drugs that may have a profile of side effects, but these are relatively minor within a set concentration range where the drug’s pharmacological action is most effective. This range, or ‘therapeutic window’ is rather variable, but it will give some indication of the most ‘efficient’ drug concentration. This effectively means the most beneficial pharmacodynamic effects for the minimum side effects.”

If the dose is too low, you have a case of drug failure, where the drug doesn’t work. If the dose is too high, you experience toxicity. Both outcomes are problematic, but they manifest in different ways: drug failure is usually a gradual process (“Therapeutic drug failure is usually a gradual process, where the time frame may be days before the problem is detected”), whereas toxicity may be of very rapid onset (hours).

“To some extent, every patient has a unique therapeutic window for each drug they take, as there is such huge variation in our pharmacodynamic drug sensitivities. This book is concerned with what systems influence how long a drug stays in our bodies. […] [The therapeutic index] has been defined as the ratio between the lethal or toxic dose and the effective dose that shows the normal range of pharmacological effect. In practice, a drug […] is listed as having a narrow TI if there is less than a twofold difference between the lethal and effective doses, or a twofold difference in the minimum toxic and minimum effective concentrations. Back in the 1960s, many drugs in common use had narrow TIs […] that could be toxic at relatively low levels. Over the last 30 years, the drug industry has aimed to replace this type of drug with agents with much higher TIs. […] However, there are many drugs […] which remain in use that have narrow or relatively narrow TIs”.
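The twofold rule of thumb in the quote above is simple arithmetic, and a minimal sketch may make it concrete. This is my own illustrative code, not from the book; the function names and dose values are made up, and the doses are in arbitrary units:

```python
# Illustrative sketch of the therapeutic index (TI) arithmetic described
# in the quote above. Function names and numbers are hypothetical.

def therapeutic_index(toxic_dose, effective_dose):
    """TI = lethal/toxic dose divided by the effective dose."""
    return toxic_dose / effective_dose

def is_narrow_ti(toxic_dose, effective_dose):
    # The book's rule of thumb: a drug has a narrow TI if there is less
    # than a twofold difference between toxic and effective doses.
    return therapeutic_index(toxic_dose, effective_dose) < 2.0

# Made-up example doses (arbitrary units):
print(therapeutic_index(30.0, 10.0))  # TI of 3.0: not narrow
print(is_narrow_ti(15.0, 10.0))       # TI of 1.5: narrow, prints True
```

The point of the sketch is just that the classification depends only on the ratio, not on the absolute doses; a drug effective at 1 unit and toxic at 1.8 would be every bit as ‘narrow’ as one effective at 100 and toxic at 180.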

“metabolites are usually removed from the cell faster than the parent drug”

“The kidneys are mostly responsible for […] removal, known as elimination. The kidneys cannot filter large chemical entities like proteins, but they can remove the majority of smaller chemicals, depending on size, charge and water solubility. […] the kidney is a lipophilic (oil-loving) organ […] So the kidney is not efficient at eliminating lipophilic chemicals. One of the major roles of the liver is to use biotransforming enzymes to ensure that lipophilic agents are made water soluble enough to be cleared by the kidney. So the liver has an essential but indirect role in clearance, in that it must extract the drug from the circulation, biotransform (metabolize) it, then return the water-soluble product to the blood for the kidney to remove. The liver can also actively clear or physically remove its metabolic products from the circulation by excreting them in bile, where they travel through the gut to be eliminated in faeces.”

“Cell structures eventually settled around the format we see now, a largely aqueous cytoplasm bounded by a predominantly lipophilic protective membrane. Although the membrane does prevent entry and exit of many potential toxins, it is no barrier to other lipophilic molecules. If these molecules are highly lipophilic, they will passively diffuse into and become trapped in the membrane. If they are slightly less lipophilic, they will pass through it into the organism. So aside from ‘ housekeeping ’ enzyme systems, some enzymatic protection would have been needed against invading molecules from the immediate environment. […] the majority of living organisms including ourselves now possess some form of effective biotransformational enzyme capability which can detoxify and eliminate most hydrocarbons and related molecules. This capability has been effectively ‘stolen’ from bacteria over millions of years. The main biotransformational protection against aromatic hydrocarbons is a series of enzymes so named as they absorb UV light at 450 nm when reduced and bound to carbon monoxide. These specialized enzymes were termed cytochrome P450 monooxygenases or sometimes oxido-reductases. They are often referred to as ‘CYPs’ or ‘P450s’. […] All the CYPs accomplish their functions using the same basic mechanism, but each enzyme is adapted to dismantle particular groups of chemical structures. It is a testament to millions of years of ‘ research and development ’ in the evolution of CYPs, that perhaps 50,000 or more man-made chemical entities enter the environment for the first time every year and the vast majority can be oxidized by at least one form of CYP. […] To date, nearly 60 human CYPs have been identified […] It is likely that hundreds more CYP-mediated endogenous functions remain to be discovered. […] CYPs belong to a group of enzymes which all have similar core structures and modes of operation. 
[…] Their importance to us is underlined by their key role in more than 75 per cent of all drug biotransformations.”

I would add a note here that a very large proportion of this book is, perhaps unsurprisingly in view of the above, about those CYPs; how they work, what exactly it is that they do, which different kinds there are and what roles they play in the metabolism of specific drugs and chemical compounds, variation in gene expression across individuals and across populations in the context of specific CYPs and how such variation may relate to differences in drug metabolism, etc.

“Drugs often parallel endogenous molecules in their oil solubility, although many are considerably more lipophilic than these molecules. Generally, drugs, and xenobiotic compounds, have to be fairly oil soluble or they would not be absorbed from the GI tract. Once absorbed these molecules could change both the structure and function of living systems and their oil solubility makes these molecules rather ‘elusive’, in the sense that they can enter and leave cells according to their concentration and are temporarily beyond the control of the living system. This problem is compounded by the difficulty encountered by living systems in the removal of lipophilic molecules. […] even after the kidney removes them from blood by filtering them, the lipophilicity of drugs, toxins and endogenous steroids means that as soon as they enter the collecting tubules, they can immediately return to the tissue of the tubules, as this is more oil-rich than the aqueous urine. So the majority of lipophilic molecules can be filtered dozens of times and only low levels are actually excreted. In addition, very high lipophilicity molecules like some insecticides and fire retardants might never leave adipose tissue at all […] This means that for lipophilic agents:
* the more lipophilic they are, the more these agents are trapped in membranes, affecting fluidity and causing disruption at high levels;
* if they are hormones, they can exert an irreversible effect on tissues that is outside normal physiological control;
* if they are toxic, they can potentially damage endogenous structures;
* if they are drugs, they are also free to cause any pharmacological effect for a considerable period of time.”
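The point about repeated filtration can be illustrated with a toy calculation (my own sketch, not a model from the book): if a fixed fraction of the body load is filtered on each pass through the kidney, but most of a lipophilic molecule's filtered load diffuses back into the oil-rich tubule tissue before reaching the urine, net excretion stays tiny even after dozens of passes. All numbers below are arbitrary illustrative values.

```python
# Toy model (illustrative only): repeated renal filtration with passive
# reabsorption of lipophilic molecules from the collecting tubule.
# 'reabsorbed' is the fraction of each filtered load that diffuses back
# into the tubule tissue instead of leaving in the urine.

def amount_remaining(n_passes, filtered=0.2, reabsorbed=0.99):
    """Fraction of the initial body load still present after n filtration passes."""
    remaining = 1.0
    for _ in range(n_passes):
        excreted = remaining * filtered * (1.0 - reabsorbed)
        remaining -= excreted
    return remaining

# A hydrophilic molecule (little back-diffusion) is cleared quickly,
# while a highly lipophilic one is barely excreted at all.
hydrophilic = amount_remaining(50, reabsorbed=0.1)
lipophilic = amount_remaining(50, reabsorbed=0.99)
```

With these made-up numbers the hydrophilic compound is essentially gone after fifty passes while over 90 per cent of the lipophilic one is still in the body, which is the qualitative pattern the quote describes.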

“A sculptor was once asked how he would go about sculpting an elephant from a block of stone. His response was ‘knock off all the bits that did not look like an elephant’. Similarly, drug-metabolizing CYPs have one main imperative, to make molecules more water-soluble. Every aspect of their structure and function, their position in the liver, their initial selection of substrate, binding, substrate orientation and catalytic cycling, is intended to accomplish this deceptively simple aim.”

“The use of therapeutic drugs is a constant battle to pharmacologically influence a system that is actively undermining the drugs’ effects by removing them as fast as possible. The processes of oxidative and conjugative metabolism, in concert with efflux pump systems, act to clear a variety of chemicals from the body into the urine or faeces, in the most rapid and efficient manner. The systems that manage these processes also sense and detect increases in certain lipophilic substances and this boosts the metabolic capability to respond to the increased load.”

“The aim of drug therapy is to provide a stable, predictable pharmacological effect that can be adjusted to the needs of the individual patient for as long as is deemed clinically necessary. The physician may start drug therapy at a dosage that is decided on the basis of previous clinical experience and standard recommendations. At some point, the dosage might be increased if the desired effects were not forthcoming, or reduced if side effects are intolerable to the patient. This adjustment of dosage can be much easier in drugs that have a directly measurable response, such as a change in clotting time. However, in some drugs, this adjustment process can take longer to achieve than others, as the pharmacological effect, once attained, is gradually lost over a period of days. The dosage must be escalated to regain the original effect, sometimes several times, until the patient is stable on the dosage. In some cases, after some weeks of taking the drug, the initial pharmacological effect seen in the first few days now requires up to eight times the initial dosage to reproduce. It thus takes a significant period of time to create a stable pharmacological effect on a constant dose. In the same patients, if another drug is added to the regimen, it may not have any effect at all. In other patients, sudden withdrawal of perhaps only one drug in a regimen might lead to a gradual but serious intensification of the other drug’s side effects.”

“acceleration of drug metabolism as a response to the presence of certain drugs is known as ‘enzyme induction’ and drugs which cause it are often referred to as ‘inducers’ of drug metabolism. The process can be defined as: ‘An adaptive increase in the metabolizing capacity of a tissue’; this means that a drug or chemical is capable of inducing an increase in the transcription and translation of specific CYP isoforms, which are often (although not always) the most efficient metabolizers of that chemical. […] A new drug is generally regarded as an inducer if it produces a change in drug clearance which is equal to or greater than 40 per cent of an established potent inducer, usually taken as rifampicin. […] inducers are usually (but not always) lipophilic, contain aromatic groups and consequently, if they were not oxidized, they would be very persistent in living systems. CYP enzymes have evolved to oxidize this very type of agent; indeed, an elaborate and very effective system has also evolved to modulate the degree of CYP oxidation of these agents, so it is clear that living systems regard inducers as a particular threat among lipophilic agents in general. The process of induction is dynamic and closely controlled. The adaptive increase is constantly matched to the level of exposure to the drug, from very minor almost undetectable increases in CYP protein synthesis, all the way to a maximum enzyme synthesis that leads to the clearance of grammes of a chemical per day. Once exposure to the drug or toxin ceases, the adaptive increase in metabolizing capacity will subside gradually to the previous low level, usually within a time period of a few days. This varies according to the individual and the drug. […] it is clear there is almost limitless capacity for variation in terms of the basic pre-set responsiveness of the system as well as its susceptibility to different inducers and groups of inducers. 
Indeed, induction in different patients has been observed to differ by more than 20-fold.”
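The dynamics described in that passage (metabolizing capacity matched to the level of exposure, subsiding within a few days once exposure stops) can be caricatured with a simple first-order relaxation model. This is my own illustration, not the author's; the rate constant and the fourfold maximal induction are arbitrary values chosen only to show the shape of the response.

```python
# Toy first-order model of enzyme induction (illustrative, not from the
# book): the CYP level relaxes toward a target set by current inducer
# exposure, and decays back toward baseline once exposure ceases.

def simulate_cyp(days_on, days_off, k=0.5, baseline=1.0, induced=4.0):
    """Return daily CYP levels over a dosing period followed by withdrawal.

    k        -- fraction of the gap to the target closed each day (arbitrary)
    induced  -- target level while the inducer is present (arbitrary 4x baseline)
    """
    level, history = baseline, []
    for day in range(days_on + days_off):
        target = induced if day < days_on else baseline
        level += k * (target - level)  # relax toward the current target
        history.append(level)
    return history

levels = simulate_cyp(days_on=14, days_off=7)
# The level climbs toward the induced maximum during dosing, then
# subsides back toward baseline within a few days of withdrawal,
# matching the qualitative picture in the quoted passage.
```

The same structure also hints at why withdrawal of an inducer can intensify a co-administered drug's side effects: as the enzyme level falls, clearance of the other drug falls with it.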

I added this one mostly because I didn't know it, and I figured including it here would make it easier for me to remember later (i.e., not because I expected other people to find it particularly interesting):

“CYP2E1 is very sensitive to diet, even becoming induced by high fat/low carbohydrate intakes. Surprisingly, starvation and diabetes also promote CYP2E1 functionality. Insulin levels fall during diet restriction, starvation and in diabetes and the formation of functional 2E1 is suppressed by insulin, so these conditions promote the increase of 2E1 metabolic capability. One of the consequences of diabetes and starvation is the major shift from glucose to fatty acid/triglyceride oxidation, of which some of the by-products are small, hydrophilic and potentially toxic ‘ketone bodies’. These agents can cause a CNS intoxicating effect which is seen in diabetics who are very hypoglycaemic; they may appear ‘drunk’ and their breath will smell as if they had been drinking.”

A more general related point which may be of more interest to other people reading along here is that this is far from the only CYP which is sensitive to diet, and that diet-mediated effects may be very significant. I may go into this in more detail in a later post. Note that grapefruit is a major potentially problematic dietary component in many drug contexts:

“Although patients have been heroically consuming grapefruit juice for their health for decades, it took until the late 1980s before its effects on drug clearance were noted and several more years before it was realized that there could be a major problem with drug interactions […] The most noteworthy feature of the effect of grapefruit juice is its potency from a single ‘dose’ which coincides with a typical single breakfast intake of the juice, say around 200–300 ml. Studies with CYP3A substrates such as midazolam have shown that it can take up to three days before the effects wear off, which is consistent with the synthesis of new enzyme. […] there are a number of drugs that are subject to a very high gut wall component to their ‘first-pass’ metabolism […]; these include midazolam, terfenadine, lovastatin, simvastatin and astemizole. Their gut CYP clearance is so high that if the juice inhibits it, the concentration reaching the liver can increase six- or sevenfold. If the liver normally only extracts a relatively minor proportion of the parent agent, then plasma levels of such drugs increase dramatically towards toxicity […] the inhibitor effects of grapefruit juice in high first-pass drugs is particularly clinically relevant as it can occur after one exposure of the juice.”

It may sound funny, but there are two pages in this book about the effects of grapefruit juice, including a list of ‘Drugs that should not be taken with grapefruit juice’. Grapefruit is a well-known so-called mechanism-based inhibitor, and it may impact the metabolism of a lot of different drugs. It is far from the only known dietary component which may cause problems in a drug metabolism context – for example “cranberry juice has been known for some time as an inhibitor of warfarin metabolism”. On a general note the author remarks that: “There are hundreds of fruit preparations available that have been specifically marketed for their […] antioxidant capacities, such as purple grape, pomegranate, blueberry and acai juices. […] As they all contain large numbers of diverse phenolics and are pharmacologically active, they should be consumed with some caution during drug therapy.”

April 7, 2016 Posted by | Biology, Books, Medicine, Nephrology, Pharmacology

Random Stuff

i. Some new words I’ve encountered (not all of them are from, but many of them are):

Uxoricide, persnickety, logy, philoprogenitive, impassive, hagiography, gunwale, flounce, vivify, pelage, irredentism, pertinacity, callipygous, valetudinarian, recrudesce, adjuration, epistolary, dandle, picaresque, humdinger, newel, lightsome, lunette, inflect, misoneism, cormorant, immanence, parvenu, sconce, acquisitiveness, lingual, Macaronic, divot, mettlesome, logomachy, raffish, marginalia, omnifarious, tatter, licit.

ii. A lecture:

I got annoyed a few times by the fact that you can’t tell where he’s pointing when he’s talking about the slides, which makes the lecture harder to follow than it ought to be, but it’s still an interesting lecture.

iii. Facts about Dihydrogen Monoxide. Includes coverage of important neglected topics such as ‘What is the link between Dihydrogen Monoxide and school violence?’ After reading the article, I am frankly outraged that this stuff’s still legal!

iv. Some wikipedia links of interest:


“Steganography […] is the practice of concealing a file, message, image, or video within another file, message, image, or video. The word steganography combines the Greek words steganos (στεγανός), meaning “covered, concealed, or protected”, and graphein (γράφειν) meaning “writing”. […] Generally, the hidden messages appear to be (or be part of) something else: images, articles, shopping lists, or some other cover text. For example, the hidden message may be in invisible ink between the visible lines of a private letter. Some implementations of steganography that lack a shared secret are forms of security through obscurity, whereas key-dependent steganographic schemes adhere to Kerckhoffs’s principle.[1]

The advantage of steganography over cryptography alone is that the intended secret message does not attract attention to itself as an object of scrutiny. Plainly visible encrypted messages—no matter how unbreakable—arouse interest, and may in themselves be incriminating in countries where encryption is illegal.[2] Thus, whereas cryptography is the practice of protecting the contents of a message alone, steganography is concerned with concealing the fact that a secret message is being sent, as well as concealing the contents of the message.”
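As a concrete illustration of the idea (my own toy example, not from the article): the classic least-significant-bit (LSB) technique hides each bit of a secret message in the lowest bit of successive bytes of a cover file, such as raw image pixel data, changing the cover imperceptibly.

```python
# Toy least-significant-bit steganography. Real schemes are key-dependent
# (per Kerckhoffs's principle); this sketch just shows the basic mechanism.

def hide(cover: bytes, secret: bytes) -> bytes:
    """Embed the bits of 'secret' (LSB-first per byte) in the low bits of 'cover'."""
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    assert len(bits) <= len(cover), "cover too small for the message"
    out = bytearray(cover)
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def reveal(stego: bytes, n_bytes: int) -> bytes:
    """Recover n_bytes of hidden message from the low bits of 'stego'."""
    bits = [b & 1 for b in stego[:n_bytes * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes))

cover = bytes(range(256))  # stand-in for pixel data
stego = hide(cover, b"hi")
recovered = reveal(stego, 2)
```

Each hidden bit changes its carrier byte by at most 1, so a casual observer of the 'image' sees nothing unusual, which is exactly the advantage over bare encryption described above.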

H. H. Holmes. A really nice guy.

Herman Webster Mudgett (May 16, 1861 – May 7, 1896), better known under the name of Dr. Henry Howard Holmes or more commonly just H. H. Holmes, was one of the first documented serial killers in the modern sense of the term.[1][2] In Chicago, at the time of the 1893 World’s Columbian Exposition, Holmes opened a hotel which he had designed and built for himself specifically with murder in mind, and which was the location of many of his murders. While he confessed to 27 murders, of which nine were confirmed, his actual body count could be up to 200.[3] He brought an unknown number of his victims to his World’s Fair Hotel, located about 3 miles (4.8 km) west of the fair, which was held in Jackson Park. Besides being a serial killer, H. H. Holmes was also a successful con artist and a bigamist. […]

Holmes purchased an empty lot across from the drugstore where he built his three-story, block-long hotel building. Because of its enormous structure, local people dubbed it “The Castle”. The building was 162 feet long and 50 feet wide. […] The ground floor of the Castle contained Holmes’ own relocated drugstore and various shops, while the upper two floors contained his personal office and a labyrinth of rooms with doorways opening to brick walls, oddly-angled hallways, stairways leading to nowhere, doors that could only be opened from the outside and a host of other strange and deceptive constructions. Holmes was constantly firing and hiring different workers during the construction of the Castle, claiming that “they were doing incompetent work.” His actual reason was to ensure that he was the only one who fully understood the design of the building.[3]

Minnesota Starvation Experiment.

“The Minnesota Starvation Experiment […] was a clinical study performed at the University of Minnesota between November 19, 1944 and December 20, 1945. The investigation was designed to determine the physiological and psychological effects of severe and prolonged dietary restriction and the effectiveness of dietary rehabilitation strategies.

The motivation of the study was twofold: First, to produce a definitive treatise on the subject of human starvation based on a laboratory simulation of severe famine and, second, to use the scientific results produced to guide the Allied relief assistance to famine victims in Europe and Asia at the end of World War II. It was recognized early in 1944 that millions of people were in grave danger of mass famine as a result of the conflict, and information was needed regarding the effects of semi-starvation—and the impact of various rehabilitation strategies—if postwar relief efforts were to be effective.”

“most of the subjects experienced periods of severe emotional distress and depression.[1]:161 There were extreme reactions to the psychological effects during the experiment including self-mutilation (one subject amputated three fingers of his hand with an axe, though the subject was unsure if he had done so intentionally or accidentally).[5] Participants exhibited a preoccupation with food, both during the starvation period and the rehabilitation phase. Sexual interest was drastically reduced, and the volunteers showed signs of social withdrawal and isolation.[1]:123–124 […] One of the crucial observations of the Minnesota Starvation Experiment […] is that the physical effects of the induced semi-starvation during the study closely approximate the conditions experienced by people with a range of eating disorders such as anorexia nervosa and bulimia nervosa.”

Post-vasectomy pain syndrome. Vasectomy reversal is a risk people probably know about, but this one seems to also be worth being aware of if one is considering having a vasectomy.

Transport in the Soviet Union (‘good article’). A few observations from the article:

“By the mid-1970s, only eight percent of the Soviet population owned a car. […]  From 1924 to 1971 the USSR produced 1 million vehicles […] By 1975 only 8 percent of rural households owned a car. […] Growth of motor vehicles had increased by 224 percent in the 1980s, while hardcore surfaced roads only increased by 64 percent. […] By the 1980s Soviet railways had become the most intensively used in the world. Most Soviet citizens did not own private transport, and if they did, it was difficult to drive long distances due to the poor conditions of many roads. […] Road transport played a minor role in the Soviet economy, compared to domestic rail transport or First World road transport. According to historian Martin Crouch, road traffic of goods and passengers combined was only 14 percent of the volume of rail transport. It was only late in its existence that the Soviet authorities put emphasis on road construction and maintenance […] Road transport as a whole lagged far behind that of rail transport; the average distance moved by motor transport in 1982 was 16.4 kilometres (10.2 mi), while the average for railway transport was 930 km per ton and 435 km per ton for water freight. In 1982 there was a threefold increase in investment since 1960 in motor freight transport, and more than a thirtyfold increase since 1940.”

March 3, 2016 Posted by | Biology, Cryptography, History, language, Lectures, Random stuff, Wikipedia, Zoology


i. “The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data.” (John Tukey)

ii. “Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise.” (-ll-)

iii. “They who can no longer unlearn have lost the power to learn.” (John Lancaster Spalding)

iv. “If there are but few who interest thee, why shouldst thou be disappointed if but few find thee interesting?” (-ll-)

v. “Since the mass of mankind are too ignorant or too indolent to think seriously, if majorities are right it is by accident.” (-ll-)

vi. “As they are the bravest who require no witnesses to their deeds of daring, so they are the best who do right without thinking whether or not it shall be known.” (-ll-)

vii. “Perfection is beyond our reach, but they who earnestly strive to become perfect, acquire excellences and virtues of which the multitude have no conception.” (-ll-)

viii. “We are made ridiculous less by our defects than by the affectation of qualities which are not ours.” (-ll-)

ix. “If thy words are wise, they will not seem so to the foolish: if they are deep the shallow will not appreciate them. Think not highly of thyself, then, when thou art praised by many.” (-ll-)

x. “Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.” (George E. P. Box)

xi. “Intense ultraviolet (UV) radiation from the young Sun acted on the atmosphere to form small amounts of very many gases. Most of these dissolved easily in water, and fell out in rain, making Earth’s surface water rich in carbon compounds. […] the most important chemical of all may have been cyanide (HCN). It would have formed easily in the upper atmosphere from solar radiation and meteorite impact, then dissolved in raindrops. Today it is broken down almost at once by oxygen, but early in Earth’s history it built up at low concentrations in lakes and oceans. Cyanide is a basic building block for more complex organic molecules such as amino acids and nucleic acid bases. Life probably evolved in chemical conditions that would kill us instantly!” (Richard Cowen, History of Life, p.8)

xii. “Dinosaurs dominated land communities for 100 million years, and it was only after dinosaurs disappeared that mammals became dominant. It’s difficult to avoid the suspicion that dinosaurs were in some way competitively superior to mammals and confined them to small body size and ecological insignificance. […] Dinosaurs dominated many guilds in the Cretaceous, including that of large browsers. […] in terms of their reconstructed behavior […] dinosaurs should be compared not with living reptiles, but with living mammals and birds. […] By the end of the Cretaceous there were mammals with varied sets of genes but muted variation in morphology. […] All Mesozoic mammals were small. Mammals with small bodies can play only a limited number of ecological roles, mainly insectivores and omnivores. But when dinosaurs disappeared at the end of the Cretaceous, some of the Paleocene mammals quickly evolved to take over many of their ecological roles” (ibid., pp. 145, 154, 222, 227-228)

xiii. “To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of.” (Ronald Fisher)

xiv. “Ideas are incestuous.” (Howard Raiffa)

xv. “Game theory […] deals only with the way in which ultrasmart, all knowing people should behave in competitive situations, and has little to say to Mr. X as he confronts the morass of his problem.” (-ll-)

xvi. “One of the principal objects of theoretical research is to find the point of view from which the subject appears in the greatest simplicity.” (Josiah Willard Gibbs)

xvii. “Nothing is as dangerous as an ignorant friend; a wise enemy is to be preferred.” (Jean de La Fontaine)

xviii. “Humility is a virtue all preach, none practice; and yet everybody is content to hear.” (John Selden)

xix. “Few men make themselves masters of the things they write or speak.” (-ll-)

xx. “Wise men say nothing in dangerous times.” (-ll-)



January 15, 2016 Posted by | Biology, Books, Paleontology, Quotes/aphorisms, Statistics

The Origin of Species

I figured I ought to blog this book at some point, and today I decided to take out the time to do it. This is the second book by Darwin I’ve read – for blog content dealing with Darwin’s book The Voyage of the Beagle, see these posts. The two books are somewhat different; Beagle is sort of a travel book written by a scientist who decided to write down his observations during his travels, whereas Origin is a sort of popular-science research treatise – for more details on Beagle, see the posts linked above. If you plan on reading both the way I did I think you should aim to read them in the order they are written.

I did not rate the book on goodreads because I could not think of a fair way to rate the book; it’s a unique and very important contribution to the history of science, but how do you weigh the other dimensions? I decided not to try. Some of the people reviewing the book on goodreads call the book ‘dry’ or ‘dense’, but I’d say that I found the book quite easy to read compared to quite a few of the other books I’ve been reading this year and it doesn’t actually take that long to read; thus I read a quite substantial proportion of the book during a one day trip to Copenhagen and back. The book can be read by most literate people living in the 21st century – you do not need to know any evolutionary biology to read this book – but that said, how you read the book will to some extent depend upon how much you know about the topics about which Darwin theorizes in his book. I had a conversation with my brother about the book a short while after I’d read it, and I recall noting during that conversation that in my opinion one would probably get more out of reading this book if one has at least some knowledge of geology (for example some knowledge about the history of the theory of continental drift – this book was written long before the theory of plate tectonics was developed), paleontology, Mendel’s laws/genetics/the modern synthesis and modern evolutionary thought, ecology and ethology, etc. Whether or not you actually do ‘get more out of the book’ if you already know some stuff about the topics about which Darwin speaks is perhaps an open question, but I think a case can certainly be made that someone who already knows a bit about evolution and related topics will read this book in a different manner than will someone who knows very little about these topics. 
I should perhaps in this context point out to people new to this blog that even though I hardly consider myself an expert on these sorts of topics, I have nevertheless read quite a bit of stuff about those things in the past – books like this, this, this, this, this, this, this, this, this, this, this, this, this, this, and this one – so I was reading the book perhaps mainly from the vantage point of someone at least somewhat familiar both with many of the basic ideas and with a lot of the refinements of these ideas that people have added to the science of biology since Darwin’s time. One of the things my knowledge of modern biology and related topics had not prepared me for was how moronic some of the ideas of Darwin’s critics were at the time and how stupid some of the implicit alternatives were, and this is actually part of the fun of reading this book; there was a lot of stuff back then which even many of the people who were presumably held in high regard really had no clue about, and even outrageously idiotic ideas were seemingly taken quite seriously by people involved in the debate. I assume that biologists still to this day have to spend quite a bit of time and effort dealing with ignorant idiots (see also this), but back in Darwin’s day these people were presumably to a much greater extent taken seriously even among people in the scientific community, if indeed they were not themselves part of the scientific community.

Darwin was not right about everything and there’s a lot of stuff that modern biologists know which he had no idea about, so naturally some mistaken ideas made their way into Origin as well; for example the idea of the inheritance of acquired characteristics (Lamarckian inheritance) occasionally pops up and is implicitly defended in the book as a credible complement to natural selection, as also noted in Oliver Francis’ afterword to the book. On a general note it seems that Darwin did a better job convincing people about the importance of the concept of evolution than he did convincing people that the relevant mechanism behind evolution was natural selection; at least that’s what’s argued in wiki’s featured article on the history of evolutionary thought (to which I have linked before here on the blog).

Darwin emphasizes more than once in the book that evolution is a very slow process which takes a lot of time (for example: “I do believe that natural selection will always act very slowly, often only at long intervals of time, and generally on only a very few of the inhabitants of the same region at the same time”, p.123), and arguably this is also something about which he is part right/part wrong because the speed with which natural selection ‘makes itself felt’ depends upon a variety of factors, and it can be really quite fast in some contexts (see e.g. this and some of the topics covered in books like this one); though you can appreciate why he held the views he did on that topic.

A big problem confronted by Darwin was that he didn’t know how genes work, so in a sense the whole topic of the ‘mechanics of the whole thing’ – the ‘nuts and bolts’ – was more or less a black box to him (I have included a few quotes which indirectly relate to this problem in my coverage of the book below; as can be inferred from those quotes Darwin wasn’t completely clueless, but he might have benefited greatly from a chat with Gregor Mendel…) – in a way a really interesting thing about the book is how plausible the theory of natural selection is made out to be despite this blatantly obvious (at least to the modern reader) problem. Darwin was incidentally well aware there was a problem; just 6 pages into the first chapter of the book he observes frankly that: “The laws governing inheritance are quite unknown”. Some of the quotes below, e.g. on reciprocal crosses, illustrate that he was sort of scratching the surface, but in the book he never does more than that.

Below I have added some quotes from the book.

“Certainly no clear line of demarcation has as yet been drawn between species and sub-species […]; or, again, between sub-species and well-marked varieties, or between lesser varieties and individual differences. These differences blend into each other in an insensible series; and a series impresses the mind with the idea of an actual passage. […] I look at individual differences, though of small interest to the systematist, as of high importance […], as being the first step towards such slight varieties as are barely thought worth recording in works on natural history. And I look at varieties which are in any degree more distinct and permanent, as steps leading to more strongly marked and more permanent varieties; and at these latter, as leading to sub-species, and to species. […] I attribute the passage of a variety, from a state in which it differs very slightly from its parent to one in which it differs more, to the action of natural selection in accumulating […] differences of structure in certain definite directions. Hence I believe a well-marked variety may be justly called an incipient species […] I look at the term species as one arbitrarily given, for the sake of convenience, to a set of individuals closely resembling each other, and that it does not essentially differ from the term variety, which is given to less distinct and more fluctuating forms. The term variety, again, in comparison with mere individual differences, is also applied arbitrarily, and for mere convenience’ sake. […] the species of large genera present a strong analogy with varieties. And we can clearly understand these analogies, if species have once existed as varieties, and have thus originated: whereas, these analogies are utterly inexplicable if each species has been independently created.”

“Owing to [the] struggle for life, any variation, however slight and from whatever cause proceeding, if it be in any degree profitable to an individual of any species, in its infinitely complex relations to other organic beings and to external nature, will tend to the preservation of that individual, and will generally be inherited by its offspring. The offspring, also, will thus have a better chance of surviving, for, of the many individuals of any species which are periodically born, but a small number can survive. I have called this principle, by which each slight variation, if useful, is preserved, by the term of Natural Selection, in order to mark its relation to man’s power of selection. We have seen that man by selection can certainly produce great results, and can adapt organic beings to his own uses, through the accumulation of slight but useful variations, given to him by the hand of Nature. But Natural Selection, as we shall hereafter see, is a power incessantly ready for action, and is as immeasurably superior to man’s feeble efforts, as the works of Nature are to those of Art. […] In looking at Nature, it is most necessary to keep the foregoing considerations always in mind – never to forget that every single organic being around us may be said to be striving to the utmost to increase in numbers; that each lives by a struggle at some period of its life; that heavy destruction inevitably falls either on the young or old, during each generation or at recurrent intervals. Lighten any check, mitigate the destruction ever so little, and the number of the species will almost instantaneously increase to any amount. The face of Nature may be compared to a yielding surface, with ten thousand sharp wedges packed close together and driven inwards by incessant blows, sometimes one wedge being struck, and then another with greater force. 
[…] A corollary of the highest importance may be deduced from the foregoing remarks, namely, that the structure of every organic being is related, in the most essential yet often hidden manner, to that of all other organic beings, with which it comes into competition for food or residence, or from which it has to escape, or on which it preys.”

“Under nature, the slightest difference of structure or constitution may well turn the nicely-balanced scale in the struggle for life, and so be preserved. How fleeting are the wishes and efforts of man! how short his time! And consequently how poor will his products be, compared with those accumulated by nature during whole geological periods. […] It may be said that natural selection is daily and hourly scrutinising, throughout the world, every variation, even the slightest; rejecting that which is bad, preserving and adding up all that is good; silently and insensibly working, whenever and wherever opportunity offers, at the improvement of each organic being in relation to its organic and inorganic conditions of life. We see nothing of these slow changes in progress, until the hand of time has marked the long lapses of ages, and then so imperfect is our view into long past geological ages, that we only see that the forms of life are now different from what they formerly were.”

“I have collected so large a body of facts, showing, in accordance with the almost universal belief of breeders, that with animals and plants a cross between different varieties, or between individuals of the same variety but of another strain, gives vigour and fertility to the offspring; and on the other hand, that close interbreeding diminishes vigour and fertility; that these facts alone incline me to believe that it is a general law of nature (utterly ignorant though we be of the meaning of the law) that no organic being self-fertilises itself for an eternity of generations; but that a cross with another individual is occasionally — perhaps at very long intervals — indispensable. […] in many organic beings, a cross between two individuals is an obvious necessity for each birth; in many others it occurs perhaps only at long intervals; but in none, as I suspect, can self-fertilisation go on for perpetuity.”

“as new species in the course of time are formed through natural selection, others will become rarer and rarer, and finally extinct. The forms which stand in closest competition with those undergoing modification and improvement, will naturally suffer most. […] Whatever the cause may be of each slight difference in the offspring from their parents – and a cause for each must exist – it is the steady accumulation, through natural selection, of such differences, when beneficial to the individual, which gives rise to all the more important modifications of structure, by which the innumerable beings on the face of this earth are enabled to struggle with each other, and the best adapted to survive.”

“Natural selection, as has just been remarked, leads to divergence of character and to much extinction of the less improved and intermediate forms of life. On these principles, I believe, the nature of the affinities of all organic beings may be explained. It is a truly wonderful fact – the wonder of which we are apt to overlook from familiarity – that all animals and all plants throughout all time and space should be related to each other in group subordinate to group, in the manner which we everywhere behold – namely, varieties of the same species most closely related together, species of the same genus less closely and unequally related together, forming sections and sub-genera, species of distinct genera much less closely related, and genera related in different degrees, forming sub-families, families, orders, sub-classes, and classes. The several subordinate groups in any class cannot be ranked in a single file, but seem rather to be clustered round points, and these round other points, and so on in almost endless cycles. On the view that each species has been independently created, I can see no explanation of this great fact in the classification of all organic beings; but, to the best of my judgment, it is explained through inheritance and the complex action of natural selection, entailing extinction and divergence of character […] The affinities of all the beings of the same class have sometimes been represented by a great tree. I believe this simile largely speaks the truth. The green and budding twigs may represent existing species; and those produced during each former year may represent the long succession of extinct species. At each period of growth all the growing twigs have tried to branch out on all sides, and to overtop and kill the surrounding twigs and branches, in the same manner as species and groups of species have tried to overmaster other species in the great battle for life. 
The limbs divided into great branches, and these into lesser and lesser branches, were themselves once, when the tree was small, budding twigs; and this connexion of the former and present buds by ramifying branches may well represent the classification of all extinct and living species in groups subordinate to groups. Of the many twigs which flourished when the tree was a mere bush, only two or three, now grown into great branches, yet survive and bear all the other branches; so with the species which lived during long-past geological periods, very few now have living and modified descendants. From the first growth of the tree, many a limb and branch has decayed and dropped off; and these lost branches of various sizes may represent those whole orders, families, and genera which have now no living representatives, and which are known to us only from having been found in a fossil state. As we here and there see a thin straggling branch springing from a fork low down in a tree, and which by some chance has been favoured and is still alive on its summit, so we occasionally see an animal like the Ornithorhynchus or Lepidosiren, which in some small degree connects by its affinities two large branches of life, and which has apparently been saved from fatal competition by having inhabited a protected station. As buds give rise by growth to fresh buds, and these, if vigorous, branch out and overtop on all sides many a feebler branch, so by generation I believe it has been with the great Tree of Life, which fills with its dead and broken branches the crust of the earth, and covers the surface with its ever branching and beautiful ramifications.”

“No one has been able to point out what kind, or what amount, of difference in any recognisable character is sufficient to prevent two species crossing. It can be shown that plants most widely different in habit and general appearance, and having strongly marked differences in every part of the flower, even in the pollen, in the fruit, and in the cotyledons, can be crossed. […] By a reciprocal cross between two species, I mean the case, for instance, of a stallion-horse being first crossed with a female-ass, and then a male-ass with a mare: these two species may then be said to have been reciprocally crossed. There is often the widest possible difference in the facility of making reciprocal crosses. Such cases are highly important, for they prove that the capacity in any two species to cross is often completely independent of their systematic affinity, or of any recognisable difference in their whole organisation. On the other hand, these cases clearly show that the capacity for crossing is connected with constitutional differences imperceptible by us, and confined to the reproductive system. […] fertility in the hybrid is independent of its external resemblance to either pure parent. […] The foregoing rules and facts […] appear to me clearly to indicate that the sterility both of first crosses and of hybrids is simply incidental or dependent on unknown differences, chiefly in the reproductive systems, of the species which are crossed. […] Laying aside the question of fertility and sterility, in all other respects there seems to be a general and close similarity in the offspring of crossed species, and of crossed varieties. If we look at species as having been specially created, and at varieties as having been produced by secondary laws, this similarity would be an astonishing fact. But it harmonizes perfectly with the view that there is no essential distinction between species and varieties. 
[…] the facts briefly given in this chapter do not seem to me opposed to, but even rather to support the view, that there is no fundamental distinction between species and varieties.”

“Believing, from reasons before alluded to, that our continents have long remained in nearly the same relative position, though subjected to large, but partial oscillations of level, I am strongly inclined to…” (…’probably get some things wrong…’, US)

“In considering the distribution of organic beings over the face of the globe, the first great fact which strikes us is, that neither the similarity nor the dissimilarity of the inhabitants of various regions can be accounted for by their climatal and other physical conditions. Of late, almost every author who has studied the subject has come to this conclusion. […] A second great fact which strikes us in our general review is, that barriers of any kind, or obstacles to free migration, are related in a close and important manner to the differences between the productions of various regions. […] A third great fact, partly included in the foregoing statements, is the affinity of the productions of the same continent or sea, though the species themselves are distinct at different points and stations. It is a law of the widest generality, and every continent offers innumerable instances. Nevertheless the naturalist in travelling, for instance, from north to south never fails to be struck by the manner in which successive groups of beings, specifically distinct, yet clearly related, replace each other. […] We see in these facts some deep organic bond, prevailing throughout space and time, over the same areas of land and water, and independent of their physical conditions. The naturalist must feel little curiosity, who is not led to inquire what this bond is.  This bond, on my theory, is simply inheritance […] The dissimilarity of the inhabitants of different regions may be attributed to modification through natural selection, and in a quite subordinate degree to the direct influence of different physical conditions. 
The degree of dissimilarity will depend on the migration of the more dominant forms of life from one region into another having been effected with more or less ease, at periods more or less remote; on the nature and number of the former immigrants; and on their action and reaction, in their mutual struggles for life; the relation of organism to organism being, as I have already often remarked, the most important of all relations. Thus the high importance of barriers comes into play by checking migration; as does time for the slow process of modification through natural selection. […] On this principle of inheritance with modification, we can understand how it is that sections of genera, whole genera, and even families are confined to the same areas, as is so commonly and notoriously the case.”

“the natural system is founded on descent with modification […] and […] all true classification is genealogical; […] community of descent is the hidden bond which naturalists have been unconsciously seeking, […] not some unknown plan of creation, or the enunciation of general propositions, and the mere putting together and separating objects more or less alike.”

September 27, 2015 | Biology, Books, Botany, Evolutionary biology, Genetics, Geology, Zoology

Loneliness (II)

Here’s my first post about the book. I’d probably have liked the book better if I hadn’t read the Cognitive Psychology text before this one, as knowledge from that book has made me think a few times in specific contexts that ‘that’s a bit more complicated than you’re making it out to be’ – as I also mentioned in the first post, the book is a bit too popular science-y for my taste. I have been reading other books in the last few days – for example I started reading Darwin a couple of days ago – and so I haven’t really spent much time on this one since my first post; however I have read the first 10 chapters (out of 14) by now, and below I’ve added a few observations from the chapters in the middle.

“In 1958, in a now-legendary, perhaps infamous experiment, the psychologist Harry Harlow of the University of Wisconsin removed newborn rhesus monkeys from their mothers. He presented these newborns instead with two surrogates, one made of wire and one made of cloth […]. Either stand-in could be rigged with a milk bottle, but regardless of which “mother” provided food, infant monkeys spent most of their time clinging to the one made of cloth, running to it immediately when startled or upset. They visited the wire mother only when that surrogate provided food, and then, only for as long as it took to feed.2

Harlow found that monkeys deprived of tactile comfort showed significant delays in their progress, both mentally and emotionally. Those deprived of tactile comfort and also raised in isolation from other monkeys developed additional behavioral aberrations, often severe, from which they never recovered. Even after they had rejoined the troop, these deprived monkeys would sit alone and rock back and forth. They were overly aggressive with their playmates, and later in life they remained unable to form normal attachments. They were, in fact, socially inept — a deficiency that extended down into the most basic biological behaviors. If a socially deprived female was approached by a normal male during the time when hormones made her sexually receptive, she would squat on the floor rather than present her hindquarters. When a previously isolated male approached a receptive female, he would clasp her head instead of her hindquarters, then engage in pelvic thrusts. […] Females raised in isolation became either incompetent or abusive mothers. Even monkeys raised in cages where they could see, smell, and hear — but not touch — other monkeys developed what the neuroscientist Mary Carlson has called an “autistic-like syndrome,” with excessive grooming, self-clasping, social withdrawal, and rocking. As Carlson told a reporter, “You were not really a monkey unless you were raised in an interactive monkey environment.””

In the authors’ coverage of oxytocin’s various roles in human and animal social interaction they lay it on a bit thick in my opinion, and the less-than-skeptical treatment there makes me somewhat skeptical of their coverage of mirror neurons as well, also on account of stuff like this. However, I decided to include a little of the coverage of this topic anyway:

“In the 1980s the neurophysiologist Giacomo Rizzolatti began experimenting with macaque monkeys, running electrodes directly into their brains and giving them various objects to handle. The wiring was so precise that it allowed Rizzolatti and his colleagues to identify the specific monkey neurons that were activated at any moment.

When the monkeys carried out an action, such as reaching for a peanut, an area in the premotor cortex called F5 would fire […]. But then the scientists noticed something quite unexpected. When one of the researchers picked up a peanut to hand it to the monkey, those same motor neurons in the monkey’s brain fired. It was as if the animal itself had picked up the peanut. Likewise, the same neurons that fired when the monkey put a peanut in its mouth would fire when the monkey watched a researcher put a peanut in his mouth. […] Rizzolatti gave these structures the name “mirror neurons.” They fire even when the critical point of the action—the person’s hand grasping the peanut, for instance — is hidden from view behind some object, provided that the monkey knows there is a peanut back there. Even simply hearing the action — a peanut shell being cracked — can trigger the response. In all these instances, it is the goal rather than the observed action itself that is being mirrored in the monkey’s neural response. […] Rizzolatti and his colleagues confirmed the role of goals […] by performing brain scans while people watched humans, monkeys, and dogs opening and closing their jaws as if biting. Then they repeated the scans while the study subjects watched humans speak, monkeys smack their lips, and dogs bark.9 When the participants watched any of the three species carrying out the biting motion, the same areas of their brains were activated that activate when humans themselves bite. That is, observing actions that could reasonably be performed by humans, even when the performers were monkeys or dogs, activated the appropriate portion of the mirror neuron system in the human brain. […] the mirror neuron system isn’t simply “monkey see, monkey do,” or even “human see, human do.” It functions to give the observing individual knowledge of the observed action from a “personal” perspective. 
This “personal” understanding of others’ actions, it appears, promotes our understanding of and resonance with others.”

“In a study of how people monitor social cues, when researchers gave participants facts related to interpersonal or collective social ties presented in a diary format, those who were lonely remembered a greater proportion of this information than did those who were not lonely. Feeling lonely increases a person’s attentiveness to social cues just as being hungry increases a person’s attentiveness to food cues.28 […] They [later] presented images of twenty-four male and female faces depicting four emotions — anger, fear, happiness, and sadness — in two modes, high intensity and low intensity. The faces appeared individually for only one second, during which participants had to judge the emotional timbre. The higher the participants’ level of loneliness, the less accurate their interpretation of the facial expressions.”

“As we try to determine the meaning of events around us, we humans are not particularly good at knowing the causes of our own feelings or behavior. We overestimate our own strengths and underestimate our faults. We overestimate the importance of our contribution to group activities, the pervasiveness of our beliefs within the wider population, and the likelihood that an event we desire will occur.3 At the same time we underestimate the contribution of others, as well as the likelihood that risks in the world apply to us. Events that unfold unexpectedly are not reasoned about as much as they are rationalized, and the act of remembering itself […] is far more of a biased reconstruction than an accurate recollection of events. […] Amid all the standard distortions we engage in, […] loneliness also sets us apart by making us more fragile, negative, and self-critical. […] One of the distinguishing characteristics of people who have become chronically lonely is the perception that they are doomed to social failure, with little if any control over external circumstances. Awash in pessimism, and feeling the need to protect themselves at every turn, they tend to withdraw, or to rely on the passive forms of coping under stress […] The social strategy that loneliness induces — high in social avoidance, low in social approach — also predicts future loneliness. The cynical worldview induced by loneliness, which consists of alienation and little faith in others, in turn, has been shown to contribute to actual social rejection. This is how feeling lonely creates self-fulfilling prophecies.
If you maintain a subjective sense of rejection long enough, over time you are far more likely to confront the actual social rejection that you dread.8 […] In an effort to protect themselves against disappointment and the pain of rejection, the lonely can come up with endless numbers of reasons why a particular effort to reach out will be pointless, or why a particular relationship will never work. This may help explain why, when we’re feeling lonely, we undermine ourselves by assuming that we lack social skills that, in fact, we do have available.”

“Because the emotional system that governs human self-preservation was built for a primitive environment and simple, direct dangers, it can be extremely naïve. It is impressionable and prefers shallow, social, and anecdotal information to abstract data. […] A sense of isolation can make [humans] feel unsafe. When we feel unsafe, we do the same thing a hunter-gatherer on the plains of Africa would do — we scan the horizon for threats. And just like a hunter-gatherer hearing an ominous sound in the brush, the lonely person too often assumes the worst, tightens up, and goes into the psychological equivalent of a protective crouch.”

“One might expect that a lonely person, hungry to fulfill unmet social needs, would be very accepting of a new acquaintance, just as a famished person might take pleasure in food that was not perfectly prepared or her favorite item on the menu. However, when people feel lonely they are actually far less accepting of potential new friends than when they feel socially contented.17 Studies show that lonely undergraduates hold more negative perceptions of their roommates than do their nonlonely peers.”

August 30, 2015 | Biology, Books, Psychology, Zoology

Random Stuff / Open Thread

This is not a very ‘meaty’ post, but it’s been a long time since I had one of these and I figured it was time for another one. As always links and comments are welcome.

i. The unbearable accuracy of stereotypes. I made a mental note a long time ago to read this paper, but I’ve been busy with other things. Today I skimmed it and decided that it looks interesting enough to deserve a detailed read later. Some remarks from the summary towards the end of the paper:

“The scientific evidence provides more evidence of accuracy than of inaccuracy in social stereotypes. The most appropriate generalization based on the evidence is that people’s beliefs about groups are usually moderately to highly accurate, and are occasionally highly inaccurate. […] This pattern of empirical support for moderate to high stereotype accuracy is not unique to any particular target or perceiver group. Accuracy has been found with racial and ethnic groups, gender, occupations, and college groups. […] The pattern of moderate to high stereotype accuracy is not unique to any particular research team or methodology. […] This pattern of moderate to high stereotype accuracy is not unique to the substance of the stereotype belief. It occurs for stereotypes regarding personality traits, demographic characteristics, achievement, attitudes, and behavior. […] The strong form of the exaggeration hypothesis – either defining stereotypes as exaggerations or as claiming that stereotypes usually lead to exaggeration – is not supported by data. Exaggeration does sometimes occur, but it does not appear to occur much more frequently than does accuracy or underestimation, and may even occur less frequently.”

I should perhaps note that this research is closely linked to Funder’s research on personality judgment, which I’ve previously covered on the blog here and here.

ii. I’ve spent approximately 150 hours on the site altogether at this point (having ‘mastered’ ~10,200 words in the process). A few words I’ve recently encountered on the site: Nescience (note to self: if someone calls you ‘nescient’ during a conversation, in many contexts that’ll be an insult, not a compliment) (Related note to self: I should find myself some smarter enemies, who use words like ‘nescient’…), eristic, carrel, oleaginous, decal, gable, epigone, armoire, chalet, cashmere, arrogate, ovine.

iii. why p = .048 should be rare (and why this feels counterintuitive).
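Since the linked piece is about the distribution of p-values, a quick simulation may help make the point concrete. This is my own sketch, not code from the linked post, and the alternative effect size of 2.8 standard errors (roughly 80% power at α = .05) is an assumption chosen purely for illustration:

```python
import math
import random

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic z."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
n = 100_000

# Under the null (true effect 0) the test statistic is N(0, 1) and the
# resulting p-values are uniform on [0, 1].
null_ps = [two_sided_p(random.gauss(0, 1)) for _ in range(n)]

# Under the (assumed, illustrative) alternative the statistic is centred at
# 2.8 standard errors, which gives roughly 80% power at alpha = .05.
alt_ps = [two_sided_p(random.gauss(2.8, 1)) for _ in range(n)]

def frac_in(ps, lo, hi):
    """Fraction of p-values falling in [lo, hi)."""
    return sum(lo <= p < hi for p in ps) / len(ps)

print(frac_in(null_ps, 0.04, 0.05))  # ~0.01: under H0 every width-.01 bin is equally likely
print(frac_in(alt_ps, 0.00, 0.01))   # under H1 most significant results land well below .01
print(frac_in(alt_ps, 0.04, 0.05))   # values just under .05 are comparatively rare
```

The upshot: if there is a real effect, p-values pile up near zero rather than just under .05; if there is no effect, p = .048 is no more likely than any other value. Either way, results sitting right at the threshold should be uncommon.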

iv. A while back I posted a few comments on SSC and I figured I might as well link to them here (at least it’ll make it easier for me to find them later on). Here is where I posted a few comments on a recent study dealing with Ramadan-related IQ effects, a topic which I’ve covered here on the blog before, and here I discuss some of the benefits of not having low self-esteem.

On a completely unrelated note, today I left a comment in a reddit thread about ‘Books That Challenged You / Made You See the World Differently’ which may also be of interest to readers of this blog. I realized while writing the comment that this question is probably getting more and more difficult for me to answer as time goes by. It really all depends upon what part of the world you want to see in a different light; which aspects you’re most interested in. For people wondering about where the books about mathematics and statistics were in that comment (I do like to think these fields play some role in terms of ‘how I see the world‘), I wasn’t really sure which book to include on such topics, if any; I can’t think of any single math or stats textbook that’s dramatically changed the way I thought about the world – to the extent that my knowledge about these topics has changed how I think about the world, it’s been a long drawn-out process.

v. Chess…

People who care the least bit about such things probably already know that a really strong tournament is currently being played in St. Louis, the so-called Sinquefield Cup, so I’m not going to talk about that here (for resources and relevant links, go here).

I talked about the strong rating pools on ICC not too long ago, but one thing I did not mention when discussing this topic back then was that yes, I also occasionally win against some of those grandmasters the rating pool throws at me – at least I’ve won a few times against GMs by now in bullet. I’m aware that for many ‘serious chess players’ bullet ‘doesn’t really count’ because the time dimension is much more important than it is in other chess settings, but to people who think skill doesn’t matter much in bullet I’d say they should have a match with Hikaru Nakamura and see how well they do against him (if you’re interested in how that might turn out, see e.g. this video – and keep in mind that at the beginning of the video Nakamura had already won the first 8 games out of 8 against his opponent, who incidentally is not exactly a beginner). The skill-sets required do not overlap perfectly between bullet and classical time control games, but when I started playing bullet online I quickly realized that good players really require very little time to completely outplay people who just play random moves (fast). Below I have posted a screencap I took while kibitzing a game of one of my former opponents, an anonymous GM from Germany, against whom I currently have a 2.5/6 score, with two wins, one draw, and three losses (see the ‘My score vs CPE’ box).

Kibitzing GMs (click to view full size).

I like to think of a score like this as at least some kind of accomplishment, though admittedly perhaps not a very big one.

Also in chess-related news, I’m currently reading Jesús de la Villa’s 100 Endgames book, which Christof Sielecki has said some very nice things about. A lot of the stuff I’ve encountered so far is stuff I’ve seen before, positions I’ve already encountered and worked on, endgame principles I’m familiar with, etc., but not all of it is known stuff and I really like the structure of the book. There are a lot of pages left, and as it is I’m planning to read this book from cover to cover, which is something I usually do not do when I read chess books (few people do, judging from various comments I’ve seen people make in all kinds of different contexts).

Lastly, a lecture:

August 25, 2015 | Biology, Books, Chess, language, Lectures, Personal, Psychology, Statistics

Photosynthesis in the Marine Environment (III)

This will be my last post about the book. After having spent a few hours on it, I started to realize the post would become very long if I were to cover all the remaining chapters, and so in the end I decided not to discuss material from chapter 12 (‘How some marine plants modify the environment for other organisms’) here, even though I actually thought some of that stuff was quite interesting. I may decide to talk briefly about some of the stuff in that chapter in another blogpost later on (but most likely I won’t). For a few general remarks about the book, see my second post about it.

Some stuff from the last half of the book below:

“The light reactions of marine plants are similar to those of terrestrial plants […], except that pigments other than chlorophylls a and b and carotenoids may be involved in the capturing of light […] and that special arrangements between the two photosystems may be different […]. Similarly, the CO2-fixation and -reduction reactions are also basically the same in terrestrial and marine plants. Perhaps one should put this the other way around: Terrestrial-plant photosynthesis is similar to marine-plant photosynthesis, which is not surprising since plants have evolved in the oceans for 3.4 billion years and their descendants on land for only 350–400 million years. […] In underwater marine environments, the accessibility to CO2 is low mainly because of the low diffusivity of solutes in liquid media, and for CO2 this is exacerbated by today’s low […] ambient CO2 concentrations. Therefore, there is a need for a CCM also in marine plants […] CCMs in cyanobacteria are highly active and accumulation factors (the internal vs. external CO2 concentrations ratio) can be of the order of 800–900 […] CCMs in eukaryotic microalgae are not as effective at raising internal CO2 concentrations as are those in cyanobacteria, but […] microalgal CCMs result in CO2 accumulation factors as high as 180 […] CCMs are present in almost all marine plants. These CCMs are based mainly on various forms of HCO3 [bicarbonate] utilisation, and may raise the intrachloroplast (or, in cyanobacteria, intracellular or intra-carboxysome) CO2 to several-fold that of seawater. Thus, Rubisco is in effect often saturated by CO2, and photorespiration is therefore often absent or limited in marine plants.”

“we view the main difference in photosynthesis between marine and terrestrial plants as the former’s ability to acquire Ci [inorganic carbon] (in most cases HCO3) from the external medium and concentrate it intracellularly in order to optimise their photosynthetic rates or, in some cases, to be able to photosynthesise at all. […] CO2 dissolved in seawater is, under air-equilibrated conditions and given today’s seawater pH, in equilibrium with a >100 times higher concentration of HCO3, and it is therefore not surprising that most marine plants utilise the latter Ci form for their photosynthetic needs. […] any plant that utilises bulk HCO3 from seawater must convert it to CO2 somewhere along its path to Rubisco. This can be done in different ways by different plants and under different conditions”

“The conclusion that macroalgae use HCO3 stems largely from results of experiments in which concentrations of CO2 and HCO3 were altered (chiefly by altering the pH of the seawater) while measuring photosynthetic rates, or where the plants themselves withdrew these Ci forms as they photosynthesised in a closed system as manifested by a pH increase (so-called pH-drift experiments) […] The reason that the pH in the surrounding seawater increases as plants photosynthesise is first that CO2 is in equilibrium with carbonic acid (H2CO3), and so the acidity decreases (i.e. pH rises) as CO2 is used up. At higher pH values (above ∼9), when all the CO2 is used up, then a decrease in HCO3 concentrations will also result in increased pH since the alkalinity is maintained by the formation of OH […] some algae can also give off OH to the seawater medium in exchange for HCO3 uptake, bringing the pH up even further (to >10).”
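To make the carbonate chemistry behind such pH-drift experiments a bit more concrete, here is a minimal sketch of how the equilibrium speciation of dissolved inorganic carbon shifts with pH. The dissociation constants (pK1 ≈ 5.9, pK2 ≈ 9.0) are seawater-like values I am assuming for illustration, not figures taken from the book:

```python
# Equilibrium speciation of dissolved inorganic carbon (Ci) as a function of
# pH. PK1 and PK2 are assumed, seawater-like values for illustration only.
PK1, PK2 = 5.9, 9.0

def speciation(pH):
    """Fractions (CO2, HCO3-, CO3--) of total Ci at the given pH."""
    h = 10.0 ** -pH
    k1, k2 = 10.0 ** -PK1, 10.0 ** -PK2
    denom = h * h + h * k1 + k1 * k2
    return h * h / denom, h * k1 / denom, k1 * k2 / denom

# At a typical seawater pH of ~8.1, HCO3- dominates: its concentration is
# well over 100 times that of dissolved CO2, consistent with the text.
co2, hco3, co3 = speciation(8.1)
print(hco3 / co2)

# As photosynthesis draws down CO2 and the pH climbs past ~9, free CO2 is
# essentially exhausted, and further Ci uptake must come from HCO3-.
co2_hi, _, _ = speciation(9.5)
print(co2_hi)
```

This is why a pH-drift experiment is informative: a plant that could only use dissolved CO2 would stall once the pH rises past ~9, whereas HCO3-users keep drawing the pH upward.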

“Carbonic anhydrase (CA) is a ubiquitous enzyme, found in all organisms investigated so far (from bacteria, through plants, to mammals such as ourselves). This may be seen as remarkable, since its only function is to catalyse the inter-conversion between CO2 and HCO3 in the reaction CO2 + H2O ↔ H2CO3; we can exchange the latter Ci form to HCO3 since this is spontaneously formed by H2CO3 and is present at a much higher equilibrium concentration than the latter. Without CA, the equilibrium between CO2 and HCO3 is a slow process […], but in the presence of CA the reaction becomes virtually instantaneous. Since CO2 and HCO3 generate different pH values of a solution, one of the roles of CA is to regulate intracellular pH […] another […] function is to convert HCO3 to CO2 somewhere en route towards the latter’s final fixation by Rubisco.”
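To get a feel for just how slow the uncatalysed interconversion is, one can compare order-of-magnitude rate constants; the numbers below are rough literature values I'm adding myself, not figures from the book:

```python
import math

# Assumed, order-of-magnitude rate constants (not from the book):
k_uncat = 0.037   # s^-1, uncatalysed CO2 hydration at ~25 C
k_cat = 1.0e6     # s^-1, approximate turnover number of fast carbonic anhydrases

# Characteristic half-life of the uncatalysed hydration step:
half_life_uncat = math.log(2) / k_uncat   # seconds
speedup = k_cat / k_uncat                 # per enzyme active site

print(f"uncatalysed half-life: ~{half_life_uncat:.0f} s")
print(f"CA speed-up per active site: ~{speedup:.0e}-fold")
```

A hydration half-life of tens of seconds is an eternity on the timescale of photosynthetic carbon fixation, which is presumably why the enzyme is as ubiquitous as the quote says.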

“with very few […] exceptions, marine macrophytes are not C4 plants. Also, while a CAM-like [Crassulacean acid metabolism-like, see my previous post about the book for details] feature of nightly uptake of Ci may complement that of the day in some brown algal kelps, this is an exception […] rather than a rule for macroalgae in general. Thus, virtually no marine macroalgae are C4 or CAM plants, and instead their CCMs are dependent on HCO3 utilization, which brings about high concentrations of CO2 in the vicinity of Rubisco. In Ulva, this type of CCM causes the intra-cellular CO2 concentration to be some 200 μM, i.e. ∼15 times higher than that in seawater.”

“deposition of calcium carbonate (CaCO3) as either calcite or aragonite in marine organisms […] can occur within the cells, but for macroalgae it usually occurs outside of the cell membranes, i.e. in the cell walls or other intercellular spaces. The calcification (i.e. CaCO3 formation) can sometimes continue in darkness, but is normally greatly stimulated in light and follows the rate of photosynthesis. During photosynthesis, the uptake of CO2 will lower the total amount of dissolved inorganic carbon (Ci) and, thus, increase the pH in the seawater surrounding the cells, thereby increasing the saturation state of CaCO3. This, in turn, favours calcification […]. Conversely, it has been suggested that calcification might enhance the photosynthetic rate by increasing the rate of conversion of HCO3 to CO2 by lowering the pH. Respiration will reduce calcification rates when released CO2 increases Ci and/but lowers intercellular pH.”
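One way to see why photosynthetic CO2 uptake favours calcification is to look at how the carbonate-ion share of Ci, and hence the CaCO3 saturation state Ω, responds to a pH increase. A rough sketch under the simplifying assumptions stated in the comments (the pK2 value and the assumption of roughly constant [Ca2+] and DIC are mine, not the book's):

```python
def omega_relative(pH_new, pH_ref=8.1, pK2=8.92):
    """Relative change in CaCO3 saturation state Omega when photosynthesis
    raises seawater pH. Assumes [Ca2+] and total Ci change little, so
    Omega scales with the CO3^2- fraction, and uses an assumed seawater
    pK2 ~ 8.92 for the second dissociation of carbonic acid."""
    def co3_fraction(pH):
        H, K2 = 10 ** -pH, 10 ** -pK2
        return K2 / (H + K2)   # CO3^2- share of (HCO3- + CO3^2-); CO2 neglected
    return co3_fraction(pH_new) / co3_fraction(pH_ref)

print(f"pH 8.1 -> 8.6: Omega rises ~{omega_relative(8.6):.1f}-fold")
```

Even a half-unit pH rise in the diffusive boundary layer around a photosynthesising thallus thus gives a substantial push toward CaCO3 precipitation, consistent with light-stimulated calcification tracking photosynthetic rates.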

“photosynthesis is most efficient at very low irradiances and increasingly inefficient as irradiances increase. This is most easily understood if we regard ‘efficiency’ as being dependent on quantum yield: At low ambient irradiances (the light that causes photosynthesis is also called ‘actinic’ light), almost all the photon energy conveyed through the antennae will result in electron flow through (or charge separation at) the reaction centres of photosystem II […]. Another way to put this is that the chances for energy funneled through the antennae to encounter an oxidised (or ‘open’) reaction centre are very high. Consequently, almost all of the photons emitted by the modulated measuring light will be consumed in photosynthesis, and very little of that photon energy will be used for generating fluorescence […] the higher the ambient (or actinic) light, the less efficient is photosynthesis (quantum yields are lower), and the less likely it is for photon energy funnelled through the antennae (including those from the measuring light) to find an open reaction centre, and so the fluorescence generated by the latter light increases […] Alpha (α), which is a measure of the maximal photosynthetic efficiency (or quantum yield, i.e. photosynthetic output per photons received, or absorbed […] by a specific leaf/thallus area), is high in low-light plants because pigment levels (or pigment densities per surface area) are high. In other words, under low-irradiance conditions where few photons are available, the probability that they will all be absorbed is higher in plants with a high density of photosynthetic pigments (or larger ‘antennae’ […]). In yet other words, efficient photon absorption is particularly important at low irradiances, where the higher concentration of pigments potentially optimises photosynthesis in low-light plants.
In high-irradiance environments, where photons are plentiful, their efficient absorption becomes less important, and instead it is reactions downstream of the light reactions that become important in the performance of optimal rates of photosynthesis. The CO2-fixing capability of the enzyme Rubisco, which we have indicated as a bottleneck for the entire photosynthetic apparatus at high irradiances, is indeed generally higher in high-light than in low-light plants because of its higher concentration in the former. So, at high irradiances where the photon flux is not limiting to photosynthetic rates, the activity of Rubisco within the CO2-fixation and -reduction part of photosynthesis becomes limiting, but is optimised in high-light plants by up-regulation of its formation. […] photosynthetic responses have often been explained in terms of adaptation to low light being brought about by alterations in either the number of ‘photosynthetic units’ or their size […] There are good examples of both strategies occurring in different species of algae”.
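The trade-off between a high initial slope (α) and a high light-saturated capacity can be made concrete with a standard photosynthesis-irradiance model; the sketch below uses the Jassby-Platt tanh formulation, with parameter values that are purely hypothetical:

```python
import math

def photosynthesis_rate(I, alpha, P_max):
    """Jassby-Platt P-I curve: P = P_max * tanh(alpha * I / P_max).
    alpha is the initial slope (maximal quantum efficiency); P_max is the
    light-saturated rate, set by downstream capacity such as Rubisco."""
    return P_max * math.tanh(alpha * I / P_max)

# Hypothetical parameter sets for a 'low-light' vs a 'high-light' plant:
low_light = dict(alpha=0.05, P_max=5.0)    # dense pigments, little Rubisco
high_light = dict(alpha=0.02, P_max=15.0)  # lower alpha, more Rubisco

for I in (10, 100, 1000):   # irradiance, umol photons m^-2 s^-1
    p_lo = photosynthesis_rate(I, **low_light)
    p_hi = photosynthesis_rate(I, **high_light)
    print(f"I={I:>4}: low-light plant {p_lo:.2f}, high-light plant {p_hi:.2f}")
```

With these (made-up) numbers the low-light plant outperforms at dim irradiances while the high-light plant wins once light saturates the former's small Rubisco pool, which is exactly the pattern the quoted passage describes.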

“In general, photoinhibition can be defined as the lowering of photosynthetic rates at high irradiances. This is mainly due to the rapid (sometimes within minutes) degradation of […] the D1 protein. […] there are defense mechanisms [in plants] that divert excess light energy to processes different from photosynthesis; these processes thus cause a downregulation of the entire photosynthetic process while protecting the photosynthetic machinery from excess photons that could cause damage. One such process is the xanthophyll cycle. […] It has […] been suggested that the activity of the CCM in marine plants […] can be a source of energy dissipation. If CO2 levels are raised inside the cells to improve Rubisco activity, some of that CO2 can potentially leak out of the cells, and so raising the net energy cost of CO2 accumulation and, thus, using up large amounts of energy […]. Indirect evidence for this comes from experiments in which CCM activity is down-regulated by elevated CO2 […]”
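Photoinhibition is often represented in P-I models by an extra exponential decay term, as in Platt-style formulations; a sketch with hypothetical parameters (mine, not the book's) showing how the rate peaks at intermediate irradiance and then declines:

```python
import math

def p_with_photoinhibition(I, alpha=0.05, beta=0.002, P_s=6.0):
    """Platt-style P-I curve with a photoinhibition parameter beta:
    P = P_s * (1 - exp(-alpha*I/P_s)) * exp(-beta*I/P_s).
    At high I the exp(-beta*I/P_s) factor pulls rates back down.
    All parameter values here are hypothetical."""
    return P_s * (1 - math.exp(-alpha * I / P_s)) * math.exp(-beta * I / P_s)

for I in (100, 500, 2000):   # umol photons m^-2 s^-1
    print(f"I={I:>4}: P={p_with_photoinhibition(I):.2f}")
# The computed rate rises, peaks at intermediate irradiance, then declines.
```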

“Photoinhibition is often divided into dynamic and chronic types, i.e. the former is quickly remedied (e.g. during the day […]) while the latter is more persistent (e.g. over seasons) […] the mechanisms for down-regulating photosynthesis by diverting photon energies and the reducing power of electrons away from the photosynthetic systems, including the possibility of detoxifying oxygen radicals, is important in high-light plants (that experience high irradiances during midday) as well as in those plants that do see significant fluctuations in irradiance throughout the day (e.g. intertidal benthic plants). While low-light plants may lack those systems of down-regulation, one must remember that they do not live in environments of high irradiances, and so seldom or never experience high irradiances. […] If plants had a mind, one could say that it was worth it for them to invest in pigments, but unnecessary to invest in high amounts of Rubisco, when growing under low-light conditions, and necessary for high-light growing plants to invest in Rubisco, but not in pigments. Evolution has, of course, shaped these responses”.

“shallow-growing corals […] show two types of photoinhibition: a dynamic type that remedies itself at the end of each day and a more chronic type that persists over longer time periods. […] Bleaching of corals occurs when they expel their zooxanthellae to the surrounding water, after which they either die or acquire new zooxanthellae of other types (or clades) that are better adapted to the changes in the environment that caused the bleaching. […] Active Ci acquisition mechanisms, whether based on localised active H+ extrusion and acidification and enhanced CO2 supply, or on active transport of HCO3, are all energy requiring. As a consequence it is not surprising that the CCM activity is decreased at lower light levels […] a whole spectrum of light-responses can be found in seagrasses, and those are often in co-ordinance with the average daily irradiances where they grow. […] The function of chloroplast clumping in Halophila stipulacea appears to be protection of the chloroplasts from high irradiances. Thus, a few peripheral chloroplasts ‘sacrifice’ themselves for the good of many others within the clump that will be exposed to lower irradiances. […] While water is an effective filter of UV radiation (UVR), many marine organisms are sensitive to UVR and have devised ways to protect themselves against this harmful radiation. These ways include the production of UV-filtering compounds called mycosporine-like amino acids (MAAs), which is common also in seagrasses”.

“Many algae and seagrasses grow in the intertidal and are, accordingly, exposed to air during various parts of the day. On the one hand, this makes them amenable to using atmospheric CO2, the diffusion rate of which is some 10 000 times higher in air than in water. […] desiccation is […] the big drawback when growing in the intertidal, and excessive desiccation will lead to death. When some of the green macroalgae left the seas and formed terrestrial plants some 400 million years ago (the latter of which then ‘invaded’ Earth), there was a need for measures to evolve that on the one side ensured a water supply to the above-ground parts of the plants (i.e. roots) and, on the other, hindered the water entering the plants to evaporate (i.e. a water-impermeable cuticle). Macroalgae lack those barriers against losing intracellular water, and are thus more prone to desiccation, the rate of which depends on external factors such as heat and humidity and internal factors such as thallus thickness. […] the mechanisms of desiccation tolerance in macroalgae is not well understood on the cellular level […] there seems to be a general correlation between the sensitivity of the photosynthetic apparatus (more than the respiratory one) to desiccation and the occurrence of macroalgae along a vertical gradient in the intertidal: the less sensitive (i.e. the more tolerant), the higher up the algae can grow. This is especially true if the sensitivity to desiccation is measured as a function of the ability to regain photosynthetic rates following rehydration during re-submergence. While this correlation exists, the mechanism of protecting the photosynthetic system against desiccation is largely unknown”.
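The practical consequence of that ~10 000-fold difference is easiest to see as a diffusion time across a boundary layer. A rough calculation, with assumed textbook diffusion coefficients for CO2 and an illustrative 1 mm boundary-layer thickness (all numbers my own, not the book's):

```python
# Characteristic diffusion time t ~ L^2 / (2 * D) across a layer of thickness L.
D_air = 1.6e-5     # m^2 s^-1, assumed diffusion coefficient of CO2 in air
D_water = 1.6e-9   # m^2 s^-1, assumed value in water (~10,000-fold lower)
L = 1.0e-3         # 1 mm diffusive boundary layer (illustrative)

t_air = L ** 2 / (2 * D_air)
t_water = L ** 2 / (2 * D_water)

print(f"across 1 mm: ~{t_air:.3f} s in air vs ~{t_water:.0f} s in water")
```

Hundredths of a second versus several minutes across the same distance: hence the book's point that emersion gives intertidal plants easy access to atmospheric CO2, for as long as they can tolerate the drying out.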

July 28, 2015 Posted by | Biology, Books, Botany, Chemistry, Evolutionary biology, Microbiology