Econstudentlog

Physical chemistry

This is a good book; I really liked it, just as I really liked the other book I have read in the series by the same author, the one about the laws of thermodynamics (blog coverage here). I know much, much more about physics than I do about chemistry, and even though some of it was review, I learned a lot from this one. Recommended, certainly if you find the quotes below interesting. As usual, I’ve added some observations from the book, as well as some links to topics/people/etc. covered or mentioned in it, below.

Some quotes:

“Physical chemists pay a great deal of attention to the electrons that surround the nucleus of an atom: it is here that the chemical action takes place and the element expresses its chemical personality. […] Quantum mechanics plays a central role in accounting for the arrangement of electrons around the nucleus. The early ‘Bohr model’ of the atom, […] with electrons in orbits encircling the nucleus like miniature planets and widely used in popular depictions of atoms, is wrong in just about every respect—but it is hard to dislodge from the popular imagination. The quantum mechanical description of atoms acknowledges that an electron cannot be ascribed to a particular path around the nucleus, that the planetary ‘orbits’ of Bohr’s theory simply don’t exist, and that some electrons do not circulate around the nucleus at all. […] Physical chemists base their understanding of the electronic structures of atoms on Schrödinger’s model of the hydrogen atom, which was formulated in 1926. […] An atom is often said to be mostly empty space. That is a remnant of Bohr’s model in which a point-like electron circulates around the nucleus; in the Schrödinger model, there is no empty space, just a varying probability of finding the electron at a particular location.”

“No more than two electrons may occupy any one orbital, and if two do occupy that orbital, they must spin in opposite directions. […] this form of the principle [the Pauli exclusion principle – US] […] is adequate for many applications in physical chemistry. At its very simplest, the principle rules out all the electrons of an atom (other than atoms of one-electron hydrogen and two-electron helium) having all their electrons in the 1s-orbital. Lithium, for instance, has three electrons: two occupy the 1s orbital, but the third cannot join them, and must occupy the next higher-energy orbital, the 2s-orbital. With that point in mind, something rather wonderful becomes apparent: the structure of the Periodic Table of the elements unfolds, the principal icon of chemistry. […] The first electron can enter the 1s-orbital, and helium’s (He) second electron can join it. At that point, the orbital is full, and lithium’s (Li) third electron must enter the next higher orbital, the 2s-orbital. The next electron, for beryllium (Be), can join it, but then it too is full. From that point on the next six electrons can enter in succession the three 2p-orbitals. After those six are present (at neon, Ne), all the 2p-orbitals are full and the eleventh electron, for sodium (Na), has to enter the 3s-orbital. […] Similar reasoning accounts for the entire structure of the Table, with elements in the same group all having analogous electron arrangements and each successive row (‘period’) corresponding to the next outermost shell of orbitals.”
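(The aufbau/Pauli bookkeeping described above is mechanical enough to put into a few lines of code. Below is a minimal Python sketch of my own – not from the book – which derives ground-state configurations for the first 18 elements from nothing but the standard filling order and the two-electrons-per-orbital rule.)

```python
# Fill orbitals in order of increasing energy, at most 2 electrons per orbital
# (s subshells contain 1 orbital, p subshells contain 3), then read off the
# configuration. Covers the first 18 elements only.
ORBITALS = [("1s", 1), ("2s", 1), ("2p", 3), ("3s", 1), ("3p", 3)]

def configuration(n_electrons):
    """Return the ground-state electron configuration as a string."""
    parts = []
    remaining = n_electrons
    for label, n_orbitals in ORBITALS:
        capacity = 2 * n_orbitals          # Pauli: two electrons per orbital
        filled = min(remaining, capacity)
        if filled == 0:
            break
        parts.append(f"{label}^{filled}")
        remaining -= filled
    return " ".join(parts)

print(configuration(3))    # lithium:  1s^2 2s^1
print(configuration(11))   # sodium:   1s^2 2s^2 2p^6 3s^1
```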

“[O]n crossing the [Periodic] Table from left to right, atoms become smaller: even though they have progressively more electrons, the nuclear charge increases too, and draws the clouds in to itself. On descending a group, atoms become larger because in successive periods new outermost shells are started (as in going from lithium to sodium) and each new coating of cloud makes the atom bigger […] the ionization energy [is] the energy needed to remove one or more electrons from the atom. […] The ionization energy more or less follows the trend in atomic radii but in an opposite sense because the closer an electron lies to the positively charged nucleus, the harder it is to remove. Thus, ionization energy increases from left to right across the Table as the atoms become smaller. It decreases down a group because the outermost electron (the one that is most easily removed) is progressively further from the nucleus. […] the electron affinity [is] the energy released when an electron attaches to an atom. […] Electron affinities are highest on the right of the Table […] An ion is an electrically charged atom. That charge comes about either because the neutral atom has lost one or more of its electrons, in which case it is a positively charged cation […] or because it has captured one or more electrons and has become a negatively charged anion. […] Elements on the left of the Periodic Table, with their low ionization energies, are likely to lose electrons and form cations; those on the right, with their high electron affinities, are likely to acquire electrons and form anions. […] ionic bonds […] form primarily between atoms on the left and right of the Periodic Table.”

“Although the Schrödinger equation is too difficult to solve for molecules, powerful computational procedures have been developed by theoretical chemists to arrive at numerical solutions of great accuracy. All the procedures start out by building molecular orbitals from the available atomic orbitals and then setting about finding the best formulations. […] Depictions of electron distributions in molecules are now commonplace and very helpful for understanding the properties of molecules. It is particularly relevant to the development of new pharmacologically active drugs, where electron distributions play a central role […] Drug discovery, the identification of pharmacologically active species by computation rather than in vivo experiment, is an important target of modern computational chemistry.”

“Work […] involves moving against an opposing force; heat […] is the transfer of energy that makes use of a temperature difference. […] the internal energy of a system that is isolated from external influences does not change. That is the First Law of thermodynamics. […] A system possesses energy, it does not possess work or heat (even if it is hot). Work and heat are two different modes for the transfer of energy into or out of a system. […] if you know the internal energy of a system, then you can calculate its enthalpy simply by adding to U the product of pressure and volume of the system (H = U + pV). The significance of the enthalpy […] is that a change in its value is equal to the output of energy as heat that can be obtained from the system provided it is kept at constant pressure. For instance, if the enthalpy of a system falls by 100 joules when it undergoes a certain change (such as a chemical reaction), then we know that 100 joules of energy can be extracted as heat from the system, provided the pressure is constant.”
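(To make the enthalpy definition concrete, here is a trivial Python illustration of my own, with made-up numbers: H = U + pV, and a fall in H at constant pressure equals the heat the system can release.)

```python
def enthalpy(U, p, V):
    """H = U + pV, everything in SI units (joules, pascals, cubic metres)."""
    return U + p * V

# A system whose internal energy is 1000 J in a 1-litre volume at 1 atm:
H_before = enthalpy(1000.0, 101_325.0, 1e-3)

# After a reaction at the same (constant) pressure, U and V have changed:
H_after = enthalpy(920.0, 101_325.0, 0.8e-3)

# The enthalpy drop is the heat released at constant pressure:
print(f"heat released: {H_before - H_after:.1f} J")   # ~100.3 J
```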

“In the old days of physical chemistry (well into the 20th century), the enthalpy changes were commonly estimated by noting which bonds are broken in the reactants and which are formed to make the products, so A → B might be the bond-breaking step and B → C the new bond-formation step, each with enthalpy changes calculated from knowledge of the strengths of the old and new bonds. That procedure, while often a useful rule of thumb, often gave wildly inaccurate results because bonds are sensitive entities with strengths that depend on the identities and locations of the other atoms present in molecules. Computation now plays a central role: it is now routine to be able to calculate the difference in energy between the products and reactants, especially if the molecules are isolated as a gas, and that difference easily converted to a change of enthalpy. […] Enthalpy changes are very important for a rational discussion of changes in physical state (vaporization and freezing, for instance) […] If we know the enthalpy change taking place during a reaction, then provided the process takes place at constant pressure we know how much energy is released as heat into the surroundings. If we divide that heat transfer by the temperature, then we get the associated entropy change in the surroundings. […] provided the pressure and temperature are constant, a spontaneous change corresponds to a decrease in Gibbs energy. […] the chemical potential can be thought of as the Gibbs energy possessed by a standard-size block of sample. (More precisely, for a pure substance the chemical potential is the molar Gibbs energy, the Gibbs energy per mole of atoms or molecules.)”
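(Continuing the toy example: the quote’s chain from heat released at constant pressure, to the entropy change of the surroundings, to the sign of the Gibbs energy change can be written out in a few lines. The numbers below are mine, chosen only for illustration.)

```python
T = 298.15            # temperature in kelvin
dH = -100.0           # enthalpy change of the system in joules (exothermic)
dS_system = -0.05     # entropy change of the system in J/K (assumed value)

dS_surroundings = -dH / T              # heat released / temperature
dG = dH - T * dS_system                # Gibbs energy change: dG = dH - T*dS

print(f"entropy gain of surroundings: {dS_surroundings:.3f} J/K")
print(f"Gibbs energy change: {dG:.1f} J -> spontaneous: {dG < 0}")
```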

“There are two kinds of work. One kind is the work of expansion that occurs when a reaction generates a gas and pushes back the atmosphere (perhaps by pressing out a piston). That type of work is called ‘expansion work’. However, a chemical reaction might do work other than by pushing out a piston or pushing back the atmosphere. For instance, it might do work by driving electrons through an electric circuit connected to a motor. This type of work is called ‘non-expansion work’. […] a change in the Gibbs energy of a system at constant temperature and pressure is equal to the maximum non-expansion work that can be done by the reaction. […] the link of thermodynamics with biology is that one chemical reaction might do the non-expansion work of building a protein from amino acids. Thus, a knowledge of the Gibbs energy changes accompanying metabolic processes is very important in bioenergetics, and much more important than knowing the enthalpy changes alone (which merely indicate a reaction’s ability to keep us warm).”

“[T]he probability that a molecule will be found in a state of particular energy falls off rapidly with increasing energy, so most molecules will be found in states of low energy and very few will be found in states of high energy. […] If the temperature is low, then the distribution declines so rapidly that only the very lowest levels are significantly populated. If the temperature is high, then the distribution falls off very slowly with increasing energy, and many high-energy states are populated. If the temperature is zero, the distribution has all the molecules in the ground state. If the temperature is infinite, all available states are equally populated. […] temperature […] is the single, universal parameter that determines the most probable distribution of molecules over the available states.”
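(The Boltzmann distribution behaviour described in this quote is easy to reproduce numerically. A small Python sketch of my own, using four evenly spaced energy levels:)

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def populations(energies, T):
    """Normalized Boltzmann populations p_i ∝ exp(-E_i/kT); T = 0 handled as the limit."""
    if T == 0:
        return [1.0 if E == min(energies) else 0.0 for E in energies]
    weights = [math.exp(-E / (k * T)) for E in energies]
    total = sum(weights)
    return [w / total for w in weights]

levels = [i * 2e-21 for i in range(4)]       # four levels, 2 zJ apart
for T in (0, 100, 300, 1000):
    print(T, [round(p, 3) for p in populations(levels, T)])
# Low T: nearly everything sits in the ground state; high T: the populations
# of the levels even out, just as the quote describes.
```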

“Mixing adds disorder and increases the entropy of the system and therefore lowers the Gibbs energy […] In the absence of mixing, a reaction goes to completion; when mixing of reactants and products is taken into account, equilibrium is reached when both are present […] Statistical thermodynamics, through the Boltzmann distribution and its dependence on temperature, allows physical chemists to understand why in some cases the equilibrium shifts towards reactants (which is usually unwanted) or towards products (which is normally wanted) as the temperature is raised. A rule of thumb […] is provided by a principle formulated by Henri Le Chatelier […] that a system at equilibrium responds to a disturbance by tending to oppose its effect. Thus, if a reaction releases energy as heat (is ‘exothermic’), then raising the temperature will oppose the formation of more products; if the reaction absorbs energy as heat (is ‘endothermic’), then raising the temperature will encourage the formation of more product.”

“Model building pervades physical chemistry […] some hold that the whole of science is based on building models of physical reality; much of physical chemistry certainly is.”

“For reasonably light molecules (such as the major constituents of air, N2 and O2) at room temperature, the molecules are whizzing around at an average speed of about 500 m/s (about 1000 mph). That speed is consistent with what we know about the propagation of sound, the speed of which is about 340 m/s through air: for sound to propagate, molecules must adjust their position to give a wave of undulating pressure, so the rate at which they do so must be comparable to their average speeds. […] a typical N2 or O2 molecule in air makes a collision every nanosecond and travels about 1000 molecular diameters between collisions. To put this scale into perspective: if a molecule is thought of as being the size of a tennis ball, then it travels about the length of a tennis court between collisions. Each molecule makes about a billion collisions a second.”
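(The numbers in that quote can be checked with standard kinetic-theory formulas. In the sketch below – mine, not the book’s – the molecular diameter is an assumed round value; with it the mean free path comes out at a couple of hundred molecular diameters, the same order of magnitude as the book’s figure, while the mean speed and collision rate match the quote well.)

```python
import math

k = 1.380649e-23      # Boltzmann constant, J/K
T = 298.0             # room temperature, K
p = 101_325.0         # 1 atm, Pa
m = 28 * 1.6605e-27   # mass of an N2 molecule, kg
d = 3.7e-10           # rough N2 molecular diameter, m (assumed value)

# Mean speed: sqrt(8kT/(pi*m)); mean free path: kT/(sqrt(2)*pi*d^2*p)
v_mean = math.sqrt(8 * k * T / (math.pi * m))
mfp = k * T / (math.sqrt(2) * math.pi * d**2 * p)

print(f"mean speed: {v_mean:.0f} m/s")                          # ~475 m/s
print(f"mean free path: {mfp * 1e9:.0f} nm (~{mfp / d:.0f} diameters)")
print(f"collisions per second: {v_mean / mfp:.1e}")             # ~10^9, i.e. ~1 per ns
```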

“X-ray diffraction makes use of the fact that electromagnetic radiation (which includes X-rays) consists of waves that can interfere with one another and give rise to regions of enhanced and diminished intensity. This so-called ‘diffraction pattern’ is characteristic of the object in the path of the rays, and mathematical procedures can be used to interpret the pattern in terms of the object’s structure. Diffraction occurs when the wavelength of the radiation is comparable to the dimensions of the object. X-rays have wavelengths comparable to the separation of atoms in solids, so are ideal for investigating their arrangement.”
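(The quote stops short of giving the standard quantitative statement, Bragg’s law, nλ = 2d·sin θ, which relates the diffraction angle to the spacing of atomic planes. A quick sketch of my own, with illustrative numbers:)

```python
import math

def lattice_spacing(wavelength, theta_deg, n=1):
    """Bragg's law n*lambda = 2*d*sin(theta), solved for the plane spacing d (metres)."""
    return n * wavelength / (2 * math.sin(math.radians(theta_deg)))

# Cu K-alpha X-rays (~1.54 Å) diffracting at a first-order angle of 22.3 degrees:
d = lattice_spacing(1.54e-10, 22.3)
print(f"plane spacing: {d * 1e10:.2f} Å")   # ~2.0 Å, i.e. atomic dimensions
```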

“For most liquids the sample contracts when it freezes, so […] the temperature does not need to be lowered so much for freezing to occur. That is, the application of pressure raises the freezing point. Water, as in most things, is anomalous, and ice is less dense than liquid water, so water expands when it freezes […] when two gases are allowed to occupy the same container they invariably mix and each spreads uniformly through it. […] the quantity of gas that dissolves in any liquid is proportional to the pressure of the gas. […] When the temperature of [a] liquid is raised, it is easier for a dissolved molecule to gather sufficient energy to escape back up into the gas; the rate of impacts from the gas is largely unchanged. The outcome is a lowering of the concentration of dissolved gas at equilibrium. Thus, gases appear to be less soluble in hot water than in cold. […] the presence of dissolved substances affects the properties of solutions. For instance, the everyday experience of spreading salt on roads to hinder the formation of ice makes use of the lowering of freezing point of water when a salt is present. […] the boiling point is raised by the presence of a dissolved substance [whereas] the freezing point […] is lowered by the presence of a solute.”

“When a liquid and its vapour are present in a closed container the vapour exerts a characteristic pressure (when the escape of molecules from the liquid matches the rate at which they splash back down into it […][)] This characteristic pressure depends on the temperature and is called the ‘vapour pressure’ of the liquid. When a solute is present, the vapour pressure at a given temperature is lower than that of the pure liquid […] The extent of lowering is summarized by yet another limiting law of physical chemistry, ‘Raoult’s law’ [which] states that the vapour pressure of a solvent or of a component of a liquid mixture is proportional to the proportion of solvent or liquid molecules present. […] Osmosis [is] the tendency of solvent molecules to flow from the pure solvent to a solution separated from it by a [semi-]permeable membrane […] The entropy when a solute is present in a solvent is higher than when the solute is absent, so an increase in entropy, and therefore a spontaneous process, is achieved when solvent flows through the membrane from the pure liquid into the solution. The tendency for this flow to occur can be overcome by applying pressure to the solution, and the minimum pressure needed to overcome the tendency to flow is called the ‘osmotic pressure’. If one solution is put into contact with another through a semipermeable membrane, then there will be no net flow if they exert the same osmotic pressures and are ‘isotonic’.”
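(Raoult’s law as stated above is essentially a one-liner. A small sketch of my own, with illustrative numbers for water and sucrose:)

```python
def vapour_pressure(p_pure, n_solvent, n_solute):
    """Raoult's law for an ideal solution: p = x_solvent * p_pure."""
    x_solvent = n_solvent / (n_solvent + n_solute)
    return x_solvent * p_pure

# Pure water at 25 °C has a vapour pressure of about 3.17 kPa; dissolve
# 1 mole of sucrose in 50 moles (~0.9 kg) of water:
print(f"{vapour_pressure(3.17, 50, 1):.2f} kPa")   # ~3.11 kPa, slightly lowered
```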

“Broadly speaking, the reaction quotient [‘Q’] is the ratio of concentrations, with product concentrations divided by reactant concentrations. It takes into account how the mingling of the reactants and products affects the total Gibbs energy of the mixture. The value of Q that corresponds to the minimum in the Gibbs energy […] is called the equilibrium constant and denoted K. The equilibrium constant, which is characteristic of a given reaction and depends on the temperature, is central to many discussions in chemistry. When K is large (1000, say), we can be reasonably confident that the equilibrium mixture will be rich in products; if K is small (0.001, say), then there will be hardly any products present at equilibrium and we should perhaps look for another way of making them. If K is close to 1, then both reactants and products will be abundant at equilibrium and will need to be separated. […] Equilibrium constants vary with temperature but not […] with pressure. […] van’t Hoff’s equation implies that if the reaction is strongly exothermic (releases a lot of energy as heat when it takes place), then the equilibrium constant decreases sharply as the temperature is raised. The opposite is true if the reaction is strongly endothermic (absorbs a lot of energy as heat). […] Typically it is found that the rate of a reaction [how fast it progresses] decreases as it approaches equilibrium. […] Most reactions go faster when the temperature is raised. […] reactions with high activation energies proceed slowly at low temperatures but respond sharply to changes of temperature. […] The surface area exposed by a catalyst is important for its function, for it is normally the case that the greater that area, the more effective is the catalyst.”
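(Two of the relations behind this quote – the equilibrium constant from the standard Gibbs energy change, and the van’t Hoff equation for its temperature dependence – are compact enough to check numerically. The sketch and numbers below are mine, not the book’s.)

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def K_from_gibbs(dG0, T):
    """Equilibrium constant from the standard Gibbs energy change: K = exp(-dG0/(R*T))."""
    return math.exp(-dG0 / (R * T))

def K_at_new_T(K1, T1, T2, dH0):
    """van't Hoff: ln(K2/K1) = -(dH0/R) * (1/T2 - 1/T1)."""
    return K1 * math.exp(-(dH0 / R) * (1 / T2 - 1 / T1))

K1 = K_from_gibbs(-17_000.0, 298.0)       # dG0 = -17 kJ/mol gives K ~ 1000
print(f"K at 298 K: {K1:.0f}")

# A strongly exothermic reaction (dH0 = -100 kJ/mol) heated to 350 K:
print(f"K at 350 K: {K_at_new_T(K1, 298.0, 350.0, -100_000.0):.2f}")
# K drops sharply on heating, as Le Chatelier and van't Hoff predict.
```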

Links:

John Dalton.
Atomic orbital.
Electron configuration.
s, p, d, f orbitals.
Computational chemistry.
Atomic radius.
Covalent bond.
Gilbert Lewis.
Valence bond theory.
Molecular orbital theory.
Orbital hybridisation.
Bonding and antibonding orbitals.
Schrödinger equation.
Density functional theory.
Chemical thermodynamics.
Laws of thermodynamics/Zeroth law/First law/Second law/Third law.
Conservation of energy.
Thermochemistry.
Bioenergetics.
Spontaneous processes.
Entropy.
Rudolf Clausius.
Chemical equilibrium.
Heat capacity.
Compressibility.
Statistical thermodynamics/statistical mechanics.
Boltzmann distribution.
State of matter/gas/liquid/solid.
Perfect gas/Ideal gas law.
Robert Boyle/Joseph Louis Gay-Lussac/Jacques Charles/Amedeo Avogadro.
Equation of state.
Kinetic theory of gases.
Van der Waals equation of state.
Maxwell–Boltzmann distribution.
Thermal conductivity.
Viscosity.
Nuclear magnetic resonance.
Debye–Hückel equation.
Ionic solids.
Catalysis.
Supercritical fluid.
Liquid crystal.
Graphene.
Benoît Paul Émile Clapeyron.
Phase (matter)/phase diagram/Gibbs’ phase rule.
Ideal solution/regular solution.
Henry’s law.
Chemical kinetics.
Electrochemistry.
Rate equation/First order reactions/Second order reactions.
Rate-determining step.
Arrhenius equation.
Collision theory.
Diffusion-controlled and activation-controlled reactions.
Transition state theory.
Photochemistry/fluorescence/phosphorescence/photoexcitation.
Photosynthesis.
Redox reactions.
Electrochemical cell.
Fuel cell.
Reaction dynamics.
Spectroscopy/emission spectroscopy/absorption spectroscopy/Raman spectroscopy.
Raman effect.
Magnetic resonance imaging.
Fourier-transform spectroscopy.
Electron paramagnetic resonance.
Mass spectrum.
Electron spectroscopy for chemical analysis.
Scanning tunneling microscope.
Chemisorption/physisorption.

October 5, 2017 | Biology, Books, Chemistry, Pharmacology, Physics

Earth System Science

I decided not to rate this book. Some parts are great; some parts I didn’t think were very good.

I’ve added some quotes and links below. First a few links (I’ve tried not to add links here which I’ve also included in the quotes below):

Carbon cycle.
Origin of water on Earth.
Gaia hypothesis.
Albedo (climate and weather).
Snowball Earth.
Carbonate–silicate cycle.
Carbonate compensation depth.
Isotope fractionation.
CLAW hypothesis.
Mass-independent fractionation.
δ13C.
Great Oxygenation Event.
Acritarch.
Grypania.
Neoproterozoic.
Rodinia.
Sturtian glaciation.
Marinoan glaciation.
Ediacaran biota.
Cambrian explosion.
Quaternary.
Medieval Warm Period.
Little Ice Age.
Eutrophication.
Methane emissions.
Keeling curve.
CO2 fertilization effect.
Acid rain.
Ocean acidification.
Earth systems models.
Clausius–Clapeyron relation.
Thermohaline circulation.
Cryosphere.
The limits to growth.
Exoplanet Biosignature Gases.
Transiting Exoplanet Survey Satellite (TESS).
James Webb Space Telescope.
Habitable zone.
Kepler-186f.

A few quotes from the book:

“The scope of Earth system science is broad. It spans 4.5 billion years of Earth history, how the system functions now, projections of its future state, and ultimate fate. […] Earth system science is […] a deeply interdisciplinary field, which synthesizes elements of geology, biology, chemistry, physics, and mathematics. It is a young, integrative science that is part of a wider 21st-century intellectual trend towards trying to understand complex systems, and predict their behaviour. […] A key part of Earth system science is identifying the feedback loops in the Earth system and understanding the behaviour they can create. […] In systems thinking, the first step is usually to identify your system and its boundaries. […] what is part of the Earth system depends on the timescale being considered. […] The longer the timescale we look over, the more we need to include in the Earth system. […] for many Earth system scientists, the planet Earth is really comprised of two systems — the surface Earth system that supports life, and the great bulk of the inner Earth underneath. It is the thin layer of a system at the surface of the Earth […] that is the subject of this book.”

“Energy is in plentiful supply from the Sun, which drives the water cycle and also fuels the biosphere, via photosynthesis. However, the surface Earth system is nearly closed to materials, with only small inputs to the surface from the inner Earth. Thus, to support a flourishing biosphere, all the elements needed by life must be efficiently recycled within the Earth system. This in turn requires energy, to transform materials chemically and to move them physically around the planet. The resulting cycles of matter between the biosphere, atmosphere, ocean, land, and crust are called global biogeochemical cycles — because they involve biological, geological, and chemical processes. […] The global biogeochemical cycling of materials, fuelled by solar energy, has transformed the Earth system. […] It has made the Earth fundamentally different from its state before life and from its planetary neighbours, Mars and Venus. Through cycling the materials it needs, the Earth’s biosphere has bootstrapped itself into a much more productive state.”

“Each major element important for life has its own global biogeochemical cycle. However, every biogeochemical cycle can be conceptualized as a series of reservoirs (or ‘boxes’) of material connected by fluxes (or flows) of material between them. […] When a biogeochemical cycle is in steady state, the fluxes in and out of each reservoir must be in balance. This allows us to define additional useful quantities. Notably, the amount of material in a reservoir divided by the exchange flux with another reservoir gives the average ‘residence time’ of material in that reservoir with respect to the chosen process of exchange. For example, there are around 7 × 10¹⁶ moles of carbon dioxide (CO2) in today’s atmosphere, and photosynthesis removes around 9 × 10¹⁵ moles of CO2 per year, giving each molecule of CO2 a residence time of roughly eight years in the atmosphere before it is taken up, somewhere in the world, by photosynthesis. […] There are 3.8 × 10¹⁹ moles of molecular oxygen (O2) in today’s atmosphere, and oxidative weathering removes around 1 × 10¹³ moles of O2 per year, giving oxygen a residence time of around four million years with respect to removal by oxidative weathering. This makes the oxygen cycle […] a geological timescale cycle.”
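(The residence-time arithmetic in that passage is worth spelling out; the sketch below is mine, but the numbers are the ones given in the quote.)

```python
def residence_time(reservoir, flux):
    """Average residence time = reservoir size / exchange flux."""
    return reservoir / flux

co2_atmosphere = 7e16          # moles of CO2 in the atmosphere
photosynthesis = 9e15          # moles of CO2 removed per year

o2_atmosphere = 3.8e19         # moles of O2 in the atmosphere
oxidative_weathering = 1e13    # moles of O2 removed per year

print(f"CO2 vs photosynthesis: {residence_time(co2_atmosphere, photosynthesis):.1f} years")
print(f"O2 vs weathering: {residence_time(o2_atmosphere, oxidative_weathering):.2e} years")
# ~7.8 years for CO2; ~3.8 million years for O2 - a geological-timescale cycle.
```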

“The water cycle is the physical circulation of water around the planet, between the ocean (where 97 per cent is stored), atmosphere, ice sheets, glaciers, sea-ice, freshwaters, and groundwater. […] To change the phase of water from solid to liquid or liquid to gas requires energy, which in the climate system comes from the Sun. Equally, when water condenses from gas to liquid or freezes from liquid to solid, energy is released. Solar heating drives evaporation from the ocean. This is responsible for supplying about 90 per cent of the water vapour to the atmosphere, with the other 10 per cent coming from evaporation on the land and freshwater surfaces (and sublimation of ice and snow directly to vapour). […] The water cycle is intimately connected to other biogeochemical cycles […]. Many compounds are soluble in water, and some react with water. This makes the ocean a key reservoir for several essential elements. It also means that rainwater can scavenge soluble gases and aerosols out of the atmosphere. When rainwater hits the land, the resulting solution can chemically weather rocks. Silicate weathering in turn helps keep the climate in a state where water is liquid.”

“In modern terms, plants acquire their carbon from carbon dioxide in the atmosphere, add electrons derived from water molecules to the carbon, and emit oxygen to the atmosphere as a waste product. […] In energy terms, global photosynthesis today captures about 130 terawatts (1 TW = 10¹² W) of solar energy in chemical form — about half of it in the ocean and about half on land. […] All the breakdown pathways for organic carbon together produce a flux of carbon dioxide back to the atmosphere that nearly balances photosynthetic uptake […] The surface recycling system is almost perfect, but a tiny fraction (about 0.1 per cent) of the organic carbon manufactured in photosynthesis escapes recycling and is buried in new sedimentary rocks. This organic carbon burial flux leaves an equivalent amount of oxygen gas behind in the atmosphere. Hence the burial of organic carbon represents the long-term source of oxygen to the atmosphere. […] the Earth’s crust has much more oxygen trapped in rocks in the form of oxidized iron and sulphur, than it has organic carbon. This tells us that there has been a net source of oxygen to the crust over Earth history, which must have come from the loss of hydrogen to space.”

“The oxygen cycle is relatively simple, because the reservoir of oxygen in the atmosphere is so massive that it dwarfs the reservoirs of organic carbon in vegetation, soils, and the ocean. Hence oxygen cannot get used up by the respiration or combustion of organic matter. Even the combustion of all known fossil fuel reserves can only put a small dent in the much larger reservoir of atmospheric oxygen (there are roughly 4 × 10¹⁷ moles of fossil fuel carbon, which is only about 1 per cent of the O2 reservoir). […] Unlike oxygen, the atmosphere is not the major surface reservoir of carbon. The amount of carbon in global vegetation is comparable to that in the atmosphere and the amount of carbon in soils (including permafrost) is roughly four times that in the atmosphere. Even these reservoirs are dwarfed by the ocean, which stores forty-five times as much carbon as the atmosphere, thanks to the fact that CO2 reacts with seawater. […] The exchange of carbon between the atmosphere and the land is largely biological, involving photosynthetic uptake and release by aerobic respiration (and, to a lesser extent, fires). […] Remarkably, when we look over Earth history there are fluctuations in the isotopic composition of carbonates, but no net drift up or down. This suggests that there has always been roughly one-fifth of carbon being buried in organic form and the other four-fifths as carbonate rocks. Thus, even on the early Earth, the biosphere was productive enough to support a healthy organic carbon burial flux.”

“The two most important nutrients for life are phosphorus and nitrogen, and they have very different biogeochemical cycles […] The largest reservoir of nitrogen is in the atmosphere, whereas the heavier phosphorus has no significant gaseous form. Phosphorus thus presents a greater recycling challenge for the biosphere. All phosphorus enters the surface Earth system from the chemical weathering of rocks on land […]. Phosphorus is concentrated in rocks in grains or veins of the mineral apatite. Natural selection has made plants on land and their fungal partners […] very effective at acquiring phosphorus from rocks, by manufacturing and secreting a range of organic acids that dissolve apatite. […] The average terrestrial ecosystem recycles phosphorus roughly fifty times before it is lost into freshwaters. […] The loss of phosphorus from the land is the ocean’s gain, providing the key input of this essential nutrient. Phosphorus is stored in the ocean as phosphate dissolved in the water. […] removal of phosphorus into the rock cycle balances the weathering of phosphorus from rocks on land. […] Although there is a large reservoir of nitrogen in the atmosphere, the molecules of nitrogen gas (N2) are extremely strongly bonded together, making nitrogen unavailable to most organisms. To split N2 and make nitrogen biologically available requires a remarkable biochemical feat — nitrogen fixation — which uses a lot of energy. In the ocean the dominant nitrogen fixers are cyanobacteria with a direct source of energy from sunlight. On land, various plants form a symbiotic partnership with nitrogen fixing bacteria, making a home for them in root nodules and supplying them with food in return for nitrogen. […] Nitrogen fixation and denitrification form the major input and output fluxes of nitrogen to both the land and the ocean, but there is also recycling of nitrogen within ecosystems. […] There is an intimate link between nutrient regulation and atmospheric oxygen regulation, because nutrient levels and marine productivity determine the source of oxygen via organic carbon burial. However, ocean nutrients are regulated on a much shorter timescale than atmospheric oxygen because their residence times are much shorter—about 2,000 years for nitrogen and 20,000 years for phosphorus.”

“[F]orests […] are vulnerable to increases in oxygen that increase the frequency and ferocity of fires. […] Combustion experiments show that fires only become self-sustaining in natural fuels when oxygen reaches around 17 per cent of the atmosphere. Yet for the last 370 million years there is a nearly continuous record of fossil charcoal, indicating that oxygen has never dropped below this level. At the same time, oxygen has never risen too high for fires to have prevented the slow regeneration of forests. The ease of combustion increases non-linearly with oxygen concentration, such that above 25–30 per cent oxygen (depending on the wetness of fuel) it is hard to see how forests could have survived. Thus oxygen has remained within 17–30 per cent of the atmosphere for at least the last 370 million years.”

“[T]he rate of silicate weathering increases with increasing CO2 and temperature. Thus, if something tends to increase CO2 or temperature it is counteracted by increased CO2 removal by silicate weathering. […] Plants are sensitive to variations in CO2 and temperature, and together with their fungal partners they greatly amplify weathering rates […] the most pronounced change in atmospheric CO2 over Phanerozoic time was due to plants colonizing the land. This started around 470 million years ago and escalated with the first forests 370 million years ago. The resulting acceleration of silicate weathering is estimated to have lowered the concentration of atmospheric CO2 by an order of magnitude […], and cooled the planet into a series of ice ages in the Carboniferous and Permian Periods.”

“The first photosynthesis was not the kind we are familiar with, which splits water and spits out oxygen as a waste product. Instead, early photosynthesis was ‘anoxygenic’ — meaning it didn’t produce oxygen. […] It could have used a range of compounds, in place of water, as a source of electrons with which to fix carbon from carbon dioxide and reduce it to sugars. Potential electron donors include hydrogen (H2) and hydrogen sulphide (H2S) in the atmosphere, or ferrous iron (Fe2+) dissolved in the ancient oceans. All of these are easier to extract electrons from than water. Hence they require fewer photons of sunlight and simpler photosynthetic machinery. The phylogenetic tree of life confirms that several forms of anoxygenic photosynthesis evolved very early on, long before oxygenic photosynthesis. […] If the early biosphere was fuelled by anoxygenic photosynthesis, plausibly based on hydrogen gas, then a key recycling process would have been the biological regeneration of this gas. Calculations suggest that once such recycling had evolved, the early biosphere might have achieved a global productivity up to 1 per cent of the modern marine biosphere. If early anoxygenic photosynthesis used the supply of reduced iron upwelling in the ocean, then its productivity would have been controlled by ocean circulation and might have reached 10 per cent of the modern marine biosphere. […] The innovation that supercharged the early biosphere was the origin of oxygenic photosynthesis using abundant water as an electron donor. This was not an easy process to evolve. To split water requires more energy — i.e. more high-energy photons of sunlight — than any of the earlier anoxygenic forms of photosynthesis. Evolution’s solution was to wire together two existing ‘photosystems’ in one cell and bolt on the front of them a remarkable piece of biochemical machinery that can rip apart water molecules. The result was the first cyanobacterial cell — the ancestor of all organisms performing oxygenic photosynthesis on the planet today. […] Once oxygenic photosynthesis had evolved, the productivity of the biosphere would no longer have been restricted by the supply of substrates for photosynthesis, as water and carbon dioxide were abundant. Instead, the availability of nutrients, notably nitrogen and phosphorus, would have become the major limiting factors on the productivity of the biosphere — as they still are today.” [If you’re curious to know more about how that fascinating ‘biochemical machinery’ works, this is a great book on these and related topics – US].

“On Earth, anoxygenic photosynthesis requires one photon per electron, whereas oxygenic photosynthesis requires two photons per electron. On Earth it took up to a billion years to evolve oxygenic photosynthesis, based on two photosystems that had already evolved independently in different types of anoxygenic photosynthesis. Around a fainter K- or M-type star […] oxygenic photosynthesis is estimated to require three or more photons per electron — and a corresponding number of photosystems — making it harder to evolve. […] However, fainter stars spend longer on the main sequence, giving more time for evolution to occur.”

“There was a lot more energy to go around in the post-oxidation world, because respiration of organic matter with oxygen yields an order of magnitude more energy than breaking food down anaerobically. […] The revolution in biological complexity culminated in the ‘Cambrian Explosion’ of animal diversity 540 to 515 million years ago, in which modern food webs were established in the ocean. […] Since then the most fundamental change in the Earth system has been the rise of plants on land […], beginning around 470 million years ago and culminating in the first global forests by 370 million years ago. This doubled global photosynthesis, increasing flows of materials. Accelerated chemical weathering of the land surface lowered atmospheric carbon dioxide levels and increased atmospheric oxygen levels, fully oxygenating the deep ocean. […] Although grasslands now cover about a third of the Earth’s productive land surface they are a geologically recent arrival. Grasses evolved amidst a trend of declining atmospheric carbon dioxide, and climate cooling and drying, over the past forty million years, and they only became widespread in two phases during the Miocene Epoch around seventeen and six million years ago. […] Since the rise of complex life, there have been several mass extinction events. […] whilst these rolls of the extinction dice marked profound changes in evolutionary winners and losers, they did not fundamentally alter the operation of the Earth system.” [If you’re interested in this kind of stuff, the evolution of food webs and so on, Herrera et al.’s wonderful book is a great place to start – US]

“The Industrial Revolution marks the transition from societies fuelled largely by recent solar energy (via biomass, water, and wind) to ones fuelled by concentrated ‘ancient sunlight’. Although coal had been used in small amounts for millennia, for example for iron making in ancient China, fossil fuel use only took off with the invention and refinement of the steam engine. […] With the Industrial Revolution, food and biomass have ceased to be the main source of energy for human societies. Instead the energy contained in annual food production, which supports today’s population, is at fifty exajoules (1 EJ = 10¹⁸ joules), only about a tenth of the total energy input to human societies of 500 EJ/yr. This in turn is equivalent to about a tenth of the energy captured globally by photosynthesis. […] solar energy is not very efficiently converted by photosynthesis, which is 1–2 per cent efficient at best. […] The amount of sunlight reaching the Earth’s land surface (2.5 × 10¹⁶ W) dwarfs current total human power consumption (1.5 × 10¹³ W) by more than a factor of a thousand.”
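(The energy comparisons in that passage can be sanity-checked with a few lines of arithmetic; the sketch below is mine, using the quote’s figures.)

```python
SECONDS_PER_YEAR = 3.156e7

food = 50e18 / SECONDS_PER_YEAR          # 50 EJ/yr as an average power
society = 500e18 / SECONDS_PER_YEAR      # total human energy input, ~1.6e13 W
photosynthesis = 130e12                  # ~130 TW captured globally
sunlight_on_land = 2.5e16                # W reaching the land surface

print(f"food:    {food:.2e} W (~{food / society:.0%} of society's input)")
print(f"society: {society:.2e} W (~{society / photosynthesis:.0%} of photosynthesis)")
print(f"sunlight on land / human consumption: {sunlight_on_land / society:.0f}x")
```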

“The Earth system’s primary energy source is sunlight, which the biosphere converts and stores as chemical energy. The energy-capture devices — photosynthesizing organisms — construct themselves out of carbon dioxide, nutrients, and a host of trace elements taken up from their surroundings. Inputs of these elements and compounds from the solid Earth system to the surface Earth system are modest. Some photosynthesizers have evolved to increase the inputs of the materials they need — for example, by fixing nitrogen from the atmosphere and selectively weathering phosphorus out of rocks. Even more importantly, other heterotrophic organisms have evolved that recycle the materials that the photosynthesizers need (often as a by-product of consuming some of the chemical energy originally captured in photosynthesis). This extraordinary recycling system is the primary mechanism by which the biosphere maintains a high level of energy capture (productivity).”

“[L]ike all stars on the ‘main sequence’ (which generate energy through the nuclear fusion of hydrogen into helium), the Sun is burning inexorably brighter with time — roughly 1 per cent brighter every 100 million years — and eventually this will overheat the planet. […] Over Earth history, the silicate weathering negative feedback mechanism has counteracted the steady brightening of the Sun by removing carbon dioxide from the atmosphere. However, this cooling mechanism is near the limits of its operation, because CO2 has fallen to limiting levels for the majority of plants, which are key amplifiers of silicate weathering. Although a subset of plants have evolved which can photosynthesize down to lower CO2 levels [the author does not go further into this topic, but here’s a relevant link – US], they cannot draw CO2 down lower than about 10 ppm. This means there is a second possible fate for life — running out of CO2. Early models projected either CO2 starvation or overheating […] occurring about a billion years in the future. […] Whilst this sounds comfortingly distant, it represents a much shorter future lifespan for the Earth’s biosphere than its past history. Earth’s biosphere is entering its old age.”

September 28, 2017 | Astronomy, Biology, Books, Botany, Chemistry, Geology, Paleontology, Physics

The Biology of Moral Systems (III)

This will be my last post about the book. It’s an important work which deserves to be read by far more people than have already read it. I have added some quotes and observations from the last chapters of the book below.

“If egoism, as self-interest in the biologists’ sense, is the reason for the promotion of ethical behavior, then, paradoxically, it is expected that everyone will constantly promote the notion that egoism is not a suitable theory of action, and, a fortiori, that he himself is not an egoist. Most of all he must present this appearance to his closest associates because it is in his best interests to do so – except, perhaps, to his closest relatives, to whom his egoism may often be displayed in cooperative ventures from which some distant- or non-relative suffers. Indeed, it may be arguable that it will be in the egoist’s best interest not to know (consciously) or to admit to himself that he is an egoist because of the value to himself of being able to convince others he is not.”

“The function of [societal] punishments and rewards, I have suggested, is to manipulate the behavior of participating individuals, restricting individual efforts to serve their own interests at others’ expense so as to promote harmony and unity within the group. The function of harmony and unity […] is to allow the group to compete against hostile forces, especially other human groups. It is apparent that success of the group may serve the interests of all individuals in the group; but it is also apparent that group success can be achieved with different patterns of individual success differentials within the group. So […] it is in the interests of those who are differentially successful to promote both unity and the rules so that group success will occur without necessitating changes deleterious to them. Similarly, it may be in the interests of those individuals who are relatively unsuccessful to promote dissatisfaction with existing rules and the notion that group success would be more likely if the rules were altered to favor them. […] the rules of morality and law alike seem not to be designed explicitly to allow people to live in harmony within societies but to enable societies to be sufficiently united to deter their enemies. Within-society harmony is the means not the end. […] extreme within-group altruism seems to correlate with and be historically related to between-group strife.”

“There are often few or no legitimate or rational expectations of reciprocity or “fairness” between social groups (especially warring or competing groups such as tribes or nations). Perhaps partly as a consequence, lying, deceit, or otherwise nasty or even heinous acts committed against enemies may sometimes not be regarded as immoral by others within the group of those who commit them. They may even be regarded as highly moral if they seem dramatically to serve the interests of the group whose members commit them.”

“Two major assumptions, made universally or most of the time by philosophers, […] are responsible for the confusion that prevents philosophers from making sense out of morality […]. These assumptions are the following: 1. That proximate and ultimate mechanisms or causes have the same kind of significance and can be considered together as if they were members of the same class of causes; this is a failure to understand that proximate causes are evolved because of ultimate causes, and therefore may be expected to serve them, while the reverse is not true. Thus, pleasure is a proximate mechanism that in the usual environments of history is expected to impel us toward behavior that will contribute to our reproductive success. Contrarily, acts leading to reproductive success are not proximate mechanisms that evolved because they served the ultimate function of bringing us pleasure. 2. That morality inevitably involves some self-sacrifice. This assumption involves at least three elements: a. Failure to consider altruism as benefits to the actor. […] b. Failure to comprehend all avenues of indirect reciprocity within groups. c. Failure to take into account both within-group and between-group benefits.”

“If morality means true sacrifice of one’s own interests, and those of his family, then it seems to me that we could not have evolved to be moral. If morality requires ethical consistency, whereby one does not do socially what he would not advocate and assist all others also to do, then, again, it seems to me that we could not have evolved to be moral. […] humans are not really moral at all, in the sense of “true sacrifice” given above, but […] the concept of morality is useful to them. […] If it is so, then we might imagine that, in the sense and to the extent that they are anthropomorphized, the concepts of saints and angels, as well as that of God, were also created because of their usefulness to us. […] I think there have been far fewer […] truly self-sacrificing individuals than might be supposed, and most cases that might be brought forward are likely instead to be illustrations of the complexity and indirectness of reciprocity, especially the social value of appearing more altruistic than one is. […] I think that […] the concept of God must be viewed as originally generated and maintained for the purpose – now seen by many as immoral – of furthering the interests of one group of humans at the expense of one or more other groups. […] Gods are inventions originally developed to extend the notion that some have greater rights than others to design and enforce rules, and that some are more destined to be leaders, others to be followers. This notion, in turn, arose out of prior asymmetries in both power and judgment […] It works when (because) leaders are (have been) valuable, especially in the context of intergroup competition.”

“We try to move moral issues in the direction of involving no conflict of interest, always, I suggest, by seeking universal agreement with our own point of view.”

“Moral and legal systems are commonly distinguished by those, like moral philosophers, who study them formally. I believe, however, that the distinction between them is usually poorly drawn, and based on a failure to realize that moral as well as legal behavior occurs as a result of probable and possible punishments and rewards. […] we often internalize the rules of law as well as the rules of morality – and perhaps by the same process […] It would seem that the rules of law are simply a specialized, derived aspect of what in earlier societies would have been a part of moral rules. On the other hand, law covers only a fraction of the situations in which morality is involved […] Law […] seems to be little more than ethics written down.”

“Anyone who reads the literature on dispute settlement within different societies […] will quickly understand that genetic relatedness counts: it allows for one-way flows of benefits and alliances. Long-term association also counts; it allows for reliability and also correlates with genetic relatedness. […] The larger the social group, the more fluid its membership; and the more attenuated the social interactions of its membership, the more they are forced to rely on formal law”.

“[I]ndividuals have separate interests. They join forces (live in groups; become social) when they share certain interests that can be better realized for all by close proximity or some forms of cooperation. Typically, however, the overlaps of interests rarely are completely congruent with those of either other individuals or the rest of the group. This means that, even during those times when individual interests within a group are most broadly overlapping, we may expect individuals to temper their cooperation with efforts to realize their own interests, and we may also expect them to have evolved to be adept at using others, or at thwarting the interests of others, to serve themselves (and their relatives). […] When the interests of all are most nearly congruent, it is essentially always due to a threat shared equally. Such threats almost always have to be external (or else they are less likely to affect everyone equally […] External threats to societies are typically other societies. Maintenance of such threats can yield situations in which everyone benefits from rigid, hierarchical, quasi-military, despotic government. Liberties afforded leaders – even elaborate perquisites of dictators – may be tolerated because such threats are ever-present […] Extrinsic threats, and the governments they produce, can yield inflexibilities of political structures that can persist across even lengthy intervals during which the threats are absent. Some societies have been able to structure their defenses against external threats as separate units (armies) within society, and to keep them separate. These rigidly hierarchical, totalitarian, and dictatorial subunits rise and fall in size and influence according to the importance of the external threat. […] Discussion of liberty and equality in democracies closely parallels discussions of morality and moral systems. In either case, adding a perspective from evolutionary biology seems to me to have potential for clarification.”

“It is indeed common, if not universal, to regard moral behavior as a kind of altruism that necessarily yields the altruist less than he gives, and to see egoism as either the opposite of morality or the source of immorality; but […] this view is usually based on an incomplete understanding of nepotism, reciprocity, and the significance of within-group unity for between-group competition. […] My view of moral systems in the real world, however, is that they are systems in which costs and benefits of specific actions are manipulated so as to produce reasonably harmonious associations in which everyone nevertheless pursues his own (in evolutionary terms) self-interest. I do not expect that moral and ethical arguments can ever be finally resolved. Compromises and contracts, then, are (at least currently) the only real solutions to actual conflicts of interest. This is why moral and ethical decisions must arise out of decisions of the collective of affected individuals; there is no single source of right and wrong.

I would also argue against the notion that rationality can be easily employed to produce a world of humans that self-sacrifice in favor of other humans, not to say nonhuman animals, plants, and inanimate objects. Declarations of such intentions may themselves often be the acts of self-interested persons developing, consciously or not, a socially self-benefiting view of themselves as extreme altruists. In this connection it is not irrelevant that the more dissimilar a species or object is to one’s self the less likely it is to provide a competitive threat by seeking the same resources. Accordingly, we should not be surprised to find humans who are highly benevolent toward other species or inanimate objects (some of which may serve them uncomplainingly), yet relatively hostile and noncooperative with fellow humans. As Darwin (1871) noted with respect to dogs, we have selected our domestic animals to return our altruism with interest.”

“It is not easy to discover precisely what historical differences have shaped current male-female differences. If, however, humans are in a general way similar to other highly parental organisms that live in social groups […] then we can hypothesize as follows: for men much of sexual activity has had as a main (ultimate) significance the initiating of pregnancies. It would follow that when a man avoids copulation it is likely to be because (1) there is no likelihood of pregnancy or (2) the costs entailed (venereal disease, danger from competition with other males, lowered status if the event becomes public, or an undesirable commitment) are too great in comparison with the probability that pregnancy will be induced. The man himself may be judging costs against the benefits of immediate sensory pleasures, such as orgasms (i.e., rather than thinking about pregnancy he may say that he was simply uninterested), but I am assuming that selection has tuned such expectations in terms of their probability of leading to actual reproduction […]. For women, I hypothesize, sexual activity per se has been more concerned with the securing of resources (again, I am speaking of ultimate and not necessarily conscious concerns) […]. Ordinarily, when women avoid or resist copulation, I speculate further, the disinterest, aversion, or inhibition may be traceable eventually to one (or more) of three causes: (1) there is no promise of commitment (of resources), (2) there is a likelihood of undesirable commitment (e.g., to a man with inadequate resources), or (3) there is a risk of loss of interest by a man with greater resources, than the one involved […] A man behaving so as to avoid pregnancies, and who derives from an evolutionary background of avoiding pregnancies, should be expected to favor copulation with women who are for age or other reasons incapable of pregnancy. A man derived from an evolutionary process in which securing of pregnancies typically was favored, may be expected to be most interested sexually in women most likely to become pregnant and near the height of the reproductive probability curve […] This means that men should usually be expected to anticipate the greatest sexual pleasure with young, healthy, intelligent women who show promise of providing superior parental care. […] In sexual competition, the alternatives of a man without resources are to present himself as a resource (i.e., as a mimic of one with resources or as one able and likely to secure resources because of his personal attributes […]), to obtain sex by force (rape), or to secure resources through a woman (e.g., allow himself to be kept by a relatively undesired woman, perhaps as a vehicle to secure liaisons with other women). […] in nonhuman species of higher animals, control of the essential resources of parenthood by females correlates with lack of parental behavior by males, promiscuous polygyny, and absence of long-term pair bonds. There is some evidence of parallel trends within human societies (cf. Flinn, 1981).” [It’s of some note that quite a few good books have been written on these topics since Alexander first published his book, so there are many places to look for detailed coverage of topics like these if you’re curious to know more – I can recommend both Kappeler & van Schaik (a must-read book on sexual selection, in my opinion) & Bobby Low. I didn’t think too highly of Miller or Meston & Buss, but those are a few other books on these topics which I’ve read – US].

“The reason that evolutionary knowledge has no moral content is [that] morality is a matter of whose interests one should, by conscious and willful behavior, serve, and how much; evolutionary knowledge contains no messages on this issue. The most it can do is provide information about the reasons for current conditions and predict some consequences of alternative courses of action. […] If some biologists and nonbiologists make unfounded assertions into conclusions, or develop pernicious and fallible arguments, then those assertions and arguments should be exposed for what they are. The reason for doing this, however, is not […should not be..? – US] to prevent or discourage any and all analyses of human activities, but to enable us to get on with a proper sort of analysis. Those who malign without being specific; who attack people rather than ideas; who gratuitously translate hypotheses into conclusions and then refer to them as “explanations,” “stories,” or “just-so-stories”; who parade the worst examples of argument and investigation with the apparent purpose of making all efforts at human self-analysis seem silly and trivial, I see as dangerously close to being ideologues at least as worrisome as those they malign. I cannot avoid the impression that their purpose is not to enlighten, but to play upon the uneasiness of those for whom the approach of evolutionary biology is alien and disquieting, perhaps for political rather than scientific purposes. It is more than a little ironic that the argument of politics rather than science is their own chief accusation with respect to scientists seeking to analyze human behavior in evolutionary terms (e.g. Gould and Lewontin, 1979 […]).”

“[C]urrent selective theory indicates that natural selection has never operated to prevent species extinction. Instead it operates by saving the genetic materials of those individuals or families that outreproduce others. Whether species become extinct or not (and most have) is an incidental or accidental effect of natural selection. An inference from this is that the members of no species are equipped, as a direct result of their evolutionary history, with traits designed explicitly to prevent extinction when that possibility looms. […] Humans are no exception: unless their comprehension of the likelihood of extinction is so clear and real that they perceive the threat to themselves as individuals, and to their loved ones, they cannot be expected to take the collective action that will be necessary to reduce the risk of extinction.”

“In examining ourselves […] we are forced to use the attributes we wish to analyze to carry out the analysis, while resisting certain aspects of the analysis. At the very same time, we pretend that we are not resisting at all but are instead giving perfectly legitimate objections; and we use our realization that others will resist the analysis, for reasons as arcane as our own, to enlist their support in our resistance. And they very likely will give it. […] If arguments such as those made here have any validity it follows that a problem faced by everyone, in respect to morality, is that of discovering how to subvert or reduce some aspects of individual selfishness that evidently derive from our history of genetic individuality.”

“Essentially everyone thinks of himself as well-meaning, but from my viewpoint a society of well-meaning people who understand themselves and their history very well is a better milieu than a society of well-meaning people who do not.”

September 22, 2017 Posted by | Anthropology, Biology, Books, Evolutionary biology, Genetics, Philosophy, Psychology, Religion | Leave a comment

How Species Interact

There are multiple reasons why I have not covered Arditi and Ginzburg’s book before, but none of them are related to the quality of the book’s coverage. It’s a really nice book. However, the coverage is somewhat technical and model-focused, which makes it harder to blog than other kinds of books. Also, the version of the book I read was a hardcover ‘paper book’ version, and ‘paper books’ take a lot more work for me to cover than do e-books.

I should probably get it out of the way here at the start of the post that if you’re interested in ecology, predator-prey dynamics, etc., this is a book you would be well advised to read; or, if you don’t read the book, you should at least familiarize yourself with the ideas therein, e.g. by having a look at some of Arditi & Ginzburg’s articles on these topics. I should however note that I don’t actually think skipping the book and having a look at some articles instead will necessarily be a labour-saving strategy; the book is not particularly long and it’s to the point, so although it’s not a particularly easy read, their case for ratio dependence is actually somewhat easy to follow – if you make the effort – in the sense that the book quite likely links the different related ideas and observations better than the articles do. They presumably wrote the book precisely in order to provide a concise yet coherent overview.

I have had some trouble figuring out how to cover this book, and I’m still not quite sure what might be/have been the best approach; when covering technical books I’ll often skip a lot of detail and math and try to stick to what might be termed ‘the main ideas’ when quoting from such books, but there’s a clear limit to how many of the technical details included in a book like this it is possible to skip if you still want to actually talk about the stuff covered in the work, and this sometimes makes blogging such books awkward. These authors spend a lot of effort talking about how different ecological models work and which sorts of conclusions these different models may lead to in different contexts, and this kind of stuff is a very big part of the book. I’m not sure if you strictly need to have read an ecology textbook or two before you read this one in order to be able to follow the coverage, but I know that I personally derived some benefit from having read Gurney & Nisbet’s ecology text in the past, and I did look up stuff in that book a few times along the way, e.g. when reminding myself what a Holling type 2 functional response is and how models with such a functional response pattern behave. One might argue that you could in theory look up all the relevant concepts along the way without any background knowledge of ecology – assuming you have a decent understanding of basic calculus/differential equations, linear algebra, equilibrium dynamics, etc. (…systems analysis? It’s hard for me to know and outline exactly which sources I’ve read in the past that helped make this book easier to read than it otherwise would have been, but suffice it to say that if you look at the page count and think that this will be a quick/easy read, it will be that only if you’ve read more than a few books on ‘related topics’, broadly defined, in the past) – but I wouldn’t advise reading the book if all you know is high school math; the book will be incomprehensible to you, and you won’t make it. I ended up concluding that it would simply be too much work to try to make this post ‘easy’ to read for people who are unfamiliar with these topics and have not read the book, so although I’ve hardly gone out of my way to make the coverage hard to follow, the blog coverage that follows is mainly for my own benefit.

First a few relevant links, then some quotes and comments.

Lotka–Volterra equations.
Ecosystem model.
Arditi–Ginzburg equations. (Yep, these equations are named after the authors of this book).
Nicholson–Bailey model.
Functional response.
Monod equation.
Rosenzweig-MacArthur predator-prey model.
Trophic cascade.
Underestimation of mutual interference of predators.
Coupling in predator-prey dynamics: Ratio Dependence.
Michaelis–Menten kinetics.
Trophic level.
Advection–diffusion equation.
Paradox of enrichment. [Two quotes from the book: “actual systems do not behave as Rosenzweig’s model predicts” + “When ecologists have looked for evidence of the paradox of enrichment in natural and laboratory systems, they often find none and typically present arguments about why it was not observed”]
Predator interference emerging from trophotaxis in predator–prey systems: An individual-based approach.
Directed movement of predators and the emergence of density dependence in predator-prey models.

“Ratio-dependent predation is now covered in major textbooks as an alternative to the standard prey-dependent view […]. One of this book’s messages is that the two simple extreme theories, prey dependence and ratio dependence, are not the only alternatives: they are the ends of a spectrum. There are ecological domains in which one view works better than the other, with an intermediate view also being a possible case. […] Our years of work spent on the subject have led us to the conclusion that, although prey dependence might conceivably be obtained in laboratory settings, the common case occurring in nature lies close to the ratio-dependent end. We believe that the latter, instead of the prey-dependent end, can be viewed as the “null model of predation.” […] we propose the gradual interference model, a specific form of predator-dependent functional response that is approximately prey dependent (as in the standard theory) at low consumer abundances and approximately ratio dependent at high abundances. […] When density is low, consumers do not interfere and prey dependence works (as in the standard theory). When consumer density is sufficiently high, interference causes ratio dependence to emerge. In the intermediate densities, predator-dependent models describe partial interference.”

“Studies of food chains are on the edge of two domains of ecology: population and community ecology. The properties of food chains are determined by the nature of their basic link, the interaction of two species, a consumer and its resource, a predator and its prey. The study of this basic link of the chain is part of population ecology while the more complex food webs belong to community ecology. This is one of the main reasons why understanding the dynamics of predation is important for many ecologists working at different scales.”

“We have named predator-dependent the functional responses of the form g = g(N,P), where the predator density P acts (in addition to N [prey abundance, US]) as an independent variable to determine the per capita kill rate […] predator-dependent functional response models have one more parameter than the prey-dependent or the ratio-dependent models. […] The main interest that we see in these intermediate models is that the additional parameter can provide a way to quantify the position of a specific predator-prey pair of species along a spectrum with prey dependence at one end and ratio dependence at the other end:

g(N) ← g(N,P) → g(N/P) (1.21)

In the Hassell-Varley and Arditi-Akçakaya models […] the mutual interference parameter m plays the role of a cursor along this spectrum, from m = 0 for prey dependence to m = 1 for ratio dependence. Note that this theory does not exclude that strong interference goes “beyond ratio dependence,” with m > 1. This is also called overcompensation. […] In this book, rather than being interested in the interference parameters per se, we use predator-dependent models to determine, either parametrically or nonparametrically, which of the ends of the spectrum (1.21) better describes predator-prey systems in general.”
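
To make the role of m as a cursor concrete, here is a minimal Python sketch – mine, not the book’s, with purely illustrative parameter values – of the Arditi-Akçakaya functional response, i.e. a Holling type II response applied to N/P^m:

```python
def arditi_akcakaya(N, P, a=1.0, h=0.1, m=0.5):
    """Per capita kill rate g(N, P): a Holling type II response applied to
    N / P**m. m = 0 recovers prey dependence g(N); m = 1 ratio dependence g(N/P)."""
    x = N / P**m
    return a * x / (1.0 + a * h * x)

N = 50.0
for m in (0.0, 0.5, 1.0):
    # doubling predator density leaves g unchanged when m = 0
    # and depresses it the most when m = 1 (mutual interference)
    print(m, arditi_akcakaya(N, 2.0, m=m), arditi_akcakaya(N, 4.0, m=m))
```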

“[T]he fundamental problem of the Lotka-Volterra and the Rosenzweig-MacArthur dynamic models lies in the functional response and in the fact that this mathematical function is assumed not to depend on consumer density. Since this function measures the number of prey captured per consumer per unit time, it is a quantity that should be accessible to observation. This variable could be apprehended either on the fast behavioral time scale or on the slow demographic time scale. These two approaches need not necessarily reveal the same properties: […] a given species could display a prey-dependent response on the fast scale and a predator-dependent response on the slow scale. The reason is that, on a very short scale, each predator individually may “feel” virtually alone in the environment and react only to the prey that it encounters. On the long scale, the predators are more likely to be affected by the presence of conspecifics, even without direct encounters. In the demographic context of this book, it is the long time scale that is relevant. […] if predator dependence is detected on the fast scale, then it can be inferred that it must be present on the slow scale; if predator dependence is not detected on the fast scale, it cannot be inferred that it is absent on the slow scale.”

Some related thoughts. A different way to think about this – which they don’t mention in the book, but which sprang to mind as I was reading it – is to think about this stuff in terms of a formal predator territorial overlap model and then to ask yourself this question: Assume there’s zero territorial overlap – does this fact mean that the existence of conspecifics does not matter? The answer is of course no. The sizes of the individual patches/territories may be greatly influenced by the predator density even in such a context. Also, the territorial area available to potential offspring (certainly a fitness-relevant parameter) may be greatly influenced by the number of competitors inhabiting the surrounding territories. In relation to the last part of the quote it’s easy to see that in a model with significant territorial overlap you don’t need direct behavioural interaction among predators for the overlap to be relevant; even if two bears never meet, if one of them eats a fawn that the other one would have come across two days later, such indirect influences may be important for prey availability. Of course as prey tend to be mobile, even if predator territories are static and non-overlapping in a geographic sense, they might not be in a functional sense. Moving on…

“In [chapter 2 we] attempted to assess the presence and the intensity of interference in all functional response data sets that we could gather in the literature. Each set must be trivariate, with estimates of the prey consumed at different values of prey density and different values of predator densities. Such data sets are not very abundant because most functional response experiments present in the literature are simply bivariate, with variations of the prey density only, often with a single predator individual, ignoring the fact that predator density can have an influence. This results from the usual presentation of functional responses in textbooks, which […] focus only on the influence of prey density.
Among the data sets that we analyzed, we did not find a single one in which the predator density did not have a significant effect. This is a powerful empirical argument against prey dependence. Most systems lie somewhere on the continuum between prey dependence (m=0) and ratio dependence (m=1). However, they do not appear to be equally distributed. The empirical evidence provided in this chapter suggests that they tend to accumulate closer to the ratio-dependent end than to the prey-dependent end.”

“Equilibrium properties result from the balanced predator-prey equations and contain elements of the underlying dynamic model. For this reason, the response of equilibria to a change in model parameters can inform us about the structure of the underlying equations. To check the appropriateness of the ratio-dependent versus prey-dependent views, we consider the theoretical equilibrium consequences of the two contrasting assumptions and compare them with the evidence from nature. […] According to the standard prey-dependent theory, in reference to [an] increase in primary production, the responses of the populations strongly depend on their level and on the total number of trophic levels. The last, top level always responds proportionally to F [primary input]. The next to the last level always remains constant: it is insensitive to enrichment at the bottom because it is perfectly controled [sic] by the last level. The first, primary producer level increases if the chain length has an odd number of levels, but declines (or stays constant with a Lotka-Volterra model) in the case of an even number of levels. According to the ratio-dependent theory, all levels increase proportionally, independently of how many levels are present. The present purpose of this chapter is to show that the second alternative is confirmed by natural data and that the strange predictions of the prey-dependent theory are unsupported.”
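
It is easy to check the two-level version of this claim directly. Below is a small sketch of my own – illustrative parameters, textbook Rosenzweig-MacArthur and Arditi-Ginzburg forms, so treat it as a sketch rather than anything from the book: under prey dependence the equilibrium prey density is completely insensitive to enrichment (here, a doubling of the carrying capacity K), whereas under ratio dependence both equilibrium densities scale proportionally with K:

```python
r, a, h, e, mu = 1.0, 1.0, 0.2, 0.5, 0.4   # illustrative parameters

def prey_dependent_eq(K):
    # Rosenzweig-MacArthur: the predator nullcline e*a*N/(1+a*h*N) = mu
    # pins N* at a value that does not involve K at all
    Nstar = mu / (a * (e - h * mu))
    Pstar = r * (1 - Nstar / K) * (1 + a * h * Nstar) / a
    return Nstar, Pstar

def ratio_dependent_eq(K):
    # Arditi-Ginzburg: the predator nullcline gives P* proportional to N*,
    # and the prey nullcline then makes both proportional to K
    Nstar = K * (1 - (a / r) * (1 - h * mu / e))
    Pstar = a * (e / mu - h) * Nstar
    return Nstar, Pstar

for K in (2.0, 4.0):   # "enrichment" = doubling the carrying capacity
    print(K, prey_dependent_eq(K), ratio_dependent_eq(K))
```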

“If top predators are eliminated or reduced in abundance, models predict that the sequential lower trophic levels must respond by changes of alternating signs. For example, in a three-level system of plants-herbivores-predators, the reduction of predators leads to the increase of herbivores and the consequential reduction in plant abundance. This response is commonly called the trophic cascade. In a four-level system, the bottom level will increase in response to harvesting at the top. These predicted responses are quite intuitive and are, in fact, true for both short-term and long-term responses, irrespective of the theory one employs. […] A number of excellent reviews have summarized and meta-analyzed large amounts of data on trophic cascades in food chains […] In general, the cascading reaction is strongest in lakes, followed by marine systems, and weakest in terrestrial systems. […] Any theory that claims to describe the trophic chain equilibria has to produce such cascading when top predators are reduced or eliminated. It is well known that the standard prey-dependent theory supports this view of top-down cascading. It is not widely appreciated that top-down cascading is likewise a property of ratio-dependent trophic chains. […] It is [only] for equilibrial responses to enrichment at the bottom that predictions are strikingly different according to the two theories”.

As the book does spend a little time on this I should perhaps briefly interject here that the above paragraph should not be taken to indicate that the two types of models provide identical predictions in the top-down cascading context in all cases; both predict cascading, but there are even so some subtle differences between the models here as well. Some of these differences are however quite hard to test.

“[T]he traditional Lotka-Volterra interaction term […] is nothing other than the law of mass action of chemistry. It assumes that predator and prey individuals encounter each other randomly in the same way that molecules interact in a chemical solution. Other prey-dependent models, like Holling’s, derive from the same idea. […] an ecological system can only be described by such a model if conspecifics do not interfere with each other and if the system is sufficiently homogeneous […] we will demonstrate that spatial heterogeneity, be it in the form of a prey refuge or in the form of predator clusters, leads to emergence of gradual interference or of ratio dependence when the functional response is observed at the population level. […] We present two mechanistic individual-based models that illustrate how, with gradually increasing predator density and gradually increasing predator clustering, interference can become gradually stronger. Thus, a given biological system, prey dependent at low predator density, can gradually become ratio dependent at high predator density. […] ratio dependence is a simple way of summarizing the effects induced by spatial heterogeneity, while the prey dependent [models] (e.g., Lotka-Volterra) is more appropriate in homogeneous environments.”

“[W]e consider that a good model of interacting species must be fundamentally invariant to a proportional change of all abundances in the system. […] Allowing interacting populations to expand in balanced exponential growth makes the laws of ecology invariant with respect to multiplying interacting abundances by the same constant, so that only ratios matter. […] scaling invariance is required if we wish to preserve the possibility of joint exponential growth of an interacting pair. […] a ratio-dependent model allows for joint exponential growth. […] Neither the standard prey-dependent models nor the more general predator-dependent models allow for balanced growth. […] In our view, communities must be expected to expand exponentially in the presence of unlimited resources. Of course, limiting factors ultimately stop this expansion just as they do for a single species. With our view, it is the limiting resources that stop the joint expansion of the interacting populations; it is not directly due to the interactions themselves. This partitioning of the causes is a major simplification that traditional theory implies only in the case of a single species.”
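
The invariance requirement is easy to verify numerically; in this little check of mine (arbitrary numbers), multiplying both abundances by the same constant leaves the ratio-dependent per capita kill rate unchanged, while the mass-action (Lotka-Volterra) rate grows with the scale of the system – which is why only the former permits balanced exponential growth of the interacting pair:

```python
def mass_action(N, P, a=1.0):
    # Lotka-Volterra kill rate per predator: depends on absolute prey density
    return a * N

def ratio_dependent(N, P, a=1.0, h=0.1):
    # kill rate per predator depends only on the ratio N/P
    x = N / P
    return a * x / (1.0 + a * h * x)

c = 10.0   # scale both abundances by the same constant
print(mass_action(5.0, 2.0), mass_action(c * 5.0, c * 2.0))          # changes
print(ratio_dependent(5.0, 2.0), ratio_dependent(c * 5.0, c * 2.0))  # invariant
```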

August 1, 2017 Posted by | Biology, Books, Chemistry, Ecology, Mathematics, Studies | Leave a comment

Melanoma therapeutic strategies that select against resistance

A short lecture, but interesting:

If you’re not an oncologist, it might be helpful to have a look at these two links in particular before you start out: BRAF (gene) & Myc. A very substantial proportion of the talk is devoted to math and stats methodology (which some people will find interesting and others …will not).

July 3, 2017 Posted by | Biology, Cancer/oncology, Genetics, Lectures, Mathematics, Medicine, Statistics | Leave a comment

The Antarctic

“A very poor book with poor coverage, mostly about politics and history (and a long collection of names of treaties and organizations). I would definitely not have finished it if it were much longer than it is.”

That was what I wrote about the book in my goodreads review. I was strongly debating whether or not to blog it at all, but in the end I decided to just settle for some very lazy coverage, consisting only of links to content covered in the book. I only cover the book here to have at least some chance of remembering later on which kinds of things it covered.

If you’re interested enough in the Antarctic to read a book about it, read Scott’s Last Expedition instead of this one (here’s my goodreads review of Scott).

Links:

Antarctica (featured).
Antarctic Convergence.
Antarctic Circle.
Southern Ocean.
Antarctic Circumpolar Current.
West Antarctic Ice Sheet.
East Antarctic Ice Sheet.
McMurdo Dry Valleys.
Notothenioidei.
Patagonian toothfish.
Antarctic krill.
Fabian Gottlieb von Bellingshausen.
Edward Bransfield.
James Clark Ross.
United States Exploring Expedition.
Heroic Age of Antarctic Exploration (featured).
Nimrod Expedition (featured).
Roald Amundsen.
Wilhelm Filchner.
Japanese Antarctic Expedition.
Terra Nova Expedition (featured).
Lincoln Ellsworth.
British Graham Land expedition.
German Antarctic Expedition (1938–1939).
Operation Highjump.
Operation Windmill.
Operation Deep Freeze.
Commonwealth Trans-Antarctic Expedition.
Caroline Mikkelsen.
International Association of Antarctica Tour Operators.
Territorial claims in Antarctica.
International Geophysical Year.
Antarctic Treaty System.
Operation Tabarin.
Scientific Committee on Antarctic Research.
United Nations Convention on the Law of the Sea.
Convention on the Continental Shelf.
Council of Managers of National Antarctic Programs.
British Antarctic Survey.
International Polar Year.
Antarctic ozone hole.
Gamburtsev Mountain Range.
Pine Island Glacier (‘good article’).
Census of Antarctic Marine Life.
Lake Ellsworth Consortium.
Antarctic fur seal.
Southern elephant seal.
Grytviken (whaling-related).
International Convention for the Regulation of Whaling.
International Whaling Commission.
Ocean Drilling Program.
Convention on the Regulation of Antarctic Mineral Resource Activities.
Agreement on the Conservation of Albatrosses and Petrels.

July 3, 2017 Posted by | Biology, Books, Geography, Geology, History, Wikipedia | Leave a comment

The Biology of Moral Systems (II)

There are multiple really great books I have read ‘recently’ which I have either not blogged at all or not blogged in anywhere near the amount of detail they deserve; Alexander’s book is one of those books. I hope to get rid of some of the backlog soon. You can read my first post about the book here, and it might be a good idea to do so, as I won’t revisit material covered in that post here. In this post I have added some quotes from and comments related to the book’s second chapter, ‘A Biological View of Morality’.

“Moral systems are systems of indirect reciprocity. They exist because confluences of interest within groups are used to deal with conflicts of interest between groups. Indirect reciprocity develops because interactions are repeated, or flow among a society’s members, and because information about subsequent interactions can be gleaned from observing the reciprocal interactions of others.
To establish moral rules is to impose rewards and punishments (typically assistance and ostracism, respectively) to control social acts that, respectively, help or hurt others. To be regarded as moral, a rule typically must represent widespread opinion, reflecting the fact that it must apply with a certain degree of indiscriminateness.”

“Moral philosophers have not treated the beneficence of humans as a part, somehow, of their selfishness; yet, as Trivers (1971) suggested, the biologist’s view of lifetimes leads directly to this argument. In other words, the normally expressed beneficence, or altruism, of parenthood and nepotism and the temporary altruism (or social investment) of reciprocity are expected to result in greater returns than their alternatives.
If biologists are correct, all that philosophers refer to as altruistic or utilitarian behavior by individuals will actually represent either the temporary altruism (phenotypic beneficence or social investment) of indirect somatic effort [‘Direct somatic effort refers to self-help that involves no other persons. Indirect somatic effort involves reciprocity, which may be direct or indirect. Returns from direct and indirect reciprocity may be immediate or delayed’ – Alexander spends some pages classifying human effort in terms of such ‘atoms of sociality’, which are useful devices for analytical purposes, but I decided not to cover that stuff in detail here – US] or direct and indirect nepotism. The exceptions are what might be called evolutionary mistakes or accidents that result in unreciprocated or “genetic” altruism, deleterious to both the phenotype and genotype of the altruist; such mistakes can occur in all of the above categories” [I should point out that Boyd and Richerson’s book Not by Genes Alone – another great book which I hope to blog soon – is worth having a look at if after reading Alexander’s book you think that he does not cover the topic of how and why such mistakes might happen in the amount of detail it deserves; they also cover related topics in some detail, from a different angle – US]

“It is my impression that many moral philosophers do not approach the problem of morality and ethics as if it arose as an effort to resolve conflicts of interests. Their involvement in conflicts of interest seems to come about obliquely through discussions of individuals’ views with respect to moral behavior, or their proximate feelings about morality – almost as if questions about conflicts of interest arise only because we operate under moral systems, rather than vice versa.”

“The problem, in developing a theory of moral systems that is consistent with evolutionary theory from biology, is in accounting for the altruism of moral behavior in genetically selfish terms. I believe this can be done by interpreting moral systems as systems of indirect reciprocity.
I regard indirect reciprocity as a consequence of direct reciprocity occurring in the presence of interested audiences – groups of individuals who continually evaluate the members of their society as possible future interactants from whom they would like to gain more than they lose […] Even in directly reciprocal interactions […] net losses to self […] may be the actual aim of one or even both individuals, if they are being scrutinized by others who are likely to engage either individual subsequently in reciprocity of greater significance than that occurring in the scrutinized acts. […] Systems of indirect reciprocity, and therefore moral systems, are social systems structured around the importance of status. The concept of status implies that an individual’s privileges, or its access to resources, are controlled in part by how others collectively think of him (hence, treat him) as a result of past interactions (including observations of interactions with others). […] The consequences of indirect reciprocity […] include the concomitant spread of altruism (as social investment genetically valuable to the altruist), rules, and efforts to cheat […]. I would not contend that we always carry out cost-benefit analyses on these issues deliberately or consciously. I do, however, contend that such analyses occur, sometimes consciously, sometimes not, and that we are evolved to be exceedingly accurate and quick at making them […] [A] conscience [is what] I have interpreted (Alexander, 1979a) as the “still small voice that tells us how far we can go in serving our own interests without incurring intolerable risks.””

“The long-term existence of complex patterns of indirect reciprocity […] seems to favor the evolution of keen abilities to (1) make one’s self seem more beneficent than is the case; and (2) influence others to be beneficent in such fashions as to be deleterious to themselves and beneficial to the moralizer, e.g. to lead others to (a) invest too much, (b) invest wrongly in the moralizer or his relatives and friends, or (c) invest indiscriminately on a larger scale than would otherwise be the case. According to this view, individuals are expected to parade the idea of much beneficence, and even of indiscriminate altruism as beneficial, so as to encourage people in general to engage in increasing amounts of social investment whether or not it is beneficial to their interests. […] They may also be expected to depress the fitness of competitors by identifying them, deceptively or not, as reciprocity cheaters (in other words, to moralize and gossip); to internalize rules or evolve the ability to acquire a conscience, interpreted […] as the ability to use our own judgment to serve our own interests; and to self-deceive and display false sincerity as defenses against detection of cheating and attributions of deliberateness in cheating […] Everyone will wish to appear more beneficent than he is. There are two reasons: (1) this appearance, if credible, is more likely to lead to direct social rewards than its alternatives; (2) it is also more likely to encourage others to be more beneficent.”

“Consciousness and related aspects of the human psyche (self-awareness, self-reflection, foresight, planning, purpose, conscience, free will, etc.) are here hypothesized to represent a system for competing with other humans for status, resources, and eventually reproductive success. More specifically, the collection of these attributes is viewed as a means of seeing ourselves and our life situations as others see us and our life situations – most particularly in ways that will cause (the most and the most important of) them to continue to interact with us in fashions that will benefit us and seem to benefit them.
Consciousness, then, is a game of life in which the participants are trying to comprehend what is in one another’s minds before, and more effectively than, it can be done in reverse.”

“Provided with a means of relegating our deceptions to the subconsciousness […] false sincerity becomes easier and detection more difficult. There are reasons for believing that one does not need to know his own personal interests consciously in order to serve them as much as he needs to know the interests of others to thwart them. […] I have suggested that consciousness is a way of making our social behavior so unpredictable as to allow us to outmaneuver others; and that we press into subconsciousness (as opposed to forgetting) those things that remain useful to us but would be detrimental to us if others knew about them, and on which we are continually tested and would have to lie deliberately if they remained in our conscious mind […] Conscious concealment of interests, or disavowal, is deliberate deception, considered more reprehensible than anything not conscious. Indeed, if one does not know consciously what his interests are, he cannot, in some sense, be accused of deception even though he may be using an evolved ability of self-deception to deceive others. So it is not always – maybe not usually – in our evolutionary or surrogate-evolutionary interests to make them conscious […] If people can be fooled […] then there will be continual selection for becoming better at fooling others […]. This may include causing them to think that it will be best for them to help you when it is not. This ploy works because of the thin line everybody must continually tread with respect to not showing selfishness. If some people are self-destructively beneficent (i.e., make altruistic mistakes), and if people often cannot tell if one is such a mistake-maker, it might be profitable even to try to convince others that one is such a mistake-maker so as to be accepted as a cooperator or so that the other will be beneficent in expectation of large returns (through “mistakes”) later. […] Reciprocity may work this way because it is grounded evolutionarily in nepotism, appropriate dispensing of nepotism (as well as reciprocity) depends upon learning, and the wrong things can be learned. [Boyd and Richerson talk about this particular aspect, the learning part, in much more detail in their books – US] Self-deception, then, may not be a pathological or detrimental trait, at least in most people most of the time. Rather, it may have evolved as a way to deceive others.”

“The only time that utilitarianism (promoting the greatest good to the greatest number) is predicted by evolutionary theory is when the interests of the group (the “greatest number”) and the individual coincide, and in such cases utilitarianism is not really altruistic in either the biologists’ or the philosophers’ sense of the term. […] If Kohlberg means to imply that a significant proportion of the populace of the world either implicitly or explicitly favors a system in which everyone (including himself) behaves so as to bring the greatest good to the greatest number, then I simply believe that he is wrong. If he supposes that only a relatively few – particularly moral philosophers and some others like them – have achieved this “stage,” then I also doubt the hypothesis. I accept that many people are aware of this concept of utility, that a small minority may advocate it, and that an even smaller minority may actually believe that they behave according to it. I speculate, however, that with a few inadvertent or accidental exceptions, no one actually follows this precept. I see the concept as having its main utility as a goal towards which one may exhort others to aspire, and towards which one may behave as if (or talk as if) aspiring, while actually practicing complex forms of self-interest.”

“Generally speaking, the bigger the group, the more complex the social organization, and the greater the group’s unity of purpose the more limited is individual entrepreneurship.”

“The function or raison d’etre [sic] of moral systems is evidently to provide the unity required to enable the group to compete successfully with other human groups. […] the argument that human evolution has been guided to some large extent by intergroup competition and aggression […] is central to the theory of morality presented here”.

June 29, 2017 Posted by | Anthropology, Biology, Books, Evolutionary biology, Genetics, Philosophy | Leave a comment

Quotes

(The Pestalozzi quotes below are from The Education of Man, a short and poor aphorism collection I cannot possibly recommend despite the inclusion of quotes from it in this post.)

i. “Only a good conscience always gives man the courage to handle his affairs straightforwardly, openly and without evasion.” (Johann Heinrich Pestalozzi)

ii. “An intimate relationship in its full power is always a source of human wisdom and strength in relationships less intimate.” (-ll-)

iii. “Whoever is unwilling to help himself can be helped by no one.” (-ll-)

iv. “He who has filled his pockets in the service of injustice will have little good to say on behalf of justice.” (-ll-)

v. “It is Man’s fate that no one knows the truth alone; we all possess it, but it is divided up among us. He who learns from one man only, will never learn what the others know.” (-ll-)

vi. “No scoundrel is so wicked that he cannot at some point truthfully reprove some honest man” (-ll-)

vii. “The man too keenly aware of his good reputation is likely to have a bad one.” (-ll-)

viii. “Many words make an excuse anything but convincing.” (-ll-)

ix. “Fashions are usually seen in their true perspective only when they have gone out of fashion.” (-ll-)

x. “A thing that nobody looks for is seldom found.” (-ll-)

xi. “Many discoveries must have been stillborn or smothered at birth. We know only those which survived.” (William Ian Beardmore Beveridge)

xii. “Time is the most valuable thing a man can spend.” (Theophrastus)

xiii. “The only man who makes no mistakes is the man who never does anything.” (Theodore Roosevelt)

xiv. “It is hard to fail, but it is worse never to have tried to succeed.” (-ll-)

xv. “From their appearance in the Triassic until the end of the Cretaceous, a span of 140 million years, mammals remained small and inconspicuous while all the ecological roles of large terrestrial herbivores and carnivores were monopolized by dinosaurs; mammals did not begin to radiate and produce large species until after the dinosaurs had already become extinct at the end of the Cretaceous. One is forced to conclude that dinosaurs were competitively superior to mammals as large land vertebrates.” (Robert T. Bakker)

xvi. “Plants and plant-eaters co-evolved. And plants aren’t the passive partners in the chain of terrestrial life. […] A birch tree doesn’t feel cosmic fulfillment when a moose munches its leaves; the tree species, in fact, evolves to fight the moose, to keep the animal’s munching lips away from vulnerable young leaves and twigs. In the final analysis, the merciless hand of natural selection will favor the birch genes that make the tree less and less palatable to the moose in generation after generation. No plant species could survive for long by offering itself as unprotected fodder.” (-ll-)

xvii. “… if you look at crocodiles today, they aren’t really representative of what the lineage of crocodiles look like. Crocodiles are represented by about 23 species, plus or minus a couple. Along that lineage the more primitive members weren’t aquatic. A lot of them were bipedal, a lot of them looked like little dinosaurs. Some were armored, others had no teeth. They were all fully terrestrial. So this is just the last vestige of that radiation that we’re seeing. And the ancestor of both dinosaurs and crocodiles would have, to the untrained eye, looked much more like a dinosaur.” (Mark Norell)

xviii. “If we are to understand the interactions of a large number of agents, we must first be able to describe the capabilities of individual agents.” (John Henry Holland)

xix. “Evolution continually innovates, but at each level it conserves the elements that are recombined to yield the innovations.” (-ll-)

xx. “Model building is the art of selecting those aspects of a process that are relevant to the question being asked. […] High science depends on this art.” (-ll-)

June 19, 2017 Posted by | Biology, Books, Botany, Evolutionary biology, Paleontology, Quotes/aphorisms | Leave a comment

Imported Plant Diseases

I found myself debating whether or not I should read Lewis, Petrovskii, and Potts’ text The Mathematics Behind Biological Invasions a while back, but in the end I decided that it would simply be too much work to justify the potential payoff – so instead of reading the book, I decided to just watch the above lecture and leave it at that. This lecture is definitely a very poor textbook substitute, and I was strongly debating whether or not to blog it because it just isn’t very good; the level of coverage is very low. Which is sad, because some of the diseases discussed in the lecture – such as wheat leaf rust – are really important and worth knowing about. One of the important points made in the lecture is that in the context of potential epidemics, it can be difficult to know when and how to intervene because of the uncertainty involved; early action may be the more efficient choice in terms of resource use, but the earlier you intervene, the less certain will be the intervention payoff and the less you’ll know about stuff like transmission patterns (…would outbreak X ever really have spread very wide if we had not intervened? We don’t observe the counterfactual…). Such aspects are of course not only relevant to plant diseases, and the lecture also contains other basic insights from epidemiology which apply to other types of disease – but if you’ve ever opened a basic epidemiology text you’ll know all these things already.
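
To illustrate the timing point with a toy model of my own – nothing from the lecture, and all parameters are arbitrary – here is a crude SIR sketch in which transmission is cut at different intervention times; the earlier the cut, the smaller the final outbreak, but of course the early intervener never observes how large the uncontrolled outbreak would have been:

```python
def sir_final_size(beta=0.5, gamma=0.2, t_intervene=None, beta_cut=0.15,
                   days=300, dt=0.1):
    """Euler-integrated SIR; transmission drops to beta_cut at t_intervene."""
    S, I, R = 0.999, 0.001, 0.0
    for step in range(int(days / dt)):
        t = step * dt
        b = beta_cut if (t_intervene is not None and t >= t_intervene) else beta
        dS = -b * S * I * dt
        dI = (b * S * I - gamma * I) * dt
        dR = gamma * I * dt
        S, I, R = S + dS, I + dI, R + dR
    return R   # fraction of the population ever infected

for t0 in (None, 40, 20, 10):
    print(t0, round(sir_final_size(t_intervene=t0), 3))
```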

May 22, 2017 Posted by | Biology, Botany, Ecology, Epidemiology, Lectures | Leave a comment

Biodemography of aging (IV)

My working assumption as I was reading part two of the book was that I would not be covering that part of the book in much detail here, because it would simply be too much work to make such posts legible to the readership of this blog. Later, however, while writing this post, it occurred to me that since almost nobody reads along here anyway (I’m not complaining, mind you – this is how I like it these days), the main beneficiary of my blog posts will always be myself, which led to the related observation that I should not limit my coverage of interesting stuff here simply because some hypothetical and probably nonexistent readership out there might not be able to follow it. So when I started out writing this post I was working under the assumption that it would be my last post about the book, but I now feel sure that if I find the time I’ll add at least one more post about the book’s statistics coverage. On a related note I am explicitly making the observation here that this post was written for my benefit, not yours. You can read it if you like, or not, but it was not really written for you.

I have added bold in a few places to emphasize key concepts and observations from the quoted paragraphs and to make the post easier for me to navigate later (all the italics below, on the other hand, are those of the authors of the book).

Biodemography is a multidisciplinary branch of science that unites under its umbrella various analytic approaches aimed at integrating biological knowledge and methods and traditional demographic analyses to shed more light on variability in mortality and health across populations and between individuals. Biodemography of aging is a special subfield of biodemography that focuses on understanding the impact of processes related to aging on health and longevity.”

“Mortality rates as a function of age are a cornerstone of many demographic analyses. The longitudinal age trajectories of biomarkers add a new dimension to the traditional demographic analyses: the mortality rate becomes a function of not only age but also of these biomarkers (with additional dependence on a set of sociodemographic variables). Such analyses should incorporate dynamic characteristics of trajectories of biomarkers to evaluate their impact on mortality or other outcomes of interest. Traditional analyses using baseline values of biomarkers (e.g., Cox proportional hazards or logistic regression models) do not take into account these dynamics. One approach to the evaluation of the impact of biomarkers on mortality rates is to use the Cox proportional hazards model with time-dependent covariates; this approach is used extensively in various applications and is available in all popular statistical packages. In such a model, the biomarker is considered a time-dependent covariate of the hazard rate and the corresponding regression parameter is estimated along with standard errors to make statistical inference on the direction and the significance of the effect of the biomarker on the outcome of interest (e.g., mortality). However, the choice of the analytic approach should not be governed exclusively by its simplicity or convenience of application. It is essential to consider whether the method gives meaningful and interpretable results relevant to the research agenda. In the particular case of biodemographic analyses, the Cox proportional hazards model with time-dependent covariates is not the best choice.
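
For reference, the standard approach the authors advise against is easy to set up; the sketch below (my own, on simulated data with no real biomarker-mortality link) uses the lifelines package’s CoxTimeVaryingFitter on long-format data where raw biomarker values are simply carried forward between examinations – precisely the practice criticized above:

```python
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(0)
rows = []
for i in range(50):
    t, value = 0.0, rng.normal(120, 10)
    while True:
        gap = rng.uniform(1, 3)
        died = rng.random() < 0.15                    # toy dropout mechanism
        rows.append((i, t, t + gap, value, int(died)))
        t, value = t + gap, value + rng.normal(2, 4)  # biomarker drifts with age
        if died or t > 10:
            break

df = pd.DataFrame(rows, columns=["id", "start", "stop", "biomarker", "event"])
ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()
```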

“Longitudinal studies of aging present special methodological challenges due to inherent characteristics of the data that need to be addressed in order to avoid biased inference. The challenges are related to the fact that the populations under study (aging individuals) experience substantial dropout rates related to death or poor health and often have co-morbid conditions related to the disease of interest. The standard assumption made in longitudinal analyses (although usually not explicitly mentioned in publications) is that dropout (e.g., death) is not associated with the outcome of interest. While this can be safely assumed in many general longitudinal studies (where, e.g., the main causes of dropout might be the administrative end of the study or moving out of the study area, which are presumably not related to the studied outcomes), the very nature of the longitudinal outcomes (e.g., measurements of some physiological biomarkers) analyzed in a longitudinal study of aging assumes that they are (at least hypothetically) related to the process of aging. Because the process of aging leads to the development of diseases and, eventually, death, in longitudinal studies of aging an assumption of non-association of the reason for dropout and the outcome of interest is, at best, risky, and usually is wrong. As an illustration, we found that the average trajectories of different physiological indices of individuals dying at earlier ages markedly deviate from those of long-lived individuals, both in the entire Framingham original cohort […] and also among carriers of specific alleles […] In such a situation, panel compositional changes due to attrition affect the averaging procedure and modify the averages in the total sample. Furthermore, biomarkers are subject to measurement error and random biological variability. They are usually collected intermittently at examination times which may be sparse and typically biomarkers are not observed at event times. It is well known in the statistical literature that ignoring measurement errors and biological variation in such variables and using their observed “raw” values as time-dependent covariates in a Cox regression model may lead to biased estimates and incorrect inferences […] Standard methods of survival analysis such as the Cox proportional hazards model (Cox 1972) with time-dependent covariates should be avoided in analyses of biomarkers measured with errors because they can lead to biased estimates.

“Statistical methods aimed at analyses of time-to-event data jointly with longitudinal measurements have become known in the mainstream biostatistical literature as “joint models for longitudinal and time-to-event data” (“survival” or “failure time” are often used interchangeably with “time-to-event”) or simply “joint models.” This is an active and fruitful area of biostatistics with an explosive growth in recent years. […] The standard joint model consists of two parts, the first representing the dynamics of longitudinal data (which is referred to as the “longitudinal sub-model”) and the second one modeling survival or, generally, time-to-event data (which is referred to as the “survival sub-model”). […] Numerous extensions of this basic model have appeared in the joint modeling literature in recent decades, providing great flexibility in applications to a wide range of practical problems. […] The standard parameterization of the joint model (11.2) assumes that the risk of the event at age t depends on the current “true” value of the longitudinal biomarker at this age. While this is a reasonable assumption in general, it may be argued that additional dynamic characteristics of the longitudinal trajectory can also play a role in the risk of death or onset of a disease. For example, if two individuals at the same age have exactly the same level of some biomarker at this age, but the trajectory for the first individual increases faster with age than that of the second one, then the first individual can have worse survival chances for subsequent years. […] Therefore, extensions of the basic parameterization of joint models allowing for dependence of the risk of an event on such dynamic characteristics of the longitudinal trajectory can provide additional opportunities for comprehensive analyses of relationships between the risks and longitudinal trajectories. Several authors have considered such extended models. […] joint models are computationally intensive and are sometimes prone to convergence problems [however such] models provide more efficient estimates of the effect of a covariate […] on the time-to-event outcome in the case in which there is […] an effect of the covariate on the longitudinal trajectory of a biomarker. This means that analyses of longitudinal and time-to-event data in joint models may require smaller sample sizes to achieve comparable statistical power with analyses based on time-to-event data alone (Chen et al. 2011).”
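
A minimal generative sketch may make the two-part structure concrete; everything below is my own toy version with made-up parameters: a linear random-effects longitudinal sub-model m_i(t) = b0_i + b1_i·t and a survival sub-model whose hazard depends on the current “true” value of the trajectory:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_joint_model(n=300, lam0=0.01, gamma=0.03):
    """Longitudinal sub-model: true trajectory m_i(t) = b0_i + b1_i * t.
    Survival sub-model: hazard lambda_i(t) = lam0 * exp(gamma * m_i(t))."""
    b0 = rng.normal(100, 10, n)    # random intercepts
    b1 = rng.normal(1.0, 0.3, n)   # random slopes
    T = np.empty(n)
    for i in range(n):
        # inverse-transform sampling of the event time via the cumulative hazard
        u, t, H, dt = rng.random(), 0.0, 0.0, 0.01
        while H < -np.log(u) and t < 100:
            H += lam0 * np.exp(gamma * (b0[i] + b1[i] * t)) * dt
            t += dt
        T[i] = t
    return b1, T

b1, T = simulate_joint_model()
fast = b1 > np.median(b1)
# individuals with faster-rising trajectories die earlier on average
print(T[fast].mean(), T[~fast].mean())
```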

“To be useful as a tool for biodemographers and gerontologists who seek biological explanations for observed processes, models of longitudinal data should be based on realistic assumptions and reflect relevant knowledge accumulated in the field. An example is the shape of the risk functions. Epidemiological studies show that the conditional hazards of health and survival events considered as functions of risk factors often have U- or J-shapes […], so a model of aging-related changes should incorporate this information. In addition, risk variables, and, what is very important, their effects on the risks of corresponding health and survival events, experience aging-related changes and these can differ among individuals. […] An important class of models for joint analyses of longitudinal and time-to-event data incorporating a stochastic process for description of longitudinal measurements uses an epidemiologically-justified assumption of a quadratic hazard (i.e., U-shaped in general and J-shaped for variables that can take values only on one side of the U-curve) considered as a function of physiological variables. Quadratic hazard models have been developed and intensively applied in studies of human longitudinal data”.
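
As an illustration of the quadratic hazard idea (the functional forms and numbers below are my own stand-ins, not estimates from the book), the risk is lowest near an age-specific “optimal” biomarker value f(t) and rises on both sides of it:

```python
import numpy as np

def quadratic_hazard(t, x, mu0=0.001, theta=0.08, Q=2e-5):
    """U-shaped hazard: a baseline aging term plus a quadratic penalty for
    deviating from the age-specific optimal biomarker value f(t)."""
    f_t = 125 - 0.2 * t            # illustrative age trend of the optimum
    return mu0 * np.exp(theta * t) + Q * (x - f_t) ** 2

opt = 125 - 0.2 * 60               # the optimum at age 60
for x in (opt - 40, opt - 20, opt, opt + 20, opt + 40):
    print(round(x, 1), round(quadratic_hazard(60, x), 4))
```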

“Various approaches to statistical model building and data analysis that incorporate unobserved heterogeneity are ubiquitous in different scientific disciplines. Unobserved heterogeneity in models of health and survival outcomes can arise because there may be relevant risk factors affecting an outcome of interest that are either unknown or not measured in the data. Frailty models introduce the concept of unobserved heterogeneity in survival analysis for time-to-event data. […] Individual age trajectories of biomarkers can differ due to various observed as well as unobserved (and unknown) factors and such individual differences propagate to differences in risks of related time-to-event outcomes such as the onset of a disease or death. […] The joint analysis of longitudinal and time-to-event data is the realm of a special area of biostatistics named “joint models for longitudinal and time-to-event data” or simply “joint models” […] Approaches that incorporate heterogeneity in populations through random variables with continuous distributions (as in the standard joint models and their extensions […]) assume that the risks of events and longitudinal trajectories follow similar patterns for all individuals in a population (e.g., that biomarkers change linearly with age for all individuals). Although such homogeneity in patterns can be justifiable for some applications, generally this is a rather strict assumption […] A population under study may consist of subpopulations with distinct patterns of longitudinal trajectories of biomarkers that can also have different effects on the time-to-event outcome in each subpopulation. When such subpopulations can be defined on the base of observed covariate(s), one can perform stratified analyses applying different models for each subpopulation. However, observed covariates may not capture the entire heterogeneity in the population in which case it may be useful to conceive of the population as consisting of latent subpopulations defined by unobserved characteristics. Special methodological approaches are necessary to accommodate such hidden heterogeneity. Within the joint modeling framework, a special class of models, joint latent class models, was developed to account for such heterogeneity […] The joint latent class model has three components. First, it is assumed that a population consists of a fixed number of (latent) subpopulations. The latent class indicator represents the latent class membership and the probability of belonging to the latent class is specified by a multinomial logistic regression function of observed covariates. It is assumed that individuals from different latent classes have different patterns of longitudinal trajectories of biomarkers and different risks of event. The key assumption of the model is conditional independence of the biomarker and the time-to-events given the latent classes. Then the class-specific models for the longitudinal and time-to-event outcomes constitute the second and third component of the model thus completing its specification. […] the latent class stochastic process model […] provides a useful tool for dealing with unobserved heterogeneity in joint analyses of longitudinal and time-to-event outcomes and taking into account hidden components of aging in their joint influence on health and longevity. This approach is also helpful for sensitivity analyses in applications of the original stochastic process model. 
We recommend starting the analyses with the original stochastic process model and estimating the model ignoring possible hidden heterogeneity in the population. Then the latent class stochastic process model can be applied to test hypotheses about the presence of hidden heterogeneity in the data in order to appropriately adjust the conclusions if a latent structure is revealed.”
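
The three components of the joint latent class model can be sketched generatively as follows (toy numbers throughout; the class-specific longitudinal and survival sub-models are only hinted at via class-specific slopes and baseline hazards, and conditional independence given the class is implicit in drawing them separately):

```python
import numpy as np

rng = np.random.default_rng(2)

def class_probabilities(covariates, W):
    """Component 1: multinomial logistic model for latent class membership."""
    scores = covariates @ W
    exps = np.exp(scores - scores.max())
    return exps / exps.sum()

# components 2 and 3: class-specific longitudinal and survival sub-models,
# represented here only by class-specific slopes and baseline hazards
slopes = [0.5, 2.0]
lam0 = [0.01, 0.04]

W = np.array([[0.0, 0.3],          # intercept effects on class membership
              [0.0, -0.5]])        # effect of one observed covariate
covs = np.array([1.0, 0.7])        # intercept + covariate value

p = class_probabilities(covs, W)
k = rng.choice(2, p=p)             # draw the latent class of this individual
print("class probabilities:", p, "-> class", k,
      "slope:", slopes[k], "baseline hazard:", lam0[k])
```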

The longitudinal genetic-demographic model (or the genetic-demographic model for longitudinal data) […] combines three sources of information in the likelihood function: (1) follow-up data on survival (or, generally, on some time-to-event) for genotyped individuals; (2) (cross-sectional) information on ages at biospecimen collection for genotyped individuals; and (3) follow-up data on survival for non-genotyped individuals. […] Such joint analyses of genotyped and non-genotyped individuals can result in substantial improvements in statistical power and accuracy of estimates compared to analyses of the genotyped subsample alone if the proportion of non-genotyped participants is large. Situations in which genetic information cannot be collected for all participants of longitudinal studies are not uncommon. They can arise for several reasons: (1) the longitudinal study may have started some time before genotyping was added to the study design so that some initially participating individuals dropped out of the study (i.e., died or were lost to follow-up) by the time of genetic data collection; (2) budget constraints prohibit obtaining genetic information for the entire sample; (3) some participants refuse to provide samples for genetic analyses. Nevertheless, even when genotyped individuals constitute a majority of the sample or the entire sample, application of such an approach is still beneficial […] The genetic stochastic process model […] adds a new dimension to genetic biodemographic analyses, combining information on longitudinal measurements of biomarkers available for participants of a longitudinal study with follow-up data and genetic information. Such joint analyses of different sources of information collected in both genotyped and non-genotyped individuals allow for more efficient use of the research potential of longitudinal data which otherwise remains underused when only genotyped individuals or only subsets of available information (e.g., only follow-up data on genotyped individuals) are involved in analyses. Similar to the longitudinal genetic-demographic model […], the benefits of combining data on genotyped and non-genotyped individuals in the genetic SPM come from the presence of common parameters describing characteristics of the model for genotyped and non-genotyped subsamples of the data. This takes into account the knowledge that the non-genotyped subsample is a mixture of carriers and non-carriers of the same alleles or genotypes represented in the genotyped subsample and applies the ideas of heterogeneity analyses […] When the non-genotyped subsample is substantially larger than the genotyped subsample, these joint analyses can lead to a noticeable increase in the power of statistical estimates of genetic parameters compared to estimates based only on information from the genotyped subsample. This approach is applicable not only to genetic data but to any discrete time-independent variable that is observed only for a subsample of individuals in a longitudinal study.
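
The key device – genotype-specific parameters shared between the genotyped and non-genotyped subsamples, with the latter contributing a mixture over genotypes to the likelihood – can be illustrated with a toy exponential-hazard version (my own construction, not the authors’ parameterization):

```python
import numpy as np

LAM = (0.02, 0.05)   # genotype-specific hazards, shared by both subsamples

def density(t, genotype):
    """Death-time density under a toy constant (exponential) hazard."""
    return LAM[genotype] * np.exp(-LAM[genotype] * t)

def lik_genotyped(t, genotype):
    # a genotyped death at age t contributes its genotype's density directly
    return density(t, genotype)

def lik_nongenotyped(t, p_carrier=0.3):
    # a non-genotyped death at age t contributes a mixture over genotypes,
    # weighted by the population carrier frequency
    return p_carrier * density(t, 1) + (1 - p_carrier) * density(t, 0)

print(lik_genotyped(70, 0), lik_genotyped(70, 1), lik_nongenotyped(70))
```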

“Despite an existing tradition of interpreting differences in the shapes or parameters of the mortality rates (survival functions) resulting from the effects of exposure to different conditions or other interventions in terms of characteristics of individual aging, this practice has to be used with care. This is because such characteristics are difficult to interpret in terms of properties of external and internal processes affecting the chances of death. An important question then is: What kind of mortality model has to be developed to obtain parameters that are biologically interpretable? The purpose of this chapter is to describe an approach to mortality modeling that represents mortality rates in terms of parameters of physiological changes and declining health status accompanying the process of aging in humans. […] A traditional (demographic) description of changes in individual health/survival status is performed using a continuous-time random Markov process with a finite number of states, and age-dependent transition intensity functions (transition rates). Transitions to the absorbing state are associated with death, and the corresponding transition intensity is a mortality rate. Although such a description characterizes connections between health and mortality, it does not allow for studying factors and mechanisms involved in the aging-related health decline. Numerous epidemiological studies provide compelling evidence that health transition rates are influenced by a number of factors. Some of them are fixed at the time of birth […]. Others experience stochastic changes over the life course […] The presence of such randomly changing influential factors violates the Markov assumption, and makes the description of aging-related changes in health status more complicated. […] The age dynamics of influential factors (e.g., physiological variables) in connection with mortality risks has been described using a stochastic process model of human mortality and aging […]. Recent extensions of this model have been used in analyses of longitudinal data on aging, health, and longevity, collected in the Framingham Heart Study […] This model and its extensions are described in terms of a Markov stochastic process satisfying a diffusion-type stochastic differential equation. The stochastic process is stopped at random times associated with individuals’ deaths. […] When an individual’s health status is taken into account, the coefficients of the stochastic differential equations become dependent on values of the jumping process. This dependence violates the Markov assumption and renders the conditional Gaussian property invalid. So the description of this (continuously changing) component of aging-related changes in the body also becomes more complicated. Since studying age trajectories of physiological states in connection with changes in health status and mortality would provide more realistic scenarios for analyses of available longitudinal data, it would be a good idea to find an appropriate mathematical description of the joint evolution of these interdependent processes in aging organisms. For this purpose, we propose a comprehensive model of human aging, health, and mortality in which the Markov assumption is fulfilled by a two-component stochastic process consisting of jumping and continuously changing processes.
The jumping component is used to describe relatively fast changes in health status occurring at random times, and the continuous component describes relatively slow stochastic age-related changes of individual physiological states. […] The use of stochastic differential equations for random continuously changing covariates has been studied intensively in the analysis of longitudinal data […] Such a description is convenient since it captures the feedback mechanism typical of biological systems reflecting regular aging-related changes and takes into account the presence of random noise affecting individual trajectories. It also captures the dynamic connections between aging-related changes in health and physiological states, which are important in many applications.”
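For readers who want at least the skeleton of the model: in the quadratic-hazard tradition these authors work in, the continuous component is typically written as a stochastic differential equation with negative feedback, and mortality as a quadratic function of the deviation of the physiological state from an ‘optimal’ trajectory – something like the following (my paraphrase of the standard setup, not a quote from the book):

```latex
dY(t) = a(Z_t, t)\,\bigl(f(Z_t, t) - Y(t)\bigr)\,dt + b(t)\,dW(t),
\qquad
\mu(t) = \mu_0(Z_t, t) + \bigl(Y(t) - f_1(Z_t, t)\bigr)^{\top} Q(Z_t, t)\,\bigl(Y(t) - f_1(Z_t, t)\bigr)
```

Here Y(t) is the vector of physiological indices, W(t) a Wiener process, f the level towards which the feedback mechanism pulls the trajectory, f_1 the mortality-risk-minimizing state, and Z_t the jumping health-status process. When the coefficients depend on Z_t, the process Y(t) taken alone is no longer Markov – this is the violation discussed above – but the pair (Y(t), Z_t) is, which is the trick behind the two-component construction.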

April 23, 2017 Posted by | Biology, Books, Demographics, Genetics, Mathematics, Statistics

Biodemography of aging (III)

Latent class representation of the Grade of Membership model.
Singular value decomposition.
Affine space.
Lebesgue measure.
General linear position.

The links above are links to topics I looked up while reading the second half of the book. The first link is quite relevant to the book’s coverage as a comprehensive longitudinal Grade of Membership (-GoM) model is covered in chapter 17. Relatedly, chapter 18 covers linear latent structure (-LLS) models, and as observed in the book LLS is a generalization of GoM. As should be obvious from the nature of the links some of the stuff included in the second half of the text is highly technical, and I’ll readily admit I was not fully able to understand all the details included in the coverage of chapters 17 and 18 in particular. On account of the technical nature of the coverage in Part 2 I’m not sure I’ll cover the second half of the book in much detail, though I probably shall devote at least one more post to some of those topics, as they were quite interesting even if some of the details were difficult to follow.

I have almost finished the book at this point, and I have already decided to both give the book five stars and include it on my list of favorite books on goodreads; it’s really well written, and it provides consistently detailed, high-quality coverage. As I also noted in the first post about the book the authors have given readability aspects some thought, and I am sure most readers would learn quite a bit from this text even if they were to skip some of the more technical chapters. The main body of Part 2 of the book, the subtitle of which is ‘Statistical Modeling of Aging, Health, and Longevity’, is, however, probably not worth the effort of reading unless you have a solid background in statistics.

This post includes some observations and quotes from the last chapters of the book’s Part 1.

“The proportion of older adults in the U.S. population is growing. This raises important questions about the increasing prevalence of aging-related diseases, multimorbidity issues, and disability among the elderly population. […] In 2009, 46.3 million people were covered by Medicare: 38.7 million of them were aged 65 years and older, and 7.6 million were disabled […]. By 2031, when the baby-boomer generation will be completely enrolled, Medicare is expected to reach 77 million individuals […]. Because the Medicare program covers 95 % of the nation’s aged population […], the prediction of future Medicare costs based on these data can be an important source of health care planning.”

“Three essential components (which could also be referred to as sub-models) need to be developed to construct a modern model of forecasting of population health and associated medical costs: (i) a model of medical cost projections conditional on each health state in the model, (ii) health state projections, and (iii) a description of the distribution of initial health states of a cohort to be projected […] In making medical cost projections, two major effects should be taken into account: the dynamics of the medical costs during the time periods comprising the date of onset of chronic diseases and the increase of medical costs during the last years of life. In this chapter, we investigate and model the first of these two effects. […] the approach developed in this chapter generalizes the approach known as “life tables with covariates” […], resulting in a new family of forecasting models with covariates such as comorbidity indexes or medical costs. In sum, this chapter develops a model of the relationships between individual cost trajectories following the onset of aging-related chronic diseases. […] The underlying methodological idea is to aggregate the health state information into a single (or several) covariate(s) that can be determinative in predicting the risk of a health event (e.g., disease incidence) and whose dynamics could be represented by the model assumptions. An advantage of such an approach is its substantial reduction of the degrees of freedom compared with existing forecasting models (e.g., the FEM model, Goldman and RAND Corporation 2004). […] We found that the time patterns of medical cost trajectories were similar for all diseases considered and can be described in terms of four components having the meanings of (i) the pre-diagnosis cost associated with initial comorbidity represented by medical expenditures, (ii) the cost peak associated with the onset of each disease, (iii) the decline/reduction in medical expenditures after the disease onset, and (iv) the difference between post- and pre-diagnosis cost levels associated with an acquired comorbidity. The description of the trajectories was formalized by a model which explicitly involves four parameters reflecting these four components.”
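To make the four-component description concrete, here is one way such a trajectory could be parameterized – an illustrative sketch of mine, not the authors’ exact functional form: a flat pre-diagnosis level, a cost spike at onset, an exponential post-onset decline, and a post-diagnosis plateau that differs from the pre-diagnosis level.

```python
import numpy as np

def cost_trajectory(t, pre, peak, decay, post_shift):
    """Illustrative monthly medical-cost trajectory around disease onset (t = 0).
    pre        -- (i)   pre-diagnosis cost level (initial comorbidity)
    peak       -- (ii)  excess cost in the month of onset
    decay      -- (iii) rate of post-onset decline, per month
    post_shift -- (iv)  post- minus pre-diagnosis cost level (acquired comorbidity)
    """
    t = np.asarray(t, dtype=float)
    after = pre + post_shift + (peak - post_shift) * np.exp(-decay * t)
    return np.where(t < 0.0, pre, after)

# Parameter values below are made up, purely for illustration:
months = np.arange(-12, 21)  # one year before onset to 20 months after
costs = cost_trajectory(months, pre=500.0, peak=4000.0, decay=0.4, post_shift=300.0)
```

At onset the trajectory jumps to pre + peak and then relaxes towards pre + post_shift; fitting these four parameters to observed cost curves, disease by disease, is essentially what the chapter’s model does.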

As I noted earlier in my coverage of the book, I don’t think the model above fully captures all relevant cost contributions of the diseases included, as the follow-up period was too short to capture all the costs relevant to the part (iv) model component. This is definitely a problem in the context of diabetes. But then again nothing in theory stops people from combining the model above with other models which are better at dealing with the excess costs associated with long-term complications of chronic diseases, and the model results were intriguing even if the model likely underperforms in a few specific disease contexts.

Moving on…

“Models of medical cost projections usually are based on regression models estimated with the majority of independent predictors describing demographic status of the individual, patient’s health state, and level of functional limitations, as well as their interactions […]. If the health state needs to be described by a number of simultaneously manifested diseases, then detailed stratification over the categorized variables or use of multivariate regression models allows for a better description of the health states. However, it can result in an abundance of model parameters to be estimated. One way to overcome these difficulties is to use an approach in which the model components are demographically-based aggregated characteristics that mimic the effects of specific states. The model developed in this chapter is an example of such an approach: the use of a comorbidity index rather than of a set of correlated categorical regressor variables to represent the health state allows for an essential reduction in the degrees of freedom of the problem.”
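The degrees-of-freedom point is easy to see: k binary disease indicators define up to 2^k distinct strata (and a regression with all their interactions needs correspondingly many parameters), whereas a weighted index collapses the health state into a single regressor. A minimal sketch of a Charlson-style index follows; the weights are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical Charlson-style weights (illustration only, not the published index)
WEIGHTS = {"mi": 1, "chf": 1, "diabetes": 1, "renal": 2, "cancer": 2, "metastatic": 6}

def comorbidity_index(conditions):
    """Collapse a set of diagnosed conditions into one numeric covariate."""
    return sum(WEIGHTS.get(c, 0) for c in conditions)

comorbidity_index({"diabetes", "renal"})  # -> 3: one regressor instead of 2**k strata
```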

“Unlike mortality, the onset time of chronic disease is difficult to define with high precision due to the large variety of disease-specific criteria for onset/incident case identification […] there is always some arbitrariness in defining the date of chronic disease onset, and a unified definition of date of onset is necessary for population studies with a long-term follow-up.”

“Individual age trajectories of physiological indices are the product of a complicated interplay among genetic and non-genetic (environmental, behavioral, stochastic) factors that influence the human body during the course of aging. Accordingly, they may differ substantially among individuals in a cohort. Despite this fact, the average age trajectories for the same index follow remarkable regularities. […] some indices tend to change monotonically with age: the level of blood glucose (BG) increases almost monotonically; pulse pressure (PP) increases from age 40 until age 85, then levels off and shows a tendency to decline only at later ages. The age trajectories of other indices are non-monotonic: they tend to increase first and then decline. Body mass index (BMI) increases up to about age 70 and then declines, diastolic blood pressure (DBP) increases until age 55–60 and then declines, systolic blood pressure (SBP) increases until age 75 and then declines, serum cholesterol (SCH) increases until age 50 in males and age 70 in females and then declines, ventricular rate (VR) increases until age 55 in males and age 45 in females and then declines. With small variations, these general patterns are similar in males and females. The shapes of the age-trajectories of the physiological variables also appear to be similar for different genotypes. […] The effects of these physiological indices on mortality risk were studied in Yashin et al. (2006), who found that the effects are gender and age specific. They also found that the dynamic properties of the individual age trajectories of physiological indices may differ dramatically from one individual to the next.”

“An increase in the mortality rate with age is traditionally associated with the process of aging. This influence is mediated by aging-associated changes in thousands of biological and physiological variables, some of which have been measured in aging studies. The fact that the age trajectories of some of these variables differ among individuals with short and long life spans and healthy life spans indicates that dynamic properties of the indices affect life history traits. Our analyses of the FHS data clearly demonstrate that the values of physiological indices at age 40 are significant contributors both to life span and healthy life span […] suggesting that normalizing these variables around age 40 is important for preventing age-associated morbidity and mortality later in life. […] results [also] suggest that keeping physiological indices stable over the years of life could be as important as their normalizing around age 40.”

“The results […] indicate that, in the quest of identifying longevity genes, it may be important to look for candidate genes with pleiotropic effects on more than one dynamic characteristic of the age-trajectory of a physiological variable, such as genes that may influence both the initial value of a trait (intercept) and the rates of its changes over age (slopes). […] Our results indicate that the dynamic characteristics of age-related changes in physiological variables are important predictors of morbidity and mortality risks in aging individuals. […] We showed that the initial value (intercept), the rate of changes (slope), and the variability of a physiological index, in the age interval 40–60 years, significantly influenced both mortality risk and onset of unhealthy life at ages 60+ in our analyses of the Framingham Heart Study data. That is, these dynamic characteristics may serve as good predictors of late life morbidity and mortality risks. The results also suggest that physiological changes taking place in the organism in middle life may affect longevity through promoting or preventing diseases of old age. For non-monotonically changing indices, we found that having a later age at the peak value of the index […], a lower peak value […], a slower rate of decline in the index at older ages […], and less variability in the index over time, can be beneficial for longevity. Also, the dynamic characteristics of the physiological indices were, overall, associated with mortality risk more significantly than with onset of unhealthy life.”

“Decades of studies of candidate genes show that they are not linked to aging-related traits in a straightforward manner […]. Recent genome-wide association studies (GWAS) have reached fundamentally the same conclusion by showing that the traits in late life likely are controlled by a relatively large number of common genetic variants […]. Further, GWAS often show that the detected associations are of tiny effect […] the weak effect of genes on traits in late life can be not only because they confer small risks having small penetrance but also because they confer large risks in a complex fashion […] In this chapter, we consider several examples of complex modes of gene actions, including genetic tradeoffs, antagonistic genetic effects on the same traits at different ages, and variable genetic effects on lifespan. The analyses focus on the APOE common polymorphism. […] The analyses reported in this chapter suggest that the e4 allele can be protective against cancer with a more pronounced role in men. This protective effect is more characteristic of cancers at older ages and it holds in both the parental and offspring generations of the FHS participants. Unlike cancer, the effect of the e4 allele on risks of CVD is more pronounced in women. […] [The] results […] explicitly show that the same allele can change its role on risks of CVD in an antagonistic fashion from detrimental in women with onsets at younger ages to protective in women with onsets at older ages. […] e4 allele carriers have worse survival compared to non-e4 carriers in each cohort. […] Sex stratification shows sexual dimorphism in the effect of the e4 allele on survival […] with the e4 female carriers, particularly, being more exposed to worse survival. […] The results of these analyses provide two important insights into the role of genes in lifespan. First, they provide evidence on the key role of aging-related processes in genetic susceptibility to lifespan. For example, taking into account the specifics of aging-related processes gains 18 % in estimates of the RRs and five orders of magnitude in significance in the same sample of women […] without additional investments in increasing sample sizes and new genotyping. The second is that a detailed study of the role of aging-related processes in estimates of the effects of genes on lifespan (and healthspan) helps in detecting more homogeneous [high risk] sub-samples”.

“The aging of populations in developed countries requires effective strategies to extend healthspan. A promising solution could be to yield insights into the genetic predispositions for endophenotypes, diseases, well-being, and survival. It was thought that genome-wide association studies (GWAS) would be a major breakthrough in this endeavor. Various genetic association studies including GWAS assume that there should be a deterministic (unconditional) genetic component in such complex phenotypes. However, the idea of unconditional contributions of genes to these phenotypes faces serious difficulties which stem from the lack of direct evolutionary selection against or in favor of such phenotypes. In fact, evolutionary constraints imply that genes should be linked to age-related phenotypes in a complex manner through different mechanisms specific for given periods of life. Accordingly, the linkage between genes and these traits should be strongly modulated by age-related processes in a changing environment, i.e., by the individuals’ life course. The inherent sensitivity of genetic mechanisms of complex health traits to the life course will be a key concern as long as genetic discoveries continue to be aimed at improving human health.”

“Despite the common understanding that age is a risk factor of not just one but a large portion of human diseases in late life, each specific disease is typically considered as a stand-alone trait. Independence of diseases was a plausible hypothesis in the era of infectious diseases caused by different strains of microbes. Unlike those diseases, the exact etiology and precursors of diseases in late life are still elusive. It is clear, however, that the origin of these diseases differs from that of infectious diseases and that age-related diseases reflect a complicated interplay among ontogenetic changes, senescence processes, and damages from exposures to environmental hazards. Studies of the determinants of diseases in late life provide insights into a number of risk factors, apart from age, that are common for the development of many health pathologies. The presence of such common risk factors makes chronic diseases and hence risks of their occurrence interdependent. This means that the results of many calculations using the assumption of disease independence should be used with care. Chapter 4 argued that disregarding potential dependence among diseases may seriously bias estimates of potential gains in life expectancy attributable to the control or elimination of a specific disease and that the results of the process of coping with a specific disease will depend on the disease elimination strategy, which may affect mortality risks from other diseases.”

April 17, 2017 Posted by | Biology, Books, Cancer/oncology, Demographics, Economics, Epidemiology, Genetics, Health Economics, Medicine, Statistics

Biodemography of aging (I)

“The goal of this monograph is to show how questions about the connections between and among aging, health, and longevity can be addressed using the wealth of available accumulated knowledge in the field, the large volumes of genetic and non-genetic data collected in longitudinal studies, and advanced biodemographic models and analytic methods. […] This monograph visualizes aging-related changes in physiological variables and survival probabilities, describes methods, and summarizes the results of analyses of longitudinal data on aging, health, and longevity in humans performed by the group of researchers in the Biodemography of Aging Research Unit (BARU) at Duke University during the past decade. […] the focus of this monograph is studying dynamic relationships between aging, health, and longevity characteristics […] our focus on biodemography/biomedical demography meant that we needed to have an interdisciplinary and multidisciplinary biodemographic perspective spanning the fields of actuarial science, biology, economics, epidemiology, genetics, health services research, mathematics, probability, and statistics, among others.”

The quotes above are from the book’s preface. In case this aspect was not clear from the comments above, this is the kind of book where you’ll randomly encounter sentences like these:

“The simplest model describing negative correlations between competing risks is the multivariate lognormal frailty model. We illustrate the properties of such a model for the bivariate case.”

“The time-to-event sub-model specifies the latent class-specific expressions for the hazard rates conditional on the vector of biomarkers Yt and the vector of observed covariates X …”

…which means that some parts of the book are really hard to blog; it simply takes more effort to deal with this stuff here than it’s worth. As a result of this my coverage of the book will not provide a remotely ‘balanced view’ of the topics covered in it; I’ll skip a lot of the technical stuff because I don’t think it makes much sense to cover specific models and algorithms included in the book in detail here. However, I should probably also emphasize while on this topic that although the book is in general not an easy read, it’s hard to read because ‘this stuff is complicated’, not because the authors are not trying. The authors in fact make it clear already in the preface that some chapters are easier to read than others and that some chapters are actually deliberately written as ‘guideposts and way-stations’, as they put it, in order to make it easier for the reader to find the stuff in which he or she is most interested (“the interested reader can focus directly on the chapters/sections of greatest interest without having to read the entire volume”) – they have definitely given readability aspects some thought, and I very much like the book so far; it’s full of great stuff and it’s very well written.

I have had occasion to question a few of the observations they’ve made, for example I was a bit skeptical about a few of the conclusions they drew in chapter 6 (‘Medical Cost Trajectories and Onset of Age-Associated Diseases’), but this was related to what some would certainly consider to be minor details. In the chapter they describe a model of medical cost trajectories where the post-diagnosis follow-up period is 20 months; this is in my view much too short a follow-up period to draw conclusions about medical cost trajectories in the context of type 2 diabetes, one of the diseases included in the model, which I know because I’m intimately familiar with the literature on that topic; you need to look 7–10 years ahead to get a proper sense of how this variable develops over time – and it really is highly relevant to include those later years, because if you do not you may miss out on a large proportion of the total cost given that a substantial proportion of the total cost of diabetes relates to complications which tend to take some years to develop. If your cost analysis is based on a follow-up period as short as that of that model you may also on a related note draw faulty conclusions about which medical procedures and subsidies are sensible/cost effective in the setting of these patients, because highly adherent patients may be significantly more expensive in a short-run analysis like this one (they show up to their medical appointments and take their medications…) but much cheaper in the long run (…because they take their medications they don’t go blind or develop kidney failure). But as I say, it’s a minor point – this was one condition out of 20 included in the analysis they present, and if they’d addressed all the things that pedants like me might take issue with, the book would be twice as long and it would likely no longer be readable. Relatedly, the model they discuss in that chapter is far from unsalvageable; it’s just that one of the components of interest – ‘the difference between post- and pre-diagnosis cost levels associated with an acquired comorbidity’ – in the case of at least one disease is highly unlikely to be correct (given the authors’ interpretation of the variable), because there’s some stuff of relevance which the model does not include. I found the model quite interesting, despite the shortcomings, and the results were definitely surprising. (No, the above does not in my opinion count as an example of coverage of a ‘specific model […] in detail’. Or maybe it does, but I included no equations. On reflection I probably can’t promise much more than that, sometimes the details are interesting…)

Anyway, below I’ve added some quotes from the first few chapters of the book and a few remarks along the way.

“The genetics of aging, longevity, and mortality has become the subject of intensive analyses […]. However, most estimates of genetic effects on longevity in GWAS have not reached genome-wide statistical significance (after applying the Bonferroni correction for multiple testing) and many findings remain non-replicated. Possible reasons for slow progress in this field include the lack of a biologically-based conceptual framework that would drive development of statistical models and methods for genetic analyses of data [here I was reminded of Burnham & Anderson’s coverage, in particular their criticism of mindless ‘Let the computer find out’-strategies – the authors of that chapter seem to share their skepticism…], the presence of hidden genetic heterogeneity, the collective influence of many genetic factors (each with small effects), the effects of rare alleles, and epigenetic effects, as well as molecular biological mechanisms regulating cellular functions. […] Decades of studies of candidate genes show that they are not linked to aging-related traits in a straightforward fashion (Finch and Tanzi 1997; Martin 2007). Recent genome-wide association studies (GWAS) have supported this finding by showing that the traits in late life are likely controlled by a relatively large number of common genetic variants […]. Further, GWAS often show that the detected associations are of tiny size (Stranger et al. 2011).”

I think this ties in well with what I’ve previously read on these and related topics – see e.g. the second-last paragraph quoted in my coverage of Richard Alexander’s book, or some of the remarks included in Roberts et al. Anyway, moving on:

“It is well known from epidemiology that values of variables describing physiological states at a given age are associated with human morbidity and mortality risks. Much less well known are the facts that not only the values of these variables at a given age, but also characteristics of their dynamic behavior during the life course are also associated with health and survival outcomes. This chapter [chapter 8 in the book, US] shows that, for monotonically changing variables, the value at age 40 (intercept), the rate of change (slope), and the variability of a physiological variable, at ages 40–60, significantly influence both health-span and longevity after age 60. For non-monotonically changing variables, the age at maximum, the maximum value, the rate of decline after reaching the maximum (right slope), and the variability in the variable over the life course may influence health-span and longevity. This indicates that such characteristics can be important targets for preventive measures aiming to postpone onsets of complex diseases and increase longevity.”
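In practice the ‘intercept, slope, variability’ characteristics can be extracted for each individual with something as simple as an ordinary least-squares fit over the relevant age window. The sketch below is mine, not the authors’ estimation procedure (which embeds these quantities in a joint model rather than computing them in two stages):

```python
import numpy as np

def trajectory_features(ages, values, anchor_age=40.0):
    """Return (value at anchor age, rate of change, residual variability)
    for one individual's repeated measurements of a physiological index."""
    ages = np.asarray(ages, dtype=float)
    values = np.asarray(values, dtype=float)
    # OLS line centered at the anchor age, so the intercept is the fitted value at 40
    slope, intercept = np.polyfit(ages - anchor_age, values, deg=1)
    residuals = values - (intercept + slope * (ages - anchor_age))
    return intercept, slope, residuals.std(ddof=1)

# e.g. (hypothetical) systolic blood pressure at five exams between ages 40 and 60
sbp_at_40, sbp_slope, sbp_sd = trajectory_features(
    [41, 45, 50, 55, 59], [118, 122, 127, 131, 138])
```

The three numbers returned per individual would then enter a hazard model for death or disease onset as covariates.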

The chapter from which the quotes in the next two paragraphs are taken was completely filled with data from the Framingham Heart Study, and it was hard for me to know what to include here and what to leave out – so you should probably just consider the stuff I’ve included below as samples of the sort of observations included in that part of the coverage.

“To mediate the influence of internal or external factors on lifespan, physiological variables have to show associations with risks of disease and death at different age intervals, or directly with lifespan. For many physiological variables, such associations have been established in epidemiological studies. These include body mass index (BMI), diastolic blood pressure (DBP), systolic blood pressure (SBP), pulse pressure (PP), blood glucose (BG), serum cholesterol (SCH), hematocrit (H), and ventricular rate (VR). […] the connection between BMI and mortality risk is generally J-shaped […] Although all age patterns of physiological indices are non-monotonic functions of age, blood glucose (BG) and pulse pressure (PP) can be well approximated by monotonically increasing functions for both genders. […] the average values of body mass index (BMI) increase with age (up to age 55 for males and 65 for females), and then decline for both sexes. These values do not change much between ages 50 and 70 for males and between ages 60 and 70 for females. […] Except for blood glucose, all average age trajectories of physiological indices differ between males and females. Statistical analysis confirms the significance of these differences. In particular, after age 35 the female BMI increases faster than that of males. […] [When comparing women with less than or equal to 11 years of education [‘LE’] to women with 12 or more years of education [‘HE’]:] The average values of BG for both groups are about the same until age 45. Then the BG curve for the LE females becomes higher than that of the HE females until age 85 where the curves intersect. […] The average values of BMI in the LE group are substantially higher than those among the HE group over the entire age interval. […] The average values of BG for the HE and LE males are very similar […] However, the differences between groups are much smaller than for females.”

They also in the chapter compared individuals with short life-spans [‘SL’, died before the age of 75] and those with long life-spans [‘LL’, 100 longest-living individuals in the relevant sample] to see if the variables/trajectories looked different. They did, for example: “trajectories for the LL females are substantially different from those for the SL females in all eight indices. Specifically, the average values of BG are higher and increase faster in the SL females. The entire age trajectory of BMI for the LL females is shifted to the right […] The average values of DBP [diastolic blood pressure, US] among the SL females are higher […] A particularly notable observation is the shift of the entire age trajectory of BMI for the LL males and females to the right (towards an older age), as compared with the SL group, and achieving its maximum at a later age. Such a pattern is markedly different from that for healthy and unhealthy individuals. The latter is mostly characterized by the higher values of BMI for the unhealthy people, while it has similar ages at maximum for both the healthy and unhealthy groups. […] Physiological aging changes usually develop in the presence of other factors affecting physiological dynamics and morbidity/mortality risks. Among these other factors are year of birth, gender, education, income, occupation, smoking, and alcohol use. An important limitation of most longitudinal studies is the lack of information regarding external disturbances affecting individuals in their day-to-day life.”

I incidentally noted while I was reading that chapter that a relevant variable ‘lurking in the shadows’ in the context of the male and female BMI trajectories might be changing smoking habits over time; I have not looked at US data on this topic, but I do know that the smoking patterns of Danish males and females during the latter half of the last century were markedly different and changed really quite dramatically in just a few decades; a lot more males than females smoked in the 1960s, whereas the proportions of male and female smokers today are much more similar, because a lot of males have given up smoking (I refer Danish readers to this blog post which I wrote some years ago on these topics). The authors of the chapter incidentally do look a little at data on smokers and they observe that smokers’ BMI is lower than that of non-smokers (not surprising), and that the smokers’ BMI curve (displaying the relationship between BMI and age) grows at a slower rate than the BMI curve of non-smokers (that this was to be expected is perhaps less clear, at least to me – the authors don’t interpret these specific numbers, they just report them).

The next chapter is one of the chapters in the book dealing with the SEER data I also mentioned not long ago in the context of my coverage of Bueno et al. Some sample quotes from that chapter below:

“To better address the challenge of “healthy aging” and to reduce economic burdens of aging-related diseases, key factors driving the onset and progression of diseases in older adults must be identified and evaluated. An identification of disease-specific age patterns with sufficient precision requires large databases that include various age-specific population groups. Collections of such datasets are costly and require long periods of time. That is why few studies have investigated disease-specific age patterns among older U.S. adults and there is limited knowledge of factors impacting these patterns. […] Information collected in U.S. Medicare Files of Service Use (MFSU) for the entire Medicare-eligible population of older U.S. adults can serve as an example of observational administrative data that can be used for analysis of disease-specific age patterns. […] In this chapter, we focus on a series of epidemiologic and biodemographic characteristics that can be studied using MFSU.”

“Two datasets capable of generating national level estimates for older U.S. adults are the Surveillance, Epidemiology, and End Results (SEER) Registry data linked to MFSU (SEER-M) and the National Long Term Care Survey (NLTCS), also linked to MFSU (NLTCS-M). […] The SEER-M data are the primary dataset analyzed in this chapter. The expanded SEER registry covers approximately 26 % of the U.S. population. In total, the Medicare records for 2,154,598 individuals are available in SEER-M […] For the majority of persons, we have continuous records of Medicare services use from 1991 (or from the time the person reached age 65 after 1990) to his/her death. […] The NLTCS-M data contain two of the six waves of the NLTCS: namely, the cohorts of years 1994 and 1999. […] In total, 34,077 individuals were followed-up between 1994 and 1999. These individuals were given the detailed NLTCS interview […] which has information on risk factors. More than 200 variables were selected”

In short, these data sets are very large, and contain a lot of information. Here are some results/data:

“Among studied diseases, incidence rates of Alzheimer’s disease, stroke, and heart failure increased with age, while the rates of lung and breast cancers, angina pectoris, diabetes, asthma, emphysema, arthritis, and goiter became lower at advanced ages. […] Several types of age-patterns of disease incidence could be described. The first was a monotonic increase until age 85–95, with a subsequent slowing down, leveling off, and decline at age 100. This pattern was observed for myocardial infarction, stroke, heart failure, ulcer, and Alzheimer’s disease. The second type had an earlier-age maximum and a more symmetric shape (i.e., an inverted U-shape) which was observed for lung and colon cancers, Parkinson’s disease, and renal failure. The majority of diseases (e.g., prostate cancer, asthma, and diabetes mellitus among them) demonstrated a third shape: a monotonic decline with age or a decline after a short period of increased rates. […] The occurrence of age-patterns with a maximum and, especially, with a monotonic decline contradicts the hypothesis that the risk of geriatric diseases correlates with an accumulation of adverse health events […]. Two processes could be operative in the generation of such shapes. First, they could be attributed to the effect of selection […] when frail individuals do not survive to advanced ages. This approach is popular in cancer modeling […] The second explanation could be related to the possibility of under-diagnosis of certain chronic diseases at advanced ages (due to both less pronounced disease symptoms and infrequent doctor’s office visits); however, that possibility cannot be assessed with the available data […this is because the data sets are based on Medicare claims – US]”
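The selection explanation can be made precise with a standard frailty calculation. Suppose – my illustration, not the book’s notation – that each individual’s incidence rate is Zμ0(t), where the fixed susceptibility Z has mean 1 and variance σ². The rate observed at the population level is then

```latex
\bar{\mu}(t) = \mathbb{E}\bigl[Z \mid T > t\bigr]\,\mu_0(t),
\qquad\text{and, for gamma-distributed } Z,\quad
\bar{\mu}(t) = \frac{\mu_0(t)}{1 + \sigma^{2} \int_0^{t} \mu_0(u)\,du}.
```

Because high-Z individuals are diagnosed first, E[Z | T > t] falls with age, so the observed rate decelerates and levels off even when the individual-level rate μ0(t) rises monotonically; and if a fraction of the population is simply not susceptible (Z = 0 with positive probability), the observed rate can peak and then decline outright, matching the inverted-U and declining patterns described above.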

“The most detailed U.S. data on cancer incidence come from the SEER Registry […] about 60 % of malignancies are diagnosed in persons aged 65+ years old […] In the U.S., the estimated percent of cancer patients alive after being diagnosed with cancer (in 2008, by current age) was 13 % for those aged 65–69, 25 % for ages 70–79, and 22 % for ages 80+ years old (compared with 40 % of those aged younger than 65 years old) […] Diabetes affects about 21 % of the U.S. population aged 65+ years old (McDonald et al. 2009). However, while more is known about the prevalence of diabetes, the incidence of this disease among older adults is less studied. […] [In multiple previous studies] the incidence rates of diabetes decreased with age for both males and females. In the present study, we find similar patterns […] The prevalence of asthma among the U.S. population aged 65+ years old in the mid-2000s was as high as 7 % […] older patients are more likely to be underdiagnosed, untreated, and hospitalized due to asthma than individuals younger than age 65 […] asthma incidence rates have been shown to decrease with age […] This trend of declining asthma incidence with age is in agreement with our results.”

“The prevalence and incidence of Alzheimer’s disease increase exponentially with age, with the most notable rise occurring through the seventh and eighth decades of life (Reitz et al. 2011). […] whereas dementia incidence continues to increase beyond age 85, the rate of increase slows down [which] suggests that dementia diagnosed at advanced ages might be related not to the aging process per se, but associated with age-related risk factors […] Approximately 1–2 % of the population aged 65+ and up to 3–5 % aged 85+ years old suffer from Parkinson’s disease […] There are few studies of Parkinson’s disease incidence, especially in the oldest old, and its age patterns at advanced ages remain controversial”.

“One disadvantage of large administrative databases is that certain factors can produce systematic over/underestimation of the number of diagnosed diseases or of identification of the age at disease onset. One reason for such uncertainties is an incorrect date of disease onset. Other sources are latent disenrollment and the effects of study design. […] the date of onset of a certain chronic disease is a quantity which is not defined as precisely as mortality. This uncertainty makes it difficult to construct a unified definition of the date of onset appropriate for population studies.”

“[W]e investigated the phenomenon of multimorbidity in the U.S. elderly population by analyzing mutual dependence in disease risks, i.e., we calculated disease risks for individuals with specific pre-existing conditions […]. In total, 420 pairs of diseases were analyzed. […] For each pair, we calculated age patterns of unconditional incidence rates of the diseases, conditional rates of the second (later manifested) disease for individuals after onset of the first (earlier manifested) disease, and the hazard ratio of development of the subsequent disease in the presence (or not) of the first disease. […] three groups of interrelations were identified: (i) diseases whose risk became much higher when patients had a certain pre-existing (earlier diagnosed) disease; (ii) diseases whose risk became lower than in the general population when patients had certain pre-existing conditions […] and (iii) diseases for which “two-tail” effects were observed: i.e., when the effects are significant for both orders of disease precedence; both effects can be direct (either one of the diseases from a disease pair increases the risk of the other disease), inverse (either one of the diseases from a disease pair decreases the risk of the other disease), or controversial (one disease increases the risk of the other, but the other disease decreases the risk of the first disease from the disease pair). In general, the majority of disease pairs with increased risk of the later diagnosed disease in both orders of precedence were those in which both the pre-existing and later occurring diseases were cancers, and also when both diseases were of the same organ. […] Generally, the effect of dependence between risks of two diseases diminishes with advancing age. […] Identifying mutual relationships in age-associated disease risks is extremely important since they indicate that development of […] diseases may involve common biological mechanisms.”
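The basic quantities in this kind of pairwise analysis are straightforward person-year calculations. A minimal sketch of mine follows – the actual analyses are age-stratified and handle the claims data in far more detail: the unconditional incidence rate of disease B, and its rate conditional on a prior diagnosis of disease A, whose ratio approximates the hazard ratio discussed above.

```python
import numpy as np

def incidence_rate(event_age, entry_age, exit_age):
    """Crude incidence rate: diagnoses per person-year at risk.
    event_age is np.nan for individuals never diagnosed during follow-up."""
    event = ~np.isnan(event_age)
    end = np.where(event, event_age, exit_age)
    person_years = np.clip(end - entry_age, 0.0, None)
    return event.sum() / person_years.sum()

def conditional_rate(age_a, age_b, exit_age):
    """Incidence rate of disease B, counting only person-time after a
    prior diagnosis of disease A (pairs with B before A are excluded)."""
    eligible = ~np.isnan(age_a) & (np.isnan(age_b) | (age_b > age_a))
    return incidence_rate(age_b[eligible], age_a[eligible], exit_age[eligible])
```

Comparing conditional_rate with the unconditional incidence_rate of B, age group by age group and in both orders of precedence, yields the three groups of interrelations listed above.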

“in population cohorts, trends in prevalence result from combinations of trends in incidence, population at risk, recovery, and patients’ survival rates. Trends in the rates for one disease also may depend on trends in concurrent diseases, e.g., increasing survival from CHD contributes to an increase in the cancer incidence rate if the individuals who survived were initially susceptible to both diseases.”

March 1, 2017 Posted by | Biology, Books, Cancer/oncology, Cardiology, Demographics, Diabetes, Epidemiology, Genetics, Health Economics, Medicine, Nephrology, Neurology

The Ageing Immune System and Health (II)

Here’s the first post about the book. I finished it a while ago but recently realized I had not completed my intended coverage of it here on the blog back then, and as some of the book’s material sort-of-kind-of relates to material encountered in a book I’m currently reading (Biodemography of Aging), I decided I might as well finish the job now and review some things I might have forgotten in the meantime. It’s a nice book with some interesting observations, but as I also pointed out in my first post it is definitely not an easy read. Below I have included some observations from the book’s second half.

Lungs:

“The aged lung is characterised by airspace enlargement similar to, but not identical with acquired emphysema [4]. Such tissue damage is detected even in non-smokers above 50 years of age as the septa of the lung alveoli are destroyed and the enlarged alveolar structures result in a decreased surface for gas exchange […] Additional problems are that surfactant production decreases with age [6] increasing the effort needed to expand the lungs during inhalation in the already reduced thoracic cavity volume where the weakened muscles are unable to thoroughly ventilate. […] As ageing is associated with respiratory muscle strength reduction, coughing becomes difficult making it progressively challenging to eliminate inhaled particles, pollens, microbes, etc. Additionally, ciliary beat frequency (CBF) slows down with age impairing the lungs’ first line of defence: mucociliary clearance [9] as the cilia can no longer repel invading microorganisms and particles. Consequently e.g. bacteria can more easily colonise the airways leading to infections that are frequent in the pulmonary tract of the older adult.”

“With age there are dramatic changes in neutrophil function, including reduced chemotaxis, phagocytosis and bactericidal mechanisms […] reduced bactericidal function will predispose to infection but the reduced chemotaxis also has consequences for lung tissue as this results in increased tissue bystander damage from neutrophil elastases released during migration […] It is currently accepted that alterations in pulmonary PPAR profile, more precisely loss of PPARγ activity, can lead to inflammation, allergy, asthma, COPD, emphysema, fibrosis, and cancer […]. Since it has been reported that PPARγ activity decreases with age, this provides a possible explanation for the increasing incidence of these lung diseases and conditions in older individuals [6].”

Cancer:

“Age is an important risk factor for cancer and subjects aged over 60 also have a higher risk of comorbidities. Approximately 50 % of neoplasms occur in patients older than 70 years […] a major concern for poor prognosis is with cancer patients over 70–75 years. These patients have a lower functional reserve, a higher risk of toxicity after chemotherapy, and an increased risk of infection and renal complications that lead to a poor quality of life. […] [Whereas] there is a difference in organs with higher cancer incidence in developed versus developing countries [,] incidence increases with ageing almost irrespective of country […] The findings from Surveillance, Epidemiology and End Results Program [SEER – incidentally, I likely shall at some point discuss this one in much more detail, as the aforementioned biodemography textbook covers this data in a lot of detail – US] [6] show that almost a third of all cancers are diagnosed after the age of 75 years and 70 % of cancer-related deaths occur after the age of 65 years. […] The traditional clinical trial focus is on younger and healthier patients, i.e. with few or no co-morbidities. These restrictions have resulted in a lack of data about the optimal treatment for older patients [7] and a poor evidence base for therapeutic decisions. […] In the older patient, neutropenia, anemia, mucositis, cardiomyopathy and neuropathy — the toxic effects of chemotherapy — are more pronounced […] The correction of comorbidities and malnutrition can lead to greater safety in the prescription of chemotherapy […] Immunosenescence is a general classification for changes occurring in the immune system during the ageing process, as the distribution and function of cells involved in innate and adaptive immunity are impaired or remodelled […] Immunosenescence is considered a major contributor to cancer development in aged individuals”.

Neurodegenerative diseases:

“Dementia and age-related vision loss are major causes of disability in our ageing population and it is estimated that a third of people aged over 75 are affected. […] age is the largest risk factor for the development of neurodegenerative diseases […] older patients with comorbidities such as atherosclerosis, type II diabetes or those suffering from repeated or chronic systemic bacterial and viral infections show earlier onset and progression of clinical symptoms […] analysis of post-mortem brain tissue from healthy older individuals has provided evidence that the presence of misfolded proteins alone does not correlate with cognitive decline and dementia, implying that additional factors are critical for neural dysfunction. We now know that innate immune genes and life-style contribute to the onset and progression of age-related neuronal dysfunction, suggesting that chronic activation of the immune system plays a key role in the underlying mechanisms that lead to irreversible tissue damage in the CNS. […] Collectively these studies provide evidence for a critical role of inflammation in the pathogenesis of a range of neurodegenerative diseases, but the factors that drive or initiate inflammation remain largely elusive.”

“The effect of infection, mimicked experimentally by administration of bacterial lipopolysaccharide (LPS) has revealed that immune to brain communication is a critical component of a host organism’s response to infection and a collection of behavioural and metabolic adaptations are initiated over the course of the infection with the purpose of restricting the spread of a pathogen, optimising conditions for a successful immune response and preventing the spread of infection to other organisms [10]. These behaviours are mediated by an innate immune response and have been termed ‘sickness behaviours’ and include depression, reduced appetite, anhedonia, social withdrawal, reduced locomotor activity, hyperalgesia, reduced motivation, cognitive impairment and reduced memory encoding and recall […]. Metabolic adaptations to infection include fever, altered dietary intake and reduction in the bioavailability of nutrients that may facilitate the growth of a pathogen such as iron and zinc [10]. These behavioural and metabolic adaptations are evolutionarily highly conserved and also occur in humans”.

“Sickness behaviour and transient microglial activation are beneficial for individuals with a normal, healthy CNS, but in the ageing or diseased brain the response to peripheral infection can be detrimental and increases the rate of cognitive decline. Aged rodents exhibit exaggerated sickness and prolonged neuroinflammation in response to systemic infection […] Older people who contract a bacterial or viral infection or experience trauma postoperatively, also show exaggerated neuroinflammatory responses and are prone to develop delirium, a condition which results in a severe short term cognitive decline and a long term decline in brain function […] Collectively these studies demonstrate that peripheral inflammation can increase the accumulation of two neuropathological hallmarks of AD, further strengthening the hypothesis that inflammation i[s] involved in the underlying pathology. […] Studies from our own laboratory have shown that AD patients with mild cognitive impairment show a fivefold increased rate of cognitive decline when contracting a systemic urinary tract or respiratory tract infection […] Apart from bacterial infection, chronic viral infections have also been linked to increased incidence of neurodegeneration, including cytomegalovirus (CMV). This virus is ubiquitously distributed in the human population, and along with other age-related diseases such as cardiovascular disease and cancer, has been associated with increased risk of developing vascular dementia and AD [66, 67].”

Frailty:

“Frailty is associated with changes to the immune system, importantly the presence of a pro-inflammatory environment and changes to both the innate and adaptive immune system. Some of these changes have been demonstrated to be present before the clinical features of frailty are apparent suggesting the presence of potentially modifiable mechanistic pathways. To date, exercise programme interventions have shown promise in the reversal of frailty and related physical characteristics, but there is no current evidence for successful pharmacological intervention in frailty. […] In practice, acute illness in a frail person results in a disproportionate change in a frail person’s functional ability when faced with a relatively minor physiological stressor, associated with a prolonged recovery time […] Specialist hospital services such as surgery [15], hip fractures [16] and oncology [17] have now begun to recognise frailty as an important predictor of mortality and morbidity.”

I should probably mention here that this is another area where there’s an overlap between this book and the biodemography text I’m currently reading; chapter 7 of the latter text is about ‘Indices of Cumulative Deficits’ and covers this kind of stuff in a lot more detail than does this one, including e.g. detailed coverage of relevant statistical properties of one such index. Anyway, back to the coverage:

“Population based studies have demonstrated that the incidence of infection and subsequent mortality is higher in populations of frail people. […] The prevalence of pneumonia in a nursing home population is 30 times higher than the general population [39, 40]. […] The limited data available demonstrates that frailty is associated with a state of chronic inflammation. There is also evidence that inflammageing predates a diagnosis of frailty suggesting a causative role. […] A small number of studies have demonstrated a dysregulation of the innate immune system in frailty. Frail adults have raised white cell and neutrophil count. […] High white cell count can predict frailty at a ten year follow up [70]. […] A recent meta-analysis and four individual systematic reviews have found beneficial evidence of exercise programmes on selected physical and functional ability […] exercise interventions may have no positive effect in operationally defined frail individuals. […] To date there is no clear evidence that pharmacological interventions improve or ameliorate frailty.”

Exercise:

“[A]s we get older the time and intensity at which we exercise is severely reduced. Physical inactivity now accounts for a considerable proportion of age-related disease and mortality. […] Regular exercise has been shown to improve neutrophil microbicidal functions which reduce the risk of infectious disease. Exercise participation is also associated with increased immune cell telomere length, and may be related to improved vaccine responses. The anti-inflammatory effect of regular exercise and negative energy balance is evident by reduced inflammatory immune cell signatures and lower inflammatory cytokine concentrations. […] Reduced physical activity is associated with a positive energy balance leading to increased adiposity and subsequently systemic inflammation [5]. […] Elevated neutrophil counts accompany increased inflammation with age and the increased ratio of neutrophils to lymphocytes is associated with many age-related diseases including cancer [7]. Compared to more active individuals, less active and overweight individuals have higher circulating neutrophil counts [8]. […] little is known about the intensity, duration and type of exercise which can provide benefits to neutrophil function. […] it remains unclear whether exercise and physical activity can override the effects of NK cell dysfunction in the old. […] A considerable number of studies have assessed the effects of acute and chronic exercise on measures of T-cell immunosenescence including T cell subsets, phenotype, proliferation, cytokine production, chemotaxis, and co-stimulatory capacity. […] Taken together exercise appears to promote an anti-inflammatory response which is mediated by altered adipocyte function and improved energy metabolism leading to suppression of pro-inflammatory cytokine production in immune cells.”

February 24, 2017 Posted by | Biology, Books, Cancer/oncology, Epidemiology, Immunology, Medicine, Neurology

Rocks: A very short introduction

I liked the book. Below I have added some sample observations from the book, as well as a collection of links to various topics covered/mentioned in the book.

“To make a variety of rocks, there needs to be a variety of minerals. The Earth has shown a capacity for making an increasing variety of minerals throughout its existence. Life has helped in this [but] [e]ven a dead planet […] can evolve a fine array of minerals and rocks. This is done simply by stretching out the composition of the original homogeneous magma. […] Such stretching of composition would have happened as the magma ocean of the earliest […] Earth cooled and began to solidify at the surface, forming the first crust of this new planet — and the starting point, one might say, of our planet’s rock cycle. When magma cools sufficiently to start to solidify, the first crystals that form do not have the same composition as the overall magma. In a magma of ‘primordial Earth’ type, the first common mineral to form was probably olivine, an iron-and-magnesium-rich silicate. This is a dense mineral, and so it tends to sink. As a consequence the remaining magma becomes richer in elements such as calcium and aluminium. From this, at temperatures of around 1,000°C, the mineral plagioclase feldspar would then crystallize, in a calcium-rich variety termed anorthite. This mineral, being significantly less dense than olivine, would tend to rise to the top of the cooling magma. On the Moon, itself cooling and solidifying after its fiery birth, layers of anorthite crystals several kilometres thick built up as the rock — anorthosite — of that body’s primordial crust. This anorthosite now forms the Moon’s ancient highlands, subsequently pulverized by countless meteorite impacts. This rock type can be found on Earth, too, particularly within ancient terrains. […] Was the Earth’s first surface rock also anorthosite? Probably—but we do not know for sure, as the Earth, a thoroughly active planet throughout its existence, has consumed and obliterated nearly all of the crust that formed in the first several hundred million years of its existence, in a mysterious interval of time that we now call the Hadean Eon. […] The earliest rocks that we know of date from the succeeding Archean Eon.”

“Where plates are pulled apart, then pressure is released at depth, above the ever-opening tectonic rift, for instance beneath the mid-ocean ridge that runs down the centre of the Atlantic Ocean. The pressure release from this crustal stretching triggers decompression melting in the rocks at depth. These deep rocks — peridotite — are dense, being rich in the iron- and magnesium-bearing mineral olivine. Heated to the point at which melting just begins, so that the melt fraction makes up only a few percentage points of the total, those melt droplets are enriched in silica and aluminium relative to the original peridotite. The melt will have a composition such that, when it cools and crystallizes, it will largely be made up of crystals of plagioclase feldspar together with pyroxene. Add a little more silica and quartz begins to appear. With less silica, olivine crystallizes instead of quartz.

The resulting rock is basalt. If there was anything like a universal rock of rocky planet surfaces, it is basalt. On Earth it makes up almost all of the ocean floor bedrock — in other words, the ocean crust, that is, the surface layer, some 10 km thick. Below, there is a boundary called the Mohorovičič Discontinuity (or ‘Moho’ for short)[…]. The Moho separates the crust from the dense peridotitic mantle rock that makes up the bulk of the lithosphere. […] Basalt makes up most of the surface of Venus, Mercury, and Mars […]. On the Moon, the ‘mare’ (‘seas’) are not of water but of basalt. Basalt, or something like it, will certainly be present in large amounts on the surfaces of rocky exoplanets, once we are able to bring them into close enough focus to work out their geology. […] At any one time, ocean floor basalts are the most common rock type on our planet’s surface. But any individual piece of ocean floor is, geologically, only temporary. It is the fate of almost all ocean crust — islands, plateaux, and all — to be destroyed within ocean trenches, sliding down into the Earth along subduction zones, to be recycled within the mantle. From that destruction […] there arise the rocks that make up the most durable component of the Earth’s surface: the continents.”

“Basaltic magmas are a common starting point for many other kinds of igneous rocks, through the mechanism of fractional crystallization […]. Remove the early-formed crystals from the melt, and the remaining melt will evolve chemically, usually in the direction of increasing proportions of silica and aluminium, and decreasing amounts of iron and magnesium. These magmas will therefore produce intermediate rocks such as andesites and diorites in the finely and coarsely crystalline varieties, respectively; and then more evolved silica-rich rocks such as rhyolites (fine), microgranites (medium), and granites (coarse). […] Granites themselves can evolve a little further, especially at the late stages of crystallization of large bodies of granite magma. The final magmas are often water-rich ones that contain many of the incompatible elements (such as thorium, uranium, and lithium), so called because they are difficult to fit within the molecular frameworks of the common igneous minerals. From these final ‘sweated-out’ magmas there can crystallize a coarsely crystalline rock known as pegmatite — famous because it contains a wide variety of minerals (of the ~4,500 minerals officially recognized on Earth […] some 500 have been recognized in pegmatites).”

“The less oxygen there is [at the area of deposition], the more the organic matter is preserved into the rock record, and it is where the seawater itself, by the sea floor, has little or no oxygen that some of the great carbon stores form. As animals cannot live in these conditions, organic-rich mud can accumulate quietly and undisturbed, layer by layer, here and there entombing the skeleton of some larger planktonic organism that has fallen in from the sunlit, oxygenated waters high above. It is these kinds of sediments that […] generate[d] the oil and gas that currently power our civilization. […] If sedimentary layers have not been buried too deeply, they can remain as soft muds or loose sands for millions of years — sometimes even for hundreds of millions of years. However, most buried sedimentary layers, sooner or later, harden and turn into rock, under the combined effects of increasing heat and pressure (as they become buried ever deeper under subsequent layers of sediment) and of changes in chemical environment. […] As rocks become buried ever deeper, they become progressively changed. At some stage, they begin to change their character and depart from the condition of sedimentary strata. At this point, usually beginning several kilometres below the surface, buried igneous rocks begin to transform too. The process of metamorphism has started, and may progress until those original strata become quite unrecognizable.”

“Frozen water is a mineral, and this mineral can make up a rock, both on Earth and, very commonly, on distant planets, moons, and comets […]. On Earth today, there are large deposits of ice strata on the cold polar regions of Antarctica and Greenland, with smaller amounts in mountain glaciers […]. These ice strata, the compressed remains of annual snowfalls, have simply piled up, one above the other, over time; on Antarctica, they reach almost 5 km in thickness and at their base are about a million years old. […] The ice cannot pile up for ever, however: as the pressure builds up it begins to behave plastically and to slowly flow downslope, eventually melting or, on reaching the sea, breaking off as icebergs. As the ice mass moves, it scrapes away at the underlying rock and soil, shearing these together to form a mixed deposit of mud, sand, pebbles, and characteristic striated (ice-scratched) cobbles and boulders […] termed a glacial till. Glacial tills, if found in the ancient rock record (where, hardened, they are referred to as tillites), are a sure clue to the former presence of ice.”

“At first approximation, the mantle is made of solid rock and is not […] a seething mass of magma that the fragile crust threatens to founder into. This solidity is maintained despite temperatures that, towards the base of the mantle, are of the order of 3,000°C — temperatures that would very easily melt rock at the surface. It is the immense pressures deep in the Earth, increasing more or less in step with temperature, that keep the mantle rock in solid form. In more detail, the solid rock of the mantle may include greater or lesser (but usually lesser) amounts of melted material, which locally can gather to produce magma chambers […] Nevertheless, the mantle rock is not solid in the sense that we might imagine at the surface: it is mobile, and much of it is slowly moving plastically, taking long journeys that, over many millions of years, may encompass the entire thickness of the mantle (the kinds of speeds estimated are comparable to those at which tectonic plates move, of a few centimetres a year). These are the movements that drive plate tectonics and that, in turn, are driven by the variation in temperature (and therefore density) from the contact region with the hot core, to the cooler regions of the upper mantle.”

“The outer core will not transmit certain types of seismic waves, which indicates that it is molten. […] Even farther into the interior, at the heart of the Earth, this metal magma becomes rock once more, albeit a rock that is mostly crystalline iron and nickel. However, it was not always so. The core used to be liquid throughout and then, some time ago, it began to crystallize into iron-nickel rock. Quite when this happened has been widely debated, with estimates ranging from over three billion years ago to about half a billion years ago. The inner core has now grown to something like 2,400 km across. Even allowing for the huge spans of geological time involved, this implies estimated rates of solidification that are impressive in real time — of some thousands of tons of molten metal crystallizing into solid form per second.”
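A quick back-of-the-envelope check of that last number (the sketch below is mine, not the book’s; the density figure and the one-billion-year growth window are round-number assumptions on my part, chosen from within the ranges mentioned in the quote):

```python
import math

# Round-number assumptions (mine): inner core 2,400 km across, solid
# iron-nickel density ~12,800 kg/m^3 at core pressures, and growth
# spread over roughly one billion years.
radius_m = 1.2e6                       # 2,400 km across -> 1,200 km radius
density_kg_m3 = 12_800
age_s = 1e9 * 365.25 * 24 * 3600       # one billion years in seconds

volume_m3 = (4 / 3) * math.pi * radius_m**3
mass_tonnes = volume_m3 * density_kg_m3 / 1000   # kg -> metric tons

print(f"{mass_tonnes / age_s:,.0f} tonnes per second")
# -> roughly 3,000 tonnes/s, consistent with "some thousands of tons
#    of molten metal crystallizing into solid form per second"
```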

“Rocks are made out of minerals, and those minerals are not a constant of the universe. A little like biological organisms, they have evolved and diversified through time. As the minerals have evolved, so have the rocks that they make up. […] The pattern of evolution of minerals was vividly outlined by Robert Hazen and his colleagues in what is now a classic paper published in 2008. They noted that in the depths of outer space, interstellar dust, as analysed by the astronomers’ spectroscopes, seems to be built of only about a dozen minerals […] Their component elements were forged in supernova explosions, and these minerals condensed among the matter and radiation that streamed out from these stellar outbursts. […] the number of minerals on the new Earth [shortly after formation was] about 500 (while the smaller, largely dry Moon has about 350). Plate tectonics began, with its attendant processes of subduction, mountain building, and metamorphism. The number of minerals rose to about 1,500 on a planet that may still have been biologically dead. […] The origin and spread of life at first did little to increase the number of mineral species, but once oxygen-producing photosynthesis started, then there was a great leap in mineral diversity as, for each mineral, various forms of oxide and hydroxide could crystallize. After this step, about two and a half billion years ago, there were over 4,000 minerals, most of them vanishingly rare. Since then, there may have been a slight increase in their numbers, associated with such events as the appearance and radiation of metazoan animals and plants […] Humans have begun to modify the chemistry and mineralogy of the Earth’s surface, and this has included the manufacture of many new types of mineral. […] Human-made minerals are produced in laboratories and factories around the world, with many new forms appearing every year. […] Materials sciences databases now being compiled suggest that more than 50,000 solid, inorganic, crystalline species have been created in the laboratory.”

Some links of interest:

Rock. Presolar grains. Silicate minerals. Silicon–oxygen tetrahedron. Quartz. Olivine. Feldspar. Mica. Jean-Baptiste Biot. Meteoritics. Achondrite/Chondrite/Chondrule. Carbonaceous chondrite. Iron–nickel alloy. Widmanstätten pattern. Giant-impact hypothesis (in the book this is not framed as a hypothesis nor is it explicitly referred to as the GIH; it’s just taken to be the correct account of what happened back then – US). Alfred Wegener. Arthur Holmes. Plate tectonics. Lithosphere. Asthenosphere. Fractional Melting (couldn’t find a wiki link about this exact topic; the MIT link is quite technical – sorry). Hotspot (geology). Fractional crystallization. Metastability. Devitrification. Porphyry (geology). Phenocryst. Thin section. Neptunism. Pyroclastic flow. Ignimbrite. Pumice. Igneous rock. Sedimentary rock. Weathering. Slab (geology). Clay minerals. Conglomerate (geology). Breccia. Aeolian processes. Hummocky cross-stratification. Ralph Alger Bagnold. Montmorillonite. Limestone. Ooid. Carbonate platform. Turbidite. Desert varnish. Evaporite. Law of Superposition. Stratigraphy. Pressure solution. Compaction (geology). Recrystallization (geology). Cleavage (geology). Phyllite. Aluminosilicate. Gneiss. Rock cycle. Ultramafic rock. Serpentinite. Pressure-Temperature-time paths. Hornfels. Impactite. Ophiolite. Xenolith. Kimberlite. Transition zone (Earth). Mantle convection. Mantle plume. Core–mantle boundary. Post-perovskite. Earth’s inner core. Inge Lehmann. Stromatolites. Banded iron formations. Microbial mat. Quorum sensing. Cambrian explosion. Bioturbation. Biostratigraphy. Coral reef. Radiolaria. Carbonate compensation depth. Paleosol. Bone bed. Coprolite. Allan Hills 84001. Tharsis. Pedestal crater. Mineraloid. Concrete.

February 19, 2017 Posted by | Biology, Books, Geology

The Laws of Thermodynamics

Here’s a relevant 60 symbols video with Mike Merrifield. Below are a few observations from the book, and some links.

“Among the hundreds of laws that describe the universe, there lurks a mighty handful. These are the laws of thermodynamics, which summarize the properties of energy and its transformation from one form to another. […] The mighty handful consists of four laws, with the numbering starting inconveniently at zero and ending at three. The first two laws (the ‘zeroth’ and the ‘first’) introduce two familiar but nevertheless enigmatic properties, the temperature and the energy. The third of the four (the ‘second law’) introduces what many take to be an even more elusive property, the entropy […] The second law is one of the all-time great laws of science […]. The fourth of the laws (the ‘third law’) has a more technical role, but rounds out the structure of the subject and both enables and foils its applications.”

“Classical thermodynamics is the part of thermodynamics that emerged during the nineteenth century before everyone was fully convinced about the reality of atoms, and concerns relationships between bulk properties. You can do classical thermodynamics even if you don’t believe in atoms. Towards the end of the nineteenth century, when most scientists accepted that atoms were real and not just an accounting device, there emerged the version of thermodynamics called statistical thermodynamics, which sought to account for the bulk properties of matter in terms of its constituent atoms. The ‘statistical’ part of the name comes from the fact that in the discussion of bulk properties we don’t need to think about the behaviour of individual atoms but we do need to think about the average behaviour of myriad atoms. […] In short, whereas dynamics deals with the behaviour of individual bodies, thermodynamics deals with the average behaviour of vast numbers of them.”

“In everyday language, heat is both a noun and a verb. Heat flows; we heat. In thermodynamics heat is not an entity or even a form of energy: heat is a mode of transfer of energy. It is not a form of energy, or a fluid of some kind, or anything of any kind. Heat is the transfer of energy by virtue of a temperature difference. Heat is the name of a process, not the name of an entity.”

“The supply of 1J of energy as heat to 1 g of water results in an increase in temperature of about 0.2°C. Substances with a high heat capacity (water is an example) require a larger amount of heat to bring about a given rise in temperature than those with a small heat capacity (air is an example). In formal thermodynamics, the conditions under which heating takes place must be specified. For instance, if the heating takes place under conditions of constant pressure with the sample free to expand, then some of the energy supplied as heat goes into expanding the sample and therefore to doing work. Less energy remains in the sample, so its temperature rises less than when it is constrained to have a constant volume, and therefore we report that its heat capacity is higher. The difference between heat capacities of a system at constant volume and at constant pressure is of most practical significance for gases, which undergo large changes in volume as they are heated in vessels that are able to expand.”
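The numbers in the first sentence are easy to check. Here’s a minimal sketch of the q = mcΔT relation behind them (mine, not the author’s; the specific heat of water and the ideal-gas Cp = Cv + R relation are standard textbook values I’ve assumed, not taken from the book):

```python
def temperature_rise(q_joules, mass_g, c):
    """Temperature rise in K when heat q is supplied to a sample of the
    given mass and specific heat capacity c (in J per g per K)."""
    return q_joules / (mass_g * c)

c_water = 4.18  # J/(g*K), standard specific heat of liquid water
print(temperature_rise(1.0, 1.0, c_water))  # ~0.24 K: "about 0.2 degrees C"

# Why Cp > Cv for a gas: at constant pressure some of the supplied heat
# does expansion work instead of raising the temperature. For an ideal
# gas the molar heat capacities differ by exactly the gas constant R.
R = 8.314                    # J/(mol*K)
Cv = 1.5 * R                 # monatomic ideal gas, constant volume
Cp = Cv + R                  # constant pressure
print(Cp / Cv)               # ~1.67
```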

“Heat capacities vary with temperature. An important experimental observation […] is that the heat capacity of every substance falls to zero when the temperature is reduced towards absolute zero (T = 0). A very small heat capacity implies that even a tiny transfer of heat to a system results in a significant rise in temperature, which is one of the problems associated with achieving very low temperatures when even a small leakage of heat into a sample can have a serious effect on the temperature”.
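To see why this is such a problem, note that the temperature rise caused by a heat leak q is ΔT = q/C, which blows up as C falls towards zero. A toy illustration of mine, assuming the standard Debye-law C ∝ T³ scaling that many crystalline solids approximately follow near absolute zero (the material constant below is arbitrary):

```python
def debye_heat_capacity(T, a=1e-3):
    """Toy Debye-law heat capacity C = a*T^3 (a is an arbitrary
    material constant in J/K^4, chosen purely for illustration)."""
    return a * T**3

leak_J = 1e-6  # the same one-microjoule heat leak in each case
for T in (10.0, 1.0, 0.1):
    dT = leak_J / debye_heat_capacity(T)
    print(f"T = {T:>4} K: dT = {dT:.2e} K")
# The colder the sample, the smaller C and the larger the rise from an
# identical leak: ~1e-6 K at 10 K, but a full 1 K at 0.1 K.
```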

“A crude restatement of Clausius’s statement is that refrigerators don’t work unless you turn them on.”

“The Gibbs energy is of the greatest importance in chemistry and in the field of bioenergetics, the study of energy utilization in biology. Most processes in chemistry and biology occur at constant temperature and pressure, and so to decide whether they are spontaneous and able to produce non-expansion work we need to consider the Gibbs energy. […] Our bodies live off Gibbs energy. Many of the processes that constitute life are non-spontaneous reactions, which is why we decompose and putrefy when we die and these life-sustaining reactions no longer continue. […] In biology a very important ‘heavy weight’ reaction involves the molecule adenosine triphosphate (ATP). […] When a terminal phosphate group is snipped off by reaction with water […], to form adenosine diphosphate (ADP), there is a substantial decrease in Gibbs energy, arising in part from the increase in entropy when the group is liberated from the chain. Enzymes in the body make use of this change in Gibbs energy […] to bring about the linking of amino acids, and gradually build a protein molecule. It takes the effort of about three ATP molecules to link two amino acids together, so the construction of a typical protein of about 150 amino acid groups needs the energy released by about 450 ATP molecules. […] The ADP molecules, the husks of dead ATP molecules, are too valuable just to discard. They are converted back into ATP molecules by coupling to reactions that release even more Gibbs energy […] and which reattach a phosphate group to each one. These heavy-weight reactions are the reactions of metabolism of the food that we need to ingest regularly.”
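Spelling out the ATP arithmetic from that last quote (the n − 1 detail is my own pedantry; the book simply multiplies 3 by 150):

```python
atp_per_link = 3        # "about three ATP molecules to link two amino acids"
residues = 150          # "a typical protein of about 150 amino acid groups"
links = residues - 1    # n residues are joined by n - 1 peptide bonds
print(atp_per_link * links)   # 447, i.e. "about 450 ATP molecules"
```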

Links of interest below – the stuff covered in the links is the sort of stuff covered in this book:

Laws of thermodynamics (article includes links to many other articles of interest, including links to each of the laws mentioned above).
System concepts.
Intensive and extensive properties.
Mechanical equilibrium.
Thermal equilibrium.
Diathermal wall.
Thermodynamic temperature.
Thermodynamic beta.
Ludwig Boltzmann.
Boltzmann constant.
Maxwell–Boltzmann distribution.
Conservation of energy.
Work (physics).
Internal energy.
Heat (physics).
Microscopic view of heat.
Reversible process (thermodynamics).
Carnot’s theorem.
Enthalpy.
Fluctuation-dissipation theorem.
Noether’s theorem.
Entropy.
Thermal efficiency.
Rudolf Clausius.
Spontaneous process.
Residual entropy.
Heat engine.
Coefficient of performance.
Helmholtz free energy.
Gibbs free energy.
Phase transition.
Chemical equilibrium.
Superconductivity.
Superfluidity.
Absolute zero.

February 5, 2017 Posted by | Biology, Books, Chemistry, Physics

The Biology of Moral Systems (I)

I have quoted from the book before, but I decided that this book deserves to be blogged in more detail. I’m close to finishing the book at this point (it’s definitely taken longer than it should have), and I’ll probably give it 5 stars on goodreads; I might also add it to my list of favourite books on the site. In this post I’ve added some quotes and ideas from the book, and a few comments. Before going any further I should note that it’s frankly impossible to cover anywhere near all the ideas covered in the book here on the blog, so if you’re even remotely interested in these kinds of things you really should pick up a copy of the book and read all of it.

“I believe that something crucial has been missing from all of the great debates of history, among philosophers, politicians, theologians, and thinkers from other and diverse backgrounds, on the issues of morality, ethics, justice, right and wrong. […] those who have tried to analyze morality have failed to treat the human traits that underlie moral behavior as outcomes of evolution […] for many conflicts of interest, compromises and enforceable contracts represent the only real solutions. Appeals to morality, I will argue, are simply the invoking of such compromises and contracts in particular ways. […] the process of natural selection that has given rise to all forms of life, including humans, operates such that success has always been relative. One consequence is that organisms resulting from the long-term cumulative effects of selection are expected to resist efforts to reveal their interests fully to others, and also efforts to place limits on their striving or to decide for them when their interests are being “fully” satisfied. These are all reasons why we should expect no “terminus” – ever – to debates on moral and ethical issues.” (these comments I also included in the quotes post to which I link at the beginning, but I thought it was worth including them in this post as well even so – US).

“I am convinced that biology can never offer […] easy or direct answers to the questions of what is right and wrong. I explicitly reject the attitude that whatever biology tells us is so is also what ought to be (David Hume’s so-called “naturalistic fallacy”) […] there are within biology no magic solutions to moral problems. […] Knowledge of the human background in organic evolution can [however] provide a deeper self-understanding by an increasing proportion of the world’s population; self-understanding that I believe can contribute to answering the serious questions of social living.”

“If there had been no recent discoveries in biology that provided new ways of looking at the concept of moral systems, then I would be optimistic indeed to believe that I could say much that is new. But there have been such discoveries. […] The central point in these writings [Hamilton, Williams, Trivers, Cavalli-Sforza, Feldman, Dawkins, Wilson, etc. – US] […] is that natural selection has apparently been maximizing the survival by reproduction of genes, as they have been defined by evolutionists, and that, with respect to the activities of individuals, this includes effects on copies of their genes, even copies located in other individuals. In other words, we are evidently evolved not only to aid the genetic materials in our own bodies, by creating and assisting descendants, but also to assist, by nepotism, copies of our genes that reside in collateral (nondescendant) relatives. […] ethics, morality, human conduct, and the human psyche are to be understood only if societies are seen as collections of individuals seeking their own self-interests […] In some respects these ideas run contrary to what people have believed and been taught about morality and human values: I suspect that nearly all humans believe it is a normal part of the functioning of every human individual now and then to assist someone else in the realization of that person’s own interests to the actual net expense of those of the altruist. What [the above-mentioned writings] tells us is that, despite our intuitions, there is not a shred of evidence to support this view of beneficence, and a great deal of convincing theory suggests that any such view will eventually be judged false. This implies that we will have to start all over again to describe and understand ourselves, in terms alien to our intuitions […] It is […] a goal of this book to contribute to this redescription and new understanding, and especially to discuss why our intuitions should have misinformed us.”

“Social behavior evolves as a succession of ploys and counterploys, and for humans these ploys are used, not only among individuals within social groups, but between and among small and large groups of up to hundreds of millions of individuals. The value of an evolutionary approach to human sociality is thus not to determine the limits of our actions so that we can abide by them. Rather, it is to examine our life strategies so that we can change them when we wish, as a result of understanding them. […] my use of the word biology in no way implies that moral systems have some kind of explicit genetic background, are genetically determined, or cannot be altered by adjusting the social environment. […] I mean simply to suggest that if we wish to understand those aspects of our behavior commonly regarded as involving morality or ethics, it will help to reconsider our behavior as a product of evolution by natural selection. The principal reason for this suggestion is that natural selection operates according to general principles which make its effects highly predictive, even with respect to traits and circumstances that have not yet been analyzed […] I am interested […] not in determining what is moral and immoral, in the sense of what people ought to be doing, but in elucidating the natural history of ethics and morality – in discovering how and why humans initiated and developed the ideas we have about right and wrong.”

I should perhaps mention here that sort-of-kind-of related stuff is covered in Aureli et al. (see e.g. this link), and that some parts of that book will probably make you understand Alexander’s ideas a lot better even if perhaps he didn’t read those specific authors – mainly because it gets a lot easier to imagine the sort of mechanisms which might be at play here if you’ve read this sort of literature. Here’s one relevant quote from the coverage of that book, which also deals with the question Alexander discusses above (and in a lot more detail throughout his book), namely where our morality comes from:

“we make two fundamental assertions regarding the evolution of morality: (1) there are specific types of behavior demonstrated by both human and nonhuman primates that hint at a shared evolutionary background to morality; and (2) there are theoretical and actual connections between morality and conflict resolution in both nonhuman primates and human development. […] the transition from nonmoral or premoral to moral is more gradual than commonly assumed. No magic point appears in either evolutionary history or human development at which morality suddenly comes into existence. In both early childhood and in animals closely related to us, we can recognize behaviors (and, in the case of children, judgments) that are essential building blocks of the morality of the human adult. […] the decision making and emotions underlying moral judgments are generated within the individual rather than being simply imposed by society. They are a product of evolution, an integrated part of the human genetic makeup, that makes the child construct a moral perspective through interactions with other members of its species. […] Much research has shown that children acquire morality through a social-cognitive process; children make connections between acts and consequences. Through a gradual process, children develop concepts of justice, fairness, and equality, and they apply these concepts to concrete everyday situations […] we assert that emotions such as empathy and sympathy provide an experiential basis by which children construct moral judgments. Emotional reactions from others, such as distress or crying, provide experiential information that children use to judge whether an act is right or wrong […] when a child hits another child, a crying response provides emotional information about the nature of the act, and this information enables the child, in part, to determine whether and why the transgression is wrong. Therefore, recognizing signs of distress in another person may be a basic requirement of the moral judgment process. The fact that responses to distress in another have been documented both in infancy and in the nonhuman primate literature provides initial support for the idea that these types of moral-like experiences are common to children and nonhuman primates.”

Alexander’s coverage is quite different from that found in Aureli et al., but some of the contributors to the latter work deal with similar questions to the ones in which he’s interested, using approaches not employed in Alexander’s book – so this is another place to look if you’re interested in these topics. Margalit’s The Emergence of Norms is also worth mentioning. Part of the reason why I mention these books here is incidentally that they’re not talked about in Alexander’s coverage (for very natural reasons, I should add, in the case of the former book at least; Natural Conflict Resolution was published more than a decade after Alexander wrote his book…).

“In the hierarchy of explanatory principles governing the traits of living organisms, evolutionary reductionism – the development of principles from the evolutionary process – tends to subsume all other kinds. Proximate-cause reductionism (or reduction by dissection) sometimes advances our understanding of the whole phenomena. […] When evolutionary reduction becomes trivial in the study of life it is for a reason different from incompleteness; rather, it is because the breadth of the generalization distances it too significantly from the particular problem that may be at hand. […] the greatest weakness of reduction by generalization is not that it is likely to be trivial but that errors are probable through unjustified leaps from hypothesis to conclusion […] Critics such as Gould and Lewontin […] do not discuss the facts that (a) all students of human behavior (not just those who take evolution into account) run the risk of leaping unwarrantedly from hypothesis to conclusion and (b) just-so stories were no less prevalent and hypothesis-testing no more prevalent in studies of human behavior before evolutionary biologists began to participate. […] I believe that failure by biologists and others to distinguish proximate- or partial-cause and evolutionary- or ultimate-cause reductionism […] is in some part responsible for the current chasm between the social and the biological sciences and the resistance to so-called biological approaches to understanding humans. […] Both approaches are essential to progress in biology and the social sciences, and it would be helpful if their relationship, and that of their respective practitioners, were not seen as adversarial.”

(Relatedly, love is motivationally prior to sugar. This one also seems relevant, though in a different way).

“Humans are not accustomed to dealing with their own strategies of life as if they had been tuned by natural selection. […] People are not generally aware of what their lifetimes have been evolved to accomplish, and, even if they are roughly aware of this, they do not easily accept that their everyday activities are in any sense means to that end. […] The theory of lifetimes most widely accepted among biologists is that individuals have evolved to maximize the likelihood of survival of not themselves, but their genes, and that they do this by reproducing and tending in various ways offspring and other carriers of their own genes […] In this theory, survival of the individual – and its growth, development, and learning – are proximate mechanisms of reproductive success, which is a proximate mechanism of genic survival. Only the genes have evolved to survive. […] To say that we are evolved to serve the interests of our genes in no way suggests that we are obliged to serve them. […] Evolution is surely most deterministic for those still unaware of it. If this argument is correct, it may be the first to carry us from is to ought, i.e., if we desire to be the conscious masters of our own fates, and if conscious effort in that direction is the most likely vehicle of survival and happiness, then we ought to study evolution.”

“People are sometimes comfortable with the notion that certain activities can be labeled as “purely cultural” because they also believe that there are behaviors that can be labeled “purely genetic.” Neither is true: the environment contributes to the expression of all behaviors, and culture is best described as part of the environment.”

“Happiness and its anticipation are […] proximate mechanisms that lead us to perform and repeat acts that in the environments of history, at least, would have led to greater reproductive success.”

“The remarkable difference between the patterns of senescence in semelparous (one-time breeding) and iteroparous (repeat-breeding) organisms is probably one of the best simple demonstrations of the central significance of reproduction in the individual’s lifetime. How, otherwise, could we explain the fact that those who reproduce but once, like salmon and soybeans, tend to die suddenly right afterward, while those like ourselves who have residual reproductive possibilities after the initial reproductive act decline or senesce gradually? […] once an organism has completed all possibilities of reproducing (through both offspring production and assistance, and helping other relatives), then selection can no longer affect its survival: any physiological or other breakdown that destroys it may persist and even spread if it is genetically linked to a trait that is expressed earlier and is reproductively beneficial. […] selection continually works against senescence, but is just never able to defeat it entirely. […] senescence leads to a generalized deterioration rather than one owing to a single effect or a few effects […] In the course of working against senescence, selection will tend to remove, one by one, the most frequent sources of mortality as a result of senescence. Whenever a single cause of mortality, such as a particular malfunction of any vital organ, becomes the predominant cause of mortality, then selection will more effectively reduce the significance of that particular defect (meaning those who lack it will outreproduce) until some other achieves greater relative significance. […] the result will be that all organs and systems will tend to deteriorate together. […] The point is that as we age, and as senescence proceeds, large numbers of potential sources of mortality tend to lurk ever more malevolently just “below the surface,” so that, unfortunately, the odds are very high against any dramatic lengthening of the maximum human lifetime through technology. […] natural selection maximizes the likelihood of genetic survival, which is incompatible with eliminating senescence. […] Senescence, and the finiteness of lifetimes, have evolved as incidental effects […] Organisms compete for genetic survival and the winners (in evolutionary terms) are those who sacrifice their phenotypes (selves) earlier when this results in greater reproduction.”

“altruism appears to diminish with decreasing degree of relatedness in sexual species whenever it is studied – in humans as well as nonhuman species”

October 5, 2016 Posted by | Anthropology, Biology, Books, Evolutionary biology, Genetics, Philosophy

Deserts

I recently read Nick Middleton’s short publication on this topic and decided it was worth blogging it here. I gave the publication 3 stars on goodreads; you can read my goodreads review of the book here.

In this post I’ll quote a bit from the book and add some details I thought were interesting.

“None of [the] approaches to desert definition is foolproof. All have their advantages and drawbacks. However, each approach delivers […] global map[s] of deserts and semi-deserts that [are] broadly similar […] Roughly, deserts cover about one-quarter of our planet’s land area, and semi-deserts another quarter.”

“High temperatures and a paucity of rainfall are two aspects of climate that many people routinely associate with deserts […] However, desert climates also embrace other extremes. Many arid zones experience freezing temperatures and snowfall is commonplace, particularly in those situated outside the tropics. […] For much of the time, desert skies are cloud-free, meaning deserts receive larger amounts of sunshine than any other natural environment. […] Most of the water vapour in the world’s atmosphere is supplied by evaporation from the oceans, so the more remote a location is from this source the more likely it is that any moisture in the air will have been lost by precipitation before it reaches continental interiors. The deserts of Central Asia illustrate this principle well: most of the moisture in the air is lost before it reaches the heart of the continent […] A clear distinction can be made between deserts in continental interiors and those on their coastal margins when it comes to the range of temperatures experienced. Oceans tend to exert a moderating influence on temperature, reducing extremes, so the greatest ranges of temperature are found far from the sea while coastal deserts experience a much more limited range. […] Freezing temperatures occur particularly in the mid-latitude deserts, but by no means exclusively so. […] snowfall occurs at the Algerian oasis towns of Ouargla and Ghardaia, in the northern Sahara, as often as once every 10 years on average.”

“[One] characteristic of rainfall in deserts is its variability from year to year which in many respects makes annual average statistics seem like nonsense. A very arid desert area may go for several years with no rain at all […]. It may then receive a whole ‘average’ year’s rainfall in just one storm […] Rainfall in deserts is also typically very variable in space as well as time. Hence, desert rainfall is frequently described as being ‘spotty’. This spottiness occurs because desert storms are often convective, raining in a relatively small area, perhaps just a few kilometres across. […] Climates can vary over a wide range of spatial scales […] Changes in temperature, wind, relative humidity, and other elements of climate can be detected over short distances, and this variability on a small scale creates distinctive climates in small areas. These are microclimates, different in some way from the conditions prevailing over the surrounding area as a whole. At the smallest scale, the shade given by an individual plant can be described as a microclimate. Over larger distances, the surface temperature of the sand in a dune will frequently be significantly different from a nearby dry salt lake because of the different properties of the two types of surface. […] Microclimates are important because they exert a critical control over all sorts of phenomena. These include areas suitable for plant and animal communities to develop, the ways in which rocks are broken down, and the speed at which these processes occur.”

“The level of temperature prevailing when precipitation occurs is important for an area’s water balance and its degree of aridity. A rainy season that occurs during the warm summer months, when evaporation is greatest, makes for a climate that is more arid than if precipitation is distributed more evenly throughout the year.”

“The extremely arid conditions of today[‘s Sahara Desert] have prevailed for only a few thousand years. There is lots of evidence to suggest that the Sahara was lush, nearly completely covered with grasses and shrubs, with many lakes that supported antelope, giraffe, elephant, hippopotamus, crocodile, and human populations in regions that today have almost no measurable precipitation. This ‘African Humid Period’ began around 15,000 years ago and came to an end around 10,000 years later. […] Globally, at the height of the most recent glacial period some 18,000 years ago, almost 50% of the land area between 30°N and 30°S was covered by two vast belts of sand, often called ‘sand seas’. Today, about 10% of this area is covered by sand seas. […] Around one-third of the Arabian subcontinent is covered by sandy deserts”.

“Much of the drainage in deserts is internal, as in Central Asia. Their rivers never reach the sea, but take water to interior basins. […] Salt is a common constituent of desert soils. The generally low levels of rainfall means that salts are seldom washed away through soils and therefore tend to accumulate in certain parts of the landscape. Large amounts of common salt (sodium chloride, or halite), which is very soluble in water, are found in some hyper-arid deserts.”

“Many deserts are very rich in rare and unique species thanks to their evolution in relative geographical isolation. Many of these plants and animals have adapted in remarkable ways to deal with the aridity and extremes of temperature. Indeed, some of these adaptations contribute to the apparent lifelessness of deserts simply because a good way to avoid some of the harsh conditions is to hide. Some small creatures spend hot days burrowed beneath the soil surface. In a similar way, certain desert plants spend most of the year and much of their lives dormant, as seeds waiting for the right conditions, brought on by a burst of rainfall. Given that desert rainstorms can be very variable in time and in space, many activities in the desert ecosystem occur only sporadically, as pulses of activity driven by the occasional cloudburst. […] The general scarcity of water is the most important, though by no means the only, environmental challenge faced by desert organisms. Limited supplies of food and nutrients, friable soils, high levels of solar radiation, high daytime temperatures, and the large diurnal temperature range are other challenges posed by desert conditions. These conditions are not always distributed evenly across a desert landscape, and the existence of more benign microenvironments is particularly important for desert plants and animals. Patches of terrain that are more biologically productive than their surroundings occur in even the most arid desert, geographical patterns caused by many factors, not only the simple availability of water.”

A small side note here: The book includes brief coverage of things like crassulacean acid metabolism and related topics covered in much more detail in Beer et al. I’m not going to go into that stuff here, as in my opinion it was much better covered in the latter book (some people might disagree, but they would at least have to admit that the coverage in Beer et al. is much more comprehensive than Middleton’s coverage in this book). There are quite a few other topics included in the book which I did not cover here in the post, but I mention this topic in particular because I thought it was a good example of how this book is very much just a very brief introduction; you can write book chapters, if not books, about some of the topics Middleton devotes only a couple of paragraphs to, which is to be expected given the nature and range of coverage of the publication.

Plants aren’t ‘smart’ given any conventional definition of the word, but as I’ve talked about before here on the blog (e.g. here), when you look closer at the way they grow and ‘behave’ over the very long term, some of the things they do are actually at the very least ‘not really all that stupid’:

“The seeds of annuals germinate only when enough water is available to support the entire life cycle. Germinating after just a brief shower could be fatal, so mechanisms have developed for seeds to respond solely when sufficient water is available. Seeds germinate only when their protective seed coats have been broken down, allowing water to enter the seed and growth to begin. The seed coats of many desert species contain chemicals that repel water. These compounds are washed away by large amounts of water, but a short shower will not generate enough to remove all the water-repelling chemicals. Other species have very thick seed coats that are gradually worn away physically by abrasion as moving water knocks the seeds against stones and pebbles.”

What about animals? One thing I learned from this publication is that it turns out that being a mammal will, all else equal, definitely not give you a competitive edge in a hot desert environment:

“The need to conserve water is important to all creatures that live in hot deserts, but for mammals it is particularly crucial. In all environments mammals typically maintain a core body temperature of around 37–38°C, and those inhabiting most non-desert regions face the challenge of keeping their body temperature above the temperature of their environmental surrounds. In hot deserts, where environmental temperatures substantially exceed the body temperature on a regular basis, mammals face the reverse challenge. The only mechanism that will move heat out of an animal’s body against a temperature gradient is the evaporation of water, so maintenance of the core body temperature requires use of the resource that is by definition scarce in drylands.”

Humans? What about them?

“Certain aspects of a traditional mobile lifestyle have changed significantly for some groups of nomadic peoples. Herders in the Gobi desert in Mongolia pursue a way of life that in many ways has changed little since the times of the greatest of all nomadic leaders, Chinggis Khan, 750 years ago. They herd the same animals, eat the same foods, wear the same clothes, and still live in round felt-covered tents, traditional dwellings known in Mongolian as gers. Yet many gers now have a set of solar panels on the roof that powers a car battery, allowing an electric light to extend the day inside the tent. Some also have a television set.” (these remarks incidentally somehow reminded me of this brilliant Gary Larson cartoon)

“People have constructed dams to manage water resources in arid regions for thousands of years. One of the oldest was the Marib dam in Yemen, built about 3,000 years ago. Although this structure was designed to control water from flash floods, rather than for storage, the diverted flow was used to irrigate cropland. […] Although groundwater has been exploited for desert farmland using hand-dug underground channels for a very long time, the discovery of reserves of groundwater much deeper below some deserts has led to agricultural use on much larger scales in recent times. These deep groundwater reserves tend to be non-renewable, having built up during previous climatic periods of greater rainfall. Use of this fossil water has in many areas resulted in its rapid depletion.”

“Significant human impacts are thought to have a very long history in some deserts. One possible explanation for the paucity of rainfall in the interior of Australia is that early humans severely modified the landscape through their use of fire. Aboriginal people have used fire extensively in Central Australia for more than 20,000 years, particularly as an aid to hunting, but also for many other purposes, from clearing passages to producing smoke signals and promoting the growth of preferred plants. The theory suggests that regular burning converted the semi-arid zone’s mosaic of trees, shrubs, and grassland into the desert scrub seen today. This gradual change in the vegetation could have resulted in less moisture from plants reaching the atmosphere and hence the long-term desertification of the continent.” (I had never heard about this theory before, and so I of course have no idea if it’s correct or not – but it’s an interesting idea).

A few wikipedia links of interest:
Yardang.
Karakum Canal.
Atacama Desert.
Salar de Uyuni.
Taklamakan Desert.
Dust Bowl.
Namib Desert.
Dzud.

August 27, 2016 Posted by | Anthropology, Biology, Books, Botany, Ecology, Engineering, Geography, Zoology

Human Drug Metabolism (I)

“It has been said that if a drug has no side effects, then it is unlikely to work. Drug therapy labours under the fundamental problem that usually every single cell in the body has to be treated just to exert a beneficial effect on a small group of cells, perhaps in one tissue. Although drug-targeting technology is improving rapidly, most of us who take an oral dose are still faced with the problem that the vast majority of our cells are being unnecessarily exposed to an agent that at best will have no effect, but at worst will exert many unwanted effects. Essentially, all drug treatment is really a compromise between positive and negative effects in the patient. […] This book is intended to provide a basic grounding in human drug metabolism, although it is useful if the reader has some knowledge of biochemistry, physiology and pharmacology from other sources. In addition, a qualitative understanding of chemistry can illuminate many facets of drug metabolism and toxicity. Although chemistry can be intimidating, I have tried to make the chemical aspects of drug metabolism as user-friendly as possible.”

I’m currently reading this book. To say that it is ‘useful if the reader has some knowledge’ of the topics mentioned is putting it mildly; I’d say it’s mandatory – my advice would be to stay far away from this book if you know nothing of pharmacology, biochem, and physiology. I know enough to follow most of the coverage, at least in terms of the big picture stuff, but some of the biochemistry details I frankly have been unable to follow; I think I could probably understand all of it if I were willing to look up all the words and concepts with which I’m unfamiliar, but I’m not willing to spend the time to do that. In this context it should also be mentioned that the book is very well written, in the sense that it is perfectly possible to read the book and follow the basic outline of what’s going on without necessarily understanding all details, so I don’t feel that the coverage in any way discourages me from reading the book the way I am – the significance of that hydrogen bond in the diagram will probably become apparent to you later, and even if it doesn’t you’ll probably manage.

In terms of general remarks about the book, a key point to be mentioned early on is also that the book is very dense and has a lot of interesting stuff. I find it hard at the moment to justify devoting time to blogging, but if that were not the case I’d probably feel tempted to cover this book in a lot of detail, with multiple posts delving into specific fascinating aspects of the coverage. Despite this being a book where I don’t really understand everything that’s going on all the time, I’m definitely at a five star rating at the moment, and I’ve read close to two-thirds of it at this point.

A few quotes:

“The process of drug development weeds out agents [or at least tries to weed out agents… – US] that have seriously negative actions and usually releases onto the market drugs that may have a profile of side effects, but these are relatively minor within a set concentration range where the drug’s pharmacological action is most effective. This range, or ‘therapeutic window’ is rather variable, but it will give some indication of the most ‘efficient’ drug concentration. This effectively means the most beneficial pharmacodynamic effects for the minimum side effects.”

If the dose is too low, you have a case of drug failure, where the drug doesn’t work. If the dose is too high, you experience toxicity. Both outcomes are problematic, but they manifest in different ways: drug failure is usually a gradual process (“Therapeutic drug failure is usually a gradual process, where the time frame may be days before the problem is detected”), whereas toxicity may be of very rapid onset, on the order of hours.

“To some extent, every patient has a unique therapeutic window for each drug they take, as there is such huge variation in our pharmacodynamic drug sensitivities. This book is concerned with what systems influence how long a drug stays in our bodies. […] [The therapeutic index] has been defined as the ratio between the lethal or toxic dose and the effective dose that shows the normal range of pharmacological effect. In practice, a drug […] is listed as having a narrow TI if there is less than a twofold difference between the lethal and effective doses, or a twofold difference in the minimum toxic and minimum effective concentrations. Back in the 1960s, many drugs in common use had narrow TIs […] that could be toxic at relatively low levels. Over the last 30 years, the drug industry has aimed to replace this type of drug with agents with much higher TIs. […] However, there are many drugs […] which remain in use that have narrow or relatively narrow TIs”.
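So the TI is just a ratio of doses. A minimal sketch of the definition as quoted – the doses below are hypothetical and purely for illustration:

```python
def therapeutic_index(toxic_dose, effective_dose):
    """TI as defined in the quote: toxic (or lethal) dose over effective dose."""
    return toxic_dose / effective_dose

def is_narrow_ti(toxic_dose, effective_dose):
    """Narrow TI: less than a twofold difference between the two doses."""
    return therapeutic_index(toxic_dose, effective_dose) < 2.0

# Hypothetical doses in mg:
print(therapeutic_index(15, 10), is_narrow_ti(15, 10))    # 1.5 True
print(therapeutic_index(500, 10), is_narrow_ti(500, 10))  # 50.0 False
```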

“metabolites are usually removed from the cell faster than the parent drug”

“The kidneys are mostly responsible for […] removal, known as elimination. The kidneys cannot filter large chemical entities like proteins, but they can remove the majority of smaller chemicals, depending on size, charge and water solubility. […] the kidney is a lipophilic (oil-loving) organ […] So the kidney is not efficient at eliminating lipophilic chemicals. One of the major roles of the liver is to use biotransforming enzymes to ensure that lipophilic agents are made water soluble enough to be cleared by the kidney. So the liver has an essential but indirect role in clearance, in that it must extract the drug from the circulation, biotransform (metabolize) it, then return the water-soluble product to the blood for the kidney to remove. The liver can also actively clear or physically remove its metabolic products from the circulation by excreting them in bile, where they travel through the gut to be eliminated in faeces.”

“Cell structures eventually settled around the format we see now, a largely aqueous cytoplasm bounded by a predominantly lipophilic protective membrane. Although the membrane does prevent entry and exit of many potential toxins, it is no barrier to other lipophilic molecules. If these molecules are highly lipophilic, they will passively diffuse into and become trapped in the membrane. If they are slightly less lipophilic, they will pass through it into the organism. So aside from ‘housekeeping’ enzyme systems, some enzymatic protection would have been needed against invading molecules from the immediate environment. […] the majority of living organisms including ourselves now possess some form of effective biotransformational enzyme capability which can detoxify and eliminate most hydrocarbons and related molecules. This capability has been effectively ‘stolen’ from bacteria over millions of years. The main biotransformational protection against aromatic hydrocarbons is a series of enzymes so named as they absorb UV light at 450 nm when reduced and bound to carbon monoxide. These specialized enzymes were termed cytochrome P450 monooxygenases or sometimes oxido-reductases. They are often referred to as ‘CYPs’ or ‘P450s’. […] All the CYPs accomplish their functions using the same basic mechanism, but each enzyme is adapted to dismantle particular groups of chemical structures. It is a testament to millions of years of ‘research and development’ in the evolution of CYPs, that perhaps 50,000 or more man-made chemical entities enter the environment for the first time every year and the vast majority can be oxidized by at least one form of CYP. […] To date, nearly 60 human CYPs have been identified […] It is likely that hundreds more CYP-mediated endogenous functions remain to be discovered. […] CYPs belong to a group of enzymes which all have similar core structures and modes of operation. […] Their importance to us is underlined by their key role in more than 75 per cent of all drug biotransformations.”

I would add a note here that a very large proportion of this book is, perhaps unsurprisingly in view of the above, about those CYPs; how they work, what exactly it is that they do, which different kinds there are and what roles they play in the metabolism of specific drugs and chemical compounds, variation in gene expression across individuals and across populations in the context of specific CYPs and how such variation may relate to differences in drug metabolism, etc.

“Drugs often parallel endogenous molecules in their oil solubility, although many are considerably more lipophilic than these molecules. Generally, drugs, and xenobiotic compounds, have to be fairly oil soluble or they would not be absorbed from the GI tract. Once absorbed these molecules could change both the structure and function of living systems and their oil solubility makes these molecules rather ‘elusive’, in the sense that they can enter and leave cells according to their concentration and are temporarily beyond the control of the living system. This problem is compounded by the difficulty encountered by living systems in the removal of lipophilic molecules. […] even after the kidney removes them from blood by filtering them, the lipophilicity of drugs, toxins and endogenous steroids means that as soon as they enter the collecting tubules, they can immediately return to the tissue of the tubules, as this is more oil-rich than the aqueous urine. So the majority of lipophilic molecules can be filtered dozens of times and only low levels are actually excreted. In addition, very high lipophilicity molecules like some insecticides and fire retardants might never leave adipose tissue at all […] This means that for lipophilic agents:
* the more lipophilic they are, the more these agents are trapped in membranes, affecting fluidity and causing disruption at high levels;
* if they are hormones, they can exert an irreversible effect on tissues that is outside normal physiological control;
* if they are toxic, they can potentially damage endogenous structures;
* if they are drugs, they are also free to cause any pharmacological effect for a considerable period of time.”

“A sculptor was once asked how he would go about sculpting an elephant from a block of stone. His response was ‘knock off all the bits that did not look like an elephant’. Similarly, drug-metabolizing CYPs have one main imperative, to make molecules more water-soluble. Every aspect of their structure and function, their position in the liver, their initial selection of substrate, binding, substrate orientation and catalytic cycling, is intended to accomplish this deceptively simple aim.”

“The use of therapeutic drugs is a constant battle to pharmacologically influence a system that is actively undermining the drugs’ effects by removing them as fast as possible. The processes of oxidative and conjugative metabolism, in concert with efflux pump systems, act to clear a variety of chemicals from the body into the urine or faeces, in the most rapid and efficient manner. The systems that manage these processes also sense and detect increases in certain lipophilic substances and this boosts the metabolic capability to respond to the increased load.”

“The aim of drug therapy is to provide a stable, predictable pharmacological effect that can be adjusted to the needs of the individual patient for as long is deemed clinically necessary. The physician may start drug therapy at a dosage that is decided on the basis of previous clinical experience and standard recommendations. At some point, the dosage might be increased if the desired effects were not forthcoming, or reduced if side effects are intolerable to the patient. This adjustment of dosage can be much easier in drugs that have a directly measurable response, such as a change in clotting time. However, in some drugs, this adjustment process can take longer to achieve than others, as the pharmacological effect, once attained, is gradually lost over a period of days. The dosage must be escalated to regain the original effect, sometimes several times, until the patient is stable on the dosage. In some cases, after some weeks of taking the drug, the initial pharmacological effect seen in the first few days now requires up to eight times the initial dosage to reproduce. It thus takes a significant period of time to create a stable pharmacological effect on a constant dose. In the same patients, if another drug is added to the regimen, it may not have any effect at all. In other patients, sudden withdrawal of perhaps only one drug in a regimen might lead to a gradual but serious intensification of the other drug’s side effects.”

“acceleration of drug metabolism as a response to the presence of certain drugs is known as ‘enzyme induction’ and drugs which cause it are often referred to as ‘inducers’ of drug metabolism. The process can be defined as: ‘An adaptive increase in the metabolizing capacity of a tissue’; this means that a drug or chemical is capable of inducing an increase in the transcription and translation of specific CYP isoforms, which are often (although not always) the most efficient metabolizers of that chemical. […] A new drug is generally regarded as an inducer if it produces a change in drug clearance which is equal to or greater than 40 per cent of an established potent inducer, usually taken as rifampicin. […] inducers are usually (but not always) lipophilic, contain aromatic groups and consequently, if they were not oxidized, they would be very persistent in living systems. CYP enzymes have evolved to oxidize this very type of agent; indeed, an elaborate and very effective system has also evolved to modulate the degree of CYP oxidation of these agents, so it is clear that living systems regard inducers as a particular threat among lipophilic agents in general. The process of induction is dynamic and closely controlled. The adaptive increase is constantly matched to the level of exposure to the drug, from very minor almost undetectable increases in CYP protein synthesis, all the way to a maximum enzyme synthesis that leads to the clearance of grammes of a chemical per day. Once exposure to the drug or toxin ceases, the adaptive increase in metabolizing capacity will subside gradually to the previous low level, usually within a time period of a few days. This varies according to the individual and the drug. […] it is clear there is almost limitless capacity for variation in terms of the basic pre-set responsiveness of the system as well as its susceptibility to different inducers and groups of inducers. Indeed, induction in different patients has been observed to differ by more than 20-fold.”
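
One way to picture this ‘dynamic and closely controlled’ adaptive increase is as simple protein turnover: synthesis is boosted while the inducer is present, and the extra enzyme decays away once exposure stops. Below is a toy Euler-integration sketch of such a model; the turnover picture, the two-day CYP half-life and the tenfold maximal induction are all assumptions for illustration, not values from the book:

import math

# Toy enzyme-turnover model of induction (assumed numbers throughout):
# dE/dt = k_syn - k_deg * E, with the inducer multiplying k_syn while present.
CYP_HALF_LIFE_DAYS = 2.0
k_deg = math.log(2) / CYP_HALF_LIFE_DAYS    # first-order degradation rate
k_syn = k_deg                               # normalized so baseline E = 1.0
FOLD_INDUCTION = 10.0                       # assumed maximal synthesis boost

def step(E: float, inducer_present: bool, dt: float) -> float:
    """Advance the enzyme level E by one Euler step of dt days."""
    syn = k_syn * (FOLD_INDUCTION if inducer_present else 1.0)
    return E + dt * (syn - k_deg * E)

dt = 0.05                                   # days per step
E = 1.0
for n in range(1, round(28 / dt) + 1):      # 14 days on the inducer, 14 off
    E = step(E, inducer_present=(n <= round(14 / dt)), dt=dt)
    if n % round(7 / dt) == 0:              # weekly snapshots
        print(f"day {n * dt:4.1f}: E = {E:.2f} x baseline")
# E climbs toward 10x baseline while dosing, then decays back to ~1x within a
# few protein half-lives once exposure stops, mirroring the gradual subsidence
# to the previous low level described above.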

This next one I added mostly because I didn’t know it, and I figured including it here would make it easier for me to remember later (i.e., not because I thought other people might find it interesting):

“CYP2E1 is very sensitive to diet, even becoming induced by high fat/low carbohydrate intakes. Surprisingly, starvation and diabetes also promote CYP2E1 functionality. Insulin levels fall during diet restriction, starvation and in diabetes, and the formation of functional 2E1 is suppressed by insulin, so these conditions promote the increase of 2E1 metabolic capability. One of the consequences of diabetes and starvation is the major shift from glucose to fatty acid/triglyceride oxidation, of which some of the by-products are small, hydrophilic and potentially toxic ‘ketone bodies’. These agents can cause a CNS intoxicating effect, which is seen in diabetics who are very hyperglycaemic: they may appear ‘drunk’ and their breath will smell as if they had been drinking.”

A more general point, which may be of more interest to other readers, is that this is far from the only CYP sensitive to diet, and that diet-mediated effects may be very significant. I may go into this in more detail in a later post. Note that grapefruit is a dietary component that can cause major problems in many drug contexts:

“Although patients have been heroically consuming grapefruit juice for their health for decades, it took until the late 1980s before its effects on drug clearance were noted and several more years before it was realized that there could be a major problem with drug interactions […] The most noteworthy feature of the effect of grapefruit juice is its potency from a single ‘dose’ which coincides with a typical single breakfast intake of the juice, say around 200–300 ml. Studies with CYP3A substrates such as midazolam have shown that it can take up to three days before the effects wear off, which is consistent with the synthesis of new enzyme. […] there are a number of drugs that are subject to a very high gut wall component to their ‘first-pass’ metabolism […]; these include midazolam, terfenadine, lovastatin, simvastatin and astemizole. Their gut CYP clearance is so high that if the juice inhibits it, the concentration reaching the liver can increase six- or sevenfold. If the liver normally only extracts a relatively minor proportion of the parent agent, then plasma levels of such drugs increase dramatically towards toxicity […] the inhibitory effects of grapefruit juice on high first-pass drugs are particularly clinically relevant, as they can occur after one exposure to the juice.”
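
The gut-wall arithmetic behind that six- or sevenfold figure is easy to make concrete. Here is a minimal sketch; the extraction fractions are assumptions chosen only to land in the range the book describes, not values from the text:

def oral_bioavailability(f_abs: float, f_gut: float, f_hep: float) -> float:
    """Fraction of an oral dose reaching the systemic circulation: fraction
    absorbed x fraction escaping gut-wall metabolism x fraction escaping
    hepatic first-pass extraction."""
    return f_abs * f_gut * f_hep

# Assumed numbers for a drug with a very high gut-wall first-pass component:
baseline = oral_bioavailability(f_abs=1.0, f_gut=0.15, f_hep=0.7)
# Grapefruit juice knocks out gut CYP3A (f_gut -> 1.0); the liver, which here
# extracts only a relatively minor proportion, is left unchanged.
with_juice = oral_bioavailability(f_abs=1.0, f_gut=1.0, f_hep=0.7)
print(f"F: {baseline:.2f} -> {with_juice:.2f}, "
      f"a {with_juice / baseline:.1f}-fold increase")   # ~6.7-fold

Because the liver extracts only a minor proportion of what reaches it, essentially all of that extra drug spills into plasma, which is the ‘increase dramatically towards toxicity’ scenario the author describes.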

It may sound funny, but there are two pages in this book about the effects of grapefruit juice, including a list of ‘Drugs that should not be taken with grapefruit juice’. Grapefruit is a well-known ‘mechanism-based inhibitor’, and it may affect the metabolism of a lot of different drugs. It is far from the only known dietary component which may cause problems in a drug-metabolism context – for example “cranberry juice has been known for some time as an inhibitor of warfarin metabolism”. On a general note the author remarks that: “There are hundreds of fruit preparations available that have been specifically marketed for their […] antioxidant capacities, such as purple grape, pomegranate, blueberry and acai juices. […] As they all contain large numbers of diverse phenolics and are pharmacologically active, they should be consumed with some caution during drug therapy.”

April 7, 2016 Posted by | Biology, Books, Medicine, Nephrology, Pharmacology | Leave a comment

Random Stuff

i. Some new words I’ve encountered (not all of them are from vocabulary.com, but many of them are):

Uxoricide, persnickety, logy, philoprogenitive, impassive, hagiography, gunwale, flounce, vivify, pelage, irredentism, pertinacity, callipygous, valetudinarian, recrudesce, adjuration, epistolary, dandle, picaresque, humdinger, newel, lightsome, lunette, inflect, misoneism, cormorant, immanence, parvenu, sconce, acquisitiveness, lingual, Macaronic, divot, mettlesome, logomachy, raffish, marginalia, omnifarious, tatter, licit.

ii. A lecture:

I got annoyed a few times by the fact that you can’t tell where he’s pointing when he’s talking about the slides, which makes the lecture harder to follow than it ought to be, but it’s still an interesting lecture.

iii. Facts about Dihydrogen Monoxide. Includes coverage of important neglected topics such as ‘What is the link between Dihydrogen Monoxide and school violence?’ After reading the article, I am frankly outraged that this stuff’s still legal!

iv. Some wikipedia links of interest:

Steganography.

“Steganography […] is the practice of concealing a file, message, image, or video within another file, message, image, or video. The word steganography combines the Greek words steganos (στεγανός), meaning “covered, concealed, or protected”, and graphein (γράφειν) meaning “writing”. […] Generally, the hidden messages appear to be (or be part of) something else: images, articles, shopping lists, or some other cover text. For example, the hidden message may be in invisible ink between the visible lines of a private letter. Some implementations of steganography that lack a shared secret are forms of security through obscurity, whereas key-dependent steganographic schemes adhere to Kerckhoffs’s principle.[1]

The advantage of steganography over cryptography alone is that the intended secret message does not attract attention to itself as an object of scrutiny. Plainly visible encrypted messages—no matter how unbreakable—arouse interest, and may in themselves be incriminating in countries where encryption is illegal.[2] Thus, whereas cryptography is the practice of protecting the contents of a message alone, steganography is concerned with concealing the fact that a secret message is being sent, as well as concealing the contents of the message.”
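
The simplest digital version of this is least-significant-bit (LSB) embedding: overwrite the lowest bit of each byte of some innocuous cover data, such as raw pixel values, with the bits of the secret message. A minimal (and, as written, easily detectable) Python sketch, not taken from the article:

def hide(cover: bytearray, message: bytes) -> bytearray:
    """Embed the message bits (MSB first) in the LSB of each cover byte."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(cover), "cover too small for message"
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit   # change only the lowest bit
    return stego

def reveal(stego: bytes, n_bytes: int) -> bytes:
    """Read n_bytes of hidden message back out of the LSBs."""
    out = bytearray()
    for j in range(n_bytes):
        byte = 0
        for b in stego[j * 8:(j + 1) * 8]:
            byte = (byte << 1) | (b & 1)
        out.append(byte)
    return bytes(out)

cover = bytearray(range(256))     # stand-in for raw image or audio data
assert reveal(hide(cover, b"hi"), 2) == b"hi"

Each cover byte changes by at most one, so the carrier looks unchanged to a casual observer, which is the whole point: the message does not announce its own existence. Note that this toy scheme involves no shared secret at all, i.e. it is exactly the ‘security through obscurity’ variant mentioned above.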

H. H. Holmes. A really nice guy.

“Herman Webster Mudgett (May 16, 1861 – May 7, 1896), better known under the name of Dr. Henry Howard Holmes or more commonly just H. H. Holmes, was one of the first documented serial killers in the modern sense of the term.[1][2] In Chicago, at the time of the 1893 World’s Columbian Exposition, Holmes opened a hotel which he had designed and built for himself specifically with murder in mind, and which was the location of many of his murders. While he confessed to 27 murders, of which nine were confirmed, his actual body count could be up to 200.[3] He brought an unknown number of his victims to his World’s Fair Hotel, located about 3 miles (4.8 km) west of the fair, which was held in Jackson Park. Besides being a serial killer, H. H. Holmes was also a successful con artist and a bigamist. […]

Holmes purchased an empty lot across from the drugstore where he built his three-story, block-long hotel building. Because of its enormous structure, local people dubbed it “The Castle”. The building was 162 feet long and 50 feet wide. […] The ground floor of the Castle contained Holmes’ own relocated drugstore and various shops, while the upper two floors contained his personal office and a labyrinth of rooms with doorways opening to brick walls, oddly-angled hallways, stairways leading to nowhere, doors that could only be opened from the outside and a host of other strange and deceptive constructions. Holmes was constantly firing and hiring different workers during the construction of the Castle, claiming that “they were doing incompetent work.” His actual reason was to ensure that he was the only one who fully understood the design of the building.[3]”

Minnesota Starvation Experiment.

“The Minnesota Starvation Experiment […] was a clinical study performed at the University of Minnesota between November 19, 1944 and December 20, 1945. The investigation was designed to determine the physiological and psychological effects of severe and prolonged dietary restriction and the effectiveness of dietary rehabilitation strategies.

The motivation of the study was twofold: First, to produce a definitive treatise on the subject of human starvation based on a laboratory simulation of severe famine and, second, to use the scientific results produced to guide the Allied relief assistance to famine victims in Europe and Asia at the end of World War II. It was recognized early in 1944 that millions of people were in grave danger of mass famine as a result of the conflict, and information was needed regarding the effects of semi-starvation—and the impact of various rehabilitation strategies—if postwar relief efforts were to be effective.”

“most of the subjects experienced periods of severe emotional distress and depression.[1]:161 There were extreme reactions to the psychological effects during the experiment including self-mutilation (one subject amputated three fingers of his hand with an axe, though the subject was unsure if he had done so intentionally or accidentally).[5] Participants exhibited a preoccupation with food, both during the starvation period and the rehabilitation phase. Sexual interest was drastically reduced, and the volunteers showed signs of social withdrawal and isolation.[1]:123–124 […] One of the crucial observations of the Minnesota Starvation Experiment […] is that the physical effects of the induced semi-starvation during the study closely approximate the conditions experienced by people with a range of eating disorders such as anorexia nervosa and bulimia nervosa.”

Post-vasectomy pain syndrome. That a vasectomy may spontaneously reverse is a risk people probably know about, but this one also seems worth being aware of if one is considering having a vasectomy.

Transport in the Soviet Union (‘good article’). A few observations from the article:

“By the mid-1970s, only eight percent of the Soviet population owned a car. […]  From 1924 to 1971 the USSR produced 1 million vehicles […] By 1975 only 8 percent of rural households owned a car. […] Growth of motor vehicles had increased by 224 percent in the 1980s, while hardcore surfaced roads only increased by 64 percent. […] By the 1980s Soviet railways had become the most intensively used in the world. Most Soviet citizens did not own private transport, and if they did, it was difficult to drive long distances due to the poor conditions of many roads. […] Road transport played a minor role in the Soviet economy, compared to domestic rail transport or First World road transport. According to historian Martin Crouch, road traffic of goods and passengers combined was only 14 percent of the volume of rail transport. It was only late in its existence that the Soviet authorities put emphasis on road construction and maintenance […] Road transport as a whole lagged far behind that of rail transport; the average distance moved by motor transport in 1982 was 16.4 kilometres (10.2 mi), while the average for railway transport was 930 km per ton and 435 km per ton for water freight. In 1982 there was a threefold increase in investment since 1960 in motor freight transport, and more than a thirtyfold increase since 1940.”

March 3, 2016 Posted by | Biology, Cryptography, History, Language, Lectures, Ophthalmology, Random stuff, Wikipedia, Zoology | Leave a comment

Quotes

i. “The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data.” (John Tukey)

ii. “Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise.” (-ll-)

iii. “They who can no longer unlearn have lost the power to learn.” (John Lancaster Spalding)

iv. “If there are but few who interest thee, why shouldst thou be disappointed if but few find thee interesting?” (-ll-)

v. “Since the mass of mankind are too ignorant or too indolent to think seriously, if majorities are right it is by accident.” (-ll-)

vi. “As they are the bravest who require no witnesses to their deeds of daring, so they are the best who do right without thinking whether or not it shall be known.” (-ll-)

vii. “Perfection is beyond our reach, but they who earnestly strive to become perfect, acquire excellences and virtues of which the multitude have no conception.” (-ll-)

viii. “We are made ridiculous less by our defects than by the affectation of qualities which are not ours.” (-ll-)

ix. “If thy words are wise, they will not seem so to the foolish: if they are deep the shallow will not appreciate them. Think not highly of thyself, then, when thou art praised by many.” (-ll-)

x. “Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.” (George E. P. Box)

xi. “Intense ultraviolet (UV) radiation from the young Sun acted on the atmosphere to form small amounts of very many gases. Most of these dissolved easily in water, and fell out in rain, making Earth’s surface water rich in carbon compounds. […] the most important chemical of all may have been cyanide (HCN). It would have formed easily in the upper atmosphere from solar radiation and meteorite impact, then dissolved in raindrops. Today it is broken down almost at once by oxygen, but early in Earth’s history it built up at low concentrations in lakes and oceans. Cyanide is a basic building block for more complex organic molecules such as amino acids and nucleic acid bases. Life probably evolved in chemical conditions that would kill us instantly!” (Richard Cowen, History of Life, p.8)

xii. “Dinosaurs dominated land communities for 100 million years, and it was only after dinosaurs disappeared that mammals became dominant. It’s difficult to avoid the suspicion that dinosaurs were in some way competitively superior to mammals and confined them to small body size and ecological insignificance. […] Dinosaurs dominated many guilds in the Cretaceous, including that of large browsers. […] in terms of their reconstructed behavior […] dinosaurs should be compared not with living reptiles, but with living mammals and birds. […] By the end of the Cretaceous there were mammals with varied sets of genes but muted variation in morphology. […] All Mesozoic mammals were small. Mammals with small bodies can play only a limited number of ecological roles, mainly insectivores and omnivores. But when dinosaurs disappeared at the end of the Cretaceous, some of the Paleocene mammals quickly evolved to take over many of their ecological roles” (ibid., pp. 145, 154, 222, 227-228)

xiii. “To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of.” (Ronald Fisher)

xiv. “Ideas are incestuous.” (Howard Raiffa)

xv. “Game theory […] deals only with the way in which ultrasmart, all-knowing people should behave in competitive situations, and has little to say to Mr. X as he confronts the morass of his problem.” (-ll-)

xvi. “One of the principal objects of theoretical research is to find the point of view from which the subject appears in the greatest simplicity.” (Josiah Willard Gibbs)

xvii. “Nothing is as dangerous as an ignorant friend; a wise enemy is to be preferred.” (Jean de La Fontaine)

xviii. “Humility is a virtue all preach, none practice; and yet everybody is content to hear.” (John Selden)

xix. “Few men make themselves masters of the things they write or speak.” (-ll-)

xx. “Wise men say nothing in dangerous times.” (-ll-)

January 15, 2016 Posted by | Biology, Books, Paleontology, Quotes/aphorisms, Statistics | Leave a comment