“Every atom of our bodies has been part of a star, and every informed person should know something of how the stars evolve.”

I gave the book three stars on goodreads. At times it’s a bit too popular-science-y for me, and I think the level of coverage is a little lower than that of some of the other physics books in Oxford University Press’s ‘A Very Short Introduction‘ series, but on the other hand it did teach me some new things and explained other things I knew about but had not fully understood before – and I’m well aware that it can be really hard to strike the right balance when writing books like these. I don’t like it when authors employ analogies instead of equations to explain stuff, but I’ve seen some of the relevant equations before, e.g. in the context of IAS lectures, so I was okay with skipping some of the math – I know how the math here can blow up in your face fast. And it’s not like this book has no math or equations; I think it’s the kind of math most people should be able to deal with. It’s a decent introduction to the topic, and I must admit I have yet to be significantly disappointed by a book from the physics part of this OUP series – they’re good books, readable and interesting.

Below I have added some quotes and observations from the book, as well as some relevant links to material or people covered in it. Some of the links I have also added previously when covering other books in the physics series, but I do not really care about that, as I try to cover each book separately. The two main ideas behind adding links of this kind are: 1) to remind me which topics were covered in the book (topics I was unable to cover in detail using quotes, because there’s too much stuff in the book for that to make sense), and: 2) to give people who might be interested in reading the book an idea of which topics are covered in it; if I neglected to add relevant links simply because those topics were also covered in other books I’ve covered here, the link collection would not accomplish what I’d like it to accomplish. The link collection was gathered while I was reading the book (I was bookmarking relevant wiki articles along the way), whereas the quotes were only added to the post after I had finished adding the links; I am well aware that some topics covered in the quotes are also covered in the link collection, but I didn’t care enough about this ‘double coverage of topics’ to remove the overlapping links.

I think finding good quotes to include in this post was harder than it has been for some of the other physics books I’ve covered recently, because the author goes into quite some detail explaining specific dynamics of stellar evolution which are not easy to boil down to a short quote that is still meaningful to people who do not know the context. The fact that he does go into those details was of course part of the reason why I liked the book.

“[W]e cannot consider heat energy in isolation from the other large energy store that the Sun has – gravity. Clearly, gravity is an energy source, since if it were not for the resistance of gas pressure, it would make all the Sun’s gas move inwards at high speed. So heat and gravity are both potential sources of energy, and must be related by the need to keep the Sun in equilibrium. As the Sun tries to cool down, energy must be swapped between these two forms to keep the Sun in balance […] the heat energy inside the Sun is not enough to spread all of its contents out over space and destroy it as an identifiable object. The Sun is gravitationally bound – its heat energy is significant, but cannot supply enough energy to loosen gravity’s grip, and unbind the Sun. This means that when pressure balances gravity for any system (as in the Sun), the total heat energy T is always slightly less than that needed (V) to disperse it. In fact, it turns out to be exactly half of what would be needed for this dispersal, so that 2T + V = 0, or V = −2 T. The quantities T and V have opposite signs, because energy has to be supplied to overcome gravity, that is, you have to use T to try to cancel some of V. […] you need to supply energy to a star in order to overcome its gravity and disperse all of its gas to infinity. In line with this, the star’s total energy (thermal plus gravitational) is E = T + V = −T, that is, the total energy is minus its thermal energy, and so is itself negative. That is, a star is a gravitationally bound object. Whenever the system changes slowly enough that pressure always balances gravity, these two energies always have to be in this 1:2 ratio. […] This reasoning shows that cooling, shrinking, and heating up all go together, that is, as the Sun tries to cool down, its interior heats up. 
[…] Because E = –T, when the star loses energy (by radiating), making its total energy E more negative, the thermal energy T gets more positive, that is, losing energy makes the star heat up. […] This result, that stars heat up when they try to cool, is central to understanding why stars evolve.”
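The bookkeeping in the quote above is compact enough to check directly. The sketch below is my own (not from the book); it just encodes the virial relations 2T + V = 0 and E = T + V = −T in code, and verifies the punchline that radiating energy away makes the star’s thermal energy go up:

```python
# Toy check of the virial bookkeeping quoted above (my sketch, not the book's):
# for a star in hydrostatic equilibrium, 2T + V = 0, so E = T + V = -T.
def virial_state(T):
    V = -2.0 * T          # gravitational energy fixed by the virial theorem
    E = T + V             # total energy = -T, i.e. negative: the star is bound
    return V, E

T = 1.0                   # thermal energy, arbitrary units
V, E = virial_state(T)
assert E == -T            # total energy is minus the thermal energy

# Radiating makes E more negative; since T = -E, the star heats up:
E_after = E - 0.1         # star radiates away 0.1 units of energy
T_after = -E_after
assert T_after > T        # losing energy raises the interior temperature
```

The sign juggling is the whole point: because E = −T, anything that lowers E necessarily raises T.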

“[T]he whole of chemistry is simply the science of electromagnetic interaction of atoms with each other. Specifically, chemistry is what happens when electrons stick atoms together to make molecules. The electrons doing the sticking are the outer ones, those furthest from the nucleus. The physical rules governing the arrangement of electrons around the nucleus mean that atoms divide into families characterized by their outer electron configurations. Since the outer electrons specify the chemical properties of the elements, these families have similar chemistry. This is the origin of the periodic table of the elements. In this sense, chemistry is just a specialized branch of physics. […] atoms can combine, or react, in many different ways. A chemical reaction means that the electrons sticking atoms together are rearranging themselves. When this happens, electromagnetic energy may be released, […] or an energy supply may be needed […] Just as we measured gravitational binding energy as the amount of energy needed to disperse a body against the force of its own gravity, molecules have electromagnetic binding energies measured by the energies of the orbiting electrons holding them together. […] changes of electronic binding only produce chemical energy yields, which are far too small to power stars. […] Converting hydrogen into helium is about 15 million times more effective than burning oil. This is because strong nuclear forces are so much more powerful than electromagnetic forces.”

“[T]here are two chains of reactions which can convert hydrogen to helium. The rate at which they occur is in both cases quite sensitive to the gas density, varying as its square, but extremely sensitive to the gas temperature […] If the temperature is below a certain threshold value, the total energy output from hydrogen burning is completely negligible. If the temperature rises only slightly above this threshold, the energy output becomes enormous. It becomes so enormous that the effect of all this energy hitting the gas in the star’s centre is life-threatening to it. […] energy is related to mass. So being hit by energy is like being hit by mass: luminous energy exerts a pressure. For a luminosity above a certain limiting value related to the star’s mass, the pressure will blow it apart. […] The central temperature of the Sun, and stars like it, must be almost precisely at the threshold value. It is this temperature sensitivity which fixes the Sun’s central temperature at the value of ten million degrees […] All stars burning hydrogen in their centres must have temperatures close to this value. […] central temperature [is] roughly proportional to the ratio of mass to radius [and this means that] the radius of a hydrogen-burning star is approximately proportional to its mass […] You might wonder how the star ‘knows’ that its radius is supposed to have this value. This is simple: if the radius is too large, the star’s central temperature is too low to produce any nuclear luminosity at all. […] the star will shrink in an attempt to provide the luminosity from its gravitational binding energy. But this shrinking is just what it needs to adjust the temperature in its centre to the right value to start hydrogen burning and produce exactly the right luminosity. Similarly, if the star’s radius is slightly too small, its nuclear luminosity will grow very rapidly. 
This increases the radiation pressure, and forces the star to expand, again back to the right radius and so the right luminosity. These simple arguments show that the star’s structure is self-adjusting, and therefore extremely stable […] The basis of this stability is the sensitivity of the nuclear luminosity to temperature and so radius, which controls it like a thermostat.”
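The ‘thermostat’ argument lends itself to a toy simulation. The model below is entirely my own construction, not the author’s: I take the central temperature to scale as 1/R at fixed mass, make the nuclear luminosity extremely temperature-sensitive via a large exponent, and let the radius drift in response to any luminosity imbalance. The numbers are arbitrary; the point is only the negative feedback.

```python
# Toy model (mine, not the book's) of the nuclear 'thermostat':
# central temperature ~ M/R, and the nuclear luminosity is extremely
# temperature-sensitive, modelled here as L = L0 * (R0/R)**nu, nu large.
R0, L0, nu = 1.0, 1.0, 20.0   # equilibrium radius/luminosity, arbitrary units

def nuclear_luminosity(R):
    return L0 * (R0 / R) ** nu

R = 1.2                        # start with the radius slightly too large
for _ in range(10000):
    L = nuclear_luminosity(R)
    # too little luminosity -> the star contracts; too much -> radiation
    # pressure makes it expand (both pushes point back toward R0)
    R += 0.005 * (L - L0)

assert abs(R - R0) < 1e-3      # the star self-adjusts back to equilibrium
```

Starting too large, the star shrinks until burning switches on; had it started too small, the luminosity spike would have pushed it back out. Either way it settles at the radius that produces exactly the equilibrium luminosity.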

“Hydrogen burning produces a dense and growing ball of helium at the star’s centre. […] the star has a weight problem to solve – the helium ball feels its own weight, and that of all the rest of the star as well. A similar effect led to the ignition of hydrogen in the first place […] we can see what happens as the core mass grows. Let’s imagine that the core mass has doubled. Then the core radius also doubles, and its volume grows by a factor 2 × 2 × 2 = 8. This is a bigger factor than the mass growth, so the density is 2/(2 × 2 × 2) = 1/4 of its original value. We end with the surprising result that as the helium core mass grows in time, its central number density drops. […] Because pressure is proportional to density, the central pressure of the core drops also […] Since the density of the hydrogen envelope does not change over time, […] the helium core becomes less and less able to cope with its weight problem as its mass increases. […] The end result is that once the helium core contains more than about 10% of the star’s mass, its pressure is too low to support the weight of the star, and things have to change drastically. […] massive stars have much shorter main-sequence lifetimes, decreasing like the inverse square of their masses […] A star near the minimum main-sequence mass of one-tenth of the Sun’s has an unimaginably long lifetime of almost 10^13 years, nearly a thousand times the Sun’s. All low-mass stars are still in the first flush of youth. This is the fundamental fact of stellar life: massive stars have short lives, and low-mass stars live almost forever – certainly far longer than the current age of the Universe.”
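The density arithmetic in that quote is easy to verify explicitly – if the core radius grows in proportion to the core mass, density scales as M/R³ ∝ 1/M². A two-line check (my own, just restating the book’s numbers):

```python
# Check of the scaling quoted above: if the helium core's radius grows in
# proportion to its mass, doubling the mass gives
mass_factor = 2.0
radius_factor = 2.0                          # radius tracks mass
volume_factor = radius_factor ** 3           # 2 x 2 x 2 = 8
density_factor = mass_factor / volume_factor
assert volume_factor == 8.0
assert density_factor == 0.25                # density drops to 1/4
```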

“We have met all three […] timescales [see links below – US] for the Sun. The nuclear time is ten billion years, the thermal timescale is thirty million years, and the dynamical one […] just half an hour. […] Each timescale says how long the star takes to react to changes of the given type. The dynamical time tells us that if we mess up the hydrostatic balance between pressure and weight, the star will react by moving its mass around for a few dynamical times (in the Sun’s case, a few hours) and then settle down to a new state in which pressure and weight are in balance. And because this time is so short compared with the thermal time, the stellar material will not have lost or gained any significant amount of heat, but simply carried this around […] although the star quickly finds a new hydrostatic equilibrium, this will not correspond to thermal equilibrium, where heat moves smoothly outwards through the star at precisely the rate determined by the nuclear reactions deep in the centre. Instead, some bits of the star will be too cool to pass all this heat on outwards, and some will be too hot to absorb much of it. Over a thermal timescale (a few tens of millions of years in the Sun), the cool parts will absorb the extra heat they need from the stellar radiation field, and the hot parts rid themselves of the excess they have, until we again reach a new state of thermal equilibrium. Finally, the nuclear timescale tells us the time over which the star synthesizes new chemical elements, radiating the released energy into space.”
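Out of curiosity I redid the order-of-magnitude estimates behind two of these timescales, using standard solar values (the constants below are mine, not the book’s; dimensionless prefactors are dropped, so agreement within a factor of two is all one should expect):

```python
# Rough check (my numbers, not the book's) of two of the three solar
# timescales quoted above, using standard solar values.
G     = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
R_sun = 6.957e8        # m
L_sun = 3.828e26       # W
YEAR  = 3.156e7        # s

# Kelvin-Helmholtz (thermal) time: gravitational energy / luminosity
t_thermal = G * M_sun**2 / (R_sun * L_sun) / YEAR

# Dynamical (free-fall) time: ~ 1/sqrt(G * mean density)
rho_mean = M_sun / (4.0 / 3.0 * 3.14159 * R_sun**3)
t_dyn_minutes = 1.0 / (G * rho_mean) ** 0.5 / 60.0

print(f"thermal ~ {t_thermal:.1e} yr, dynamical ~ {t_dyn_minutes:.0f} min")
```

The thermal estimate lands right on the quoted thirty million years; the dynamical one comes out at around an hour, the same ballpark as the book’s half hour (the difference is just the dropped prefactor).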

“[S]tars can end their lives in just one of three possible ways: white dwarf, neutron star, or black hole.”

“Stars live a long time, but must eventually die. Their stores of nuclear energy are finite, so they cannot shine forever. […] they are forced onwards through a succession of evolutionary states because the virial theorem connects gravity with thermodynamics and prevents them from cooling down. So main-sequence dwarfs inexorably become red giants, and then supergiants. What breaks this chain? Its crucial link is that the pressure supporting a star depends on how hot it is. This link would snap if the star was instead held up by a pressure which did not care about its heat content. Finally freed from the demand to stay hot to support itself, a star like this would slowly cool down and die. This would be an endpoint for stellar evolution. […] Electron degeneracy pressure does not depend on temperature, only density. […] one possible endpoint of stellar evolution arises when a star is so compressed that electron degeneracy is its main form of pressure. […] [Once] the star is a supergiant […] a lot of its mass is in a hugely extended envelope, several hundred times the Sun’s radius. Because of this vast size, the gravity tying the envelope to the core is very weak. […] Even quite small outward forces can easily overcome this feeble pull and liberate mass from the envelope, so a lot of the star’s mass is blown out into space. Eventually, almost the entire remaining envelope is ejected as a roughly spherical cloud of gas. The core quickly exhausts the thin shell of nuclear-burning material on its surface. Now gravity makes the core contract in on itself and become denser, increasing the electron degeneracy pressure further. The core ends as an extremely compact star, with a radius similar to the Earth’s, but a mass similar to the Sun, supported by this pressure. This is a white dwarf. […] Even though its surface is at least initially hot, its small surface means that it is faint. 
[…] White dwarfs cannot start nuclear reactions, so eventually they must cool down and become dark, cold, dead objects. But before this happens, they still glow from the heat energy left over from their earlier evolution, slowly getting fainter. Astronomers observe many white dwarfs in the sky, suggesting that this is how a large fraction of all stars end their lives. […] Stars with an initial mass more than about seven times the Sun’s cannot end as white dwarfs.”

“In many ways, a neutron star is a vastly more compact version of a white dwarf, with the fundamental difference that its pressure arises from degenerate neutrons, not degenerate electrons. One can show that the ratio of the two stellar radii, with white dwarfs about one thousand times bigger than the 10 kilometres of a neutron star, is actually just the ratio of neutron to electron mass.”
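This mass-ratio claim is a nice one to plug numbers into. Using standard particle masses (my own check, not the book’s figures), the ratio comes out closer to 1,800 than 1,000, but to the order of magnitude intended it works: it takes a 10 km neutron star to something within a factor of a few of the Earth’s radius, matching the white-dwarf size quoted earlier.

```python
# Sanity check (mine, not the book's) of the radius-ratio claim: the ratio
# of white-dwarf to neutron-star radii should be roughly m_n / m_e.
m_n = 1.675e-27        # neutron mass, kg
m_e = 9.109e-31        # electron mass, kg

ratio = m_n / m_e      # ~1,839
R_ns_km = 10.0
R_wd_km = ratio * R_ns_km   # ~18,000 km: Earth-sized to order of magnitude
print(f"m_n/m_e ~ {ratio:.0f}, implied white-dwarf radius ~ {R_wd_km:.0f} km")
```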

“Most massive stars are not isolated, but part of a binary system […]. If one is a normal star, and the other a neutron star, and the binary is not very wide, there are ways for gas to fall from the normal star on to the neutron star. […] Accretion on to very compact objects like neutron stars almost always occurs through a disc, since the gas that falls in always has some rotation. […] a star’s luminosity cannot be bigger than the Eddington limit. At this limit, the pressure of the radiation balances the star’s gravity at its surface, so any more luminosity blows matter off the star. The same sort of limit must apply to accretion: if this tries to make too high a luminosity, radiation pressure will tend to blow away the rest of the gas that is trying to fall in, and so reduce the luminosity until it is below the limit. […] a neutron star is only 10 kilometres in radius, compared with the 700,000 kilometres of the Sun. This can only happen if this very small surface gets very hot. The surface of a healthily accreting neutron star reaches about 10 million degrees, compared with the 6,000 or so of the Sun. […] The radiation from such intensely hot surfaces comes out at much shorter wavelengths than the visible emission from the Sun – the surfaces of a neutron star and its accretion disc emit photons that are much more energetic than those of visible light. Accreting neutron stars and black holes make X-rays.”
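The Eddington limit mentioned here has a simple closed form which the book doesn’t write out: L_Edd = 4πGMm_p c/σ_T, the luminosity at which radiation pressure on the electrons balances gravity on the protons. Plugging in one solar mass (standard constants, my own check):

```python
# The Eddington limit for a one-solar-mass accretor (standard formula,
# not written out explicitly in the book): L_Edd = 4*pi*G*M*m_p*c / sigma_T.
import math

G       = 6.674e-11    # m^3 kg^-1 s^-2
M_sun   = 1.989e30     # kg
m_p     = 1.673e-27    # proton mass, kg
c       = 2.998e8      # speed of light, m/s
sigma_T = 6.652e-29    # Thomson scattering cross-section, m^2
L_sun   = 3.828e26     # W

L_edd = 4 * math.pi * G * M_sun * m_p * c / sigma_T
print(f"L_Edd ~ {L_edd:.2e} W ~ {L_edd / L_sun:.0f} L_sun")
```

That comes out at roughly 30,000 times the Sun’s luminosity – which is why accreting neutron stars can vastly outshine ordinary stars despite their tiny surfaces.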

“[S]tar formation […] is harder to understand than any other part of stellar evolution. So we use our knowledge of the later stages of stellar evolution to help us understand star formation. Working backwards in this way is a very common procedure in astronomy […] We know much less about how stars form than we do about any later part of their evolution. […] The cyclic nature of star formation, with stars being born from matter chemically enriched by earlier generations, and expelling still more processed material into space as they die, defines a cosmic epoch – the epoch of stars. The end of this epoch will arrive only when the stars have turned all the normal matter of the Universe into iron, and left it locked in dead remnants such as black holes.”

Stellar evolution.
Gustav Kirchhoff.
Robert Bunsen.
Joseph von Fraunhofer.
Absorption spectroscopy.
Emission spectrum.
Doppler effect.
Stellar luminosity.
Cecilia Payne-Gaposchkin.
Ejnar Hertzsprung/Henry Norris Russell/Hertzsprung–Russell diagram.
Red giant.
White dwarf (featured article).
Main sequence (featured article).
Gravity/Electrostatics/Strong nuclear force.
Pressure/Boyle’s law/Charles’s law.
Hermann von Helmholtz.
William Thomson (Kelvin).
Gravitational binding energy.
Thermal energy/Gravitational energy.
Virial theorem.
Kelvin-Helmholtz time scale.
Chemical energy/Bond-dissociation energy.
Nuclear binding energy.
Nuclear fusion.
Heisenberg’s uncertainty principle.
Quantum tunnelling.
Pauli exclusion principle.
Eddington limit.
Electron degeneracy pressure.
Nuclear timescale.
Number density.
Dynamical timescale/free-fall time.
Hydrostatic equilibrium/Thermal equilibrium.
Core collapse.
Hertzsprung gap.
Supergiant star.
Chandrasekhar limit.
Core-collapse supernova (‘good article’).
Crab Nebula.
Stellar nucleosynthesis.
Neutron star.
Schwarzschild radius.
Black hole (‘good article’).
Roy Kerr.
Jocelyn Bell.
Anthony Hewish.
Accretion/Accretion disk.
X-ray binary.
Binary star evolution.
SS 433.
Gamma ray burst.
Hubble’s law/Hubble time.
Cosmic distance ladder/Standard candle/Cepheid variable.
Star formation.
Pillars of Creation.
Jeans instability.
Initial mass function.

July 2, 2017 Posted by | Astronomy, Books, Chemistry, Physics


“A recent study estimated that 234 million surgical procedures requiring anaesthesia are performed worldwide annually. Anaesthesia is the largest hospital specialty in the UK, with over 12,000 practising anaesthetists […] In this book, I give a short account of the historical background of anaesthetic practice, a review of anaesthetic equipment, techniques, and medications, and a discussion of how they work. The risks and side effects of anaesthetics will be covered, and some of the subspecialties of anaesthetic practice will be explored.”

I liked the book, and I gave it three stars on goodreads; I was closer to four stars than two. Below I have added a few sample observations from the book, as well as what turned out to be quite a considerable number of links (more than 60, by a quick count) to topics/people/etc. discussed or mentioned in the text. I decided to spend a bit more time finding relevant links than I’ve previously done when writing link-heavy posts, so in this post I have not limited myself to wikipedia articles; I e.g. also link directly to primary literature discussed in the coverage. The links provided are, as usual, meant to be indicators of which kind of stuff is covered in the book, rather than an alternative to the book. Some of the wikipedia articles in particular I assume are not very good; the main point of a link to a wikipedia article of questionable quality is to indicate that I consider ‘awareness of the existence of concept X’ to be of interest also to people who have not read this book, even if no great resource on the topic was immediately at hand to me.

Sample observations from the book:

“[G]eneral anaesthesia is not sleep. In physiological terms, the two states are very dissimilar. The term general anaesthesia refers to the state of unconsciousness which is deliberately produced by the action of drugs on the patient. Local anaesthesia (and its related terms) refers to the numbness produced in a part of the body by deliberate interruption of nerve function; this is typically achieved without affecting consciousness. […] The purpose of inhaling ether vapour [in the past] was so that surgery would be painless, not so that unconsciousness would necessarily be produced. However, unconsciousness and immobility soon came to be considered desirable attributes […] For almost a century, lying still was the only reliable sign of adequate anaesthesia.”

“The experience of pain triggers powerful emotional consequences, including fear, anger, and anxiety. A reasonable word for the emotional response to pain is ‘suffering’. Pain also triggers the formation of memories which remind us to avoid potentially painful experiences in the future. The intensity of pain perception and suffering also depends on the mental state of the subject at the time, and the relationship between pain, memory, and emotion is subtle and complex. […] The effects of adrenaline are responsible for the appearance of someone in pain: pale, sweating, trembling, with a rapid heart rate and breathing. Additionally, a hormonal storm is activated, readying the body to respond to damage and fight infection. This is known as the stress response. […] Those responses may be abolished by an analgesic such as morphine, which will counteract all those changes. For this reason, it is routine to use analgesic drugs in addition to anaesthetic ones. […] Typical anaesthetic agents are poor at suppressing the stress response, but analgesics like morphine are very effective. […] The hormonal stress response can be shown to be harmful, especially to those who are already ill. For example, the increase in blood coagulability which evolved to reduce blood loss as a result of injury makes the patient more likely to suffer a deep venous thrombosis in the leg veins.”

“If we monitor the EEG of someone under general anaesthesia, certain identifiable changes to the signal occur. In general, the frequency spectrum of the signal slows. […] Next, the overall power of the signal diminishes. In very deep general anaesthesia, short periods of electrical silence, known as burst suppression, can be observed. Finally, the overall randomness of the signal, its entropy, decreases. In short, the EEG of someone who is anaesthetized looks completely different from someone who is awake. […] Depth of anaesthesia is no longer considered to be a linear concept […] since it is clear that anaesthesia is not a single process. It is now believed that the two most important components of anaesthesia are unconsciousness and suppression of the stress response. These can be represented on a three-dimensional diagram called a response surface. [Here’s incidentally a recent review paper on related topics, US]”

“Before the widespread advent of anaesthesia, there were very few painkilling options available. […] Alcohol was commonly given as a means of enhancing the patient’s courage prior to surgery, but alcohol has almost no effect on pain perception. […] For many centuries, opium was the only effective pain-relieving substance known. […] For general anaesthesia to be discovered, certain prerequisites were required. On the one hand, the idea that surgery without pain was achievable had to be accepted as possible. Despite tantalizing clues from history, this idea took a long time to catch on. The few workers who pursued this idea were often openly ridiculed. On the other, an agent had to be discovered that was potent enough to render a patient suitably unconscious to tolerate surgery, but not so potent that overdose (hence accidental death) was too likely. This agent also needed to be easy to produce, tolerable for the patient, and easy enough for untrained people to administer. The herbal candidates (opium, mandrake) were too unreliable or dangerous. The next reasonable candidate, and every agent since, was provided by the proliferating science of chemistry.”

“Inducing anaesthesia by intravenous injection is substantially quicker than the inhalational method. Inhalational induction may take several minutes, while intravenous induction happens in the time it takes for the blood to travel from the needle to the brain (30 to 60 seconds). The main benefit of this is not convenience or comfort but patient safety. […] It was soon discovered that the ideal balance is to induce anaesthesia intravenously, but switch to an inhalational agent […] to keep the patient anaesthetized during the operation. The template of an intravenous induction followed by maintenance with an inhalational agent is still widely used today. […] Most of the drawbacks of volatile agents disappear when the patient is already anaesthetized [and] volatile agents have several advantages for maintenance. First, they are predictable in their effects. Second, they can be conveniently administered in known quantities. Third, the concentration delivered or exhaled by the patient can be easily and reliably measured. Finally, at steady state, the concentration of volatile agent in the patient’s expired air is a close reflection of its concentration in the patient’s brain. This gives the anaesthetist a reliable way of ensuring that enough anaesthetic is present to ensure the patient remains anaesthetized.”

“All current volatile agents are colourless liquids that evaporate into a vapour which produces general anaesthesia when inhaled. All are chemically stable, which means they are non-flammable, and not likely to break down or be metabolized to poisonous products. What distinguishes them from each other are their specific properties: potency, speed of onset, and smell. Potency of an inhalational agent is expressed as MAC, the minimum alveolar concentration required to keep 50% of adults unmoving in response to a standard surgical skin incision. MAC as a concept was introduced […] in 1963, and has proven to be a very useful way of comparing potencies of different anaesthetic agents. […] MAC correlates with observed depth of anaesthesia. It has been known for over a century that potency correlates very highly with lipid solubility; that is, the more soluble an agent is in lipid […], the more potent an anaesthetic it is. This is known as the Meyer-Overton correlation […] Speed of onset is inversely proportional to water solubility. The less soluble in water, the more rapidly an agent will take effect. […] Where immobility is produced at around 1.0 MAC, amnesia is produced at a much lower dose, typically 0.25 MAC, and unconsciousness at around 0.5 MAC. Therefore, a patient may move in response to a surgical stimulus without either being conscious of the stimulus, or remembering it afterwards.”

“The most useful way to estimate the body’s physiological reserve is to assess the patient’s tolerance for exercise. Exercise is a good model of the surgical stress response. The greater the patient’s tolerance for exercise, the better the perioperative outcome is likely to be […] For a smoker who is unable to quit, stopping for even a couple of days before the operation improves outcome. […] Dying ‘on the table’ during surgery is very unusual. Patients who die following surgery usually do so during convalescence, their weakened state making them susceptible to complications such as wound breakdown, chest infections, deep venous thrombosis, and pressure sores.”

“Mechanical ventilation is based on the principle of intermittent positive pressure ventilation (IPPV), gas being ‘blown’ into the patient’s lungs from the machine. […] Inflating a patient’s lungs is a delicate process. Healthy lung tissue is fragile, and can easily be damaged by overdistension (barotrauma). While healthy lung tissue is light and spongy, and easily inflated, diseased lung tissue may be heavy and waterlogged and difficult to inflate, and therefore may collapse, allowing blood to pass through it without exchanging any gases (this is known as shunt). Simply applying higher pressures may not be the answer: this may just overdistend adjacent areas of healthier lung. The ventilator must therefore provide a series of breaths whose volume and pressure are very closely controlled. Every aspect of a mechanical breath may now be adjusted by the anaesthetist: the volume, the pressure, the frequency, and the ratio of inspiratory time to expiratory time are only the basic factors.”

“All anaesthetic drugs are poisons. Remember that in achieving a state of anaesthesia you intend to poison someone, but not kill them – so give as little as possible. [Introductory quote to a chapter, from an Anaesthetics textbook – US] […] Other cells besides neurons use action potentials as the basis of cellular signalling. For example, the synchronized contraction of heart muscle is performed using action potentials, and action potentials are transmitted from nerves to skeletal muscle at the neuromuscular junction to initiate movement. Local anaesthetic drugs are therefore toxic to the heart and brain. In the heart, local anaesthetic drugs interfere with normal contraction, eventually stopping the heart. In the brain, toxicity causes seizures and coma. To avoid toxicity, the total dose is carefully limited”.

Links of interest:

General anaesthesia.
Muscle relaxant.
Arthur Ernest Guedel.
Guedel’s classification.
Beta rhythm.
Frances Burney.
Henry Hill Hickman.
Horace Wells.
William Thomas Green Morton.
Diethyl ether.
James Young Simpson.
Joseph Thomas Clover.
Inhalational anaesthetic.
Pulmonary aspiration.
Principles of Total Intravenous Anaesthesia (TIVA).
Patient-controlled analgesia.
Airway management.
Oropharyngeal airway.
Tracheal intubation.
Laryngeal mask airway.
Anaesthetic machine.
Soda lime.
Sodium thiopental.
Neuromuscular-blocking drug.
Gate control theory of pain.
Multimodal analgesia.
Hartmann’s solution (…what this is called seems to depend on whom you ask, but it’s called Hartmann’s solution in the book…).
Local anesthetic.
Karl Koller.
Regional anesthesia.
Spinal anaesthesia.
Epidural nerve block.
Intensive care medicine.
Bjørn Aage Ibsen.
Chronic pain.
Pain wind-up.
John Bonica.
Twilight sleep.
Veterinary anesthesia.
Pearse et al. (results of paper briefly discussed in the book).
Awareness under anaesthesia (skip the first page).
Pollard et al. (2007).
Postoperative nausea and vomiting.
Postoperative cognitive dysfunction.
Monk et al. (2008).
Malignant hyperthermia.
Suxamethonium apnoea.

February 13, 2017 Posted by | Books, Chemistry, Medicine, Papers, Pharmacology

Random stuff

i. Fire works a little differently than people imagine. A great ask-science comment. See also AugustusFink-nottle’s comment in the same thread.


iii. I was very conflicted about whether to link to this, because I haven’t actually spent any time looking at it myself, so I don’t know if it’s any good. But according to somebody (?) who linked to it on SSC, the people behind this stuff have academic backgrounds in evolutionary biology, which is something at least. Whether you think this is a good thing or not will probably depend greatly on your opinion of evolutionary biologists, but I’ve definitely learned a lot more about human mating patterns, partner interaction patterns, etc. from evolutionary biologists than I have from personal experience, so I’m probably in the ‘they-sometimes-have-interesting-ideas-about-these-topics-and-those-ideas-may-not-be-terrible’ camp. I figure these guys are much more application-oriented than some of the previous sources I’ve read on related topics, such as e.g. Kappeler et al. I add the link mostly so that if I in five years’ time have a stroke that obliterates most of my decision-making skills, causing me to decide that entering the dating market might be a good idea, I’ll have some idea where it might make sense to start.

iv. Stereotype (In)Accuracy in Perceptions of Groups and Individuals.

“Are stereotypes accurate or inaccurate? We summarize evidence that stereotype accuracy is one of the largest and most replicable findings in social psychology. We address controversies in this literature, including the long-standing and continuing but unjustified emphasis on stereotype inaccuracy, how to define and assess stereotype accuracy, and whether stereotypic (vs. individuating) information can be used rationally in person perception. We conclude with suggestions for building theory and for future directions of stereotype (in)accuracy research.”

A few quotes from the paper:

“Demographic stereotypes are accurate. Research has consistently shown moderate to high levels of correspondence accuracy for demographic (e.g., race/ethnicity, gender) stereotypes […]. Nearly all accuracy correlations for consensual stereotypes about race/ethnicity and gender exceed .50 (compared to only 5% of social psychological findings; Richard, Bond, & Stokes-Zoota, 2003). […] Rather than being based in cultural myths, the shared component of stereotypes is often highly accurate. This pattern cannot be easily explained by motivational or social-constructionist theories of stereotypes and probably reflects a “wisdom of crowds” effect […] personal stereotypes are also quite accurate, with correspondence accuracy for roughly half exceeding r = .50.”

“We found 34 published studies of racial-, ethnic-, and gender-stereotype accuracy. Although not every study examined discrepancy scores, when they did, a plurality or majority of all consensual stereotype judgments were accurate. […] In these 34 studies, when stereotypes were inaccurate, there was more evidence of underestimating than overestimating actual demographic group differences […] Research assessing the accuracy of miscellaneous other stereotypes (e.g., about occupations, college majors, sororities, etc.) has generally found accuracy levels comparable to those for demographic stereotypes”

“A common claim […] is that even though many stereotypes accurately capture group means, they are still not accurate because group means cannot describe every individual group member. […] If people were rational, they would use stereotypes to judge individual targets when they lack information about targets’ unique personal characteristics (i.e., individuating information), when the stereotype itself is highly diagnostic (i.e., highly informative regarding the judgment), and when available individuating information is ambiguous or incompletely useful. People’s judgments robustly conform to rational predictions. In the rare situations in which a stereotype is highly diagnostic, people rely on it (e.g., Crawford, Jussim, Madon, Cain, & Stevens, 2011). When highly diagnostic individuating information is available, people overwhelmingly rely on it (Kunda & Thagard, 1996; effect size averaging r = .70). Stereotype biases average no higher than r = .10 (Jussim, 2012) but reach r = .25 in the absence of individuating information (Kunda & Thagard, 1996). The more diagnostic individuating information people have, the less they stereotype (Crawford et al., 2011; Krueger & Rothbart, 1988). Thus, people do not indiscriminately apply their stereotypes to all individual members of stereotyped groups.” (Funder incidentally talked about this stuff as well in his book Personality Judgment).

One thing worth mentioning in the context of stereotypes is that if you look at stuff like crime data – which sadly not many people do – and stratify based on stuff like country of origin, the sub-group differences you observe tend to be very large. Some subgroup differences are not on the order of something like 10% – probably the sort of difference which could easily be ignored without major consequences – but rather on the order of one or two orders of magnitude. In some contexts the differences are so large as to make it downright idiotic to assume there are no differences. To give an example, in Germany the probability that a random person, about whom you know nothing, has been a suspect in a thievery case is 22% if that random person happens to be of Algerian extraction, whereas it’s only 0.27% if you’re dealing with an immigrant from China. Roughly one in 13 of those Algerians has also been involved in a case of bodily harm, which is the case for fewer than one in 400 of the Chinese immigrants.

v. Assessing Immigrant Integration in Sweden after the May 2013 Riots. Some data from the article:

“Today, about one-fifth of Sweden’s population has an immigrant background, defined as those who were either born abroad or born in Sweden to two immigrant parents. The foreign born comprised 15.4 percent of the Swedish population in 2012, up from 11.3 percent in 2000 and 9.2 percent in 1990 […] Of the estimated 331,975 asylum applicants registered in EU countries in 2012, 43,865 (or 13 percent) were in Sweden. […] More than half of these applications were from Syrians, Somalis, Afghanis, Serbians, and Eritreans. […] One town of about 80,000 people, Södertälje, since the mid-2000s has taken in more Iraqi refugees than the United States and Canada combined.”

“Coupled with […] macroeconomic changes, the largely humanitarian nature of immigrant arrivals since the 1970s has posed challenges of labor market integration for Sweden, as refugees often arrive with low levels of education and transferable skills […] high unemployment rates have disproportionately affected immigrant communities in Sweden. In 2009-10, Sweden had the highest gap between native and immigrant employment rates among OECD countries. Approximately 63 percent of immigrants were employed compared to 76 percent of the native-born population. This 13 percentage-point gap is significantly greater than the OECD average […] Explanations for the gap include less work experience and domestic formal qualifications such as language skills among immigrants […] Among recent immigrants, defined as those who have been in the country for less than five years, the employment rate differed from that of the native born by more than 27 percentage points. In 2011, the Swedish newspaper Dagens Nyheter reported that 35 percent of the unemployed registered at the Swedish Public Employment Service were foreign born, up from 22 percent in 2005.”

“As immigrant populations have grown, Sweden has experienced a persistent level of segregation — among the highest in Western Europe. In 2008, 60 percent of native Swedes lived in areas where the majority of the population was also Swedish, and 20 percent lived in areas that were virtually 100 percent Swedish. In contrast, 20 percent of Sweden’s foreign born lived in areas where more than 40 percent of the population was also foreign born.”

vi. Book recommendations. Or rather, author recommendations. A while back I asked ‘the people of SSC’ if they knew of any fiction authors I hadn’t read yet who were both funny and easy to read. I got a lot of good suggestions, and the roughly 20 Dick Francis novels I’ve read during the fall I’ve read as a consequence of that thread.

vii. On the genetic structure of Denmark.

viii. Religious Fundamentalism and Hostility against Out-groups: A Comparison of Muslims and Christians in Western Europe.

“On the basis of an original survey among native Christians and Muslims of Turkish and Moroccan origin in Germany, France, the Netherlands, Belgium, Austria and Sweden, this paper investigates four research questions comparing native Christians to Muslim immigrants: (1) the extent of religious fundamentalism; (2) its socio-economic determinants; (3) whether it can be distinguished from other indicators of religiosity; and (4) its relationship to hostility towards out-groups (homosexuals, Jews, the West, and Muslims). The results indicate that religious fundamentalist attitudes are much more widespread among Sunnite Muslims than among native Christians, even after controlling for the different demographic and socio-economic compositions of these groups. […] Fundamentalist believers […] show very high levels of out-group hostility, especially among Muslims.”

ix. Portal: Dinosaurs. It would have been so incredibly awesome to have had access to this kind of stuff back when I was a child. The portal includes links to articles with names like ‘Bone Wars‘ – what’s not to like? Again, awesome!

x. “you can’t determine if something is truly random from observations alone. You can only determine if something is not truly random.” (link) An important insight well expressed.
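A small sketch of the asymmetry in that quote: a statistical test can reject randomness, but passing one proves nothing. This is a hypothetical illustration in the spirit of the monobit (frequency) test from NIST SP 800-22; the threshold and example data are made up.

```python
import math

def monobit_test(bits, alpha=0.05):
    """Frequency (monobit) test in the spirit of NIST SP 800-22: it can
    reject the hypothesis that a bit sequence is random, but a pass
    never proves true randomness."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    # Under the null hypothesis (fair random bits), s / sqrt(n) is
    # approximately standard normal.
    p_value = math.erfc(abs(s) / math.sqrt(2 * n))
    return p_value >= alpha  # True means "not rejected", NOT "proven random"

# A constant sequence is easily rejected...
assert not monobit_test([0] * 1000)
# ...but a perfectly deterministic alternating sequence passes this
# particular test -- passing tells us nothing conclusive.
assert monobit_test([0, 1] * 500)
```

The alternating sequence is the point: any single test only probes one kind of structure, so a pass merely means ‘not yet shown to be non-random’.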

xi. Chessprogramming. If you’re interested in having a look at how chess programs work, this is a neat resource. The wiki contains lots of links with information on specific sub-topics of interest. Also chess-related: The World Championship match between Carlsen and Karjakin has started. To the extent that I’ll be following the live coverage, I’ll be following Svidler et al.’s coverage on chess24. Robin van Kampen and Eric Hansen – both 2600+ elo GMs – did quite well yesterday, in my opinion.

xii. Justified by More Than Logos Alone (Razib Khan).

“Very few are Roman Catholic because they have read Aquinas’ Five Ways. Rather, they are Roman Catholic, in order of necessity, because God aligns with their deep intuitions, basic cognitive needs in terms of cosmological coherency, and because the church serves as an avenue for socialization and repetitive ritual which binds individuals to the greater whole. People do not believe in Catholicism as often as they are born Catholics, and the Catholic religion is rather well fitted to a range of predispositions to the typical human.”

November 12, 2016 Posted by | Books, Chemistry, Chess, Data, dating, Demographics, Genetics, Geography, immigration, Paleontology, Papers, Physics, Psychology, Random stuff, Religion | Leave a comment

Photosynthesis in the Marine Environment (III)

This will be my last post about the book. After having spent a few hours on the post I started to realize the post would become very long if I were to cover all the remaining chapters, and so in the end I decided not to discuss material from chapter 12 (‘How some marine plants modify the environment for other organisms’) here, even though I actually thought some of that stuff was quite interesting. I may decide to talk briefly about some of the stuff in that chapter in another blogpost later on (but most likely I won’t). For a few general remarks about the book, see my second post about it.

Some stuff from the last half of the book below:

“The light reactions of marine plants are similar to those of terrestrial plants […], except that pigments other than chlorophylls a and b and carotenoids may be involved in the capturing of light […] and that special arrangements between the two photosystems may be different […]. Similarly, the CO2-fixation and -reduction reactions are also basically the same in terrestrial and marine plants. Perhaps one should put this the other way around: Terrestrial-plant photosynthesis is similar to marine-plant photosynthesis, which is not surprising since plants have evolved in the oceans for 3.4 billion years and their descendants on land for only 350–400 million years. […] In underwater marine environments, the accessibility to CO2 is low mainly because of the low diffusivity of solutes in liquid media, and for CO2 this is exacerbated by today’s low […] ambient CO2 concentrations. Therefore, there is a need for a CCM also in marine plants […] CCMs in cyanobacteria are highly active and accumulation factors (the internal vs. external CO2 concentrations ratio) can be of the order of 800–900 […] CCMs in eukaryotic microalgae are not as effective at raising internal CO2 concentrations as are those in cyanobacteria, but […] microalgal CCMs result in CO2 accumulation factors as high as 180 […] CCMs are present in almost all marine plants. These CCMs are based mainly on various forms of HCO3 [bicarbonate] utilisation, and may raise the intrachloroplast (or, in cyanobacteria, intracellular or intra-carboxysome) CO2 to several-fold that of seawater. Thus, Rubisco is in effect often saturated by CO2, and photorespiration is therefore often absent or limited in marine plants.”

“we view the main difference in photosynthesis between marine and terrestrial plants as the latter’s ability to acquire Ci [inorganic carbon] (in most cases HCO3) from the external medium and concentrate it intracellularly in order to optimise their photosynthetic rates or, in some cases, to be able to photosynthesise at all. […] CO2 dissolved in seawater is, under air-equilibrated conditions and given today’s seawater pH, in equilibrium with a >100 times higher concentration of HCO3, and it is therefore not surprising that most marine plants utilise the latter Ci form for their photosynthetic needs. […] any plant that utilises bulk HCO3 from seawater must convert it to CO2 somewhere along its path to Rubisco. This can be done in different ways by different plants and under different conditions”

“The conclusion that macroalgae use HCO3 stems largely from results of experiments in which concentrations of CO2 and HCO3 were altered (chiefly by altering the pH of the seawater) while measuring photosynthetic rates, or where the plants themselves withdrew these Ci forms as they photosynthesised in a closed system as manifested by a pH increase (so-called pH-drift experiments) […] The reason that the pH in the surrounding seawater increases as plants photosynthesise is first that CO2 is in equilibrium with carbonic acid (H2CO3), and so the acidity decreases (i.e. pH rises) as CO2 is used up. At higher pH values (above ∼9), when all the CO2 is used up, then a decrease in HCO3 concentrations will also result in increased pH since the alkalinity is maintained by the formation of OH […] some algae can also give off OH to the seawater medium in exchange for HCO3 uptake, bringing the pH up even further (to >10).”
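The CO2/HCO3/carbonate balance that drives these pH-drift experiments can be sketched numerically. This is a minimal equilibrium-speciation calculation of my own, not from the book; the pK values are rough surface-seawater figures and vary with temperature and salinity.

```python
def carbonate_fractions(pH, pK1=6.0, pK2=9.1):
    """Fraction of dissolved inorganic carbon (Ci) present as CO2(aq),
    HCO3- and CO3^2- at a given pH.  The dissociation constants are
    approximate surface-seawater values (assumption, not book data)."""
    h = 10 ** -pH
    K1, K2 = 10 ** -pK1, 10 ** -pK2
    denom = h * h + K1 * h + K1 * K2
    return (h * h / denom,    # CO2 (plus H2CO3)
            K1 * h / denom,   # HCO3-
            K1 * K2 / denom)  # CO3^2-

co2, hco3, co3 = carbonate_fractions(8.1)
# At typical seawater pH (~8.1), HCO3- exceeds CO2 by more than
# 100-fold -- which is why most marine plants tap the bicarbonate pool.
assert hco3 / co2 > 100
```

Raising the pH in the function call shows the drift mechanism in the quote: as photosynthesis consumes CO2 and pH climbs past ~9, the CO2 fraction collapses and HCO3 becomes the only Ci form left to draw down.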

“Carbonic anhydrase (CA) is a ubiquitous enzyme, found in all organisms investigated so far (from bacteria, through plants, to mammals such as ourselves). This may be seen as remarkable, since its only function is to catalyse the inter-conversion between CO2 and HCO3 in the reaction CO2 + H2O ↔ H2CO3; we can exchange the latter Ci form to HCO3 since this is spontaneously formed by H2CO3 and is present at a much higher equilibrium concentration than the latter. Without CA, the equilibrium between CO2 and HCO3 is a slow process […], but in the presence of CA the reaction becomes virtually instantaneous. Since CO2 and HCO3 generate different pH values of a solution, one of the roles of CA is to regulate intracellular pH […] another […] function is to convert HCO3 to CO2 somewhere en route towards the latter’s final fixation by Rubisco.”
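To put a number on ‘virtually instantaneous’: a rough comparison of the uncatalysed hydration rate with a typical carbonic anhydrase turnover. The rate constants are illustrative textbook-order values I’ve supplied, not figures from the book.

```python
# Rough rate comparison for CO2 + H2O <-> H2CO3.  Both constants are
# order-of-magnitude textbook values (assumptions, not book data):
k_uncat = 0.04  # s^-1: uncatalysed CO2 hydration, roughly once per 25 s
k_cat = 1e6     # s^-1: turnover of a fast carbonic anhydrase

speedup = k_cat / k_uncat
# A many-million-fold acceleration -- hence "virtually instantaneous".
assert speedup > 1e6
```

Carbonic anhydrase is often cited as one of the fastest enzymes known, operating near the diffusion limit, which is consistent with the quote’s claim that the uncatalysed equilibrium is the slow step.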

“with very few […] exceptions, marine macrophytes are not C4 plants. Also, while a CAM-like [Crassulacean acid metabolism-like, see my previous post about the book for details] feature of nightly uptake of Ci may complement that of the day in some brown algal kelps, this is an exception […] rather than a rule for macroalgae in general. Thus, virtually no marine macroalgae are C4 or CAM plants, and instead their CCMs are dependent on HCO3 utilization, which brings about high concentrations of CO2 in the vicinity of Rubisco. In Ulva, this type of CCM causes the intra-cellular CO2 concentration to be some 200 μM, i.e. ∼15 times higher than that in seawater.”

“deposition of calcium carbonate (CaCO3) as either calcite or aragonite in marine organisms […] can occur within the cells, but for macroalgae it usually occurs outside of the cell membranes, i.e. in the cell walls or other intercellular spaces. The calcification (i.e. CaCO3 formation) can sometimes continue in darkness, but is normally greatly stimulated in light and follows the rate of photosynthesis. During photosynthesis, the uptake of CO2 will lower the total amount of dissolved inorganic carbon (Ci) and, thus, increase the pH in the seawater surrounding the cells, thereby increasing the saturation state of CaCO3. This, in turn, favours calcification […]. Conversely, it has been suggested that calcification might enhance the photosynthetic rate by increasing the rate of conversion of HCO3 to CO2 by lowering the pH. Respiration will reduce calcification rates when released CO2 increases Ci and/but lowers intercellular pH.”
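The link between pH and calcification in that passage runs through the CaCO3 saturation state, Ω = [Ca2+][CO3^2-]/Ksp. A minimal sketch with illustrative surface-seawater values; the concentrations and solubility product below are my own rough numbers, not the book’s.

```python
def saturation_state(ca, co3, ksp):
    """CaCO3 saturation state: Omega = [Ca2+][CO3^2-] / Ksp.
    Omega > 1 favours precipitation (calcification), < 1 dissolution."""
    return ca * co3 / ksp

# Illustrative surface-seawater values in mol/kg (assumptions):
Ca = 0.0103            # calcium, nearly constant in seawater
CO3 = 2.0e-4           # carbonate ion at roughly present-day pH
Ksp_aragonite = 6.6e-7

omega = saturation_state(Ca, CO3, Ksp_aragonite)
assert omega > 1  # supersaturated: calcification is favoured
# Photosynthetic CO2 uptake raises pH, shifting HCO3- toward CO3^2-;
# a higher carbonate concentration raises Omega -- the light-stimulated
# calcification described in the quote.
assert saturation_state(Ca, 2.5e-4, Ksp_aragonite) > omega
```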

“photosynthesis is most efficient at very low irradiances and increasingly inefficient as irradiances increase. This is most easily understood if we regard ‘efficiency’ as being dependent on quantum yield: At low ambient irradiances (the light that causes photosynthesis is also called ‘actinic’ light), almost all the photon energy conveyed through the antennae will result in electron flow through (or charge separation at) the reaction centres of photosystem II […]. Another way to put this is that the chances for energy funneled through the antennae to encounter an oxidised (or ‘open’) reaction centre are very high. Consequently, almost all of the photons emitted by the modulated measuring light will be consumed in photosynthesis, and very little of that photon energy will be used for generating fluorescence […] the higher the ambient (or actinic) light, the less efficient is photosynthesis (quantum yields are lower), and the less likely it is for photon energy funnelled through the antennae (including those from the measuring light) to find an open reaction centre, and so the fluorescence generated by the latter light increases […] Alpha (α), which is a measure of the maximal photosynthetic efficiency (or quantum yield, i.e. photosynthetic output per photons received, or absorbed […] by a specific leaf/thallus area, is high in low-light plants because pigment levels (or pigment densities per surface area) are high. In other words, under low-irradiance conditions where few photons are available, the probability that they will all be absorbed is higher in plants with a high density of photosynthetic pigments (or larger ‘antennae’ […]). In yet other words, efficient photon absorption is particularly important at low irradiances, where the higher concentration of pigments potentially optimises photosynthesis in low-light plants. 
In high-irradiance environments, where photons are plentiful, their efficient absorption becomes less important, and instead it is reactions downstream of the light reactions that become important in the performance of optimal rates of photosynthesis. The CO2-fixing capability of the enzyme Rubisco, which we have indicated as a bottleneck for the entire photosynthetic apparatus at high irradiances, is indeed generally higher in high-light than in low-light plants because of its higher concentration in the former. So, at high irradiances where the photon flux is not limiting to photosynthetic rates, the activity of Rubisco within the CO2-fixation and -reduction part of photosynthesis becomes limiting, but is optimised in high-light plants by up-regulation of its formation. […] photosynthetic responses have often been explained in terms of adaptation to low light being brought about by alterations in either the number of ‘photosynthetic units’ or their size […] There are good examples of both strategies occurring in different species of algae”.
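The low-light/high-light trade-off described above is often summarised with an empirical photosynthesis–irradiance (P–I) curve. Here is a minimal sketch using the Jassby–Platt tanh formulation, which is one common choice rather than a model taken from the book; the parameter values are invented for illustration.

```python
import math

def jassby_platt(I, alpha, Pmax):
    """A widely used empirical P-I model (Jassby & Platt 1976):
    alpha is the initial low-light slope (maximal photosynthetic
    efficiency), Pmax the light-saturated rate."""
    return Pmax * math.tanh(alpha * I / Pmax)

alpha, Pmax = 0.05, 10.0  # invented illustrative parameters
# At low irradiance the response is nearly linear with slope ~ alpha:
# photon capture (pigment content) limits the rate.
assert abs(jassby_platt(1, alpha, Pmax) - alpha * 1) < 1e-3
# At high irradiance the curve saturates at Pmax: downstream capacity
# (e.g. Rubisco) limits the rate, not photon supply.
assert abs(jassby_platt(5000, alpha, Pmax) - Pmax) < 1e-6
```

In these terms, low-light plants invest in raising alpha (more pigment), while high-light plants invest in raising Pmax (more Rubisco), which is exactly the division of labour the quote describes.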

“In general, photoinhibition can be defined as the lowering of photosynthetic rates at high irradiances. This is mainly due to the rapid (sometimes within minutes) degradation of […] the D1 protein. […] there are defense mechanisms [in plants] that divert excess light energy to processes different from photosynthesis; these processes thus cause a downregulation of the entire photosynthetic process while protecting the photosynthetic machinery from excess photons that could cause damage. One such process is the xanthophyll cycle. […] It has […] been suggested that the activity of the CCM in marine plants […] can be a source of energy dissipation. If CO2 levels are raised inside the cells to improve Rubisco activity, some of that CO2 can potentially leak out of the cells, and so raising the net energy cost of CO2 accumulation and, thus, using up large amounts of energy […]. Indirect evidence for this comes from experiments in which CCM activity is down-regulated by elevated CO2.”

“Photoinhibition is often divided into dynamic and chronic types, i.e. the former is quickly remedied (e.g. during the day[…]) while the latter is more persistent (e.g. over seasons […] the mechanisms for down-regulating photosynthesis by diverting photon energies and the reducing power of electrons away from the photosynthetic systems, including the possibility of detoxifying oxygen radicals, is important in high-light plants (that experience high irradiances during midday) as well as in those plants that do see significant fluctuations in irradiance throughout the day (e.g. intertidal benthic plants). While low-light plants may lack those systems of down-regulation, one must remember that they do not live in environments of high irradiances, and so seldom or never experience high irradiances. […] If plants had a mind, one could say that it was worth it for them to invest in pigments, but unnecessary to invest in high amounts of Rubisco, when growing under low-light conditions, and necessary for high-light growing plants to invest in Rubisco, but not in pigments. Evolution has, of course, shaped these responses”.

“shallow-growing corals […] show two types of photoinhibition: a dynamic type that remedies itself at the end of each day and a more chronic type that persists over longer time periods. […] Bleaching of corals occurs when they expel their zooxanthellae to the surrounding water, after which they either die or acquire new zooxanthellae of other types (or clades) that are better adapted to the changes in the environment that caused the bleaching. […] Active Ci acquisition mechanisms, whether based on localised active H+ extrusion and acidification and enhanced CO2 supply, or on active transport of HCO3, are all energy requiring. As a consequence it is not surprising that the CCM activity is decreased at lower light levels […] a whole spectrum of light-responses can be found in seagrasses, and those are often in co-ordinance with the average daily irradiances where they grow. […] The function of chloroplast clumping in Halophila stipulacea appears to be protection of the chloroplasts from high irradiances. Thus, a few peripheral chloroplasts ‘sacrifice’ themselves for the good of many others within the clump that will be exposed to lower irradiances. […] While water is an effective filter of UV radiation (UVR)2, many marine organisms are sensitive to UVR and have devised ways to protect themselves against this harmful radiation. These ways include the production of UV-filtering compounds called mycosporine-like amino acids (MAAs), which is common also in seagrasses”.

“Many algae and seagrasses grow in the intertidal and are, accordingly, exposed to air during various parts of the day. On the one hand, this makes them amenable to using atmospheric CO2, the diffusion rate of which is some 10 000 times higher in air than in water. […] desiccation is […] the big drawback when growing in the intertidal, and excessive desiccation will lead to death. When some of the green macroalgae left the seas and formed terrestrial plants some 400 million years ago (the latter of which then ‘invaded’ Earth), there was a need for measures to evolve that on the one side ensured a water supply to the above-ground parts of the plants (i.e. roots) and, on the other, hindered the water entering the plants to evaporate (i.e. a water-impermeable cuticle). Macroalgae lack those barriers against losing intracellular water, and are thus more prone to desiccation, the rate of which depends on external factors such as heat and humidity and internal factors such as thallus thickness. […] the mechanisms of desiccation tolerance in macroalgae is not well understood on the cellular level […] there seems to be a general correlation between the sensitivity of the photosynthetic apparatus (more than the respiratory one) to desiccation and the occurrence of macroalgae along a vertical gradient in the intertidal: the less sensitive (i.e. the more tolerant), the higher up the algae can grow. This is especially true if the sensitivity to desiccation is measured as a function of the ability to regain photosynthetic rates following rehydration during re-submergence. While this correlation exists, the mechanism of protecting the photosynthetic system against desiccation is largely unknown”.

July 28, 2015 Posted by | Biology, Books, Botany, Chemistry, Evolutionary biology, Microbiology | Leave a comment

Photosynthesis in the Marine Environment (II)

Here’s my first post about the book. I gave the book four stars on goodreads – here’s a link to my short goodreads review of the book.

As pointed out in the review, ‘it’s really mostly a biochemistry text.’ At least there’s a lot of that stuff in there (‘it gets better towards the end’, would be one way to put it – the last chapters deal mostly with other topics, such as measurement and brief notes on some not-particularly-well-explored ecological dynamics of potential interest), and if you don’t want to read a book which deals in some detail with topics and concepts like alkalinity, crassulacean acid metabolism, photophosphorylation, photosynthetic reaction centres, Calvin cycle (also known straightforwardly as the ‘reductive pentose phosphate cycle’…), enzymes with names like Ribulose-1,5-bisphosphate carboxylase/oxygenase (‘RuBisCO’ among friends…) and phosphoenolpyruvate carboxylase (‘PEP-case’ among friends…), mycosporine-like amino acid, 4,4′-Diisothiocyanatostilbene-2,2′-disulfonic acid (‘DIDS’ among friends), phosphoenolpyruvate, photorespiration, carbonic anhydrase, C4 carbon fixation, cytochrome b6f complex, … – well, you should definitely not read this book. If you do feel like reading about these sorts of things, having a look at the book seems to me a better idea than reading the wiki articles.

I’m not a biochemist but I could follow a great deal of what was going on in this book, which is perhaps a good indication of how well written the book is. This stuff’s interesting and complicated, and the authors cover most of it quite well. The book has way too much stuff for it to make sense to cover all of it here, but I do want to cover some more stuff from the book, so I’ve added some quotes below.

“Water velocities are central to marine photosynthetic organisms because they affect the transport of nutrients such as Ci [inorganic carbon] towards the photosynthesising cells, as well as the removal of by-products such as excess O2 during the day. Such bulk transport is especially important in aquatic media since diffusion rates there are typically some 10 000 times lower than in air […] It has been established that increasing current velocities will increase photosynthetic rates and, thus, productivity of macrophytes as long as they do not disrupt the thalli of macroalgae or the leaves of seagrasses”.

“Photosynthesis is the process by which the energy of light is used in order to form energy-rich organic compounds from low-energy inorganic compounds. In doing so, electrons from water (H2O) reduce carbon dioxide (CO2) to carbohydrates. […] The process of photosynthesis can conveniently be separated into two parts: the ‘photo’ part in which light energy is converted into chemical energy bound in the molecule ATP and reducing power is formed as NADPH [another friend with a long name], and the ‘synthesis’ part in which that ATP and NADPH are used in order to reduce CO2 to sugars […]. The ‘photo’ part of photosynthesis is, for obvious reasons, also called its light reactions while the ‘synthesis’ part can be termed CO2-fixation and -reduction, or the Calvin cycle after one of its discoverers; this part also used to be called the ‘dark reactions’ [or light-independent reactions] of photosynthesis because it can proceed in vitro (= outside the living cell, e.g. in a test-tube) in darkness provided that ATP and NADPH are added artificially. […] ATP and NADPH are the energy source and reducing power, respectively, formed by the light reactions, that are subsequently used in order to reduce carbon dioxide (CO2) to sugars (synonymous with carbohydrates) in the Calvin cycle. Molecular oxygen (O2) is formed as a by-product of photosynthesis.”

“In photosynthetic bacteria (such as the cyanobacteria), the light reactions are located at the plasma membrane and internal membranes derived as invaginations of the plasma membrane. […] most of the CO2-fixing enzyme ribulose-bisphosphate carboxylase/oxygenase […] is here located in structures termed carboxysomes. […] In all other plants (including algae), however, the entire process of photosynthesis takes place within intracellular compartments called chloroplasts which, as the name suggests, are chlorophyll-containing plastids (plastids are those compartments in cells that are associated with photosynthesis).”

“Photosynthesis can be seen as a process in which part of the radiant energy from sunlight is ‘harvested’ by plants in order to supply chemical energy for growth. The first step in such light harvesting is the absorption of photons by photosynthetic pigments. The photosynthetic pigments are special in that they not only convert the energy of absorbed photons to heat (as do most other pigments), but largely convert photon energy into a flow of electrons; the latter is ultimately used to provide chemical energy to reduce CO2 to carbohydrates. […] Pigments are substances that can absorb different wavelengths selectively and so appear as the colour of those photons that are less well absorbed (and, therefore, are reflected, or transmitted, back to our eyes). (An object is black if all photons are absorbed, and white if none are absorbed.) In plants and animals, the pigment molecules within the cells and their organelles thus give them certain colours. The green colour of many plant parts is due to the selective absorption of chlorophylls […], while other substances give colour to, e.g. flowers or fruits. […] Chlorophyll is a major photosynthetic pigment, and chlorophyll a is present in all plants, including all algae and the cyanobacteria. […] The molecular sub-structure of the chlorophyll’s ‘head’ makes it absorb mainly blue and red light […], while green photons are hardly absorbed but, rather, reflected back to our eyes […] so that chlorophyll-containing plant parts look green. […] In addition to chlorophyll a, all plants contain carotenoids […] All these accessory pigments act to fill in the ‘green window’ generated by the chlorophylls’ non-absorbance in that band […] and, thus, broaden the spectrum of light that can be utilized […] beyond that absorbed by chlorophyll.”

“Photosynthesis is principally a redox process in which carbon dioxide (CO2) is reduced to carbohydrates (or, in a shorter word, sugars) by electrons derived from water. […] since water has an energy level (or redox potential) that is much lower than that of sugar, or, more precisely, than that of the compound that finally reduces CO2 to sugars (i.e. NADPH), it follows that energy must be expended in the process; this energy stems from the photons of light. […] Redox reactions are those reactions in which one compound, B, becomes reduced by receiving electrons from another compound, A, the latter then becomes oxidised by donating the electrons to B. The reduction of B can only occur if the electron-donating compound A has a higher energy level, or […] has a redox potential that is higher, or more negative in terms of electron volts, than that of compound B. The redox potential, or reduction potential, […] can thus be seen as a measure of the ease by which a compound can become reduced […] the greater the difference in redox potential between compounds B and A, the greater the tendency that B will be reduced by A. In photosynthesis, the redox potential of the compound that finally reduces CO2, i.e. NADPH, is more negative than that from which the electrons for this reduction stems, i.e. H2O, and the entire process can therefore not occur spontaneously. Instead, light energy is used in order to boost electrons from H2O through intermediary compounds to such high redox potentials that they can, eventually, be used for CO2 reduction. In essence, then, the light reactions of photosynthesis describe how photon energy is used to boost electrons from H2O to an energy level (or redox potential) high (or negative) enough to reduce CO2 to sugars.”
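The energetics in that passage can be made concrete with the standard relation ΔG = −nFΔE. A sketch using approximate textbook midpoint potentials at pH 7; these numbers are my own assumptions, not values quoted in the book.

```python
F = 96485.0  # Faraday constant, C per mol of electrons

def delta_g_kj(n, E_donor, E_acceptor):
    """Gibbs free energy (kJ per mol of reaction) for moving n electrons
    from donor to acceptor: dG = -n * F * dE, dE = E_acceptor - E_donor."""
    return -n * F * (E_acceptor - E_donor) / 1000.0

# Approximate standard midpoint potentials at pH 7 (assumed values):
E_O2_H2O = +0.82  # O2/H2O couple -- the ultimate electron donor side
E_NADP = -0.32    # NADP+/NADPH couple -- the final acceptor

dG = delta_g_kj(2, E_O2_H2O, E_NADP)
# dG is large and positive (~ +220 kJ per 2 mol electrons): the
# transfer cannot run spontaneously, so photon energy must drive it.
assert dG > 0
```

The sign is the whole story: because NADP+/NADPH sits at a much more negative redox potential than O2/H2O, electrons must be ‘boosted’ uphill by light, exactly as the quote describes.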

“Fluorescence in general is the generation of light (emission of photons) from the energy released during de-excitation of matter previously excited by electromagnetic energy. In photosynthesis, fluorescence occurs as electrons of chlorophyll undergo de-excitation, i.e. return to the original orbital from which they were knocked out by photons. […] there is an inverse (or negative) correlation between fluorescence yield (i.e. the amount of fluorescence generated per photons absorbed by chlorophyll) and photosynthetic yield (i.e. the amount of photosynthesis performed per photons similarly absorbed).”

“In some cases, more photon energy is received by a plant than can be used for photosynthesis, and this can lead to photo-inhibition or photo-damage […]. Therefore, many plants exposed to high irradiances possess ways of dissipating such excess light energy, the most well known of which is the xanthophyll cycle. In principle, energy is shuttled between various carotenoids collectively called xanthophylls and is, in the process, dissipated as heat.”

“In order to ‘fix’ CO2 (= incorporate it into organic matter within the cell) and reduce it to sugars, the NADPH and ATP formed in the light reactions are used in a series of chemical reactions that take place in the stroma of the chloroplasts (or, in prokaryotic autotrophs such as cyanobacteria, the cytoplasm of the cells); each reaction is catalysed by its specific enzyme, and the bottleneck for the production of carbohydrates is often considered to be the enzyme involved in its first step, i.e. the fixation of CO2 [this enzyme is RubisCO] […] These CO2-fixation and -reduction reactions are known as the Calvin cycle […] or the C3 cycle […] The latter name stems from the fact that the first stable product of CO2 fixation in the cycle is a 3-carbon compound called phosphoglyceric acid (PGA): Carbon dioxide in the stroma is fixed onto a 5-carbon sugar called ribulose-bisphosphate (RuBP) in order to form 2 molecules of PGA […] It should be noted that this reaction does not produce a reduced, energy-rich, carbon compound, but is only the first, ‘CO2– fixing’, step of the Calvin cycle. In subsequent steps, PGA is energized by the ATP formed through photophosphorylation and is reduced by NADPH […] to form a 3-carbon phosphorylated sugar […] here denoted simply as triose phosphate (TP); these reactions can be called the CO2-reduction step of the Calvin cycle […] 1/6 of the TPs formed leave the cycle while 5/6 are needed in order to re-form RuBP molecules in what we can call the regeneration part of the cycle […]; it is this recycling of most of the final product of the Calvin cycle (i.e. TP) to re-form RuBP that lends it to be called a biochemical ‘cycle’ rather than a pathway.”
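The 1/6 vs. 5/6 split of triose phosphate described in the quote can be checked with a bit of carbon bookkeeping. The sketch below is just arithmetic over the numbers given above (5-carbon RuBP, 3-carbon PGA and TP, two PGA per fixed CO2), not a biochemical simulation; the variable names are my own labels:

```python
# Carbon bookkeeping for one "turn" of the Calvin cycle with three CO2 fixed.
CO2_FIXED = 3      # three CO2 enter the cycle
RUBP_CARBONS = 5   # RuBP is a 5-carbon sugar
TP_CARBONS = 3     # triose phosphate (TP) is a 3-carbon sugar

pga_formed = 2 * CO2_FIXED   # each CO2 + RuBP yields 2 PGA -> 6 PGA
tp_formed = pga_formed       # each PGA is reduced (ATP + NADPH) to one TP -> 6 TP

tp_exported = tp_formed // 6            # 1/6 of the TPs leave the cycle
tp_recycled = tp_formed - tp_exported   # 5/6 regenerate RuBP

# carbon check: the 5 recycled TPs (15 C) rebuild the 3 RuBP (15 C) consumed
assert tp_recycled * TP_CARBONS == CO2_FIXED * RUBP_CARBONS
# net gain: the single exported TP carries exactly the 3 newly fixed carbons
assert tp_exported * TP_CARBONS == CO2_FIXED
```

The two assertions are why the recycling works out so neatly: without returning five of every six TPs to the regeneration part of the cycle, the RuBP pool would be exhausted after one round of fixation.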

“Rubisco […] not only functions as a carboxylase, but […] also acts as an oxygenase […] When Rubisco reacts with oxygen instead of CO2, only 1 molecule of PGA is formed together with 1 molecule of the 2-carbon compound phosphoglycolate […] Not only is there no gain in organic carbon by this reaction, but CO2 is actually lost in the further metabolism of phosphoglycolate, which comprises a series of reactions termed photorespiration […] While photorespiration is a complex process […] it is also an apparently wasteful one […] and it is not known why this process has evolved in plants altogether. […] Photorespiration can reduce the net photosynthetic production by up to 25%.”

“Because of Rubisco’s low affinity to CO2 as compared with the low atmospheric, and even lower intracellular, CO2 concentration […], systems have evolved in some plants by which CO2 can be concentrated at the vicinity of this enzyme; these systems are accordingly termed CO2 concentrating mechanisms (CCM). For terrestrial plants, this need for concentrating CO2 is exacerbated in those that grow in hot and/or arid areas where water needs to be saved by partly or fully closing stomata during the day, thus restricting also the influx of CO2 from an already CO2-limiting atmosphere. Two such CCMs exist in terrestrial plants: the C4 cycle and the Crassulacean acid metabolism (CAM) pathway. […] The C4 cycle is called so because the first stable product of CO2-fixation is not the 3-carbon compound PGA (as in the Calvin cycle) but, rather, malic acid (often referred to by its anion malate) or aspartic acid (or its anion aspartate), both of which are 4-carbon compounds. […] C4 [terrestrial] plants are […] more common in areas of high temperature, especially when accompanied with scarce rains, than in areas with higher rainfall […] While atmospheric CO2 is fixed […] via the C4 cycle, it should be noted that this biochemical cycle cannot reduce CO2 to high energy containing sugars […] since the Calvin cycle is the only biochemical system that can reduce CO2 to energy-rich carbohydrates in plants, it follows that the CO2 initially fixed by the C4 cycle […] is finally reduced via the Calvin cycle also in C4 plants. In summary, the C4 cycle can be viewed as being an additional CO2 sequesterer, or a biochemical CO2 ‘pump’, that concentrates CO2 for the rather inefficient enzyme Rubisco in C4 plants that grow under conditions where the CO2 supply is extremely limited because partly closed stomata restrict its influx into the photosynthesising cells.”

“Crassulacean acid metabolism (CAM) is similar to the C4 cycle in that atmospheric CO2 […] is initially fixed via PEP-case into the 4-carbon compound malate. However, this fixation is carried out during the night […] The ecological advantage behind CAM metabolism is that a CAM plant can grow, or at least survive, under prolonged (sometimes months) conditions of severe water stress. […] CAM plants are typical of the desert flora, and include most cacti. […] The principal difference between C4 and CAM metabolism is that in C4 plants the initial fixation of atmospheric CO2 and its final fixation and reduction in the Calvin cycle is separated in space (between mesophyll and bundle-sheath cells) while in CAM plants the two processes are separated in time (between the initial fixation of CO2 during the night and its re-fixation and reduction during the day).”

July 20, 2015 Posted by | Biology, Botany, Chemistry, Ecology, Microbiology | Leave a comment

Calculated Risks: Understanding the Toxicity of Chemicals in our Environment (ii)

I finished the book – I didn’t expect to do that quite so soon, which is why I ended up posting two posts about the book during the same day. As Miao pointed out in her comment, a newer version of the book exists, so if my posts have made you curious, you should probably give that one a shot instead; this is a good book, but sometimes you can tell it wasn’t exactly written yesterday.

This book may tell you a lot of stuff you already know, especially if you have a little knowledge about biological systems, the human body, or perhaps basic statistics. I considered big parts of some chapters to be review stuff I already knew; I’d have preferred a slightly more detailed and in-depth treatment of the material. I didn’t need to be reminded how the kidneys work or that there’s such a thing as a blood-brain barrier, the stats stuff was of course old hat to me, I’m familiar with the linear no-threshold model, and there’s a lot of stuff about carcinogens in Mukherjee not covered in this book…

So it may tell you a lot of stuff you already know. But it will also tell you a lot of new stuff. I learned quite a bit and I liked reading the book, even the parts I probably didn’t really ‘need’ to read. I gave it 3 stars on account of the ‘written two decades ago’-thing and the ‘I don’t think I’m part of the core target group’-thing – but if current me had read it in the year 2000 I’d probably have given it four stars.

I don’t really know if the newer edition of the book is better than the one I read, and it’s dangerous to make assumptions about these things, but if he hasn’t updated it at all it’s still a good book, and if he has updated the material the new version is in all likelihood even better than the one I read. If you’re interested in this stuff, I don’t think this is a bad place to start.

I found out while writing the first post about the book that quoting from the book is quite bothersome. I’m lazy, so I decided to limit coverage here to some links which I’ve posted below – the stuff I link to is either covered or related to stuff that is covered in the book. It was a lot easier for me to post these links than to quote from the book in part because I visited many of these articles along the way while reading the book:

No-observed-adverse-effect level.
Polycyclic aromatic hydrocarbon.
Percivall Pott.
Dose–response relationship.
Acceptable daily intake.
Linear Low dose Extrapolation for Cancer Risk Assessments: Sources of Uncertainty and How They Affect the Precision of Risk Estimates (short paper)
Delaney clause.

Do note that these links taken together can be somewhat misleading – as you could hopefully tell from the quotes in the first post, the book is quite systematic and the main focus is on basic/key concepts. To the extent that specific poisons like paraquat and DDT are mentioned in the book they’re used to ‘zoom in’ on a certain aspect in order to illustrate a specific feature, or perhaps in order to point out an important distinction – stuff like that.

July 30, 2013 Posted by | Biology, Books, Cancer/oncology, Chemistry, Medicine | Leave a comment

Calculated Risks: Understanding the Toxicity of Chemicals in our Environment

So what is this book about? The introductory remarks below from the preface provide part of the answer:

“A word about organization of topics […] First, it is important to understand what we mean when we talk about ‘chemicals’. Many people think the term refers only to generally noxious materials that are manufactured in industrial swamps, frequently for no good purpose. The existence of such an image impedes understanding of toxicology and needs to be corrected. Moreover, because the molecular architecture of chemicals is a determinant of their behaviour in biological systems, it is important to create a little understanding of the principles of chemical structure and behavior. For these reasons, we begin with a brief review of some fundamentals of chemistry.

The two ultimate sources of chemicals – nature and industrial and laboratory synthesis – are then briefly described. This review sets the stage for a discussion of how human beings become exposed to chemicals. The conditions of human exposure are a critical determinant of whether and how a chemical will produce injury or disease, so the discussion of chemical sources and exposures naturally leads to the major subject of the book – the science of toxicology.

The major subjects of the last third of this volume are risk assessment […] and risk control, or management, and the associated topic of public perceptions of risk in relation to the judgments of experts.”

What can I say? – it made sense to read a toxicology textbook in between the Christie novels… The book was written in the 90s, but there are a lot of key principles and -concepts covered here that probably don’t have a much better description now than they did when the book was written. I wanted the overview and the book has delivered so far – I like it. Here’s some more stuff from the first half of the book:

“the greatest sources of chemicals to which we are regularly and directly exposed are the natural components of the plants and animals we consume as foods. In terms of both numbers and structural variations, no other chemical source matches food. We have no firm estimate of the number of such chemicals we are exposed to through food, but it is surely immense. A cup of coffee contains, for example, nearly 200 different organic chemicals – natural components of the coffee bean that are extracted into water. Some impart color, some taste, some aroma, others none of the above. The simple potato has about 100 different natural components …” […]

“These facts bring out one of the most important concepts in toxicology: all chemicals are toxic under some conditions of exposure. What the toxicologist would like to know are those conditions. Once they are known, measures can be taken to limit human exposures so that toxicity can be avoided.” […]

“The route of exposure refers to the way the chemical moves from the exposure medium into the body. For chemicals in the environment the three major routes are ingestion (the oral route), inhalation, and skin contact (or dermal contact). […]

The typical dose units are […] milligram of chemical per kilogram of body weight per day (mg/kg b.w./day). […] For the same intake […] the lighter person receives the greater dose. […] Duration of exposure as well as the dose received […] needs to be included in the equation […] dose and its duration are the critical determinants of the potential for toxicity. Exposure creates the dose.” […]
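The dose unit described in the quote is a simple division, which makes the point about body weight easy to see. A minimal sketch, with made-up intake and weight figures chosen purely for illustration:

```python
def dose_mg_per_kg_per_day(intake_mg_per_day: float, body_weight_kg: float) -> float:
    """Convert a daily chemical intake into the standard toxicological
    dose unit: mg of chemical per kg body weight per day (mg/kg b.w./day)."""
    return intake_mg_per_day / body_weight_kg

# Same 10 mg/day intake for two hypothetical people of different weights:
heavier_dose = dose_mg_per_kg_per_day(10.0, 100.0)  # 0.1 mg/kg b.w./day
lighter_dose = dose_mg_per_kg_per_day(10.0, 50.0)   # 0.2 mg/kg b.w./day

# the lighter person receives the greater dose, as the quote notes
assert lighter_dose > heavier_dose
```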

“Analytical chemistry has undergone extraordinary advances over the past two to three decades. Chemists are able to measure many chemicals at the part-per-billion level which in the 1960s could be measured only at the part-per-million level […] or even the part-per-thousand level. […] These advances in detection capabilities have revealed that industrial chemicals are more widespread in the environment than might have been guessed 10 or 20 years ago, simply because chemists are now capable of measuring concentrations that could not be detected with analytical technology available in the 1960s. This trend will no doubt continue …” […]
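To get a feel for the detection limits mentioned in the quote, it helps to write the units out as fractions. The 5 ppb concentration below is a made-up example, not a figure from the book:

```python
# ppm and ppb expressed as dimensionless mass fractions
PPM = 1e-6  # one part per million
PPB = 1e-9  # one part per billion

# a chemical present at 5 ppb in water: grams of chemical per kg of water
grams_per_kg = 5 * PPB * 1000.0  # 1 kg = 1000 g -> 5e-6 g, i.e. 5 micrograms
assert abs(grams_per_kg - 5e-6) < 1e-12

# the same concentration sits a factor of 1000 below a 5 ppm detection limit,
# which is the jump in sensitivity the quote describes
assert abs((5 * PPM) / (5 * PPB) - 1000.0) < 1e-9
```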

“The nature of toxic damage produced by a chemical, the part of the body where that damage occurs, the severity of the damage, and the likelihood that the damage can be reversed, all depend upon the processes of absorption, distribution, metabolism and excretion, ADME for short. The combined effects of these processes determine the concentration a particular chemical […] will achieve in various tissues and cells of the body and the duration of time it spends there. Chemical form, concentration, and duration in turn determine the nature and extent of injury produced. Injury produced after absorption is referred to as systemic toxicity, to contrast it with local toxicity.” […]

“Care must be taken to distinguish subchronic or chronic exposures from subchronic or chronic effects. By the latter, toxicologists generally refer to some adverse effect that does not appear immediately after exposure begins but only after a delay; sometimes the effect may not be observed until near the end of a lifetime, even when exposure begins early in life (cancers, for example, are generally in this category of chronic effects). But the production of chronic effects may or may not require chronic exposure. For some chemicals acute or subchronic exposures may be all that is needed to produce a chronic toxicity; the effect is a delayed one. For others chronic exposure may be required to create chronic toxicity.” […]

“In the final analysis we are interested not in toxicity, but rather in risk. By risk is meant the likelihood, or probability, that the toxic properties of a chemical will be produced in populations of individuals under their actual conditions of exposure. To evaluate the risk of toxicity occurring for a specific chemical, at least three types of information are required:
1) The types of toxicity the chemical can produce (its targets and the forms of injury they incur).
2) The conditions of exposure (dose and duration) under which the chemical’s toxicity can be produced.
3) The conditions (dose, timing and duration) under which the population of people whose risk is being evaluated is or could be exposed to the chemical.

It is not sufficient to understand any one or two of these; no useful statement about risk can be made unless all three are understood.” […]

“It is rare that any single epidemiology study provides sufficiently definitive information to allow scientists to conclude that a cause-effect relationship exists between a chemical exposure and a human disease. Instead epidemiologists search for certain patterns. Does there seem to be a consistent association between the occurrence of excess rates of a certain condition (lung cancer, for example) and certain exposures (e.g. to cigarette smoke) in several epidemiology studies involving different populations of people? If a consistent pattern of associations is seen, and other criteria are satisfied, causality can be established with reasonable certainty. […] Epidemiology studies are, of course, only useful after exposure has occurred. For certain classes of toxic agents, carcinogens being the most notable, exposure may have to take place for several decades before the effect, if it exists, is observable […] The obvious point is that epidemiology studies cannot be used to identify toxic properties prior to the introduction of the chemical into commerce. This is one reason toxicologists turn to the laboratory. […] The ‘nuts and bolts’ of animal testing, and the problems of test interpretation and extrapolation of results to human beings, comprise one of the central areas of controversy in the field of chemical risk assessment.” […]

“Toxicologists classify hepatic toxicants according to the type of injuries they produce. Some cause accumulation of excessive and potentially dangerous amounts of lipids (fats). Others can kill liver cells; they cause cell necrosis. Cholestasis, which is decreased secretion of bile leading to jaundice […] can be produced as side effects of several therapeutic agents. Cirrhosis, a chronic change characterized by the deposition of connective tissue fibers, can be brought about after chronic exposure to several substances. […] ‘hepatotoxicity’ is not a very helpful term, because it fails to convey the fact that several quite distinct types of hepatic injury can be induced by chemical exposures and that, for each, different underlying mechanisms are at work. In fact, this situation exists for all targets, not only the liver.”

July 30, 2013 Posted by | Biology, Books, Chemistry, Epidemiology, Medicine | 2 Comments

Wikipedia articles of interest

i. Planetary habitability (featured).

“Planetary habitability is the measure of a planet’s or a natural satellite’s potential to develop and sustain life. Life may develop directly on a planet or satellite or be transferred to it from another body, a theoretical process known as panspermia. As the existence of life beyond Earth is currently uncertain, planetary habitability is largely an extrapolation of conditions on Earth and the characteristics of the Sun and Solar System which appear favourable to life’s flourishing—in particular those factors that have sustained complex, multicellular organisms and not just simpler, unicellular creatures. Research and theory in this regard is a component of planetary science and the emerging discipline of astrobiology.

An absolute requirement for life is an energy source, and the notion of planetary habitability implies that many other geophysical, geochemical, and astrophysical criteria must be met before an astronomical body can support life. In its astrobiology roadmap, NASA has defined the principal habitability criteria as “extended regions of liquid water, conditions favourable for the assembly of complex organic molecules, and energy sources to sustain metabolism.”[1]

In determining the habitability potential of a body, studies focus on its bulk composition, orbital properties, atmosphere, and potential chemical interactions. Stellar characteristics of importance include mass and luminosity, stable variability, and high metallicity. Rocky, terrestrial-type planets and moons with the potential for Earth-like chemistry are a primary focus of astrobiological research, although more speculative habitability theories occasionally examine alternative biochemistries and other types of astronomical bodies.”

The article has a lot of stuff – if you’re the least bit interested (and if you are human and alive, as well as a complex enough lifeform to even conceptualize questions like these, why wouldn’t you be?) you should go have a look. When analyzing which factors might impact habitability of a system, some might say that we humans are rather constrained by our somewhat limited sample size of planetary systems known to support complex multicellular life, but this doesn’t mean we can’t say anything about this stuff, even though extreme caution is naturally warranted when drawing conclusions here. Incidentally, although the Earth does support complex life now, we would probably be well-advised to remember that this was not always the case, nor will it continue to be the case in the future – here’s one guess at what the Earth will look like in 7 billion years:


The image is from this article. Of course living organisms on Earth will be screwed long before this point is reached.

ii. Parity of zero (‘good article’).

Zero is an even number. Apparently a rather long wikipedia article can be written about this fact…

iii. 1907 Tiflis bank robbery (featured).

Not just any bank robbery – guns as well as bombs/grenades were used during the robbery, around 40 people died(!), and the list of names of the people behind the robbery includes the names Stalin and Lenin.

iv. Möbius Syndrome – what would your life be like if you were unable to make facial expressions and unable to move your eyes from side to side? If you want to know, you should ask these people. Or you could of course just start out by reading the article…

v. Book of the Dead (‘good article’).


“The Book of the Dead is an ancient Egyptian funerary text, used from the beginning of the New Kingdom (around 1550 BCE) to around 50 BCE.[1] The original Egyptian name for the text, transliterated rw nw prt m hrw[2] is translated as “Book of Coming Forth by Day”.[3] Another translation would be “Book of emerging forth into the Light”. The text consists of a number of magic spells intended to assist a dead person’s journey through the Duat, or underworld, and into the afterlife.

The Book of the Dead was part of a tradition of funerary texts which includes the earlier Pyramid Texts and Coffin Texts, which were painted onto objects, not papyrus. Some of the spells included were drawn from these older works and date to the 3rd millennium BCE. Other spells were composed later in Egyptian history, dating to the Third Intermediate Period (11th to 7th centuries BCE). A number of the spells which made up the Book continued to be inscribed on tomb walls and sarcophagi, as had always been the spells from which they originated. The Book of the Dead was placed in the coffin or burial chamber of the deceased.

There was no single or canonical Book of the Dead. The surviving papyri contain a varying selection of religious and magical texts and vary considerably in their illustration. Some people seem to have commissioned their own copies of the Book of the Dead, perhaps choosing the spells they thought most vital in their own progression to the afterlife. […]

A Book of the Dead papyrus was produced to order by scribes. They were commissioned by people in preparation for their own funeral, or by the relatives of someone recently deceased. They were expensive items; one source gives the price of a Book of the Dead scroll as one deben of silver,[50] perhaps half the annual pay of a labourer.[51] Papyrus itself was evidently costly, as there are many instances of its re-use in everyday documents, creating palimpsests. In one case, a Book of the Dead was written on second-hand papyrus.[52]

Most owners of the Book of the Dead were evidently part of the social elite; they were initially reserved for the royal family, but later papyri are found in the tombs of scribes, priests and officials. Most owners were men, and generally the vignettes included the owner’s wife as well. Towards the beginning of the history of the Book of the Dead, there are roughly 10 copies belonging to men for every one for a woman. However, during the Third Intermediate Period, 2/3 were for women; and women owned roughly a third of the hieratic papyri from the Late and Ptolemaic Periods.[53]

The dimensions of a Book of the Dead could vary widely; the longest is 40 m long while some are as short as 1 m. They are composed of sheets of papyrus joined together, the individual papyri varying in width from 15 cm to 45 cm.”

vi. Volcanic ash (‘good article’).


“Volcanic ash consists of fragments of pulverized rock, minerals and volcanic glass, created during volcanic eruptions, less than 2 mm (0.079 in) in diameter.[1] The term volcanic ash is also often loosely used to refer to all explosive eruption products (correctly referred to as tephra), including particles larger than 2 mm. Volcanic ash is formed during explosive volcanic eruptions when dissolved gases in magma expand and escape violently into the atmosphere. The force of the escaping gas shatters the magma and propels it into the atmosphere where it solidifies into fragments of volcanic rock and glass. Ash is also produced when magma comes into contact with water during phreatomagmatic eruptions, causing the water to explosively flash to steam leading to shattering of magma. Once in the air, ash is transported by wind up to thousands of kilometers away. […]

Physical and chemical characteristics of volcanic ash are primarily controlled by the style of volcanic eruption.[8] Volcanoes display a range of eruption styles which are controlled by magma chemistry, crystal content, temperature and dissolved gases of the erupting magma and can be classified using the Volcanic Explosivity Index (VEI). Effusive eruptions (VEI 1) of basaltic composition produce <10⁵ m³ of ejecta, whereas extremely explosive eruptions (VEI 5+) of rhyolitic and dacitic composition can inject large quantities (>10⁹ m³) of ejecta into the atmosphere. Another parameter controlling the amount of ash produced is the duration of the eruption: the longer the eruption is sustained, the more ash will be produced. […]

The types of minerals present in volcanic ash are dependent on the chemistry of the magma from which it was erupted. Considering that the most abundant elements found in magma are silica (SiO2) and oxygen, the various types of magma (and therefore ash) produced during volcanic eruptions are most commonly explained in terms of their silica content. Low energy eruptions of basalt produce a characteristically dark coloured ash containing ~45–55% silica that is generally rich in iron (Fe) and magnesium (Mg). The most explosive rhyolite eruptions produce a felsic ash that is high in silica (>69%) while other types of ash with an intermediate composition (e.g. andesite or dacite) have a silica content between 55 and 69%.
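The silica ranges quoted above can be encoded as a small classifier. The function below is my own sketch of those ranges; the boundary handling (which side the 55% and 69% cut-offs fall on) is an assumption, since the quote only gives approximate bands:

```python
def classify_ash_by_silica(silica_percent: float) -> str:
    """Rough ash classification following the silica bands in the quote:
    basaltic ~45-55%, intermediate 55-69%, felsic/rhyolitic >69%."""
    if silica_percent < 45:
        return "below the quoted basaltic range"
    if silica_percent <= 55:
        return "basaltic (dark, Fe/Mg-rich ash)"
    if silica_percent <= 69:
        return "intermediate (andesite or dacite)"
    return "felsic (rhyolitic, high-silica ash)"

assert classify_ash_by_silica(50).startswith("basaltic")
assert classify_ash_by_silica(62).startswith("intermediate")
assert classify_ash_by_silica(72).startswith("felsic")
```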

The principal gases released during volcanic activity are water, carbon dioxide, sulfur dioxide, hydrogen, hydrogen sulfide, carbon monoxide and hydrogen chloride.[9] These sulfur and halogen gases and metals are removed from the atmosphere by processes of chemical reaction, dry and wet deposition, and by adsorption onto the surface of volcanic ash. […]

Ash particles are incorporated into eruption columns as they are ejected from the vent at high velocity. The initial momentum from the eruption propels the column upwards. As air is drawn into the column, the bulk density decreases and it starts to rise buoyantly into the atmosphere.[6] At a point where the bulk density of the column is the same as the surrounding atmosphere, the column will cease rising and start moving laterally. Lateral dispersion is controlled by prevailing winds and the ash may be deposited hundreds to thousands of kilometres from the volcano, depending on eruption column height, particle size of the ash and climatic conditions (especially wind direction and strength and humidity).[25]

Ash fallout occurs immediately after the eruption and is controlled by particle density. Initially, coarse particles fall out close to source. This is followed by fallout of accretionary lapilli, which is the result of particle agglomeration within the column.[26] Ash fallout is less concentrated during the final stages as the column moves downwind. This results in an ash fall deposit which generally decreases in thickness and grain size exponentially with increasing distance from the volcano.[27] Fine ash particles may remain in the atmosphere for days to weeks and be dispersed by high-altitude winds.”
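The "decreases in thickness and grain size exponentially with increasing distance" part of the quote is easy to sketch as a formula. The initial thickness and decay constant below are invented illustrative parameters, not values from the article:

```python
import math

def deposit_thickness_cm(distance_km: float, t0_cm: float = 100.0,
                         k_per_km: float = 0.05) -> float:
    """Idealized thickness of an ash-fall deposit at a given downwind
    distance, assuming simple exponential thinning away from the vent."""
    return t0_cm * math.exp(-k_per_km * distance_km)

# under these made-up parameters the deposit halves every ln(2)/k ≈ 14 km
assert deposit_thickness_cm(0.0) == 100.0
assert deposit_thickness_cm(50.0) < deposit_thickness_cm(10.0)
```

Real isopach (thickness-contour) maps are messier than this, of course, since wind direction and particle agglomeration distort the simple decay, but the exponential form captures the first-order pattern the article describes.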

If you’re interested in this kind of stuff (the first parts of the article), Press’ and Siever’s textbook Earth, which I read last summer (here’s one relevant post), is pretty good. There’s a lot of material in the article about how volcanic ash impacts humans and human infrastructure, though I decided against including any of that here – if you’re curious, go have a look.

vii. Kingdom of Mysore (featured).

“The Kingdom of Mysore was a kingdom of southern India, traditionally believed to have been founded in 1399 in the vicinity of the modern city of Mysore. The kingdom, which was ruled by the Wodeyar family, initially served as a vassal state of the Vijayanagara Empire. With the decline of the Vijayanagara Empire (c.1565), the kingdom became independent. The 17th century saw a steady expansion of its territory and, under Narasaraja Wodeyar I and Chikka Devaraja Wodeyar, the kingdom annexed large expanses of what is now southern Karnataka and parts of Tamil Nadu to become a powerful state in the southern Deccan.

The kingdom reached the height of its military power and dominion in the latter half of the 18th century under the de facto ruler Haider Ali and his son Tipu Sultan. During this time, it came into conflict with the Marathas, the British and the Nizam of Hyderabad, which culminated in the four Anglo-Mysore wars. Success in the first two Anglo-Mysore wars was followed by defeat in the third and fourth. Following Tipu’s death in the fourth war of 1799, large parts of his kingdom were annexed by the British, which signalled the end of a period of Mysorean hegemony over southern Deccan. The British restored the Wodeyars to their throne by way of a subsidiary alliance and the diminished Mysore was transformed into a princely state. The Wodeyars continued to rule the state until Indian independence in 1947, when Mysore acceded to the Union of India. […]

The vast majority of the people lived in villages and agriculture was their main occupation. The economy of the kingdom was based on agriculture. Grains, pulses, vegetables and flowers were cultivated. Commercial crops included sugarcane and cotton. The agrarian population consisted of landlords (gavunda, zamindar, heggadde) who tilled the land by employing a number of landless labourers, usually paying them in grain. Minor cultivators were also willing to hire themselves out as labourers if the need arose.[73] It was due to the availability of these landless labourers that kings and landlords were able to execute major projects such as palaces, temples, mosques, anicuts (dams) and tanks.[74] Because land was abundant and the population relatively sparse, no rent was charged on land ownership. Instead, landowners paid tax for cultivation, which amounted to up to one-half of all harvested produce.[74]

Tipu Sultan is credited to have founded state trading depots in various locations of his kingdom. In addition, he founded depots in foreign locations such as Karachi, Jeddah and Muscat, where Mysore products were sold.[75] During Tipu’s rule French technology was used for the first time in carpentry and smithy, Chinese technology was used for sugar production, and technology from Bengal helped improve the sericulture industry.[76] State factories were established in Kanakapura and Taramandelpeth for producing cannons and gunpowder respectively. The state held the monopoly in the production of essentials such as sugar, salt, iron, pepper, cardamom, betel nut, tobacco and sandalwood, as well as the extraction of incense oil from sandalwood and the mining of silver, gold and precious stones. Sandalwood was exported to China and the Persian Gulf countries and sericulture was developed in twenty-one centres within the kingdom.[77]

This system changed under the British, when tax payments were made in cash, and were used for the maintenance of the army, police and other civil and public establishments. A portion of the tax was transferred to England as the “Indian tribute”.[78] Unhappy with the loss of their traditional revenue system and the problems they faced, peasants rose in rebellion in many parts of south India.[79] […]

Prior to the 18th century, the society of the kingdom followed age-old and deeply established norms of social interaction between people. Accounts by contemporaneous travellers indicate the widespread practice of the Hindu caste system and of animal sacrifices during the nine day celebrations (called Mahanavami).[101] Later, fundamental changes occurred due to the struggle between native and foreign powers. Though wars between the Hindu kingdoms and the Sultanates continued, the battles between native rulers (including Muslims) and the newly arrived British took centre stage.[61] The spread of English education, the introduction of the printing press and the criticism of the prevailing social system by Christian missionaries helped make the society more open and flexible. The rise of modern nationalism throughout India also had its impact on Mysore.[102]

With the advent of British power, English education gained prominence in addition to traditional education in local languages. These changes were orchestrated by Lord Elphinstone, the governor of the Madras Presidency. […]

Social reforms aimed at removing practices such as sati and social discrimination based upon untouchability, as well as demands for the emancipation of the lower classes, swept across India and influenced Mysore territory.[106] In 1894, the kingdom passed laws to abolish the marriage of girls below the age of eight. Remarriage of widowed women and marriage of destitute women was encouraged, and in 1923, some women were granted the permission to exercise their franchise in elections.[107] There were, however, uprisings against British authority in the Mysore territory, notably the Kodagu uprising in 1835 (after the British dethroned the local ruler Chikkaviraraja) and the Kanara uprising of 1837.”

Not from wikipedia, but a link to this recent post by Razib Khan seems relevant to include here.

July 14, 2013 Posted by | Astronomy, Biology, Chemistry, Geology, History, Mathematics, Medicine, Wikipedia | Leave a comment


I have a paper deadline approaching, so I'll be unlikely to blog much more this week. Below are some links and stuff of interest:

i. Plos One: A Survey on Data Reproducibility in Cancer Research Provides Insights into Our Limited Ability to Translate Findings from the Laboratory to the Clinic.

“we surveyed the faculty and trainees at MD Anderson Cancer Center using an anonymous computerized questionnaire; we sought to ascertain the frequency and potential causes of non-reproducible data. We found that ~50% of respondents had experienced at least one episode of the inability to reproduce published data; many who pursued this issue with the original authors were never able to identify the reason for the lack of reproducibility; some were even met with a less than “collegial” interaction. […] These results suggest that the problem of data reproducibility is real. Biomedical science needs to establish processes to decrease the problem and adjudicate discrepancies in findings when they are discovered.”

ii. How the number of people killed in traffic accidents in Denmark has developed over the last decade (link):

Traffic accidents
For people who don’t understand Danish: The x-axis displays the years, the y-axis displays deaths – I dislike it when people manipulate the y-axis (…it should start at 0, not 200…), but this decline is real; the number of Danes killed in traffic accidents has more than halved over the last decade (463 deaths in 2002; 220 deaths in 2011). The number of people sustaining traffic-related injuries dropped from 9254 in 2002 to 4259 in 2011. There’s a direct link to the data set at the link provided above if you want to know more.
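For the curious, the 'more than halved' claim is easy to check – a few lines of Python reproduce the relative declines from the figures quoted above (the numbers are from the data set, the code is just arithmetic):

```python
# Danish traffic-accident figures quoted above:
# 463 deaths in 2002 vs. 220 in 2011; 9,254 injuries vs. 4,259.
def pct_decline(start, end):
    """Percentage decline from start to end."""
    return 100 * (start - end) / start

deaths_decline = pct_decline(463, 220)
injuries_decline = pct_decline(9254, 4259)

print(f"Deaths: {deaths_decline:.1f}% decline")     # ~52.5%
print(f"Injuries: {injuries_decline:.1f}% decline")  # ~54.0%
```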

iii. Gender identity and relative income within households, by Bertrand, Kamenica & Pan.

“We examine causes and consequences of relative income within households. We establish that gender identity – in particular, an aversion to the wife earning more than the husband – impacts marriage formation, the wife’s labor force participation, the wife’s income conditional on working, marriage satisfaction, likelihood of divorce, and the division of home production. The distribution of the share of household income earned by the wife exhibits a sharp cliff at 0.5, which suggests that a couple is less willing to match if her income exceeds his. Within marriage markets, when a randomly chosen woman becomes more likely to earn more than a randomly chosen man, marriage rates decline. Within couples, if the wife’s potential income (based on her demographics) is likely to exceed the husband’s, the wife is less likely to be in the labor force and earns less than her potential if she does work. Couples where the wife earns more than the husband are less satisfied with their marriage and are more likely to divorce. Finally, based on time use surveys, the gender gap in non-market work is larger if the wife earns more than the husband.” […]

“In our preferred specification […] we find that if the wife earns more than the husband, spouses are 7 percentage points (15%) less likely to report that their marriage is very happy, 8 percentage points (32%) more likely to report marital troubles in the past year, and 6 percentage points (46%) more likely to have discussed separating in the past year.”

These are not trivial effects…
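A minor aside on reading these numbers: the paper reports each effect both in percentage points and as a relative change, and dividing the two recovers the implied baseline rate. The sketch below is my own back-of-the-envelope inference from the quoted figures, not something stated in the paper:

```python
# Each effect from the quoted passage as (percentage points, relative change).
# Dividing pp by the relative change recovers the implied baseline rate;
# these baselines are my inference, not numbers reported by the authors.
effects = {
    "very happy marriage (less likely)": (7, 0.15),
    "marital troubles past year (more likely)": (8, 0.32),
    "discussed separating (more likely)": (6, 0.46),
}
for outcome, (pp, relative) in effects.items():
    baseline = pp / relative
    print(f"{outcome}: {pp} pp on an implied baseline of ~{baseline:.0f}%")
```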

iv. Some Khan Academy videos of interest:

v. The Age Distribution of Missing Women in India.

“Relative to developed countries, there are far fewer women than men in India. Estimates suggest that among the stock of women who could potentially be alive today, over 25 million are “missing”. Sex selection at birth and the mistreatment of young girls are widely regarded as key explanations. We provide a decomposition of missing women by age across the states. While we do not dispute the existence of severe gender bias at young ages, our computations yield some striking findings. First, the vast majority of missing women in India are of adult age. Second, there is significant variation in the distribution of missing women by age across different states. Missing girls at birth are most pervasive in some north-western states, but excess female mortality at older ages is relatively low. In contrast, some north-eastern states have the highest excess female mortality in adulthood but the lowest number of missing women at birth. The state-wise variation in the distribution of missing women across the age groups makes it very difficult to draw simple conclusions to explain the missing women phenomenon in India.”

A table from the paper:

Anderson et al

“We estimate that a total of more than two million women in India are missing in a given year. Our age decomposition of this total yields some striking findings. First, the majority of missing women in India die in adulthood. Our estimates demonstrate that roughly 12% of missing women are found at birth, 25% die in childhood, 18% at the reproductive ages, and 45% die at older ages. […] There are just two states in which the majority of missing women are either never born or die in childhood (i e, [sic] before age 15), and these are Haryana and Rajasthan. Moreover, the missing women in these three states add up to well under 15% of the total missing women in India.

For all other states, the majority of missing women die in adulthood. […]

Because there is so much state-wise variation in the distribution of missing women across the age groups, it is difficult to provide a clear explanation for missing women in India. The traditional explanation for missing women, a strong preference for the birth of a son, is most likely driving a significant proportion of missing women in the two states of Punjab and Haryana where the biased sex ratios at birth are undeniable. However, the explanation for excess female deaths after birth is far from clear.”
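To keep the decomposition straight, here's a quick sanity check of the quoted age shares (the ~2 million annual total and the percentages are from the paper; the per-group headcounts are just my multiplication and should be read as rough orders of magnitude):

```python
# Age decomposition of India's missing women, as quoted above.
total_missing = 2_000_000  # "more than two million ... in a given year"
shares = {"at birth": 0.12, "childhood": 0.25,
          "reproductive ages": 0.18, "older ages": 0.45}

# The four shares should account for (essentially) all missing women.
assert abs(sum(shares.values()) - 1.0) < 1e-9

for group, share in shares.items():
    print(f"{group}: ~{int(share * total_missing):,} women per year")
```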

May 22, 2013 Posted by | Cancer/oncology, Chemistry, Data, Demographics, Economics, Khan Academy, marriage, Medicine, Papers | Leave a comment

Khan Academy videos of interest

I assume that not all of the five videos below are equally easy to understand for people who’ve not watched the previous ones in the various relevant playlists, but this is the stuff I’ve been watching lately and you should know where to look by now if something isn’t perfectly clear. I incidentally covered some relevant background material previously on the blog – if concepts from chemistry like ‘oxidation states’ are a bit far away, a couple of the videos in that post may be helpful.

I stopped caring much when I reached the 1 million mark (until they introduced the Kepler badge – then I started caring a little again until I’d gotten that one), but I noticed today that I’m at this point almost at the 1.5 million energy point mark (1,487,776). I’ve watched approximately 400 videos at the site by now.

Here’s a semi-related link with some good news: Khan Academy Launches First State-Wide Pilot In Idaho.

March 7, 2013 Posted by | Biology, Botany, Chemistry, Khan Academy, Lectures | 2 Comments

Wikipedia articles of interest

i. 2,4-Dinitrophenol.

2,4-Dinitrophenol (DNP), C6H4N2O5, is an inhibitor of efficient energy (ATP) production in cells with mitochondria. It uncouples oxidative phosphorylation by carrying protons across the mitochondrial membrane, leading to a rapid consumption of energy without generation of ATP. […]

DNP was used extensively in diet pills from 1933 to 1938 after Cutting and Tainter at Stanford University made their first report on the drug’s ability to greatly increase metabolic rate.[3][4] After only its first year on the market Tainter estimated that probably at least 100,000 persons had been treated with DNP in the United States, in addition to many others abroad.[5] DNP acts as a protonophore, allowing protons to leak across the inner mitochondrial membrane and thus bypass ATP synthase. This makes ATP energy production less efficient. In effect, part of the energy that is normally produced from cellular respiration is wasted as heat. The inefficiency is proportional to the dose of DNP that is taken. As the dose increases and energy production is made more inefficient, metabolic rate increases (and more fat is burned) in order to compensate for the inefficiency and meet energy demands. DNP is probably the best known agent for uncoupling oxidative phosphorylation. The production or “phosphorylation” of ATP by ATP synthase gets disconnected or “uncoupled” from oxidation. Interestingly, the factor that limits ever-increasing doses of DNP is not a lack of ATP energy production, but rather an excessive rise in body temperature due to the heat produced during uncoupling. Accordingly, DNP overdose will cause fatal hyperthermia. In light of this, it’s advised that the dose be slowly titrated according to personal tolerance, which varies greatly.[6] Case reports have shown that an acute administration of 20–50 mg/kg in humans can be lethal.[7] Concerns about dangerous side-effects and rapidly developing cataracts resulted in DNP being discontinued in the United States by the end of 1938. DNP, however, continues to be used by some bodybuilders and athletes to rapidly lose body fat. Fatal overdoses are rare, but are still reported on occasion. These include cases of accidental exposure,[8] suicide,[7][9][10] and excessive intentional exposure.[9][11][12] […]

While DNP itself is considered by many to be too risky for human use, its mechanism of action remains under investigation as a potential approach for treating obesity.[19]

ii. Opium. Long article with lots of good stuff.

“The most important reason for the increase in opiate consumption in the United States during the 19th century was the prescribing and dispensing of legal opiates by physicians and pharmacists to women with “female problems” (mostly to relieve menstrual pain). Between 150,000 and 200,000 opiate addicts lived in the United States in the late 19th century and between two-thirds and three-quarters of these addicts were women.[35] […]

After the 1757 Battle of Plassey and 1764 Battle of Buxar, the British East India Company gained the power to act as diwan of Bengal, Bihar, and Orissa (See company rule in India). This allowed the company to exercise a monopoly over opium production and export in India, to encourage ryots to cultivate the cash crops of indigo and opium with cash advances, and to prohibit the “hoarding” of rice. This strategy led to the increase of the land tax to 50% of the value of crops and to the doubling of East India Company profits by 1777. It is also claimed to have contributed to the starvation of ten million people in the Bengal famine of 1770. Beginning in 1773, the British government began enacting oversight of the company’s operations, and in response to the Indian Rebellion of 1857 this policy culminated in the establishment of direct rule over the Presidencies and provinces of British India. Bengal opium was highly prized, commanding twice the price of the domestic Chinese product, which was regarded as inferior in quality.[47]

Some competition came from the newly independent United States, which began to compete in Guangzhou (Canton) selling Turkish opium in the 1820s. Portuguese traders also brought opium from the independent Malwa states of western India, although by 1820, the British were able to restrict this trade by charging “pass duty” on the opium when it was forced to pass through Bombay to reach an entrepot.[17] Despite drastic penalties and continued prohibition of opium until 1860, opium importation rose steadily from 200 chests per year under Yongzheng to 1,000 under Qianlong, 4,000 under Jiaqing, and 30,000 under Daoguang.[48] The illegal sale of opium became one of the world’s most valuable single commodity trades and has been called “the most long continued and systematic international crime of modern times.”[49]

In response to the ever-growing number of Chinese people becoming addicted to opium, Daoguang of the Qing Dynasty took strong action to halt the import of opium, including the seizure of cargo. In 1838, the Chinese Commissioner Lin Zexu destroyed 20,000 chests of opium in Guangzhou (Canton).[17] Given that a chest of opium was worth nearly $1,000 in 1800, this was a substantial economic loss. The British, not willing to replace the cheap opium with costly silver, began the First Opium War in 1840, the British winning Hong Kong and trade concessions in the first of a series of Unequal Treaties.

Following China’s defeat in the Second Opium War in 1858, China was forced to legalize opium and began massive domestic production. Importation of opium peaked in 1879 at 6,700 tons, and by 1906, China was producing 85% of the world’s opium, some 35,000 tons, and 27% of its adult male population regularly used opium —13.5 million people consuming 39,000 tons of opium yearly.[47] From 1880 to the beginning of the Communist era, Britain attempted to discourage the use of opium in China, but this effectively promoted the use of morphine, heroin, and cocaine, further exacerbating the problem of addiction.[50] […]

iii. Metallicity.

“In astronomy and physical cosmology, the metallicity (also called Z[1]) of an object is the proportion of its matter made up of chemical elements other than hydrogen and helium. Since stars, which comprise most of the visible matter in the universe, are composed mostly of hydrogen and helium, astronomers use for convenience the blanket term “metal” to describe all other elements collectively.[2] Thus, a nebula rich in carbon, nitrogen, oxygen, and neon would be “metal-rich” in astrophysical terms even though those elements are non-metals in chemistry. This term should not be confused with the usual definition of “metal“; metallic bonds are impossible within stars, and the very strongest chemical bonds are only possible in the outer layers of cool K and M stars. Normal chemistry therefore has little or no relevance in stellar interiors.

The metallicity of an astronomical object may provide an indication of its age. When the universe first formed, according to the Big Bang theory, it consisted almost entirely of hydrogen which, through primordial nucleosynthesis, created a sizeable proportion of helium and only trace amounts of lithium and beryllium and no heavier elements. Therefore, older stars have lower metallicities than younger stars such as our Sun.”
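The quoted passage doesn't show it, but astronomers usually express metallicity as [Fe/H]: the base-10 logarithm of a star's iron-to-hydrogen number ratio relative to the Sun's, so [Fe/H] = 0 is solar and each step of -1 is a factor of ten less iron. A minimal sketch of that standard definition (the solar ratio used below is an illustrative placeholder):

```python
import math

def fe_h(n_fe_star, n_h_star, n_fe_sun=1.0, n_h_sun=31623.0):
    """[Fe/H] = log10((N_Fe/N_H)_star) - log10((N_Fe/N_H)_sun).
    The default solar ratio here is illustrative, not a precise value."""
    return math.log10(n_fe_star / n_h_star) - math.log10(n_fe_sun / n_h_sun)

# A star with one-tenth the solar iron abundance: [Fe/H] ≈ -1
print(fe_h(0.1, 31623.0))
# A star with exactly the solar ratio: [Fe/H] = 0
print(fe_h(1.0, 31623.0))
```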

iv. Batavian Republic.

“The Batavian Republic (Dutch: Bataafse Republiek) was the successor of the Republic of the United Netherlands. It was proclaimed on January 19, 1795, and ended on June 5, 1806, with the accession of Louis Bonaparte to the throne of the Kingdom of Holland.” (the article has much more)

v. Taiping Rebellion. Never heard of this? You should have:

“The Taiping Rebellion was a widespread civil war in southern China from 1850 to 1864, against the ruling Manchu-led Qing Dynasty. It was led by heterodox Christian convert Hong Xiuquan, who, having claimed to have received visions, maintained that he was the younger brother of Jesus Christ.[2] About 20 million people died, mainly civilians, in one of the deadliest military conflicts in history.[3]

vi. Borobudur (featured).

Borobudur, or Barabudur, is a 9th-century Mahayana Buddhist monument in Magelang, Central Java, Indonesia. The monument consists of six square platforms topped by three circular platforms, and is decorated with 2,672 relief panels and 504 Buddha statues.[1] A main dome, located at the center of the top platform, is surrounded by 72 Buddha statues seated inside a perforated stupa.”

September 9, 2012 Posted by | Astronomy, Chemistry, Geography, History, Physics, Wikipedia | Leave a comment

Wikipedia articles of interest

i. British anti-invasion preparations of World War II. From the article:

“Any German invasion of Britain would have to involve the landing of troops and equipment somewhere on the coast, and the most vulnerable areas were the south and east coasts of England. Here, Emergency Coastal Batteries were constructed to protect ports and likely landing places. They were fitted with whatever guns were available, which mainly came from naval vessels scrapped since the end of the First World War. These included 6 inch (152 mm), 5.5 inch (140 mm), 4.7 inch (120 mm) and 4 inch (102 mm) guns. These had little ammunition, sometimes as few as ten rounds apiece. At Dover, two 14 inch (356 mm) guns known as Winnie and Pooh were employed.[25] There were also a small number of land based torpedo launching sites.[26]

Beaches were blocked with entanglements of barbed wire, usually in the form of three coils of concertina wire fixed by metal posts, or a simple fence of straight wires supported on waist-high posts.[27] The wire would also demarcate extensive minefields, with both anti-tank and anti-personnel mines on and behind the beaches. On many of the more remote beaches this combination of wire and mines represented the full extent of the passive defences.

Portions of the Romney Marsh, which was the planned invasion site of Operation Sea Lion, were flooded[28] and there were plans to flood more of the Marsh if the invasion were to materialise.[29]

Piers, ideal for landing of troops, and situated in large numbers along the south coast of England, were disassembled, blocked or otherwise destroyed. Many piers were not repaired until the late 1940s or early 1950s.[30]

Where a barrier to tanks was required, Admiralty scaffolding (also known as beach scaffolding or obstacle Z.1) was constructed. Essentially, this was a fence of scaffolding tubes 9 feet (2.7 m) high and was placed at low water so that tanks could not get a good run at it.[31] Admiralty scaffolding was deployed along hundreds of miles of vulnerable beaches.[32]

An even more robust barrier to tanks was provided by long lines of anti-tank cubes. The cubes were made of reinforced concrete 5 feet (1.5 m) to a side. Thousands were cast in situ in rows sometimes two or three deep.

The beaches themselves were overlooked by pillboxes of various types (see British hardened field defences of the Second World War). These were sometimes placed low down to get maximum advantage from enfilading fire whereas others were placed high up making them much harder to capture. Searchlights were installed at the coast to illuminate the sea surface and the beaches for artillery fire.[33][34][35]

I also thought this article, on British hardened field defences (pillboxes), was quite fascinating. It seems to me that at least a few of the models were not much more than poorly constructed deathtraps, whereas some others were remarkably well constructed.

ii. Bradford-Hill criteria. I was waiting a long time for these to be brought up (/mentioned?) during this lecture; they never were, and along the way the lecturer made me start doubting whether he even knew the difference between a p-value and a correlation coefficient. Either way, the criteria are “a group of minimal conditions necessary to provide adequate evidence of a causal relationship between an incidence and a consequence” – here’s the list from the article:

  1. Strength of association (relative risk, odds ratio)
  2. Consistency
  3. Specificity
  4. Temporal relationship (temporality) – not heuristic; factually necessary for cause to precede consequence
  5. Biological gradient (dose-response relationship)
  6. Plausibility (biological plausibility)
  7. Coherence
  8. Experiment (reversibility)
  9. Analogy (consideration of alternate explanations)

Do keep these in mind the next time you come across an article on reddit (or wherever) explaining how ‘drinking X two times a week will prevent cancer’ or how ‘doing Y will minimize your risk of getting disease Z’ (or whatever). Of course in 1965, when the criteria were formulated, people had never even heard about stuff like Granger causality tests, vector autoregressive models and instrumental variable models. Establishing any kind of reasonably strong argument for a causal relationship between two sets of variables is very hard.

iii. Minamata disease. Via mercury and mercury poisoning. It’s a horrible story and I think it’s pretty much certain that quite a few comparable disasters are unfolding right now elsewhere, e.g. in China. From the article:

Minamata disease […] is a neurological syndrome caused by severe mercury poisoning. Symptoms include ataxia, numbness in the hands and feet, general muscle weakness, narrowing of the field of vision and damage to hearing and speech. In extreme cases, insanity, paralysis, coma, and death follow within weeks of the onset of symptoms. A congenital form of the disease can also affect foetuses in the womb.

Minamata disease was first discovered in Minamata city in Kumamoto prefecture, Japan, in 1956. It was caused by the release of methylmercury in the industrial wastewater from the Chisso Corporation‘s chemical factory, which continued from 1932 to 1968. This highly toxic chemical bioaccumulated in shellfish and fish in Minamata Bay and the Shiranui Sea, which when eaten by the local populace resulted in mercury poisoning. While cat, dog, pig, and human deaths continued over more than 30 years, the government and company did little to prevent the pollution.

As of March 2001, 2,265 victims had been officially recognised (1,784 of whom had died)[1] and over 10,000 had received financial compensation from Chisso.[2] By 2004, Chisso Corporation had paid $86 million in compensation, and in the same year was ordered to clean up its contamination.[3] On March 29, 2010, a settlement was reached to compensate as-yet uncertified victims.[4]

A second outbreak of Minamata disease occurred in Niigata Prefecture in 1965. The original Minamata disease and Niigata Minamata disease are considered two of the Four Big Pollution Diseases of Japan.”

See also Patio process, a historically quite significant technique which improved the yields of silver mines in South America.

iv. Recovery position.

“The recovery position refers to one of a series of variations on a lateral recumbent or three-quarters prone position of the body, into which an unconscious but breathing casualty can be placed as part of first aid treatment.

An unconscious person (GCS <8) in a supine position (on their back) may not be able to maintain an open airway as a conscious person would.[1] This can lead to an obstruction of the airway, restricting the flow of air and preventing gaseous exchange, which then causes hypoxia, which is life threatening. Thousands of fatalities occur every year in casualties where the cause of unconsciousness was not fatal, but where airway obstruction caused the patient to suffocate.[2][3][4] The cause of unconsciousness can be any reason from trauma to intoxication from alcohol.”

You never know when you need to know stuff like this.

v. Cassava.

Cassava (Manihot esculenta), also called yuca, mogo, manioc, mandioca and kamoting kaoy, a woody shrub of the Euphorbiaceae (spurge family) native to South America, is extensively cultivated as an annual crop in tropical and subtropical regions for its edible starchy, tuberous root, a major source of carbohydrates. It differs from the similarly-spelled yucca, an unrelated fruit-bearing shrub in the Asparagaceae family. Cassava, when dried to a starchy, powdery (or pearly) extract is called tapioca, while its fermented, flaky version is named garri.

Cassava is the third-largest source of food carbohydrates in the tropics.[1][2] Cassava is a major staple food in the developing world, providing a basic diet for around 500 million people.[3] Cassava is one of the most drought-tolerant crops, capable of growing on marginal soils. Nigeria is the world’s largest producer of cassava.

Cassava root is a good source of carbohydrates, but a poor source of protein. A predominantly cassava root diet can cause protein-energy malnutrition.[4]

Cassava is classified as sweet or bitter. Like other roots and tubers, cassava contains anti-nutritional factors and toxins.[5] It must be properly prepared before consumption. Improper preparation of cassava can leave enough residual cyanide to cause acute cyanide intoxication and goiters, and may even cause ataxia or partial paralysis.[6] Nevertheless, farmers often prefer the bitter varieties because they deter pests, animals, and thieves.[7] The more-toxic varieties of cassava are a fall-back resource (a “food security crop”) in times of famine in some places.[8]

Using the toxic varieties as fall-back resources is of course not exactly optimal. It can actually, and has, led to really terrible outcomes (here’s the study Rosling talks about, I have not been able to find a non-gated version):

vi. Solon. I’m sure that for most readers the name rings a bell, but what do you actually know about the guy? If you click the link, you’ll know more…

vii. Emulsion.

“An emulsion is a mixture of two or more liquids that are normally immiscible (un-blendable). Emulsions are part of a more general class of two-phase systems of matter called colloids. Although the terms colloid and emulsion are sometimes used interchangeably, emulsion is used when both the dispersed and the continuous phase are liquid. In an emulsion, one liquid (the dispersed phase) is dispersed in the other (the continuous phase). Examples of emulsions include vinaigrettes, milk, and some cutting fluids for metal working. The photo-sensitive side of photographic film is an example of a colloid.”

viii. Onomatopoeia. Included because that’s just a neat word for something I didn’t know had a name:

An onomatopoeia or onomatopœia […] from the Greek ὀνοματοποιία;[1] ὄνομα for “name”[2] and ποιέω for “I make”,[3] adjectival form: “onomatopoeic” or “onomatopoetic”) is a word that imitates or suggests the source of the sound that it describes. Onomatopoeia (as an uncountable noun) refers to the property of such words. Common occurrences of onomatopoeias include animal noises, such as “oink” or “meow” or “roar”. Onomatopoeias are not the same across all languages; they conform to some extent to the broader linguistic system they are part of; hence the sound of a clock may be tick tock in English, dī dā in Mandarin, or katchin katchin in Japanese.”

(And now you know…)

ix. Chemokine. It’s a technical article, but you can’t read it and not at least get the message that the human body is almost unbelievably complex.

May 18, 2012 Posted by | Biology, Botany, Chemistry, Epidemiology, History, Immunology, Medicine, Neurology, Statistics, Wikipedia | Leave a comment

Basic Pharmacology Principles

A very good introductory lecture on pharmacology:

Below I’ve posted wikipedia links to a few of the concepts he covers in the lecture (however I’m pretty sure the lecture is the more efficient way to learn this stuff, at least the basics):

Receptor antagonist.
EC50 (half maximal effective concentration).
Dose-response relationship.

January 30, 2012 Posted by | Biology, Chemistry, Lectures, Medicine, Pharmacology, Wikipedia | Leave a comment

Khan videos of interest

Some of the stuff I’ve been watching today:

(For the record: I think the above video is just plain cool. I’ve had real trouble ‘getting’ how the kidneys actually work, despite reading quite a bit about that subject at one point; one of the main things I remember from reading about it when I did was that the more details were added to the mix the more confused I tended to get (always a risk when you use peer-reviewed research to supplement wikipedia and similar sources). Khan does a brilliant job here, he’s of course simplifying stuff somewhat but you get it.)

In a couple of later videos he makes a few clarifications regarding the terminology (and frankly if you like this one, you should watch them too – this one is the next in the series) but those are not super important.

The last two were some of the videos I felt I had to take a closer look at in order to get a little more out of some of the sections in the Microbiology textbook. I’m pretty sure this stuff was covered in HS-chemistry, but that’s a long time ago and I haven’t used that stuff since then so a lot of it is just gone. Thanks to Khan, brushing up on some of this stuff is a lot easier than it otherwise could have been.

I think I ought to have a go at the calculus section and linear algebra at some point, but so far I haven’t really found the motivation to do so – aside from watching a few random videos along the way. Incidentally, today I crossed the 100-completed-videos mark (75 of them were in the Cosmology and Astronomy section, which I’ve watched in full from start to finish – and which I highly recommend even though technically some of the videos probably do not belong in this category at all).

August 7, 2011 Posted by | Astronomy, Biology, Cardiology, Chemistry, Khan Academy, Lectures, Medicine, Nephrology, Physics | Leave a comment

Bill Bryson (II)

More quotes from his wonderful book:

1. “Before [Richard] Owen, museums were designed primarily for the use and edification of the elite, and even they found it difficult to gain access. In the early days of the British Museum, prospective visitors had to make a written application and undergo a brief interview to determine if they were fit to be admitted at all. They then had to return a second time to pick up a ticket – that is, assuming they had passed the interview – and finally come back a third time to view the museum’s treasures. Even then they were whisked through in groups and not allowed to linger. Owen’s plan was to welcome everyone, even to the point of encouraging working men to visit in the evening, and to devote most of the museum’s space to public displays. He even proposed, very radically, to put informative labels on each display so that people could appreciate what they were viewing.”

2. “At the turn of the twentieth century, palaeontologists had literally tons of old bones to pick over. The problem was that they still didn’t have any idea how old any of these bones were. Worse, the agreed ages for the Earth couldn’t comfortably support the numbers of aeons and ages and epochs that the past obviously contained. If Earth were really only twenty million years old or so, as the great Lord Kelvin insisted, then whole orders of ancient creatures must have come into being and gone out again practically in the same geological instant. It just made no sense. […] Such was the confusion that by the close of the nineteenth century, depending on which text you consulted, you could learn that the number of years that stood between us and the dawn of complex life in the Cambrian period was 3 million, 18 million, 600 million, 794 million, or 2.4 billion – or some other number within that range. As late as 1910 [five years after Einstein’s Annus Mirabilis papers], one of the most respected estimates, by the American George Becker, put the Earth’s age at perhaps as little as 55 million years.”

3. “Soon after taking up his position [in the beginning of the nineteenth century], [Humphry] Davy began to bang out new elements one after the other – potassium, sodium, magnesium, calcium, strontium, and aluminum or aluminium […] He discovered so many elements not so much because he was serially astute as because he developed an ingenious technique of applying electricity to a molten substance – electrolysis, as it is known. Altogether he discovered a dozen elements, a fifth of the known totals of his day.”

4. “They [Ernest Rutherford and Frederick Soddy] also discovered that radioactive elements decayed into other elements – that one day you had an atom of uranium, say, and the next you had an atom of lead. This was truly extraordinary. It was alchemy pure and simple; no-one had ever imagined that such a thing could happen naturally and spontaneously. […] For a long time it was assumed that anything so miraculously energetic as radioactivity must be beneficial. For years, manufacturers of toothpaste and laxatives put radioactive thorium in their products, and at least until the late 1920s the Glen Springs Hotel in the Finger Lakes region of New York (and doubtless others as well) featured with pride the therapeutic effects of its ‘Radio-active mineral springs’. It wasn’t banned in consumer products until 1938. By this time it was much too late for Mme Curie, who died of leukaemia in 1934.”

5. “In 1875, when a young German in Kiel named Max Planck was deciding whether to devote his life to mathematics or to physics, he was urged most heartily not to choose physics because the breakthroughs had all been made there. The coming century, he was assured, would be one of consolidation and refinement, not revolution.”

6. “You may not feel outstandingly robust, but if you are an average-sized adult you will contain within your modest frame no less than 7 x 10^18 joules of potential energy – enough to explode with the force of thirty very large hydrogen bombs, assuming you knew how to liberate it and really wished to make a point. Everything has this kind of energy trapped within it. We’re just not very good at getting it out. Even a uranium bomb – the most energetic thing we have produced yet – releases less than 1 per cent of the energy it could release if only we were more cunning.”
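The number in quote 6 follows directly from Einstein’s E = mc². Here’s a quick sanity check in Python – the ~78 kg average adult mass is my assumption, not from the book:

```python
# Back-of-the-envelope check of Bryson's 7 x 10^18 joules figure, E = m * c^2.
# The 78 kg average adult mass is an assumed value.
c = 299_792_458.0   # speed of light in m/s (exact, by definition of the metre)
m = 78.0            # assumed average adult mass in kg
E = m * c ** 2
print(f"E = {E:.2e} J")

# Bryson's 'thirty very large hydrogen bombs': each would have to yield
# E/30 joules; converting to megatons of TNT (1 Mt = 4.184e15 J):
mt_per_bomb = E / 30 / 4.184e15
print(f"per bomb: {mt_per_bomb:.0f} Mt")
```

That works out to roughly 7 × 10^18 joules, and ‘very large’ is no exaggeration – each of the thirty bombs would have to be in the 50+ megaton range, i.e. Tsar Bomba class.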

7. “It is worth pausing for a moment to consider just how little was known of the cosmos at the this time. Astronomers today believe there are perhaps 140 billion galaxies in the visible universe. […] In 1919, when Hubble first put his head to the eyepiece, the number of these galaxies known to us was exactly one: the Milky Way. Everything else was thought to be either part of the Milky Way itself or one of many distant, peripheral puffs of gas. […] at the time Leavitt and Cannon were inferring fundamental properties of the cosmos from dim smudges of distant stars on photographic plates, the Harvard astronomer William H. Pickering, who could of course peer into a first-class telescope as often as he wanted, was developing his seminal theory that dark patches on the Moon were caused by swarms of seasonally migrating insects.”

8. “Atoms, in short, are very abundant. They are also fantastically durable. Because they are so long-lived, atoms really get around. Every atom you possess has almost certainly passed through several stars and been part of millions of organisms on its way to becoming you. We are each so atomically numerous and so vigorously recycled at death that a significant number of our atoms – up to a billion for each of us, it has been suggested – probably once belonged to Shakespeare.”

From the wiki correction page: “Jupiter Scientific has done an analysis of this problem and the figure in Bryson’s book is probably low: It is likely that each of us has about 200 billion atoms that were once in Shakespeare’s body.”

9. “Even though lead was widely known to be dangerous, by the early years of the twentieth century it could be found in all manner of consumer products. Food came in cans sealed with lead solder. Water was often stored in lead-lined tanks. Lead arsenate was sprayed onto fruits as a pesticide. Lead even came as part of the composition of toothpaste tubes. […] Americans alive today each have about 625 times more lead in their blood than people did a century ago.”

In this chapter we also learn that we did not arrive at the current best estimate of the age of the Earth until a little over 50 years ago – I won’t quote from the book, but wikipedia has the short version: “An age of 4.55 ± 1.5% billion years, very close to today’s accepted age, was determined by C.C. Patterson using uranium-lead isotope dating (specifically lead-lead dating) on several meteorites including the Canyon Diablo meteorite and published in 1956.” At this point, the age of the universe was still very uncertain; from the book: “In 1956, astronomers discovered that Cepheid variables were more variable than they had thought; they came in two varieties, not one. This allowed them to rework their calculations and come up with a new age for the universe of between seven billion and twenty billion years” – as Bryson puts it, that estimate was “not terribly precise”. Our knowledge about the age of the universe is quite new.

10. “Well into the 1970s, one of the most popular and influential geological textbooks, The Earth by the venerable Harold Jefferys, strenuously insisted that plate tectonics was a physical impossibility, just as it had in the first edition way back in 1924. It was equally dismissive of convection and sea-floor spreading. And in Basin and Range, published in 1980, John McPhee noted that even then one American geologist in eight still didn’t believe in plate tectonics.”

11. “By the time Shoemaker came along, a common view was that Meteor Crater had been formed by an underground steam explosion. Shoemaker knew nothing about underground steam explosions – he couldn’t; they don’t exist…”

July 30, 2011 Posted by | Astronomy, Books, Chemistry, cosmology, Geology, Paleontology, Physics | Leave a comment

Bill Bryson

Here’s the link, order it if you like what you read here. I read the book 3 years ago, but this is the kind of book that you’ll probably want to reread at some point if you’re like me. When I read it the first time I borrowed my big brother’s copy, which he had standing on his bookshelf while I was visiting him over the Summer. I recently bought the book myself (it was on sale), and pretty much ever since I bought it I’ve been somewhat bugged by the fact that (yet) a(/nother) book I’ve read stands on my bookshelf looking as if it’s never even been touched by a human hand (most of the books I’ve read contain pages painted in at least two colours and often various notes in the margin – ‘you can tell they’ve been read’). So I decided to take another shot at it, also because I needed a break from Genetics – some of that is hard, and this is supposed to be my vacation after all… Ok, let’s move on to some quotes from the book:

1. I’d actually like to quote the introduction chapter in full, it’s that good; but that would be overkill so less will do. However I can’t stop myself from telling you in a bit more detail just how Bryson starts out (…I was just about to add ‘…his adventure’):

“Welcome. And congratulations. I am delighted that you could make it. Getting here wasn’t easy, I know. In fact, I suspect it was a little tougher than you realize.
To begin with, for you to be here now trillions of drifting atoms had somehow to assemble in an intricate and curiously obliging manner to create you. It’s an arrangement so specialized and particular that it has never been tried before and will only exist this once. For the next many years (we hope) these tiny particles will uncomplainingly engage in all the billions of deft, co-operative efforts necessary to keep you intact and let you experience the supremely agreeable but generally underappreciated state known as existence.
Why atoms take this trouble is a bit of a puzzle. Being you is not a gratifying experience at the atomic level. For all their devoted attention, your atoms don’t actually care about you – indeed, they don’t even know that you are there. They don’t even know that they are there.”


“Even a long human life adds up to only about 650,000 hours. And when that modest milestone flashes into view, or at some other point thereabouts, for reasons unknown your atoms will close you down, then silently reassemble and go off to be other things. And that’s it for you. […] The only thing special about the atoms that make you is that they make you. That is, of course, the miracle of life.


But the fact that you have atoms and that they assemble in such a willing manner is only part of what got you here. To be here now, alive in the twenty-first century and smart enough to know it, you also had to be the beneficiary of an extraordinary string of biological good fortune. Survival on Earth is a surprisingly tricky business. […] The average species on Earth lasts for only about four million years […] Consider the fact that for 3.8 billion years, a period of time older than the Earth’s mountains and rivers and oceans, every one of your forebears on both sides has been attractive enough to find a mate, healthy enough to reproduce, and sufficiently blessed by fate and circumstances to live long enough to do so. Not one of your pertinent ancestors was squashed, devoured, drowned, starved, stuck fast, untimely wounded or otherwise deflected from its life’s quest of delivering a tiny charge of genetic material to the right partner at the right moment to perpetuate the only possible sequence of hereditary combinations that could result – eventually, astoundingly, and all too briefly – in you.[*]

This is a book about how it happened…”

*Technically, this passage is not entirely correct, as sexual reproduction is quite a bit younger than that – but the finer details don’t subtract much from the narrative: “The first fossilized evidence of sexually reproducing organisms is from eukaryotes of the Stenian period, about 1 to 1.2 billion years ago.” (wikipedia) It’s still a pretty long time ago. Interestingly, this inaccuracy is not mentioned on this wiki page dealing with inaccuracies and errors in the book. I found a few other passages I considered a bit problematic while reading them, but I generally let those pass, both because of the background of the author and the likely background of the target group (it’s pop sci after all).

So anyway, that’s how he starts out.

2. Also from the introduction:

“about four or five years ago, I suppose – I was on a long flight across the Pacific, staring idly out the window at moonlit ocean, when it occurred to me with a certain uncomfortable forcefulness that I didn’t know the first thing about the only planet I was ever going to live on. I had no idea, for example, why the oceans were salty but the Great Lakes weren’t. Didn’t have the faintest idea. I didn’t know if the oceans were growing more salty with time or less, and whether ocean salinity levels were something I should be concerned about or not. […] I didn’t know what a proton was, or a protein, didn’t know a quark from a quasar, didn’t understand how geologists could look at a layer of rock on a canyon wall and tell you how old it was – didn’t know anything, really.”

So he spent three years of his life writing the book and trying to find out some of this stuff, presumably asking a lot of really awkward questions along the way. The quotes below are from the book proper, not from the introduction:

3. “until 1978 no-one had ever noticed that Pluto has a moon.” […] “Our solar system may be the liveliest thing for trillions of miles, but all the visible stuff in it […] fills less than a trillionth of the available space.” […] “When I was a boy, the solar system was thought to contain thirty moons. The total now is at least ninety, about a third of which have been found in just the last ten years. The point to remember, of course, when considering the universe at large is that we don’t actually know what is in our own solar system.” […] “Surprisingly little of the universe is visible to us when we incline our heads to the sky. Only about six thousand stars are visible to the naked eye from Earth, and only about two thousand can be seen from any one spot.”

4. “It was history’s first co-operative international scientific venture, and almost everywhere it ran into problems. Many observers were waylaid by war, sickness or shipwreck. Others made their destinations but opened their crates to find equipment broken or warped by tropical heat. Once again the French seemed fated to provide the most memorably unlucky participants. Jean Chappe spent months travelling to Siberia by coach, boat and sleigh, nursing his delicate instruments over every perilous bump, only to find the last vital stretch blocked by swollen rivers, the result of unusually heavy spring rains, which the locals were swift to blame on him after they saw him pointing strange instruments at the sky. Chappe managed to escape with his life, but with no useful measurements.”

5. “The second half of the eighteenth century was a time when people of a scientific bent grew intensely interested in the physical properties of fundamental things – gases and electricity in particular – and began seeing what they could do with them, often with more enthusiasm than sense. In America, Benjamin Franklin famously risked his life by flying a kite in an electrical storm. In France, a chemist named Pilatre de Rozier tested the flammability of hydrogen by gulping a mouthful and blowing across an open flame, proving at a stroke that hydrogen is indeed explosively combustible and that eyebrows are not necessarily a permanent feature of one’s face.”

6. “It is hard to imagine now, but geology excited the nineteenth century – positively gripped it – in a way that no science ever had before or would again. In 1839, when Roderick Murchison published The Silurian System, a plump and ponderous study of a type of rock called greywacke, it was an instant bestseller, racing through four editions, even though it cost 8 guineas a copy and was, in true Huttonian style, unreadable. (As even a Murchison supporter conceded, it had ‘a total want of literary attractiveness’.) And when, in 1841, the great Charles Lyell travelled to America to give a series of lectures in Boston, sellout audiences of three thousand at a time packed into the Lowell Institute to hear his tranquillizing descriptions of marine zeolites and seismic perturbations in Campania.”

7. “The first attempt at measurement [of the age of the Earth] that could be called remotely scientific was made by the Frenchman Georges-Louis Leclerc, Comte de Buffon, in the 1770s. It had long been known that the Earth radiated appreciable amounts of heat – that was apparent to anyone who went down a coal mine – but there wasn’t any way of estimating the rate of dissipation. Buffon’s experiment consisted of heating spheres until they glowed white-hot and then estimating the rate of heat loss by touching them (presumably very lightly at first) as they cooled. From this he guessed the Earth’s age to be somewhere between 75,000 and 168,000 years old. This was of course a wild underestimate; but it was a radical notion nonetheless…”

Bryson often includes examples like these of just how people figured stuff out – as you can also tell from quotes #4 and #5. These parts of the book are really fascinating to me, because they make it clear just how many problems related to measurement and knowledge sharing were around, making life complicated for people trying to figure stuff out in the past – problems we don’t even spare a thought for today. And because descriptions such as these make it much clearer that many of the tools people today take for granted didn’t exactly come along by themselves. The stuff above deals with only the first 100 pages or so; needless to say, there’s a lot of good stuff in this book. I’ll bring more quotes and stuff from the book tomorrow – I should have blogged the book in detail the first time I read it, but I never got around to doing it, and this time I’ll try to rectify that mistake.

July 29, 2011 Posted by | Astronomy, Books, Chemistry, Evolutionary biology, Geology, History, Science | Leave a comment

Wikipedia articles of interest

1. Evolutionary history of plants.

“Evidence suggests that an algal scum formed on the land 1,200 million years ago, but it was not until the Ordovician period, around 450 million years ago, that land plants appeared.”

The period before 450 million years ago of course covers more than 9/10ths of the history of the Earth.
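That fraction is easy to check yourself – a quick Python sketch, assuming the usual estimate of about 4.54 billion years for the age of the Earth:

```python
# Fraction of Earth's history that predates land plants (~450 Mya),
# assuming the commonly cited ~4.54-billion-year age of the Earth.
age_of_earth = 4540   # millions of years
land_plants = 450     # millions of years ago
fraction_before = (age_of_earth - land_plants) / age_of_earth
print(f"{fraction_before:.1%}")  # about 90%
```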

“The early Devonian landscape was devoid of vegetation taller than waist height. Without the evolution of a robust vascular system, taller heights could not be attained. There was, however, a constant evolutionary pressure to attain greater height. The most obvious advantage is the harvesting of more sunlight for photosynthesis – by overshadowing competitors – but a further advantage is present in spore distribution, as spores (and, later, seeds) can be blown greater distances if they start higher.”

However, plants didn’t stay short for long: “the late Devonian Archaeopteris, a precursor to gymnosperms which evolved from the trimerophytes,[38] reached 30 m in height.”

Here’s what the Earth looked like back then (Wikipedia, link):

This is all part of why it’s so difficult to imagine what the Earth was like when you go millions of years back in time. We take so many things in our environment for granted. Grasses as we know them didn’t come about until somewhere around the K-T boundary.

2. Acid-base reaction

“An acid-base reaction is a chemical reaction that occurs between an acid and a base. Several concepts that provide alternative definitions for the reaction mechanisms involved and their application in solving related problems exist. Despite several differences in definitions, their importance becomes apparent as different methods of analysis when applied to acid-base reactions for gaseous or liquid species, or when acid or base character may be somewhat less apparent.”

There are quite a few different theories described in the article, I didn’t know that I didn’t know this. The article contains a brilliant example of a case where ‘everybody knew that this theory was true until it was proven wrong’:

“The first scientific concept of acids and bases was provided by Antoine Lavoisier circa 1776. Since Lavoisier’s knowledge of strong acids was mainly restricted to oxoacids, such as HNO3 (nitric acid) and H2SO4 (sulfuric acid), which tend to contain central atoms in high oxidation states surrounded by oxygen, and since he was not aware of the true composition of the hydrohalic acids (HF (hydrogen fluoride), HCl, HBr, and HI (hydrogen iodide)), he defined acids in terms of their containing oxygen, which in fact he named from Greek words meaning “acid-former” (from the Greek οξυς (oxys) meaning “acid” or “sharp” and γεινομαι (geinomai) meaning “engender”). The Lavoisier definition was held as absolute truth for over 30 years, until the 1810 article and subsequent lectures by Sir Humphry Davy in which he proved the lack of oxygen in H2S, H2Te, and the hydrohalic acids.”

3. Anglo-Zulu War. This article is part of the featured British Empire Portal, which contains quite a few articles I’ll probably have to read at some point. As written in the introduction to the portal: “By 1921, the British Empire held sway over a population of about 458 million people, approximately one-quarter of the world’s population. It covered about 36.6 million km² (14.2 million square miles), about a quarter of Earth’s total land area.” If you want to understand the world of 100 years ago, or the world some time before that, you really can’t ignore the history of the British Empire.

4. Medieval technology. Mostly links, but there are lots of them, and I’d guess there are quite a few good articles hidden here. Did you know that functional buttons – buttons with buttonholes for fastening or closing clothes – weren’t invented until the 13th century?

November 24, 2010 Posted by | Biology, Botany, Chemistry, Evolutionary biology, Geology, History, Paleontology, Wikipedia | 2 Comments

Random wikipedia links of interest

1. Corrosion.

2. Demographics of the People’s Republic of China. A few quotes from the article:

a) “Census data obtained in 2000 revealed that 119 boys were born for every 100 girls, and among China’s “floating population” the ratio was as high as 128:100. These situations led the government in July 2004 to ban selective abortions of female fetuses. It is estimated that this imbalance will rise until 2025–2030 to reach 20% then slowly decrease.[2]”
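A ratio like 119:100 is perhaps easier to appreciate translated into the share of births that are boys. A small Python sketch – note that the ~105:100 ‘natural’ baseline is a commonly cited figure I’ve added for comparison, it’s not from the article:

```python
# Convert a sex ratio at birth (boys per 100 girls) into the share of
# all births that are boys.
def male_share(boys_per_100_girls):
    return boys_per_100_girls / (boys_per_100_girls + 100)

print(f"natural baseline (~105:100): {male_share(105):.1%}")  # about 51.2%
print(f"China 2000 census (119:100): {male_share(119):.1%}")  # about 54.3%
print(f"floating population (128:100): {male_share(128):.1%}")  # about 56.1%
```

So among the ‘floating population’, roughly 56 out of every 100 newborns were boys.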

b) “Average household size (2005) 3.1; rural households 3.3; urban households 3.0.
Average annual per capita disposable income of household (2005): rural households Y 3,255 (U.S.$397), urban households Y 10,493 (U.S.$1,281).”

c) A map of the population density (darker squares have higher density):

The ‘average population density’ of 137/km2 is not all that interesting a variable. The Gobi desert is not a nice place for humans to live: the temperature variation in the area is extreme, ranging from –40°C in the winter to +50°C in the summer.

3. Cost overrun. An excerpt:

“Cost overrun is common in infrastructure, building, and technology projects. One of the most comprehensive studies [1] of cost overrun that exists found that 9 out of 10 projects had overrun, overruns of 50 to 100 percent were common, overrun was found in each of 20 nations and five continents covered by the study, and overrun had been constant for the 70 years for which data were available. For IT projects, an industry study by the Standish Group (2004) found that average cost overrun was 43 percent, 71 percent of projects were over budget, over time, and under scope, and total waste was estimated at US$55 billion per year in the US alone.”

4. Tensor. This is difficult stuff.

5. Eye.

June 16, 2010 Posted by | Biology, Chemistry, Data, Demographics, Economics, Geography, Mathematics, Wikipedia | 7 Comments