“A recent study estimated that 234 million surgical procedures requiring anaesthesia are performed worldwide annually. Anaesthesia is the largest hospital specialty in the UK, with over 12,000 practising anaesthetists […] In this book, I give a short account of the historical background of anaesthetic practice, a review of anaesthetic equipment, techniques, and medications, and a discussion of how they work. The risks and side effects of anaesthetics will be covered, and some of the subspecialties of anaesthetic practice will be explored.”

I liked the book and gave it three stars on goodreads; I was closer to four stars than to two. Below I have added a few sample observations from the book, as well as what turned out to be a quite considerable number of links (more than 60, from a brief count) to topics/people/etc. discussed or mentioned in the text. I decided to spend a bit more time finding relevant links than I’ve previously done when writing link-heavy posts, so in this post I have not limited myself to wikipedia articles and e.g. also link directly to primary literature discussed in the coverage. The links provided are, as usual, meant to be indicators of which kind of stuff is covered in the book, rather than an alternative to the book; some of the wikipedia articles in particular are probably not very good (the main point of a link to a wikipedia article of questionable quality is to indicate that I consider awareness of the existence of concept X to be of interest/importance also to people who have not read this book, even if no great resource on the topic was immediately at hand to me).

Sample observations from the book:

“[G]eneral anaesthesia is not sleep. In physiological terms, the two states are very dissimilar. The term general anaesthesia refers to the state of unconsciousness which is deliberately produced by the action of drugs on the patient. Local anaesthesia (and its related terms) refers to the numbness produced in a part of the body by deliberate interruption of nerve function; this is typically achieved without affecting consciousness. […] The purpose of inhaling ether vapour [in the past] was so that surgery would be painless, not so that unconsciousness would necessarily be produced. However, unconsciousness and immobility soon came to be considered desirable attributes […] For almost a century, lying still was the only reliable sign of adequate anaesthesia.”

“The experience of pain triggers powerful emotional consequences, including fear, anger, and anxiety. A reasonable word for the emotional response to pain is ‘suffering’. Pain also triggers the formation of memories which remind us to avoid potentially painful experiences in the future. The intensity of pain perception and suffering also depends on the mental state of the subject at the time, and the relationship between pain, memory, and emotion is subtle and complex. […] The effects of adrenaline are responsible for the appearance of someone in pain: pale, sweating, trembling, with a rapid heart rate and breathing. Additionally, a hormonal storm is activated, readying the body to respond to damage and fight infection. This is known as the stress response. […] Those responses may be abolished by an analgesic such as morphine, which will counteract all those changes. For this reason, it is routine to use analgesic drugs in addition to anaesthetic ones. […] Typical anaesthetic agents are poor at suppressing the stress response, but analgesics like morphine are very effective. […] The hormonal stress response can be shown to be harmful, especially to those who are already ill. For example, the increase in blood coagulability which evolved to reduce blood loss as a result of injury makes the patient more likely to suffer a deep venous thrombosis in the leg veins.”

“If we monitor the EEG of someone under general anaesthesia, certain identifiable changes to the signal occur. In general, the frequency spectrum of the signal slows. […] Next, the overall power of the signal diminishes. In very deep general anaesthesia, short periods of electrical silence, known as burst suppression, can be observed. Finally, the overall randomness of the signal, its entropy, decreases. In short, the EEG of someone who is anaesthetized looks completely different from someone who is awake. […] Depth of anaesthesia is no longer considered to be a linear concept […] since it is clear that anaesthesia is not a single process. It is now believed that the two most important components of anaesthesia are unconsciousness and suppression of the stress response. These can be represented on a three-dimensional diagram called a response surface. [Here’s incidentally a recent review paper on related topics, US]”
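The entropy point can be made concrete with a toy computation. The sketch below is entirely my own illustration (not from the book), using fabricated stand-in ‘signals’: broadband noise for the awake state, and a slow oscillation interrupted by flat, burst-suppression-like stretches for deep anaesthesia. The histogram-based Shannon entropy of the latter comes out clearly lower:

```python
import math
import random

def shannon_entropy(signal, bins=32):
    """Shannon entropy (in bits) of a signal's amplitude histogram."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / bins or 1.0  # avoid zero width for constant signals
    counts = [0] * bins
    for x in signal:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    n = len(signal)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

random.seed(0)
# 'Awake': broadband noise, amplitudes spread across many histogram bins.
awake = [random.gauss(0, 1) for _ in range(5000)]
# 'Deep anaesthesia': slow oscillation with electrically silent stretches,
# so amplitudes pile up in a few bins.
anaesthetized = [math.sin(2 * math.pi * t / 200) if (t // 400) % 2 == 0 else 0.0
                 for t in range(5000)]

assert shannon_entropy(awake) > shannon_entropy(anaesthetized)
```

(Real depth-of-anaesthesia monitors use considerably more sophisticated spectral-entropy measures, but the direction of the effect is the same.)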

“Before the widespread advent of anaesthesia, there were very few painkilling options available. […] Alcohol was commonly given as a means of enhancing the patient’s courage prior to surgery, but alcohol has almost no effect on pain perception. […] For many centuries, opium was the only effective pain-relieving substance known. […] For general anaesthesia to be discovered, certain prerequisites were required. On the one hand, the idea that surgery without pain was achievable had to be accepted as possible. Despite tantalizing clues from history, this idea took a long time to catch on. The few workers who pursued this idea were often openly ridiculed. On the other, an agent had to be discovered that was potent enough to render a patient suitably unconscious to tolerate surgery, but not so potent that overdose (hence accidental death) was too likely. This agent also needed to be easy to produce, tolerable for the patient, and easy enough for untrained people to administer. The herbal candidates (opium, mandrake) were too unreliable or dangerous. The next reasonable candidate, and every agent since, was provided by the proliferating science of chemistry.”

“Inducing anaesthesia by intravenous injection is substantially quicker than the inhalational method. Inhalational induction may take several minutes, while intravenous induction happens in the time it takes for the blood to travel from the needle to the brain (30 to 60 seconds). The main benefit of this is not convenience or comfort but patient safety. […] It was soon discovered that the ideal balance is to induce anaesthesia intravenously, but switch to an inhalational agent […] to keep the patient anaesthetized during the operation. The template of an intravenous induction followed by maintenance with an inhalational agent is still widely used today. […] Most of the drawbacks of volatile agents disappear when the patient is already anaesthetized [and] volatile agents have several advantages for maintenance. First, they are predictable in their effects. Second, they can be conveniently administered in known quantities. Third, the concentration delivered or exhaled by the patient can be easily and reliably measured. Finally, at steady state, the concentration of volatile agent in the patient’s expired air is a close reflection of its concentration in the patient’s brain. This gives the anaesthetist a reliable way of ensuring that enough anaesthetic is present to ensure the patient remains anaesthetized.”

“All current volatile agents are colourless liquids that evaporate into a vapour which produces general anaesthesia when inhaled. All are chemically stable, which means they are non-flammable, and not likely to break down or be metabolized to poisonous products. What distinguishes them from each other are their specific properties: potency, speed of onset, and smell. Potency of an inhalational agent is expressed as MAC, the minimum alveolar concentration required to keep 50% of adults unmoving in response to a standard surgical skin incision. MAC as a concept was introduced […] in 1963, and has proven to be a very useful way of comparing potencies of different anaesthetic agents. […] MAC correlates with observed depth of anaesthesia. It has been known for over a century that potency correlates very highly with lipid solubility; that is, the more soluble an agent is in lipid […], the more potent an anaesthetic it is. This is known as the Meyer-Overton correlation […] Speed of onset is inversely proportional to water solubility. The less soluble in water, the more rapidly an agent will take effect. […] Where immobility is produced at around 1.0 MAC, amnesia is produced at a much lower dose, typically 0.25 MAC, and unconsciousness at around 0.5 MAC. Therefore, a patient may move in response to a surgical stimulus without either being conscious of the stimulus, or remembering it afterwards.”

“The most useful way to estimate the body’s physiological reserve is to assess the patient’s tolerance for exercise. Exercise is a good model of the surgical stress response. The greater the patient’s tolerance for exercise, the better the perioperative outcome is likely to be […] For a smoker who is unable to quit, stopping for even a couple of days before the operation improves outcome. […] Dying ‘on the table’ during surgery is very unusual. Patients who die following surgery usually do so during convalescence, their weakened state making them susceptible to complications such as wound breakdown, chest infections, deep venous thrombosis, and pressure sores.”

“Mechanical ventilation is based on the principle of intermittent positive pressure ventilation (IPPV), gas being ‘blown’ into the patient’s lungs from the machine. […] Inflating a patient’s lungs is a delicate process. Healthy lung tissue is fragile, and can easily be damaged by overdistension (barotrauma). While healthy lung tissue is light and spongy, and easily inflated, diseased lung tissue may be heavy and waterlogged and difficult to inflate, and therefore may collapse, allowing blood to pass through it without exchanging any gases (this is known as shunt). Simply applying higher pressures may not be the answer: this may just overdistend adjacent areas of healthier lung. The ventilator must therefore provide a series of breaths whose volume and pressure are very closely controlled. Every aspect of a mechanical breath may now be adjusted by the anaesthetist: the volume, the pressure, the frequency, and the ratio of inspiratory time to expiratory time are only the basic factors.”

“All anaesthetic drugs are poisons. Remember that in achieving a state of anaesthesia you intend to poison someone, but not kill them – so give as little as possible. [Introductory quote to a chapter, from an Anaesthetics textbook – US] […] Other cells besides neurons use action potentials as the basis of cellular signalling. For example, the synchronized contraction of heart muscle is performed using action potentials, and action potentials are transmitted from nerves to skeletal muscle at the neuromuscular junction to initiate movement. Local anaesthetic drugs are therefore toxic to the heart and brain. In the heart, local anaesthetic drugs interfere with normal contraction, eventually stopping the heart. In the brain, toxicity causes seizures and coma. To avoid toxicity, the total dose is carefully limited”.

Links of interest:

General anaesthesia.
Muscle relaxant.
Arthur Ernest Guedel.
Guedel’s classification.
Beta rhythm.
Frances Burney.
Henry Hill Hickman.
Horace Wells.
William Thomas Green Morton.
Diethyl ether.
James Young Simpson.
Joseph Thomas Clover.
Inhalational anaesthetic.
Pulmonary aspiration.
Principles of Total Intravenous Anaesthesia (TIVA).
Patient-controlled analgesia.
Airway management.
Oropharyngeal airway.
Tracheal intubation.
Laryngeal mask airway.
Anaesthetic machine.
Soda lime.
Sodium thiopental.
Neuromuscular-blocking drug.
Gate control theory of pain.
Multimodal analgesia.
Hartmann’s solution (…what this is called seems to depend on whom you ask, but it’s called Hartmann’s solution in the book…).
Local anesthetic.
Karl Koller.
Regional anesthesia.
Spinal anaesthesia.
Epidural nerve block.
Intensive care medicine.
Bjørn Aage Ibsen.
Chronic pain.
Pain wind-up.
John Bonica.
Twilight sleep.
Veterinary anesthesia.
Pearse et al. (results of paper briefly discussed in the book).
Awareness under anaesthesia (skip the first page).
Pollard et al. (2007).
Postoperative nausea and vomiting.
Postoperative cognitive dysfunction.
Monk et al. (2008).
Malignant hyperthermia.
Suxamethonium apnoea.

February 13, 2017 Posted by | books, Chemistry, medicine, papers, Pharmacology |

Random stuff

i. Fire works a little differently than people imagine. A great ask-science comment. See also AugustusFink-nottle’s comment in the same thread.


iii. I was very conflicted about whether to link to this, because I haven’t actually spent any time looking at it myself, so I don’t know if it’s any good; but according to somebody (?) who linked to it on SSC, the people behind this stuff have academic backgrounds in evolutionary biology, which is something at least (whether you think this is a good thing or not will probably depend greatly on your opinion of evolutionary biologists, but I’ve definitely learned a lot more about human mating patterns, partner interaction patterns, etc. from evolutionary biologists than I have from personal experience, so I’m probably in the ‘they-sometimes-have-interesting-ideas-about-these-topics-and-those-ideas-may-not-be-terrible’ camp). I figure these guys are much more application-oriented than some of the previous sources I’ve read on related topics, such as Kappeler et al. I add the link mostly so that if, in five years’ time, I have a stroke that obliterates most of my decision-making skills, causing me to decide that entering the dating market might be a good idea, I’ll have some idea where it might make sense to start.

iv. Stereotype (In)Accuracy in Perceptions of Groups and Individuals.

“Are stereotypes accurate or inaccurate? We summarize evidence that stereotype accuracy is one of the largest and most replicable findings in social psychology. We address controversies in this literature, including the long-standing and continuing but unjustified emphasis on stereotype inaccuracy, how to define and assess stereotype accuracy, and whether stereotypic (vs. individuating) information can be used rationally in person perception. We conclude with suggestions for building theory and for future directions of stereotype (in)accuracy research.”

A few quotes from the paper:

“Demographic stereotypes are accurate. Research has consistently shown moderate to high levels of correspondence accuracy for demographic (e.g., race/ethnicity, gender) stereotypes […]. Nearly all accuracy correlations for consensual stereotypes about race/ethnicity and gender exceed .50 (compared to only 5% of social psychological findings; Richard, Bond, & Stokes-Zoota, 2003). […] Rather than being based in cultural myths, the shared component of stereotypes is often highly accurate. This pattern cannot be easily explained by motivational or social-constructionist theories of stereotypes and probably reflects a “wisdom of crowds” effect […] personal stereotypes are also quite accurate, with correspondence accuracy for roughly half exceeding r = .50.”

“We found 34 published studies of racial-, ethnic-, and gender-stereotype accuracy. Although not every study examined discrepancy scores, when they did, a plurality or majority of all consensual stereotype judgments were accurate. […] In these 34 studies, when stereotypes were inaccurate, there was more evidence of underestimating than overestimating actual demographic group differences […] Research assessing the accuracy of miscellaneous other stereotypes (e.g., about occupations, college majors, sororities, etc.) has generally found accuracy levels comparable to those for demographic stereotypes”

“A common claim […] is that even though many stereotypes accurately capture group means, they are still not accurate because group means cannot describe every individual group member. […] If people were rational, they would use stereotypes to judge individual targets when they lack information about targets’ unique personal characteristics (i.e., individuating information), when the stereotype itself is highly diagnostic (i.e., highly informative regarding the judgment), and when available individuating information is ambiguous or incompletely useful. People’s judgments robustly conform to rational predictions. In the rare situations in which a stereotype is highly diagnostic, people rely on it (e.g., Crawford, Jussim, Madon, Cain, & Stevens, 2011). When highly diagnostic individuating information is available, people overwhelmingly rely on it (Kunda & Thagard, 1996; effect size averaging r = .70). Stereotype biases average no higher than r = .10 (Jussim, 2012) but reach r = .25 in the absence of individuating information (Kunda & Thagard, 1996). The more diagnostic individuating information people have, the less they stereotype (Crawford et al., 2011; Krueger & Rothbart, 1988). Thus, people do not indiscriminately apply their stereotypes to all individual members of stereotyped groups.” (Funder incidentally talked about this stuff as well in his book Personality Judgment).

One thing worth mentioning in the context of stereotypes is that if you look at stuff like crime data – which sadly not many people do – and you stratify based on stuff like country of origin, then the subgroup differences you observe tend to be very large. The differences are often not on the order of something like 10%, which is probably the sort of difference that could easily be ignored without major consequences; some subgroup differences can easily amount to one or two orders of magnitude. In some contexts the differences are so large that assuming there are no differences is frankly indefensible; it simply doesn’t make sense. To give an example: in Germany, the probability that a random person, about whom you know nothing, has been a suspect in a thievery case is 22% if that random person happens to be of Algerian extraction, whereas it’s only 0.27% if you’re dealing with an immigrant from China. Roughly one in 13 of those Algerians has also been involved in a case of bodily harm, which is the case for fewer than one in 400 of the Chinese immigrants.
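To make the orders-of-magnitude point explicit, the implied ratios are trivial to compute (figures as quoted above; this is just arithmetic, nothing more):

```python
# Probabilities as quoted in the text above.
p_algerian_theft = 0.22      # suspect in a thievery case, Algerian extraction
p_chinese_theft = 0.0027     # same, immigrant from China
p_algerian_harm = 1 / 13     # involved in a bodily-harm case
p_chinese_harm = 1 / 400

theft_ratio = p_algerian_theft / p_chinese_theft
harm_ratio = p_algerian_harm / p_chinese_harm

print(round(theft_ratio))  # ~81: nearly two orders of magnitude
print(round(harm_ratio))   # ~31: more than one order of magnitude
```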

v. Assessing Immigrant Integration in Sweden after the May 2013 Riots. Some data from the article:

“Today, about one-fifth of Sweden’s population has an immigrant background, defined as those who were either born abroad or born in Sweden to two immigrant parents. The foreign born comprised 15.4 percent of the Swedish population in 2012, up from 11.3 percent in 2000 and 9.2 percent in 1990 […] Of the estimated 331,975 asylum applicants registered in EU countries in 2012, 43,865 (or 13 percent) were in Sweden. […] More than half of these applications were from Syrians, Somalis, Afghanis, Serbians, and Eritreans. […] One town of about 80,000 people, Södertälje, since the mid-2000s has taken in more Iraqi refugees than the United States and Canada combined.”

“Coupled with […] macroeconomic changes, the largely humanitarian nature of immigrant arrivals since the 1970s has posed challenges of labor market integration for Sweden, as refugees often arrive with low levels of education and transferable skills […] high unemployment rates have disproportionately affected immigrant communities in Sweden. In 2009-10, Sweden had the highest gap between native and immigrant employment rates among OECD countries. Approximately 63 percent of immigrants were employed compared to 76 percent of the native-born population. This 13 percentage-point gap is significantly greater than the OECD average […] Explanations for the gap include less work experience and domestic formal qualifications such as language skills among immigrants […] Among recent immigrants, defined as those who have been in the country for less than five years, the employment rate differed from that of the native born by more than 27 percentage points. In 2011, the Swedish newspaper Dagens Nyheter reported that 35 percent of the unemployed registered at the Swedish Public Employment Service were foreign born, up from 22 percent in 2005.”

“As immigrant populations have grown, Sweden has experienced a persistent level of segregation — among the highest in Western Europe. In 2008, 60 percent of native Swedes lived in areas where the majority of the population was also Swedish, and 20 percent lived in areas that were virtually 100 percent Swedish. In contrast, 20 percent of Sweden’s foreign born lived in areas where more than 40 percent of the population was also foreign born.”

vi. Book recommendations. Or rather, author recommendations. A while back I asked ‘the people of SSC’ if they knew of any fiction authors I hadn’t yet read who were both funny and easy to read. I got a lot of good suggestions, and the roughly 20 Dick Francis novels I’ve read during the fall were a consequence of that thread.

vii. On the genetic structure of Denmark.

viii. Religious Fundamentalism and Hostility against Out-groups: A Comparison of Muslims and Christians in Western Europe.

“On the basis of an original survey among native Christians and Muslims of Turkish and Moroccan origin in Germany, France, the Netherlands, Belgium, Austria and Sweden, this paper investigates four research questions comparing native Christians to Muslim immigrants: (1) the extent of religious fundamentalism; (2) its socio-economic determinants; (3) whether it can be distinguished from other indicators of religiosity; and (4) its relationship to hostility towards out-groups (homosexuals, Jews, the West, and Muslims). The results indicate that religious fundamentalist attitudes are much more widespread among Sunnite Muslims than among native Christians, even after controlling for the different demographic and socio-economic compositions of these groups. […] Fundamentalist believers […] show very high levels of out-group hostility, especially among Muslims.”

ix. Portal: Dinosaurs. It would have been so incredibly awesome to have had access to this kind of stuff back when I was a child. The portal includes links to articles with names like ‘Bone Wars‘ – what’s not to like?

x. “you can’t determine if something is truly random from observations alone. You can only determine if something is not truly random.” (link) An important insight well expressed.
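A toy illustration of this asymmetry (my own sketch, unrelated to the linked discussion): a statistical randomness test can reject a sequence, but a sequence that passes may still be entirely deterministic.

```python
import math
import random

def looks_nonrandom(bits, z_crit=4.0):
    """Monobit frequency test: flag a bit sequence whose 0/1 balance is
    wildly off. Failing the test shows non-randomness; passing shows nothing."""
    n = len(bits)
    ones = sum(bits)
    # Under a fair coin, ones ~ Binomial(n, 1/2); standardise to ~N(0, 1).
    z = abs(ones - n / 2) / math.sqrt(n / 4)
    return z > z_crit

biased = [1] * 9000 + [0] * 1000
assert looks_nonrandom(biased)          # rejected: clearly not coin-flip random

random.seed(42)
prng_bits = [random.randint(0, 1) for _ in range(10000)]
assert not looks_nonrandom(prng_bits)   # passes, yet fully deterministic
```

The PRNG output passes this (and most) tests while being completely reproducible from the seed, which is exactly the point: observations alone can falsify randomness but never verify it.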

xi. Chessprogramming. If you’re interested in having a look at how chess programs work, this is a neat resource. The wiki contains lots of links with information on specific sub-topics of interest. Also chess-related: The World Championship match between Carlsen and Karjakin has started. To the extent that I’ll be following the live coverage, I’ll be following Svidler et al.’s coverage on chess24. Robin van Kampen and Eric Hansen – both 2600+ Elo GMs – did quite well yesterday, in my opinion.
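For readers curious about the basic machinery such engines are built on, here is a minimal sketch (my own toy example, not taken from the wiki) of negamax search with alpha-beta pruning, run over a hand-made game tree rather than actual chess positions:

```python
def negamax(node, alpha=float("-inf"), beta=float("inf")):
    """Negamax search with alpha-beta pruning over a toy game tree.

    A node is either a number (a leaf evaluation, from the perspective of
    the side to move at that leaf) or a list of child nodes."""
    if isinstance(node, (int, float)):
        return node
    best = float("-inf")
    for child in node:
        score = -negamax(child, -beta, -alpha)  # flip sign: opponent's turn
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:  # remaining siblings cannot change the outcome
            break
    return best

# Depth-2 toy tree (leaves scored for the root player, who also moves at the
# leaves); the textbook minimax value of this tree is 3.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
assert negamax(tree) == 3
```

Real engines add move ordering, transposition tables, quiescence search and much else on top of this core loop, which is what the chessprogramming wiki catalogues.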

xii. Justified by More Than Logos Alone (Razib Khan).

“Very few are Roman Catholic because they have read Aquinas’ Five Ways. Rather, they are Roman Catholic, in order of necessity, because God aligns with their deep intuitions, basic cognitive needs in terms of cosmological coherency, and because the church serves as an avenue for socialization and repetitive ritual which binds individuals to the greater whole. People do not believe in Catholicism as often as they are born Catholics, and the Catholic religion is rather well fitted to a range of predispositions to the typical human.”

November 12, 2016 Posted by | books, Chemistry, Chess, data, dating, demographics, genetics, Geography, immigration, Paleontology, papers, Physics, Psychology, random stuff, religion |

Photosynthesis in the Marine Environment (III)

This will be my last post about the book. After having spent a few hours on the post I started to realize the post would become very long if I were to cover all the remaining chapters, and so in the end I decided not to discuss material from chapter 12 (‘How some marine plants modify the environment for other organisms’) here, even though I actually thought some of that stuff was quite interesting. I may decide to talk briefly about some of the stuff in that chapter in another blogpost later on (but most likely I won’t). For a few general remarks about the book, see my second post about it.

Some stuff from the last half of the book below:

“The light reactions of marine plants are similar to those of terrestrial plants […], except that pigments other than chlorophylls a and b and carotenoids may be involved in the capturing of light […] and that special arrangements between the two photosystems may be different […]. Similarly, the CO2-fixation and -reduction reactions are also basically the same in terrestrial and marine plants. Perhaps one should put this the other way around: Terrestrial-plant photosynthesis is similar to marine-plant photosynthesis, which is not surprising since plants have evolved in the oceans for 3.4 billion years and their descendants on land for only 350–400 million years. […] In underwater marine environments, the accessibility to CO2 is low mainly because of the low diffusivity of solutes in liquid media, and for CO2 this is exacerbated by today’s low […] ambient CO2 concentrations. Therefore, there is a need for a CCM also in marine plants […] CCMs in cyanobacteria are highly active and accumulation factors (the internal vs. external CO2 concentrations ratio) can be of the order of 800–900 […] CCMs in eukaryotic microalgae are not as effective at raising internal CO2 concentrations as are those in cyanobacteria, but […] microalgal CCMs result in CO2 accumulation factors as high as 180 […] CCMs are present in almost all marine plants. These CCMs are based mainly on various forms of HCO3 [bicarbonate] utilisation, and may raise the intrachloroplast (or, in cyanobacteria, intracellular or intra-carboxysome) CO2 to several-fold that of seawater. Thus, Rubisco is in effect often saturated by CO2, and photorespiration is therefore often absent or limited in marine plants.”

“we view the main difference in photosynthesis between marine and terrestrial plants as the latter’s ability to acquire Ci [inorganic carbon] (in most cases HCO3) from the external medium and concentrate it intracellularly in order to optimise their photosynthetic rates or, in some cases, to be able to photosynthesise at all. […] CO2 dissolved in seawater is, under air-equilibrated conditions and given today’s seawater pH, in equilibrium with a >100 times higher concentration of HCO3, and it is therefore not surprising that most marine plants utilise the latter Ci form for their photosynthetic needs. […] any plant that utilises bulk HCO3 from seawater must convert it to CO2 somewhere along its path to Rubisco. This can be done in different ways by different plants and under different conditions”

“The conclusion that macroalgae use HCO3 stems largely from results of experiments in which concentrations of CO2 and HCO3 were altered (chiefly by altering the pH of the seawater) while measuring photosynthetic rates, or where the plants themselves withdrew these Ci forms as they photosynthesised in a closed system as manifested by a pH increase (so-called pH-drift experiments) […] The reason that the pH in the surrounding seawater increases as plants photosynthesise is first that CO2 is in equilibrium with carbonic acid (H2CO3), and so the acidity decreases (i.e. pH rises) as CO2 is used up. At higher pH values (above ∼9), when all the CO2 is used up, then a decrease in HCO3 concentrations will also result in increased pH since the alkalinity is maintained by the formation of OH […] some algae can also give off OH to the seawater medium in exchange for HCO3 uptake, bringing the pH up even further (to >10).”
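As a back-of-the-envelope illustration of why consuming CO2 drives the pH up, one can plug numbers into the first carbonic-acid dissociation equilibrium. This sketch is my own, and the pK1 value and concentrations are assumed round numbers roughly appropriate for seawater, not figures from the book:

```python
import math

# Henderson-Hasselbalch form of the first carbonic-acid dissociation:
# pH = pK1 + log10([HCO3-] / [CO2]).  pK1 = 6.0 is an assumed round number
# in the right neighbourhood for seawater, not a figure from the book.
PK1 = 6.0

def ph(hco3, co2):
    return PK1 + math.log10(hco3 / co2)

# Air-equilibrated seawater: HCO3- roughly 100x the dissolved CO2 (as quoted
# above); absolute concentrations here are illustrative, in mol/L.
start = ph(hco3=2000e-6, co2=20e-6)      # 6.0 + log10(100) = 8.0
# Photosynthesis draws CO2 down while barely denting the HCO3- pool,
# so the HCO3-/CO2 ratio, and hence the pH, drifts upward.
after_drift = ph(hco3=1900e-6, co2=2e-6)

assert after_drift > start
```

This is the CO2-consumption part of the pH-drift only; the further rise above pH ~9, where HCO3 withdrawal and OH release take over, involves the full carbonate-system alkalinity and is not captured by this one-equation sketch.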

“Carbonic anhydrase (CA) is a ubiquitous enzyme, found in all organisms investigated so far (from bacteria, through plants, to mammals such as ourselves). This may be seen as remarkable, since its only function is to catalyse the inter-conversion between CO2 and HCO3 in the reaction CO2 + H2O ↔ H2CO3; we can exchange the latter Ci form to HCO3 since this is spontaneously formed by H2CO3 and is present at a much higher equilibrium concentration than the latter. Without CA, the equilibrium between CO2 and HCO3 is a slow process […], but in the presence of CA the reaction becomes virtually instantaneous. Since CO2 and HCO3 generate different pH values of a solution, one of the roles of CA is to regulate intracellular pH […] another […] function is to convert HCO3 to CO2 somewhere en route towards the latter’s final fixation by Rubisco.”

“with very few […] exceptions, marine macrophytes are not C4 plants. Also, while a CAM-like [Crassulacean acid metabolism-like, see my previous post about the book for details] feature of nightly uptake of Ci may complement that of the day in some brown algal kelps, this is an exception […] rather than a rule for macroalgae in general. Thus, virtually no marine macroalgae are C4 or CAM plants, and instead their CCMs are dependent on HCO3 utilization, which brings about high concentrations of CO2 in the vicinity of Rubisco. In Ulva, this type of CCM causes the intra-cellular CO2 concentration to be some 200 μM, i.e. ∼15 times higher than that in seawater.”

“deposition of calcium carbonate (CaCO3) as either calcite or aragonite in marine organisms […] can occur within the cells, but for macroalgae it usually occurs outside of the cell membranes, i.e. in the cell walls or other intercellular spaces. The calcification (i.e. CaCO3 formation) can sometimes continue in darkness, but is normally greatly stimulated in light and follows the rate of photosynthesis. During photosynthesis, the uptake of CO2 will lower the total amount of dissolved inorganic carbon (Ci) and, thus, increase the pH in the seawater surrounding the cells, thereby increasing the saturation state of CaCO3. This, in turn, favours calcification […]. Conversely, it has been suggested that calcification might enhance the photosynthetic rate by increasing the rate of conversion of HCO3 to CO2 by lowering the pH. Respiration will reduce calcification rates when released CO2 increases Ci and/but lowers intercellular pH.”

“photosynthesis is most efficient at very low irradiances and increasingly inefficient as irradiances increase. This is most easily understood if we regard ‘efficiency’ as being dependent on quantum yield: At low ambient irradiances (the light that causes photosynthesis is also called ‘actinic’ light), almost all the photon energy conveyed through the antennae will result in electron flow through (or charge separation at) the reaction centres of photosystem II […]. Another way to put this is that the chances for energy funneled through the antennae to encounter an oxidised (or ‘open’) reaction centre are very high. Consequently, almost all of the photons emitted by the modulated measuring light will be consumed in photosynthesis, and very little of that photon energy will be used for generating fluorescence […] the higher the ambient (or actinic) light, the less efficient is photosynthesis (quantum yields are lower), and the less likely it is for photon energy funnelled through the antennae (including those from the measuring light) to find an open reaction centre, and so the fluorescence generated by the latter light increases […] Alpha (α), which is a measure of the maximal photosynthetic efficiency (or quantum yield, i.e. photosynthetic output per photons received, or absorbed […] by a specific leaf/thallus area), is high in low-light plants because pigment levels (or pigment densities per surface area) are high. In other words, under low-irradiance conditions where few photons are available, the probability that they will all be absorbed is higher in plants with a high density of photosynthetic pigments (or larger ‘antennae’ […]). In yet other words, efficient photon absorption is particularly important at low irradiances, where the higher concentration of pigments potentially optimises photosynthesis in low-light plants.
In high-irradiance environments, where photons are plentiful, their efficient absorption becomes less important, and instead it is reactions downstream of the light reactions that become important in the performance of optimal rates of photosynthesis. The CO2-fixing capability of the enzyme Rubisco, which we have indicated as a bottleneck for the entire photosynthetic apparatus at high irradiances, is indeed generally higher in high-light than in low-light plants because of its higher concentration in the former. So, at high irradiances where the photon flux is not limiting to photosynthetic rates, the activity of Rubisco within the CO2-fixation and -reduction part of photosynthesis becomes limiting, but is optimised in high-light plants by up-regulation of its formation. […] photosynthetic responses have often been explained in terms of adaptation to low light being brought about by alterations in either the number of ‘photosynthetic units’ or their size […] There are good examples of both strategies occurring in different species of algae”.
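A standard way to formalize the α/Pmax trade-off described above is the Jassby–Platt tanh formulation of the photosynthesis–irradiance (P–I) curve. This particular equation and the parameter values below are my own additions, chosen purely for illustration, not something taken from the book:

```python
import math

def p_rate(irradiance, alpha, p_max):
    """Jassby-Platt model: P = Pmax * tanh(alpha * I / Pmax).

    alpha -- initial slope of the P-I curve (maximal quantum efficiency)
    p_max -- light-saturated rate (set by downstream capacity, e.g. Rubisco)
    """
    return p_max * math.tanh(alpha * irradiance / p_max)

# A 'low-light' plant: large antennae (high alpha) but little Rubisco (low
# Pmax); a 'high-light' plant: the reverse.  All numbers are invented.
low_at_10 = p_rate(10, alpha=0.5, p_max=5.0)        # wins at low irradiance
high_at_10 = p_rate(10, alpha=0.2, p_max=20.0)
low_at_1000 = p_rate(1000, alpha=0.5, p_max=5.0)    # saturates near 5
high_at_1000 = p_rate(1000, alpha=0.2, p_max=20.0)  # saturates near 20
```

The crossover between the two curves is the quantitative version of the verbal argument in the quote: pigments pay off where photons are scarce, Rubisco pays off where they are plentiful.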

“In general, photoinhibition can be defined as the lowering of photosynthetic rates at high irradiances. This is mainly due to the rapid (sometimes within minutes) degradation of […] the D1 protein. […] there are defense mechanisms [in plants] that divert excess light energy to processes different from photosynthesis; these processes thus cause a downregulation of the entire photosynthetic process while protecting the photosynthetic machinery from excess photons that could cause damage. One such process is the xanthophyll cycle. […] It has […] been suggested that the activity of the CCM in marine plants […] can be a source of energy dissipation. If CO2 levels are raised inside the cells to improve Rubisco activity, some of that CO2 can potentially leak out of the cells, and so raising the net energy cost of CO2 accumulation and, thus, using up large amounts of energy […]. Indirect evidence for this comes from experiments in which CCM activity is down-regulated by elevated CO2”.

“Photoinhibition is often divided into dynamic and chronic types, i.e. the former is quickly remedied (e.g. during the day[…]) while the latter is more persistent (e.g. over seasons […] the mechanisms for down-regulating photosynthesis by diverting photon energies and the reducing power of electrons away from the photosynthetic systems, including the possibility of detoxifying oxygen radicals, is important in high-light plants (that experience high irradiances during midday) as well as in those plants that do see significant fluctuations in irradiance throughout the day (e.g. intertidal benthic plants). While low-light plants may lack those systems of down-regulation, one must remember that they do not live in environments of high irradiances, and so seldom or never experience high irradiances. […] If plants had a mind, one could say that it was worth it for them to invest in pigments, but unnecessary to invest in high amounts of Rubisco, when growing under low-light conditions, and necessary for high-light growing plants to invest in Rubisco, but not in pigments. Evolution has, of course, shaped these responses”.

“shallow-growing corals […] show two types of photoinhibition: a dynamic type that remedies itself at the end of each day and a more chronic type that persists over longer time periods. […] Bleaching of corals occurs when they expel their zooxanthellae to the surrounding water, after which they either die or acquire new zooxanthellae of other types (or clades) that are better adapted to the changes in the environment that caused the bleaching. […] Active Ci acquisition mechanisms, whether based on localised active H+ extrusion and acidification and enhanced CO2 supply, or on active transport of HCO3, are all energy requiring. As a consequence it is not surprising that the CCM activity is decreased at lower light levels […] a whole spectrum of light-responses can be found in seagrasses, and those are often in co-ordinance with the average daily irradiances where they grow. […] The function of chloroplast clumping in Halophila stipulacea appears to be protection of the chloroplasts from high irradiances. Thus, a few peripheral chloroplasts ‘sacrifice’ themselves for the good of many others within the clump that will be exposed to lower irradiances. […] While water is an effective filter of UV radiation (UVR)2, many marine organisms are sensitive to UVR and have devised ways to protect themselves against this harmful radiation. These ways include the production of UV-filtering compounds called mycosporine-like amino acids (MAAs), which is common also in seagrasses”.

“Many algae and seagrasses grow in the intertidal and are, accordingly, exposed to air during various parts of the day. On the one hand, this makes them amenable to using atmospheric CO2, the diffusion rate of which is some 10 000 times higher in air than in water. […] desiccation is […] the big drawback when growing in the intertidal, and excessive desiccation will lead to death. When some of the green macroalgae left the seas and formed terrestrial plants some 400 million years ago (the latter of which then ‘invaded’ Earth), there was a need for measures to evolve that on the one side ensured a water supply to the above-ground parts of the plants (i.e. roots1) and, on the other, hindered the water entering the plants to evaporate (i.e. a water-impermeable cuticle). Macroalgae lack those barriers against losing intracellular water, and are thus more prone to desiccation, the rate of which depends on external factors such as heat and humidity and internal factors such as thallus thickness. […] the mechanisms of desiccation tolerance in macroalgae is not well understood on the cellular level […] there seems to be a general correlation between the sensitivity of the photosynthetic apparatus (more than the respiratory one) to desiccation and the occurrence of macroalgae along a vertical gradient in the intertidal: the less sensitive (i.e. the more tolerant), the higher up the algae can grow. This is especially true if the sensitivity to desiccation is measured as a function of the ability to regain photosynthetic rates following rehydration during re-submergence. While this correlation exists, the mechanism of protecting the photosynthetic system against desiccation is largely unknown”.

July 28, 2015 Posted by | biology, books, Botany, Chemistry, evolution | Leave a comment

Photosynthesis in the Marine Environment (II)

Here’s my first post about the book. I gave the book four stars on goodreads – here’s a link to my short goodreads review of the book.

As pointed out in the review, ‘it’s really mostly a biochemistry text.’ At least there’s a lot of that stuff in there (‘it gets better towards the end’, would be one way to put it – the last chapters deal mostly with other topics, such as measurement and brief notes on some not-particularly-well-explored ecological dynamics of potential interest), and if you don’t want to read a book which deals in some detail with topics and concepts like alkalinity, crassulacean acid metabolism, photophosphorylation, photosynthetic reaction centres, Calvin cycle (also known straightforwardly as the ‘reductive pentose phosphate cycle’…), enzymes with names like Ribulose-1,5-bisphosphate carboxylase/oxygenase (‘RuBisCO’ among friends…) and phosphoenolpyruvate carboxylase (‘PEP-case’ among friends…), mycosporine-like amino acid, 4,4′-Diisothiocyanatostilbene-2,2′-disulfonic acid (‘DIDS’ among friends), phosphoenolpyruvate, photorespiration, carbonic anhydrase, C4 carbon fixation, cytochrome b6f complex, … – well, you should definitely not read this book. If you do feel like reading about these sorts of things, having a look at the book seems to me a better idea than reading the wiki articles.

I’m not a biochemist but I could follow a great deal of what was going on in this book, which is perhaps a good indication of how well written the book is. This stuff’s interesting and complicated, and the authors cover most of it quite well. The book has way too much stuff for it to make sense to cover all of it here, but I do want to cover some more stuff from the book, so I’ve added some quotes below.

“Water velocities are central to marine photosynthetic organisms because they affect the transport of nutrients such as Ci [inorganic carbon] towards the photosynthesising cells, as well as the removal of by-products such as excess O2 during the day. Such bulk transport is especially important in aquatic media since diffusion rates there are typically some 10 000 times lower than in air […] It has been established that increasing current velocities will increase photosynthetic rates and, thus, productivity of macrophytes as long as they do not disrupt the thalli of macroalgae or the leaves of seagrasses”.

“Photosynthesis is the process by which the energy of light is used in order to form energy-rich organic compounds from low-energy inorganic compounds. In doing so, electrons from water (H2O) reduce carbon dioxide (CO2) to carbohydrates. […] The process of photosynthesis can conveniently be separated into two parts: the ‘photo’ part in which light energy is converted into chemical energy bound in the molecule ATP and reducing power is formed as NADPH [another friend with a long name], and the ‘synthesis’ part in which that ATP and NADPH are used in order to reduce CO2 to sugars […]. The ‘photo’ part of photosynthesis is, for obvious reasons, also called its light reactions while the ‘synthesis’ part can be termed CO2-fixation and -reduction, or the Calvin cycle after one of its discoverers; this part also used to be called the ‘dark reactions’ [or light-independent reactions] of photosynthesis because it can proceed in vitro (= outside the living cell, e.g. in a test-tube) in darkness provided that ATP and NADPH are added artificially. […] ATP and NADPH are the energy source and reducing power, respectively, formed by the light reactions, that are subsequently used in order to reduce carbon dioxide (CO2) to sugars (synonymous with carbohydrates) in the Calvin cycle. Molecular oxygen (O2) is formed as a by-product of photosynthesis.”
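The division of labour in that quote implies some simple bookkeeping. Using the standard textbook stoichiometry of 3 ATP and 2 NADPH consumed per CO2 reduced (general-biochemistry numbers, not figures given in the passage above), the cost of one glucose comes out as:

```python
# Textbook Calvin-cycle cost per CO2 reduced: 3 ATP and 2 NADPH, both
# supplied by the light reactions.  (Assumed general-biochemistry values.)
ATP_PER_CO2 = 3
NADPH_PER_CO2 = 2

co2_per_glucose = 6          # 6 CO2 + 6 H2O -> C6H12O6 + 6 O2
atp_needed = co2_per_glucose * ATP_PER_CO2       # 18 ATP
nadph_needed = co2_per_glucose * NADPH_PER_CO2   # 12 NADPH
o2_released = 6              # O2, the by-product of the light reactions
```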

“In photosynthetic bacteria (such as the cyanobacteria), the light reactions are located at the plasma membrane and internal membranes derived as invaginations of the plasma membrane. […] most of the CO2-fixing enzyme ribulose-bisphosphate carboxylase/oxygenase […] is here located in structures termed carboxysomes. […] In all other plants (including algae), however, the entire process of photosynthesis takes place within intracellular compartments called chloroplasts which, as the name suggests, are chlorophyll-containing plastids (plastids are those compartments in cells that are associated with photosynthesis).”

“Photosynthesis can be seen as a process in which part of the radiant energy from sunlight is ‘harvested’ by plants in order to supply chemical energy for growth. The first step in such light harvesting is the absorption of photons by photosynthetic pigments[1]. The photosynthetic pigments are special in that they not only convert the energy of absorbed photons to heat (as do most other pigments), but largely convert photon energy into a flow of electrons; the latter is ultimately used to provide chemical energy to reduce CO2 to carbohydrates. […] Pigments are substances that can absorb different wavelengths selectively and so appear as the colour of those photons that are less well absorbed (and, therefore, are reflected, or transmitted, back to our eyes). (An object is black if all photons are absorbed, and white if none are absorbed.) In plants and animals, the pigment molecules within the cells and their organelles thus give them certain colours. The green colour of many plant parts is due to the selective absorption of chlorophylls […], while other substances give colour to, e.g. flowers or fruits. […] Chlorophyll is a major photosynthetic pigment, and chlorophyll a is present in all plants, including all algae and the cyanobacteria. […] The molecular sub-structure of the chlorophyll’s ‘head’ makes it absorb mainly blue and red light […], while green photons are hardly absorbed but, rather, reflected back to our eyes […] so that chlorophyll-containing plant parts look green. […] In addition to chlorophyll a, all plants contain carotenoids […] All these accessory pigments act to fill in the ‘green window’ generated by the chlorophylls’ non-absorbance in that band […] and, thus, broaden the spectrum of light that can be utilized […] beyond that absorbed by chlorophyll.”

“Photosynthesis is principally a redox process in which carbon dioxide (CO2) is reduced to carbohydrates (or, in a shorter word, sugars) by electrons derived from water. […] since water has an energy level (or redox potential) that is much lower than that of sugar, or, more precisely, than that of the compound that finally reduces CO2 to sugars (i.e. NADPH), it follows that energy must be expended in the process; this energy stems from the photons of light. […] Redox reactions are those reactions in which one compound, B, becomes reduced by receiving electrons from another compound, A, the latter then becomes oxidised by donating the electrons to B. The reduction of B can only occur if the electron-donating compound A has a higher energy level, or […] has a redox potential that is higher, or more negative in terms of electron volts, than that of compound B. The redox potential, or reduction potential, […] can thus be seen as a measure of the ease by which a compound can become reduced […] the greater the difference in redox potential between compounds B and A, the greater the tendency that B will be reduced by A. In photosynthesis, the redox potential of the compound that finally reduces CO2, i.e. NADPH, is more negative than that from which the electrons for this reduction stems, i.e. H2O, and the entire process can therefore not occur spontaneously. Instead, light energy is used in order to boost electrons from H2O through intermediary compounds to such high redox potentials that they can, eventually, be used for CO2 reduction. In essence, then, the light reactions of photosynthesis describe how photon energy is used to boost electrons from H2O to an energy level (or redox potential) high (or negative) enough to reduce CO2 to sugars.”
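To put a number on the ‘boost’ the passage describes: with the standard midpoint potentials of roughly +0.82 V for the O2/H2O couple and −0.32 V for NADP+/NADPH (textbook values, not figures quoted in the book), the uphill free-energy change per NADPH can be sketched as:

```python
# Delta G = -n * F * Delta E, with Delta E = E(acceptor) - E(donor).
# The midpoint potentials are assumed standard textbook values.
F = 96485            # Faraday constant, C/mol
E_DONOR = 0.82       # V, O2/H2O couple (the electron source)
E_ACCEPTOR = -0.32   # V, NADP+/NADPH couple (the final acceptor)
n = 2                # electrons transferred per NADPH

delta_e = E_ACCEPTOR - E_DONOR        # -1.14 V
delta_g_kj = -n * F * delta_e / 1000  # about +220 kJ/mol: endergonic,
                                      # hence the need for photon energy
```

A positive ΔG is exactly the ‘cannot occur spontaneously’ of the quote: photons have to supply the roughly 220 kJ per mole of NADPH formed.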

“Fluorescence in general is the generation of light (emission of photons) from the energy released during de-excitation of matter previously excited by electromagnetic energy. In photosynthesis, fluorescence occurs as electrons of chlorophyll undergo de-excitation, i.e. return to the original orbital from which they were knocked out by photons. […] there is an inverse (or negative) correlation between fluorescence yield (i.e. the amount of fluorescence generated per photons absorbed by chlorophyll) and photosynthetic yield (i.e. the amount of photosynthesis performed per photons similarly absorbed).”
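That inverse correlation is what PAM (pulse-amplitude-modulated) fluorometry exploits in practice; the effective PSII quantum yield is conventionally computed as (Fm′ − F)/Fm′. The formula is standard in the fluorometry literature; the fluorescence readings below are invented for illustration:

```python
def phi_psii(f, fm_prime):
    """Effective PSII quantum yield from chlorophyll fluorescence:
    (Fm' - F) / Fm', where F is steady-state fluorescence under actinic
    light and Fm' is maximal fluorescence during a saturating pulse."""
    return (fm_prime - f) / fm_prime

# Low actinic light: most reaction centres open, F well below Fm', high yield.
low_light_yield = phi_psii(f=300, fm_prime=1500)   # 0.8
# High actinic light: F climbs toward Fm', yield drops.
high_light_yield = phi_psii(f=900, fm_prime=1500)  # 0.4
```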

“In some cases, more photon energy is received by a plant than can be used for photosynthesis, and this can lead to photo-inhibition or photo-damage […]. Therefore, many plants exposed to high irradiances possess ways of dissipating such excess light energy, the most well known of which is the xanthophyll cycle. In principle, energy is shuttled between various carotenoids collectively called xanthophylls and is, in the process, dissipated as heat.”

“In order to ‘fix’ CO2 (= incorporate it into organic matter within the cell) and reduce it to sugars, the NADPH and ATP formed in the light reactions are used in a series of chemical reactions that take place in the stroma of the chloroplasts (or, in prokaryotic autotrophs such as cyanobacteria, the cytoplasm of the cells); each reaction is catalysed by its specific enzyme, and the bottleneck for the production of carbohydrates is often considered to be the enzyme involved in its first step, i.e. the fixation of CO2 [this enzyme is RubisCO] […] These CO2-fixation and -reduction reactions are known as the Calvin cycle […] or the C3 cycle […] The latter name stems from the fact that the first stable product of CO2 fixation in the cycle is a 3-carbon compound called phosphoglyceric acid (PGA): Carbon dioxide in the stroma is fixed onto a 5-carbon sugar called ribulose-bisphosphate (RuBP) in order to form 2 molecules of PGA […] It should be noted that this reaction does not produce a reduced, energy-rich, carbon compound, but is only the first, ‘CO2– fixing’, step of the Calvin cycle. In subsequent steps, PGA is energized by the ATP formed through photophosphorylation and is reduced by NADPH […] to form a 3-carbon phosphorylated sugar […] here denoted simply as triose phosphate (TP); these reactions can be called the CO2-reduction step of the Calvin cycle […] 1/6 of the TPs formed leave the cycle while 5/6 are needed in order to re-form RuBP molecules in what we can call the regeneration part of the cycle […]; it is this recycling of most of the final product of the Calvin cycle (i.e. TP) to re-form RuBP that lends it to be called a biochemical ‘cycle’ rather than a pathway.”
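The carbon arithmetic in that last part is easy to verify. A minimal sanity check in Python, following the passage’s own numbers for one ‘turn’ of the cycle at 3 CO2 fixed:

```python
# Carbon bookkeeping for the Calvin cycle at 3 CO2 fixed, following the
# quoted passage: carbon in must equal carbon out at every step.
co2_in = 3 * 1     # 3 CO2, 1 carbon each
rubp_in = 3 * 5    # 3 RuBP acceptor molecules, 5 carbons each
pga = 6 * 3        # fixation yields 6 PGA, 3 carbons each
tp = 6 * 3         # reduction yields 6 triose phosphates (TP), 3 C each

assert co2_in + rubp_in == pga == tp   # 18 carbons throughout

tp_exported = 1 * 3    # 1/6 of the TPs leave the cycle as product
tp_recycled = 5 * 3    # 5/6 regenerate the acceptor
assert tp_recycled == 3 * 5            # exactly 3 fresh RuBP (15 C)
assert tp_exported == co2_in           # net gain equals the CO2 fixed
```

The last assertion is the point of the whole cycle: for every three CO2 fixed, exactly one three-carbon TP can be exported, and everything else has to be recycled to keep the acceptor pool topped up.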

“Rubisco […] not only functions as a carboxylase, but […] also acts as an oxygenase […] When Rubisco reacts with oxygen instead of CO2, only 1 molecule of PGA is formed together with 1 molecule of the 2-carbon compound phosphoglycolate […] Not only is there no gain in organic carbon by this reaction, but CO2 is actually lost in the further metabolism of phosphoglycolate, which comprises a series of reactions termed photorespiration […] While photorespiration is a complex process […] it is also an apparently wasteful one […] and it is not known why this process has evolved in plants altogether. […] Photorespiration can reduce the net photosynthetic production by up to 25%.”

“Because of Rubisco’s low affinity to CO2 as compared with the low atmospheric, and even lower intracellular, CO2 concentration […], systems have evolved in some plants by which CO2 can be concentrated at the vicinity of this enzyme; these systems are accordingly termed CO2 concentrating mechanisms (CCM). For terrestrial plants, this need for concentrating CO2 is exacerbated in those that grow in hot and/or arid areas where water needs to be saved by partly or fully closing stomata during the day, thus restricting also the influx of CO2 from an already CO2-limiting atmosphere. Two such CCMs exist in terrestrial plants: the C4 cycle and the Crassulacean acid metabolism (CAM) pathway. […] The C4 cycle is called so because the first stable product of CO2-fixation is not the 3-carbon compound PGA (as in the Calvin cycle) but, rather, malic acid (often referred to by its anion malate) or aspartic acid (or its anion aspartate), both of which are 4-carbon compounds. […] C4 [terrestrial] plants are […] more common in areas of high temperature, especially when accompanied with scarce rains, than in areas with higher rainfall […] While atmospheric CO2 is fixed […] via the C4 cycle, it should be noted that this biochemical cycle cannot reduce CO2 to high energy containing sugars […] since the Calvin cycle is the only biochemical system that can reduce CO2 to energy-rich carbohydrates in plants, it follows that the CO2 initially fixed by the C4 cycle […] is finally reduced via the Calvin cycle also in C4 plants. In summary, the C4 cycle can be viewed as being an additional CO2 sequesterer, or a biochemical CO2 ‘pump’, that concentrates CO2 for the rather inefficient enzyme Rubisco in C4 plants that grow under conditions where the CO2 supply is extremely limited because partly closed stomata restrict its influx into the photosynthesising cells.”

“Crassulacean acid metabolism (CAM) is similar to the C4 cycle in that atmospheric CO2 […] is initially fixed via PEP-case into the 4-carbon compound malate. However, this fixation is carried out during the night […] The ecological advantage behind CAM metabolism is that a CAM plant can grow, or at least survive, under prolonged (sometimes months) conditions of severe water stress. […] CAM plants are typical of the desert flora, and include most cacti. […] The principal difference between C4 and CAM metabolism is that in C4 plants the initial fixation of atmospheric CO2 and its final fixation and reduction in the Calvin cycle is separated in space (between mesophyll and bundle-sheath cells) while in CAM plants the two processes are separated in time (between the initial fixation of CO2 during the night and its re-fixation and reduction during the day).”

July 20, 2015 Posted by | biology, Botany, Chemistry | Leave a comment

Khan Academy videos of interest

I assume that not all of the five videos below are equally easy to understand for people who’ve not watched the previous ones in the various relevant playlists, but this is the stuff I’ve been watching lately and you should know where to look by now if something isn’t perfectly clear. I incidentally covered some relevant background material previously on the blog – if concepts from chemistry like ‘oxidation states’ are a bit far away, a couple of the videos in that post may be helpful.

I stopped caring much when I reached the 1 million mark (until they introduced the Kepler badge – then I started caring a little again until I’d gotten that one), but I noticed today that I’m at this point almost at the 1.5 million energy points mark (1,487,776). I’ve watched approximately 400 videos at the site by now.

Here’s a semi-related link with some good news: Khan Academy Launches First State-Wide Pilot In Idaho.

March 7, 2013 Posted by | biology, Chemistry, Khan Academy, Lectures | 2 Comments

Wikipedia articles of interest

i. 2,4-Dinitrophenol.

2,4-Dinitrophenol (DNP), C6H4N2O5, is an inhibitor of efficient energy (ATP) production in cells with mitochondria. It uncouples oxidative phosphorylation by carrying protons across the mitochondrial membrane, leading to a rapid consumption of energy without generation of ATP. […]

DNP was used extensively in diet pills from 1933 to 1938 after Cutting and Tainter at Stanford University made their first report on the drug’s ability to greatly increase metabolic rate.[3][4] After only its first year on the market Tainter estimated that probably at least 100,000 persons had been treated with DNP in the United States, in addition to many others abroad.[5] DNP acts as a protonophore, allowing protons to leak across the inner mitochondrial membrane and thus bypass ATP synthase. This makes ATP energy production less efficient. In effect, part of the energy that is normally produced from cellular respiration is wasted as heat. The inefficiency is proportional to the dose of DNP that is taken. As the dose increases and energy production is made more inefficient, metabolic rate increases (and more fat is burned) in order to compensate for the inefficiency and meet energy demands. DNP is probably the best known agent for uncoupling oxidative phosphorylation. The production or “phosphorylation” of ATP by ATP synthase gets disconnected or “uncoupled” from oxidation. Interestingly, the factor that limits ever-increasing doses of DNP is not a lack of ATP energy production, but rather an excessive rise in body temperature due to the heat produced during uncoupling. Accordingly, DNP overdose will cause fatal hyperthermia. In light of this, it’s advised that the dose be slowly titrated according to personal tolerance, which varies greatly.[6] Case reports have shown that an acute administration of 20–50 mg/kg in humans can be lethal.[7] Concerns about dangerous side-effects and rapidly developing cataracts resulted in DNP being discontinued in the United States by the end of 1938. DNP, however, continues to be used by some bodybuilders and athletes to rapidly lose body fat. Fatal overdoses are rare, but are still reported on occasion. These include cases of accidental exposure,[8] suicide,[7][9][10] and excessive intentional exposure.[9][11][12] […]

While DNP itself is considered by many to be too risky for human use, its mechanism of action remains under investigation as a potential approach for treating obesity.[19]

ii. Opium. Long article with lots of good stuff.

“The most important reason for the increase in opiate consumption in the United States during the 19th century was the prescribing and dispensing of legal opiates by physicians and pharmacists to women with ‘female problems’ (mostly to relieve menstrual pain). Between 150,000 and 200,000 opiate addicts lived in the United States in the late 19th century and between two-thirds and three-quarters of these addicts were women.[35] […]

After the 1757 Battle of Plassey and 1764 Battle of Buxar, the British East India Company gained the power to act as diwan of Bengal, Bihar, and Orissa (See company rule in India). This allowed the company to exercise a monopoly over opium production and export in India, to encourage ryots to cultivate the cash crops of indigo and opium with cash advances, and to prohibit the “hoarding” of rice. This strategy led to the increase of the land tax to 50% of the value of crops and to the doubling of East India Company profits by 1777. It is also claimed to have contributed to the starvation of ten million people in the Bengal famine of 1770. Beginning in 1773, the British government began enacting oversight of the company’s operations, and in response to the Indian Rebellion of 1857 this policy culminated in the establishment of direct rule over the Presidencies and provinces of British India. Bengal opium was highly prized, commanding twice the price of the domestic Chinese product, which was regarded as inferior in quality.[47]

Some competition came from the newly independent United States, which began to compete in Guangzhou (Canton) selling Turkish opium in the 1820s. Portuguese traders also brought opium from the independent Malwa states of western India, although by 1820, the British were able to restrict this trade by charging “pass duty” on the opium when it was forced to pass through Bombay to reach an entrepot.[17] Despite drastic penalties and continued prohibition of opium until 1860, opium importation rose steadily from 200 chests per year under Yongzheng to 1,000 under Qianlong, 4,000 under Jiaqing, and 30,000 under Daoguang.[48] The illegal sale of opium became one of the world’s most valuable single commodity trades and has been called “the most long continued and systematic international crime of modern times.”[49]

In response to the ever-growing number of Chinese people becoming addicted to opium, Daoguang of the Qing Dynasty took strong action to halt the import of opium, including the seizure of cargo. In 1838, the Chinese Commissioner Lin Zexu destroyed 20,000 chests of opium in Guangzhou (Canton).[17] Given that a chest of opium was worth nearly $1,000 in 1800, this was a substantial economic loss. The British, not willing to replace the cheap opium with costly silver, began the First Opium War in 1840, the British winning Hong Kong and trade concessions in the first of a series of Unequal Treaties.

Following China’s defeat in the Second Opium War in 1858, China was forced to legalize opium and began massive domestic production. Importation of opium peaked in 1879 at 6,700 tons, and by 1906, China was producing 85% of the world’s opium, some 35,000 tons, and 27% of its adult male population regularly used opium —13.5 million people consuming 39,000 tons of opium yearly.[47] From 1880 to the beginning of the Communist era, Britain attempted to discourage the use of opium in China, but this effectively promoted the use of morphine, heroin, and cocaine, further exacerbating the problem of addiction.[50] […]

iii. Metallicity.

“In astronomy and physical cosmology, the metallicity (also called Z[1]) of an object is the proportion of its matter made up of chemical elements other than hydrogen and helium. Since stars, which comprise most of the visible matter in the universe, are composed mostly of hydrogen and helium, astronomers use for convenience the blanket term “metal” to describe all other elements collectively.[2] Thus, a nebula rich in carbon, nitrogen, oxygen, and neon would be “metal-rich” in astrophysical terms even though those elements are non-metals in chemistry. This term should not be confused with the usual definition of “metal”; metallic bonds are impossible within stars, and the very strongest chemical bonds are only possible in the outer layers of cool K and M stars. Normal chemistry therefore has little or no relevance in stellar interiors.

The metallicity of an astronomical object may provide an indication of its age. When the universe first formed, according to the Big Bang theory, it consisted almost entirely of hydrogen which, through primordial nucleosynthesis, created a sizeable proportion of helium and only trace amounts of lithium and beryllium and no heavier elements. Therefore, older stars have lower metallicities than younger stars such as our Sun.”
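The usual quantitative convention here is [Fe/H], the logarithmic iron-to-hydrogen number ratio relative to the Sun. A small sketch (the solar Fe/H ratio I plug in, 3.16e-5, is an assumed round value, i.e. 10^(7.5−12)):

```python
import math

SOLAR_FE_H = 3.16e-5   # assumed solar Fe/H number ratio, ~10**(7.5 - 12)

def fe_h(n_fe_over_n_h):
    """[Fe/H] in dex: 0 for the Sun, negative for metal-poor (older) stars."""
    return math.log10(n_fe_over_n_h) - math.log10(SOLAR_FE_H)

solar = fe_h(3.16e-5)     # 0.0 by construction
old_halo = fe_h(3.16e-7)  # -2.0, i.e. 1% of solar iron: an old star
```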

iv. Batavian Republic.

“The Batavian Republic (Dutch: Bataafse Republiek) was the successor of the Republic of the United Netherlands. It was proclaimed on January 19, 1795, and ended on June 5, 1806, with the accession of Louis Bonaparte to the throne of the Kingdom of Holland.” (the article has much more)

v. Taiping Rebellion. Never heard of this? You should have:

“The Taiping Rebellion was a widespread civil war in southern China from 1850 to 1864, against the ruling Manchu-led Qing Dynasty. It was led by heterodox Christian convert Hong Xiuquan, who, having claimed to have received visions, maintained that he was the younger brother of Jesus Christ.[2] About 20 million people died, mainly civilians, in one of the deadliest military conflicts in history.[3]”

vi. Borobudur (featured).

“Borobudur, or Barabudur, is a 9th-century Mahayana Buddhist monument in Magelang, Central Java, Indonesia. The monument consists of six square platforms topped by three circular platforms, and is decorated with 2,672 relief panels and 504 Buddha statues.[1] A main dome, located at the center of the top platform, is surrounded by 72 Buddha statues seated inside a perforated stupa.”

September 9, 2012 Posted by | astronomy, Chemistry, Geography, history, wikipedia | Leave a comment

Khan videos of interest

Some of the stuff I’ve been watching today:

(For the record: I think the above video is just plain cool. I’ve had real trouble ‘getting’ how the kidneys actually work, despite reading quite a bit about that subject at one point; one of the main things I remember from reading about it when I did was that the more details were added to the mix the more confused I tended to get (always a risk when you use peer-reviewed research to supplement wikipedia and similar sources). Khan does a brilliant job here, he’s of course simplifying stuff somewhat but you get it.)

In a couple of later videos he makes a few clarifications regarding the terminology (and frankly if you like this one, you should watch them too – this one is the next in the series) but those are not super important.

The last two were some of the videos I felt I had to take a closer look at in order to get a little more out of some of the sections in the Microbiology textbook. I’m pretty sure this stuff was covered in HS-chemistry, but that’s a long time ago and I haven’t used that stuff since then so a lot of it is just gone. Thanks to Khan, brushing up on some of this stuff is a lot easier than it otherwise could have been.

I think I ought to have a go at the calculus section and linear algebra at some point, but so far I haven’t really found the motivation to do so – aside from watching a few random videos along the way. Incidentally, today I crossed the 100-completed-videos mark (75 of them were in the Cosmology and Astronomy section, which I’ve watched in full from start to finish – and which I highly recommend even though technically some of the videos probably do not belong in this category at all).

August 7, 2011 Posted by | biology, Chemistry, Khan Academy, Lectures, medicine | Leave a comment

Bill Bryson (II)

More quotes from his wonderful book:

1. “Before [Richard] Owen, museums were designed primarily for the use and edification of the elite, and even they found it difficult to gain access. In the early days of the British Museum, prospective visitors had to make a written application and undergo a brief interview to determine if they were fit to be admitted at all. They then had to return a second time to pick up a ticket – that is, assuming they had passed the interview – and finally come back a third time to view the museum’s treasures. Even then they were whisked through in groups and not allowed to linger. Owen’s plan was to welcome everyone, even to the point of encouraging working men to visit in the evening, and to devote most of the museum’s space to public displays. He even proposed, very radically, to put informative labels on each display so that people could appreciate what they were viewing.”

2. “At the turn of the twentieth century, palaeontologists had literally tons of old bones to pick over. The problem was that they still didn’t have any idea how old any of these bones were. Worse, the agreed ages for the Earth couldn’t comfortably support the numbers of aeons and ages and epochs that the past obviously contained. If Earth were really only twenty million years old or so, as the great Lord Kelvin insisted, then whole orders of ancient creatures must have come into being and gone out again practically in the same geological instant. It just made no sense. […] Such was the confusion that by the close of the nineteenth century, depending on which text you consulted, you could learn that the number of years that stood between us and the dawn of complex life in the Cambrian period was 3 million, 18 million, 600 million, 794 million, or 2.4 billion – or some other number within that range. As late as 1910 [five years after Einstein’s Annus Mirabilis papers], one of the most respected estimates, by the American George Becker, put the Earth’s age at perhaps as little as 55 million years.”

3. “Soon after taking up his position [in the beginning of the nineteenth century], [Humphry] Davy began to bang out new elements one after the other – potassium, sodium, magnesium, calcium, strontium, and aluminum or aluminium […] He discovered so many elements not so much because he was serially astute as because he developed an ingenious technique of applying electricity to a molten substance – electrolysis, as it is known. Altogether he discovered a dozen elements, a fifth of the known totals of his day.”

4. “They [Ernest Rutherford and Frederick Soddy] also discovered that radioactive elements decayed into other elements – that one day you had an atom of uranium, say, and the next you had an atom of lead. This was truly extraordinary. It was alchemy pure and simple; no-one had ever imagined that such a thing could happen naturally and spontaneously. […] For a long time it was assumed that anything so miraculously energetic as radioactivity must be beneficial. For years, manufacturers of toothpaste and laxatives put radioactive thorium in their products, and at least until the late 1920s the Glen Springs Hotel in the Finger Lakes region of New York (and doubtless others as well) featured with pride the therapeutic effects of its ‘Radio-active mineral springs’. It wasn’t banned in consumer products until 1938. By this time it was much too late for Mme Curie, who died of leukaemia in 1934.”

5. “In 1875, when a young German in Kiel named Max Planck was deciding whether to devote his life to mathematics or to physics, he was urged most heartily not to choose physics because the breakthroughs had all been made there. The coming century, he was assured, would be one of consolidation and refinement, not revolution.”

6. “You may not feel outstandingly robust, but if you are an average-sized adult you will contain within your modest frame no less than 7 x 10^18 joules of potential energy – enough to explode with the force of thirty very large hydrogen bombs, assuming you knew how to liberate it and really wished to make a point. Everything has this kind of energy trapped within it. We’re just not very good at getting it out. Even a uranium bomb – the most energetic thing we have produced yet – releases less than 1 per cent of the energy it could release if only we were more cunning.”
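Bryson’s figure is easy to sanity-check with Einstein’s E = mc²; here is a minimal sketch, assuming an average adult mass of roughly 78 kg (the mass is my assumption – Bryson doesn’t state one):

```python
# Sanity check of Bryson's ~7 x 10^18 J figure via E = m * c^2.
# The 78 kg adult mass is an assumption; Bryson doesn't state one.
c = 299_792_458          # speed of light, m/s
m = 78                   # assumed adult mass, kg
E = m * c ** 2           # rest-mass energy, joules
print(f"{E:.2e} J")      # about 7.01e+18 J
```

So the quoted 7 x 10^18 joules fits an adult of roughly 78 kg – the order of magnitude checks out.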

7. “It is worth pausing for a moment to consider just how little was known of the cosmos at this time. Astronomers today believe there are perhaps 140 billion galaxies in the visible universe. […] In 1919, when Hubble first put his head to the eyepiece, the number of these galaxies known to us was exactly one: the Milky Way. Everything else was thought to be either part of the Milky Way itself or one of many distant, peripheral puffs of gas. […] at the time Leavitt and Cannon were inferring fundamental properties of the cosmos from dim smudges of distant stars on photographic plates, the Harvard astronomer William H. Pickering, who could of course peer into a first-class telescope as often as he wanted, was developing his seminal theory that dark patches on the Moon were caused by swarms of seasonally migrating insects.”

8. “Atoms, in short, are very abundant. They are also fantastically durable. Because they are so long-lived, atoms really get around. Every atom you possess has almost certainly passed through several stars and been part of millions of organisms on its way to becoming you. We are each so atomically numerous and so vigorously recycled at death that a significant number of our atoms – up to a billion for each of us, it has been suggested – probably once belonged to Shakespeare.”

From the wiki correction page: “Jupiter Scientific has done an analysis of this problem and the figure in Bryson’s book is probably low: It is likely that each of us has about 200 billion atoms that were once in Shakespeare’s body.”

9. “Even though lead was widely known to be dangerous, by the early years of the twentieth century it could be found in all manner of consumer products. Food came in cans sealed with lead solder. Water was often stored in lead-lined tanks. Lead arsenate was sprayed onto fruits as a pesticide. Lead even came as part of the composition of toothpaste tubes. […] Americans alive today each have about 625 times more lead in their blood than people did a century ago.”

In this chapter we also learn that we did not arrive at the current best estimate of the age of the earth until a little over 50 years ago – I won’t quote from the book, but wikipedia has the short version: “An age of 4.55 ± 1.5% billion years, very close to today’s accepted age, was determined by C.C. Patterson using uranium-lead isotope dating (specifically lead-lead dating) on several meteorites including the Canyon Diablo meteorite and published in 1956.” At this point, the age of the universe was still very uncertain; from the book: “In 1956, astronomers discovered that Cepheid variables were more variable than they had thought; they came in two varieties, not one. This allowed them to rework their calculations and come up with a new age for the universe of between seven billion and twenty billion years” – as Bryson puts it, that estimate was “not terribly precise”. Our knowledge about the age of the universe is quite new.

10. “Well into the 1970s, one of the most popular and influential geological textbooks, The Earth by the venerable Harold Jefferys, strenuously insisted that plate tectonics was a physical impossibility, just as it had in the first edition way back in 1924. It was equally dismissive of convection and sea-floor spreading. And in Basin and Range, published in 1980, John McPhee noted that even then one American geologist in eight still didn’t believe in plate tectonics.”

11. “By the time Shoemaker came along, a common view was that Meteor Crater had been formed by an underground steam explosion. Shoemaker knew nothing about underground steam explosions – he couldn’t; they don’t exist…”

July 30, 2011 Posted by | astronomy, books, Chemistry, cosmology, Geology, Paleontology, Physics | Leave a comment

Bill Bryson

Here’s the link; order it if you like what you read here. I read the book 3 years ago, but this is the kind of book that you’ll probably want to reread at some point if you’re like me. When I read it the first time I borrowed my big brother’s copy, which he had standing on his bookshelf while I was visiting him over the Summer. I recently bought the book myself (it was on sale), and pretty much ever since I bought it I’ve been somewhat bugged by the fact that (yet) a(/nother) book I’ve read stands on my bookshelf looking as if it’s never even been touched by a human hand (most of the books I’ve read contain pages painted in at least two colours and often various notes in the margin – ‘you can tell they’ve been read’). So I decided to take another shot at it, also because I needed a break from Genetics – some of that is hard, and this is supposed to be my vacation after all… Ok, let’s move on to some quotes from the book:

1. I’d actually like to quote the introduction chapter in full, it’s that good; but that would be overkill, so less will do. However, I can’t stop myself from telling you in a bit more detail just how Bryson starts out (…I was just about to add ‘…his adventure’):

“Welcome. And congratulations. I am delighted that you could make it. Getting here wasn’t easy, I know. In fact, I suspect it was a little tougher than you realize.
To begin with, for you to be here now trillions of drifting atoms had somehow to assemble in an intricate and curiously obliging manner to create you. It’s an arrangement so specialized and particular that it has never been tried before and will only exist this once. For the next many years (we hope) these tiny particles will uncomplainingly engage in all the billions of deft, co-operative efforts necessary to keep you intact and let you experience the supremely agreeable but generally underappreciated state known as existence.
Why atoms take this trouble is a bit of a puzzle. Being you is not a gratifying experience at the atomic level. For all their devoted attention, your atoms don’t actually care about you – indeed, they don’t even know that you are there. They don’t even know that they are there.”


“Even a long human life adds up to only about 650,000 hours. And when that modest milestone flashes into view, or at some other point thereabouts, for reasons unknown your atoms will close you down, then silently reassemble and go off to be other things. And that’s it for you. […] The only thing special about the atoms that make you is that they make you. That is, of course, the miracle of life.


But the fact that you have atoms and that they assemble in such a willing manner is only part of what got you here. To be here now, alive in the twenty-first century and smart enough to know it, you also had to be the beneficiary of an extraordinary string of biological good fortune. Survival on Earth is a surprisingly tricky business. […] The average species on Earth lasts for only about four million years […] Consider the fact that for 3.8 billion years, a period of time older than the Earth’s mountains and rivers and oceans, every one of your forebears on both sides has been attractive enough to find a mate, healthy enough to reproduce, and sufficiently blessed by fate and circumstances to live long enough to do so. Not one of your pertinent ancestors was squashed, devoured, drowned, starved, stuck fast, untimely wounded or otherwise deflected from its life’s quest of delivering a tiny charge of genetic material to the right partner at the right moment to perpetuate the only possible sequence of hereditary combinations that could result – eventually, astoundingly, and all too briefly – in you.[*]

This is a book about how it happened…”

*Technically, this passage is not entirely true/correct, as the concept of sexual reproduction is quite a bit younger than that – but the finer details don’t subtract much from the narrative: “The first fossilized evidence of sexually reproducing organisms is from eukaryotes of the Stenian period, about 1 to 1.2 billion years ago.” (wikipedia) It’s still a pretty long time ago. Interestingly, this inaccuracy is not mentioned on this wiki page dealing with inaccuracies and errors in the book. I’ve found at least a few other passages that I considered a bit problematic while reading them, but I generally let those pass, both because of the background of the author and the likely background of the target group (it’s pop sci after all).

So anyway, that’s how he starts out.
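Incidentally, the ‘650,000 hours’ figure from the excerpt above is just a long human lifespan converted to hours; a quick check:

```python
# Converting Bryson's "650,000 hours" back into years.
hours = 650_000
hours_per_year = 24 * 365.25   # including leap days
years = hours / hours_per_year
print(round(years))            # about 74
```

So a ‘long human life’ here means roughly 74 years – close to the life expectancy in the developed world when the book was written.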

2. Also from the introduction:

“about four or five years ago, I suppose – I was on a long flight across the Pacific, staring idly out the window at moonlit ocean, when it occurred to me with a certain uncomfortable forcefulness that I didn’t know the first thing about the only planet I was ever going to live on. I had no idea, for example, why the oceans were salty but the Great Lakes weren’t. Didn’t have the faintest idea. I didn’t know if the oceans were growing more salty with time or less, and whether ocean salinity levels were something I should be concerned about or not. […] I didn’t know what a proton was, or a protein, didn’t know a quark from a quasar, didn’t understand how geologists could look at a layer of rock on a canyon wall and tell you how old it was – didn’t know anything, really.”

So he spent 3 years of his life writing the book and trying to find out some of this stuff, presumably asking a lot of really awkward questions along the way. Quotes below are from the book proper, not from the introduction:

3. “until 1978 no-one had ever noticed that Pluto has a moon.” […] “Our solar system may be the liveliest thing for trillions of miles, but all the visible stuff in it […] fills less than a trillionth of the available space.” […] “When I was a boy, the solar system was thought to contain thirty moons. The total now is at least ninety, about a third of which have been found in just the last ten years. The point to remember, of course, when considering the universe at large is that we don’t actually know what is in our own solar system.” […] “Surprisingly little of the universe is visible to us when we incline our heads to the sky. Only about six thousand stars are visible to the naked eye from Earth, and only about two thousand can be seen from any one spot.”

4. “It was history’s first co-operative international scientific venture, and almost everywhere it ran into problems. Many observers were waylaid by war, sickness or shipwreck. Others made their destinations but opened their crates to find equipment broken or warped by tropical heat. Once again the French seemed fated to provide the most memorably unlucky participants. Jean Chappe spent months travelling to Siberia by coach, boat and sleigh, nursing his delicate instruments over every perilous bump, only to find the last vital stretch blocked by swollen rivers, the result of unusually heavy spring rains, which the locals were swift to blame on him after they saw him pointing strange instruments at the sky. Chappe managed to escape with his life, but with no useful measurements.”

5. “The second half of the eighteenth century was a time when people of a scientific bent grew intensely interested in the physical properties of fundamental things – gases and electricity in particular – and began seeing what they could do with them, often with more enthusiasm than sense. In America, Benjamin Franklin famously risked his life by flying a kite in an electrical storm. In France, a chemist named Pilatre de Rozier tested the flammability of hydrogen by gulping a mouthful and blowing across an open flame, proving at a stroke that hydrogen is indeed explosively combustible and that eyebrows are not necessarily a permanent feature of one’s face.”

6. “It is hard to imagine now, but geology excited the nineteenth century – positively gripped it – in a way that no science ever had before or would again. In 1839, when Roderick Murchison published The Silurian System, a plump and ponderous study of a type of rock called greywacke, it was an instant bestseller, racing through four editions, even though it cost 8 guineas a copy and was, in true Huttonian style, unreadable. (As even a Murchison supporter conceded, it had ‘a total want of literary attractiveness’.) And when, in 1841, the great Charles Lyell travelled to America to give a series of lectures in Boston, sellout audiences of three thousand at a time packed into the Lowell Institute to hear his tranquillizing descriptions of marine zeolites and seismic perturbations in Campania.”

7. “The first attempt at measurement [of the age of the Earth] that could be called remotely scientific was made by the Frenchman Georges-Louis Leclerc, Comte de Buffon, in the 1770s. It had long been known that the Earth radiated appreciable amounts of heat – that was apparent to anyone who went down a coal mine – but there wasn’t any way of estimating the rate of dissipation. Buffon’s experiment consisted of heating spheres until they glowed white-hot and then estimating the rate of heat loss by touching them (presumably very lightly at first) as they cooled. From this he guessed the Earth’s age to be somewhere between 75,000 and 168,000 years old. This was of course a wild underestimate; but it was a radical notion nonetheless…”

Bryson often includes examples like these of just how people figured stuff out – as you can also tell from quotes #4 and #5. These parts of the book are really fascinating to me, because they make it clear just how many problems related to measurement and knowledge sharing were around, making life complicated for people trying to figure stuff out in the past; problems we don’t even spare a thought for today. And because descriptions such as these make it much clearer that many of the tools people today take for granted didn’t exactly come along by themselves. The stuff above deals with only the first 100 pages or so; needless to say, there’s a lot of good stuff in this book. I’ll bring more quotes and stuff from the book tomorrow – I should have blogged the book in detail the first time I read it, but I never got around to doing it, and this time I’ll try to rectify that mistake.

July 29, 2011 Posted by | astronomy, books, Chemistry, Geology, history, science | Leave a comment

Wikipedia articles of interest

1. Evolutionary history of plants.

“Evidence suggests that an algal scum formed on the land 1,200 million years ago, but it was not until the Ordovician period, around 450 million years ago, that land plants appeared.”

The period before 450 million years ago of course covers more than 9/10ths of the history of the Earth.
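That fraction is easy to verify, taking the Earth’s age to be roughly 4.55 billion years (the Patterson estimate mentioned in the Bryson post above):

```python
# Fraction of Earth's history that passed before land plants appeared.
earth_age_mya = 4550      # ~4.55 billion years, in millions of years
land_plants_mya = 450     # Ordovician appearance of land plants
fraction = (earth_age_mya - land_plants_mya) / earth_age_mya
print(f"{fraction:.2f}")  # about 0.90
```
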

“The early Devonian landscape was devoid of vegetation taller than waist height. Without the evolution of a robust vascular system, taller heights could not be attained. There was, however, a constant evolutionary pressure to attain greater height. The most obvious advantage is the harvesting of more sunlight for photosynthesis – by overshadowing competitors – but a further advantage is present in spore distribution, as spores (and, later, seeds) can be blown greater distances if they start higher.”

However, plants didn’t stay short for long: “the late Devonian Archaeopteris, a precursor to gymnosperms which evolved from the trimerophytes, reached 30 m in height.”

Here’s what the Earth looked like back then (Wikipedia, link):

This is all part of why it’s so difficult to imagine what earth was like when you go millions of years back in time. We take so many things in our environment for granted. Grasses as we know them didn’t come about until somewhere around the K-T boundary.

2. Acid-base reaction

“An acid-base reaction is a chemical reaction, that occurs between an acid and a base. Several concepts that provide alternative definitions for the reaction mechanisms involved and their application in solving related problems exist. Despite several differences in definitions, their importance becomes apparent as different methods of analysis when applied to acid-base reactions for gaseous or liquid species, or when acid or base character may be somewhat less apparent.”

There are quite a few different theories described in the article, I didn’t know that I didn’t know this. The article contains a brilliant example of a case where ‘everybody knew that this theory was true until it was proven wrong’:

“The first scientific concept of acids and bases was provided by Antoine Lavoisier circa 1776. Since Lavoisier’s knowledge of strong acids was mainly restricted to oxoacids, such as HNO3 (nitric acid) and H2SO4 (sulfuric acid), which tend to contain central atoms in high oxidation states surrounded by oxygen, and since he was not aware of the true composition of the hydrohalic acids (HF (hydrogen fluoride), HCl, HBr, and HI (hydrogen iodide)), he defined acids in terms of their containing oxygen, which in fact he named from Greek words meaning “acid-former” (from the Greek οξυς (oxys) meaning “acid” or “sharp” and γεινομαι (geinomai) meaning “engender”). The Lavoisier definition was held as absolute truth for over 30 years, until the 1810 article and subsequent lectures by Sir Humphry Davy in which he proved the lack of oxygen in H2S, H2Te, and the hydrohalic acids.”

3. Anglo-Zulu War. This article is part of the featured British Empire Portal, which contains quite a few articles I’ll probably have to read at some point. As written in the introduction to the portal: “By 1921, the British Empire held sway over a population of about 458 million people, approximately one-quarter of the world’s population. It covered about 36.6 million km² (14.2 million square miles), about a quarter of Earth’s total land area.” If you want to understand the world of 100 years ago, or the world some time before that, you really can’t ignore the history of the British Empire.

4. Medieval technology. Mostly links, but there are lots of them, and I’d guess there are quite a few good articles hidden here. Did you know that functional buttons – buttons with buttonholes for fastening or closing clothes – weren’t invented until the 13th century?

November 24, 2010 Posted by | biology, Chemistry, Geology, history, Paleontology, wikipedia | 2 Comments

Random wikipedia links of interest

1. Corrosion.

2. Demographics of the People’s Republic of China. A few quotes from the article:

a) “Census data obtained in 2000 revealed that 119 boys were born for every 100 girls, and among China’s “floating population” the ratio was as high as 128:100. These situations led the government in July 2004 to ban selective abortions of female fetuses. It is estimated that this imbalance will rise until 2025–2030 to reach 20% then slowly decrease.[2]”

b) “Average household size (2005) 3.1; rural households 3.3; urban households 3.0.
Average annual per capita disposable income of household (2005): rural households Y 3,255 (U.S.$397), urban households Y 10,493 (U.S.$1,281).”

c) A map of the population density (darker squares have higher density):

The ‘average population density’ of 137/km2 is not all that interesting a variable. The Gobi desert is not a nice place for humans to live: the temperature variation in the area is extreme, ranging from –40°C in the winter to +50°C in the summer.
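For quote a), a sex ratio of 119 boys per 100 girls translates into a male share of births like this (a quick sketch):

```python
# Male share of births implied by a 119:100 sex ratio at birth.
boys, girls = 119, 100
male_share = boys / (boys + girls)
print(f"{male_share:.1%}")   # about 54.3%
```

Compared with the roughly 51–52% male share you’d expect from a natural sex ratio of about 105:100, that’s a substantial skew.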

3. Cost overrun. An excerpt:

“Cost overrun is common in infrastructure, building, and technology projects. One of the most comprehensive studies [1] of cost overrun that exists found that 9 out of 10 projects had overrun, overruns of 50 to 100 percent were common, overrun was found in each of 20 nations and five continents covered by the study, and overrun had been constant for the 70 years for which data were available. For IT projects, an industry study by the Standish Group (2004) found that average cost overrun was 43 percent, 71 percent of projects were over budget, over time, and under scope, and total waste was estimated at US$55 billion per year in the US alone.”

4. Tensor. This is difficult stuff.

5. Eye.

June 16, 2010 Posted by | biology, Chemistry, data, demographics, economics, Geography, mathematics, wikipedia | 7 Comments