Econstudentlog

Materials… (II)

Some more quotes and links:

“Whether materials are stiff and strong, or hard or weak, is the territory of mechanics. […] the 19th century continuum theory of linear elasticity is still the basis of much of modern solid mechanics. A stiff material is one which does not deform much when a force acts on it. Stiffness is quite distinct from strength. A material may be stiff but weak, like a piece of dry spaghetti. If you pull it, it stretches only slightly […], but as you ramp up the force it soon breaks. To put this on a more scientific footing, so that we can compare different materials, we might devise a test in which we apply a force to stretch a bar of material and measure the increase in length. The fractional change in length is the strain; and the applied force divided by the cross-sectional area of the bar is the stress. To check that it is Hookean, we double the force and confirm that the strain has also doubled. To check that it is truly elastic, we remove the force and check that the bar returns to the same length that it started with. […] then we calculate the ratio of the stress to the strain. This ratio is the Young’s modulus of the material, a quantity which measures its stiffness. […] While we are measuring the change in length of the bar, we might also see if there is a change in its width. It is not unreasonable to think that as the bar stretches it also becomes narrower. The Poisson’s ratio of the material is defined as the ratio of the transverse strain to the longitudinal strain (without the minus sign).”
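
(An aside of mine, not a quote from the book: in standard notation the quantities defined above are the strain ε = ΔL/L, the stress σ = F/A, the Young’s modulus E = σ/ε, and the Poisson’s ratio ν = −ε_transverse/ε_longitudinal. The minus sign is the usual textbook convention; it makes ν positive for ordinary materials, since the transverse strain is negative when the bar is stretched, and it is presumably what the parenthetical ‘without the minus sign’ at the end of the quote refers to.)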

“There was much argument between Cauchy and Lamé and others about whether there are two stiffness moduli or one. […] In fact, there are two stiffness moduli. One describes the resistance of a material to shearing and the other to compression. The shear modulus is the stiffness in distortion, for example in twisting. It captures the resistance of a material to changes of shape, with no accompanying change of volume. The compression modulus (usually called the bulk modulus) expresses the resistance to changes of volume (but not shape). This is what occurs as a cube of material is lowered deep into the sea, and is squeezed on all faces by the water pressure. The Young’s modulus [is] a combination of the more fundamental shear and bulk moduli, since stretching in one direction produces changes in both shape and volume. […] A factor of about 10,000 covers the useful range of Young’s modulus in engineering materials. The stiffness can be traced back to the forces acting between atoms and molecules in the solid state […]. Materials like diamond or tungsten with strong bonds are stiff in the bulk, while polymer materials with weak intermolecular forces have low stiffness.”
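
(Another aside, not from the book: for an isotropic material the elastic constants mentioned here are linked by the standard relations E = 2G(1 + ν) = 3K(1 − 2ν), where G is the shear modulus, K the bulk modulus, and ν the Poisson’s ratio. Combining them gives E = 9KG/(3K + G), one concrete way of seeing that the Young’s modulus is ‘a combination of the more fundamental shear and bulk moduli’.)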

“In pure compression, the concept of ‘strength’ has no meaning, since the material cannot fail or rupture. But materials can and do fail in tension or in shear. To judge how strong a material is we can go back for example to the simple tension arrangement we used for measuring stiffness, but this time make it into a torture test in which the specimen is put on the rack. […] We find […] that we reach a strain at which the material stops being elastic and is permanently stretched. We have reached the yield point, and beyond this we have damaged the material but it has not failed. After further yielding, the bar may fail by fracture […]. On the other hand, with a bar of cast iron, there comes a point where the bar breaks, noisily and without warning, and without yield. This is a failure by brittle fracture. The stress at which it breaks is the tensile strength of the material. For the ductile material, the stress at which plastic deformation starts is the tensile yield stress. Both are measures of strength. It is in metals that yield and plasticity are of the greatest significance and value. In working components, yield provides a safety margin between small-strain elasticity and catastrophic rupture. […] plastic deformation is [also] exploited in making things from metals like steel and aluminium. […] A useful feature of plastic deformation in metals is that plastic straining raises the yield stress, particularly at lower temperatures.”

“Brittle failure is not only noisy but often scary. Engineers keep well away from it. An elaborate theory of fracture mechanics has been built up to help them avoid it, and there are tough materials to hand which do not easily crack. […] Since small cracks and flaws are present in almost any engineering component […], the trick is not to avoid cracks but to avoid long cracks which exceed [a] critical length. […] In materials which can yield, the tip stress can be relieved by plastic deformation, and this is a potent toughening mechanism in some materials. […] The trick of compressing a material to suppress cracking is a powerful way to toughen materials.”

“Hardness is a property which materials scientists think of in a particular and practical way. It tells us how well a material resists being damaged or deformed by a sharp object. That is useful information and it can be obtained easily. […] Soft is sometimes the opposite of hard […] But a different kind of soft is squidgy. […] In the soft box, we find many everyday materials […]. Some soft materials such as adhesives and lubricants are of great importance in engineering. For all of them, the model of a stiff crystal lattice provides no guidance. There is usually no crystal. The units are polymer chains, or small droplets of liquids, or small solid particles, with weak forces acting between them, and little structural organization. Structures when they exist are fragile. Soft materials deform easily when forces act on them […]. They sit as a rule somewhere between rigid solids and simple liquids. Their mechanical behaviour is dominated by various kinds of plasticity.”

“In pure metals, the resistivity is extremely low […] and a factor of ten covers all of them. […] the low resistivity (or, put another way, the high conductivity) arises from the existence of a conduction band in the solid which is only partly filled. Electrons in the conduction band are mobile and drift in an applied electric field. This is the electric current. The electrons are subject to some scattering from lattice vibrations which impedes their motion and generates an intrinsic resistance. Scattering becomes more severe as the temperature rises and the amplitude of the lattice vibrations becomes greater, so that the resistivity of metals increases with temperature. Scattering is further increased by microstructural heterogeneities, such as grain boundaries, lattice distortions, and other defects, and by phases of different composition. So alloys have appreciably higher resistivities than their pure parent metals. Adding 5 per cent nickel to iron doubles the resistivity, although the resistivities of the two pure metals are similar. […] Resistivity depends fundamentally on band structure. […] Plastics and rubbers […] are usually insulators. […] Electronically conducting plastics would have many uses, and some materials [e.g. this one] are now known. […] The electrical resistivity of many metals falls to exactly zero as they are cooled to very low temperatures. The critical temperature at which this happens varies, but for pure metallic elements it always lies below 10 K. For a few alloys, it is a little higher. […] Superconducting windings provide stable and powerful magnetic fields for magnetic resonance imaging, and many industrial and scientific uses.”
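
(Aside, mine rather than the book’s: two standard rules of thumb summarize the temperature and impurity effects described above. Near room temperature the resistivity of a pure metal rises roughly linearly, ρ(T) ≈ ρ₀[1 + α(T − T₀)] with α of the order of 0.004 per °C, and by Matthiessen’s rule the two contributions simply add, ρ_total ≈ ρ_phonons(T) + ρ_defects, the second term being the roughly temperature-independent part contributed by impurities, grain boundaries, and other defects, which is why alloying raises the resistivity.)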

“A permanent magnet requires no power. Its magnetization has its origin in the motion of electrons in atoms and ions in the solid, but only a few materials have the favourable combination of quantum properties to give rise to useful ferromagnetism. […] Ferromagnetism disappears completely above the so-called Curie temperature. […] Below the Curie temperature, ferromagnetic alignment throughout the material can be established by imposing an external polarizing field to create a net magnetization. In this way a practical permanent magnet is made. The ideal permanent magnet has an intense magnetization (a strong field) which remains after the polarizing field is switched off. It can only be demagnetized by applying a strong polarizing field in the opposite direction: the size of this field is the coercivity of the magnet material. For a permanent magnet, it should be as high as possible. […] Permanent magnets are ubiquitous but more or less invisible components of umpteen devices. There are a hundred or so in every home […]. There are also important uses for ‘soft’ magnetic materials, in devices where we want the ferromagnetism to be temporary, not permanent. Soft magnets lose their magnetization after the polarizing field is removed […] They have low coercivity, approaching zero. When used in a transformer, such a soft ferromagnetic material links the input and output coils by magnetic induction. Ideally, the magnetization should reverse during every cycle of the alternating current to minimize energy losses and heating. […] Silicon transformer steels yielded large gains in efficiency in electrical power distribution when they were first introduced in the 1920s, and they remain pre-eminent.”

“At least 50 families of plastics are produced commercially today. […] These materials all consist of linear string molecules, most with simple carbon backbones, a few with carbon-oxygen backbones […] Plastics as a group are valuable because they are lightweight and work well in wet environments, and don’t go rusty. They are mostly unaffected by acids and salts. But they burn, and they don’t much like sunlight as the ultraviolet light can break the polymer backbone. Most commercial plastics are mixed with substances which make it harder for them to catch fire and which filter out the ultraviolet light. Above all, plastics are used because they can be formed and shaped so easily. The string molecule itself is held together by strong chemical bonds and is resilient, but the forces between the molecules are weak. So plastics melt at low temperatures to produce rather viscous liquids […]. And with modest heat and a little pressure, they can be injected into moulds to produce articles of almost any shape”.

“The downward cascade of high purity to adulterated materials in recycling is a kind of entropy effect: unmixing is thermodynamically hard work. But there is an energy-driven problem too. Most materials are thermodynamically unstable (or metastable) in their working environments and tend to revert to the substances from which they were made. This is well-known in the case of metals, and is the usual meaning of corrosion. The metals are more stable when combined with oxygen than uncombined. […] Broadly speaking, ceramic materials are more stable thermodynamically, since they already contain much oxygen in chemical combination. Even so, ceramics used in the open usually fall victim to some environmental predator. Often it is water that causes damage. Water steals sodium and potassium from glass surfaces by slow leaching. The surface shrinks and cracks, so the glass loses its transparency. […] Stones and bricks may succumb to the stresses of repeated freezing when wet; limestones decay also by the chemical action of sulfur and nitrogen gases in polluted rainwater. Even buried archaeological pots slowly react with water in a remorseless process similar to that of rock weathering.”

Ashby plot.
Alan Arnold Griffith.
Creep (deformation).
Amontons’ laws of friction.
Viscoelasticity.
Internal friction.
Surfactant.
Dispersant.
Rheology.
Liquid helium.
Conductor. Insulator. Semiconductor. P-type semiconductor. N-type semiconductor.
Hall–Héroult process.
Cuprate.
Magnetostriction.
Snell’s law.
Chromatic aberration.
Dispersion (optics).
Dye.
Density functional theory.
Glass.
Pilkington float process.
Superalloy.
Ziegler–Natta catalyst.
Transistor.
Integrated circuit.
Negative-index metamaterial.
Auxetics.
Titanium dioxide.
Hyperfine structure (/-interactions).
Diamond anvil cell.
Synthetic rubber.
Simon–Ehrlich wager.
Sankey diagram.

November 16, 2017 Posted by | Books, Physics, Chemistry, Engineering

Materials (I)…

“Useful matter is a good definition of materials. […] Materials are materials because inventive people find ingenious things to do with them. Or just because people use them. […] Materials science […] explains how materials are made and how they behave as we use them.”

I recently read this book, which I liked. Below I have added some quotes from the first half of the book, along with some hopefully helpful links, as well as a collection of links at the bottom of the post to other topics covered.

“We understand all materials by knowing about composition and microstructure. Despite their extraordinary minuteness, the atoms are the fundamental units, and they are real, with precise attributes, not least size. Solid materials tend towards crystallinity (for the good thermodynamic reason that it is the arrangement of lowest energy), and they usually achieve it, though often in granular, polycrystalline forms. Processing conditions greatly influence microstructures which may be mobile and dynamic, particularly at high temperatures. […] The idea that we can understand materials by looking at their internal structure in finer and finer detail goes back to the beginnings of microscopy […]. This microstructural view is more than just an important idea, it is the explanatory framework at the core of materials science. Many other concepts and theories exist in materials science, but this is the framework. It says that materials are intricately constructed on many length-scales, and if we don’t understand the internal structure we shall struggle to explain or to predict material behaviour.”

“Oxygen is the most abundant element in the earth’s crust and silicon the second. In nature, silicon occurs always in chemical combination with oxygen, the two forming the strong Si–O chemical bond. The simplest combination, involving no other elements, is silica; and most grains of sand are crystals of silica in the form known as quartz. […] The quartz crystal comes in right- and left-handed forms. Nothing like this happens in metals but arises frequently when materials are built from molecules and chemical bonds. The crystal structure of quartz has to incorporate two different atoms, silicon and oxygen, each in a repeating pattern and in the precise ratio 1:2. There is also the severe constraint imposed by the Si–O chemical bonds which require that each Si atom has four O neighbours arranged around it at the corners of a tetrahedron, every O bonded to two Si atoms. The crystal structure which quartz adopts (which of all possibilities is the one of lowest energy) is made up of triangular and hexagonal units. But within this there are buried helixes of Si and O atoms, and a helix must be either right- or left-handed. Once a quartz crystal starts to grow as right- or left-handed, its structure templates all the other helices with the same handedness. Equal numbers of right- and left-handed crystals occur in nature, but each is unambiguously one or the other.”

“In the living tree, and in the harvested wood that we use as a material, there is a hierarchy of structural levels, climbing all the way from the molecular to the scale of branch and trunk. The stiff cellulose chains are bundled into fibrils, which are themselves bonded by other organic molecules to build the walls of cells; which in turn form channels for the transport of water and nutrients, the whole having the necessary mechanical properties to support its weight and to resist the loads of wind and rain. In the living tree, the structure allows also for growth and repair. There are many things to be learned from biological materials, but the most universal is that biology builds its materials at many structural levels, and rarely makes a distinction between the material and the organism. Being able to build materials with hierarchical architectures is still more or less out of reach in materials engineering. Understanding how materials spontaneously self-assemble is the biggest challenge in contemporary nanotechnology.”

“The example of diamond shows two things about crystalline materials. First, anything we know about an atom and its immediate environment (neighbours, distances, angles) holds for every similar atom throughout a piece of material, however large; and second, everything we know about the unit cell (its size, its shape, and its symmetry) also applies throughout an entire crystal […] and by extension throughout a material made of a myriad of randomly oriented crystallites. These two general propositions provide the basis and justification for lattice theories of material behaviour which were developed from the 1920s onwards. We know that every solid material must be held together by internal cohesive forces. If it were not, it would fly apart and turn into a gas. A simple lattice theory says that if we can work out what forces act on the atoms in one unit cell, then this should be enough to understand the cohesion of the entire crystal. […] In lattice models which describe the cohesion and dynamics of the atoms, the role of the electrons is mainly in determining the interatomic bonding and the stiffness of the bond-spring. But in many materials, and especially in metals and semiconductors, some of the electrons are free to move about within the lattice. A lattice model of electron behaviour combines a geometrical description of the lattice with a more or less mechanical view of the atomic cores, and a fully quantum theoretical description of the electrons themselves. We need only to take account of the outer electrons of the atoms, as the inner electrons are bound tightly into the cores and are not itinerant. The outer electrons are the ones that form chemical bonds, so they are also called the valence electrons.”

“It is harder to push atoms closer together than to pull them further apart. While atoms are soft on the outside, they have harder cores, and pushed together the cores start to collide. […] when we bring a trillion atoms together to form a crystal, it is the valence electrons that are disturbed as the atoms approach each other. As the atomic cores come close to the equilibrium spacing of the crystal, the electron states of the isolated atoms morph into a set of collective states […]. These collective electron states have a continuous distribution of energies up to a top level, and form a ‘band’. But the separation of the valence electrons into distinct electron-pair states is preserved in the band structure, so that we find that the collective states available to the entire population of valence electrons in the entire crystal form a set of bands […]. Thus in silicon, there are two main bands.”
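
(A clarifying note of mine: the two bands referred to here are the valence band, which is essentially full at low temperature, and the conduction band, which is essentially empty; in silicon they are separated by a band gap of about 1.1 eV, and it is this gap that makes silicon a semiconductor rather than a metal.)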

“The perfect crystal has atoms occupying all the positions prescribed by the geometry of its crystal lattice. But real crystalline materials fall short of perfection […] For instance, an individual site may be unoccupied (a vacancy). Or an extra atom may be squeezed into the crystal at a position which is not a lattice position (an interstitial). An atom may fall off its lattice site, creating a vacancy and an interstitial at the same time. Sometimes a site is occupied by the wrong kind of atom. Point defects of this kind distort the crystal in their immediate neighbourhood. Vacancies free up diffusional movement, allowing atoms to hop from site to site. Larger scale defects invariably exist too. A complete layer of atoms or unit cells may terminate abruptly within the crystal to produce a line defect (a dislocation). […] There are materials which try their best to crystallize, but find it hard to do so. Many polymer materials are like this. […] The best they can do is to form small crystalline regions in which the molecules lie side by side over limited distances. […] Often the crystalline domains comprise about half the material: it is a semicrystal. […] Crystals can be formed from the melt, from solution, and from the vapour. All three routes are used in industry and in the laboratory. As a rule, crystals that grow slowly are good crystals. Geological time can give wonderful results. Often, crystals are grown on a seed, a small crystal of the same material deliberately introduced into the crystallization medium. If this is a melt, the seed can gradually be pulled out, drawing behind it a long column of new crystal material. This is the Czochralski process, an important method for making semiconductors. […] However it is done, crystals invariably grow by adding material to the surface of a small particle to make it bigger.”

“As we go down the Periodic Table of elements, the atoms get heavier much more quickly than they get bigger. The mass of a single atom of uranium at the bottom of the Table is about 25 times greater than that of an atom of the lightest engineering metal, beryllium, at the top, but its radius is only 40 per cent greater. […] The density of solid materials of every kind is fixed mainly by where the constituent atoms are in the Periodic Table. The packing arrangement in the solid has only a small influence, although the crystalline form of a substance is usually a little denser than the amorphous form […] The range of solid densities available is therefore quite limited. At the upper end we hit an absolute barrier, with nothing denser than osmium (22,590 kg/m3). At the lower end we have some slack, as we can make lighter materials by the trick of incorporating holes to make foams and sponges and porous materials of all kinds. […] in the entire catalogue of available materials there is a factor of about a thousand for ingenious people to play with, from say 20 to 20,000 kg/m3.”
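
(A rough check of the uranium/beryllium comparison, using my own round numbers rather than the book’s: the density of a single atom scales as mass over radius cubed, so a mass ratio of about 25 and a radius ratio of about 1.4 give a density ratio of roughly 25/1.4³ ≈ 9, which is indeed close to the ratio of the measured bulk densities, about 19,000 kg/m3 for uranium against about 1,850 kg/m3 for beryllium.)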

“The expansion of materials as we increase their temperature is a universal tendency. It occurs because as we raise the temperature the thermal energy of the atoms and molecules increases correspondingly, and this fights against the cohesive forces of attraction. The mean distance of separation between atoms in the solid (or the liquid) becomes larger. […] As a general rule, the materials with small thermal expansivities are metals and ceramics with high melting temperatures. […] Although thermal expansion is a smooth process which continues from the lowest temperatures to the melting point, it is sometimes interrupted by sudden jumps […]. Changes in crystal structure at precise temperatures are commonplace in materials of all kinds. […] There is a cluster of properties which describe the thermal behaviour of materials. Besides the expansivity, there is the specific heat, and also the thermal conductivity. These properties show us, for example, that it takes about four times as much energy to increase the temperature of 1 kilogram of aluminium by 1°C as 1 kilogram of silver; and that good conductors of heat are usually also good conductors of electricity. At everyday temperatures there is not a huge difference in specific heat between materials. […] In all crystalline materials, thermal conduction arises from the diffusion of phonons from hot to cold regions. As they travel, the phonons are subject to scattering both by collisions with other phonons, and with defects in the material. This picture explains why the thermal conductivity falls as temperature rises”.
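
(Aside, my arithmetic rather than the book’s: the factor-of-four difference between aluminium and silver follows almost directly from the Dulong–Petit law linked below. The molar heat capacity of most simple solids is close to 3R ≈ 25 J/(mol·K), so per kilogram the specific heat scales inversely with molar mass: roughly 25/0.027 ≈ 900 J/(kg·K) for aluminium and 25/0.108 ≈ 230 J/(kg·K) for silver, a ratio of about four.)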

Materials science.
Metals.
Inorganic compound.
Organic compound.
Solid solution.
Copper. Bronze. Brass. Alloy.
Electrical conductivity.
Steel. Bessemer converter. Gamma iron. Alpha iron. Cementite. Martensite.
Phase diagram.
Equation of state.
Calcite. Limestone.
Birefringence.
Portland cement.
Cellulose.
Wood.
Ceramic.
Mineralogy.
Crystallography.
Laue diffraction pattern.
Silver bromide. Latent image. Photographic film. Henry Fox Talbot.
Graphene. Graphite.
Thermal expansion.
Invar.
Dulong–Petit law.
Wiedemann–Franz law.

November 14, 2017 Posted by | Biology, Books, Chemistry, Engineering, Physics

Organic Chemistry (II)

I have included some observations from the second half of the book below, as well as some links to topics covered.

“[E]nzymes are used routinely to catalyse reactions in the research laboratory, and for a variety of industrial processes involving pharmaceuticals, agrochemicals, and biofuels. In the past, enzymes had to be extracted from natural sources — a process that was both expensive and slow. But nowadays, genetic engineering can incorporate the gene for a key enzyme into the DNA of fast growing microbial cells, allowing the enzyme to be obtained more quickly and in far greater yield. Genetic engineering has also made it possible to modify the amino acids making up an enzyme. Such modified enzymes can prove more effective as catalysts, accept a wider range of substrates, and survive harsher reaction conditions. […] New enzymes are constantly being discovered in the natural world as well as in the laboratory. Fungi and bacteria are particularly rich in enzymes that allow them to degrade organic compounds. It is estimated that a typical bacterial cell contains about 3,000 enzymes, whereas a fungal cell contains 6,000. Considering the variety of bacterial and fungal species in existence, this represents a huge reservoir of new enzymes, and it is estimated that only 3 per cent of them have been investigated so far.”

“One of the most important applications of organic chemistry involves the design and synthesis of pharmaceutical agents — a topic that is defined as medicinal chemistry. […] In the 19th century, chemists isolated chemical components from known herbs and extracts. Their aim was to identify a single chemical that was responsible for the extract’s pharmacological effects — the active principle. […] It was not long before chemists synthesized analogues of active principles. Analogues are structures which have been modified slightly from the original active principle. Such modifications can often improve activity or reduce side effects. This led to the concept of the lead compound — a compound with a useful pharmacological activity that could act as the starting point for further research. […] The first half of the 20th century culminated in the discovery of effective antimicrobial agents. […] The 1960s can be viewed as the birth of rational drug design. During that period there were important advances in the design of effective anti-ulcer agents, anti-asthmatics, and beta-blockers for the treatment of high blood pressure. Much of this was based on trying to understand how drugs work at the molecular level and proposing theories about why some compounds were active and some were not.”

“[R]ational drug design was boosted enormously towards the end of the century by advances in both biology and chemistry. The sequencing of the human genome led to the identification of previously unknown proteins that could serve as potential drug targets. […] Advances in automated, small-scale testing procedures (high-throughput screening) also allowed the rapid testing of potential drugs. In chemistry, advances were made in X-ray crystallography and NMR spectroscopy, allowing scientists to study the structure of drugs and their mechanisms of action. Powerful molecular modelling software packages were developed that allowed researchers to study how a drug binds to a protein binding site. […] the development of automated synthetic methods has vastly increased the number of compounds that can be synthesized in a given time period. Companies can now produce thousands of compounds that can be stored and tested for pharmacological activity. Such stores have been called chemical libraries and are routinely tested to identify compounds capable of binding with a specific protein target. These advances have boosted medicinal chemistry research over the last twenty years in virtually every area of medicine.”

“Drugs interact with molecular targets in the body such as proteins and nucleic acids. However, the vast majority of clinically useful drugs interact with proteins, especially receptors, enzymes, and transport proteins […] Enzymes are […] important drug targets. Drugs that bind to the active site and prevent the enzyme acting as a catalyst are known as enzyme inhibitors. […] Enzymes are located inside cells, and so enzyme inhibitors have to cross cell membranes in order to reach them—an important consideration in drug design. […] Transport proteins are targets for a number of therapeutically important drugs. For example, a group of antidepressants known as selective serotonin reuptake inhibitors prevent serotonin being transported into neurons by transport proteins.”

“The main pharmacokinetic factors are absorption, distribution, metabolism, and excretion. Absorption relates to how much of an orally administered drug survives the digestive enzymes and crosses the gut wall to reach the bloodstream. Once there, the drug is carried to the liver where a certain percentage of it is metabolized by metabolic enzymes. This is known as the first-pass effect. The ‘survivors’ are then distributed round the body by the blood supply, but this is an uneven process. The tissues and organs with the richest supply of blood vessels receive the greatest proportion of the drug. Some drugs may get ‘trapped’ or sidetracked. For example fatty drugs tend to get absorbed in fat tissue and fail to reach their target. The kidneys are chiefly responsible for the excretion of drugs and their metabolites.”

“Having identified a lead compound, it is important to establish which features of the compound are important for activity. This, in turn, can give a better understanding of how the compound binds to its molecular target. Most drugs are significantly smaller than molecular targets such as proteins. This means that the drug binds to quite a small region of the protein — a region known as the binding site […]. Within this binding site, there are binding regions that can form different types of intermolecular interactions such as van der Waals interactions, hydrogen bonds, and ionic interactions. If a drug has functional groups and substituents capable of interacting with those binding regions, then binding can take place. A lead compound may have several groups that are capable of forming intermolecular interactions, but not all of them are necessarily needed. One way of identifying the important binding groups is to crystallize the target protein with the drug bound to the binding site. X-ray crystallography then produces a picture of the complex which allows identification of binding interactions. However, it is not always possible to crystallize target proteins and so a different approach is needed. This involves synthesizing analogues of the lead compound where groups are modified or removed. Comparing the activity of each analogue with the lead compound can then determine whether a particular group is important or not. This is known as an SAR study, where SAR stands for structure–activity relationships. Once the important binding groups have been identified, the pharmacophore for the lead compound can be defined. This specifies the important binding groups and their relative position in the molecule.”

“One way of identifying the active conformation of a flexible lead compound is to synthesize rigid analogues where the binding groups are locked into defined positions. This is known as rigidification or conformational restriction. The pharmacophore will then be represented by the most active analogue. […] A large number of rotatable bonds is likely to have an adverse effect on drug activity. This is because a flexible molecule can adopt a large number of conformations, and only one of these shapes corresponds to the active conformation. […] In contrast, a totally rigid molecule containing the required pharmacophore will bind the first time it enters the binding site, resulting in greater activity. […] It is also important to optimize a drug’s pharmacokinetic properties such that it can reach its target in the body. Strategies include altering the drug’s hydrophilic/hydrophobic properties to improve absorption, and the addition of substituents that block metabolism at specific parts of the molecule. […] The drug candidate must [in general] have useful activity and selectivity, with minimal side effects. It must have good pharmacokinetic properties, lack toxicity, and preferably have no interactions with other drugs that might be taken by a patient. Finally, it is important that it can be synthesized as cheaply as possible”.

“Most drugs that have reached clinical trials for the treatment of Alzheimer’s disease have failed. Between 2002 and 2012, 244 novel compounds were tested in 414 clinical trials, but only one drug gained approval. This represents a failure rate of 99.6 per cent as against a failure rate of 81 per cent for anti-cancer drugs.”

“It takes about ten years and £160 million to develop a new pesticide […] The volume of global sales increased 47 per cent in the ten-year period between 2002 and 2012, while, in 2012, total sales amounted to £31 billion. […] In many respects, agrochemical research is similar to pharmaceutical research. The aim is to find pesticides that are toxic to ‘pests’, but relatively harmless to humans and beneficial life forms. The strategies used to achieve this goal are also similar. Selectivity can be achieved by designing agents that interact with molecular targets that are present in pests, but not other species. Another approach is to take advantage of any metabolic reactions that are unique to pests. An inactive prodrug could then be designed that is metabolized to a toxic compound in the pest, but remains harmless in other species. Finally, it might be possible to take advantage of pharmacokinetic differences between pests and other species, such that a pesticide reaches its target more easily in the pest. […] Insecticides are being developed that act on a range of different targets as a means of tackling resistance. If resistance should arise to an insecticide acting on one particular target, then one can switch to using an insecticide that acts on a different target. […] Several insecticides act as insect growth regulators (IGRs) and target the moulting process rather than the nervous system. In general, IGRs take longer to kill insects but are thought to cause less detrimental effects to beneficial insects. […] Herbicides control weeds that would otherwise compete with crops for water and soil nutrients. More is spent on herbicides than any other class of pesticide […] The synthetic agent 2,4-D […] was synthesized by ICI in 1940 as part of research carried out on biological weapons […] It was first used commercially in 1946 and proved highly successful in eradicating weeds in cereal grass crops such as wheat, maize, and rice. […] The compound […] is still the most widely used herbicide in the world.”

“The type of conjugated system present in a molecule determines the specific wavelength of light absorbed. In general, the more extended the conjugation, the higher the wavelength absorbed. For example, β-carotene […] is the molecule responsible for the orange colour of carrots. It has a conjugated system involving eleven double bonds, and absorbs light in the blue region of the spectrum. It appears red because the reflected light lacks the blue component. Zeaxanthin is very similar in structure to β-carotene, and is responsible for the yellow colour of corn. […] Lycopene absorbs blue-green light and is responsible for the red colour of tomatoes, rose hips, and berries. Chlorophyll absorbs red light and is coloured green. […] Scented molecules interact with olfactory receptors in the nose. […] there are around 400 different olfactory protein receptors in humans […] The natural aroma of a rose is due mainly to 2-phenylethanol, geraniol, and citronellol.”

“Over the last fifty years, synthetic materials have largely replaced natural materials such as wood, leather, wool, and cotton. Plastics and polymers are perhaps the most visible sign of how organic chemistry has changed society. […] It is estimated that production of global plastics was 288 million tons in 2012 […] Polymerization involves linking small molecules (monomers) into long molecular strands called polymers […]. By varying the nature of the monomer, a huge range of different polymers can be synthesized with widely differing properties. The idea of linking small molecular building blocks into polymers is not a new one. Nature has been at it for millions of years using amino acid building blocks to make proteins, and nucleotide building blocks to make nucleic acids […] The raw materials for plastics come mainly from oil, which is a finite resource. Therefore, it makes sense to recycle or depolymerize plastics to recover that resource. Virtually all plastics can be recycled, but it is not necessarily economically feasible to do so. Traditional recycling of polyesters, polycarbonates, and polystyrene tends to produce inferior plastics that are suitable only for low-quality goods.”

Adipic acid.
Protease. Lipase. Amylase. Cellulase.
Reflectin.
Agonist.
Antagonist.
Prodrug.
Conformational change.
Process chemistry (chemical development).
Clinical trial.
Phenylbutazone.
Pesticide.
Dichlorodiphenyltrichloroethane.
Aldrin.
N-Methyl carbamate.
Organophosphates.
Pyrethrum.
Neonicotinoid.
Colony collapse disorder.
Ecdysone receptor.
Methoprene.
Tebufenozide.
Fungicide.
Quinone outside inhibitors (QoI).
Allelopathy.
Glyphosate.
11-cis retinal.
Chromophore.
Synthetic dyes.
Methylene blue.
Cryptochrome.
Pheromone.
Artificial sweeteners.
Miraculin.
Addition polymer.
Condensation polymer.
Polyethylene.
Polypropylene.
Polyvinyl chloride.
Bisphenol A.
Vulcanization.
Kevlar.
Polycarbonate.
Polyhydroxyalkanoates.
Bioplastic.
Nanochemistry.
Allotropy.
Allotropes of carbon.
Carbon nanotube.
Rotaxane.
π-interactions.
Molecular switch.

November 11, 2017 Posted by | Biology, Books, Botany, Chemistry, Medicine, Pharmacology, Zoology

Organic Chemistry (I)

This book‘s a bit longer than most ‘A very short introduction to…‘ publications, and it’s quite dense at times and included a lot of interesting stuff. It took me a while to finish it as I put it away a while back when I hit some of the more demanding content, but I did pick it up later and I really enjoyed most of the coverage. In the end I decided that I wouldn’t be doing the book justice if I were to limit my coverage of it to just one post, so this will be only the first of two posts of coverage of this book, covering roughly the first half of it.

As usual I have included in my post both some observations from the book (…and added a few links to these quotes where I figured they might be helpful) as well as some wiki links to topics discussed in the book.

“Organic chemistry is a branch of chemistry that studies carbon-based compounds in terms of their structure, properties, and synthesis. In contrast, inorganic chemistry covers the chemistry of all the other elements in the periodic table […] carbon-based compounds are crucial to the chemistry of life. [However] organic chemistry has come to be defined as the chemistry of carbon-based compounds, whether they originate from a living system or not. […] To date, 16 million compounds have been synthesized in organic chemistry laboratories across the world, with novel compounds being synthesized every day. […] The list of commodities that rely on organic chemistry include plastics, synthetic fabrics, perfumes, colourings, sweeteners, synthetic rubbers, and many other items that we use every day.”

“For a neutral carbon atom, there are six electrons occupying the space around the nucleus […] The electrons in the outer shell are defined as the valence electrons and these determine the chemical properties of the atom. The valence electrons are easily ‘accessible’ compared to the two electrons in the first shell. […] There is great significance in carbon being in the middle of the periodic table. Elements which are close to the left-hand side of the periodic table can lose their valence electrons to form positive ions. […] Elements on the right-hand side of the table can gain electrons to form negatively charged ions. […] The impetus for elements to form ions is the stability that is gained by having a full outer shell of electrons. […] Ion formation is feasible for elements situated to the left or the right of the periodic table, but it is less feasible for elements in the middle of the table. For carbon to gain a full outer shell of electrons, it would have to lose or gain four valence electrons, but this would require far too much energy. Therefore, carbon achieves a stable, full outer shell of electrons by another method. It shares electrons with other elements to form bonds. Carbon excels in this and can be considered chemistry’s ultimate elemental socialite. […] Carbon’s ability to form covalent bonds with other carbon atoms is one of the principal reasons why so many organic molecules are possible. Carbon atoms can be linked together in an almost limitless way to form a mind-blowing variety of carbon skeletons. […] carbon can form a bond to hydrogen, but it can also form bonds to atoms such as nitrogen, phosphorus, oxygen, sulphur, fluorine, chlorine, bromine, and iodine. As a result, organic molecules can contain a variety of different elements. Further variety can arise because it is possible for carbon to form double bonds or triple bonds to a variety of other atoms. The most common double bonds are formed between carbon and oxygen, carbon and nitrogen, or between two carbon atoms. […] The most common triple bonds are found between carbon and nitrogen, or between two carbon atoms.”

“[C]hirality has huge importance. The two enantiomers of a chiral molecule behave differently when they interact with other chiral molecules, and this has important consequences in the chemistry of life. As an analogy, consider your left and right hands. These are asymmetric in shape and are non-superimposable mirror images. Similarly, a pair of gloves are non-superimposable mirror images. A left hand will fit snugly into a left-hand glove, but not into a right-hand glove. In the molecular world, a similar thing occurs. The proteins in our bodies are chiral molecules which can distinguish between the enantiomers of other molecules. For example, enzymes can distinguish between the two enantiomers of a chiral compound and catalyse a reaction with one of the enantiomers but not the other.”

“A key concept in organic chemistry is the functional group. A functional group is essentially a distinctive arrangement of atoms and bonds. […] Functional groups react in particular ways, and so it is possible to predict how a molecule might react based on the functional groups that are present. […] it is impossible to build a molecule atom by atom. Instead, target molecules are built by linking up smaller molecules. […] The organic chemist needs to have a good understanding of the reactions that are possible between different functional groups when choosing the molecular building blocks to be used for a synthesis. […] There are many […] reasons for carrying out FGTs [functional group transformations], especially when synthesizing complex molecules. For example, a starting material or a synthetic intermediate may lack a functional group at a key position of the molecular structure. Several reactions may then be required to introduce that functional group. On other occasions, a functional group may be added to a particular position then removed at a later stage. One reason for adding such a functional group would be to block an unwanted reaction at that position of the molecule. Another common situation is where a reactive functional group is converted to a less reactive functional group such that it does not interfere with a subsequent reaction. Later on, the original functional group is restored by another functional group transformation. This is known as a protection/deprotection strategy. The more complex the target molecule, the greater the synthetic challenge. Complexity is related to the number of rings, functional groups, substituents, and chiral centres that are present. […] The more reactions that are involved in a synthetic route, the lower the overall yield. […] retrosynthesis is a strategy by which organic chemists design a synthesis before carrying it out in practice. It is called retrosynthesis because the design process involves studying the target structure and working backwards to identify how that molecule could be synthesized from simpler starting materials. […] a key stage in retrosynthesis is identifying a bond that can be ‘disconnected’ to create those simpler molecules.”

“[V]ery few reactions produce the spectacular visual and audible effects observed in chemistry demonstrations. More typically, reactions involve mixing together two colourless solutions to produce another colourless solution. Temperature changes are a bit more informative. […] However, not all reactions generate heat, and monitoring the temperature is not a reliable way of telling whether the reaction has gone to completion or not. A better approach is to take small samples of the reaction solution at various times and to test these by chromatography or spectroscopy. […] If a reaction is taking place very slowly, different reaction conditions could be tried to speed it up. This could involve heating the reaction, carrying out the reaction under pressure, stirring the contents vigorously, ensuring that the reaction is carried out in a dry atmosphere, using a different solvent, using a catalyst, or using one of the reagents in excess. […] There are a large number of variables that can affect how efficiently reactions occur, and organic chemists in industry are often employed to develop the ideal conditions for a specific reaction. This is an area of organic chemistry known as chemical development. […] Once a reaction has been carried out, it is necessary to isolate and purify the reaction product. This often proves more time-consuming than carrying out the reaction itself. Ideally, one would remove the solvent used in the reaction and be left with the product. However, in most reactions this is not possible as other compounds are likely to be present in the reaction mixture. […] it is usually necessary to carry out procedures that will separate and isolate the desired product from these other compounds. This is known as ‘working up’ the reaction.”

“Proteins are large molecules (macromolecules) which serve a myriad of purposes, and are essentially polymers constructed from molecular building blocks called amino acids […]. In humans, there are twenty different amino acids having the same ‘head group’, consisting of a carboxylic acid and an amine attached to the same carbon atom […] The amino acids are linked up by the carboxylic acid of one amino acid reacting with the amine group of another to form an amide link. Since a protein is being produced, the amide bond is called a peptide bond, and the final protein consists of a polypeptide chain (or backbone) with different side chains ‘hanging off’ the chain […]. The sequence of amino acids present in the polypeptide sequence is known as the primary structure. Once formed, a protein folds into a specific 3D shape […] Nucleic acids […] are another form of biopolymer, and are formed from molecular building blocks called nucleotides. These link up to form a polymer chain where the backbone consists of alternating sugar and phosphate groups. There are two forms of nucleic acid — deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). In DNA, the sugar is deoxyribose, whereas the sugar in RNA is ribose. Each sugar ring has a nucleic acid base attached to it. For DNA, there are four different nucleic acid bases called adenine (A), thymine (T), cytosine (C), and guanine (G) […]. These bases play a crucial role in the overall structure and function of nucleic acids. […] DNA is actually made up of two DNA strands […] where the sugar-phosphate backbones are intertwined to form a double helix. The nucleic acid bases point into the centre of the helix, and each nucleic acid base ‘pairs up’ with a nucleic acid base on the opposite strand through hydrogen bonding. The base pairing is specifically between adenine and thymine, or between cytosine and guanine. This means that one polymer strand is complementary to the other, a feature that is crucial to DNA’s function as the storage molecule for genetic information. […] [E]ach strand […] act as the template for the creation of a new strand to produce two identical ‘daughter’ DNA double helices […] [A] genetic alphabet of four letters (A, T, G, C) […] code for twenty amino acids. […] [A]n amino acid is coded, not by one nucleotide, but by a set of three. The number of possible triplet combinations using four ‘letters’ is more than enough to encode all the amino acids.”
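
(The arithmetic behind that last sentence, spelled out: with four bases read three at a time there are 4³ = 64 possible codons, comfortably more than the 20 amino acids (plus stop signals) that need encoding, which is why the genetic code is degenerate, with most amino acids specified by more than one codon.)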

“Proteins have a variety of functions. Some proteins, such as collagen, keratin, and elastin, have a structural role. Others catalyse life’s chemical reactions and are called enzymes. They have a complex 3D shape, which includes a cavity called the active site […]. This is where the enzyme binds the molecules (substrates) that undergo the enzyme-catalysed reaction. […] A substrate has to have the correct shape to fit an enzyme’s active site, but it also needs binding groups to interact with that site […]. These interactions hold the substrate in the active site long enough for a reaction to occur, and typically involve hydrogen bonds, as well as van der Waals and ionic interactions. When a substrate binds, the enzyme normally undergoes an induced fit. In other words, the shape of the active site changes slightly to accommodate the substrate, and to hold it as tightly as possible. […] Once a substrate is bound to the active site, amino acids in the active site catalyse the subsequent reaction.”

“Proteins called receptors are involved in chemical communication between cells and respond to chemical messengers called neurotransmitters if they are released from nerves, or hormones if they are released by glands. Most receptors are embedded in the cell membrane, with part of their structure exposed on the outer surface of the cell membrane, and another part exposed on the inner surface. On the outer surface they contain a binding site that binds the molecular messenger. An induced fit then takes place that activates the receptor. This is very similar to what happens when a substrate binds to an enzyme […] The induced fit is crucial to the mechanism by which a receptor conveys a message into the cell — a process known as signal transduction. By changing shape, the protein initiates a series of molecular events that influences the internal chemistry within the cell. For example, some receptors are part of multiprotein complexes called ion channels. When the receptor changes shape, it causes the overall ion channel to change shape. This opens up a central pore allowing ions to flow across the cell membrane. The ion concentration within the cell is altered, and that affects chemical reactions within the cell, which ultimately lead to observable results such as muscle contraction. Not all receptors are membrane-bound. For example, steroid receptors are located within the cell. This means that steroid hormones need to cross the cell membrane in order to reach their target receptors. Transport proteins are also embedded in cell membranes and are responsible for transporting polar molecules such as amino acids into the cell. They are also important in controlling nerve action since they allow nerves to capture released neurotransmitters, such that they have a limited period of action.”

“RNA […] is crucial to protein synthesis (translation). There are three forms of RNA — messenger RNA (mRNA), transfer RNA (tRNA), and ribosomal RNA (rRNA). mRNA carries the genetic code for a particular protein from DNA to the site of protein production. Essentially, mRNA is a single-strand copy of a specific section of DNA. The process of copying that information is known as transcription. tRNA decodes the triplet code on mRNA by acting as a molecular adaptor. At one end of tRNA, there is a set of three bases (the anticodon) that can base pair to a set of three bases on mRNA (the codon). An amino acid is linked to the other end of the tRNA and the type of amino acid present is related to the anticodon that is present. When tRNA with the correct anticodon base pairs to the codon on mRNA, it brings the amino acid encoded by that codon. rRNA is a major constituent of a structure called a ribosome, which acts as the factory for protein production. The ribosome binds mRNA then coordinates and catalyses the translation process.”

Organic chemistry.
Carbon.
Stereochemistry.
Delocalization.
Hydrogen bond.
Van der Waals forces.
Ionic bonding.
Chemoselectivity.
Coupling reaction.
Chemical polarity.
Crystallization.
Elemental analysis.
NMR spectroscopy.
Polymerization.
Miller–Urey experiment.
Vester-Ulbricht hypothesis.
Oligonucleotide.
RNA world.
Ribozyme.

November 9, 2017 Posted by | Biology, Books, Chemistry, Genetics

Molecules

This book is almost exclusively devoted to covering biochemistry topics. When the coverage is decent I find biochemistry reasonably interesting – for example I really liked Beer, Björk & Beardall’s photosynthesis book – and the coverage here was okay, but not more than that. I think that Ball was trying to cover a bit too much ground, or perhaps that there was really too much ground to cover for it to even make sense to try to write a book on this particular topic in a series like this. I learned a lot though.

As usual I’ve added some quotes from the coverage below, as well as some additional links to topics/concepts/people/etc. covered in the book.

“Most atoms on their own are highly reactive – they have a predisposition to join up with other atoms. Molecules are collectives of atoms, firmly welded together into assemblies that may contain anything up to many millions of them. […] By molecules, we generally mean assemblies of a discrete, countable number of atoms. […] Some pure elements adopt molecular forms; others do not. As a rough rule of thumb, metals are non-molecular […] whereas non-metals are molecular. […] molecules are the smallest units of meaning in chemistry. It is through molecules, not atoms, that one can tell stories in the sub-microscopic world. They are the words; atoms are just the letters. […] most words are distinct aggregates of several letters arranged in a particular order. We often find that longer words convey subtler and more finely nuanced meanings. And in molecules, as in words, the order in which the component parts are put together matters: ‘save’ and ‘vase’ do not mean the same thing.”

“There are something like 60,000 different varieties of protein molecule in human cells, each conducting a highly specialized task. It would generally be impossible to guess what this task is merely by looking at a protein. They are undistinguished in appearance, mostly globular in shape […] and composed primarily of carbon, hydrogen, nitrogen, oxygen, and a little sulphur. […] There are twenty varieties of amino acids in natural proteins. In the chain, one amino acid is linked to the next via a covalent bond called a peptide bond. Both molecules shed a few extraneous atoms to make this linkage, and the remainder – another link in the chain – is called a residue. The chain itself is termed a polypeptide. Any string of amino acid residues is a polypeptide. […] In a protein the order of amino acids along the chain – the sequence – is not arbitrary. It is selected […] to ensure that the chain will collapse and curl up in water into the precisely determined globular form of the protein, with all parts of the chain in the right place. This shape can be destroyed by warming the protein, a process called denaturation. But many proteins will fold up again spontaneously into the same globular structure when cooled. In other words, the chain has a kind of memory of its folded shape. The details of this folding process are still not fully understood – it is, in fact, one of the central unsolved puzzles of molecular biology. […] proteins are made not in the [cell] nucleus but in a different compartment called the endoplasmic reticulum […]. The gene is transcribed first into a molecule related to DNA, called RNA (ribonucleic acid). The RNA molecules travel from the nucleus to the endoplasmic reticulum, where they are translated to proteins. The proteins are then shipped off to where they are needed.”

“[M]icrofibrils aggregate together in various ways. For example, they can gather in a staggered arrangement to form thick strands called banded fibrils. […] Banded fibrils constitute the connective tissues between cells – they are the cables that hold our flesh together. Bone consists of collagen banded fibrils sprinkled with tiny crystals of the mineral hydroxyapatite, which is basically calcium phosphate. Because of the high protein content of bone, it is flexible and resilient as well as hard. […] In contrast to the disorderly tangle of connective tissue, the eye’s cornea contains collagen fibrils packed side by side in an orderly manner. These fibrils are too small to scatter light, and so the material is virtually transparent. The basic design principle – one that recurs often in nature – is that, by tinkering with the chemical composition and, most importantly, the hierarchical arrangement of the same basic molecules, it is possible to extract several different kinds of material properties. […] cross-links determine the strength of the material: hair and fingernail are more highly cross-linked than skin. Curly or frizzy hair can be straightened by breaking some of [the] sulphur cross-links to make the hairs more pliable. […] Many of the body’s structural fabrics are proteins. Unlike enzymes, structural proteins do not have to conduct any delicate chemistry, but must simply be (for instance) tough, or flexible, or waterproof. In principle many other materials besides proteins would suffice; and indeed, plants use cellulose (a sugar-based polymer) to make their tissues.”

“In many ways, it is metabolism and not replication that provides the best working definition of life. Evolutionary biologists would say that we exist in order to reproduce – but we are not, even the most amorous of us, trying to reproduce all the time. Yet, if we stop metabolizing, even for a minute or two, we are done for. […] Whether waking or asleep, our bodies stay close to a healthy temperature of 37 °C. There is only one way of doing this: our cells are constantly pumping out heat, a by-product of metabolism. Heat is not really the point here – it is simply unavoidable, because all conversion of energy from one form to another squanders some of it this way. Our metabolic processes are primarily about making molecules. Cells cannot survive without constantly reinventing themselves: making new amino acids for proteins, new lipids for membranes, new nucleic acids so that they can divide.”

“In the body, combustion takes place in a tightly controlled, graded sequence of steps, and some chemical energy is drawn off and stored at each stage. […] A power station burns coal, oil, or gas […]. Burning is just a means to an end. The heat is used to turn water into steam; the pressure of the steam drives turbines; the turbines spin and send wire coils whirling in the arms of great magnets, which induces an electrical current in the wire. Energy is passed on, from chemical to heat to mechanical to electrical. And every plant has a barrage of regulatory and safety mechanisms. There are manual checks on pressure gauges and on the structural integrity of moving parts. Automatic sensors make the measurements. Failsafe devices avert catastrophic failure. Energy generation in the cell is every bit as complicated. […] The cell seems to have thought of everything, and has protein devices for fine-tuning it all.”

“ATP is the key to the maintenance of cellular integrity and organization, and so the cell puts a great deal of effort into making as much of it as possible from each molecule of glucose that it burns. About 40 per cent of the energy released by the combustion of food is conserved in ATP molecules. ATP is rich in energy because it is like a coiled spring. It contains three phosphate groups, linked like so many train carriages. Each of these phosphate groups has a negative charge; this means that they repel one another. But because they are joined by chemical bonds, they cannot escape one another […]. Straining to get away, the phosphates pack an energetically powerful punch. […] The links between phosphates can be snipped in a reaction that involves water […] called hydrolysis (‘splitting with water’). Each time a bond is hydrolysed, energy is released. Setting free the outermost phosphate converts ATP to adenosine diphosphate (ADP); cleave the second phosphate and it becomes adenosine monophosphate (AMP). Both severances release comparable amounts of energy.”
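
The ‘about 40 per cent’ figure is easy to sanity-check. Here’s a quick back-of-the-envelope calculation in Python (mine, not the book’s); the inputs are standard textbook estimates which do not appear in the quote above (roughly 2,870 kJ/mol released by the complete combustion of glucose, roughly 30.5 kJ/mol of usable energy per ATP, and a theoretical maximum of about 38 ATP per glucose), so treat them as illustrative assumptions only:

    # Rough energy bookkeeping for aerobic glucose metabolism.
    # All input values are standard textbook estimates, not taken from the book.
    glucose_combustion_kj_per_mol = 2870.0   # energy released by burning one mole of glucose
    atp_energy_kj_per_mol = 30.5             # usable energy stored per mole of ATP
    atp_per_glucose = 38                     # theoretical maximum ATP yield per glucose

    energy_stored = atp_per_glucose * atp_energy_kj_per_mol
    fraction = energy_stored / glucose_combustion_kj_per_mol
    print(f"Energy conserved in ATP: {energy_stored:.0f} kJ per mole of glucose "
          f"({fraction:.0%} of the combustion energy)")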

“Burning sugar is a two-stage process, beginning with its transformation to a molecule called pyruvate in a process known as glycolysis […]. This involves a sequence of ten enzyme-catalysed steps. The first five of these split glucose in half […], powered by the consumption of ATP molecules: two of them are ‘decharged’ to ADP for every glucose molecule split. But the conversion of the fragments to pyruvate […] permits ATP to be recouped from ADP. Four ATP molecules are made this way, so that there is an overall gain of two ATP molecules per glucose molecule consumed. Thus glycolysis charges the cell’s batteries. Pyruvate then normally enters the second stage of the combustion process: the citric acid cycle, which requires oxygen. But if oxygen is scarce – that is, under anaerobic conditions – a contingency plan is enacted whereby pyruvate is instead converted to the molecule lactate. […] The first thing a mitochondrion does is convert pyruvate enzymatically to a molecule called acetyl coenzyme A (CoA). The breakdown of fatty acids and glycerides from fats also eventually generates acetyl CoA. The [citric acid] cycle is a sequence of eight enzyme-catalysed reactions that transform acetyl CoA first to citric acid and then to various other molecules, ending with […] oxaloacetate. This end is a new beginning, for oxaloacetate reacts with acetyl CoA to make citric acid. In some of the steps of the cycle, carbon dioxide is generated as a by-product. It dissolves in the bloodstream and is carried off to the lungs to be exhaled. Thus in effect the carbon in the original glucose molecules is syphoned off into the end product carbon dioxide, completing the combustion process. […] Also syphoned off from the cycle are electrons – crudely speaking, the citric acid cycle sends an electrical current to a different part of the mitochondrion. These electrons are used to convert oxygen molecules and positively charged hydrogen ions to water – an energy-releasing process. The energy is captured and used to make ATP in abundance.”
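
The ATP bookkeeping of glycolysis described above amounts to some very simple arithmetic; here it is restated as a few lines of Python (my sketch, using only the numbers from the quote):

    # ATP bookkeeping for glycolysis, per molecule of glucose (numbers from the quote above).
    atp_invested = 2       # ATP 'decharged' to ADP in the first five steps
    atp_recouped = 4       # ATP regenerated when the fragments are converted to pyruvate
    net_atp = atp_recouped - atp_invested
    print(f"Net ATP gain per glucose from glycolysis: {net_atp}")   # prints 2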

“While mammalian cells have fuel-burning factories in the form of mitochondria, the solar-power centres in the cells of plant leaves are compartments called chloroplasts […] chloroplast takes carbon dioxide and water, and from them constructs […] sugar. […] In the first part of photosynthesis, light is used to convert NADP to an electron carrier (NADPH) and to transform ADP to ATP. This is effectively a charging-up process that primes the chloroplast for glucose synthesis. In the second part, ATP and NADPH are used to turn carbon dioxide into sugar, in a cyclic sequence of steps called the Calvin–Benson cycle […] There are several similarities between the processes of aerobic metabolism and photosynthesis. Both consist of two distinct sub-processes with separate evolutionary origins: a linear sequence of reactions coupled to a cyclic sequence that regenerates the molecules they both need. The bridge between glycolysis and the citric acid cycle is the electron-ferrying NAD molecule; the two sub-processes of photosynthesis are bridged by the cycling of an almost identical molecule, NAD phosphate (NADP).”

“Despite the variety of messages that hormones convey, the mechanism by which the signal is passed from a receptor protein at the cell surface to the cell’s interior is the same in almost all cases. It involves a sequence of molecular interactions in which molecules transform one another down a relay chain. In cell biology this is called signal transduction. At the same time as relaying the message, these interactions amplify the signal so that the docking of a single hormone molecule to a receptor creates a big response inside the cell. […] The receptor proteins span the entire width of the membrane; the hormone-binding site protrudes on the outer surface, while the base of the receptor emerges from the inner surface […]. When the receptor binds its target hormone, a shape change is transmitted to the lower face of the protein, which enables it to act as an enzyme. […] The participants of all these processes [G protein, guanosine diphosphate and -triphosphate, adenylate cyclase… – figured it didn’t matter if I left out a few details – US…] are stuck to the cell wall. But cAMP floats freely in the cell’s cytoplasm, and is able to carry the signal into the cell interior. It is called a ‘second messenger’, since it is the agent that relays the signal of the ‘first messenger’ (the hormone) into the community of the cell. Cyclic AMP becomes attached to protein molecules called protein kinases, whereupon they in turn become activated as enzymes. Most protein kinases switch other enzymes on and off by attaching phosphate groups to them – a reaction called phosphorylation. […] The process might sound rather complicated, but it is really nothing more than a molecular relay. The signal is passed from the hormone to its receptor, then to the G protein, on to an enzyme and thence to the second messenger, and further on to a protein kinase, and so forth. The G-protein mechanism of signal transduction was discovered in the 1970s by Alfred Gilman and Martin Rodbell, for which they received the 1994 Nobel Prize for medicine. It represents one of the most widespread means of getting a message across a cell membrane. […] it is not just hormonal signalling that makes use of the G-protein mechanism. Our senses of vision and smell, which also involve the transmission of signals, employ the same switching process.”

“Although axon signals are electrical, they differ from those in the metal wires of electronic circuitry. The axon is basically a tubular cell membrane decorated along its length with channels that let sodium and potassium ions in and out. Some of these ion channels are permanently open; others are ‘gated’, opening or closing in response to electrical signals. And some are not really channels at all but pumps, which actively transport sodium ions out of the cell and potassium ions in. These sodium-potassium pumps can move ions […] powered by ATP. […] Drugs that relieve pain typically engage with inhibitory receptors. Morphine, the main active ingredient of opium, binds to so-called opioid receptors in the spinal cord, which inhibit the transmission of pain signals to the brain. There are also opioid receptors in the brain itself, which is why morphine and related opiate drugs have a mental as well as a somatic effect. These receptors in the brain are the binding sites of peptide molecules called endorphins, which the brain produces in response to pain. Some of these are themselves extremely powerful painkillers. […] Not all pain-relieving drugs (analgesics) work by blocking the pain signal. Some prevent the signal from ever being sent. Pain signals are initiated by peptides called prostaglandins, which are manufactured and released by distressed cells. Aspirin (acetylsalicylic acid) latches onto and inhibits one of the enzymes responsible for prostaglandin synthesis, cutting off the cry of pain at its source. Unfortunately, prostaglandins are also responsible for making the mucus that protects the stomach lining […], so one of the side effects of aspirin is the risk of ulcer formation.”

“Shape changes […] are common when a receptor binds its target. If binding alone is the objective, a big shape change is not terribly desirable, since the internal rearrangements of the receptor make heavy weather of the binding event and may make it harder to achieve. This is why many supramolecular hosts are designed so that they are ‘pre-organized’ to receive their guests, minimizing the shape change caused by binding.”

“The way that a protein chain folds up is determined by its amino-acid sequence […] so the ‘information’ for making a protein is uniquely specified by this sequence. DNA encodes this information using […] groups of three bases [to] represent each amino acid. This is the genetic code.* How a particular protein sequence determines the way its chain folds is not yet fully understood. […] Nevertheless, the principle of information flow in the cell is clear. DNA is a manual of information about proteins. We can think of each chromosome as a separate chapter, each gene as a word in that chapter (they are very long words!), and each sequential group of three bases in the gene as a character in the word. Proteins are translations of the words into another language, whose characters are amino acids. In general, only when the genetic language is translated can we understand what it means.”

“It is thought that only about 2–3 per cent of the entire human genome codes for proteins. […] Some people object to genetic engineering on the grounds that it is ethically wrong to tamper with the fundamental material of life – DNA – whether it is in bacteria, humans, tomatoes, or sheep. One can understand such objections, and it would be arrogant to dismiss them as unscientific. Nevertheless, they do sit uneasily with what we now know about the molecular basis of life. The idea that our genetic make-up is sacrosanct looks hard to sustain once we appreciate how contingent, not to say arbitrary, that make-up is. Our genomes are mostly parasite-riddled junk, full of the detritus of over three billion years of evolution.”

Links:

Roald Hoffmann.
Molecular solid.
Covalent bond.
Visible spectrum.
X-ray crystallography.
Electron microscope.
Valence (chemistry).
John Dalton.
Isomer.
Lysozyme.
Organic chemistry.
Synthetic dye industry/Alizarin.
Paul Ehrlich (staining).
Retrosynthetic analysis. [I would have added a link to ‘rational synthesis’ as well here if there’d been a good article on that topic, but I wasn’t able to find one. Anyway: “Organic chemists call [the] kind of procedure […] in which a starting molecule is converted systematically, bit by bit, to the desired product […] a rational synthesis.”]
Paclitaxel synthesis.
Protein.
Enzyme.
Tryptophan synthase.
Ubiquitin.
Amino acid.
Protein folding.
Peptide bond.
Hydrogen bond.
Nucleotide.
Chromosome.
Structural gene. Regulatory gene.
Operon.
Gregor Mendel.
Mitochondrial DNA.
RNA world.
Ribozyme.
Artificial gene synthesis.
Keratin.
Silk.
Vulcanization.
Aramid.
Microtubule.
Tubulin.
Carbon nanotube.
Amylase/pepsin/glycogen/insulin.
Cytochrome c oxidase.
ATP synthase.
Haemoglobin.
Thylakoid membrane.
Chlorophyll.
Liposome.
TNT.
Motor protein. Dynein. Kinesin.
Sarcomere.
Sliding filament theory of muscle action.
Photoisomerization.
Supramolecular chemistry.
Hormone. Endocrine system.
Neurotransmitter.
Ionophore.
DNA.
Mutation.
Intron. Exon.
Transposon.
Molecular electronics.

October 30, 2017 Posted by | Biology, Books, Botany, Chemistry, Genetics, Neurology, Pharmacology | Leave a comment

Physical chemistry

This is a good book; I really liked it, just as I really liked the other book I’ve read in the series by the same author, the one about the laws of thermodynamics (blog coverage here). I know much, much more about physics than I do about chemistry, and even though some of it was review I learned a lot from this one. Recommended, certainly if you find the quotes below interesting. As usual, I’ve added some observations from the book and some links to topics/people/etc. covered/mentioned in the book below.

Some quotes:

“Physical chemists pay a great deal of attention to the electrons that surround the nucleus of an atom: it is here that the chemical action takes place and the element expresses its chemical personality. […] Quantum mechanics plays a central role in accounting for the arrangement of electrons around the nucleus. The early ‘Bohr model’ of the atom, […] with electrons in orbits encircling the nucleus like miniature planets and widely used in popular depictions of atoms, is wrong in just about every respect—but it is hard to dislodge from the popular imagination. The quantum mechanical description of atoms acknowledges that an electron cannot be ascribed to a particular path around the nucleus, that the planetary ‘orbits’ of Bohr’s theory simply don’t exist, and that some electrons do not circulate around the nucleus at all. […] Physical chemists base their understanding of the electronic structures of atoms on Schrödinger’s model of the hydrogen atom, which was formulated in 1926. […] An atom is often said to be mostly empty space. That is a remnant of Bohr’s model in which a point-like electron circulates around the nucleus; in the Schrödinger model, there is no empty space, just a varying probability of finding the electron at a particular location.”

“No more than two electrons may occupy any one orbital, and if two do occupy that orbital, they must spin in opposite directions. […] this form of the principle [the Pauli exclusion principle – US] […] is adequate for many applications in physical chemistry. At its very simplest, the principle rules out all the electrons of an atom (other than atoms of one-electron hydrogen and two-electron helium) having all their electrons in the 1s-orbital. Lithium, for instance, has three electrons: two occupy the 1s orbital, but the third cannot join them, and must occupy the next higher-energy orbital, the 2s-orbital. With that point in mind, something rather wonderful becomes apparent: the structure of the Periodic Table of the elements unfolds, the principal icon of chemistry. […] The first electron can enter the 1s-orbital, and helium’s (He) second electron can join it. At that point, the orbital is full, and lithium’s (Li) third electron must enter the next higher orbital, the 2s-orbital. The next electron, for beryllium (Be), can join it, but then it too is full. From that point on the next six electrons can enter in succession the three 2p-orbitals. After those six are present (at neon, Ne), all the 2p-orbitals are full and the eleventh electron, for sodium (Na), has to enter the 3s-orbital. […] Similar reasoning accounts for the entire structure of the Table, with elements in the same group all having analogous electron arrangements and each successive row (‘period’) corresponding to the next outermost shell of orbitals.”
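
The filling logic described in that quote is simple enough to turn into code. Below is a small Python sketch of it (mine, not from the book) which builds approximate ground-state electron configurations by filling subshells in the standard Madelung (n + l) order, with at most 2, 6, 10, or 14 electrons per s-, p-, d-, or f-subshell; note that it ignores the handful of real-world exceptions to this ordering, such as chromium and copper:

    # Build an approximate ground-state electron configuration for an atom with z electrons,
    # filling subshells in Madelung order (increasing n + l, then increasing n).
    def electron_configuration(z):
        letters = "spdf"
        capacity = {0: 2, 1: 6, 2: 10, 3: 14}          # max electrons per s, p, d, f subshell
        subshells = [(n, l) for n in range(1, 8) for l in range(0, min(n, 4))]
        subshells.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))   # Madelung rule
        config, remaining = [], z
        for n, l in subshells:
            if remaining == 0:
                break
            electrons = min(capacity[l], remaining)
            config.append(f"{n}{letters[l]}{electrons}")
            remaining -= electrons
        return " ".join(config)

    for z, symbol in [(3, "Li"), (4, "Be"), (10, "Ne"), (11, "Na")]:
        print(symbol, electron_configuration(z))
    # Li 1s2 2s1 / Be 1s2 2s2 / Ne 1s2 2s2 2p6 / Na 1s2 2s2 2p6 3s1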

“[O]n crossing the [Periodic] Table from left to right, atoms become smaller: even though they have progressively more electrons, the nuclear charge increases too, and draws the clouds in to itself. On descending a group, atoms become larger because in successive periods new outermost shells are started (as in going from lithium to sodium) and each new coating of cloud makes the atom bigger […] the ionization energy [is] the energy needed to remove one or more electrons from the atom. […] The ionization energy more or less follows the trend in atomic radii but in an opposite sense because the closer an electron lies to the positively charged nucleus, the harder it is to remove. Thus, ionization energy increases from left to right across the Table as the atoms become smaller. It decreases down a group because the outermost electron (the one that is most easily removed) is progressively further from the nucleus. […] the electron affinity [is] the energy released when an electron attaches to an atom. […] Electron affinities are highest on the right of the Table […] An ion is an electrically charged atom. That charge comes about either because the neutral atom has lost one or more of its electrons, in which case it is a positively charged cation […] or because it has captured one or more electrons and has become a negatively charged anion. […] Elements on the left of the Periodic Table, with their low ionization energies, are likely to lose electrons and form cations; those on the right, with their high electron affinities, are likely to acquire electrons and form anions. […] ionic bonds […] form primarily between atoms on the left and right of the Periodic Table.”

“Although the Schrödinger equation is too difficult to solve for molecules, powerful computational procedures have been developed by theoretical chemists to arrive at numerical solutions of great accuracy. All the procedures start out by building molecular orbitals from the available atomic orbitals and then setting about finding the best formulations. […] Depictions of electron distributions in molecules are now commonplace and very helpful for understanding the properties of molecules. It is particularly relevant to the development of new pharmacologically active drugs, where electron distributions play a central role […] Drug discovery, the identification of pharmacologically active species by computation rather than in vivo experiment, is an important target of modern computational chemistry.”

“Work […] involves moving against an opposing force; heat […] is the transfer of energy that makes use of a temperature difference. […] the internal energy of a system that is isolated from external influences does not change. That is the First Law of thermodynamics. […] A system possesses energy, it does not possess work or heat (even if it is hot). Work and heat are two different modes for the transfer of energy into or out of a system. […] if you know the internal energy of a system, then you can calculate its enthalpy simply by adding to U the product of pressure and volume of the system (H = U + pV). The significance of the enthalpy […] is that a change in its value is equal to the output of energy as heat that can be obtained from the system provided it is kept at constant pressure. For instance, if the enthalpy of a system falls by 100 joules when it undergoes a certain change (such as a chemical reaction), then we know that 100 joules of energy can be extracted as heat from the system, provided the pressure is constant.”
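
A minimal numerical illustration of the H = U + pV bookkeeping (my own sketch; the numbers are invented, but chosen so that the enthalpy falls by 100 joules, as in the book’s example):

    # Enthalpy from internal energy: H = U + pV, so at constant pressure dH = dU + p*dV,
    # and that change equals the heat exchanged. Invented numbers for illustration.
    p = 100_000.0        # pressure, Pa (roughly 1 atm)
    delta_U = -250.0     # change in internal energy, J
    delta_V = 1.5e-3     # change in volume, m^3 (the system expands slightly)

    delta_H = delta_U + p * delta_V
    print(f"Enthalpy change at constant pressure: {delta_H:.0f} J")
    # -100 J: the system can give out 100 J as heat, as in the book's example.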

“In the old days of physical chemistry (well into the 20th century), the enthalpy changes were commonly estimated by noting which bonds are broken in the reactants and which are formed to make the products, so A → B might be the bond-breaking step and B → C the new bond-formation step, each with enthalpy changes calculated from knowledge of the strengths of the old and new bonds. That procedure, while often a useful rule of thumb, often gave wildly inaccurate results because bonds are sensitive entities with strengths that depend on the identities and locations of the other atoms present in molecules. Computation now plays a central role: it is now routine to be able to calculate the difference in energy between the products and reactants, especially if the molecules are isolated as a gas, and that difference easily converted to a change of enthalpy. […] Enthalpy changes are very important for a rational discussion of changes in physical state (vaporization and freezing, for instance) […] If we know the enthalpy change taking place during a reaction, then provided the process takes place at constant pressure we know how much energy is released as heat into the surroundings. If we divide that heat transfer by the temperature, then we get the associated entropy change in the surroundings. […] provided the pressure and temperature are constant, a spontaneous change corresponds to a decrease in Gibbs energy. […] the chemical potential can be thought of as the Gibbs energy possessed by a standard-size block of sample. (More precisely, for a pure substance the chemical potential is the molar Gibbs energy, the Gibbs energy per mole of atoms or molecules.)”
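
Here’s a small sketch (mine) of the accounting described above: divide the heat dumped into the surroundings by the temperature to get the entropy change of the surroundings, and use the standard relation ΔG = ΔH − TΔS (not spelled out in the quote) to decide whether a process at constant temperature and pressure is spontaneous. The numbers are invented for illustration:

    # Entropy change of the surroundings and Gibbs energy change for a process
    # at constant temperature and pressure. Invented numbers for illustration.
    T = 298.0                 # temperature, K
    delta_H = -100_000.0      # enthalpy change of the system, J (exothermic)
    delta_S_system = -150.0   # entropy change of the system, J/K

    delta_S_surroundings = -delta_H / T        # heat released into the surroundings / temperature
    delta_G = delta_H - T * delta_S_system     # Gibbs energy change of the system

    print(f"Entropy change of the surroundings: {delta_S_surroundings:.0f} J/K")
    print(f"Gibbs energy change: {delta_G:.0f} J "
          f"({'spontaneous' if delta_G < 0 else 'not spontaneous'} at constant T and p)")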

“There are two kinds of work. One kind is the work of expansion that occurs when a reaction generates a gas and pushes back the atmosphere (perhaps by pressing out a piston). That type of work is called ‘expansion work’. However, a chemical reaction might do work other than by pushing out a piston or pushing back the atmosphere. For instance, it might do work by driving electrons through an electric circuit connected to a motor. This type of work is called ‘non-expansion work’. […] a change in the Gibbs energy of a system at constant temperature and pressure is equal to the maximum non-expansion work that can be done by the reaction. […] the link of thermodynamics with biology is that one chemical reaction might do the non-expansion work of building a protein from amino acids. Thus, a knowledge of the Gibbs energy changes accompanying metabolic processes is very important in bioenergetics, and much more important than knowing the enthalpy changes alone (which merely indicate a reaction’s ability to keep us warm).”

“[T]he probability that a molecule will be found in a state of particular energy falls off rapidly with increasing energy, so most molecules will be found in states of low energy and very few will be found in states of high energy. […] If the temperature is low, then the distribution declines so rapidly that only the very lowest levels are significantly populated. If the temperature is high, then the distribution falls off very slowly with increasing energy, and many high-energy states are populated. If the temperature is zero, the distribution has all the molecules in the ground state. If the temperature is infinite, all available states are equally populated. […] temperature […] is the single, universal parameter that determines the most probable distribution of molecules over the available states.”
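
The temperature dependence of the Boltzmann distribution described here is easy to see numerically. A few lines of Python (my sketch, with an arbitrary ladder of equally spaced energy levels):

    import math

    # Boltzmann populations, p_i proportional to exp(-E_i / kT), for equally spaced energy levels.
    k = 1.381e-23                                  # Boltzmann constant, J/K
    energies = [i * 1.0e-21 for i in range(5)]     # five arbitrary, equally spaced levels (J)

    def populations(T):
        weights = [math.exp(-E / (k * T)) for E in energies]
        total = sum(weights)
        return [w / total for w in weights]

    for T in (50, 300, 3000):
        print(f"{T} K:", [round(p, 3) for p in populations(T)])
    # At low T nearly everything sits in the lowest level; at high T the populations even out.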

“Mixing adds disorder and increases the entropy of the system and therefore lowers the Gibbs energy […] In the absence of mixing, a reaction goes to completion; when mixing of reactants and products is taken into account, equilibrium is reached when both are present […] Statistical thermodynamics, through the Boltzmann distribution and its dependence on temperature, allows physical chemists to understand why in some cases the equilibrium shifts towards reactants (which is usually unwanted) or towards products (which is normally wanted) as the temperature is raised. A rule of thumb […] is provided by a principle formulated by Henri Le Chatelier […] that a system at equilibrium responds to a disturbance by tending to oppose its effect. Thus, if a reaction releases energy as heat (is ‘exothermic’), then raising the temperature will oppose the formation of more products; if the reaction absorbs energy as heat (is ‘endothermic’), then raising the temperature will encourage the formation of more product.”
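
Le Chatelier’s rule of thumb for temperature can be made quantitative with van’t Hoff’s equation, which the book returns to a couple of quotes further down. A small sketch of that equation (mine, with invented numbers for an exothermic reaction):

    import math

    # van't Hoff equation: ln(K2/K1) = -(delta_H/R) * (1/T2 - 1/T1). Invented numbers.
    R = 8.314               # gas constant, J/(mol K)
    delta_H = -50_000.0     # standard reaction enthalpy, J/mol (negative: exothermic)
    K1, T1 = 100.0, 298.0   # assumed equilibrium constant at room temperature
    T2 = 350.0              # a higher temperature

    K2 = K1 * math.exp(-delta_H / R * (1.0 / T2 - 1.0 / T1))
    print(f"K drops from {K1:.0f} at {T1:.0f} K to about {K2:.0f} at {T2:.0f} K")
    # For an exothermic reaction, heating shifts the equilibrium away from products,
    # just as Le Chatelier's principle suggests.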

“Model building pervades physical chemistry […] some hold that the whole of science is based on building models of physical reality; much of physical chemistry certainly is.”

“For reasonably light molecules (such as the major constituents of air, N2 and O2) at room temperature, the molecules are whizzing around at an average speed of about 500 m/s (about 1000 mph). That speed is consistent with what we know about the propagation of sound, the speed of which is about 340 m/s through air: for sound to propagate, molecules must adjust their position to give a wave of undulating pressure, so the rate at which they do so must be comparable to their average speeds. […] a typical N2 or O2 molecule in air makes a collision every nanosecond and travels about 1000 molecular diameters between collisions. To put this scale into perspective: if a molecule is thought of as being the size of a tennis ball, then it travels about the length of a tennis court between collisions. Each molecule makes about a billion collisions a second.”
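
The ‘about 500 m/s’ figure follows from the kinetic theory of gases (see the links below). A quick check in Python – my sketch, using the standard formula for the mean molecular speed, which is not quoted in the book’s text:

    import math

    # Mean molecular speed from kinetic theory: v_mean = sqrt(8RT / (pi * M)).
    R = 8.314     # gas constant, J/(mol K)
    T = 298.0     # room temperature, K

    for name, molar_mass in [("N2", 0.028), ("O2", 0.032)]:   # molar masses, kg/mol
        v_mean = math.sqrt(8 * R * T / (math.pi * molar_mass))
        print(f"{name}: mean speed ≈ {v_mean:.0f} m/s")
    # Roughly 475 m/s for N2 and 444 m/s for O2, consistent with 'about 500 m/s'.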

“X-ray diffraction makes use of the fact that electromagnetic radiation (which includes X-rays) consists of waves that can interfere with one another and give rise to regions of enhanced and diminished intensity. This so-called ‘diffraction pattern’ is characteristic of the object in the path of the rays, and mathematical procedures can be used to interpret the pattern in terms of the object’s structure. Diffraction occurs when the wavelength of the radiation is comparable to the dimensions of the object. X-rays have wavelengths comparable to the separation of atoms in solids, so are ideal for investigating their arrangement.”

“For most liquids the sample contracts when it freezes, so […] the temperature does not need to be lowered so much for freezing to occur. That is, the application of pressure raises the freezing point. Water, as in most things, is anomalous, and ice is less dense than liquid water, so water expands when it freezes […] when two gases are allowed to occupy the same container they invariably mix and each spreads uniformly through it. […] the quantity of gas that dissolves in any liquid is proportional to the pressure of the gas. […] When the temperature of [a] liquid is raised, it is easier for a dissolved molecule to gather sufficient energy to escape back up into the gas; the rate of impacts from the gas is largely unchanged. The outcome is a lowering of the concentration of dissolved gas at equilibrium. Thus, gases appear to be less soluble in hot water than in cold. […] the presence of dissolved substances affects the properties of solutions. For instance, the everyday experience of spreading salt on roads to hinder the formation of ice makes use of the lowering of freezing point of water when a salt is present. […] the boiling point is raised by the presence of a dissolved substance [whereas] the freezing point […] is lowered by the presence of a solute.”

“When a liquid and its vapour are present in a closed container the vapour exerts a characteristic pressure (when the escape of molecules from the liquid matches the rate at which they splash back down into it […]). This characteristic pressure depends on the temperature and is called the ‘vapour pressure’ of the liquid. When a solute is present, the vapour pressure at a given temperature is lower than that of the pure liquid […] The extent of lowering is summarized by yet another limiting law of physical chemistry, ‘Raoult’s law’ [which] states that the vapour pressure of a solvent or of a component of a liquid mixture is proportional to the proportion of solvent or liquid molecules present. […] Osmosis [is] the tendency of solvent molecules to flow from the pure solvent to a solution separated from it by a [semi-]permeable membrane […] The entropy when a solute is present in a solvent is higher than when the solute is absent, so an increase in entropy, and therefore a spontaneous process, is achieved when solvent flows through the membrane from the pure liquid into the solution. The tendency for this flow to occur can be overcome by applying pressure to the solution, and the minimum pressure needed to overcome the tendency to flow is called the ‘osmotic pressure’. If one solution is put into contact with another through a semipermeable membrane, then there will be no net flow if they exert the same osmotic pressures and are ‘isotonic’.”
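
A small numerical sketch of the two laws mentioned here (mine; the formulas are the standard textbook ones – Raoult’s law, p = x·p*, and the van ’t Hoff expression π = cRT for the osmotic pressure of a dilute solution – and the numbers are invented):

    # Raoult's law and osmotic pressure for a dilute aqueous solution. Invented numbers.
    R = 8.314                 # gas constant, J/(mol K)
    T = 298.0                 # temperature, K

    # Raoult's law: the solvent's vapour pressure is proportional to its mole fraction.
    p_pure_water = 3170.0     # vapour pressure of pure water near 25 C, Pa (approximate)
    x_water = 0.98            # mole fraction of water in the solution
    p_solution = x_water * p_pure_water
    print(f"Vapour pressure lowered from {p_pure_water:.0f} Pa to {p_solution:.0f} Pa")

    # Osmotic pressure of a dilute solution (van 't Hoff equation): pi = c * R * T.
    concentration = 100.0     # solute concentration, mol/m^3 (i.e. 0.1 mol/L)
    osmotic_pressure = concentration * R * T
    print(f"Osmotic pressure ≈ {osmotic_pressure / 1000:.0f} kPa")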

“Broadly speaking, the reaction quotient [‘Q’] is the ratio of concentrations, with product concentrations divided by reactant concentrations. It takes into account how the mingling of the reactants and products affects the total Gibbs energy of the mixture. The value of Q that corresponds to the minimum in the Gibbs energy […] is called the equilibrium constant and denoted K. The equilibrium constant, which is characteristic of a given reaction and depends on the temperature, is central to many discussions in chemistry. When K is large (1000, say), we can be reasonably confident that the equilibrium mixture will be rich in products; if K is small (0.001, say), then there will be hardly any products present at equilibrium and we should perhaps look for another way of making them. If K is close to 1, then both reactants and products will be abundant at equilibrium and will need to be separated. […] Equilibrium constants vary with temperature but not […] with pressure. […] van’t Hoff’s equation implies that if the reaction is strongly exothermic (releases a lot of energy as heat when it takes place), then the equilibrium constant decreases sharply as the temperature is raised. The opposite is true if the reaction is strongly endothermic (absorbs a lot of energy as heat). […] Typically it is found that the rate of a reaction [how fast it progresses] decreases as it approaches equilibrium. […] Most reactions go faster when the temperature is raised. […] reactions with high activation energies proceed slowly at low temperatures but respond sharply to changes of temperature. […] The surface area exposed by a catalyst is important for its function, for it is normally the case that the greater that area, the more effective is the catalyst.”
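
The remark about activation energies and temperature sensitivity is what the Arrhenius equation (linked below) captures. A short sketch of it – mine, with an assumed pre-exponential factor and invented activation energies:

    import math

    # Arrhenius equation: k = A * exp(-Ea / (R * T)).
    R = 8.314      # gas constant, J/(mol K)
    A = 1.0e13     # pre-exponential factor, 1/s (assumed, a typical order of magnitude)

    def rate_constant(Ea, T):
        return A * math.exp(-Ea / (R * T))

    for Ea in (30_000.0, 100_000.0):   # activation energies, J/mol
        speed_up = rate_constant(Ea, 310.0) / rate_constant(Ea, 300.0)
        print(f"Ea = {Ea / 1000:.0f} kJ/mol: warming from 300 K to 310 K "
              f"speeds the reaction up by a factor of about {speed_up:.1f}")
    # Reactions with high activation energies respond much more sharply to temperature.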

Links:

John Dalton.
Atomic orbital.
Electron configuration.
S,p,d,f orbitals.
Computational chemistry.
Atomic radius.
Covalent bond.
Gilbert Lewis.
Valence bond theory.
Molecular orbital theory.
Orbital hybridisation.
Bonding and antibonding orbitals.
Schrödinger equation.
Density functional theory.
Chemical thermodynamics.
Laws of thermodynamics/Zeroth law/First law/Second law/Third Law.
Conservation of energy.
Thermochemistry.
Bioenergetics.
Spontaneous processes.
Entropy.
Rudolf Clausius.
Chemical equilibrium.
Heat capacity.
Compressibility.
Statistical thermodynamics/statistical mechanics.
Boltzmann distribution.
State of matter/gas/liquid/solid.
Perfect gas/Ideal gas law.
Robert Boyle/Joseph Louis Gay-Lussac/Jacques Charles/Amedeo Avogadro.
Equation of state.
Kinetic theory of gases.
Van der Waals equation of state.
Maxwell–Boltzmann distribution.
Thermal conductivity.
Viscosity.
Nuclear magnetic resonance.
Debye–Hückel equation.
Ionic solids.
Catalysis.
Supercritical fluid.
Liquid crystal.
Graphene.
Benoît Paul Émile Clapeyron.
Phase (matter)/phase diagram/Gibbs’ phase rule.
Ideal solution/regular solution.
Henry’s law.
Chemical kinetics.
Electrochemistry.
Rate equation/First order reactions/Second order reactions.
Rate-determining step.
Arrhenius equation.
Collision theory.
Diffusion-controlled and activation-controlled reactions.
Transition state theory.
Photochemistry/fluorescence/phosphorescence/photoexcitation.
Photosynthesis.
Redox reactions.
Electrochemical cell.
Fuel cell.
Reaction dynamics.
Spectroscopy/emission spectroscopy/absorption spectroscopy/Raman spectroscopy.
Raman effect.
Magnetic resonance imaging.
Fourier-transform spectroscopy.
Electron paramagnetic resonance.
Mass spectrum.
Electron spectroscopy for chemical analysis.
Scanning tunneling microscope.
Chemisorption/physisorption.

October 5, 2017 Posted by | Biology, Books, Chemistry, Pharmacology, Physics | Leave a comment

Earth System Science

I decided not to rate this book. Some parts are great, some parts I didn’t think were very good.

I’ve added some quotes and links below. First a few links (I’ve tried not to add links here which I’ve also included in the quotes below):

Carbon cycle.
Origin of water on Earth.
Gaia hypothesis.
Albedo (climate and weather).
Snowball Earth.
Carbonate–silicate cycle.
Carbonate compensation depth.
Isotope fractionation.
CLAW hypothesis.
Mass-independent fractionation.
δ13C.
Great Oxygenation Event.
Acritarch.
Grypania.
Neoproterozoic.
Rodinia.
Sturtian glaciation.
Marinoan glaciation.
Ediacaran biota.
Cambrian explosion.
Quaternary.
Medieval Warm Period.
Little Ice Age.
Eutrophication.
Methane emissions.
Keeling curve.
CO2 fertilization effect.
Acid rain.
Ocean acidification.
Earth systems models.
Clausius–Clapeyron relation.
Thermohaline circulation.
Cryosphere.
The limits to growth.
Exoplanet Biosignature Gases.
Transiting Exoplanet Survey Satellite (TESS).
James Webb Space Telescope.
Habitable zone.
Kepler-186f.

A few quotes from the book:

“The scope of Earth system science is broad. It spans 4.5 billion years of Earth history, how the system functions now, projections of its future state, and ultimate fate. […] Earth system science is […] a deeply interdisciplinary field, which synthesizes elements of geology, biology, chemistry, physics, and mathematics. It is a young, integrative science that is part of a wider 21st-century intellectual trend towards trying to understand complex systems, and predict their behaviour. […] A key part of Earth system science is identifying the feedback loops in the Earth system and understanding the behaviour they can create. […] In systems thinking, the first step is usually to identify your system and its boundaries. […] what is part of the Earth system depends on the timescale being considered. […] The longer the timescale we look over, the more we need to include in the Earth system. […] for many Earth system scientists, the planet Earth is really comprised of two systems — the surface Earth system that supports life, and the great bulk of the inner Earth underneath. It is the thin layer of a system at the surface of the Earth […] that is the subject of this book.”

“Energy is in plentiful supply from the Sun, which drives the water cycle and also fuels the biosphere, via photosynthesis. However, the surface Earth system is nearly closed to materials, with only small inputs to the surface from the inner Earth. Thus, to support a flourishing biosphere, all the elements needed by life must be efficiently recycled within the Earth system. This in turn requires energy, to transform materials chemically and to move them physically around the planet. The resulting cycles of matter between the biosphere, atmosphere, ocean, land, and crust are called global biogeochemical cycles — because they involve biological, geological, and chemical processes. […] The global biogeochemical cycling of materials, fuelled by solar energy, has transformed the Earth system. […] It has made the Earth fundamentally different from its state before life and from its planetary neighbours, Mars and Venus. Through cycling the materials it needs, the Earth’s biosphere has bootstrapped itself into a much more productive state.”

“Each major element important for life has its own global biogeochemical cycle. However, every biogeochemical cycle can be conceptualized as a series of reservoirs (or ‘boxes’) of material connected by fluxes (or flows) of material between them. […] When a biogeochemical cycle is in steady state, the fluxes in and out of each reservoir must be in balance. This allows us to define additional useful quantities. Notably, the amount of material in a reservoir divided by the exchange flux with another reservoir gives the average ‘residence time’ of material in that reservoir with respect to the chosen process of exchange. For example, there are around 7 × 10¹⁶ moles of carbon dioxide (CO2) in today’s atmosphere, and photosynthesis removes around 9 × 10¹⁵ moles of CO2 per year, giving each molecule of CO2 a residence time of roughly eight years in the atmosphere before it is taken up, somewhere in the world, by photosynthesis. […] There are 3.8 × 10¹⁹ moles of molecular oxygen (O2) in today’s atmosphere, and oxidative weathering removes around 1 × 10¹³ moles of O2 per year, giving oxygen a residence time of around four million years with respect to removal by oxidative weathering. This makes the oxygen cycle […] a geological timescale cycle.”
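
The residence-time arithmetic in this passage is just reservoir size divided by flux; here it is spelled out in a couple of lines of Python, using only the numbers quoted above:

    # Residence time = reservoir size / exchange flux (numbers from the quote above).
    co2_in_atmosphere = 7e16      # moles of CO2
    photosynthesis_flux = 9e15    # moles of CO2 removed per year
    o2_in_atmosphere = 3.8e19     # moles of O2
    weathering_flux = 1e13        # moles of O2 removed per year by oxidative weathering

    print(f"CO2 residence time: {co2_in_atmosphere / photosynthesis_flux:.1f} years")   # ~7.8
    print(f"O2 residence time: {o2_in_atmosphere / weathering_flux:.1e} years")         # ~3.8 million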

“The water cycle is the physical circulation of water around the planet, between the ocean (where 97 per cent is stored), atmosphere, ice sheets, glaciers, sea-ice, freshwaters, and groundwater. […] To change the phase of water from solid to liquid or liquid to gas requires energy, which in the climate system comes from the Sun. Equally, when water condenses from gas to liquid or freezes from liquid to solid, energy is released. Solar heating drives evaporation from the ocean. This is responsible for supplying about 90 per cent of the water vapour to the atmosphere, with the other 10 per cent coming from evaporation on the land and freshwater surfaces (and sublimation of ice and snow directly to vapour). […] The water cycle is intimately connected to other biogeochemical cycles […]. Many compounds are soluble in water, and some react with water. This makes the ocean a key reservoir for several essential elements. It also means that rainwater can scavenge soluble gases and aerosols out of the atmosphere. When rainwater hits the land, the resulting solution can chemically weather rocks. Silicate weathering in turn helps keep the climate in a state where water is liquid.”

“In modern terms, plants acquire their carbon from carbon dioxide in the atmosphere, add electrons derived from water molecules to the carbon, and emit oxygen to the atmosphere as a waste product. […] In energy terms, global photosynthesis today captures about 130 terawatts (1 TW = 10¹² W) of solar energy in chemical form — about half of it in the ocean and about half on land. […] All the breakdown pathways for organic carbon together produce a flux of carbon dioxide back to the atmosphere that nearly balances photosynthetic uptake […] The surface recycling system is almost perfect, but a tiny fraction (about 0.1 per cent) of the organic carbon manufactured in photosynthesis escapes recycling and is buried in new sedimentary rocks. This organic carbon burial flux leaves an equivalent amount of oxygen gas behind in the atmosphere. Hence the burial of organic carbon represents the long-term source of oxygen to the atmosphere. […] the Earth’s crust has much more oxygen trapped in rocks in the form of oxidized iron and sulphur, than it has organic carbon. This tells us that there has been a net source of oxygen to the crust over Earth history, which must have come from the loss of hydrogen to space.”

“The oxygen cycle is relatively simple, because the reservoir of oxygen in the atmosphere is so massive that it dwarfs the reservoirs of organic carbon in vegetation, soils, and the ocean. Hence oxygen cannot get used up by the respiration or combustion of organic matter. Even the combustion of all known fossil fuel reserves can only put a small dent in the much larger reservoir of atmospheric oxygen (there are roughly 4 × 10¹⁷ moles of fossil fuel carbon, which is only about 1 per cent of the O2 reservoir). […] Unlike oxygen, the atmosphere is not the major surface reservoir of carbon. The amount of carbon in global vegetation is comparable to that in the atmosphere and the amount of carbon in soils (including permafrost) is roughly four times that in the atmosphere. Even these reservoirs are dwarfed by the ocean, which stores forty-five times as much carbon as the atmosphere, thanks to the fact that CO2 reacts with seawater. […] The exchange of carbon between the atmosphere and the land is largely biological, involving photosynthetic uptake and release by aerobic respiration (and, to a lesser extent, fires). […] Remarkably, when we look over Earth history there are fluctuations in the isotopic composition of carbonates, but no net drift up or down. This suggests that there has always been roughly one-fifth of carbon being buried in organic form and the other four-fifths as carbonate rocks. Thus, even on the early Earth, the biosphere was productive enough to support a healthy organic carbon burial flux.”

“The two most important nutrients for life are phosphorus and nitrogen, and they have very different biogeochemical cycles […] The largest reservoir of nitrogen is in the atmosphere, whereas the heavier phosphorus has no significant gaseous form. Phosphorus thus presents a greater recycling challenge for the biosphere. All phosphorus enters the surface Earth system from the chemical weathering of rocks on land […]. Phosphorus is concentrated in rocks in grains or veins of the mineral apatite. Natural selection has made plants on land and their fungal partners […] very effective at acquiring phosphorus from rocks, by manufacturing and secreting a range of organic acids that dissolve apatite. […] The average terrestrial ecosystem recycles phosphorus roughly fifty times before it is lost into freshwaters. […] The loss of phosphorus from the land is the ocean’s gain, providing the key input of this essential nutrient. Phosphorus is stored in the ocean as phosphate dissolved in the water. […] removal of phosphorus into the rock cycle balances the weathering of phosphorus from rocks on land. […] Although there is a large reservoir of nitrogen in the atmosphere, the molecules of nitrogen gas (N2) are extremely strongly bonded together, making nitrogen unavailable to most organisms. To split N2 and make nitrogen biologically available requires a remarkable biochemical feat — nitrogen fixation — which uses a lot of energy. In the ocean the dominant nitrogen fixers are cyanobacteria with a direct source of energy from sunlight. On land, various plants form a symbiotic partnership with nitrogen fixing bacteria, making a home for them in root nodules and supplying them with food in return for nitrogen. […] Nitrogen fixation and denitrification form the major input and output fluxes of nitrogen to both the land and the ocean, but there is also recycling of nitrogen within ecosystems. […] There is an intimate link between nutrient regulation and atmospheric oxygen regulation, because nutrient levels and marine productivity determine the source of oxygen via organic carbon burial. However, ocean nutrients are regulated on a much shorter timescale than atmospheric oxygen because their residence times are much shorter—about 2,000 years for nitrogen and 20,000 years for phosphorus.”

“[F]orests […] are vulnerable to increases in oxygen that increase the frequency and ferocity of fires. […] Combustion experiments show that fires only become self-sustaining in natural fuels when oxygen reaches around 17 per cent of the atmosphere. Yet for the last 370 million years there is a nearly continuous record of fossil charcoal, indicating that oxygen has never dropped below this level. At the same time, oxygen has never risen too high for fires to have prevented the slow regeneration of forests. The ease of combustion increases non-linearly with oxygen concentration, such that above 25–30 per cent oxygen (depending on the wetness of fuel) it is hard to see how forests could have survived. Thus oxygen has remained within 17–30 per cent of the atmosphere for at least the last 370 million years.”

“[T]he rate of silicate weathering increases with increasing CO2 and temperature. Thus, if something tends to increase CO2 or temperature it is counteracted by increased CO2 removal by silicate weathering. […] Plants are sensitive to variations in CO2 and temperature, and together with their fungal partners they greatly amplify weathering rates […] the most pronounced change in atmospheric CO2 over Phanerozoic time was due to plants colonizing the land. This started around 470 million years ago and escalated with the first forests 370 million years ago. The resulting acceleration of silicate weathering is estimated to have lowered the concentration of atmospheric CO2 by an order of magnitude […], and cooled the planet into a series of ice ages in the Carboniferous and Permian Periods.”

“The first photosynthesis was not the kind we are familiar with, which splits water and spits out oxygen as a waste product. Instead, early photosynthesis was ‘anoxygenic’ — meaning it didn’t produce oxygen. […] It could have used a range of compounds, in place of water, as a source of electrons with which to fix carbon from carbon dioxide and reduce it to sugars. Potential electron donors include hydrogen (H2) and hydrogen sulphide (H2S) in the atmosphere, or ferrous iron (Fe2+) dissolved in the ancient oceans. All of these are easier to extract electrons from than water. Hence they require fewer photons of sunlight and simpler photosynthetic machinery. The phylogenetic tree of life confirms that several forms of anoxygenic photosynthesis evolved very early on, long before oxygenic photosynthesis. […] If the early biosphere was fuelled by anoxygenic photosynthesis, plausibly based on hydrogen gas, then a key recycling process would have been the biological regeneration of this gas. Calculations suggest that once such recycling had evolved, the early biosphere might have achieved a global productivity up to 1 per cent of the modern marine biosphere. If early anoxygenic photosynthesis used the supply of reduced iron upwelling in the ocean, then its productivity would have been controlled by ocean circulation and might have reached 10 per cent of the modern marine biosphere. […] The innovation that supercharged the early biosphere was the origin of oxygenic photosynthesis using abundant water as an electron donor. This was not an easy process to evolve. To split water requires more energy — i.e. more high-energy photons of sunlight — than any of the earlier anoxygenic forms of photosynthesis. Evolution’s solution was to wire together two existing ‘photosystems’ in one cell and bolt on the front of them a remarkable piece of biochemical machinery that can rip apart water molecules. The result was the first cyanobacterial cell — the ancestor of all organisms performing oxygenic photosynthesis on the planet today. […] Once oxygenic photosynthesis had evolved, the productivity of the biosphere would no longer have been restricted by the supply of substrates for photosynthesis, as water and carbon dioxide were abundant. Instead, the availability of nutrients, notably nitrogen and phosphorus, would have become the major limiting factors on the productivity of the biosphere — as they still are today.” [If you’re curious to know more about how that fascinating ‘biochemical machinery’ works, this is a great book on these and related topics – US].

“On Earth, anoxygenic photosynthesis requires one photon per electron, whereas oxygenic photosynthesis requires two photons per electron. On Earth it took up to a billion years to evolve oxygenic photosynthesis, based on two photosystems that had already evolved independently in different types of anoxygenic photosynthesis. Around a fainter K- or M-type star […] oxygenic photosynthesis is estimated to require three or more photons per electron — and a corresponding number of photosystems — making it harder to evolve. […] However, fainter stars spend longer on the main sequence, giving more time for evolution to occur.”

“There was a lot more energy to go around in the post-oxidation world, because respiration of organic matter with oxygen yields an order of magnitude more energy than breaking food down anaerobically. […] The revolution in biological complexity culminated in the ‘Cambrian Explosion’ of animal diversity 540 to 515 million years ago, in which modern food webs were established in the ocean. […] Since then the most fundamental change in the Earth system has been the rise of plants on land […], beginning around 470 million years ago and culminating in the first global forests by 370 million years ago. This doubled global photosynthesis, increasing flows of materials. Accelerated chemical weathering of the land surface lowered atmospheric carbon dioxide levels and increased atmospheric oxygen levels, fully oxygenating the deep ocean. […] Although grasslands now cover about a third of the Earth’s productive land surface they are a geologically recent arrival. Grasses evolved amidst a trend of declining atmospheric carbon dioxide, and climate cooling and drying, over the past forty million years, and they only became widespread in two phases during the Miocene Epoch around seventeen and six million years ago. […] Since the rise of complex life, there have been several mass extinction events. […] whilst these rolls of the extinction dice marked profound changes in evolutionary winners and losers, they did not fundamentally alter the operation of the Earth system.” [If you’re interested in this kind of stuff, the evolution of food webs and so on, Herrera et al.’s wonderful book is a great place to start – US]

“The Industrial Revolution marks the transition from societies fuelled largely by recent solar energy (via biomass, water, and wind) to ones fuelled by concentrated ‘ancient sunlight’. Although coal had been used in small amounts for millennia, for example for iron making in ancient China, fossil fuel use only took off with the invention and refinement of the steam engine. […] With the Industrial Revolution, food and biomass have ceased to be the main source of energy for human societies. Instead the energy contained in annual food production, which supports today’s population, is at fifty exajoules (1 EJ = 10¹⁸ joules), only about a tenth of the total energy input to human societies of 500 EJ/yr. This in turn is equivalent to about a tenth of the energy captured globally by photosynthesis. […] solar energy is not very efficiently converted by photosynthesis, which is 1–2 per cent efficient at best. […] The amount of sunlight reaching the Earth’s land surface (2.5 × 10¹⁶ W) dwarfs current total human power consumption (1.5 × 10¹³ W) by more than a factor of a thousand.”
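
The ratios quoted here are likewise easy to verify; a couple of lines of Python using only the figures from the quote:

    # Energy-budget ratios, using only the figures from the quote above.
    food_energy = 50e18          # energy in annual food production, J/yr (50 EJ/yr)
    human_energy_use = 500e18    # total energy input to human societies, J/yr
    sunlight_on_land = 2.5e16    # sunlight reaching the land surface, W
    human_power = 1.5e13         # total human power consumption, W

    print(f"Food energy as a share of total human energy use: {food_energy / human_energy_use:.0%}")
    print(f"Sunlight on land / human power consumption: about {sunlight_on_land / human_power:.0f}x")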

“The Earth system’s primary energy source is sunlight, which the biosphere converts and stores as chemical energy. The energy-capture devices — photosynthesizing organisms — construct themselves out of carbon dioxide, nutrients, and a host of trace elements taken up from their surroundings. Inputs of these elements and compounds from the solid Earth system to the surface Earth system are modest. Some photosynthesizers have evolved to increase the inputs of the materials they need — for example, by fixing nitrogen from the atmosphere and selectively weathering phosphorus out of rocks. Even more importantly, other heterotrophic organisms have evolved that recycle the materials that the photosynthesizers need (often as a by-product of consuming some of the chemical energy originally captured in photosynthesis). This extraordinary recycling system is the primary mechanism by which the biosphere maintains a high level of energy capture (productivity).”

“[L]ike all stars on the ‘main sequence’ (which generate energy through the nuclear fusion of hydrogen into helium), the Sun is burning inexorably brighter with time — roughly 1 per cent brighter every 100 million years — and eventually this will overheat the planet. […] Over Earth history, the silicate weathering negative feedback mechanism has counteracted the steady brightening of the Sun by removing carbon dioxide from the atmosphere. However, this cooling mechanism is near the limits of its operation, because CO2 has fallen to limiting levels for the majority of plants, which are key amplifiers of silicate weathering. Although a subset of plants have evolved which can photosynthesize down to lower CO2 levels [the author does not go further into this topic, but here’s a relevant link – US], they cannot draw CO2 down lower than about 10 ppm. This means there is a second possible fate for life — running out of CO2. Early models projected either CO2 starvation or overheating […] occurring about a billion years in the future. […] Whilst this sounds comfortingly distant, it represents a much shorter future lifespan for the Earth’s biosphere than its past history. Earth’s biosphere is entering its old age.”

September 28, 2017 Posted by | Astronomy, Biology, Books, Botany, Chemistry, Geology, Paleontology, Physics | Leave a comment

Light

I gave the book two stars. Some quotes and links below.

“Lenses are ubiquitous in image-forming devices […] Imaging instruments have two components: the lens itself, and a light detector, which converts the light into, typically, an electrical signal. […] In every case the location of the lens with respect to the detector is a key design parameter, as is the focal length of the lens which quantifies its ‘ray-bending’ power. The focal length is set by the curvature of the surfaces of the lens and its thickness. More strongly curved surfaces and thicker materials are used to make lenses with short focal lengths, and these are used usually in instruments where a high magnification is needed, such as a microscope. Because the refractive index of the lens material usually depends on the colour of light, rays of different colours are bent by different amounts at the surface, leading to a focus for each colour occurring in a different position. […] lenses with a big diameter and a short focal length will produce the tiniest images of point-like objects. […] about the best you can do in any lens system you could actually make is an image size of approximately one wavelength. This is the fundamental limit to the pixel size for lenses used in most optical instruments, such as cameras and binoculars. […] Much more sophisticated methods are required to see even smaller things. The reason is that the wave nature of light puts a lower limit on the size of a spot of light. […] At the other extreme, both ground- and space-based telescopes for astronomy are very large instruments with relatively simple optical imaging components […]. The distinctive feature of these imaging systems is their size. The most distant stars are very, very faint. Hardly any of their light makes it to the Earth. It is therefore very important to collect as much of it as possible. This requires a very big lens or mirror”.
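
To put a rough number on the ‘approximately one wavelength’ claim: a standard estimate (not spelled out in the book) is the Rayleigh criterion, which puts the radius of a diffraction-limited spot at about 1.22·λ·f/D for a lens of focal length f and aperture diameter D. A quick sketch under that assumption, with invented lens parameters:

    # Diffraction-limited spot size for a lens, via the Rayleigh criterion:
    # spot radius ≈ 1.22 * wavelength * focal_length / aperture_diameter. Invented lens parameters.
    wavelength = 550e-9     # green light, m
    focal_length = 0.05     # 50 mm lens
    diameter = 0.025        # 25 mm aperture

    spot_radius = 1.22 * wavelength * focal_length / diameter
    print(f"Diffraction-limited spot radius ≈ {spot_radius * 1e6:.2f} micrometres")
    # About 1.3 micrometres, i.e. a couple of wavelengths of visible light, which is why
    # 'about one wavelength' is the practical lower limit on pixel size in ordinary optics.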

“[W]hat sort of wave is light? This was […] answered in the 19th century by James Clerk Maxwell, who showed that it is an oscillation of a new kind of entity: the electromagnetic field. This field is effectively a force that acts on electric charges and magnetic materials. […] In the early 19th century, Michael Faraday had shown the close connections between electric and magnetic fields. Maxwell brought them together, as the electromagnetic force field. […] in the wave model, light can be considered as very high frequency oscillations of the electromagnetic field. One consequence of this idea is that moving electric charges can generate light waves. […] When […] charges accelerate — that is, when they change their speed or their direction of motion — then a simple law of physics is that they emit light. Understanding this was one of the great achievements of the theory of electromagnetism.”

“It was the observation of interference effects in a famous experiment by Thomas Young in 1803 that really put the wave picture of light as the leading candidate as an explanation of the nature of light. […] It is interference of light waves that causes the colours in a thin film of oil floating on water. Interference transforms very small distances, on the order of the wavelength of light, into very big changes in light intensity — from no light to four times as bright as the individual constituent waves. Such changes in intensity are easy to detect or see, and thus interference is a very good way to measure small changes in displacement on the scale of the wavelength of light. Many optical sensors are based on interference effects.”
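
The 'no light to four times as bright' range follows simply from adding wave amplitudes before squaring to get intensity; here is a quick sketch of my own illustrating the arithmetic (two equal, unit-amplitude waves assumed):

```python
# Quick check of the interference claim above: two equal waves of unit intensity
# add in amplitude, so the combined intensity |1 + exp(i*phi)|^2 runs from 0
# (out of phase) to 4 (in phase) times the intensity of a single wave.
import numpy as np

phase_difference = np.linspace(0, 2 * np.pi, 5)
intensity = np.abs(1 + np.exp(1j * phase_difference)) ** 2
for phi, I in zip(phase_difference, intensity):
    print(f"phase difference {phi:.2f} rad -> intensity {I:.2f}")
# prints 4.00 at phi = 0 and 2*pi, and 0.00 at phi = pi
```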

“[L]ight beams […] gradually diverge as they propagate. This is because a beam of light, which by definition has a limited spatial extent, must be made up of waves that propagate in more than one direction. […] This phenomenon is called diffraction. […] if you want to transmit light over long distances, then diffraction could be a problem. It will cause the energy in the light beam to spread out, so that you would need a bigger and bigger optical system and detector to capture all of it. This is important for telecommunications, since nearly all of the information transmitted over long-distance communications links is encoded on to light beams. […] The means to manage diffraction so that long-distance communication is possible is to use wave guides, such as optical fibres.”

“[O]ptical waves […] guided along a fibre or in a glass ‘chip’ […] underpins the long-distance telecommunications infrastructure that connects people across different continents and powers the Internet. The reason it is so effective is that light-based communications have much more capacity for carrying information than do electrical wires, or even microwave cellular networks. […] In optical communications, […] bits are represented by the intensity of the light beam — typically low intensity is a 0 and higher intensity a 1. The more of these that arrive per second, the faster the communication rate. […] Why is optics so good for communications? There are two reasons. First, light beams don’t easily influence each other, so that a single fibre can support many light pulses (usually of different colours) simultaneously without the messages getting scrambled up. The reason for this is that the glass of which the fibre is made does not absorb light (or only absorbs it in tiny amounts), and so does not heat up and disrupt other pulse trains. […] the ‘crosstalk’ between light beams is very weak in most materials, so that many beams can be present at once without causing a degradation of the signal. This is very different from electrons moving down a copper wire, which is the usual way in which local ‘wired’ communications links function. Electrons tend to heat up the wire, dissipating their energy. This makes the signals harder to receive, and thus the number of different signal channels has to be kept small enough to avoid this problem. Second, light waves oscillate at very high frequencies, and this allows very short pulses to be generated. This means that the pulses can be spaced very close together in time, making the transmission of more bits of information per second possible. […] Fibre-based optical networks can also support a very wide range of colours of light.”

“Waves can be defined by their wavelength, amplitude, and phase […]. Particles are defined by their position and direction of travel […], and a collection of particles by their density […] and range of directions. The media in which the light moves are characterized by their refractive indices. This can vary across space. […] Hamilton showed that what was important was how rapidly the refractive index changed in space compared with the length of an optical wave. That is, if the changes in index took place on a scale of close to a wavelength, then the wave character of light was evident. If it varied more smoothly and very slowly in space then the particle picture provided an adequate description. He showed how the simpler ray picture emerges from the more complex wave picture in certain commonly encountered situations. The appearance of wave-like phenomena, such as diffraction and interference, occurs when the size scales of the wavelength of light and the structures in which it propagates are similar. […] Particle-like behaviour — motion along a well-defined trajectory — is sufficient to describe the situation when all objects are much bigger than the wavelength of light, and have no sharp edges.”

“When things are heated up, they change colour. Take a lump of metal. As it gets hotter and hotter it first glows red, then orange, and then white. Why does this happen? This question stumped many of the great scientists [in the 19th century], including Maxwell himself. The problem was that Maxwell’s theory of light, when applied to this problem, indicated that the colour should get bluer and bluer as the temperature increased, without a limit, eventually moving out of the range of human vision into the ultraviolet—beyond blue—region of the spectrum. But this does not happen in practice. […] Max Planck […] came up with an idea to explain the spectrum emitted by hot objects — so-called ‘black bodies’. He conjectured that when light and matter interact, they do so only by exchanging discrete ‘packets’, or quanta, of energy. […] this conjecture was set to radically change physics.”
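
The problem Planck solved can be illustrated numerically by comparing the classical Rayleigh-Jeans spectrum, which grows without limit at high frequency, with Planck's formula, which turns over and falls off. The sketch below is my own and uses the standard textbook expressions; the temperature and frequencies are arbitrary illustrative choices:

```python
# Sketch of the problem described above: the classical (Rayleigh-Jeans) spectral
# radiance grows without bound as frequency increases, while Planck's quantum
# formula turns over and falls off, as real hot bodies do.
import numpy as np

h = 6.626e-34   # Planck constant, J s
c = 3.0e8       # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K
T = 3000.0      # a "red hot" temperature, K

nu = np.array([1e13, 1e14, 1e15, 1e16])              # frequencies, Hz
rayleigh_jeans = 2 * nu**2 * k * T / c**2             # diverges like nu^2
planck = (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))

for f, rj, pl in zip(nu, rayleigh_jeans, planck):
    print(f"nu = {f:.0e} Hz: Rayleigh-Jeans {rj:.2e}, Planck {pl:.2e}")
```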

“What Dirac did was to develop a quantum mechanical version of Maxwell’s theory of electromagnetic fields. […] It set the quantum field up as the fundamental entity on which the universe is built — neither particle nor wave, but both at once; complete wave–particle duality. It is a beautiful reconciliation of all the phenomena that light exhibits, and provides a framework in which to understand all optical effects, both those from the classical world of Newton, Maxwell, and Hamilton and those of the quantum world of Planck, Einstein, and Bohr. […] Light acts as a particle of more or less well-defined energy when it interacts with matter. Yet it retains its ability to exhibit wave-like phenomena at the same time. The resolution [was] a new concept: the quantum field. Light particles — photons — are excitations of this field, which propagates according to quantum versions of Maxwell’s equations for light waves. Quantum fields, of which light is perhaps the simplest example, are now regarded as being the fundamental entities of the universe, underpinning all types of material and non-material things. The only explanation is that the stuff of the world is neither particle nor wave but both. This is the nature of reality.”

Some links:

Light.
Optics.
Watt.
Irradiance.
Coherence (physics).
Electromagnetic spectrum.
Joseph von Fraunhofer.
Spectroscopy.
Wave.
Transverse wave.
Wavelength.
Spatial frequency.
Polarization (waves).
Specular reflection.
Negative-index metamaterial.
Birefringence.
Interference (wave propagation).
Diffraction.
Young’s interference experiment.
Holography.
Photoactivated localization microscopy.
Stimulated emission depletion (STED) microscopy.
Fourier’s theorem (I found it hard to find a good source on this one. According to the book, “Fourier’s theorem says in simple terms that the smaller you focus light, the broader the range of wave directions you need to achieve this spot”)
X-ray diffraction.
Brewster’s angle.
Liquid crystal.
Liquid crystal display.
Wave–particle duality.
Fermat’s principle.
Wavefront.
Maupertuis’ principle.
Johann Jakob Balmer.
Max Planck.
Photoelectric effect.
Niels Bohr.
Matter wave.
Quantum vacuum.
Lamb shift.
Light-emitting diode.
Fluorescent tube.
Synchrotron radiation.
Quantum state.
Quantum fluctuation.
Spontaneous emission/stimulated emission.
Photodetector.
Laser.
Optical cavity.
X-ray absorption spectroscopy.
Diamond Light Source.
Mode-locking.
Stroboscope.
Femtochemistry.
Spacetime.
Atomic clock.
Time dilation.
High harmonic generation.
Frequency comb.
Optical tweezers.
Bose–Einstein condensate.
Pump probe spectroscopy.
Vulcan laser.
Plasma (physics).
Nonclassical light.
Photon polarization.
Quantum entanglement.
Bell test experiments.
Quantum key distribution/Quantum cryptography/Quantum computing.

August 31, 2017 Posted by | Books, Chemistry, Computer science, Physics | Leave a comment

How Species Interact

There are multiple reasons why I have not covered Arditi and Ginzburg’s book before, but none of them are related to the quality of the book’s coverage. It’s a really nice book. However, the coverage is somewhat technical and model-focused, which makes it harder to blog than other kinds of books. Also, the version of the book I read was a hardcover ‘paper book’ version, and ‘paper books’ take a lot more work for me to cover than do e-books.

I should probably get it out of the way here at the start of the post that if you’re interested in ecology, predator-prey dynamics, etc., this is a book you would be well advised to read; or, if you don’t read the book, you should at least familiarize yourself with the ideas therein, e.g. by having a look at some of Arditi & Ginzburg’s articles on these topics. I should however note that I don’t actually think skipping the book and having a look at some articles instead will necessarily be a labour-saving strategy; the book is not particularly long and it is to the point, so although it’s not a particularly easy read, their case for ratio dependence is actually fairly easy to follow – if you make the effort – because the book ties the various related ideas and observations together more clearly than the individual articles are likely to do. They presumably wrote the book precisely in order to provide a concise yet coherent overview.

I have had some trouble figuring out how to cover this book, and I’m still not quite sure what might be/have been the best approach; when covering technical books I’ll often skip a lot of detail and math and try to stick to what might be termed ‘the main ideas’ when quoting from such books, but there’s a clear limit to how many of the technical details included in a book like this it is possible to skip if you still want to actually talk about the stuff covered in the work, and this sometimes makes blogging such books awkward. These authors spend a lot of effort talking about how different ecological models work and which sorts of conclusions these different models may lead to in different contexts, and this kind of stuff is a very big part of the book. I’m not sure if you strictly need to have read an ecology textbook or two before you read this one in order to be able to follow the coverage, but I know that I personally derived some benefit from having read Gurney & Nisbet’s ecology text in the past, and I did look up stuff in that book a few times along the way, e.g. when reminding myself what a Holling type 2 functional response is and how models with such a functional response pattern behave. In theory I assume one might argue that you could look up all the relevant concepts along the way without any background knowledge of ecology – assuming you have a decent understanding of basic calculus/differential equations, linear algebra, equilibrium dynamics, etc. (…systems analysis? It’s hard for me to know and outline exactly which sources I’ve read in the past which helped make this book easier to read than it otherwise would have been, but suffice it to say that if you look at the page count and think that this will be a quick/easy read, it will be that only if you’ve read more than a few books on ‘related topics’, broadly defined, in the past), but I wouldn’t advise reading the book if all you know is high school math – the book will be incomprehensible to you, and you won’t make it. I ended up concluding that it would simply be too much work to try to make this post ‘easy’ to read for people who are unfamiliar with these topics and have not read the book, so although I’ve hardly gone out of my way to make the coverage hard to follow, the blog coverage that is to follow is mainly for my own benefit.

First a few relevant links, then some quotes and comments.

Lotka–Volterra equations.
Ecosystem model.
Arditi–Ginzburg equations. (Yep, these equations are named after the authors of this book).
Nicholson–Bailey model.
Functional response.
Monod equation.
Rosenzweig-MacArthur predator-prey model.
Trophic cascade.
Underestimation of mutual interference of predators.
Coupling in predator-prey dynamics: Ratio Dependence.
Michaelis–Menten kinetics.
Trophic level.
Advection–diffusion equation.
Paradox of enrichment. [Two quotes from the book: “actual systems do not behave as Rosenzweig’s model predicts” + “When ecologists have looked for evidence of the paradox of enrichment in natural and laboratory systems, they often find none and typically present arguments about why it was not observed”]
Predator interference emerging from trophotaxis in predator–prey systems: An individual-based approach.
Directed movement of predators and the emergence of density dependence in predator-prey models.

“Ratio-dependent predation is now covered in major textbooks as an alternative to the standard prey-dependent view […]. One of this book’s messages is that the two simple extreme theories, prey dependence and ratio dependence, are not the only alternatives: they are the ends of a spectrum. There are ecological domains in which one view works better than the other, with an intermediate view also being a possible case. […] Our years of work spent on the subject have led us to the conclusion that, although prey dependence might conceivably be obtained in laboratory settings, the common case occurring in nature lies close to the ratio-dependent end. We believe that the latter, instead of the prey-dependent end, can be viewed as the “null model of predation.” […] we propose the gradual interference model, a specific form of predator-dependent functional response that is approximately prey dependent (as in the standard theory) at low consumer abundances and approximately ratio dependent at high abundances. […] When density is low, consumers do not interfere and prey dependence works (as in the standard theory). When consumer density is sufficiently high, interference causes ratio dependence to emerge. In the intermediate densities, predator-dependent models describe partial interference.”

“Studies of food chains are on the edge of two domains of ecology: population and community ecology. The properties of food chains are determined by the nature of their basic link, the interaction of two species, a consumer and its resource, a predator and its prey.1 The study of this basic link of the chain is part of population ecology while the more complex food webs belong to community ecology. This is one of the main reasons why understanding the dynamics of predation is important for many ecologists working at different scales.”

“We have named predator-dependent the functional responses of the form g = g(N,P), where the predator density P acts (in addition to N [prey abundance, US]) as an independent variable to determine the per capita kill rate […] predator-dependent functional response models have one more parameter than the prey-dependent or the ratio-dependent models. […] The main interest that we see in these intermediate models is that the additional parameter can provide a way to quantify the position of a specific predator-prey pair of species along a spectrum with prey dependence at one end and ratio dependence at the other end:

g(N) ← g(N,P) → g(N/P) (1.21)

In the Hassell-Varley and Arditi-Akçakaya models […] the mutual interference parameter m plays the role of a cursor along this spectrum, from m = 0 for prey dependence to m = 1 for ratio dependence. Note that this theory does not exclude that strong interference goes “beyond ratio dependence,” with m > 1. This is also called overcompensation. […] In this book, rather than being interested in the interference parameters per se, we use predator-dependent models to determine, either parametrically or nonparametrically, which of the ends of the spectrum (1.21) better describes predator-prey systems in general.”
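
To make the role of the interference parameter m a bit more concrete, here is a small sketch of my own; I have used a Holling type II response with N replaced by N/P^m, which is how I read the Arditi-Akçakaya form, and the parameter values are arbitrary illustrations rather than anything estimated in the book:

```python
# A minimal sketch (my own, arbitrary parameters) of the spectrum described
# above: a Holling type II response in which prey abundance N is replaced by
# N / P^m. m = 0 gives prey dependence g(N), m = 1 gives ratio dependence g(N/P),
# intermediate m gives partial interference.
def per_capita_kill_rate(N, P, m, attack_rate=1.0, handling_time=0.5):
    x = N / P**m
    return attack_rate * x / (1.0 + attack_rate * handling_time * x)

N = 10.0
for P in (1.0, 2.0, 4.0, 8.0):
    rates = {m: per_capita_kill_rate(N, P, m) for m in (0.0, 0.5, 1.0)}
    print(f"P = {P}: " + ", ".join(f"m={m} -> g={g:.3f}" for m, g in rates.items()))
# With m = 0 the kill rate ignores predator density entirely; with m = 1 doubling
# both N and P would leave it unchanged; values in between interpolate.
```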

“[T]he fundamental problem of the Lotka-Volterra and the Rosenzweig-MacArthur dynamic models lies in the functional response and in the fact that this mathematical function is assumed not to depend on consumer density. Since this function measures the number of prey captured per consumer per unit time, it is a quantity that should be accessible to observation. This variable could be apprehended either on the fast behavioral time scale or on the slow demographic time scale. These two approaches need not necessarily reveal the same properties: […] a given species could display a prey-dependent response on the fast scale and a predator-dependent response on the slow scale. The reason is that, on a very short scale, each predator individually may “feel” virtually alone in the environment and react only to the prey that it encounters. On the long scale, the predators are more likely to be affected by the presence of conspecifics, even without direct encounters. In the demographic context of this book, it is the long time scale that is relevant. […] if predator dependence is detected on the fast scale, then it can be inferred that it must be present on the slow scale; if predator dependence is not detected on the fast scale, it cannot be inferred that it is absent on the slow scale.”

Some related thoughts. A different way to think about this – which they don’t mention in the book, but which sprang to mind as I was reading it – is to think about this stuff in terms of a formal predator territorial overlap model and then ask yourself this question: Assume there’s zero territorial overlap – does this fact mean that the existence of conspecifics does not matter? The answer is of course no. The sizes of the individual patches/territories may be greatly influenced by the predator density even in such a context. Also, the territorial area available to potential offspring (certainly a fitness-relevant parameter) may be greatly influenced by the number of competitors inhabiting the surrounding territories. In relation to the last part of the quote, it’s easy to see that in a model with significant territorial overlap you don’t need direct behavioural interaction among predators for the overlap to be relevant; even if two bears never meet, if one of them eats a fawn that the other one would have come across two days later, such indirect influences may be important for prey availability. Of course as prey tend to be mobile, even if predator territories are static and non-overlapping in a geographic sense, they might not be in a functional sense. Moving on…

“In [chapter 2 we] attempted to assess the presence and the intensity of interference in all functional response data sets that we could gather in the literature. Each set must be trivariate, with estimates of the prey consumed at different values of prey density and different values of predator densities. Such data sets are not very abundant because most functional response experiments present in the literature are simply bivariate, with variations of the prey density only, often with a single predator individual, ignoring the fact that predator density can have an influence. This results from the usual presentation of functional responses in textbooks, which […] focus only on the influence of prey density.
Among the data sets that we analyzed, we did not find a single one in which the predator density did not have a significant effect. This is a powerful empirical argument against prey dependence. Most systems lie somewhere on the continuum between prey dependence (m=0) and ratio dependence (m=1). However, they do not appear to be equally distributed. The empirical evidence provided in this chapter suggests that they tend to accumulate closer to the ratio-dependent end than to the prey-dependent end.”

“Equilibrium properties result from the balanced predator-prey equations and contain elements of the underlying dynamic model. For this reason, the response of equilibria to a change in model parameters can inform us about the structure of the underlying equations. To check the appropriateness of the ratio-dependent versus prey-dependent views, we consider the theoretical equilibrium consequences of the two contrasting assumptions and compare them with the evidence from nature. […] According to the standard prey-dependent theory, in reference to [an] increase in primary production, the responses of the populations strongly depend on their level and on the total number of trophic levels. The last, top level always responds proportionally to F [primary input]. The next to the last level always remains constant: it is insensitive to enrichment at the bottom because it is perfectly controled [sic] by the last level. The first, primary producer level increases if the chain length has an odd number of levels, but declines (or stays constant with a Lotka-Volterra model) in the case of an even number of levels. According to the ratio-dependent theory, all levels increase proportionally, independently of how many levels are present. The present purpose of this chapter is to show that the second alternative is confirmed by natural data and that the strange predictions of the prey-dependent theory are unsupported.”
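
The contrast described here can be reproduced with a toy two-level chain. The sketch below is my own, not the authors' analysis: the closed-form equilibria follow from setting the growth rates to zero in a logistic-resource model with a Holling type II consumer and in its ratio-dependent counterpart, and the parameter values are arbitrary choices made so that both equilibria are positive.

```python
# Illustrative sketch (not from the book; parameters arbitrary): equilibrium
# response of a two-level chain to enrichment (increasing carrying capacity K),
# under a prey-dependent Holling type II response versus a ratio-dependent one.
# The closed forms below come from setting dN/dt = dP/dt = 0 in
#   dN/dt = r*N*(1 - N/K) - g*P,   dP/dt = e*g*P - mu*P,
# with g = a*N/(1 + a*h*N) (prey dependent) or g = a*(N/P)/(1 + a*h*(N/P)).
r, a, h, e, mu = 1.0, 2.0, 0.5, 0.5, 0.6

def prey_dependent_equilibrium(K):
    N = mu / (a * (e - mu * h))              # resource level pinned by the consumer
    P = (e * r / mu) * N * (1.0 - N / K)     # consumer absorbs the enrichment
    return N, P

def ratio_dependent_equilibrium(K):
    N = K * (1.0 - a * (e - mu * h) / (r * e))
    P = a * (e - mu * h) / mu * N            # fixed consumer/resource ratio
    return N, P

for K in (2.0, 4.0):
    n1, p1 = prey_dependent_equilibrium(K)
    n2, p2 = ratio_dependent_equilibrium(K)
    print(f"K={K}: prey-dependent N*={n1:.2f}, P*={p1:.2f} | "
          f"ratio-dependent N*={n2:.2f}, P*={p2:.2f}")
# Prey dependence: N* stays put as K rises, only P* responds.
# Ratio dependence: N* and P* both rise in proportion to K.
```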

“If top predators are eliminated or reduced in abundance, models predict that the sequential lower trophic levels must respond by changes of alternating signs. For example, in a three-level system of plants-herbivores-predators, the reduction of predators leads to the increase of herbivores and the consequential reduction in plant abundance. This response is commonly called the trophic cascade. In a four-level system, the bottom level will increase in response to harvesting at the top. These predicted responses are quite intuitive and are, in fact, true for both short-term and long-term responses, irrespective of the theory one employs. […] A number of excellent reviews have summarized and meta-analyzed large amounts of data on trophic cascades in food chains […] In general, the cascading reaction is strongest in lakes, followed by marine systems, and weakest in terrestrial systems. […] Any theory that claims to describe the trophic chain equilibria has to produce such cascading when top predators are reduced or eliminated. It is well known that the standard prey-dependent theory supports this view of top-down cascading. It is not widely appreciated that top-down cascading is likewise a property of ratio-dependent trophic chains. […] It is [only] for equilibrial responses to enrichment at the bottom that predictions are strikingly different according to the two theories”.

As the book does spend a little time on this I should perhaps briefly interject here that the above paragraph should not be taken to indicate that the two types of models provide identical predictions in the top-down cascading context in all cases; both predict cascading, but there are even so some subtle differences between the models here as well. Some of these differences are however quite hard to test.

“[T]he traditional Lotka-Volterra interaction term […] is nothing other than the law of mass action of chemistry. It assumes that predator and prey individuals encounter each other randomly in the same way that molecules interact in a chemical solution. Other prey-dependent models, like Holling’s, derive from the same idea. […] an ecological system can only be described by such a model if conspecifics do not interfere with each other and if the system is sufficiently homogeneous […] we will demonstrate that spatial heterogeneity, be it in the form of a prey refuge or in the form of predator clusters, leads to emergence of gradual interference or of ratio dependence when the functional response is observed at the population level. […] We present two mechanistic individual-based models that illustrate how, with gradually increasing predator density and gradually increasing predator clustering, interference can become gradually stronger. Thus, a given biological system, prey dependent at low predator density, can gradually become ratio dependent at high predator density. […] ratio dependence is a simple way of summarizing the effects induced by spatial heterogeneity, while the prey dependent [models] (e.g., Lotka-Volterra) is more appropriate in homogeneous environments.”

“[W]e consider that a good model of interacting species must be fundamentally invariant to a proportional change of all abundances in the system. […] Allowing interacting populations to expand in balanced exponential growth makes the laws of ecology invariant with respect to multiplying interacting abundances by the same constant, so that only ratios matter. […] scaling invariance is required if we wish to preserve the possibility of joint exponential growth of an interacting pair. […] a ratio-dependent model allows for joint exponential growth. […] Neither the standard prey-dependent models nor the more general predator-dependent models allow for balanced growth. […] In our view, communities must be expected to expand exponentially in the presence of unlimited resources. Of course, limiting factors ultimately stop this expansion just as they do for a single species. With our view, it is the limiting resources that stop the joint expansion of the interacting populations; it is not directly due to the interactions themselves. This partitioning of the causes is a major simplification that traditional theory implies only in the case of a single species.”
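
The invariance argument is easy to check numerically: multiply both abundances by the same constant and see whether the per-capita growth rates change. Below is a toy sketch of my own, with arbitrary parameter values and a simple linear (Lotka-Volterra-type) response standing in for prey dependence:

```python
# Quick numerical check (my own sketch) of the invariance argument above:
# scale both abundances by the same constant c and see whether the per-capita
# growth rates change. They do not in a ratio-dependent model, but they do in a
# prey-dependent (Lotka-Volterra-type) model.
def per_capita_rates(N, P, ratio_dependent, a=0.4, e=0.5, mu=0.3, r=1.0):
    g = a * (N / P) if ratio_dependent else a * N   # per-capita kill rate
    dN_over_N = r - g * P / N
    dP_over_P = e * g - mu
    return dN_over_N, dP_over_P

for c in (1, 10):
    N, P = 2.0 * c, 1.0 * c
    print(f"c = {c}: prey-dependent {per_capita_rates(N, P, False)}, "
          f"ratio-dependent {per_capita_rates(N, P, True)}")
# The ratio-dependent rates are identical for c = 1 and c = 10: the dynamics
# depend only on the ratio N/P, which is the invariance the authors require for
# balanced joint growth. The prey-dependent rates change with c.
```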

August 1, 2017 Posted by | Biology, Books, Chemistry, Ecology, Mathematics, Studies | Leave a comment

Stars

“Every atom of our bodies has been part of a star, and every informed person should know something of how the stars evolve.”

I gave the book three stars on goodreads. At times it’s a bit too popular-science-y for me, and I think the level of coverage is a little bit lower than that of some of the other physics books in the ‘A Very Short Introduction‘ series by Oxford University Press, but on the other hand it did teach me some new things and explained some other things I knew about but did not fully understand before, and I’m well aware that it can be really hard to strike the right balance when writing books like these. I don’t like it when authors employ analogies instead of equations to explain stuff, but on the other hand I’ve seen some of the relevant equations before, e.g. in the context of IAS lectures, so I was okay with skipping some of the math because I know how the math here can really blow up in your face fast – and it’s not like this book has no math or equations, but I think it’s the kind of math most people should be able to deal with. It’s a decent introduction to the topic, and I must admit I have yet to be significantly disappointed by a book from the physics part of this OUP series – they’re good books, readable and interesting.

Below I have added some quotes and observations from the book, as well as some relevant links to material or people covered in the book. Some of the links below I have also added previously when covering other books in the physics series, but I do not really care about that as I try to cover each book separately; the two main ideas behind adding links of this kind are: 1) to remind me which topics were covered in the book (topics which I was unable to cover in detail in the post using quotes, because there’s too much stuff in the book for that to make sense), and 2) to give people who might be interested in reading the book an idea of which topics are covered therein; if I neglected to add relevant links simply because such topics were also covered in other books I’ve covered here, the link collection would not accomplish what I’d like it to accomplish. The link collection was gathered while I was reading the book (I was bookmarking relevant wiki articles along the way), whereas the quotes included in the post were only added after I had finished the link collection; I am well aware that some topics covered by the quotes are also covered in the link collection, but I didn’t care enough about this ‘double coverage of topics’ to remove from the link collection those links that refer to material also covered by the quotes in this post.

I think finding good quotes to include in this post was harder than it has been for some of the other physics books I’ve covered recently, because the author goes into quite some detail explaining some specific dynamics of star evolution which are not easy to boil down to a short quote that is still meaningful to people who do not know the context. The fact that he does go into those details was of course part of the reason why I liked the book.

“[W]e cannot consider heat energy in isolation from the other large energy store that the Sun has – gravity. Clearly, gravity is an energy source, since if it were not for the resistance of gas pressure, it would make all the Sun’s gas move inwards at high speed. So heat and gravity are both potential sources of energy, and must be related by the need to keep the Sun in equilibrium. As the Sun tries to cool down, energy must be swapped between these two forms to keep the Sun in balance […] the heat energy inside the Sun is not enough to spread all of its contents out over space and destroy it as an identifiable object. The Sun is gravitationally bound – its heat energy is significant, but cannot supply enough energy to loosen gravity’s grip, and unbind the Sun. This means that when pressure balances gravity for any system (as in the Sun), the total heat energy T is always slightly less than that needed (V) to disperse it. In fact, it turns out to be exactly half of what would be needed for this dispersal, so that 2T + V = 0, or V = −2 T. The quantities T and V have opposite signs, because energy has to be supplied to overcome gravity, that is, you have to use T to try to cancel some of V. […] you need to supply energy to a star in order to overcome its gravity and disperse all of its gas to infinity. In line with this, the star’s total energy (thermal plus gravitational) is E = T + V = −T, that is, the total energy is minus its thermal energy, and so is itself negative. That is, a star is a gravitationally bound object. Whenever the system changes slowly enough that pressure always balances gravity, these two energies always have to be in this 1:2 ratio. […] This reasoning shows that cooling, shrinking, and heating up all go together, that is, as the Sun tries to cool down, its interior heats up. […] Because E = –T, when the star loses energy (by radiating), making its total energy E more negative, the thermal energy T gets more positive, that is, losing energy makes the star heat up. […] This result, that stars heat up when they try to cool, is central to understanding why stars evolve.”
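
The bookkeeping in this argument is compact enough to write out; here is a tiny sketch of my own in arbitrary units (not the author's derivation), just tracking the virial-theorem relations V = -2T and E = T + V = -T:

```python
# Tiny bookkeeping sketch of the virial-theorem argument above (arbitrary units):
# for a star in hydrostatic balance, V = -2T and E = T + V = -T, so radiating
# away energy (making E more negative) forces the thermal energy T to rise.
T = 1.0                 # thermal energy, arbitrary units
V = -2.0 * T            # gravitational energy required by the virial theorem
E = T + V               # total energy = -T

radiated = 0.1          # energy lost to radiation
E_new = E - radiated    # total energy becomes more negative
T_new = -E_new          # but E = -T still has to hold...
V_new = -2.0 * T_new    # ...so the star contracts, becoming more tightly bound

print(T_new > T, V_new < V)   # True True: losing energy makes the interior hotter
```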

“[T]he whole of chemistry is simply the science of electromagnetic interaction of atoms with each other. Specifically, chemistry is what happens when electrons stick atoms together to make molecules. The electrons doing the sticking are the outer ones, those furthest from the nucleus. The physical rules governing the arrangement of electrons around the nucleus mean that atoms divide into families characterized by their outer electron configurations. Since the outer electrons specify the chemical properties of the elements, these families have similar chemistry. This is the origin of the periodic table of the elements. In this sense, chemistry is just a specialized branch of physics. […] atoms can combine, or react, in many different ways. A chemical reaction means that the electrons sticking atoms together are rearranging themselves. When this happens, electromagnetic energy may be released, […] or an energy supply may be needed […] Just as we measured gravitational binding energy as the amount of energy needed to disperse a body against the force of its own gravity, molecules have electromagnetic binding energies measured by the energies of the orbiting electrons holding them together. […] changes of electronic binding only produce chemical energy yields, which are far too small to power stars. […] Converting hydrogen into helium is about 15 million times more effective than burning oil. This is because strong nuclear forces are so much more powerful than electromagnetic forces.”

“[T]here are two chains of reactions which can convert hydrogen to helium. The rate at which they occur is in both cases quite sensitive to the gas density, varying as its square, but extremely sensitive to the gas temperature […] If the temperature is below a certain threshold value, the total energy output from hydrogen burning is completely negligible. If the temperature rises only slightly above this threshold, the energy output becomes enormous. It becomes so enormous that the effect of all this energy hitting the gas in the star’s centre is life-threatening to it. […] energy is related to mass. So being hit by energy is like being hit by mass: luminous energy exerts a pressure. For a luminosity above a certain limiting value related to the star’s mass, the pressure will blow it apart. […] The central temperature of the Sun, and stars like it, must be almost precisely at the threshold value. It is this temperature sensitivity which fixes the Sun’s central temperature at the value of ten million degrees […] All stars burning hydrogen in their centres must have temperatures close to this value. […] central temperature [is] roughly proportional to the ratio of mass to radius [and this means that] the radius of a hydrogen-burning star is approximately proportional to its mass […] You might wonder how the star ‘knows’ that its radius is supposed to have this value. This is simple: if the radius is too large, the star’s central temperature is too low to produce any nuclear luminosity at all. […] the star will shrink in an attempt to provide the luminosity from its gravitational binding energy. But this shrinking is just what it needs to adjust the temperature in its centre to the right value to start hydrogen burning and produce exactly the right luminosity. Similarly, if the star’s radius is slightly too small, its nuclear luminosity will grow very rapidly. This increases the radiation pressure, and forces the star to expand, again back to the right radius and so the right luminosity. These simple arguments show that the star’s structure is self-adjusting, and therefore extremely stable […] The basis of this stability is the sensitivity of the nuclear luminosity to temperature and so radius, which controls it like a thermostat.”

“Hydrogen burning produces a dense and growing ball of helium at the star’s centre. […] the star has a weight problem to solve – the helium ball feels its own weight, and that of all the rest of the star as well. A similar effect led to the ignition of hydrogen in the first place […] we can see what happens as the core mass grows. Let’s imagine that the core mass has doubled. Then the core radius also doubles, and its volume grows by a factor 2 × 2 × 2 = 8. This is a bigger factor than the mass growth, so the density is 2/(2 × 2 × 2) = 1/4 of its original value. We end with the surprising result that as the helium core mass grows in time, its central number density drops. […] Because pressure is proportional to density, the central pressure of the core drops also […] Since the density of the hydrogen envelope does not change over time, […] the helium core becomes less and less able to cope with its weight problem as its mass increases. […] The end result is that once the helium core contains more than about 10% of the star’s mass, its pressure is too low to support the weight of the star, and things have to change drastically. […] massive stars have much shorter main-sequence lifetimes, decreasing like the inverse square of their masses […] A star near the minimum main-sequence mass of one-tenth of the Sun’s has an unimaginably long lifetime of almost 1013 years, nearly a thousand times the Sun’s. All low-mass stars are still in the first flush of youth. This is the fundamental fact of stellar life: massive stars have short lives, and low-mass stars live almost forever – certainly far longer than the current age of the Universe.”

“We have met all three […] timescales [see links below – US] for the Sun. The nuclear time is ten billion years, the thermal timescale is thirty million years, and the dynamical one […] just half an hour. […] Each timescale says how long the star takes to react to changes of the given type. The dynamical time tells us that if we mess up the hydrostatic balance between pressure and weight, the star will react by moving its mass around for a few dynamical times (in the Sun’s case, a few hours) and then settle down to a new state in which pressure and weight are in balance. And because this time is so short compared with the thermal time, the stellar material will not have lost or gained any significant amount of heat, but simply carried this around […] although the star quickly finds a new hydrostatic equilibrium, this will not correspond to thermal equilibrium, where heat moves smoothly outwards through the star at precisely the rate determined by the nuclear reactions deep in the centre. Instead, some bits of the star will be too cool to pass all this heat on outwards, and some will be too hot to absorb much of it. Over a thermal timescale (a few tens of millions of years in the Sun), the cool parts will absorb the extra heat they need from the stellar radiation field, and the hot parts rid themselves of the excess they have, until we again reach a new state of thermal equilibrium. Finally, the nuclear timescale tells us the time over which the star synthesizes new chemical elements, radiating the released energy into space.”
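
The three numbers quoted here can be recovered from the standard order-of-magnitude formulas for the dynamical, Kelvin-Helmholtz, and nuclear timescales. The sketch below is my own; the formulas are the usual textbook ones rather than anything taken from the book, and the 0.7 per cent fusion efficiency and 10 per cent core fraction in the nuclear estimate are conventional assumptions:

```python
# Order-of-magnitude sketch (standard formulas, not from the book) of the three
# stellar timescales quoted above, evaluated for the Sun.
import math

G = 6.674e-11        # gravitational constant, SI
c = 3.0e8            # speed of light, m/s
M = 1.99e30          # solar mass, kg
R = 6.96e8           # solar radius, m
L = 3.83e26          # solar luminosity, W
year = 3.156e7       # seconds per year

t_dynamical = math.sqrt(R**3 / (G * M))    # free-fall time
t_thermal = G * M**2 / (R * L)             # Kelvin-Helmholtz time
t_nuclear = 0.007 * 0.1 * M * c**2 / L     # 0.7% of the rest-mass energy of ~10% of the mass

print(f"dynamical ~ {t_dynamical/60:.0f} minutes")           # about half an hour
print(f"thermal   ~ {t_thermal/year/1e6:.0f} million years")  # about thirty million years
print(f"nuclear   ~ {t_nuclear/year/1e9:.0f} billion years")  # about ten billion years
```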

“[S]tars can end their lives in just one of three possible ways: white dwarf, neutron star, or black hole.”

“Stars live a long time, but must eventually die. Their stores of nuclear energy are finite, so they cannot shine forever. […] they are forced onwards through a succession of evolutionary states because the virial theorem connects gravity with thermodynamics and prevents them from cooling down. So main-sequence dwarfs inexorably become red giants, and then supergiants. What breaks this chain? Its crucial link is that the pressure supporting a star depends on how hot it is. This link would snap if the star was instead held up by a pressure which did not care about its heat content. Finally freed from the demand to stay hot to support itself, a star like this would slowly cool down and die. This would be an endpoint for stellar evolution. […] Electron degeneracy pressure does not depend on temperature, only density. […] one possible endpoint of stellar evolution arises when a star is so compressed that electron degeneracy is its main form of pressure. […] [Once] the star is a supergiant […] a lot of its mass is in a hugely extended envelope, several hundred times the Sun’s radius. Because of this vast size, the gravity tying the envelope to the core is very weak. […] Even quite small outward forces can easily overcome this feeble pull and liberate mass from the envelope, so a lot of the star’s mass is blown out into space. Eventually, almost the entire remaining envelope is ejected as a roughly spherical cloud of gas. The core quickly exhausts the thin shell of nuclear-burning material on its surface. Now gravity makes the core contract in on itself and become denser, increasing the electron degeneracy pressure further. The core ends as an extremely compact star, with a radius similar to the Earth’s, but a mass similar to the Sun, supported by this pressure. This is a white dwarf. […] Even though its surface is at least initially hot, its small surface means that it is faint. […] White dwarfs cannot start nuclear reactions, so eventually they must cool down and become dark, cold, dead objects. But before this happens, they still glow from the heat energy left over from their earlier evolution, slowly getting fainter. Astronomers observe many white dwarfs in the sky, suggesting that this is how a large fraction of all stars end their lives. […] Stars with an initial mass more than about seven times the Sun’s cannot end as white dwarfs.”

“In many ways, a neutron star is a vastly more compact version of a white dwarf, with the fundamental difference that its pressure arises from degenerate neutrons, not degenerate electrons. One can show that the ratio of the two stellar radii, with white dwarfs about one thousand times bigger than the 10 kilometres of a neutron star, is actually just the ratio of neutron to electron mass.”

“Most massive stars are not isolated, but part of a binary system […]. If one is a normal star, and the other a neutron star, and the binary is not very wide, there are ways for gas to fall from the normal star on to the neutron star. […] Accretion on to very compact objects like neutron stars almost always occurs through a disc, since the gas that falls in always has some rotation. […] a star’s luminosity cannot be bigger than the Eddington limit. At this limit, the pressure of the radiation balances the star’s gravity at its surface, so any more luminosity blows matter off the star. The same sort of limit must apply to accretion: if this tries to make too high a luminosity, radiation pressure will tend to blow away the rest of the gas that is trying to fall in, and so reduce the luminosity until it is below the limit. […] a neutron star is only 10 kilometres in radius, compared with the 700,000 kilometres of the Sun. This can only happen if this very small surface gets very hot. The surface of a healthily accreting neutron star reaches about 10 million degrees, compared with the 6,000 or so of the Sun. […] The radiation from such intensely hot surfaces comes out at much shorter wavelengths than the visible emission from the Sun – the surfaces of a neutron star and its accretion disc emit photons that are much more energetic than those of visible light. Accreting neutron stars and black holes make X-rays.”
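
The Eddington limit mentioned here has a standard closed form, L_Edd = 4πGMm_p c/σ_T. Below is a small sketch of my own evaluating it for an assumed 1.4-solar-mass neutron star; the formula and constants are standard, the mass is my own choice, and none of this is taken from the book:

```python
# Sketch (standard formula, not from the book): the Eddington luminosity at which
# radiation pressure on infalling ionized gas balances gravity, for a typical
# neutron star.
import math

G       = 6.674e-11     # gravitational constant, SI
c       = 3.0e8         # speed of light, m/s
m_p     = 1.673e-27     # proton mass, kg
sigma_T = 6.652e-29     # Thomson scattering cross-section, m^2
M_sun   = 1.99e30       # solar mass, kg
L_sun   = 3.83e26       # solar luminosity, W

M = 1.4 * M_sun         # a typical neutron-star mass (my assumption)
L_edd = 4 * math.pi * G * M * m_p * c / sigma_T
print(f"L_Edd ≈ {L_edd:.1e} W ≈ {L_edd/L_sun:.0f} L_sun")
# Roughly 2e31 W, i.e. tens of thousands of solar luminosities, which is why the
# tiny surface of a healthily accreting neutron star has to get so hot.
```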

“[S]tar formation […] is harder to understand than any other part of stellar evolution. So we use our knowledge of the later stages of stellar evolution to help us understand star formation. Working backwards in this way is a very common procedure in astronomy […] We know much less about how stars form than we do about any later part of their evolution. […] The cyclic nature of star formation, with stars being born from matter chemically enriched by earlier generations, and expelling still more processed material into space as they die, defines a cosmic epoch – the epoch of stars. The end of this epoch will arrive only when the stars have turned all the normal matter of the Universe into iron, and left it locked in dead remnants such as black holes.”

Stellar evolution.
Gustav Kirchhoff.
Robert Bunsen.
Joseph von Fraunhofer.
Spectrograph.
Absorption spectroscopy.
Emission spectrum.
Doppler effect.
Parallax.
Stellar luminosity.
Cecilia Payne-Gaposchkin.
Ejnar Hertzsprung/Henry Norris Russell/Hertzsprung–Russell diagram.
Red giant.
White dwarf (featured article).
Main sequence (featured article).
Gravity/Electrostatics/Strong nuclear force.
Pressure/Boyle’s law/Charles’s law.
Hermann von Helmholtz.
William Thomson (Kelvin).
Gravitational binding energy.
Thermal energy/Gravitational energy.
Virial theorem.
Kelvin-Helmholtz time scale.
Chemical energy/Bond-dissociation energy.
Nuclear binding energy.
Nuclear fusion.
Heisenberg’s uncertainty principle.
Quantum tunnelling.
Pauli exclusion principle.
Eddington limit.
Convection.
Electron degeneracy pressure.
Nuclear timescale.
Number density.
Dynamical timescale/free-fall time.
Hydrostatic equilibrium/Thermal equilibrium.
Core collapse.
Hertzsprung gap.
Supergiant star.
Chandrasekhar limit.
Core-collapse supernova (‘good article’).
Crab Nebula.
Stellar nucleosynthesis.
Neutron star.
Schwarzschild radius.
Black hole (‘good article’).
Roy Kerr.
Pulsar.
Jocelyn Bell.
Anthony Hewish.
Accretion/Accretion disk.
X-ray binary.
Binary star evolution.
SS 433.
Gamma ray burst.
Hubble’s law/Hubble time.
Cosmic distance ladder/Standard candle/Cepheid variable.
Star formation.
Pillars of Creation.
Jeans instability.
Initial mass function.

July 2, 2017 Posted by | Astronomy, Books, Chemistry, Physics | Leave a comment

Anesthesia

“A recent study estimated that 234 million surgical procedures requiring anaesthesia are performed worldwide annually. Anaesthesia is the largest hospital specialty in the UK, with over 12,000 practising anaesthetists […] In this book, I give a short account of the historical background of anaesthetic practice, a review of anaesthetic equipment, techniques, and medications, and a discussion of how they work. The risks and side effects of anaesthetics will be covered, and some of the subspecialties of anaesthetic practice will be explored.”

I liked the book, and I gave it three stars on goodreads; I was closer to four stars than to two. Below I have added a few sample observations from the book, as well as what turned out in the end to be a quite considerable number of links (more than 60, from a brief count) to topics/people/etc. discussed or mentioned in the text. I decided to spend a bit more time finding relevant links than I’ve previously done when writing link-heavy posts, so in this post I have not limited myself to wikipedia articles; I e.g. also link directly to primary literature discussed in the coverage. The links provided are, as usual, meant to be indicators of which kind of stuff is covered in the book, rather than an alternative to the book; some of the wikipedia articles in particular I assume are not very good (the main point of linking to a wikipedia article of questionable quality is to indicate that I consider ‘awareness of the existence of concept X’ to be of interest/importance also to people who have not read this book, even if no great resource on the topic was immediately at hand to me).

Sample observations from the book:

“[G]eneral anaesthesia is not sleep. In physiological terms, the two states are very dissimilar. The term general anaesthesia refers to the state of unconsciousness which is deliberately produced by the action of drugs on the patient. Local anaesthesia (and its related terms) refers to the numbness produced in a part of the body by deliberate interruption of nerve function; this is typically achieved without affecting consciousness. […] The purpose of inhaling ether vapour [in the past] was so that surgery would be painless, not so that unconsciousness would necessarily be produced. However, unconsciousness and immobility soon came to be considered desirable attributes […] For almost a century, lying still was the only reliable sign of adequate anaesthesia.”

“The experience of pain triggers powerful emotional consequences, including fear, anger, and anxiety. A reasonable word for the emotional response to pain is ‘suffering’. Pain also triggers the formation of memories which remind us to avoid potentially painful experiences in the future. The intensity of pain perception and suffering also depends on the mental state of the subject at the time, and the relationship between pain, memory, and emotion is subtle and complex. […] The effects of adrenaline are responsible for the appearance of someone in pain: pale, sweating, trembling, with a rapid heart rate and breathing. Additionally, a hormonal storm is activated, readying the body to respond to damage and fight infection. This is known as the stress response. […] Those responses may be abolished by an analgesic such as morphine, which will counteract all those changes. For this reason, it is routine to use analgesic drugs in addition to anaesthetic ones. […] Typical anaesthetic agents are poor at suppressing the stress response, but analgesics like morphine are very effective. […] The hormonal stress response can be shown to be harmful, especially to those who are already ill. For example, the increase in blood coagulability which evolved to reduce blood loss as a result of injury makes the patient more likely to suffer a deep venous thrombosis in the leg veins.”

“If we monitor the EEG of someone under general anaesthesia, certain identifiable changes to the signal occur. In general, the frequency spectrum of the signal slows. […] Next, the overall power of the signal diminishes. In very deep general anaesthesia, short periods of electrical silence, known as burst suppression, can be observed. Finally, the overall randomness of the signal, its entropy, decreases. In short, the EEG of someone who is anaesthetized looks completely different from someone who is awake. […] Depth of anaesthesia is no longer considered to be a linear concept […] since it is clear that anaesthesia is not a single process. It is now believed that the two most important components of anaesthesia are unconsciousness and suppression of the stress response. These can be represented on a three-dimensional diagram called a response surface. [Here’s incidentally a recent review paper on related topics, US]”
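
As far as I know, spectral entropy is one of the quantities actual depth-of-anaesthesia monitors compute from the EEG, so a toy version may help make the 'entropy of the signal' idea concrete. The sketch below is entirely my own and is not a clinical algorithm: white noise stands in for an 'awake' signal and a slow sine wave for a 'deeply anaesthetized' one.

```python
# Toy illustration (my own, not a clinical algorithm) of the 'entropy of the
# signal' idea above: the spectral entropy of a broadband, awake-looking signal
# is higher than that of a slow, regular, anaesthetized-looking one.
import numpy as np

rng = np.random.default_rng(0)
fs, seconds = 250, 8                       # sampling rate (Hz) and duration
t = np.arange(fs * seconds) / fs

awake = rng.normal(size=t.size)            # broadband noise stands in for awake EEG
anaesthetized = np.sin(2 * np.pi * 2 * t)  # slow 2 Hz oscillation stands in for deep anaesthesia

def spectral_entropy(x):
    power = np.abs(np.fft.rfft(x)) ** 2
    p = power / power.sum()                # normalize the spectrum to a distribution
    p = p[p > 0]
    return -(p * np.log2(p)).sum() / np.log2(p.size)   # 0 = single pure tone, 1 = flat spectrum

print("awake-like signal:", round(spectral_entropy(awake), 2))
print("anaesthetized-like signal:", round(spectral_entropy(anaesthetized), 2))
```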

“Before the widespread advent of anaesthesia, there were very few painkilling options available. […] Alcohol was commonly given as a means of enhancing the patient’s courage prior to surgery, but alcohol has almost no effect on pain perception. […] For many centuries, opium was the only effective pain-relieving substance known. […] For general anaesthesia to be discovered, certain prerequisites were required. On the one hand, the idea that surgery without pain was achievable had to be accepted as possible. Despite tantalizing clues from history, this idea took a long time to catch on. The few workers who pursued this idea were often openly ridiculed. On the other, an agent had to be discovered that was potent enough to render a patient suitably unconscious to tolerate surgery, but not so potent that overdose (hence accidental death) was too likely. This agent also needed to be easy to produce, tolerable for the patient, and easy enough for untrained people to administer. The herbal candidates (opium, mandrake) were too unreliable or dangerous. The next reasonable candidate, and every agent since, was provided by the proliferating science of chemistry.”

“Inducing anaesthesia by intravenous injection is substantially quicker than the inhalational method. Inhalational induction may take several minutes, while intravenous induction happens in the time it takes for the blood to travel from the needle to the brain (30 to 60 seconds). The main benefit of this is not convenience or comfort but patient safety. […] It was soon discovered that the ideal balance is to induce anaesthesia intravenously, but switch to an inhalational agent […] to keep the patient anaesthetized during the operation. The template of an intravenous induction followed by maintenance with an inhalational agent is still widely used today. […] Most of the drawbacks of volatile agents disappear when the patient is already anaesthetized [and] volatile agents have several advantages for maintenance. First, they are predictable in their effects. Second, they can be conveniently administered in known quantities. Third, the concentration delivered or exhaled by the patient can be easily and reliably measured. Finally, at steady state, the concentration of volatile agent in the patient’s expired air is a close reflection of its concentration in the patient’s brain. This gives the anaesthetist a reliable way of ensuring that enough anaesthetic is present to ensure the patient remains anaesthetized.”

“All current volatile agents are colourless liquids that evaporate into a vapour which produces general anaesthesia when inhaled. All are chemically stable, which means they are non-flammable, and not likely to break down or be metabolized to poisonous products. What distinguishes them from each other are their specific properties: potency, speed of onset, and smell. Potency of an inhalational agent is expressed as MAC, the minimum alveolar concentration required to keep 50% of adults unmoving in response to a standard surgical skin incision. MAC as a concept was introduced […] in 1963, and has proven to be a very useful way of comparing potencies of different anaesthetic agents. […] MAC correlates with observed depth of anaesthesia. It has been known for over a century that potency correlates very highly with lipid solubility; that is, the more soluble an agent is in lipid […], the more potent an anaesthetic it is. This is known as the Meyer-Overton correlation […] Speed of onset is inversely proportional to water solubility. The less soluble in water, the more rapidly an agent will take effect. […] Where immobility is produced at around 1.0 MAC, amnesia is produced at a much lower dose, typically 0.25 MAC, and unconsciousness at around 0.5 MAC. Therefore, a patient may move in response to a surgical stimulus without either being conscious of the stimulus, or remembering it afterwards.”

“The most useful way to estimate the body’s physiological reserve is to assess the patient’s tolerance for exercise. Exercise is a good model of the surgical stress response. The greater the patient’s tolerance for exercise, the better the perioperative outcome is likely to be […] For a smoker who is unable to quit, stopping for even a couple of days before the operation improves outcome. […] Dying ‘on the table’ during surgery is very unusual. Patients who die following surgery usually do so during convalescence, their weakened state making them susceptible to complications such as wound breakdown, chest infections, deep venous thrombosis, and pressure sores.”

“Mechanical ventilation is based on the principle of intermittent positive pressure ventilation (IPPV), gas being ‘blown’ into the patient’s lungs from the machine. […] Inflating a patient’s lungs is a delicate process. Healthy lung tissue is fragile, and can easily be damaged by overdistension (barotrauma). While healthy lung tissue is light and spongy, and easily inflated, diseased lung tissue may be heavy and waterlogged and difficult to inflate, and therefore may collapse, allowing blood to pass through it without exchanging any gases (this is known as shunt). Simply applying higher pressures may not be the answer: this may just overdistend adjacent areas of healthier lung. The ventilator must therefore provide a series of breaths whose volume and pressure are very closely controlled. Every aspect of a mechanical breath may now be adjusted by the anaesthetist: the volume, the pressure, the frequency, and the ratio of inspiratory time to expiratory time are only the basic factors.”

“All anaesthetic drugs are poisons. Remember that in achieving a state of anaesthesia you intend to poison someone, but not kill them – so give as little as possible. [Introductory quote to a chapter, from an Anaesthetics textbook – US] […] Other cells besides neurons use action potentials as the basis of cellular signalling. For example, the synchronized contraction of heart muscle is performed using action potentials, and action potentials are transmitted from nerves to skeletal muscle at the neuromuscular junction to initiate movement. Local anaesthetic drugs are therefore toxic to the heart and brain. In the heart, local anaesthetic drugs interfere with normal contraction, eventually stopping the heart. In the brain, toxicity causes seizures and coma. To avoid toxicity, the total dose is carefully limited”.

Links of interest:

Anaesthesia.
General anaesthesia.
Muscle relaxant.
Nociception.
Arthur Ernest Guedel.
Guedel’s classification.
Beta rhythm.
Frances Burney.
Laudanum.
Dwale.
Henry Hill Hickman.
Horace Wells.
William Thomas Green Morton.
Diethyl ether.
Chloroform.
James Young Simpson.
Joseph Thomas Clover.
Barbiturates.
Inhalational anaesthetic.
Antisialagogue.
Pulmonary aspiration.
Principles of Total Intravenous Anaesthesia (TIVA).
Propofol.
Patient-controlled analgesia.
Airway management.
Oropharyngeal airway.
Tracheal intubation.
Laryngoscopy.
Laryngeal mask airway.
Anaesthetic machine.
Soda lime.
Sodium thiopental.
Etomidate.
Ketamine.
Neuromuscular-blocking drug.
Neostigmine.
Sugammadex.
Gate control theory of pain.
Multimodal analgesia.
Hartmann’s solution (…what this is called seems to depend on whom you ask, but it’s called Hartmann’s solution in the book…).
Local anesthetic.
Karl Koller.
Amylocaine.
Procaine.
Lidocaine.
Regional anesthesia.
Spinal anaesthesia.
Epidural nerve block.
Intensive care medicine.
Bjørn Aage Ibsen.
Chronic pain.
Pain wind-up.
John Bonica.
Twilight sleep.
Veterinary anesthesia.
Pearse et al. (results of paper briefly discussed in the book).
Awareness under anaesthesia (skip the first page).
Pollard et al. (2007).
Postoperative nausea and vomiting.
Postoperative cognitive dysfunction.
Monk et al. (2008).
Malignant hyperthermia.
Suxamethonium apnoea.

February 13, 2017 Posted by | Books, Chemistry, Medicine, Papers, Pharmacology | Leave a comment

The Laws of Thermodynamics

Here’s a relevant 60 symbols video with Mike Merrifield. Below a few observations from the book, and some links.

“Among the hundreds of laws that describe the universe, there lurks a mighty handful. These are the laws of thermodynamics, which summarize the properties of energy and its transformation from one form to another. […] The mighty handful consists of four laws, with the numbering starting inconveniently at zero and ending at three. The first two laws (the ‘zeroth’ and the ‘first’) introduce two familiar but nevertheless enigmatic properties, the temperature and the energy. The third of the four (the ‘second law’) introduces what many take to be an even more elusive property, the entropy […] The second law is one of the all-time great laws of science […]. The fourth of the laws (the ‘third law’) has a more technical role, but rounds out the structure of the subject and both enables and foils its applications.”

“Classical thermodynamics is the part of thermodynamics that emerged during the nineteenth century before everyone was fully convinced about the reality of atoms, and concerns relationships between bulk properties. You can do classical thermodynamics even if you don’t believe in atoms. Towards the end of the nineteenth century, when most scientists accepted that atoms were real and not just an accounting device, there emerged the version of thermodynamics called statistical thermodynamics, which sought to account for the bulk properties of matter in terms of its constituent atoms. The ‘statistical’ part of the name comes from the fact that in the discussion of bulk properties we don’t need to think about the behaviour of individual atoms but we do need to think about the average behaviour of myriad atoms. […] In short, whereas dynamics deals with the behaviour of individual bodies, thermodynamics deals with the average behaviour of vast numbers of them.”

“In everyday language, heat is both a noun and a verb. Heat flows; we heat. In thermodynamics heat is not an entity or even a form of energy: heat is a mode of transfer of energy. It is not a form of energy, or a fluid of some kind, or anything of any kind. Heat is the transfer of energy by virtue of a temperature difference. Heat is the name of a process, not the name of an entity.”

“The supply of 1J of energy as heat to 1 g of water results in an increase in temperature of about 0.2°C. Substances with a high heat capacity (water is an example) require a larger amount of heat to bring about a given rise in temperature than those with a small heat capacity (air is an example). In formal thermodynamics, the conditions under which heating takes place must be specified. For instance, if the heating takes place under conditions of constant pressure with the sample free to expand, then some of the energy supplied as heat goes into expanding the sample and therefore to doing work. Less energy remains in the sample, so its temperature rises less than when it is constrained to have a constant volume, and therefore we report that its heat capacity is higher. The difference between heat capacities of a system at constant volume and at constant pressure is of most practical significance for gases, which undergo large changes in volume as they are heated in vessels that are able to expand.”
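
To make the numbers concrete, here is a small Python sketch of the arithmetic behind that quote (the specific heat of water and the gas constant are standard values I am supplying, not figures taken from the book):

# A minimal sketch (not from the book): the arithmetic behind "1 J supplied to 1 g
# of water raises its temperature by about 0.2 degrees C", using the standard
# specific heat of liquid water, c ~ 4.18 J/(g*K).
c_water = 4.18          # J per gram per kelvin (assumed standard value)
q, m = 1.0, 1.0         # 1 J of heat supplied to 1 g of water
delta_T = q / (m * c_water)
print(f"Temperature rise: {delta_T:.2f} K")   # ~0.24 K, i.e. "about 0.2 degrees C"

# For an ideal gas the molar heat capacities at constant pressure and constant
# volume differ by the gas constant R (Cp = Cv + R), which is why the distinction
# between the two matters most for gases, as the quote notes.
R = 8.314               # J per mole per kelvin
Cv_monatomic = 1.5 * R  # e.g. helium or argon
Cp_monatomic = Cv_monatomic + R
print(f"Cv = {Cv_monatomic:.1f}, Cp = {Cp_monatomic:.1f} J/(mol*K)")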

“Heat capacities vary with temperature. An important experimental observation […] is that the heat capacity of every substance falls to zero when the temperature is reduced towards absolute zero (T = 0). A very small heat capacity implies that even a tiny transfer of heat to a system results in a significant rise in temperature, which is one of the problems associated with achieving very low temperatures when even a small leakage of heat into a sample can have a serious effect on the temperature”.

“A crude restatement of Clausius’s statement is that refrigerators don’t work unless you turn them on.”

“The Gibbs energy is of the greatest importance in chemistry and in the field of bioenergetics, the study of energy utilization in biology. Most processes in chemistry and biology occur at constant temperature and pressure, and so to decide whether they are spontaneous and able to produce non-expansion work we need to consider the Gibbs energy. […] Our bodies live off Gibbs energy. Many of the processes that constitute life are non-spontaneous reactions, which is why we decompose and putrefy when we die and these life-sustaining reactions no longer continue. […] In biology a very important ‘heavy weight’ reaction involves the molecule adenosine triphosphate (ATP). […] When a terminal phosphate group is snipped off by reaction with water […], to form adenosine diphosphate (ADP), there is a substantial decrease in Gibbs energy, arising in part from the increase in entropy when the group is liberated from the chain. Enzymes in the body make use of this change in Gibbs energy […] to bring about the linking of amino acids, and gradually build a protein molecule. It takes the effort of about three ATP molecules to link two amino acids together, so the construction of a typical protein of about 150 amino acid groups needs the energy released by about 450 ATP molecules. […] The ADP molecules, the husks of dead ATP molecules, are too valuable just to discard. They are converted back into ATP molecules by coupling to reactions that release even more Gibbs energy […] and which reattach a phosphate group to each one. These heavy-weight reactions are the reactions of metabolism of the food that we need to ingest regularly.”
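
The ATP bookkeeping in that last quote is easy to check; here is a quick sketch (the Gibbs energy released per ATP hydrolysis is an assumed, roughly textbook-level figure, not one given in the book):

# Rough bookkeeping of the numbers quoted above (a sketch, not from the book):
# ~3 ATP per amino acid linked, ~150 amino acids in a typical protein.
atp_per_link = 3
residues = 150
atp_needed = atp_per_link * residues
print(f"ATP molecules per protein of {residues} residues: ~{atp_needed}")   # ~450

# Assumed value (not given in the book): ATP -> ADP hydrolysis releases very roughly
# 50 kJ/mol under cellular conditions (~30 kJ/mol under standard conditions).
gibbs_per_atp_kj = 50
print(f"Gibbs energy spent per mole of protein: ~{atp_needed * gibbs_per_atp_kj / 1000:.1f} MJ")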

Links of interest below – the stuff covered in the links is the sort of stuff covered in this book:

Laws of thermodynamics (article includes links to many other articles of interest, including links to each of the laws mentioned above).
System concepts.
Intensive and extensive properties.
Mechanical equilibrium.
Thermal equilibrium.
Diathermal wall.
Thermodynamic temperature.
Thermodynamic beta.
Ludwig Boltzmann.
Boltzmann constant.
Maxwell–Boltzmann distribution.
Conservation of energy.
Work (physics).
Internal energy.
Heat (physics).
Microscopic view of heat.
Reversible process (thermodynamics).
Carnot’s theorem.
Enthalpy.
Fluctuation-dissipation theorem.
Noether’s theorem.
Entropy.
Thermal efficiency.
Rudolf Clausius.
Spontaneous process.
Residual entropy.
Heat engine.
Coefficient of performance.
Helmholtz free energy.
Gibbs free energy.
Phase transition.
Chemical equilibrium.
Superconductivity.
Superfluidity.
Absolute zero.

February 5, 2017 Posted by | Biology, Books, Chemistry, Physics | Leave a comment

Random stuff

i. Fire works a little differently than people imagine. A great ask-science comment. See also AugustusFink-nottle’s comment in the same thread.

ii.

iii. I was very conflicted about whether to link to this because I haven’t actually spent any time looking at it myself so I don’t know if it’s any good, but according to somebody (?) who linked to it on SSC the people behind this stuff have academic backgrounds in evolutionary biology, which is something at least (whether you think this is a good thing or not will probably depend greatly on your opinion of evolutionary biologists, but I’ve definitely learned a lot more about human mating patterns, partner interaction patterns, etc. from evolutionary biologists than I have from personal experience, so I’m probably in the ‘they-sometimes-have-interesting-ideas-about-these-topics-and-those-ideas-may-not-be-terrible’-camp). I figure these guys are much more application-oriented than were some of the previous sources I’ve read on related topics, such as e.g. Kappeler et al. I add the link mostly so that if I in five years time have a stroke that obliterates most of my decision-making skills, causing me to decide that entering the dating market might be a good idea, I’ll have some idea where it might make sense to start.

iv. Stereotype (In)Accuracy in Perceptions of Groups and Individuals.

“Are stereotypes accurate or inaccurate? We summarize evidence that stereotype accuracy is one of the largest and most replicable findings in social psychology. We address controversies in this literature, including the long-standing and continuing but unjustified emphasis on stereotype inaccuracy, how to define and assess stereotype accuracy, and whether stereotypic (vs. individuating) information can be used rationally in person perception. We conclude with suggestions for building theory and for future directions of stereotype (in)accuracy research.”

A few quotes from the paper:

“Demographic stereotypes are accurate. Research has consistently shown moderate to high levels of correspondence accuracy for demographic (e.g., race/ethnicity, gender) stereotypes […]. Nearly all accuracy correlations for consensual stereotypes about race/ethnicity and gender exceed .50 (compared to only 5% of social psychological findings; Richard, Bond, & Stokes-Zoota, 2003). […] Rather than being based in cultural myths, the shared component of stereotypes is often highly accurate. This pattern cannot be easily explained by motivational or social-constructionist theories of stereotypes and probably reflects a “wisdom of crowds” effect […] personal stereotypes are also quite accurate, with correspondence accuracy for roughly half exceeding r = .50.”
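
For readers unsure what a ‘correspondence accuracy’ correlation means in practice, here is a toy illustration (the numbers are invented, not data from the paper): a judge’s estimates of several group attributes are correlated with the measured criterion values for those same attributes.

# Illustrative sketch only (hypothetical numbers, not data from the paper):
# correspondence accuracy is the correlation between judged group attributes and
# the corresponding criterion values actually measured for those groups.
import numpy as np

criterion = np.array([0.10, 0.25, 0.40, 0.55, 0.70])   # measured group means on 5 attributes
judged    = np.array([0.15, 0.20, 0.45, 0.50, 0.80])   # one judge's estimates of the same attributes

r = np.corrcoef(criterion, judged)[0, 1]
print(f"correspondence accuracy r = {r:.2f}")   # ~0.97 here, well above the .50 benchmark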

“We found 34 published studies of racial-, ethnic-, and gender-stereotype accuracy. Although not every study examined discrepancy scores, when they did, a plurality or majority of all consensual stereotype judgments were accurate. […] In these 34 studies, when stereotypes were inaccurate, there was more evidence of underestimating than overestimating actual demographic group differences […] Research assessing the accuracy of miscellaneous other stereotypes (e.g., about occupations, college majors, sororities, etc.) has generally found accuracy levels comparable to those for demographic stereotypes”

“A common claim […] is that even though many stereotypes accurately capture group means, they are still not accurate because group means cannot describe every individual group member. […] If people were rational, they would use stereotypes to judge individual targets when they lack information about targets’ unique personal characteristics (i.e., individuating information), when the stereotype itself is highly diagnostic (i.e., highly informative regarding the judgment), and when available individuating information is ambiguous or incompletely useful. People’s judgments robustly conform to rational predictions. In the rare situations in which a stereotype is highly diagnostic, people rely on it (e.g., Crawford, Jussim, Madon, Cain, & Stevens, 2011). When highly diagnostic individuating information is available, people overwhelmingly rely on it (Kunda & Thagard, 1996; effect size averaging r = .70). Stereotype biases average no higher than r = .10 ( Jussim, 2012) but reach r = .25 in the absence of individuating information (Kunda & Thagard, 1996). The more diagnostic individuating information  people have, the less they stereotype (Crawford et al., 2011; Krueger & Rothbart, 1988). Thus, people do not indiscriminately apply their stereotypes to all individual  members of stereotyped groups.” (Funder incidentally talked about this stuff as well in his book Personality Judgment).

One thing worth mentioning in the context of stereotypes is that if you look at stuff like crime data – which sadly not many people do – and you stratify by variables like country of origin, the sub-group differences you observe tend to be very large. Many of the differences between subgroups are not on the order of 10%, the sort of difference which could probably be ignored without major consequences; some of them amount to one or two orders of magnitude. In some contexts the differences are so large that it is frankly foolish to assume there are none. To give an example, in Germany the probability that a random person, about whom you know nothing, has been a suspect in a thievery case is 22% if that random person happens to be of Algerian extraction, whereas it’s only 0.27% if you’re dealing with an immigrant from China. Roughly one in 13 of those Algerians have also been involved in a case of bodily harm, which is the case for fewer than one in 400 of the Chinese immigrants.
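
The arithmetic behind the ‘orders of magnitude’ remark, using the figures just quoted:

# Quick check of the ratios implied by the figures above.
theft_algeria, theft_china = 0.22, 0.0027     # share ever suspected in a thievery case
harm_algeria, harm_china = 1/13, 1/400        # share involved in a bodily-harm case

print(f"thievery ratio: {theft_algeria / theft_china:.0f}x")    # ~81x, i.e. nearly two orders of magnitude
print(f"bodily-harm ratio: {harm_algeria / harm_china:.0f}x")   # ~31x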

v. Assessing Immigrant Integration in Sweden after the May 2013 Riots. Some data from the article:

“Today, about one-fifth of Sweden’s population has an immigrant background, defined as those who were either born abroad or born in Sweden to two immigrant parents. The foreign born comprised 15.4 percent of the Swedish population in 2012, up from 11.3 percent in 2000 and 9.2 percent in 1990 […] Of the estimated 331,975 asylum applicants registered in EU countries in 2012, 43,865 (or 13 percent) were in Sweden. […] More than half of these applications were from Syrians, Somalis, Afghanis, Serbians, and Eritreans. […] One town of about 80,000 people, Södertälje, since the mid-2000s has taken in more Iraqi refugees than the United States and Canada combined.”

“Coupled with […] macroeconomic changes, the largely humanitarian nature of immigrant arrivals since the 1970s has posed challenges of labor market integration for Sweden, as refugees often arrive with low levels of education and transferable skills […] high unemployment rates have disproportionately affected immigrant communities in Sweden. In 2009-10, Sweden had the highest gap between native and immigrant employment rates among OECD countries. Approximately 63 percent of immigrants were employed compared to 76 percent of the native-born population. This 13 percentage-point gap is significantly greater than the OECD average […] Explanations for the gap include less work experience and domestic formal qualifications such as language skills among immigrants […] Among recent immigrants, defined as those who have been in the country for less than five years, the employment rate differed from that of the native born by more than 27 percentage points. In 2011, the Swedish newspaper Dagens Nyheter reported that 35 percent of the unemployed registered at the Swedish Public Employment Service were foreign born, up from 22 percent in 2005.”

“As immigrant populations have grown, Sweden has experienced a persistent level of segregation — among the highest in Western Europe. In 2008, 60 percent of native Swedes lived in areas where the majority of the population was also Swedish, and 20 percent lived in areas that were virtually 100 percent Swedish. In contrast, 20 percent of Sweden’s foreign born lived in areas where more than 40 percent of the population was also foreign born.”

vi. Book recommendations. Or rather, author recommendations. A while back I asked ‘the people of SSC’ if they knew of any fiction authors I hadn’t read yet who were both funny and easy to read. I got a lot of good suggestions, and the roughly 20 Dick Francis novels I read during the fall were all read as a consequence of that thread.

vii. On the genetic structure of Denmark.

viii. Religious Fundamentalism and Hostility against Out-groups: A Comparison of Muslims and Christians in Western Europe.

“On the basis of an original survey among native Christians and Muslims of Turkish and Moroccan origin in Germany, France, the Netherlands, Belgium, Austria and Sweden, this paper investigates four research questions comparing native Christians to Muslim immigrants: (1) the extent of religious fundamentalism; (2) its socio-economic determinants; (3) whether it can be distinguished from other indicators of religiosity; and (4) its relationship to hostility towards out-groups (homosexuals, Jews, the West, and Muslims). The results indicate that religious fundamentalist attitudes are much more widespread among Sunnite Muslims than among native Christians, even after controlling for the different demographic and socio-economic compositions of these groups. […] Fundamentalist believers […] show very high levels of out-group hostility, especially among Muslims.”

ix. Portal: Dinosaurs. It would have been so incredibly awesome to have had access to this kind of stuff back when I was a child. The portal includes links to articles with names like ‘Bone Wars‘ – what’s not to like? Again, awesome!

x. “you can’t determine if something is truly random from observations alone. You can only determine if something is not truly random.” (link) An important insight well expressed.

xi. Chessprogramming. If you’re interested in having a look at how chess programs work, this is a neat resource. The wiki contains lots of links with information on specific sub-topics of interest. Also chess-related: The World Championship match between Carlsen and Karjakin has started. To the extent that I’ll be following the live coverage, I’ll be following Svidler et al.’s coverage on chess24. Robin van Kampen and Eric Hansen – both 2600+ elo GMs – did quite well yesterday, in my opinion.

xii. Justified by More Than Logos Alone (Razib Khan).

“Very few are Roman Catholic because they have read Aquinas’ Five Ways. Rather, they are Roman Catholic, in order of necessity, because God aligns with their deep intuitions, basic cognitive needs in terms of cosmological coherency, and because the church serves as an avenue for socialization and repetitive ritual which binds individuals to the greater whole. People do not believe in Catholicism as often as they are born Catholics, and the Catholic religion is rather well fitted to a range of predispositions to the typical human.”

November 12, 2016 Posted by | Books, Chemistry, Chess, Data, dating, Demographics, Genetics, Geography, immigration, Paleontology, Papers, Physics, Psychology, Random stuff, Religion | Leave a comment

Photosynthesis in the Marine Environment (III)

This will be my last post about the book. After having spent a few hours on the post I started to realize the post would become very long if I were to cover all the remaining chapters, and so in the end I decided not to discuss material from chapter 12 (‘How some marine plants modify the environment for other organisms’) here, even though I actually thought some of that stuff was quite interesting. I may decide to talk briefly about some of the stuff in that chapter in another blogpost later on (but most likely I won’t). For a few general remarks about the book, see my second post about it.

Some stuff from the last half of the book below:

“The light reactions of marine plants are similar to those of terrestrial plants […], except that pigments other than chlorophylls a and b and carotenoids may be involved in the capturing of light […] and that special arrangements between the two photosystems may be different […]. Similarly, the CO2-fixation and -reduction reactions are also basically the same in terrestrial and marine plants. Perhaps one should put this the other way around: Terrestrial-plant photosynthesis is similar to marine-plant photosynthesis, which is not surprising since plants have evolved in the oceans for 3.4 billion years and their descendants on land for only 350–400 million years. […] In underwater marine environments, the accessibility to CO2 is low mainly because of the low diffusivity of solutes in liquid media, and for CO2 this is exacerbated by today’s low […] ambient CO2 concentrations. Therefore, there is a need for a CCM also in marine plants […] CCMs in cyanobacteria are highly active and accumulation factors (the internal vs. external CO2 concentrations ratio) can be of the order of 800–900 […] CCMs in eukaryotic microalgae are not as effective at raising internal CO2 concentrations as are those in cyanobacteria, but […] microalgal CCMs result in CO2 accumulation factors as high as 180 […] CCMs are present in almost all marine plants. These CCMs are based mainly on various forms of HCO3 [bicarbonate] utilisation, and may raise the intrachloroplast (or, in cyanobacteria, intracellular or intra-carboxysome) CO2 to several-fold that of seawater. Thus, Rubisco is in effect often saturated by CO2, and photorespiration is therefore often absent or limited in marine plants.”

“we view the main difference in photosynthesis between marine and terrestrial plants as the latter’s ability to acquire Ci [inorganic carbon] (in most cases HCO3) from the external medium and concentrate it intracellularly in order to optimise their photosynthetic rates or, in some cases, to be able to photosynthesise at all. […] CO2 dissolved in seawater is, under air-equilibrated conditions and given today’s seawater pH, in equilibrium with a >100 times higher concentration of HCO3, and it is therefore not surprising that most marine plants utilise the latter Ci form for their photosynthetic needs. […] any plant that utilises bulk HCO3 from seawater must convert it to CO2 somewhere along its path to Rubisco. This can be done in different ways by different plants and under different conditions”
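
A rough calculation behind that ‘>100 times’ figure, using carbonate-system constants I am assuming (typical surface-seawater values, not numbers from the book):

# Sketch of the ">100 times more HCO3- than CO2" claim (assumed constants, not from the book):
# the ratio follows [HCO3-]/[CO2] = 10**(pH - pK1), with an apparent pK1 of roughly 6.0
# for the CO2/HCO3- equilibrium in seawater and a typical surface-seawater pH of ~8.1.
pK1 = 6.0
pH = 8.1
ratio = 10 ** (pH - pK1)
print(f"[HCO3-]/[CO2] at pH {pH}: ~{ratio:.0f}")   # ~126, i.e. well over 100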

“The conclusion that macroalgae use HCO3 stems largely from results of experiments in which concentrations of CO2 and HCO3 were altered (chiefly by altering the pH of the seawater) while measuring photosynthetic rates, or where the plants themselves withdrew these Ci forms as they photosynthesised in a closed system as manifested by a pH increase (so-called pH-drift experiments) […] The reason that the pH in the surrounding seawater increases as plants photosynthesise is first that CO2 is in equilibrium with carbonic acid (H2CO3), and so the acidity decreases (i.e. pH rises) as CO2 is used up. At higher pH values (above ∼9), when all the CO2 is used up, then a decrease in HCO3 concentrations will also result in increased pH since the alkalinity is maintained by the formation of OH […] some algae can also give off OH to the seawater medium in exchange for HCO3 uptake, bringing the pH up even further (to >10).”

Carbonic anhydrase (CA) is a ubiquitous enzyme, found in all organisms investigated so far (from bacteria, through plants, to mammals such as ourselves). This may be seen as remarkable, since its only function is to catalyse the inter-conversion between CO2 and HCO3 in the reaction CO2 + H2O ↔ H2CO3; we can exchange the latter Ci form to HCO3 since this is spontaneously formed by H2CO3 and is present at a much higher equilibrium concentration than the latter. Without CA, the equilibrium between CO2 and HCO3 is a slow process […], but in the presence of CA the reaction becomes virtually instantaneous. Since CO2 and HCO3 generate different pH values of a solution, one of the roles of CA is to regulate intracellular pH […] another […] function is to convert HCO3 to CO2 somewhere en route towards the latter’s final fixation by Rubisco.”

“with very few […] exceptions, marine macrophytes are not C 4 plants. Also, while a CAM-like [Crassulacean acid metabolism-like, see my previous post about the book for details] feature of nightly uptake of Ci may complement that of the day in some brown algal kelps, this is an exception […] rather than a rule for macroalgae in general. Thus, virtually no marine macroalgae are C 4 or CAM plants, and instead their CCMs are dependent on HCO3 utilization, which brings about high concentrations of CO2 in the vicinity of Rubisco. In Ulva, this type of CCM causes the intra-cellular CO2 concentration to be some 200 μM, i.e. ∼15 times higher than that in seawater.“

“deposition of calcium carbonate (CaCO3) as either calcite or aragonite in marine organisms […] can occur within the cells, but for macroalgae it usually occurs outside of the cell membranes, i.e. in the cell walls or other intercellular spaces. The calcification (i.e. CaCO3 formation) can sometimes continue in darkness, but is normally greatly stimulated in light and follows the rate of photosynthesis. During photosynthesis, the uptake of CO2 will lower the total amount of dissolved inorganic carbon (Ci) and, thus, increase the pH in the seawater surrounding the cells, thereby increasing the saturation state of CaCO3. This, in turn, favours calcification […]. Conversely, it has been suggested that calcification might enhance the photosynthetic rate by increasing the rate of conversion of HCO3 to CO2 by lowering the pH. Respiration will reduce calcification rates when released CO2 increases Ci and/but lowers intercellular pH.”

“photosynthesis is most efficient at very low irradiances and increasingly inefficient as irradiances increase. This is most easily understood if we regard ‘efficiency’ as being dependent on quantum yield: At low ambient irradiances (the light that causes photosynthesis is also called ‘actinic’ light), almost all the photon energy conveyed through the antennae will result in electron flow through (or charge separation at) the reaction centres of photosystem II […]. Another way to put this is that the chances for energy funneled through the antennae to encounter an oxidised (or ‘open’) reaction centre are very high. Consequently, almost all of the photons emitted by the modulated measuring light will be consumed in photosynthesis, and very little of that photon energy will be used for generating fluorescence […] the higher the ambient (or actinic) light, the less efficient is photosynthesis (quantum yields are lower), and the less likely it is for photon energy funnelled through the antennae (including those from the measuring light) to find an open reaction centre, and so the fluorescence generated by the latter light increases […] Alpha (α), which is a measure of the maximal photosynthetic efficiency (or quantum yield, i.e. photosynthetic output per photons received, or absorbed […] by a specific leaf/thallus area, is high in low-light plants because pigment levels (or pigment densities per surface area) are high. In other words, under low-irradiance conditions where few photons are available, the probability that they will all be absorbed is higher in plants with a high density of photosynthetic pigments (or larger ‘antennae’ […]). In yet other words, efficient photon absorption is particularly important at low irradiances, where the higher concentration of pigments potentially optimises photosynthesis in low-light plants. In high-irradiance environments, where photons are plentiful, their efficient absorption becomes less important, and instead it is reactions downstream of the light reactions that become important in the performance of optimal rates of photosynthesis. The CO2-fixing capability of the enzyme Rubisco, which we have indicated as a bottleneck for the entire photosynthetic apparatus at high irradiances, is indeed generally higher in high-light than in low-light plants because of its higher concentration in the former. So, at high irradiances where the photon flux is not limiting to photosynthetic rates, the activity of Rubisco within the CO2-fixation and -reduction part of photosynthesis becomes limiting, but is optimised in high-light plants by up-regulation of its formation. […] photosynthetic responses have often been explained in terms of adaptation to low light being brought about by alterations in either the number of ‘photosynthetic units’ or their size […] There are good examples of both strategies occurring in different species of algae”.
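
The trade-off between α (the initial slope) and the light-saturated rate is often summarized with a photosynthesis-irradiance (P-I) curve. Below is a sketch using the common Jassby & Platt hyperbolic-tangent model; this is a standard choice rather than necessarily the formulation used in the book, and the parameter values are invented for illustration.

# A commonly used P-I model (Jassby & Platt 1976), sketched to illustrate alpha and Pmax;
# not necessarily the book's own formulation, and the parameter values are made up.
import numpy as np

def p_vs_i(irradiance, p_max, alpha):
    # alpha: initial slope (photosynthesis per photon at low light, i.e. maximal efficiency)
    # p_max: light-saturated rate (limited by Rubisco capacity rather than photon supply)
    return p_max * np.tanh(alpha * irradiance / p_max)

I = np.array([10, 50, 100, 500, 1500])                  # irradiance, umol photons m-2 s-1 (hypothetical)
low_light_plant  = p_vs_i(I, p_max=5.0,  alpha=0.05)    # high alpha (more pigment), low Pmax
high_light_plant = p_vs_i(I, p_max=15.0, alpha=0.02)    # lower alpha, high Pmax (more Rubisco)
print(low_light_plant.round(2))    # does better at low irradiance
print(high_light_plant.round(2))   # does better at high irradiance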

“In general, photoinhibition can be defined as the lowering of photosynthetic rates at high irradiances. This is mainly due to the rapid (sometimes within minutes) degradation of […] the D1 protein. […] there are defense mechanisms [in plants] that divert excess light energy to processes different from photosynthesis; these processes thus cause a downregulation of the entire photosynthetic process while protecting the photosynthetic machinery from excess photons that could cause damage. One such process is the xanthophyll cycle. […] It has […] been suggested that the activity of the CCM in marine plants […] can be a source of energy dissipation. If CO2 levels are raised inside the cells to improve Rubisco activity, some of that CO2 can potentially leak out of the cells, and so raising the net energy cost of CO2 accumulation and, thus, using up large amounts of energy […]. Indirect evidence for this comes from experiments in which CCM activity is down-regulated by elevated CO2

“Photoinhibition is often divided into dynamic and chronic types, i.e. the former is quickly remedied (e.g. during the day[…]) while the latter is more persistent (e.g. over seasons […] the mechanisms for down-regulating photosynthesis by diverting photon energies and the reducing power of electrons away from the photosynthetic systems, including the possibility of detoxifying oxygen radicals, is important in high-light plants (that experience high irradiances during midday) as well as in those plants that do see significant fluctuations in irradiance throughout the day (e.g. intertidal benthic plants). While low-light plants may lack those systems of down-regulation, one must remember that they do not live in environments of high irradiances, and so seldom or never experience high irradiances. […] If plants had a mind, one could say that it was worth it for them to invest in pigments, but unnecessary to invest in high amounts of Rubisco, when growing under low-light conditions, and necessary for high-light growing plants to invest in Rubisco, but not in pigments. Evolution has, of course, shaped these responses”.

“shallow-growing corals […] show two types of photoinhibition: a dynamic type that remedies itself at the end of each day and a more chronic type that persists over longer time periods. […] Bleaching of corals occurs when they expel their zooxanthellae to the surrounding water, after which they either die or acquire new zooxanthellae of other types (or clades) that are better adapted to the changes in the environment that caused the bleaching. […] Active Ci acquisition mechanisms, whether based on localised active H+ extrusion and acidification and enhanced CO2 supply, or on active transport of HCO3, are all energy requiring. As a consequence it is not surprising that the CCM activity is decreased at lower light levels […] a whole spectrum of light-responses can be found in seagrasses, and those are often in co-ordinance with the average daily irradiances where they grow. […] The function of chloroplast clumping in Halophila stipulacea appears to be protection of the chloroplasts from high irradiances. Thus, a few peripheral chloroplasts ‘sacrifice’ themselves for the good of many others within the clump that will be exposed to lower irradiances. […] While water is an effective filter of UV radiation (UVR)2, many marine organisms are sensitive to UVR and have devised ways to protect themselves against this harmful radiation. These ways include the production of UV-filtering compounds called mycosporine-like amino acids (MAAs), which is common also in seagrasses”.

“Many algae and seagrasses grow in the intertidal and are, accordingly, exposed to air during various parts of the day. On the one hand, this makes them amenable to using atmospheric CO2, the diffusion rate of which is some 10 000 times higher in air than in water. […] desiccation is […] the big drawback when growing in the intertidal, and excessive desiccation will lead to death. When some of the green macroalgae left the seas and formed terrestrial plants some 400 million years ago (the latter of which then ‘invaded’ Earth), there was a need for measures to evolve that on the one side ensured a water supply to the above-ground parts of the plants (i.e. roots1) and, on the other, hindered the water entering the plants to evaporate (i.e. a water-impermeable cuticle). Macroalgae lack those barriers against losing intracellular water, and are thus more prone to desiccation, the rate of which depends on external factors such as heat and humidity and internal factors such as thallus thickness. […] the mechanisms of desiccation tolerance in macroalgae is not well understood on the cellular level […] there seems to be a general correlation between the sensitivity of the photosynthetic apparatus (more than the respiratory one) to desiccation and the occurrence of macroalgae along a vertical gradient in the intertidal: the less sensitive (i.e. the more tolerant), the higher up the algae can grow. This is especially true if the sensitivity to desiccation is measured as a function of the ability to regain photosynthetic rates following rehydration during re-submergence. While this correlation exists, the mechanism of protecting the photosynthetic system against desiccation is largely unknown”.

July 28, 2015 Posted by | Biology, Books, Botany, Chemistry, Evolutionary biology, Microbiology | Leave a comment

Photosynthesis in the Marine Environment (II)

Here’s my first post about the book. I gave the book four stars on goodreads – here’s a link to my short goodreads review of the book.

As pointed out in the review, ‘it’s really mostly a biochemistry text.’ At least there’s a lot of that stuff in there (‘it gets better towards the end’, would be one way to put it – the last chapters deal mostly with other topics, such as measurement and brief notes on some not-particularly-well-explored ecological dynamics of potential interest), and if you don’t want to read a book which deals in some detail with topics and concepts like alkalinity, crassulacean acid metabolism, photophosphorylation, photosynthetic reaction centres, Calvin cycle (also known straightforwardly as the ‘reductive pentose phosphate cycle’…), enzymes with names like Ribulose-1,5-bisphosphate carboxylase/oxygenase (‘RuBisCO’ among friends…) and phosphoenolpyruvate carboxylase (‘PEP-case’ among friends…), mycosporine-like amino acid, 4,4′-Diisothiocyanatostilbene-2,2′-disulfonic acid (‘DIDS’ among friends), phosphoenolpyruvate, photorespiration, carbonic anhydrase, C4 carbon fixation, cytochrome b6f complex, … – well, you should definitely not read this book. If you do feel like reading about these sorts of things, having a look at the book seems to me a better idea than reading the wiki articles.

I’m not a biochemist but I could follow a great deal of what was going on in this book, which is perhaps a good indication of how well written the book is. This stuff’s interesting and complicated, and the authors cover most of it quite well. The book has way too much stuff for it to make sense to cover all of it here, but I do want to cover some more stuff from the book, so I’ve added some quotes below.

“Water velocities are central to marine photosynthetic organisms because they affect the transport of nutrients such as Ci [inorganic carbon] towards the photosynthesising cells, as well as the removal of by-products such as excess O2 during the day. Such bulk transport is especially important in aquatic media since diffusion rates there are typically some 10 000 times lower than in air […] It has been established that increasing current velocities will increase photosynthetic rates and, thus, productivity of macrophytes as long as they do not disrupt the thalli of macroalgae or the leaves of seagrasses”.

Photosynthesis is the process by which the energy of light is used in order to form energy-rich organic compounds from low-energy inorganic compounds. In doing so, electrons from water (H2O) reduce carbon dioxide (CO2) to carbohydrates. […] The process of photosynthesis can conveniently be separated into two parts: the ‘photo’ part in which light energy is converted into chemical energy bound in the molecule ATP and reducing power is formed as NADPH [another friend with a long name], and the ‘synthesis’ part in which that ATP and NADPH are used in order to reduce CO2 to sugars […]. The ‘photo’ part of photosynthesis is, for obvious reasons, also called its light reactions while the ‘synthesis’ part can be termed CO2-fixation and -reduction, or the Calvin cycle after one of its discoverers; this part also used to be called the ‘dark reactions’ [or light-independent reactions] of photosynthesis because it can proceed in vitro (= outside the living cell, e.g. in a test-tube) in darkness provided that ATP and NADPH are added artificially. […] ATP and NADPH are the energy source and reducing power, respectively, formed by the light reactions, that are subsequently used in order to reduce carbon dioxide (CO2) to sugars (synonymous with carbohydrates) in the Calvin cycle. Molecular oxygen (O2) is formed as a by-product of photosynthesis.”

“In photosynthetic bacteria (such as the cyanobacteria), the light reactions are located at the plasma membrane and internal membranes derived as invaginations of the plasma membrane. […] most of the CO2-fixing enzyme ribulose-bisphosphate carboxylase/oxygenase […] is here located in structures termed carboxysomes. […] In all other plants (including algae), however, the entire process of photosynthesis takes place within intracellular compartments called chloroplasts which, as the name suggests, are chlorophyll-containing plastids (plastids are those compartments in cells that are associated with photosynthesis).”

“Photosynthesis can be seen as a process in which part of the radiant energy from sunlight is ‘harvested’ by plants in order to supply chemical energy for growth. The first step in such light harvesting is the absorption of photons by photosynthetic pigments[1]. The photosynthetic pigments are special in that they not only convert the energy of absorbed photons to heat (as do most other pigments), but largely convert photon energy into a flow of electrons; the latter is ultimately used to provide chemical energy to reduce CO2 to carbohydrates. […] Pigments are substances that can absorb different wavelengths selectively and so appear as the colour of those photons that are less well absorbed (and, therefore, are reflected, or transmitted, back to our eyes). (An object is black if all photons are absorbed, and white if none are absorbed.) In plants and animals, the pigment molecules within the cells and their organelles thus give them certain colours. The green colour of many plant parts is due to the selective absorption of chlorophylls […], while other substances give colour to, e.g. flowers or fruits. […] Chlorophyll is a major photosynthetic pigment, and chlorophyll a is present in all plants, including all algae and the cyanobacteria. […] The molecular sub-structure of the chlorophyll’s ‘head’ makes it absorb mainly blue and red light […], while green photons are hardly absorbed but, rather, reflected back to our eyes […] so that chlorophyll-containing plant parts look green. […] In addition to chlorophyll a, all plants contain carotenoids […] All these accessory pigments act to fill in the ‘green window’ generated by the chlorophylls’ non-absorbance in that band […] and, thus, broaden the spectrum of light that can be utilized […] beyond that absorbed by chlorophyll.”

“Photosynthesis is principally a redox process in which carbon dioxide (CO2) is reduced to carbohydrates (or, in a shorter word, sugars) by electrons derived from water. […] since water has an energy level (or redox potential) that is much lower than that of sugar, or, more precisely, than that of the compound that finally reduces CO2 to sugars (i.e. NADPH), it follows that energy must be expended in the process; this energy stems from the photons of light. […] Redox reactions are those reactions in which one compound, B, becomes reduced by receiving electrons from another compound, A, the latter then becomes oxidised by donating the electrons to B. The reduction of B can only occur if the electron-donating compound A has a higher energy level, or […] has a redox potential that is higher, or more negative in terms of electron volts, than that of compound B. The redox potential, or reduction potential, […] can thus be seen as a measure of the ease by which a compound can become reduced […] the greater the difference in redox potential between compounds B and A, the greater the tendency that B will be reduced by A. In photosynthesis, the redox potential of the compound that finally reduces CO2, i.e. NADPH, is more negative than that from which the electrons for this reduction stems, i.e. H2O, and the entire process can therefore not occur spontaneously. Instead, light energy is used in order to boost electrons from H2O through intermediary compounds to such high redox potentials that they can, eventually, be used for CO2 reduction. In essence, then, the light reactions of photosynthesis describe how photon energy is used to boost electrons from H2O to an energy level (or redox potential) high (or negative) enough to reduce CO2 to sugars.”
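
A standard worked example of that ‘uphill’ energetics (the redox potentials and the relation ΔG = -nFΔE are textbook values and formulas I am supplying, not quoted from the book):

# Worked example with standard textbook potentials (not quoted from the book):
# moving 2 electrons from H2O (E0' = +0.82 V for the O2/H2O couple) to NADP+
# (E0' = -0.32 V) goes "uphill", so delta-G is positive and photon energy is needed.
F = 96485            # Faraday constant, C per mole of electrons
n = 2                # electrons transferred per NADPH formed
E_donor, E_acceptor = +0.82, -0.32    # volts
delta_E = E_acceptor - E_donor        # -1.14 V
delta_G = -n * F * delta_E            # J per mole
print(f"delta-G ~ +{delta_G / 1000:.0f} kJ/mol of NADPH")   # ~ +220 kJ/mol, hence not spontaneous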

“Fluorescence in general is the generation of light (emission of photons) from the energy released during de-excitation of matter previously excited by electromagnetic energy. In photosynthesis, fluorescence occurs as electrons of chlorophyll undergo de-excitation, i.e. return to the original orbital from which they were knocked out by photons. […] there is an inverse (or negative) correlation between fluorescence yield (i.e. the amount of fluorescence generated per photons absorbed by chlorophyll) and photosynthetic yield (i.e. the amount of photosynthesis performed per photons similarly absorbed).”

“In some cases, more photon energy is received by a plant than can be used for photosynthesis, and this can lead to photo-inhibition or photo-damage […]. Therefore, many plants exposed to high irradiances possess ways of dissipating such excess light energy, the most well known of which is the xanthophyll cycle. In principle, energy is shuttled between various carotenoids collectively called xanthophylls and is, in the process, dissipated as heat.”

“In order to ‘fix’ CO2 (= incorporate it into organic matter within the cell) and reduce it to sugars, the NADPH and ATP formed in the light reactions are used in a series of chemical reactions that take place in the stroma of the chloroplasts (or, in prokaryotic autotrophs such as cyanobacteria, the cytoplasm of the cells); each reaction is catalysed by its specific enzyme, and the bottleneck for the production of carbohydrates is often considered to be the enzyme involved in its first step, i.e. the fixation of CO2 [this enzyme is RubisCO] […] These CO2-fixation and -reduction reactions are known as the Calvin cycle […] or the C3 cycle […] The latter name stems from the fact that the first stable product of CO2 fixation in the cycle is a 3-carbon compound called phosphoglyceric acid (PGA): Carbon dioxide in the stroma is fixed onto a 5-carbon sugar called ribulose-bisphosphate (RuBP) in order to form 2 molecules of PGA […] It should be noted that this reaction does not produce a reduced, energy-rich, carbon compound, but is only the first, ‘CO2– fixing’, step of the Calvin cycle. In subsequent steps, PGA is energized by the ATP formed through photophosphorylation and is reduced by NADPH […] to form a 3-carbon phosphorylated sugar […] here denoted simply as triose phosphate (TP); these reactions can be called the CO2-reduction step of the Calvin cycle […] 1/6 of the TPs formed leave the cycle while 5/6 are needed in order to re-form RuBP molecules in what we can call the regeneration part of the cycle […]; it is this recycling of most of the final product of the Calvin cycle (i.e. TP) to re-form RuBP that lends it to be called a biochemical ‘cycle’ rather than a pathway.”
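
The 1/6 versus 5/6 split is simply carbon bookkeeping, which a short sketch makes explicit (counting carbon atoms per three CO2 fixed; the stoichiometry follows the quote above):

# Carbon bookkeeping for the Calvin cycle as described above, per 3 CO2 fixed.
co2_fixed = 3                      # 3 CO2, 1 carbon each
rubp_used = 3                      # 3 RuBP, 5 carbons each
carbons_in = co2_fixed * 1 + rubp_used * 5    # 18 carbons enter
pga_formed = 6                     # 6 PGA, 3 carbons each = 18 carbons, checks out
tp_formed = 6                      # reduced to 6 TP (3 C each) using ATP and NADPH
tp_exported = 1                    # 1/6 leaves the cycle as net product
tp_recycled = tp_formed - tp_exported         # 5/6 (15 carbons) regenerate 3 RuBP (3 x 5 C)
assert carbons_in == pga_formed * 3
assert tp_recycled * 3 == rubp_used * 5
print(f"net gain per turn: {tp_exported} triose phosphate, i.e. {tp_exported * 3} carbons from {co2_fixed} CO2")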

“Rubisco […] not only functions as a carboxylase, but […] also acts as an oxygenase […] When Rubisco reacts with oxygen instead of CO2, only 1 molecule of PGA is formed together with 1 molecule of the 2-carbon compound phosphoglycolate […] Not only is there no gain in organic carbon by this reaction, but CO2 is actually lost in the further metabolism of phosphoglycolate, which comprises a series of reactions termed photorespiration […] While photorespiration is a complex process […] it is also an apparently wasteful one […] and it is not known why this process has evolved in plants altogether. […] Photorespiration can reduce the net photosynthetic production by up to 25%.”

“Because of Rubisco’s low affinity to CO2 as compared with the low atmospheric, and even lower intracellular, CO2 concentration […], systems have evolved in some plants by which CO2 can be concentrated at the vicinity of this enzyme; these systems are accordingly termed CO2 concentrating mechanisms (CCM). For terrestrial plants, this need for concentrating CO2 is exacerbated in those that grow in hot and/or arid areas where water needs to be saved by partly or fully closing stomata during the day, thus restricting also the influx of CO2 from an already CO2-limiting atmosphere. Two such CCMs exist in terrestrial plants: the C4 cycle and the Crassulacean acid metabolism (CAM) pathway. […] The C 4 cycle is called so because the first stable product of CO2-fixation is not the 3-carbon compound PGA (as in the Calvin cycle) but, rather, malic acid (often referred to by its anion malate) or aspartic acid (or its anion aspartate), both of which are 4-carbon compounds. […] C4 [terrestrial] plants are […] more common in areas of high temperature, especially when accompanied with scarce rains, than in areas with higher rainfall […] While atmospheric CO2 is fixed […] via the C4 cycle, it should be noted that this biochemical cycle cannot reduce CO2 to high energy containing sugars […] since the Calvin cycle is the only biochemical system that can reduce CO2 to energy-rich carbohydrates in plants, it follows that the CO2 initially fixed by the C4 cycle […] is finally reduced via the Calvin cycle also in C4 plants. In summary, the C 4 cycle can be viewed as being an additional CO2 sequesterer, or a biochemical CO2 ‘pump’, that concentrates CO2 for the rather inefficient enzyme Rubisco in C4 plants that grow under conditions where the CO2 supply is extremely limited because partly closed stomata restrict its influx into the photosynthesising cells.”

“Crassulacean acid metabolism (CAM) is similar to the C 4 cycle in that atmospheric CO2 […] is initially fixed via PEP-case into the 4-carbon compound malate. However, this fixation is carried out during the night […] The ecological advantage behind CAM metabolism is that a CAM plant can grow, or at least survive, under prolonged (sometimes months) conditions of severe water stress. […] CAM plants are typical of the desert flora, and include most cacti. […] The principal difference between C 4 and CAM metabolism is that in C4 plants the initial fixation of atmospheric CO2 and its final fixation and reduction in the Calvin cycle is separated in space (between mesophyll and bundle-sheath cells) while in CAM plants the two processes are separated in time (between the initial fixation of CO2 during the night and its re-fixation and reduction during the day).”

July 20, 2015 Posted by | Biology, Botany, Chemistry, Ecology, Microbiology | Leave a comment

Calculated Risks: Understanding the Toxicity of Chemicals in our Environment (ii)

I finished the book – I didn’t expect to do that quite so soon, which is why I ended up posting two posts about the book during the same day. As Miao pointed out in her comment, a newer version of the book exists, so if my posts have made you curious you should probably give that one a shot instead; this is a good book, but sometimes you can tell it wasn’t exactly written yesterday.

This book may tell you a lot of stuff you already know, especially if you have a little knowledge about biological systems, the human body, or perhaps basic statistics. I considered big parts of some chapters to be review stuff I already knew; I’d have preferred a slightly more detailed and in-depth treatment of the material. I didn’t need to be reminded how the kidneys work or that there’s such a thing as a blood-brain barrier, the stats stuff was of course old hat to me, I’m familiar with the linear no-threshold model, and there’s a lot of stuff about carcinogens in Mukherjee not covered in this book…

So it may tell you a lot of stuff you already know. But it will also tell you a lot of new stuff. I learned quite a bit and I liked reading the book, even the parts I probably didn’t really ‘need’ to read. I gave it 3 stars on account of the ‘written two decades ago’-thing and the ‘I don’t think I’m part of the core target group’-thing – but if current me had read it in the year 2000 I’d probably have given it four stars.

I don’t really know if the newer edition of the book is better than the one I read, and it’s dangerous to make assumptions about these things, but if he hasn’t updated it at all it’s still a good book, and if he has updated the material the new version is in all likelihood even better than the one I read. If you’re interested in this stuff, I don’t think this is a bad place to start.

I found out while writing the first post about the book that quoting from the book is quite bothersome. I’m lazy, so I decided to limit coverage here to some links which I’ve posted below – the stuff I link to is either covered or related to stuff that is covered in the book. It was a lot easier for me to post these links than to quote from the book in part because I visited many of these articles along the way while reading the book:

Aflatoxin.
No-observed-adverse-effect level.
Teratology.
Paraquat.
DDT.
Methemoglobinemia.
Erethism.
DBCP.
Diethylstilbestrol.
Polycyclic aromatic hydrocarbon.
Percivall Pott.
Bioassay.
Electrophile.
Clastogen.
Mutagen.
Dose–response relationship.
Acceptable daily intake.
Linear Low dose Extrapolation for Cancer Risk Assessments: Sources of Uncertainty and How They Affect the Precision of Risk Estimates (short paper)
Delaney clause.
OSHA.

Do note that these links taken together can be somewhat misleading – as you could hopefully tell from the quotes in the first post, the book is quite systematic and the main focus is on basic/key concepts. To the extent that specific poisons like paraquat and DDT are mentioned in the book they’re used to ‘zoom in’ on a certain aspect in order to illustrate a specific feature, or perhaps in order to point out an important distinction – stuff like that.

July 30, 2013 Posted by | Biology, Books, Cancer/oncology, Chemistry, Medicine | Leave a comment

Calculated Risks: Understanding the Toxicity of Chemicals in our Environment

So what is this book about? The introductory remarks below from the preface provide part of the answer:

“A word about organization of topics […] First, it is important to understand what we mean when we talk about ‘chemicals’. Many people think the term refers only to generally noxious materials that are manufactured in industrial swamps, frequently for no good purpose. The existence of such an image impedes understanding of toxicology and needs to be corrected. Moreover, because the molecular architecture of chemicals is a determinant of their behaviour in biological systems, it is important to create a little understanding of the principles of chemical structure and behavior. For these reasons, we begin with a brief review of some fundamentals of chemistry.

The two ultimate sources of chemicals – nature and industrial and laboratory synthesis – are then briefly described. This review sets the stage for a discussion of how human beings become exposed to chemicals. The conditions of human exposure are a critical determinant of whether and how a chemical will produce injury or disease, so the discussion of chemical sources and exposures naturally leads to the major subject of the book – the science of toxicology.

The major subjects of the last third of this volume are risk assessment […] and risk control, or management, and the associated topic of public perceptions of risk in relation to the judgments of experts.”

What can I say? – it made sense to read a toxicology textbook in between the Christie novels… The book was written in the 90s, but there are a lot of key principles and -concepts covered here that probably don’t have a much better description now than they did when the book was written. I wanted the overview and the book has delivered so far – I like it. Here’s some more stuff from the first half of the book:

“the greatest sources of chemicals to which we are regularly and directly exposed are the natural components of the plants and animals we consume as foods. In terms of both numbers and structural variations, no other chemical source matches food. We have no firm estimate of the number of such chemicals we are exposed to through food, but it is surely immense. A cup of coffee contains, for example, nearly 200 different organic chemicals – natural components of the coffee bean that are extracted into water. Some impart color, some taste, some aroma, others none of the above. The simple potato has about 100 different natural components …” […]

“These facts bring out one of the most important concepts in toxicology: all chemicals are toxic under some conditions of exposure. What the toxicologist would like to know are those conditions. Once they are known, measures can be taken to limit human exposures so that toxicity can be avoided.” […]

The route of exposure refers to the way the chemical moves from the exposure medium into the body. For chemicals in the environment the three major routes are ingestion (the oral route), inhalation, and skin contact (or dermal contact). […]

“The typical dose units are […] milligram of chemical per kilogram of body weight per day (mg/kg b.w./day). […] For the same intake […] the lighter person receives the greater dose. […] Duration of exposure as well as the dose received […] needs to be included in the equation […] dose and its duration are the critical determinants of the potential for toxicity. Exposure creates the dose.” […]
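
A minimal sketch of the dose calculation described in that quote (the numbers are hypothetical):

# Sketch of the dose calculation described above (hypothetical numbers).
def dose_mg_per_kg_per_day(intake_mg_per_day, body_weight_kg):
    return intake_mg_per_day / body_weight_kg

intake = 10.0               # mg of some chemical ingested per day
for weight in (50, 100):    # a 50 kg person versus a 100 kg person
    print(f"{weight} kg person: {dose_mg_per_kg_per_day(intake, weight):.2f} mg/kg b.w./day")
# Same intake, but the lighter person receives twice the dose (0.20 vs. 0.10 mg/kg b.w./day).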

“Analytical chemistry has undergone extraordinary advances over the past two to three decades. Chemists are able to measure many chemicals at the part-per-billion level which in the 1960s could be measured only at the part-per-million level […] or even the part-per-thousand level. […] These advances in detection capabilities have revealed that industrial chemicals are more widespread in the environment than might have been guessed 10 or 20 years ago, simply because chemists are now capable of measuring concentrations that could not be detected with analytical technology available in the 1960s. This trend will no doubt continue …” […]

“The nature of toxic damage produced by a chemical, the part of the body where that damage occurs, the severity of the damage, and the likelihood that the damage can be reversed, all depend upon the processes of absorption, distribution, metabolism and excretion, ADME for short. The combined effects of these processes determine the concentration a particular chemical […] will achieve in various tissues and cells of the body and the duration of time it spends there. Chemical form, concentration, and duration in turn determine the nature and extent of injury produced. Injury produced after absorption is referred to as systemic toxicity, to contrast it with local toxicity.” […]
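
The combined effect of ADME on concentration and duration is often approximated with a one-compartment, first-order model. The sketch below uses that standard simplification, which is not a model taken from the book, and all parameter values are invented:

# A standard one-compartment, first-order toxicokinetic sketch (not the book's model;
# all parameters hypothetical), showing how dose, distribution volume and elimination
# half-life set the concentration-time profile that ultimately determines injury.
import math

def concentration(t_hours, dose_mg, vd_litres, half_life_hours):
    k = math.log(2) / half_life_hours    # first-order elimination rate constant
    c0 = dose_mg / vd_litres             # concentration after instant absorption/distribution
    return c0 * math.exp(-k * t_hours)

for t in (0, 4, 8, 24):
    print(f"t = {t:2d} h: {concentration(t, dose_mg=100, vd_litres=40, half_life_hours=8):.2f} mg/L")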

“Care must be taken to distinguish subchronic or chronic exposures from subchronic or chronic effects. By the latter, toxicologists generally refer to some adverse effect that does not appear immediately after exposure begins but only after a delay; sometimes the effect may not be observed until near the end of a lifetime, even when exposure begins early in life (cancers, for example, are generally in this category of chronic effects). But the production of chronic effects may or may not require chronic exposure. For some chemicals acute or subchronic exposures may be all that is needed to produce a chronic toxicity; the effect is a delayed one. For others chronic exposure may be required to create chronic toxicity.” […]

“In the final analysis we are interested not in toxicity, but rather in risk. By risk is meant the likelihood, or probability, that the toxic properties of a chemical will be produced in populations of individuals under their actual conditions of exposure. To evaluate the risk of toxicity occurring for a specific chemical at least three types of information are required:
1) The types of toxicity the chemical can produce (its targets and the forms of injury they incur).
2) The conditions of exposure (dose and duration) under which the chemical’s toxicity can be produced.
3) The conditions (dose, timing and duration) under which the population of people whose risk is being evaluated is or could be exposed to the chemical.

It is not sufficient to understand any one or two of these; no useful statement about risk can be made unless all three are understood.” […]

“It is rare that any single epidemiology study provides sufficiently definitive information to allow scientists to conclude that a cause-effect relationship exists between a chemical exposure and a human disease. Instead epidemiologists search for certain patterns. Does there seem to be a consistent association between the occurrence of excess rates of a certain condition (lung cancer, for example) and certain exposures (e.g. to cigarette smoke) in several epidemiology studies involving different populations of people? If a consistent pattern of associations is seen, and other criteria are satisfied, causality can be established with reasonable certainty. […] Epidemiology studies are, of course, only useful after exposure has occurred. For certain classes of toxic agents, carcinogens being the most notable, exposure may have to take place for several decades before the effect, if it exists, is observable […] The obvious point is that epidemiology studies cannot be used to identify toxic properties prior to the introduction of the chemical into commerce. This is one reason toxicologists turn to the laboratory. […] “The ‘nuts and bolts’ of animal testing, and the problems of test interpretation and extrapolation of results to human beings, comprise one of the central areas of controversy in the field of chemical risk assessment.” […]

Toxicologists classify hepatic toxicants according to the type of injuries they produce. Some cause accumulation of excessive and potentially dangerous amounts of lipids (fats). Others can kill liver cells; they cause cell necrosis. Cholestasis, which is decreased secretion of bile leading to jaundice […] can be produced as side effects of several therapeutic agents. Cirrhosis, a chronic change characterized by the deposition of connective tissue fibers, can be brought about after chronic exposure to several substances. […] ‘hepatotoxicity’ is not a very helpful term, because it fails to convey the fact that several quite distinct types of hepatic injury can be induced by chemical exposures and that, for each, different underlying mechanisms are at work. In fact, this situation exists for all targets, not only the liver.”

July 30, 2013 Posted by | Biology, Books, Chemistry, Epidemiology, Medicine | 2 Comments

Wikipedia articles of interest

i. Planetary habitability (featured).

“Planetary habitability is the measure of a planet’s or a natural satellite’s potential to develop and sustain life. Life may develop directly on a planet or satellite or be transferred to it from another body, a theoretical process known as panspermia. As the existence of life beyond Earth is currently uncertain, planetary habitability is largely an extrapolation of conditions on Earth and the characteristics of the Sun and Solar System which appear favourable to life’s flourishing—in particular those factors that have sustained complex, multicellular organisms and not just simpler, unicellular creatures. Research and theory in this regard is a component of planetary science and the emerging discipline of astrobiology.

An absolute requirement for life is an energy source, and the notion of planetary habitability implies that many other geophysical, geochemical, and astrophysical criteria must be met before an astronomical body can support life. In its astrobiology roadmap, NASA has defined the principal habitability criteria as “extended regions of liquid water, conditions favourable for the assembly of complex organic molecules, and energy sources to sustain metabolism.”[1]

In determining the habitability potential of a body, studies focus on its bulk composition, orbital properties, atmosphere, and potential chemical interactions. Stellar characteristics of importance include mass and luminosity, stable variability, and high metallicity. Rocky, terrestrial-type planets and moons with the potential for Earth-like chemistry are a primary focus of astrobiological research, although more speculative habitability theories occasionally examine alternative biochemistries and other types of astronomical bodies.”

The article has a lot of stuff – if you’re the least bit interested (and if you are human and alive, as well as a complex enough lifeform to even conceptualize questions like these, why wouldn’t you be?), you should go have a look. When analyzing which factors might impact the habitability of a system, we humans are rather constrained by our limited sample size of planetary systems known to support complex multicellular life, but that doesn’t mean we can’t say anything about the subject – even though extreme caution is naturally warranted when drawing conclusions here. Incidentally, although the Earth does support complex life now, we would probably be well-advised to remember that this was not always the case, nor will it continue to be the case in the future – here’s one guess at what the Earth will look like in 7 billion years:

[Image: Red_Giant_Earth_warm – an artist’s impression of the Earth under a red-giant Sun]

The image is from this article. Of course living organisms on Earth will be screwed long before this point is reached.

ii. Parity of zero (‘good article’).

Zero is an even number. Apparently a rather long Wikipedia article can be written about this fact…

iii. 1907 Tiflis bank robbery (featured).

Not just any bank robbery – guns as well as bombs/grenades were used during the robbery, around 40 people died(!), and the list of names of the people behind the robbery includes the names Stalin and Lenin.

iv. Möbius Syndrome – what would your life be like if you were unable to make facial expressions and unable to move your eyes from side to side? If you want to know, you should ask these people. Or you could of course just start out by reading the article…

v. Book of the Dead (‘good article’).

[Image: Book of the Dead of Pinedjem II, British Museum]

“The Book of the Dead is an ancient Egyptian funerary text, used from the beginning of the New Kingdom (around 1550 BCE) to around 50 BCE.[1] The original Egyptian name for the text, transliterated rw nw prt m hrw[2] is translated as “Book of Coming Forth by Day”.[3] Another translation would be “Book of emerging forth into the Light”. The text consists of a number of magic spells intended to assist a dead person’s journey through the Duat, or underworld, and into the afterlife.

The Book of the Dead was part of a tradition of funerary texts which includes the earlier Pyramid Texts and Coffin Texts, which were painted onto objects, not papyrus. Some of the spells included were drawn from these older works and date to the 3rd millennium BCE. Other spells were composed later in Egyptian history, dating to the Third Intermediate Period (11th to 7th centuries BCE). A number of the spells which made up the Book continued to be inscribed on tomb walls and sarcophagi, as had always been the spells from which they originated. The Book of the Dead was placed in the coffin or burial chamber of the deceased.

There was no single or canonical Book of the Dead. The surviving papyri contain a varying selection of religious and magical texts and vary considerably in their illustration. Some people seem to have commissioned their own copies of the Book of the Dead, perhaps choosing the spells they thought most vital in their own progression to the afterlife. […]

A Book of the Dead papyrus was produced to order by scribes. They were commissioned by people in preparation for their own funeral, or by the relatives of someone recently deceased. They were expensive items; one source gives the price of a Book of the Dead scroll as one deben of silver,[50] perhaps half the annual pay of a labourer.[51] Papyrus itself was evidently costly, as there are many instances of its re-use in everyday documents, creating palimpsests. In one case, a Book of the Dead was written on second-hand papyrus.[52]

Most owners of the Book of the Dead were evidently part of the social elite; they were initially reserved for the royal family, but later papyri are found in the tombs of scribes, priests and officials. Most owners were men, and generally the vignettes included the owner’s wife as well. Towards the beginning of the history of the Book of the Dead, there are roughly 10 copies belonging to men for every one for a woman. However, during the Third Intermediate Period, 2/3 were for women; and women owned roughly a third of the hieratic papyri from the Late and Ptolemaic Periods.[53]

The dimensions of a Book of the Dead could vary widely; the longest is 40 m long while some are as short as 1 m. They are composed of sheets of papyrus joined together, the individual papyri varying in width from 15 cm to 45 cm.”

vi. Volcanic ash (‘good article’).

[Image: Mount Redoubt eruption]

“Volcanic ash consists of fragments of pulverized rock, minerals and volcanic glass, created during volcanic eruptions, less than 2 mm (0.079 in) in diameter.[1] The term volcanic ash is also often loosely used to refer to all explosive eruption products (correctly referred to as tephra), including particles larger than 2 mm. Volcanic ash is formed during explosive volcanic eruptions when dissolved gases in magma expand and escape violently into the atmosphere. The force of the escaping gas shatters the magma and propels it into the atmosphere where it solidifies into fragments of volcanic rock and glass. Ash is also produced when magma comes into contact with water during phreatomagmatic eruptions, causing the water to explosively flash to steam leading to shattering of magma. Once in the air, ash is transported by wind up to thousands of kilometers away. […]

Physical and chemical characteristics of volcanic ash are primarily controlled by the style of volcanic eruption.[8] Volcanoes display a range of eruption styles which are controlled by magma chemistry, crystal content, temperature and dissolved gases of the erupting magma and can be classified using the Volcanic Explosivity Index (VEI). Effusive eruptions (VEI 1) of basaltic composition produce <10⁵ m³ of ejecta, whereas extremely explosive eruptions (VEI 5+) of rhyolitic and dacitic composition can inject large quantities (>10⁹ m³) of ejecta into the atmosphere. Another parameter controlling the amount of ash produced is the duration of the eruption: the longer the eruption is sustained, the more ash will be produced. […]

The types of minerals present in volcanic ash are dependent on the chemistry of the magma from which it was erupted. Considering that the most abundant elements found in magma are silica (SiO2) and oxygen, the various types of magma (and therefore ash) produced during volcanic eruptions are most commonly explained in terms of their silica content. Low energy eruptions of basalt produce a characteristically dark coloured ash containing ~45 – 55% silica that is generally rich in iron (Fe) and magnesium (Mg). The most explosive rhyolite eruptions produce a felsic ash that is high in silica (>69%) while other types of ash with an intermediate composition (e.g. andesite or dacite) have a silica content between 55-69%.
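
To make the classification concrete, here is a minimal lookup based on the silica ranges quoted above; the cut-offs are just those rounded figures, and the labels are my own shorthand rather than anything from the article:

```python
def magma_type_from_silica(silica_wt_percent: float) -> str:
    """Rough compositional class of a magma (and hence its ash) from silica content,
    using the ranges quoted above."""
    if silica_wt_percent < 45:
        return "below the quoted basaltic range"
    if silica_wt_percent <= 55:
        return "basaltic (dark ash, Fe/Mg-rich)"
    if silica_wt_percent <= 69:
        return "intermediate (andesitic/dacitic)"
    return "felsic (rhyolitic, most explosive)"

print(magma_type_from_silica(50))  # basaltic (dark ash, Fe/Mg-rich)
print(magma_type_from_silica(72))  # felsic (rhyolitic, most explosive)
```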

The principal gases released during volcanic activity are water, carbon dioxide, sulfur dioxide, hydrogen, hydrogen sulfide, carbon monoxide and hydrogen chloride.[9] These sulfur and halogen gases and metals are removed from the atmosphere by processes of chemical reaction, dry and wet deposition, and by adsorption onto the surface of volcanic ash. […]

Ash particles are incorporated into eruption columns as they are ejected from the vent at high velocity. The initial momentum from the eruption propels the column upwards. As air is drawn into the column, the bulk density decreases and it starts to rise buoyantly into the atmosphere.[6] At a point where the bulk density of the column is the same as the surrounding atmosphere, the column will cease rising and start moving laterally. Lateral dispersion is controlled by prevailing winds and the ash may be deposited hundreds to thousands of kilometres from the volcano, depending on eruption column height, particle size of the ash and climatic conditions (especially wind direction and strength and humidity).[25]

Ash fallout occurs immediately after the eruption and is controlled by particle density. Initially, coarse particles fall out close to source. This is followed by fallout of accretionary lapilli, which is the result of particle agglomeration within the column.[26] Ash fallout is less concentrated during the final stages as the column moves downwind. This results in an ash fall deposit which generally decreases in thickness and grain size exponentially with increasing distance from the volcano.[27] Fine ash particles may remain in the atmosphere for days to weeks and be dispersed by high-altitude winds.”
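
The claim that deposit thickness decreases exponentially with distance is often written as T(d) = T0 * exp(-d/d*); here is a toy sketch with invented parameters – neither the near-vent thickness nor the e-folding distance comes from the article:

```python
import math

def deposit_thickness_cm(t0_cm: float, e_folding_km: float, distance_km: float) -> float:
    """Ash-fall thickness under simple exponential thinning away from the vent:
    T(d) = T0 * exp(-d / d_star); T0 and d_star would be fitted to field data."""
    return t0_cm * math.exp(-distance_km / e_folding_km)

# Invented parameters: 100 cm near the vent, e-folding distance of 20 km.
for d in (0, 20, 100):
    print(d, "km:", round(deposit_thickness_cm(100.0, 20.0, d), 2), "cm")  # 100.0, 36.79, 0.67
```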

If you’re interested in this kind of stuff (the first parts of the article), Press’ and Siever’s textbook Earth, which I read last summer (here’s one relevant post), is pretty good. There’s also a lot in the article about how volcanic ash impacts humans and human infrastructure, though I decided against including any of that here – if you’re curious, go have a look.

vii. Kingdom of Mysore (featured).

“The Kingdom of Mysore was a kingdom of southern India, traditionally believed to have been founded in 1399 in the vicinity of the modern city of Mysore. The kingdom, which was ruled by the Wodeyar family, initially served as a vassal state of the Vijayanagara Empire. With the decline of the Vijayanagara Empire (c.1565), the kingdom became independent. The 17th century saw a steady expansion of its territory and, under Narasaraja Wodeyar I and Chikka Devaraja Wodeyar, the kingdom annexed large expanses of what is now southern Karnataka and parts of Tamil Nadu to become a powerful state in the southern Deccan.

The kingdom reached the height of its military power and dominion in the latter half of the 18th century under the de facto ruler Haider Ali and his son Tipu Sultan. During this time, it came into conflict with the Marathas, the British and the Nizam of Hyderabad, which culminated in the four Anglo-Mysore wars. Success in the first two Anglo-Mysore wars was followed by defeat in the third and fourth. Following Tipu’s death in the fourth war of 1799, large parts of his kingdom were annexed by the British, which signalled the end of a period of Mysorean hegemony over southern Deccan. The British restored the Wodeyars to their throne by way of a subsidiary alliance and the diminished Mysore was transformed into a princely state. The Wodeyars continued to rule the state until Indian independence in 1947, when Mysore acceded to the Union of India. […]

The vast majority of the people lived in villages and agriculture was their main occupation. The economy of the kingdom was based on agriculture. Grains, pulses, vegetables and flowers were cultivated. Commercial crops included sugarcane and cotton. The agrarian population consisted of landlords (gavunda, zamindar, heggadde) who tilled the land by employing a number of landless labourers, usually paying them in grain. Minor cultivators were also willing to hire themselves out as labourers if the need arose.[73] It was due to the availability of these landless labourers that kings and landlords were able to execute major projects such as palaces, temples, mosques, anicuts (dams) and tanks.[74] Because land was abundant and the population relatively sparse, no rent was charged on land ownership. Instead, landowners paid tax for cultivation, which amounted to up to one-half of all harvested produce.[74]

Tipu Sultan is credited to have founded state trading depots in various locations of his kingdom. In addition, he founded depots in foreign locations such as Karachi, Jeddah and Muscat, where Mysore products were sold.[75] During Tipu’s rule French technology was used for the first time in carpentry and smithy, Chinese technology was used for sugar production, and technology from Bengal helped improve the sericulture industry.[76] State factories were established in Kanakapura and Taramandelpeth for producing cannons and gunpowder respectively. The state held the monopoly in the production of essentials such as sugar, salt, iron, pepper, cardamom, betel nut, tobacco and sandalwood, as well as the extraction of incense oil from sandalwood and the mining of silver, gold and precious stones. Sandalwood was exported to China and the Persian Gulf countries and sericulture was developed in twenty-one centres within the kingdom.[77]

This system changed under the British, when tax payments were made in cash, and were used for the maintenance of the army, police and other civil and public establishments. A portion of the tax was transferred to England as the “Indian tribute”.[78] Unhappy with the loss of their traditional revenue system and the problems they faced, peasants rose in rebellion in many parts of south India.[79] […]

Prior to the 18th century, the society of the kingdom followed age-old and deeply established norms of social interaction between people. Accounts by contemporaneous travellers indicate the widespread practice of the Hindu caste system and of animal sacrifices during the nine day celebrations (called Mahanavami).[101] Later, fundamental changes occurred due to the struggle between native and foreign powers. Though wars between the Hindu kingdoms and the Sultanates continued, the battles between native rulers (including Muslims) and the newly arrived British took centre stage.[61] The spread of English education, the introduction of the printing press and the criticism of the prevailing social system by Christian missionaries helped make the society more open and flexible. The rise of modern nationalism throughout India also had its impact on Mysore.[102]

With the advent of British power, English education gained prominence in addition to traditional education in local languages. These changes were orchestrated by Lord Elphinstone, the governor of the Madras Presidency. […]

Social reforms aimed at removing practices such as sati and social discrimination based upon untouchability, as well as demands for the emancipation of the lower classes, swept across India and influenced Mysore territory.[106] In 1894, the kingdom passed laws to abolish the marriage of girls below the age of eight. Remarriage of widowed women and marriage of destitute women was encouraged, and in 1923, some women were granted the permission to exercise their franchise in elections.[107] There were, however, uprisings against British authority in the Mysore territory, notably the Kodagu uprising in 1835 (after the British dethroned the local ruler Chikkaviraraja) and the Kanara uprising of 1837.”

Not from wikipedia, but a link to this recent post by Razib Khan seems relevant to include here.

July 14, 2013 Posted by | Astronomy, Biology, Chemistry, Geology, History, Mathematics, Medicine, Wikipedia | Leave a comment

Stuff

I have a paper deadline approaching, so I’ll be unlikely to blog much more this week. Below some links and stuff of interest:

i. Plos One: A Survey on Data Reproducibility in Cancer Research Provides Insights into Our Limited Ability to Translate Findings from the Laboratory to the Clinic.

“we surveyed the faculty and trainees at MD Anderson Cancer Center using an anonymous computerized questionnaire; we sought to ascertain the frequency and potential causes of non-reproducible data. We found that ~50% of respondents had experienced at least one episode of the inability to reproduce published data; many who pursued this issue with the original authors were never able to identify the reason for the lack of reproducibility; some were even met with a less than “collegial” interaction. […] These results suggest that the problem of data reproducibility is real. Biomedical science needs to establish processes to decrease the problem and adjudicate discrepancies in findings when they are discovered.”

ii. The development in the number of people killed in traffic accidents in Denmark over the last decade (link):

[Chart: deaths in traffic accidents in Denmark by year]
For people who don’t understand Danish: The x-axis displays the years, the y-axis displays deaths – I dislike it when people manipulate the y-axis (…it should start at 0, not 200…), but this decline is real; the number of Danes killed in traffic accidents has more than halved over the last decade (463 deaths in 2002; 220 deaths in 2011). The number of people sustaining traffic-related injuries dropped from 9254 in 2002 to 4259 in 2011. There’s a direct link to the data set at the link provided above if you want to know more.
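
For what it’s worth, the “more than halved” claim checks out; a quick sanity check on the quoted figures:

```python
deaths_2002, deaths_2011 = 463, 220
injuries_2002, injuries_2011 = 9254, 4259

print(f"deaths: {100 * (1 - deaths_2011 / deaths_2002):.1f}% decline")        # ~52.5%
print(f"injuries: {100 * (1 - injuries_2011 / injuries_2002):.1f}% decline")  # ~54.0%
```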

iii. Gender identity and relative income within households, by Bertrand, Kamenica & Pan.

“We examine causes and consequences of relative income within households. We establish that gender identity – in particular, an aversion to the wife earning more than the husband – impacts marriage formation, the wife’s labor force participation, the wife’s income conditional on working, marriage satisfaction, likelihood of divorce, and the division of home production. The distribution of the share of household income earned by the wife exhibits a sharp cliff at 0.5, which suggests that a couple is less willing to match if her income exceeds his. Within marriage markets, when a randomly chosen woman becomes more likely to earn more than a randomly chosen man, marriage rates decline. Within couples, if the wife’s potential income (based on her demographics) is likely to exceed the husband’s, the wife is less likely to be in the labor force and earns less than her potential if she does work. Couples where the wife earns more than the husband are less satisfied with their marriage and are more likely to divorce. Finally, based on time use surveys, the gender gap in non-market work is larger if the wife earns more than the husband.” […]

“In our preferred specification […] we find that if the wife earns more than the husband, spouses are 7 percentage points (15%) less likely to report that their marriage is very happy, 8 percentage points (32%) more likely to report marital troubles in the past year, and 6 percentage points (46%) more likely to have discussed separating in the past year.”

These are not trivial effects…
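
One way to read those percentage-point/per-cent pairs is to back out the implied baseline rates (percentage-point change divided by relative change); this is just my own back-of-envelope reconstruction from the quoted numbers, not anything reported by the authors:

```python
# Implied baseline rate = percentage-point change / relative change (quoted numbers only).
effects = {
    "reporting a very happy marriage": (7, 0.15),
    "reporting marital troubles in the past year": (8, 0.32),
    "having discussed separating in the past year": (6, 0.46),
}
for outcome, (pp_change, relative_change) in effects.items():
    print(f"{outcome}: baseline ≈ {pp_change / relative_change:.0f}%")
# ≈ 47%, 25% and 13% respectively - rough back-of-envelope figures only.
```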

iv. Some Khan Academy videos of interest:

v. The Age Distribution of Missing Women in India.

“Relative to developed countries, there are far fewer women than men in India. Estimates suggest that among the stock of women who could potentially be alive today, over 25 million are “missing”. Sex selection at birth and the mistreatment of young girls are widely regarded as key explanations. We provide a decomposition of missing women by age across the states. While we do not dispute the existence of severe gender bias at young ages, our computations yield some striking findings. First, the vast majority of missing women in India are of adult age. Second, there is significant variation in the distribution of missing women by age across different states. Missing girls at birth are most pervasive in some north-western states, but excess female mortality at older ages is relatively low. In contrast, some north-eastern states have the highest excess female mortality in adulthood but the lowest number of missing women at birth. The state-wise variation in the distribution of missing women across the age groups makes it very difficult to draw simple conclusions to explain the missing women phenomenon in India.”

A table from the paper:

[Table: missing women in India by age group, from Anderson et al.]

“We estimate that a total of more than two million women in India are missing in a given year. Our age decomposition of this total yields some striking findings. First, the majority of missing women in India die in adulthood. Our estimates demonstrate that roughly 12% of missing women are found at birth, 25% die in childhood, 18% at the reproductive ages, and 45% die at older ages. […] There are just two states in which the majority of missing women are either never born or die in childhood (i e, [sic] before age 15), and these are Haryana and Rajasthan. Moreover, the missing women in these three states add up to well under 15% of the total missing women in India.

For all other states, the majority of missing women die in adulthood. […]

Because there is so much state-wise variation in the distribution of missing women across the age groups, it is difficult to provide a clear explanation for missing women in India. The traditional explanation for missing women, a strong preference for the birth of a son, is most likely driving a significant proportion of missing women in the two states of Punjab and Haryana where the biased sex ratios at birth are undeniable. However, the explanation for excess female deaths after birth is far from clear.”
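
Applying the quoted age shares to the headline figure of roughly two million missing women per year gives a rough sense of the absolute numbers; this is back-of-envelope only – the paper’s own tables are more precise:

```python
total_missing_per_year = 2_000_000  # "more than two million", rounded down
age_shares = {"at birth": 0.12, "childhood": 0.25, "reproductive ages": 0.18, "older ages": 0.45}

for age_group, share in age_shares.items():
    print(f"{age_group}: ~{int(total_missing_per_year * share):,}")
# ~240,000 at birth, ~500,000 in childhood, ~360,000 at reproductive ages, ~900,000 at older ages
```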

May 22, 2013 Posted by | Cancer/oncology, Chemistry, Data, Demographics, Economics, Khan Academy, marriage, Medicine, Papers | Leave a comment

Khan Academy videos of interest

I assume that not all of the five videos below are equally easy to understand for people who’ve not watched the previous ones in the various relevant playlists, but this is the stuff I’ve been watching lately and you should know where to look by now if something isn’t perfectly clear. I incidentally covered some relevant background material previously on the blog – if concepts from chemistry like ‘oxidation states’ are a bit far away, a couple of the videos in that post may be helpful.

I stopped caring much when I reached the 1 million mark (until they introduced the Kepler badge – then I started caring a little again until I’d gotten that one), but I noticed today that I’m now almost at the 1.5 million energy points mark (1,487,776). I’ve watched approximately 400 videos at the site by now.

Here’s a semi-related link with some good news: Khan Academy Launches First State-Wide Pilot In Idaho.

March 7, 2013 Posted by | Biology, Botany, Chemistry, Khan Academy, Lectures | 2 Comments