Econstudentlog

Oceans (II)

In this post I have added some more observations from the book and some more links related to the book's coverage.

“Almost all the surface waves we observe are generated by wind stress, acting either locally or far out to sea. Although the wave crests appear to move forwards with the wind, this does not occur. Mechanical energy, created by the original disturbance that caused the wave, travels through the ocean at the speed of the wave, whereas water does not. Individual molecules of water simply move back and forth, up and down, in a generally circular motion. […] The greater the wind force, the bigger the wave, the more energy stored within its bulk, and the more energy released when it eventually breaks. The amount of energy is enormous. Over long periods of time, whole coastlines retreat before the pounding waves – cliffs topple, rocks are worn to pebbles, pebbles to sand, and so on. Individual storm waves can exert instantaneous pressures of up to 30,000 kilograms […] per square metre. […] The rate at which energy is transferred across the ocean is the same as the velocity of the wave. […] waves typically travel at speeds of 30-40 kilometres per hour, and […] waves with a greater wavelength will travel faster than those with a shorter wavelength. […] With increasing wind speed and duration over which the wind blows, the wave height, period, and length all increase. The distance over which the wind blows is known as fetch, and is critical in influencing the growth of waves — the greater the area of ocean over which a storm blows, then the larger and more powerful the waves generated. The three stages in wave development are known as sea, swell, and surf. […] The ocean is highly efficient at transmitting energy. Water offers so little resistance to the small orbital motion of water particles in waves that individual wave trains may continue for thousands of kilometres. […] When the wave train encounters shallow water — say 50 metres for a 100-metre wavelength — the waves first feel the bottom and begin to slow down in response to frictional resistance. Wavelength decreases, the crests bunch closer together, and wave height increases until the wave becomes unstable and topples forwards as surf. […] Very often, waves approach obliquely to the coast and set up a significant transfer of water and sediment along the shoreline. The long-shore currents so developed can be very powerful, removing beach sand and building out spits and bars across the mouths of estuaries.” (People who’re interested in knowing more about these topics will probably enjoy Fredric Raichlen’s book on these topics – I did, US.)
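The book quotes typical speeds and the rule that longer waves travel faster, but not the formulas behind them. Here is a minimal sketch (my addition, using the standard deep-water dispersion relation and the shallow-water limit, both textbook results rather than anything stated in the book) checking the 30-40 km/h figure and the slow-down that turns swell into surf:

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def deep_water_speed(wavelength_m):
    """Phase speed of a deep-water wave: c = sqrt(g * L / (2 * pi))."""
    return math.sqrt(g * wavelength_m / (2 * math.pi))

def shallow_water_speed(depth_m):
    """Phase speed once the wave 'feels the bottom': c = sqrt(g * h)."""
    return math.sqrt(g * depth_m)

for L in (50, 75, 100):  # wavelengths in metres
    c = deep_water_speed(L)
    print(f"wavelength {L:>3} m: {c:4.1f} m/s = {c * 3.6:5.1f} km/h")
# Wavelengths of roughly 50-75 m give the 30-40 km/h speeds quoted above,
# and longer waves do indeed travel faster.

# A 100 m wave entering 5 m of water slows to a crawl, so the crests bunch
# together and the wave steepens into surf.
print(f"100 m wave in 5 m of water: {shallow_water_speed(5) * 3.6:.1f} km/h")
```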

“Wind is the principal force that drives surface currents, but the pattern of circulation results from a more complex interaction of wind drag, pressure gradients, and Coriolis deflection. Wind drag is a very inefficient process by which the momentum of moving air molecules is transmitted to water molecules at the ocean surface setting them in motion. The speed of water molecules (the current), initially in the direction of the wind, is only about 3–4 per cent of the wind speed. This means that a wind blowing constantly over a period of time at 50 kilometres per hour will produce a water current of about 1 knot (2 kilometres per hour). […] Although the movement of wind may seem random, changing from one day to the next, surface winds actually blow in a very regular pattern on a planetary scale. The subtropics are known for the trade winds with their strong easterly component, and the mid-latitudes for persistent westerlies. Wind drag by such large-scale wind systems sets the ocean waters in motion. The trade winds produce a pair of equatorial currents moving to the west in each ocean, while the westerlies drive a belt of currents that flow to the east at mid-latitudes in both hemispheres. […] Deflection by the Coriolis force and ultimately by the position of the continents creates very large oval-shaped gyres in each ocean.”
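A trivial check of the 3-4 per cent rule (the percentages and the 50 km/h example are the book's; the conversion 1 knot = 1.852 km/h is standard):

```python
wind_speed_kmh = 50.0
for fraction in (0.03, 0.04):
    current_kmh = fraction * wind_speed_kmh
    print(f"{fraction:.0%} of a {wind_speed_kmh:.0f} km/h wind -> "
          f"{current_kmh:.1f} km/h = {current_kmh / 1.852:.2f} knots")
# 3-4% of 50 km/h is 1.5-2.0 km/h, i.e. roughly 1 knot, as stated above.
```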

“The control exerted by the oceans is an integral and essential part of the global climate system. […] The oceans are one of the principal long-term stores on Earth for carbon and carbon dioxide […] The oceans are like a gigantic sponge holding fifty times more carbon dioxide than the atmosphere […] the sea surface acts as a two-way control valve for gas transfer, which opens and closes in response to two key properties – gas concentration and ocean stirring. First, the difference in gas concentration between the air and sea controls the direction and rate of gas exchange. Gas concentration in water depends on temperature—cold water dissolves more carbon dioxide than warm water, and on biological processes—such as photosynthesis and respiration by microscopic plants, animals, and bacteria that make up the plankton. These transfer processes affect all gases […]. Second, the strength of the ocean-stirring process, caused by wind and foaming waves, affects the ease with which gases are absorbed at the surface. More gas is absorbed during stormy weather and, once dissolved, is quickly mixed downwards by water turbulence. […] The transfer of heat, moisture, and other gases between the ocean and atmosphere drives small-scale oscillations in climate. The El Niño Southern Oscillation (ENSO) is the best known, causing 3–7-year climate cycles driven by the interaction of sea-surface temperature and trade winds along the equatorial Pacific. The effects are worldwide in their impact through a process of atmospheric teleconnection — causing floods in Europe and North America, monsoon failure and severe drought in India, South East Asia, and Australia, as well as decimation of the anchovy fishing industry off Peru.”

“Earth’s climate has not always been as it is today […] About 100 million years ago, for example, palm trees and crocodiles lived as far north as 80°N – the equivalent of Arctic Canada or northern Greenland today. […] Most of the geological past has enjoyed warm conditions. These have been interrupted at irregular intervals by cold and glacial climates of altogether shorter duration […][,] the last [of them] beginning around 3 million years ago. We are still in the grip of this last icehouse state, although in one of its relatively brief interglacial phases. […] Sea level has varied in the past in close consort with climate change […]. Around twenty-five thousand years ago, at the height of the last Ice Age, the global sea level was 120 metres lower than today. Huge tracts of the continental shelves that rim today’s landmasses were exposed. […] Further back in time, 80 million years ago, the sea level was around 250–350 metres higher than today, so that 82 per cent of the planet was ocean and only 18 per cent remained as dry land. Such changes have been the norm throughout geological history and entirely the result of natural causes.”

“Most of the solar energy absorbed by seawater is converted directly to heat, and water temperature is vital for the distribution and activity of life in the oceans. Whereas mean temperature ranges from 0 to 40 degrees Celsius, 90 per cent of the oceans are permanently below 5°C. Most marine animals are ectotherms (cold-blooded), which means that they obtain their body heat from their surroundings. They generally have narrow tolerance limits and are restricted to particular latitudinal belts or water depths. Marine mammals and birds are endotherms (warm-blooded), which means that their metabolism generates heat internally thereby allowing the organism to maintain constant body temperature. They can tolerate a much wider range of external conditions. Coping with the extreme (hydrostatic) pressure exerted at depth within the ocean is a challenge. For every 30 metres of water, the pressure increases by 3 atmospheres – roughly equivalent to the weight of an elephant.”
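The "3 atmospheres per 30 metres" rule of thumb follows from the hydrostatic relation pressure = density × g × depth, which the book does not spell out; a quick check, assuming a typical seawater density of about 1025 kg/m³:

```python
rho = 1025.0      # typical seawater density, kg/m^3 (assumed value)
g = 9.81          # m/s^2
atm = 101325.0    # one standard atmosphere, Pa

for depth_m in (10, 30, 1000, 4000):
    pressure_atm = rho * g * depth_m / atm
    print(f"{depth_m:>5} m depth: ~{pressure_atm:,.0f} atm of water pressure")
# Roughly 1 atm per 10 m and 3 atm per 30 m, as quoted; at abyssal depths of
# 4,000 m the water column alone contributes about 400 atmospheres.
```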

“There are at least 6000 different species of diatom. […] An average litre of surface water from the ocean contains over half a million diatoms and other unicellular phytoplankton and many thousands of zooplankton.”

“Several different styles of movement are used by marine organisms. These include floating, swimming, jet propulsion, creeping, crawling, and burrowing. […] The particular physical properties of water that most affect movement are density, viscosity, and buoyancy. Seawater is about 800 times denser than air and nearly 100 times more viscous. Consequently there is much more resistance on movement than on land […] Most large marine animals, including all fishes and mammals, have adopted some form of active swimming […]. Swimming efficiency in fishes has been achieved by minimizing the three types of drag resistance created by friction, turbulence, and body form. To reduce surface friction, the body must be smooth and rounded like a sphere. The scales of most fish are also covered with slime as further lubrication. To reduce form drag, the cross-sectional area of the body should be minimal — a pencil shape is ideal. To reduce the turbulent drag as water flows around the moving body, a rounded front end and tapered rear is required. […] Fins play a versatile role in the movement of a fish. There are several types including dorsal fins along the back, caudal or tail fins, and anal fins on the belly just behind the anus. Operating together, the beating fins provide stability and steering, forwards and reverse propulsion, and braking. They also help determine whether the motion is up or down, forwards or backwards.”
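The three sources of drag named above are, in standard fluid mechanics, usually lumped into a single drag coefficient in the drag equation F = ½·ρ·Cd·A·v² (a textbook formula, not one given in the book). A rough sketch with purely illustrative coefficients, speeds, and areas:

```python
rho_seawater = 1025.0    # kg/m^3, typical seawater density (assumed)

def drag_force(c_d, frontal_area_m2, speed_ms):
    """Standard drag equation: F = 0.5 * rho * C_d * A * v^2."""
    return 0.5 * rho_seawater * c_d * frontal_area_m2 * speed_ms ** 2

speed = 2.0     # m/s, a brisk swimming speed (illustrative)
area = 0.05     # m^2, frontal cross-sectional area (illustrative)
for label, c_d in [("streamlined, fish-like body", 0.05), ("blunt body", 1.0)]:
    print(f"{label:28s}: {drag_force(c_d, area, speed):6.1f} N")
# The streamlined shape pays roughly twenty times less drag for the same speed
# and cross-section, which is why body form, smoothness, and taper matter so much.
```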

Links:

Rip current.
Rogue wave. Agulhas Current. Kuroshio Current.
Tsunami.
Tide. Tidal range.
Geostrophic current.
Ekman Spiral. Ekman transport. Upwelling.
Global thermohaline circulation system. Antarctic bottom water. North Atlantic Deep Water.
Rio Grande Rise.
Denmark Strait. Denmark Strait cataract (/waterfall?).
Atmospheric circulation. Jet streams.
Monsoon.
Cyclone. Tropical cyclone.
Ozone layer. Ozone depletion.
Milankovitch cycles.
Little Ice Age.
Oxygen Isotope Stratigraphy of the Oceans.
Contourite.
Earliest known life forms. Cyanobacteria. Prokaryote. Eukaryote. Multicellular organism. Microbial mat. Ediacaran. Cambrian explosion. Pikaia. Vertebrate. Major extinction events. Permian–Triassic extinction event. (The author seems to disagree with the authors of this article about potential causes, in particular in so far as they relate to the formation of Pangaea – as I felt uncertain about the accuracy of the claims made in the book I decided against covering this topic in this post, even though I find it interesting).
Tethys Ocean.
Plesiosauria. Pliosauroidea. Ichthyosaur. Ammonoidea. Belemnites. Pachyaena. Cetacea.
Pelagic zone. Nekton. Benthic zone. Neritic zone. Oceanic zone. Bathyal zone. Hadal zone.
Phytoplankton. Silicoflagellates. Coccolithophore. Dinoflagellate. Zooplankton. Protozoa. Tintinnid. Radiolaria. Copepods. Krill. Bivalves.
Elasmobranchii.
Ampullae of Lorenzini. Lateral line.
Baleen whale. Humpback whale.
Coral reef.
Box jellyfish. Stonefish.
Horseshoe crab.
Greenland shark. Giant squid.
Hydrothermal vent. Pompeii worms.
Atlantis II Deep. Aragonite. Phosphorite. Deep sea mining. Oil platform. Methane clathrate.
Ocean thermal energy conversion. Tidal barrage.
Mariculture.
Exxon Valdez oil spill.
Bottom trawling.

June 24, 2018 Posted by | Biology, Books, Engineering, Geology, Paleontology, Physics

Oceans (I)

I read this book quite some time ago, but back when I did I never blogged it; instead I just added a brief review on goodreads. I remember that the main reason why I decided against blogging it shortly after I'd read it was that the coverage overlapped a great deal with Mladenov's marine biology text, which I had at that time just read and actually did blog in some detail. I figured if I wanted to blog this book as well I would be well-advised to wait a while, so that I'd at least have forgotten some of the stuff first – that way blogging the book might end up serving as a review of stuff I'd forgotten, rather than as a review of stuff that would still be fresh in my memory and so wouldn't really be worth reviewing anyway. So now here we are a few months later, and I have come to think it might be a good idea to blog the book.

Below I have added some quotes from the first half of the book and some links to topics/people/etc. covered.

“Several methods now exist for calculating the rate of plate motion. Most reliable for present-day plate movement are direct observations made using satellites and laser technology. These show that the Atlantic Ocean is growing wider at a rate of between 2 and 4 centimetres per year (about the rate at which fingernails grow), the Indian Ocean is attempting to grow at a similar rate but is being severely hampered by surrounding plate collisions, while the fastest spreading centre is the East Pacific Rise along which ocean crust is being created at rates of around 17 centimetres per year (the rate at which hair grows). […] The Nazca plate has been plunging beneath South America for at least 200 million years – the imposing Andes, the longest mountain chain on Earth, is the result. […] By around 120 million years ago, South America and Africa began to drift apart and the South Atlantic was born. […] sea levels rose higher than at any time during the past billion years, perhaps as much as 350 metres higher than today. Only 18 per cent of the globe was dry land — 82 per cent was under water. These excessively high sea levels were the result of increased spreading activity — new oceans, new ridges, and faster spreading rates all meant that the mid-ocean ridge systems collectively displaced a greater volume of water than ever before. Global warming was far more extreme than today. Temperatures in the ocean rose to around 30°C at the equator and as much as 14°C at the poles. Ocean circulation was very sluggish.”
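A back-of-the-envelope check (mine, not the book's) that the quoted spreading rates square with the stated ~120-million-year age of the South Atlantic; the comparison width of "a few thousand kilometres" is an assumed round figure:

```python
years = 120e6   # the South Atlantic began opening ~120 million years ago (book's figure)
for rate_cm_per_yr in (2, 3, 4):
    width_km = rate_cm_per_yr * years / 1e5     # cm -> km (1 km = 100,000 cm)
    print(f"{rate_cm_per_yr} cm/yr for 120 Myr -> {width_km:,.0f} km of new ocean floor")
# 2-4 cm/yr sustained over 120 million years gives 2,400-4,800 km, the right
# order of magnitude for the present width of the South Atlantic (a few
# thousand kilometres, depending on where you measure it).
```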

“The land–ocean boundary is known as the shoreline. Seaward of this, all continents are surrounded by a broad, flat continental shelf, typically 10–100 kilometres wide, which slopes very gently (less than one-tenth of a degree) to the shelf edge at a water depth of around 100 metres. Beyond this the continental slope plunges to the deep-ocean floor. The slope is from tens to a few hundred kilometres wide and with a mostly gentle gradient of 3–8 degrees, but locally steeper where it is affected by faulting. The base of slope abuts the abyssal plain — flat, almost featureless expanses between 4 and 6 kilometres deep. The oceans are compartmentalized into abyssal basins separated by submarine mountain ranges and plateaus, which are the result of submarine volcanic outpourings. Those parts of the Earth that are formed of ocean crust are relatively lower, because they are made up of denser rocks — basalts. Those formed of less dense rocks (granites) of the continental crust are relatively higher. Seawater fills in the deeper parts, the ocean basins, to an average depth of around 4 kilometres. In fact, some parts are shallower because the ocean crust is new and still warm — these are the mid-ocean ridges at around 2.5 kilometres — whereas older, cooler crust drags the seafloor down to a depth of over 6 kilometres. […] The seafloor is almost entirely covered with sediment. In places, such as on the flanks of mid-ocean ridges, it is no more than a thin veneer. Elsewhere, along stable continental margins or beneath major deltas where deposition has persisted for millions of years, the accumulated thickness can exceed 15 kilometres. These areas are known as sedimentary basins“.

“The super-efficiency of water as a solvent is due to an asymmetrical bonding between hydrogen and oxygen atoms. The resultant water molecule has an angular or kinked shape with weakly charged positive and negative ends, rather like magnetic poles. This polar structure is especially significant when water comes into contact with substances whose elements are held together by the attraction of opposite electrical charges. Such ionic bonding is typical of many salts, such as sodium chloride (common salt) in which a positive sodium ion is attracted to a negative chloride ion. Water molecules infiltrate the solid compound, the positive hydrogen end being attracted to the chloride and the negative oxygen end to the sodium, surrounding and then isolating the individual ions, thereby disaggregating the solid [I should mention that if you’re interested in knowing (much) more about this topic, and closely related topics, this book covers these things in great detail – US]. An apparently simple process, but extremely effective. […] Water is a super-solvent, absorbing gases from the atmosphere and extracting salts from the land. About 3 billion tonnes of dissolved chemicals are delivered by rivers to the oceans each year, yet their concentration in seawater has remained much the same for at least several hundreds of millions of years. Some elements remain in seawater for 100 million years, others for only a few hundred, but all are eventually cycled through the rocks. The oceans act as a chemical filter and buffer for planet Earth, control the distribution of temperature, and moderate climate. Inestimable numbers of calories of heat energy are transferred every second from the equator to the poles in ocean currents. But, the ocean configuration also insulates Antarctica and allows the build-up of over 4000 metres of ice and snow above the South Pole. […] Over many aeons, the oceans slowly accumulated dissolved chemical ions (and complex ions) of almost every element present in the crust and atmosphere. Outgassing from the mantle from volcanoes and vents along the mid-ocean ridges contributed a variety of other elements […] The composition of the first seas was mostly one of freshwater together with some dissolved gases. Today, however, the world ocean contains over 5 trillion tonnes of dissolved salts, and nearly 100 different chemical elements […] If the oceans’ water evaporated completely, the dried residue of salts would be equivalent to a 45-metre-thick layer over the entire planet.”

“The average time a single molecule of water remains in any one reservoir varies enormously. It may survive only one night as dew, up to a week in the atmosphere or as part of an organism, two weeks in rivers, and up to a year or more in soils and wetlands. Residence times in the oceans are generally over 4000 years, and water may remain in ice caps for tens of thousands of years. Although the ocean appears to be in a steady state, in which both the relative proportion and amounts of dissolved elements per unit volume are nearly constant, this is achieved by a process of chemical cycles and sinks. The input of elements from mantle outgassing and continental runoff must be exactly balanced by their removal from the oceans into temporary or permanent sinks. The principal sink is the sediment and the principal agent removing ions from solution is biological. […] The residence times of different elements vary enormously from tens of millions of years for chloride and sodium, to a few hundred years only for manganese, aluminium, and iron. […] individual water molecules have cycled through the atmosphere (or mantle) and returned to the seas more than a million times since the world ocean formed.”

“Because of its polar structure and hydrogen bonding between individual molecules, water has both a high capacity for storing large amounts of heat and one of the highest specific heat values of all known substances. This means that water can absorb (or release) large amounts of heat energy while changing relatively little in temperature. Beach sand, by contrast, has a specific heat five times lower than water, which explains why, on sunny days, beaches soon become too hot to stand on with bare feet while the sea remains pleasantly cool. Solar radiation is the dominant source of heat energy for the ocean and for the Earth as a whole. The differential in solar input with latitude is the main driver for atmospheric winds and ocean currents. Both winds and especially currents are the prime means of mitigating the polar–tropical heat imbalance, so that the polar oceans do not freeze solid, nor the equatorial oceans gently simmer. For example, the Gulf Stream transports some 550 trillion calories from the Caribbean Sea across the North Atlantic each second, and so moderates the climate of north-western Europe.”
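To put the book's "550 trillion calories per second" into more familiar units (the conversion 1 calorie = 4.184 joules is standard; the comparison with global energy use is my own rough, assumed figure):

```python
calories_per_second = 550e12     # the book's figure for the Gulf Stream's heat transport
joules_per_calorie = 4.184       # standard conversion
power_watts = calories_per_second * joules_per_calorie
print(f"{power_watts:.2e} W  (~{power_watts / 1e15:.1f} petawatts)")
# About 2.3 * 10^15 W, i.e. on the order of a hundred times humanity's total
# primary energy consumption rate (taking the latter as roughly 2 * 10^13 W).
```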

“[W]hy is [the sea] mostly blue? The sunlight incident on the sea has a full spectrum of wavelengths, including the rainbow of colours that make up the visible spectrum […] The longer wavelengths (red) and very short (ultraviolet) are preferentially absorbed by water, rapidly leaving near-monochromatic blue light to penetrate furthest before it too is absorbed. The dominant hue that is backscattered, therefore, is blue. In coastal waters, suspended sediment and dissolved organic debris absorb additional short wavelengths (blue) resulting in a greener hue. […] The speed of sound in seawater is about 1500 metres per second, almost five times that in air. It is even faster where the water is denser, warmer, or more salty and shows a slow but steady increase with depth (related to increasing water pressure).”

“From top to bottom, the ocean is organized into layers, in which the physical and chemical properties of the ocean – salinity, temperature, density, and light penetration – show strong vertical segregation. […] Almost all properties of the ocean vary in some way with depth. Light penetration is attenuated by absorption and scattering, giving an upper photic and lower aphotic zone, with a more or less well-defined twilight region in between. Absorption of incoming solar energy also preferentially heats the surface waters, although with marked variations between latitudes and seasons. This results in a warm surface layer, a transition layer (the thermocline) through which the temperature decreases rapidly with depth, and a cold deep homogeneous zone reaching to the ocean floor. Exactly the same broad three-fold layering is true for salinity, except that salinity increases with depth — through the halocline. The density of seawater is controlled by its temperature, salinity, and pressure, such that colder, saltier, and deeper waters are all more dense. A rapid density change, known as the pycnocline, is therefore found at approximately the same depth as the thermocline and halocline. This varies from about 10 to 500 metres, and is often completely absent at the highest latitudes. Winds and waves thoroughly stir and mix the upper layers of the ocean, even destroying the layered structure during major storms, but barely touch the more stable, deep waters.”

Links:

Arvid Pardo. Law of the Sea Convention.
Polynesians.
Ocean exploration timeline (a different timeline is presented in the book, but there’s some overlap). Age of Discovery. Vasco da Gama. Christopher Columbus. John Cabot. Amerigo Vespucci. Ferdinand Magellan. Luigi Marsigli. James Cook.
HMS Beagle. HMS Challenger. Challenger expedition.
Deep Sea Drilling Project. Integrated Ocean Drilling Program. JOIDES Resolution.
World Ocean.
Geological history of Earth (this article of course covers much more than is covered in the book, but the book does cover some highlights). Plate tectonics. Lithosphere. Asthenosphere. Convection. Global mid-ocean ridge system.
Pillow lava. Hydrothermal vent. Hot spring.
Ophiolite.
Mohorovičić discontinuity.
Mid-Atlantic Ridge. Subduction zone. Ring of Fire.
Pluton. Nappe. Mélange. Transform fault. Strike-slip fault. San Andreas fault.
Paleoceanography. Tethys Ocean. Laurasia. Gondwana.
Oceanic anoxic event. Black shale.
Seabed.
Bengal Fan.
Fracture zone.
Seamount.
Terrigenous sediment. Biogenic and chemogenic sediment. Halite. Gypsum.
Carbonate compensation depth.
Laurentian fan.
Deep-water sediment waves. Submarine landslide. Turbidity current.
Water cycle.
Ocean acidification.
Timing and Climatic Consequences of the Opening of Drake Passage. The Opening of the Tasmanian Gateway Drove Global Cenozoic Paleoclimatic and Paleoceanographic Changes (report). Antarctic Circumpolar Current.
SOFAR channel.
Bathymetry.

June 18, 2018 Posted by | Books, Chemistry, Geology, Papers, Physics

Structural engineering

“The purpose of the book is three-fold. First, I aim to help the general reader appreciate the nature of structure, the role of the structural engineer in man-made structures, and understand better the relationship between architecture and engineering. Second, I provide an overview of how structures work: how they stand up to the various demands made of them. Third, I give students and prospective students in engineering, architecture, and science access to perspectives and qualitative understanding of advanced modern structures — going well beyond the simple statics of most introductory texts. […] Structural engineering is an important part of almost all undergraduate courses in engineering. This book is novel in the use of ‘thought-experiments’ as a straightforward way of explaining some of the important concepts that students often find the most difficult. These include virtual work, strain energy, and maximum and minimum energy principles, all of which are basic to modern computational techniques. The focus is on gaining understanding without the distraction of mathematical detail. The book is therefore particularly relevant for students of civil, mechanical, aeronautical, and aerospace engineering but, of course, it does not cover all of the theoretical detail necessary for completing such courses.”

The above quote is from the book's preface. I gave the book 2 stars on goodreads, and I must say that I think David Muir Wood's book in this series on a similar and closely overlapping topic, civil engineering, was just a significantly better book – if you're planning on reading only one book on these topics, in my opinion you should pick Wood's book. I have two main complaints against this book: There's too much stuff about the aesthetic properties of structures and about the history and development of the differences between architecture and engineering; and the author seems to think it's no problem covering quite complicated topics with just analogies and thought experiments, without showing you any of the equations. As for the first point, I don't really have any interest in aesthetics or architectural history; as for the second, I can handle math reasonably well, but I usually have trouble when people insist on hiding the equations from me and talking only ‘in images’. The absence of equations doesn't mean the topic coverage is dumbed-down, much; rather, the author is trying to cover, in other kinds of language, the sort of material that we usually use mathematics to talk about because mathematics is the most efficient language for it; the problem is that things get lost in the translation. He got rid of the math, but not the complexity. The book does include many illustrations as well, including illustrations of some quite complicated topics and dynamics, but some of the things he talks about in the book are things you can't illustrate well with images because you ‘run out of dimensions’ before you've handled all the relevant aspects/dynamics, an admission he himself makes in the book.

Anyway, the book is not terrible and there’s some interesting stuff in there. I’ve added a few more quotes and some links related to the book’s coverage below.

“All structures span a gap or a space of some kind and their primary role is to transmit the imposed forces safely. A bridge spans an obstruction like a road or a river. The roof truss of a house spans the rooms of the house. The fuselage of a jumbo jet spans between wheels of its undercarriage on the tarmac of an airport terminal and the self-weight, lift and drag forces in flight. The hull of a ship spans between the variable buoyancy forces caused by the waves of the sea. To be fit for purpose every structure has to cope with specific local conditions and perform inside acceptable boundaries of behaviour—which engineers call ‘limit states’. […] Safety is paramount in two ways. First, the risk of a structure totally losing its structural integrity must be very low—for example a building must not collapse or a ship break up. This maximum level of performance is called an ultimate limit state. If a structure should reach that state for whatever reason then the structural engineer tries to ensure that the collapse or break up is not sudden—that there is some degree of warning—but this is not always possible […] Second, structures must be able to do what they were built for—this is called serviceability or performance limit state. So for example a skyscraper building must not sway so much that it causes discomfort to the occupants, even if the risk of total collapse is still very small.”

“At its simplest force is a pull (tension) or a push (compression). […] There are three ways in which materials are strong in different combinations—pulling (tension), pushing (compression), and sliding (shear). Each is very important […] all intact structures have internal forces that balance the external forces acting on them. These external forces come from simple self-weight, people standing, sitting, walking, travelling across them in cars, trucks, and trains, and from the environment such as wind, water, and earthquakes. In that state of equilibrium it turns out that structures are naturally lazy—the energy stored in them is a minimum for that shape or form of structure. Form-finding structures are a special group of buildings that are allowed to find their own shape—subject to certain constraints. There are two classes—in the first, the form-finding process occurs in a model (which may be physical or theoretical) and the structure is scaled up from the model. In the second, the structure is actually built and then allowed to settle into shape. In both cases the structures are self-adjusting in that they move to a position in which the internal forces are in equilibrium and contain minimum energy. […] there is a big problem in using self-adjusting structures in practice. The movements under changing loads can make the structures unfit for purpose. […] Triangles are important in structural engineering because they are the simplest stable form of structure and you see them in all kinds of structures—whether form-finding or not. […] Other forms of pin jointed structure, such as a rectangle, will deform in shear as a mechanism […] unless it has diagonal bracing—making it triangular. […] bending occurs in part of a structure when the forces acting on it tend to make it turn or rotate—but it is constrained or prevented from turning freely by the way it is connected to the rest of the structure or to its foundations. The turning forces may be internal or external.”

“Energy is the capacity of a force to do work. If you stretch an elastic band it has an internal tension force resisting your pull. If you let go of one end the band will recoil and could inflict a sharp sting on your other hand. The internal force has energy or the capacity to do work because you stretched it. Before you let go the energy was potential; after you let go the energy became kinetic. Potential energy is the capacity to do work because of the position of something—in this case because you pulled the two ends of the band apart. […] A car at the top of a hill has the potential energy to roll down the hill if the brakes are released. The potential energy in the elastic band and in a structure has a specific name—it is called ‘strain energy’. Kinetic energy is due to movement, so when you let go of the band […] the potential energy is converted into kinetic energy. Kinetic energy depends on mass and velocity—so a truck can develop more kinetic energy than a small car. When a structure is loaded by a force then the structure moves in whatever way it can to ‘get out of the way’. If it can move freely it will do—just as if you push a car with the handbrake off it will roll forward. However, if the handbrake is on the car will not move, and an internal force will be set up between the point at which you are pushing and the wheels as they grip the road.”

“[A] rope hanging freely as a catenary has minimum energy and […] it can only resist one kind of force—tension. Engineers say that it has one degree of freedom. […] In brief, degrees of freedom are the independent directions in which a structure or any part of a structure can move or deform […] Movements along degrees of freedom define the shape and location of any object at a given time. Each part, each piece of a physical structure whatever its size is a physical object embedded in and connected to other objects […] similar objects which I will call its neighbours. Whatever its size each has the potential to move unless something stops it. Where it may move freely […] then no internal resisting force is created. […] where it is prevented from moving in any direction a reaction force is created with consequential internal forces in the structure. For example at a support to a bridge, where the whole bridge is normally stopped from moving vertically, then an external vertical reaction force develops which must be resisted by a set of internal forces that will depend on the form of the bridge. So inside the bridge structure each piece, however small or large, will move—but not freely. The neighbouring objects will get in the way […]. When this happens internal forces are created as the objects bump up against each other and we represent or model those forces along the pathways which are the local degrees of freedom. The structure has to be strong enough to resist these internal forces along these pathways.”

“The next question is ‘How do we find out how big the forces and movements are?’ It turns out that there is a whole class of structures where this is reasonably straightforward and these are the structures covered in elementary textbooks. Engineers call them ‘statically determinate’ […] For these structures we can find the sizes of the forces just by balancing the internal and external forces to establish equilibrium. […] Unfortunately many real structures can’t be fully explained in this way—they are ‘statically indeterminate’. This is because whilst establishing equilibrium between internal and external forces is necessary it is not sufficient for finding all of the internal forces. […] The four-legged stool is statically indeterminate. You will begin to understand this if you have ever sat at a four-legged wobbly table […] which has one leg shorter than the other three legs. There can be no force in that leg because there is no reaction from the ground. What is more, the opposite leg will have no internal force either because otherwise there would be a net turning moment about the line joining the other two legs. Thus the table is balanced on two legs—which is why it wobbles back and forth. […] each leg has one degree of freedom but we have only three ways of balancing them in the (x,y,z) directions. In mathematical terms, we have four unknown variables (the internal forces) but only three equations (balancing equilibrium in three directions). It follows that there isn’t just one set of forces in equilibrium—indeed, there are many such sets.”
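A minimal numerical illustration of the four-legged table argument (my own sketch, not from the book): with the weight at the centre of a square table, vertical equilibrium plus moment equilibrium about the two horizontal axes gives three equations in the four unknown leg reactions, so equilibrium alone cannot determine them.

```python
import numpy as np

# Leg reactions F1..F4 at corners (+a,+a), (-a,+a), (-a,-a), (+a,-a);
# a weight W = 100 acts at the centre. Rows: sum of vertical forces,
# moments about the x-axis, moments about the y-axis (a cancels out).
A = np.array([[1.0, 1.0, 1.0, 1.0],     # F1 + F2 + F3 + F4 = W
              [1.0, 1.0, -1.0, -1.0],   # (F1 + F2) - (F3 + F4) = 0
              [1.0, -1.0, -1.0, 1.0]])  # (F1 + F4) - (F2 + F3) = 0
b = np.array([100.0, 0.0, 0.0])

print("rank of coefficient matrix:", np.linalg.matrix_rank(A))   # 3, but 4 unknowns
particular, *_ = np.linalg.lstsq(A, b, rcond=None)
print("one possible solution:", particular)                      # 25 in each leg
# Adding any multiple of (+1, -1, +1, -1) to this solution still satisfies all
# three equations: equilibrium alone cannot pin down the four leg forces.
```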

“[W]hen a structure is in equilibrium it has minimum strain energy. […] Strictly speaking, minimum strain energy as a criterion for equilibrium is [however] true only in specific circumstances. To understand this we need to look at the constitutive relations between forces and deformations or displacements. Strain energy is stored potential energy and that energy is the capacity to do work. The strain energy in a body is there because work has been done on it—a force moved through a distance. Hence in order to know the energy we must know how much displacement is caused by a given force. This is called a ‘constitutive relation’ and has the form ‘force equals a constitutive factor times a displacement’. The most common of these relationships is called ‘linear elastic’ where the force equals a simple numerical factor—called the stiffness—times the displacement […] The inverse of the stiffness is called flexibility”.
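In symbols (standard results the book deliberately avoids writing out): for a linear elastic element the force is F = k·δ, the flexibility is 1/k, and the stored strain energy is the work done, U = ½·k·δ². A tiny numerical sketch with an illustrative stiffness and displacement:

```python
k = 2000.0        # stiffness, N/m (illustrative value)
delta = 0.05      # displacement, m (illustrative value)

force = k * delta                        # F = k * delta
flexibility = 1.0 / k                    # displacement per unit force
strain_energy = 0.5 * k * delta ** 2     # area under the linear force-displacement curve

print(f"force = {force:.0f} N, flexibility = {flexibility:.4g} m/N, "
      f"strain energy = {strain_energy:.2f} J")
# force = 100 N, strain energy = 2.5 J: the stored energy is the work done by
# the force moving through the displacement, 0.5 * F * delta for a linear spring.
```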

“Aeroplanes take off or ascend because the lift forces due to the forward motion of the plane exceed the weight […] In level flight or cruise the plane is neutrally buoyant and flies at a steady altitude. […] The structure of an aircraft consists of four sets of tubes: the fuselage, the wings, the tail, and the fin. For obvious reasons their weight needs to be as small as possible. […] Modern aircraft structures are semi-monocoque—meaning stressed skin but with a supporting frame. In other words the skin covering, which may be only a few millimetres thick, becomes part of the structure. […] In an overall sense, the lift and drag forces effectively act on the wings through centres of pressure. The wings also carry the weight of engines and fuel. During a typical flight, the positions of these centres of force vary along the wing—for example as fuel is used. The wings are balanced cantilevers fixed to the fuselage. Longer wings (compared to their width) produce greater lift but are also necessarily heavier—so a compromise is required.”

“When structures move quickly, in particular if they accelerate or decelerate, we have to consider […] the inertia force and the damping force. They occur, for example, as an aeroplane takes off and picks up speed. They occur in bridges and buildings that oscillate in the wind. As these structures move the various bits of the structure remain attached—perhaps vibrating in very complex patterns, but they remain joined together in a state of dynamic equilibrium. An inertia force results from an acceleration or deceleration of an object and is directly proportional to the weight of that object. […] Newton’s 2nd Law tells us that the magnitudes of these [inertial] forces are proportional to the rates of change of momentum. […] Damping arises from friction or ‘looseness’ between components. As a consequence, energy is dissipated into other forms such as heat and sound, and the vibrations get smaller. […] The kinetic energy of a structure in static equilibrium is zero, but as the structure moves its potential energy is converted into kinetic energy. This is because the total energy remains constant by the principle of the conservation of energy (the first law of thermodynamics). The changing forces and displacements along the degree of freedom pathways travel as a wave […]. The amplitude of the wave depends on the nature of the material and the connections between components.”
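The balance of inertia, damping, elastic, and applied forces described here is conventionally written as mass × acceleration + damping × velocity + stiffness × displacement = applied force. A minimal simulation with illustrative numbers (my own sketch, not the book's) showing damping killing off a free vibration:

```python
m, c, k = 1000.0, 2_000.0, 250_000.0   # mass (kg), damping (N*s/m), stiffness (N/m) - illustrative
x, v = 0.02, 0.0                       # initial displacement (m) and velocity (m/s)
dt = 0.001                             # time step (s)

for step in range(int(5.0 / dt) + 1):  # simulate five seconds of free vibration
    if step % 1000 == 0:
        print(f"t = {step * dt:3.1f} s   x = {x * 1000:6.2f} mm")
    a = (-c * v - k * x) / m           # from m*a + c*v + k*x = 0 (no external force)
    v += a * dt                        # semi-implicit Euler integration
    x += v * dt
# The oscillation decays as damping converts kinetic and strain energy into
# heat and sound; with c = 0 the structure would keep vibrating indefinitely.
```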

“For [a] structure to be safe the materials must be strong enough to resist the tension, the compression, and the shear. The strength of materials in tension is reasonably straightforward. We just need to know the limiting forces the material can resist. This is usually specified as a set of stresses. A stress is a force divided by a cross sectional area and represents a localized force over a small area of the material. Typical limiting tensile stresses are called the yield stress […] and the rupture stress—so we just need to know their numerical values from tests. Yield occurs when the material cannot regain its original state, and permanent displacements or strains occur. Rupture is when the material breaks or fractures. […] Limiting average shear stresses and maximum allowable stress are known for various materials. […] Strength in compression is much more difficult […] Modern practice using the finite element method enables us to make theoretical estimates […] but it is still approximate because of the simplifications necessary to do the computer analysis […]. One of the challenges to engineers who rely on finite element analysis is to make sure they understand the implications of the simplifications used.”
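A hedged worked example of the kind of stress check described above; the 250 MPa yield stress is a typical textbook value for mild steel, not a figure from the book, and the load and rod size are illustrative:

```python
import math

force_n = 50_000.0     # 50 kN tensile force on a steel tie rod (illustrative)
diameter_m = 0.02      # 20 mm diameter rod (illustrative)
area_m2 = math.pi / 4 * diameter_m ** 2

stress_pa = force_n / area_m2      # stress = force / cross-sectional area
yield_pa = 250e6                   # ~250 MPa, a typical mild-steel yield stress (assumed)

print(f"stress = {stress_pa / 1e6:.0f} MPa, "
      f"utilisation = {stress_pa / yield_pa:.0%} of yield")
# ~159 MPa, roughly 64% of yield: the rod stays elastic, although a real design
# check would also apply safety factors and consider buckling and fatigue.
```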

“Dynamic loads cause vibrations. One particularly dangerous form of vibration is called resonance […]. All structures have a natural frequency of free vibration. […] Resonance occurs if the frequency of an external vibrating force coincides with the natural frequency of the structure. The consequence is a rapid build up of vibrations that can become seriously damaging. […] Wind is a major source of vibrations. As it flows around a bluff body the air breaks away from the surface and moves in a circular motion like a whirlpool or whirlwind as eddies or vortices. Under certain conditions these vortices may break away on alternate sides, and as they are shed from the body they create pressure differences that cause the body to oscillate. […] a structure is in stable equilibrium when a small perturbation does not result in large displacements. A structure in dynamic equilibrium may oscillate about a stable equilibrium position. […] Flutter is dynamic and a form of wind-excited self-reinforcing oscillation. It occurs, as in the P-delta effect, because of changes in geometry. Forces that are no longer in line because of large displacements tend to modify those displacements of the structure, and these, in turn, modify the forces, and so on. In this process the energy input during a cycle of vibration may be greater than that lost by damping and so the amplitude increases in each cycle until destruction. It is a positive feed-back mechanism that amplifies the initial deformations, causes non-linearity, material plasticity and decreased stiffness, and reduced natural frequency. […] Regular pulsating loads, even very small ones, can cause other problems too through a phenomenon known as fatigue. The word is descriptive—under certain conditions the materials just get tired and crack. A normally ductile material like steel becomes brittle. Fatigue occurs under very small loads repeated many millions of times. All materials in all types of structures have a fatigue limit. […] Fatigue damage occurs deep in the material as microscopic bonds are broken. The problem is particularly acute in the heat affected zones of welded structures.”
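The natural frequency and the danger of resonance can be quantified with the same mass-and-spring picture (standard results, not given in the book): the natural frequency is √(k/m), and the steady-state response to a sinusoidal force is amplified by 1/√((1-r²)² + (2ζr)²), where r is the ratio of forcing frequency to natural frequency and ζ the damping ratio. A short sketch with illustrative values:

```python
import math

k, m, zeta = 250_000.0, 1000.0, 0.02   # stiffness, mass, damping ratio (illustrative)
omega_n = math.sqrt(k / m)             # natural frequency, rad/s
print(f"natural frequency: {omega_n / (2 * math.pi):.2f} Hz")

def amplification(r, zeta):
    """Dynamic amplification factor for forcing-to-natural frequency ratio r."""
    return 1.0 / math.sqrt((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2)

for r in (0.5, 0.9, 1.0, 1.1, 2.0):
    print(f"forcing at {r:.1f} x natural frequency -> amplification {amplification(r, zeta):5.1f}")
# Away from resonance the response is modest; at r = 1 the amplification is
# 1/(2*zeta) = 25 here, which is why matching frequencies is so dangerous.
```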

“Resilience is the ability of a system to recover quickly from difficult conditions. […] One way of delivering a degree of resilience is to make a structure fail-safe—to mitigate failure if it happens. A household electrical fuse is an everyday example. The fuse does not prevent failure, but it does prevent extreme consequences such as an electrical fire. Damage-tolerance is a similar concept. Damage is any physical harm that reduces the value of something. A damage-tolerant structure is one in which any damage can be accommodated at least for a short time until it can be dealt with. […] human factors in failure are not just a matter of individuals’ slips, lapses, or mistakes but are also the result of organizational and cultural situations which are not easy to identify in advance or even at the time. Indeed, they may only become apparent in hindsight. It follows that another major part of safety is to design a structure so that it can be inspected, repaired, and maintained. Indeed all of the processes of creating a structure, whether conceiving, designing, making, or monitoring performance, have to be designed with sufficient resilience to accommodate unexpected events. In other words, safety is not something a system has (a property), rather it is something a system does (a performance). Providing resilience is a form of control—a way of managing uncertainties and risks.”

Stiffness.
Antoni Gaudí. Heinz Isler. Frei Otto.
Eden Project.
Tensegrity.
Bending moment.
Shear and moment diagram.
Stonehenge.
Pyramid at Meidum.
Vitruvius.
Master builder.
John Smeaton.
Puddling (metallurgy).
Cast iron.
Isambard Kingdom Brunel.
Henry Bessemer. Bessemer process.
Institution of Structural Engineers.
Graphic statics (wiki doesn’t have an article on this topic under this name and there isn’t much here, but it looks like google has a lot if you’re interested).
Constitutive equation.
Deformation (mechanics).
Compatibility (mechanics).
Principle of Minimum Complementary Energy.
Direct stiffness method. Finite element method.
Hogging and sagging.
Centre of buoyancy. Metacentre (fluid mechanics). Angle of attack.
Box girder bridge.
D’Alembert’s principle.
Longeron.
Buckling.
S-N diagram.

April 11, 2018 Posted by | Books, Engineering, Physics

The Ice Age (II)

I really liked the book, recommended if you’re at all interested in this kind of stuff. Below some observations from the book’s second half, and some related links:

“Charles MacLaren, writing in 1842, […] argued that the formation of large ice sheets would result in a fall in sea level as water was taken from the oceans and stored frozen on the land. This insight triggered a new branch of ice age research – sea level change. This topic can get rather complicated because as ice sheets grow, global sea level falls. This is known as eustatic sea level change. As ice sheets increase in size, their weight depresses the crust and relative sea level will rise. This is known as isostatic sea level change. […] It is often quite tricky to differentiate between regional-scale isostatic factors and the global-scale eustatic sea level control.”

“By the late 1870s […] glacial geology had become a serious scholarly pursuit with a rapidly growing literature. […] [In the late 1880s] Carvill Lewis […] put forward the radical suggestion that the [sea] shells at Moel Tryfan and other elevated localities (which provided the most important evidence for the great marine submergence of Britain) were not in situ. Building on the earlier suggestions of Thomas Belt (1832–78) and James Croll, he argued that these materials had been dredged from the sea bed by glacial ice and pushed upslope so that ‘they afford no testimony to the former subsidence of the land’. Together, his recognition of terminal moraines and the reworking of marine shells undermined the key pillars of Lyell’s great marine submergence. This was a crucial step in establishing the primacy of glacial ice over icebergs in the deposition of the drift in Britain. […] By the end of the 1880s, it was the glacial dissenters who formed the eccentric minority. […] In the period leading up to World War One, there was [instead] much debate about whether the ice age involved a single phase of ice sheet growth and freezing climate (the monoglacial theory) or several phases of ice sheet build up and decay separated by warm interglacials (the polyglacial theory).”

“As the Earth rotates about its axis travelling through space in its orbit around the Sun, there are three components that change over time in elegant cycles that are entirely predictable. These are known as eccentricity, precession, and obliquity or ‘stretch, wobble, and roll’ […]. These orbital perturbations are caused by the gravitational pull of the other planets in our Solar System, especially Jupiter. Milankovitch calculated how each of these orbital cycles influenced the amount of solar radiation received at different latitudes over time. These are known as Milankovitch Cycles or Croll–Milankovitch Cycles to reflect the important contribution made by both men. […] The shape of the Earth’s orbit around the Sun is not constant. It changes from an almost circular orbit to one that is mildly elliptical (a slightly stretched circle) […]. This orbital eccentricity operates over a 400,000- and 100,000-year cycle. […] Changes in eccentricity have a relatively minor influence on the total amount of solar radiation reaching the Earth, but they are important for the climate system because they modulate the influence of the precession cycle […]. When eccentricity is high, for example, axial precession has a greater impact on seasonality. […] The Earth is currently tilted at an angle of 23.4° to the plane of its orbit around the Sun. Astronomers refer to this axial tilt as obliquity. This angle is not fixed. It rolls back and forth over a 41,000-year cycle from a tilt of 22.1° to 24.5° and back again […]. Even small changes in tilt can modify the strength of the seasons. With a greater angle of tilt, for example, we can have hotter summers and colder winters. […] Cooler, reduced insolation summers are thought to be a key factor in the initiation of ice sheet growth in the middle and high latitudes because they allow more snow to survive the summer melt season. Slightly warmer winters may also favour ice sheet build-up as greater evaporation from a warmer ocean will increase snowfall over the centres of ice sheet growth. […] The Earth’s axis of rotation is not fixed. It wobbles like a spinning top slowing down. This wobble traces a circle on the celestial sphere […]. At present the Earth’s rotational axis points toward Polaris (the current northern pole star) but in 11,000 years it will point towards another star, Vega. This slow circling motion is known as axial precession and it has important impacts on the Earth’s climate by causing the solstices and equinoxes to move around the Earth’s orbit. In other words, the seasons shift over time. Precession operates over a 19,000- and 23,000-year cycle. This cycle is often referred to as the Precession of the Equinoxes.”
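A purely illustrative toy model (my own, with arbitrary equal amplitudes) that simply sums sinusoids at the quoted periods (400,000 and 100,000 years for eccentricity, 41,000 for obliquity, 23,000 and 19,000 for precession) to show how the cycles beat against one another over the last million years:

```python
import math

periods_kyr = [400, 100, 41, 23, 19]   # the quoted orbital periods, in thousands of years

def toy_forcing(t_kyr):
    """Sum of unit-amplitude sinusoids - a cartoon, not real insolation."""
    return sum(math.sin(2 * math.pi * t_kyr / p) for p in periods_kyr)

for t in range(0, 1001, 100):          # every 100 kyr over the last million years
    f = toy_forcing(t)
    bar = "#" * int(2 * (f + len(periods_kyr)))
    print(f"{t:>4} kyr ago  {f:+5.2f}  {bar}")
# Real Milankovitch forcing weights each cycle by latitude and season, and
# eccentricity mainly modulates the precession signal; this sketch only shows
# how the quoted periods beat against one another.
```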

The albedo of a surface is a measure of its ability to reflect solar energy. Darker surfaces tend to absorb most of the incoming solar energy and have low albedos. The albedo of the ocean surface in high latitudes is commonly about 10 per cent — in other words, it absorbs 90 per cent of the incoming solar radiation. In contrast, snow, glacial ice, and sea ice have much higher albedos and can reflect between 50 and 90 per cent of incoming solar energy back into the atmosphere. The elevated albedos of bright frozen surfaces are a key feature of the polar radiation budget. Albedo feedback loops are important over a range of spatial and temporal scales. A cooling climate will increase snow cover on land and the extent of sea ice in the oceans. These high albedo surfaces will then reflect more solar radiation to intensify and sustain the cooling trend, resulting in even more snow and sea ice. This positive feedback can play a major role in the expansion of snow and ice cover and in the initiation of a glacial phase. Such positive feedbacks can also work in reverse when a warming phase melts ice and snow to reveal dark and low albedo surfaces such as peaty soil or bedrock.”
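A one-line version of the feedback arithmetic, using the albedo figures quoted above (ocean about 0.10, sea ice and snow roughly 0.5-0.9):

```python
solar_in = 100.0   # incoming solar energy, arbitrary units
for surface, albedo in [("open ocean", 0.10), ("sea ice", 0.50), ("fresh snow", 0.90)]:
    absorbed = (1 - albedo) * solar_in
    print(f"{surface:10s}: reflects {albedo:.0%}, absorbs {absorbed:.0f} units")
# Replacing open ocean with sea ice roughly halves the absorbed energy (or cuts
# it by 90% for fresh snow), which cools the surface and favours yet more ice.
```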

“At the end of the Cretaceous, around 65 million years ago (Ma), lush forests thrived in the Polar Regions and ocean temperatures were much warmer than today. This warm phase continued for the next 10 million years, peaking during the Eocene thermal maximum […]. From that time onwards, however, Earth’s climate began a steady cooling that saw the initiation of widespread glacial conditions, first in Antarctica between 40 and 30 Ma, in Greenland between 20 and 15 Ma, and then in the middle latitudes of the northern hemisphere around 2.5 Ma. […] Over the past 55 million years, a succession of processes driven by tectonics combined to cool our planet. It is difficult to isolate their individual contributions or to be sure about the details of cause and effect over this long period, especially when there are uncertainties in dating and when one considers the complexity of the climate system with its web of internal feedbacks.” [Potential causes which have been highlighted include: The uplift of the Himalayas (leading to increased weathering, leading over geological time to an increased amount of CO2 being sequestered in calcium carbonate deposited on the ocean floor, lowering atmospheric CO2 levels), the isolation of Antarctica which created the Antarctic Circumpolar Current (leading to a cooling of Antarctica), the dry-out of the Mediterranean Sea ~5mya (which significantly lowered salt concentrations in the World Ocean, meaning that sea water froze at a higher temperature), and the formation of the Isthmus of Panama. – US].

“[F]or most of the last 1 million years, large ice sheets were present in the middle latitudes of the northern hemisphere and sea levels were lower than today. Indeed, ‘average conditions’ for the Quaternary Period involve much more ice than present. The interglacial peaks — such as the present Holocene interglacial, with its ice volume minima and high sea level — are the exception rather than the norm. The sea level maximum of the Last Interglacial (MIS 5) is higher than today. It also shows that cold glacial stages (c.80,000 years duration) are much longer than interglacials (c.15,000 years). […] Arctic willow […], the northernmost woody plant on Earth, is found in central European pollen records from the last glacial stage. […] For most of the Quaternary deciduous forests have been absent from most of Europe. […] the interglacial forests of temperate Europe that are so familiar to us today are, in fact, rather atypical when we consider the long view of Quaternary time. Furthermore, if the last glacial period is representative of earlier ones, for much of the Quaternary terrestrial ecosystems were continuously adjusting to a shifting climate.”

“Greenland ice cores typically have very clear banding […] that corresponds to individual years of snow accumulation. This is because the snow that falls in summer under the permanent Arctic sun differs in texture to the snow that falls in winter. The distinctive paired layers can be counted like tree rings to produce a finely resolved chronology with annual and even seasonal resolution. […] Ice accumulation is generally much slower in Antarctica, so the ice core record takes us much further back in time. […] As layers of snow become compacted into ice, air bubbles recording the composition of the atmosphere are sealed in discrete layers. This fossil air can be recovered to establish the changing concentration of greenhouse gases such as carbon dioxide (CO2) and methane (CH4). The ice core record therefore allows climate scientists to explore the processes involved in climate variability over very long timescales. […] By sampling each layer of ice and measuring its oxygen isotope composition, Dansgaard produced an annual record of air temperature for the last 100,000 years. […] Perhaps the most startling outcome of this work was the demonstration that global climate could change extremely rapidly. Dansgaard showed that dramatic shifts in mean air temperature (>10°C) had taken place in less than a decade. These findings were greeted with scepticism and there was much debate about the integrity of the Greenland record, but subsequent work from other drilling sites vindicated all of Dansgaard’s findings. […] The ice core records from Greenland reveal a remarkable sequence of abrupt warming and cooling cycles within the last glacial stage. These are known as Dansgaard–Oeschger (D–O) cycles. […] [A] series of D–O cycles between 65,000 and 10,000 years ago [caused] mean annual air temperatures on the Greenland ice sheet [to be] shifted by as much as 10°C. Twenty-five of these rapid warming events have been identified during the last glacial period. This discovery dispelled the long held notion that glacials were lengthy periods of stable and unremitting cold climate. The ice core record shows very clearly that even the glacial climate flipped back and forth. […] D–O cycles commence with a very rapid warming (between 5 and 10°C) over Greenland followed by a steady cooling […] Deglaciations are rapid because positive feedbacks speed up both the warming trend and ice sheet decay. […] The ice core records heralded a new era in climate science: the study of abrupt climate change. Most sedimentary records of ice age climate change yield relatively low resolution information — a thousand years may be packed into a few centimetres of marine or lake sediment. In contrast, ice cores cover every year. They also retain a greater variety of information about the ice age past than any other archive. We can even detect layers of volcanic ash in the ice and pinpoint the date of ancient eruptions.”
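The "oxygen isotope composition" Dansgaard measured is conventionally reported as δ¹⁸O, the per-mil deviation of a sample's ¹⁸O/¹⁶O ratio from a reference standard; the definition is standard but not spelled out in the book, and the sample ratios below are purely illustrative:

```python
def delta_18O(ratio_sample, ratio_standard):
    """delta-18O in per mil: ((R_sample / R_standard) - 1) * 1000."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

R_STANDARD = 2.0052e-3   # approximate 18O/16O ratio of the VSMOW reference water
# Illustrative ratios only: glacial-age Greenland ice is strongly depleted in
# 18O because the heavier isotope preferentially rains out en route to the ice sheet.
for label, r in [("warmer-period ice (illustrative)", 1.936e-3),
                 ("glacial ice (illustrative)", 1.925e-3)]:
    print(f"{label}: {delta_18O(r, R_STANDARD):6.1f} per mil")
```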

“There are strong thermal gradients in both hemispheres because the low latitudes receive the most solar energy and the poles the least. To redress these imbalances the atmosphere and oceans move heat polewards — this is the basis of the climate system. In the North Atlantic a powerful surface current takes warmth from the tropics to higher latitudes: this is the famous Gulf Stream and its northeastern extension the North Atlantic Drift. Two main forces drive this current: the strong southwesterly winds and the return flow of colder, saltier water known as North Atlantic Deep Water (NADW). The surface current loses much of its heat to air masses that give maritime Europe a moist, temperate climate. Evaporative cooling also increases its salinity so that it begins to sink. As the dense and cold water sinks to the deep ocean to form NADW, it exerts a strong pull on the surface currents to maintain the cycle. It returns south at depths >2,000 m. […] The thermohaline circulation in the North Atlantic was periodically interrupted during Heinrich Events when vast discharges of melting icebergs cooled the ocean surface and reduced its salinity. This shut down the formation of NADW and suppressed the Gulf Stream.”

Links:

Archibald Geikie.
Andrew Ramsay (geologist).
Albrecht Penck. Eduard Brückner. Gunz glaciation. Mindel glaciation. Riss glaciation. Würm.
Insolation.
Perihelion and aphelion.
Deep Sea Drilling Project.
Foraminifera.
δ18O. Isotope fractionation.
Marine isotope stage.
Cesare Emiliani.
Nicholas Shackleton.
Brunhes–Matuyama reversal. Geomagnetic reversal. Magnetostratigraphy.
Climate: Long range Investigation, Mapping, and Prediction (CLIMAP).
Uranium–thorium dating. Luminescence dating. Optically stimulated luminescence. Cosmogenic isotope dating.
The role of orbital forcing in the Early-Middle Pleistocene Transition (paper).
European Project for Ice Coring in Antarctica (EPICA).
Younger Dryas.
Lake Agassiz.
Greenland ice core project (GRIP).
J Harlen Bretz. Missoula Floods.
Pleistocene megafauna.

February 25, 2018 Posted by | Astronomy, Engineering, Geology, History, Paleontology, Physics

Systems Biology (I)

This book is really dense and is somewhat tough for me to blog. One significant problem is that “The authors assume that the reader is already familiar with the material covered in a classic biochemistry course.” I know enough biochem to follow most of the stuff in this book, and I was definitely quite happy to have recently read John Finney’s book on the biochemical properties of water and Christopher Hall’s introduction to materials science, as both of those books’ coverage turned out to be highly relevant (these are far from the only relevant books I’ve read semi-recently – Atkins’ introduction to thermodynamics is another book that springs to mind) – but even so, what do you leave out when writing a post like this? I decided to leave out a lot. Posts covering books like this one are hard to write because they blow up in your face so easily: you have to include a great many details for the material in the post to even start to make sense to people who didn’t read the original text. And if you leave out all the details, what’s really left? It’s difficult.

Anyway, some observations from the first chapters of the book below.

“[T]he biological world consists of self-managing and self-organizing systems which owe their existence to a steady supply of energy and information. Thermodynamics introduces a distinction between open and closed systems. Reversible processes occurring in closed systems (i.e. independent of their environment) automatically gravitate toward a state of equilibrium which is reached once the velocity of a given reaction in both directions becomes equal. When this balance is achieved, we can say that the reaction has effectively ceased. In a living cell, a similar condition occurs upon death. Life relies on certain spontaneous processes acting to unbalance the equilibrium. Such processes can only take place when substrates and products of reactions are traded with the environment, i.e. they are only possible in open systems. In turn, achieving a stable level of activity in an open system calls for regulatory mechanisms. When the reaction consumes or produces resources that are exchanged with the outside world at an uneven rate, the stability criterion can only be satisfied via a negative feedback loop […] cells and living organisms are thermodynamically open systems […] all structures which play a role in balanced biological activity may be treated as components of a feedback loop. This observation enables us to link and integrate seemingly unrelated biological processes. […] the biological structures most directly involved in the functions and mechanisms of life can be divided into receptors, effectors, information conduits and elements subject to regulation (reaction products and action results). Exchanging these elements with the environment requires an inflow of energy. Thus, living cells are — by their nature — open systems, requiring an energy source […] A thermodynamically open system lacking equilibrium due to a steady inflow of energy in the presence of automatic regulation is […] a good theoretical model of a living organism. […] Pursuing growth and adapting to changing environmental conditions calls for specialization which comes at the expense of reduced universality. A specialized cell is no longer self-sufficient. As a consequence, a need for higher forms of intercellular organization emerges. The structure which provides cells with suitable protection and ensures continued homeostasis is called an organism.”
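
A toy illustration of the point about negative feedback in open systems – not from the book, and the parameter values below are arbitrary – is a production term that gets dialled down as the regulated quantity rises above a set point, balanced against a loss term standing in for exchange with the environment:

# Minimal sketch of a negative feedback loop keeping an open system near a set point.
# All parameter values are arbitrary illustrations, not taken from the book.

def simulate(setpoint=1.0, gain=5.0, loss_rate=0.5, x0=0.1, dt=0.01, steps=2000):
    """Euler integration of dx/dt = production(x) - loss_rate * x,
    where production falls as x approaches the set point (negative feedback)."""
    x = x0
    for _ in range(steps):
        production = gain * max(0.0, setpoint - x)  # 'receptor' and 'effector' in one line
        x += (production - loss_rate * x) * dt
    return x

print(round(simulate(), 3))            # settles just below the set point
print(round(simulate(x0=3.0), 3))      # same steady state from a very different start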

“In biology, structure and function are tightly interwoven. This phenomenon is closely associated with the principles of evolution. Evolutionary development has produced structures which enable organisms to develop and maintain their architecture, perform actions and store the resources needed to survive. For this reason we introduce a distinction between support structures (which are akin to construction materials), function-related structures (fulfilling the role of tools and machines), and storage structures (needed to store important substances, achieving a compromise between tight packing and ease of access). […] Biology makes extensive use of small-molecule structures and polymers. The physical properties of polymer chains make them a key building block in biological structures. There are several reasons as to why polymers are indispensable in nature […] Sequestration of resources is subject to two seemingly contradictory criteria: 1. Maximize storage density; 2. Perform sequestration in such a way as to allow easy access to resources. […] In most biological systems, storage applies to energy and information. Other types of resources are only occasionally stored […]. Energy is stored primarily in the form of saccharides and lipids. Saccharides are derivatives of glucose, rendered insoluble (and thus easy to store) via polymerization. Their polymerized forms, stabilized with α-glycosidic bonds, include glycogen (in animals) and starch (in plantlife). […] It should be noted that the somewhat loose packing of polysaccharides […] makes them unsuitable for storing large amounts of energy. In a typical human organism only ca. 600 kcal of energy is stored in the form of glycogen, while (under normal conditions) more than 100,000 kcal exists as lipids. Lipid deposits usually assume the form of triglycerides (triacylglycerols). Their properties can be traced to the similarities between fatty acids and hydrocarbons. Storage efficiency (i.e. the amount of energy stored per unit of mass) is twice that of polysaccharides, while access remains adequate owing to the relatively large surface area and high volume of lipids in the organism.”

“Most living organisms store information in the form of tightly-packed DNA strands. […] It should be noted that only a small percentage of DNA (a few per cent) conveys biologically relevant information. The purpose of the remaining ballast is to enable suitable packing and exposure of these important fragments. If all of DNA were to consist of useful code, it would be nearly impossible to devise a packing strategy guaranteeing access to all of the stored information.”

“The seemingly endless diversity of biological functions frustrates all but the most persistent attempts at classification. For the purpose of this handbook we assume that each function can be associated either with a single cell or with a living organism. In both cases, biological functions are strictly subordinate to automatic regulation, based — in a stable state — on negative feedback loops, and in processes associated with change (for instance in embryonic development) — on automatic execution of predetermined biological programs. Individual components of a cell cannot perform regulatory functions on their own […]. Thus, each element involved in the biological activity of a cell or organism must necessarily participate in a regulatory loop based on processing information.”

“Proteins are among the most basic active biological structures. Most of the well-known proteins studied thus far perform effector functions: this group includes enzymes, transport proteins, certain immune system components (complement factors) and myofibrils. Their purpose is to maintain biological systems in a steady state. Our knowledge of receptor structures is somewhat poorer […] Simple structures, including individual enzymes and components of multienzyme systems, can be treated as “tools” available to the cell, while advanced systems, consisting of many mechanically-linked tools, resemble machines. […] Machinelike mechanisms are readily encountered in living cells. A classic example is fatty acid synthesis, performed by dedicated machines called synthases. […] Multiunit structures acting as machines can be encountered wherever complex biochemical processes need to be performed in an efficient manner. […] If the purpose of a machine is to generate motion then a thermally powered machine can accurately be called a motor. This type of action is observed e.g. in myocytes, where transmission involves reordering of protein structures using the energy generated by hydrolysis of high-energy bonds.”

“In biology, function is generally understood as specific physiochemical action, almost universally mediated by proteins. Most such actions are reversible which means that a single protein molecule may perform its function many times. […] Since spontaneous noncovalent surface interactions are very infrequent, the shape and structure of active sites — with high concentrations of hydrophobic residues — makes them the preferred area of interaction between functional proteins and their ligands. They alone provide the appropriate conditions for the formation of hydrogen bonds; moreover, their structure may determine the specific nature of interaction. The functional bond between a protein and a ligand is usually noncovalent and therefore reversible.”

“In general terms, we can state that enzymes accelerate reactions by lowering activation energies for processes which would otherwise occur very slowly or not at all. […] The activity of enzymes goes beyond synthesizing a specific protein-ligand complex (as in the case of antibodies or receptors) and involves an independent catalytic attack on a selected bond within the ligand, precipitating its conversion into the final product. The relative independence of both processes (binding of the ligand in the active site and catalysis) is evidenced by the phenomenon of noncompetitive inhibition […] Kinetic studies of enzymes have provided valuable insight into the properties of enzymatic inhibitors — an important field of study in medicine and drug research. Some inhibitors, particularly competitive ones (i.e. inhibitors which outcompete substrates for access to the enzyme), are now commonly used as drugs. […] Physical and chemical processes may only occur spontaneously if they generate energy, or non-spontaneously if they consume it. However, all processes occurring in a cell must have a spontaneous character because only these processes may be catalyzed by enzymes. Enzymes merely accelerate reactions; they do not provide energy. […] The change in enthalpy associated with a chemical process may be calculated as a net difference in the sum of molecular binding energies prior to and following the reaction. Entropy is a measure of the likelihood that a physical system will enter a given state. Since chaotic distribution of elements is considered the most probable, physical systems exhibit a general tendency to gravitate towards chaos. Any form of ordering is thermodynamically disadvantageous.”
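
The point about enzymes lowering activation energies is easy to make quantitative with the standard Arrhenius relation, k ∝ exp(−Ea/RT): a modest drop in Ea buys an enormous rate enhancement. The kJ/mol figures below are arbitrary illustrations, not numbers from the book:

import math

R = 8.314          # gas constant, J/(mol*K)
T = 310.0          # roughly body temperature, K

def rate_factor(delta_Ea_kJ):
    """Fold increase in an Arrhenius rate constant when the activation
    energy is lowered by delta_Ea_kJ (pre-exponential factor unchanged)."""
    return math.exp(delta_Ea_kJ * 1000 / (R * T))

for d in (10, 20, 40):
    print(f"Ea lowered by {d} kJ/mol -> reaction ~{rate_factor(d):.0f}x faster")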

“The chemical reactions which power biological processes are characterized by varying degrees of efficiency. In general, they tend to be on the lower end of the efficiency spectrum, compared to energy sources which drive matter transformation processes in our universe. In search of a common criterion to describe the efficiency of various energy sources, we can refer to the net loss of mass associated with a release of energy, according to Einstein’s formula:
E = mc²
The ΔM/M coefficient (relative loss of mass, given e.g. in %) allows us to compare the efficiency of energy sources. The most efficient processes are those involved in the gravitational collapse of stars. Their efficiency may reach 40 %, which means that 40 % of the stationary mass of the system is converted into energy. In comparison, nuclear reactions have an approximate efficiency of 0.8 %. The efficiency of chemical energy sources available to biological systems is incomparably lower and amounts to approximately 10^(-7) % […]. Among chemical reactions, the most potent sources of energy are found in oxidation processes, commonly exploited by biological systems. Oxidation tends to result in the largest net release of energy per unit of mass, although the efficiency of specific types of oxidation varies. […] given unrestricted access to atmospheric oxygen and to hydrogen atoms derived from hydrocarbons — the combustion of hydrogen (i.e. the synthesis of water; H2 + 1/2O2 = H2O) has become a principal source of energy in nature, next to photosynthesis, which exploits the energy of solar radiation. […] The basic process associated with the release of hydrogen and its subsequent oxidation (called the Krebs cycle) is carried by processes which transfer electrons onto oxygen atoms […]. Oxidation occurs in stages, enabling optimal use of the released energy. An important byproduct of water synthesis is the universal energy carrier known as ATP (synthesized separately). As water synthesis is a highly spontaneous process, it can be exploited to cover the energy debt incurred by endergonic synthesis of ATP, as long as both processes are thermodynamically coupled, enabling spontaneous catalysis of anhydride bonds in ATP. Water synthesis is a universal source of energy in heterotrophic systems. In contrast, autotrophic organisms rely on the energy of light which is exploited in the process of photosynthesis. Both processes yield ATP […] Preparing nutrients (hydrogen carriers) for participation in water synthesis follows different paths for sugars, lipids and proteins. This is perhaps obvious given their relative structural differences; however, in all cases the final form, which acts as a substrate for dehydrogenases, is acetyl-CoA”.
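
The ΔM/M comparison is simple to reproduce: divide the energy released per kilogram of fuel by c². Using a rough textbook-style figure of ~1.4 × 10^8 J per kg of hydrogen burned to water (my number for illustration, not taken from the book), you land on the ~10^(-7) % quoted above:

c = 3.0e8                      # speed of light, m/s

def relative_mass_loss(energy_per_kg):
    """Fraction of rest mass converted to energy, Delta_m / m = E / (m * c^2)."""
    return energy_per_kg / c**2

E_H2_combustion = 1.4e8        # J released per kg of hydrogen burned (approximate)
print(f"{relative_mass_loss(E_H2_combustion) * 100:.1e} %")   # ~1.6e-07 %, matching the quoted order of magnitude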

“Photosynthesis is a process which — from the point of view of electron transfer — can be treated as a counterpart of the respiratory chain. In heterotrophic organisms, mitochondria transport electrons from hydrogenated compounds (sugars, lipids, proteins) onto oxygen molecules, synthesizing water in the process, whereas in the course of photosynthesis electrons released by breaking down water molecules are used as a means of reducing oxidised carbon compounds […]. In heterotrophic organisms the respiratory chain has a spontaneous quality (owing to its oxidative properties); however, any reverse process requires energy to occur. In the case of photosynthesis this energy is provided by sunlight […] Hydrogen combustion and photosynthesis are the basic sources of energy in the living world. […] For an energy source to become useful, non-spontaneous reactions must be coupled to its operation, resulting in a thermodynamically unified system. Such coupling can be achieved by creating a coherent framework in which the spontaneous and non-spontaneous processes are linked, either physically or chemically, using a bridging component which affects them both. If the properties of both reactions are different, the bridging component must also enable suitable adaptation and mediation. […] Direct exploitation of the energy released via the hydrolysis of ATP is usually possible by introducing an active binding carrier mediating the energy transfer. […] Carriers are considered active as long as their concentration ensures a sufficient release of energy to synthesize a new chemical bond by way of a non-spontaneous process. Active carriers are relatively short-lived […] Any active carrier which performs its function outside of the active site must be sufficiently stable to avoid breaking up prior to participating in the synthesis reaction. Such mobile carriers are usually produced when the required synthesis consists of several stages or cannot be conducted in the active site of the enzyme for steric reasons. Contrary to ATP, active energy carriers are usually reaction-specific. […] Mobile energy carriers are usually formed as a result of hydrolysis of two high-energy ATP bonds. In many cases this is the minimum amount of energy required to power a reaction which synthesizes a single chemical bond. […] Expelling a mobile or unstable reaction component in order to increase the spontaneity of active energy carrier synthesis is a process which occurs in many biological mechanisms […] The action of active energy carriers may be compared to a ball rolling down a hill. The descending ball gains sufficient energy to traverse another, smaller mound, adjacent to its starting point. In our case, the smaller hill represents the final synthesis reaction […] Understanding the role of active carriers is essential for the study of metabolic processes.”
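
The sort of thermodynamic coupling described above is usually illustrated with free-energy bookkeeping: an endergonic step can run when it shares an intermediate with ATP hydrolysis so that the summed ΔG is negative. The numbers below are the commonly quoted standard values (roughly −30.5 kJ/mol for ATP hydrolysis and +13.8 kJ/mol for direct glucose phosphorylation); I am supplying them for illustration, they are not taken from the book:

# Coupling an endergonic synthesis to ATP hydrolysis: the summed standard
# free energy change decides whether the combined reaction can run spontaneously.
dG_ATP_hydrolysis = -30.5      # kJ/mol, ATP + H2O -> ADP + Pi (standard conditions)
dG_glucose_Pi     = +13.8      # kJ/mol, glucose + Pi -> glucose-6-phosphate + H2O

dG_coupled = dG_ATP_hydrolysis + dG_glucose_Pi
print(f"coupled reaction: {dG_coupled:+.1f} kJ/mol -> "
      f"{'spontaneous' if dG_coupled < 0 else 'not spontaneous'}")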

“A second category of processes, directly dependent on energy sources, involves structural reconfiguration of proteins, which can be further differentiated into low and high-energy reconfiguration. Low-energy reconfiguration occurs in proteins which form weak, easily reversible bonds with ligands. In such cases, structural changes are powered by the energy released in the creation of the complex. […] Important low-energy reconfiguration processes may occur in proteins which consist of subunits. Structural changes resulting from relative motion of subunits typically do not involve significant expenditures of energy. Of particular note are the so-called allosteric proteins […] whose rearrangement is driven by a weak and reversible bond between the protein and an oxygen molecule. Allosteric proteins are genetically conditioned to possess two stable structural configurations, easily swapped as a result of binding or releasing ligands. Thus, they tend to have two comparable energy minima (separated by a low threshold), each of which may be treated as a global minimum corresponding to the native form of the protein. Given such properties, even a weakly interacting ligand may trigger significant structural reconfiguration. This phenomenon is of critical importance to a variety of regulatory proteins. In many cases, however, the second potential minimum in which the protein may achieve relative stability is separated from the global minimum by a high threshold requiring a significant expenditure of energy to overcome. […] Contrary to low-energy reconfigurations, the relative difference in ligand concentrations is insufficient to cover the cost of a difficult structural change. Such processes are therefore coupled to highly exergonic reactions such as ATP hydrolysis. […]  The link between a biological process and an energy source does not have to be immediate. Indirect coupling occurs when the process is driven by relative changes in the concentration of reaction components. […] In general, high-energy reconfigurations exploit direct coupling mechanisms while indirect coupling is more typical of low-energy processes”.

“Muscle action requires a major expenditure of energy. There is a nonlinear dependence between the degree of physical exertion and the corresponding energy requirements. […] Training may improve the power and endurance of muscle tissue. Muscle fibers subjected to regular exertion may improve their glycogen storage capacity, ATP production rate, oxidative metabolism and the use of fatty acids as fuel.”

February 4, 2018 Posted by | Biology, Books, Chemistry, Genetics, Molecular biology, Pharmacology, Physics

Lakes (I)

“The aim of this book is to provide a condensed overview of scientific knowledge about lakes, their functioning as ecosystems that we are part of and depend upon, and their responses to environmental change. […] Each chapter briefly introduces concepts about the physical, chemical, and biological nature of lakes, with emphasis on how these aspects are connected, the relationships with human needs and impacts, and the implications of our changing global environment.”

I’m currently reading this book and I really like it so far. I have added some observations from the first half of the book and some coverage-related links below.

“High resolution satellites can readily detect lakes above 0.002 kilometres square (km2) in area; that’s equivalent to a circular waterbody some 50m across. Using this criterion, researchers estimate from satellite images that the world contains 117 million lakes, with a total surface area amounting to 5 million km2. […] continuous accumulation of materials on the lake floor, both from inflows and from the production of organic matter within the lake, means that lakes are ephemeral features of the landscape, and from the moment of their creation onwards, they begin to fill in and gradually disappear. The world’s deepest and most ancient freshwater ecosystem, Lake Baikal in Russia (Siberia), is a compelling example: it has a maximum depth of 1,642m, but its waters overlie a much deeper basin that over the twenty-five million years of its geological history has become filled with some 7,000m of sediments. Lakes are created in a great variety of ways: tectonic basins formed by movements in the Earth’s crust, the scouring and residual ice effects of glaciers, as well as fluvial, volcanic, riverine, meteorite impacts, and many other processes, including human construction of ponds and reservoirs. Tectonic basins may result from a single fault […] or from a series of intersecting fault lines. […] The oldest and deepest lakes in the world are generally of tectonic origin, and their persistence through time has allowed the evolution of endemic plants and animals; that is, species that are found only at those sites.”
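
As a quick sanity check on the quoted detection threshold, 0.002 km2 is just the area of a circle roughly 50 m across:

import math

diameter_m = 50.0
area_km2 = math.pi * (diameter_m / 2)**2 / 1e6   # m^2 -> km^2
print(round(area_km2, 4))                        # ~0.002 km2, the quoted satellite detection limit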

“In terms of total numbers, most of the world’s lakes […] owe their origins to glaciers that during the last ice age gouged out basins in the rock and deepened river valleys. […] As the glaciers retreated, their terminal moraines (accumulations of gravel and sediments) created dams in the landscape, raising water levels or producing new lakes. […] During glacial retreat in many areas of the world, large blocks of glacial ice broke off and were left behind in the moraines. These subsequently melted out to produce basins that filled with water, called ‘kettle’ or ‘pothole’ lakes. Such waterbodies are well known across the plains of North America and Eurasia. […] The most violent of lake births are the result of volcanoes. The craters left behind after a volcanic eruption can fill with water to form small, often circular-shaped and acidic lakes. […] Much larger lakes are formed by the collapse of a magma chamber after eruption to produce caldera lakes. […] Craters formed by meteorite impacts also provide basins for lakes, and have proved to be of great scientific as well as human interest. […] There was a time when limnologists paid little attention to small lakes and ponds, but, this has changed with the realization that although such waterbodies are modest in size, they are extremely abundant throughout the world and make up a large total surface area. Furthermore, these smaller waterbodies often have high rates of chemical activity such as greenhouse gas production and nutrient cycling, and they are major habitats for diverse plants and animals”.

“For Forel, the science of lakes could be subdivided into different disciplines and subjects, all of which continue to occupy the attention of freshwater scientists today […]. First, the physical environment of a lake includes its geological origins and setting, the water balance and exchange of heat with the atmosphere, as well as the penetration of light, the changes in temperature with depth, and the waves, currents, and mixing processes that collectively determine the movement of water. Second, the chemical environment is important because lake waters contain a great variety of dissolved materials (‘solutes’) and particles that play essential roles in the functioning of the ecosystem. Third, the biological features of a lake include not only the individual species of plants, microbes, and animals, but also their organization into food webs, and the distribution and functioning of these communities across the bottom of the lake and in the overlying water.”

“In the simplest hydrological terms, lakes can be thought of as tanks of water in the landscape that are continuously topped up by their inflowing rivers, while spilling excess water via their outflow […]. Based on this model, we can pose the interesting question: how long does the average water molecule stay in the lake before leaving at the outflow? This value is referred to as the water residence time, and it can be simply calculated as the total volume of the lake divided by the water discharge at the outlet. This lake parameter is also referred to as the ‘flushing time’ (or ‘flushing rate’, if expressed as a proportion of the lake volume discharged per unit of time) because it provides an estimate of how fast mineral salts and pollutants can be flushed out of the lake basin. In general, lakes with a short flushing time are more resilient to the impacts of human activities in their catchments […] Each lake has its own particular combination of catchment size, volume, and climate, and this translates into a water residence time that varies enormously among lakes [from perhaps a month to more than a thousand years, US] […] A more accurate approach towards calculating the water residence time is to consider the question: if the lake were to be pumped dry, how long would it take to fill it up again? For most lakes, this will give a similar value to the outflow calculation, but for lakes where evaporation is a major part of the water balance, the residence time will be much shorter.”
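
The residence-time calculation is worth spelling out: divide lake volume by the rate of water loss (outflow alone for the flushing time, outflow plus evaporation for the 'pump it dry and refill' estimate). The lake numbers below are made up for illustration:

def residence_time_years(volume_km3, outflow_km3_per_year, evaporation_km3_per_year=0.0):
    """Water residence time = lake volume / total water loss.
    Using outflow alone reproduces the 'flushing time'; adding evaporation
    gives the more accurate 'time to refill if pumped dry' estimate."""
    return volume_km3 / (outflow_km3_per_year + evaporation_km3_per_year)

# Hypothetical lake: 10 km3 of water, 2 km3/yr leaves at the outlet, 0.5 km3/yr evaporates.
print(residence_time_years(10, 2))        # 5.0 years (flushing time)
print(residence_time_years(10, 2, 0.5))   # 4.0 years (shorter once evaporation is counted)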

“Each year, mineral and organic particles are deposited by wind on the lake surface and are washed in from the catchment, while organic matter is produced within the lake by aquatic plants and plankton. There is a continuous rain of this material downwards, ultimately accumulating as an annual layer of sediment on the lake floor. These lake sediments are storehouses of information about past changes in the surrounding catchment, and they provide a long-term memory of how the limnology of a lake has responded to those changes. The analysis of these natural archives is called ‘palaeolimnology’ (or ‘palaeoceanography’ for marine studies), and this branch of the aquatic sciences has yielded enormous insights into how lakes change through time, including the onset, effects, and abatement of pollution; changes in vegetation both within and outside the lake; and alterations in regional and global climate.”

“Sampling for palaeolimnological analysis is typically undertaken in the deepest waters to provide a more integrated and complete picture of the lake basin history. This is also usually the part of the lake where sediment accumulation has been greatest, and where the disrupting activities of bottom-dwelling animals (‘bioturbation’ of the sediments) may be reduced or absent. […] Some of the most informative microfossils to be found in lake sediments are diatoms, an algal group that has cell walls (‘frustules’) made of silica glass that resist decomposition. Each lake typically contains dozens to hundreds of different diatom species, each with its own characteristic set of environmental preferences […]. A widely adopted approach is to sample many lakes and establish a statistical relationship or ‘transfer function’ between diatom species composition (often by analysis of surface sediments) and a lake water variable such as temperature, pH, phosphorus, or dissolved organic carbon. This quantitative species–environment relationship can then be applied to the fossilized diatom species assemblage in each stratum of a sediment core from a lake in the same region, and in this way the physical and chemical fluctuations that the lake has experienced in the past can be reconstructed or ‘hindcast’ year-by-year. Other fossil indicators of past environmental change include algal pigments, DNA of algae and bacteria including toxic bloom species, and the remains of aquatic animals such as ostracods, cladocerans, and larval insects.”

“In lake and ocean studies, the penetration of sunlight into the water can be […] precisely measured with an underwater light meter (submersible radiometer), and such measurements always show that the decline with depth follows a sharp curve rather than a straight line […]. This is because the fate of sunlight streaming downwards in water is dictated by the probability of the photons being absorbed or deflected out of the light path; for example, a 50 per cent probability of photons being lost from the light beam by these processes per metre depth in a lake would result in sunlight values dropping from 100 per cent at the surface to 50 per cent at 1m, 25 per cent at 2m, 12.5 per cent at 3m, and so on. The resulting exponential curve means that for all but the clearest of lakes, there is only enough solar energy for plants, including photosynthetic cells in the plankton (phytoplankton), in the upper part of the water column. […] The depth limit for underwater photosynthesis or primary production is known as the ‘compensation depth‘. This is the depth at which carbon fixed by photosynthesis exactly balances the carbon lost by cellular respiration, so the overall production of new biomass (net primary production) is zero. This depth often corresponds to an underwater light level of 1 per cent of the sunlight just beneath the water surface […] The production of biomass by photosynthesis takes place at all depths above this level, and this zone is referred to as the ‘photic’ zone. […] biological processes in [the] ‘aphotic zone’ are mostly limited to feeding and decomposition. A Secchi disk measurement can be used as a rough guide to the extent of the photic zone: in general, the 1 per cent light level is about twice the Secchi depth.”
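
The 'sharp curve' is an exponential decay, I(z) = I(0)·exp(−Kd·z), so the 1 per cent light level – the usual proxy for the compensation depth – sits at ln(100)/Kd ≈ 4.6/Kd. A small sketch reproducing the 50-per-cent-per-metre example from the quote (Kd here is the diffuse attenuation coefficient; the numbers are illustrative):

import math

def light_fraction(depth_m, kd_per_m):
    """Fraction of surface sunlight remaining at a given depth (Beer-Lambert-style decay)."""
    return math.exp(-kd_per_m * depth_m)

kd = math.log(2)                         # ~0.69 per m, i.e. 50% of photons lost per metre
for z in range(4):
    print(z, round(100 * light_fraction(z, kd), 1))   # 100, 50, 25, 12.5 per cent

photic_depth = math.log(100) / kd        # depth of the 1% light level
print(round(photic_depth, 1), "m photic zone; Secchi depth roughly half of that")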

“[W]ater colour is now used in […] many powerful ways to track changes in water quality and other properties of lakes, rivers, estuaries, and the ocean. […] Lakes have different colours, hues, and brightness levels as a result of the materials that are dissolved and suspended within them. The purest of lakes are deep blue because the water molecules themselves absorb light in the green and, to a greater extent, red end of the spectrum; they scatter the remaining blue photons in all directions, mostly downwards but also back towards our eyes. […] Algae in the water typically cause it to be green and turbid because their suspended cells and colonies contain chlorophyll and other light-capturing molecules that absorb strongly in the blue and red wavebands, but not green. However there are some notable exceptions. Noxious algal blooms dominated by cyanobacteria are blue-green (cyan) in colour caused by their blue-coloured protein phycocyanin, in addition to chlorophyll.”

“[A]t the largest dimension, at the scale of the entire lake, there has to be a net flow from the inflowing rivers to the outflow, and […] from this landscape perspective, lakes might be thought of as enlarged rivers. Of course, this riverine flow is constantly disrupted by wind-induced movements of the water. When the wind blows across the surface, it drags the surface water with it to generate a downwind flow, and this has to be balanced by a return movement of water at depth. […] In large lakes, the rotation of the Earth has plenty of time to exert its weak effect as the water moves from one side of the lake to the other. As a result, the surface water no longer flows in a straight line, but rather is directed into two or more circular patterns or gyres that can move nearshore water masses rapidly into the centre of the lake and vice-versa. Gyres can therefore be of great consequence […] Unrelated to the Coriolis Effect, the interaction between wind-induced currents and the shoreline can also cause water to flow in circular, individual gyres, even in smaller lakes. […] At a much smaller scale, the blowing of wind across a lake can give rise to downward spiral motions in the water, called ‘Langmuir cells’. […] These circulation features are commonly observed in lakes, where the spirals progressing in the general direction of the wind concentrate foam (on days of white-cap waves) or glossy, oily materials (on less windy days) into regularly spaced lines that are parallel to the direction of the wind. […] Density currents must also be included in this brief discussion of water movement […] Cold river water entering a warm lake will be denser than its surroundings and therefore sinks to the bottom, where it may continue to flow for considerable distances. […] Density currents contribute greatly to inshore-offshore exchanges of water, with potential effects on primary productivity, deep-water oxygenation, and the dispersion of pollutants.”

Links:

Limnology.
Drainage basin.
Lake Geneva. Lake Malawi. Lake Tanganyika. Lake Victoria. Lake Biwa. Lake Titicaca.
English Lake District.
Proglacial lake. Lake Agassiz. Lake Ojibway.
Lake Taupo.
Manicouagan Reservoir.
Subglacial lake.
Thermokarst (-lake).
Bathymetry. Bathymetric chart. Hypsographic curve.
Várzea forest.
Lake Chad.
Colored dissolved organic matter.
H2O Temperature-density relationship. Thermocline. Epilimnion. Hypolimnion. Monomictic lake. Dimictic lake. Lake stratification.
Capillary wave. Gravity wave. Seiche. Kelvin wave. Poincaré wave.
Benthic boundary layer.
Kelvin–Helmholtz instability.

January 22, 2018 Posted by | Biology, Books, Botany, Chemistry, Geology, Paleontology, Physics

Random stuff

I have almost stopped posting posts like these, which has resulted in the accumulation of a very large number of links and studies which I figured I might like to blog at some point. This post is mainly an attempt to deal with the backlog – I won’t cover the material in too much detail.

i. Do Bullies Have More Sex? The answer seems to be a qualified yes. A few quotes:

“Sexual behavior during adolescence is fairly widespread in Western cultures (Zimmer-Gembeck and Helfland 2008) with nearly two thirds of youth having had sexual intercourse by the age of 19 (Finer and Philbin 2013). […] Bullying behavior may aid in intrasexual competition and intersexual selection as a strategy when competing for mates. In line with this contention, bullying has been linked to having a higher number of dating and sexual partners (Dane et al. 2017; Volk et al. 2015). This may be one reason why adolescence coincides with a peak in antisocial or aggressive behaviors, such as bullying (Volk et al. 2006). However, not all adolescents benefit from bullying. Instead, bullying may only benefit adolescents with certain personality traits who are willing and able to leverage bullying as a strategy for engaging in sexual behavior with opposite-sex peers. Therefore, we used two independent cross-sectional samples of older and younger adolescents to determine which personality traits, if any, are associated with leveraging bullying into opportunities for sexual behavior.”

“…bullying by males signals the ability to provide good genes, material resources, and protect offspring (Buss and Shackelford 1997; Volk et al. 2012) because bullying others is a way of displaying attractive qualities such as strength and dominance (Gallup et al. 2007; Reijntjes et al. 2013). As a result, this makes bullies attractive sexual partners to opposite-sex peers while simultaneously suppressing the sexual success of same-sex rivals (Gallup et al. 2011; Koh and Wong 2015; Zimmer-Gembeck et al. 2001). Females may denigrate other females, targeting their appearance and sexual promiscuity (Leenaars et al. 2008; Vaillancourt 2013), which are two qualities relating to male mate preferences. Consequently, derogating these qualities lowers a rival’s appeal as a mate and also intimidates or coerces rivals into withdrawing from intrasexual competition (Campbell 2013; Dane et al. 2017; Fisher and Cox 2009; Vaillancourt 2013). Thus, males may use direct forms of bullying (e.g., physical, verbal) to facilitate intersexual selection (i.e., appear attractive to females), while females may use relational bullying to facilitate intrasexual competition, by making rivals appear less attractive to males.”

The study relies on the use of self-report data, which I find very problematic – so I won’t go into the results here. I’m not quite clear on how those studies mentioned in the discussion ‘have found self-report data [to be] valid under conditions of confidentiality’ – and I remain skeptical. You’ll usually want data from independent observers (e.g. teacher or peer observations) when analyzing these kinds of things. Note in the context of the self-report data problem that if there’s a strong stigma associated with being bullied (there often is, or bullying wouldn’t work as well), asking people if they have been bullied is not much better than asking people if they’re bullying others.

ii. Some topical advice that some people might soon regret not having followed, from the wonderful Things I Learn From My Patients thread:

“If you are a teenage boy experimenting with fireworks, do not empty the gunpowder from a dozen fireworks and try to mix it in your mother’s blender. But if you do decide to do that, don’t hold the lid down with your other hand and stand right over it. This will result in the traumatic amputation of several fingers, burned and skinned forearms, glass shrapnel in your face, and a couple of badly scratched corneas as a start. You will spend months in rehab and never be able to use your left hand again.”

iii. I haven’t talked about the AlphaZero-Stockfish match, but I was of course aware of it and did read a bit about that stuff. Here’s a reddit thread where one of the Stockfish programmers answers questions about the match. A few quotes:

“Which of the two is stronger under ideal conditions is, to me, neither particularly interesting (they are so different that it’s kind of like comparing the maximum speeds of a fish and a bird) nor particularly important (since there is only one of them that you and I can download and run anyway). What is super interesting is that we have two such radically different ways to create a computer chess playing entity with superhuman abilities. […] I don’t think there is anything to learn from AlphaZero that is applicable to Stockfish. They are just too different, you can’t transfer ideas from one to the other.”

“Based on the 100 games played, AlphaZero seems to be about 100 Elo points stronger under the conditions they used. The current development version of Stockfish is something like 40 Elo points stronger than the version used in Google’s experiment. There is a version of Stockfish translated to hand-written x86-64 assembly language that’s about 15 Elo points stronger still. This adds up to roughly half the Elo difference between AlphaZero and Stockfish shown in Google’s experiment.”
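
For readers who want to translate Elo gaps into expected scores, the standard Elo model puts the expected score against an opponent rated d points higher at 1/(1 + 10^(d/400)) – a textbook formula, not something taken from the linked thread:

def expected_score(elo_gap):
    """Expected score (win=1, draw=0.5) against an opponent rated elo_gap points higher."""
    return 1 / (1 + 10 ** (elo_gap / 400))

for gap in (50, 100):
    print(gap, round(expected_score(gap), 3))   # ~0.43 at 50 Elo down, ~0.36 at 100 Elo down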

“It seems that Stockfish was playing with only 1 GB for transposition tables (the area of memory used to store data about the positions previously encountered in the search), which is way too little when running with 64 threads.” [I seem to recall a comp sci guy observing elsewhere that this was less than what was available to his smartphone version of Stockfish, but I didn’t bookmark that comment].

“The time control was a very artificial fixed 1 minute/move. That’s not how chess is traditionally played. Quite a lot of effort has gone into Stockfish’s time management. It’s pretty good at deciding when to move quickly, and when to spend a lot of time on a critical decision. In a fixed time per move game, it will often happen that the engine discovers that there is a problem with the move it wants to play just before the time is out. In a regular time control, it would then spend extra time analysing all alternative moves and trying to find a better one. When you force it to move after exactly one minute, it will play the move it already knows is bad. There is no doubt that this will cause it to lose many games it would otherwise have drawn.”

iv. Thrombolytics for Acute Ischemic Stroke – no benefit found.

“Thrombolysis has been rigorously studied in >60,000 patients for acute thrombotic myocardial infarction, and is proven to reduce mortality. It is theorized that thrombolysis may similarly benefit ischemic stroke patients, though a much smaller number (8120) has been studied in relevant, large scale, high quality trials thus far. […] There are 12 such trials 1-12. Despite the temptation to pool these data the studies are clinically heterogeneous. […] Data from multiple trials must be clinically and statistically homogenous to be validly pooled.14 Large thrombolytic studies demonstrate wide variations in anatomic stroke regions, small- versus large-vessel occlusion, clinical severity, age, vital sign parameters, stroke scale scores, and times of administration. […] Examining each study individually is therefore, in our opinion, both more valid and more instructive. […] Two of twelve studies suggest a benefit […] In comparison, twice as many studies showed harm and these were stopped early. This early stoppage means that the number of subjects in studies demonstrating harm would have included over 2400 subjects based on originally intended enrollments. Pooled analyses are therefore missing these phantom data, which would have further eroded any aggregate benefits. In their absence, any pooled analysis is biased toward benefit. Despite this, there remain five times as many trials showing harm or no benefit (n=10) as those concluding benefit (n=2), and 6675 subjects in trials demonstrating no benefit compared to 1445 subjects in trials concluding benefit.”

“Thrombolytics for ischemic stroke may be harmful or beneficial. The answer remains elusive. We struggled therefore, debating between a ‘yellow’ or ‘red’ light for our recommendation. However, over 60,000 subjects in trials of thrombolytics for coronary thrombosis suggest a consistent beneficial effect across groups and subgroups, with no studies suggesting harm. This consistency was found despite a very small mortality benefit (2.5%), and a very narrow therapeutic window (1% major bleeding). In comparison, the variation in trial results of thrombolytics for stroke and the daunting but consistent adverse effect rate caused by ICH suggested to us that thrombolytics are dangerous unless further study exonerates their use.”

“There is a Cochrane review that pooled estimates of effect. 17 We do not endorse this choice because of clinical heterogeneity. However, we present the NNT’s from the pooled analysis for the reader’s benefit. The Cochrane review suggested a 6% reduction in disability […] with thrombolytics. This would mean that 17 were treated for every 1 avoiding an unfavorable outcome. The review also noted a 1% increase in mortality (1 in 100 patients die because of thrombolytics) and a 5% increase in nonfatal intracranial hemorrhage (1 in 20), for a total of 6% harmed (1 in 17 suffers death or brain hemorrhage).”
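
The NNT figures quoted there follow directly from the absolute risk differences, NNT (or NNH) = 1/absolute risk difference:

def nnt(absolute_risk_difference):
    """Number needed to treat (or harm) = 1 / absolute risk difference."""
    return 1 / absolute_risk_difference

print(round(nnt(0.06)))   # ~17 treated per patient avoiding an unfavourable outcome
print(round(nnt(0.01)))   # 100: one extra death per 100 treated
print(round(nnt(0.05)))   # 20: one extra nonfatal intracranial hemorrhage per 20 treated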

v. Suicide attempts in Asperger Syndrome. An interesting finding: “Over 35% of individuals with AS reported that they had attempted suicide in the past.”

Related: Suicidal ideation and suicide plans or attempts in adults with Asperger’s syndrome attending a specialist diagnostic clinic: a clinical cohort study.

“374 adults (256 men and 118 women) were diagnosed with Asperger’s syndrome in the study period. 243 (66%) of 367 respondents self-reported suicidal ideation, 127 (35%) of 365 respondents self-reported plans or attempts at suicide, and 116 (31%) of 368 respondents self-reported depression. Adults with Asperger’s syndrome were significantly more likely to report lifetime experience of suicidal ideation than were individuals from a general UK population sample (odds ratio 9·6 [95% CI 7·6–11·9], p<0·0001), people with one, two, or more medical illnesses (p<0·0001), or people with psychotic illness (p=0·019). […] Lifetime experience of depression (p=0·787), suicidal ideation (p=0·164), and suicide plans or attempts (p=0·06) did not differ significantly between men and women […] Individuals who reported suicide plans or attempts had significantly higher Autism Spectrum Quotient scores than those who did not […] Empathy Quotient scores and ages did not differ between individuals who did or did not report suicide plans or attempts (table 4). Patients with self-reported depression or suicidal ideation did not have significantly higher Autism Spectrum Quotient scores, Empathy Quotient scores, or age than did those without depression or suicidal ideation”.

The fact that people with Asperger’s are more likely to be depressed and contemplate suicide is consistent with previous observations that they’re also more likely to die from suicide – for example a paper I blogged a while back found that in that particular (large Swedish population-based cohort) study, people with ASD were more than 7 times as likely to die from suicide as the comparable controls.

Also related: Suicidal tendencies hard to spot in some people with autism.

This link has some great graphs and tables of suicide data from the US.

Also autism-related: Increased perception of loudness in autism. This is one of the ‘important ones’ for me personally – I am much more sound-sensitive than are most people.

vi. Early versus Delayed Invasive Intervention in Acute Coronary Syndromes.

“Earlier trials have shown that a routine invasive strategy improves outcomes in patients with acute coronary syndromes without ST-segment elevation. However, the optimal timing of such intervention remains uncertain. […] We randomly assigned 3031 patients with acute coronary syndromes to undergo either routine early intervention (coronary angiography ≤24 hours after randomization) or delayed intervention (coronary angiography ≥36 hours after randomization). The primary outcome was a composite of death, myocardial infarction, or stroke at 6 months. A prespecified secondary outcome was death, myocardial infarction, or refractory ischemia at 6 months. […] Early intervention did not differ greatly from delayed intervention in preventing the primary outcome, but it did reduce the rate of the composite secondary outcome of death, myocardial infarction, or refractory ischemia and was superior to delayed intervention in high-risk patients.”

vii. Some wikipedia links:

Behrens–Fisher problem.
Sailing ship tactics (I figured I had to read up on this if I were to get anything out of the Aubrey-Maturin books).
Anatomical terms of muscle.
Phatic expression (“a phatic expression […] is communication which serves a social function such as small talk and social pleasantries that don’t seek or offer any information of value.”)
Three-domain system.
Beringian wolf (featured).
Subdural hygroma.
Cayley graph.
Schur polynomial.
Solar neutrino problem.
Hadamard product (matrices).
True polar wander.
Newton’s cradle.

viii. Determinant versus permanent (mathematics – technical).

ix. Some years ago I wrote a few English-language posts about some of the various statistical/demographic properties of immigrants living in Denmark, based on numbers included in a publication by Statistics Denmark. I did it by translating the observations included in that publication, which was only published in Danish. I was briefly considering doing the same thing again when the 2017 data arrived, but I decided not to do it as I recalled that it took a lot of time to write those posts back then, and it didn’t seem to me to be worth the effort – but Danish readers might be interested to have a look at the data, if they haven’t already – here’s a link to the publication Indvandrere i Danmark 2017.

x. A banter blitz session with grandmaster Peter Svidler, who recently became the first Russian ever to win the Russian Chess Championship 8 times. He’s currently shared-second in the World Rapid Championship after 10 rounds and is now in the top 10 on the live rating list in both classical and rapid – seems like he’s had a very decent year.

xi. I recently discovered Dr. Whitecoat’s blog. The patient encounters are often interesting.

December 28, 2017 Posted by | Astronomy, autism, Biology, Cardiology, Chess, Computer science, History, Mathematics, Medicine, Neurology, Physics, Psychiatry, Psychology, Random stuff, Statistics, Studies, Wikipedia, Zoology

Plate Tectonics (II)

Some more observations and links below.

I may or may not add a third post about the book at a later point in time; there’s a lot of interesting stuff included in this book.

“Because of the thickness of the lithosphere, its bending causes […] a stretching of its upper surface. This stretching of the upper portion of the lithosphere manifests itself as earthquakes and normal faulting, the style of faulting that occurs when a region extends horizontally […]. Such earthquakes commonly occur after great earthquakes […] Having been bent down at the trench, the lithosphere […] slides beneath the overriding lithospheric plate. Fault plane solutions of shallow focus earthquakes […] provide the most direct evidence for this underthrusting. […] In great earthquakes, […] the deformation of the surface of the Earth that occurs during such earthquakes corroborates the evidence for underthrusting of the oceanic lithosphere beneath the landward side of the trench. The 1964 Alaskan earthquake provided the first clear example. […] Because the lithosphere is much colder than the asthenosphere, when a plate of lithosphere plunges into the asthenosphere at rates of tens to more than a hundred millimetres per year, it remains colder than the asthenosphere for tens of millions of years. In the asthenosphere, temperatures approach those at which some minerals in the rock can melt. Because seismic waves travel more slowly and attenuate (lose energy) more rapidly in hot, and especially in partially molten, rock than they do in colder rock, the asthenosphere is not only a zone of weakness, but also characterized by low speeds and high attenuation of seismic waves. […] many seismologists use the waves sent by earthquakes to study the Earth’s interior, with little regard for earthquakes themselves. The speeds at which these waves propagate and the rate at which the waves die out, or attenuate, have provided much of the data used to infer the Earth’s internal structure.”

“S waves especially, but also P waves, lose much of their energy while passing through the asthenosphere. The lithosphere, however, transmits P and S waves with only modest loss of energy. This difference is apparent in the extent to which small earthquakes can be felt. In regions like the western United States or in Greece and Italy, the lithosphere is thin, and the asthenosphere reaches up to shallow depths. As a result earthquakes, especially small ones, are felt over relatively small areas. By contrast, in the eastern United States or in Eastern Europe, small earthquakes can be felt at large distances. […] Deep earthquakes occur several hundred kilometres west of Japan, but they are felt with greater intensity and can be more destructive in eastern than western Japan […]. This observation, of course, puzzled Japanese seismologists when they first discovered deep focus earthquakes; usually people close to the epicentre (the point directly over the earthquake) feel stronger shaking than people farther from it. […] Tokuji Utsu […] explained this greater intensity of shaking along the more distant, eastern side of the islands than on the closer, western side by appealing to a window of low attenuation parallel to the earthquake zone and plunging through the asthenosphere beneath Japan and the Sea of Japan to its west. Paths to eastern Japan travelled efficiently through that window, the subducted slab of lithosphere, whereas those to western Japan passed through the asthenosphere and were attenuated strongly.”

“Shallow earthquakes occur because stress on a fault surface exceeds the resistance to slip that friction imposes. When two objects are forced to slide past one another, and friction opposes the force that pushes one past the other, the frictional resistance can be increased by pressing the two objects together more forcefully. Many of us experience this when we put sandbags in the trunks […] of our cars in winter to give the tyres greater traction on slippery roads. The same applies to faults in the Earth’s crust. As the pressure increases with increasing depth in the Earth, frictional resistance to slip on faults should increase. For depths greater than a few tens of kilometres, the high pressure should press the two sides of a fault together so tightly that slip cannot occur. Thus, in theory, deep-focus earthquakes ought not to occur.”
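
The friction argument can be put in rough numbers with a standard Coulomb-type estimate: frictional strength ≈ μ × effective normal stress, and the normal stress grows roughly as ρgz with depth. The friction coefficient and densities below are the usual textbook values, not figures from the book:

# Rough Coulomb-friction estimate of the shear stress needed to make a fault slip at depth.
mu = 0.6                 # friction coefficient (Byerlee-type laboratory value)
rho = 2800.0             # crustal rock density, kg/m^3
g = 9.8                  # m/s^2

def frictional_strength_MPa(depth_km, hydrostatic_pore_pressure=True):
    """Shear stress required for frictional sliding, tau = mu * (lithostatic - pore) pressure."""
    lithostatic = rho * g * depth_km * 1000          # Pa
    pore = 1000.0 * g * depth_km * 1000 if hydrostatic_pore_pressure else 0.0
    return mu * (lithostatic - pore) / 1e6           # MPa

for z in (5, 15, 50):
    print(f"{z} km: ~{frictional_strength_MPa(z):.0f} MPa needed to overcome friction")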

“In general, rock […] is brittle at low temperatures but becomes soft and flows at high temperature. The intermediate- and deep-focus earthquakes occur within the lithosphere, where at a given depth, the temperature is atypically low. […] the existence of intermediate- or deep-focus earthquakes is usually cited as evidence for atypically cold material at asthenospheric depths. Most such earthquakes, therefore, occur in oceanic lithosphere that has been subducted within the last 10–20 million years, sufficiently recently that it has not heated up enough to become soft and weak […]. The inference that the intermediate- and deep-focus earthquakes occur within the lithosphere and not along its top edge remains poorly appreciated among Earth scientists. […] the fault plane solutions suggest that the state of stress in the downgoing slab is what one would expect if the slab deformed like a board, or slab of wood. Accordingly, we infer that the earthquakes occurring within the downgoing slab of lithosphere result from stress within the slab, not from movement of the slab past the surrounding asthenosphere. Because the lithosphere is much stronger than the surrounding asthenosphere, it can support much higher stresses than the asthenosphere can. […] observations are consistent with a cold, heavy slab sinking into the asthenosphere and being pulled downward by gravity acting on it, but then encountering resistance at depths of 500–700 km despite the pull of gravity acting on the excess mass of the slab. Where both intermediate and deep-focus earthquakes occur, a gap, or a minimum, in earthquake activity near a depth of 300 km marks the transition between the upper part of the slab stretched by gravity pulling it down and the lower part where the weight of the slab above it compresses it. In the transition region between them, there would be negligible stress and, therefore, no or few earthquakes.”

“Volcanoes occur where rock melts, and where that molten rock can rise to the surface. […] For essentially all minerals […] melting temperatures […] depend on the extent to which the minerals have been contaminated by impurities. […] hydrogen, when it enters most crystal lattices, lowers the melting temperature of the mineral. Hydrogen is most obviously present in water (H2O), but is hardly a major constituent of the oxygen-, silicon-, magnesium-, and iron-rich mantle. The top of the downgoing slab of lithosphere includes fractured crust and sediment deposited atop it. Oceanic crust has been stewing in seawater for tens of millions of years, so that its cracks have become full either of liquid water or of minerals to which water molecules have become loosely bound. […] the downgoing slab acts like a caravan of camels carrying water downward into an upper mantle desert. […] The downgoing slab of lithosphere carries water in cracks in oceanic crust and in the interstices among sediment grains, and when released to the mantle above it, hydrogen dissolved in crystal lattices lowers the melting temperature of that rock enough that some of it melts. Many of the world’s great volcanoes […] begin as small amounts of melt above the subducted slabs of lithosphere.”

“… (in most regions) plates of lithosphere behave as rigid, and therefore undeformable, objects. The high strength of intact lithosphere, stronger than either the asthenosphere below it or the material along the boundaries of plates, allows the lithospheric plates to move with respect to one another without deforming (much). […] The essence of ‘plate tectonics’ is that vast regions move with respect to one another as (nearly) rigid objects. […] Dan McKenzie of Cambridge University, one of the scientists to present the idea of rigid plates, often argued that plate tectonics was easy to accept because the kinematics, the description of relative movements of plates, could be separated from the dynamics, the system of forces that causes plates to move with respect to one another in the directions and at the speeds that they do. Making such a separation is impossible for the flow of most fluids, […] whose movement cannot be predicted without an understanding of the forces acting on separate parcels of fluid. In part because of its simplicity, plate tectonics passed from being a hypothesis to an accepted theory in a short time.”

“[F]or plates that move over the surface of a sphere, all relative motion can be described simply as a rotation about an axis that passes through the centre of the sphere. The Earth itself obviously rotates around an axis through the North and South Poles. Similarly, the relative displacement of two plates with respect to one another can be described as a rotation of one plate with respect to the other about an axis, or ‘pole’, of rotation […] if we know how two plates, for example Eurasia and Africa, move with respect to a third plate, like North America, we can calculate how those two plates (Eurasia and Africa) move with respect to each other. A rotation about an axis in the Arctic Ocean describes the movement of the Africa plate, with respect to the North America plate […]. Combining the relative motion of Africa with respect to North America with the relative motion of North America with respect to Eurasia allows us to calculate that the African continent moves toward Eurasia by a rotation about an axis that lies west of northern Africa. […] By combining the known relative motion of pairs of plates […] we can calculate how fast plates converge with respect to one another and in what direction.”
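
The rotation arithmetic described here is easy to make concrete. Below is a minimal Python sketch (my own illustration, not taken from the book) that composes two finite rotations about Euler poles and recovers the pole and angle of the combined rotation, following the usual convention of composing total rotations. The poles and angles used are invented numbers chosen only to show the mechanics, not real plate-motion data.

import numpy as np

def pole_to_axis(lat_deg, lon_deg):
    # Unit vector for an Euler pole given as (latitude, longitude).
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def rotation_matrix(axis, angle_deg):
    # Rodrigues' formula: finite rotation by angle_deg about a unit axis.
    a = np.radians(angle_deg)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(a) * K + (1 - np.cos(a)) * (K @ K)

def matrix_to_pole(R):
    # Recover the (lat, lon, angle) of the single equivalent rotation.
    angle = np.degrees(np.arccos((np.trace(R) - 1) / 2))
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    axis = axis / np.linalg.norm(axis)
    return (np.degrees(np.arcsin(axis[2])),
            np.degrees(np.arctan2(axis[1], axis[0])),
            angle)

# Hypothetical poles and angles, for illustration only (NOT real plate data):
R_africa_namerica = rotation_matrix(pole_to_axis(78.0, 38.0), 10.0)   # Africa rel. N. America
R_eurasia_namerica = rotation_matrix(pole_to_axis(62.0, 135.0), 8.0)  # Eurasia rel. N. America

# Africa relative to Eurasia: compose (N. America rel. Eurasia), i.e. the
# inverse of the second rotation, with (Africa rel. N. America). Finite
# rotations do not commute, so the order of the matrix product matters.
R_africa_eurasia = np.linalg.inv(R_eurasia_namerica) @ R_africa_namerica
print(matrix_to_pole(R_africa_eurasia))   # equivalent pole and rotation angle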

“[W]e can measure how plates move with respect to one another using Global Positioning System (GPS) measurements of points on nearly all of the plates. Such measurements show that speeds of relative motion between some pairs of plates have changed a little bit since 2 million years ago, but in general, the GPS measurements corroborate the inferences drawn both from rates of seafloor spreading determined using magnetic anomalies and from directions of relative plate motion determined using orientations of transform faults and fault plane solutions of earthquakes. […] Among tests of plate tectonics, none is more convincing than the GPS measurements […] numerous predictions of rates or directions of present-day plate motions and of large displacements of huge terrains have been confirmed many times over. […] When, more than 45 years ago, plate tectonics was proposed to describe relative motions of vast terrains, most saw it as an approximation that worked well, but that surely was imperfect. […] plate tectonics is imperfect, but GPS measurements show that the plates are surprisingly rigid. […] Long histories of plate motion can be reduced to relatively few numbers, the latitudes and longitudes of the poles of rotation, and the rates or amounts of rotation about those axes.”

Links:

Wadati–Benioff zone.
Translation (geometry).
Rotation (mathematics).
Poles of rotation.
Rotation around a fixed axis.
Euler’s rotation theorem.
Isochron dating.
Tanya Atwater.

December 25, 2017 Posted by | Books, Chemistry, Geology, Physics | Leave a comment

Plate Tectonics (I)

Some quotes and links related to the first half of the book‘s coverage:

“The fundamental principle of plate tectonics is that large expanses of terrain, thousands of kilometres in lateral extent, behave as thin (~100 km in thickness) rigid layers that move with respect to one another across the surface of the Earth. The word ‘plate’ carries the image of a thin rigid object, and ‘tectonics’ is a geological term that refers to large-scale processes that alter the structure of the Earth’s crust. […] The Earth is stratified with a light crust overlying denser mantle. Just as the height of icebergs depends on the mass of ice below the surface of the ocean, so […] the light crust of the Earth floats on the denser mantle, standing high where crust is thick, and lying low, deep below the ocean, where it should be thin. Wegener recognized that oceans are mostly deep, and he surmised correctly that the crust beneath oceans must be much thinner than that beneath continents.”
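
To put a couple of numbers on the iceberg analogy, here is a small back-of-the-envelope sketch (mine, not the book's); the densities are rounded textbook values and the 3,000 m mountain is just an arbitrary example.

# Rounded textbook densities in kg/m^3; not values from the book.
rho_ice, rho_seawater = 917.0, 1025.0
rho_crust, rho_mantle = 2800.0, 3300.0

# Archimedes: the fraction of a floating iceberg standing above the water.
above = 1 - rho_ice / rho_seawater
print(f"Iceberg above the waterline: about {above:.0%}")

# Airy-style isostasy: topography of height h is balanced by a crustal root r,
# rho_crust * h = (rho_mantle - rho_crust) * r.
h = 3000.0                                   # metres of topography (arbitrary)
r = rho_crust * h / (rho_mantle - rho_crust)
print(f"Root needed beneath a {h:.0f} m mountain: roughly {r / 1000:.0f} km")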

“From a measurement of the direction in which a hunk of rock is magnetized, one can infer where the North Pole lay relative to that rock at the time it was magnetized. It follows that if continents had drifted, rock of different ages on the continents should be magnetized in different directions, not just from each other but more importantly in directions inconsistent with the present-day magnetic field. […] In the 1950s, several studies using palaeomagnetism were carried out to test whether continents had drifted, and most such tests passed. […] Palaeomagnetic results not only supported the idea of continental drift, but they also offered constraints on timing and rates of drift […] in the 1960s, the idea of continental drift saw a renaissance, but subsumed within a broader framework, that of plate tectonics.”

“If one wants to study deformation of the Earth’s crust in action, the quick and dirty way is to study earthquakes. […] Until the 1960s, studying fracture zones in action was virtually impossible. Nearly all of them lie far offshore beneath the deep ocean. Then, in response to a treaty in the early 1960s disallowing nuclear explosions in the ocean, atmosphere, or space, but permitting underground testing of them, the Department of Defense of the USA put in place the World-Wide Standardized Seismograph Network, a global network with more than 100 seismograph stations. […] Suddenly remote earthquakes, not only those on fracture zones but also those elsewhere throughout the globe […], became amenable to study. […] the study of earthquakes played a crucial role in the recognition and acceptance of plate tectonics. […] By the early 1970s, the basic elements of plate tectonics had permeated essentially all of Earth science. In addition to the obvious consequences, like confirmation of continental drift, emphasis shifted from determining the history of the planet to understanding the processes that had shaped it.”

“[M]ost solids are strongest when cold, and become weaker when warmed. Temperature increases into the Earth. As a result the strongest rock lies close to the surface, and rock weakens with depth. Moreover, olivine, the dominant mineral in the upper mantle, seems to be stronger than most crustal minerals; so, in many regions, the strongest rock is at the top of the mantle. Beneath oceans where crust is thin, ~7 km, the lithosphere is mostly mantle […]. Because temperature increases gradually with depth, the boundary between strong lithosphere and underlying weak asthenosphere is not sharp. Nevertheless, because the difference in strength is large, subdividing the outer part of the Earth into two layers facilitates an understanding of plate tectonics. Reduced to its essence, the basic idea that we call plate tectonics is simply a description of the relative movements of separate plates of lithosphere as these plates move over the underlying weaker, hotter asthenosphere. […] Most of the Earth’s surface lies on one of the ~20 major plates, whose sizes vary from huge, like the Pacific plate, to small, like the Caribbean plate […], or even smaller. Narrow belts of earthquakes mark the boundaries of separate plates […]. The key to plate tectonics lies in these plates behaving as largely rigid objects, and therefore undergoing only negligible deformation.”

“Although the amounts and types of sediment deposited on the ocean bottom vary from place to place, the composition and structure of the oceanic crust is remarkably uniform beneath the deep ocean. The structure of oceanic lithosphere depends primarily on its age […] As the lithosphere ages, it thickens, and the rate at which it cools decreases. […] the rate that heat is lost through the seafloor decreases with the age of lithosphere. […] As the lithospheric plate loses heat and cools, like most solids, it contracts. This contraction manifests itself as a deepening of the ocean. […] Seafloor spreading in the Pacific occurs two to five times faster than it does in the Atlantic. […] when seafloor spreading is slow, new basalt rising to the surface at the ridge axis can freeze onto the older seafloor on its edges before rising as high as it would otherwise. As a result, a valley […] forms. Where spreading is faster, however, as in the Pacific, new basalt rises to a shallower depth and no such valley forms. […] The spreading apart of two plates along a mid-ocean ridge system occurs by divergence of the two plates along straight segments of mid-ocean ridge that are truncated at fracture zones. Thus, the plate boundary at a mid-ocean ridge has a zig-zag shape, with spreading centres making zigs and transform faults making zags along it.”
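
The deepening-with-age relationship mentioned here is often approximated, for seafloor younger than roughly 70–80 million years, by depth ≈ 2,500 m + 350 m × √(age in Myr). A tiny Python sketch of that rule of thumb (my addition, not a formula from the book):

import math

# Rule of thumb: depth (m) ~ 2500 + 350 * sqrt(age in Myr), for young seafloor.
for age in (0, 2, 10, 30, 60):
    depth = 2500 + 350 * math.sqrt(age)
    print(f"{age:3d} Myr-old seafloor: roughly {depth:.0f} m deep")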

“Geochemists are confident that the volume of water in the oceans has not changed by a measurable amount for hundreds of millions, if not billions, of years. Yet, the geologic record shows several periods when continents were flooded to a much greater extent than today. For example, 90 million years ago, the Midwestern United States and neighbouring Canada were flooded. One could have sailed due north from the Gulf of Mexico to Hudson’s Bay and into the Arctic. […] If sea level has risen and fallen, while the volume of water has remained unchanged, then the volume of the basin holding the water must have changed. The rates at which seafloor is created at the different spreading centres today are not the same, and such rates at all spreading centres have varied over geologic time. Imagine a time in the past when seafloor at some of the spreading centres was created at a faster rate than it is today. If this relatively high rate had continued for a few tens of millions of years, there would have been more young ocean floor than today, and correspondingly less old floor […]. Thus, the average depth of the ocean would be shallower than it is today, and the volume of the ocean basin would be smaller than today. Water should have spilled onto the continent. Most now attribute the high sea level in the Cretaceous Period (145 to 65 million years ago) to unusually rapid creation of seafloor, and hence to a state when seafloor was younger on average than today.”
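
A toy calculation (my own, and only illustrative — the simple square-root rule used above overestimates the depth of old seafloor, which in reality flattens out beyond roughly 80 Myr) shows the direction of the effect described here: a basin whose seafloor is younger on average is shallower on average, and so holds less water.

import math

def mean_depth(oldest_age_myr):
    # Average of 2500 + 350*sqrt(t) over ages 0..T is 2500 + (2/3)*350*sqrt(T).
    return 2500 + (2.0 / 3.0) * 350 * math.sqrt(oldest_age_myr)

slow = mean_depth(180)   # oldest seafloor ~180 Myr, roughly today's situation
fast = mean_depth(80)    # a hypothetical ocean with much faster spreading
print(f"Mean depth, slow spreading: ~{slow:.0f} m")
print(f"Mean depth, fast spreading: ~{fast:.0f} m")
print(f"Basin shallower on average by ~{slow - fast:.0f} m")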

“Wilson focused on the two major differences between ordinary strike-slip faults, or transcurrent faults, and transform faults on fracture zones. (1) If transcurrent faulting occurred, slip should occur along the entire fracture zone; but for transform faulting, only the portion between the segments of spreading centres would be active. (2) The sense of slip on the faults would be opposite for these two cases: if right-lateral for one, then left-lateral for the other […] The occurrences of earthquakes along a fault provide the most convincing evidence that the fault is active. Slip on most faults and most deformation of the Earth’s crust to make mountains occurs not slowly and steadily on human timescales, but abruptly during earthquakes. Accordingly, a map of earthquakes is, to a first approximation, a map of active faults on which regions, such as lithospheric plates, slide past one another […] When an earthquake occurs, slip on a fault takes place. One side of the fault slides past the other so that slip is parallel to the plane of the fault; the opening of cracks, into which cows or people can fall, is rare and atypical. Repeated studies of earthquakes and the surface ruptures accompanying them show that the slip during an earthquake is representative of the sense of cumulative displacement that has occurred on faults over geologic timescales. Thus earthquakes give us snapshots of processes that occur over thousands to millions of years. Two aspects of a fault define it: the orientation of the fault plane, which can be vertical or gently dipping, and the sense of slip: the direction that one side of the fault moves with respect to the other […] To a first approximation, boundaries between plates are single faults. Thus, if we can determine both the orientation of the fault plane and the sense of slip on it during an earthquake, we can infer the direction that one plate moves with respect to the other. Often during earthquakes, but not always, slip on the fault offsets the Earth’s surface, and we can directly observe the sense of motion […]. In the deep ocean, however, this cannot be done as a general practice, and we must rely on more indirect methods.”

“Because seafloor spreading creates new seafloor at the mid-ocean ridges, the newly formed crust must find accommodation: either the Earth must expand or lithosphere must be destroyed at the same rate that it is created. […] for the Earth not to expand (or contract), the sum total of new lithosphere made at spreading centres must be matched by the removal, by subduction, of an equal amount of lithosphere at island arc structures. […] Abundant evidence […] shows that subduction of lithosphere does occur. […] The subduction process […] differs fundamentally from that of seafloor spreading, in that subduction is asymmetric. Whereas two plates are created and grow larger at equal rates at spreading centers (mid-ocean ridges and rises), the areal extent of only one plate decreases at a subduction zone. The reason for this asymmetry derives from the marked dependence of the strength of rock on temperature. […] At spreading centres, hot weak rock deforms easily as it rises at mid-ocean ridges, cools, and then becomes attached to one of the two diverging plates. At subduction zones, however, cold and therefore strong lithosphere resists bending and contortion. […] two plates of lithosphere, each some 100 km thick, cannot simply approach one another, turn sharp corners […], and dive steeply into the asthenosphere. Much less energy is dissipated if one plate undergoes modest flexure and then slides at a gentle angle beneath the other, than if both plates were to undergo pronounced bending and then plunged together steeply into the asthenosphere. Nature takes the easier, energetically more efficient, process. […] Before it plunges beneath the island arc, the subducting plate of lithosphere bends down gently to cause a deep-sea trench […] As the plate bends down to form the trench, the lithosphere seaward of the trench is flexed upwards slightly. […] the outer topographic rise […] will be lower but wider for thicker lithosphere.”

Plate tectonics.
Andrija Mohorovičić. Mohorovičić discontinuity.
Archimedes’ principle.
Isostasy.
Harold Jeffreys. Keith Edward Bullen. Edward A. Irving. Harry Hammond Hess. Henry William Menard. Maurice Ewing.
Paleomagnetism.
Lithosphere. Asthenosphere.
Mid-ocean ridge. Bathymetry. Mid-Atlantic Ridge. East Pacific Rise. Seafloor spreading.
Fracture zone. Strike-slip fault. San Andreas Fault.
World-Wide Standardized Seismograph Network (USGS).
Vine–Matthews–Morley hypothesis.
Geomagnetic reversal. Proton precession magnetometer. Jaramillo (normal) event.
Potassium–argon dating.
Deep Sea Drilling Project.
“McKenzie Equations” for magma migration.
Transform fault.
Mendocino Fracture Zone.
Subduction.
P-wave. S-wave. Fault-plane solution. Compressional waves.
Triple junction.

December 23, 2017 Posted by | Books, Geology, Physics | Leave a comment

The Periodic Table

“After evolving for nearly 150 years through the work of numerous individuals, the periodic table remains at the heart of the study of chemistry. This is mainly because it is of immense practical benefit for making predictions about all manner of chemical and physical properties of the elements and possibilities for bond formation. Instead of having to learn the properties of the 100 or more elements, the modern chemist, or the student of chemistry, can make effective predictions from knowing the properties of typical members of each of the eight main groups and those of the transition metals and rare earth elements.”

I wasn’t very impressed with this book, but it wasn’t terrible. It didn’t include a lot of stuff I didn’t already know, and in my opinion it focused excessively on historical aspects; some of those things were interesting, for example the problems that confronted chemists trying to make sense of how best to categorize chemical elements in the late 19th century, before the discovery of the neutron (the number of protons in the nucleus is not the same thing as the atomic weight of an atom – which was highly relevant because: “when it came to deciding upon the most important criterion for classifying the elements, Mendeleev insisted that atomic weight ordering would tolerate no exceptions”), but I’d have liked to learn a lot more about e.g. some of the chemical properties of the subgroups, instead of just revisiting stuff I’d learned earlier from other publications in the series. However, I assume people who are new to chemistry – or who have forgotten a lot and would like to rectify this – might feel differently about the book and the way it covers the material included. Either way, I don’t think this is one of the best publications in the physics/chemistry categories of this OUP series.

Some quotes and links below.

“Lavoisier held that an element should be defined as a material substance that has yet to be broken down into any more fundamental components. In 1789, Lavoisier published a list of 33 simple substances, or elements, according to this empirical criterion. […] the discovery of electricity enabled chemists to isolate many of the more reactive elements, which, unlike copper and iron, could not be obtained by heating their ores with charcoal (carbon). There have been a number of major episodes in the history of chemistry when half a dozen or so elements were discovered within a period of a few years. […] Following the discovery of radioactivity and nuclear fission, yet more elements were discovered. […] Today, we recognize about 90 naturally occurring elements. Moreover, an additional 25 or so elements have been artificially synthesized.”

“Chemical analogies between elements in the same group are […] of great interest in the field of medicine. For example, the element beryllium sits at the top of group 2 of the periodic table and above magnesium. Because of the similarity between these two elements, beryllium can replace the element magnesium that is essential to human beings. This behaviour accounts for one of the many ways in which beryllium is toxic to humans. Similarly, the element cadmium lies directly below zinc in the periodic table, with the result that cadmium can replace zinc in many vital enzymes. Similarities can also occur between elements lying in adjacent positions in rows of the periodic table. For example, platinum lies next to gold. It has long been known that an inorganic compound of platinum called cis-platin can cure various forms of cancer. As a result, many drugs have been developed in which gold atoms are made to take the place of platinum, and this has produced some successful new drugs. […] [R]ubidium […] lies directly below potassium in group 1 of the table. […] atoms of rubidium can mimic those of potassium, and so like potassium can easily be absorbed into the human body. This behaviour is exploited in monitoring techniques, since rubidium is attracted to cancers, especially those occurring in the brain.”

“Each horizontal row represents a single period of the table. On crossing a period, one passes from metals such as potassium and calcium on the left, through transition metals such as iron, cobalt, and nickel, then through some semi-metallic elements like germanium, and on to some non-metals such as arsenic, selenium, and bromine, on the right side of the table. In general, there is a smooth gradation in chemical and physical properties as a period is crossed, but exceptions to this general rule abound […] Metals themselves can vary from soft dull solids […] to hard shiny substances […]. Non-metals, on the other hand, tend to be solids or gases, such as carbon and oxygen respectively. In terms of their appearance, it is sometimes difficult to distinguish between solid metals and solid non-metals. […] The periodic trend from metals to non-metals is repeated with each period, so that when the rows are stacked, they form columns, or groups, of similar elements. Elements within a single group tend to share many important physical and chemical properties, although there are many exceptions.”

“There have been quite literally over 1,000 periodic tables published in print […] One of the ways of classifying the periodic tables that have been published is to consider three basic formats. First of all, there are the originally produced short-form tables published by the pioneers of the periodic table like Newlands, Lothar Meyer, and Mendeleev […] These tables essentially crammed all the then known elements into eight vertical columns or groups. […] As more information was gathered on the properties of the elements, and as more elements were discovered, a new kind of arrangement called the medium-long-form table […] began to gain prominence. Today, this form is almost completely ubiquitous. One odd feature is that the main body of the table does not contain all the elements. […] The ‘missing’ elements are grouped together in what looks like a separate footnote that lies below the main table. This act of separating off the rare earth elements, as they have traditionally been called, is performed purely for convenience. If it were not carried out, the periodic table would appear much wider, 32 elements wide to be precise, instead of 18 elements wide. The 32-wide element format does not lend itself readily to being reproduced on the inside cover of chemistry textbooks or on large wall-charts […] if the elements are shown in this expanded form, as they sometimes are, one has the long-form periodic table, which may be said to be more correct than the familiar medium-long form, in the sense that the sequence of elements is unbroken […] there are many forms of the periodic table, some designed for different uses. Whereas a chemist might favour a form that highlights the reactivity of the elements, an electrical engineer might wish to focus on similarities and patterns in electrical conductivities.”

“The periodic law states that after certain regular but varying intervals, the chemical elements show an approximate repetition in their properties. […] This periodic repetition of properties is the essential fact that underlies all aspects of the periodic system. […] The varying length of the periods of elements and the approximate nature of the repetition has caused some chemists to abandon the term ‘law’ in connection with chemical periodicity. Chemical periodicity may not seem as law-like as most laws of physics. […] A modern periodic table is much more than a collection of groups of elements showing similar chemical properties. In addition to what may be called ‘vertical relationships’, which embody triads of elements, a modern periodic table connects together groups of elements into an orderly sequence. A periodic table consists of a horizontal dimension, containing dissimilar elements, as well as a vertical dimension with similar elements.”

“[I]n modern terms, metals form positive ions by the loss of electrons, while non-metals gain electrons to form negative ions. Such oppositely charged ions combine together to form neutrally charged salts like sodium chloride or calcium bromide. There are further complementary aspects of metals and non-metals. Metal oxides or hydroxides dissolve in water to form bases, while non-metal oxides or hydroxides dissolve in water to form acids. An acid and a base react together in a ‘neutralization’ reaction to form a salt and water. Bases and acids, just like metals and non-metals from which they are formed, are also opposite but complementary.”

“[T]he law of constant proportion, [is] the fact that when two elements combine together, they do so in a constant ratio of their weights. […] The fact that macroscopic samples consist of a fixed ratio by weight of two elements reflects the fact that two particular atoms are combining many times over and, since they have particular masses, the product will also reflect that mass ratio. […] the law of multiple proportions [refers to the fact that] [w]hen one element A combines with another one, B, to form more than one compound, there is a simple ratio between the combining masses of B in the two compounds. For example, carbon and oxygen combine together to form carbon monoxide and carbon dioxide. The weight of combined oxygen in the dioxide is twice as much as the weight of combined oxygen in the monoxide.”
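
The carbon monoxide/carbon dioxide example is easy to check with standard atomic weights (my numbers, not the book's):

C, O = 12.011, 15.999    # standard atomic weights

o_per_c_in_co = O / C         # grams of oxygen combined per gram of carbon
o_per_c_in_co2 = 2 * O / C

print(f"CO : {o_per_c_in_co:.3f} g of O per g of C")
print(f"CO2: {o_per_c_in_co2:.3f} g of O per g of C")
print(f"Simple ratio: {o_per_c_in_co2 / o_per_c_in_co:.1f}")   # -> 2.0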

“One of his greatest triumphs, and perhaps the one that he is best remembered for, is Mendeleev’s correct prediction of the existence of several new elements. In addition, he corrected the atomic weights of some elements as well as relocating other elements to new positions within the periodic table. […] But not all of Mendeleev’s predictions were so dramatically successful, a feature that seems to be omitted from most popular accounts of the history of the periodic table. […] he was unsuccessful in as many as nine out of his eighteen published predictions […] some of the elements involved the rare earths which resemble each other very closely and which posed a major challenge to the periodic table for many years to come. […] The discovery of the inert gases at the end of the 19th century [also] represented an interesting challenge to the periodic system […] in spite of Mendeleev’s dramatic predictions of many other elements, he completely failed to predict this entire group of elements (He, Ne, Ar, Kr, Xe, Rn). Moreover, nobody else predicted these elements or even suspected their existence. The first of them to be isolated was argon, in 1894 […] Mendeleev […] could not accept the notion that elements could be converted into different ones. In fact, after the Curies began to report experiments that suggested the breaking up of atoms, Mendeleev travelled to Paris to see the evidence for himself, close to the end of his life. It is not clear whether he accepted this radical new notion even after his visit to the Curie laboratory.”

“While chemists had been using atomic weights to order the elements there had been a great deal of uncertainty about just how many elements remained to be discovered. This was due to the irregular gaps that occurred between the values of the atomic weights of successive elements in the periodic table. This complication disappeared when the switch was made to using atomic number. Now the gaps between successive elements became perfectly regular, namely one unit of atomic number. […] The discovery of isotopes […] came about partly as a matter of necessity. The new developments in atomic physics led to the discovery of a number of new elements such as Ra, Po, Rn, and Ac which easily assumed their rightful places in the periodic table. But in addition, 30 or so more apparent new elements were discovered over a short period of time. These new species were given provisional names like thorium emanation, radium emanation, actinium X, uranium X, thorium X, and so on, to indicate the elements which seemed to be producing them. […] To Soddy, the chemical inseparability [of such elements] meant only one thing, namely that these were two forms, or more, of the same chemical element. In 1913, he coined the term ‘isotopes’ to signify two or more atoms of the same element which were chemically completely inseparable, but which had different atomic weights.”

“The popular view reinforced in most textbooks is that chemistry is nothing but physics ‘deep down’ and that all chemical phenomena, and especially the periodic system, can be developed on the basis of quantum mechanics. […] This is important because chemistry books, especially textbooks aimed at teaching, tend to give the impression that our current explanation of the periodic system is essentially complete. This is just not the case […] the energies of the quantum states for any many-electron atom can be approximately calculated from first principles although there is extremely good agreement with observed energy values. Nevertheless, some global aspects of the periodic table have still not been derived from first principles to this day. […] We know where the periods close because we know that the noble gases occur at elements 2, 10, 18, 36, 54, etc. Similarly, we have a knowledge of the order of orbital filling from observations but not from theory. The conclusion, seldom acknowledged in textbook accounts of the explanation of the periodic table, is that quantum physics only partly explains the periodic table. Nobody has yet deduced the order of orbital filling from the principles of quantum mechanics. […] The situation that exists today is that chemistry, and in particular the periodic table, is regarded as being fully explained by quantum mechanics. Even though this is not quite the case, the explanatory role that the theory continues to play is quite undeniable. But what seems to be forgotten […] is that the periodic table led to the development of many aspects of modern quantum mechanics, and so it is rather short-sighted to insist that only the latter explains the former.”
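
The ‘order of orbital filling from observations but not from theory’ referred to here is usually summarized by the empirical Madelung (n + l) rule, which also appears in the link list below. A short Python sketch (mine, not from the book) generates the conventional filling order from that rule and reproduces the electron counts at which the noble gases occur (2, 10, 18, 36, 54, 86) — the rule itself being empirical is exactly the chapter's point.

def madelung_order(max_n=7):
    # Fill subshells in order of increasing n + l, ties broken by smaller n.
    orbitals = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(orbitals, key=lambda nl: (nl[0] + nl[1], nl[0]))

letters = "spdfghi"
total = 0
for n, l in madelung_order():
    total += 2 * (2 * l + 1)          # electrons the subshell can hold
    print(f"{n}{letters[l]} filled -> {total} electrons")
    if total >= 118:                  # stop at the end of period 7
        break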

“[N]uclei with an odd number of protons are invariably more unstable than those with an even number of protons. This difference in stability occurs because protons, like electrons, have a spin of one half and enter into energy orbitals, two by two, with opposite spins. It follows that even numbers of protons frequently produce total spins of zero and hence more stable nuclei than those with unpaired proton spins as occurs in nuclei with odd numbers of protons […] The larger the nuclear charge, the faster the motion of inner shell electrons. As a consequence of gaining relativistic speeds, such inner electrons are drawn closer to the nucleus, and this in turn has the effect of causing greater screening on the outermost electrons which determine the chemical properties of any particular element. It has been predicted that some atoms should behave chemically in a manner that is unexpected from their presumed positions in the periodic table. Relativistic effects thus pose the latest challenge to test the universality of the periodic table. […] The conclusion [however] seems to be that chemical periodicity is a remarkably robust phenomenon.”
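
The ‘relativistic speeds’ of inner-shell electrons can be estimated with the standard hydrogen-like rule of thumb v/c ≈ Zα, where α ≈ 1/137 is the fine-structure constant (this estimate is my addition, not the book's):

alpha = 1 / 137.036    # fine-structure constant
for name, Z in (("iron", 26), ("silver", 47), ("gold", 79)):
    print(f"{name} (Z={Z}): 1s electrons move at roughly {Z * alpha:.0%} of c")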

Some links:

Periodic table.
History of the periodic table.
IUPAC.
Jöns Jacob Berzelius.
Valence (chemistry).
Equivalent weight. Atomic weight. Atomic number.
Rare-earth element. Transuranium element. Glenn T. Seaborg. Island of stability.
Old quantum theory. Quantum mechanics. Electron configuration.
Benjamin Richter. John Dalton. Joseph Louis Gay-Lussac. Amedeo Avogadro. Leopold Gmelin. Alexandre-Émile Béguyer de Chancourtois. John Newlands. Gustavus Detlef Hinrichs. Julius Lothar Meyer. Dmitri Mendeleev. Henry Moseley. Antonius van den Broek.
Diatomic molecule.
Prout’s hypothesis.
Döbereiner’s triads.
Karlsruhe Congress.
Noble gas.
Einstein’s theory of Brownian motion. Jean Baptiste Perrin.
Quantum number. Molecular orbitals. Madelung energy ordering rule.
Gilbert N. Lewis. (“G. N. Lewis is possibly the most significant chemist of the 20th century not to have been awarded a Nobel Prize.”) Irving Langmuir. Niels Bohr. Erwin Schrödinger.
Ionization energy.
Synthetic element.
Alternative periodic tables.
Group 3 element.

December 18, 2017 Posted by | Books, Chemistry, Medicine, Physics | Leave a comment

Nuclear Power (II)

This is my second and last post about the book. Some more links and quotes below.

“Many of the currently operating reactors were built in the late 1960s and 1970s. With a global hiatus on nuclear reactor construction following the Three Mile Island incident and the Chernobyl disaster, there is a dearth of nuclear power replacement capacity as the present fleet faces decommissioning. Nuclear power stations, like coal-, gas-, and oil-fired stations, produce heat to generate electricity and all require water for cooling. The US Geological Survey estimates that this use of water for cooling power stations accounts for over 3% of all water consumption. Most nuclear power plants are built close to the sea so that the ocean can be used as a heat dump. […] The need for such large quantities of water inhibits the use of nuclear power in arid regions of the world. […] The higher the operating temperature, the greater the water usage. […] [L]arge coal, gas and nuclear plants […] can consume millions of litres per hour”.

“A nuclear reactor is utilizing the strength of the force between nucleons while hydrocarbon burning is relying on the chemical bonding between molecules. Since the nuclear bonding is of the order of a million times stronger than the chemical bonding, the mass of hydrocarbon fuel necessary to produce a given amount of energy is about a million times greater than the equivalent mass of nuclear fuel. Thus, while a coal station might burn millions of tonnes of coal per year, a nuclear station with the same power output might consume a few tonnes.”
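
The ‘million times’ factor is easy to sanity-check with a back-of-the-envelope calculation (mine, not the book's; ~200 MeV per fission and ~30 MJ/kg for coal are standard rounded figures):

N_A = 6.022e23          # Avogadro's number
MeV = 1.602e-13         # joules per MeV

energy_per_fission = 200 * MeV             # ~200 MeV released per fission
atoms_per_kg_u235 = N_A * 1000 / 235
uranium_j_per_kg = energy_per_fission * atoms_per_kg_u235   # ~8e13 J/kg

coal_j_per_kg = 30e6                       # ~30 MJ/kg, typical for coal

print(f"Complete fission of U-235: ~{uranium_j_per_kg:.1e} J/kg")
print(f"Burning coal:              ~{coal_j_per_kg:.1e} J/kg")
print(f"Ratio:                     ~{uranium_j_per_kg / coal_j_per_kg:.1e}")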

“There are a number of reasons why one might wish to reprocess the spent nuclear fuel. These include: to produce plutonium either for nuclear weapons or, increasingly, as a fuel-component for fast reactors; the recycling of all actinides for fast-breeder reactors, closing the nuclear fuel cycle, greatly increasing the energy extracted from natural uranium; the recycling of plutonium in order to produce mixed oxide fuels for thermal reactors; recovering enriched uranium from spent fuel to be recycled through thermal reactors; to extract expensive isotopes which are of value to medicine, agriculture, and industry. An integral part of this process is the management of the radioactive waste. Currently 40% of all nuclear fuel is obtained by reprocessing. […] The La Hague site is the largest reprocessing site in the world, with over half the global capacity at 1,700 tonnes of spent fuel per year. […] The world’s largest user of nuclear power, the USA, currently does not reprocess its fuel and hence produces [large] quantities of radioactive waste. […] The principal reprocessors of radioactive waste are France and the UK. Both countries receive material from other countries and after reprocessing return the raffinate to the country of origin for final disposition.”

“Nearly 45,000 tonnes of uranium are mined annually. More than half comes from the three largest producers, Canada, Kazakhstan, and Australia.”

“The designs of nuclear installations are required to be passed by national nuclear licensing agencies. These include strict safety and security features. The international standard for the integrity of a nuclear power plant is that it would withstand the crash of a Boeing 747 Jumbo Jet without the release of hazardous radiation beyond the site boundary. […] At Fukushima, the design was to current safety standards, taking into account the possibility of a severe earthquake; what had not been allowed for was the simultaneous tsunami strike.”

“The costing of nuclear power is notoriously controversial. Opponents point to the past large investments made in nuclear research and would like to factor this into the cost. There are always arguments about whether or not decommissioning costs and waste-management costs have been properly accounted for. […] which electricity source is most economical will vary from country to country […]. As with all industrial processes, there can be economies of scale. In the USA, and particularly in the UK, these economies of scale were never fully realized. In the UK, while several Magnox and AGR reactors were built, no two were of exactly the same design, resulting in no economies in construction costs, component manufacture, or staff training programmes. The issue is compounded by the high cost of licensing new designs. […] in France, the Regulatory Commission agreed a standard design for all plants and used a safety engineering process similar to that used for licensing aircraft. Public debate was thereafter restricted to local site issues. Economies of scale were achieved.”

“[C]onstruction costs […] are the largest single factor in the cost of nuclear electricity generation. […] Because the raw fuel is such a small fraction of the cost of nuclear power generation, the cost of electricity is not very sensitive to the cost of uranium, unlike the fossil fuels, for which fuel can represent up to 70% of the cost. Operating costs for nuclear plants have fallen dramatically as the French practice of standardization of design has spread. […] Generation III+ reactors are claimed to be half the size and capable of being built in much shorter times than the traditional PWRs. The 2008 contracted capital cost of building new plants containing two AP1000 reactors in the USA is around $10–$14 billion, […] There is considerable experience of decommissioning of nuclear plants. In the USA, the cost of decommissioning a power plant is approximately $350 million. […] In France and Sweden, decommissioning costs are estimated to be 10–15% of construction costs and are included in the price charged for electricity. […] The UK has by far the highest estimates for decommissioning which are set at £1 billion per reactor. This exceptionally high figure is in part due to the much larger reactor core associated with graphite moderated piles. […] It is clear that in many countries nuclear-generated electricity is commercially competitive with fossil fuels despite the need to include the cost of capital and all waste disposal and decommissioning (factors that are not normally included for other fuels). […] At the present time, without the market of taxes and grants, electricity generated from renewable sources is generally more expensive than that from nuclear power or fossil fuels. This leaves the question: if nuclear power is so competitive, why is there not a global rush to build new nuclear power stations? The answer lies in the time taken to recoup investments. Investors in a new gas-fired power station can expect to recover their investment within 15 years. Because of the high capital start-up costs, nuclear power stations yield a slower rate of return, even though over the lifetime of the plant the return may be greater.”
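
The point about payback time versus lifetime return can be illustrated with a stylized discounted-cash-flow sketch; every number below (capital costs, margins, lifetimes, the 7% discount rate) is hypothetical and chosen only to show the shape of the argument, not to describe real plants:

def project(capital, annual_margin, lifetime_years, rate=0.07):
    # Discounted cash flow: returns (discounted payback year, lifetime NPV).
    npv, payback = -capital, None
    for year in range(1, lifetime_years + 1):
        npv += annual_margin / (1 + rate) ** year
        if payback is None and npv >= 0:
            payback = year
    return payback, npv

# 'Gas-like': cheap to build, thinner margins, shorter life.
# 'Nuclear-like': expensive to build, fatter margins once running, longer life.
gas_like = project(capital=1.0, annual_margin=0.12, lifetime_years=30)
nuclear_like = project(capital=5.0, annual_margin=0.45, lifetime_years=60)
print(f"gas-like:     payback in year {gas_like[0]}, lifetime NPV {gas_like[1]:.2f}")
print(f"nuclear-like: payback in year {nuclear_like[0]}, lifetime NPV {nuclear_like[1]:.2f}")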

“Throughout the 20th century, the population and GDP growth combined to drive the [global] demand for energy to increase at a rate of 4% per annum […]. The most conservative estimate is that the demand for energy will see global energy requirements double between 2000 and 2050. […] The demand for electricity is growing at twice the rate of the demand for energy. […] More than two-thirds of all electricity is generated by burning fossil fuels. […] The most rapidly growing renewable source of electricity generation is wind power […] wind is an intermittent source of electricity. […] The intermittency of wind power leads to [a] problem. The grid management has to supply a steady flow of electricity. Intermittency requires a heavy overhead on grid management, and there are serious concerns about the ability of national grids to cope with more than a 20% contribution from wind power. […] As for the other renewables, solar and geothermal power, significant electricity generation will be restricted to latitudes 40°S to 40°N and regions of suitable geological structures, respectively. Solar power and geothermal power are expected to increase but will remain a small fraction of the total electricity supply. […] In most industrialized nations, the current electricity supply is via a regional, national, or international grid. The electricity is generated in large (~1GW) power stations. This is a highly efficient means of electricity generation and distribution. If the renewable sources of electricity generation are to become significant, then a major restructuring of the distribution infrastructure will be necessary. While local ‘microgeneration’ can have significant benefits for small communities, it is not practical for the large-scale needs of big industrial cities in which most of the world’s population live.”
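
Two quick growth-rate sanity checks related to the figures quoted above (my arithmetic, not the book's):

import math

# Doubling time implied by the 20th-century growth rate of ~4% per year:
print(f"At 4%/yr, demand doubles every ~{math.log(2) / math.log(1.04):.0f} years")

# By contrast, a mere doubling between 2000 and 2050 corresponds to:
print(f"Doubling over 50 years needs only ~{2 ** (1 / 50) - 1:.1%} per year")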

“Electricity cannot be stored in large quantities. If the installed generating capacity is designed to meet peak demand, there will be periods when the full capacity is not required. In most industrial countries, the average demand is only about one-third of peak consumption.”

Links:

Nuclear reprocessing. La Hague site. Radioactive waste. Yucca Mountain nuclear waste repository.
Bismuth phosphate process.
Nuclear decommissioning.
Uranium mining. Open-pit mining.
Wigner effect (Wigner heating). Windscale fire. Three Mile Island accident. Chernobyl disaster. Fukushima Daiichi nuclear disaster.
Fail-safe (engineering).
Treaty on the Non-Proliferation of Nuclear Weapons.
Economics of nuclear power plants.
Fusion power. Tokamak. ITER. High Power laser Energy Research facility (HiPER).
Properties of plasma.
Klystron.
World energy consumption by fuel source. Renewable energy.


December 16, 2017 Posted by | Books, Chemistry, Economics, Engineering, Physics | Leave a comment

Nuclear power (I)

I originally gave the book 2 stars, but after I had finished this post I changed that rating to 3 stars (which was not that surprising; already when I wrote my goodreads review, shortly after reading the book, I was conflicted about whether or not it deserved the third star). One thing that kept me from giving the book a higher rating was that I thought the author did not spend enough time on ‘the basic concepts’, a problem I also highlighted in my goodreads review. Fortunately I had recently covered some of those concepts in other books in the series, so it wasn’t too hard for me to follow what was going on, but as sometimes happens to authors of books in this series, I think the author was simply trying to cover too much material. But even so, this is a nice introductory text on the topic.

I have added some links and quotes related to the first half or so of the book below. I prepared the link list before I started gathering quotes for my coverage, so there may be more overlap than usual between the topics covered in the quotes and those covered in the links (I normally tend to reserve the links for topics and concepts covered in these books which I don’t find it necessary to cover in detail in the text – the links are meant to remind me of/indicate which sorts of topics are also covered in the book, aside from those included in the text coverage).

“According to Einstein’s mass–energy equation, the mass of any composite stable object has to be less than the sum of the masses of the parts; the difference is the binding energy of the object. […] The general features of the binding energies are simply understood as follows. We have seen that the measured radii of nuclei [increase] with the cube root of the mass number A. This is consistent with a structure of close packed nucleons. If each nucleon could only interact with its closest neighbours, the total binding energy would then itself be proportional to the number of nucleons. However, this would be an overestimate because nucleons at the surface of the nucleus would not have a complete set of nearest neighbours with which to interact […]. The binding energy would be reduced by the number of surface nucleons and this would be proportional to the surface area, itself proportional to A^(2/3). So far we have considered only the attractive short-range nuclear binding. However, the protons carry an electric charge and hence experience an electrical repulsion between each other. The electrical force between two protons is much weaker than the nuclear force at short distances but dominates at larger distances. Furthermore, the total electrical contribution increases with the number of pairs of protons.”

“The main characteristics of the empirical binding energy of nuclei […] can now be explained. For the very light nuclei, all the nucleons are in the surface, the electrical repulsion is negligible, and the binding energy increases as the volume and number of nucleons increases. Next, the surface effects start to slow the rate of growth of the binding energy yielding a region of most stable nuclei near charge number Z = 26 (iron). Finally, the electrical repulsion steadily increases until we reach the most massive stable nucleus (lead-208). Between iron and lead, not only does the binding energy decrease, so also do the proton to neutron ratios since the neutrons do not experience the electrical repulsion. […] as the nuclei get heavier the Coulomb repulsion term requires an increasing number of neutrons for stability […] For an explanation of [the] peaks, we must turn to the quantum nature of the problem. […] Filled shells corresponded to particularly stable electronic structures […] In the nuclear case, a shell structure also exists separately for both the neutrons and the protons. […] Closed-shell nuclei are referred to as ‘magic number’ nuclei. […] there is a particular stability for nuclei with equal numbers of protons and neutrons.”
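
The two passages above are essentially a prose version of the semi-empirical (liquid-drop) mass formula. A minimal Python sketch of it (my addition — the coefficients are standard textbook fits in MeV, not values from the book, and the pairing/shell corrections discussed above are omitted) reproduces the broad features: a binding energy per nucleon that peaks near iron and declines towards uranium.

def binding_energy(Z, A):
    # Standard textbook coefficients (MeV); pairing/shell terms omitted.
    a_v, a_s, a_c, a_a = 15.8, 18.3, 0.714, 23.2
    N = A - Z
    return (a_v * A                              # volume (bulk attraction)
            - a_s * A ** (2 / 3)                 # surface nucleons less bound
            - a_c * Z * (Z - 1) / A ** (1 / 3)   # proton-proton repulsion
            - a_a * (N - Z) ** 2 / A)            # penalty for n/p imbalance

for name, Z, A in (("oxygen-16", 8, 16), ("iron-56", 26, 56),
                   ("lead-208", 82, 208), ("uranium-238", 92, 238)):
    print(f"{name:11s}: ~{binding_energy(Z, A) / A:.1f} MeV per nucleon")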

“As we move off the line of stable nuclei, by adding or subtracting neutrons, the isotopes become increasingly less stable indicated by increasing levels of beta radioactivity. Nuclei with a surfeit of neutrons emit an electron, hence converting one of the neutrons into a proton, while isotopes with a neutron deficiency can emit a positron with the conversion of a proton into a neutron. For the heavier nuclei, the numbers of both protons and neutrons can be reduced by emitting an alpha particle. All nuclei heavier than lead are unstable and hence radioactive alpha emitters. […] The fact that almost all the radioactive isotopes heavier than lead follow [a] kind of decay chain and end up as stable isotopes of lead explains this element’s anomalously high natural abundance.”

“When two particles collide, they transfer energy and momentum between themselves. […] If the target is much lighter than the projectile, the projectile sweeps it aside with little loss of energy and momentum. If the target is much heavier than the projectile, the projectile simply bounces off the target with little loss of energy. The maximum transfer of energy occurs when the target and the projectile have the same mass. In trying to slow down the neutrons, we need to pass them through a moderator containing scattering centres of a similar mass. The obvious candidate is hydrogen, in which the single proton of the nucleus is the particle closest in mass to the neutron. At first glance, it would appear that water, with its low cost and high hydrogen content, would be the ideal moderator. There is a problem, however. Slow neutrons can combine with protons to form an isotope of hydrogen, deuterium. This removes neutrons from the chain reaction. To overcome this, the uranium fuel has to be enriched by increasing the proportion of uranium-235; this is expensive and technically difficult. An alternative is to use heavy water, that is, water in which the hydrogen is replaced by deuterium. It is not quite as effective as a moderator but it does not absorb neutrons. Heavy water is more expensive and its production more technically demanding than natural water. Finally, graphite (carbon) has a mass of 12 and hence is less efficient requiring a larger reactor core, but it is inexpensive and easily available.”
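
The ‘equal mass’ argument for moderators can be quantified with the standard textbook measure of moderating power, the average logarithmic energy loss per elastic collision. The sketch below (mine, not the book's) estimates how many collisions it takes to slow a 2 MeV fission neutron to thermal energies in hydrogen, deuterium, and carbon:

import math

def xi(A):
    # Average logarithmic energy loss per elastic collision off mass number A.
    if A == 1:
        return 1.0
    return 1 + (A - 1) ** 2 / (2 * A) * math.log((A - 1) / (A + 1))

E_fast, E_thermal = 2.0e6, 0.025   # eV
for name, A in (("hydrogen", 1), ("deuterium", 2), ("carbon", 12)):
    collisions = math.log(E_fast / E_thermal) / xi(A)
    print(f"{name:9s}: ~{collisions:.0f} collisions to thermalize")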

“[During the Manhattan Project,] Oak Ridge, Tennessee, was chosen as the facility to develop techniques for uranium enrichment (increasing the relative abundance of uranium-235) […] a giant gaseous diffusion facility was developed. Gaseous uranium hexafluoride was forced through a semi-permeable membrane. The lighter isotopes passed through faster and at each pass through the membrane the uranium hexafluoride became more and more enriched. The technology is very energy-consuming […]. At its peak, Oak Ridge consumed more electricity than New York and Washington DC combined. Almost one-third of all enriched uranium is still produced by this now obsolete technology. The bulk of enriched uranium today is produced in high-speed centrifuges which require much less energy.”
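
One reason gaseous diffusion is so energy-hungry is that the ideal separation factor per pass through the membrane is only the square root of the ratio of the molecular masses of the two uranium hexafluoride species — a standard textbook estimate (my addition, not a figure from the book):

import math

m_light = 235 + 6 * 19    # molecular mass of 235-UF6 (= 349)
m_heavy = 238 + 6 * 19    # molecular mass of 238-UF6 (= 352)
alpha = math.sqrt(m_heavy / m_light)
print(f"Ideal separation factor per stage: {alpha:.4f}")       # ~1.0043

# Ideal number of stages to enrich from natural (0.72%) to ~3.5% U-235,
# working with abundance ratios R = x / (1 - x); real plants need more.
R0, R1 = 0.0072 / 0.9928, 0.035 / 0.965
print(f"Minimum stages required: ~{math.log(R1 / R0) / math.log(alpha):.0f}")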

“In order to sustain a nuclear chain reaction, it is essential to have a critical mass of fissile material. This mass depends upon the fissile fuel being used and the topology of the structure containing it. […] The chain reaction is maintained by the neutrons and many of these leave the surface without contributing to the reaction chain. Surrounding the fissile material with a blanket of neutron reflecting material, such as beryllium metal, will keep the neutrons in play and reduce the critical mass. Partially enriched uranium will have an increased critical mass and natural uranium (0.7% uranium-235) will not go critical at any mass without a moderator to increase the number of slow neutrons which are the dominant fission triggers. The critical mass can also be decreased by compressing the fissile material.”

“It is now more than 50 years since operations of the first civil nuclear reactor began. In the intervening years, several hundred reactors have been operating, in total amounting to nearly 50 million hours of experience. This cumulative experience has led to significant advances in reactor design. Different reactor types are defined by their choice of fuel, moderator, control rods, and coolant systems. The major advances leading to greater efficiency, increased economy, and improved safety are referred to as ‘generations’. […] [F]irst generation reactors […] had the dual purpose to make electricity for public consumption and plutonium for the Cold War stockpiles of nuclear weapons. Many of the features of the design were incorporated to meet the need for plutonium production. These impacted on the electricity-generating cost and efficiency. The most important of these was the use of unenriched uranium due to the lack of large-scale enrichment plants in the UK, and the high uranium-238 content was helpful in the plutonium production but made the electricity generation less efficient.”

“PWRs, BWRs, and VVERs are known as LWRs (Light Water Reactors). LWRs dominate the world’s nuclear power programme, with the USA operating 69 PWRs and 35 BWRs; Japan operates 63 LWRs, the bulk of which are BWRs; and France has 59 PWRs. Between them, these three countries generate 56% of the world’s nuclear power. […] In the 1990s, a series of advanced versions of the Generation II and III reactors began to receive certification. These included the ACR (Advanced CANDU Reactor), the EPR (European Pressurized Reactor), and Westinghouse AP1000 and APR1400 reactors (all developments of the PWR) and ESBWR (a development of the BWR). […] The ACR uses slightly enriched uranium and a light water coolant, allowing the core to be halved in size for the same power output. […] It would appear that two of the Generation III+ reactors, the EPR […] and AP1000, are set to dominate the world market for the next 20 years. […] the EPR is considerably safer than current reactor designs. […] A major advance is that the generation 3+ reactors produce only about 10% of waste compared with earlier versions of LWRs. […] China has officially adopted the AP1000 design as a standard for future nuclear plants and has indicated a wish to see 100 nuclear plants under construction or in operation by 2020.”

“All thermal electricity-generating systems are examples of heat engines. A heat engine takes energy from a high-temperature environment to a low-temperature environment and in the process converts some of the energy into mechanical work. […] In general, the efficiency of the thermal cycle increases as the temperature difference between the low-temperature environment and the high-temperature environment increases. In PWRs, and nearly all thermal electricity-generating plants, the efficiency of the thermal cycle is 30–35%. At the much higher operating temperatures of Generation IV reactors, typically 850–1,000°C, it is hoped to increase this to 45–50%.
During the operation of a thermal nuclear reactor, there can be a build-up of fission products known as reactor poisons. These are materials with a large capacity to absorb neutrons and this can slow down the chain reaction; in extremes, it can lead to a complete close-down. Two important poisons are xenon-135 and samarium-149. […] During steady state operation, […] xenon builds up to an equilibrium level in 40–50 hours when a balance is reached between […] production […] and the burn-up of xenon by neutron capture. If the power of the reactor is increased, the amount of xenon increases to a higher equilibrium and the process is reversed if the power is reduced. If the reactor is shut down the burn-up of xenon ceases, but the build-up of xenon continues from the decay of iodine. Restarting the reactor is impeded by the higher level of xenon poisoning. Hence it is desirable to keep reactors running at full capacity as long as possible and to have the capacity to reload fuel while the reactor is on line. […] Nuclear plants operate at highest efficiency when operated continually close to maximum generating capacity. They are thus ideal for provision of base load. If their output is significantly reduced, then the build-up of reactor poisons can impact on their efficiency.”
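
The link between operating temperature and thermal efficiency quoted above can be bracketed with the Carnot limit, 1 − T_cold/T_hot. The outlet and cooling temperatures assumed below are only indicative (my illustration, not the book's):

def carnot(t_hot_c, t_cold_c=30.0):
    # Carnot limit from hot and cold temperatures given in degrees Celsius.
    return 1 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

print(f"PWR,    ~320 C coolant outlet: Carnot limit ~{carnot(320):.0%}")
print(f"Gen IV, ~900 C coolant outlet: Carnot limit ~{carnot(900):.0%}")
# Real plants reach roughly two-thirds of the Carnot limit, consistent with
# the 30-35% and 45-50% figures quoted above.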

Links:

Radioactivity. Alpha decay. Beta decay. Gamma decay. Free neutron decay.
Periodic table.
Rutherford scattering.
Isotope.
Neutrino. Positron. Antineutrino.
Binding energy.
Mass–energy equivalence.
Electron shell.
Decay chain.
Heisenberg uncertainty principle.
Otto Hahn. Lise Meitner. Fritz Strassman. Enrico Fermi. Leo Szilárd. Otto Frisch. Rudolf Peierls.
Uranium 238. Uranium 235. Plutonium.
Nuclear fission.
Chicago Pile 1.
Manhattan Project.
Uranium hexafluoride.
Heavy water.
Nuclear reactor coolant. Control rod.
Critical mass. Nuclear chain reaction.
Magnox reactor. UNGG reactor. CANDU reactor.
ZEEP.
Nuclear reactor classifications (a lot of the distinctions included in this article are also included in the book and described in some detail. The topics included here are also covered extensively).
USS Nautilus.
Nuclear fuel cycle.
Thorium-based nuclear power.
Heat engine. Thermodynamic cycle. Thermal efficiency.
Reactor poisoning. Xenon 135. Samarium 149.
Base load.

December 7, 2017 Posted by | Books, Chemistry, Engineering, Physics | Leave a comment

The history of astronomy

It’s been a while since I read this book, and for a while I was strongly considering not blogging it at all. In the end I figured I ought to cover it after all, in at least a little bit of detail, though when I made the decision to cover the book here I also decided not to cover it in nearly as much detail as I usually cover the books in this series.

Below some random observations from the book which I found sufficiently interesting to add here.

“The Almagest is a magisterial work that provided geometrical models and related tables by which the movements of the Sun, Moon, and the five lesser planets could be calculated for the indefinite future. […] Its catalogue contains over 1,000 fixed stars arranged in 48 constellations, giving the longitude, latitude, and apparent brightness of each. […] the Almagest would dominate astronomy like a colossus for 14 centuries […] In the universities of the later Middle Ages, students would be taught Aristotle in philosophy and a simplified Ptolemy in astronomy. From Aristotle they would learn the basic truth that the heavens rotate uniformly about the central Earth. From the simplified Ptolemy they would learn of epicycles and eccentrics that violated this basic truth by generating orbits whose centre was not the Earth; and those expert enough to penetrate deeper into the Ptolemaic models would encounter equant theories that violated the (yet more basic) truth that heavenly motion is uniform. […] with the models of the Almagest – whose parameters would be refined over the centuries to come – the astronomer, and the astrologer, could compute the future positions of the planets with economy and reasonable accuracy. There were anomalies – the Moon, for example, would vary its apparent size dramatically in the Ptolemaic model but does not do so in reality, and Venus and Mercury were kept close to the Sun in the sky by a crude ad hoc device – but as a geometrical compendium of how to grind out planetary tables, the Almagest worked, and that was what mattered.”

“The revival of astronomy – and astrology – among the Latins was stimulated around the end of the first millennium when the astrolabe entered the West from Islamic Spain. Astrology in those days had a [‘]rational[‘] basis rooted in the Aristotelian analogy between the microcosm – the individual living body – and the macrocosm, the cosmos as a whole. Medical students were taught how to track the planets, so that they would know when the time was favourable for treating the corresponding organs in their patients.” [Aaargh! – US]

“The invention of printing in the 15th century had many consequences, none more significant than the stimulus it gave to the mathematical sciences. All scribes, being human, made occasional errors in preparing a copy of a manuscript. These errors would often be transmitted to copies of the copy. But if the works were literary and the later copyists attended to the meaning of the text, they might recognize and correct many of the errors introduced by their predecessors. Such control could rarely be exercised by copyists required to reproduce texts with significant numbers of mathematical symbols. As a result, a formidable challenge faced the medieval student of a mathematical or astronomical treatise, for it was available to him only in a manuscript copy that had inevitably become corrupt in transmission. After the introduction of printing, all this changed.”

“Copernicus, like his predecessors, had been content to work with observations handed down from the past, making new ones only when unavoidable and using instruments that left much to be desired. Tycho [Brahe], whose work marks the watershed between observational astronomy ancient and modern, saw accuracy of observation as the foundation of all good theorizing. He dreamed of having an observatory where he could pursue the research and development of precision instrumentation, and where a skilled team of assistants would test the instruments even as they were compiling a treasury of observations. Exploiting his contacts at the highest level, Tycho persuaded King Frederick II of Denmark to grant him the fiefdom of the island of Hven, and there, between 1576 and 1580, he constructed Uraniborg (‘Heavenly Castle’), the first scientific research institution of the modern era. […] Tycho was the first of the modern observers, and in his catalogue of 777 stars the positions of the brightest are accurate to a minute or so of arc; but he himself was probably most proud of his cosmology, which Galileo was not alone in seeing as a retrograde compromise. Tycho appreciated the advantages of heliocentric planetary models, but he was also conscious of the objections […]. In particular, his inability to detect annual parallax even with his superb instrumentation implied that the Copernican excuse, that the stars were too far away for annual parallax to be detected, was now implausible in the extreme. The stars, he calculated, would have to be at least 700 times further away than Saturn for him to have failed for this reason, and such a vast, purposeless empty space between the planets and the stars made no sense. He therefore looked for a cosmology that would have the geometrical advantages of the heliocentric models but would retain the Earth as the body physically at rest at the centre of the cosmos. The solution seems obvious in hindsight: make the Sun (and Moon) orbit the central Earth, and make the five planets into satellites of the Sun.”

“Until the invention of the telescope, each generation of astronomers had looked at much the same sky as their predecessors. If they knew more, it was chiefly because they had more books to read, more records to mine. […] Galileo could say of his predecessors, ‘If they had seen what we see, they would have judged as we judge’; and ever since his time, the astronomers of each generation have had an automatic advantage over their predecessors, because they possess apparatus that allows them access to objects unseen, unknown, and therefore unstudied in the past. […] astronomers [for a long time] found themselves in a situation where, as telescopes improved, the two coordinates of a star’s position on the heavenly sphere were being measured with ever increasing accuracy, whereas little was known of the star’s third coordinate, distance, except that its scale was enormous. Even the assumption that the nearest stars were the brightest was […rightly, US] being called into question, as the number of known proper motions increased and it emerged that not all the fastest-moving stars were bright.”

“We know little of how Newton’s thinking developed between 1679 and the visit from Halley in 1684, except for a confused exchange of letters between Newton and the Astronomer Royal, John Flamsteed […] the visit from the suitably deferential and tactful Halley encouraged Newton to promise him written proof that elliptical orbits would result from an inverse-square force of attraction residing in the Sun. The drafts grew and grew, and eventually resulted in The Mathematical Principles of Natural Philosophy (1687), better known in its abbreviated Latin title of the Principia. […] All three of Kepler’s laws (the second in ‘area’ form), which had been derived by their author from observations, with the help of a highly dubious dynamics, were now shown to be consequences of rectilinear motion under an inverse-square force. […] As the drafts of Principia multiplied, so too did the number of phenomena that at last found their explanation. The tides resulted from the difference between the effects on the land and on the seas of the attraction of Sun and Moon. The spinning Earth bulged at the equator and was flattened at the poles, and so was not strictly spherical; as a result, the attraction of Sun and Moon caused the Earth’s axis to wobble and so generated the precession of the equinoxes first noticed by Hipparchus. […] Newton was able to use the observed motions of the moons of Earth, Jupiter, and Saturn to calculate the masses of the parent planets, and he found that Jupiter and Saturn were huge compared to Earth – and, in all probability, to Mercury, Venus, and Mars.”

December 5, 2017 Posted by | Astronomy, Books, History, Mathematics, Physics | Leave a comment

Radioactivity

A few quotes from the book and some related links below. Here’s my very short goodreads review of the book.

Quotes:

“The main naturally occurring radionuclides of primordial origin are uranium-235, uranium-238, thorium-232, their decay products, and potassium-40. The average abundance of uranium, thorium, and potassium in the terrestrial crust is 2.6 parts per million, 10 parts per million, and 1% respectively. Uranium and thorium produce other radionuclides via neutron- and alpha-induced reactions, particularly deeply underground, where uranium and thorium have a high concentration. […] A weak source of natural radioactivity derives from nuclear reactions of primary and secondary cosmic rays with the atmosphere and the lithosphere, respectively. […] Accretion of extraterrestrial material, intensively exposed to cosmic rays in space, represents a minute contribution to the total inventory of radionuclides in the terrestrial environment. […] Natural radioactivity is [thus] mainly produced by uranium, thorium, and potassium. The total heat content of the Earth, which derives from this radioactivity, is 12.6 × 10^24 MJ (one megajoule = 1 million joules), with the crust’s heat content standing at 5.4 × 10^21 MJ. For comparison, this is significantly more than the 6.4 × 10^13 MJ globally consumed for electricity generation during 2011. This energy is dissipated, either gradually or abruptly, towards the external layers of the planet, but only a small fraction can be utilized. The amount of energy available depends on the Earth’s geological dynamics, which regulates the transfer of heat to the surface of our planet. The total power dissipated by the Earth is 42 TW (one TW = 1 trillion watts): 8 TW from the crust, 32.3 TW from the mantle, 1.7 TW from the core. This amount of power is small compared to the 174,000 TW arriving to the Earth from the Sun.”
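
A quick back-of-the-envelope comparison of the figures in that paragraph (my own arithmetic, using only the numbers quoted above):

```python
# All input values are taken from the quote above.
earth_heat_content_MJ = 12.6e24     # total heat content of the Earth from radioactivity
crust_heat_content_MJ = 5.4e21      # heat content of the crust alone
electricity_2011_MJ = 6.4e13        # global electricity generation in 2011

total_power_TW = 42.0               # total power dissipated by the Earth
solar_power_TW = 174_000.0          # power arriving from the Sun

print(earth_heat_content_MJ / electricity_2011_MJ)  # ~2e11: total heat content vs one year of electricity
print(crust_heat_content_MJ / electricity_2011_MJ)  # ~8e7: even the crust alone dwarfs annual consumption
print(solar_power_TW / total_power_TW)              # ~4,100: solar input vs the Earth's internal heat flow
```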

“Charged particles such as protons, beta and alpha particles, or heavier ions that bombard human tissue dissipate their energy locally, interacting with the atoms via the electromagnetic force. This interaction ejects electrons from the atoms, creating a track of electron–ion pairs, or ionization track. The energy that ions lose per unit path, as they move through matter, increases with the square of their charge and decreases linearly with their energy […] The energy deposited in the tissues and organs of your body by ionizing radiation is defined absorbed dose and is measured in gray. The dose of one gray corresponds to the energy of one joule deposited in one kilogram of tissue. The biological damage wrought by a given amount of energy deposited depends on the kind of ionizing radiation involved. The equivalent dose, measured in sievert, is the product of the dose and a factor w related to the effective damage induced into the living matter by the deposit of energy by specific rays or particles. For X-rays, gamma rays, and beta particles, a gray corresponds to a sievert; for neutrons, a dose of one gray corresponds to an equivalent dose of 5 to 20 sievert, and the factor w is equal to 5–20 (depending on the neutron energy). For protons and alpha particles, w is equal to 5 and 20, respectively. There is also another weighting factor taking into account the radiosensitivity of different organs and tissues of the body, to evaluate the so-called effective dose. Sometimes the dose is still quoted in rem, the old unit, with 100 rem corresponding to one sievert.”
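
The bookkeeping described there is just multiplication by a weighting factor; a minimal sketch (my own, with the weighting factors taken from the quote):

```python
# Equivalent dose (sievert) = absorbed dose (gray) x radiation weighting factor w.
# The factors below are those given in the quote; the neutron value depends on energy.
W_FACTORS = {
    "x-ray": 1, "gamma": 1, "beta": 1,
    "proton": 5, "alpha": 20,
    "neutron": 10,   # anywhere between 5 and 20, depending on neutron energy
}

def equivalent_dose_sv(absorbed_dose_gy: float, radiation: str) -> float:
    """Convert an absorbed dose in gray into an equivalent dose in sievert."""
    return absorbed_dose_gy * W_FACTORS[radiation]

print(equivalent_dose_sv(0.001, "gamma"))  # 1 mGy of gamma rays -> 0.001 Sv (= 0.1 rem)
print(equivalent_dose_sv(0.001, "alpha"))  # 1 mGy of alpha particles -> 0.02 Sv
```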

“Neutrons emitted during fission reactions have a relatively high velocity. When still in Rome, Fermi had discovered that fast neutrons needed to be slowed down to increase the probability of their reaction with uranium. The fission reaction occurs with uranium-235. Uranium-238, the most common isotope of the element, merely absorbs the slow neutrons. Neutrons slow down when they are scattered by nuclei with a similar mass. The process is analogous to the interaction between two billiard balls in a head-on collision, in which the incoming ball stops and transfers all its kinetic energy to the second one. ‘Moderators’, such as graphite and water, can be used to slow neutrons down. […] When Fermi calculated whether a chain reaction could be sustained in a homogeneous mixture of uranium and graphite, he got a negative answer. That was because most neutrons produced by the fission of uranium-235 were absorbed by uranium-238 before inducing further fissions. The right approach, as suggested by Szilárd, was to use separated blocks of uranium and graphite. Fast neutrons produced by the splitting of uranium-235 in the uranium block would slow down, in the graphite block, and then produce fission again in the next uranium block. […] A minimum mass – the critical mass – is required to sustain the chain reaction; furthermore, the material must have a certain geometry. The fissile nuclides, capable of sustaining a chain reaction of nuclear fission with low-energy neutrons, are uranium-235 […], uranium-233, and plutonium-239. The last two don’t occur in nature but can be produced artificially by irradiating with neutrons thorium-232 and uranium-238, respectively – via a reaction called neutron capture. Uranium-238 (99.27%) is fissionable, but not fissile. In a nuclear weapon, the chain reaction occurs very rapidly, releasing the energy in a burst.”
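
The billiard-ball analogy can be made quantitative. In a head-on elastic collision, a particle of mass m hitting a stationary nucleus of mass M transfers the fraction 4mM/(m+M)^2 of its kinetic energy, a standard textbook result rather than something spelled out in the book. A small sketch, using mass numbers as approximate masses:

```python
def max_energy_loss_fraction(m: float, M: float) -> float:
    """Fraction of kinetic energy a particle of mass m loses in a head-on
    elastic collision with a stationary nucleus of mass M: 4mM/(m+M)^2."""
    return 4 * m * M / (m + M) ** 2

# A neutron (mass ~1) colliding head-on with various nuclei (mass numbers as rough masses):
for name, A in [("hydrogen-1", 1), ("carbon-12 (graphite)", 12), ("uranium-238", 238)]:
    print(f"{name:22s} loses up to {max_energy_loss_fraction(1, A):.1%} per collision")
# hydrogen-1             loses up to 100.0% per collision
# carbon-12 (graphite)   loses up to 28.4% per collision
# uranium-238            loses up to 1.7% per collision
```

Light nuclei therefore make good moderators, which is why graphite and water work, and why scattering off uranium-238 itself barely slows the neutrons at all.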

“The basic components of nuclear power reactors, fuel, moderator, and control rods, are the same as in the first system built by Fermi, but the design of today’s reactors includes additional components such as a pressure vessel, containing the reactor core and the moderator, a containment vessel, and redundant and diverse safety systems. Recent technological advances in material developments, electronics, and information technology have further improved their reliability and performance. […] The moderator to slow down fast neutrons is sometimes still the graphite used by Fermi, but water, including ‘heavy water’ – in which the water molecule has a deuterium atom instead of a hydrogen atom – is more widely used. Control rods contain a neutron-absorbing material, such as boron or a combination of indium, silver, and cadmium. To remove the heat generated in the reactor core, a coolant – either a liquid or a gas – is circulating through the reactor core, transferring the heat to a heat exchanger or directly to a turbine. Water can be used as both coolant and moderator. In the case of boiling water reactors (BWRs), the steam is produced in the pressure vessel. In the case of pressurized water reactors (PWRs), the steam generator, which is the secondary side of the heat exchanger, uses the heat produced by the nuclear reactor to make steam for the turbines. The containment vessel is a one-metre-thick concrete and steel structure that shields the reactor.”

“Nuclear energy contributed 2,518 TWh of the world’s electricity in 2011, about 14% of the global supply. As of February 2012, there are 435 nuclear power plants operating in 31 countries worldwide, corresponding to a total installed capacity of 368,267 MW (electrical). There are 63 power plants under construction in 13 countries, with a capacity of 61,032 MW (electrical).”
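
Dividing the first figure by the second gives the fleet-wide capacity factor. My own rough calculation (the capacity figure is from February 2012 while the generation figure is for 2011, so treat it as approximate):

```python
generation_2011_TWh = 2_518        # world nuclear electricity generated in 2011 (from the quote)
installed_capacity_MW = 368_267    # installed capacity as of February 2012 (from the quote)
hours_per_year = 8_760

max_possible_TWh = installed_capacity_MW * hours_per_year / 1e6   # MW * h -> TWh
print(generation_2011_TWh / max_possible_TWh)   # ~0.78, i.e. an average capacity factor close to 80%
```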

“Since the first nuclear fusion, more than 60 years ago, many have argued that we need at least 30 years to develop a working fusion reactor, and this figure has stayed the same throughout those years.”

“[I]onizing radiation is […] used to improve many properties of food and other agricultural products. For example, gamma rays and electron beams are used to sterilize seeds, flour, and spices. They can also inhibit sprouting and destroy pathogenic bacteria in meat and fish, increasing the shelf life of food. […] More than 60 countries allow the irradiation of more than 50 kinds of foodstuffs, with 500,000 tons of food irradiated every year. About 200 cobalt-60 sources and more than 10 electron accelerators are dedicated to food irradiation worldwide. […] With the help of radiation, breeders can increase genetic diversity to make the selection process faster. The spontaneous mutation rate (number of mutations per gene, for each generation) is in the range 10^-8–10^-5. Radiation can increase this mutation rate to 10^-5–10^-2. […] Long-lived cosmogenic radionuclides provide unique methods to evaluate the ‘age’ of groundwaters, defined as the mean subsurface residence time after the isolation of the water from the atmosphere. […] Scientists can date groundwater more than a million years old, through chlorine-36, produced in the atmosphere by cosmic-ray reactions with argon.”
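
The groundwater-dating method mentioned at the end rests on nothing more than the exponential decay law: if the 36Cl/Cl ratio at recharge is known and the ratio today is measured, the residence time is t = ln(N0/N)/λ. A minimal sketch (my own; the roughly 301,000-year half-life of chlorine-36 is a commonly quoted value, not a number from the book):

```python
import math

CL36_HALF_LIFE_YEARS = 301_000                       # commonly quoted value for chlorine-36
DECAY_CONSTANT = math.log(2) / CL36_HALF_LIFE_YEARS  # lambda, per year

def groundwater_age_years(initial_ratio: float, measured_ratio: float) -> float:
    """Mean subsurface residence time inferred from the decay of the 36Cl/Cl ratio."""
    return math.log(initial_ratio / measured_ratio) / DECAY_CONSTANT

# If the measured 36Cl/Cl ratio is 10% of the atmospheric input ratio:
print(groundwater_age_years(1.0, 0.1))   # ~1.0 million years
```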

“Radionuclide imaging was developed in the 1950s using special systems to detect the emitted gamma rays. The gamma-ray detectors, called gamma cameras, use flat crystal planes, coupled to photomultiplier tubes, which send the digitized signals to a computer for image reconstruction. Images show the distribution of the radioactive tracer in the organs and tissues of interest. This method is based on the introduction of low-level radioactive chemicals into the body. […] More than 100 diagnostic tests based on radiopharmaceuticals are used to examine bones and organs such as lungs, intestines, thyroids, kidneys, the liver, and gallbladder. They exploit the fact that our organs preferentially absorb different chemical compounds. […] Many radiopharmaceuticals are based on technetium-99m (an excited state of technetium-99 – the ‘m’ stands for ‘metastable’ […]). This radionuclide is used for the imaging and functional examination of the heart, brain, thyroid, liver, and other organs. Technetium-99m is extracted from molybdenum-99, which has a much longer half-life and is therefore more transportable. It is used in 80% of the procedures, amounting to about 40,000 per day, carried out in nuclear medicine. Other radiopharmaceuticals include short-lived gamma-emitters such as cobalt-57, cobalt-58, gallium-67, indium-111, iodine-123, and thallium-201. […] Methods routinely used in medicine, such as X-ray radiography and CAT, are increasingly used in industrial applications, particularly in non-destructive testing of containers, pipes, and walls, to locate defects in welds and other critical parts of the structure.”

“Today, cancer treatment with radiation is generally based on the use of external radiation beams that can target the tumour in the body. Cancer cells are particularly sensitive to damage by ionizing radiation and their growth can be controlled or, in some cases, stopped. High-energy X-rays produced by a linear accelerator […] are used in most cancer therapy centres, replacing the gamma rays produced from cobalt-60. The LINAC produces photons of variable energy bombarding a target with a beam of electrons accelerated by microwaves. The beam of photons can be modified to conform to the shape of the tumour, which is irradiated from different angles. The main problem with X-rays and gamma rays is that the dose they deposit in the human tissue decreases exponentially with depth. A considerable fraction of the dose is delivered to the surrounding tissues before the radiation hits the tumour, increasing the risk of secondary tumours. Hence, deep-seated tumours must be bombarded from many directions to receive the right dose, while minimizing the unwanted dose to the healthy tissues. […] The problem of delivering the needed dose to a deep tumour with high precision can be solved using collimated beams of high-energy ions, such as protons and carbon. […] Contrary to X-rays and gamma rays, all ions of a given energy have a certain range, delivering most of the dose after they have slowed down, just before stopping. The ion energy can be tuned to deliver most of the dose to the tumour, minimizing the impact on healthy tissues. The ion beam, which does not broaden during the penetration, can follow the shape of the tumour with millimetre precision. Ions with higher atomic number, such as carbon, have a stronger biological effect on the tumour cells, so the dose can be reduced. Ion therapy facilities are [however] still very expensive – in the range of hundreds of millions of pounds – and difficult to operate.”
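
The contrast drawn there between photons and ions is a statement about depth-dose curves. For a photon beam the dose falls off roughly exponentially with depth, dose(x) ≈ dose(0)·exp(−μx), whereas ions dump most of their energy near the end of their range (the Bragg peak). A rough sketch of the photon half of that statement (my own; the attenuation coefficient is purely illustrative, since real values depend on beam energy and tissue):

```python
import math

def photon_dose_fraction(depth_cm: float, mu_per_cm: float = 0.05) -> float:
    """Fraction of the surface dose surviving to a given depth for a photon beam,
    assuming simple exponential attenuation (the value of mu is illustrative only)."""
    return math.exp(-mu_per_cm * depth_cm)

for depth in (0, 5, 10, 20):
    print(f"{depth:2d} cm: {photon_dose_fraction(depth):.2f}")
# 0 cm: 1.00,  5 cm: 0.78,  10 cm: 0.61,  20 cm: 0.37
# A deep-seated tumour therefore gets less dose than the healthy tissue in front of it,
# which is why photon beams are aimed at the tumour from many directions.
```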

“About 50 million years ago, a global cooling trend took our planet from the tropical conditions at the beginning of the Tertiary to the ice ages of the Quaternary, when the Arctic ice cap developed. The temperature decrease was accompanied by a decrease in atmospheric CO2 from 2,000 to 300 parts per million. The cooling was probably caused by a reduced greenhouse effect and also by changes in ocean circulation due to plate tectonics. The drop in temperature was not constant as there were some brief periods of sudden warming. Ocean deep-water temperatures dropped from 12°C, 50 million years ago, to 6°C, 30 million years ago, according to archives in deep-sea sediments (today, deep-sea waters are about 2°C). […] During the last 2 million years, the mean duration of the glacial periods was about 26,000 years, while that of the warm periods – interglacials – was about 27,000 years. Between 2.6 and 1.1 million years ago, a full cycle of glacial advance and retreat lasted about 41,000 years. During the past 1.2 million years, this cycle has lasted 100,000 years. Stable and radioactive isotopes play a crucial role in the reconstruction of the climatic history of our planet”.

Links:

CUORE (Cryogenic Underground Observatory for Rare Events).
Borexino.
Lawrence Livermore National Laboratory.
Marie Curie. Pierre Curie. Henri Becquerel. Wilhelm Röntgen. Joseph Thomson. Ernest Rutherford. Hans Geiger. Ernest Marsden. Niels Bohr.
Ruhmkorff coil.
Electroscope.
Pitchblende (uraninite).
Mache.
Polonium. Becquerel.
Radium.
Alpha decay. Beta decay. Gamma radiation.
Plum pudding model.
Spinthariscope.
Robert Boyle. John Dalton. Dmitri Mendeleev. Frederick Soddy. James Chadwick. Enrico Fermi. Lise Meitner. Otto Frisch.
Periodic Table.
Exponential decay. Decay chain.
Positron.
Particle accelerator. Cockcroft-Walton generator. Van de Graaff generator.
Barn (unit).
Nuclear fission.
Manhattan Project.
Chernobyl disaster. Fukushima Daiichi nuclear disaster.
Electron volt.
Thermoluminescent dosimeter.
Silicon diode detector.
Enhanced geothermal system.
Chicago Pile Number 1. Experimental Breeder Reactor 1. Obninsk Nuclear Power Plant.
Natural nuclear fission reactor.
Gas-cooled reactor.
Generation I reactors. Generation II reactor. Generation III reactor. Generation IV reactor.
Nuclear fuel cycle.
Accelerator-driven subcritical reactor.
Thorium-based nuclear power.
Small, sealed, transportable, autonomous reactor.
Fusion power. P-p (proton-proton) chain reaction. CNO cycle. Tokamak. ITER (International Thermonuclear Experimental Reactor).
Sterile insect technique.
Phase-contrast X-ray imaging. Computed tomography (CT). SPECT (Single-photon emission computed tomography). PET (positron emission tomography).
Boron neutron capture therapy.
Radiocarbon dating. Bomb pulse.
Radioactive tracer.
Radithor. The Radiendocrinator.
Radioisotope heater unit. Radioisotope thermoelectric generator. Seebeck effect.
Accelerator mass spectrometry.
Atomic bombings of Hiroshima and Nagasaki. Treaty on the Non-Proliferation of Nuclear Weapons. IAEA.
Nuclear terrorism.
Swiss light source. Synchrotron.
Chronology of the universe. Stellar evolution. S-process. R-process. Red giant. Supernova. White dwarf.
Victor Hess. Domenico Pacini. Cosmic ray.
Allende meteorite.
Age of the Earth. History of Earth. Geomagnetic reversal. Uranium-lead dating. Clair Cameron Patterson.
Glacials and interglacials.
Taung child. Lucy. Ardi. Ardipithecus kadabba. Acheulean tools. Java Man. Ötzi.
Argon-argon dating. Fission track dating.

November 28, 2017 Posted by | Archaeology, Astronomy, Biology, Books, Cancer/oncology, Chemistry, Engineering, Geology, History, Medicine, Physics | Leave a comment

Isotopes

A decent book. Below some quotes and links.

“[A]ll mass spectrometers have three essential components — an ion source, a mass filter, and some sort of detector […] Mass spectrometers need to achieve high vacuum to allow the uninterrupted transmission of ions through the instrument. However, even high-vacuum systems contain residual gas molecules which can impede the passage of ions. Even at very high vacuum there will still be residual gas molecules in the vacuum system that present potential obstacles to the ion beam. Ions that collide with residual gas molecules lose energy and will appear at the detector at slightly lower mass than expected. This tailing to lower mass is minimized by improving the vacuum as much as possible, but it cannot be avoided entirely. The ability to resolve a small isotope peak adjacent to a large peak is called ‘abundance sensitivity’. A single magnetic sector TIMS has abundance sensitivity of about 1 ppm per mass unit at uranium masses. So, at mass 234, 1 ion in 1,000,000 will actually be 235U not 234U, and this will limit our ability to quantify the rare 234U isotope. […] AMS [accelerator mass spectrometry] instruments use very high voltages to achieve high abundance sensitivity. […] As I write this chapter, the human population of the world has recently exceeded seven billion. […] one carbon atom in 10^12 is mass 14. So, detecting 14C is far more difficult than identifying a single person on Earth, and somewhat comparable to identifying an individual leaf in the Amazon rain forest. Such is the power of isotope ratio mass spectrometry.”

“14C is produced in the Earth’s atmosphere by the interaction between nitrogen and cosmic ray neutrons that releases a free proton turning 14N into 14C in a process that we call an ‘n-p’ reaction […] Because the process is driven by cosmic ray bombardment, we call 14C a ‘cosmogenic’ isotope. The half-life of 14C is about 5,000 years, so we know that all the 14C on Earth is either cosmogenic or has been created by mankind through nuclear reactors and bombs — no ‘primordial’ 14C remains because any that originally existed has long since decayed. 14C is not the only cosmogenic isotope; 16O in the atmosphere interacts with cosmic radiation to produce the isotope 10Be (beryllium). […] The process by which a high energy cosmic ray particle removes several nucleons is called ‘spallation’. 10Be production from 16O is not restricted to the atmosphere but also occurs when cosmic rays impact rock surfaces. […] when cosmic rays hit a rock surface they don’t bounce off but penetrate the top 2 or 3 metres (m) — the actual ‘attenuation’ depth will vary for particles of different energy. Most of the Earth’s crust is made of silicate minerals based on bonds between oxygen and silicon. So, the same spallation process that produces 10Be in the atmosphere also occurs in rock surfaces. […] If we know the flux of cosmic rays impacting a surface, the rate of production of the cosmogenic isotopes with depth below the rock surface, and the rate of radioactive decay, it should be possible to convert the number of cosmogenic atoms into an exposure age. […] Rocks on Earth which are shielded from much of the cosmic radiation have much lower levels of isotopes like 10Be than have meteorites which, before they arrive on Earth, are exposed to the full force of cosmic radiation. […] polar scientists have used cores drilled through ice sheets in Antarctica and Greenland to compare 10Be at different depths and thereby reconstruct 10Be production through time. The 14C and 10Be records are closely correlated indicating the common response to changes in the cosmic ray flux.”

“[O]nce we have credible cosmogenic isotope production rates, […] there are two classes of applications, which we can call ‘exposure’ and ‘burial’ methodologies. Exposure studies simply measure the accumulation of the cosmogenic nuclide. Such studies are simplest when the cosmogenic nuclide is a stable isotope like 3He and 21Ne. These will just accumulate continuously as the sample is exposed to cosmic radiation. Slightly more complicated are cosmogenic isotopes that are radioactive […]. These isotopes accumulate through exposure but will also be destroyed by radioactive decay. Eventually, the isotopes achieve the condition known as ‘secular equilibrium’ where production and decay are balanced and no chronological information can be extracted. Secular equilibrium is achieved after three to four half-lives […] Imagine a boulder that has been transported from its place of origin to another place within a glacier — what we call a glacial erratic. While the boulder was deeply covered in ice, it would not have been exposed to cosmic radiation. Its cosmogenic isotopes will only have accumulated since the ice melted. So a cosmogenic isotope exposure age tells us the date at which the glacier retreated, and, by examining multiple erratics from different locations along the course of the glacier, allows us to construct a retreat history for the de-glaciation. […] Burial methodologies using cosmogenic isotopes work in situations where a rock was previously exposed to cosmic rays but is now located in a situation where it is shielded.”
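
The exposure-dating arithmetic behind this is compact. Ignoring erosion, a radioactive cosmogenic nuclide accumulates as N(t) = (P/λ)(1 − exp(−λt)), saturating at N = P/λ (secular equilibrium); as long as the sample is well short of saturation the relation can be inverted for the exposure age. A sketch (my own; the production rate and half-life used in the example are commonly quoted 10Be values, not numbers from the book):

```python
import math

def exposure_age_years(concentration: float, production_rate: float, half_life_years: float) -> float:
    """Invert N(t) = (P/lam) * (1 - exp(-lam*t)) for t.
    concentration is in atoms per gram; production_rate in atoms per gram per year."""
    lam = math.log(2) / half_life_years
    saturation = production_rate / lam
    if concentration >= saturation:
        raise ValueError("at secular equilibrium: no chronological information left")
    return -math.log(1 - concentration / saturation) / lam

# Illustrative 10Be example: ~4 atoms/g/yr production in quartz, ~1.39 Myr half-life.
print(exposure_age_years(concentration=50_000, production_rate=4.0, half_life_years=1.39e6))
# ~12,500 years, i.e. a boulder exposed since roughly the end of the last glaciation
```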

“Cosmogenic isotopes are also being used extensively to recreate the seismic histories of tectonically active areas. Earthquakes occur when geological faults give way and rock masses move. A major earthquake is likely to expose new rock to the Earth’s surface. If the field geologist can identify rocks in a fault zone that (s)he is confident were brought to the surface in an earthquake, then a cosmogenic isotope exposure age would date the fault — providing, of course, that subsequent erosion can be ruled out or quantified. Precarious rocks are rock outcrops that could reasonably be expected to topple if subjected to a significant earthquake. Dating the exposed surface of precarious rocks with cosmogenic isotopes can reveal the amount of time that has elapsed since the last earthquake of a magnitude that would have toppled the rock. Constructing records of seismic history is not merely of academic interest; some of the world’s seismically active areas are also highly populated and developed.”

“One aspect of the natural decay series that acts in favour of the preservation of accurate age information is the fact that most of the intermediate isotopes are short-lived. For example, in both the U series the radon (Rn) isotopes, which might be expected to diffuse readily out of a mineral, have half-lives of only seconds or days, too short to allow significant losses. Some decay series isotopes though do have significantly long half-lives which offer the potential to be geochronometers in their own right. […] These techniques depend on the tendency of natural decay series to evolve towards a state of ‘secular equilibrium’ in which the activity of all species in the decay series is equal. […] at secular equilibrium, isotopes with long half-lives (i.e. small decay constants) will have large numbers of atoms whereas short-lived isotopes (high decay constants) will only constitute a relatively small number of atoms. Since decay constants vary by several orders of magnitude, so will the numbers of atoms of each isotope in the equilibrium decay series. […] Geochronological applications of natural decay series depend upon some process disrupting the natural decay series to introduce either a deficiency or an excess of an isotope in the series. The decay series will then gradually return to secular equilibrium and the geochronometer relies on measuring the extent to which equilibrium has been approached.”
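
The statement that the activity of all species is equal has a simple consequence: at secular equilibrium λ·N is the same for every member of the chain, so the number of atoms of each isotope is proportional to its half-life. A small sketch (my own; the half-lives are commonly quoted values rather than numbers from the book):

```python
# At secular equilibrium lambda_i * N_i is constant along the chain,
# so relative atom numbers simply scale with half-life.
half_lives_years = {
    "uranium-238": 4.47e9,    # commonly quoted half-lives
    "radium-226": 1.6e3,
    "radon-222": 3.8 / 365,   # about 3.8 days
}
reference = half_lives_years["radon-222"]
for isotope, t_half in half_lives_years.items():
    print(f"{isotope:12s} {t_half / reference:.2e} atoms per atom of radon-222")
# uranium-238: ~4.3e+11, radium-226: ~1.5e+05, radon-222: 1.0
```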

“The ‘ring of fire’ volcanoes around the margin of the Pacific Ocean are a manifestation of subduction in which the oldest parts of the Pacific Ocean crust are being returned to the mantle below. The oldest parts of the Pacific Ocean crust are about 150 million years (Ma) old, with anything older having already disappeared into the mantle via subduction zones. The Atlantic Ocean doesn’t have a ring of fire because it is a relatively young ocean which started to form about 60 Ma ago, and its oldest rocks are not yet ready to form subduction zones. Thus, while continental crust persists for billions of years, oceanic crust is a relatively transient (in terms of geological time) phenomenon at the Earth’s surface.”

“Mantle rocks typically contain minerals such as olivine, pyroxene, spinel, and garnet. Unlike say ice, which melts to form water, mixtures of minerals do not melt in the proportions in which they occur in the rock. Rather, they undergo partial melting in which some minerals […] melt preferentially leaving a solid residue enriched in refractory minerals […]. We know this from experimentally melting mantle-like rocks in the laboratory, but also because the basalts produced by melting of the mantle are closer in composition to Ca-rich (clino-) pyroxene than to the olivine-rich rocks that dominate the solid pieces (or xenoliths) of mantle that are sometimes transferred to the surface by certain types of volcanic eruptions. […] Thirty years ago geologists fiercely debated whether the mantle was homogeneous or heterogeneous; mantle isotope geochemistry hasn’t yet elucidated all the details but it has put to rest the initial conundrum; Earth’s mantle is compositionally heterogeneous.”

Links:

Frederick Soddy.
Rutherford–Bohr model.
Isotopes of hydrogen.
Radioactive decay. Types of decay. Alpha decay. Beta decay. Electron capture decay. Branching fraction. Gamma radiation. Spontaneous fission.
Promethium.
Lanthanides.
Radiocarbon dating.
Hessel de Vries.
Dendrochronology.
Suess effect.
Bomb pulse.
Delta notation (non-wiki link).
Isotopic fractionation.
C3 carbon fixation. C4 carbon fixation.
Nitrogen-15 tracing.
Isotopes of strontium. Strontium isotope analysis.
Ötzi.
Mass spectrometry.
Geiger counter.
Townsend avalanche.
Gas proportional counter.
Scintillation detector.
Liquid scintillation spectrometry. Photomultiplier tube.
Dynode.
Thallium-doped sodium iodide detectors. Semiconductor-based detectors.
Isotope separation (-enrichment).
Doubly labeled water.
Urea breath test.
Radiation oncology.
Brachytherapy.
Targeted radionuclide therapy.
Iodine-131.
MIBG scan.
Single-photon emission computed tomography.
Positron emission tomography.
Inductively coupled plasma (ICP) mass spectrometry.
Secondary ion mass spectrometry.
Faraday cup (-detector).
δ18O.
Stadials and interstadials. Oxygen isotope ratio cycle.
Insolation.
Gain and phase model.
Milankovitch cycles.
Perihelion and aphelion. Precession.
Equilibrium Clumped-Isotope Effects in Doubly Substituted Isotopologues of Ethane (non-wiki link).
Age of the Earth.
Uranium–lead dating.
Geochronology.
Cretaceous–Paleogene boundary.
Argon-argon dating.
Nuclear chain reaction. Critical mass.
Fukushima Daiichi nuclear disaster.
Natural nuclear fission reactor.
Continental crust. Oceanic crust. Basalt.
Core–mantle boundary.
Chondrite.
Ocean Island Basalt.
Isochron dating.

November 23, 2017 Posted by | Biology, Books, Botany, Chemistry, Geology, Medicine, Physics | Leave a comment

Materials… (II)

Some more quotes and links:

“Whether materials are stiff and strong, or hard or weak, is the territory of mechanics. […] the 19th century continuum theory of linear elasticity is still the basis of much of modern solid mechanics. A stiff material is one which does not deform much when a force acts on it. Stiffness is quite distinct from strength. A material may be stiff but weak, like a piece of dry spaghetti. If you pull it, it stretches only slightly […], but as you ramp up the force it soon breaks. To put this on a more scientific footing, so that we can compare different materials, we might devise a test in which we apply a force to stretch a bar of material and measure the increase in length. The fractional change in length is the strain; and the applied force divided by the cross-sectional area of the bar is the stress. To check that it is Hookean, we double the force and confirm that the strain has also doubled. To check that it is truly elastic, we remove the force and check that the bar returns to the same length that it started with. […] then we calculate the ratio of the stress to the strain. This ratio is the Young’s modulus of the material, a quantity which measures its stiffness. […] While we are measuring the change in length of the bar, we might also see if there is a change in its width. It is not unreasonable to think that as the bar stretches it also becomes narrower. The Poisson’s ratio of the material is defined as the ratio of the transverse strain to the longitudinal strain (without the minus sign).”
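
As a concrete illustration of those definitions, here is a made-up tensile test worked through in a few lines (all numbers hypothetical, chosen to give steel-like values):

```python
# Hypothetical tensile test, purely to illustrate the definitions in the quote.
force_N = 10_000.0                          # applied axial force
area_m2 = 1.0e-4                            # cross-section of a 10 mm x 10 mm bar
length_m, d_length_m = 0.500, 0.00025       # original length and measured extension
width_m, d_width_m = 0.010, -0.0000015      # original width and (negative) change in width

stress_Pa = force_N / area_m2                       # 100 MPa
axial_strain = d_length_m / length_m                # 5.0e-4
transverse_strain = d_width_m / width_m             # -1.5e-4

youngs_modulus_Pa = stress_Pa / axial_strain        # 200 GPa (a steel-like stiffness)
poissons_ratio = -transverse_strain / axial_strain  # 0.3 (quoted "without the minus sign")

print(youngs_modulus_Pa / 1e9, "GPa,", round(poissons_ratio, 3))
```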

“There was much argument between Cauchy and Lamé and others about whether there are two stiffness moduli or one. […] In fact, there are two stiffness moduli. One describes the resistance of a material to shearing and the other to compression. The shear modulus is the stiffness in distortion, for example in twisting. It captures the resistance of a material to changes of shape, with no accompanying change of volume. The compression modulus (usually called the bulk modulus) expresses the resistance to changes of volume (but not shape). This is what occurs as a cube of material is lowered deep into the sea, and is squeezed on all faces by the water pressure. The Young’s modulus [is] a combination of the more fundamental shear and bulk moduli, since stretching in one direction produces changes in both shape and volume. […] A factor of about 10,000 covers the useful range of Young’s modulus in engineering materials. The stiffness can be traced back to the forces acting between atoms and molecules in the solid state […]. Materials like diamond or tungsten with strong bonds are stiff in the bulk, while polymer materials with weak intermolecular forces have low stiffness.”
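
For an isotropic material the constants mentioned so far (Young's modulus E, the shear modulus G, the bulk modulus K, and Poisson's ratio ν) are linked by standard textbook relations; only two of them are independent. A short sketch using those standard formulas (the numerical values are illustrative, not from the book):

```python
def shear_modulus(E: float, nu: float) -> float:
    return E / (2 * (1 + nu))           # G

def bulk_modulus(E: float, nu: float) -> float:
    return E / (3 * (1 - 2 * nu))       # K

E, nu = 200e9, 0.3                      # steel-like illustrative values
G, K = shear_modulus(E, nu), bulk_modulus(E, nu)
print(round(G / 1e9), "GPa shear,", round(K / 1e9), "GPa bulk")   # ~77 GPa and ~167 GPa
print(round(9 * K * G / (3 * K + G) / 1e9), "GPa")                # recovers E = 200 GPa
```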

“In pure compression, the concept of ‘strength’ has no meaning, since the material cannot fail or rupture. But materials can and do fail in tension or in shear. To judge how strong a material is we can go back for example to the simple tension arrangement we used for measuring stiffness, but this time make it into a torture test in which the specimen is put on the rack. […] We find […] that we reach a strain at which the material stops being elastic and is permanently stretched. We have reached the yield point, and beyond this we have damaged the material but it has not failed. After further yielding, the bar may fail by fracture […]. On the other hand, with a bar of cast iron, there comes a point where the bar breaks, noisily and without warning, and without yield. This is a failure by brittle fracture. The stress at which it breaks is the tensile strength of the material. For the ductile material, the stress at which plastic deformation starts is the tensile yield stress. Both are measures of strength. It is in metals that yield and plasticity are of the greatest significance and value. In working components, yield provides a safety margin between small-strain elasticity and catastrophic rupture. […] plastic deformation is [also] exploited in making things from metals like steel and aluminium. […] A useful feature of plastic deformation in metals is that plastic straining raises the yield stress, particularly at lower temperatures.”

“Brittle failure is not only noisy but often scary. Engineers keep well away from it. An elaborate theory of fracture mechanics has been built up to help them avoid it, and there are tough materials to hand which do not easily crack. […] Since small cracks and flaws are present in almost any engineering component […], the trick is not to avoid cracks but to avoid long cracks which exceed [a] critical length. […] In materials which can yield, the tip stress can be relieved by plastic deformation, and this is a potent toughening mechanism in some materials. […] The trick of compressing a material to suppress cracking is a powerful way to toughen materials.”
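
The 'critical length' idea can be put into numbers with the standard fracture-mechanics relation K_IC = σ√(πa) for a through-crack in a large plate (geometry factors modify this in real components), which rearranges to a_c = (K_IC/σ)^2/π. A sketch with purely illustrative property values (not taken from the book):

```python
import math

def critical_crack_length_m(fracture_toughness_MPa_sqrt_m: float, stress_MPa: float) -> float:
    """Critical crack length for a through-crack in a large plate: a_c = (K_IC / sigma)^2 / pi."""
    return (fracture_toughness_MPa_sqrt_m / stress_MPa) ** 2 / math.pi

# Illustrative values only:
print(critical_crack_length_m(1.0, 50))    # a brittle glass-like material at 50 MPa: ~0.1 mm
print(critical_crack_length_m(50.0, 200))  # a tough structural steel at 200 MPa: ~20 mm
```

Which is one way of seeing why glass shatters from near-invisible scratches while steel tolerates flaws of centimetre scale.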

“Hardness is a property which materials scientists think of in a particular and practical way. It tells us how well a material resists being damaged or deformed by a sharp object. That is useful information and it can be obtained easily. […] Soft is sometimes the opposite of hard […] But a different kind of soft is squidgy. […] In the soft box, we find many everyday materials […]. Some soft materials such as adhesives and lubricants are of great importance in engineering. For all of them, the model of a stiff crystal lattice provides no guidance. There is usually no crystal. The units are polymer chains, or small droplets of liquids, or small solid particles, with weak forces acting between them, and little structural organization. Structures when they exist are fragile. Soft materials deform easily when forces act on them […]. They sit as a rule somewhere between rigid solids and simple liquids. Their mechanical behaviour is dominated by various kinds of plasticity.”

“In pure metals, the resistivity is extremely low […] and a factor of ten covers all of them. […] the low resistivity (or, put another way, the high conductivity) arises from the existence of a conduction band in the solid which is only partly filled. Electrons in the conduction band are mobile and drift in an applied electric field. This is the electric current. The electrons are subject to some scattering from lattice vibrations which impedes their motion and generates an intrinsic resistance. Scattering becomes more severe as the temperature rises and the amplitude of the lattice vibrations becomes greater, so that the resistivity of metals increases with temperature. Scattering is further increased by microstructural heterogeneities, such as grain boundaries, lattice distortions, and other defects, and by phases of different composition. So alloys have appreciably higher resistivities than their pure parent metals. Adding 5 per cent nickel to iron doubles the resistivity, although the resistivities of the two pure metals are similar. […] Resistivity depends fundamentally on band structure. […] Plastics and rubbers […] are usually insulators. […] Electronically conducting plastics would have many uses, and some materials [e.g. this one] are now known. […] The electrical resistivity of many metals falls to exactly zero as they are cooled to very low temperatures. The critical temperature at which this happens varies, but for pure metallic elements it always lies below 10 K. For a few alloys, it is a little higher. […] Superconducting windings provide stable and powerful magnetic fields for magnetic resonance imaging, and many industrial and scientific uses.”

“A permanent magnet requires no power. Its magnetization has its origin in the motion of electrons in atoms and ions in the solid, but only a few materials have the favourable combination of quantum properties to give rise to useful ferromagnetism. […] Ferromagnetism disappears completely above the so-called Curie temperature. […] Below the Curie temperature, ferromagnetic alignment throughout the material can be established by imposing an external polarizing field to create a net magnetization. In this way a practical permanent magnet is made. The ideal permanent magnet has an intense magnetization (a strong field) which remains after the polarizing field is switched off. It can only be demagnetized by applying a strong polarizing field in the opposite direction: the size of this field is the coercivity of the magnet material. For a permanent magnet, it should be as high as possible. […] Permanent magnets are ubiquitous but more or less invisible components of umpteen devices. There are a hundred or so in every home […]. There are also important uses for ‘soft’ magnetic materials, in devices where we want the ferromagnetism to be temporary, not permanent. Soft magnets lose their magnetization after the polarizing field is removed […] They have low coercivity, approaching zero. When used in a transformer, such a soft ferromagnetic material links the input and output coils by magnetic induction. Ideally, the magnetization should reverse during every cycle of the alternating current to minimize energy losses and heating. […] Silicon transformer steels yielded large gains in efficiency in electrical power distribution when they were first introduced in the 1920s, and they remain pre-eminent.”

“At least 50 families of plastics are produced commercially today. […] These materials all consist of linear string molecules, most with simple carbon backbones, a few with carbon-oxygen backbones […] Plastics as a group are valuable because they are lightweight and work well in wet environments, and don’t go rusty. They are mostly unaffected by acids and salts. But they burn, and they don’t much like sunlight as the ultraviolet light can break the polymer backbone. Most commercial plastics are mixed with substances which make it harder for them to catch fire and which filter out the ultraviolet light. Above all, plastics are used because they can be formed and shaped so easily. The string molecule itself is held together by strong chemical bonds and is resilient, but the forces between the molecules are weak. So plastics melt at low temperatures to produce rather viscous liquids […]. And with modest heat and a little pressure, they can be injected into moulds to produce articles of almost any shape”.

“The downward cascade of high purity to adulterated materials in recycling is a kind of entropy effect: unmixing is thermodynamically hard work. But there is an energy-driven problem too. Most materials are thermodynamically unstable (or metastable) in their working environments and tend to revert to the substances from which they were made. This is well-known in the case of metals, and is the usual meaning of corrosion. The metals are more stable when combined with oxygen than uncombined. […] Broadly speaking, ceramic materials are more stable thermodynamically, since they already contain much oxygen in chemical combination. Even so, ceramics used in the open usually fall victim to some environmental predator. Often it is water that causes damage. Water steals sodium and potassium from glass surfaces by slow leaching. The surface shrinks and cracks, so the glass loses its transparency. […] Stones and bricks may succumb to the stresses of repeated freezing when wet; limestones decay also by the chemical action of sulfur and nitrogen gasses in polluted rainwater. Even buried archaeological pots slowly react with water in a remorseless process similar to that of rock weathering.”

Ashby plot.
Alan Arnold Griffith.
Creep (deformation).
Amontons’ laws of friction.
Viscoelasticity.
Internal friction.
Surfactant.
Dispersant.
Rheology.
Liquid helium.
Conductor. Insulator. Semiconductor. P-type -ll-. N-type -ll-.
Hall–Héroult process.
Cuprate.
Magnetostriction.
Snell’s law.
Chromatic aberration.
Dispersion (optics).
Dye.
Density functional theory.
Glass.
Pilkington float process.
Superalloy.
Ziegler–Natta catalyst.
Transistor.
Integrated circuit.
Negative-index metamaterial.
Auxetics.
Titanium dioxide.
Hyperfine structure (/-interactions).
Diamond anvil cell.
Synthetic rubber.
Simon–Ehrlich wager.
Sankey diagram.

November 16, 2017 Posted by | Books, Chemistry, Engineering, Physics | Leave a comment

Materials (I)…

“Useful matter is a good definition of materials. […] Materials are materials because inventive people find ingenious things to do with them. Or just because people use them. […] Materials science […] explains how materials are made and how they behave as we use them.”

I recently read this book, which I liked. Below I have added some quotes from the first half of the book, with some hopefully helpful links added, as well as a collection of links at the bottom of the post to other topics covered.

“We understand all materials by knowing about composition and microstructure. Despite their extraordinary minuteness, the atoms are the fundamental units, and they are real, with precise attributes, not least size. Solid materials tend towards crystallinity (for the good thermodynamic reason that it is the arrangement of lowest energy), and they usually achieve it, though often in granular, polycrystalline forms. Processing conditions greatly influence microstructures which may be mobile and dynamic, particularly at high temperatures. […] The idea that we can understand materials by looking at their internal structure in finer and finer detail goes back to the beginnings of microscopy […]. This microstructural view is more than just an important idea, it is the explanatory framework at the core of materials science. Many other concepts and theories exist in materials science, but this is the framework. It says that materials are intricately constructed on many length-scales, and if we don’t understand the internal structure we shall struggle to explain or to predict material behaviour.”

“Oxygen is the most abundant element in the earth’s crust and silicon the second. In nature, silicon occurs always in chemical combination with oxygen, the two forming the strong Si–O chemical bond. The simplest combination, involving no other elements, is silica; and most grains of sand are crystals of silica in the form known as quartz. […] The quartz crystal comes in right- and left-handed forms. Nothing like this happens in metals but arises frequently when materials are built from molecules and chemical bonds. The crystal structure of quartz has to incorporate two different atoms, silicon and oxygen, each in a repeating pattern and in the precise ratio 1:2. There is also the severe constraint imposed by the Si–O chemical bonds which require that each Si atom has four O neighbours arranged around it at the corners of a tetrahedron, every O bonded to two Si atoms. The crystal structure which quartz adopts (which of all possibilities is the one of lowest energy) is made up of triangular and hexagonal units. But within this there are buried helixes of Si and O atoms, and a helix must be either right- or left-handed. Once a quartz crystal starts to grow as right- or left-handed, its structure templates all the other helices with the same handedness. Equal numbers of right- and left-handed crystals occur in nature, but each is unambiguously one or the other.”

“In the living tree, and in the harvested wood that we use as a material, there is a hierarchy of structural levels, climbing all the way from the molecular to the scale of branch and trunk. The stiff cellulose chains are bundled into fibrils, which are themselves bonded by other organic molecules to build the walls of cells; which in turn form channels for the transport of water and nutrients, the whole having the necessary mechanical properties to support its weight and to resist the loads of wind and rain. In the living tree, the structure allows also for growth and repair. There are many things to be learned from biological materials, but the most universal is that biology builds its materials at many structural levels, and rarely makes a distinction between the material and the organism. Being able to build materials with hierarchical architectures is still more or less out of reach in materials engineering. Understanding how materials spontaneously self-assemble is the biggest challenge in contemporary nanotechnology.”

“The example of diamond shows two things about crystalline materials. First, anything we know about an atom and its immediate environment (neighbours, distances, angles) holds for every similar atom throughout a piece of material, however large; and second, everything we know about the unit cell (its size, its shape, and its symmetry) also applies throughout an entire crystal […] and by extension throughout a material made of a myriad of randomly oriented crystallites. These two general propositions provide the basis and justification for lattice theories of material behaviour which were developed from the 1920s onwards. We know that every solid material must be held together by internal cohesive forces. If it were not, it would fly apart and turn into a gas. A simple lattice theory says that if we can work out what forces act on the atoms in one unit cell, then this should be enough to understand the cohesion of the entire crystal. […] In lattice models which describe the cohesion and dynamics of the atoms, the role of the electrons is mainly in determining the interatomic bonding and the stiffness of the bond-spring. But in many materials, and especially in metals and semiconductors, some of the electrons are free to move about within the lattice. A lattice model of electron behaviour combines a geometrical description of the lattice with a more or less mechanical view of the atomic cores, and a fully quantum theoretical description of the electrons themselves. We need only to take account of the outer electrons of the atoms, as the inner electrons are bound tightly into the cores and are not itinerant. The outer electrons are the ones that form chemical bonds, so they are also called the valence electrons.”

“It is harder to push atoms closer together than to pull them further apart. While atoms are soft on the outside, they have harder cores, and pushed together the cores start to collide. […] when we bring a trillion atoms together to form a crystal, it is the valence electrons that are disturbed as the atoms approach each other. As the atomic cores come close to the equilibrium spacing of the crystal, the electron states of the isolated atoms morph into a set of collective states […]. These collective electron states have a continuous distribution of energies up to a top level, and form a ‘band’. But the separation of the valence electrons into distinct electron-pair states is preserved in the band structure, so that we find that the collective states available to the entire population of valence electrons in the entire crystal form a set of bands […]. Thus in silicon, there are two main bands.”

“The perfect crystal has atoms occupying all the positions prescribed by the geometry of its crystal lattice. But real crystalline materials fall short of perfection […] For instance, an individual site may be unoccupied (a vacancy). Or an extra atom may be squeezed into the crystal at a position which is not a lattice position (an interstitial). An atom may fall off its lattice site, creating a vacancy and an interstitial at the same time. Sometimes a site is occupied by the wrong kind of atom. Point defects of this kind distort the crystal in their immediate neighbourhood. Vacancies free up diffusional movement, allowing atoms to hop from site to site. Larger scale defects invariably exist too. A complete layer of atoms or unit cells may terminate abruptly within the crystal to produce a line defect (a dislocation). […] There are materials which try their best to crystallize, but find it hard to do so. Many polymer materials are like this. […] The best they can do is to form small crystalline regions in which the molecules lie side by side over limited distances. […] Often the crystalline domains comprise about half the material: it is a semicrystal. […] Crystals can be formed from the melt, from solution, and from the vapour. All three routes are used in industry and in the laboratory. As a rule, crystals that grow slowly are good crystals. Geological time can give wonderful results. Often, crystals are grown on a seed, a small crystal of the same material deliberately introduced into the crystallization medium. If this is a melt, the seed can gradually be pulled out, drawing behind it a long column of new crystal material. This is the Czochralski process, an important method for making semiconductors. […] However it is done, crystals invariably grow by adding material to the surface of a small particle to make it bigger.”

“As we go down the Periodic Table of elements, the atoms get heavier much more quickly than they get bigger. The mass of a single atom of uranium at the bottom of the Table is about 25 times greater than that of an atom of the lightest engineering metal, beryllium, at the top, but its radius is only 40 per cent greater. […] The density of solid materials of every kind is fixed mainly by where the constituent atoms are in the Periodic Table. The packing arrangement in the solid has only a small influence, although the crystalline form of a substance is usually a little denser than the amorphous form […] The range of solid densities available is therefore quite limited. At the upper end we hit an absolute barrier, with nothing denser than osmium (22,590 kg/m^3). At the lower end we have some slack, as we can make lighter materials by the trick of incorporating holes to make foams and sponges and porous materials of all kinds. […] in the entire catalogue of available materials there is a factor of about a thousand for ingenious people to play with, from say 20 to 20,000 kg/m^3.”
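
The uranium/beryllium comparison is worth doing explicitly, since it shows why atomic mass wins over atomic size (my own arithmetic from the numbers in the quote; the tabulated densities in the final comment are commonly cited values, not from the book):

```python
mass_ratio = 25        # a uranium atom is ~25 times heavier than a beryllium atom (from the quote)
radius_ratio = 1.4     # but only ~40 per cent larger in radius (from the quote)

volume_ratio = radius_ratio ** 3               # atomic volume scales with the cube of the radius
predicted_density_ratio = mass_ratio / volume_ratio
print(round(volume_ratio, 2), round(predicted_density_ratio, 1))   # ~2.74 and ~9.1

# Commonly tabulated densities, roughly 19,000 kg/m^3 (uranium) and 1,850 kg/m^3 (beryllium),
# give a ratio of about 10; differences in crystal packing account for the rest.
```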

“The expansion of materials as we increase their temperature is a universal tendency. It occurs because as we raise the temperature the thermal energy of the atoms and molecules increases correspondingly, and this fights against the cohesive forces of attraction. The mean distance of separation between atoms in the solid (or the liquid) becomes larger. […] As a general rule, the materials with small thermal expansivities are metals and ceramics with high melting temperatures. […] Although thermal expansion is a smooth process which continues from the lowest temperatures to the melting point, it is sometimes interrupted by sudden jumps […]. Changes in crystal structure at precise temperatures are commonplace in materials of all kinds. […] There is a cluster of properties which describe the thermal behaviour of materials. Besides the expansivity, there is the specific heat, and also the thermal conductivity. These properties show us, for example, that it takes about four times as much energy to increase the temperature of 1 kilogram of aluminium by 1°C as 1 kilogram of silver; and that good conductors of heat are usually also good conductors of electricity. At everyday temperatures there is not a huge difference in specific heat between materials. […] In all crystalline materials, thermal conduction arises from the diffusion of phonons from hot to cold regions. As they travel, the phonons are subject to scattering both by collisions with other phonons, and with defects in the material. This picture explains why the thermal conductivity falls as temperature rises”.

Materials science.
Metals.
Inorganic compound.
Organic compound.
Solid solution.
Copper. Bronze. Brass. Alloy.
Electrical conductivity.
Steel. Bessemer converter. Gamma iron. Alpha iron. Cementite. Martensite.
Phase diagram.
Equation of state.
Calcite. Limestone.
Birefringence.
Portland cement.
Cellulose.
Wood.
Ceramic.
Mineralogy.
Crystallography.
Laue diffraction pattern.
Silver bromide. Latent image. Photographic film. Henry Fox Talbot.
Graphene. Graphite.
Thermal expansion.
Invar.
Dulong–Petit law.
Wiedemann–Franz law.

November 14, 2017 Posted by | Biology, Books, Chemistry, Engineering, Physics | Leave a comment

Physical chemistry

This is a good book; I really liked it, just as I really liked the other book in the series which I read by the same author, the one about the laws of thermodynamics (blog coverage here). I know much, much more about physics than I do about chemistry, and even though some of it was review I learned a lot from this one. Recommended, certainly if you find the quotes below interesting. As usual, I’ve added some observations from the book and some links to topics/people/etc. covered/mentioned in the book below.

Some quotes:

“Physical chemists pay a great deal of attention to the electrons that surround the nucleus of an atom: it is here that the chemical action takes place and the element expresses its chemical personality. […] Quantum mechanics plays a central role in accounting for the arrangement of electrons around the nucleus. The early ‘Bohr model’ of the atom, […] with electrons in orbits encircling the nucleus like miniature planets and widely used in popular depictions of atoms, is wrong in just about every respect—but it is hard to dislodge from the popular imagination. The quantum mechanical description of atoms acknowledges that an electron cannot be ascribed to a particular path around the nucleus, that the planetary ‘orbits’ of Bohr’s theory simply don’t exist, and that some electrons do not circulate around the nucleus at all. […] Physical chemists base their understanding of the electronic structures of atoms on Schrödinger’s model of the hydrogen atom, which was formulated in 1926. […] An atom is often said to be mostly empty space. That is a remnant of Bohr’s model in which a point-like electron circulates around the nucleus; in the Schrödinger model, there is no empty space, just a varying probability of finding the electron at a particular location.”

“No more than two electrons may occupy any one orbital, and if two do occupy that orbital, they must spin in opposite directions. […] this form of the principle [the Pauli exclusion principle – US] […] is adequate for many applications in physical chemistry. At its very simplest, the principle rules out all the electrons of an atom (other than atoms of one-electron hydrogen and two-electron helium) having all their electrons in the 1s-orbital. Lithium, for instance, has three electrons: two occupy the 1s orbital, but the third cannot join them, and must occupy the next higher-energy orbital, the 2s-orbital. With that point in mind, something rather wonderful becomes apparent: the structure of the Periodic Table of the elements unfolds, the principal icon of chemistry. […] The first electron can enter the 1s-orbital, and helium’s (He) second electron can join it. At that point, the orbital is full, and lithium’s (Li) third electron must enter the next higher orbital, the 2s-orbital. The next electron, for beryllium (Be), can join it, but then it too is full. From that point on the next six electrons can enter in succession the three 2p-orbitals. After those six are present (at neon, Ne), all the 2p-orbitals are full and the eleventh electron, for sodium (Na), has to enter the 3s-orbital. […] Similar reasoning accounts for the entire structure of the Table, with elements in the same group all having analogous electron arrangements and each successive row (‘period’) corresponding to the next outermost shell of orbitals.”
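
The filling rule described here is easy to mimic in code. Below is a toy Python sketch of my own; it only handles the first eighteen elements, since it ignores the 4s/3d ordering complications that set in further down the Table:

# Fill subshells in order of increasing energy, at most 2 electrons per orbital
# (the Pauli principle); a p subshell holds three orbitals, i.e. 6 electrons.
subshells = [("1s", 2), ("2s", 2), ("2p", 6), ("3s", 2), ("3p", 6)]  # enough for Z <= 18

def configuration(z):
    """Return a ground-state electron configuration string for atomic number z (z <= 18)."""
    parts = []
    for name, capacity in subshells:
        if z <= 0:
            break
        n = min(z, capacity)
        parts.append(f"{name}^{n}")
        z -= n
    return " ".join(parts)

for symbol, z in [("H", 1), ("He", 2), ("Li", 3), ("Ne", 10), ("Na", 11)]:
    print(symbol, configuration(z))
# Li comes out as 1s^2 2s^1 and Na as 1s^2 2s^2 2p^6 3s^1, matching the text.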

“[O]n crossing the [Periodic] Table from left to right, atoms become smaller: even though they have progressively more electrons, the nuclear charge increases too, and draws the clouds in to itself. On descending a group, atoms become larger because in successive periods new outermost shells are started (as in going from lithium to sodium) and each new coating of cloud makes the atom bigger […] the ionization energy [is] the energy needed to remove one or more electrons from the atom. […] The ionization energy more or less follows the trend in atomic radii but in an opposite sense because the closer an electron lies to the positively charged nucleus, the harder it is to remove. Thus, ionization energy increases from left to right across the Table as the atoms become smaller. It decreases down a group because the outermost electron (the one that is most easily removed) is progressively further from the nucleus. […] the electron affinity [is] the energy released when an electron attaches to an atom. […] Electron affinities are highest on the right of the Table […] An ion is an electrically charged atom. That charge comes about either because the neutral atom has lost one or more of its electrons, in which case it is a positively charged cation […] or because it has captured one or more electrons and has become a negatively charged anion. […] Elements on the left of the Periodic Table, with their low ionization energies, are likely to lose electrons and form cations; those on the right, with their high electron affinities, are likely to acquire electrons and form anions. […] ionic bonds […] form primarily between atoms on the left and right of the Periodic Table.”

“Although the Schrödinger equation is too difficult to solve for molecules, powerful computational procedures have been developed by theoretical chemists to arrive at numerical solutions of great accuracy. All the procedures start out by building molecular orbitals from the available atomic orbitals and then setting about finding the best formulations. […] Depictions of electron distributions in molecules are now commonplace and very helpful for understanding the properties of molecules. It is particularly relevant to the development of new pharmacologically active drugs, where electron distributions play a central role […] Drug discovery, the identification of pharmacologically active species by computation rather than in vivo experiment, is an important target of modern computational chemistry.”

“Work […] involves moving against an opposing force; heat […] is the transfer of energy that makes use of a temperature difference. […] the internal energy of a system that is isolated from external influences does not change. That is the First Law of thermodynamics. […] A system possesses energy, it does not possess work or heat (even if it is hot). Work and heat are two different modes for the transfer of energy into or out of a system. […] if you know the internal energy of a system, then you can calculate its enthalpy simply by adding to U the product of pressure and volume of the system (H = U + pV). The significance of the enthalpy […] is that a change in its value is equal to the output of energy as heat that can be obtained from the system provided it is kept at constant pressure. For instance, if the enthalpy of a system falls by 100 joules when it undergoes a certain change (such as a chemical reaction), then we know that 100 joules of energy can be extracted as heat from the system, provided the pressure is constant.”
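
A minimal numerical sketch of the enthalpy bookkeeping described above; the 100-joule figure is the book's, everything else is an assumed illustration:

# Enthalpy: H = U + p*V.  At constant pressure the heat a change can release
# equals the fall in enthalpy.
def enthalpy(U, p, V):
    return U + p * V

p = 100_000.0               # constant pressure in pascals, about 1 atm (assumed)
U1, V1 = 5_000.0, 1.0e-3    # internal energy (J) and volume (m^3) before the change (assumed)
U2, V2 = 4_910.0, 0.9e-3    # after the change (assumed, chosen so that dH = -100 J)

dH = enthalpy(U2, p, V2) - enthalpy(U1, p, V1)
print(round(dH, 6))    # -100.0: the enthalpy falls by 100 J...
print(round(-dH, 6))   # ...so 100 J of heat can be extracted at constant pressure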

“In the old days of physical chemistry (well into the 20th century), the enthalpy changes were commonly estimated by noting which bonds are broken in the reactants and which are formed to make the products, so A → B might be the bond-breaking step and B → C the new bond-formation step, each with enthalpy changes calculated from knowledge of the strengths of the old and new bonds. That procedure, while often a useful rule of thumb, often gave wildly inaccurate results because bonds are sensitive entities with strengths that depend on the identities and locations of the other atoms present in molecules. Computation now plays a central role: it is now routine to be able to calculate the difference in energy between the products and reactants, especially if the molecules are isolated as a gas, and that difference easily converted to a change of enthalpy. […] Enthalpy changes are very important for a rational discussion of changes in physical state (vaporization and freezing, for instance) […] If we know the enthalpy change taking place during a reaction, then provided the process takes place at constant pressure we know how much energy is released as heat into the surroundings. If we divide that heat transfer by the temperature, then we get the associated entropy change in the surroundings. […] provided the pressure and temperature are constant, a spontaneous change corresponds to a decrease in Gibbs energy. […] the chemical potential can be thought of as the Gibbs energy possessed by a standard-size block of sample. (More precisely, for a pure substance the chemical potential is the molar Gibbs energy, the Gibbs energy per mole of atoms or molecules.)”
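
The entropy and Gibbs energy bookkeeping sketched in this passage, with purely illustrative numbers of my own:

# At constant pressure, the heat released to the surroundings is -dH, and the
# entropy change of the surroundings is that heat divided by the temperature.
# Spontaneity at constant T and p is judged by the Gibbs energy, dG = dH - T*dS.
T = 298.15          # kelvin
dH = -100.0         # enthalpy change of the system in joules (assumed, exothermic)
dS_system = -0.05   # entropy change of the system in J/K (assumed)

dS_surroundings = -dH / T
dG = dH - T * dS_system

print(round(dS_surroundings, 3))   # ~0.335 J/K gained by the surroundings
print(round(dG, 1))                # ~-85.1 J: negative, so the change is spontaneous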

“There are two kinds of work. One kind is the work of expansion that occurs when a reaction generates a gas and pushes back the atmosphere (perhaps by pressing out a piston). That type of work is called ‘expansion work’. However, a chemical reaction might do work other than by pushing out a piston or pushing back the atmosphere. For instance, it might do work by driving electrons through an electric circuit connected to a motor. This type of work is called ‘non-expansion work’. […] a change in the Gibbs energy of a system at constant temperature and pressure is equal to the maximum non-expansion work that can be done by the reaction. […] the link of thermodynamics with biology is that one chemical reaction might do the non-expansion work of building a protein from amino acids. Thus, a knowledge of the Gibbs energy changes accompanying metabolic processes is very important in bioenergetics, and much more important than knowing the enthalpy changes alone (which merely indicate a reaction’s ability to keep us warm).”

“[T]he probability that a molecule will be found in a state of particular energy falls off rapidly with increasing energy, so most molecules will be found in states of low energy and very few will be found in states of high energy. […] If the temperature is low, then the distribution declines so rapidly that only the very lowest levels are significantly populated. If the temperature is high, then the distribution falls off very slowly with increasing energy, and many high-energy states are populated. If the temperature is zero, the distribution has all the molecules in the ground state. If the temperature is infinite, all available states are equally populated. […] temperature […] is the single, universal parameter that determines the most probable distribution of molecules over the available states.”
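
A short Python sketch of the Boltzmann distribution's temperature dependence, using four assumed, evenly spaced energy levels:

import math

# Boltzmann distribution: the population of a state of energy E is proportional
# to exp(-E / (k*T)).  Low T piles everything into the ground state; high T
# spreads the molecules evenly over the available states.
k = 1.380649e-23                       # Boltzmann constant, J/K
energies = [0.0, 1e-21, 2e-21, 3e-21]  # four evenly spaced energy levels (assumed values, J)

def populations(T):
    weights = [math.exp(-E / (k * T)) for E in energies]
    total = sum(weights)
    return [w / total for w in weights]

for T in (10, 300, 30000):
    print(T, [round(p, 3) for p in populations(T)])
# At 10 K essentially only the ground state is populated; at 30,000 K the four
# levels approach equal populations, as described in the quote.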

“Mixing adds disorder and increases the entropy of the system and therefore lowers the Gibbs energy […] In the absence of mixing, a reaction goes to completion; when mixing of reactants and products is taken into account, equilibrium is reached when both are present […] Statistical thermodynamics, through the Boltzmann distribution and its dependence on temperature, allows physical chemists to understand why in some cases the equilibrium shifts towards reactants (which is usually unwanted) or towards products (which is normally wanted) as the temperature is raised. A rule of thumb […] is provided by a principle formulated by Henri Le Chatelier […] that a system at equilibrium responds to a disturbance by tending to oppose its effect. Thus, if a reaction releases energy as heat (is ‘exothermic’), then raising the temperature will oppose the formation of more products; if the reaction absorbs energy as heat (is ‘endothermic’), then raising the temperature will encourage the formation of more product.”

“Model building pervades physical chemistry […] some hold that the whole of science is based on building models of physical reality; much of physical chemistry certainly is.”

“For reasonably light molecules (such as the major constituents of air, N2 and O2) at room temperature, the molecules are whizzing around at an average speed of about 500 m/s (about 1000 mph). That speed is consistent with what we know about the propagation of sound, the speed of which is about 340 m/s through air: for sound to propagate, molecules must adjust their position to give a wave of undulating pressure, so the rate at which they do so must be comparable to their average speeds. […] a typical N2 or O2 molecule in air makes a collision every nanosecond and travels about 1000 molecular diameters between collisions. To put this scale into perspective: if a molecule is thought of as being the size of a tennis ball, then it travels about the length of a tennis court between collisions. Each molecule makes about a billion collisions a second.”
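
The quoted 'about 500 m/s' can be reproduced from the standard kinetic-theory formula for the mean molecular speed (the formula itself is textbook material, not spelled out in the quote):

import math

# Mean molecular speed from kinetic theory: v_mean = sqrt(8*R*T / (pi*M)),
# where M is the molar mass in kg/mol.
R = 8.314   # gas constant, J/(mol*K)
T = 298.0   # room temperature, K

for name, M in [("N2", 28.0e-3), ("O2", 32.0e-3)]:
    v_mean = math.sqrt(8 * R * T / (math.pi * M))
    print(name, round(v_mean), "m/s")
# N2 comes out at ~475 m/s and O2 at ~444 m/s, i.e. the 'about 500 m/s' of the quote.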

“X-ray diffraction makes use of the fact that electromagnetic radiation (which includes X-rays) consists of waves that can interfere with one another and give rise to regions of enhanced and diminished intensity. This so-called ‘diffraction pattern’ is characteristic of the object in the path of the rays, and mathematical procedures can be used to interpret the pattern in terms of the object’s structure. Diffraction occurs when the wavelength of the radiation is comparable to the dimensions of the object. X-rays have wavelengths comparable to the separation of atoms in solids, so are ideal for investigating their arrangement.”

“For most liquids the sample contracts when it freezes, so […] the temperature does not need to be lowered so much for freezing to occur. That is, the application of pressure raises the freezing point. Water, as in most things, is anomalous, and ice is less dense than liquid water, so water expands when it freezes […] when two gases are allowed to occupy the same container they invariably mix and each spreads uniformly through it. […] the quantity of gas that dissolves in any liquid is proportional to the pressure of the gas. […] When the temperature of [a] liquid is raised, it is easier for a dissolved molecule to gather sufficient energy to escape back up into the gas; the rate of impacts from the gas is largely unchanged. The outcome is a lowering of the concentration of dissolved gas at equilibrium. Thus, gases appear to be less soluble in hot water than in cold. […] the presence of dissolved substances affects the properties of solutions. For instance, the everyday experience of spreading salt on roads to hinder the formation of ice makes use of the lowering of freezing point of water when a salt is present. […] the boiling point is raised by the presence of a dissolved substance [whereas] the freezing point […] is lowered by the presence of a solute.”

“When a liquid and its vapour are present in a closed container the vapour exerts a characteristic pressure (when the escape of molecules from the liquid matches the rate at which they splash back down into it […][)] This characteristic pressure depends on the temperature and is called the ‘vapour pressure’ of the liquid. When a solute is present, the vapour pressure at a given temperature is lower than that of the pure liquid […] The extent of lowering is summarized by yet another limiting law of physical chemistry, ‘Raoult’s law’ [which] states that the vapour pressure of a solvent or of a component of a liquid mixture is proportional to the proportion of solvent or liquid molecules present. […] Osmosis [is] the tendency of solvent molecules to flow from the pure solvent to a solution separated from it by a [semi-]permeable membrane […] The entropy when a solute is present in a solvent is higher than when the solute is absent, so an increase in entropy, and therefore a spontaneous process, is achieved when solvent flows through the membrane from the pure liquid into the solution. The tendency for this flow to occur can be overcome by applying pressure to the solution, and the minimum pressure needed to overcome the tendency to flow is called the ‘osmotic pressure’. If one solution is put into contact with another through a semipermeable membrane, then there will be no net flow if they exert the same osmotic pressures and are ‘isotonic’.”
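
A small sketch of Raoult's law as stated above, plus the standard dilute-solution expression for osmotic pressure; the latter, pi = c*R*T, is a standard result rather than something given explicitly in the quote, and all the numbers are assumed for illustration:

# Raoult's law (as stated in the quote): the vapour pressure of the solvent over
# a solution equals the mole fraction of solvent times the vapour pressure of
# the pure solvent.
def raoult(p_pure, n_solvent, n_solute):
    x_solvent = n_solvent / (n_solvent + n_solute)
    return x_solvent * p_pure

p_water = 3170.0    # vapour pressure of pure water at 25 C, in pascals (approximate)
# One mole of solute dissolved in a litre of water (about 55.5 mol of water):
print(round(raoult(p_water, n_solvent=55.5, n_solute=1.0)))   # ~3114 Pa, lower than the pure value

# Osmotic pressure of a dilute solution, pi = c*R*T (the standard van 't Hoff
# relation, not quoted in the book):
R, T = 8.314, 298.0
c = 100.0           # solute concentration in moles per cubic metre (i.e. 0.1 mol/L)
print(round(c * R * T))   # ~248,000 Pa, roughly 2.4 atmospheres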

“Broadly speaking, the reaction quotient [‘Q’] is the ratio of concentrations, with product concentrations divided by reactant concentrations. It takes into account how the mingling of the reactants and products affects the total Gibbs energy of the mixture. The value of Q that corresponds to the minimum in the Gibbs energy […] is called the equilibrium constant and denoted K. The equilibrium constant, which is characteristic of a given reaction and depends on the temperature, is central to many discussions in chemistry. When K is large (1000, say), we can be reasonably confident that the equilibrium mixture will be rich in products; if K is small (0.001, say), then there will be hardly any products present at equilibrium and we should perhaps look for another way of making them. If K is close to 1, then both reactants and products will be abundant at equilibrium and will need to be separated. […] Equilibrium constants vary with temperature but not […] with pressure. […] van’t Hoff’s equation implies that if the reaction is strongly exothermic (releases a lot of energy as heat when it takes place), then the equilibrium constant decreases sharply as the temperature is raised. The opposite is true if the reaction is strongly endothermic (absorbs a lot of energy as heat). […] Typically it is found that the rate of a reaction [how fast it progresses] decreases as it approaches equilibrium. […] Most reactions go faster when the temperature is raised. […] reactions with high activation energies proceed slowly at low temperatures but respond sharply to changes of temperature. […] The surface area exposed by a catalyst is important for its function, for it is normally the case that the greater that area, the more effective is the catalyst.”
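
The claim about exothermic reactions and temperature can be checked with the integrated form of van 't Hoff's equation; the reaction enthalpy and the equilibrium constant at 298 K below are assumed values for illustration:

import math

# Integrated van 't Hoff equation: ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1).
# For an exothermic reaction (dH < 0), raising the temperature lowers K.
R = 8.314

def K_at(T2, K1, T1, dH):
    return K1 * math.exp(-(dH / R) * (1.0 / T2 - 1.0 / T1))

K_298 = 1000.0    # assumed equilibrium constant at 298 K
dH = -50_000.0    # assumed reaction enthalpy, J/mol (exothermic)

for T in (298, 350, 400):
    print(T, round(K_at(T, K_298, 298.0, dH), 1))
# K falls from 1000 at 298 K to roughly 50 at 350 K and 6 at 400 K: heating an
# exothermic reaction shifts the equilibrium back towards the reactants.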

Links:

John Dalton.
Atomic orbital.
Electron configuration.
S,p,d,f orbitals.
Computational chemistry.
Atomic radius.
Covalent bond.
Gilbert Lewis.
Valence bond theory.
Molecular orbital theory.
Orbital hybridisation.
Bonding and antibonding orbitals.
Schrödinger equation.
Density functional theory.
Chemical thermodynamics.
Laws of thermodynamics/Zeroth law/First law/Second law/Third Law.
Conservation of energy.
Thermochemistry.
Bioenergetics.
Spontaneous processes.
Entropy.
Rudolf Clausius.
Chemical equilibrium.
Heat capacity.
Compressibility.
Statistical thermodynamics/statistical mechanics.
Boltzmann distribution.
State of matter/gas/liquid/solid.
Perfect gas/Ideal gas law.
Robert Boyle/Joseph Louis Gay-Lussac/Jacques Charles/Amedeo Avogadro.
Equation of state.
Kinetic theory of gases.
Van der Waals equation of state.
Maxwell–Boltzmann distribution.
Thermal conductivity.
Viscosity.
Nuclear magnetic resonance.
Debye–Hückel equation.
Ionic solids.
Catalysis.
Supercritical fluid.
Liquid crystal.
Graphene.
Benoît Paul Émile Clapeyron.
Phase (matter)/phase diagram/Gibbs’ phase rule.
Ideal solution/regular solution.
Henry’s law.
Chemical kinetics.
Electrochemistry.
Rate equation/First order reactions/Second order reactions.
Rate-determining step.
Arrhenius equation.
Collision theory.
Diffusion-controlled and activation-controlled reactions.
Transition state theory.
Photochemistry/fluorescence/phosphorescence/photoexcitation.
Photosynthesis.
Redox reactions.
Electrochemical cell.
Fuel cell.
Reaction dynamics.
Spectroscopy/emission spectroscopy/absorption spectroscopy/Raman spectroscopy.
Raman effect.
Magnetic resonance imaging.
Fourier-transform spectroscopy.
Electron paramagnetic resonance.
Mass spectrum.
Electron spectroscopy for chemical analysis.
Scanning tunneling microscope.
Chemisorption/physisorption.

October 5, 2017 Posted by | Biology, Books, Chemistry, Pharmacology, Physics | Leave a comment

Earth System Science

I decided not to rate this book. Some parts are great, some parts I didn’t think were very good.

I’ve added some quotes and links below. First a few links (I’ve tried not to add links here which I’ve also included in the quotes below):

Carbon cycle.
Origin of water on Earth.
Gaia hypothesis.
Albedo (climate and weather).
Snowball Earth.
Carbonate–silicate cycle.
Carbonate compensation depth.
Isotope fractionation.
CLAW hypothesis.
Mass-independent fractionation.
δ13C.
Great Oxygenation Event.
Acritarch.
Grypania.
Neoproterozoic.
Rodinia.
Sturtian glaciation.
Marinoan glaciation.
Ediacaran biota.
Cambrian explosion.
Quaternary.
Medieval Warm Period.
Little Ice Age.
Eutrophication.
Methane emissions.
Keeling curve.
CO2 fertilization effect.
Acid rain.
Ocean acidification.
Earth systems models.
Clausius–Clapeyron relation.
Thermohaline circulation.
Cryosphere.
The limits to growth.
Exoplanet Biosignature Gases.
Transiting Exoplanet Survey Satellite (TESS).
James Webb Space Telescope.
Habitable zone.
Kepler-186f.

A few quotes from the book:

“The scope of Earth system science is broad. It spans 4.5 billion years of Earth history, how the system functions now, projections of its future state, and ultimate fate. […] Earth system science is […] a deeply interdisciplinary field, which synthesizes elements of geology, biology, chemistry, physics, and mathematics. It is a young, integrative science that is part of a wider 21st-century intellectual trend towards trying to understand complex systems, and predict their behaviour. […] A key part of Earth system science is identifying the feedback loops in the Earth system and understanding the behaviour they can create. […] In systems thinking, the first step is usually to identify your system and its boundaries. […] what is part of the Earth system depends on the timescale being considered. […] The longer the timescale we look over, the more we need to include in the Earth system. […] for many Earth system scientists, the planet Earth is really comprised of two systems — the surface Earth system that supports life, and the great bulk of the inner Earth underneath. It is the thin layer of a system at the surface of the Earth […] that is the subject of this book.”

“Energy is in plentiful supply from the Sun, which drives the water cycle and also fuels the biosphere, via photosynthesis. However, the surface Earth system is nearly closed to materials, with only small inputs to the surface from the inner Earth. Thus, to support a flourishing biosphere, all the elements needed by life must be efficiently recycled within the Earth system. This in turn requires energy, to transform materials chemically and to move them physically around the planet. The resulting cycles of matter between the biosphere, atmosphere, ocean, land, and crust are called global biogeochemical cycles — because they involve biological, geological, and chemical processes. […] The global biogeochemical cycling of materials, fuelled by solar energy, has transformed the Earth system. […] It has made the Earth fundamentally different from its state before life and from its planetary neighbours, Mars and Venus. Through cycling the materials it needs, the Earth’s biosphere has bootstrapped itself into a much more productive state.”

“Each major element important for life has its own global biogeochemical cycle. However, every biogeochemical cycle can be conceptualized as a series of reservoirs (or ‘boxes’) of material connected by fluxes (or flows) of material between them. […] When a biogeochemical cycle is in steady state, the fluxes in and out of each reservoir must be in balance. This allows us to define additional useful quantities. Notably, the amount of material in a reservoir divided by the exchange flux with another reservoir gives the average ‘residence time’ of material in that reservoir with respect to the chosen process of exchange. For example, there are around 7 × 10¹⁶ moles of carbon dioxide (CO2) in today’s atmosphere, and photosynthesis removes around 9 × 10¹⁵ moles of CO2 per year, giving each molecule of CO2 a residence time of roughly eight years in the atmosphere before it is taken up, somewhere in the world, by photosynthesis. […] There are 3.8 × 10¹⁹ moles of molecular oxygen (O2) in today’s atmosphere, and oxidative weathering removes around 1 × 10¹³ moles of O2 per year, giving oxygen a residence time of around four million years with respect to removal by oxidative weathering. This makes the oxygen cycle […] a geological timescale cycle.”
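
The residence-time arithmetic is simple enough to script; the reservoir sizes and fluxes below are the ones given in the quote:

# Residence time = reservoir size / exchange flux.
def residence_time(reservoir, flux_per_year):
    return reservoir / flux_per_year

co2_in_atmosphere = 7e16       # moles of CO2 (from the text)
photosynthesis_flux = 9e15     # moles of CO2 removed per year (from the text)
o2_in_atmosphere = 3.8e19      # moles of O2 (from the text)
oxidative_weathering = 1e13    # moles of O2 removed per year (from the text)

print(round(residence_time(co2_in_atmosphere, photosynthesis_flux), 1))  # ~7.8 years
print(residence_time(o2_in_atmosphere, oxidative_weathering))            # 3.8 million years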

“The water cycle is the physical circulation of water around the planet, between the ocean (where 97 per cent is stored), atmosphere, ice sheets, glaciers, sea-ice, freshwaters, and groundwater. […] To change the phase of water from solid to liquid or liquid to gas requires energy, which in the climate system comes from the Sun. Equally, when water condenses from gas to liquid or freezes from liquid to solid, energy is released. Solar heating drives evaporation from the ocean. This is responsible for supplying about 90 per cent of the water vapour to the atmosphere, with the other 10 per cent coming from evaporation on the land and freshwater surfaces (and sublimation of ice and snow directly to vapour). […] The water cycle is intimately connected to other biogeochemical cycles […]. Many compounds are soluble in water, and some react with water. This makes the ocean a key reservoir for several essential elements. It also means that rainwater can scavenge soluble gases and aerosols out of the atmosphere. When rainwater hits the land, the resulting solution can chemically weather rocks. Silicate weathering in turn helps keep the climate in a state where water is liquid.”

“In modern terms, plants acquire their carbon from carbon dioxide in the atmosphere, add electrons derived from water molecules to the carbon, and emit oxygen to the atmosphere as a waste product. […] In energy terms, global photosynthesis today captures about 130 terawatts (1 TW = 10¹² W) of solar energy in chemical form — about half of it in the ocean and about half on land. […] All the breakdown pathways for organic carbon together produce a flux of carbon dioxide back to the atmosphere that nearly balances photosynthetic uptake […] The surface recycling system is almost perfect, but a tiny fraction (about 0.1 per cent) of the organic carbon manufactured in photosynthesis escapes recycling and is buried in new sedimentary rocks. This organic carbon burial flux leaves an equivalent amount of oxygen gas behind in the atmosphere. Hence the burial of organic carbon represents the long-term source of oxygen to the atmosphere. […] the Earth’s crust has much more oxygen trapped in rocks in the form of oxidized iron and sulphur, than it has organic carbon. This tells us that there has been a net source of oxygen to the crust over Earth history, which must have come from the loss of hydrogen to space.”

“The oxygen cycle is relatively simple, because the reservoir of oxygen in the atmosphere is so massive that it dwarfs the reservoirs of organic carbon in vegetation, soils, and the ocean. Hence oxygen cannot get used up by the respiration or combustion of organic matter. Even the combustion of all known fossil fuel reserves can only put a small dent in the much larger reservoir of atmospheric oxygen (there are roughly 4 × 10¹⁷ moles of fossil fuel carbon, which is only about 1 per cent of the O2 reservoir). […] Unlike oxygen, the atmosphere is not the major surface reservoir of carbon. The amount of carbon in global vegetation is comparable to that in the atmosphere and the amount of carbon in soils (including permafrost) is roughly four times that in the atmosphere. Even these reservoirs are dwarfed by the ocean, which stores forty-five times as much carbon as the atmosphere, thanks to the fact that CO2 reacts with seawater. […] The exchange of carbon between the atmosphere and the land is largely biological, involving photosynthetic uptake and release by aerobic respiration (and, to a lesser extent, fires). […] Remarkably, when we look over Earth history there are fluctuations in the isotopic composition of carbonates, but no net drift up or down. This suggests that there has always been roughly one-fifth of carbon being buried in organic form and the other four-fifths as carbonate rocks. Thus, even on the early Earth, the biosphere was productive enough to support a healthy organic carbon burial flux.”

“The two most important nutrients for life are phosphorus and nitrogen, and they have very different biogeochemical cycles […] The largest reservoir of nitrogen is in the atmosphere, whereas the heavier phosphorus has no significant gaseous form. Phosphorus thus presents a greater recycling challenge for the biosphere. All phosphorus enters the surface Earth system from the chemical weathering of rocks on land […]. Phosphorus is concentrated in rocks in grains or veins of the mineral apatite. Natural selection has made plants on land and their fungal partners […] very effective at acquiring phosphorus from rocks, by manufacturing and secreting a range of organic acids that dissolve apatite. […] The average terrestrial ecosystem recycles phosphorus roughly fifty times before it is lost into freshwaters. […] The loss of phosphorus from the land is the ocean’s gain, providing the key input of this essential nutrient. Phosphorus is stored in the ocean as phosphate dissolved in the water. […] removal of phosphorus into the rock cycle balances the weathering of phosphorus from rocks on land. […] Although there is a large reservoir of nitrogen in the atmosphere, the molecules of nitrogen gas (N2) are extremely strongly bonded together, making nitrogen unavailable to most organisms. To split N2 and make nitrogen biologically available requires a remarkable biochemical feat — nitrogen fixation — which uses a lot of energy. In the ocean the dominant nitrogen fixers are cyanobacteria with a direct source of energy from sunlight. On land, various plants form a symbiotic partnership with nitrogen fixing bacteria, making a home for them in root nodules and supplying them with food in return for nitrogen. […] Nitrogen fixation and denitrification form the major input and output fluxes of nitrogen to both the land and the ocean, but there is also recycling of nitrogen within ecosystems. […] There is an intimate link between nutrient regulation and atmospheric oxygen regulation, because nutrient levels and marine productivity determine the source of oxygen via organic carbon burial. However, ocean nutrients are regulated on a much shorter timescale than atmospheric oxygen because their residence times are much shorter—about 2,000 years for nitrogen and 20,000 years for phosphorus.”

“[F]orests […] are vulnerable to increases in oxygen that increase the frequency and ferocity of fires. […] Combustion experiments show that fires only become self-sustaining in natural fuels when oxygen reaches around 17 per cent of the atmosphere. Yet for the last 370 million years there is a nearly continuous record of fossil charcoal, indicating that oxygen has never dropped below this level. At the same time, oxygen has never risen too high for fires to have prevented the slow regeneration of forests. The ease of combustion increases non-linearly with oxygen concentration, such that above 25–30 per cent oxygen (depending on the wetness of fuel) it is hard to see how forests could have survived. Thus oxygen has remained within 17–30 per cent of the atmosphere for at least the last 370 million years.”

“[T]he rate of silicate weathering increases with increasing CO2 and temperature. Thus, if something tends to increase CO2 or temperature it is counteracted by increased CO2 removal by silicate weathering. […] Plants are sensitive to variations in CO2 and temperature, and together with their fungal partners they greatly amplify weathering rates […] the most pronounced change in atmospheric CO2 over Phanerozoic time was due to plants colonizing the land. This started around 470 million years ago and escalated with the first forests 370 million years ago. The resulting acceleration of silicate weathering is estimated to have lowered the concentration of atmospheric CO2 by an order of magnitude […], and cooled the planet into a series of ice ages in the Carboniferous and Permian Periods.”

“The first photosynthesis was not the kind we are familiar with, which splits water and spits out oxygen as a waste product. Instead, early photosynthesis was ‘anoxygenic’ — meaning it didn’t produce oxygen. […] It could have used a range of compounds, in place of water, as a source of electrons with which to fix carbon from carbon dioxide and reduce it to sugars. Potential electron donors include hydrogen (H2) and hydrogen sulphide (H2S) in the atmosphere, or ferrous iron (Fe2+) dissolved in the ancient oceans. All of these are easier to extract electrons from than water. Hence they require fewer photons of sunlight and simpler photosynthetic machinery. The phylogenetic tree of life confirms that several forms of anoxygenic photosynthesis evolved very early on, long before oxygenic photosynthesis. […] If the early biosphere was fuelled by anoxygenic photosynthesis, plausibly based on hydrogen gas, then a key recycling process would have been the biological regeneration of this gas. Calculations suggest that once such recycling had evolved, the early biosphere might have achieved a global productivity up to 1 per cent of the modern marine biosphere. If early anoxygenic photosynthesis used the supply of reduced iron upwelling in the ocean, then its productivity would have been controlled by ocean circulation and might have reached 10 per cent of the modern marine biosphere. […] The innovation that supercharged the early biosphere was the origin of oxygenic photosynthesis using abundant water as an electron donor. This was not an easy process to evolve. To split water requires more energy — i.e. more high-energy photons of sunlight — than any of the earlier anoxygenic forms of photosynthesis. Evolution’s solution was to wire together two existing ‘photosystems’ in one cell and bolt on the front of them a remarkable piece of biochemical machinery that can rip apart water molecules. The result was the first cyanobacterial cell — the ancestor of all organisms performing oxygenic photosynthesis on the planet today. […] Once oxygenic photosynthesis had evolved, the productivity of the biosphere would no longer have been restricted by the supply of substrates for photosynthesis, as water and carbon dioxide were abundant. Instead, the availability of nutrients, notably nitrogen and phosphorus, would have become the major limiting factors on the productivity of the biosphere — as they still are today.” [If you’re curious to know more about how that fascinating ‘biochemical machinery’ works, this is a great book on these and related topics – US].

“On Earth, anoxygenic photosynthesis requires one photon per electron, whereas oxygenic photosynthesis requires two photons per electron. On Earth it took up to a billion years to evolve oxygenic photosynthesis, based on two photosystems that had already evolved independently in different types of anoxygenic photosynthesis. Around a fainter K- or M-type star […] oxygenic photosynthesis is estimated to require three or more photons per electron — and a corresponding number of photosystems — making it harder to evolve. […] However, fainter stars spend longer on the main sequence, giving more time for evolution to occur.”

“There was a lot more energy to go around in the post-oxidation world, because respiration of organic matter with oxygen yields an order of magnitude more energy than breaking food down anaerobically. […] The revolution in biological complexity culminated in the ‘Cambrian Explosion’ of animal diversity 540 to 515 million years ago, in which modern food webs were established in the ocean. […] Since then the most fundamental change in the Earth system has been the rise of plants on land […], beginning around 470 million years ago and culminating in the first global forests by 370 million years ago. This doubled global photosynthesis, increasing flows of materials. Accelerated chemical weathering of the land surface lowered atmospheric carbon dioxide levels and increased atmospheric oxygen levels, fully oxygenating the deep ocean. […] Although grasslands now cover about a third of the Earth’s productive land surface they are a geologically recent arrival. Grasses evolved amidst a trend of declining atmospheric carbon dioxide, and climate cooling and drying, over the past forty million years, and they only became widespread in two phases during the Miocene Epoch around seventeen and six million years ago. […] Since the rise of complex life, there have been several mass extinction events. […] whilst these rolls of the extinction dice marked profound changes in evolutionary winners and losers, they did not fundamentally alter the operation of the Earth system.” [If you’re interested in this kind of stuff, the evolution of food webs and so on, Herrera et al.’s wonderful book is a great place to start – US]

“The Industrial Revolution marks the transition from societies fuelled largely by recent solar energy (via biomass, water, and wind) to ones fuelled by concentrated ‘ancient sunlight’. Although coal had been used in small amounts for millennia, for example for iron making in ancient China, fossil fuel use only took off with the invention and refinement of the steam engine. […] With the Industrial Revolution, food and biomass have ceased to be the main source of energy for human societies. Instead the energy contained in annual food production, which supports today’s population, is at fifty exajoules (1 EJ = 10¹⁸ joules), only about a tenth of the total energy input to human societies of 500 EJ/yr. This in turn is equivalent to about a tenth of the energy captured globally by photosynthesis. […] solar energy is not very efficiently converted by photosynthesis, which is 1–2 per cent efficient at best. […] The amount of sunlight reaching the Earth’s land surface (2.5 × 10¹⁶ W) dwarfs current total human power consumption (1.5 × 10¹³ W) by more than a factor of a thousand.”
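
A quick sanity check of the orders of magnitude in this passage, using only the figures quoted above:

# Orders of magnitude from the quote: annual food energy, total human energy use,
# sunlight reaching the land surface, and total human power consumption.
food_energy = 50e18            # J/yr
total_human_energy = 500e18    # J/yr
sunlight_on_land = 2.5e16      # W
human_power = 1.5e13           # W

print(food_energy / total_human_energy)       # 0.1, i.e. food is about a tenth of the total
print(round(sunlight_on_land / human_power))  # ~1700, 'more than a factor of a thousand'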

“The Earth system’s primary energy source is sunlight, which the biosphere converts and stores as chemical energy. The energy-capture devices — photosynthesizing organisms — construct themselves out of carbon dioxide, nutrients, and a host of trace elements taken up from their surroundings. Inputs of these elements and compounds from the solid Earth system to the surface Earth system are modest. Some photosynthesizers have evolved to increase the inputs of the materials they need — for example, by fixing nitrogen from the atmosphere and selectively weathering phosphorus out of rocks. Even more importantly, other heterotrophic organisms have evolved that recycle the materials that the photosynthesizers need (often as a by-product of consuming some of the chemical energy originally captured in photosynthesis). This extraordinary recycling system is the primary mechanism by which the biosphere maintains a high level of energy capture (productivity).”

“[L]ike all stars on the ‘main sequence’ (which generate energy through the nuclear fusion of hydrogen into helium), the Sun is burning inexorably brighter with time — roughly 1 per cent brighter every 100 million years — and eventually this will overheat the planet. […] Over Earth history, the silicate weathering negative feedback mechanism has counteracted the steady brightening of the Sun by removing carbon dioxide from the atmosphere. However, this cooling mechanism is near the limits of its operation, because CO2 has fallen to limiting levels for the majority of plants, which are key amplifiers of silicate weathering. Although a subset of plants have evolved which can photosynthesize down to lower CO2 levels [the author does not go further into this topic, but here’s a relevant link – US], they cannot draw CO2 down lower than about 10 ppm. This means there is a second possible fate for life — running out of CO2. Early models projected either CO2 starvation or overheating […] occurring about a billion years in the future. […] Whilst this sounds comfortingly distant, it represents a much shorter future lifespan for the Earth’s biosphere than its past history. Earth’s biosphere is entering its old age.”

September 28, 2017 Posted by | Astronomy, Biology, Books, Botany, Chemistry, Geology, Paleontology, Physics | Leave a comment

Sound

I gave the book two stars. As I was writing this post I was actually reconsidering, thinking about whether that was too harsh, whether the book deserved a third star. When I started out reading it I was assuming it would be a ‘physics book’ (I found it via browsing a list of physics books, so…), but that quickly turned out to be a mistaken assumption. There’s stuff about wave mechanics in there, sure, but this book also includes stuff about anatomy (a semi-detailed coverage of how the ear works), how musical instruments work, how bats use echolocation to find insects, and how animals who live underwater hear differently from the way we hear things. This book is really ‘all over the place’, which was probably part of why I didn’t like it as much as I might otherwise have. Lots of interesting stuff included, though – I learned quite a bit from this book.

I’ve added some quotes from the book below, and below the quotes I’ve added some links to stuff/concepts/etc. covered in the book.

“Decibels aren’t units — they are ratios […] To describe the sound of a device in decibels, it is vital to know what you are comparing it with. For airborne sound, the comparison is with a sound that is just hearable (corresponding to a pressure of twenty micropascals). […] Ultrasound engineers don’t care how much ‘louder than you can just about hear’ their ultrasound is, because no one can hear it in the first place. It’s power they like, and it’s watts they measure it in. […] Few of us care how much sound an object produces — what we want to know is how loud it will sound. And that depends on how far away the thing is. This may seem obvious, but it means that we can’t ever say that the SPL [sound pressure level] of a car horn is 90 dB, only that it has that value at some stated distance.”
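
A small sketch of how a sound pressure level is computed against the 20-micropascal reference mentioned above; the 20*log10 form is the standard definition rather than something spelled out in the quote, and the example pressures are my own:

import math

# Sound pressure level in decibels relative to the 20-micropascal reference (the
# 'just hearable' sound mentioned above).  The factor of 20 rather than 10
# appears because sound power scales with the square of pressure.
P_REF = 20e-6   # pascals

def spl_db(pressure_pa):
    return 20 * math.log10(pressure_pa / P_REF)

print(round(spl_db(20e-6)))   # 0 dB, the threshold of hearing
print(round(spl_db(0.02)))    # 60 dB, roughly conversational speech (assumed example)
print(round(spl_db(2.0)))     # 100 dB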

“For an echo to be an echo, it must be heard more than about 1/20 of a second after the sound itself. If heard before that, the ear responds as if to a single, louder, sound. Thus 1/20 second is the auditory equivalent to the 1/5 of a second that our eyes need to see a changing thing as two separate images. […] Since airborne sounds travel about 10 metres in 1/20 second, rooms larger than this (in any dimension) are echo chambers waiting to happen.”

“Being able to hear is unremarkable: powerful sounds shake the body and can be detected even by single-celled organisms. But being able to hear as well as we do is little short of miraculous: we can quite easily detect a sound which delivers a power of 10⁻¹⁵ watts to the eardrums, despite the fact that it moves them only a fraction of the width of a hydrogen atom. Almost as impressive is the range of sound powers we can hear. The gap between the quietest audible sound level (the threshold of hearing, 0 dB) to the threshold of pain (around 130 dB) is huge: 130 dB is 10¹³ […] We can also hear a fairly wide range of frequencies; about ten octaves, a couple more than a piano keyboard. […] Our judgement of directionality, by contrast, is mediocre; even in favourable conditions we can only determine the direction of a sound’s source within about 10° horizontally or 20° vertically; many other animals can do very much better. […] Perhaps the most impressive of all our hearing abilities is that we can understand words whose levels are less than 10 per cent of that of background noise level (if that background is a broad spread of frequencies): this far surpasses any machine.”

“The nerve signals that emerge from the basilar membrane are not mimics of sound waves, but coded messages which contain three pieces of information: (a) how many nerve fibres are signalling at once, (b) how far along the basilar membrane those fibres are, and (c) how long the interval is between bursts of fibre signals. The brain extracts loudness information from a combination of (a) and (c), and pitch information from (b) and (c). […] The hearing system is a delicate one, and severe damage to the eardrums or ossicles is not uncommon. […] This condition is called conductive hearing loss. If damage to the inner ear or auditory nerve occurs, the result is sensorineural or ‘nerve’ hearing loss. It mostly affects higher frequencies and quieter sounds; in mild forms, it gives rise to a condition called recruitment, in which there is a sudden jump in the ‘hearability’ of sounds. A person suffering from recruitment and exposed to a sound of gradually increasing level can at first detect nothing and then suddenly hears the sound, which seems particularly loud. Hence the ‘there’s no need to shout’ protest in response to those who raise their voices just a little to make themselves heard on a second attempt. Sensorineural hearing loss is the commonest type, and its commonest cause is physical damage inflicted on the hair cells. […] About 360 million people worldwide (over 5 per cent of the global population) have ‘disabling’ hearing loss — that is, hearing loss greater than 40 dB in the better-hearing ear in adults and a hearing loss greater than 30 dB in the better-hearing ear in children […]. About one in three people over the age of sixty-five suffer from such hearing loss. […] [E]veryone’s ability to hear high-frequency sounds declines with age: newborn, we can hear up to 20 kHz, by the age of about forty this has fallen to around 16 kHz, and to 10 kHz by age sixty. Aged eighty, most of us are deaf to sounds above 8 kHz. The effect is called presbyacusis”.

“The acoustic reflex is one cause of temporary threshold shift (TTS), in which sounds which are usually quiet become inaudible. Unfortunately, the time the reflex takes to work […] is usually around 45 milliseconds, which is far longer than it takes an impulse sound, like a gunshot or explosion, to do considerable damage. […] Where the overburdened ear differs from other abused measuring instruments (biological and technological) is that it is not only the SPL of noise that matters: energy counts too. A noise at a level which would cause no more than irritation if listened to for a second can lead to significant hearing loss if it continues for an hour. The amount of TTS is proportional to the logarithm of the time for which the noise has been present — that is, doubling the exposure time more than doubles the amount. […] The amount of TTS reduces considerably if there is a pause in the noise, so if exposure to noise for long periods is unavoidable […], there is very significant benefit in removing oneself from the noisy area, if only for fifteen minutes.”

“Many highly effective technological solutions to noise have been developed. […] The first principle of noise control is to identify the source and remove it. […] Having dealt as far as possible with the noise source, the next step is to contain it. […] When noise can be neither avoided nor contained, the next step is to keep its sources well separated from potential sufferers. One approach, used for thousands of years, is zoning: legislating for the restriction of noisy activities to particular areas, such as industrial zones, which are distant from residential districts. […] Where zone separation by distance is impracticable […], sound barriers are the main solution: a barrier that just cuts off the sight of a noise source will reduce the noise level by about 5 dB, and each additional metre will provide about an extra 1.5 dB reduction. […] Since barriers largely reflect rather than absorb, reflected sounds need consideration, but otherwise design and construction are simple, results are predictable, and costs are relatively low.”
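
The barrier rule of thumb translates into a one-line estimate (my own paraphrase of the book's numbers):

# Rule of thumb from the quote: a barrier that just blocks the line of sight
# gives about 5 dB of attenuation, plus roughly 1.5 dB per extra metre of height.
def barrier_attenuation_db(extra_height_m):
    return 5.0 + 1.5 * extra_height_m

for h in (0, 1, 2, 4):
    print(f"{h} m above the line of sight: {barrier_attenuation_db(h):.1f} dB")
# A barrier 4 m above the line of sight gives roughly 11 dB of attenuation.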

“[T]he basic approaches to home sound reduction are simple: stop noise entering, destroy what does get in, and don’t add more to it yourself. There are three ways for sound to enter: via openings; by structure-borne vibration; and through walls, windows, doors, ceilings, and floors acting as diaphragms. In all three cases, the main point to bear in mind is that an acoustic shell is only as good as its weakest part: just as even a small hole in an otherwise watertight ship’s hull renders the rest useless, so does a single open window in a double-glazed house. In fact, the situation with noise is much worse than with water due to the logarithmic response of our ears: if we seal one of two identical holes in a boat we will halve the inflow. If we close one of two identical windows into a house, […] that 50 per cent reduction in acoustic intensity is only about a 2 per cent reduction in loudness. The second way to keep noise out is double glazing, since single-glazed windows make excellent diaphragms. Structure-borne sound is a much greater challenge […] One inexpensive, adaptable, and effective solution […] is the hanging of heavy velour drapes, with as many folds as possible. If something more drastic is required, it is vital to involve an expert: while an obvious solution is to thicken walls, it’s important to bear in mind that doubling thickness reduces transmission by only 6 dB (a sound power reduction of about three-quarters, but a loudness reduction of only about 40 per cent). This means that solid walls need to be very thick to work well. A far better approach is the use of porous absorbers and of multi-layer constructions. In a porous absorber like glass fibre, higher-frequency sound waves are lost through multiple reflections from the many internal surfaces. […] A well-fitted acoustically insulated door is also vital. The floor should not be neglected: even if there are no rooms beneath, hard floors are excellent both at generating noise when walked on and in transmitting that noise throughout the building. Carpet and underlay are highly effective at high frequencies but are almost useless at lower ones […] again there is no real alternative to bringing in an expert.”

“There are two reasons for the apparent silence of the sea: one physical, the other biological. The physical one is the impedance mismatch between air and water, in consequence of which the surface acts as an acoustic mirror, reflecting back almost all sound from below, so that land-dwellers hear no more than the breaking of the waves. […] underwater, the eardrum has water on one side and air on the other, and so impedance mismatching once more prevents most sound from entering. If we had no eardrums (nor air-filled middle ears) we would probably hear very well underwater. Underwater animals don’t need such complicated ears as ours: since the water around them is a similar density to their flesh, sound enters and passes through their whole bodies easily […] because the velocity of sound is about five times greater in water than in air, the wavelength corresponding to a particular frequency is also about five times greater than its airborne equivalent, so directionality is harder to come by.”

“Although there is little that electromagnetic radiation does above water that sound cannot do below it, sound has one unavoidable disadvantage: its velocity in water is much lower than that of electromagnetic radiation in air […]. Also, when waves are used to send data, the rate of that data transmission is directly proportional to the wave frequency — and audio sound waves are around 1,000 times lower in frequency than radio waves. For this reason ultrasound is used instead, since its frequencies can match those of radio waves. Another advantage is that it is easier to produce directional beams at ultrasonic frequencies to send the signal in only the direction you want. […] The distances over which sound can travel underwater are amazing. […] sound waves are absorbed far less in water than in air. At 1 kHz, absorption is about 5 dB/km in air (at 30 per cent humidity) but only 0.06 dB/km in seawater. Also, underwater sound waves are much more confined; a noise made in mid-air spreads in all directions, but in the sea the bed and the surface limit vertical spreading. […] The range of sound velocities underwater is [also] far larger than in air, because of the enormous variations in density, which is affected by temperature, pressure, and salinity […] somewhere under all oceans there is a layer at which sound velocity is low, sandwiched between regions in which it is higher. By refraction, sound waves from both above and below are diverted towards the region of minimum sound velocity, and are trapped there. This is the deep sound channel, a thin spherical shell extending through the world’s oceans. Since sound waves in the deep sound channel can move only horizontally, their intensity falls in proportion only to the distance they travel, rather than to the square of the distance, as they would in air or in water at a single temperature (in other words, they spread out in circles, not spheres). Sound absorption in the deep sound channel is very low […] and sound waves in the deep channel can readily circumnavigate the Earth.”
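
A sketch comparing spherical (1/r²) and cylindrical (1/r) spreading losses in decibels, using the standard 10*log10 expression for geometric spreading (the formula is not given in the quote):

import math

# Geometric spreading loss relative to a reference distance of 1 km, in dB.
# Spherical spreading (open air or water): intensity falls as 1/r^2.
# Cylindrical spreading (the deep sound channel): intensity falls as 1/r.
def spreading_loss_db(distance_km, exponent):
    return 10 * math.log10(distance_km ** exponent)

for r in (10, 100, 1000):
    spherical = spreading_loss_db(r, 2)
    cylindrical = spreading_loss_db(r, 1)
    print(f"{r:>5} km: spherical {spherical:.0f} dB, cylindrical {cylindrical:.0f} dB")
# Over 1,000 km the channel's 1/r spreading costs 30 dB instead of 60 dB, one
# reason (along with the very low absorption) why channel sound can circle the globe.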

Links:

Sound.
Neuromast.
Monochord.
Echo.
Pierre-Simon Laplace.
Sonar.
Foley.
Long Range Acoustic Device.
Physics of sound.
Speed of sound.
Shock wave.
Doppler effect.
Acoustic mirror.
Acoustic impedance.
Snell’s law.
Diffraction grating.
Interference (wave propagation).
Acousto-optic effect.
Sound pressure.
Sound intensity.
Square-cube law.
Decibel.
Ultrasound.
Sound level meter.
Phon.
Standing wave.
Harmonic.
Resonance.
Helmholtz resonance.
Phonautograph.
Spectrogram.
Fourier series/Fourier transform/Fast Fourier transform.
Equalization (audio).
Absolute pitch.
Consonance and dissonance.
Pentatonic scale.
Major and minor.
Polyphony.
Rhythm.
Pitched percussion instrument/Unpitched percussion instrument.
Hearing.
Ear/pinna/tympanic membrane/Eustachian tube/Middle ear/Inner ear/Cochlea/Organ of Corti.
Otoacoustic emission.
Broca’s area/primary auditory cortex/Wernicke’s area/Haas effect.
Conductive hearing loss/Sensorineural hearing loss.
Microphone/Carbon microphone/Electret microphone/Ribbon microphone.
Piezoelectric effect.
Loudspeaker.
Missing fundamental.
Huffman coding.
Animal echolocation.
Phonon.
Infrasound.
Hydrophone.
Deep sound channel.
Tonpilz.
Stokes’ law of sound attenuation.
Noise.
Acoustic reflex.
Temporary threshold shift.
Active noise cancellation.
Sabine equation.

September 14, 2017 Posted by | Books, Physics | Leave a comment