Econstudentlog

Geophysics (II)

In this post I have added some observations from, and links related to, the last half of the book’s coverage.

“It is often […] useful to describe a force in terms of the acceleration it produces. Acceleration is the rate of change of velocity; however, when a force acts on a body with a given mass, acceleration is also the force experienced by each unit of mass. For example, a 100 kg man weighs ten times more than a 10 kg child but each experiences the same gravitational acceleration, which is a property of the Earth. The gravitational and centrifugal accelerations have different directions: gravitational acceleration acts inwards towards the Earth’s centre, whereas centrifugal acceleration acts outwards away from the rotation axis. Gravity is the acceleration that results from combining these two accelerations. The direction of gravity defines the local vertical direction […] and thereby the horizontal plane. Due to the different directions of its component accelerations, gravity rarely acts radially towards the centre of the Earth; it only does so at the poles and at the equator. For similar reasons the value of gravity varies with latitude. […] The end result is that gravity is about 0.5 per cent stronger at the poles than at the equator. […] Using the measured values of gravity and the Earth’s radius, in conjunction with the gravitational constant, the mass and volume of the Earth can be obtained. Combining these gives a mean density for the Earth of 5,515 kg/m3. The average density of surface rocks is only half of this value, which implies that density must increase with depth in the Earth. This was an important discovery for scientists concerned with the size and shape of the Earth in the 18th and early 19th centuries. The variation of density with depth in the layered Earth […] was later established from the interpretation of P- and S-wave seismic velocities and the analysis of free oscillations.”
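
The back-of-the-envelope calculation hinted at in that passage (getting the Earth’s mass, and from it the mean density, out of measured gravity, the Earth’s radius, and the gravitational constant) is short enough to sketch in a few lines of Python. The numerical values of g, R, and G below are standard textbook figures I have plugged in myself; they are not quoted from the book:

```python
# Estimate the Earth's mass and mean density from surface gravity,
# the Earth's radius, and the gravitational constant, using g = G*M/R^2.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
g = 9.81           # mean surface gravity, m/s^2
R = 6.371e6        # mean Earth radius, m

M = g * R**2 / G                       # mass from g = G*M/R^2
V = (4.0 / 3.0) * math.pi * R**3       # volume of a sphere
rho = M / V                            # mean density

print(f"Mass    ~ {M:.3e} kg")         # ~5.97e24 kg
print(f"Density ~ {rho:.0f} kg/m^3")   # ~5,500 kg/m^3, close to the book's 5,515
```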

“The Moon’s influence on the Earth’s rotation is stronger than that of the Sun or the other planets in the solar system. The centre of mass of the Earth–Moon pair, called the barycentre, lies at about 4,600 km from the Earth’s centre — well within the Earth’s radius of 6,371 km. The Earth and Moon rotate about this point […]. The elliptical orbit of the Earth about the Sun is in reality the track followed by the barycentre. The rotation of the Earth–Moon pair about their barycentre causes a centrifugal acceleration in the Earth that is directed away from the Moon. The lunar gravitational attraction opposes this and the combined effect is to deform the equipotential surface of the tide and draw it out into the shape of a prolate ellipsoid, resembling a rugby ball. Consequently there is a tidal bulge on the far side of the Earth from the Moon, complementary to the tidal bulge on the near side. The bulges are unequal in size. Each day the Earth rotates under both tidal bulges, so that two unequal tides are experienced; they are resolved into a daily (diurnal) tide and a twice-daily (semi-diurnal) tide. Although we think of the tides as a fluctuation of sea level, they also take place in the solid planet, where they are known as bodily earth tides. These are manifest as displacements of the solid surface by up to 38 cm vertically and 5 cm horizontally. The Sun also contributes to the tides, creating semi-annual and annual components. […] The displacements of fluid and solid mass have a braking effect on the Earth’s rotation, slowing it down and gradually increasing the length of the day, currently at about 1.8 milliseconds per century. […] The reciprocal effect of the Earth’s gravitation on the Moon has slowed lunar rotation about its own axis to the extent that the Moon’s spin now has the same period as its rotation about the Earth. That is why it always presents the same face to us. Conservation of angular momentum results in a transfer of angular momentum from the Earth to the Moon, which is accomplished by an increase in the Earth–Moon distance of about 3.7 cm/yr (roughly the rate at which fingernails grow), and by a slowing of the Moon’s rotation rates about its own axis and about the Earth. In time, all three rotations will be synchronous, with a period of 48 present Earth-days. The Moon will then be stationary over the Earth and both bodies will present the same face to each other.”
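
The barycentre figure is a one-line centre-of-mass calculation: the barycentre sits at a distance d·M_moon/(M_earth + M_moon) from the Earth’s centre, where d is the Earth–Moon separation. The mass and distance values below are standard figures I have supplied for illustration (they are not taken from the book), and they land close to the quoted ~4,600 km:

```python
# Locate the Earth-Moon barycentre: the centre of mass of the pair lies at
# r = d * M_moon / (M_earth + M_moon) from the Earth's centre.
M_earth = 5.97e24    # kg
M_moon  = 7.35e22    # kg
d       = 384_400e3  # mean Earth-Moon distance, m

r = d * M_moon / (M_earth + M_moon)
print(f"Barycentre lies ~{r/1e3:.0f} km from the Earth's centre")  # ~4,700 km
print("Inside the Earth?", r < 6_371e3)                            # True
```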

“[I]sostatic compensation causes the crust to move vertically to seek a new hydrostatic equilibrium in response to changes in the load on the crust. Thus, when erosion removes surface material or when an ice-cap melts, the isostatic response is uplift of the mountain. Examples of this uplift are found in northern Canada and Fennoscandia, which were covered by a 1–2 kilometre-thick ice sheet during the last ice age; the surface load depressed the crust in these regions by up to 500 m. The ice age ended about 10,000 years ago, and subsequent postglacial isostatic adjustment has resulted in vertical crustal movements. The land uplift was initially faster than it is today, but it continues at rates of up to 9 mm/yr in Scandinavia and Finland […]. The phenomenon has been observed for decades by repeated high-precision levelling campaigns. […] the increase of temperature with depth results in anelastic behaviour of the deeper lithosphere. This is the same kind of behaviour that causes attenuation of seismic waves in the Earth […] A specific type of anelastic behaviour is viscoelasticity. In this mechanism a material responds to short-duration stresses in the same way that an elastic body does, but over very long time intervals it flows like a sticky viscous fluid. The flow of otherwise solid material in the mantle is understood to be a viscoelastic process. This type of behaviour has been invoked to explain the response of the upper mantle to the loading of northern Canada and Fennoscandia by the ice sheets. In each region the weight of an ice sheet depressed the central area, forcing it down into the mantle. The displaced mantle caused the surrounding land to bulge upward slightly, as a jelly does around a point where it is pressed down. As a result of postglacial relaxation the opposite motion is now happening: the peripheral bulge is sinking while the central region is being uplifted.”

“The molecules of an object are in constant motion and the energy of this motion is called kinetic energy. Temperature is a measure of the average kinetic energy of the molecules in a given volume. […] The total energy of motion of all the molecules in a volume is its internal energy. When two objects with different temperatures are in contact, they exchange internal energy until they have the same temperature. The energy transferred is the amount of heat exchanged. Thus, if heat is added to an object, its kinetic energy is increased, the motions of individual atoms and molecules speed up, and its temperature rises. Heat is a form of energy and is therefore measured in the standard energy unit, the joule. The expenditure of one joule per second defines a watt, the unit of power. […] The amount of geothermal heat flowing per second across a unit of surface area of the Earth is called the geothermal flux, or more simply the heat flow. It is measured in mW/m2. The Earth’s internal heat is its greatest source of energy. It powers global geological processes such as plate tectonics and the generation of the geomagnetic field. The annual amount of heat flowing out of the Earth is more than 100 times greater than the elastic energy released in earthquakes and ten times greater than the loss of kinetic energy as the planet’s rotation slows due to tidal friction. Although the solar radiation that falls on the Earth is a much larger source of energy, it is important mainly for its effect on natural processes at or above the Earth’s surface. The atmosphere and clouds reflect or absorb about 45 per cent of solar radiation, and the land and ocean surfaces reflect a further 5 per cent and absorb 50 per cent. Almost all of the energy absorbed at the surface and in the clouds and atmosphere is radiated back into space. The solar energy that reaches the surface penetrates only a short distance into the ground, because water and rocks are poor conductors of heat. […] The daily temperature fluctuation in rocks and sediments sinks to less than 1 per cent of its surface amplitude in a depth of only 1 metre. The annual seasonal change of temperature penetrates some nineteen times deeper, but its effects are barely felt below 20 m.”
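
The ‘nineteen times deeper’ figure follows from treating the ground as a half-space heated periodically at its surface: the temperature oscillation decays with depth as exp(−z/d), where the skin depth d grows with the square root of the period, so the annual wave penetrates √365 ≈ 19 times deeper than the daily one. A minimal sketch, using a thermal diffusivity for soil/rock that I have assumed myself (the book does not give one):

```python
# Skin depth of a periodic surface temperature wave in the ground:
# amplitude decays as exp(-z/d) with d = sqrt(kappa * period / pi).
import math

kappa = 1.0e-6          # assumed thermal diffusivity of soil/rock, m^2/s
day   = 86_400.0        # seconds
year  = 365.0 * day

def skin_depth(period_s):
    return math.sqrt(kappa * period_s / math.pi)

d_day, d_year = skin_depth(day), skin_depth(year)
print(f"daily skin depth  ~{d_day:.2f} m; 1% amplitude left at ~{4.6*d_day:.1f} m")
print(f"annual skin depth ~{d_year:.2f} m; 1% amplitude left at ~{4.6*d_year:.0f} m")
print(f"annual/daily penetration ratio ~{d_year/d_day:.1f}  (= sqrt(365))")
```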

“The [Earth’s] internal heat arises from two sources. Part is produced at the present time by radioactivity in crustal rocks and in the mantle, and part is primordial. […] The internal heat has to find its way out of the Earth. The three basic forms of heat transfer are radiation, conduction, and convection. Heat is also transferred in compositional and phase transitions. […] Heat is transported throughout the interior by conduction, and convection plays an important role in the mantle and fluid outer core. […] Heat transport by conduction is most important in solid regions of the Earth. Thermal conduction takes place by transferring energy in the vibrations of atoms, or in collisions between molecules, without bodily displacement of the material. The flow of heat through a material by conduction depends on two quantities: the rate at which temperature increases with depth (the temperature gradient), and the material’s ability to conduct heat, a physical property known as thermal conductivity. The product of the temperature gradient and the thermal conductivity defines the heat flow. […] Heat flow varies greatly over the Earth’s surface depending on the local geology and tectonic situation. The estimated average heat flow is 92 mW/m2. Multiplying this value by the Earth’s surface area, which is about 510 million km2, gives a global heat loss of about 47,000 GW […]. For comparison, the energy production of a large nuclear power plant is about 1 GW.”
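
The two numbers at the end of that passage are easy to tie together, and the definition of conductive heat flow (thermal conductivity times temperature gradient) can be illustrated with typical crustal values. The conductivity and geothermal gradient used below are common textbook figures of my own choosing, not values from the book:

```python
# Conductive heat flow q = k * dT/dz, plus the global heat loss implied by
# the average flux of 92 mW/m^2 over the Earth's surface area.
k        = 2.5        # assumed thermal conductivity of crustal rock, W/(m*K)
gradient = 30e-3      # assumed continental geotherm, ~30 K per km = 0.03 K/m

q = k * gradient                       # W/m^2
print(f"local conductive heat flow ~{q*1e3:.0f} mW/m^2")   # ~75 mW/m^2, a typical value

avg_flux  = 92e-3                      # the book's average heat flow, W/m^2
area      = 510e6 * 1e6                # 510 million km^2 in m^2
heat_loss = avg_flux * area            # W
print(f"global heat loss ~{heat_loss/1e9:,.0f} GW")         # ~47,000 GW, as in the book
```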

“An adiabatic thermal process is one in which heat is neither gained nor lost. This can be the case when a process occurs too quickly to allow heat to be exchanged, as in the rapid compressions and expansions during the passage of a seismic wave. The variation of temperature with depth under adiabatic conditions defines the adiabatic temperature gradient. […] Consider what would happen in a fluid if the temperature increases with depth more rapidly than the adiabatic gradient. If a small parcel of material at a particular depth is moved upward adiabatically to a shallower depth, it experiences a drop in pressure corresponding to the depth difference and a corresponding adiabatic decrease in temperature. However, the decrease is not as large as required by the real temperature gradient, so the adiabatically displaced parcel is now hotter and less dense than its environment. It experiences a buoyant uplift and continues to rise, losing heat and increasing in density until it is in equilibrium with its surroundings. Meanwhile, cooler material adjacent to its original depth fills the vacated place, closing the cycle. This process of heat transport, in which material and heat are transported together, is thermal convection. Eventually the loss of heat by convection brings the real temperature gradient close to the adiabatic gradient. Consequently, a well-mixed, convecting fluid has a temperature profile close to the adiabatic curve. Convection is the principal method of heat transport in the Earth’s fluid outer core. Convection is also an important process of heat transport in the mantle. […] Mantle convection plays a crucial role in the cooling history and evolution of the planet.”

“It is important to appreciate the timescale on which flow occurs in the mantle. The rate is quite different from the familiar flow of a sticky liquid, such as blood or motor oil […]. The mantle is vastly stiffer. Estimates of viscosity for the lower mantle are around 10²² Pa·s (pascal seconds), which is 10²⁵ times that of water. This is an enormous factor (similar to the ratio of the mass of the entire Earth to a kilogram mass). The viscosity varies within the mantle, with the upper mantle about 20 times less viscous than the lower mantle. Flow takes place in the mantle by the migration of defects through the otherwise solid material. This is a slow process that produces flow rates on the order of centimetres per year. However, geological processes occur on a very long timescale, spanning tens or hundreds of millions of years. This allows convection to be an important factor in the transport of heat through the mantle.”

“The Sun has a strong magnetic field, greatly exceeding that of any planet. It arises from convection in the solar core and is sufficiently irregular that it produces regions of lower than normal temperature on the Sun’s surface, called sunspots. These affect the release of charged particles (electrons, protons, and alpha particles) from the Sun’s atmosphere. The particles are not bound to each other, but form a plasma that spreads out at supersonic speed. The flow of electric charge is called the solar wind; it is accompanied by a magnetic field known as the interplanetary magnetic field. The solar emissions are variable, controlled by changes in the Sun’s magnetic field. […] The magnetic field of a planet deflects the solar wind around it. This blocks the influx of solar radiation and prevents the atmosphere from being blown away […] Around the Earth (as well as the giant planets and Mercury) the region in which the planet’s magnetic field is stronger than the interplanetary field is called the magnetosphere; its shape resembles the bow-wave and wake of a moving ship. […] It compresses the field on the daytime side of the Earth, forming a bow shock, about 17 km thick, which deflects most of the solar wind around the planet. However, some of the plasma penetrates the barrier and forms a region called the magnetosheath; the boundary between the plasma and the magnetic field is called the magnetopause. The solar wind causes the magnetic field on the night-time side of the Earth to stretch out to form a magnetotail […] that extends several million kilometres ‘downwind’ from the Earth. Similar features characterize the magnetic fields of other planets. […] Rotation and the related Coriolis force, together with convection, are necessary factors for a self-sustaining dynamo”.

Links:
Gravity. Inertial force. Centrifugal force. Centripetal force.
Gravimeter. Gal (unit).
Reference ellipsoid. Undulation of the geoid. Satellite geodesy. Interferometric synthetic-aperture radar. Global Positioning System. Galileo. GLONASS. Differential GPS.
Gravity Recovery and Climate Experiment (GRACE). Gravity Field and Steady-State Ocean Circulation Explorer (GOCE).
Gradiometer.
Gravity surveying. Bouguer anomaly. Free-air gravity anomaly. Eötvös effect.
Isostasy.
Craton.
Solidus.
Diamond anvil cell.
Mantle plume.
Hotspot (geology).
Magnetism.
Earth’s magnetic field.
International Geomagnetic Reference Field.
Telluric current. Magnetotellurics.
SWARM mission.
Ferromagnetism. Curie point.
Paleomagnetism. Plate tectonics. Vine–Matthews–Morley hypothesis.
Geomagnetic reversal (“During the past 10 Myr there have been on average 4-5 reversals per Myr; the most recent full reversal happened 780,000 yr ago.”).
Magnetostratigraphy.

November 24, 2018 | Astronomy, Books, Geology, Physics

Geophysics (I)

“Geophysics is a field of earth sciences that uses the methods of physics to investigate the physical properties of the Earth and the processes that have determined and continue to govern its evolution. Geophysical investigations cover a wide range of research fields, extending from surface changes that can be observed from Earth-orbiting satellites to unseen behaviour in the Earth’s deep interior. […] This book presents a general overview of the principal methods of geophysics that have contributed to our understanding of Planet Earth and how it works.”

I gave this book five stars on goodreads, where I deemed it ‘An excellent introduction to the topic, with high-level yet satisfactorily detailed coverage of many areas of interest.’ It doesn’t cover these topics in as much detail as a book like Press & Siever (…a book which I incidentally covered, though not in much detail, here and here), but it’s a very decent introductory book on these topics. I have added some observations and links related to the first half of the book’s coverage below.

“The gravitational attractions of the other planets — especially Jupiter, whose mass is 2.5 times the combined mass of all the other planets — influence the Earth’s long-term orbital rotations in a complex fashion. The planets move with different periods around their differently shaped and sized orbits. Their gravitational attractions impose fluctuations on the Earth’s orbit at many frequencies, a few of which are more significant than the rest. One important effect is on the obliquity: the amplitude of the axial tilt is forced to change rhythmically between a maximum of 24.5 degrees and a minimum of 22.1 degrees with a period of 41,000 yr. Another gravitational interaction with the other planets causes the orientation of the elliptical orbit to change with respect to the stars […]. The line of apsides — the major axis of the ellipse — precesses around the pole to the ecliptic in a prograde sense (i.e. in the same sense as the Earth’s rotation) with a period of 100,000 yr. This is known as planetary precession. Additionally, the shape of the orbit changes with time […], so that the eccentricity varies cyclically between 0.005 (almost circular) and a maximum of 0.058; currently it is 0.0167 […]. The dominant period of the eccentricity fluctuation is 405,000 yr, on which a further fluctuation of around 100,000 yr is superposed, which is close to the period of the planetary precession.”

“The amount of solar energy received by a unit area of the Earth’s surface is called the insolation. […] The long-term fluctuations in the Earth’s rotation and orbital parameters influence the insolation […] and this causes changes in climate. When the obliquity is smallest, the axis is more upright with respect to the ecliptic than at present. The seasonal differences are then smaller and vary less between polar and equatorial regions. Conversely, a large axial tilt causes an extreme difference between summer and winter at all latitudes. The insolation at any point on the Earth thus changes with the obliquity cycle. Precession of the axis also changes the insolation. At present the north pole points away from the Sun at perihelion; one half of a precessional cycle later it will point away from the Sun at aphelion. This results in a change of insolation and an effect on climate with a period equal to that of the precession. The orbital eccentricity cycle changes the Earth–Sun distances at perihelion and aphelion, with corresponding changes in insolation. When the orbit is closest to being circular, the perihelion–aphelion difference in insolation is smallest, but when the orbit is more elongate this difference increases. In this way the changes in eccentricity cause long-term variations in climate. The periodic climatic changes due to orbital variations are called Milankovitch cycles, after the Serbian astronomer Milutin Milankovitch, who studied them systematically in the 1920s and 1930s. […] The evidence for cyclical climatic variations is found in geological sedimentary records and in long cores drilled into the ice on glaciers and in polar regions. […] Sedimentation takes place slowly over thousands of years, during which the Milankovitch cycles are recorded in the physical and chemical properties of the sediments. Analyses of marine sedimentary sequences deposited in the deep oceans over millions of years have revealed cyclical variations in a number of physical properties. Examples are bedding thickness, sediment colour, isotopic ratios, and magnetic susceptibility. […] The records of oxygen isotope ratios in long ice cores display Milankovitch cycles and are important evidence for the climatic changes, generally referred to as orbital forcing, which are brought about by the long-term variations in the Earth’s orbit and axial tilt.”
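
To give a feel for how several slow cycles of different periods superpose, here is a toy sketch that sums sinusoids with the periods mentioned above (the 41,000-year obliquity cycle, the roughly 100,000-year and 405,000-year eccentricity cycles) together with the roughly 23,000-year axial-precession period, which is a standard figure not quoted in the excerpt. The amplitudes are arbitrary; this illustrates beating cycles, not a real insolation model:

```python
# Toy superposition of Milankovitch-type cycles (arbitrary amplitudes).
import math

cycles = {            # period in kyr -> relative amplitude (made up for illustration)
    23.0:  0.5,       # axial precession (standard value, not from the excerpt)
    41.0:  1.0,       # obliquity
    100.0: 0.7,       # short eccentricity / planetary precession
    405.0: 0.8,       # long eccentricity
}

def forcing(t_kyr):
    """Sum of sinusoids: a stand-in for orbitally driven insolation anomalies."""
    return sum(a * math.sin(2 * math.pi * t_kyr / p) for p, a in cycles.items())

for t in range(0, 801, 100):          # sample 800 kyr at 100 kyr steps
    print(f"t = {t:3d} kyr   forcing = {forcing(t):+.2f}")
```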

“Stress is defined as the force acting on a unit area. The fractional deformation it causes is called strain. The stress–strain relationship describes the mechanical behaviour of a material. When subjected to a low stress, materials deform in an elastic manner so that stress and strain are proportional to each other and the material returns to its original unstrained condition when the stress is removed. Seismic waves usually propagate under conditions of low stress. If the stress is increased progressively, a material eventually reaches its elastic limit, beyond which it cannot return to its unstrained state. Further stress causes disproportionately large strain and permanent deformation. Eventually the stress causes the material to reach its breaking point, at which it ruptures. The relationship between stress and strain is an important aspect of seismology. Two types of elastic deformation—compressional and shear—are important in determining how seismic waves propagate in the Earth. Imagine a small block that is subject to a deforming stress perpendicular to one face of the block; this is called a normal stress. The block shortens in the direction it is squeezed, but it expands slightly in the perpendicular direction; when stretched, the opposite changes of shape occur. These reversible elastic changes depend on how the material responds to compression or tension. This property is described by a physical parameter called the bulk modulus. In a shear deformation, the stress acts parallel to the surface of the block, so that one edge moves parallel to the opposite edge, changing the shape but not the volume of the block. This elastic property is described by a parameter called the shear modulus. An earthquake causes normal and shear strains that result in four types of seismic wave. Each type of wave is described by two quantities: its wavelength and frequency. The wavelength is the distance between successive peaks of a vibration, and the frequency is the number of vibrations per second. Their product is the speed of the wave.”

“A seismic P-wave (also called a primary, compressional, or longitudinal wave) consists of a series of compressions and expansions caused by particles in the ground moving back and forward parallel to the direction in which the wave travels […] It is the fastest seismic wave and can pass through fluids, although with reduced speed. When it reaches the Earth’s surface, a P-wave usually causes nearly vertical motion, which is recorded by instruments and may be felt by people but usually does not result in severe damage. […] A seismic S-wave (i.e. secondary or shear wave) arises from shear deformation […] It travels by means of particle vibrations perpendicular to the direction of travel; for that reason it is also known as a transverse wave. The shear wave vibrations are further divided into components in the horizontal and vertical planes, labelled the SH- and SV-waves, respectively. […] an S-wave is slower than a P-wave, propagating about 58 per cent as fast […] Moreover, shear waves can only travel in a material that supports shear strain. This is the case for a solid object, in which the molecules have regular locations and intermolecular forces hold the object together. By contrast, a liquid (or gas) is made up of independent molecules that are not bonded to each other, and thus a fluid has no shear strength. For this reason S-waves cannot travel through a fluid. […] S-waves have components in both the horizontal and vertical planes, so when they reach the Earth’s surface they shake structures from side to side as well as up and down. They can have larger amplitudes than P-waves. Buildings are better able to resist up-and-down motion than side-to-side shaking, and as a result SH-waves can cause serious damage to structures. […] Surface waves spread out along the Earth’s surface around a point – called the epicentre – located vertically above the earthquake’s source […] Very deep earthquakes usually do not produce surface waves, but the surface waves caused by shallow earthquakes are very destructive. In contrast to seismic body waves, which can spread out in three dimensions through the Earth’s interior, the energy in a seismic surface wave is guided by the free surface. It is only able to spread out in two dimensions and is more concentrated. Consequently, surface waves have the largest amplitudes on the seismogram of a shallow earthquake […] and are responsible for the strongest ground motions and greatest damage. There are two types of surface wave. [Rayleigh waves & Love waves, US].”
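
The ‘58 per cent’ figure and the inability of S-waves to cross fluids both drop out of the standard expressions for seismic wave speeds in terms of the bulk modulus K, the shear modulus μ, and the density ρ: v_p = √((K + 4μ/3)/ρ) and v_s = √(μ/ρ). For a typical rock the ratio v_s/v_p comes out near 0.58, and setting μ = 0 (a fluid) removes the S-wave entirely. The moduli and densities below are generic illustrative values of my own, not numbers from the book:

```python
# P- and S-wave speeds from elastic moduli:
# vp = sqrt((K + 4*mu/3)/rho), vs = sqrt(mu/rho).
import math

def wave_speeds(K, mu, rho):
    vp = math.sqrt((K + 4.0 * mu / 3.0) / rho)
    vs = math.sqrt(mu / rho)
    return vp, vs

# Generic crustal rock (illustrative values): K = 50 GPa, mu = 30 GPa, rho = 2700 kg/m^3.
vp, vs = wave_speeds(50e9, 30e9, 2700.0)
print(f"rock : vp ~{vp:.0f} m/s, vs ~{vs:.0f} m/s, vs/vp ~{vs/vp:.2f}")  # ratio ~0.58

# Water has no shear strength (mu = 0), so vs = 0 and only P-waves propagate.
vp_w, vs_w = wave_speeds(2.2e9, 0.0, 1000.0)
print(f"water: vp ~{vp_w:.0f} m/s, vs = {vs_w:.0f} m/s")
```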

“The number of earthquakes that occur globally each year falls off with increasing magnitude. Approximately 1.4 million earthquakes annually have magnitude 2 or larger; of these about 1,500 have magnitude 5 or larger. The number of very damaging earthquakes with magnitude above 7 varies from year to year but has averaged about 15-20 annually since 1900. On average, one earthquake per year has magnitude 8 or greater, although such large events occur at irregular intervals. A magnitude 9 earthquake may release more energy than the cumulative energy of all other earthquakes in the same year. […] Large earthquakes may be preceded by foreshocks, which are lesser events that occur shortly before and in the same region as the main shock. They indicate the build-up of stress that leads to the main rupture. Large earthquakes are also followed by smaller aftershocks on the same fault or near to it; their frequency decreases as time passes, following the main shock. Aftershocks may individually be large enough to have serious consequences for a damaged region, because they can cause already weakened structures to collapse. […] About 90 per cent of the world’s earthquakes and 75 per cent of its volcanoes occur in the circum-Pacific belt known as the ‘Ring of Fire‘. […] The relative motions of the tectonic plates at their margins, together with changes in the state of stress within the plates, are responsible for most of the world’s seismicity. Earthquakes occur much more rarely in the geographic interiors of the plates. However, large intraplate earthquakes do occur […] In 2001 an intraplate earthquake with magnitude 7.7 occurred on a previously unknown fault under Gujarat, India […], killing 20,000 people and destroying 400,000 homes. […] Earthquakes are a serious hazard for populations, their property, and the natural environment. Great effort has been invested in the effort to predict their occurrence, but as yet without general success. […] Scientists have made more progress in assessing the possible location of an earthquake than in predicting the time of its occurrence. Although a damaging event can occur whenever local stress in the crust exceeds the breaking point of underlying rocks, the active seismic belts where this is most likely to happen are narrow and well defined […]. Unfortunately many densely populated regions and great cities are located in some of the seismically most active regions.[…] it is not yet possible to forecast reliably where or when an earthquake will occur, or how large it is likely to be.”
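
The fall-off of earthquake numbers with magnitude quoted above is usually summarized by the Gutenberg–Richter relation, log₁₀N = a − bM, with b close to 1 (roughly a tenfold drop in annual counts per unit of magnitude). The relation itself is standard seismology rather than something spelled out in the excerpt, but the book’s own annual counts are consistent with it:

```python
# Check the quoted annual earthquake counts against the Gutenberg-Richter
# relation log10(N) = a - b*M (b ~ 1 means ~10x fewer events per magnitude unit).
import math

counts = {2: 1_400_000, 5: 1_500, 7: 17, 8: 1}   # magnitude -> events/yr (17 stands in for "15-20")

mags = sorted(counts)
for m1, m2 in zip(mags, mags[1:]):
    b = (math.log10(counts[m1]) - math.log10(counts[m2])) / (m2 - m1)
    print(f"b-value between M{m1} and M{m2}: ~{b:.2f}")
```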

Links:

Plate tectonics.
Geodesy.
Seismology. Seismometer.
Law of conservation of energy. Second law of thermodynamics (This book incidentally covers these topics in much more detail, and does it quite well – US).
Angular momentum.
Big Bang model. Formation and evolution of the Solar System (…I should probably mention here that I do believe Wikipedia covers these sorts of topics quite well).
Invariable plane. Ecliptic.
Newton’s law of universal gravitation.
Kepler’s laws of planetary motion.
Potential energy. Kinetic energy. Orbital eccentricity. Line of apsides. Axial tilt. Figure of the Earth. Nutation. Chandler wobble.
Torque. Precession.
Very-long-baseline interferometry.
Reflection seismology.
Geophone.
Seismic shadow zone. Ray tracing (physics).
Structure of the Earth. Core–mantle boundary. D” region. Mohorovičić discontinuity. Lithosphere. Asthenosphere. Mantle transition zone.
Peridotite. Olivine. Perovskite.
Seismic tomography.
Lithoprobe project.
Orogenic belt.
European Geotraverse Project.
Microseism. Seismic noise.
Elastic-rebound theory. Fault (geology).
Richter magnitude scale (…of note: “the Richter scale underestimates the size of very large earthquakes with magnitudes greater than about 8.5”). Seismic moment. Moment magnitude scale. Modified Mercalli intensity scale. European macroseismic scale.
Focal mechanism.
Transform fault. Euler pole. Triple junction.
Megathrust earthquake.
Alpine fault. East African Rift.

November 1, 2018 | Astronomy, Books, Geology, Physics

Oceans (II)

In this post I have added some more observations from the book and some more links related to the book’s coverage.

“Almost all the surface waves we observe are generated by wind stress, acting either locally or far out to sea. Although the wave crests appear to move forwards with the wind, this does not occur. Mechanical energy, created by the original disturbance that caused the wave, travels through the ocean at the speed of the wave, whereas water does not. Individual molecules of water simply move back and forth, up and down, in a generally circular motion. […] The greater the wind force, the bigger the wave, the more energy stored within its bulk, and the more energy released when it eventually breaks. The amount of energy is enormous. Over long periods of time, whole coastlines retreat before the pounding waves – cliffs topple, rocks are worn to pebbles, pebbles to sand, and so on. Individual storm waves can exert instantaneous pressures of up to 30,000 kilograms […] per square metre. […] The rate at which energy is transferred across the ocean is the same as the velocity of the wave. […] waves typically travel at speeds of 30-40 kilometres per hour, and […] waves with a greater wavelength will travel faster than those with a shorter wavelength. […] With increasing wind speed and duration over which the wind blows, the wave height, period, and length all increase. The distance over which the wind blows is known as fetch, and is critical in influencing the growth of waves — the greater the area of ocean over which a storm blows, then the larger and more powerful the waves generated. The three stages in wave development are known as sea, swell, and surf. […] The ocean is highly efficient at transmitting energy. Water offers so little resistance to the small orbital motion of water particles in waves that individual wave trains may continue for thousands of kilometres. […] When the wave train encounters shallow water — say 50 metres for a 100-metre wavelength — the waves first feel the bottom and begin to slow down in response to frictional resistance. Wavelength decreases, the crests bunch closer together, and wave height increases until the wave becomes unstable and topples forwards as surf. […] Very often, waves approach obliquely to the coast and set up a significant transfer of water and sediment along the shoreline. The long-shore currents so developed can be very powerful, removing beach sand and building out spits and bars across the mouths of estuaries.” (People who’re interested in knowing more about these topics will probably enjoy Fredric Raichlen’s book on these topics – I did, US.)
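
The wavelength and depth dependence of wave speed described above follows from the standard dispersion relation for surface gravity waves, c = √((gL/2π)·tanh(2πh/L)). In deep water this reduces to c = √(gL/2π), so longer waves travel faster; in shallow water it tends to c = √(gh), so waves slow down as the bottom shoals. The formula is standard fluid dynamics rather than something quoted in the book; a quick sketch:

```python
# Phase speed of surface gravity waves: c = sqrt((g*L/2pi) * tanh(2*pi*h/L)).
import math

g = 9.81  # m/s^2

def wave_speed(wavelength_m, depth_m):
    k = 2.0 * math.pi / wavelength_m          # wavenumber
    return math.sqrt((g / k) * math.tanh(k * depth_m))

L = 100.0  # wavelength in metres
for depth in (4000.0, 50.0, 10.0, 2.0):
    c = wave_speed(L, depth)
    print(f"L = {L:.0f} m, depth = {depth:6.0f} m -> c ~{c:4.1f} m/s (~{c*3.6:3.0f} km/h)")
```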

“Wind is the principal force that drives surface currents, but the pattern of circulation results from a more complex interaction of wind drag, pressure gradients, and Coriolis deflection. Wind drag is a very inefficient process by which the momentum of moving air molecules is transmitted to water molecules at the ocean surface setting them in motion. The speed of water molecules (the current), initially in the direction of the wind, is only about 3–4 per cent of the wind speed. This means that a wind blowing constantly over a period of time at 50 kilometres per hour will produce a water current of about 1 knot (2 kilometres per hour). […] Although the movement of wind may seem random, changing from one day to the next, surface winds actually blow in a very regular pattern on a planetary scale. The subtropics are known for the trade winds with their strong easterly component, and the mid-latitudes for persistent westerlies. Wind drag by such large-scale wind systems sets the ocean waters in motion. The trade winds produce a pair of equatorial currents moving to the west in each ocean, while the westerlies drive a belt of currents that flow to the east at mid-latitudes in both hemispheres. […] Deflection by the Coriolis force and ultimately by the position of the continents creates very large oval-shaped gyres in each ocean.”

“The control exerted by the oceans is an integral and essential part of the global climate system. […] The oceans are one of the principal long-term stores on Earth for carbon and carbon dioxide […] The oceans are like a gigantic sponge holding fifty times more carbon dioxide than the atmosphere […] the sea surface acts as a two-way control valve for gas transfer, which opens and closes in response to two key properties – gas concentration and ocean stirring. First, the difference in gas concentration between the air and sea controls the direction and rate of gas exchange. Gas concentration in water depends on temperature—cold water dissolves more carbon dioxide than warm water, and on biological processes—such as photosynthesis and respiration by microscopic plants, animals, and bacteria that make up the plankton. These transfer processes affect all gases […]. Second, the strength of the ocean-stirring process, caused by wind and foaming waves, affects the ease with which gases are absorbed at the surface. More gas is absorbed during stormy weather and, once dissolved, is quickly mixed downwards by water turbulence. […] The transfer of heat, moisture, and other gases between the ocean and atmosphere drives small-scale oscillations in climate. The El Niño Southern Oscillation (ENSO) is the best known, causing 3–7-year climate cycles driven by the interaction of sea-surface temperature and trade winds along the equatorial Pacific. The effects are worldwide in their impact through a process of atmospheric teleconnection — causing floods in Europe and North America, monsoon failure and severe drought in India, South East Asia, and Australia, as well as decimation of the anchovy fishing industry off Peru.”

“Earth’s climate has not always been as it is today […] About 100 million years ago, for example, palm trees and crocodiles lived as far north as 80°N – the equivalent of Arctic Canada or northern Greenland today. […] Most of the geological past has enjoyed warm conditions. These have been interrupted at irregular intervals by cold and glacial climates of altogether shorter duration […][,] the last [of them] beginning around 3 million years ago. We are still in the grip of this last icehouse state, although in one of its relatively brief interglacial phases. […] Sea level has varied in the past in close consort with climate change […]. Around twenty-five thousand years ago, at the height of the last Ice Age, the global sea level was 120 metres lower than today. Huge tracts of the continental shelves that rim today’s landmasses were exposed. […] Further back in time, 80 million years ago, the sea level was around 250–350 metres higher than today, so that 82 per cent of the planet was ocean and only 18 per cent remained as dry land. Such changes have been the norm throughout geological history and entirely the result of natural causes.”

“Most of the solar energy absorbed by seawater is converted directly to heat, and water temperature is vital for the distribution and activity of life in the oceans. Whereas mean temperature ranges from 0 to 40 degrees Celsius, 90 per cent of the oceans are permanently below 5°C. Most marine animals are ectotherms (cold-blooded), which means that they obtain their body heat from their surroundings. They generally have narrow tolerance limits and are restricted to particular latitudinal belts or water depths. Marine mammals and birds are endotherms (warm-blooded), which means that their metabolism generates heat internally thereby allowing the organism to maintain constant body temperature. They can tolerate a much wider range of external conditions. Coping with the extreme (hydrostatic) pressure exerted at depth within the ocean is a challenge. For every 30 metres of water, the pressure increases by 3 atmospheres – roughly equivalent to the weight of an elephant.”
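
The pressure figure is simple hydrostatics: pressure increases with depth as ρgz on top of the one atmosphere already present at the surface, which works out to roughly one extra atmosphere per 10 metres of seawater (hence about 3 atmospheres per 30 metres). A quick check, with a seawater density value I have assumed:

```python
# Hydrostatic pressure in the ocean: P(z) = P_atm + rho * g * z.
rho = 1025.0      # assumed seawater density, kg/m^3
g   = 9.81        # m/s^2
atm = 101_325.0   # 1 atmosphere in Pa

for depth in (30, 1000, 4000, 11_000):
    extra_atm = rho * g * depth / atm
    print(f"{depth:6d} m: ~{extra_atm:7.0f} atm above surface pressure")
```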

“There are at least 6000 different species of diatom. […] An average litre of surface water from the ocean contains over half a million diatoms and other unicellular phytoplankton and many thousands of zooplankton.”

“Several different styles of movement are used by marine organisms. These include floating, swimming, jet propulsion, creeping, crawling, and burrowing. […] The particular physical properties of water that most affect movement are density, viscosity, and buoyancy. Seawater is about 800 times denser than air and nearly 100 times more viscous. Consequently there is much more resistance on movement than on land […] Most large marine animals, including all fishes and mammals, have adopted some form of active swimming […]. Swimming efficiency in fishes has been achieved by minimizing the three types of drag resistance created by friction, turbulence, and body form. To reduce surface friction, the body must be smooth and rounded like a sphere. The scales of most fish are also covered with slime as further lubrication. To reduce form drag, the cross-sectional area of the body should be minimal — a pencil shape is ideal. To reduce the turbulent drag as water flows around the moving body, a rounded front end and tapered rear is required. […] Fins play a versatile role in the movement of a fish. There are several types including dorsal fins along the back, caudal or tail fins, and anal fins on the belly just behind the anus. Operating together, the beating fins provide stability and steering, forwards and reverse propulsion, and braking. They also help determine whether the motion is up or down, forwards or backwards.”

Links:

Rip current.
Rogue wave. Agulhas Current. Kuroshio Current.
Tsunami.
Tide. Tidal range.
Geostrophic current.
Ekman Spiral. Ekman transport. Upwelling.
Global thermohaline circulation system. Antarctic bottom water. North Atlantic Deep Water.
Rio Grande Rise.
Denmark Strait. Denmark Strait cataract (/waterfall?).
Atmospheric circulation. Jet streams.
Monsoon.
Cyclone. Tropical cyclone.
Ozone layer. Ozone depletion.
Milankovitch cycles.
Little Ice Age.
Oxygen Isotope Stratigraphy of the Oceans.
Contourite.
Earliest known life forms. Cyanobacteria. Prokaryote. Eukaryote. Multicellular organism. Microbial mat. Ediacaran. Cambrian explosion. Pikaia. Vertebrate. Major extinction events. Permian–Triassic extinction event. (The author seems to disagree with the authors of this article about potential causes, in particular in so far as they relate to the formation of Pangaea – as I felt uncertain about the accuracy of the claims made in the book I decided against covering this topic in this post, even though I find it interesting).
Tethys Ocean.
Plesiosauria. Pliosauroidea. Ichthyosaur. Ammonoidea. Belemnites. Pachyaena. Cetacea.
Pelagic zone. Nekton. Benthic zone. Neritic zone. Oceanic zone. Bathyal zone. Hadal zone.
Phytoplankton. Silicoflagellates. Coccolithophore. Dinoflagellate. Zooplankton. Protozoa. Tintinnid. Radiolaria. Copepods. Krill. Bivalves.
Elasmobranchii.
Ampullae of Lorenzini. Lateral line.
Baleen whale. Humpback whale.
Coral reef.
Box jellyfish. Stonefish.
Horseshoe crab.
Greenland shark. Giant squid.
Hydrothermal vent. Pompeii worms.
Atlantis II Deep. Aragonite. Phosphorite. Deep sea mining. Oil platform. Methane clathrate.
Ocean thermal energy conversion. Tidal barrage.
Mariculture.
Exxon Valdez oil spill.
Bottom trawling.

June 24, 2018 | Biology, Books, Engineering, Geology, Paleontology, Physics

Oceans (I)

I read this book quite some time ago, but back when I did I never blogged it; instead I just added a brief review on goodreads. I remember that the main reason why I decided against blogging it shortly after I’d read it was that the coverage overlapped a great deal with Mladenov’s marine biology text, which I had at that time just read and actually did blog in some detail. I figured if I wanted to blog this book as well I would be well-advised to wait a while, so that I’d at least have forgotten some of the stuff first – that way blogging the book might end up serving as a review of stuff I’d forgotten, rather than as a review of stuff that would still be fresh in my memory and so wouldn’t really be worth reviewing anyway. So now here we are a few months later, and I have come to think it might be a good idea to blog the book.

Below I have added some quotes from the first half of the book and some links to topics/people/etc. covered.

“Several methods now exist for calculating the rate of plate motion. Most reliable for present-day plate movement are direct observations made using satellites and laser technology. These show that the Atlantic Ocean is growing wider at a rate of between 2 and 4 centimetres per year (about the rate at which fingernails grow), the Indian Ocean is attempting to grow at a similar rate but is being severely hampered by surrounding plate collisions, while the fastest spreading centre is the East Pacific Rise along which ocean crust is being created at rates of around 17 centimetres per year (the rate at which hair grows). […] The Nazca plate has been plunging beneath South America for at least 200 million years – the imposing Andes, the longest mountain chain on Earth, is the result. […] By around 120 million years ago, South America and Africa began to drift apart and the South Atlantic was born. […] sea levels rose higher than at any time during the past billion years, perhaps as much as 350 metres higher than today. Only 18 per cent of the globe was dry land — 82 per cent was under water. These excessively high sea levels were the result of increased spreading activity — new oceans, new ridges, and faster spreading rates all meant that the mid-ocean ridge systems collectively displaced a greater volume of water than ever before. Global warming was far more extreme than today. Temperatures in the ocean rose to around 30°C at the equator and as much as 14°C at the poles. Ocean circulation was very sluggish.”
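
Spreading rates of a few centimetres per year add up over geological time: integrating the quoted Atlantic rate over the roughly 120 million years since South America and Africa began to separate yields a basin thousands of kilometres wide, the right order of magnitude for today’s South Atlantic. A back-of-the-envelope version, treating the quoted present-day rates as if they had applied throughout (a simplification, of course):

```python
# How much ocean does a few cm/yr of spreading create over geological time?
years = 120e6                      # ~time since South America and Africa began to separate

for rate_cm_per_yr in (2, 4):      # the quoted range for present-day Atlantic spreading
    width_km = rate_cm_per_yr * 1e-2 * years / 1e3   # cm/yr -> m/yr, total metres -> km
    print(f"{rate_cm_per_yr} cm/yr sustained for 120 Myr -> ~{width_km:,.0f} km of new ocean floor")
```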

“The land–ocean boundary is known as the shoreline. Seaward of this, all continents are surrounded by a broad, flat continental shelf, typically 10–100 kilometres wide, which slopes very gently (less than one-tenth of a degree) to the shelf edge at a water depth of around 100 metres. Beyond this the continental slope plunges to the deep-ocean floor. The slope is from tens to a few hundred kilometres wide and with a mostly gentle gradient of 3–8 degrees, but locally steeper where it is affected by faulting. The base of slope abuts the abyssal plain — flat, almost featureless expanses between 4 and 6 kilometres deep. The oceans are compartmentalized into abyssal basins separated by submarine mountain ranges and plateaus, which are the result of submarine volcanic outpourings. Those parts of the Earth that are formed of ocean crust are relatively lower, because they are made up of denser rocks — basalts. Those formed of less dense rocks (granites) of the continental crust are relatively higher. Seawater fills in the deeper parts, the ocean basins, to an average depth of around 4 kilometres. In fact, some parts are shallower because the ocean crust is new and still warm — these are the mid-ocean ridges at around 2.5 kilometres — whereas older, cooler crust drags the seafloor down to a depth of over 6 kilometres. […] The seafloor is almost entirely covered with sediment. In places, such as on the flanks of mid-ocean ridges, it is no more than a thin veneer. Elsewhere, along stable continental margins or beneath major deltas where deposition has persisted for millions of years, the accumulated thickness can exceed 15 kilometres. These areas are known as sedimentary basins“.

“The super-efficiency of water as a solvent is due to an asymmetrical bonding between hydrogen and oxygen atoms. The resultant water molecule has an angular or kinked shape with weakly charged positive and negative ends, rather like magnetic poles. This polar structure is especially significant when water comes into contact with substances whose elements are held together by the attraction of opposite electrical charges. Such ionic bonding is typical of many salts, such as sodium chloride (common salt) in which a positive sodium ion is attracted to a negative chloride ion. Water molecules infiltrate the solid compound, the positive hydrogen end being attracted to the chloride and the negative oxygen end to the sodium, surrounding and then isolating the individual ions, thereby disaggregating the solid [I should mention that if you’re interested in knowing (much) more about this topic, and closely related topics, this book covers these things in great detail – US]. An apparently simple process, but extremely effective. […] Water is a super-solvent, absorbing gases from the atmosphere and extracting salts from the land. About 3 billion tonnes of dissolved chemicals are delivered by rivers to the oceans each year, yet their concentration in seawater has remained much the same for at least several hundreds of millions of years. Some elements remain in seawater for 100 million years, others for only a few hundred, but all are eventually cycled through the rocks. The oceans act as a chemical filter and buffer for planet Earth, control the distribution of temperature, and moderate climate. Inestimable numbers of calories of heat energy are transferred every second from the equator to the poles in ocean currents. But, the ocean configuration also insulates Antarctica and allows the build-up of over 4000 metres of ice and snow above the South Pole. […] Over many aeons, the oceans slowly accumulated dissolved chemical ions (and complex ions) of almost every element present in the crust and atmosphere. Outgassing from the mantle from volcanoes and vents along the mid-ocean ridges contributed a variety of other elements […] The composition of the first seas was mostly one of freshwater together with some dissolved gases. Today, however, the world ocean contains over 5 trillion tonnes of dissolved salts, and nearly 100 different chemical elements […] If the oceans’ water evaporated completely, the dried residue of salts would be equivalent to a 45-metre-thick layer over the entire planet.”

“The average time a single molecule of water remains in any one reservoir varies enormously. It may survive only one night as dew, up to a week in the atmosphere or as part of an organism, two weeks in rivers, and up to a year or more in soils and wetlands. Residence times in the oceans are generally over 4000 years, and water may remain in ice caps for tens of thousands of years. Although the ocean appears to be in a steady state, in which both the relative proportion and amounts of dissolved elements per unit volume are nearly constant, this is achieved by a process of chemical cycles and sinks. The input of elements from mantle outgassing and continental runoff must be exactly balanced by their removal from the oceans into temporary or permanent sinks. The principal sink is the sediment and the principal agent removing ions from solution is biological. […] The residence times of different elements vary enormously from tens of millions of years for chloride and sodium, to a few hundred years only for manganese, aluminium, and iron. […] individual water molecules have cycled through the atmosphere (or mantle) and returned to the seas more than a million times since the world ocean formed.”

“Because of its polar structure and hydrogen bonding between individual molecules, water has both a high capacity for storing large amounts of heat and one of the highest specific heat values of all known substances. This means that water can absorb (or release) large amounts of heat energy while changing relatively little in temperature. Beach sand, by contrast, has a specific heat five times lower than water, which explains why, on sunny days, beaches soon become too hot to stand on with bare feet while the sea remains pleasantly cool. Solar radiation is the dominant source of heat energy for the ocean and for the Earth as a whole. The differential in solar input with latitude is the main driver for atmospheric winds and ocean currents. Both winds and especially currents are the prime means of mitigating the polar–tropical heat imbalance, so that the polar oceans do not freeze solid, nor the equatorial oceans gently simmer. For example, the Gulf Stream transports some 550 trillion calories from the Caribbean Sea across the North Atlantic each second, and so moderates the climate of north-western Europe.”

“[W]hy is [the sea] mostly blue? The sunlight incident on the sea has a full spectrum of wavelengths, including the rainbow of colours that make up the visible spectrum […] The longer wavelengths (red) and very short (ultraviolet) are preferentially absorbed by water, rapidly leaving near-monochromatic blue light to penetrate furthest before it too is absorbed. The dominant hue that is backscattered, therefore, is blue. In coastal waters, suspended sediment and dissolved organic debris absorb additional short wavelengths (blue) resulting in a greener hue. […] The speed of sound in seawater is about 1500 metres per second, almost five times that in air. It is even faster where the water is denser, warmer, or more salty and shows a slow but steady increase with depth (related to increasing water pressure).”

“From top to bottom, the ocean is organized into layers, in which the physical and chemical properties of the ocean – salinity, temperature, density, and light penetration – show strong vertical segregation. […] Almost all properties of the ocean vary in some way with depth. Light penetration is attenuated by absorption and scattering, giving an upper photic and lower aphotic zone, with a more or less well-defined twilight region in between. Absorption of incoming solar energy also preferentially heats the surface waters, although with marked variations between latitudes and seasons. This results in a warm surface layer, a transition layer (the thermocline) through which the temperature decreases rapidly with depth, and a cold deep homogeneous zone reaching to the ocean floor. Exactly the same broad three-fold layering is true for salinity, except that salinity increases with depth — through the halocline. The density of seawater is controlled by its temperature, salinity, and pressure, such that colder, saltier, and deeper waters are all more dense. A rapid density change, known as the pycnocline, is therefore found at approximately the same depth as the thermocline and halocline. This varies from about 10 to 500 metres, and is often completely absent at the highest latitudes. Winds and waves thoroughly stir and mix the upper layers of the ocean, even destroying the layered structure during major storms, but barely touch the more stable, deep waters.”
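
The rule that colder, saltier, and deeper water is denser can be illustrated with a linearized equation of state (real oceanographic work uses far more elaborate empirical formulas, and the pressure term is ignored here). The reference values and the expansion/contraction coefficients below are rough figures I have assumed for illustration:

```python
# Linearized equation of state for seawater: density rises as temperature falls
# and salinity rises. (A rough illustration; real work uses the full empirical EOS.)
RHO0, T0, S0 = 1027.0, 10.0, 35.0    # reference density (kg/m^3), temperature (C), salinity (psu)
ALPHA = 2.0e-4                       # assumed thermal expansion coefficient, 1/K
BETA  = 7.6e-4                       # assumed haline contraction coefficient, 1/psu

def density(T, S):
    return RHO0 * (1.0 - ALPHA * (T - T0) + BETA * (S - S0))

print(f"warm surface water (25 C, 35 psu): {density(25, 35):7.2f} kg/m^3")
print(f"cold deep water     (2 C, 35 psu): {density(2, 35):7.2f} kg/m^3")
print(f"cold salty water    (2 C, 36 psu): {density(2, 36):7.2f} kg/m^3")
```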

Links:

Arvid Pardo. Law of the Sea Convention.
Polynesians.
Ocean exploration timeline (a different timeline is presented in the book, but there’s some overlap). Age of Discovery. Vasco da Gama. Christopher Columbus. John Cabot. Amerigo Vespucci. Ferdinand Magellan. Luigi Marsigli. James Cook.
HMS Beagle. HMS Challenger. Challenger expedition.
Deep Sea Drilling Project. Integrated Ocean Drilling Program. JOIDES Resolution.
World Ocean.
Geological history of Earth (this article of course covers much more than is covered in the book, but the book does cover some highlights). Plate tectonics. Lithosphere. Asthenosphere. Convection. Global mid-ocean ridge system.
Pillow lava. Hydrothermal vent. Hot spring.
Ophiolite.
Mohorovičić discontinuity.
Mid-Atlantic Ridge. Subduction zone. Ring of Fire.
Pluton. Nappe. Mélange. Transform fault. Strike-slip fault. San Andreas fault.
Paleoceanography. Tethys Ocean. Laurasia. Gondwana.
Oceanic anoxic event. Black shale.
Seabed.
Bengal Fan.
Fracture zone.
Seamount.
Terrigenous sediment. Biogenic and chemogenic sediment. Halite. Gypsum.
Carbonate compensation depth.
Laurentian fan.
Deep-water sediment waves. Submarine landslide. Turbidity current.
Water cycle.
Ocean acidification.
Timing and Climatic Consequences of the Opening of Drake Passage. The Opening of the Tasmanian Gateway Drove Global Cenozoic Paleoclimatic and Paleoceanographic Changes (report). Antarctic Circumpolar Current.
SOFAR channel.
Bathymetry.

June 18, 2018 | Books, Chemistry, Geology, Papers, Physics

Marine Biology (II)

Below are some observations and links related to the second half of the book’s coverage:

“[C]oral reefs occupy a very small proportion of the planet’s surface – about 284,000 square kilometres – roughly equivalent to the size of Italy [yet they] are home to an incredible diversity of marine organisms – about a quarter of all marine species […]. Coral reef systems provide food for hundreds of millions of people, with about 10 per cent of all fish consumed globally caught on coral reefs. […] Reef-building corals thrive best at sea temperatures above about 23°C and few exist where sea temperatures fall below 18°C for significant periods of time. Thus coral reefs are absent at tropical latitudes where upwelling of cold seawater occurs, such as the west coasts of South America and Africa. […] they are generally restricted to areas of clear water less than about 50 metres deep. Reef-building corals are very intolerant of any freshening of seawater […] and so do not occur in areas exposed to intermittent influxes of freshwater, such as near the mouths of rivers, or in areas where there are high amounts of rainfall run-off. This is why coral reefs are absent along much of the tropical Atlantic coast of South America, which is exposed to freshwater discharge from the Amazon and Orinoco Rivers. Finally, reef-building corals flourish best in areas with moderate to high wave action, which keeps the seawater well aerated […]. Spectacular and productive coral reef systems have developed in those parts of the Global Ocean where this special combination of physical conditions converges […] Each colony consists of thousands of individual animals called polyps […] all reef-building corals have entered into an intimate relationship with plant cells. The tissues lining the inside of the tentacles and stomach cavity of the polyps are packed with photosynthetic cells called zooxanthellae, which are photosynthetic dinoflagellates […] Depending on the species, corals receive anything from about 50 per cent to 95 per cent of their food from their zooxanthellae. […] Healthy coral reefs are very productive marine systems. This is in stark contrast to the nutrient-poor and unproductive tropical waters adjacent to reefs. Coral reefs are, in general, roughly one hundred times more productive than the surrounding environment”.

“Overfishing constitutes a significant threat to coral reefs at this time. About an eighth of the world’s population – roughly 875 million people – live within 100 kilometres of a coral reef. Most of the people live in developing countries and island nations and depend greatly on fish obtained from coral reefs as a food source. […] Some of the fishing practices are very harmful. Once the large fish are removed from a coral reef, it becomes increasingly more difficult to make a living harvesting the more elusive and lower-value smaller fish that remain. Fishers thus resort to more destructive techniques such as dynamiting parts of the reef and scooping up the dead and stunned fish that float to the surface. People capturing fish for the tropical aquarium trade will often poison parts of the reef with sodium cyanide which paralyses the fish, making them easier to catch. An unfortunate side effect of this practice is that the poison kills corals. […] Coral reefs have only been seriously studied since the 1970s, which in most cases was well after human impacts had commenced. This makes it difficult to define what might actually constitute a ‘natural’ and healthy coral reef system, as would have existed prior to extensive human impacts.”

“Mangrove is a collective term applied to a diverse group of trees and shrubs that colonize protected muddy intertidal areas in tropical and subtropical regions, creating mangrove forests […] Mangroves are of great importance from a human perspective. The sheltered waters of a mangrove forest provide important nursery areas for juvenile fish, crabs, and shrimp. Many commercial fisheries depend on the existence of healthy mangrove forests, including blue crab, shrimp, spiny lobster, and mullet fisheries. Mangrove forests also stabilize the foreshore and protect the adjacent land from erosion, particularly from the effects of large storms and tsunamis. They also act as biological filters by removing excess nutrients and trapping sediment from land run-off before it enters the coastal environment, thereby protecting other habitats such as seagrass meadows and coral reefs. […] [However] mangrove forests are disappearing rapidly. In a twenty-year period between 1980 and 2000 the area of mangrove forest globally declined from around 20 million hectares to below 15 million hectares. In some specific regions the rate of mangrove loss is truly alarming. For example, Puerto Rico lost about 89 per cent of its mangrove forests between 1930 and 1985, while the southern part of India lost about 96 per cent of its mangroves between 1911 and 1989.”

“[A]bout 80 per cent of the entire volume of the Global Ocean, or roughly one billion cubic kilometres, consists of seawater with depths greater than 1,000 metres […] The deep ocean is a permanently dark environment devoid of sunlight, the last remnants of which cannot penetrate much beyond 200 metres in most parts of the Global Ocean, and no further than 800 metres or so in even the clearest oceanic waters. The only light present in the deep ocean is of biological origin […] Except in a few very isolated places, the deep ocean is a permanently cold environment, with sea temperatures ranging from about 2° to 4°C. […] Since there is no sunlight, there is no plant life, and thus no primary production of organic matter by photosynthesis. The base of the food chain in the deep ocean consists mostly of a ‘rain’ of small particles of organic material sinking down through the water column from the sunlit surface waters of the ocean. This reasonably constant rain of organic material is supplemented by the bodies of large fish and marine mammals that sink more rapidly to the bottom following death, and which provide sporadic feasts for deep-ocean bottom dwellers. […] Since food is a scarce commodity for deep-ocean fish, full advantage must be taken of every meal encountered. This has resulted in a number of interesting adaptations. Compared to fish in the shallow ocean, many deep-ocean fish have very large mouths capable of opening very wide, and often equipped with numerous long, sharp, inward-pointing teeth. […] These fish can capture and swallow whole prey larger than themselves so as not to pass up a rare meal simply because of its size. These fish also have greatly extensible stomachs to accommodate such meals.”

“In the pelagic environment of the deep ocean, animals must be able to keep themselves within an appropriate depth range without using up energy in their food-poor habitat. This is often achieved by reducing the overall density of the animal to that of seawater so that it is neutrally buoyant. Thus the tissues and bones of deep-sea fish are often rather soft and watery. […] There is evidence that deep-ocean organisms have developed biochemical adaptations to maintain the functionality of their cell membranes under pressure, including adjusting the kinds of lipid molecules present in membranes to retain membrane fluidity under high pressure. High pressures also affect protein molecules, often preventing them from folding up into the correct shapes for them to function as efficient metabolic enzymes. There is evidence that deep-ocean animals have evolved pressure-resistant variants of common enzymes that mitigate this problem. […] The pattern of species diversity of the deep-ocean benthos appears to differ from that of other marine communities, which are typically dominated by a small number of abundant and highly visible species which overshadow the presence of a large number of rarer and less obvious species which are also present. In the deep-ocean benthic community, in contrast, no one group of species tends to dominate, and the community consists of a high number of different species all occurring in low abundance. […] In general, species diversity increases with the size of a habitat – the larger the area of a habitat, the more species that have developed ways to successfully live in that habitat. Since the deep-ocean bottom is the largest single habitat on the planet, it follows that species diversity would be expected to be high.”

“Seamounts represent a special kind of biological hotspot in the deep ocean. […] In contrast to the surrounding flat, soft-bottomed abyssal plains, seamounts provide a complex rocky platform that supports an abundance of organisms that are distinct from the surrounding deep-ocean benthos. […] Seamounts support a great diversity of fish species […] This [has] triggered the creation of new deep-ocean fisheries focused on seamounts. […] [However these species are generally] very slow-growing and long-lived and mature at a late age, and thus have a low reproductive potential. […] Seamount fisheries have often been described as mining operations rather than sustainable fisheries. They typically collapse within a few years of the start of fishing and the trawlers then move on to other unexplored seamounts to maintain the fishery. The recovery of localized fisheries will inevitably be very slow, if achievable at all, because of the low reproductive potential of these deep-ocean fish species. […] Comparisons of ‘fished’ and ‘unfished’ seamounts have clearly shown the extent of habitat damage and loss of species diversity brought about by trawl fishing, with the dense coral habitats reduced to rubble over much of the area investigated. […] Unfortunately, most seamounts exist in areas beyond national jurisdiction, which makes it very difficult to regulate fishing activities on them, although some efforts are underway to establish international treaties to better manage and protect seamount ecosystems.”

“Hydrothermal vents are unstable and ephemeral features of the deep ocean. […] The lifespan of a typical vent is likely in the order of tens of years. Thus the rich communities surrounding vents have a very limited lifespan. Since many vent animals can live only near vents, and the distance between vent systems can be hundreds to thousands of kilometres, it is a puzzle as to how vent animals escape a dying vent and colonize other distant vents or newly created vents. […] Hydrothermal vents are [however] not the only source of chemical-laden fluids supporting unique chemosynthetic-based communities in the deep ocean. Hydrogen sulphide and methane also ooze from the ocean bottom at some locations at temperatures similar to the surrounding seawater. These so-called ‘cold seeps’ are often found along continental margins […] The communities associated with cold seeps are similar to hydrothermal vent communities […] Cold seeps appear to be more permanent sources of fluid compared to the ephemeral nature of hot water vents.”

“Seepage of crude oil into the marine environment occurs naturally from oil-containing geological formations below the seabed. It is estimated that around 600,000 tonnes of crude oil seeps into the marine environment each year, which represents almost half of all the crude oil entering the oceans. […] The human activities associated with exploring for and producing oil result in the release on average of an estimated 38,000 tonnes of crude oil into the oceans each year, which is about 6 per cent of the total anthropogenic input of oil into the oceans worldwide. Although small in comparison to natural seepage, crude oil pollution from this source can cause serious damage to coastal ecosystems because it is released near the coast and sometimes in very large, concentrated amounts. […] The transport of oil and oil products around the globe in tankers results in the release of about 150,000 tonnes of oil worldwide each year on average, or about 22 per cent of the total anthropogenic input. […] About 480,000 tonnes of oil make their way into the marine environment each year worldwide from leakage associated with the consumption of oil-derived products in cars and trucks, and to a lesser extent in boats. Oil lost from the operation of cars and trucks collects on paved urban areas from where it is washed off into streams and rivers, and from there into the oceans. Surprisingly, this represents the most significant source of human-derived oil pollution into the marine environment – about 72 per cent of the total. Because it is a very diffuse source of pollution, it is the most difficult to control.”
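
A quick sanity check of the figures above; the tonnages are the ones quoted, and the rounding is mine:

natural_seepage = 600_000        # tonnes per year, natural seeps
exploration_production = 38_000  # tonnes per year, oil exploration and production
tanker_transport = 150_000       # tonnes per year, tanker operations
consumption_runoff = 480_000     # tonnes per year, run-off from oil consumption on land

anthropogenic = exploration_production + tanker_transport + consumption_runoff
total = natural_seepage + anthropogenic

print(f"anthropogenic input: {anthropogenic:,} tonnes/yr")            # 668,000
print(f"natural share of all inputs: {natural_seepage / total:.0%}")  # ~47%, i.e. 'almost half'
for name, tonnes in [("exploration/production", exploration_production),
                     ("tanker transport", tanker_transport),
                     ("consumption run-off", consumption_runoff)]:
    print(f"{name}: {tonnes / anthropogenic:.0%} of anthropogenic input")  # ~6%, ~22%, ~72%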

“Today it has been estimated that virtually all of the marine food resources in the Mediterranean Sea have been reduced to less than 50 per cent of their original abundance […] The greatest impact has been on the larger predatory fish, which were the first to be targeted by fishers. […] It is estimated that, collectively, the European fish stocks of today are just one-tenth of their size in 1900. […] In 1950 the total global catch of marine seafood was just less than twenty million tonnes fresh weight. This increased steadily and rapidly until by the late 1980s more than eighty million tonnes were being taken each year […] Starting in the early 1990s, however, yields began to show signs of levelling off. […] By far the most heavily exploited marine fishery in the world is the Peruvian anchoveta (Engraulis ringens) fishery, which can account for 10 per cent or more of the global marine catch of seafood in any particular year. […] The anchoveta is a very oily fish, which makes it less desirable for direct consumption by humans. However, the high oil content makes it ideal for the production of fish meal and fish oil […] the demand for fish meal and fish oil is huge and about a third of the entire global catch of fish is converted into these products rather than consumed directly by humans. Feeding so much fish protein to livestock comes with a considerable loss of potential food energy (around 25 per cent) compared to if it was eaten directly by humans. This could be viewed as a potential waste of available energy for a rapidly growing human population […] around 90 per cent of the fish used to produce fish meal and oil is presently unpalatable to most people and thus unmarketable in large quantities as a human food”.

“On heavily fished areas of the continental shelves, the same parts of the sea floor can be repeatedly trawled many times per year. Such intensive bottom trawling causes great cumulative damage to seabed habitats. The trawls scrape and pulverize rich and complex bottom habitats built up over centuries by living organisms such as tube worms, cold-water corals, and oysters. These habitats are eventually reduced to uniform stretches of rubble and sand. For all intents and purposes these areas are permanently altered and become occupied by a much changed and much less rich community adapted to frequent disturbance.”

“The eighty million tonnes or so of marine seafood caught each year globally equates to about eleven kilograms of wild-caught marine seafood per person on the planet. […] What is perfectly clear […] on the basis of theory backed up by real data on marine fish catches, is that marine fisheries are now fully exploited and that there is little if any headroom for increasing the amount of wild-caught fish humans can extract from the oceans to feed a burgeoning human population. […] This conclusion is solidly supported by the increasingly precarious state of global marine fishery resources. The most recent information from the Food and Agriculture Organization of the United Nations (The State of World Fisheries and Aquaculture 2010) shows that over half (53 per cent) of all fish stocks are fully exploited – their current catches are at or close to their maximum sustainable levels of production and there is no scope for further expansion. Another 32 per cent are overexploited and in decline. Of the remaining 15 per cent of stocks, 12 per cent are considered moderately exploited and only 3 per cent underexploited. […] in the mid 1970s 40 per cent of all fish stocks were in [the moderately exploited or unexploited] category as opposed to around 15 per cent now. […] the real question is not so much whether we can get more fish from the sea but whether we can sustain the amount of fish we are harvesting at present”.
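
Two bits of arithmetic implicit in the quote, sketched below; the world-population figure of roughly seven billion is my assumption for the period the book covers, not the book’s:

catch_tonnes = 80_000_000         # ~80 million tonnes of wild-caught marine seafood per year
world_population = 7_000_000_000  # assumed rough world population for the early 2010s

print(catch_tonnes * 1_000 / world_population)  # ~11.4 kg per person per year

# FAO stock-status shares quoted above (per cent); together they account for all stocks:
fully, over, moderate, under = 53, 32, 12, 3
print(fully + over + moderate + under)          # 100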

Links:

Scleractinia.
Atoll. Fringing reef. Barrier reef.
Corallivore.
Broadcast spawning.
Acanthaster planci.
Coral bleaching. Ocean acidification.
Avicennia germinans. Pneumatophores. Lenticel.
Photophore. Lanternfish. Anglerfish. Black swallower.
Deep scattering layer. Taylor column.
Hydrothermal vent. Black smokers and white smokers. Chemosynthesis. Siboglinidae.
Intertidal zone. Tides. Tidal range.
Barnacle. Mussel.
Clupeidae. Gadidae. Scombridae.

March 16, 2018 Posted by | Biology, Books, Chemistry, Ecology, Evolutionary biology, Geology

Marine Biology (I)

This book was ‘okay’.

Some quotes and links related to the first half of the book below.

Quotes:

“The Global Ocean has come to be divided into five regional oceans – the Pacific, Atlantic, Indian, Arctic, and Southern Oceans […] These oceans are large, seawater-filled basins that share characteristic structural features […] The edge of each basin consists of a shallow, gently sloping extension of the adjacent continental land mass and is termed the continental shelf or continental margin. Continental shelves typically extend off-shore to depths of a couple of hundred metres and vary from several kilometres to hundreds of kilometres in width. […] At the outer edge of the continental shelf, the seafloor drops off abruptly and steeply to form the continental slope, which extends down to depths of 2–3 kilometres. The continental slope then flattens out and gives way to a vast expanse of flat, soft, ocean bottom — the abyssal plain — which extends over depths of about 3–5 kilometres and accounts for about 76 per cent of the Global Ocean floor. The abyssal plains are transected by extensive mid-ocean ridges—underwater mountain chains […]. Mid-ocean ridges form a continuous chain of mountains that extend linearly for 65,000 kilometres across the floor of the Global Ocean basins […]. In some places along the edges of the abyssal plains the ocean bottom is cut by narrow, oceanic trenches or canyons which plunge to extraordinary depths — 3–4 kilometres below the surrounding seafloor — and are thousands of kilometres long but only tens of kilometres wide. […] Seamounts are another distinctive and dramatic feature of ocean basins. Seamounts are typically extinct volcanoes that rise 1,000 or more metres above the surrounding ocean but do not reach the surface of the ocean. […] Seamounts generally occur in chains or clusters in association with mid-ocean ridges […] The Global Ocean contains an estimated 100,000 or so seamounts that rise more than 1,000 metres above the surrounding deep-ocean floor. […] on a planetary scale, the surface of the Global Ocean is moving in a series of enormous, roughly circular, wind-driven current systems, or gyres […] These gyres transport enormous volumes of water and heat energy from one part of an ocean basin to another”.

“We now know that the oceans are literally teeming with life. Viruses […] are astoundingly abundant – there are around ten million viruses per millilitre of seawater. Bacteria and other microorganisms occur at concentrations of around 1 million per millilitre”.

“The water in the oceans is in the form of seawater, a dilute brew of dissolved ions, or salts […] Chloride and sodium ions are the predominant salts in seawater, along with smaller amounts of other ions such as sulphate, magnesium, calcium, and potassium […] The total amount of dissolved salts in seawater is termed its salinity. Seawater typically has a salinity of roughly 35 – equivalent to about 35 grams of salts in one kilogram of seawater. […] Most marine organisms are exposed to seawater that, compared to the temperature extremes characteristic of terrestrial environments, ranges within a reasonably moderate range. Surface waters in tropical parts of ocean basins are consistently warm throughout the year, ranging from about 20–27°C […]. On the other hand, surface seawater in polar parts of ocean basins can get as cold as −1.9°C. Sea temperatures typically decrease with depth, but not in a uniform fashion. A distinct zone of rapid temperature transition is often present that separates warm seawater at the surface from cooler deeper seawater. This zone is called the thermocline layer […]. In tropical ocean waters the thermocline layer is a strong, well-defined and permanent feature. It may start at around 100 metres and be a hundred or so metres thick. Sea temperatures above the thermocline can be a tropical 25°C or more, but only 6–7°C just below the thermocline. From there the temperature drops very gradually with increasing depth. Thermoclines in temperate ocean regions are a more seasonal phenomenon, becoming well established in the summer as the sun heats up the surface waters, and then breaking down in the autumn and winter. Thermoclines are generally absent in the polar regions of the Global Ocean. […] As a rule of thumb, in the clearest ocean waters some light will penetrate to depths of 150-200 metres, with red light being absorbed within the first few metres and green and blue light penetrating the deepest. At certain times of the year in temperate coastal seas light may penetrate only a few tens of metres […] In the oceans, pressure increases by an additional atmosphere every 10 metres […] Thus, an organism living at a depth of 100 metres on the continental shelf experiences a pressure ten times greater than an organism living at sea level; a creature living at 5 kilometres depth on an abyssal plain experiences pressures some 500 times greater than at the surface”.
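
The ‘one extra atmosphere per 10 metres’ rule of thumb in the quote follows from the hydrostatic pressure relation. A minimal sketch, assuming standard round values for seawater density and gravity (these numbers are not from the book):

RHO = 1025       # approximate density of seawater, kg/m^3 (assumed)
G = 9.81         # gravitational acceleration, m/s^2
ATM = 101_325    # one standard atmosphere, in pascals

def total_pressure_atm(depth_m):
    """Approximate absolute pressure at a given depth, in atmospheres."""
    return 1 + RHO * G * depth_m / ATM

for depth in (10, 100, 5_000):
    print(depth, "m:", round(total_pressure_atm(depth), 1), "atm")
# ~2 atm at 10 m, ~11 atm at 100 m, and ~500 atm at 5 km -- roughly one extra
# atmosphere per 10 metres of depth, as the quote states.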

“With very few exceptions, dissolved oxygen is reasonably abundant throughout all parts of the Global Ocean. However, the amount of oxygen in seawater is much less than in air — seawater at 20°C contains about 5.4 millilitres of oxygen per litre of seawater, whereas air at this temperature contains about 210 millilitres of oxygen per litre. The colder the seawater, the more oxygen it contains […]. Oxygen is not distributed evenly with depth in the oceans. Oxygen levels are typically high in a thin surface layer 10–20 metres deep. Here oxygen from the atmosphere can freely diffuse into the seawater […] Oxygen concentration then decreases rapidly with depth and reaches very low levels, sometimes close to zero, at depths of around 200–1,000 metres. This region is referred to as the oxygen minimum zone […] This zone is created by the low rates of replenishment of oxygen diffusing down from the surface layer of the ocean, combined with the high rates of depletion of oxygen by decaying particulate organic matter that sinks from the surface and accumulates at these depths. Beneath the oxygen minimum zone, oxygen content increases again with depth such that the deep oceans contain quite high levels of oxygen, though not generally as high as in the surface layer. […] In contrast to oxygen, carbon dioxide (CO2) dissolves readily in seawater. Some of it is then converted into carbonic acid (H2CO3), bicarbonate ion (HCO3-), and carbonate ion (CO32-), with all four compounds existing in equilibrium with one another […] The pH of seawater is inversely proportional to the amount of carbon dioxide dissolved in it. […] the warmer the seawater, the less carbon dioxide it can absorb. […] Seawater is naturally slightly alkaline, with a pH ranging from about 7.5 to 8.5, and marine organisms have become well adapted to life within this stable pH range. […] In the oceans, carbon is never a limiting factor to marine plant photosynthesis and growth, as it is for terrestrial plants.”
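
The equilibrium chain referred to in the quote, written out in standard carbonate-system notation (my rendering, not quoted from the book):

\[
\mathrm{CO_2(aq) + H_2O \;\rightleftharpoons\; H_2CO_3 \;\rightleftharpoons\; H^+ + HCO_3^- \;\rightleftharpoons\; 2\,H^+ + CO_3^{2-}}
\]

Adding CO2 pushes the chain to the right and releases hydrogen ions, which is why the pH of seawater is inversely related to the amount of CO2 dissolved in it.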

“Since the beginning of the industrial revolution, the average pH of the Global Ocean has dropped by about 0.1 pH unit, making it 30 per cent more acidic than in pre-industrial times. […] As a result, more and more parts of the oceans are falling below a pH of 7.5 for longer periods of time. This trend, termed ocean acidification, is having profound impacts on marine organisms and the overall functioning of the marine ecosystem. For example, many types of marine organisms such as corals, clams, oysters, sea urchins, and starfish manufacture external shells or internal skeletons containing calcium carbonate. When the pH of seawater drops below about 7.5, calcium carbonate starts to dissolve, and thus the shells and skeletons of these organisms begin to erode and weaken, with obvious impacts on the health of the animal. Also, these organisms produce their calcium carbonate structures by combining calcium dissolved in seawater with carbonate ion. As the pH decreases, more of the carbonate ions in seawater become bound up with the increasing numbers of hydrogen ions, making fewer carbonate ions available to the organisms for shell-forming purposes. It thus becomes more difficult for these organisms to secrete their calcium carbonate structures and grow.”
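
Because pH is a logarithmic scale, the ‘0.1 pH unit’ and ‘30 per cent more acidic’ figures above are two ways of describing the same change. A minimal check:

# pH = -log10([H+]), so a fall of 0.1 pH unit multiplies the hydrogen-ion
# concentration by 10**0.1.
increase = 10 ** 0.1 - 1
print(f"{increase:.0%}")  # ~26%; the widely quoted ~30% figure reflects rounding
                          # and slightly different start/end pH values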

“Roughly half of the planet’s primary production — the synthesis of organic compounds by chlorophyll-bearing organisms using energy from the sun—is produced within the Global Ocean. On land the primary producers are large, obvious, and comparatively long-lived — the trees, shrubs, and grasses characteristic of the terrestrial landscape. The situation is quite different in the oceans where, for the most part, the primary producers are minute, short-lived microorganisms suspended in the sunlit surface layer of the oceans. These energy-fixing microorganisms — the oceans’ invisible forest — are responsible for almost all of the primary production in the oceans. […] A large amount, perhaps 30-50 per cent, of marine primary production is produced by bacterioplankton comprising tiny marine photosynthetic bacteria ranging from about 0.5 to 2 μm in size. […] light availability and the strength of vertical mixing are important factors limiting primary production in the oceans. Nutrient availability is the other main factor limiting the growth of primary producers. One important nutrient is nitrogen […] nitrogen is a key component of amino acids, which are the building blocks of proteins. […] Photosynthetic marine organisms also need phosphorus, which is a requirement for many important biological functions, including the synthesis of nucleic acids, a key component of DNA. Phosphorus in the oceans comes naturally from the erosion of rocks and soils on land, and is transported into the oceans by rivers, much of it in the form of dissolved phosphate (PO43−), which can be readily absorbed by marine photosynthetic organisms. […] Inorganic nitrogen and phosphorus compounds are abundant in deep-ocean waters. […] In practice, inorganic nitrogen and phosphorus compounds are not used up at exactly the same rate. Thus one will be depleted before the other and becomes the limiting nutrient at the time, preventing further photosynthesis and growth of marine primary producers until it is replenished. Nitrogen is often considered to be the rate-limiting nutrient in most oceanic environments, particularly in the open ocean. However, in coastal waters phosphorus is often the rate-limiting nutrient.”

“The overall pattern of primary production in the Global Ocean depends greatly on latitude […] In polar oceans primary production is a boom-and-bust affair driven by light availability. Here the oceans are well mixed throughout the year so nutrients are rarely limiting. However, during the polar winter there is no light, and thus no primary production is taking place. […] Although limited to a short seasonal pulse, the total amount of primary production can be quite high, especially in the polar Southern Ocean […] In tropical open oceans, primary production occurs at a low level throughout the year. Here light is never limiting but the permanent tropical thermocline prevents the mixing of deep, nutrient-rich seawater with the surface waters. […] open-ocean tropical waters are often referred to as ‘marine deserts’, with productivity […] comparable to a terrestrial desert. In temperate open-ocean regions, primary productivity is linked closely to seasonal events. […] Although occurring in a number of pulses, primary productivity in temperate oceans [is] similar to [that of] a temperate forest or grassland. […] Some of the most productive marine environments occur in coastal ocean above the continental shelves. This is the result of a phenomenon known as coastal upwelling which brings deep, cold, nutrient-rich seawater to the ocean surface, creating ideal conditions for primary productivity […], comparable to a terrestrial rainforest or cultivated farmland. These hotspots of marine productivity are created by wind acting in concert with the planet’s rotation. […] Coastal upwelling can occur when prevailing winds move in a direction roughly parallel to the edge of a continent so as to create offshore Ekman transport. Coastal upwelling is particularly prevalent along the west coasts of continents. […] Since coastal upwelling is dependent on favourable winds, it tends to be a seasonal or intermittent phenomenon and the strength of upwelling will depend on the strength of the winds. […] Important coastal upwelling zones around the world include the coasts of California, Oregon, northwest Africa, and western India in the northern hemisphere; and the coasts of Chile, Peru, and southwest Africa in the southern hemisphere. These regions are amongst the most productive marine ecosystems on the planet.”

“Considering the Global Ocean as a whole, it is estimated that total marine primary production is about 50 billion tonnes of carbon per year. In comparison, the total production of land plants, which can also be estimated using satellite data, is estimated at around 52 billion tonnes per year. […] Primary production in the oceans is spread out over a much larger surface area and so the average productivity per unit of surface area is much smaller than on land. […] the energy of primary production in the oceans flows to higher trophic levels through several different pathways of various lengths […]. Some energy is lost along each step of the pathway — on average the efficiency of energy transfer from one trophic level to the next is about 10 per cent. Hence, shorter pathways are more efficient. Via these pathways, energy ultimately gets transferred to large marine consumers such as large fish, marine mammals, marine turtles, and seabirds.”
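
The point that ‘shorter pathways are more efficient’ can be made concrete using the roughly 10 per cent transfer efficiency quoted above. A minimal sketch:

def energy_remaining(trophic_steps, efficiency=0.10):
    """Fraction of primary production surviving a food chain of a given length."""
    return efficiency ** trophic_steps

for steps in (1, 2, 3, 5):
    print(steps, f"{energy_remaining(steps):.4%}")
# One transfer leaves ~10% of the original energy, three transfers ~0.1%, and
# five transfers ~0.001% -- which is why shorter pathways deliver far more of
# the primary production to large consumers.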

“…it has been estimated that in the 17th century, somewhere between fifty million and a hundred million green turtles inhabited the Caribbean Sea, but numbers are now down to about 300,000. Since their numbers are now so low, their impact on seagrass communities is currently small, but in the past, green turtles would have been extraordinarily abundant grazers of seagrasses. It appears that in the past, green turtles thinned out seagrass beds, thereby reducing direct competition among different species of seagrass and allowing several species of seagrass to coexist. Without green turtles in the system, seagrass beds are generally overgrown monocultures of one dominant species. […] Seagrasses are of considerable importance to human society. […] It is therefore of great concern that seagrass meadows are in serious decline globally. In 2003 it was estimated that 15 per cent of the planet’s existing seagrass beds had disappeared in the preceding ten years. Much of this is the result of increasing levels of coastal development and dredging of the seabed, activities which release excessive amounts of sediment into coastal waters which smother seagrasses. […] The number of marine dead zones in the Global Ocean has roughly doubled every decade since the 1960s”.

“Sea ice is habitable because, unlike solid freshwater ice, it is a very porous substance. As sea ice forms, tiny spaces between the ice crystals become filled with a highly saline brine solution resistant to freezing. Through this process a three-dimensional network of brine channels and spaces, ranging from microscopic to several centimetres in size, is created within the sea ice. These channels are physically connected to the seawater beneath the ice and become colonized by a great variety of marine organisms. A significant amount of the primary production in the Arctic Ocean, perhaps up to 50 per cent in those areas permanently covered by sea ice, takes place in the ice. […] Large numbers of zooplanktonic organisms […] swarm about on the under surface of the ice, grazing on the ice community at the ice-seawater interface, and sheltering in the brine channels. […] These under-ice organisms provide the link to higher trophic levels in the Arctic food web […] They are an important food source for fish such as Arctic cod and glacial cod that graze along the bottom of the ice. These fish are in turn fed on by squid, seals, and whales.”

“[T]he Antarctic marine system consists of a ring of ocean about 10° of latitude wide – roughly 1,000 km. […] The Arctic and Antarctic marine systems can be considered geographic opposites. In contrast to the largely landlocked Arctic Ocean, the Southern Ocean surrounds the Antarctic continental land mass and is in open contact with the Atlantic, Indian, and Pacific Oceans. Whereas the Arctic Ocean is strongly influenced by river inputs, the Antarctic continent has no rivers, and so hard-bottomed seabed is common in the Southern Ocean, and there is no low-saline surface layer, as in the Arctic Ocean. Also, in contrast to the Arctic Ocean with its shallow, broad continental shelves, the Antarctic continental shelf is very narrow and steep. […] Antarctic waters are extremely nutrient rich, fertilized by a permanent upwelling of seawater that has its origins at the other end of the planet. […] This continuous upwelling of cold, nutrient-rich seawater, in combination with the long Antarctic summer day length, creates ideal conditions for phytoplankton growth, which drives the productivity of the Antarctic marine system. As in the Arctic, a well-developed sea-ice community is present. Antarctic ice algae are even more abundant and productive than in the Arctic Ocean because the sea ice is thinner, and there is thus more available light for photosynthesis. […] Antarctica’s most important marine species [is] the Antarctic krill […] Krill are very adept at surviving many months under starvation conditions — in the laboratory they can endure more than 200 days without food. During the winter months they lower their metabolic rate, shrink in body size, and revert back to a juvenile state. When food once again becomes abundant in the spring, they grow rapidly […] As the sea ice breaks up they leave the ice and begin feeding directly on the huge blooms of free-living diatoms […]. With so much food available they grow and reproduce quickly, and start to swarm in large numbers, often at densities in excess of 10,000 individuals per cubic metre — dense enough to colour the seawater a reddish-brown. Krill swarms are patchy and vary greatly in size […] Because the Antarctic marine system covers a large area, krill numbers are enormous, estimated at about 600 billion animals on average, or 500 million tonnes of krill. This makes Antarctic krill one of the most abundant animal species on the planet […] Antarctic krill are the main food source for many of Antarctica’s large marine animals, and a key link in a very short and efficient food chain […]. Krill comprise the staple diet of icefish, squid, baleen whales, leopard seals, fur seals, crabeater seals, penguins, and seabirds, including albatross. Thus, a very simple and efficient three-step food chain is in operation — diatoms eaten by krill in turn eaten by a suite of large consumers — which supports the large numbers of large marine animals living in the Southern Ocean.”

Links:

Ocean gyre. North Atlantic Gyre. Thermohaline circulation. North Atlantic Deep Water. Antarctic bottom water.
Cyanobacteria. Diatom. Dinoflagellate. Coccolithophore.
Trophic level.
Nitrogen fixation.
High-nutrient, low-chlorophyll regions.
Light and dark bottle method of measuring primary productivity. Carbon-14 method for estimating primary productivity.
Ekman spiral.
Peruvian anchoveta.
El Niño. El Niño–Southern Oscillation.
Copepod.
Dissolved organic carbon. Particulate organic matter. Microbial loop.
Kelp forest. Macrocystis. Sea urchin. Urchin barren. Sea otter.
Seagrass.
Green sea turtle.
Manatee.
Demersal fish.
Eutrophication. Harmful algal bloom.
Comb jelly. Asterias amurensis.
Great Pacific garbage patch.
Eelpout. Sculpin.
Polynya.
Crabeater seal.
Adélie penguin.
Anchor ice mortality.

March 13, 2018 Posted by | Biology, Books, Botany, Chemistry, Ecology, Geology, Zoology

The Ice Age (II)

I really liked the book, recommended if you’re at all interested in this kind of stuff. Below some observations from the book’s second half, and some related links:

“Charles MacLaren, writing in 1842, […] argued that the formation of large ice sheets would result in a fall in sea level as water was taken from the oceans and stored frozen on the land. This insight triggered a new branch of ice age research – sea level change. This topic can get rather complicated because as ice sheets grow, global sea level falls. This is known as eustatic sea level change. As ice sheets increase in size, their weight depresses the crust and relative sea level will rise. This is known as isostatic sea level change. […] It is often quite tricky to differentiate between regional-scale isostatic factors and the global-scale eustatic sea level control.”

“By the late 1870s […] glacial geology had become a serious scholarly pursuit with a rapidly growing literature. […] [In the late 1880s] Carvill Lewis […] put forward the radical suggestion that the [sea] shells at Moel Tryfan and other elevated localities (which provided the most important evidence for the great marine submergence of Britain) were not in situ. Building on the earlier suggestions of Thomas Belt (1832–78) and James Croll, he argued that these materials had been dredged from the sea bed by glacial ice and pushed upslope so that ‘they afford no testimony to the former subsidence of the land’. Together, his recognition of terminal moraines and the reworking of marine shells undermined the key pillars of Lyell’s great marine submergence. This was a crucial step in establishing the primacy of glacial ice over icebergs in the deposition of the drift in Britain. […] By the end of the 1880s, it was the glacial dissenters who formed the eccentric minority. […] In the period leading up to World War One, there was [instead] much debate about whether the ice age involved a single phase of ice sheet growth and freezing climate (the monoglacial theory) or several phases of ice sheet build up and decay separated by warm interglacials (the polyglacial theory).”

“As the Earth rotates about its axis travelling through space in its orbit around the Sun, there are three components that change over time in elegant cycles that are entirely predictable. These are known as eccentricity, precession, and obliquity or ‘stretch, wobble, and roll’ […]. These orbital perturbations are caused by the gravitational pull of the other planets in our Solar System, especially Jupiter. Milankovitch calculated how each of these orbital cycles influenced the amount of solar radiation received at different latitudes over time. These are known as Milankovitch Cycles or Croll–Milankovitch Cycles to reflect the important contribution made by both men. […] The shape of the Earth’s orbit around the Sun is not constant. It changes from an almost circular orbit to one that is mildly elliptical (a slightly stretched circle) […]. This orbital eccentricity operates over a 400,000- and 100,000-year cycle. […] Changes in eccentricity have a relatively minor influence on the total amount of solar radiation reaching the Earth, but they are important for the climate system because they modulate the influence of the precession cycle […]. When eccentricity is high, for example, axial precession has a greater impact on seasonality. […] The Earth is currently tilted at an angle of 23.4° to the plane of its orbit around the Sun. Astronomers refer to this axial tilt as obliquity. This angle is not fixed. It rolls back and forth over a 41,000-year cycle from a tilt of 22.1° to 24.5° and back again […]. Even small changes in tilt can modify the strength of the seasons. With a greater angle of tilt, for example, we can have hotter summers and colder winters. […] Cooler, reduced insolation summers are thought to be a key factor in the initiation of ice sheet growth in the middle and high latitudes because they allow more snow to survive the summer melt season. Slightly warmer winters may also favour ice sheet build-up as greater evaporation from a warmer ocean will increase snowfall over the centres of ice sheet growth. […] The Earth’s axis of rotation is not fixed. It wobbles like a spinning top slowing down. This wobble traces a circle on the celestial sphere […]. At present the Earth’s rotational axis points toward Polaris (the current northern pole star) but in 11,000 years it will point towards another star, Vega. This slow circling motion is known as axial precession and it has important impacts on the Earth’s climate by causing the solstices and equinoxes to move around the Earth’s orbit. In other words, the seasons shift over time. Precession operates over a 19,000- and 23,000-year cycle. This cycle is often referred to as the Precession of the Equinoxes.”
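
A purely illustrative toy sketch of how the three cycles combine; the periods are those given in the quote, but the relative amplitudes and the simple sum-of-sines form are my own simplifying assumptions, not a real insolation calculation:

import math

# Cycle periods (in years) are from the quote; amplitudes are toy values.
CYCLES = {"eccentricity": (100_000, 1.0),
          "obliquity":    (41_000, 0.5),
          "precession":   (23_000, 0.3)}

def toy_orbital_signal(t_years):
    """Combine the three orbital cycles as simple sinusoids (illustration only)."""
    return sum(amp * math.sin(2 * math.pi * t_years / period)
               for period, amp in CYCLES.values())

# Sample the combined signal every 20,000 years over 400,000 years to see how
# the cycles drift in and out of phase.
for t in range(0, 400_001, 20_000):
    print(f"{t:>7} yr  {toy_orbital_signal(t):+.2f}")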

“The albedo of a surface is a measure of its ability to reflect solar energy. Darker surfaces tend to absorb most of the incoming solar energy and have low albedos. The albedo of the ocean surface in high latitudes is commonly about 10 per cent — in other words, it absorbs 90 per cent of the incoming solar radiation. In contrast, snow, glacial ice, and sea ice have much higher albedos and can reflect between 50 and 90 per cent of incoming solar energy back into the atmosphere. The elevated albedos of bright frozen surfaces are a key feature of the polar radiation budget. Albedo feedback loops are important over a range of spatial and temporal scales. A cooling climate will increase snow cover on land and the extent of sea ice in the oceans. These high albedo surfaces will then reflect more solar radiation to intensify and sustain the cooling trend, resulting in even more snow and sea ice. This positive feedback can play a major role in the expansion of snow and ice cover and in the initiation of a glacial phase. Such positive feedbacks can also work in reverse when a warming phase melts ice and snow to reveal dark and low albedo surfaces such as peaty soil or bedrock.”
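
A small worked example of the numbers in the quote; absorbed energy is simply incoming energy times one minus the albedo:

def absorbed_fraction(albedo):
    """Fraction of incoming solar radiation absorbed by a surface."""
    return 1 - albedo

for surface, albedo in [("high-latitude ocean", 0.10),   # figure from the quote
                        ("snow / sea ice", 0.70)]:        # within the quoted 50-90% range
    print(surface, f"absorbs {absorbed_fraction(albedo):.0%} of incoming solar energy")
# Swapping a surface that absorbs ~90% for one that absorbs ~30% or less is
# what gives the albedo feedback its strength.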

“At the end of the Cretaceous, around 65 million years ago (Ma), lush forests thrived in the Polar Regions and ocean temperatures were much warmer than today. This warm phase continued for the next 10 million years, peaking during the Eocene thermal maximum […]. From that time onwards, however, Earth’s climate began a steady cooling that saw the initiation of widespread glacial conditions, first in Antarctica between 40 and 30 Ma, in Greenland between 20 and 15 Ma, and then in the middle latitudes of the northern hemisphere around 2.5 Ma. […] Over the past 55 million years, a succession of processes driven by tectonics combined to cool our planet. It is difficult to isolate their individual contributions or to be sure about the details of cause and effect over this long period, especially when there are uncertainties in dating and when one considers the complexity of the climate system with its web of internal feedbacks.” [Potential causes which have been highlighted include: The uplift of the Himalayas (leading to increased weathering, leading over geological time to an increased amount of CO2 being sequestered in calcium carbonate deposited on the ocean floor, lowering atmospheric CO2 levels), the isolation of Antarctica which created the Antarctic Circumpolar Current (leading to a cooling of Antarctica), the dry-out of the Mediterranean Sea ~5mya (which significantly lowered salt concentrations in the World Ocean, meaning that sea water froze at a higher temperature), and the formation of the Isthmus of Panama. – US].

“[F]or most of the last 1 million years, large ice sheets were present in the middle latitudes of the northern hemisphere and sea levels were lower than today. Indeed, ‘average conditions’ for the Quaternary Period involve much more ice than present. The interglacial peaks — such as the present Holocene interglacial, with its ice volume minima and high sea level — are the exception rather than the norm. The sea level maximum of the Last Interglacial (MIS 5) is higher than today. It also shows that cold glacial stages (c.80,000 years duration) are much longer than interglacials (c.15,000 years). […] Arctic willow […], the northernmost woody plant on Earth, is found in central European pollen records from the last glacial stage. […] For most of the Quaternary deciduous forests have been absent from most of Europe. […] the interglacial forests of temperate Europe that are so familiar to us today are, in fact, rather atypical when we consider the long view of Quaternary time. Furthermore, if the last glacial period is representative of earlier ones, for much of the Quaternary terrestrial ecosystems were continuously adjusting to a shifting climate.”

“Greenland ice cores typically have very clear banding […] that corresponds to individual years of snow accumulation. This is because the snow that falls in summer under the permanent Arctic sun differs in texture to the snow that falls in winter. The distinctive paired layers can be counted like tree rings to produce a finely resolved chronology with annual and even seasonal resolution. […] Ice accumulation is generally much slower in Antarctica, so the ice core record takes us much further back in time. […] As layers of snow become compacted into ice, air bubbles recording the composition of the atmosphere are sealed in discrete layers. This fossil air can be recovered to establish the changing concentration of greenhouse gases such as carbon dioxide (CO2) and methane (CH4). The ice core record therefore allows climate scientists to explore the processes involved in climate variability over very long timescales. […] By sampling each layer of ice and measuring its oxygen isotope composition, Dansgaard produced an annual record of air temperature for the last 100,000 years. […] Perhaps the most startling outcome of this work was the demonstration that global climate could change extremely rapidly. Dansgaard showed that dramatic shifts in mean air temperature (>10°C) had taken place in less than a decade. These findings were greeted with scepticism and there was much debate about the integrity of the Greenland record, but subsequent work from other drilling sites vindicated all of Dansgaard’s findings. […] The ice core records from Greenland reveal a remarkable sequence of abrupt warming and cooling cycles within the last glacial stage. These are known as Dansgaard–Oeschger (D–O) cycles. […] [A] series of D–O cycles between 65,000 and 10,000 years ago [caused] mean annual air temperatures on the Greenland ice sheet [to be] shifted by as much as 10°C. Twenty-five of these rapid warming events have been identified during the last glacial period. This discovery dispelled the long held notion that glacials were lengthy periods of stable and unremitting cold climate. The ice core record shows very clearly that even the glacial climate flipped back and forth. […] D–O cycles commence with a very rapid warming (between 5 and 10°C) over Greenland followed by a steady cooling […] Deglaciations are rapid because positive feedbacks speed up both the warming trend and ice sheet decay. […] The ice core records heralded a new era in climate science: the study of abrupt climate change. Most sedimentary records of ice age climate change yield relatively low resolution information — a thousand years may be packed into a few centimetres of marine or lake sediment. In contrast, ice cores cover every year. They also retain a greater variety of information about the ice age past than any other archive. We can even detect layers of volcanic ash in the ice and pinpoint the date of ancient eruptions.”
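
The ‘oxygen isotope composition’ measurements mentioned above are conventionally reported in delta notation; the definition below is standard practice rather than something spelled out in the quote:

\[
\delta^{18}\mathrm{O} \;=\; \left( \frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{sample}}}{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{standard}}} - 1 \right) \times 1000\ \text{‰}
\]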

“There are strong thermal gradients in both hemispheres because the low latitudes receive the most solar energy and the poles the least. To redress these imbalances the atmosphere and oceans move heat polewards — this is the basis of the climate system. In the North Atlantic a powerful surface current takes warmth from the tropics to higher latitudes: this is the famous Gulf Stream and its northeastern extension the North Atlantic Drift. Two main forces drive this current: the strong southwesterly winds and the return flow of colder, saltier water known as North Atlantic Deep Water (NADW). The surface current loses much of its heat to air masses that give maritime Europe a moist, temperate climate. Evaporative cooling also increases its salinity so that it begins to sink. As the dense and cold water sinks to the deep ocean to form NADW, it exerts a strong pull on the surface currents to maintain the cycle. It returns south at depths >2,000 m. […] The thermohaline circulation in the North Atlantic was periodically interrupted during Heinrich Events when vast discharges of melting icebergs cooled the ocean surface and reduced its salinity. This shut down the formation of NADW and suppressed the Gulf Stream.”

Links:

Archibald Geikie.
Andrew Ramsay (geologist).
Albrecht Penck. Eduard Brückner. Gunz glaciation. Mindel glaciation. Riss glaciation. Würm.
Insolation.
Perihelion and aphelion.
Deep Sea Drilling Project.
Foraminifera.
δ18O. Isotope fractionation.
Marine isotope stage.
Cesare Emiliani.
Nicholas Shackleton.
Brunhes–Matuyama reversal. Geomagnetic reversal. Magnetostratigraphy.
Climate: Long range Investigation, Mapping, and Prediction (CLIMAP).
Uranium–thorium dating. Luminescence dating. Optically stimulated luminescence. Cosmogenic isotope dating.
The role of orbital forcing in the Early-Middle Pleistocene Transition (paper).
European Project for Ice Coring in Antarctica (EPICA).
Younger Dryas.
Lake Agassiz.
Greenland ice core project (GRIP).
J Harlen Bretz. Missoula Floods.
Pleistocene megafauna.

February 25, 2018 Posted by | Astronomy, Engineering, Geology, History, Paleontology, Physics

The Ice Age (I)

I’m currently reading this book. Some observations and links related to the first half of the book below:

“It is important to appreciate from the outset that the Quaternary ice age was not one long episode of unremitting cold climate. […] By exploring the landforms, sediments, and fossils of the Quaternary Period we can identify glacials: periods of severe cold climate when great ice sheets formed in the high middle latitudes of the northern hemisphere and glaciers and ice caps advanced in mountain regions around the world. We can also recognize periods of warm climate known as interglacials when mean air temperatures in the middle latitudes were comparable to, and sometimes higher than, those of the present. As the climate shifted from glacial to interglacial mode, the large ice sheets of Eurasia and North America retreated allowing forest biomes to re-colonize the ice free landscapes. It is also important to recognize that the ice age isn’t just about advancing and retreating ice sheets. Major environmental changes also took place in the Mediterranean region and in the tropics. The Sahara, for example, became drier, cooler, and dustier during glacial periods yet early in the present interglacial it was a mosaic of lakes and oases with tracts of lush vegetation. A defining feature of the Quaternary Period is the repeated fluctuation in climate as conditions shifted from glacial to interglacial, and back again, during the course of the last 2.5 million years or so. A key question in ice age research is why does the Earth’s climate system shift so dramatically and so frequently?”

“Today we have large ice masses in the Polar Regions, but a defining feature of the Quaternary is the build-up and decay of continental-scale ice sheets in the high middle latitudes of the northern hemisphere. […] the Laurentide and Cordilleran ice sheets […] covered most of Canada and large tracts of the northern USA during glacial stages. Around 22,000 years ago, when the Laurentide ice sheet reached its maximum extent during the most recent glacial stage, it was considerably larger in both surface area and volume (34.8 million km3) than the present-day East and West Antarctic ice sheets combined (27 million km3). With a major ice dome centred on Hudson Bay greater than 4 km thick, it formed the largest body of ice on Earth. This great mass of ice depressed the crust beneath its bed by many hundreds of metres. Now shed of this burden, the crust is still slowly recovering today at rates of up to 1 cm per year. Glacial ice extended out beyond the 38th parallel across the lowland regions of North America. Chicago, Boston, and New York all lie on thick glacial deposits left by the Laurentide ice sheet. […] With huge volumes of water locked up in the ice sheets, global sea level was about 120 m lower than present at the Last Glacial Maximum (LGM), exposing large expanses of continental shelf and creating land bridges that allowed humans, animals, and plants to move between continents. Migration from eastern Russia to Alaska, for example, was possible via the Bering land bridge.”
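
A rough back-of-envelope check of the ice-volume and sea-level figures in the quote; the ocean area and the ice/seawater densities are my assumed round numbers, not from the book:

OCEAN_AREA_KM2 = 3.6e8      # assumed surface area of the Global Ocean, km^2
ICE_DENSITY = 917           # kg/m^3 (assumed)
SEAWATER_DENSITY = 1027     # kg/m^3 (assumed)

def sea_level_equivalent_m(ice_volume_km3):
    """Rough sea-level fall (in metres) represented by a given volume of land ice."""
    water_volume_km3 = ice_volume_km3 * ICE_DENSITY / SEAWATER_DENSITY
    return water_volume_km3 / OCEAN_AREA_KM2 * 1_000  # km -> m

print(round(sea_level_equivalent_m(34.8e6)))  # Laurentide alone: roughly 86 m
# The remaining ~30-40 m of the quoted ~120 m fall was stored in the other ice
# masses (Cordilleran, British, Scandinavian, and expanded polar ice sheets).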

“Large ice sheets also developed in Europe. […] The British Isles lie in an especially sensitive location on the Atlantic fringe of Europe between latitudes 50 and 60° north. Because of this geography, the Quaternary deposits of Britain record especially dramatic shifts in environmental conditions. The most extensive glaciation saw ice sheets extend as far south as the Thames Valley with wide braided rivers charged with meltwater and sediment from the ice margin. Beyond the glacial ice much of southern Britain would have been a treeless, tundra steppe environment with tracts of permanently frozen ground […]. At the LGM […] [t]he Baltic and North Seas were dry land and Britain was connected to mainland Europe. Beyond the British and Scandinavian ice sheets, much of central and northern Europe was a treeless tundra steppe habitat. […] During warm interglacial stages […] [b]road-leaved deciduous woodland with grassland was the dominant vegetation […]. In the warmest parts of interglacials thermophilous […] insects from the Mediterranean were common in Britain whilst the large mammal fauna of the Last Interglacial (c.130,000 to 115,000 years ago) included even more exotic species such as the short tusked elephant, rhinoceros, and hippopotamus. In some interglacials, the rivers of southern Britain contained molluscs that now live in the Nile Valley. For much of the Quaternary, however, climate would have been in an intermediate state (either warming or cooling) between these glacial and interglacial extremes.”

“Glaciologists make a distinction between three main types of glacier (valley glaciers, ice caps, and ice sheets) on the basis of scale and topographic setting. A glacier is normally constrained by the surrounding topography such as a valley and has a clearly defined source area. An ice cap builds up as a dome-like form on a high plateau or mountain peak and may feed several outlet glaciers to valleys below. Ice sheets notionally exceed 50,000 km2 and are not constrained by topography.”

“We live in unusual times. For more than 90 per cent of its 4.6-billion-year history, Earth has been too warm — even at the poles — for ice sheets to form. Ice ages are not the norm for our planet. Periods of sustained (over several million years) large-scale glaciation can be called glacial epochs. Tillites in the geological record tell us that the Quaternary ice age is just one of at least six great glacial epochs that have taken place over the last three billion years or so […]. The Quaternary itself is the culmination of a much longer glacial epoch that began around 35 million years ago (Ma) when glaciers and ice sheets first formed in Antarctica. This is known as the Cenozoic glacial epoch. There is still much to learn about these ancient glacial epochs, especially the so-called Snowball Earth states of the Precambrian (before 542 Ma) when the boundary conditions for the global climate system were so different to those of today. […] This book is concerned with the Quaternary ice age – it has the richest and most varied records of environmental change. Because its sediments are so recent they have not been subjected to millions of years of erosion or deep burial and metamorphism. […] in aquatic settings, such as lakes and peat bogs, organic materials such as insects, leaves, and seeds, as well as microfossils such as pollen and fungal spores can be exceptionally well preserved in the fossil record. This allows us to create very detailed pictures of past ecosystems under glacial and interglacial conditions. This field of research is known as Quaternary palaeoecology.”

“An erratic […] is a piece of rock that has been transported from its place of origin. […] Many erratics stand out because they lie on bedrock that is very different to their source. […] Erratics are normally associated with transport by glaciers or ice sheets, but in the early 19th century mechanisms such as the great deluge or rafting on icebergs were commonly invoked. […] Enormous erratic boulders […] were well known to 18th- and 19th-century geologists. […] Their origin was a source of lively and protracted debate […] Early observers of Alpine glaciers had noted the presence of large boulders on the surface of active glaciers or forming part of the debris pile at the glacier snout. These were readily explainable, but erratic boulders had long been noted in locations that defied rational explanations. The erratics found at elevations far above their known sources, and in places such as Britain where glaciers were absent, were especially problematic for early students of landscape history. […] A huge deluge […] was commonly invoked to explain the disposition of such boulders and many saw them as more hard evidence in support of the Biblical flood. […] At this time, the Church of England held a strong influence over much of higher education and especially so in Cambridge and Oxford.”

“Venetz [in the early 19th century] produced remarkably detailed topographic maps of lateral and terminal moraines that lay far down valley of the modern glaciers. He was able to show that many glaciers had advanced and retreated in the historical period. His was the first systematic analysis of climate-glacier-landscape interactions. […] In 1821, Venetz presented his findings to the Société Helvétique des Sciences Naturelles, setting out Perraudin’s ideas alongside his own. The paper had little impact, however, and would not see publication until 1833. […] Jean de Charpentier [in his work] paid particular attention to the disposition of large erratic blocks and the occurrence of polished and striated bedrock surfaces in the deep valleys of western Switzerland. A major step forward was Charpentier’s recognition of a clear relationship between the elevation of the erratic blocks in the Rhône Valley and the vertical extent of glacially smoothed rock walls. He noted that the bedrock valley sides above the erratic blocks were not worn smooth because they must have been above the level of the ancient glacier surface. The rock walls below the erratics always bore the hallmarks of contact with glacial ice. We call this boundary the trimline. It is often clearly marked in hard bedrock because the texture of the valley sides above the glacier surface is fractured due to attack by frost weathering. The detachment of rock particles above the trimline adds debris to lateral moraines and the glacier surface. These insights allowed Charpentier to reconstruct the vertical extent of former glaciers for the first time. Venetz and Perraudin had already shown how to demarcate the length and width of glaciers using the terminal and lateral moraines in these valleys. Charpentier described some of the most striking erratic boulders in the Alps […]. As Charpentier mapped the giant erratics, polished bedrock surfaces, and moraines in the Rhône Valley, it became clear to him that the valley must once have been occupied by a truly enormous glacier or ‘glacier-monstre’ as he called it. […] In 1836, Charpentier published a key paper setting out the main findings of their [his and Venetz’] glacial work”.

“Even before Charpentier was thinking about large ice masses in Switzerland, Jens Esmark (1763-1839) […] had suggested that northern European glaciers had been much more extensive in the past and were responsible for the transport of large erratic boulders and the formation of moraines. Esmark also recognized the key role of deep bedrock erosion by glacial ice in the formation of the spectacular Norwegian fjords. He worked out that glaciers in Norway had once extended down to sea level. Esmark’s ideas were […] translated into English and published […] in 1826, a decade in advance of Charpentier’s paper. Esmark discussed a large body of evidence pointing to an extensive glaciation of northern Europe. […] his thinking was far in advance of his contemporaries […] Unfortunately, even Esmark’s carefully argued paper held little sway in Britain and elsewhere […] it would be many decades before there was general acceptance within the geological community that glaciers could spread out across low gradient landscapes. […] in the lecture theatres and academic societies of Paris, Berlin, and London, the geological establishment was slow to take up these ideas, even though they were published in both English and French and were widely available. Much of the debate in the 1820s and early 1830s centred on the controversy over the evolution of valleys between the fluvialists (Hutton, Playfair, and others), who advocated slow river erosion, and the diluvialists (Buckland, De la Beche, and others) who argued that big valleys and large boulders needed huge deluges. The role of glaciers in valley and fjord formation was not considered. […] The key elements of a glacial theory were in place but nobody was listening. […] It would be decades before a majority accepted that vast tracts of Eurasia and North America had once been covered by mighty ice sheets.”

“Most geologists in 1840 saw Agassiz’s great ice sheet as a retrograde step. It was just too catastrophist — a blatant violation of hard-won uniformitarian principles. It was the antithesis of the new rational geology and was not underpinned by carefully assembled field data. So, for many, as an explanation for the superficial deposits of the Quaternary, it was no more convincing than the deluge. […] Ancient climates were [also] supposed to be warmer not colder. The suggestion of a freezing glacial epoch in the recent geological past, followed by the temperate climate of the present, still jarred with the conventional wisdom that Earth history, from its juvenile molten state to the present, was an uninterrupted record of long-term cooling without abrupt change. Lyell’s drift ice theory [according to which erratics (and till) had been transported by icebergs drifting in water, instead of glaciers transporting the material over land – US] also provided an attractive alternative to Agassiz’s ice age because it did not demand a period of cold glacial climate in areas that now enjoy temperate conditions. […] If anything, the 1840 sessions at the Geological Society had galvanized support for floating ice as a mechanism for drift deposition in the lowlands. Lyell’s model proved to be remarkably resilient—its popularity proved to be the major obstacle to the wider adoption of the land ice theory. […] many refused to believe that glacier ice could advance across gently sloping lowland terrain. This was a reasonable objection at this time since the ice sheets of Greenland and Antarctica had not yet been investigated from a glaciological point of view. It is not difficult to understand why many British geologists rejected the glacial theory when the proximity and potency of the sea was so obvious and nobody knew how large ice sheets behaved.”

Hitchcock […] was one of the first Americans to publicly embrace Agassiz’s ideas […] but he later stepped back from a full endorsement, leaving a role for floating ice. This hesitant beginning set the tone for the next few decades in North America as its geologists began to debate whether they could see the work of ice sheets or icebergs. There was a particularly strong tradition of scriptural geology in 19th-century North America. Its practitioners attempted to reconcile their field observations with the Bible and there were often close links with like-minded souls in Britain. […] If the standing of Lyell extended the useful lifespan of the iceberg theory, it was gradually worn down by a growing body of field evidence from Europe and North America that pointed to the action of glacier ice. […] The continental glacial theory prevailed in North America because it provided a much better explanation for the vast majority of the features recorded in the landscape. The striking regularity and fixed alignment of many features could not be the work of icebergs whose wanderings were governed by winds and ocean currents. The southern limit of the glacial deposits is often marked by pronounced ridges in an otherwise low-relief landscape. These end moraines mark the edge of the former ice sheet and they cannot be formed by floating ice. It took a long time to put all the pieces of evidence together in North America because of the vast scale of the territory to be mapped. Once the patterns of erratic dispersal, large-scale scratching of bedrock, terminal moraines, drumlin fields, and other features were mapped, their systematic arrangement argued strongly against the agency of drifting ice. Unlike their counterparts in Britain, who were never very far from the sea, geologists working deep in the continental interior of North America found it much easier to dismiss the idea of a great marine submergence. Furthermore, icebergs just did not transport enough sediment to account for the enormous extent and great thickness of the Quaternary deposits. It was also realized that icebergs were just not capable of planing off hard bedrock to create plateau surfaces. Neither were they able to polish, scratch, or cut deep grooves into ancient bedrock. All these features pointed to the action of land-based glacial ice. Slowly, but surely, the reality of vast expanses of glacier ice covering much of Canada and the northern states of the USA became apparent.”

Links:

Quaternary.
The Parallel Roads of Glen Roy.
William Boyd Dawkins.
Adams mammoth.
Georges Cuvier.
Cryosphere.
Cirque (geology). Arête. Tarn. Moraine. Drumlin. Till/Tillite. Glacier morphology.
James Hutton.
William Buckland.
Diluvium.
Charles Lyell.
Giétro Glacier.
Cwm Idwal.
Timothy Abbott Conrad. Charles Whittlesey. James Dwight Dana.

February 23, 2018 Posted by | Books, Ecology, Geography, Geology, History, Paleontology

Lakes (I)

“The aim of this book is to provide a condensed overview of scientific knowledge about lakes, their functioning as ecosystems that we are part of and depend upon, and their responses to environmental change. […] Each chapter briefly introduces concepts about the physical, chemical, and biological nature of lakes, with emphasis on how these aspects are connected, the relationships with human needs and impacts, and the implications of our changing global environment.”

I’m currently reading this book and I really like it so far. I have added some observations from the first half of the book and some coverage-related links below.

“High-resolution satellites can readily detect lakes above 0.002 square kilometres (km²) in area; that’s equivalent to a circular waterbody some 50m across. Using this criterion, researchers estimate from satellite images that the world contains 117 million lakes, with a total surface area amounting to 5 million km². […] continuous accumulation of materials on the lake floor, both from inflows and from the production of organic matter within the lake, means that lakes are ephemeral features of the landscape, and from the moment of their creation onwards, they begin to fill in and gradually disappear. The world’s deepest and most ancient freshwater ecosystem, Lake Baikal in Russia (Siberia), is a compelling example: it has a maximum depth of 1,642m, but its waters overlie a much deeper basin that over the twenty-five million years of its geological history has become filled with some 7,000m of sediments. Lakes are created in a great variety of ways: tectonic basins formed by movements in the Earth’s crust, the scouring and residual ice effects of glaciers, as well as fluvial, volcanic, riverine, meteorite impacts, and many other processes, including human construction of ponds and reservoirs. Tectonic basins may result from a single fault […] or from a series of intersecting fault lines. […] The oldest and deepest lakes in the world are generally of tectonic origin, and their persistence through time has allowed the evolution of endemic plants and animals; that is, species that are found only at those sites.”

“In terms of total numbers, most of the world’s lakes […] owe their origins to glaciers that during the last ice age gouged out basins in the rock and deepened river valleys. […] As the glaciers retreated, their terminal moraines (accumulations of gravel and sediments) created dams in the landscape, raising water levels or producing new lakes. […] During glacial retreat in many areas of the world, large blocks of glacial ice broke off and were left behind in the moraines. These subsequently melted out to produce basins that filled with water, called ‘kettle’ or ‘pothole’ lakes. Such waterbodies are well known across the plains of North America and Eurasia. […] The most violent of lake births are the result of volcanoes. The craters left behind after a volcanic eruption can fill with water to form small, often circular-shaped and acidic lakes. […] Much larger lakes are formed by the collapse of a magma chamber after eruption to produce caldera lakes. […] Craters formed by meteorite impacts also provide basins for lakes, and have proved to be of great scientific as well as human interest. […] There was a time when limnologists paid little attention to small lakes and ponds, but this has changed with the realization that although such waterbodies are modest in size, they are extremely abundant throughout the world and make up a large total surface area. Furthermore, these smaller waterbodies often have high rates of chemical activity such as greenhouse gas production and nutrient cycling, and they are major habitats for diverse plants and animals”.

“For Forel, the science of lakes could be subdivided into different disciplines and subjects, all of which continue to occupy the attention of freshwater scientists today […]. First, the physical environment of a lake includes its geological origins and setting, the water balance and exchange of heat with the atmosphere, as well as the penetration of light, the changes in temperature with depth, and the waves, currents, and mixing processes that collectively determine the movement of water. Second, the chemical environment is important because lake waters contain a great variety of dissolved materials (‘solutes’) and particles that play essential roles in the functioning of the ecosystem. Third, the biological features of a lake include not only the individual species of plants, microbes, and animals, but also their organization into food webs, and the distribution and functioning of these communities across the bottom of the lake and in the overlying water.”

“In the simplest hydrological terms, lakes can be thought of as tanks of water in the landscape that are continuously topped up by their inflowing rivers, while spilling excess water via their outflow […]. Based on this model, we can pose the interesting question: how long does the average water molecule stay in the lake before leaving at the outflow? This value is referred to as the water residence time, and it can be simply calculated as the total volume of the lake divided by the water discharge at the outlet. This lake parameter is also referred to as the ‘flushing time’ (or ‘flushing rate’, if expressed as a proportion of the lake volume discharged per unit of time) because it provides an estimate of how fast mineral salts and pollutants can be flushed out of the lake basin. In general, lakes with a short flushing time are more resilient to the impacts of human activities in their catchments […] Each lake has its own particular combination of catchment size, volume, and climate, and this translates into a water residence time that varies enormously among lakes [from perhaps a month to more than a thousand years, US] […] A more accurate approach towards calculating the water residence time is to consider the question: if the lake were to be pumped dry, how long would it take to fill it up again? For most lakes, this will give a similar value to the outflow calculation, but for lakes where evaporation is a major part of the water balance, the residence time will be much shorter.”
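
The residence-time arithmetic above is simple enough to sketch in code. Below is a minimal illustration in Python; the lake volume and discharge figures are invented for the example and are not from the book:

```python
# Water residence time ("flushing time") as described above:
# residence time = lake volume / outflow discharge.

lake_volume_m3 = 2.0e9          # hypothetical lake volume: 2 cubic kilometres
outflow_m3_per_s = 50.0         # hypothetical mean discharge at the outlet

seconds_per_year = 365.25 * 24 * 3600

# Residence time in years: how long an average water molecule stays in the lake.
residence_time_years = lake_volume_m3 / (outflow_m3_per_s * seconds_per_year)

# Flushing rate: the proportion of the lake volume discharged per year.
flushing_rate_per_year = 1.0 / residence_time_years

print(f"Residence time: {residence_time_years:.2f} years")
print(f"Flushing rate: {flushing_rate_per_year:.2f} lake volumes per year")
```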

“Each year, mineral and organic particles are deposited by wind on the lake surface and are washed in from the catchment, while organic matter is produced within the lake by aquatic plants and plankton. There is a continuous rain of this material downwards, ultimately accumulating as an annual layer of sediment on the lake floor. These lake sediments are storehouses of information about past changes in the surrounding catchment, and they provide a long-term memory of how the limnology of a lake has responded to those changes. The analysis of these natural archives is called ‘palaeolimnology’ (or ‘palaeoceanography’ for marine studies), and this branch of the aquatic sciences has yielded enormous insights into how lakes change through time, including the onset, effects, and abatement of pollution; changes in vegetation both within and outside the lake; and alterations in regional and global climate.”

“Sampling for palaeolimnological analysis is typically undertaken in the deepest waters to provide a more integrated and complete picture of the lake basin history. This is also usually the part of the lake where sediment accumulation has been greatest, and where the disrupting activities of bottom-dwelling animals (‘bioturbation’ of the sediments) may be reduced or absent. […] Some of the most informative microfossils to be found in lake sediments are diatoms, an algal group that has cell walls (‘frustules’) made of silica glass that resist decomposition. Each lake typically contains dozens to hundreds of different diatom species, each with its own characteristic set of environmental preferences […]. A widely adopted approach is to sample many lakes and establish a statistical relationship or ‘transfer function’ between diatom species composition (often by analysis of surface sediments) and a lake water variable such as temperature, pH, phosphorus, or dissolved organic carbon. This quantitative species–environment relationship can then be applied to the fossilized diatom species assemblage in each stratum of a sediment core from a lake in the same region, and in this way the physical and chemical fluctuations that the lake has experienced in the past can be reconstructed or ‘hindcast’ year-by-year. Other fossil indicators of past environmental change include algal pigments, DNA of algae and bacteria including toxic bloom species, and the remains of aquatic animals such as ostracods, cladocerans, and larval insects.”
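
The transfer-function approach can be illustrated with a toy calculation. Real palaeolimnological transfer functions are typically built with methods such as weighted averaging across many species and many calibration lakes; the sketch below substitutes a single made-up assemblage score and an ordinary least-squares fit, purely to show the logic of calibrating on modern lakes and then hindcasting from a sediment core:

```python
import numpy as np

# Calibration set: for each modern lake, a single summary of its diatom
# assemblage (e.g. an abundance-weighted species score) and its measured pH.
# All numbers are invented for illustration.
assemblage_score = np.array([1.2, 2.0, 2.8, 3.5, 4.1, 4.9, 5.6])
measured_ph      = np.array([5.9, 6.3, 6.6, 7.0, 7.2, 7.6, 7.9])

# Fit a simple linear transfer function: pH ~ a * score + b.
a, b = np.polyfit(assemblage_score, measured_ph, deg=1)

# Fossil assemblage scores from successive strata of a sediment core (top first).
core_scores = np.array([5.0, 4.4, 3.6, 2.9, 2.1])

# Hindcast: apply the modern calibration to the fossil assemblages.
reconstructed_ph = a * core_scores + b
for depth_cm, ph in zip(range(0, 50, 10), reconstructed_ph):
    print(f"{depth_cm:>2} cm depth: reconstructed pH ~ {ph:.2f}")
```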

“In lake and ocean studies, the penetration of sunlight into the water can be […] precisely measured with an underwater light meter (submersible radiometer), and such measurements always show that the decline with depth follows a sharp curve rather than a straight line […]. This is because the fate of sunlight streaming downwards in water is dictated by the probability of the photons being absorbed or deflected out of the light path; for example, a 50 per cent probability of photons being lost from the light beam by these processes per metre depth in a lake would result in sunlight values dropping from 100 per cent at the surface to 50 per cent at 1m, 25 per cent at 2m, 12.5 per cent at 3m, and so on. The resulting exponential curve means that for all but the clearest of lakes, there is only enough solar energy for plants, including photosynthetic cells in the plankton (phytoplankton), in the upper part of the water column. […] The depth limit for underwater photosynthesis or primary production is known as the ‘compensation depth’. This is the depth at which carbon fixed by photosynthesis exactly balances the carbon lost by cellular respiration, so the overall production of new biomass (net primary production) is zero. This depth often corresponds to an underwater light level of 1 per cent of the sunlight just beneath the water surface […] The production of biomass by photosynthesis takes place at all depths above this level, and this zone is referred to as the ‘photic’ zone. […] biological processes in [the] ‘aphotic zone’ are mostly limited to feeding and decomposition. A Secchi disk measurement can be used as a rough guide to the extent of the photic zone: in general, the 1 per cent light level is about twice the Secchi depth.”
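
The exponential decline and the rules of thumb quoted above are easy to reproduce. A minimal sketch, assuming the book's example of a 50 per cent loss of photons per metre and taking the 1 per cent light level as the bottom of the photic zone:

```python
import math

surface_light = 100.0       # per cent, just beneath the surface
loss_per_metre = 0.5        # 50 per cent of photons lost per metre (the book's example)

# Light remaining at depth z follows an exponential: I(z) = I0 * (1 - loss)**z,
# equivalently I(z) = I0 * exp(-k*z) with attenuation coefficient k.
k = -math.log(1.0 - loss_per_metre)   # ~0.693 per metre in this example

for z in range(0, 8):
    print(f"{z} m: {surface_light * math.exp(-k * z):6.2f} % of surface light")

# Depth of the 1 per cent light level (a common proxy for the compensation depth):
z_1pct = math.log(100.0) / k
print(f"1 per cent light level at ~{z_1pct:.1f} m")

# Rough field shortcut mentioned in the text: photic depth ~ 2 x Secchi depth.
print(f"Expected Secchi depth of roughly {z_1pct / 2.0:.1f} m")
```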

“[W]ater colour is now used in […] many powerful ways to track changes in water quality and other properties of lakes, rivers, estuaries, and the ocean. […] Lakes have different colours, hues, and brightness levels as a result of the materials that are dissolved and suspended within them. The purest of lakes are deep blue because the water molecules themselves absorb light in the green and, to a greater extent, red end of the spectrum; they scatter the remaining blue photons in all directions, mostly downwards but also back towards our eyes. […] Algae in the water typically cause it to be green and turbid because their suspended cells and colonies contain chlorophyll and other light-capturing molecules that absorb strongly in the blue and red wavebands, but not green. However, there are some notable exceptions. Noxious algal blooms dominated by cyanobacteria are blue-green (cyan) in colour, caused by their blue-coloured protein phycocyanin, in addition to chlorophyll.”

“[A]t the largest dimension, at the scale of the entire lake, there has to be a net flow from the inflowing rivers to the outflow, and […] from this landscape perspective, lakes might be thought of as enlarged rivers. Of course, this riverine flow is constantly disrupted by wind-induced movements of the water. When the wind blows across the surface, it drags the surface water with it to generate a downwind flow, and this has to be balanced by a return movement of water at depth. […] In large lakes, the rotation of the Earth has plenty of time to exert its weak effect as the water moves from one side of the lake to the other. As a result, the surface water no longer flows in a straight line, but rather is directed into two or more circular patterns or gyres that can move nearshore water masses rapidly into the centre of the lake and vice-versa. Gyres can therefore be of great consequence […] Unrelated to the Coriolis Effect, the interaction between wind-induced currents and the shoreline can also cause water to flow in circular, individual gyres, even in smaller lakes. […] At a much smaller scale, the blowing of wind across a lake can give rise to downward spiral motions in the water, called ‘Langmuir cells’. […] These circulation features are commonly observed in lakes, where the spirals progressing in the general direction of the wind concentrate foam (on days of white-cap waves) or glossy, oily materials (on less windy days) into regularly spaced lines that are parallel to the direction of the wind. […] Density currents must also be included in this brief discussion of water movement […] Cold river water entering a warm lake will be denser than its surroundings and therefore sinks to the bottom, where it may continue to flow for considerable distances. […] Density currents contribute greatly to inshore-offshore exchanges of water, with potential effects on primary productivity, deep-water oxygenation, and the dispersion of pollutants.”

Links:

Limnology.
Drainage basin.
Lake Geneva. Lake Malawi. Lake Tanganyika. Lake Victoria. Lake Biwa. Lake Titicaca.
English Lake District.
Proglacial lake. Lake Agassiz. Lake Ojibway.
Lake Taupo.
Manicouagan Reservoir.
Subglacial lake.
Thermokarst (-lake).
Bathymetry. Bathymetric chart. Hypsographic curve.
Várzea forest.
Lake Chad.
Colored dissolved organic matter.
H2O Temperature-density relationship. Thermocline. Epilimnion. Hypolimnion. Monomictic lake. Dimictic lake. Lake stratification.
Capillary wave. Gravity wave. Seiche. Kelvin wave. Poincaré wave.
Benthic boundary layer.
Kelvin–Helmholtz instability.

January 22, 2018 Posted by | Biology, Books, Botany, Chemistry, Geology, Paleontology, Physics

Rivers (II)

Some more observations from the book and related links below.

“By almost every measure, the Amazon is the greatest of all the large rivers. Encompassing more than 7 million square kilometres, its drainage basin is the largest in the world and makes up 5% of the global land surface. The river accounts for nearly one-fifth of all the river water discharged into the oceans. The flow is so great that water from the Amazon can still be identified 125 miles out in the Atlantic […] The Amazon has some 1,100 tributaries, and 7 of these are more than 1,600 kilometres long. […] In the lowlands, most Amazonian rivers have extensive floodplains studded with thousands of shallow lakes. Up to one-quarter of the entire Amazon Basin is periodically flooded, and these lakes become progressively connected with each other as the water levels rise.”

“To hydrologists, the term ‘flood’ refers to a river’s annual peak discharge period, whether the water inundates the surrounding landscape or not. In more common parlance, however, a flood is synonymous with the river overflowing its banks […] Rivers flood in the natural course of events. This often occurs on the floodplain, as the name implies, but flooding can affect almost all of the length of the river. Extreme weather, particularly heavy or protracted rainfall, is the most frequent cause of flooding. The melting of snow and ice is another common cause. […] River floods are one of the most common natural hazards affecting human society, frequently causing social disruption, material damage, and loss of life. […] Most floods have a seasonal element in their occurrence […] It is a general rule that the magnitude of a flood is inversely related to its frequency […] Many of the less predictable causes of flooding occur after a valley has been blocked by a natural dam as a result of a landslide, glacier, or lava flow. Natural dams may cause upstream flooding as the blocked river forms a lake and downstream flooding as a result of failure of the dam.”
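
The inverse relationship between flood magnitude and frequency is usually made concrete through flood-frequency analysis of annual peak discharges. A minimal sketch with invented discharge values; the Weibull plotting-position formula used here is a standard textbook choice, not necessarily the one the book has in mind:

```python
# Estimate return periods from a short series of annual maximum discharges.
annual_peaks_m3s = [820, 640, 1500, 930, 710, 2100, 880, 1200, 760, 990]

# Rank the peaks from largest to smallest and apply the Weibull
# plotting position: return period T = (n + 1) / rank.
n = len(annual_peaks_m3s)
ranked = sorted(annual_peaks_m3s, reverse=True)

for rank, q in enumerate(ranked, start=1):
    return_period = (n + 1) / rank
    print(f"{q:>5} m^3/s  ~ exceeded once every {return_period:.1f} years")
# The biggest floods come out with the longest return periods, i.e. the
# lowest frequencies, which is the general rule stated in the passage above.
```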

“The Tigris-Euphrates, Nile, and Indus are all large, exotic river systems, but in other respects they are quite different. The Nile has a relatively gentle gradient in Egypt and a channel that has experienced only small changes over the last few thousand years, by meander cut-off and a minor shift eastwards. The river usually flooded in a regular and predictable way. The stability and long continuity of the Egyptian civilization may be a reflection of its river’s relative stability. The steeper channel of the Indus, by contrast, has experienced major avulsions over great distances on the lower Indus Plain and some very large floods caused by the failure of glacier ice dams in the Himalayan mountains. Likely explanations for the abandonment of many Harappan cities […] take account of damage caused by major floods and/or the disruption caused by channel avulsion leading to a loss of water supply. Channel avulsion was also a problem for the Sumerian civilization on the alluvial plain called Mesopotamia […] known for the rise and fall of its numerous city states. Most of these cities were situated along the Euphrates River, probably because it was more easily controlled for irrigation purposes than the Tigris, which flowed faster and carried much more water. However, the Euphrates was an anastomosing river with multiple channels that diverge and rejoin. Over time, individual branch channels ceased to flow as others formed, and settlements located on these channels inevitably declined and were abandoned as their water supply ran dry, while others expanded as their channels carried greater amounts of water.”

“During the colonization of the Americas in the mid-18th century and the imperial expansion into Africa and Asia in the late 19th century, rivers were commonly used as boundaries because they were the first, and frequently the only, features mapped by European explorers. The diplomats in Europe who negotiated the allocation of colonial territories claimed by rival powers knew little of the places they were carving up. Often, their limited knowledge was based solely on maps that showed few details, rivers being the only distinct physical features marked. Today, many international river boundaries remain as legacies of those historical decisions based on poor geographical knowledge because states have been reluctant to alter their territorial boundaries from original delimitation agreements. […] no less than three-quarters of the world’s international boundaries follow rivers for at least part of their course. […] approximately 60% of the world’s fresh water is drawn from rivers shared by more than one country.”

“The sediments carried in rivers, laid down over many years, represent a record of the changes that have occurred in the drainage basin through the ages. Analysis of these sediments is one way in which physical geographers can interpret the historical development of landscapes. They can study the physical and chemical characteristics of the sediments themselves and/or the biological remains they contain, such as pollen or spores. […] The simple rate at which material is deposited by a river can be a good reflection of how conditions have changed in the drainage basin. […] Pollen from surrounding plants is often found in abundance in fluvial sediments, and the analysis of pollen can yield a great deal of information about past conditions in an area. […] Very long sediment cores taken from lakes and swamps enable us to reconstruct changes in vegetation over very long time periods, in some cases over a million years […] Because climate is a strong determinant of vegetation, pollen analysis has also proved to be an important method for tracing changes in past climates.”

“The energy in flowing and falling water has been harnessed to perform work by turning water-wheels for more than 2,000 years. The moving water turns a large wheel and a shaft connected to the wheel axle transmits the power from the water through a system of gears and cogs to work machinery, such as a millstone to grind corn. […] The early medieval watermill was able to do the work of between 30 and 60 people, and by the end of the 10th century in Europe, waterwheels were commonly used in a wide range of industries, including powering forge hammers, oil and silk mills, sugar-cane crushers, ore-crushing mills, breaking up bark in tanning mills, pounding leather, and grinding stones. Nonetheless, most were still used for grinding grains for preparation into various types of food and drink. The Domesday Book, a survey prepared in England in AD 1086, lists 6,082 watermills, although this is probably a conservative estimate because many mills were not recorded in the far north of the country. By 1300, this number had risen to exceed 10,000. […] Medieval watermills typically powered their wheels by using a dam or weir to concentrate the falling water and pond a reserve supply. These modifications to rivers became increasingly common all over Europe, and by the end of the Middle Ages, in the mid-15th century, watermills were in use on a huge number of rivers and streams. The importance of water power continued into the Industrial Revolution […]. The early textile factories were built to produce cloth using machines driven by waterwheels, so they were often called mills. […] [Today,] about one-third of all countries rely on hydropower for more than half their electricity. Globally, hydropower provides about 20% of the world’s total electricity supply.”
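
Whether the machine is a medieval waterwheel or a modern turbine, the power available from falling water follows the same simple relationship: water density times gravity times discharge times head, reduced by an efficiency factor. A minimal sketch with invented numbers:

```python
# Hydropower: P = rho * g * Q * H * efficiency
rho = 1000.0      # density of water, kg/m^3
g = 9.81          # gravitational acceleration, m/s^2

Q = 20.0          # hypothetical discharge through the turbines, m^3/s
H = 35.0          # hypothetical head (height the water falls), m
efficiency = 0.9  # modern turbines convert most of the available energy

power_watts = rho * g * Q * H * efficiency
print(f"Output: {power_watts / 1e6:.1f} MW")   # ~6.2 MW for these numbers
```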

“Deliberate manipulation of river channels through engineering works, including dam construction, diversion, channelization, and culverting, […] has a long history. […] In Europe today, almost 80% of the total discharge of the continent’s major rivers is affected by measures designed to regulate flow, whether for drinking water supply, hydroelectric power generation, flood control, or any other reason. The proportion in individual countries is higher still. About 90% of rivers in the UK are regulated as a result of these activities, while in the Netherlands this percentage is close to 100. By contrast, some of the largest rivers on other continents, including the Amazon and the Congo, are hardly manipulated at all. […] Direct and intentional modifications to rivers are complemented by the impacts of land use and land use changes which frequently result in the alteration of rivers as an unintended side effect. Deforestation, afforestation, land drainage, agriculture, and the use of fire have all had significant impacts, with perhaps the most extreme effects produced by construction activity and urbanization. […] The major methods employed in river regulation are the construction of large dams […], the building of run-of-river impoundments such as weirs and locks, and by channelization, a term that covers a range of river engineering works including widening, deepening, straightening, and the stabilization of banks. […] Many aspects of a dynamic river channel and its associated ecosystems are mutually adjusting, so a human activity in a landscape that affects the supply of water or sediment is likely to set off a complex cascade of other alterations.”

“The methods of storage (in reservoirs) and distribution (by canal) have not changed fundamentally since the earliest river irrigation schemes, with the exception of some contemporary projects’ use of pumps to distribute water over greater distances. Nevertheless, many irrigation canals still harness the force of gravity. Half the world’s large dams (defined as being 15 metres or higher) were built exclusively or primarily for irrigation, and about one-third of the world’s irrigated cropland relies on reservoir water. In several countries, including such populous nations as India and China, more than 50% of arable land is irrigated by river water supplied from dams. […] Sadly, many irrigation schemes are not well managed and a number of environmental problems are frequently experienced as a result, both on-site and off-site. In many large networks of irrigation canals, less than half of the water diverted from a river or reservoir actually benefits crops. A lot of water seeps away through unlined canals or evaporates before reaching the fields. Some also runs off the fields or infiltrates through the soil, unused by plants, because farmers apply too much water or at the wrong time. Much of this water seeps back into nearby streams or joins underground aquifers, so can be used again, but the quality of water may deteriorate if it picks up salts, fertilizers, or pesticides. Excessive applications of irrigation water often result in rising water tables beneath fields, causing salinization and waterlogging. These processes reduce crop yields on irrigation schemes all over the world.”

“[Deforestation can contribute] to the degradation of aquatic habitats in numerous ways. The loss of trees along river banks can result in changes in the species found in the river because fewer trees means a decline in plant matter and insects falling from them, items eaten by some fish. Fewer trees on river banks also results in less shade. More sunlight reaching the river results in warmer water and the enhanced growth of algae. A change in species can occur as fish that feed on falling food are edged out by those able to feed on algae. Deforestation also typically results in more runoff and more soil erosion. This sediment may cover spawning grounds, leading to lower reproduction rates. […] Grazing and trampling by livestock reduces vegetation cover and causes the compaction of soil, which reduces its infiltration capacity. As rainwater passes over or through the soil in areas of intensive agriculture, it picks up residues from pesticides and fertilizers and transports them to rivers. In this way, agriculture has become a leading source of river pollution in certain parts of the world. Concentrations of nitrates and phosphates, derived from fertilizers, have risen notably in many rivers in Europe and North America since the 1950s and have led to a range of […] problems encompassed under the term ‘eutrophication’ – the raising of biological productivity caused by nutrient enrichment. […] In slow-moving rivers […] the growth of algae reduces light penetration and depletes the oxygen in the water, sometimes causing fish kills.”

“One of the most profound ways in which people alter rivers is by damming them. Obstructing a river and controlling its flow in this way brings about a raft of changes. A dam traps sediments and nutrients, alters the river’s temperature and chemistry, and affects the processes of erosion and deposition by which the river sculpts the landscape. Dams create more uniform flow in rivers, usually by reducing peak flows and increasing minimum flows. Since the natural variation in flow is important for river ecosystems and their biodiversity, when dams even out flows the result is commonly fewer fish of fewer species. […] the past 50 years or so has seen a marked escalation in the rate and scale of construction of dams all over the world […]. At the beginning of the 21st century, there were about 800,000 dams worldwide […] In some large river systems, the capacity of dams is sufficient to hold more than the entire annual discharge of the river. […] Globally, the world’s major reservoirs are thought to control about 15% of the runoff from the land. The volume of water trapped worldwide in reservoirs of all sizes is no less than five times the total global annual river flow […] Downstream of a reservoir, the hydrological regime of a river is modified. Discharge, velocity, water quality, and thermal characteristics are all affected, leading to changes in the channel and its landscape, plants, and animals, both on the river itself and in deltas, estuaries, and offshore. By slowing the flow of river water, a dam acts as a trap for sediment and hence reduces loads in the river downstream. As a result, the flow downstream of the dam is highly erosive. A relative lack of silt arriving at a river’s delta can result in more coastal erosion and the intrusion of seawater that brings salt into delta ecosystems. […] The dam-barrier effect on migratory fish and their access to spawning grounds has been recognized in Europe since medieval times.”

“One of the most important effects cities have on rivers is the way in which urbanization affects flood runoff. Large areas of cities are typically impermeable, being covered by concrete, stone, tarmac, and bitumen. This tends to increase the amount of runoff produced in urban areas, an effect exacerbated by networks of storm drains and sewers. This water carries relatively little sediment (again, because soil surfaces have been covered by impermeable materials), so when it reaches a river channel it typically causes erosion and widening. Larger and more frequent floods are another outcome of the increase in runoff generated by urban areas. […] It […] seems very likely that efforts to manage the flood hazard on the Mississippi have contributed to an increased risk of damage from tropical storms on the Gulf of Mexico coast. The levées built along the river have contributed to the loss of coastal wetlands, starving them of sediment and fresh water, thereby reducing their dampening effect on storm surge levels. This probably enhanced the damage from Hurricane Katrina which struck the city of New Orleans in 2005.”

Links:

Onyx River.
Yangtze. Yangtze floods.
Missoula floods.
Murray River.
Ganges.
Thalweg.
Southeastern Anatolia Project.
Water conflict.
Hydropower.
Fulling mill.
Maritime transport.
Danube.
Lock (water navigation).
Hydrometry.
Yellow River.
Aswan High Dam. Warragamba Dam. Three Gorges Dam.
Onchocerciasis.
River restoration.

January 16, 2018 Posted by | Biology, Books, Ecology, Engineering, Geography, Geology, History

Rivers (I)

I gave the book one star on goodreads. My review on goodreads explains why. In this post I’ll disregard the weak parts of the book and only cover ‘the good stuff’. Part of the reason why I gave the book one star instead of two was that I wanted to punish the author for wasting my time with irrelevant stuff when it was clear to me that he could actually have been providing useful information instead; some parts of the book are quite good.

Some quotes and links below.

“[W]ater is continuously on the move, being recycled between the land, oceans, and atmosphere: an eternal succession known as the hydrological cycle. Rivers play a key role in the hydrological cycle, draining water from the land and moving it ultimately to the sea. Any rain or melted snow that doesn’t evaporate or seep into the earth flows downhill over the land surface under the influence of gravity. This flow is channelled by small irregularities in the topography into rivulets that merge to become gullies that feed into larger channels. The flow of rivers is augmented with water flowing through the soil and from underground stores, but a river is more than simply water flowing to the sea. A river also carries rocks and other sediments, dissolved minerals, plants, and animals, both dead and alive. In doing so, rivers transport large amounts of material and provide habitats for a great variety of wildlife. They carve valleys and deposit plains, being largely responsible for shaping the Earth’s continental landscapes. Rivers change progressively over their course from headwaters to mouth, from steep streams that are narrow and turbulent to wider, deeper, often meandering channels. From upstream to downstream, a continuum of change occurs: the volume of water flowing usually increases and coarse sediments grade into finer material. In its upper reaches, a river erodes its bed and banks, but this removal of earth, pebbles, and sometimes boulders gives way to the deposition of material in lower reaches. In tune with these variations in the physical characteristics of the river, changes can also be seen in the types of creatures and plants that make the river their home. […] Rivers interact with the sediments beneath the channel and with the air above. The water flowing in many rivers comes both directly from the air as rainfall – or another form of precipitation – and also from groundwater sources held in rocks and gravels beneath, both being flows of water through the hydrological cycle.”

“One interesting aspect of rivers is that they seem to be organized hierarchically. When viewed from an aircraft or on a map, rivers form distinct networks like the branches of a tree. Small tributary channels join together to form larger channels which in turn merge to form still larger rivers. This progressive increase in river size is often described using a numerical ordering scheme in which the smallest stream is called first order, the union of two first-order channels produces a second-order river, the union of two second-order channels produces a third-order river, and so on. Stream order only increases when two channels of the same rank merge. Very large rivers, such as the Nile and Mississippi, are tenth-order rivers; the Amazon twelfth order. Each river drains an area of land that is proportional to its size. This area is known by several different terms: drainage basin, river basin, or catchment (‘watershed’ is also used in American English, but this word means the drainage divide between two adjacent basins in British English). In the same way that a river network is made up of a hierarchy of low-order rivers nested within higher-order rivers, their drainage basins also fit together to form a nested hierarchy. In other words, smaller units are repeating elements nested within larger units. All of these units are linked by flows of water, sediment, and energy. Recognizing rivers as being made up of a series of units that are arranged hierarchically provides a potent framework in which to study the patterns and processes associated with rivers. […] processes operating at the upper levels of the hierarchy exert considerable influence over features lower down in the hierarchy, but not the other way around. […] Generally, the larger the spatial scale, the slower the processes and rates of change.”
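
The ordering rule described above, in which the order only increases when two channels of the same rank merge, is known as Strahler ordering and is easy to express in code. A minimal sketch on a small invented network, with each junction represented by the list of channels flowing into it:

```python
# Strahler stream order: a channel with no tributaries is order 1; when two
# channels of equal order meet, the order increases by one; otherwise the
# downstream channel takes the highest order among its tributaries.

def strahler(tributaries):
    if not tributaries:                 # headwater stream
        return 1
    orders = sorted((strahler(t) for t in tributaries), reverse=True)
    if len(orders) >= 2 and orders[0] == orders[1]:
        return orders[0] + 1
    return orders[0]

# A small hypothetical network written as nested lists of tributaries:
# four first-order headwaters combine pairwise into two second-order streams,
# which then meet at a junction that also receives one more headwater.
network = [
    [[], []],          # second-order stream (two first-order headwaters)
    [[], []],          # another second-order stream
    [],                # a first-order stream joining at the same junction
]

print(strahler(network))   # -> 3: the extra first-order stream does not raise the order
```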

The stuff above incidentally – and curiously – links very closely with the material covered in Holland’s book on complexity, which I finished just the day before I started reading this one. That book has a lot more stuff about things like nested hierarchies and that ‘potent framework’ mentioned above, and how to go about analyzing such things. (I found that book hard to blog – at least at first, which is why I’m right now covering this book instead; but I do hope to get to it later, it was quite interesting).

“Measuring the length of a river is more complicated than it sounds. […] Disagreements about the true source of many rivers have been a continuous feature of [the] history of exploration. […] most rivers typically have many tributaries and hence numerous sources. […] But it gets more confusing. Some rivers do not have a mouth. […] Some rivers have more than one channel. […] Yet another important part of measuring the length of a river is the scale at which it is measured. Fundamentally, the length of a river varies with the map scale because different amounts of detail are generalized at different scales.”

“Two particularly important properties of river flow are velocity and discharge – the volume of water moving past a point over some interval of time […]. A continuous record of discharge plotted against time is called a hydrograph which, depending on the time frame chosen, may give a detailed depiction of a flood event over a few days, or the discharge pattern over a year or more. […] River flow is dependent upon many different factors, including the area and shape of the drainage basin. If all else is equal, larger basins experience larger flows. A river draining a circular basin tends to have a peak in flow because water from all its tributaries arrives at more or less the same time as compared to a river draining a long, narrow basin in which water arrives from tributaries in a more staggered manner. The surface conditions in a basin are also important. Vegetation, for example, intercepts rainfall and hence slows down its movement into rivers. Climate is a particularly significant determinant of river flow. […] All the rivers with the greatest flows are almost entirely located in the humid tropics, where rainfall is abundant throughout the year. […] Rivers in the humid tropics experience relatively constant flows throughout the year, but perennial rivers in more seasonal climates exhibit marked seasonality in flow. […] Some rivers are large enough to flow through more than one climate region. Some desert rivers, for instance, are perennial because they receive most of their flow from high rainfall areas outside the desert. These are known as ‘exotic’ rivers. The Nile is an example […]. These rivers lose large amounts of water – by evaporation and infiltration into soils – while flowing through the desert, but their volumes are such that they maintain their continuity and reach the sea. By contrast, many exotic desert rivers do not flow into the sea but deliver their water to interior basins.”
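
Discharge, as defined here, is the volume of water passing a cross-section per unit time, and in practice it is often estimated as cross-sectional area times mean velocity. A minimal sketch with invented measurements, including reading the peak off a toy hydrograph:

```python
# Discharge Q = cross-sectional area * mean velocity.
width_m = 12.0
mean_depth_m = 1.5
mean_velocity_m_s = 0.8

q_m3_s = width_m * mean_depth_m * mean_velocity_m_s
print(f"Discharge: {q_m3_s:.1f} m^3/s")   # 14.4 m^3/s for these numbers

# A hydrograph is simply discharge plotted against time; the peak marks the flood crest.
hydrograph = {  # day -> discharge in m^3/s (an invented storm response)
    1: 14, 2: 16, 3: 55, 4: 120, 5: 85, 6: 40, 7: 22, 8: 16,
}
peak_day = max(hydrograph, key=hydrograph.get)
print(f"Peak discharge of {hydrograph[peak_day]} m^3/s on day {peak_day}")
```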

…and in rare cases, so much water is contributed to the interior basin that that basin’s actually categorized as a ‘sea’. However humans tend to mess such things up. Amu Darya and Syr Darya used to flow into the Aral Sea, until Soviet planners decided they shouldn’t do that anymore. Goodbye Aral Sea – hello Aralkum Desert!

“An important measure of the way a river system moulds its landscape is the ‘drainage density’. This is the sum of the channel length divided by the total area drained, which reflects the spacing of channels. Hence, drainage density expresses the degree to which a river dissects the landscape, effectively controlling the texture of relief. Numerous studies have shown that drainage density has a great range in different regions, depending on conditions of climate, vegetation, and geology particularly. […] Rivers shape the Earth’s continental landscapes in three main ways: by the erosion, transport, and deposition of sediments. These three processes have been used to recognize a simple three-part classification of individual rivers and river networks according to the dominant process in each of three areas: source, transfer, and depositional zones. The first zone consists of the river’s upper reaches, the area from which most of the water and sediment are derived. This is where most of the river’s erosion occurs, and this eroded material is transported through the second zone to be deposited in the third zone. These three zones are idealized because some sediment is eroded, stored, and transported in each of them, but within each zone one process is dominant.”
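
Drainage density, as defined above, is a one-line calculation once the channels in a basin have been mapped. A minimal sketch with invented figures:

```python
# Drainage density = total channel length / drainage basin area.
channel_lengths_km = [4.2, 3.1, 7.8, 2.5, 10.4, 6.0]   # mapped channel segments
basin_area_km2 = 18.0

drainage_density = sum(channel_lengths_km) / basin_area_km2
print(f"Drainage density: {drainage_density:.2f} km of channel per km^2")
# ~1.9 km/km^2 here; basins with erodible rocks and sparse vegetation tend to score higher.
```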

“The flow of water carries […] sediment in three ways: dissolved material […] moves in solution; small particles are carried in suspension; and larger particles are transported along the stream bed by rolling, sliding, or a bouncing movement known as ‘saltation’. […] Globally, it is estimated that rivers transport around 15 billion tonnes of suspended material annually to the oceans, plus about another 4 billion tonnes of dissolved material. In its upper reaches, a river might flow across bedrock but further downstream this is much less likely. Alluvial rivers are flanked by a floodplain, the channel cut into material that the river itself has transported and deposited. The floodplain is a relatively flat area which is periodically inundated during periods of high flow […] When water spills out onto the floodplain, the velocity of flow decreases and sediment begins to settle, causing fresh deposits of alluvium on the floodplain. Certain patterns of alluvial river channels have been seen on every continent and are divided at the most basic level into straight, meandering, and braided. Straight channels are rare in nature […] The most common river channel pattern is a series of bends known as meanders […]. Meanders develop because erosion becomes concentrated on the outside of a bend and deposition on the inside. As these linked processes continue, the meander bend can become more emphasized, and a particularly sinuous meander may eventually be cut off at its narrow neck, leaving an oxbow lake as evidence of its former course. Alluvial meanders migrate, both down and across their floodplain […]. This lateral migration is an important process in the formation of floodplains. Braided rivers can be recognized by their numerous flows that split off and rejoin each other to give a braided appearance. These multiple intersecting flows are separated by small and often temporary islands of alluvium. Braided rivers typically carry abundant sediment and are found in areas with a fairly steep gradient, often near mountainous regions.”

“The meander cut-off creating an oxbow lake is one way in which a channel makes an abrupt change of course, a characteristic of some alluvial rivers that is generally referred to as ‘avulsion’. It is a natural process by which flow diverts out of an established channel into a new permanent course on the adjacent floodplain, a change in course that can present a major threat to human activities. Rapid, frequent, and often significant avulsions have typified many rivers on the Indo-Gangetic plains of South Asia. In India, the Kosi River has migrated about 100 kilometres westward in the last 200 years […] Why a river suddenly avulses is not understood completely, but earthquakes play a part on the Indo-Gangetic plains. […] Most rivers eventually flow into the sea or a lake, where they deposit sediment which builds up into a landform known as a delta. The name comes from the Greek letter delta, Δ, shaped like a triangle or fan, one of the classic shapes a delta can take. […] Material laid down at the end of a river can continue underwater far beyond the delta as a deep-sea fan.”

“The organisms found in fluvial ecosystems are commonly classified according to the methods they use to gather food and feed. ‘Shredders’ are organisms that consume small sections of leaves; ‘grazers’ and ‘scrapers’ consume algae from the surfaces of objects such as stones and large plants; ‘collectors’ feed on fine organic matter produced by the breakdown of other once-living things; and ‘predators’ eat other living creatures. The relative importance of these groups of creatures typically changes as one moves from the headwaters of a river to stretches further downstream […] small headwater streams are often shaded by overhanging vegetation which limits sunlight and photosynthesis but contributes organic matter by leaf fall. Shredders and collectors typically dominate in these stretches, but further downstream, where the river is wider and thus receives more sunlight and less leaf fall, the situation is quite different. […] There’s no doubting the numerous fundamental ways in which a river’s biology is dependent upon its physical setting, particularly in terms of climate, geology, and topography. Nevertheless, these relationships also work in reverse. The biological components of rivers also act to shape the physical environment, particularly at more local scales. Beavers provide a good illustration of the ways in which the physical structure of rivers can be changed profoundly by large mammals. […] rivers can act both as corridors for species dispersal but also as barriers to the dispersal of organisms.”

Links:

Drainage system (geomorphology).
Perennial stream.
Nilometer.
Mekong.
Riverscape.
Oxbow lake.
Channel River.
Long profile of a river.
Bengal fan.
River continuum concept.
Flood pulse concept.
Riparian zone.

January 11, 2018 Posted by | Books, Ecology, Geography, Geology

Plate Tectonics (II)

Some more observations and links below.

I may or may not add a third post about the book at a later point in time; there’s a lot of interesting stuff included in this book.

“Because of the thickness of the lithosphere, its bending causes […] a stretching of its upper surface. This stretching of the upper portion of the lithosphere manifests itself as earthquakes and normal faulting, the style of faulting that occurs when a region extends horizontally […]. Such earthquakes commonly occur after great earthquakes […] Having been bent down at the trench, the lithosphere […] slides beneath the overriding lithospheric plate. Fault plane solutions of shallow focus earthquakes […] provide the most direct evidence for this underthrusting. […] In great earthquakes, […] the deformation of the surface of the Earth that occurs during such earthquakes corroborates the evidence for underthrusting of the oceanic lithosphere beneath the landward side of the trench. The 1964 Alaskan earthquake provided the first clear example. […] Because the lithosphere is much colder than the asthenosphere, when a plate of lithosphere plunges into the asthenosphere at rates of tens to more than a hundred millimetres per year, it remains colder than the asthenosphere for tens of millions of years. In the asthenosphere, temperatures approach those at which some minerals in the rock can melt. Because seismic waves travel more slowly and attenuate (lose energy) more rapidly in hot, and especially in partially molten, rock than they do in colder rock, the asthenosphere is not only a zone of weakness, but also characterized by low speeds and high attenuation of seismic waves. […] many seismologists use the waves sent by earthquakes to study the Earth’s interior, with little regard for earthquakes themselves. The speeds at which these waves propagate and the rate at which the waves die out, or attenuate, have provided much of the data used to infer the Earth’s internal structure.”

S waves especially, but also P waves, lose much of their energy while passing through the asthenosphere. The lithosphere, however, transmits P and S waves with only modest loss of energy. This difference is apparent in the extent to which small earthquakes can be felt. In regions like the western United States or in Greece and Italy, the lithosphere is thin, and the asthenosphere reaches up to shallow depths. As a result earthquakes, especially small ones, are felt over relatively small areas. By contrast, in the eastern United States or in Eastern Europe, small earthquakes can be felt at large distances. […] Deep earthquakes occur several hundred kilometres west of Japan, but they are felt with greater intensity and can be more destructive in eastern than western Japan […]. This observation, of course, puzzled Japanese seismologists when they first discovered deep focus earthquakes; usually people close to the epicentre (the point directly over the earthquake) feel stronger shaking than people farther from it. […] Tokuji Utsu […] explained this greater intensity of shaking along the more distant, eastern side of the islands than on the closer, western side by appealing to a window of low attenuation parallel to the earthquake zone and plunging through the asthenosphere beneath Japan and the Sea of Japan to its west. Paths to eastern Japan travelled efficiently through that window, the subducted slab of lithosphere, whereas those to western Japan passed through the asthenosphere and were attenuated strongly.”
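
The contrast between efficient transmission through the cold slab and strong damping in the asthenosphere can be illustrated with the standard seismic attenuation relation, in which wave amplitude decays as exp(-pi * f * t / Q), where Q is the quality factor of the material. The formula is standard seismology, but the numbers below are invented and only meant to show why a low-Q path (asthenosphere) damps shaking so much more than a high-Q path (cold slab):

```python
import math

def remaining_amplitude(frequency_hz, travel_time_s, Q):
    """Fraction of the initial amplitude left after travelling for
    travel_time_s through material with quality factor Q."""
    return math.exp(-math.pi * frequency_hz * travel_time_s / Q)

f = 1.0      # Hz, a typical frequency for felt shaking
t = 60.0     # s, travel time through the upper mantle (illustrative only)

for label, Q in [("cold slab / lithosphere (high Q)", 1000.0),
                 ("asthenosphere (low Q)", 100.0)]:
    print(f"{label}: {remaining_amplitude(f, t, Q):.3f} of the amplitude survives")
```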

“Shallow earthquakes occur because stress on a fault surface exceeds the resistance to slip that friction imposes. When two objects are forced to slide past one another, and friction opposes the force that pushes one past the other, the frictional resistance can be increased by pressing the two objects together more forcefully. Many of us experience this when we put sandbags in the trunks […] of our cars in winter to give the tyres greater traction on slippery roads. The same applies to faults in the Earth’s crust. As the pressure increases with increasing depth in the Earth, frictional resistance to slip on faults should increase. For depths greater than a few tens of kilometres, the high pressure should press the two sides of a fault together so tightly that slip cannot occur. Thus, in theory, deep-focus earthquakes ought not to occur.”
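
The sandbag analogy can be turned into a rough calculation: Coulomb friction says the shear stress needed to make a fault slip grows in proportion to the normal stress clamping it shut, and in the simplest case the normal stress grows with depth roughly as rock density times gravity times depth. A minimal sketch with simplified, invented numbers (real faults also involve pore-fluid pressures and other complications not discussed here):

```python
# Frictional resistance on a fault: tau_required ~ mu * normal stress,
# with normal stress approximated here by the lithostatic pressure rho * g * z.

mu = 0.6                 # a typical laboratory friction coefficient for rock
rho = 2800.0             # crustal rock density, kg/m^3
g = 9.81                 # m/s^2

for depth_km in [5, 20, 100, 400]:
    normal_stress_pa = rho * g * depth_km * 1000.0
    tau_required_mpa = mu * normal_stress_pa / 1e6
    print(f"{depth_km:>3} km depth: ~{tau_required_mpa:,.0f} MPa of shear stress needed for slip")
# The required stress quickly becomes enormous, which is why, in theory,
# ordinary frictional earthquakes should not occur at great depth.
```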

“In general, rock […] is brittle at low temperatures but becomes soft and flows at high temperature. The intermediate- and deep-focus earthquakes occur within the lithosphere, where at a given depth, the temperature is atypically low. […] the existence of intermediate- or deep-focus earthquakes is usually cited as evidence for atypically cold material at asthenospheric depths. Most such earthquakes, therefore, occur in oceanic lithosphere that has been subducted within the last 10–20 million years, sufficiently recently that it has not heated up enough to become soft and weak […]. The inference that the intermediate- and deep-focus earthquakes occur within the lithosphere and not along its top edge remains poorly appreciated among Earth scientists. […] the fault plane solutions suggest that the state of stress in the downgoing slab is what one would expect if the slab deformed like a board, or slab of wood. Accordingly, we infer that the earthquakes occurring within the downgoing slab of lithosphere result from stress within the slab, not from movement of the slab past the surrounding asthenosphere. Because the lithosphere is much stronger than the surrounding asthenosphere, it can support much higher stresses than the asthenosphere can. […] observations are consistent with a cold, heavy slab sinking into the asthenosphere and being pulled downward by gravity acting on it, but then encountering resistance at depths of 500–700 km despite the pull of gravity acting on the excess mass of the slab. Where both intermediate and deep-focus earthquakes occur, a gap, or a minimum, in earthquake activity near a depth of 300 km marks the transition between the upper part of the slab stretched by gravity pulling it down and the lower part where the weight of the slab above it compresses it. In the transition region between them, there would be negligible stress and, therefore, no or few earthquakes.”

“Volcanoes occur where rock melts, and where that molten rock can rise to the surface. […] For essentially all minerals […] melting temperatures […] depend on the extent to which the minerals have been contaminated by impurities. […] hydrogen, when it enters most crystal lattices, lowers the melting temperature of the mineral. Hydrogen is most obviously present in water (H2O), but is hardly a major constituent of the oxygen-, silicon-, magnesium-, and iron-rich mantle. The top of the downgoing slab of lithosphere includes fractured crust and sediment deposited atop it. Oceanic crust has been stewing in seawater for tens of millions of years, so that its cracks have become full either of liquid water or of minerals to which water molecules have become loosely bound. […] the downgoing slab acts like a caravan of camels carrying water downward into an upper mantle desert. […] The downgoing slab of lithosphere carries water in cracks in oceanic crust and in the interstices among sediment grains, and when released to the mantle above it, hydrogen dissolved in crystal lattices lowers the melting temperature of that rock enough that some of it melts. Many of the world’s great volcanoes […] begin as small amounts of melt above the subducted slabs of lithosphere.”

“… (in most regions) plates of lithosphere behave as rigid, and therefore undeformable, objects. The high strength of intact lithosphere, stronger than either the asthenosphere below it or the material along the boundaries of plates, allows the lithospheric plates to move with respect to one another without deforming (much). […] The essence of ‘plate tectonics’ is that vast regions move with respect to one another as (nearly) rigid objects. […] Dan McKenzie of Cambridge University, one of the scientists to present the idea of rigid plates, often argued that plate tectonics was easy to accept because the kinematics, the description of relative movements of plates, could be separated from the dynamics, the system of forces that causes plates to move with respect to one another in the directions and at the speeds that they do. Making such a separation is impossible for the flow of most fluids, […] whose movement cannot be predicted without an understanding of the forces acting on separate parcels of fluid. In part because of its simplicity, plate tectonics passed from being a hypothesis to an accepted theory in a short time.”

“[F]or plates that move over the surface of a sphere, all relative motion can be described simply as a rotation about an axis that passes through the centre of the sphere. The Earth itself obviously rotates around an axis through the North and South Poles. Similarly, the relative displacement of two plates with respect to one another can be described as a rotation of one plate with respect to the other about an axis, or ‘pole’, of rotation […] if we know how two plates, for example Eurasia and Africa, move with respect to a third plate, like North America, we can calculate how those two plates (Eurasia and Africa) move with respect to each other. A rotation about an axis in the Arctic Ocean describes the movement of the Africa plate, with respect to the North America plate […]. Combining the relative motion of Africa with respect to North America with the relative motion of North America with respect to Eurasia allows us to calculate that the African continent moves toward Eurasia by a rotation about an axis that lies west of northern Africa. […] By combining the known relative motion of pairs of plates […] we can calculate how fast plates converge with respect to one another and in what direction.”
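
The bookkeeping behind "combining the known relative motion of pairs of plates" is usually done with angular velocity ("Euler") vectors, which simply add: Africa relative to Eurasia equals Africa relative to North America plus North America relative to Eurasia. A minimal Python sketch; the pole positions and rotation rates below are made up for illustration and are not the published values:

import numpy as np

def euler_vector(pole_lat_deg, pole_lon_deg, rate_deg_per_myr):
    """Turn a rotation pole (latitude, longitude) and rate into a Cartesian angular velocity vector."""
    lat, lon = np.radians(pole_lat_deg), np.radians(pole_lon_deg)
    direction = np.array([np.cos(lat) * np.cos(lon),
                          np.cos(lat) * np.sin(lon),
                          np.sin(lat)])
    return rate_deg_per_myr * direction

def pole_of(omega):
    """Recover pole latitude, longitude, and rotation rate from an angular velocity vector."""
    rate = np.linalg.norm(omega)
    lat = np.degrees(np.arcsin(omega[2] / rate))
    lon = np.degrees(np.arctan2(omega[1], omega[0]))
    return lat, lon, rate

# Hypothetical Euler vectors, in degrees per million years (illustrative numbers only).
w_africa_rel_namerica = euler_vector(79.0, 40.0, 0.24)
w_namerica_rel_eurasia = euler_vector(62.0, 136.0, 0.21)

# Angular velocities add, so the Africa-Eurasia motion is just the vector sum.
w_africa_rel_eurasia = w_africa_rel_namerica + w_namerica_rel_eurasia
print(pole_of(w_africa_rel_eurasia))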

“[W]e can measure how plates move with respect to one another using Global Positioning System (GPS) measurements of points on nearly all of the plates. Such measurements show that speeds of relative motion between some pairs of plates have changed a little bit since 2 million years ago, but in general, the GPS measurements corroborate the inferences drawn both from rates of seafloor spreading determined using magnetic anomalies and from directions of relative plate motion determined using orientations of transform faults and fault plane solutions of earthquakes. […] Among tests of plate tectonics, none is more convincing than the GPS measurements […] numerous predictions of rates or directions of present-day plate motions and of large displacements of huge terrains have been confirmed many times over. […] When, more than 45 years ago, plate tectonics was proposed to describe relative motions of vast terrains, most saw it as an approximation that worked well, but that surely was imperfect. […] plate tectonics is imperfect, but GPS measurements show that the plates are surprisingly rigid. […] Long histories of plate motion can be reduced to relatively few numbers, the latitudes and longitudes of the poles of rotation, and the rates or amounts of rotation about those axes.”
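
Reducing a plate's motion to a pole and a rate also makes the GPS comparison straightforward: the predicted velocity of any site on a rigid plate is the cross product of the plate's angular velocity with the site's position vector. Another rough sketch, again with invented numbers:

import numpy as np

EARTH_RADIUS_KM = 6371.0

def site_velocity_mm_per_yr(pole_lat, pole_lon, rate_deg_per_myr, site_lat, site_lon):
    """Predicted velocity (mm/yr, Cartesian components) of a site on a rigid plate: v = omega x r."""
    plat, plon = np.radians(pole_lat), np.radians(pole_lon)
    omega = (np.radians(rate_deg_per_myr) / 1e6) * np.array([np.cos(plat) * np.cos(plon),
                                                             np.cos(plat) * np.sin(plon),
                                                             np.sin(plat)])   # radians per year
    slat, slon = np.radians(site_lat), np.radians(site_lon)
    r = EARTH_RADIUS_KM * np.array([np.cos(slat) * np.cos(slon),
                                    np.cos(slat) * np.sin(slon),
                                    np.sin(slat)])                            # kilometres
    return np.cross(omega, r) * 1e6   # km/yr -> mm/yr

# Invented pole and site; typical plate speeds come out at a few tens of millimetres per year.
print(site_velocity_mm_per_yr(50.0, -75.0, 0.2, site_lat=40.0, site_lon=-105.0))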

Links:

Wadati–Benioff zone.
Translation (geometry).
Rotation (mathematics).
Poles of rotation.
Rotation around a fixed axis.
Euler’s rotation theorem.
Isochron dating.
Tanya Atwater.

December 25, 2017 Posted by | Books, Chemistry, Geology, Physics

Plate Tectonics (I)

Some quotes and links related to the first half of the book’s coverage:

“The fundamental principle of plate tectonics is that large expanses of terrain, thousands of kilometres in lateral extent, behave as thin (~100 km in thickness) rigid layers that move with respect to one another across the surface of the Earth. The word ‘plate’ carries the image of a thin rigid object, and ‘tectonics’ is a geological term that refers to large-scale processes that alter the structure of the Earth’s crust. […] The Earth is stratified with a light crust overlying denser mantle. Just as the height of icebergs depends on the mass of ice below the surface of the ocean, so […] the light crust of the Earth floats on the denser mantle, standing high where crust is thick, and lying low, deep below the ocean, where it should be thin. Wegener recognized that oceans are mostly deep, and he surmised correctly that the crust beneath oceans must be much thinner than that beneath continents.”

“From a measurement of the direction in which a hunk of rock is magnetized, one can infer where the North Pole lay relative to that rock at the time it was magnetized. It follows that if continents had drifted, rock of different ages on the continents should be magnetized in different directions, not just from each other but more importantly in directions inconsistent with the present-day magnetic field. […] In the 1950s, several studies using palaeomagnetism were carried out to test whether continents had drifted, and most such tests passed. […] Palaeomagnetic results not only supported the idea of continental drift, but they also offered constraints on timing and rates of drift […] in the 1960s, the idea of continental drift saw a renaissance, but subsumed within a broader framework, that of plate tectonics.”

“If one wants to study deformation of the Earth’s crust in action, the quick and dirty way is to study earthquakes. […] Until the 1960s, studying fracture zones in action was virtually impossible. Nearly all of them lie far offshore beneath the deep ocean. Then, in response to a treaty in the early 1960s disallowing nuclear explosions in the ocean, atmosphere, or space, but permitting underground testing of them, the Department of Defense of the USA put in place the World-Wide Standardized Seismograph Network, a global network with more than 100 seismograph stations. […] Suddenly remote earthquakes, not only those on fracture zones but also those elsewhere throughout the globe […], became amenable to study. […] the study of earthquakes played a crucial role in the recognition and acceptance of plate tectonics. […] By the early 1970s, the basic elements of plate tectonics had permeated essentially all of Earth science. In addition to the obvious consequences, like confirmation of continental drift, emphasis shifted from determining the history of the planet to understanding the processes that had shaped it.”

“[M]ost solids are strongest when cold, and become weaker when warmed. Temperature increases into the Earth. As a result the strongest rock lies close to the surface, and rock weakens with depth. Moreover, olivine, the dominant mineral in the upper mantle, seems to be stronger than most crustal minerals; so, in many regions, the strongest rock is at the top of the mantle. Beneath oceans where crust is thin, ~7 km, the lithosphere is mostly mantle […]. Because temperature increases gradually with depth, the boundary between strong lithosphere and underlying weak asthenosphere is not sharp. Nevertheless, because the difference in strength is large, subdividing the outer part of the Earth into two layers facilitates an understanding of plate tectonics. Reduced to its essence, the basic idea that we call plate tectonics is simply a description of the relative movements of separate plates of lithosphere as these plates move over the underlying weaker, hotter asthenosphere. […] Most of the Earth’s surface lies on one of the ~20 major plates, whose sizes vary from huge, like the Pacific plate, to small, like the Caribbean plate […], or even smaller. Narrow belts of earthquakes mark the boundaries of separate plates […]. The key to plate tectonics lies in these plates behaving as largely rigid objects, and therefore undergoing only negligible deformation.”

“Although the amounts and types of sediment deposited on the ocean bottom vary from place to place, the composition and structure of the oceanic crust is remarkably uniform beneath the deep ocean. The structure of oceanic lithosphere depends primarily on its age […] As the lithosphere ages, it thickens, and the rate at which it cools decreases. […] the rate that heat is lost through the seafloor decreases with the age of lithosphere. […] As the lithospheric plate loses heat and cools, like most solids, it contracts. This contraction manifests itself as a deepening of the ocean. […] Seafloor spreading in the Pacific occurs two to five times faster than it does in the Atlantic. […] when seafloor spreading is slow, new basalt rising to the surface at the ridge axis can freeze onto the older seafloor on its edges before rising as high as it would otherwise. As a result, a valley […] forms. Where spreading is faster, however, as in the Pacific, new basalt rises to a shallower depth and no such valley forms. […] The spreading apart of two plates along a mid-ocean ridge system occurs by divergence of the two plates along straight segments of mid-ocean ridge that are truncated at fracture zones. Thus, the plate boundary at a mid-ocean ridge has a zig-zag shape, with spreading centres making zigs and transform faults making zags along it.”
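
The cooling, contraction, and deepening with age described here is often summarized by a rule of thumb from the half-space cooling model: seafloor depth increases roughly as the square root of its age. A small sketch using commonly quoted approximate coefficients (about 2,500 m at the ridge plus about 350 m per square root of the age in millions of years); these numbers are illustrative and not taken from the book:

import math

def seafloor_depth_m(age_myr, ridge_depth_m=2500.0, coeff=350.0):
    """Rough seafloor depth from lithospheric age, using the half-space cooling rule of thumb."""
    return ridge_depth_m + coeff * math.sqrt(age_myr)

for age in (0, 2, 10, 50, 80):
    print(f"{age:3d} Myr old seafloor: about {seafloor_depth_m(age):.0f} m deep")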

“Geochemists are confident that the volume of water in the oceans has not changed by a measurable amount for hundreds of millions, if not billions, of years. Yet, the geologic record shows several periods when continents were flooded to a much greater extent than today. For example, 90 million years ago, the Midwestern United States and neighbouring Canada were flooded. One could have sailed due north from the Gulf of Mexico to Hudson’s Bay and into the Arctic. […] If sea level has risen and fallen, while the volume of water has remained unchanged, then the volume of the basin holding the water must have changed. The rates at which seafloor is created at the different spreading centres today are not the same, and such rates at all spreading centres have varied over geologic time. Imagine a time in the past when seafloor at some of the spreading centres was created at a faster rate than it is today. If this relatively high rate had continued for a few tens of millions of years, there would have been more young ocean floor than today, and correspondingly less old floor […]. Thus, the average depth of the ocean would be shallower than it is today, and the volume of the ocean basin would be smaller than today. Water should have spilled onto the continent. Most now attribute the high sea level in the Cretaceous Period (145 to 65 million years ago) to unusually rapid creation of seafloor, and hence to a state when seafloor was younger on average than today.”
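
The same rule of thumb as above makes the sea-level argument concrete. If seafloor ages were spread uniformly between zero and some maximum age (a crude assumption, made only for illustration), the mean depth depends on the square root of that maximum age, so a period of faster spreading, which keeps the average seafloor younger, gives a shallower basin and pushes water onto the continents:

import math

def mean_depth_m(max_age_myr, ridge_depth_m=2500.0, coeff=350.0):
    """Mean depth for ages uniform on [0, max_age]: the average of sqrt(age) is (2/3) * sqrt(max_age)."""
    return ridge_depth_m + coeff * (2.0 / 3.0) * math.sqrt(max_age_myr)

print(mean_depth_m(160))   # seafloor recycled slowly, older on average -> deeper basin
print(mean_depth_m(80))    # seafloor recycled twice as fast -> mean seafloor several hundred metres shallower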

Wilson focused on the two major differences between ordinary strike-slip faults, or transcurrent faults, and transform faults on fracture zones. (1) If transcurrent faulting occurred, slip should occur along the entire fracture zone; but for transform faulting, only the portion between the segments of spreading centres would be active. (2) The sense of slip on the faults would be opposite for these two cases: if right-lateral for one, then left-lateral for the other […] The occurrences of earthquakes along a fault provide the most convincing evidence that the fault is active. Slip on most faults and most deformation of the Earth’s crust to make mountains occurs not slowly and steadily on human timescales, but abruptly during earthquakes. Accordingly, a map of earthquakes is, to a first approximation, a map of active faults on which regions, such as lithospheric plates, slide past one another […] When an earthquake occurs, slip on a fault takes place. One side of the fault slides past the other so that slip is parallel to the plane of the fault; the opening of cracks, into which cows or people can fall, is rare and atypical. Repeated studies of earthquakes and the surface ruptures accompanying them show that the slip during an earthquake is representative of the sense of cumulative displacement that has occurred on faults over geologic timescales. Thus earthquakes give us snapshots of processes that occur over thousands to millions of years. Two aspects of a fault define it: the orientation of the fault plane, which can be vertical or gently dipping, and the sense of slip: the direction that one side of the fault moves with respect to the other […] To a first approximation, boundaries between plates are single faults. Thus, if we can determine both the orientation of the fault plane and the sense of slip on it during an earthquake, we can infer the direction that one plate moves with respect to the other. Often during earthquakes, but not always, slip on the fault offsets the Earth’s surface, and we can directly observe the sense of motion […]. In the deep ocean, however, this cannot be done as a general practice, and we must rely on more indirect methods.”

“Because seafloor spreading creates new seafloor at the mid-ocean ridges, the newly formed crust must find accommodation: either the Earth must expand or lithosphere must be destroyed at the same rate that it is created. […] for the Earth not to expand (or contract), the sum total of new lithosphere made at spreading centres must be matched by the removal, by subduction, of an equal amount of lithosphere at island arc structures. […] Abundant evidence […] shows that subduction of lithosphere does occur. […] The subduction process […] differs fundamentally from that of seafloor spreading, in that subduction is asymmetric. Whereas two plates are created and grow larger at equal rates at spreading centers (mid-ocean ridges and rises), the areal extent of only one plate decreases at a subduction zone. The reason for this asymmetry derives from the marked dependence of the strength of rock on temperature. […] At spreading centres, hot weak rock deforms easily as it rises at mid-ocean ridges, cools, and then becomes attached to one of the two diverging plates. At subduction zones, however, cold and therefore strong lithosphere resists bending and contortion. […] two plates of lithosphere, each some 100 km thick, cannot simply approach one another, turn sharp corners […], and dive steeply into the asthenosphere. Much less energy is dissipated if one plate undergoes modest flexure and then slides at a gentle angle beneath the other, than if both plates were to undergo pronounced bending and then plunged together steeply into the asthenosphere. Nature takes the easier, energetically more efficient, process. […] Before it plunges beneath the island arc, the subducting plate of lithosphere bends down gently to cause a deep-sea trench […] As the plate bends down to form the trench, the lithosphere seaward of the trench is flexed upwards slightly. […] the outer topographic rise […] will be lower but wider for thicker lithosphere.”

Plate tectonics.
Andrija Mohorovičić. Mohorovičić discontinuity.
Archimedes’ principle.
Isostasy.
Harold Jeffreys. Keith Edward Bullen. Edward A. Irving. Harry Hammond Hess. Henry William Menard. Maurice Ewing.
Paleomagnetism.
Lithosphere. Asthenosphere.
Mid-ocean ridge. Bathymetry. Mid-Atlantic Ridge. East Pacific Rise. Seafloor spreading.
Fracture zone. Strike-slip fault. San Andreas Fault.
World-Wide Standardized Seismograph Network (USGS).
Vine–Matthews–Morley hypothesis.
Geomagnetic reversal. Proton precession magnetometer. Jaramillo (normal) event.
Potassium–argon dating.
Deep Sea Drilling Project.
“McKenzie Equations” for magma migration.
Transform fault.
Mendocino Fracture Zone.
Subduction.
P-wave. S-wave. Fault-plane solution. Compressional waves.
Triple junction.

December 23, 2017 Posted by | Books, Geology, Physics

Civil engineering (II)

Some more quotes and links:

“Major earthquakes occur every year in different parts of the world. The various continents that make up the surface of the Earth are moving slowly relative to each other. The rough boundaries between the tectonic plates try to resist this relative motion but eventually the energy stored in the interface (or geological fault) becomes too big to resist and slip occurs, releasing the energy. The energy travels as a wave through the crust of the Earth, shaking the ground as it passes. The speed at which the wave travels depends on the stiffness and density of the material through which it is passing. Topographic effects may concentrate the energy of the shaking. Mexico City sits on the bed of a former lake, surrounded by hills. Once the energy reaches this bowl-like location it becomes trapped and causes much more damage than would be experienced if the city were sitting on a flat plain without the surrounding mountains. Designing a building to withstand earthquake shaking is possible, provided we have some idea about the nature and magnitude and geological origin of the loadings. […] Heavy mud or tile roofs on flimsy timber walls are a disaster – the mass of the roof sways from side to side as it picks up energy from the shaking ground and, in collapsing, flattens the occupants. Provision of some diagonal bracing to prevent the structure from deforming when it is shaken can be straightforward. Shops like to have open spaces for ground floor display areas. There are often post-earthquake pictures of buildings which have lost a storey as this unbraced ground floor structure collapsed. […] Earthquakes in developing countries tend to attract particular coverage. The extent of the damage caused is high because the enforcement of design codes (if they exist) is poor. […] The majority of the damage in Haiti was the result of poor construction and the total lack of any building code requirements.”

“[A]n aircraft is a large structure, and the structural design is subject to the same laws of equilibrium and material behaviour as any structure which is destined never to leave the ground. […] The A380 is an enormous structure, some 25 m high, 73 m long and with a wingspan of about 80 m […]. For comparison, St Paul’s Cathedral in London is 73 m wide at the transept; and the top of the inner dome, visible from inside the cathedral, is about 65 m above the floor of the nave. […] The rules of structural mechanics that govern the design of aircraft structures are no different from those that govern the design of structures that are intended to remain on the ground. In the mid 20th century many aircraft and civil structural engineers would not have recognized any serious intellectual boundary between their activities. The aerodynamic design of an aircraft ensures smooth flow of air over the structure to reduce resistance and provide lift. Bridges in exposed places are not in need of lift but can benefit from reduced resistance to air flow resulting from the use of continuous hollow sections (box girders) rather than trusses to form the deck. The stresses can also flow more smoothly within the box, and the steel be used more efficiently. Testing of potential box girder shapes in wind tunnels helps to check the influence of the presence of the ground or water not far below the deck on the character of the wind flow.”

“Engineering is concerned with finding solutions to problems. The initial problems faced by the engineer relate to the identification of the set of functional criteria which truly govern the design and which will be generated by the client or the promoter of the project. […] The more forcefully the criteria are stated the less freedom the design engineer will have in the search for an appropriate solution. Design is the translation of ideas into achievement. […] The designer starts with (or has access to) a mental store of solutions previously adopted for related problems and then seeks to compromise as necessary in order to find the optimum solution satisfying multiple criteria. The design process will often involve iteration of concept and technology and the investigation of radically different solutions and may also require consultation with the client concerning the possibility of modification of some of the imposed functional criteria if the problem has been too tightly defined. […] The term technology is being used here to represent that knowledge and those techniques which will be necessary in order to realize the concept; recognizing that a concept which has no appreciation of the technologies available for construction may require the development of new technologies in order that it may be realized. Civil engineering design continues through the realization of the project by the constructor or contractor. […] The process of design extends to the eventual assessment of the performance of the completed project as perceived by the client or user (who may not have been party to the original problem definition).”

“An arch or vault curved only in one direction transmits loads by means of forces developed within the thickness of the structure which then push outwards at the boundaries. A shell structure is a generalization of such a vault which is curved in more than one direction. An intact eggshell is very stiff under any loading applied orthogonally (at right angles) to the shell. If the eggshell is broken it becomes very flexible and to stiffen it again restraint is required along the free edge to replace the missing shell. The techniques of prestressing concrete permit the creation of very exciting and daring shell structures with extraordinarily small thickness but the curvatures of the shells and the shapes of the edges dictate the support requirements.”

“In the 19th century it was quicker to travel from Rome to Ancona by sea round the southern tip of the boot of Italy (a distance of at least 2000 km) than to travel overland, a distance of some 200 km as the crow flies. Land-based means of transport require infrastructure that must be planned and constructed and then maintained. Even today water transport is used on a large scale for bulky or heavy items for which speed is not necessary.”

“High speed rail works well (economically) in areas such as Europe and Japan where there is adequate infrastructure in the destination cities for access to and from the railway stations. In parts of the world – such as much of the USA – where the distances are much greater, population densities lower, railway networks much less developed, and local transport in cities much less coordinated (and the motor car has dominated for far longer) the economic case for high speed rail is harder to make. The most successful schemes for high speed rail have involved construction of new routes with dedicated track for the high speed trains with straighter alignments, smoother curves, and gentler gradients than conventional railways – and consequent reduced scope for delays resulting from mixing of high speed and low speed trains on the same track”.

“The Millennium Bridge is a suspension bridge with a very low sag-to-span ratio which lends itself very readily to sideways oscillation. There are plenty of rather bouncy suspension footbridges around the world but the modes of vibration are predominantly those in the plane of the bridge, involving vertical movements. Modes which involve lateral movement and twisting of the deck are always there but being out-of-plane may be overlooked. The more flexible the bridge in any mode of deformation, the more movement there is when people walk across. There is a tendency for people to vary their pace to match the movements of the bridge. Such an involuntary feedback mechanism is guaranteed to lead to resonance of the structure and continued build-up of movements. There will usually be some structural limitation on the magnitude of the oscillations – as the geometry of the bridge changes so the natural frequency will change subtly – but it can still be a bit alarming for the user. […] The Millennium Bridge was stabilized (retrofitted) by the addition of restraining members and additional damping mechanisms to prevent growth of oscillation and to move the natural frequency of this mode of vibration away from the likely frequencies of human footfall. The revised design […] ensured that dynamic response would be acceptable for crowd loading up to two people per square metre. At this density walking becomes difficult so it is seen as a conservative criterion.”
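
What happened on the Millennium Bridge is, at root, resonance: a lightly damped structure driven near one of its natural frequencies responds with a far larger amplitude than the same force applied at other frequencies. A generic sketch of the steady-state response of a damped oscillator under sinusoidal forcing; the numbers are arbitrary and are not a model of the actual bridge:

import math

def steady_state_amplitude(force_per_mass, natural_freq_hz, drive_freq_hz, damping_ratio):
    """Steady-state displacement amplitude of a damped oscillator under sinusoidal forcing."""
    wn = 2 * math.pi * natural_freq_hz
    w = 2 * math.pi * drive_freq_hz
    return force_per_mass / math.sqrt((wn**2 - w**2)**2 + (2 * damping_ratio * wn * w)**2)

# Sideways footfall forcing is of the order of 1 Hz; sweep it past an assumed 1 Hz lateral mode.
for f_hz in (0.5, 0.8, 0.95, 1.0, 1.05, 1.2, 1.5):
    amp = steady_state_amplitude(1.0, 1.0, f_hz, damping_ratio=0.01)
    print(f"forcing at {f_hz:4.2f} Hz: relative amplitude {amp:6.2f}")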

“The development of appropriately safe systems requires that […] parallel control systems should be truly independent so that they are not likely to fail simultaneously. Robustness is thus about ensuring that safety can be maintained even when some elements of the system cease to operate. […] There is a human element in all systems, providing some overall control and an ability to react in critical circumstances. The human intervention is particularly important where all electronic or computer control systems are eliminated and the clock is ticking inexorably towards disaster. Although ultimately whenever a structural failure occurs there is some purely mechanical explanation – some element of the structure was overloaded because some mode of response had been overlooked – there is often a significant human factor which must be considered. We may think that we fully understand the mechanical operation, but may neglect to ensure that the human elements are properly controlled. A requirement for robustness implies both that the damage consequent on the removal of a single element of the structure or system should not be disproportionate (mechanical or structural robustness) but also that the project should not be jeopardized by human failure (organizational robustness). […] A successful civil engineering project is likely to have evident robustness in concept, technology, and realization. A concept which is unclear, a technology in its infancy, and components of realization which lacks coherence will all contribute to potential disaster.”

“Tunnelling inevitably requires removal of ground from the face with a tendency for the ground above and ahead of the tunnel to fall into the gap. The success of the tunnelling operation can be expressed in terms of the volume loss: the proportion of the volume of the tunnel which is unintentionally excavated causing settlement at the ground surface – the smaller this figure the better. […] How can failure of the tunnel be avoided? One route to assurance will be to perform numerical analysis of the tunnel construction process with close simulations of all the stages of excavation and loading of the new structure. Computer analyses are popular because they appear simple to perform, even in three dimensions. However, such analyses can be no more reliable than the models of soil behaviour on which they are based and on the way in which the rugged detail of construction is translated into numerical instructions. […] Whatever one’s confidence in the numerical analysis it will obviously not be a bad idea to observe the tunnel while it is being constructed. Obvious things to observe include tunnel convergence – the change in the cross-section of the tunnel in different directions – and movements at the ground surface and existing buildings over the tunnel. […] observation is not of itself sufficient unless there is some structured strategy for dealing with the observations. At Heathrow […] the data were not interpreted until after the failure had occurred. It was then clear that significant and undesirable movements had been occurring and could have been detected at least two months before the failure.”

“Fatigue is a term used to describe a failure which develops as a result of repeated loading – possibly over many thousands or millions of cycles. […] Fatigue cannot be avoided, and the rate of development of damage may not be easy to predict. It often requires careful techniques of inspection to identify the presence of incipient cracks which may eventually prove structurally devastating.”

“Some projects would clearly be regarded as failures – a dam bursts, a flood protection dyke is overtopped, a building or bridge collapses. In each case there is the possibility of a technical description of the processes leading to the failure – in the end the strength of the material in some location has been exceeded by the demands of the applied loads or the load carrying paths have been disrupted. But failure can also be financial or economic. Such failures are less evident: a project that costs considerably more than the original estimate has in some way failed to meet its expectations. A project that, once built, is quite unable to generate the revenue that was expected in order to justify the original capital outlay has also failed.”

1999 Jiji earthquake.
Taipei 101. Tuned mass damper.
Tacoma Narrows Bridge (1940). Brooklyn Bridge. Golden Gate Bridge.
Sydney Opera House. Jørn Utzon. Ove Arup. Christiani & Nielsen.
Bell Rock Lighthouse. Northern Lighthouse Board. Richard Henry Brunton.
Panama Canal. Culebra Cut. Gatun Lake. Panamax.
Great Western Railway.
Shinkansen. TGV.
Ronan Point.
New Austrian tunnelling method.
Crossrail.
Fukushima Daiichi nuclear disaster.
Turnkey project. Unit price contract.
Colin Buchanan.
Dongtan.

December 21, 2017 Posted by | Books, Economics, Engineering, Geology

Civil engineering (I)

I have included some quotes from the first half of the book below, and some links related to the book’s coverage:

“Today, the term ‘civil engineering’ distinguishes the engineering of the provision of infrastructure from […] many other branches of engineering that have come into existence. It thus has a somewhat narrower scope now than it had in the 18th and early 19th centuries. There is a tendency to define it by exclusion: civil engineering is not mechanical engineering, not electrical engineering, not aeronautical engineering, not chemical engineering… […] Civil engineering today is seen as encompassing much of the infrastructure of modern society provided it does not move – roads, buildings, dams, tunnels, drains, airports (but not aeroplanes or air traffic control), railways (but not railway engines or signalling), power stations (but not turbines). The fuzzy definition of civil engineering as the engineering of infrastructure […] should make us recognize that there are no precise boundaries and that any practising engineer is likely to have to communicate across whatever boundaries appear to have been created. […] The boundary with science is also fuzzy. Engineering is concerned with the solution of problems now, and cannot necessarily wait for the underlying science to catch up. […] All engineering is concerned with finding solutions to problems for which there is rarely a single answer. Presented with an appropriate ‘solution-neutral problem definition’ the engineer needs to find ways of applying existing or emergent technologies to the solution of the problem.”

“[T]he behaviour of the soil or other materials that make up the ground in its natural state is rather important to engineers. However, although it can be guessed from exploratory probings and from knowledge of the local geological history, the exact nature of the ground can never be discovered before construction begins. By contrast, road embankments are formed of carefully prepared soils; and water-retaining dams may also be constructed from selected soils and rocks – these can be seen as ‘designer soils’. […] Soils are formed of mineral particles packed together with surrounding voids – the particles can never pack perfectly. […] The voids around the soil particles are filled with either air or water or a mixture of the two. In northern climes the ground is saturated with water for much of the time. For deformation of the soil to occur, any change in volume must be accompanied by movement of water through and out of the voids. Clay particles are small, the surrounding voids are small, and movement of water through these voids is slow – the permeability is said to be low. If a new load, such as a bridge deck or a tall building, is to be constructed, the ground will want to react to the new loads. A clayey soil will be unable to react instantly because of the low permeability and, as a result, there will be delayed deformations as the water is squeezed out of the clay ground and the clay slowly consolidates. The consolidation of a thick clay layer may take centuries to approach completion.”
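
One classical way of seeing why a thick clay layer consolidates so slowly is Terzaghi's one-dimensional theory, in which the time to reach (say) 90 per cent consolidation grows with the square of the drainage path length. The coefficient of consolidation assumed below is a typical order-of-magnitude value for clay, not a figure from the book:

def t90_years(layer_thickness_m, c_v_m2_per_yr=1.0, drained_both_sides=True):
    """Time to roughly 90% consolidation in Terzaghi's 1-D theory: t90 = 0.848 * H^2 / c_v,
    where H is the longest drainage path (half the layer thickness if it drains top and bottom)."""
    drainage_path_m = layer_thickness_m / 2.0 if drained_both_sides else layer_thickness_m
    return 0.848 * drainage_path_m**2 / c_v_m2_per_yr

print(t90_years(2.0))    # a thin clay layer: about a year
print(t90_years(10.0))   # a 10 m layer: a couple of decades
print(t90_years(40.0))   # a thick layer: several centuries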

“Rock (or stone) is a good construction material. Evidently there are different types of rock with different strengths and different abilities to resist the decay that is encouraged by sunshine, moisture, and frost, but rocks are generally strong, dimensionally stable materials: they do not shrink or twist with time. We might measure the strength of a type of rock in terms of the height of a column of that rock that will just cause the lowest layer of the rock to crush: on such a scale sandstone would have a strength of about 2 kilometres, good limestone about 4 kilometres. A solid pyramid 150 m high uses quite a small proportion of this available strength. […] Iron has been used for several millennia for elements such as bars and chain links which might be used in conjunction with other structural materials, particularly stone. Stone is very strong when compressed, or pushed, but not so strong in tension: when it is pulled cracks may open up. The provision of iron links between adjacent stone blocks can help to provide some tensile strength. […] Cast iron can be formed into many different shapes and is resistant to rust but is brittle – when it breaks it loses all its strength very suddenly. Wrought iron, a mixture of iron with a low proportion of carbon, is more ductile – it can be stretched without losing all its strength – and can be beaten or rolled (wrought) into simple shapes. Steel is a mixture of iron with a higher proportion of carbon than wrought iron and with other elements […] which provide particular mechanical benefits. Mild steel has a remarkable ductility – a tolerance of being stretched – which results from its chemical composition and which allows it to be rolled into sheets or extruded into chosen shapes without losing its strength and stiffness. There are limits on the ratio of the quantities of carbon and other elements to that of the iron itself in order to maintain these desirable properties for the mixture. […] Steel is very strong and stiff in tension or pulling: steel wire and steel cables are obviously very well suited for hanging loads.”
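
The "height of column" measure of strength translates directly into a crushing stress via sigma = rho * g * h. A quick check, with an assumed rock density of about 2,500 kg/m3 (my assumption, not a figure from the book):

g = 9.81        # m/s^2
rho = 2500.0    # kg/m^3, assumed typical rock density

def crushing_stress_mpa(column_height_m):
    """Stress at the base of a rock column of the given height: sigma = rho * g * h."""
    return rho * g * column_height_m / 1e6

print(crushing_stress_mpa(2000))   # sandstone's ~2 km column: roughly 50 MPa
print(crushing_stress_mpa(4000))   # good limestone's ~4 km column: roughly 100 MPa
print(crushing_stress_mpa(150))    # a 150 m solid pyramid loads its base to only a few MPa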

“As concrete sets, the chemical reactions that turn a sloppy mixture of cement and water and stones into a rock-like solid produce a lot of heat. If a large volume of concrete is poured without any special precautions then, as it cools down, having solidified, it will shrink and crack. The Hoover Dam was built as a series of separate concrete columns of limited dimension through which pipes carrying cooling water were passed in order to control the temperature rise. […] Concrete is mixed as a heavy fluid with no strength until it starts to set. Embedding bars of a material such as steel, which is strong in tension, in the fluid concrete gives some tensile strength. Reinforced concrete is used today for huge amounts of construction throughout the world. When the amount of steel present in the concrete is substantial, additives are used to encourage the fresh concrete to flow through intricate spaces and form a good bond with the steel. For the steel to start to resist tensile loads it has to stretch a little; if the concrete around the steel also stretches it may crack. The concrete has little reliable tensile strength and is intended to protect the steel. The concrete can be used more efficiently if the steel reinforcement, in the form of cables or rods, is tensioned, either before the concrete has set or after the concrete has set but before it starts to carry its eventual live loads. The concrete is forced into compression by the stretched steel. […] Such prestressed concrete gives amazing possibilities for very slender and daring structures […] the concrete must be able to withstand the tension in the steel, whether or not the full working loads are being applied. For an arch bridge made from prestressed concrete, the prestress from the steel cables tries to lift up the concrete and reduce the span whereas the traffic loads on the bridge are trying to push it down and increase the span. The location and amount of the prestress has to be chosen to provide the optimum use of the available strength under all possible load combinations. The pressure vessels used to contain the central reactor of a nuclear power station provide a typical example of the application of prestressed concrete.”

“There are many civil engineering contributions required in the several elements of [a] power station […]. The electricity generation side of a nuclear power station is subject to exactly the same design constraints as any other power station. Pipework leading the steam and water through the plant has to be able to cope with severe temperature variations, rotating machinery requires foundations which not only have to be precisely aligned but also have to be able to tolerate the high frequency vibrations arising from the rotations. Residual small out-of-balance forces, transmitted to the foundation continuously over long periods, could degrade the stiffness of the ground. Every system has its resonant frequency at which applied cyclic loads will tend to be amplified, possibly uncontrollably, unless prevented by the damping properties of the foundation materials. Even if the rotating machinery is being operated well away from any resonant frequency under normal conditions, there will be start-up periods in which the frequency sweeps up from stationary, zero frequency, and so an undesirable resonance may be triggered on the way”.

“The material which we see so often on modern road surfaces, […] asphalt […], was introduced in the early 20th century. Binding together the surface layers of stones with bitumen or tar gave the running surface a better strength. Tar is a viscous material which deforms with time under load; ruts may form, particularly in hot weather. Special treatments can be used for the asphalt to reduce the surface noise made by tyres; porous asphalt can encourage drainage. On the other hand, a running surface that is more resistant to traffic loading can be provided with a concrete slab reinforced with a crisscross steel mesh to maintain its integrity between deliberately inserted construction joints, so that any cracking resulting from seasonal thermal contraction occurs at locations chosen by the engineer rather than randomly across the concrete slab. The initial costs of concrete road surfaces are higher than the asphalt alternatives but the full-life costs may be lower.”

“A good supply of fresh water is one essential element of civilized infrastructure; some control of the waste water from houses and industries is another. The two are, of course, not completely independent since one of the desirable requirements of a source of fresh water is that it should not have been contaminated with waste before it reaches its destination of consumption: hence the preference for long aqueducts or pipelines starting from natural springs, rather than taking water from rivers which were probably already contaminated by upstream conurbations. It is curious how often in history this lesson has had to be relearnt.”

“The object of controlled disposal is the same as for nuclear waste: to contain it and prevent any of the toxic constituents from finding their way into the food chain or into water supplies. Simply to remove everything that could possibly be contaminated and dump it to landfill seems the easy option, particularly if use can be made of abandoned quarries or other holes in the ground. But the quantities involved make this an unsustainable long-term proposition. Cities become surrounded with artificial hills of uncertain composition which are challenging to develop for industrial or residential purposes because decomposing waste often releases gases which may be combustible (and useful) or poisonous; because waste often contains toxic substances which have to be prevented from finding pathways to man either upwards to the air or sideways towards water supplies; because the properties of waste (whether or not decomposed or decomposing) are not easy to determine and probably not particularly desirable from an engineering point of view; and because developers much prefer greenfield sites to sites of uncertainty and contamination.”

“There are regularly more or less serious floods in different parts of the world. Some of these are simply the result of unusually high quantities of rainfall which overload the natural river channels, often exacerbated by changes in land use (such as the felling of areas of forest) which encourage more rapid runoff or impose a man-made canalization of the river (by building on flood plains into which the rising river would previously have been able to spill) […]. Some of the incidents are the result of unusual encroachments by the sea, a consequence of a combination of high tide and adverse wind and weather conditions. The potential for disastrous consequences is of course enhanced when both on-shore and off-shore circumstances combine. […] Folk memory for natural disasters tends to be quite short. If the interval between events is typically greater than, say, 5–10 years people may assume that such events are extraordinary and rare. They may suppose that building on the recently flooded plains will be safe for the foreseeable future.”

Links:

Civil engineering.
École Nationale des Ponts et Chaussées.
Institution of Civil Engineers.
Christopher Wren. John Smeaton. Thomas Telford. William Rankine.
Leaning Tower of Pisa.
Cruck. Trabeated system. Corbel. Voussoir. Flange. I-beam.
Hardwick Hall. Blackfriars Bridge. Forth Bridge. Sydney Harbour Bridge.
Gothic architecture.
Buckling.
Pozzolana. Concrete. Grout.
Gravity dam. Arch dam. Hoover Dam. Malpasset Dam.
Torness Nuclear Power Station.
Plastic. Carbon fiber reinforced polymer.
Roman roads. Via Appia.
Sanitation.
Aqueduct. Pont du Gard.
Charles Yelverton O’Connor. Goldfields Water Supply Scheme.
1854 Broad Street cholera outbreak. John Snow. Great Stink of 1858. Joseph Bazalgette.
Brent Spar.
Clywedog Reservoir.
Acqua alta.
North Sea flood of 1953. Hurricane Katrina.
Delta Works. Oosterscheldekering. Thames Barrier.
Groyne. Breakwater.

December 20, 2017 Posted by | Books, Economics, Engineering, Geology

Radioactivity

A few quotes from the book and some related links below. Here’s my very short goodreads review of the book.

Quotes:

“The main naturally occurring radionuclides of primordial origin are uranium-235, uranium-238, thorium-232, their decay products, and potassium-40. The average abundance of uranium, thorium, and potassium in the terrestrial crust is 2.6 parts per million, 10 parts per million, and 1% respectively. Uranium and thorium produce other radionuclides via neutron- and alpha-induced reactions, particularly deeply underground, where uranium and thorium have a high concentration. […] A weak source of natural radioactivity derives from nuclear reactions of primary and secondary cosmic rays with the atmosphere and the lithosphere, respectively. […] Accretion of extraterrestrial material, intensively exposed to cosmic rays in space, represents a minute contribution to the total inventory of radionuclides in the terrestrial environment. […] Natural radioactivity is [thus] mainly produced by uranium, thorium, and potassium. The total heat content of the Earth, which derives from this radioactivity, is 12.6 × 10^24 MJ (one megajoule = 1 million joules), with the crust’s heat content standing at 5.4 × 10^21 MJ. For comparison, this is significantly more than the 6.4 × 10^13 MJ globally consumed for electricity generation during 2011. This energy is dissipated, either gradually or abruptly, towards the external layers of the planet, but only a small fraction can be utilized. The amount of energy available depends on the Earth’s geological dynamics, which regulates the transfer of heat to the surface of our planet. The total power dissipated by the Earth is 42 TW (one TW = 1 trillion watts): 8 TW from the crust, 32.3 TW from the mantle, 1.7 TW from the core. This amount of power is small compared to the 174,000 TW arriving to the Earth from the Sun.”
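
The figures quoted above are easy to put side by side. Using only the numbers from the passage, the crust's stored radiogenic heat is tens of millions of times one year's global electricity generation, and the Earth's 42 TW of internal heat flow is roughly four thousand times smaller than the solar input:

earth_heat_mj = 12.6e24        # total radiogenic heat content of the Earth, MJ (from the passage)
crust_heat_mj = 5.4e21         # the crust's share, MJ
electricity_2011_mj = 6.4e13   # global electricity generation in 2011, MJ

print(crust_heat_mj / electricity_2011_mj)   # crustal heat vs one year of electricity: ~8e7
print(crust_heat_mj / earth_heat_mj)         # the crust holds only about 0.04% of the total

internal_power_tw = 42.0
solar_input_tw = 174_000.0
print(solar_input_tw / internal_power_tw)    # solar input is roughly 4,000 times the internal heat flow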

“Charged particles such as protons, beta and alpha particles, or heavier ions that bombard human tissue dissipate their energy locally, interacting with the atoms via the electromagnetic force. This interaction ejects electrons from the atoms, creating a track of electron–ion pairs, or ionization track. The energy that ions lose per unit path, as they move through matter, increases with the square of their charge and decreases linearly with their energy […] The energy deposited in the tissues and organs of your body by ionizing radiation is defined absorbed dose and is measured in gray. The dose of one gray corresponds to the energy of one joule deposited in one kilogram of tissue. The biological damage wrought by a given amount of energy deposited depends on the kind of ionizing radiation involved. The equivalent dose, measured in sievert, is the product of the dose and a factor w related to the effective damage induced into the living matter by the deposit of energy by specific rays or particles. For X-rays, gamma rays, and beta particles, a gray corresponds to a sievert; for neutrons, a dose of one gray corresponds to an equivalent dose of 5 to 20 sievert, and the factor w is equal to 5–20 (depending on the neutron energy). For protons and alpha particles, w is equal to 5 and 20, respectively. There is also another weighting factor taking into account the radiosensitivity of different organs and tissues of the body, to evaluate the so-called effective dose. Sometimes the dose is still quoted in rem, the old unit, with 100 rem corresponding to one sievert.”
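
The conversion described here is just a multiplication by the radiation weighting factor w. A small sketch using the factors quoted in the passage (for neutrons the factor spans 5–20 with energy; a mid-range value is assumed below):

# Radiation weighting factors, as quoted in the passage.
W_FACTOR = {
    "x-ray": 1, "gamma": 1, "beta": 1,
    "proton": 5,
    "alpha": 20,
    "neutron": 10,   # representative mid-range value; the real factor depends on neutron energy
}

def equivalent_dose_sievert(absorbed_dose_gray, radiation):
    """Equivalent dose (sievert) = absorbed dose (gray) multiplied by the weighting factor w."""
    return absorbed_dose_gray * W_FACTOR[radiation]

print(equivalent_dose_sievert(0.001, "gamma"))   # 1 mGy of gamma rays -> 0.001 Sv (1 mSv)
print(equivalent_dose_sievert(0.001, "alpha"))   # 1 mGy of alpha particles -> 0.02 Sv (20 mSv)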

“Neutrons emitted during fission reactions have a relatively high velocity. When still in Rome, Fermi had discovered that fast neutrons needed to be slowed down to increase the probability of their reaction with uranium. The fission reaction occurs with uranium-235. Uranium-238, the most common isotope of the element, merely absorbs the slow neutrons. Neutrons slow down when they are scattered by nuclei with a similar mass. The process is analogous to the interaction between two billiard balls in a head-on collision, in which the incoming ball stops and transfers all its kinetic energy to the second one. ‘Moderators’, such as graphite and water, can be used to slow neutrons down. […] When Fermi calculated whether a chain reaction could be sustained in a homogeneous mixture of uranium and graphite, he got a negative answer. That was because most neutrons produced by the fission of uranium-235 were absorbed by uranium-238 before inducing further fissions. The right approach, as suggested by Szilárd, was to use separated blocks of uranium and graphite. Fast neutrons produced by the splitting of uranium-235 in the uranium block would slow down, in the graphite block, and then produce fission again in the next uranium block. […] A minimum mass – the critical mass – is required to sustain the chain reaction; furthermore, the material must have a certain geometry. The fissile nuclides, capable of sustaining a chain reaction of nuclear fission with low-energy neutrons, are uranium-235 […], uranium-233, and plutonium-239. The last two don’t occur in nature but can be produced artificially by irradiating with neutrons thorium-232 and uranium-238, respectively – via a reaction called neutron capture. Uranium-238 (99.27%) is fissionable, but not fissile. In a nuclear weapon, the chain reaction occurs very rapidly, releasing the energy in a burst.”

“The basic components of nuclear power reactors, fuel, moderator, and control rods, are the same as in the first system built by Fermi, but the design of today’s reactors includes additional components such as a pressure vessel, containing the reactor core and the moderator, a containment vessel, and redundant and diverse safety systems. Recent technological advances in material developments, electronics, and information technology have further improved their reliability and performance. […] The moderator to slow down fast neutrons is sometimes still the graphite used by Fermi, but water, including ‘heavy water’ – in which the water molecule has a deuterium atom instead of a hydrogen atom – is more widely used. Control rods contain a neutron-absorbing material, such as boron or a combination of indium, silver, and cadmium. To remove the heat generated in the reactor core, a coolant – either a liquid or a gas – is circulating through the reactor core, transferring the heat to a heat exchanger or directly to a turbine. Water can be used as both coolant and moderator. In the case of boiling water reactors (BWRs), the steam is produced in the pressure vessel. In the case of pressurized water reactors (PWRs), the steam generator, which is the secondary side of the heat exchanger, uses the heat produced by the nuclear reactor to make steam for the turbines. The containment vessel is a one-metre-thick concrete and steel structure that shields the reactor.”

“Nuclear energy contributed 2,518 TWh of the world’s electricity in 2011, about 14% of the global supply. As of February 2012, there are 435 nuclear power plants operating in 31 countries worldwide, corresponding to a total installed capacity of 368,267 MW (electrical). There are 63 power plants under construction in 13 countries, with a capacity of 61,032 MW (electrical).”
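
Two of those numbers imply a third: 2,518 TWh generated against 368,267 MW of installed capacity corresponds to an average capacity factor of roughly 80 per cent, treating the February 2012 capacity as representative of 2011:

generation_twh = 2518.0
installed_capacity_mw = 368_267.0
hours_per_year = 8760

max_possible_twh = installed_capacity_mw * hours_per_year / 1e6   # MW x hours -> TWh
print(max_possible_twh)                     # about 3,226 TWh if every plant ran flat out all year
print(generation_twh / max_possible_twh)    # capacity factor of roughly 0.78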

“Since the first nuclear fusion, more than 60 years ago, many have argued that we need at least 30 years to develop a working fusion reactor, and this figure has stayed the same throughout those years.”

“[I]onizing radiation is […] used to improve many properties of food and other agricultural products. For example, gamma rays and electron beams are used to sterilize seeds, flour, and spices. They can also inhibit sprouting and destroy pathogenic bacteria in meat and fish, increasing the shelf life of food. […] More than 60 countries allow the irradiation of more than 50 kinds of foodstuffs, with 500,000 tons of food irradiated every year. About 200 cobalt-60 sources and more than 10 electron accelerators are dedicated to food irradiation worldwide. […] With the help of radiation, breeders can increase genetic diversity to make the selection process faster. The spontaneous mutation rate (number of mutations per gene, for each generation) is in the range 10^-8–10^-5. Radiation can increase this mutation rate to 10^-5–10^-2. […] Long-lived cosmogenic radionuclides provide unique methods to evaluate the ‘age’ of groundwaters, defined as the mean subsurface residence time after the isolation of the water from the atmosphere. […] Scientists can date groundwater more than a million years old, through chlorine-36, produced in the atmosphere by cosmic-ray reactions with argon.”
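
Dating groundwater with a cosmogenic radionuclide rests on the ordinary decay law, t = ln(N0/N) / lambda. A sketch for chlorine-36, whose half-life is roughly 301,000 years; the half-life is a general reference value rather than a figure from the book, and the input ratio is invented for illustration:

import math

CL36_HALF_LIFE_YR = 301_000.0   # approximate half-life of chlorine-36

def groundwater_age_yr(initial_ratio, measured_ratio, half_life_yr=CL36_HALF_LIFE_YR):
    """Age from radioactive decay: N = N0 * exp(-lambda * t), so t = ln(N0 / N) / lambda."""
    decay_constant = math.log(2) / half_life_yr
    return math.log(initial_ratio / measured_ratio) / decay_constant

# Invented example: the chlorine-36 to chlorine ratio has fallen to a tenth of its recharge value.
print(groundwater_age_yr(initial_ratio=1.0, measured_ratio=0.1))   # roughly a million years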

“Radionuclide imaging was developed in the 1950s using special systems to detect the emitted gamma rays. The gamma-ray detectors, called gamma cameras, use flat crystal planes, coupled to photomultiplier tubes, which send the digitized signals to a computer for image reconstruction. Images show the distribution of the radioactive tracer in the organs and tissues of interest. This method is based on the introduction of low-level radioactive chemicals into the body. […] More than 100 diagnostic tests based on radiopharmaceuticals are used to examine bones and organs such as lungs, intestines, thyroids, kidneys, the liver, and gallbladder. They exploit the fact that our organs preferentially absorb different chemical compounds. […] Many radiopharmaceuticals are based on technetium-99m (an excited state of technetium-99 – the ‘m’ stands for ‘metastable’ […]). This radionuclide is used for the imaging and functional examination of the heart, brain, thyroid, liver, and other organs. Technetium-99m is extracted from molybdenum-99, which has a much longer half-life and is therefore more transportable. It is used in 80% of the procedures, amounting to about 40,000 per day, carried out in nuclear medicine. Other radiopharmaceuticals include short-lived gamma-emitters such as cobalt-57, cobalt-58, gallium-67, indium-111, iodine-123, and thallium-201. […] Methods routinely used in medicine, such as X-ray radiography and CAT, are increasingly used in industrial applications, particularly in non-destructive testing of containers, pipes, and walls, to locate defects in welds and other critical parts of the structure.”

“Today, cancer treatment with radiation is generally based on the use of external radiation beams that can target the tumour in the body. Cancer cells are particularly sensitive to damage by ionizing radiation and their growth can be controlled or, in some cases, stopped. High-energy X-rays produced by a linear accelerator […] are used in most cancer therapy centres, replacing the gamma rays produced from cobalt-60. The LINAC produces photons of variable energy bombarding a target with a beam of electrons accelerated by microwaves. The beam of photons can be modified to conform to the shape of the tumour, which is irradiated from different angles. The main problem with X-rays and gamma rays is that the dose they deposit in the human tissue decreases exponentially with depth. A considerable fraction of the dose is delivered to the surrounding tissues before the radiation hits the tumour, increasing the risk of secondary tumours. Hence, deep-seated tumours must be bombarded from many directions to receive the right dose, while minimizing the unwanted dose to the healthy tissues. […] The problem of delivering the needed dose to a deep tumour with high precision can be solved using collimated beams of high-energy ions, such as protons and carbon. […] Contrary to X-rays and gamma rays, all ions of a given energy have a certain range, delivering most of the dose after they have slowed down, just before stopping. The ion energy can be tuned to deliver most of the dose to the tumour, minimizing the impact on healthy tissues. The ion beam, which does not broaden during the penetration, can follow the shape of the tumour with millimetre precision. Ions with higher atomic number, such as carbon, have a stronger biological effect on the tumour cells, so the dose can be reduced. Ion therapy facilities are [however] still very expensive – in the range of hundreds of millions of pounds – and difficult to operate.”
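
The photon half of that contrast can be illustrated with a toy depth-dose curve: if the dose falls off roughly exponentially, a beam aimed at a deep tumour deposits more energy in the tissue in front of the tumour than in the tumour itself, which is why many beam directions are needed. The attenuation length below is an assumed, purely illustrative number, not a clinical value:

import math

ATTENUATION_LENGTH_CM = 15.0   # assumed e-folding depth for a high-energy photon beam (illustrative)

def relative_photon_dose(depth_cm):
    """Crude exponential depth-dose model for a photon beam (ignores the build-up region near the skin)."""
    return math.exp(-depth_cm / ATTENUATION_LENGTH_CM)

print(relative_photon_dose(5.0))    # healthy tissue at 5 cm receives ~72% of the entrance dose
print(relative_photon_dose(20.0))   # a tumour at 20 cm receives only ~26% of the entrance dose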

“About 50 million years ago, a global cooling trend took our planet from the tropical conditions at the beginning of the Tertiary to the ice ages of the Quaternary, when the Arctic ice cap developed. The temperature decrease was accompanied by a decrease in atmospheric CO2 from 2,000 to 300 parts per million. The cooling was probably caused by a reduced greenhouse effect and also by changes in ocean circulation due to plate tectonics. The drop in temperature was not constant as there were some brief periods of sudden warming. Ocean deep-water temperatures dropped from 12°C, 50 million years ago, to 6°C, 30 million years ago, according to archives in deep-sea sediments (today, deep-sea waters are about 2°C). […] During the last 2 million years, the mean duration of the glacial periods was about 26,000 years, while that of the warm periods – interglacials – was about 27,000 years. Between 2.6 and 1.1 million years ago, a full cycle of glacial advance and retreat lasted about 41,000 years. During the past 1.2 million years, this cycle has lasted 100,000 years. Stable and radioactive isotopes play a crucial role in the reconstruction of the climatic history of our planet”.

Links:

CUORE (Cryogenic Underground Observatory for Rare Events).
Borexino.
Lawrence Livermore National Laboratory.
Marie Curie. Pierre Curie. Henri Becquerel. Wilhelm Röntgen. Joseph Thomson. Ernest Rutherford. Hans Geiger. Ernest Marsden. Niels Bohr.
Ruhmkorff coil.
Electroscope.
Pitchblende (uraninite).
Mache.
Polonium. Becquerel.
Radium.
Alpha decay. Beta decay. Gamma radiation.
Plum pudding model.
Spinthariscope.
Robert Boyle. John Dalton. Dmitri Mendeleev. Frederick Soddy. James Chadwick. Enrico Fermi. Lise Meitner. Otto Frisch.
Periodic Table.
Exponential decay. Decay chain.
Positron.
Particle accelerator. Cockcroft-Walton generator. Van de Graaff generator.
Barn (unit).
Nuclear fission.
Manhattan Project.
Chernobyl disaster. Fukushima Daiichi nuclear disaster.
Electron volt.
Thermoluminescent dosimeter.
Silicon diode detector.
Enhanced geothermal system.
Chicago Pile Number 1. Experimental Breeder Reactor 1. Obninsk Nuclear Power Plant.
Natural nuclear fission reactor.
Gas-cooled reactor.
Generation I reactors. Generation II reactor. Generation III reactor. Generation IV reactor.
Nuclear fuel cycle.
Accelerator-driven subcritical reactor.
Thorium-based nuclear power.
Small, sealed, transportable, autonomous reactor.
Fusion power. P-p (proton-proton) chain reaction. CNO cycle. Tokamak. ITER (International Thermonuclear Experimental Reactor).
Sterile insect technique.
Phase-contrast X-ray imaging. Computed tomography (CT). SPECT (Single-photon emission computed tomography). PET (positron emission tomography).
Boron neutron capture therapy.
Radiocarbon dating. Bomb pulse.
Radioactive tracer.
Radithor. The Radiendocrinator.
Radioisotope heater unit. Radioisotope thermoelectric generator. Seebeck effect.
Accelerator mass spectrometry.
Atomic bombings of Hiroshima and Nagasaki. Treaty on the Non-Proliferation of Nuclear Weapons. IAEA.
Nuclear terrorism.
Swiss light source. Synchrotron.
Chronology of the universe. Stellar evolution. S-process. R-process. Red giant. Supernova. White dwarf.
Victor Hess. Domenico Pacini. Cosmic ray.
Allende meteorite.
Age of the Earth. History of Earth. Geomagnetic reversal. Uranium-lead dating. Clair Cameron Patterson.
Glacials and interglacials.
Taung child. Lucy. Ardi. Ardipithecus kadabba. Acheulean tools. Java Man. Ötzi.
Argon-argon dating. Fission track dating.

November 28, 2017 Posted by | Archaeology, Astronomy, Biology, Books, Cancer/oncology, Chemistry, Engineering, Geology, History, Medicine, Physics | Leave a comment

Isotopes

A decent book. Below some quotes and links.

“[A]ll mass spectrometers have three essential components — an ion source, a mass filter, and some sort of detector […] Mass spectrometers need to achieve high vacuum to allow the uninterrupted transmission of ions through the instrument. However, even high-vacuum systems contain residual gas molecules which can impede the passage of ions. Even at very high vacuum there will still be residual gas molecules in the vacuum system that present potential obstacles to the ion beam. Ions that collide with residual gas molecules lose energy and will appear at the detector at slightly lower mass than expected. This tailing to lower mass is minimized by improving the vacuum as much as possible, but it cannot be avoided entirely. The ability to resolve a small isotope peak adjacent to a large peak is called ‘abundance sensitivity’. A single magnetic sector TIMS has abundance sensitivity of about 1 ppm per mass unit at uranium masses. So, at mass 234, 1 ion in 1,000,000 will actually be 235U not 234U, and this will limit our ability to quantify the rare 234U isotope. […] AMS [accelerator mass spectrometry] instruments use very high voltages to achieve high abundance sensitivity. […] As I write this chapter, the human population of the world has recently exceeded seven billion. […] one carbon atom in 10^12 is mass 14. So, detecting 14C is far more difficult than identifying a single person on Earth, and somewhat comparable to identifying an individual leaf in the Amazon rain forest. Such is the power of isotope ratio mass spectrometry.”
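
The arithmetic behind those comparisons is simple enough to reproduce. A quick back-of-the-envelope Python sketch (mine, not the book's; the ion count is an arbitrary illustrative number):

abundance_sensitivity = 1e-6      # 1 ppm per mass unit, single-sector TIMS at uranium masses

ions_at_mass_235 = 1_000_000      # illustrative number of 235U ions reaching the detector
tail_at_mass_234 = ions_at_mass_235 * abundance_sensitivity
print(f"Of {ions_at_mass_235:,} 235U ions, about {tail_at_mass_234:.0f} will show up at mass 234.")

c14_abundance = 1e-12             # roughly one carbon atom in 10^12 is 14C
world_population = 7e9
print(f"Spotting a 14C atom is ~{(1 / c14_abundance) / world_population:.0f} times harder "
      "than picking out one specific person from the world population.")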

“14C is produced in the Earth’s atmosphere by the interaction between nitrogen and cosmic ray neutrons that releases a free proton turning 14N into 14C in a process that we call an ‘n-p’ reaction […] Because the process is driven by cosmic ray bombardment, we call 14C a ‘cosmogenic’ isotope. The half-life of 14C is about 5,000 years, so we know that all the 14C on Earth is either cosmogenic or has been created by mankind through nuclear reactors and bombs — no ‘primordial’ 14C remains because any that originally existed has long since decayed. 14C is not the only cosmogenic isotope; 16O in the atmosphere interacts with cosmic radiation to produce the isotope 10Be (beryllium). […] The process by which a high energy cosmic ray particle removes several nucleons is called ‘spallation’. 10Be production from 16O is not restricted to the atmosphere but also occurs when cosmic rays impact rock surfaces. […] when cosmic rays hit a rock surface they don’t bounce off but penetrate the top 2 or 3 metres (m) — the actual ‘attenuation’ depth will vary for particles of different energy. Most of the Earth’s crust is made of silicate minerals based on bonds between oxygen and silicon. So, the same spallation process that produces 10Be in the atmosphere also occurs in rock surfaces. […] If we know the flux of cosmic rays impacting a surface, the rate of production of the cosmogenic isotopes with depth below the rock surface, and the rate of radioactive decay, it should be possible to convert the number of cosmogenic atoms into an exposure age. […] Rocks on Earth which are shielded from much of the cosmic radiation have much lower levels of isotopes like 10Be than have meteorites which, before they arrive on Earth, are exposed to the full force of cosmic radiation. […] polar scientists have used cores drilled through ice sheets in Antarctica and Greenland to compare 10Be at different depths and thereby reconstruct 10Be production through time. The 14C and 10Be records are closely correlated indicating the common response to changes in the cosmic ray flux.”
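
Because 14C decays with a known half-life, a measured 14C/12C ratio converts directly into an age. A minimal Python sketch of that conversion (mine, not the book's), using the standard half-life of 5,730 years (the quote above rounds this to about 5,000) and ignoring calibration against records such as tree rings:

import math

HALF_LIFE_C14 = 5730.0                     # years
DECAY_CONST = math.log(2) / HALF_LIFE_C14  # per year

def radiocarbon_age(fraction_of_modern):
    """Age in years of a sample whose 14C/12C ratio is the given fraction
    of the modern atmospheric ratio (uncalibrated)."""
    return -math.log(fraction_of_modern) / DECAY_CONST

for fraction in (0.9, 0.5, 0.25, 0.01):
    print(f"{fraction:5.0%} of modern 14C -> roughly {radiocarbon_age(fraction):,.0f} years old")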

“[O]nce we have credible cosmogenic isotope production rates, […] there are two classes of applications, which we can call ‘exposure’ and ‘burial’ methodologies. Exposure studies simply measure the accumulation of the cosmogenic nuclide. Such studies are simplest when the cosmogenic nuclide is a stable isotope like 3He or 21Ne. These will just accumulate continuously as the sample is exposed to cosmic radiation. Slightly more complicated are cosmogenic isotopes that are radioactive […]. These isotopes accumulate through exposure but will also be destroyed by radioactive decay. Eventually, the isotopes achieve the condition known as ‘secular equilibrium’ where production and decay are balanced and no chronological information can be extracted. Secular equilibrium is achieved after three to four half-lives […] Imagine a boulder that has been transported from its place of origin to another place within a glacier — what we call a glacial erratic. While the boulder was deeply covered in ice, it would not have been exposed to cosmic radiation. Its cosmogenic isotopes will only have accumulated since the ice melted. So a cosmogenic isotope exposure age tells us the date at which the glacier retreated, and, by examining multiple erratics from different locations along the course of the glacier, allows us to construct a retreat history for the de-glaciation. […] Burial methodologies using cosmogenic isotopes work in situations where a rock was previously exposed to cosmic rays but is now located in a situation where it is shielded.”
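
For a radioactive cosmogenic nuclide, the concentration in a freshly exposed surface builds up as N(t) = (P/lambda)(1 - exp(-lambda*t)), where P is the production rate and lambda the decay constant, levelling off at the secular-equilibrium value P/lambda after a few half-lives. A rough Python sketch of an exposure-age calculation along these lines (the production rate and the assumed exposure time are made-up illustrative numbers; real applications also have to account for erosion, shielding, and production-rate calibration):

import math

HALF_LIFE_BE10 = 1.39e6                 # years
lam = math.log(2) / HALF_LIFE_BE10      # decay constant, per year
P = 5.0                                 # illustrative production rate, atoms per gram per year

def concentration(t_years):
    """10Be atoms per gram after t years of exposure, starting from zero."""
    return (P / lam) * (1.0 - math.exp(-lam * t_years))

def exposure_age(n_measured):
    """Invert the build-up equation (no erosion, no prior exposure, no burial)."""
    return -math.log(1.0 - n_measured * lam / P) / lam

n = concentration(50_000)               # pretend the boulder has been exposed for 50,000 years
print(f"Concentration after 50,000 yr of exposure: {n:,.0f} atoms/g")
print(f"Exposure age recovered from that concentration: {exposure_age(n):,.0f} yr")
print(f"Secular-equilibrium ceiling (P/lambda): {P / lam:,.0f} atoms/g, "
      "approached after three to four half-lives")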

“Cosmogenic isotopes are also being used extensively to recreate the seismic histories of tectonically active areas. Earthquakes occur when geological faults give way and rock masses move. A major earthquake is likely to expose new rock to the Earth’s surface. If the field geologist can identify rocks in a fault zone that (s)he is confident were brought to the surface in an earthquake, then a cosmogenic isotope exposure age would date the fault — providing, of course, that subsequent erosion can be ruled out or quantified. Precarious rocks are rock outcrops that could reasonably be expected to topple if subjected to a significant earthquake. Dating the exposed surface of precarious rocks with cosmogenic isotopes can reveal the amount of time that has elapsed since the last earthquake of a magnitude that would have toppled the rock. Constructing records of seismic history is not merely of academic interest; some of the world’s seismically active areas are also highly populated and developed.”

“One aspect of the natural decay series that acts in favour of the preservation of accurate age information is the fact that most of the intermediate isotopes are short-lived. For example, in both the U series the radon (Rn) isotopes, which might be expected to diffuse readily out of a mineral, have half-lives of only seconds or days, too short to allow significant losses. Some decay series isotopes though do have significantly long half-lives which offer the potential to be geochronometers in their own right. […] These techniques depend on the tendency of natural decay series to evolve towards a state of ‘secular equilibrium’ in which the activity of all species in the decay series is equal. […] at secular equilibrium, isotopes with long half-lives (i.e. small decay constants) will have large numbers of atoms whereas short-lived isotopes (high decay constants) will only constitute a relatively small number of atoms. Since decay constants vary by several orders of magnitude, so will the numbers of atoms of each isotope in the equilibrium decay series. […] Geochronological applications of natural decay series depend upon some process disrupting the natural decay series to introduce either a deficiency or an excess of an isotope in the series. The decay series will then gradually return to secular equilibrium and the geochronometer relies on measuring the extent to which equilibrium has been approached.”
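
The claim that at secular equilibrium the activities are equal while the numbers of atoms scale with the half-lives follows directly from A = lambda*N: if every member of a chain has the same activity A, then N = A/lambda for each member. A small Python sketch using a few members of the 238U series (half-lives are standard reference values; the common activity of one decay per second is arbitrary):

import math

SECONDS_PER_YEAR = 3.156e7

half_lives_years = {           # a few members of the 238U decay series
    "U-238":  4.47e9,
    "U-234":  2.45e5,
    "Th-230": 7.54e4,
    "Ra-226": 1.60e3,
    "Rn-222": 3.82 / 365.25,   # 3.82 days
}

activity = 1.0                 # decays per second, the same for every member at equilibrium

print("Atoms per member at secular equilibrium (activity = 1 decay/s):")
for name, t_half in half_lives_years.items():
    lam = math.log(2) / (t_half * SECONDS_PER_YEAR)  # decay constant per second
    print(f"  {name:>7}: {activity / lam:.2e} atoms")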

“The ‘ring of fire’ volcanoes around the margin of the Pacific Ocean are a manifestation of subduction in which the oldest parts of the Pacific Ocean crust are being returned to the mantle below. The oldest parts of the Pacific Ocean crust are about 150 million years (Ma) old, with anything older having already disappeared into the mantle via subduction zones. The Atlantic Ocean doesn’t have a ring of fire because it is a relatively young ocean which started to form about 60 Ma ago, and its oldest rocks are not yet ready to form subduction zones. Thus, while continental crust persists for billions of years, oceanic crust is a relatively transient (in terms of geological time) phenomenon at the Earth’s surface.”

“Mantle rocks typically contain minerals such as olivine, pyroxene, spinel, and garnet. Unlike say ice, which melts to form water, mixtures of minerals do not melt in the proportions in which they occur in the rock. Rather, they undergo partial melting in which some minerals […] melt preferentially leaving a solid residue enriched in refractory minerals […]. We know this from experimentally melting mantle-like rocks in the laboratory, but also because the basalts produced by melting of the mantle are closer in composition to Ca-rich (clino-) pyroxene than to the olivine-rich rocks that dominate the solid pieces (or xenoliths) of mantle that are sometimes transferred to the surface by certain types of volcanic eruptions. […] Thirty years ago geologists fiercely debated whether the mantle was homogeneous or heterogeneous; mantle isotope geochemistry hasn’t yet elucidated all the details but it has put to rest the initial conundrum; Earth’s mantle is compositionally heterogeneous.”

Links:

Frederick Soddy.
Rutherford–Bohr model.
Isotopes of hydrogen.
Radioactive decay. Types of decay. Alpha decay. Beta decay. Electron capture decay. Branching fraction. Gamma radiation. Spontaneous fission.
Promethium.
Lanthanides.
Radiocarbon dating.
Hessel de Vries.
Dendrochronology.
Suess effect.
Bomb pulse.
Delta notation (non-wiki link).
Isotopic fractionation.
C3 carbon fixation. C4 carbon fixation.
Nitrogen-15 tracing.
Isotopes of strontium. Strontium isotope analysis.
Ötzi.
Mass spectrometry.
Geiger counter.
Townsend avalanche.
Gas proportional counter.
Scintillation detector.
Liquid scintillation spectrometry. Photomultiplier tube.
Dynode.
Thallium-doped sodium iodide detectors. Semiconductor-based detectors.
Isotope separation (-enrichment).
Doubly labeled water.
Urea breath test.
Radiation oncology.
Brachytherapy.
Targeted radionuclide therapy.
Iodine-131.
MIBG scan.
Single-photon emission computed tomography.
Positron emission tomography.
Inductively coupled plasma (ICP) mass spectrometry.
Secondary ion mass spectrometry.
Faraday cup (-detector).
δ18O.
Stadials and interstadials. Oxygen isotope ratio cycle.
Insolation.
Gain and phase model.
Milankovitch cycles.
Perihelion and aphelion. Precession.
Equilibrium Clumped-Isotope Effects in Doubly Substituted Isotopologues of Ethane (non-wiki link).
Age of the Earth.
Uranium–lead dating.
Geochronology.
Cretaceous–Paleogene boundary.
Argon-argon dating.
Nuclear chain reaction. Critical mass.
Fukushima Daiichi nuclear disaster.
Natural nuclear fission reactor.
Continental crust. Oceanic crust. Basalt.
Core–mantle boundary.
Chondrite.
Ocean Island Basalt.
Isochron dating.

November 23, 2017 Posted by | Biology, Books, Botany, Chemistry, Geology, Medicine, Physics | Leave a comment

Earth System Science

I decided not to rate this book. Some parts are great, some parts I didn’t think were very good.

I’ve added some quotes and links below. First a few links (I’ve tried not to add links here which I’ve also included in the quotes below):

Carbon cycle.
Origin of water on Earth.
Gaia hypothesis.
Albedo (climate and weather).
Snowball Earth.
Carbonate–silicate cycle.
Carbonate compensation depth.
Isotope fractionation.
CLAW hypothesis.
Mass-independent fractionation.
δ13C.
Great Oxygenation Event.
Acritarch.
Grypania.
Neoproterozoic.
Rodinia.
Sturtian glaciation.
Marinoan glaciation.
Ediacaran biota.
Cambrian explosion.
Quaternary.
Medieval Warm Period.
Little Ice Age.
Eutrophication.
Methane emissions.
Keeling curve.
CO2 fertilization effect.
Acid rain.
Ocean acidification.
Earth systems models.
Clausius–Clapeyron relation.
Thermohaline circulation.
Cryosphere.
The limits to growth.
Exoplanet Biosignature Gases.
Transiting Exoplanet Survey Satellite (TESS).
James Webb Space Telescope.
Habitable zone.
Kepler-186f.

A few quotes from the book:

“The scope of Earth system science is broad. It spans 4.5 billion years of Earth history, how the system functions now, projections of its future state, and ultimate fate. […] Earth system science is […] a deeply interdisciplinary field, which synthesizes elements of geology, biology, chemistry, physics, and mathematics. It is a young, integrative science that is part of a wider 21st-century intellectual trend towards trying to understand complex systems, and predict their behaviour. […] A key part of Earth system science is identifying the feedback loops in the Earth system and understanding the behaviour they can create. […] In systems thinking, the first step is usually to identify your system and its boundaries. […] what is part of the Earth system depends on the timescale being considered. […] The longer the timescale we look over, the more we need to include in the Earth system. […] for many Earth system scientists, the planet Earth is really comprised of two systems — the surface Earth system that supports life, and the great bulk of the inner Earth underneath. It is the thin layer of a system at the surface of the Earth […] that is the subject of this book.”

“Energy is in plentiful supply from the Sun, which drives the water cycle and also fuels the biosphere, via photosynthesis. However, the surface Earth system is nearly closed to materials, with only small inputs to the surface from the inner Earth. Thus, to support a flourishing biosphere, all the elements needed by life must be efficiently recycled within the Earth system. This in turn requires energy, to transform materials chemically and to move them physically around the planet. The resulting cycles of matter between the biosphere, atmosphere, ocean, land, and crust are called global biogeochemical cycles — because they involve biological, geological, and chemical processes. […] The global biogeochemical cycling of materials, fuelled by solar energy, has transformed the Earth system. […] It has made the Earth fundamentally different from its state before life and from its planetary neighbours, Mars and Venus. Through cycling the materials it needs, the Earth’s biosphere has bootstrapped itself into a much more productive state.”

“Each major element important for life has its own global biogeochemical cycle. However, every biogeochemical cycle can be conceptualized as a series of reservoirs (or ‘boxes’) of material connected by fluxes (or flows) of material between them. […] When a biogeochemical cycle is in steady state, the fluxes in and out of each reservoir must be in balance. This allows us to define additional useful quantities. Notably, the amount of material in a reservoir divided by the exchange flux with another reservoir gives the average ‘residence time’ of material in that reservoir with respect to the chosen process of exchange. For example, there are around 7 × 10^16 moles of carbon dioxide (CO2) in today’s atmosphere, and photosynthesis removes around 9 × 10^15 moles of CO2 per year, giving each molecule of CO2 a residence time of roughly eight years in the atmosphere before it is taken up, somewhere in the world, by photosynthesis. […] There are 3.8 × 10^19 moles of molecular oxygen (O2) in today’s atmosphere, and oxidative weathering removes around 1 × 10^13 moles of O2 per year, giving oxygen a residence time of around four million years with respect to removal by oxidative weathering. This makes the oxygen cycle […] a geological timescale cycle.”
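
The residence-time bookkeeping in that passage is easy to reproduce. A quick Python sketch (mine, not the book's) using the reservoir sizes and fluxes quoted above:

def residence_time(reservoir_moles, removal_flux_moles_per_year):
    """Average residence time = reservoir size / removal flux (steady state assumed)."""
    return reservoir_moles / removal_flux_moles_per_year

co2_years = residence_time(7e16, 9e15)    # atmospheric CO2 vs uptake by photosynthesis
o2_years = residence_time(3.8e19, 1e13)   # atmospheric O2 vs oxidative weathering

print(f"CO2 residence time: about {co2_years:.0f} years")
print(f"O2 residence time: about {o2_years / 1e6:.1f} million years")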

“The water cycle is the physical circulation of water around the planet, between the ocean (where 97 per cent is stored), atmosphere, ice sheets, glaciers, sea-ice, freshwaters, and groundwater. […] To change the phase of water from solid to liquid or liquid to gas requires energy, which in the climate system comes from the Sun. Equally, when water condenses from gas to liquid or freezes from liquid to solid, energy is released. Solar heating drives evaporation from the ocean. This is responsible for supplying about 90 per cent of the water vapour to the atmosphere, with the other 10 per cent coming from evaporation on the land and freshwater surfaces (and sublimation of ice and snow directly to vapour). […] The water cycle is intimately connected to other biogeochemical cycles […]. Many compounds are soluble in water, and some react with water. This makes the ocean a key reservoir for several essential elements. It also means that rainwater can scavenge soluble gases and aerosols out of the atmosphere. When rainwater hits the land, the resulting solution can chemically weather rocks. Silicate weathering in turn helps keep the climate in a state where water is liquid.”

“In modern terms, plants acquire their carbon from carbon dioxide in the atmosphere, add electrons derived from water molecules to the carbon, and emit oxygen to the atmosphere as a waste product. […] In energy terms, global photosynthesis today captures about 130 terawatts (1 TW = 10^12 W) of solar energy in chemical form — about half of it in the ocean and about half on land. […] All the breakdown pathways for organic carbon together produce a flux of carbon dioxide back to the atmosphere that nearly balances photosynthetic uptake […] The surface recycling system is almost perfect, but a tiny fraction (about 0.1 per cent) of the organic carbon manufactured in photosynthesis escapes recycling and is buried in new sedimentary rocks. This organic carbon burial flux leaves an equivalent amount of oxygen gas behind in the atmosphere. Hence the burial of organic carbon represents the long-term source of oxygen to the atmosphere. […] the Earth’s crust has much more oxygen trapped in rocks in the form of oxidized iron and sulphur, than it has organic carbon. This tells us that there has been a net source of oxygen to the crust over Earth history, which must have come from the loss of hydrogen to space.”

“The oxygen cycle is relatively simple, because the reservoir of oxygen in the atmosphere is so massive that it dwarfs the reservoirs of organic carbon in vegetation, soils, and the ocean. Hence oxygen cannot get used up by the respiration or combustion of organic matter. Even the combustion of all known fossil fuel reserves can only put a small dent in the much larger reservoir of atmospheric oxygen (there are roughly 4 × 10^17 moles of fossil fuel carbon, which is only about 1 per cent of the O2 reservoir). […] Unlike oxygen, the atmosphere is not the major surface reservoir of carbon. The amount of carbon in global vegetation is comparable to that in the atmosphere and the amount of carbon in soils (including permafrost) is roughly four times that in the atmosphere. Even these reservoirs are dwarfed by the ocean, which stores forty-five times as much carbon as the atmosphere, thanks to the fact that CO2 reacts with seawater. […] The exchange of carbon between the atmosphere and the land is largely biological, involving photosynthetic uptake and release by aerobic respiration (and, to a lesser extent, fires). […] Remarkably, when we look over Earth history there are fluctuations in the isotopic composition of carbonates, but no net drift up or down. This suggests that there has always been roughly one-fifth of carbon being buried in organic form and the other four-fifths as carbonate rocks. Thus, even on the early Earth, the biosphere was productive enough to support a healthy organic carbon burial flux.”

“The two most important nutrients for life are phosphorus and nitrogen, and they have very different biogeochemical cycles […] The largest reservoir of nitrogen is in the atmosphere, whereas the heavier phosphorus has no significant gaseous form. Phosphorus thus presents a greater recycling challenge for the biosphere. All phosphorus enters the surface Earth system from the chemical weathering of rocks on land […]. Phosphorus is concentrated in rocks in grains or veins of the mineral apatite. Natural selection has made plants on land and their fungal partners […] very effective at acquiring phosphorus from rocks, by manufacturing and secreting a range of organic acids that dissolve apatite. […] The average terrestrial ecosystem recycles phosphorus roughly fifty times before it is lost into freshwaters. […] The loss of phosphorus from the land is the ocean’s gain, providing the key input of this essential nutrient. Phosphorus is stored in the ocean as phosphate dissolved in the water. […] removal of phosphorus into the rock cycle balances the weathering of phosphorus from rocks on land. […] Although there is a large reservoir of nitrogen in the atmosphere, the molecules of nitrogen gas (N2) are extremely strongly bonded together, making nitrogen unavailable to most organisms. To split N2 and make nitrogen biologically available requires a remarkable biochemical feat — nitrogen fixation — which uses a lot of energy. In the ocean the dominant nitrogen fixers are cyanobacteria with a direct source of energy from sunlight. On land, various plants form a symbiotic partnership with nitrogen fixing bacteria, making a home for them in root nodules and supplying them with food in return for nitrogen. […] Nitrogen fixation and denitrification form the major input and output fluxes of nitrogen to both the land and the ocean, but there is also recycling of nitrogen within ecosystems. […] There is an intimate link between nutrient regulation and atmospheric oxygen regulation, because nutrient levels and marine productivity determine the source of oxygen via organic carbon burial. However, ocean nutrients are regulated on a much shorter timescale than atmospheric oxygen because their residence times are much shorter—about 2,000 years for nitrogen and 20,000 years for phosphorus.”

“[F]orests […] are vulnerable to increases in oxygen that increase the frequency and ferocity of fires. […] Combustion experiments show that fires only become self-sustaining in natural fuels when oxygen reaches around 17 per cent of the atmosphere. Yet for the last 370 million years there is a nearly continuous record of fossil charcoal, indicating that oxygen has never dropped below this level. At the same time, oxygen has never risen too high for fires to have prevented the slow regeneration of forests. The ease of combustion increases non-linearly with oxygen concentration, such that above 25–30 per cent oxygen (depending on the wetness of fuel) it is hard to see how forests could have survived. Thus oxygen has remained within 17–30 per cent of the atmosphere for at least the last 370 million years.”

“[T]he rate of silicate weathering increases with increasing CO2 and temperature. Thus, if something tends to increase CO2 or temperature it is counteracted by increased CO2 removal by silicate weathering. […] Plants are sensitive to variations in CO2 and temperature, and together with their fungal partners they greatly amplify weathering rates […] the most pronounced change in atmospheric CO2 over Phanerozoic time was due to plants colonizing the land. This started around 470 million years ago and escalated with the first forests 370 million years ago. The resulting acceleration of silicate weathering is estimated to have lowered the concentration of atmospheric CO2 by an order of magnitude […], and cooled the planet into a series of ice ages in the Carboniferous and Permian Periods.”

“The first photosynthesis was not the kind we are familiar with, which splits water and spits out oxygen as a waste product. Instead, early photosynthesis was ‘anoxygenic’ — meaning it didn’t produce oxygen. […] It could have used a range of compounds, in place of water, as a source of electrons with which to fix carbon from carbon dioxide and reduce it to sugars. Potential electron donors include hydrogen (H2) and hydrogen sulphide (H2S) in the atmosphere, or ferrous iron (Fe2+) dissolved in the ancient oceans. All of these are easier to extract electrons from than water. Hence they require fewer photons of sunlight and simpler photosynthetic machinery. The phylogenetic tree of life confirms that several forms of anoxygenic photosynthesis evolved very early on, long before oxygenic photosynthesis. […] If the early biosphere was fuelled by anoxygenic photosynthesis, plausibly based on hydrogen gas, then a key recycling process would have been the biological regeneration of this gas. Calculations suggest that once such recycling had evolved, the early biosphere might have achieved a global productivity up to 1 per cent of the modern marine biosphere. If early anoxygenic photosynthesis used the supply of reduced iron upwelling in the ocean, then its productivity would have been controlled by ocean circulation and might have reached 10 per cent of the modern marine biosphere. […] The innovation that supercharged the early biosphere was the origin of oxygenic photosynthesis using abundant water as an electron donor. This was not an easy process to evolve. To split water requires more energy — i.e. more high-energy photons of sunlight — than any of the earlier anoxygenic forms of photosynthesis. Evolution’s solution was to wire together two existing ‘photosystems’ in one cell and bolt on the front of them a remarkable piece of biochemical machinery that can rip apart water molecules. The result was the first cyanobacterial cell — the ancestor of all organisms performing oxygenic photosynthesis on the planet today. […] Once oxygenic photosynthesis had evolved, the productivity of the biosphere would no longer have been restricted by the supply of substrates for photosynthesis, as water and carbon dioxide were abundant. Instead, the availability of nutrients, notably nitrogen and phosphorus, would have become the major limiting factors on the productivity of the biosphere — as they still are today.” [If you’re curious to know more about how that fascinating ‘biochemical machinery’ works, this is a great book on these and related topics – US].

“On Earth, anoxygenic photosynthesis requires one photon per electron, whereas oxygenic photosynthesis requires two photons per electron. On Earth it took up to a billion years to evolve oxygenic photosynthesis, based on two photosystems that had already evolved independently in different types of anoxygenic photosynthesis. Around a fainter K- or M-type star […] oxygenic photosynthesis is estimated to require three or more photons per electron — and a corresponding number of photosystems — making it harder to evolve. […] However, fainter stars spend longer on the main sequence, giving more time for evolution to occur.”

“There was a lot more energy to go around in the post-oxidation world, because respiration of organic matter with oxygen yields an order of magnitude more energy than breaking food down anaerobically. […] The revolution in biological complexity culminated in the ‘Cambrian Explosion’ of animal diversity 540 to 515 million years ago, in which modern food webs were established in the ocean. […] Since then the most fundamental change in the Earth system has been the rise of plants on land […], beginning around 470 million years ago and culminating in the first global forests by 370 million years ago. This doubled global photosynthesis, increasing flows of materials. Accelerated chemical weathering of the land surface lowered atmospheric carbon dioxide levels and increased atmospheric oxygen levels, fully oxygenating the deep ocean. […] Although grasslands now cover about a third of the Earth’s productive land surface they are a geologically recent arrival. Grasses evolved amidst a trend of declining atmospheric carbon dioxide, and climate cooling and drying, over the past forty million years, and they only became widespread in two phases during the Miocene Epoch around seventeen and six million years ago. […] Since the rise of complex life, there have been several mass extinction events. […] whilst these rolls of the extinction dice marked profound changes in evolutionary winners and losers, they did not fundamentally alter the operation of the Earth system.” [If you’re interested in this kind of stuff, the evolution of food webs and so on, Herrera et al.’s wonderful book is a great place to start – US]

“The Industrial Revolution marks the transition from societies fuelled largely by recent solar energy (via biomass, water, and wind) to ones fuelled by concentrated ‘ancient sunlight’. Although coal had been used in small amounts for millennia, for example for iron making in ancient China, fossil fuel use only took off with the invention and refinement of the steam engine. […] With the Industrial Revolution, food and biomass have ceased to be the main source of energy for human societies. Instead the energy contained in annual food production, which supports today’s population, is at fifty exajoules (1 EJ = 10^18 joules), only about a tenth of the total energy input to human societies of 500 EJ/yr. This in turn is equivalent to about a tenth of the energy captured globally by photosynthesis. […] solar energy is not very efficiently converted by photosynthesis, which is 1–2 per cent efficient at best. […] The amount of sunlight reaching the Earth’s land surface (2.5 × 10^16 W) dwarfs current total human power consumption (1.5 × 10^13 W) by more than a factor of a thousand.”

“The Earth system’s primary energy source is sunlight, which the biosphere converts and stores as chemical energy. The energy-capture devices — photosynthesizing organisms — construct themselves out of carbon dioxide, nutrients, and a host of trace elements taken up from their surroundings. Inputs of these elements and compounds from the solid Earth system to the surface Earth system are modest. Some photosynthesizers have evolved to increase the inputs of the materials they need — for example, by fixing nitrogen from the atmosphere and selectively weathering phosphorus out of rocks. Even more importantly, other heterotrophic organisms have evolved that recycle the materials that the photosynthesizers need (often as a by-product of consuming some of the chemical energy originally captured in photosynthesis). This extraordinary recycling system is the primary mechanism by which the biosphere maintains a high level of energy capture (productivity).”

“[L]ike all stars on the ‘main sequence’ (which generate energy through the nuclear fusion of hydrogen into helium), the Sun is burning inexorably brighter with time — roughly 1 per cent brighter every 100 million years — and eventually this will overheat the planet. […] Over Earth history, the silicate weathering negative feedback mechanism has counteracted the steady brightening of the Sun by removing carbon dioxide from the atmosphere. However, this cooling mechanism is near the limits of its operation, because CO2 has fallen to limiting levels for the majority of plants, which are key amplifiers of silicate weathering. Although a subset of plants have evolved which can photosynthesize down to lower CO2 levels [the author does not go further into this topic, but here’s a relevant link – US], they cannot draw CO2 down lower than about 10 ppm. This means there is a second possible fate for life — running out of CO2. Early models projected either CO2 starvation or overheating […] occurring about a billion years in the future. […] Whilst this sounds comfortingly distant, it represents a much shorter future lifespan for the Earth’s biosphere than its past history. Earth’s biosphere is entering its old age.”

September 28, 2017 Posted by | Astronomy, Biology, Books, Botany, Chemistry, Geology, Paleontology, Physics | Leave a comment

Magnetism

This book was ‘okay…ish’, but I must admit I was a bit disappointed; the coverage was much too superficial, and I’m reasonably sure the lack of formalism made it harder for me to follow than it could have been. I gave the book two stars on goodreads.

Some quotes and links below.

Quotes:

“In the 19th century, the principles were established on which the modern electromagnetic world could be built. The electrical turbine is the industrialized embodiment of Faraday’s idea of producing electricity by rotating magnets. The turbine can be driven by the wind or by falling water in hydroelectric power stations; it can be powered by steam which is itself produced by boiling water using the heat produced from nuclear fission or burning coal or gas. Whatever the method, rotating magnets inducing currents feed the appetite of the world’s cities for electricity, lighting our streets, powering our televisions and computers, and providing us with an abundant source of energy. […] rotating magnets are the engine of the modern world. […] Modern society is built on the widespread availability of cheap electrical power, and almost all of it comes from magnets whirling around in turbines, producing electric current by the laws discovered by Oersted, Ampère, and Faraday.”

“Maxwell was the first person to really understand that a beam of light consists of electric and magnetic oscillations propagating together. The electric oscillation is in one plane, at right angles to the magnetic oscillation. Both of them are in directions at right angles to the direction of propagation. […] The oscillations of electricity and magnetism in a beam of light are governed by Maxwell’s four beautiful equations […] Above all, Einstein’s work on relativity was motivated by a desire to preserve the integrity of Maxwell’s equations at all costs. The problem was this: Maxwell had derived a beautiful expression for the speed of light, but the speed of light with respect to whom? […] Einstein deduced that the way to fix this would be to say that all observers will measure the speed of any beam of light to be the same. […] Einstein showed that magnetism is a purely relativistic effect, something that wouldn’t even be there without relativity. Magnetism is an example of relativity in everyday life. […] Magnetic fields are what electric fields look like when you are moving with respect to the charges that ‘cause’ them. […] every time a magnetic field appears in nature, it is because a charge is moving with respect to the observer. Charge flows down a wire to make an electric current and this produces magnetic field. Electrons orbit an atom and this ‘orbital’ motion produces a magnetic field. […] the magnetism of the Earth is due to electrical currents deep inside the planet. Motion is the key in each and every case, and magnetic fields are the evidence that charge is on the move. […] Einstein’s theory of relativity casts magnetism in a new light. Magnetic fields are a relativistic correction which you observe when charges move relative to you.”

“[T]he Bohr–van Leeuwen theorem […] states that if you assume nothing more than classical physics, and then go on to model a material as a system of electrical charges, then you can show that the system can have no net magnetization; in other words, it will not be magnetic. Simply put, there are no lodestones in a purely classical Universe. This should have been a revolutionary and astonishing result, but it wasn’t, principally because it came about 20 years too late to knock everyone’s socks off. By 1921, the initial premise of the Bohr–van Leeuwen theorem, the correctness of classical physics, was known to be wrong […] But when you think about it now, the Bohr–van Leeuwen theorem gives an extraordinary demonstration of the failure of classical physics. Just by sticking a magnet to the door of your refrigerator, you have demonstrated that the Universe is not governed by classical physics.”

“[M]ost real substances are weakly diamagnetic, meaning that when placed in a magnetic field they become weakly magnetic in the opposite direction to the field. Water does this, and since animals are mostly water, it applies to them. This is the basis of Andre Geim’s levitating frog experiment: a live frog is placed in a strong magnetic field and because of its diamagnetism it becomes weakly magnetic. In the experiment, a non-uniformity of the magnetic field induces a force on the frog’s induced magnetism and, hey presto, the frog levitates in mid-air.”

“In conventional hard disk technology, the disk needs to be spun very fast, around 7,000 revolutions per minute. […] The read head floats on a cushion of air about 15 nanometres […] above the surface of the rotating disk, reading bits off the disk at tens of megabytes per second. This is an extraordinary engineering achievement when you think about it. If you were to scale up a hard disk so that the disk is a few kilometres in diameter rather than a few centimetres, then the read head would be around the size of the White House and would be floating over the surface of the disk on a cushion of air one millimetre thick (the diameter of the head of a pin) while the disk rotated below it at a speed of several million miles per hour (fast enough to go round the equator a couple of dozen times in a second). On this scale, the bits would be spaced a few centimetres apart around each track. Hard disk drives are remarkable. […] Although hard disks store an astonishing amount of information and are cheap to manufacture, they are not fast information retrieval systems. To access a particular piece of information involves moving the head and rotating the disk to a particular spot, taking perhaps a few milliseconds. This sounds quite rapid, but with processors buzzing away and performing operations every nanosecond or so, a few milliseconds is glacial in comparison. For this reason, modern computers often use solid state memory to store temporary information, reserving the hard disk for longer-term bulk storage. However, there is a trade-off between cost and performance.”

“In general, there is a strong economic drive to store more and more information in a smaller and smaller space, and hence a need to find a way to make smaller and smaller bits. […] [However] greater miniaturization comes at a price. The point is the following: when you try to store a bit of information in a magnetic medium, an important constraint on the usefulness of the technology is how long the information will last for. Almost always the information is being stored at room temperature and so needs to be robust to the ever present random jiggling effects produced by temperature […] It turns out that the crucial parameter controlling this robustness is the ratio of the energy needed to reverse the bit of information (in other words, the energy required to change the magnetization from one direction to the reverse direction) to a characteristic energy associated with room temperature (an energy which is, expressed in electrical units, approximately one-fortieth of an electron volt). So if the energy to flip a magnetic bit is very large, the information can persist for thousands of years […] while if it is very small, the information might only last for a small fraction of a second […] This energy is proportional to the volume of the magnetic bit, and so one immediately sees a problem with making bits smaller and smaller: though you can store bits of information at higher density, there is a very real possibility that the information might be very rapidly scrambled by thermal fluctuations. This motivates the search for materials in which it is very hard to flip the magnetization from one state to the other.”
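
The standard way of quantifying that trade-off is the Néel–Arrhenius relation, in which the average time before thermal fluctuations flip a bit grows exponentially with the ratio of the energy barrier to kT (the roughly one-fortieth of an electron volt at room temperature mentioned above). A rough Python sketch (mine, not the book's; the nanosecond attempt time and the barrier values are generic illustrative numbers):

import math

K_B = 8.617e-5                   # Boltzmann constant, eV per kelvin
ROOM_T = 300.0                   # kelvin; k_B * ROOM_T is roughly 1/40 eV

def mean_flip_time(barrier_ev, attempt_time_s=1e-9):
    """Neel-Arrhenius estimate of the mean time before a magnetic bit with the
    given energy barrier (proportional to its volume) is reversed thermally."""
    return attempt_time_s * math.exp(barrier_ev / (K_B * ROOM_T))

for barrier in (0.5, 1.0, 1.5):  # energy barriers in eV
    t = mean_flip_time(barrier)
    print(f"barrier {barrier:.1f} eV -> mean flip time ~{t:.1e} s (~{t / 3.156e7:.1e} years)")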

“The change in the Earth’s magnetic field over time is a fairly noticeable phenomenon. Every decade or so, compass needles in Africa are shifting by a degree, and the magnetic field overall on planet Earth is about 10% weaker than it was in the 19th century.”

Below I have added some links to topics and people covered/mentioned in the book. Many of the links below have likely also been included in some of the other posts about books from the A Brief Introduction OUP physics series which I’ve posted this year – the main point of adding these links is to give some idea what kind of stuff’s covered in the book:

Magnetism.
Magnetite.
Lodestone.
William Gilbert/De Magnete.
Alessandro Volta.
Ampère’s circuital law.
Charles-Augustin de Coulomb.
Hans Christian Ørsted.
Leyden jar/voltaic cell/battery (electricity).
Solenoid.
Electromagnet.
Homopolar motor.
Michael Faraday.
Electromagnetic induction.
Dynamo.
Zeeman effect.
Alternating current/Direct current.
Nikola Tesla.
Thomas Edison.
Force field (physics).
Ole Rømer.
Centimetre–gram–second system of units.
James Clerk Maxwell.
Maxwell’s equations.
Permittivity.
Permeability (electromagnetism).
Gauss’ law.
Michelson–Morley experiment.
Special relativity.
Drift velocity.
Curie’s law.
Curie temperature.
Andre Geim.
Diamagnetism.
Paramagnetism.
Exchange interaction.
Magnetic domain.
Domain wall (magnetism).
Stern–Gerlach experiment.
Dirac equation.
Giant magnetoresistance.
Spin valve.
Racetrack memory.
Perpendicular recording.
Bubble memory (“an example of a brilliant idea which never quite made it”, as the author puts it).
Single-molecule magnet.
Spintronics.
Earth’s magnetic field.
Aurora.
Van Allen radiation belt.
South Atlantic Anomaly.
Geomagnetic storm.
Geomagnetic reversal.
Magnetar.
ITER (‘International Thermonuclear Experimental Reactor’).
Antiferromagnetism.
Spin glass.
Quantum spin liquid.
Multiferroics.
Spin ice.
Magnetic monopole.
Ice rules.

August 28, 2017 Posted by | Books, Computer science, Engineering, Geology, Physics | Leave a comment

The Antarctic

“A very poor book with poor coverage, mostly about politics and history (and a long collection of names of treaties and organizations). I would definitely not have finished it if it were much longer than it is.”

That was what I wrote about the book in my goodreads review. I was strongly debating whether or not to blog it at all, but in the end I decided to settle for some very lazy coverage consisting only of links to topics covered in the book. I'm including it here mainly so that I'll have at least some chance of remembering later on which kinds of things it covered.

If you’re interested enough in the Antarctic to read a book about it, read Scott’s Last Expedition instead of this one (here’s my goodreads review of Scott).

Links:

Antarctica (featured).
Antarctic Convergence.
Antarctic Circle.
Southern Ocean.
Antarctic Circumpolar Current.
West Antarctic Ice Sheet.
East Antarctic Ice Sheet.
McMurdo Dry Valleys.
Notothenioidei.
Patagonian toothfish.
Antarctic krill.
Fabian Gottlieb von Bellingshausen.
Edward Bransfield.
James Clark Ross.
United States Exploring Expedition.
Heroic Age of Antarctic Exploration (featured).
Nimrod Expedition (featured).
Roald Amundsen.
Wilhelm Filchner.
Japanese Antarctic Expedition.
Terra Nova Expedition (featured).
Lincoln Ellsworth.
British Graham Land expedition.
German Antarctic Expedition (1938–1939).
Operation Highjump.
Operation Windmill.
Operation Deep Freeze.
Commonwealth Trans-Antarctic Expedition.
Caroline Mikkelsen.
International Association of Antarctica Tour Operators.
Territorial claims in Antarctica.
International Geophysical Year.
Antarctic Treaty System.
Operation Tabarin.
Scientific Committee on Antarctic Research.
United Nations Convention on the Law of the Sea.
Convention on the Continental Shelf.
Council of Managers of National Antarctic Programs.
British Antarctic Survey.
International Polar Year.
Antarctic ozone hole.
Gamburtsev Mountain Range.
Pine Island Glacier (‘good article’).
Census of Antarctic Marine Life.
Lake Ellsworth Consortium.
Antarctic fur seal.
Southern elephant seal.
Grytviken (whaling-related).
International Convention for the Regulation of Whaling.
International Whaling Commission.
Ocean Drilling Program.
Convention on the Regulation of Antarctic Mineral Resource Activities.
Agreement on the Conservation of Albatrosses and Petrels.

July 3, 2017 Posted by | Biology, Books, Geography, Geology, History, Wikipedia | Leave a comment