Econstudentlog

Physical chemistry

This is a good book; I really liked it, just as I really liked the other book I’ve read in this series by the same author, the one about the laws of thermodynamics (blog coverage here). I know much, much more about physics than I do about chemistry, and even though some of it was review I learned a lot from this one. Recommended, certainly if you find the quotes below interesting. As usual, I’ve added some observations from the book and some links to topics/people/etc. covered/mentioned in the book below.

Some quotes:

“Physical chemists pay a great deal of attention to the electrons that surround the nucleus of an atom: it is here that the chemical action takes place and the element expresses its chemical personality. […] Quantum mechanics plays a central role in accounting for the arrangement of electrons around the nucleus. The early ‘Bohr model’ of the atom, […] with electrons in orbits encircling the nucleus like miniature planets and widely used in popular depictions of atoms, is wrong in just about every respect—but it is hard to dislodge from the popular imagination. The quantum mechanical description of atoms acknowledges that an electron cannot be ascribed to a particular path around the nucleus, that the planetary ‘orbits’ of Bohr’s theory simply don’t exist, and that some electrons do not circulate around the nucleus at all. […] Physical chemists base their understanding of the electronic structures of atoms on Schrödinger’s model of the hydrogen atom, which was formulated in 1926. […] An atom is often said to be mostly empty space. That is a remnant of Bohr’s model in which a point-like electron circulates around the nucleus; in the Schrödinger model, there is no empty space, just a varying probability of finding the electron at a particular location.”

“No more than two electrons may occupy any one orbital, and if two do occupy that orbital, they must spin in opposite directions. […] this form of the principle [the Pauli exclusion principle – US] […] is adequate for many applications in physical chemistry. At its very simplest, the principle rules out all the electrons of an atom (other than atoms of one-electron hydrogen and two-electron helium) having all their electrons in the 1s-orbital. Lithium, for instance, has three electrons: two occupy the 1s orbital, but the third cannot join them, and must occupy the next higher-energy orbital, the 2s-orbital. With that point in mind, something rather wonderful becomes apparent: the structure of the Periodic Table of the elements unfolds, the principal icon of chemistry. […] The first electron can enter the 1s-orbital, and helium’s (He) second electron can join it. At that point, the orbital is full, and lithium’s (Li) third electron must enter the next higher orbital, the 2s-orbital. The next electron, for beryllium (Be), can join it, but then it too is full. From that point on the next six electrons can enter in succession the three 2p-orbitals. After those six are present (at neon, Ne), all the 2p-orbitals are full and the eleventh electron, for sodium (Na), has to enter the 3s-orbital. […] Similar reasoning accounts for the entire structure of the Table, with elements in the same group all having analogous electron arrangements and each successive row (‘period’) corresponding to the next outermost shell of orbitals.”
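
The filling sequence described in that passage can be sketched in a few lines of code. Below is a minimal illustration of my own (not from the book) that assigns the electrons of a neutral atom to subshells using the simple Madelung (n + l) filling rule; some real atoms, such as chromium and copper, deviate from this rule, but it reproduces the configurations quoted above.

```python
# Sketch of the orbital-filling ('Aufbau') order described above, using the
# simple Madelung rule; illustrative only -- some real atoms deviate from it.
def electron_configuration(atomic_number):
    # Subshells ordered by (n + l), ties broken by n.
    subshells = sorted(
        [(n, l) for n in range(1, 8) for l in range(n)],
        key=lambda nl: (nl[0] + nl[1], nl[0]),
    )
    labels = "spdfghi"                        # l = 0, 1, 2, ... -> s, p, d, ...
    config, remaining = [], atomic_number
    for n, l in subshells:
        if remaining == 0:
            break
        capacity = 2 * (2 * l + 1)            # Pauli principle: two electrons per orbital
        electrons = min(capacity, remaining)
        config.append(f"{n}{labels[l]}{electrons}")
        remaining -= electrons
    return " ".join(config)

for element, z in [("He", 2), ("Li", 3), ("Ne", 10), ("Na", 11)]:
    print(element, electron_configuration(z))
# Na comes out as 1s2 2s2 2p6 3s1, matching the passage.
```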

“[O]n crossing the [Periodic] Table from left to right, atoms become smaller: even though they have progressively more electrons, the nuclear charge increases too, and draws the clouds in to itself. On descending a group, atoms become larger because in successive periods new outermost shells are started (as in going from lithium to sodium) and each new coating of cloud makes the atom bigger […] the ionization energy [is] the energy needed to remove one or more electrons from the atom. […] The ionization energy more or less follows the trend in atomic radii but in an opposite sense because the closer an electron lies to the positively charged nucleus, the harder it is to remove. Thus, ionization energy increases from left to right across the Table as the atoms become smaller. It decreases down a group because the outermost electron (the one that is most easily removed) is progressively further from the nucleus. […] the electron affinity [is] the energy released when an electron attaches to an atom. […] Electron affinities are highest on the right of the Table […] An ion is an electrically charged atom. That charge comes about either because the neutral atom has lost one or more of its electrons, in which case it is a positively charged cation […] or because it has captured one or more electrons and has become a negatively charged anion. […] Elements on the left of the Periodic Table, with their low ionization energies, are likely to lose electrons and form cations; those on the right, with their high electron affinities, are likely to acquire electrons and form anions. […] ionic bonds […] form primarily between atoms on the left and right of the Periodic Table.”

“Although the Schrödinger equation is too difficult to solve for molecules, powerful computational procedures have been developed by theoretical chemists to arrive at numerical solutions of great accuracy. All the procedures start out by building molecular orbitals from the available atomic orbitals and then setting about finding the best formulations. […] Depictions of electron distributions in molecules are now commonplace and very helpful for understanding the properties of molecules. It is particularly relevant to the development of new pharmacologically active drugs, where electron distributions play a central role […] Drug discovery, the identification of pharmacologically active species by computation rather than in vivo experiment, is an important target of modern computational chemistry.”

“Work […] involves moving against an opposing force; heat […] is the transfer of energy that makes use of a temperature difference. […] the internal energy of a system that is isolated from external influences does not change. That is the First Law of thermodynamics. […] A system possesses energy, it does not possess work or heat (even if it is hot). Work and heat are two different modes for the transfer of energy into or out of a system. […] if you know the internal energy of a system, then you can calculate its enthalpy simply by adding to U the product of pressure and volume of the system (H = U + pV). The significance of the enthalpy […] is that a change in its value is equal to the output of energy as heat that can be obtained from the system provided it is kept at constant pressure. For instance, if the enthalpy of a system falls by 100 joules when it undergoes a certain change (such as a chemical reaction), then we know that 100 joules of energy can be extracted as heat from the system, provided the pressure is constant.”
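
As a small worked example of the definition H = U + pV (with invented numbers, not taken from the book): if the internal energy of a system held at atmospheric pressure falls by 150 J while the pV term rises by 50 J, the enthalpy falls by 100 J, and 100 J of heat can be extracted at constant pressure.

```python
# Worked example of H = U + pV with invented, illustrative numbers (not from the book).
p = 101325.0                    # constant pressure, Pa (1 atm)
U1, V1 = 5000.0, 1.0000e-3      # internal energy (J) and volume (m^3) before the change
U2, V2 = 4850.0, 1.4935e-3      # ... and after (assumed values)

H1, H2 = U1 + p * V1, U2 + p * V2
print(f"delta H = {H2 - H1:.0f} J")   # about -100 J
# At constant pressure a 100 J fall in enthalpy means 100 J of heat can be
# extracted from the system, as in the passage above.
```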

“In the old days of physical chemistry (well into the 20th century), the enthalpy changes were commonly estimated by noting which bonds are broken in the reactants and which are formed to make the products, so A → B might be the bond-breaking step and B → C the new bond-formation step, each with enthalpy changes calculated from knowledge of the strengths of the old and new bonds. That procedure, while often a useful rule of thumb, often gave wildly inaccurate results because bonds are sensitive entities with strengths that depend on the identities and locations of the other atoms present in molecules. Computation now plays a central role: it is now routine to be able to calculate the difference in energy between the products and reactants, especially if the molecules are isolated as a gas, and that difference easily converted to a change of enthalpy. […] Enthalpy changes are very important for a rational discussion of changes in physical state (vaporization and freezing, for instance) […] If we know the enthalpy change taking place during a reaction, then provided the process takes place at constant pressure we know how much energy is released as heat into the surroundings. If we divide that heat transfer by the temperature, then we get the associated entropy change in the surroundings. […] provided the pressure and temperature are constant, a spontaneous change corresponds to a decrease in Gibbs energy. […] the chemical potential can be thought of as the Gibbs energy possessed by a standard-size block of sample. (More precisely, for a pure substance the chemical potential is the molar Gibbs energy, the Gibbs energy per mole of atoms or molecules.)”

“There are two kinds of work. One kind is the work of expansion that occurs when a reaction generates a gas and pushes back the atmosphere (perhaps by pressing out a piston). That type of work is called ‘expansion work’. However, a chemical reaction might do work other than by pushing out a piston or pushing back the atmosphere. For instance, it might do work by driving electrons through an electric circuit connected to a motor. This type of work is called ‘non-expansion work’. […] a change in the Gibbs energy of a system at constant temperature and pressure is equal to the maximum non-expansion work that can be done by the reaction. […] the link of thermodynamics with biology is that one chemical reaction might do the non-expansion work of building a protein from amino acids. Thus, a knowledge of the Gibbs energy changes accompanying metabolic processes is very important in bioenergetics, and much more important than knowing the enthalpy changes alone (which merely indicate a reaction’s ability to keep us warm).”

“[T]he probability that a molecule will be found in a state of particular energy falls off rapidly with increasing energy, so most molecules will be found in states of low energy and very few will be found in states of high energy. […] If the temperature is low, then the distribution declines so rapidly that only the very lowest levels are significantly populated. If the temperature is high, then the distribution falls off very slowly with increasing energy, and many high-energy states are populated. If the temperature is zero, the distribution has all the molecules in the ground state. If the temperature is infinite, all available states are equally populated. […] temperature […] is the single, universal parameter that determines the most probable distribution of molecules over the available states.”
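
The temperature dependence described there is easy to see numerically. Here is a minimal sketch of the Boltzmann distribution over a ladder of equally spaced energy levels; the level spacing is an assumed value chosen purely for illustration.

```python
import math

# A minimal sketch of the Boltzmann distribution over equally spaced energy
# levels (the spacing is an assumed, illustrative value, not from the book).
k_B = 1.380649e-23           # Boltzmann constant, J/K
spacing = 1.0e-21            # assumed energy gap between adjacent levels, J
levels = [i * spacing for i in range(6)]

def populations(T):
    """Relative populations of the levels at temperature T (kelvin)."""
    weights = [math.exp(-E / (k_B * T)) for E in levels]
    total = sum(weights)
    return [w / total for w in weights]

for T in (50, 300, 3000):
    print(f"T = {T:5d} K:", " ".join(f"{p:.3f}" for p in populations(T)))
# At low temperature nearly everything sits in the lowest level; at high
# temperature the populations flatten out, as the passage describes.
```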

“Mixing adds disorder and increases the entropy of the system and therefore lowers the Gibbs energy […] In the absence of mixing, a reaction goes to completion; when mixing of reactants and products is taken into account, equilibrium is reached when both are present […] Statistical thermodynamics, through the Boltzmann distribution and its dependence on temperature, allows physical chemists to understand why in some cases the equilibrium shifts towards reactants (which is usually unwanted) or towards products (which is normally wanted) as the temperature is raised. A rule of thumb […] is provided by a principle formulated by Henri Le Chatelier […] that a system at equilibrium responds to a disturbance by tending to oppose its effect. Thus, if a reaction releases energy as heat (is ‘exothermic’), then raising the temperature will oppose the formation of more products; if the reaction absorbs energy as heat (is ‘endothermic’), then raising the temperature will encourage the formation of more product.”

“Model building pervades physical chemistry […] some hold that the whole of science is based on building models of physical reality; much of physical chemistry certainly is.”

“For reasonably light molecules (such as the major constituents of air, N2 and O2) at room temperature, the molecules are whizzing around at an average speed of about 500 m/s (about 1000 mph). That speed is consistent with what we know about the propagation of sound, the speed of which is about 340 m/s through air: for sound to propagate, molecules must adjust their position to give a wave of undulating pressure, so the rate at which they do so must be comparable to their average speeds. […] a typical N2 or O2 molecule in air makes a collision every nanosecond and travels about 1000 molecular diameters between collisions. To put this scale into perspective: if a molecule is thought of as being the size of a tennis ball, then it travels about the length of a tennis court between collisions. Each molecule makes about a billion collisions a second.”
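
Those figures can be checked with standard kinetic-theory formulas. The sketch below is my own back-of-envelope estimate for N2 at 1 atm and room temperature, not the book’s calculation; the mean free path and collision rate depend strongly on the assumed molecular diameter, so they are ballpark values only.

```python
import math

# Back-of-envelope kinetic theory for N2 at 1 atm, 298 K (my own estimate;
# the exact results depend on the assumed molecular diameter).
k_B = 1.380649e-23               # Boltzmann constant, J/K
T = 298.0                        # temperature, K
m = 28.0e-3 / 6.022e23           # mass of one N2 molecule, kg
d = 3.7e-10                      # assumed molecular diameter, m
n = 101325.0 / (k_B * T)         # number density, molecules per m^3

mean_speed = math.sqrt(8 * k_B * T / (math.pi * m))          # a few hundred m/s
mean_free_path = 1.0 / (math.sqrt(2) * math.pi * d**2 * n)   # tens of nanometres
collisions_per_second = mean_speed / mean_free_path          # billions per second

print(f"mean speed      ~ {mean_speed:.0f} m/s")
print(f"mean free path  ~ {mean_free_path*1e9:.0f} nm")
print(f"collision rate  ~ {collisions_per_second:.1e} per second")
```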

“X-ray diffraction makes use of the fact that electromagnetic radiation (which includes X-rays) consists of waves that can interfere with one another and give rise to regions of enhanced and diminished intensity. This so-called ‘diffraction pattern’ is characteristic of the object in the path of the rays, and mathematical procedures can be used to interpret the pattern in terms of the object’s structure. Diffraction occurs when the wavelength of the radiation is comparable to the dimensions of the object. X-rays have wavelengths comparable to the separation of atoms in solids, so are ideal for investigating their arrangement.”

“For most liquids the sample contracts when it freezes, so […] the temperature does not need to be lowered so much for freezing to occur. That is, the application of pressure raises the freezing point. Water, as in most things, is anomalous, and ice is less dense than liquid water, so water expands when it freezes […] when two gases are allowed to occupy the same container they invariably mix and each spreads uniformly through it. […] the quantity of gas that dissolves in any liquid is proportional to the pressure of the gas. […] When the temperature of [a] liquid is raised, it is easier for a dissolved molecule to gather sufficient energy to escape back up into the gas; the rate of impacts from the gas is largely unchanged. The outcome is a lowering of the concentration of dissolved gas at equilibrium. Thus, gases appear to be less soluble in hot water than in cold. […] the presence of dissolved substances affects the properties of solutions. For instance, the everyday experience of spreading salt on roads to hinder the formation of ice makes use of the lowering of freezing point of water when a salt is present. […] the boiling point is raised by the presence of a dissolved substance [whereas] the freezing point […] is lowered by the presence of a solute.”

“When a liquid and its vapour are present in a closed container the vapour exerts a characteristic pressure (when the escape of molecules from the liquid matches the rate at which they splash back down into it […][)] This characteristic pressure depends on the temperature and is called the ‘vapour pressure’ of the liquid. When a solute is present, the vapour pressure at a given temperature is lower than that of the pure liquid […] The extent of lowering is summarized by yet another limiting law of physical chemistry, ‘Raoult’s law’ [which] states that the vapour pressure of a solvent or of a component of a liquid mixture is proportional to the proportion of solvent or liquid molecules present. […] Osmosis [is] the tendency of solvent molecules to flow from the pure solvent to a solution separated from it by a [semi-]permeable membrane […] The entropy when a solute is present in a solvent is higher than when the solute is absent, so an increase in entropy, and therefore a spontaneous process, is achieved when solvent flows through the membrane from the pure liquid into the solution. The tendency for this flow to occur can be overcome by applying pressure to the solution, and the minimum pressure needed to overcome the tendency to flow is called the ‘osmotic pressure’. If one solution is put into contact with another through a semipermeable membrane, then there will be no net flow if they exert the same osmotic pressures and are ‘isotonic’.”
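
Raoult’s law, as stated there, is simple enough to illustrate with a toy calculation; the numbers below (vapour pressure of water, one mole of non-volatile solute per kilogram of water) are my own approximations, not figures from the book.

```python
# Minimal illustration of Raoult's law: the solvent's vapour pressure is
# proportional to its mole fraction (approximate, illustrative numbers).
p_pure = 3.17e3            # vapour pressure of pure water at 25 C, Pa (approximate)
n_water = 55.5             # moles of water in 1 kg (approximate)
n_solute = 1.0             # assumed moles of dissolved, non-volatile solute

x_water = n_water / (n_water + n_solute)     # mole fraction of solvent
p_solution = x_water * p_pure                # Raoult's law

print(f"mole fraction of water : {x_water:.3f}")
print(f"vapour pressure        : {p_solution:.0f} Pa (vs {p_pure:.0f} Pa for pure water)")
# The lowering of vapour pressure is what ultimately raises the boiling point
# and lowers the freezing point of the solution.
```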

“Broadly speaking, the reaction quotient [‘Q’] is the ratio of concentrations, with product concentrations divided by reactant concentrations. It takes into account how the mingling of the reactants and products affects the total Gibbs energy of the mixture. The value of Q that corresponds to the minimum in the Gibbs energy […] is called the equilibrium constant and denoted K. The equilibrium constant, which is characteristic of a given reaction and depends on the temperature, is central to many discussions in chemistry. When K is large (1000, say), we can be reasonably confident that the equilibrium mixture will be rich in products; if K is small (0.001, say), then there will be hardly any products present at equilibrium and we should perhaps look for another way of making them. If K is close to 1, then both reactants and products will be abundant at equilibrium and will need to be separated. […] Equilibrium constants vary with temperature but not […] with pressure. […] van’t Hoff’s equation implies that if the reaction is strongly exothermic (releases a lot of energy as heat when it takes place), then the equilibrium constant decreases sharply as the temperature is raised. The opposite is true if the reaction is strongly endothermic (absorbs a lot of energy as heat). […] Typically it is found that the rate of a reaction [how fast it progresses] decreases as it approaches equilibrium. […] Most reactions go faster when the temperature is raised. […] reactions with high activation energies proceed slowly at low temperatures but respond sharply to changes of temperature. […] The surface area exposed by a catalyst is important for its function, for it is normally the case that the greater that area, the more effective is the catalyst.”
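
The temperature behaviour attributed to van’t Hoff’s equation can be made concrete with a short sketch. The reference equilibrium constant and reaction enthalpies below are assumed values for illustration, and the integrated form used here assumes the reaction enthalpy is constant over the temperature range.

```python
import math

# Sketch of the van't Hoff relation mentioned above: how an equilibrium
# constant shifts with temperature (illustrative, assumed numbers).
R = 8.314                    # gas constant, J/(mol K)

def K_at_temperature(K_ref, T_ref, T, delta_H):
    """Integrated van't Hoff equation, assuming delta_H is constant over the range."""
    return K_ref * math.exp(-delta_H / R * (1.0 / T - 1.0 / T_ref))

K_298 = 1000.0               # assumed equilibrium constant at 298 K
delta_H_exo = -50e3          # strongly exothermic reaction, J/mol (assumed)
delta_H_endo = +50e3         # strongly endothermic reaction, J/mol (assumed)

for T in (298, 350, 400):
    K_exo = K_at_temperature(K_298, 298, T, delta_H_exo)
    K_endo = K_at_temperature(K_298, 298, T, delta_H_endo)
    print(f"T = {T} K: exothermic K ~ {K_exo:.2g}, endothermic K ~ {K_endo:.2g}")
# K falls with temperature for the exothermic case and rises for the
# endothermic case, in line with Le Chatelier's principle and the passage above.
```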

Links:

John Dalton.
Atomic orbital.
Electron configuration.
s, p, d, f orbitals.
Computational chemistry.
Atomic radius.
Covalent bond.
Gilbert Lewis.
Valence bond theory.
Molecular orbital theory.
Orbital hybridisation.
Bonding and antibonding orbitals.
Schrödinger equation.
Density functional theory.
Chemical thermodynamics.
Laws of thermodynamics/Zeroth law/First law/Second law/Third Law.
Conservation of energy.
Thermochemistry.
Bioenergetics.
Spontaneous processes.
Entropy.
Rudolf Clausius.
Chemical equilibrium.
Heat capacity.
Compressibility.
Statistical thermodynamics/statistical mechanics.
Boltzmann distribution.
State of matter/gas/liquid/solid.
Perfect gas/Ideal gas law.
Robert Boyle/Joseph Louis Gay-Lussac/Jacques Charles/Amedeo Avogadro.
Equation of state.
Kinetic theory of gases.
Van der Waals equation of state.
Maxwell–Boltzmann distribution.
Thermal conductivity.
Viscosity.
Nuclear magnetic resonance.
Debye–Hückel equation.
Ionic solids.
Catalysis.
Supercritical fluid.
Liquid crystal.
Graphene.
Benoît Paul Émile Clapeyron.
Phase (matter)/phase diagram/Gibbs’ phase rule.
Ideal solution/regular solution.
Henry’s law.
Chemical kinetics.
Electrochemistry.
Rate equation/First order reactions/Second order reactions.
Rate-determining step.
Arrhenius equation.
Collision theory.
Diffusion-controlled and activation-controlled reactions.
Transition state theory.
Photochemistry/fluorescence/phosphorescence/photoexcitation.
Photosynthesis.
Redox reactions.
Electrochemical cell.
Fuel cell.
Reaction dynamics.
Spectroscopy/emission spectroscopy/absorption spectroscopy/Raman spectroscopy.
Raman effect.
Magnetic resonance imaging.
Fourier-transform spectroscopy.
Electron paramagnetic resonance.
Mass spectrum.
Electron spectroscopy for chemical analysis.
Scanning tunneling microscope.
Chemisorption/physisorption.

October 5, 2017 | Biology, Books, Chemistry, Pharmacology, Physics

Earth System Science

I decided not to rate this book. Some parts are great; other parts I didn’t think were very good.

I’ve added some quotes and links below. First a few links (I’ve tried not to add links here which I’ve also included in the quotes below):

Carbon cycle.
Origin of water on Earth.
Gaia hypothesis.
Albedo (climate and weather).
Snowball Earth.
Carbonate–silicate cycle.
Carbonate compensation depth.
Isotope fractionation.
CLAW hypothesis.
Mass-independent fractionation.
δ13C.
Great Oxygenation Event.
Acritarch.
Grypania.
Neoproterozoic.
Rodinia.
Sturtian glaciation.
Marinoan glaciation.
Ediacaran biota.
Cambrian explosion.
Quaternary.
Medieval Warm Period.
Little Ice Age.
Eutrophication.
Methane emissions.
Keeling curve.
CO2 fertilization effect.
Acid rain.
Ocean acidification.
Earth systems models.
Clausius–Clapeyron relation.
Thermohaline circulation.
Cryosphere.
The limits to growth.
Exoplanet Biosignature Gases.
Transiting Exoplanet Survey Satellite (TESS).
James Webb Space Telescope.
Habitable zone.
Kepler-186f.

A few quotes from the book:

“The scope of Earth system science is broad. It spans 4.5 billion years of Earth history, how the system functions now, projections of its future state, and ultimate fate. […] Earth system science is […] a deeply interdisciplinary field, which synthesizes elements of geology, biology, chemistry, physics, and mathematics. It is a young, integrative science that is part of a wider 21st-century intellectual trend towards trying to understand complex systems, and predict their behaviour. […] A key part of Earth system science is identifying the feedback loops in the Earth system and understanding the behaviour they can create. […] In systems thinking, the first step is usually to identify your system and its boundaries. […] what is part of the Earth system depends on the timescale being considered. […] The longer the timescale we look over, the more we need to include in the Earth system. […] for many Earth system scientists, the planet Earth is really comprised of two systems — the surface Earth system that supports life, and the great bulk of the inner Earth underneath. It is the thin layer of a system at the surface of the Earth […] that is the subject of this book.”

“Energy is in plentiful supply from the Sun, which drives the water cycle and also fuels the biosphere, via photosynthesis. However, the surface Earth system is nearly closed to materials, with only small inputs to the surface from the inner Earth. Thus, to support a flourishing biosphere, all the elements needed by life must be efficiently recycled within the Earth system. This in turn requires energy, to transform materials chemically and to move them physically around the planet. The resulting cycles of matter between the biosphere, atmosphere, ocean, land, and crust are called global biogeochemical cycles — because they involve biological, geological, and chemical processes. […] The global biogeochemical cycling of materials, fuelled by solar energy, has transformed the Earth system. […] It has made the Earth fundamentally different from its state before life and from its planetary neighbours, Mars and Venus. Through cycling the materials it needs, the Earth’s biosphere has bootstrapped itself into a much more productive state.”

“Each major element important for life has its own global biogeochemical cycle. However, every biogeochemical cycle can be conceptualized as a series of reservoirs (or ‘boxes’) of material connected by fluxes (or flows) of material between them. […] When a biogeochemical cycle is in steady state, the fluxes in and out of each reservoir must be in balance. This allows us to define additional useful quantities. Notably, the amount of material in a reservoir divided by the exchange flux with another reservoir gives the average ‘residence time’ of material in that reservoir with respect to the chosen process of exchange. For example, there are around 7 × 10^16 moles of carbon dioxide (CO2) in today’s atmosphere, and photosynthesis removes around 9 × 10^15 moles of CO2 per year, giving each molecule of CO2 a residence time of roughly eight years in the atmosphere before it is taken up, somewhere in the world, by photosynthesis. […] There are 3.8 × 10^19 moles of molecular oxygen (O2) in today’s atmosphere, and oxidative weathering removes around 1 × 10^13 moles of O2 per year, giving oxygen a residence time of around four million years with respect to removal by oxidative weathering. This makes the oxygen cycle […] a geological timescale cycle.”
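
The residence-time arithmetic in that passage is simply reservoir size divided by removal flux. A quick check using the figures quoted above:

```python
# Residence time = reservoir size / exchange flux, using the figures quoted above.
reservoirs = {
    # name: (reservoir in moles, removal flux in moles per year)
    "atmospheric CO2 (removed by photosynthesis)": (7e16, 9e15),
    "atmospheric O2 (removed by oxidative weathering)": (3.8e19, 1e13),
}

for name, (amount, flux) in reservoirs.items():
    print(f"{name}: residence time ~ {amount / flux:,.0f} years")
# CO2 comes out at roughly 8 years and O2 at roughly 4 million years,
# matching the numbers in the passage.
```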

“The water cycle is the physical circulation of water around the planet, between the ocean (where 97 per cent is stored), atmosphere, ice sheets, glaciers, sea-ice, freshwaters, and groundwater. […] To change the phase of water from solid to liquid or liquid to gas requires energy, which in the climate system comes from the Sun. Equally, when water condenses from gas to liquid or freezes from liquid to solid, energy is released. Solar heating drives evaporation from the ocean. This is responsible for supplying about 90 per cent of the water vapour to the atmosphere, with the other 10 per cent coming from evaporation on the land and freshwater surfaces (and sublimation of ice and snow directly to vapour). […] The water cycle is intimately connected to other biogeochemical cycles […]. Many compounds are soluble in water, and some react with water. This makes the ocean a key reservoir for several essential elements. It also means that rainwater can scavenge soluble gases and aerosols out of the atmosphere. When rainwater hits the land, the resulting solution can chemically weather rocks. Silicate weathering in turn helps keep the climate in a state where water is liquid.”

“In modern terms, plants acquire their carbon from carbon dioxide in the atmosphere, add electrons derived from water molecules to the carbon, and emit oxygen to the atmosphere as a waste product. […] In energy terms, global photosynthesis today captures about 130 terawatts (1 TW = 10^12 W) of solar energy in chemical form — about half of it in the ocean and about half on land. […] All the breakdown pathways for organic carbon together produce a flux of carbon dioxide back to the atmosphere that nearly balances photosynthetic uptake […] The surface recycling system is almost perfect, but a tiny fraction (about 0.1 per cent) of the organic carbon manufactured in photosynthesis escapes recycling and is buried in new sedimentary rocks. This organic carbon burial flux leaves an equivalent amount of oxygen gas behind in the atmosphere. Hence the burial of organic carbon represents the long-term source of oxygen to the atmosphere. […] the Earth’s crust has much more oxygen trapped in rocks in the form of oxidized iron and sulphur, than it has organic carbon. This tells us that there has been a net source of oxygen to the crust over Earth history, which must have come from the loss of hydrogen to space.”

“The oxygen cycle is relatively simple, because the reservoir of oxygen in the atmosphere is so massive that it dwarfs the reservoirs of organic carbon in vegetation, soils, and the ocean. Hence oxygen cannot get used up by the respiration or combustion of organic matter. Even the combustion of all known fossil fuel reserves can only put a small dent in the much larger reservoir of atmospheric oxygen (there are roughly 4 × 10^17 moles of fossil fuel carbon, which is only about 1 per cent of the O2 reservoir). […] Unlike oxygen, the atmosphere is not the major surface reservoir of carbon. The amount of carbon in global vegetation is comparable to that in the atmosphere and the amount of carbon in soils (including permafrost) is roughly four times that in the atmosphere. Even these reservoirs are dwarfed by the ocean, which stores forty-five times as much carbon as the atmosphere, thanks to the fact that CO2 reacts with seawater. […] The exchange of carbon between the atmosphere and the land is largely biological, involving photosynthetic uptake and release by aerobic respiration (and, to a lesser extent, fires). […] Remarkably, when we look over Earth history there are fluctuations in the isotopic composition of carbonates, but no net drift up or down. This suggests that there has always been roughly one-fifth of carbon being buried in organic form and the other four-fifths as carbonate rocks. Thus, even on the early Earth, the biosphere was productive enough to support a healthy organic carbon burial flux.”

“The two most important nutrients for life are phosphorus and nitrogen, and they have very different biogeochemical cycles […] The largest reservoir of nitrogen is in the atmosphere, whereas the heavier phosphorus has no significant gaseous form. Phosphorus thus presents a greater recycling challenge for the biosphere. All phosphorus enters the surface Earth system from the chemical weathering of rocks on land […]. Phosphorus is concentrated in rocks in grains or veins of the mineral apatite. Natural selection has made plants on land and their fungal partners […] very effective at acquiring phosphorus from rocks, by manufacturing and secreting a range of organic acids that dissolve apatite. […] The average terrestrial ecosystem recycles phosphorus roughly fifty times before it is lost into freshwaters. […] The loss of phosphorus from the land is the ocean’s gain, providing the key input of this essential nutrient. Phosphorus is stored in the ocean as phosphate dissolved in the water. […] removal of phosphorus into the rock cycle balances the weathering of phosphorus from rocks on land. […] Although there is a large reservoir of nitrogen in the atmosphere, the molecules of nitrogen gas (N2) are extremely strongly bonded together, making nitrogen unavailable to most organisms. To split N2 and make nitrogen biologically available requires a remarkable biochemical feat — nitrogen fixation — which uses a lot of energy. In the ocean the dominant nitrogen fixers are cyanobacteria with a direct source of energy from sunlight. On land, various plants form a symbiotic partnership with nitrogen fixing bacteria, making a home for them in root nodules and supplying them with food in return for nitrogen. […] Nitrogen fixation and denitrification form the major input and output fluxes of nitrogen to both the land and the ocean, but there is also recycling of nitrogen within ecosystems. […] There is an intimate link between nutrient regulation and atmospheric oxygen regulation, because nutrient levels and marine productivity determine the source of oxygen via organic carbon burial. However, ocean nutrients are regulated on a much shorter timescale than atmospheric oxygen because their residence times are much shorter—about 2,000 years for nitrogen and 20,000 years for phosphorus.”

“[F]orests […] are vulnerable to increases in oxygen that increase the frequency and ferocity of fires. […] Combustion experiments show that fires only become self-sustaining in natural fuels when oxygen reaches around 17 per cent of the atmosphere. Yet for the last 370 million years there is a nearly continuous record of fossil charcoal, indicating that oxygen has never dropped below this level. At the same time, oxygen has never risen too high for fires to have prevented the slow regeneration of forests. The ease of combustion increases non-linearly with oxygen concentration, such that above 25–30 per cent oxygen (depending on the wetness of fuel) it is hard to see how forests could have survived. Thus oxygen has remained within 17–30 per cent of the atmosphere for at least the last 370 million years.”

“[T]he rate of silicate weathering increases with increasing CO2 and temperature. Thus, if something tends to increase CO2 or temperature it is counteracted by increased CO2 removal by silicate weathering. […] Plants are sensitive to variations in CO2 and temperature, and together with their fungal partners they greatly amplify weathering rates […] the most pronounced change in atmospheric CO2 over Phanerozoic time was due to plants colonizing the land. This started around 470 million years ago and escalated with the first forests 370 million years ago. The resulting acceleration of silicate weathering is estimated to have lowered the concentration of atmospheric CO2 by an order of magnitude […], and cooled the planet into a series of ice ages in the Carboniferous and Permian Periods.”

“The first photosynthesis was not the kind we are familiar with, which splits water and spits out oxygen as a waste product. Instead, early photosynthesis was ‘anoxygenic’ — meaning it didn’t produce oxygen. […] It could have used a range of compounds, in place of water, as a source of electrons with which to fix carbon from carbon dioxide and reduce it to sugars. Potential electron donors include hydrogen (H2) and hydrogen sulphide (H2S) in the atmosphere, or ferrous iron (Fe2+) dissolved in the ancient oceans. All of these are easier to extract electrons from than water. Hence they require fewer photons of sunlight and simpler photosynthetic machinery. The phylogenetic tree of life confirms that several forms of anoxygenic photosynthesis evolved very early on, long before oxygenic photosynthesis. […] If the early biosphere was fuelled by anoxygenic photosynthesis, plausibly based on hydrogen gas, then a key recycling process would have been the biological regeneration of this gas. Calculations suggest that once such recycling had evolved, the early biosphere might have achieved a global productivity up to 1 per cent of the modern marine biosphere. If early anoxygenic photosynthesis used the supply of reduced iron upwelling in the ocean, then its productivity would have been controlled by ocean circulation and might have reached 10 per cent of the modern marine biosphere. […] The innovation that supercharged the early biosphere was the origin of oxygenic photosynthesis using abundant water as an electron donor. This was not an easy process to evolve. To split water requires more energy — i.e. more high-energy photons of sunlight — than any of the earlier anoxygenic forms of photosynthesis. Evolution’s solution was to wire together two existing ‘photosystems’ in one cell and bolt on the front of them a remarkable piece of biochemical machinery that can rip apart water molecules. The result was the first cyanobacterial cell — the ancestor of all organisms performing oxygenic photosynthesis on the planet today. […] Once oxygenic photosynthesis had evolved, the productivity of the biosphere would no longer have been restricted by the supply of substrates for photosynthesis, as water and carbon dioxide were abundant. Instead, the availability of nutrients, notably nitrogen and phosphorus, would have become the major limiting factors on the productivity of the biosphere — as they still are today.” [If you’re curious to know more about how that fascinating ‘biochemical machinery’ works, this is a great book on these and related topics – US].

“On Earth, anoxygenic photosynthesis requires one photon per electron, whereas oxygenic photosynthesis requires two photons per electron. On Earth it took up to a billion years to evolve oxygenic photosynthesis, based on two photosystems that had already evolved independently in different types of anoxygenic photosynthesis. Around a fainter K- or M-type star […] oxygenic photosynthesis is estimated to require three or more photons per electron — and a corresponding number of photosystems — making it harder to evolve. […] However, fainter stars spend longer on the main sequence, giving more time for evolution to occur.”

“There was a lot more energy to go around in the post-oxidation world, because respiration of organic matter with oxygen yields an order of magnitude more energy than breaking food down anaerobically. […] The revolution in biological complexity culminated in the ‘Cambrian Explosion’ of animal diversity 540 to 515 million years ago, in which modern food webs were established in the ocean. […] Since then the most fundamental change in the Earth system has been the rise of plants on land […], beginning around 470 million years ago and culminating in the first global forests by 370 million years ago. This doubled global photosynthesis, increasing flows of materials. Accelerated chemical weathering of the land surface lowered atmospheric carbon dioxide levels and increased atmospheric oxygen levels, fully oxygenating the deep ocean. […] Although grasslands now cover about a third of the Earth’s productive land surface they are a geologically recent arrival. Grasses evolved amidst a trend of declining atmospheric carbon dioxide, and climate cooling and drying, over the past forty million years, and they only became widespread in two phases during the Miocene Epoch around seventeen and six million years ago. […] Since the rise of complex life, there have been several mass extinction events. […] whilst these rolls of the extinction dice marked profound changes in evolutionary winners and losers, they did not fundamentally alter the operation of the Earth system.” [If you’re interested in this kind of stuff, the evolution of food webs and so on, Herrera et al.’s wonderful book is a great place to start – US]

“The Industrial Revolution marks the transition from societies fuelled largely by recent solar energy (via biomass, water, and wind) to ones fuelled by concentrated ‘ancient sunlight’. Although coal had been used in small amounts for millennia, for example for iron making in ancient China, fossil fuel use only took off with the invention and refinement of the steam engine. […] With the Industrial Revolution, food and biomass have ceased to be the main source of energy for human societies. Instead the energy contained in annual food production, which supports today’s population, is at fifty exajoules (1 EJ = 10^18 joules), only about a tenth of the total energy input to human societies of 500 EJ/yr. This in turn is equivalent to about a tenth of the energy captured globally by photosynthesis. […] solar energy is not very efficiently converted by photosynthesis, which is 1–2 per cent efficient at best. […] The amount of sunlight reaching the Earth’s land surface (2.5 × 10^16 W) dwarfs current total human power consumption (1.5 × 10^13 W) by more than a factor of a thousand.”
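
The ratios in that passage are easy to verify with a few lines of arithmetic using the quoted figures (the only added assumption is the number of seconds per year used to convert 130 TW of photosynthesis into an annual energy figure):

```python
# Quick arithmetic on the energy figures quoted above.
food_energy      = 50e18                 # J/yr captured in annual food production
human_energy_use = 500e18                # J/yr total energy input to human societies
photosynthesis   = 130e12 * 3.15e7       # 130 TW expressed as J/yr (~3.15e7 s per year)
sunlight_on_land = 2.5e16                # W reaching the Earth's land surface
human_power      = 1.5e13                # W, current total human power consumption

print(f"food / total human energy use     : {food_energy / human_energy_use:.0%}")
print(f"human energy use / photosynthesis : {human_energy_use / photosynthesis:.0%}")
print(f"sunlight on land / human power    : {sunlight_on_land / human_power:,.0f}x")
# Food is about a tenth of society's energy input, which is in turn roughly a
# tenth of global photosynthesis, and sunlight on land exceeds human power
# consumption by well over a factor of a thousand -- as the passage says.
```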

“The Earth system’s primary energy source is sunlight, which the biosphere converts and stores as chemical energy. The energy-capture devices — photosynthesizing organisms — construct themselves out of carbon dioxide, nutrients, and a host of trace elements taken up from their surroundings. Inputs of these elements and compounds from the solid Earth system to the surface Earth system are modest. Some photosynthesizers have evolved to increase the inputs of the materials they need — for example, by fixing nitrogen from the atmosphere and selectively weathering phosphorus out of rocks. Even more importantly, other heterotrophic organisms have evolved that recycle the materials that the photosynthesizers need (often as a by-product of consuming some of the chemical energy originally captured in photosynthesis). This extraordinary recycling system is the primary mechanism by which the biosphere maintains a high level of energy capture (productivity).”

“[L]ike all stars on the ‘main sequence’ (which generate energy through the nuclear fusion of hydrogen into helium), the Sun is burning inexorably brighter with time — roughly 1 per cent brighter every 100 million years — and eventually this will overheat the planet. […] Over Earth history, the silicate weathering negative feedback mechanism has counteracted the steady brightening of the Sun by removing carbon dioxide from the atmosphere. However, this cooling mechanism is near the limits of its operation, because CO2 has fallen to limiting levels for the majority of plants, which are key amplifiers of silicate weathering. Although a subset of plants have evolved which can photosynthesize down to lower CO2 levels [the author does not go further into this topic, but here’s a relevant link – US], they cannot draw CO2 down lower than about 10 ppm. This means there is a second possible fate for life — running out of CO2. Early models projected either CO2 starvation or overheating […] occurring about a billion years in the future. […] Whilst this sounds comfortingly distant, it represents a much shorter future lifespan for the Earth’s biosphere than its past history. Earth’s biosphere is entering its old age.”

September 28, 2017 | Astronomy, Biology, Books, Botany, Chemistry, Geology, Paleontology, Physics

Sound

I gave the book two stars. As I was writing this post I was actually reconsidering, thinking about whether that was too harsh, whether the book deserved a third star. When I started out reading it I was assuming it would be a ‘physics book’ (I found it via browsing a list of physics books, so…), but that quickly turned out to be a mistaken assumption. There’s stuff about wave mechanics in there, sure, but this book also includes stuff about anatomy (a semi-detailed coverage of how the ear works), how musical instruments work, how bats use echolocation to find insects, and how animals who live underwater hear differently from the way we hear things. This book is really ‘all over the place’, which was probably part of why I didn’t like it as much as I might otherwise have. Lots of interesting stuff included, though – I learned quite a bit from this book.

I’ve added some quotes from the book below, and below the quotes I’ve added some links to stuff/concepts/etc. covered in the book.

“Decibels aren’t units — they are ratios […] To describe the sound of a device in decibels, it is vital to know what you are comparing it with. For airborne sound, the comparison is with a sound that is just hearable (corresponding to a pressure of twenty micropascals). […] Ultrasound engineers don’t care how much ‘louder than you can just about hear’ their ultrasound is, because no one can hear it in the first place. It’s power they like, and it’s watts they measure it in. […] Few of us care how much sound an object produces — what we want to know is how loud it will sound. And that depends on how far away the thing is. This may seem obvious, but it means that we can’t ever say that the SPL [sound pressure level] of a car horn is 90 dB, only that it has that value at some stated distance.”
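
The point that decibels are ratios can be made concrete with a short sketch. The reference pressure of 20 micropascals comes from the passage; the example pressure used for the car-horn line is my own illustrative value.

```python
import math

# Sketch of the SPL calculation described above: decibels express a ratio
# relative to the just-audible reference pressure of 20 micropascals.
P_REF = 20e-6   # reference pressure, pascals

def spl_db(pressure_pa):
    """Sound pressure level in dB relative to 20 micropascals."""
    return 20 * math.log10(pressure_pa / P_REF)

print(f"20 micropascals -> {spl_db(20e-6):.0f} dB  (threshold of hearing)")
print(f"0.63 Pa         -> {spl_db(0.63):.0f} dB (e.g. a horn at some stated distance)")
# Because a dB value encodes a ratio, a figure like '90 dB' is meaningless
# without the reference pressure and the measurement distance.
```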

“For an echo to be an echo, it must be heard more than about 1/20 of a second after the sound itself. If heard before that, the ear responds as if to a single, louder, sound. Thus 1/20 second is the auditory equivalent to the 1/5 of a second that our eyes need to see a changing thing as two separate images. […] Since airborne sounds travel about 10 metres in 1/20 second, rooms larger than this (in any dimension) are echo chambers waiting to happen.”

“Being able to hear is unremarkable: powerful sounds shake the body and can be detected even by single-celled organisms. But being able to hear as well as we do is little short of miraculous: we can quite easily detect a sound which delivers a power of 10^-15 watts to the eardrums, despite the fact that it moves them only a fraction of the width of a hydrogen atom. Almost as impressive is the range of sound powers we can hear. The gap between the quietest audible sound level (the threshold of hearing, 0 dB) to the threshold of pain (around 130 dB) is huge: 130 dB is 10^13 […] We can also hear a fairly wide range of frequencies; about ten octaves, a couple more than a piano keyboard. […] Our judgement of directionality, by contrast, is mediocre; even in favourable conditions we can only determine the direction of a sound’s source within about 10° horizontally or 20° vertically; many other animals can do very much better. […] Perhaps the most impressive of all our hearing abilities is that we can understand words whose levels are less than 10 per cent of that of background noise level (if that background is a broad spread of frequencies): this far surpasses any machine.”

“The nerve signals that emerge from the basilar membrane are not mimics of sound waves, but coded messages which contain three pieces of information: (a) how many nerve fibres are signalling at once, (b) how far along the basilar membrane those fibres are, and (c) how long the interval is between bursts of fibre signals. The brain extracts loudness information from a combination of (a) and (c), and pitch information from (b) and (c). […] The hearing system is a delicate one, and severe damage to the eardrums or ossicles is not uncommon. […] This condition is called conductive hearing loss. If damage to the inner ear or auditory nerve occurs, the result is sensorineural or ‘nerve’ hearing loss. It mostly affects higher frequencies and quieter sounds; in mild forms, it gives rise to a condition called recruitment, in which there is a sudden jump in the ‘hearability’ of sounds. A person suffering from recruitment and exposed to a sound of gradually increasing level can at first detect nothing and then suddenly hears the sound, which seems particularly loud. Hence the ‘there’s no need to shout’ protest in response to those who raise their voices just a little to make themselves heard on a second attempt. Sensorineural hearing loss is the commonest type, and its commonest cause is physical damage inflicted on the hair cells. […] About 360 million people worldwide (over 5 per cent of the global population) have ‘disabling’ hearing loss — that is, hearing loss greater than 40 dB in the better-hearing ear in adults and a hearing loss greater than 30 dB in the better-hearing ear in children […]. About one in three people over the age of sixty-five suffer from such hearing loss. […] [E]veryone’s ability to hear high-frequency sounds declines with age: newborn, we can hear up to 20 kHz, by the age of about forty this has fallen to around 16 kHz, and to 10 kHz by age sixty. Aged eighty, most of us are deaf to sounds above 8 kHz. The effect is called presbyacusis”.

“The acoustic reflex is one cause of temporary threshold shift (TTS), in which sounds which are usually quiet become inaudible. Unfortunately, the time the reflex takes to work […] is usually around 45 milliseconds, which is far longer than it takes an impulse sound, like a gunshot or explosion, to do considerable damage. […] Where the overburdened ear differs from other abused measuring instruments (biological and technological) is that it is not only the SPL of noise that matters: energy counts too. A noise at a level which would cause no more than irritation if listened to for a second can lead to significant hearing loss if it continues for an hour. The amount of TTS is proportional to the logarithm of the time for which the noise has been present — that is, doubling the exposure time more than doubles the amount. […] The amount of TTS reduces considerably if there is a pause in the noise, so if exposure to noise for long periods is unavoidable […], there is very significant benefit in removing oneself from the noisy area, if only for fifteen minutes.”

“Many highly effective technological solutions to noise have been developed. […] The first principle of noise control is to identify the source and remove it. […] Having dealt as far as possible with the noise source, the next step is to contain it. […] When noise can be neither avoided nor contained, the next step is to keep its sources well separated from potential sufferers. One approach, used for thousands of years, is zoning: legislating for the restriction of noisy activities to particular areas, such as industrial zones, which are distant from residential districts. […] Where zone separation by distance is impracticable […], sound barriers are the main solution: a barrier that just cuts off the sight of a noise source will reduce the noise level by about 5 dB, and each additional metre will provide about an extra 1.5 dB reduction. […] Since barriers largely reflect rather than absorb, reflected sounds need consideration, but otherwise design and construction are simple, results are predictable, and costs are relatively low.”
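
The barrier rule of thumb quoted there translates directly into a one-line estimate; this sketch simply encodes the 5 dB plus 1.5 dB per metre figures from the passage.

```python
# Rule-of-thumb barrier attenuation from the passage above: about 5 dB for
# just cutting off the line of sight, plus roughly 1.5 dB per extra metre.
def barrier_reduction_db(extra_height_m):
    """Approximate noise reduction for a barrier (sketch of the rule of thumb above)."""
    return 5.0 + 1.5 * extra_height_m

for h in (0, 1, 2, 4):
    print(f"barrier {h} m above the line of sight: ~{barrier_reduction_db(h):.1f} dB reduction")
```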

“[T]he basic approaches to home sound reduction are simple: stop noise entering, destroy what does get in, and don’t add more to it yourself. There are three ways for sound to enter: via openings; by structure-borne vibration; and through walls, windows, doors, ceilings, and floors acting as diaphragms. In all three cases, the main point to bear in mind is that an acoustic shell is only as good as its weakest part: just as even a small hole in an otherwise watertight ship’s hull renders the rest useless, so does a single open window in a double-glazed house. In fact, the situation with noise is much worse than with water due to the logarithmic response of our ears: if we seal one of two identical holes in a boat we will halve the inflow. If we close one of two identical windows into a house, […] that 50 per cent reduction in acoustic intensity is only about a 2 per cent reduction in loudness. The second way to keep noise out is double glazing, since single-glazed windows make excellent diaphragms. Structure-borne sound is a much greater challenge […] One inexpensive, adaptable, and effective solution […] is the hanging of heavy velour drapes, with as many folds as possible. If something more drastic is required, it is vital to involve an expert: while an obvious solution is to thicken walls, it’s important to bear in mind that doubling thickness reduces transmission loss by only 6 dB (a sound power reduction of about three-quarters, but a loudness reduction of only about 40 per cent). This means that solid walls need to be very thick to work well. A far better approach is the use of porous absorbers and of multi-layer constructions. In a porous absorber like glass fibre, higher-frequency sound waves are lost through multiple reflections from the many internal surfaces. […] A well-fitted acoustically insulated door is also vital. The floor should not be neglected: even if there are no rooms beneath, hard floors are excellent both at generating noise when walked on and in transmitting that noise throughout the building. Carpet and underlay are highly effective at high frequencies but are almost useless at lower ones […] again there is no real alternative to bringing in an expert.”

“There are two reasons for the apparent silence of the sea: one physical, the other biological. The physical one is the impedance mismatch between air and water, in consequence of which the surface acts as an acoustic mirror, reflecting back almost all sound from below, so that land-dwellers hear no more than the breaking of the waves. […] underwater, the eardrum has water on one side and air on the other, and so impedance mismatching once more prevents most sound from entering. If we had no eardrums (nor air-filled middle ears) we would probably hear very well underwater. Underwater animals don’t need such complicated ears as ours: since the water around them is a similar density to their flesh, sound enters and passes through their whole bodies easily […] because the velocity of sound is about five times greater in water than in air, the wavelength corresponding to a particular frequency is also about five times greater than its airborne equivalent, so directionality is harder to come by.”

“Although there is little that electromagnetic radiation does above water that sound cannot do below it, sound has one unavoidable disadvantage: its velocity in water is much lower than that of electromagnetic radiation in air […]. Also, when waves are used to send data, the rate of that data transmission is directly proportional to the wave frequency — and audio sound waves are around 1,000 times lower in frequency than radio waves. For this reason ultrasound is used instead, since its frequencies can match those of radio waves. Another advantage is that it is easier to produce directional beams at ultrasonic frequencies to send the signal in only the direction you want. […] The distances over which sound can travel underwater are amazing. […] sound waves are absorbed far less in water than in air. At 1 kHz, absorption is about 5 dB/km in air (at 30 per cent humidity) but only 0.06 dB/km in seawater. Also, underwater sound waves are much more confined; a noise made in mid-air spreads in all directions, but in the sea the bed and the surface limit vertical spreading. […] The range of sound velocities underwater is [also] far larger than in air, because of the enormous variations in density, which is affected by temperature, pressure, and salinity […] somewhere under all oceans there is a layer at which sound velocity is low, sandwiched between regions in which it is higher. By refraction, sound waves from both above and below are diverted towards the region of minimum sound velocity, and are trapped there. This is the deep sound channel, a thin spherical shell extending through the world’s oceans. Since sound waves in the deep sound channel can move only horizontally, their intensity falls in proportion only to the distance they travel, rather than to the square of the distance, as they would in air or in water at a single temperature (in other words, they spread out in circles, not spheres). Sound absorption in the deep sound channel is very low […] and sound waves in the deep channel can readily circumnavigate the Earth.”
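
The difference between spreading in circles and spreading in spheres can be put in numbers with the standard transmission-loss expressions (10·log10(r) for cylindrical spreading, 20·log10(r) for spherical spreading); this is a textbook sketch of my own, not a calculation from the book.

```python
import math

# Sketch of why the deep sound channel carries sound so far: intensity falls
# with distance r (cylindrical spreading) rather than with r^2 (spherical).
def spreading_loss_db(r_m, cylindrical, r_ref=1.0):
    """Transmission loss relative to the reference distance, in dB."""
    factor = 10 if cylindrical else 20     # 10*log10(r) vs 20*log10(r)
    return factor * math.log10(r_m / r_ref)

for r in (1e3, 1e5, 1e7):    # 1 km, 100 km, 10,000 km
    sph = spreading_loss_db(r, cylindrical=False)
    cyl = spreading_loss_db(r, cylindrical=True)
    print(f"{r/1e3:>8.0f} km: spherical {sph:5.0f} dB, channel (cylindrical) {cyl:5.0f} dB")
# Over ocean-scale distances the channel's cylindrical spreading loses tens of
# dB less than free spreading would, which (together with very low absorption)
# is why sound in the deep channel can travel around the globe.
```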

Links:

Sound.
Neuromast.
Monochord.
Echo.
Pierre-Simon Laplace.
Sonar.
Foley.
Long Range Acoustic Device.
Physics of sound.
Speed of sound.
Shock wave.
Doppler effect.
Acoustic mirror.
Acoustic impedance.
Snell’s law.
Diffraction grating.
Interference (wave propagation).
Acousto-optic effect.
Sound pressure.
Sound intensity.
Square-cube law.
Decibel.
Ultrasound.
Sound level meter.
Phon.
Standing wave.
Harmonic.
Resonance.
Helmholtz resonance.
Phonautograph.
Spectrogram.
Fourier series/Fourier transform/Fast Fourier transform.
Equalization (audio).
Absolute pitch.
Consonance and dissonance.
Pentatonic scale.
Major and minor.
Polyphony.
Rhythm.
Pitched percussion instrument/Unpitched percussion instrument.
Hearing.
Ear/pinna/tympanic membrane/Eustachian tube/Middle ear/Inner ear/Cochlea/Organ of Corti.
Otoacoustic emission.
Broca’s area/primary auditory cortex/Wernicke’s area/Haas effect.
Conductive hearing loss/Sensorineural hearing loss.
Microphone/Carbon microphone/Electret microphone/Ribbon microphone.
Piezoelectric effect.
Loudspeaker.
Missing fundamental.
Huffman coding.
Animal echolocation.
Phonon.
Infrasound.
Hydrophone.
Deep sound channel.
Tonpilz.
Stokes’ law of sound attenuation.
Noise.
Acoustic reflex.
Temporary threshold shift.
Active noise cancellation.
Sabine equation.

September 14, 2017 | Books, Physics

Light

I gave the book two stars. Some quotes and links below.

“Lenses are ubiquitous in image-forming devices […] Imaging instruments have two components: the lens itself, and a light detector, which converts the light into, typically, an electrical signal. […] In every case the location of the lens with respect to the detector is a key design parameter, as is the focal length of the lens which quantifies its ‘ray-bending’ power. The focal length is set by the curvature of the surfaces of the lens and its thickness. More strongly curved surfaces and thicker materials are used to make lenses with short focal lengths, and these are used usually in instruments where a high magnification is needed, such as a microscope. Because the refractive index of the lens material usually depends on the colour of light, rays of different colours are bent by different amounts at the surface, leading to a focus for each colour occurring in a different position. […] lenses with a big diameter and a short focal length will produce the tiniest images of point-like objects. […] about the best you can do in any lens system you could actually make is an image size of approximately one wavelength. This is the fundamental limit to the pixel size for lenses used in most optical instruments, such as cameras and binoculars. […] Much more sophisticated methods are required to see even smaller things. The reason is that the wave nature of light puts a lower limit on the size of a spot of light. […] At the other extreme, both ground- and space-based telescopes for astronomy are very large instruments with relatively simple optical imaging components […]. The distinctive feature of these imaging systems is their size. The most distant stars are very, very faint. Hardly any of their light makes it to the Earth. It is therefore very important to collect as much of it as possible. This requires a very big lens or mirror”.
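
The claim that big-diameter, short-focal-length lenses give the smallest images of point-like objects, with a floor of roughly one wavelength, can be illustrated with the standard Airy-disc estimate (spot radius of about 1.22 · wavelength · f/D). This is a common textbook relation rather than the book’s own formula, and the lens dimensions below are assumed for illustration.

```python
# Diffraction-limited spot size estimate (standard Airy-disc relation, not the
# book's own formula); lens dimensions below are assumed, illustrative values.
wavelength = 550e-9     # green light, m

def spot_radius_m(focal_length_m, diameter_m):
    """Approximate radius of the diffraction-limited focal spot."""
    return 1.22 * wavelength * focal_length_m / diameter_m

for f, D in [(0.05, 0.025), (0.005, 0.005)]:   # camera-like lens vs microscope-like lens
    r = spot_radius_m(f, D)
    print(f"f = {f*1e3:.0f} mm, D = {D*1e3:.0f} mm -> spot ~ {r*1e6:.2f} micrometres")
# Even an aggressive f/D of about 1 gives a spot of roughly one wavelength,
# which is the limit described in the passage.
```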

“[W]hat sort of wave is light? This was […] answered in the 19th century by James Clerk Maxwell, who showed that it is an oscillation of a new kind of entity: the electromagnetic field. This field is effectively a force that acts on electric charges and magnetic materials. […] In the early 19th century, Michael Faraday had shown the close connections between electric and magnetic fields. Maxwell brought them together, as the electromagnetic force field. […] in the wave model, light can be considered as very high frequency oscillations of the electromagnetic field. One consequence of this idea is that moving electric charges can generate light waves. […] When […] charges accelerate — that is, when they change their speed or their direction of motion — then a simple law of physics is that they emit light. Understanding this was one of the great achievements of the theory of electromagnetism.”

“It was the observation of interference effects in a famous experiment by Thomas Young in 1803 that really put the wave picture of light as the leading candidate as an explanation of the nature of light. […] It is interference of light waves that causes the colours in a thin film of oil floating on water. Interference transforms very small distances, on the order of the wavelength of light, into very big changes in light intensity — from no light to four times as bright as the individual constituent waves. Such changes in intensity are easy to detect or see, and thus interference is a very good way to measure small changes in displacement on the scale of the wavelength of light. Many optical sensors are based on interference effects.”
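
The ‘from no light to four times as bright’ claim follows directly from adding two equal-amplitude waves: the combined intensity is 4·I0·cos²(Δφ/2). A small numerical illustration of my own, not taken from the book:

```python
import numpy as np

# Two equal-amplitude waves of intensity I0 each: the combined intensity is
# I = 4 * I0 * cos^2(delta/2), so it swings between 0 (destructive) and 4*I0
# (constructive) as the path difference changes by fractions of a wavelength.
I0 = 1.0
delta = np.linspace(0, 2 * np.pi, 9)      # phase difference between the two waves
I = 4 * I0 * np.cos(delta / 2) ** 2
for d, i in zip(delta, I):
    print(f"phase difference {d:5.2f} rad  ->  intensity {i:4.2f} x I0")
```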

“[L]ight beams […] gradually diverge as they propagate. This is because a beam of light, which by definition has a limited spatial extent, must be made up of waves that propagate in more than one direction. […] This phenomenon is called diffraction. […] if you want to transmit light over long distances, then diffraction could be a problem. It will cause the energy in the light beam to spread out, so that you would need a bigger and bigger optical system and detector to capture all of it. This is important for telecommunications, since nearly all of the information transmitted over long-distance communications links is encoded on to light beams. […] The means to manage diffraction so that long-distance communication is possible is to use wave guides, such as optical fibres.”

“[O]ptical waves […] guided along a fibre or in a glass ‘chip’ […] underpins the long-distance telecommunications infrastructure that connects people across different continents and powers the Internet. The reason it is so effective is that light-based communications have much more capacity for carrying information than do electrical wires, or even microwave cellular networks. […] In optical communications, […] bits are represented by the intensity of the light beam — typically low intensity is a 0 and higher intensity a 1. The more of these that arrive per second, the faster the communication rate. […] Why is optics so good for communications? There are two reasons. First, light beams don’t easily influence each other, so that a single fibre can support many light pulses (usually of different colours) simultaneously without the messages getting scrambled up. The reason for this is that the glass of which the fibre is made does not absorb light (or only absorbs it in tiny amounts), and so does not heat up and disrupt other pulse trains. […] the ‘crosstalk’ between light beams is very weak in most materials, so that many beams can be present at once without causing a degradation of the signal. This is very different from electrons moving down a copper wire, which is the usual way in which local ‘wired’ communications links function. Electrons tend to heat up the wire, dissipating their energy. This makes the signals harder to receive, and thus the number of different signal channels has to be kept small enough to avoid this problem. Second, light waves oscillate at very high frequencies, and this allows very short pulses to be generated. This means that the pulses can be spaced very close together in time, making the transmission of more bits of information per second possible. […] Fibre-based optical networks can also support a very wide range of colours of light.”
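
A toy illustration of the intensity-modulation idea described in the quote – low intensity encodes a 0, higher intensity a 1, with a simple threshold at the receiver. The intensity levels here are arbitrary assumptions of mine; real transceivers are of course far more sophisticated:

```python
# Toy on-off keying: low intensity encodes 0, higher intensity encodes 1.
# The intensity levels and threshold are arbitrary illustrative values.
message_bits = [1, 0, 1, 1, 0, 0, 1]
LOW, HIGH = 0.1, 1.0

transmitted = [HIGH if bit else LOW for bit in message_bits]          # light pulse train
threshold = (LOW + HIGH) / 2
received = [1 if level > threshold else 0 for level in transmitted]   # detector decision

assert received == message_bits
print("received bits:", received)
```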

“Waves can be defined by their wavelength, amplitude, and phase […]. Particles are defined by their position and direction of travel […], and a collection of particles by their density […] and range of directions. The media in which the light moves are characterized by their refractive indices. This can vary across space. […] Hamilton showed that what was important was how rapidly the refractive index changed in space compared with the length of an optical wave. That is, if the changes in index took place on a scale of close to a wavelength, then the wave character of light was evident. If it varied more smoothly and very slowly in space then the particle picture provided an adequate description. He showed how the simpler ray picture emerges from the more complex wave picture in certain commonly encountered situations. The appearance of wave-like phenomena, such as diffraction and interference, occurs when the size scales of the wavelength of light and the structures in which it propagates are similar. […] Particle-like behaviour — motion along a well-defined trajectory — is sufficient to describe the situation when all objects are much bigger than the wavelength of light, and have no sharp edges.”

“When things are heated up, they change colour. Take a lump of metal. As it gets hotter and hotter it first glows red, then orange, and then white. Why does this happen? This question stumped many of the great scientists [in the 19th century], including Maxwell himself. The problem was that Maxwell’s theory of light, when applied to this problem, indicated that the colour should get bluer and bluer as the temperature increased, without a limit, eventually moving out of the range of human vision into the ultraviolet—beyond blue—region of the spectrum. But this does not happen in practice. […] Max Planck […] came up with an idea to explain the spectrum emitted by hot objects — so-called ‘black bodies’. He conjectured that when light and matter interact, they do so only by exchanging discrete ‘packets’, or quanta, of energy. […] this conjecture was set to radically change physics.”
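
One way to put numbers on the ‘red, then orange, then white’ progression is Wien’s displacement law, which follows from Planck’s result and keeps the spectral peak at a finite wavelength rather than letting it run off into the ultraviolet. A quick sketch, with temperatures chosen by me for illustration:

```python
# Wien's displacement law: the peak of a black body's spectrum sits at
# wavelength ~ b / T, with b ~ 2.898e-3 m·K. (It is Planck's law, not the
# classical result, that keeps the peak at a finite wavelength.)
b = 2.898e-3
for T in (1000, 3000, 6000, 10000):       # temperatures in kelvin
    peak_nm = b / T * 1e9
    print(f"T = {T:5d} K  ->  peak ~ {peak_nm:5.0f} nm")
```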

“What Dirac did was to develop a quantum mechanical version of Maxwell’s theory of electromagnetic fields. […] It set the quantum field up as the fundamental entity on which the universe is built — neither particle nor wave, but both at once; complete wave–particle duality. It is a beautiful reconciliation of all the phenomena that light exhibits, and provides a framework in which to understand all optical effects, both those from the classical world of Newton, Maxwell, and Hamilton and those of the quantum world of Planck, Einstein, and Bohr. […] Light acts as a particle of more or less well-defined energy when it interacts with matter. Yet it retains its ability to exhibit wave-like phenomena at the same time. The resolution [was] a new concept: the quantum field. Light particles — photons — are excitations of this field, which propagates according to quantum versions of Maxwell’s equations for light waves. Quantum fields, of which light is perhaps the simplest example, are now regarded as being the fundamental entities of the universe, underpinning all types of material and non-material things. The only explanation is that the stuff of the world is neither particle nor wave but both. This is the nature of reality.”

Some links:

Light.
Optics.
Watt.
Irradiance.
Coherence (physics).
Electromagnetic spectrum.
Joseph von Fraunhofer.
Spectroscopy.
Wave.
Transverse wave.
Wavelength.
Spatial frequency.
Polarization (waves).
Specular reflection.
Negative-index metamaterial.
Birefringence.
Interference (wave propagation).
Diffraction.
Young’s interference experiment.
Holography.
Photoactivated localization microscopy.
Stimulated emission depletion (STED) microscopy.
Fourier’s theorem (I found it hard to find a good source on this one. According to the book, “Fourier’s theorem says in simple terms that the smaller you focus light, the broader the range of wave directions you need to achieve this spot”).
X-ray diffraction.
Brewster’s angle.
Liquid crystal.
Liquid crystal display.
Wave–particle duality.
Fermat’s principle.
Wavefront.
Maupertuis’ principle.
Johann Jakob Balmer.
Max Planck.
Photoelectric effect.
Niels Bohr.
Matter wave.
Quantum vacuum.
Lamb shift.
Light-emitting diode.
Fluorescent tube.
Synchrotron radiation.
Quantum state.
Quantum fluctuation.
Spontaneous emission/stimulated emission.
Photodetector.
Laser.
Optical cavity.
X-ray absorption spectroscopy.
Diamond Light Source.
Mode-locking.
Stroboscope.
Femtochemistry.
Spacetime.
Atomic clock.
Time dilation.
High harmonic generation.
Frequency comb.
Optical tweezers.
Bose–Einstein condensate.
Pump probe spectroscopy.
Vulcan laser.
Plasma (physics).
Nonclassical light.
Photon polarization.
Quantum entanglement.
Bell test experiments.
Quantum key distribution/Quantum cryptography/Quantum computing.

August 31, 2017 Posted by | Books, Chemistry, Computer science, Physics | Leave a comment

Magnetism

This book was ‘okay…ish’, but I must admit I was a bit disappointed; the coverage was much too superficial, and I’m reasonably sure the lack of formalism made the coverage harder for me to follow than it needed to be. I gave the book two stars on goodreads.

Some quotes and links below.

Quotes:

“In the 19th century, the principles were established on which the modern electromagnetic world could be built. The electrical turbine is the industrialized embodiment of Faraday’s idea of producing electricity by rotating magnets. The turbine can be driven by the wind or by falling water in hydroelectric power stations; it can be powered by steam which is itself produced by boiling water using the heat produced from nuclear fission or burning coal or gas. Whatever the method, rotating magnets inducing currents feed the appetite of the world’s cities for electricity, lighting our streets, powering our televisions and computers, and providing us with an abundant source of energy. […] rotating magnets are the engine of the modern world. […] Modern society is built on the widespread availability of cheap electrical power, and almost all of it comes from magnets whirling around in turbines, producing electric current by the laws discovered by Oersted, Ampère, and Faraday.”

“Maxwell was the first person to really understand that a beam of light consists of electric and magnetic oscillations propagating together. The electric oscillation is in one plane, at right angles to the magnetic oscillation. Both of them are in directions at right angles to the direction of propagation. […] The oscillations of electricity and magnetism in a beam of light are governed by Maxwell’s four beautiful equations […] Above all, Einstein’s work on relativity was motivated by a desire to preserve the integrity of Maxwell’s equations at all costs. The problem was this: Maxwell had derived a beautiful expression for the speed of light, but the speed of light with respect to whom? […] Einstein deduced that the way to fix this would be to say that all observers will measure the speed of any beam of light to be the same. […] Einstein showed that magnetism is a purely relativistic effect, something that wouldn’t even be there without relativity. Magnetism is an example of relativity in everyday life. […] Magnetic fields are what electric fields look like when you are moving with respect to the charges that ‘cause’ them. […] every time a magnetic field appears in nature, it is because a charge is moving with respect to the observer. Charge flows down a wire to make an electric current and this produces magnetic field. Electrons orbit an atom and this ‘orbital’ motion produces a magnetic field. […] the magnetism of the Earth is due to electrical currents deep inside the planet. Motion is the key in each and every case, and magnetic fields are the evidence that charge is on the move. […] Einstein’s theory of relativity casts magnetism in a new light. Magnetic fields are a relativistic correction which you observe when charges move relative to you.”

“[T]he Bohr–van Leeuwen theorem […] states that if you assume nothing more than classical physics, and then go on to model a material as a system of electrical charges, then you can show that the system can have no net magnetization; in other words, it will not be magnetic. Simply put, there are no lodestones in a purely classical Universe. This should have been a revolutionary and astonishing result, but it wasn’t, principally because it came about 20 years too late to knock everyone’s socks off. By 1921, the initial premise of the Bohr–van Leeuwen theorem, the correctness of classical physics, was known to be wrong […] But when you think about it now, the Bohr–van Leeuwen theorem gives an extraordinary demonstration of the failure of classical physics. Just by sticking a magnet to the door of your refrigerator, you have demonstrated that the Universe is not governed by classical physics.”

“[M]ost real substances are weakly diamagnetic, meaning that when placed in a magnetic field they become weakly magnetic in the opposite direction to the field. Water does this, and since animals are mostly water, it applies to them. This is the basis of Andre Geim’s levitating frog experiment: a live frog is placed in a strong magnetic field and because of its diamagnetism it becomes weakly magnetic. In the experiment, a non-uniformity of the magnetic field induces a force on the frog’s induced magnetism and, hey presto, the frog levitates in mid-air.”

“In a conventional hard disk technology, the disk needs to be spun very fast, around 7,000 revolutions per minute. […] The read head floats on a cushion of air about 15 nanometres […] above the surface of the rotating disk, reading bits off the disk at tens of megabytes per second. This is an extraordinary engineering achievement when you think about it. If you were to scale up a hard disk so that the disk is a few kilometres in diameter rather than a few centimetres, then the read head would be around the size of the White House and would be floating over the surface of the disk on a cushion of air one millimetre thick (the diameter of the head of a pin) while the disk rotated below it at a speed of several million miles per hour (fast enough to go round the equator a couple of dozen times in a second). On this scale, the bits would be spaced a few centimetres apart around each track. Hard disk drives are remarkable. […] Although hard disks store an astonishing amount of information and are cheap to manufacture, they are not fast information retrieval systems. To access a particular piece of information involves moving the head and rotating the disk to a particular spot, taking perhaps a few milliseconds. This sounds quite rapid, but with processors buzzing away and performing operations every nanosecond or so, a few milliseconds is glacial in comparison. For this reason, modern computers often use solid state memory to store temporary information, reserving the hard disk for longer-term bulk storage. However, there is a trade-off between cost and performance.”
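
The scale-up analogy is easy to check with a bit of arithmetic. The disk diameter and the ‘few kilometres’ figure below are my own assumptions, chosen to be typical, and the head height and rim speed come out roughly in line with the numbers quoted above:

```python
import math

# Checking the scale-up analogy with assumed (typical) hard-disk numbers:
# a 9 cm disk spinning at 7,200 rpm with the head flying 15 nm above it.
disk_diameter = 0.09          # metres (assumed)
rpm = 7200
fly_height = 15e-9            # metres

scaled_diameter = 5000.0      # "a few kilometres", in metres (assumed)
scale = scaled_diameter / disk_diameter

rim_speed = math.pi * disk_diameter * rpm / 60          # m/s at the rim
print(f"scale factor      ~ {scale:.0f}")
print(f"scaled fly height ~ {fly_height * scale * 1e3:.1f} mm")       # ~1 mm
print(f"scaled rim speed  ~ {rim_speed * scale:.2e} m/s "
      f"(~ {rim_speed * scale * 2.237:.1e} mph)")                      # millions of mph
```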

“In general, there is a strong economic drive to store more and more information in a smaller and smaller space, and hence a need to find a way to make smaller and smaller bits. […] [However] greater miniaturization comes at a price. The point is the following: when you try to store a bit of information in a magnetic medium, an important constraint on the usefulness of the technology is how long the information will last for. Almost always the information is being stored at room temperature and so needs to be robust to the ever present random jiggling effects produced by temperature […] It turns out that the crucial parameter controlling this robustness is the ratio of the energy needed to reverse the bit of information (in other words, the energy required to change the magnetization from one direction to the reverse direction) to a characteristic energy associated with room temperature (an energy which is, expressed in electrical units, approximately one-fortieth of a Volt). So if the energy to flip a magnetic bit is very large, the information can persist for thousands of years […] while if it is very small, the information might only last for a small fraction of a second […] This energy is proportional to the volume of the magnetic bit, and so one immediately sees a problem with making bits smaller and smaller: though you can store bits of information at higher density, there is a very real possibility that the information might be very rapidly scrambled by thermal fluctuations. This motivates the search for materials in which it is very hard to flip the magnetization from one state to the other.”
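
The book doesn’t give the formula, but the standard Arrhenius/Néel estimate behind this argument is a retention time of roughly τ ≈ τ0·exp(ΔE/kBT), which is exponentially sensitive to the ratio mentioned in the quote. A sketch with illustrative values of my own choosing:

```python
import math

# Neel/Arrhenius estimate: retention time ~ tau0 * exp(dE / (k_B * T)).
# tau0 and the dE/kT ratios below are illustrative assumptions, not book values.
k_B_eV = 8.617e-5      # Boltzmann constant in eV/K
T = 300                # room temperature, K (k_B*T ~ 1/40 eV, as in the quote)
tau0 = 1e-9            # "attempt time" in seconds (a typical assumed value)

for ratio in (10, 40, 80):                 # dE / (k_B * T)
    tau = tau0 * math.exp(ratio)
    print(f"dE = {ratio:3d} x kT  ->  retention ~ {tau:9.3g} s "
          f"({tau / 3.15e7:.3g} years)")
# Small ratios give fractions of a second; large ratios give effectively forever.
```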

“The change in the Earth’s magnetic field over time is a fairly noticeable phenomenon. Every decade or so, compass needles in Africa are shifting by a degree, and the magnetic field overall on planet Earth is about 10% weaker than it was in the 19th century.”

Below I have added some links to topics and people covered/mentioned in the book. Many of the links below have likely also been included in some of the other posts about books from the OUP A Very Short Introduction physics series which I’ve posted this year – the main point of adding these links is to give some idea of what kind of stuff is covered in the book:

Magnetism.
Magnetite.
Lodestone.
William Gilbert/De Magnete.
Alessandro Volta.
Ampère’s circuital law.
Charles-Augustin de Coulomb.
Hans Christian Ørsted.
Leyden jar/voltaic cell/battery (electricity).
Solenoid.
Electromagnet.
Homopolar motor.
Michael Faraday.
Electromagnetic induction.
Dynamo.
Zeeman effect.
Alternating current/Direct current.
Nikola Tesla.
Thomas Edison.
Force field (physics).
Ole Rømer.
Centimetre–gram–second system of units.
James Clerk Maxwell.
Maxwell’s equations.
Permittivity.
Permeability (electromagnetism).
Gauss’ law.
Michelson–Morley experiment.
Special relativity.
Drift velocity.
Curie’s law.
Curie temperature.
Andre Geim.
Diamagnetism.
Paramagnetism.
Exchange interaction.
Magnetic domain.
Domain wall (magnetism).
Stern–Gerlach experiment.
Dirac equation.
Giant magnetoresistance.
Spin valve.
Racetrack memory.
Perpendicular recording.
Bubble memory (“an example of a brilliant idea which never quite made it”, as the author puts it).
Single-molecule magnet.
Spintronics.
Earth’s magnetic field.
Aurora.
Van Allen radiation belt.
South Atlantic Anomaly.
Geomagnetic storm.
Geomagnetic reversal.
Magnetar.
ITER (‘International Thermonuclear Experimental Reactor’).
Antiferromagnetism.
Spin glass.
Quantum spin liquid.
Multiferroics.
Spin ice.
Magnetic monopole.
Ice rules.

August 28, 2017 Posted by | Books, Computer science, Geology, Physics | Leave a comment

Detecting Cosmic Neutrinos with IceCube at the Earth’s South Pole

I thought there were a bit too many questions/interruptions for my taste, mainly because you can’t really hear the questions posed by the members of the audience, but aside from that it’s a decent lecture. I’ve added a few links below which cover some of the topics discussed in the lecture.

Neutrino astronomy.
Antarctic Impulsive Transient Antenna (ANITA).
Hydrophone.
Neutral pion decays.
IceCube Neutrino Observatory.
Evidence for High-Energy Extraterrestrial Neutrinos at the IceCube Detector (Science).
Atmospheric and astrophysical neutrinos above 1 TeV interacting in IceCube.
Notes on isotropy.
Measuring the flavor ratio of astrophysical neutrinos.
Blazar.
Supernova 1987A neutrino emissions.

July 18, 2017 Posted by | Astronomy, Lectures, Physics, Studies | Leave a comment

Gravity

“The purpose of this book is to give the reader a very brief introduction to various different aspects of gravity. We start by looking at the way in which the theory of gravity developed historically, before moving on to an outline of how it is understood by scientists today. We will then consider the consequences of gravitational physics on the Earth, in the Solar System, and in the Universe as a whole. The final chapter describes some of the frontiers of current research in theoretical gravitational physics.”

I was not super impressed by this book, mainly because the level of coverage was not quite as high as that of some of the other physics books in the OUP A Very Short Introduction series. But it’s definitely an okay book about this topic – I was much closer to a three-star rating on goodreads than a one-star rating, and I did learn some new things from it. I might still change my mind about my two-star rating of the book.

I’ll cover the book the same way I’ve covered some of the other books in the series; I’ll post some quotes with some observations of interest, and then I’ll add some supplementary links towards the end of the post. ‘As usual’ (see e.g. also the introductory remarks to this post) I’ll add links to topics even if I have previously, perhaps on multiple occasions, added the same links when covering other books – the idea behind the links is to remind me – and indicate to you – which kinds of topics are covered in the book.

“[O]ver large distances it is gravity that dominates. This is because gravity is only ever attractive and because it can never be screened. So while most large objects are electrically neutral, they can never be gravitationally neutral. The gravitational force between objects with mass always acts to pull those objects together, and always increases as they become more massive.”

“The challenges involved in testing Newton’s law of gravity in the laboratory arise principally due to the weakness of the gravitational force compared to the other forces of nature. This weakness means that even the smallest residual electric charges on a piece of experimental equipment can totally overwhelm the gravitational force, making it impossible to measure. All experimental equipment therefore needs to be prepared with the greatest of care, and the inevitable electric charges that sneak through have to be screened by introducing metal shields that reduce their influence. This makes the construction of laboratory experiments to test gravity extremely difficult, and explains why we have so far only probed gravity down to scales a little below 1mm (this can be compared to around a billionth of a billionth of a millimetre for the electric force).”
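
The weakness the author is referring to is easy to quantify: compare the gravitational and electrostatic forces between two protons (the distance cancels, because both forces fall off as 1/r²). The constants below are standard values, not taken from the book:

```python
# Ratio of gravitational to electrostatic force between two protons.
G   = 6.674e-11        # m^3 kg^-1 s^-2
k_e = 8.988e9          # Coulomb constant, N m^2 C^-2
m_p = 1.673e-27        # proton mass, kg
e   = 1.602e-19        # elementary charge, C

ratio = G * m_p**2 / (k_e * e**2)
print(f"F_gravity / F_electric ~ {ratio:.1e}")   # ~8e-37: gravity is absurdly weak
```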

“There are a large number of effects that result from Einstein’s theory. […] [T]he anomalous orbit of the planet Mercury; the bending of starlight around the Sun; the time delay of radio signals as they pass by the Sun; and the behaviour of gyroscopes in orbit around the Earth […] are four of the most prominent relativistic gravitational effects that can be observed in the Solar System.” [As an aside, I only yesterday watched the first ~20 minutes of the first of Nima Arkani-Hamed’s lectures on the topic of ‘Robustness of GR. Attempts to Modify Gravity’, which was recently uploaded on the IAS youtube channel, before I concluded that I was probably not going to be able to follow the lecture – I would have been able to tell Arkani-Hamed, on account of having read this book, that the ‘American’ astronomer whose name eluded him early on in the lecture (5 minutes in or so) was John Couch Adams (who was in fact British, not American)].

“[T]he overall picture we are left with is very encouraging for Einstein’s theory of gravity. The foundational assumptions of this theory, such as the constancy of mass and the Universality of Free Fall, have been tested to extremely high accuracy. The inverse square law that formed the basis of Newton’s theory, and which is a good first approximation to Einstein’s theory, has been tested from the sub-millimetre scale all the way up to astrophysical scales. […] We […] have very good evidence that Newton’s inverse square law is a good approximation to gravity over a wide range of distance scales. These scales range from a fraction of a millimetre, to hundreds of millions of metres. […] We are also now in possession of a number of accurate experimental results that probe the tiny, subtle effects that result from Einstein’s theory specifically. This data allows us direct experimental insight into the relationship between matter and the curvature of space-time, and all of it is so far in good agreement with Einstein’s predictions.”

“[A]ll of the objects in the Solar System are, relatively speaking, rather slow moving and not very dense. […] If we set our sights a little further though, we can find objects that are much more extreme than anything we have available nearby. […] observations of them have allowed us to explore gravity in ways that are simply impossible in our own Solar System. The extreme nature of these objects amplifies the effects of Einstein’s theory […] Just as the orbit of Mercury precesses around the Sun so too the neutron stars in the Hulse–Taylor binary system precess around each other. To compare with similar effects in our Solar System, the orbit of the Hulse–Taylor pulsar precesses as much in a day as Mercury does in a century.”
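
A quick check of the ‘as much in a day as Mercury does in a century’ comparison, using the standard textbook figures of roughly 43 arcseconds per century for Mercury’s relativistic precession and roughly 4.2° per year for the Hulse–Taylor periastron advance (these specific numbers are not from the book):

```python
# Standard textbook values (assumed, not quoted from the book):
mercury_per_century = 43.0                 # arcseconds per century (relativistic part)
hulse_taylor_per_year = 4.2                # degrees per year (periastron advance)

ht_per_day = hulse_taylor_per_year / 365.25 * 3600   # arcseconds per day
print(f"Mercury:      {mercury_per_century:.0f} arcsec per century")
print(f"Hulse-Taylor: {ht_per_day:.0f} arcsec per day")   # ~41 arcsec/day: roughly the same number
```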

“[I]n Einstein’s theory, gravity is due to the curvature of space-time. Massive objects like stars and planets deform the shape of the space-time in which they exist, so that other bodies that move through it appear to have their trajectories bent. It is the mistaken interpretation of the motion of these bodies as occurring in a flat space that leads us to infer that there is a force called gravity. In fact, it is just the curvature of space-time that is at work. […] The relevance of this for gravitational waves is that if a group of massive bodies are in relative motion […], then the curvature of the space-time in which they exist is not usually fixed in time. The curvature of the space-time is set by the massive bodies, so if the bodies are in motion, the curvature of space-time should be expected to be constantly changing. […] in Einstein’s theory, space-time is a dynamical entity. As an example of this, consider the supernovae […] Before their cores collapse, leading to catastrophic explosion, they are relatively stable objects […] After they explode they settle down to a neutron star or a black hole, and once again return to a relatively stable state, with a gravitational field that doesn’t change much with time. During the explosion, however, they eject huge amounts of mass and energy. Their gravitational field changes rapidly throughout this process, and therefore so does the curvature of the space-time around them.

Like any system that is pushed out of equilibrium and made to change rapidly, this causes disturbances in the form of waves. A more down-to-earth example of a wave is what happens when you throw a stone into a previously still pond. The water in the pond was initially in a steady state, but the stone causes a rapid change in the amount of water at one point. The water in the pond tries to return to its tranquil initial state, which results in the propagation of the disturbance, in the form of ripples that move away from the point where the stone landed. Likewise, a loud noise in a previously quiet room originates from a change in air pressure at a point (e.g. a stereo speaker). The disturbance in the air pressure propagates outwards as a pressure wave as the air tries to return to a stable state, and we perceive these pressure waves as sound. So it is with gravity. If the curvature of space-time is pushed out of equilibrium, by the motion of mass or energy, then this disturbance travels outwards as waves. This is exactly what occurs when a star collapses and its outer envelope is ejected by the subsequent explosion. […] The speed with which waves propagate usually depends on the medium through which they travel. […] The medium for gravitational waves is space-time itself, and according to Einstein’s theory, they propagate at exactly the same speed as light. […] [If a gravitational wave passes through a cloud of gas,] the gravitational wave is not a wave in the gas, but rather a propagating disturbance in the space-time in which the gas exists. […] although the atoms in the gas might be closer together (or further apart) than they were before the wave passed through them, it is not because the atoms have moved, but because the amount of space between them has been decreased (or increased) by the wave. The gravitational wave changes the distance between objects by altering how much space there is in between them, not by moving them within a fixed space.”

“If we look at the right galaxies, or collect enough data, […] we can use it to determine the gravitational fields that exist in space. […] we find that there is more gravity than we expected there to be, from the astrophysical bodies that we can see directly. There appears to be a lot of mass, which bends light via its gravitational field, but that does not interact with the light in any other way. […] Moving to even smaller scales, we can look at how individual galaxies behave. It has been known since the 1970s that the rate at which galaxies rotate is too high. What I mean is that if the only source of gravity in a galaxy was the visible matter within it (mostly stars and gas), then any galaxy that rotated as fast as those we see around us would tear itself apart. […] That they do not fly apart, despite their rapid rotation, strongly suggests that the gravitational fields within them are larger than we initially suspected. Again, the logical conclusion is that there appears to be matter in galaxies that we cannot see but which contributes to the gravitational field. […] Many of the different physical processes that occur in the Universe lead to the same surprising conclusion: the gravitational fields we infer, by looking at the Universe around us, require there to be more matter than we can see with our telescopes. Beyond this, in order for the largest structures in the Universe to have evolved into their current state, and in order for the seeds of these structures to look the way they do in the CMB, this new matter cannot be allowed to interact with light at all (or, at most, interact only very weakly). This means that not only do we not see this matter, but that it cannot be seen at all using light, because light is required to pass straight through it. […] The substance that gravitates in this way but cannot be seen is referred to as dark matter. […] There needs to be approximately five times as much dark matter as there is ordinary matter. […] the evidence for the existence of dark matter comes from so many different sources that it is hard to argue with it.”
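
A minimal sketch of the rotation-curve argument: for a circular orbit v = √(GM(<r)/r), so a roughly flat rotation curve at large radius requires far more enclosed mass than the visible stars and gas provide. The galaxy mass and the 220 km/s figure below are illustrative assumptions of mine, not numbers from the book:

```python
import numpy as np

G = 6.674e-11
M_visible = 1e41          # kg, illustrative "visible" galaxy mass (~5e10 solar masses, assumed)
r = np.linspace(5, 50, 10) * 3.086e19     # 5-50 kpc in metres

v_kepler = np.sqrt(G * M_visible / r)     # expected speed if all mass were central and visible
v_observed = np.full_like(r, 2.2e5)       # roughly flat ~220 km/s curve (illustrative)

# Mass actually needed inside radius r to hold a 220 km/s circular orbit:
M_needed = v_observed**2 * r / G
for ri, vk, Mn in zip(r, v_kepler, M_needed):
    print(f"r = {ri / 3.086e19:4.0f} kpc   v_Kepler = {vk / 1e3:5.0f} km/s   "
          f"M(<r) needed = {Mn / 2e30:.1e} M_sun")
# The needed mass keeps growing with radius, well beyond the visible mass: dark matter.
```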

“[T]here seems to be a type of anti-gravity at work when we look at how the Universe expands. This anti-gravity is required in order to force matter apart, rather than pull it together, so that the expansion of the Universe can accelerate. […] The source of this repulsive gravity is referred to by scientists as dark energy […] our current overall picture of the Universe is as follows: only around 5 per cent of the energy in the Universe is in the form of normal matter; about 25 per cent is thought to be in the form of the gravitationally attractive dark matter; and the remaining 70 per cent is thought to be in the form of the gravitationally repulsive dark energy. These proportions, give or take a few percentage points here and there, seem sufficient to explain all astronomical observations that have been made to date. The total of all three of these types of energy, added together, also seems to be just the right amount to make space flat […] The flat Universe, filled with mostly dark energy and dark matter, is usually referred to as the Concordance Model of the Universe. Among astronomers, it is now the consensus view that this is the model of the Universe that best fits their data.”

 

The universality of free fall.
Galileo’s Leaning Tower of Pisa experiment.
Isaac Newton/Philosophiæ Naturalis Principia Mathematica/Newton’s law of universal gravitation.
Kepler’s laws of planetary motion.
Luminiferous aether.
Special relativity.
Spacetime.
General relativity.
Spacetime curvature.
Pound–Rebka experiment.
Gravitational time dilation.
Gravitational redshift space-probe experiment (Vessot & Levine).
Michelson–Morley experiment.
Hughes–Drever experiment.
Tests of special relativity.
Eötvös experiment.
Torsion balance.
Cavendish experiment.
LAGEOS.
Interferometry.
Geodetic precession.
Frame-dragging.
Gravity Probe B.
White dwarf/neutron star/supernova/gravitational collapse/black hole.
Hulse–Taylor binary.
Arecibo Observatory.
PSR J1738+0333.
Gravitational wave.
Square Kilometre Array.
PSR J0337+1715.
LIGO.
Weber bar.
MiniGrail.
Laser Interferometer Space Antenna.
Edwin Hubble/Hubble’s Law.
Physical cosmology.
Alexander Friedmann/Friedmann equations.
Cosmological constant.
Georges Lemaître.
Ralph Asher Alpher/Robert Hermann/CMB/Arno Penzias/Robert Wilson.
Cosmic Background Explorer.
The BOOMERanG experiment.
Millimeter Anisotropy eXperiment IMaging Array.
Wilkinson Microwave Anisotropy Probe.
High-Z Supernova Search Team.
CfA Redshift Survey/CfA2 Great Wall/2dF Galaxy Redshift Survey/Sloan Digital Sky Survey/Sloan Great Wall.
Gravitational lensing.
Inflation (cosmology).
Lambda-CDM model.
BICEP2.
Large Synoptic Survey Telescope.
Grand Unified Theory.
Renormalization (quantum theory).
String theory.
Loop quantum gravity.
Unruh effect.
Hawking radiation.
Anthropic principle.

July 15, 2017 Posted by | Astronomy, Books, cosmology, Physics | Leave a comment

Probing the Early Universe through Observations of the Cosmic Microwave Background

This lecture/talk is a few years old, but it was only made public on the IAS channel last week (…along with a lot of other lectures – the IAS channel has added a lot of stuff recently, including more than 150 lectures within the last week or so; so if you’re interested you should go have a look).

Below the lecture I have added a few links with stuff (wiki-articles and a few papers) related to the topics covered in the lecture. I didn’t read those links, but I skimmed them (along with a few others, which I decided not to include because their coverage did not overlap sufficiently with the lecture) and added them in order to remind myself what kind of stuff was included in the lecture, and to allow others to infer the same. The links naturally go into a lot more detail than does the lecture, but these are the sort of topics discussed/included.

The lecture is long (90 minutes + a short Q&A), but it was interesting enough for me to watch all of it. The lecturer displays a very high level of speech disfluency throughout the lecture, in the sense that I might not be surprised if I were told that the most commonly encountered word during this lecture was ‘um’ or ‘uh’, rather than a word like ‘the’, which is usually the most frequent, but you get used to it (at least I managed to sort of ‘tune it out’ after a while). I should caution that there’s a short ‘jump’ very early on in the lecture (at the 2 minute mark or so) where a small number of frames were apparently dropped, but that should not scare you away from watching the lecture; that frame drop is the only one of its kind during the lecture, aside from a similar brief ‘jump’ around the 1 hour 9 minute mark.

Some links:

Astronomical interferometer.
Polarimetry.
Bolometer.
Fourier transform.
Boomerang: A Balloon-borne Millimeter Wave Telescope and Total Power Receiver for Mapping Anisotropy in the Cosmic Microwave Background.
Observations of the Temperature and Polarization Anisotropies with Boomerang 2003.
THE COBE DIFFUSE INFRARED BACKGROUND EXPERIMENT SEARCH FOR THE COSMIC INFRARED BACKGROUND: I. LIMITS AND DETECTIONS.
Detection of the Power Spectrum of Cosmic Microwave Background Lensing by the Atacama Cosmology Telescope.
Secondary anisotropies of the CMB (review article).
Planck early results. VIII. The all-sky early Sunyaev-Zeldovich cluster sample.
Sunyaev–Zel’dovich effect.
A CMB Polarization Primer.
MEASUREMENT OF COSMIC MICROWAVE BACKGROUND POLARIZATION POWER SPECTRA FROM TWO YEARS OF BICEP DATA.
Spider: a balloon-borne CMB polarimeter for large angular scales.

July 13, 2017 Posted by | Astronomy, cosmology, Lectures, Physics | Leave a comment

Stars

“Every atom of our bodies has been part of a star, and every informed person should know something of how the stars evolve.”

I gave the book three stars on goodreads. At times it’s a bit too popular-science-y for me, and I think the level of coverage is a little bit lower than that of some of the other physics books in the ‘A Very Short Introduction‘ series by Oxford University Press, but on the other hand it did teach me some new things and explained some other things I knew about but did not fully understand before – and I’m well aware that it can be really hard to strike the right balance when writing books like these. I don’t like it when authors employ analogies instead of equations to explain stuff, but on the other hand I’ve seen some of the relevant equations before, e.g. in the context of IAS lectures, so I was okay with skipping some of the math here, because I know how the math can really blow up in your face fast. It’s not like this book has no math or equations, and I think it’s the kind of math most people should be able to deal with. It’s a decent introduction to the topic, and I must admit I have yet to be significantly disappointed in a book from the physics part of this OUP series – they’re good books, readable and interesting.

Below I have added some quotes and observations from the book, as well as some relevant links to material or people covered in it. Some of the links I have also added previously when covering other books in the physics series, but I do not really mind that, as I try to cover each book separately. The two main ideas behind adding links of this kind are: 1) to remind me which topics were covered in the book (topics which I was unable to cover in detail in the post using quotes, because there is too much stuff in the book for that to make sense), and 2) to give people who might be interested in reading the book an idea of which topics are covered in it; if I neglected to add relevant links simply because those topics were also covered in other books I’ve written about here, the link collection would not accomplish what I’d like it to accomplish. The link collection was gathered while I was reading the book (I was bookmarking relevant wiki articles along the way), whereas the quotes were only added to the post after I had finished adding the links; I am well aware that some topics covered in the quotes also show up in the link collection, but I did not care enough about this ‘double coverage’ to remove those links.

I found it harder to pick good quotes for this post than it has been for some of the other physics books I’ve covered recently, because the author goes into quite some detail explaining some specific dynamics of stellar evolution which are not easy to boil down to a short quote that is still meaningful to people who do not know the context. The fact that he does go into those details was of course part of the reason why I liked the book.

“[W]e cannot consider heat energy in isolation from the other large energy store that the Sun has – gravity. Clearly, gravity is an energy source, since if it were not for the resistance of gas pressure, it would make all the Sun’s gas move inwards at high speed. So heat and gravity are both potential sources of energy, and must be related by the need to keep the Sun in equilibrium. As the Sun tries to cool down, energy must be swapped between these two forms to keep the Sun in balance […] the heat energy inside the Sun is not enough to spread all of its contents out over space and destroy it as an identifiable object. The Sun is gravitationally bound – its heat energy is significant, but cannot supply enough energy to loosen gravity’s grip, and unbind the Sun. This means that when pressure balances gravity for any system (as in the Sun), the total heat energy T is always slightly less than that needed (V) to disperse it. In fact, it turns out to be exactly half of what would be needed for this dispersal, so that 2T + V = 0, or V = −2 T. The quantities T and V have opposite signs, because energy has to be supplied to overcome gravity, that is, you have to use T to try to cancel some of V. […] you need to supply energy to a star in order to overcome its gravity and disperse all of its gas to infinity. In line with this, the star’s total energy (thermal plus gravitational) is E = T + V = −T, that is, the total energy is minus its thermal energy, and so is itself negative. That is, a star is a gravitationally bound object. Whenever the system changes slowly enough that pressure always balances gravity, these two energies always have to be in this 1:2 ratio. […] This reasoning shows that cooling, shrinking, and heating up all go together, that is, as the Sun tries to cool down, its interior heats up. […] Because E = –T, when the star loses energy (by radiating), making its total energy E more negative, the thermal energy T gets more positive, that is, losing energy makes the star heat up. […] This result, that stars heat up when they try to cool, is central to understanding why stars evolve.”
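
The bookkeeping in the quote is simple enough to mimic directly: with 2T + V = 0 and E = −T, radiating energy away makes E more negative and therefore makes T larger. A trivial sketch in arbitrary units (my own illustration, not the author’s):

```python
# Virial relations from the quote: 2T + V = 0, E = T + V = -T.
# Radiating a luminosity L for a time dt makes E more negative; T = -E then rises.
T = 1.0          # thermal energy, arbitrary units
E = -T           # total energy of a virialized star
L, dt, steps = 0.01, 1.0, 5

for step in range(steps):
    E -= L * dt          # energy radiated away into space
    T = -E               # virial theorem: thermal energy after the star re-adjusts
    V = -2 * T           # gravitational energy (more negative => the star has contracted)
    print(f"step {step}:  E = {E:+.2f}   T = {T:.2f}   V = {V:+.2f}")
# Losing energy heats the star up and makes it more tightly bound.
```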

“[T]he whole of chemistry is simply the science of electromagnetic interaction of atoms with each other. Specifically, chemistry is what happens when electrons stick atoms together to make molecules. The electrons doing the sticking are the outer ones, those furthest from the nucleus. The physical rules governing the arrangement of electrons around the nucleus mean that atoms divide into families characterized by their outer electron configurations. Since the outer electrons specify the chemical properties of the elements, these families have similar chemistry. This is the origin of the periodic table of the elements. In this sense, chemistry is just a specialized branch of physics. […] atoms can combine, or react, in many different ways. A chemical reaction means that the electrons sticking atoms together are rearranging themselves. When this happens, electromagnetic energy may be released, […] or an energy supply may be needed […] Just as we measured gravitational binding energy as the amount of energy needed to disperse a body against the force of its own gravity, molecules have electromagnetic binding energies measured by the energies of the orbiting electrons holding them together. […] changes of electronic binding only produce chemical energy yields, which are far too small to power stars. […] Converting hydrogen into helium is about 15 million times more effective than burning oil. This is because strong nuclear forces are so much more powerful than electromagnetic forces.”
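
The ‘15 million times more effective than burning oil’ figure is easy to reproduce with standard numbers: hydrogen fusion releases roughly 0.7 per cent of the rest-mass energy of the fuel, while oil releases around 4×10⁷ J per kg when burnt (both values are standard assumptions of mine, not taken from the book):

```python
c = 3.0e8                                 # m/s
efficiency = 0.007                        # ~0.7% of rest mass released when H fuses to He
fusion_per_kg = efficiency * c**2         # ~6e14 J per kg of hydrogen
oil_per_kg = 4.2e7                        # J per kg, typical heat of combustion (assumed)

print(f"fusion / oil ~ {fusion_per_kg / oil_per_kg:.1e}")   # ~1.5e7, i.e. ~15 million
```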

“[T]here are two chains of reactions which can convert hydrogen to helium. The rate at which they occur is in both cases quite sensitive to the gas density, varying as its square, but extremely sensitive to the gas temperature […] If the temperature is below a certain threshold value, the total energy output from hydrogen burning is completely negligible. If the temperature rises only slightly above this threshold, the energy output becomes enormous. It becomes so enormous that the effect of all this energy hitting the gas in the star’s centre is life-threatening to it. […] energy is related to mass. So being hit by energy is like being hit by mass: luminous energy exerts a pressure. For a luminosity above a certain limiting value related to the star’s mass, the pressure will blow it apart. […] The central temperature of the Sun, and stars like it, must be almost precisely at the threshold value. It is this temperature sensitivity which fixes the Sun’s central temperature at the value of ten million degrees […] All stars burning hydrogen in their centres must have temperatures close to this value. […] central temperature [is] roughly proportional to the ratio of mass to radius [and this means that] the radius of a hydrogen-burning star is approximately proportional to its mass […] You might wonder how the star ‘knows’ that its radius is supposed to have this value. This is simple: if the radius is too large, the star’s central temperature is too low to produce any nuclear luminosity at all. […] the star will shrink in an attempt to provide the luminosity from its gravitational binding energy. But this shrinking is just what it needs to adjust the temperature in its centre to the right value to start hydrogen burning and produce exactly the right luminosity. Similarly, if the star’s radius is slightly too small, its nuclear luminosity will grow very rapidly. This increases the radiation pressure, and forces the star to expand, again back to the right radius and so the right luminosity. These simple arguments show that the star’s structure is self-adjusting, and therefore extremely stable […] The basis of this stability is the sensitivity of the nuclear luminosity to temperature and so radius, which controls it like a thermostat.”
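
The ‘central temperature roughly proportional to the ratio of mass to radius’ statement comes from a virial-type estimate, Tc ~ GMmp/(kBR); plugging in the Sun’s mass and radius lands within a factor of a few of the ten-million-degree threshold quoted above. This back-of-the-envelope form is my own addition, not the book’s derivation:

```python
# Order-of-magnitude virial estimate of the Sun's central temperature:
# T_c ~ G * M * m_p / (k_B * R). A standard back-of-envelope form (assumed, not from the book).
G, k_B, m_p = 6.674e-11, 1.381e-23, 1.673e-27
M_sun, R_sun = 1.989e30, 6.96e8

T_c = G * M_sun * m_p / (k_B * R_sun)
print(f"T_c ~ {T_c:.1e} K")     # ~2e7 K: the right ballpark for the ~1e7 K threshold,
                                # and clearly proportional to M/R
```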

“Hydrogen burning produces a dense and growing ball of helium at the star’s centre. […] the star has a weight problem to solve – the helium ball feels its own weight, and that of all the rest of the star as well. A similar effect led to the ignition of hydrogen in the first place […] we can see what happens as the core mass grows. Let’s imagine that the core mass has doubled. Then the core radius also doubles, and its volume grows by a factor 2 × 2 × 2 = 8. This is a bigger factor than the mass growth, so the density is 2/(2 × 2 × 2) = 1/4 of its original value. We end with the surprising result that as the helium core mass grows in time, its central number density drops. […] Because pressure is proportional to density, the central pressure of the core drops also […] Since the density of the hydrogen envelope does not change over time, […] the helium core becomes less and less able to cope with its weight problem as its mass increases. […] The end result is that once the helium core contains more than about 10% of the star’s mass, its pressure is too low to support the weight of the star, and things have to change drastically. […] massive stars have much shorter main-sequence lifetimes, decreasing like the inverse square of their masses […] A star near the minimum main-sequence mass of one-tenth of the Sun’s has an unimaginably long lifetime of almost 1013 years, nearly a thousand times the Sun’s. All low-mass stars are still in the first flush of youth. This is the fundamental fact of stellar life: massive stars have short lives, and low-mass stars live almost forever – certainly far longer than the current age of the Universe.”

“We have met all three […] timescales [see links below – US] for the Sun. The nuclear time is ten billion years, the thermal timescale is thirty million years, and the dynamical one […] just half an hour. […] Each timescale says how long the star takes to react to changes of the given type. The dynamical time tells us that if we mess up the hydrostatic balance between pressure and weight, the star will react by moving its mass around for a few dynamical times (in the Sun’s case, a few hours) and then settle down to a new state in which pressure and weight are in balance. And because this time is so short compared with the thermal time, the stellar material will not have lost or gained any significant amount of heat, but simply carried this around […] although the star quickly finds a new hydrostatic equilibrium, this will not correspond to thermal equilibrium, where heat moves smoothly outwards through the star at precisely the rate determined by the nuclear reactions deep in the centre. Instead, some bits of the star will be too cool to pass all this heat on outwards, and some will be too hot to absorb much of it. Over a thermal timescale (a few tens of millions of years in the Sun), the cool parts will absorb the extra heat they need from the stellar radiation field, and the hot parts rid themselves of the excess they have, until we again reach a new state of thermal equilibrium. Finally, the nuclear timescale tells us the time over which the star synthesizes new chemical elements, radiating the released energy into space.”
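
The three timescales can be reproduced from standard order-of-magnitude formulas: nuclear (assuming ~0.7 per cent mass-energy release from ~10 per cent of the hydrogen), thermal (Kelvin–Helmholtz, GM²/RL), and dynamical (√(R³/GM)). These formulas and the solar values are standard assumptions rather than anything quoted from the book:

```python
import math

G, c = 6.674e-11, 3.0e8
M, R, L = 1.989e30, 6.96e8, 3.828e26      # Sun: mass (kg), radius (m), luminosity (W)
year = 3.156e7

t_nuc = 0.007 * 0.1 * M * c**2 / L        # ~0.7% of mc^2 from ~10% of the hydrogen (assumed)
t_KH  = G * M**2 / (R * L)                # thermal (Kelvin-Helmholtz) timescale
t_dyn = math.sqrt(R**3 / (G * M))         # dynamical (free-fall) timescale

print(f"nuclear   ~ {t_nuc / year:.1e} yr")   # ~1e10 yr
print(f"thermal   ~ {t_KH / year:.1e} yr")    # ~3e7 yr
print(f"dynamical ~ {t_dyn / 60:.0f} min")    # ~half an hour
```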

“[S]tars can end their lives in just one of three possible ways: white dwarf, neutron star, or black hole.”

“Stars live a long time, but must eventually die. Their stores of nuclear energy are finite, so they cannot shine forever. […] they are forced onwards through a succession of evolutionary states because the virial theorem connects gravity with thermodynamics and prevents them from cooling down. So main-sequence dwarfs inexorably become red giants, and then supergiants. What breaks this chain? Its crucial link is that the pressure supporting a star depends on how hot it is. This link would snap if the star was instead held up by a pressure which did not care about its heat content. Finally freed from the demand to stay hot to support itself, a star like this would slowly cool down and die. This would be an endpoint for stellar evolution. […] Electron degeneracy pressure does not depend on temperature, only density. […] one possible endpoint of stellar evolution arises when a star is so compressed that electron degeneracy is its main form of pressure. […] [Once] the star is a supergiant […] a lot of its mass is in a hugely extended envelope, several hundred times the Sun’s radius. Because of this vast size, the gravity tying the envelope to the core is very weak. […] Even quite small outward forces can easily overcome this feeble pull and liberate mass from the envelope, so a lot of the star’s mass is blown out into space. Eventually, almost the entire remaining envelope is ejected as a roughly spherical cloud of gas. The core quickly exhausts the thin shell of nuclear-burning material on its surface. Now gravity makes the core contract in on itself and become denser, increasing the electron degeneracy pressure further. The core ends as an extremely compact star, with a radius similar to the Earth’s, but a mass similar to the Sun, supported by this pressure. This is a white dwarf. […] Even though its surface is at least initially hot, its small surface means that it is faint. […] White dwarfs cannot start nuclear reactions, so eventually they must cool down and become dark, cold, dead objects. But before this happens, they still glow from the heat energy left over from their earlier evolution, slowly getting fainter. Astronomers observe many white dwarfs in the sky, suggesting that this is how a large fraction of all stars end their lives. […] Stars with an initial mass more than about seven times the Sun’s cannot end as white dwarfs.”

“In many ways, a neutron star is a vastly more compact version of a white dwarf, with the fundamental difference that its pressure arises from degenerate neutrons, not degenerate electrons. One can show that the ratio of the two stellar radii, with white dwarfs about one thousand times bigger than the 10 kilometres of a neutron star, is actually just the ratio of neutron to electron mass.”

“Most massive stars are not isolated, but part of a binary system […]. If one is a normal star, and the other a neutron star, and the binary is not very wide, there are ways for gas to fall from the normal star on to the neutron star. […] Accretion on to very compact objects like neutron stars almost always occurs through a disc, since the gas that falls in always has some rotation. […] a star’s luminosity cannot be bigger than the Eddington limit. At this limit, the pressure of the radiation balances the star’s gravity at its surface, so any more luminosity blows matter off the star. The same sort of limit must apply to accretion: if this tries to make too high a luminosity, radiation pressure will tend to blow away the rest of the gas that is trying to fall in, and so reduce the luminosity until it is below the limit. […] a neutron star is only 10 kilometres in radius, compared with the 700,000 kilometres of the Sun. This can only happen if this very small surface gets very hot. The surface of a healthily accreting neutron star reaches about 10 million degrees, compared with the 6,000 or so of the Sun. […] The radiation from such intensely hot surfaces comes out at much shorter wavelengths than the visible emission from the Sun – the surfaces of a neutron star and its accretion disc emit photons that are much more energetic than those of visible light. Accreting neutron stars and black holes make X-rays.”
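
Two quick back-of-the-envelope checks of the claims in this passage: the Eddington luminosity of a neutron star, and where the radiation from a ten-million-degree surface peaks. The 1.4 solar-mass figure is my own illustrative choice, and the formulas are the standard textbook ones rather than anything given in the book:

```python
import math

G, c, m_p, sigma_T, h = 6.674e-11, 3.0e8, 1.673e-27, 6.652e-29, 6.626e-34
M_sun = 1.989e30

# Eddington luminosity for a 1.4 solar-mass neutron star (illustrative mass):
M = 1.4 * M_sun
L_edd = 4 * math.pi * G * M * m_p * c / sigma_T
print(f"L_Eddington ~ {L_edd:.1e} W")          # ~2e31 W, tens of thousands of solar luminosities

# Typical photon energy from a 1e7 K surface (Wien peak):
T = 1e7
lam_peak = 2.898e-3 / T
E_keV = h * c / lam_peak / 1.602e-19 / 1e3
print(f"peak wavelength ~ {lam_peak * 1e9:.2f} nm, photon energy ~ {E_keV:.1f} keV -> X-rays")
```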

“[S]tar formation […] is harder to understand than any other part of stellar evolution. So we use our knowledge of the later stages of stellar evolution to help us understand star formation. Working backwards in this way is a very common procedure in astronomy […] We know much less about how stars form than we do about any later part of their evolution. […] The cyclic nature of star formation, with stars being born from matter chemically enriched by earlier generations, and expelling still more processed material into space as they die, defines a cosmic epoch – the epoch of stars. The end of this epoch will arrive only when the stars have turned all the normal matter of the Universe into iron, and left it locked in dead remnants such as black holes.”

Stellar evolution.
Gustav Kirchhoff.
Robert Bunsen.
Joseph von Fraunhofer.
Spectrograph.
Absorption spectroscopy.
Emission spectrum.
Doppler effect.
Parallax.
Stellar luminosity.
Cecilia Payne-Gaposchkin.
Ejnar Hertzsprung/Henry Norris Russell/Hertzsprung–Russell diagram.
Red giant.
White dwarf (featured article).
Main sequence (featured article).
Gravity/Electrostatics/Strong nuclear force.
Pressure/Boyle’s law/Charles’s law.
Hermann von Helmholtz.
William Thomson (Kelvin).
Gravitational binding energy.
Thermal energy/Gravitational energy.
Virial theorem.
Kelvin-Helmholtz time scale.
Chemical energy/Bond-dissociation energy.
Nuclear binding energy.
Nuclear fusion.
Heisenberg’s uncertainty principle.
Quantum tunnelling.
Pauli exclusion principle.
Eddington limit.
Convection.
Electron degeneracy pressure.
Nuclear timescale.
Number density.
Dynamical timescale/free-fall time.
Hydrostatic equilibrium/Thermal equilibrium.
Core collapse.
Hertzsprung gap.
Supergiant star.
Chandrasekhar limit.
Core-collapse supernova (‘good article’).
Crab Nebula.
Stellar nucleosynthesis.
Neutron star.
Schwarzschild radius.
Black hole (‘good article’).
Roy Kerr.
Pulsar.
Jocelyn Bell.
Anthony Hewish.
Accretion/Accretion disk.
X-ray binary.
Binary star evolution.
SS 433.
Gamma ray burst.
Hubble’s law/Hubble time.
Cosmic distance ladder/Standard candle/Cepheid variable.
Star formation.
Pillars of Creation.
Jeans instability.
Initial mass function.

July 2, 2017 Posted by | Astronomy, Books, Chemistry, Physics | Leave a comment

Astrophysics

Here’s what I wrote about the book on goodreads:

“I think the author was trying to do too much with this book. He covers a very large number of topics, but unfortunately the book is not easy to read because he covers in a few pages topics which other authors write entire books about. If he’d covered fewer topics in greater detail I think the end result would have been better. Despite having watched a large number of lectures on related topics and read academic texts about some of the topics covered in the book, I found the book far from easy to read, certainly compared to other physics books in this series (the books about nuclear physics and particle physics are both significantly easier to read, in my opinion). The author sometimes seemed to me to have difficulties understanding how large the potential knowledge gap between him and the reader of the book might be.

Worth reading if you know some stuff already and you’re willing to put in a bit of work, but don’t expect too much from the coverage.”

I gave the book two stars on goodreads.

I decided early on while reading the book that the only way I was going to cover this book at all here would be by posting a link-heavy post. I have added some quotes as well, but most of what’s going on in this book I’ll only cover by adding some relevant links to wiki articles dealing with these topics – as the link collection below should illustrate, although the subtitle of the book is ‘A Very Short Introduction’ it actually covers a great deal of ground (…too much ground, that’s part of the problem, as indicated above…). There are a lot of links because it’s just that kind of book.

First, a few quotes from the book:

“In thinking about the structure of an accretion disc it is helpful to imagine that it comprises a large number of solid rings, each of which spins as if each of its particles were in orbit around the central mass […] The speed of a circular orbit of radius r around a compact mass such as the Sun or a black hole is proportional to 1/√r, so the speed increases inwards. It follows that there is shear within an accretion disc: each rotating ring slides past the ring just outside it, and, in the presence of any friction or viscosity within the fluid, each ring twists or torques the ring just outside it in the direction of rotation, trying to get it to rotate faster.

Torque is to angular momentum what force is to linear momentum: the quantity that sets its rate of change. Just as Newton’s laws yield that force is equal to rate of change of momentum, the rate of change of a body’s angular momentum is equal to the torque on the body. Hence the existence of the torque from smaller rings to bigger rings implies an outward transport of angular momentum through the accretion disc. When the disc is in a steady state this outward transport of angular momentum by viscosity is balanced by an inward transport of angular momentum by gas as it spirals inwards through the disc, carrying its angular momentum with it.”
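A small Python sketch of the Keplerian relations behind the quote (my own illustration, not from the book): the circular-orbit speed falls off as 1/√r, while the specific angular momentum v·r grows as √r – which is why gas can only spiral inwards if angular momentum is somehow carried outwards.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg

def circular_orbit(r, M=M_SUN):
    """Keplerian circular orbit around a point mass: speed v = sqrt(GM/r)
    (falls off as 1/sqrt(r)) and specific angular momentum l = v*r
    (grows as sqrt(r))."""
    v = math.sqrt(G * M / r)
    return v, v * r

# Two rings of a disc around a one-solar-mass object: the inner ring moves
# faster, but carries less angular momentum per unit mass.
for r in (1.0e10, 4.0e10):
    v, l = circular_orbit(r)
    print(f"r = {r:.1e} m:  v = {v/1e3:6.1f} km/s,  l = {l:.2e} m^2/s")
```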

“The differential equations that govern the motion of the planets are easily written down, and astronomical observations furnish the initial conditions to great precision. But with this precision we can predict the configuration of the planets only up to ∼ 40 Myr into the future — if the initial conditions are varied within the observational uncertainties, the predictions for 50 or 60 Myr later differ quite significantly. If you want to obtain predictions for 60 Myr that are comparable in precision to those we have for 40 Myr in the future, you require initial conditions that are 100 times more precise: for example, you require the current positions of the planets to within an error of 15 m. If you want comparable predictions 60.15 Myr in the future, you have to know the current positions to within 15 mm.”
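The factor-of-100-per-20-Myr scaling in the quote corresponds to exponential error growth with an e-folding time of roughly 4–5 Myr. Here’s a rough Python sketch of what that implies for the precision of the initial conditions (the e-folding time and the 1,500 m baseline are inferred from the quote, not taken from the book):

```python
import math

# 100 times more precision needed per extra 20 Myr of prediction horizon
TAU = 20e6 / math.log(100)        # effective e-folding time, ~4.3 Myr

def required_precision_m(horizon_myr):
    """Initial-position precision (m) needed for a forecast of the given horizon,
    matching the quality of a 40 Myr forecast; errors assumed to grow as exp(t/TAU).
    The 1,500 m baseline follows from the quote (15 m at 60 Myr = 100x better)."""
    return 1500.0 / math.exp((horizon_myr * 1e6 - 40e6) / TAU)

for horizon in (40, 50, 60, 70, 80):
    print(f"{horizon} Myr ahead: positions needed to ~{required_precision_m(horizon):.3g} m")
```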

“An important feature of the solutions to the differential equations of the solar system is that after some variable, say the eccentricity of Mercury’s orbit, has fluctuated in a narrow range for millions of years, it will suddenly shift to a completely different range. This behaviour reflects the importance of resonances for the dynamics of the system: at some moment a resonant condition becomes satisfied and the flow of energy within the system changes because a small disturbance can accumulate over thousands or millions of cycles into a large effect. If we start the integrations from a configuration that differs ever so little from the previous configuration, the resonant condition will fail to be satisfied, or be satisfied much earlier or later, and the solutions will look quite different.”

“In Chapter 4 we saw that the physics of accretion discs around stars and black holes is all about the outward transport of angular momentum, and that moving angular momentum outwards heats a disc. Outward transport of angular momentum is similarly important for galactic discs. […] in a gaseous accretion disc angular momentum is primarily transported by the magnetic field. In a stellar disc, this job has to be done by the gravitational field because stars only interact gravitationally. Spiral structure provides the gravitational field needed to transport angular momentum outwards.

In addition to carrying angular momentum out through the stellar disc, spiral arms regularly shock interstellar gas, causing it to become denser, and a fraction of it to collapse into new stars. For this reason, spiral structure is most easily traced in the distribution of young stars, especially massive, luminous stars, because all massive stars are young. […] Spiral arms are waves of enhanced star density that propagate through a stellar disc rather as sound waves propagate through air. Like sound waves they carry energy, and this energy is eventually converted from the ordered form it takes in the wave to the kinetic energy of randomly moving stars. That is, spiral arms heat the stellar disc.”

“[I]f you take any reasonably representative group of galaxies, from the group’s luminosity, you can deduce the quantity of ordinary matter it should contain. This quantity proves to be roughly ten times the amount of ordinary matter that’s in the galaxies. So most ordinary matter must lie between the galaxies rather than within them.”

“The nature of a galaxy is largely determined by three numbers: its luminosity, its bulge-to-disc ratio, and the ratio of its mass of cold gas to the mass in stars. Since stars form from cold gas, this last ratio determines how youthful the galaxy’s stellar population is.

A youthful stellar population contains massive stars, which are short-lived, luminous, and blue […] An old stellar population contains only low-mass, faint, and red stars. Moreover, the spatial distribution of young stars can be very lumpy because the stars have not had time to be spread around the system […] a galaxy with a young stellar population looks very different from one with an old population: it is more lumpy/streaky, bluer, and has a higher luminosity than a galaxy of similar stellar mass with an old stellar population.”

Links:

Accretion disk.
Supermassive black hole.
Quasar.
Magnetorotational instability.
Astrophysical jet.
Herbig–Haro object.
SS 433.
Cygnus A.
Collimated light.
Light curve.
Lyman-alpha line.
Balmer series.
Star formation.
Stellar evolution.
Black-body radiation.
Helium flash.
White dwarf (featured article).
Planetary nebula.
Photosphere.
Corona.
Solar transition region.
Photodissociation.
Carbon detonation.
X-ray binary.
Inverse Compton scattering.
Microquasar.
Quasi-periodic oscillation.
Urbain Le Verrier.
Perturbation theory.
Elliptic orbit.
Precession.
Axial precession.
Libration.
Orbital resonance.
Jupiter trojan (featured article).
Late Heavy Bombardment.
Exoplanet.
Lorentz factor.
Radio galaxy.
Gamma-ray burst (featured article).
Cosmic ray.
Hulse–Taylor binary.
Special relativity.
Lorentz covariance.
Lorentz transformation.
Muon.
Relativistic Doppler effect.
Superluminal motion.
Fermi acceleration.
Shock waves in astrophysics.
Ram pressure.
Synchrotron radiation.
General relativity (featured article).
Gravitational redshift.
Gravitational lens.
Fermat’s principle.
SBS 0957+561.
Strong gravitational lensing/Weak gravitational lensing.
Gravitational microlensing.
Shapiro delay.
Gravitational wave.
Dark matter.
Dwarf spheroidal galaxy.
Luminosity function.
Lenticular galaxy.
Spiral galaxy.
Disc galaxy.
Elliptical galaxy.
Stellar dynamics.
Constant of motion.
Bulge (astronomy).
Interacting galaxy.
Coma cluster.
Galaxy cluster.
Anemic galaxy.
Decoupling (cosmology).

June 20, 2017 Posted by | Astronomy, Books, Physics | Leave a comment

Cosmology: Recent Results and Future Prospects

This is another old lecture from my bookmarks. I’m reasonably certain the main reason why I did not blog this earlier is that it’s a rather general and not very detailed overview lecture, so it doesn’t actually contain a lot of new stuff. Hubble’s work, the discovery of the cosmic microwave background, properties of the early universe and how it evolved, discussion of the cosmological constant, dark matter and dark energy, some recent observational results – most of the stuff he talks about should be familiar territory to people interested in the field. Before I watched the lecture I had expected it to include a lot more ‘recent results’ and ‘future prospects’ than were actually included; a big part of the lecture is just an overview of what we’ve learned since the 1930s.

June 7, 2017 Posted by | Astronomy, Lectures, Physics | Leave a comment

Nuclear physics

Below I have posted a few observations from the book, as well as a number of links to coverage of other topics mentioned/covered in the book. It’s a good book, the level of coverage is very decent considering the format of the publication.

“Electrons are held in place, remote from the nucleus, by the electrical attraction of opposite charges, electrons being negatively and the atomic nucleus positively charged. A temperature of a few thousand degrees is sufficient to break this attraction completely and liberate all of the electrons from within atoms. Even room temperature can be enough to release one or two; the ease with which electrons can be moved from one atom to another is the source of chemistry, biology, and life.”

“Quantum mechanics explains the behaviour of electrons in atoms, and of nucleons in nuclei. In an atom, electrons cannot go just where they please, but are restricted like someone on a ladder who can only step on individual rungs. When an electron drops from a rung with high energy to one that is lower down, the excess energy is carried away by a photon of light. The spectrum of these photons reveals the pattern of energy levels within the atom. Similar constraints apply to nucleons in nuclei. Nuclei in excited states, with one or more protons or neutrons on a high rung, also give up energy by emitting photons. The main difference between what happens to atomic electrons relative to atomic nuclei is the nature of the radiated light. In the former the light may be in the visible spectrum, whose photons have relatively low energy, whereas in the case of nuclei the light consists of X-rays and gamma rays, whose photons have energies that are millions of times greater. This is the origin of gamma radioactivity.”
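To put numbers on the ‘millions of times greater’ remark, here is a small sketch of my own (standard constants, not figures from the book) comparing the energy of a visible photon with that of a nuclear gamma ray, using E = hc/λ:

```python
H = 6.626e-34        # Planck constant, J s
C = 2.998e8          # speed of light, m/s
EV = 1.602e-19       # joules per electronvolt

def photon_energy_ev(wavelength_m):
    """Energy (eV) of a photon of the given wavelength: E = hc / lambda."""
    return H * C / wavelength_m / EV

print(f"Visible photon (500 nm):    {photon_energy_ev(500e-9):8.2f} eV")
print(f"Nuclear gamma ray (1.2 pm): {photon_energy_ev(1.2e-12)/1e6:8.2f} MeV")
```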

“[A]ll particles that feel the strong interaction are made of quarks. […] Quarks that form nuclear particles come in two flavours, known as up (u) or down (d), with electrical charges that are fractions, +2/3 or −1/3 respectively, of a proton’s charge. Thus uud forms a proton and ddu a neutron. In addition to electrical charge, quarks possess another form of charge, known as colour. This is the fundamental source of the strong nuclear force. Whereas electric charge occurs in some positive or negative numerical amount, for colour charge there are three distinct varieties of each. These are referred to as red, green, or blue, by analogy with colours, but are just names and have no deeper significance. […] colour charge and electric charge obey very similar rules. For example, analogous to the behaviour of electric charge, colour charges of the same colour repel, whereas different colours can attract […]. A proton or neutron is thus formed when three quarks, each with a different colour, mutually attract one another. In this configuration the colour forces have neutralized, analogous to the way that positive and negative charges neutralize within an atom.”

“The relativistic quantum theory of colour is known as quantum chromodynamics (QCD). It is similar in spirit to quantum electrodynamics (QED). QED implies that the electromagnetic force is transmitted by the exchange of massless photons; by analogy, in QCD the force between quarks, within nucleons, is due to the exchange of massless gluons.”

“In a nutshell, the quarks in heavy nuclei are found to have, on average, slightly lower momenta than in isolated protons or neutrons. In spatial terms, this equates with the interpretation that individual quarks are, on average, less confined than in free nucleons. […] The overall conclusion is that the quarks are more liberated in nuclei when in a region of relatively high density. […] This interpretation of the microstructure of atomic nuclei suggests that nuclei are more than simply individual nucleons bound by the strong force. There is a tendency, under extreme pressure or density, for them to merge, their constituent quarks freed to flow more liberally. […] This freeing of quarks is a liberation of colour charges, and in theory should happen for gluons also. Thus, it is a precursor of what is hypothesized to occur within atomic nuclei under conditions of extreme temperature and pressure […] atoms are unable to survive at high temperatures and pressure, as in the sun for example, and their constituent electric charges—electrons and protons—flow independently as electrically charged gases. This is a state of matter known as plasma. Analogously, under even more extreme conditions, the coloured quarks are unable to configure into individual neutrons and protons. Instead, the quarks and gluons are theorized to flow freely as a quark–gluon plasma (QGP).”

“The mass of a nucleus is not simply the sum of the masses of its constituent nucleons. […] some energy is taken up to bind the nucleus together. This ‘binding energy’ is the difference between the mass of the nucleus and its constituents. […] The larger the binding energy, the greater is the propensity for the nucleus to be stable. Its actual stability is often determined by the relative size of the binding energy of the nucleus to that of its near neighbours in the periodic table of elements, or of other isotopes of the original elemental nucleus. As nature seeks stability by minimizing energy, a nucleus will seek to lower the total mass, or equivalently, to increase the binding energy. […] An effective guide to stability, and the pattern of radioactive decays, is given by the semi-empirical mass formula (SEMF).”

“For light nuclei the binding energy grows with A [the mass number of the nucleus – US] until electrostatic repulsion takes over in large nuclei. […] At large values of Z [# of protons – US], the penalty of electrostatic charge, which extends throughout the nucleus, requires further neutrons to add to the short range attraction in compensation. Eventually, for Z > 82, the amount of electrostatic repulsion is so large that nuclei cannot remain stable, even when they have large numbers of neutrons. […] All nuclei heavier than lead are radioactive.”
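The book only mentions the semi-empirical mass formula in passing; here is a minimal Python sketch of it, using one commonly quoted set of coefficients (standard textbook values, not taken from the book – and note the formula is a poor approximation for the very lightest nuclei):

```python
def semf_binding_energy(Z, A):
    """Semi-empirical mass formula: approximate binding energy (MeV) of a
    nucleus with Z protons and A nucleons.  Coefficients are one commonly
    quoted fit, not figures from the book."""
    a_v, a_s, a_c, a_a, a_p = 15.8, 18.3, 0.714, 23.2, 12.0
    N = A - Z
    B = (a_v * A                                 # volume term: short-range attraction
         - a_s * A ** (2 / 3)                    # surface term
         - a_c * Z * (Z - 1) / A ** (1 / 3)      # Coulomb repulsion between protons
         - a_a * (N - Z) ** 2 / A)               # asymmetry penalty for N != Z
    # pairing term: even-even nuclei are a little more bound, odd-odd less
    if Z % 2 == 0 and N % 2 == 0:
        B += a_p / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:
        B -= a_p / A ** 0.5
    return B

for name, Z, A in [("He-4", 2, 4), ("Fe-56", 26, 56), ("U-238", 92, 238)]:
    B = semf_binding_energy(Z, A)
    print(f"{name}: B = {B:7.1f} MeV, B/A = {B / A:5.2f} MeV per nucleon")
```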

“Three minutes after the big bang, the material universe consisted primarily of the following: 75% protons; 24% helium nuclei; a small number of deuterons; traces of lithium, beryllium, and boron, and free electrons. […] 300,000 years later, the ambient temperature had fallen below 10,000 degrees, that is similar to or cooler than the outer regions of our sun today. At these energies the negatively charged electrons were at last able to be held fast by electrical attraction to the positively charged atomic nuclei whereby they combined to form neutral atoms. Electromagnetic radiation was set free and the universe became transparent as light could roam unhindered across space.
The big bang did not create the elements necessary for life, such as carbon, however. Carbon is the next lightest element after boron, but its synthesis presented an insuperable barrier in the very early universe. The huge stability of alpha particles frustrates attempts to make carbon by collisions between any pair of lighter isotopes. […] Thus no carbon or heavier isotopes were formed during big bang nucleosynthesis. Their synthesis would require the emergence of stars.”

“In the heat of the big bang, quarks and gluons swarmed independently in quark–gluon plasma. Inside the sun, relatively cool, they form protons but the temperature is nonetheless too high for atoms to survive. Thus inside the sun, electrons and protons swarm independently as electrical plasma. It is primarily protons that fuel the sun today. […] Protons can bump into one another and initiate a set of nuclear processes that eventually converts four of them into helium-4 […] As the energy mc² locked into a single helium-4 nucleus is less than that in the original four protons, the excess is released into the surroundings, some of it eventually providing warmth here on earth. […] because the sun produces these reactions continuously over aeons, unlike big bang nucleosynthesis, which lasted mere minutes, unstable isotopes, such as tritium, play no role in solar nucleosynthesis.”
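A back-of-the-envelope check of the energetics (my own sketch; the particle masses are standard values, not from the book): converting four protons into one helium-4 nucleus liberates roughly 26–27 MeV, i.e. about 0.7% of the original rest mass.

```python
M_PROTON = 938.272     # MeV/c^2
M_HE4 = 3727.379       # MeV/c^2, helium-4 nucleus
M_ELECTRON = 0.511     # MeV/c^2

# 4 p -> He-4 + 2 positrons + 2 neutrinos
q_fusion = 4 * M_PROTON - M_HE4 - 2 * M_ELECTRON   # ~24.7 MeV
# The two positrons annihilate with ambient electrons, adding four electron
# masses of energy; a small part of the total is carried off by the neutrinos.
q_total = q_fusion + 4 * M_ELECTRON                # ~26.7 MeV

print(f"Energy released per helium-4 nucleus: ~{q_total:.1f} MeV")
print(f"Fraction of the initial rest mass converted: {q_total / (4 * M_PROTON):.2%}")
```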

“Although individual antiparticles are regularly produced from the energy in collisions between cosmic rays, or in accelerator laboratories such as at CERN, there is no evidence for antimatter in bulk in the universe at large. […] To date, all the evidence is that the universe at large is made of matter to the exclusion of antimatter. […] One of the great mysteries in physics is how the symmetry between matter and antimatter was disturbed.”

Some links:

Nuclear physics.
Alpha decay/beta decay/gamma radiation.
Positron emission.
Isotope.
Rutherford model.
Bohr model.
Spin.
Nucleon.
Nuclear fission.
X-ray crystallography.
Pion.
EMC effect.
Magic number.
Cosmic ray spallation.
Asymptotic giant branch.
CNO cycle.
Transuranium elements.
Actinide.
Island of stability.
Transfermium Wars.
Nuclear drip line.
Halo nucleus.
Hyperon/hypernucleus.
Lambda baryon.
Strangelet.
Quark star.
Antineutron.
Radiation therapy.
Rutherford backscattering spectrometry.
Particle-induced X-ray emission.

June 5, 2017 Posted by | Books, Physics | Leave a comment

Extraordinary Physics with Millisecond Pulsars

A few related links:
Nanograv.org.
Millisecond pulsar.
PSR J0348+0432.
Pulsar timing array.
Detection of Gravitational Waves using Pulsar Timing (paper).
The strong equivalence principle.
European Pulsar Timing Array.
Parkes Observatory.
Gravitational wave.
Gravitational waves from binary supermassive black holes missing in pulsar observations (paper – it’s been a long time since I watched the lecture, but in my bookmarks I noted that some of the stuff included in this publication was covered in the lecture).

May 24, 2017 Posted by | Astronomy, Lectures, Papers, Physics | Leave a comment

Out of this World: A history of Structure in the Universe

This lecture is much less technical than were the last couple of lectures I posted, and if I remember correctly it’s aimed at a general audience (…the sort of ‘general audience’ that attends IAS lectures, but even so…). The lecture itself is quite short, only roughly 35 minutes long, but there’s a long Q&A session afterwards.

May 21, 2017 Posted by | Astronomy, Lectures, Physics | Leave a comment

Hydrodynamical Simulations of Galaxy Formation: Progress, Pitfalls, and Promises

“This calculation was relatively expensive, about 19 million CPU hours were spent on it.”

….

Posts including only one lecture are a recent innovation here on the blog; in the past I bundled lectures so that a lecture post would include at least 2 or 3 of them, but I am starting to come around to the idea that these new types of posts are a good idea. I have recently been going over some old lectures I’ve watched in the past, and it turns out that there are quite a few lectures I never got around to blogging. I have mentioned before that the 3-lectures-per-post format was likely suboptimal, in the sense that it tended to lead to lectures never being covered, e.g. because of the long time lag between watching a lecture and blogging it (in the case of book blogging I tend to be much more likely to spend my time covering books I read recently rather than books I read a while ago, and the same dynamic applies to lectures), and I think this impression is now confirmed.

As some of the lectures I’ll be covering in posts like these in the future are lectures I watched a long time ago, my coverage will probably be limited to the actual lectures and the comments I wrote down when I first watched the lecture in question. I don’t want to just add a few big lecture posts to get rid of the backlog, mostly because this blog is obviously not nearly as active as it used to be, and adding single-lecture posts dropwise is an easy (…low-effort) and convenient way for me to keep the blog at least somewhat active. What I wrote down in my comments about the lecture above, aside from the quote, is that considering the very high-level physics involved I had half expected it to be too technical to be worth watching – but it wasn’t. You’ll certainly not understand all of it, but it’s interesting stuff.

May 18, 2017 Posted by | Astronomy, Lectures, Physics | Leave a comment

Random stuff

It’s been a long time since I last posted one of these posts, so a great number of links of interest have accumulated in my bookmarks. I intended to include a large number of these in this post, which of course means that I surely won’t cover each specific link in anywhere near the amount of detail it deserves, but that can’t be helped.

i. Autism Spectrum Disorder Grown Up: A Chart Review of Adult Functioning.

“For those diagnosed with ASD in childhood, most will become adults with a significant degree of disability […] Seltzer et al […] concluded that, despite considerable heterogeneity in social outcomes, “few adults with autism live independently, marry, go to college, work in competitive jobs or develop a large network of friends”. However, the trend within individuals is for some functional improvement over time, as well as a decrease in autistic symptoms […]. Some authors suggest that a sub-group of 15–30% of adults with autism will show more positive outcomes […]. Howlin et al. (2004), and Cederlund et al. (2008) assigned global ratings of social functioning based on achieving independence, friendships/a steady relationship, and education and/or a job. These two papers described respectively 22% and 27% of groups of higher functioning (IQ above 70) ASD adults as attaining “Very Good” or “Good” outcomes.”

“[W]e evaluated the adult outcomes for 45 individuals diagnosed with ASD prior to age 18, and compared this with the functioning of 35 patients whose ASD was identified after 18 years. Concurrent mental illnesses were noted for both groups. […] Comparison of adult outcome within the group of subjects diagnosed with ASD prior to 18 years of age showed significantly poorer functioning for those with co-morbid Intellectual Disability, except in the domain of establishing intimate relationships [my emphasis. To make this point completely clear, one way to look at these results is that apparently in the domain of partner-search autistics diagnosed during childhood are doing so badly in general that being intellectually disabled on top of being autistic is apparently conferring no additional disadvantage]. Even in the normal IQ group, the mean total score, i.e. the sum of the 5 domains, was relatively low at 12.1 out of a possible 25. […] Those diagnosed as adults had achieved significantly more in the domains of education and independence […] Some authors have described a subgroup of 15–27% of adult ASD patients who attained more positive outcomes […]. Defining an arbitrary adaptive score of 20/25 as “Good” for our normal IQ patients, 8 of thirty four (25%) of those diagnosed as adults achieved this level. Only 5 of the thirty three (15%) diagnosed in childhood made the cutoff. (The cut off was consistent with a well, but not superlatively, functioning member of society […]). None of the Intellectually Disabled ASD subjects scored above 10. […] All three groups had a high rate of co-morbid psychiatric illnesses. Depression was particularly frequent in those diagnosed as adults, consistent with other reports […]. Anxiety disorders were also prevalent in the higher functioning participants, 25–27%. […] Most of the higher functioning ASD individuals, whether diagnosed before or after 18 years of age, were functioning well below the potential implied by their normal range intellect.”

Related papers: Social Outcomes in Mid- to Later Adulthood Among Individuals Diagnosed With Autism and Average Nonverbal IQ as Children, Adults With Autism Spectrum Disorders.

ii. Premature mortality in autism spectrum disorder. This is a Swedish matched case cohort study. Some observations from the paper:

“The aim of the current study was to analyse all-cause and cause-specific mortality in ASD using nationwide Swedish population-based registers. A further aim was to address the role of intellectual disability and gender as possible moderators of mortality and causes of death in ASD. […] Odds ratios (ORs) were calculated for a population-based cohort of ASD probands (n = 27 122, diagnosed between 1987 and 2009) compared with gender-, age- and county of residence-matched controls (n = 2 672 185). […] During the observed period, 24 358 (0.91%) individuals in the general population died, whereas the corresponding figure for individuals with ASD was 706 (2.60%; OR = 2.56; 95% CI 2.38–2.76). Cause-specific analyses showed elevated mortality in ASD for almost all analysed diagnostic categories. Mortality and patterns for cause-specific mortality were partly moderated by gender and general intellectual ability. […] Premature mortality was markedly increased in ASD owing to a multitude of medical conditions. […] Mortality was significantly elevated in both genders relative to the general population (males: OR = 2.87; females OR = 2.24)”.

“Individuals in the control group died at a mean age of 70.20 years (s.d. = 24.16, median = 80), whereas the corresponding figure for the entire ASD group was 53.87 years (s.d. = 24.78, median = 55), for low-functioning ASD 39.50 years (s.d. = 21.55, median = 40) and high-functioning ASD 58.39 years (s.d. = 24.01, median = 63) respectively. […] Significantly elevated mortality was noted among individuals with ASD in all analysed categories of specific causes of death except for infections […] ORs were highest in cases of mortality because of diseases of the nervous system (OR = 7.49) and because of suicide (OR = 7.55), in comparison with matched general population controls.”

iii. Adhesive capsulitis of shoulder. This one is related to a health scare I had a few months ago. A few quotes:

“Adhesive capsulitis (also known as frozen shoulder) is a painful and disabling disorder of unclear cause in which the shoulder capsule, the connective tissue surrounding the glenohumeral joint of the shoulder, becomes inflamed and stiff, greatly restricting motion and causing chronic pain. Pain is usually constant, worse at night, and with cold weather. Certain movements or bumps can provoke episodes of tremendous pain and cramping. […] People who suffer from adhesive capsulitis usually experience severe pain and sleep deprivation for prolonged periods due to pain that gets worse when lying still and restricted movement/positions. The condition can lead to depression, problems in the neck and back, and severe weight loss due to long-term lack of deep sleep. People who suffer from adhesive capsulitis may have extreme difficulty concentrating, working, or performing daily life activities for extended periods of time.”

Some other related links below:

The prevalence of a diabetic condition and adhesive capsulitis of the shoulder.
“Adhesive capsulitis is characterized by a progressive and painful loss of shoulder motion of unknown etiology. Previous studies have found the prevalence of adhesive capsulitis to be slightly greater than 2% in the general population. However, the relationship between adhesive capsulitis and diabetes mellitus (DM) is well documented, with the incidence of adhesive capsulitis being two to four times higher in diabetics than in the general population. It affects about 20% of people with diabetes and has been described as the most disabling of the common musculoskeletal manifestations of diabetes.”

Adhesive Capsulitis (review article).
“Patients with type I diabetes have a 40% chance of developing a frozen shoulder in their lifetimes […] Dominant arm involvement has been shown to have a good prognosis; associated intrinsic pathology or insulin-dependent diabetes of more than 10 years are poor prognostic indicators.15 Three stages of adhesive capsulitis have been described, with each phase lasting for about 6 months. The first stage is the freezing stage in which there is an insidious onset of pain. At the end of this period, shoulder ROM [range of motion] becomes limited. The second stage is the frozen stage, in which there might be a reduction in pain; however, there is still restricted ROM. The third stage is the thawing stage, in which ROM improves, but can take between 12 and 42 months to do so. Most patients regain a full ROM; however, 10% to 15% of patients suffer from continued pain and limited ROM.”

Musculoskeletal Complications in Type 1 Diabetes.
“The development of periarticular thickening of skin on the hands and limited joint mobility (cheiroarthropathy) is associated with diabetes and can lead to significant disability. The objective of this study was to describe the prevalence of cheiroarthropathy in the well-characterized Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) cohort and examine associated risk factors […] This cross-sectional analysis was performed in 1,217 participants (95% of the active cohort) in EDIC years 18/19 after an average of 24 years of follow-up. Cheiroarthropathy — defined as the presence of any one of the following: adhesive capsulitis, carpal tunnel syndrome, flexor tenosynovitis, Dupuytren’s contracture, or a positive prayer sign [related link] — was assessed using a targeted medical history and standardized physical examination. […] Cheiroarthropathy was present in 66% of subjects […] Cheiroarthropathy is common in people with type 1 diabetes of long duration (∼30 years) and is related to longer duration and higher levels of glycemia. Clinicians should include cheiroarthropathy in their routine history and physical examination of patients with type 1 diabetes because it causes clinically significant functional disability.”

Musculoskeletal disorders in diabetes mellitus: an update.
“Diabetes mellitus (DM) is associated with several musculoskeletal disorders. […] The exact pathophysiology of most of these musculoskeletal disorders remains obscure. Connective tissue disorders, neuropathy, vasculopathy or combinations of these problems, may underlie the increased incidence of musculoskeletal disorders in DM. The development of musculoskeletal disorders is dependent on age and on the duration of DM; however, it has been difficult to show a direct correlation with the metabolic control of DM.”

Rheumatic Manifestations of Diabetes Mellitus.

Prevalence of symptoms and signs of shoulder problems in people with diabetes mellitus.

Musculoskeletal Disorders of the Hand and Shoulder in Patients with Diabetes.
“In addition to micro- and macroangiopathic complications, diabetes mellitus is also associated with several musculoskeletal disorders of the hand and shoulder that can be debilitating (1,2). Limited joint mobility, also termed diabetic hand syndrome or cheiropathy (3), is characterized by skin thickening over the dorsum of the hands and restricted mobility of multiple joints. While this syndrome is painless and usually not disabling (2,4), other musculoskeletal problems occur with increased frequency in diabetic patients, including Dupuytren’s disease [“Dupuytren’s disease […] may be observed in up to 42% of adults with diabetes mellitus, typically in patients with long-standing T1D” – link], carpal tunnel syndrome [“The prevalence of [carpal tunnel syndrome, CTS] in patients with diabetes has been estimated at 11–30 % […], and is dependent on the duration of diabetes. […] Type I DM patients have a high prevalence of CTS with increasing duration of disease, up to 85 % after 54 years of DM” – link], palmar flexor tenosynovitis or trigger finger [“The incidence of trigger finger [/stenosing tenosynovitis] is 7–20 % of patients with diabetes comparing to only about 1–2 % in nondiabetic patients” – link], and adhesive capsulitis of the shoulder (5–10). The association of adhesive capsulitis with pain, swelling, dystrophic skin, and vasomotor instability of the hand constitutes the “shoulder-hand syndrome,” a rare but potentially disabling manifestation of diabetes (1,2).”

“The prevalence of musculoskeletal disorders was greater in diabetic patients than in control patients (36% vs. 9%, P < 0.01). Adhesive capsulitis was present in 12% of the diabetic patients and none of the control patients (P < 0.01), Dupuytren’s disease in 16% of diabetic and 3% of control patients (P < 0.01), and flexor tenosynovitis in 12% of diabetic and 2% of control patients (P < 0.04), while carpal tunnel syndrome occurred in 12% of diabetic patients and 8% of control patients (P = 0.29). Musculoskeletal disorders were more common in patients with type 1 diabetes than in those with type 2 diabetes […]. Forty-three patients [out of 100] with type 1 diabetes had either hand or shoulder disorders (37 with hand disorders, 6 with adhesive capsulitis of the shoulder, and 10 with both syndromes), compared with 28 patients [again out of 100] with type 2 diabetes (24 with hand disorders, 4 with adhesive capsulitis of the shoulder, and 3 with both syndromes, P = 0.03).”

Association of Diabetes Mellitus With the Risk of Developing Adhesive Capsulitis of the Shoulder: A Longitudinal Population-Based Followup Study.
“A total of 78,827 subjects with at least 2 ambulatory care visits with a principal diagnosis of DM in 2001 were recruited for the DM group. The non-DM group comprised 236,481 age- and sex-matched randomly sampled subjects without DM. […] During a 3-year followup period, 946 subjects (1.20%) in the DM group and 2,254 subjects (0.95%) in the non-DM group developed ACS. The crude HR of developing ACS for the DM group compared to the non-DM group was 1.333 […] the association between DM and ACS may be explained at least in part by a DM-related chronic inflammatory process with increased growth factor expression, which in turn leads to joint synovitis and subsequent capsular fibrosis.”

It is important to note when interpreting the results of the above paper that these results are based on Taiwanese population-level data, and type 1 diabetes – which is obviously the high-risk diabetes subgroup in this particular context – is rare in East Asian populations (as observed in Sperling et al., “A child in Helsinki, Finland is almost 400 times more likely to develop diabetes than a child in Sichuan, China”. Taiwanese incidence of type 1 DM in children is estimated at ~5 in 100,000).

iv. Parents who let diabetic son starve to death found guilty of first-degree murder. It’s been a while since I last saw one of these ‘boost-your-faith-in-humanity’ cases, but in my impression they do pop up every now and then. I should probably keep one of these articles at hand in case my parents ever express worry to me that they weren’t good parents; they could have done a lot worse…

v. Freedom of medicine. One quote from the conclusion of Cochran’s post:

“[I]t is surely possible to materially improve the efficacy of drug development, of medical research as a whole. We’re doing better than we did 500 years ago – although probably worse than we did 50 years ago. But I would approach it by learning as much as possible about medical history, demographics, epidemiology, evolutionary medicine, theory of senescence, genetics, etc. Read Koch, not Hayek. There is no royal road to medical progress.”

I agree, and I was considering including some related comments and observations about health economics in this post – however, I ultimately decided against doing that, in part because the post was growing unwieldy; I might include those observations in another post later on. Here’s another, somewhat older Westhunt post I at some point decided to bookmark – I particularly like the following neat quote from the comments, which expresses a view I have of course expressed myself in the past here on this blog:

“When you think about it, falsehoods, stupid crap, make the best group identifiers, because anyone might agree with you when you’re obviously right. Signing up to clear nonsense is a better test of group loyalty. A true friend is with you when you’re wrong. Ideally, not just wrong, but barking mad, rolling around in your own vomit wrong.”

vi. Economic Costs of Diabetes in the U.S. in 2012.

“Approximately 59% of all health care expenditures attributed to diabetes are for health resources used by the population aged 65 years and older, much of which is borne by the Medicare program […]. The population 45–64 years of age incurs 33% of diabetes-attributed costs, with the remaining 8% incurred by the population under 45 years of age. The annual attributed health care cost per person with diabetes […] increases with age, primarily as a result of increased use of hospital inpatient and nursing facility resources, physician office visits, and prescription medications. Dividing the total attributed health care expenditures by the number of people with diabetes, we estimate the average annual excess expenditures for the population aged under 45 years, 45–64 years, and 65 years and above, respectively, at $4,394, $5,611, and $11,825.”

“Our logistic regression analysis with NHIS data suggests that diabetes is associated with a 2.4 percentage point increase in the likelihood of leaving the workforce for disability. This equates to approximately 541,000 working-age adults leaving the workforce prematurely and 130 million lost workdays in 2012. For the population that leaves the workforce early because of diabetes-associated disability, we estimate that their average daily earnings would have been $166 per person (with the amount varying by demographic). Presenteeism accounted for 30% of the indirect cost of diabetes. The estimate of a 6.6% annual decline in productivity attributed to diabetes (in excess of the estimated decline in the absence of diabetes) equates to 113 million lost workdays per year.”

vii. Total red meat intake of ≥0.5 servings/d does not negatively influence cardiovascular disease risk factors: a systemically searched meta-analysis of randomized controlled trials.

viii. Effect of longer term modest salt reduction on blood pressure: Cochrane systematic review and meta-analysis of randomised trials. Did I blog this paper at some point in the past? I could not find any coverage of it on the blog when I searched for it so I decided to include it here, even if I have a nagging suspicion I may have talked about these findings before. What did they find? The short version is this:

“A modest reduction in salt intake for four or more weeks causes significant and, from a population viewpoint, important falls in blood pressure in both hypertensive and normotensive individuals, irrespective of sex and ethnic group. Salt reduction is associated with a small physiological increase in plasma renin activity, aldosterone, and noradrenaline and no significant change in lipid concentrations. These results support a reduction in population salt intake, which will lower population blood pressure and thereby reduce cardiovascular disease.”

ix. Some wikipedia links:

Heroic Age of Antarctic Exploration (featured).

Wien’s displacement law.

Kuiper belt (featured).

Treason (one quote worth including here: “Currently, the consensus among major Islamic schools is that apostasy (leaving Islam) is considered treason and that the penalty is death; this is supported not in the Quran but in the Hadith.[42][43][44][45][46][47]“).

Lymphatic filariasis.

File:World map of countries by number of cigarettes smoked per adult per year.

Australian gold rushes.

Savant syndrome (“It is estimated that 10% of those with autism have some form of savant abilities”). A small sidenote of interest to Danish readers: The Danish Broadcasting Corporation recently featured a series about autistics with ‘special abilities’ – the show was called ‘The hidden talents’ (De skjulte talenter), and after multiple people had nagged me to watch it I ended up deciding to do so. Most of the people in that show presumably had some degree of ‘savantism’ combined with autism at the milder end of the spectrum, i.e. Asperger’s. I was somewhat conflicted about what to think about the show and did consider blogging it in detail (in Danish?), but I decided against it. However, I do want to add, for Danish readers who have seen the show, that they would do well to repeatedly keep in mind that a) the great majority of autistics do not have abilities like these, b) many autistics with abilities like these presumably do quite poorly, and c) many autistics have even greater social impairments than do people like e.g. (the very likeable, I have to add…) Louise Wille from the show.

Quark–gluon plasma.

Simo Häyhä.

Chernobyl liquidators.

Black Death (“Over 60% of Norway’s population died in 1348–1350”).

Renault FT (“among the most revolutionary and influential tank designs in history”).

Weierstrass function (“an example of a pathological real-valued function on the real line. The function has the property of being continuous everywhere but differentiable nowhere”).

W Ursae Majoris variable.

Void coefficient. (“a number that can be used to estimate how much the reactivity of a nuclear reactor changes as voids (typically steam bubbles) form in the reactor moderator or coolant. […] Reactivity is directly related to the tendency of the reactor core to change power level: if reactivity is positive, the core power tends to increase; if it is negative, the core power tends to decrease; if it is zero, the core power tends to remain stable. […] A positive void coefficient means that the reactivity increases as the void content inside the reactor increases due to increased boiling or loss of coolant; for example, if the coolant acts as a neutron absorber. If the void coefficient is large enough and control systems do not respond quickly enough, this can form a positive feedback loop which can quickly boil all the coolant in the reactor. This happened in the RBMK reactor that was destroyed in the Chernobyl disaster.”).

Gregor MacGregor (featured) (“a Scottish soldier, adventurer, and confidence trickster […] MacGregor’s Poyais scheme has been called one of the most brazen confidence tricks in history.”).

Stimming.

Irish Civil War.

March 10, 2017 Posted by | Astronomy, autism, Cardiology, Diabetes, Economics, Epidemiology, Health Economics, History, Infectious disease, Mathematics, Medicine, Papers, Physics, Psychology, Random stuff, Wikipedia | Leave a comment

Particle Physics

(Two SMBC comics, dated 2009-02-13 and 2009-07-03, were embedded here.)

(Smbc, second one here. There were a lot of relevant ones to choose from – this one also seems ‘relevant’. And this one. And this one. This one? This one? This one? Maybe this one? In the end I decided to only include the two comics displayed above, but you should be aware of the others…)

The book is a bit dated; it was published before the LHC even started operations. But it’s a decent read. I can’t say I liked it as much as the other books in the series which I recently covered, on galaxies and the laws of thermodynamics, mostly because this book is a bit more pop-science-y than those, so the level of coverage was at times a little disappointing by comparison – but that said, the book is far from terrible, I learned a lot, and I can imagine the author faced a very difficult task.

Below I have added a few observations from the book and some links to articles about some key concepts and things mentioned/covered in the book.

“[T]oday we view the collisions between high-energy particles as a means of studying the phenomena that ruled when the universe was newly born. We can study how matter was created and discover what varieties there were. From this we can construct the story of how the material universe has developed from that original hot cauldron to the cool conditions here on Earth today, where matter is made from electrons, without need for muons and taus, and where the seeds of atomic nuclei are just the up and down quarks, without need for strange or charming stuff.

In very broad terms, this is the story of what has happened. The matter that was born in the hot Big Bang consisted of quarks and particles like the electron. As concerns the quarks, the strange, charm, bottom, and top varieties are highly unstable, and died out within a fraction of a second, the weak force converting them into their more stable progeny, the up and down varieties which survive within us today. A similar story took place for the electron and its heavier versions, the muon and tau. This latter pair are also unstable and died out, courtesy of the weak force, leaving the electron as survivor. In the process of these decays, lots of neutrinos and electromagnetic radiation were also produced, which continue to swarm throughout the universe some 14 billion years later.

The up and down quarks and the electrons were the survivors while the universe was still very young and hot. As it cooled, the quarks were stuck to one another, forming protons and neutrons. The mutual gravitational attraction among these particles gathered them into large clouds that were primaeval stars. As they bumped into one another in the heart of these stars, the protons and neutrons built up the seeds of heavier elements. Some stars became unstable and exploded, ejecting these atomic nuclei into space, where they trapped electrons to form atoms of matter as we know it. […] What we can now do in experiments is in effect reverse the process and observe matter change back into its original primaeval forms.”

“A fully grown human is a bit less than two metres tall. […] to set the scale I will take humans to be about 1 metre in ‘order of magnitude’ […yet another smbc comic springs to mind here] […] Then, going to the large scales of astronomy, we have the radius of the Earth, some 10⁷ m […]; that of the Sun is 10⁹ m; our orbit around the Sun is 10¹¹ m […] note that the relative sizes of the Earth, Sun, and our orbit are factors of about 100. […] Whereas the atom is typically 10⁻¹⁰ m across, its central nucleus measures only about 10⁻¹⁴ to 10⁻¹⁵ m. So beware the oft-quoted analogy that atoms are like miniature solar systems with the ‘planetary electrons’ encircling the ‘nuclear sun’. The real solar system has a factor 1/100 between our orbit and the size of the central Sun; the atom is far emptier, with 1/10,000 as the corresponding ratio between the extent of its central nucleus and the radius of the atom. And this emptiness continues. Individual protons and neutrons are about 10⁻¹⁵ m in diameter […] the relative size of quark to proton is some 1/10,000 (at most!). The same is true for the ‘planetary’ electron relative to the proton ‘sun’: 1/10,000 rather than the ‘mere’ 1/100 of the real solar system. So the world within the atom is incredibly empty.”

“Our inability to see atoms has to do with the fact that light acts like a wave and waves do not scatter easily from small objects. To see a thing, the wavelength of the beam must be smaller than that thing is. Therefore, to see molecules or atoms needs illuminations whose wavelengths are similar to or smaller than them. Light waves, like those our eyes are sensitive to, have wavelength about 10⁻⁷ m […]. This is still a thousand times bigger than the size of an atom. […] To have any chance of seeing molecules and atoms we need light with wavelengths much shorter than these. [And so we move into the world of X-ray crystallography and particle accelerators] […] To probe deep within atoms we need a source of very short wavelength. […] the technique is to use the basic particles […], such as electrons and protons, and speed them in electric fields. The higher their speed, the greater their energy and momentum and the shorter their associated wavelength. So beams of high-energy particles can resolve things as small as atoms.”
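A small illustration of that last point (my own sketch, standard constants, using the non-relativistic de Broglie relation λ = h/p): electrons accelerated through only ~100 volts already have wavelengths comparable to atomic sizes, and higher voltages probe correspondingly smaller scales.

```python
import math

H = 6.626e-34         # Planck constant, J s
M_E = 9.109e-31       # electron mass, kg
E_CHARGE = 1.602e-19  # elementary charge, C

def electron_wavelength_m(volts):
    """Non-relativistic de Broglie wavelength of an electron accelerated through
    the given potential difference (the approximation breaks down near ~100 kV)."""
    p = math.sqrt(2 * M_E * E_CHARGE * volts)   # momentum from kinetic energy
    return H / p

for v in (100, 10_000):
    print(f"{v:>6} V  ->  wavelength ~ {electron_wavelength_m(v):.2e} m")
```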

“About 400 billion neutrinos from the Sun pass through each one of us each second.”

“For a century beams of particles have been used to reveal the inner structure of atoms. These have progressed from naturally occurring alpha and beta particles, courtesy of natural radioactivity, through cosmic rays to intense beams of electrons, protons, and other particles at modern accelerators. […] Different particles probe matter in complementary ways. It has been by combining the information from [the] various approaches that our present rich picture has emerged. […] It was the desire to replicate the cosmic rays under controlled conditions that led to modern high-energy physics at accelerators. […] Electrically charged particles are accelerated by electric forces. Apply enough electric force to an electron, say, and it will go faster and faster in a straight line […] Under the influence of a magnetic field, the path of a charged particle will curve. By using electric fields to speed them, and magnetic fields to bend their trajectory, we can steer particles round circles over and over again. This is the basic idea behind huge rings, such as the 27-km-long accelerator at CERN in Geneva. […] our ability to learn about the origins and nature of matter have depended upon advances on two fronts: the construction of ever more powerful accelerators, and the development of sophisticated means of recording the collisions.”
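To get a feel for why such rings have to be so big: the bending radius of a charged particle in a magnetic field is r = p/(qB), so very high momenta combined with realistic magnet strengths force the radius into the kilometre range. A rough Python sketch (my own, with illustrative numbers rather than figures from the book):

```python
E_CHARGE = 1.602e-19     # C
C_LIGHT = 2.998e8        # m/s

def bending_radius_m(momentum_gev_per_c, b_tesla):
    """Radius of the circular path of a singly charged particle: r = p / (qB)."""
    p_si = momentum_gev_per_c * 1e9 * E_CHARGE / C_LIGHT   # GeV/c -> kg m/s
    return p_si / (E_CHARGE * b_tesla)

# A multi-TeV proton in a ~8 T dipole field needs a bending radius of kilometres.
r = bending_radius_m(7000, 8.3)
print(f"7 TeV proton in an 8.3 T field: bending radius ~ {r/1e3:.1f} km")
```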

Matter.
Particle.
Particle physics.
Strong interaction.
Weak interaction (‘good article’).
Electron (featured).
Quark (featured).
Fundamental interactions.
Electronvolt.
Electromagnetic spectrum.
Cathode ray.
Alpha particle.
Cloud chamber.
Atomic spectroscopy.
Ionization.
Resonance (particle physics).
Spin (physics).
Beta decay.
Neutrino.
Neutrino astronomy.
Antiparticle.
Baryon/meson.
Pion.
Particle accelerator/Cyclotron/Synchrotron/Linear particle accelerator.
Collider.
B-factory.
Particle detector.
Cherenkov radiation.
Sudbury Neutrino Observatory.
Quantum chromodynamics.
Color charge.
Force carrier.
W and Z bosons.
Electroweak interaction (/theory).
Exotic matter.
Strangeness.
Strange quark.
Charm (quantum number).
Antimatter.
Inverse beta decay.
Dark matter.
Standard model.
Supersymmetry.
Higgs boson.
Quark–gluon plasma.
CP violation.

February 9, 2017 Posted by | Books, Physics | Leave a comment

The Laws of Thermodynamics

Here’s a relevant 60 symbols video with Mike Merrifield. Below a few observations from the book, and some links.

“Among the hundreds of laws that describe the universe, there lurks a mighty handful. These are the laws of thermodynamics, which summarize the properties of energy and its transformation from one form to another. […] The mighty handful consists of four laws, with the numbering starting inconveniently at zero and ending at three. The first two laws (the ‘zeroth’ and the ‘first’) introduce two familiar but nevertheless enigmatic properties, the temperature and the energy. The third of the four (the ‘second law’) introduces what many take to be an even more elusive property, the entropy […] The second law is one of the all-time great laws of science […]. The fourth of the laws (the ‘third law’) has a more technical role, but rounds out the structure of the subject and both enables and foils its applications.”

“Classical thermodynamics is the part of thermodynamics that emerged during the nineteenth century before everyone was fully convinced about the reality of atoms, and concerns relationships between bulk properties. You can do classical thermodynamics even if you don’t believe in atoms. Towards the end of the nineteenth century, when most scientists accepted that atoms were real and not just an accounting device, there emerged the version of thermodynamics called statistical thermodynamics, which sought to account for the bulk properties of matter in terms of its constituent atoms. The ‘statistical’ part of the name comes from the fact that in the discussion of bulk properties we don’t need to think about the behaviour of individual atoms but we do need to think about the average behaviour of myriad atoms. […] In short, whereas dynamics deals with the behaviour of individual bodies, thermodynamics deals with the average behaviour of vast numbers of them.”

“In everyday language, heat is both a noun and a verb. Heat flows; we heat. In thermodynamics heat is not an entity or even a form of energy: heat is a mode of transfer of energy. It is not a form of energy, or a fluid of some kind, or anything of any kind. Heat is the transfer of energy by virtue of a temperature difference. Heat is the name of a process, not the name of an entity.”

“The supply of 1 J of energy as heat to 1 g of water results in an increase in temperature of about 0.2°C. Substances with a high heat capacity (water is an example) require a larger amount of heat to bring about a given rise in temperature than those with a small heat capacity (air is an example). In formal thermodynamics, the conditions under which heating takes place must be specified. For instance, if the heating takes place under conditions of constant pressure with the sample free to expand, then some of the energy supplied as heat goes into expanding the sample and therefore to doing work. Less energy remains in the sample, so its temperature rises less than when it is constrained to have a constant volume, and therefore we report that its heat capacity is higher. The difference between heat capacities of a system at constant volume and at constant pressure is of most practical significance for gases, which undergo large changes in volume as they are heated in vessels that are able to expand.”
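The first figure in the quote is easy to check (my own sketch; the specific heat of water, roughly 4.18 J per gram per degree, is a standard value, not from the book):

```python
def temperature_rise_c(heat_j, mass_g, specific_heat_j_per_g_k=4.18):
    """Temperature rise when heat_j joules are supplied to mass_g grams of a
    substance; the default specific heat is (roughly) that of liquid water."""
    return heat_j / (mass_g * specific_heat_j_per_g_k)

print(f"1 J into 1 g of water: dT = {temperature_rise_c(1, 1):.2f} deg C")   # ~0.24, i.e. 'about 0.2'
# Air has a much smaller specific heat (~0.72 J/(g K) at constant volume),
# so the same joule warms it several times as much:
print(f"1 J into 1 g of air:   dT = {temperature_rise_c(1, 1, 0.72):.2f} deg C")
```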

“Heat capacities vary with temperature. An important experimental observation […] is that the heat capacity of every substance falls to zero when the temperature is reduced towards absolute zero (T = 0). A very small heat capacity implies that even a tiny transfer of heat to a system results in a significant rise in temperature, which is one of the problems associated with achieving very low temperatures when even a small leakage of heat into a sample can have a serious effect on the temperature”.

“A crude restatement of Clausius’s statement is that refrigerators don’t work unless you turn them on.”

“The Gibbs energy is of the greatest importance in chemistry and in the field of bioenergetics, the study of energy utilization in biology. Most processes in chemistry and biology occur at constant temperature and pressure, and so to decide whether they are spontaneous and able to produce non-expansion work we need to consider the Gibbs energy. […] Our bodies live off Gibbs energy. Many of the processes that constitute life are non-spontaneous reactions, which is why we decompose and putrefy when we die and these life-sustaining reactions no longer continue. […] In biology a very important ‘heavy weight’ reaction involves the molecule adenosine triphosphate (ATP). […] When a terminal phosphate group is snipped off by reaction with water […], to form adenosine diphosphate (ADP), there is a substantial decrease in Gibbs energy, arising in part from the increase in entropy when the group is liberated from the chain. Enzymes in the body make use of this change in Gibbs energy […] to bring about the linking of amino acids, and gradually build a protein molecule. It takes the effort of about three ATP molecules to link two amino acids together, so the construction of a typical protein of about 150 amino acid groups needs the energy released by about 450 ATP molecules. […] The ADP molecules, the husks of dead ATP molecules, are too valuable just to discard. They are converted back into ATP molecules by coupling to reactions that release even more Gibbs energy […] and which reattach a phosphate group to each one. These heavy-weight reactions are the reactions of metabolism of the food that we need to ingest regularly.”
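The arithmetic behind the protein example, as a tiny sketch (the ~30 kJ/mol figure for the Gibbs energy of ATP hydrolysis under standard conditions is a textbook value, not from the book):

```python
ATP_PER_PEPTIDE_BOND = 3          # "about three ATP molecules to link two amino acids"
RESIDUES = 150                    # a typical protein, per the quote
GIBBS_PER_ATP_KJ_MOL = 30.5       # standard Gibbs energy of ATP -> ADP, textbook value

bonds = RESIDUES - 1                       # 150 residues joined by 149 peptide bonds
atp_needed = bonds * ATP_PER_PEPTIDE_BOND  # ~450, matching the quote
energy_kj_mol = atp_needed * GIBBS_PER_ATP_KJ_MOL

print(f"ATP molecules needed: ~{atp_needed}")
print(f"Gibbs energy spent per mole of protein: ~{energy_kj_mol:.0f} kJ")
```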

Links of interest below – the stuff covered in the links is the sort of stuff covered in this book:

Laws of thermodynamics (article includes links to many other articles of interest, including links to each of the laws mentioned above).
System concepts.
Intensive and extensive properties.
Mechanical equilibrium.
Thermal equilibrium.
Diathermal wall.
Thermodynamic temperature.
Thermodynamic beta.
Ludwig Boltzmann.
Boltzmann constant.
Maxwell–Boltzmann distribution.
Conservation of energy.
Work (physics).
Internal energy.
Heat (physics).
Microscopic view of heat.
Reversible process (thermodynamics).
Carnot’s theorem.
Enthalpy.
Fluctuation-dissipation theorem.
Noether’s theorem.
Entropy.
Thermal efficiency.
Rudolf Clausius.
Spontaneous process.
Residual entropy.
Heat engine.
Coefficient of performance.
Helmholtz free energy.
Gibbs free energy.
Phase transition.
Chemical equilibrium.
Superconductivity.
Superfluidity.
Absolute zero.

February 5, 2017 Posted by | Biology, Books, Chemistry, Physics | Leave a comment

Galaxies

I have added some observations from the book below, as well as some links covering people/ideas/stuff discussed/mentioned in the book.

“On average, out of every 100 newly born star systems, 60 are binaries and 40 are triples. Solitary stars like the Sun are later ejected from triple systems formed in this way.”

“…any object will become a black hole if it is sufficiently compressed. For any mass, there is a critical radius, called the Schwarzschild radius, for which this occurs. For the Sun, the Schwarzschild radius is just under 3 km; for the Earth, it is just under 1 cm. In either case, if the entire mass of the object were squeezed within the appropriate Schwarzschild radius it would become a black hole.”
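The critical radius in the quote is r_s = 2GM/c². A quick check with standard constants (my own sketch, not from the book):

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg

def schwarzschild_radius_m(mass_kg):
    """Critical radius r_s = 2GM/c^2 below which a mass becomes a black hole."""
    return 2 * G * mass_kg / C ** 2

print(f"Sun:   r_s = {schwarzschild_radius_m(M_SUN) / 1e3:.2f} km")   # just under 3 km
print(f"Earth: r_s = {schwarzschild_radius_m(M_EARTH) * 100:.2f} cm") # just under 1 cm
```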

“It only became possible to study the centre of our Galaxy when radio telescopes and other instruments that do not rely on visible light became available. There is a great deal of dust in the plane of the Milky Way […] This blocks out visible light. But longer wavelengths penetrate the dust more easily. That is why sunsets are red – short wavelength (blue) light is scattered out of the line of sight by dust in the atmosphere, while the longer wavelength red light gets through to your eyes. So our understanding of the galactic centre is largely based on infrared and radio observations.”

“there is strong evidence that the Milky Way Galaxy is a completely ordinary disc galaxy, a typical representative of its class. Since that is the case, it means that we can confidently use our inside knowledge of the structure and evolution of our own Galaxy, based on close-up observations, to help our understanding of the origin and nature of disc galaxies in general. We do not occupy a special place in the Universe; but this was only finally established at the end of the 20th century. […] in the decades following Hubble’s first measurements of the cosmological distance scale, the Milky Way still seemed like a special place. Hubble’s calculation of the distance scale implied that other galaxies are relatively close to our Galaxy, and so they would not have to be very big to appear as large as they do on the sky; the Milky Way seemed to be by far the largest galaxy in the Universe. We now know that Hubble was wrong. […] the value he initially found for the Hubble Constant was about seven times bigger than the value accepted today. In other words, all the extragalactic distances Hubble inferred were seven times too small. But this was not realized overnight. The cosmological distance scale was only revised slowly, over many decades, as observations improved and one error after another was corrected. […] The importance of determining the cosmological distance scale accurately, more than half a century after Hubble’s pioneering work, was still so great that it was a primary justification for the existence of the Hubble Space Telescope (HST).”

“The key point to grasp […] is that the expansion described by [Einstein’s] equations is an expansion of space as time passes. The cosmological redshift is not a Doppler effect caused by galaxies moving outward through space, as if fleeing from the site of some great explosion, but occurs because the space between the galaxies is stretching. So the spaces between galaxies increase while light is on its way from one galaxy to another. This stretches the light waves to longer wavelengths, which means shifting them towards the red end of the spectrum. […] The second key point about the universal expansion is that it does not have a centre. There is nothing special about the fact that we observe galaxies receding with redshifts proportional to their distances from the Milky Way. […] whichever galaxy you happen to be sitting in, you will see the same thing – redshift proportional to distance.”
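
Put compactly, the stretching picture says the observed wavelength grows in proportion to how much the Universe has expanded while the light was in transit, so 1 + z equals the ratio of the scale factor now to the scale factor at emission. A minimal sketch; the scale-factor value used is purely illustrative:

# Cosmological redshift from expansion: wavelengths stretch with the scale factor a,
# so lambda_observed / lambda_emitted = a_now / a_then = 1 + z.
def redshift(a_then, a_now=1.0):
    return a_now / a_then - 1.0

# Illustrative only: light emitted when the Universe was half its present size
# arrives with z = 1, i.e. every wavelength has doubled in transit.
print(redshift(a_then=0.5))   # 1.0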

“The age of the Universe is determined by studying some of the largest things in the Universe, clusters of galaxies, and analysing their behaviour using the general theory of relativity. Our understanding of how stars work, from which we calculate their ages, comes from studying some of the smallest things in the Universe, the nuclei of atoms, and using the other great theory of 20th-century physics, quantum mechanics, to calculate how nuclei fuse with one another to release the energy that keeps stars shining. The fact that the two ages agree with one another, and that the ages of the oldest stars are just a little bit less than the age of the Universe, is one of the most compelling reasons to think that the whole of 20th-century physics works and provides a good description of the world around us, from the very small scale to the very large scale.”

“Planets are small objects orbiting a large central mass, and the gravity of the Sun dominates their motion. Because of this, the speed with which a planet moves […] is inversely proportional to the square root of its distance from the centre of the Solar System. Jupiter is farther from the Sun than we are, so it moves more slowly in its orbit than the Earth, as well as having a larger orbit. But all the stars in the disc of a galaxy move at the same speed. Stars farther out from the centre still have bigger orbits, so they still take longer to complete one circuit of the galaxy. But they are all travelling at essentially the same orbital speed through space.”
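
For the planetary case this is just Newtonian gravity: for a roughly circular orbit around a dominant central mass, v = sqrt(GM/r), so the orbital speed is inversely proportional to the square root of the distance. A minimal sketch comparing Earth and Jupiter, using approximate orbital radii, and contrasting that with the flat rotation curves described for galactic discs:

# Orbital speed around a dominant central mass: v = sqrt(G*M/r)  (circular-orbit approximation).
from math import sqrt

G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
AU = 1.496e11          # metres

def orbital_speed(r_metres, central_mass=M_sun):
    return sqrt(G * central_mass / r_metres)

print(orbital_speed(1.0 * AU) / 1e3)   # Earth:   ~29.8 km/s
print(orbital_speed(5.2 * AU) / 1e3)   # Jupiter: ~13.1 km/s -- slower, as the quote says
# In a galactic disc, by contrast, measured orbital speeds stay roughly constant with radius
# (a "flat rotation curve"), which is why the Keplerian fall-off does not apply there.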

“The importance of studying objects at great distances across the Universe is that when we look at an object that is, say, 10 billion light years away, we see it by light which left it 10 billion years ago. This is the ‘look back time’, and it means that telescopes are in a sense time machines, showing us what the Universe was like when it was younger. The light from a distant galaxy is old, in the sense that it has been a long time on its journey; but the galaxy we see using that light is a young galaxy. […] For distant objects, because light has taken a long time on its journey to us, the Universe has expanded significantly while the light was on its way. […] This raises problems defining exactly what you mean by the ‘present distance’ to a remote galaxy”

“Among the many advantages that photographic and electronic recording methods have over the human eye, the most fundamental is that the longer they look, the more they see. Human eyes essentially give us a real-time view of our surroundings, and allow us to see things – such as stars – that are brighter than a certain limit. If an object is too faint to see, once your eyes have adapted to the dark no amount of staring in its direction will make it visible. But the detectors attached to modern telescopes keep on adding up the light from faint sources as long as they are pointing at them. A longer exposure will reveal fainter objects than a short exposure does, as the photons (particles of light) from the source fall on the detector one by one and the total gradually grows.”

“Nobody can be quite sure where the supermassive black holes at the hearts of galaxies today came from, but it seems at least possible that […] merging of black holes left over from the first generation of stars [in the universe] began the process by which supermassive black holes, feeding off the matter surrounding them, formed. […] It seems very unlikely that supermassive black holes formed first and then galaxies grew around them; they must have formed together, in a process sometimes referred to as co-evolution, from the seeds provided by the original black holes of a few hundred solar masses and the raw materials of the dense clouds of baryons in the knots in the filamentary structure. […] About one in a hundred of the galaxies seen at low redshifts are actively involved in the late stages of mergers, but these processes take so little time, compared with the age of the Universe, that the statistics imply that about half of all the galaxies visible nearby are the result of mergers between similarly sized galaxies in the past seven or eight billion years. Disc galaxies like the Milky Way seem themselves to have been built up from smaller sub-units, starting out with the spheroid and adding bits and pieces as time passed. […] there were many more small galaxies when the Universe was young than we see around us today. This is exactly what we would expect if many of the small galaxies have either grown larger through mergers or been swallowed up by larger galaxies.”

Links of interest:

Galaxy (‘featured article’).
Leonard Digges.
Thomas Wright.
William Herschel.
William Parsons.
The Great Debate.
Parallax.
Extinction (astronomy).
Henrietta Swan Leavitt (‘good article’).
Cepheid variable.
Ejnar Hertzsprung. (Before reading this book, I had no idea one of the people behind the famous Hertzsprung–Russell diagram was a Dane. I blame my physics teachers. I was probably told this by one of them, but if the guy in question had been a better teacher, I’d have listened, and I’d have known this.).
Globular cluster (‘featured article’).
Vesto Slipher.
Redshift (‘featured article’).
Refracting telescope/Reflecting telescope.
Disc galaxy.
Edwin Hubble.
Milton Humason.
Doppler effect.
Milky Way.
Orion Arm.
Stellar population.
Sagittarius A*.
Minkowski space.
General relativity (featured).
The Big Bang theory (featured).
Age of the universe.
Malmquist bias.
Type Ia supernova.
Dark energy.
Baryons/leptons.
Cosmic microwave background.
Cold dark matter.
Lambda-CDM model.
Lenticular galaxy.
Active galactic nucleus.
Quasar.
Hubble Ultra-Deep Field.
Stellar evolution.
Velocity dispersion.
Hawking radiation.
Ultimate fate of the universe.


February 5, 2017 Posted by | Astronomy, Books, cosmology, Physics | Leave a comment

Random stuff

i. Fire works a little differently than people imagine. A great ask-science comment. See also AugustusFink-nottle’s comment in the same thread.

ii.

iii. I was very conflicted about whether to link to this because I haven’t actually spent any time looking at it myself, so I don’t know if it’s any good, but according to somebody (?) who linked to it on SSC, the people behind this stuff have academic backgrounds in evolutionary biology, which is something at least (whether you think this is a good thing or not will probably depend greatly on your opinion of evolutionary biologists, but I’ve definitely learned a lot more about human mating patterns, partner interaction patterns, etc. from evolutionary biologists than I have from personal experience, so I’m probably in the ‘they-sometimes-have-interesting-ideas-about-these-topics-and-those-ideas-may-not-be-terrible’-camp). I figure these guys are much more application-oriented than some of the previous sources I’ve read on related topics, such as Kappeler et al. I add the link mostly so that if, in five years’ time, I have a stroke that obliterates most of my decision-making skills, causing me to decide that entering the dating market might be a good idea, I’ll have some idea where it might make sense to start.

iv. Stereotype (In)Accuracy in Perceptions of Groups and Individuals.

“Are stereotypes accurate or inaccurate? We summarize evidence that stereotype accuracy is one of the largest and most replicable findings in social psychology. We address controversies in this literature, including the long-standing  and continuing but unjustified emphasis on stereotype inaccuracy, how to define and assess stereotype accuracy, and whether stereotypic (vs. individuating) information can be used rationally in person perception. We conclude with suggestions for building theory and for future directions of stereotype (in)accuracy research.”

A few quotes from the paper:

“Demographic stereotypes are accurate. Research has consistently shown moderate to high levels of correspondence accuracy for demographic (e.g., race/ethnicity, gender) stereotypes […]. Nearly all accuracy correlations for consensual stereotypes about race/ethnicity and gender exceed .50 (compared to only 5% of social psychological findings; Richard, Bond, & Stokes-Zoota, 2003). […] Rather than being based in cultural myths, the shared component of stereotypes is often highly accurate. This pattern cannot be easily explained by motivational or social-constructionist theories of stereotypes and probably reflects a “wisdom of crowds” effect […] personal stereotypes are also quite accurate, with correspondence accuracy for roughly half exceeding r = .50.”

“We found 34 published studies of racial-, ethnic-, and gender-stereotype accuracy. Although not every study examined discrepancy scores, when they did, a plurality or majority of all consensual stereotype judgments were accurate. […] In these 34 studies, when stereotypes were inaccurate, there was more evidence of underestimating than overestimating actual demographic group differences […] Research assessing the accuracy of  miscellaneous other stereotypes (e.g., about occupations, college majors, sororities, etc.) has generally found accuracy levels comparable to those for demographic stereotypes”

“A common claim […] is that even though many stereotypes accurately capture group means, they are still not accurate because group means cannot describe every individual group member. […] If people were rational, they would use stereotypes to judge individual targets when they lack information about targets’ unique personal characteristics (i.e., individuating information), when the stereotype itself is highly diagnostic (i.e., highly informative regarding the judgment), and when available individuating information is ambiguous or incompletely useful. People’s judgments robustly conform to rational predictions. In the rare situations in which a stereotype is highly diagnostic, people rely on it (e.g., Crawford, Jussim, Madon, Cain, & Stevens, 2011). When highly diagnostic individuating information is available, people overwhelmingly rely on it (Kunda & Thagard, 1996; effect size averaging r = .70). Stereotype biases average no higher than r = .10 (Jussim, 2012) but reach r = .25 in the absence of individuating information (Kunda & Thagard, 1996). The more diagnostic individuating information people have, the less they stereotype (Crawford et al., 2011; Krueger & Rothbart, 1988). Thus, people do not indiscriminately apply their stereotypes to all individual members of stereotyped groups.” (Funder incidentally talked about this stuff as well in his book Personality Judgment).
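
For readers unfamiliar with the measure the paper relies on: ‘correspondence accuracy’ is essentially the correlation between people’s judgments about a set of groups or attributes and the corresponding measured criterion values. A minimal sketch of that calculation, using made-up illustrative numbers rather than anything from the paper:

# Correspondence accuracy as a Pearson correlation between judged and actual values.
# The numbers below are purely illustrative, not taken from the paper.
from statistics import correlation   # Python 3.10+

judged_values = [0.55, 0.30, 0.70, 0.45, 0.60]   # e.g. perceived group means on some attribute
actual_values = [0.50, 0.35, 0.65, 0.40, 0.70]   # criterion (measured) values for the same groups

print(round(correlation(judged_values, actual_values), 2))   # ~.89 here; the paper reports many correlations above .50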

One thing worth mentioning in the context of stereotypes is that if you look at stuff like crime data – which sadly not many people do – and you stratify based on stuff like country of origin, the subgroup differences you observe tend to be very large. The differences are often not on the order of 10%, the sort of gap which could probably be ignored without major consequences; they can easily amount to one or two orders of magnitude. In some contexts the differences are so large that it’s frankly idiotic to assume there are none. To give an example, in Germany the probability that a random person, about whom you know nothing, has been a suspect in a thievery case is 22% if that random person happens to be of Algerian extraction, whereas it’s only 0.27% if you’re dealing with an immigrant from China. Roughly one in 13 of those Algerians has also been involved in a case of bodily harm, which is true of fewer than one in 400 of the Chinese immigrants.

v. Assessing Immigrant Integration in Sweden after the May 2013 Riots. Some data from the article:

“Today, about one-fifth of Sweden’s population has an immigrant background, defined as those who were either born abroad or born in Sweden to two immigrant parents. The foreign born comprised 15.4 percent of the Swedish population in 2012, up from 11.3 percent in 2000 and 9.2 percent in 1990 […] Of the estimated 331,975 asylum applicants registered in EU countries in 2012, 43,865 (or 13 percent) were in Sweden. […] More than half of these applications were from Syrians, Somalis, Afghanis, Serbians, and Eritreans. […] One town of about 80,000 people, Södertälje, since the mid-2000s has taken in more Iraqi refugees than the United States and Canada combined.”

“Coupled with […] macroeconomic changes, the largely humanitarian nature of immigrant arrivals since the 1970s has posed challenges of labor market integration for Sweden, as refugees often arrive with low levels of education and transferable skills […] high unemployment rates have disproportionately affected immigrant communities in Sweden. In 2009-10, Sweden had the highest gap between native and immigrant employment rates among OECD countries. Approximately 63 percent of immigrants were employed compared to 76 percent of the native-born population. This 13 percentage-point gap is significantly greater than the OECD average […] Explanations for the gap include less work experience and domestic formal qualifications such as language skills among immigrants […] Among recent immigrants, defined as those who have been in the country for less than five years, the employment rate differed from that of the native born by more than 27 percentage points. In 2011, the Swedish newspaper Dagens Nyheter reported that 35 percent of the unemployed registered at the Swedish Public Employment Service were foreign born, up from 22 percent in 2005.”

“As immigrant populations have grown, Sweden has experienced a persistent level of segregation — among the highest in Western Europe. In 2008, 60 percent of native Swedes lived in areas where the majority of the population was also Swedish, and 20 percent lived in areas that were virtually 100 percent Swedish. In contrast, 20 percent of Sweden’s foreign born lived in areas where more than 40 percent of the population was also foreign born.”

vi. Book recommendations. Or rather, author recommendations. A while back I asked ‘the people of SSC’ if they knew of any fiction authors I hadn’t yet read who were both funny and easy to read. I got a lot of good suggestions, and the roughly 20 Dick Francis novels I read during the fall were a direct consequence of that thread.

vii. On the genetic structure of Denmark.

viii. Religious Fundamentalism and Hostility against Out-groups: A Comparison of Muslims and Christians in Western Europe.

“On the basis of an original survey among native Christians and Muslims of Turkish and Moroccan origin in Germany, France, the Netherlands, Belgium, Austria and Sweden, this paper investigates four research questions comparing native Christians to Muslim immigrants: (1) the extent of religious fundamentalism; (2) its socio-economic determinants; (3) whether it can be distinguished from other indicators of religiosity; and (4) its relationship to hostility towards out-groups (homosexuals, Jews, the West, and Muslims). The results indicate that religious fundamentalist attitudes are much more widespread among Sunnite Muslims than among native Christians, even after controlling for the different demographic and socio-economic compositions of these groups. […] Fundamentalist believers […] show very high levels of out-group hostility, especially among Muslims.”

ix. Portal: Dinosaurs. It would have been so incredibly awesome to have had access to this kind of stuff back when I was a child. The portal includes links to articles with names like ‘Bone Wars‘ – what’s not to like? Again, awesome!

x. “you can’t determine if something is truly random from observations alone. You can only determine if something is not truly random.” (link) An important insight well expressed.
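
A small illustration of that asymmetry: a statistical test on observed outcomes can reject randomness (a grossly biased coin fails it), but passing the test proves nothing, since a deterministic pseudorandom generator passes just as easily. The particular test and threshold below are arbitrary illustrative choices:

# You can reject randomness from observations, but never confirm it.
# Illustrative check: is a sequence of coin flips consistent with a fair coin?
from math import sqrt
import random

def looks_nonrandom(flips, z_threshold=4.0):
    """Return True if the number of heads is wildly unlikely for a fair coin."""
    n = len(flips)
    heads = sum(flips)
    z = abs(heads - n / 2) / sqrt(n / 4)   # normal approximation to the binomial
    return z > z_threshold

biased = [1] * 900 + [0] * 100            # 90% heads
print(looks_nonrandom(biased))            # True -- we can tell this is NOT a fair random source

pseudo = [random.randint(0, 1) for _ in range(1000)]
print(looks_nonrandom(pseudo))            # almost surely False -- yet the source is a deterministic PRNG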

xi. Chessprogramming. If you’re interested in having a look at how chess programs work, this is a neat resource. The wiki contains lots of links with information on specific sub-topics of interest. Also chess-related: The World Championship match between Carlsen and Karjakin has started. To the extent that I follow the live coverage, it will be Svidler et al.’s coverage on chess24. Robin van Kampen and Eric Hansen – both 2600+ Elo GMs – did quite well yesterday, in my opinion.
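
For a flavour of the sort of thing that wiki documents: the core of a classical chess engine is a depth-limited search with alpha-beta pruning over an evaluation function. Below is a minimal, game-agnostic sketch of that idea; the evaluate, legal_moves, and apply_move functions are hypothetical placeholders a real engine would supply, not anything taken from the chessprogramming wiki itself:

# Minimal negamax search with alpha-beta pruning (the core idea behind classical chess engines).
# 'evaluate', 'legal_moves', and 'apply_move' are hypothetical placeholders for a real engine's
# position evaluation, move generation, and move making.

def negamax(position, depth, alpha, beta, evaluate, legal_moves, apply_move):
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)          # score from the side to move's point of view
    best = float("-inf")
    for move in moves:
        child = apply_move(position, move)
        score = -negamax(child, depth - 1, -beta, -alpha, evaluate, legal_moves, apply_move)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:                  # opponent would never allow this line: prune
            break
    return best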

xii. Justified by More Than Logos Alone (Razib Khan).

“Very few are Roman Catholic because they have read Aquinas’ Five Ways. Rather, they are Roman Catholic, in order of necessity, because God aligns with their deep intuitions, basic cognitive needs in terms of cosmological coherency, and because the church serves as an avenue for socialization and repetitive ritual which binds individuals to the greater whole. People do not believe in Catholicism as often as they are born Catholics, and the Catholic religion is rather well fitted to a range of predispositions to the typical human.”

November 12, 2016 Posted by | Books, Chemistry, Chess, Data, dating, Demographics, Genetics, Geography, immigration, Paleontology, Papers, Physics, Psychology, Random stuff, Religion | Leave a comment