(Smbc, second one here. There were a lot of relevant ones to choose from – this one also seems ‘relevant’. And this one. And this one. This one? This one? This one? Maybe this one? In the end I decided to only include the two comics displayed above, but you should be aware of the others…)
The book is a bit dated; it was published before the LHC even started operations. But it’s a decent read. I can’t say I liked it as much as the other books in the series which I recently covered, on galaxies and the laws of thermodynamics, mostly because this book is more pop-science-y than those, so the level of coverage was at times a little disappointing by comparison – but that said, the book is far from terrible, I learned a lot, and I can imagine the author faced a very difficult task.
Below I have added a few observations from the book and some links to articles about some key concepts and things mentioned/covered in the book.
“[T]oday we view the collisions between high-energy particles as a means of studying the phenomena that ruled when the universe was newly born. We can study how matter was created and discover what varieties there were. From this we can construct the story of how the material universe has developed from that original hot cauldron to the cool conditions here on Earth today, where matter is made from electrons, without need for muons and taus, and where the seeds of atomic nuclei are just the up and down quarks, without need for strange or charming stuff.
In very broad terms, this is the story of what has happened. The matter that was born in the hot Big Bang consisted of quarks and particles like the electron. As concerns the quarks, the strange, charm, bottom, and top varieties are highly unstable, and died out within a fraction of a second, the weak force converting them into their more stable progeny, the up and down varieties which survive within us today. A similar story took place for the electron and its heavier versions, the muon and tau. This latter pair are also unstable and died out, courtesy of the weak force, leaving the electron as survivor. In the process of these decays, lots of neutrinos and electromagnetic radiation were also produced, which continue to swarm throughout the universe some 14 billion years later.
The up and down quarks and the electrons were the survivors while the universe was still very young and hot. As it cooled, the quarks were stuck to one another, forming protons and neutrons. The mutual gravitational attraction among these particles gathered them into large clouds that were primaeval stars. As they bumped into one another in the heart of these stars, the protons and neutrons built up the seeds of heavier elements. Some stars became unstable and exploded, ejecting these atomic nuclei into space, where they trapped electrons to form atoms of matter as we know it. […] What we can now do in experiments is in effect reverse the process and observe matter change back into its original primaeval forms.”
“A fully grown human is a bit less than two metres tall. […] to set the scale I will take humans to be about 1 metre in ‘order of magnitude’ […yet another smbc comic springs to mind here…] […] Then, going to the large scales of astronomy, we have the radius of the Earth, some 10⁷ m […]; that of the Sun is 10⁹ m; our orbit around the Sun is 10¹¹ m […] note that the relative sizes of the Earth, Sun, and our orbit are factors of about 100. […] Whereas the atom is typically 10⁻¹⁰ m across, its central nucleus measures only about 10⁻¹⁴ to 10⁻¹⁵ m. So beware the oft-quoted analogy that atoms are like miniature solar systems with the ‘planetary electrons’ encircling the ‘nuclear sun’. The real solar system has a factor 1/100 between our orbit and the size of the central Sun; the atom is far emptier, with 1/10,000 as the corresponding ratio between the extent of its central nucleus and the radius of the atom. And this emptiness continues. Individual protons and neutrons are about 10⁻¹⁵ m in diameter […] the relative size of quark to proton is some 1/10,000 (at most!). The same is true for the ‘planetary’ electron relative to the proton ‘sun’: 1/10,000 rather than the ‘mere’ 1/100 of the real solar system. So the world within the atom is incredibly empty.”
“Our inability to see atoms has to do with the fact that light acts like a wave and waves do not scatter easily from small objects. To see a thing, the wavelength of the beam must be smaller than that thing is. Therefore, to see molecules or atoms needs illuminations whose wavelengths are similar to or smaller than them. Light waves, like those our eyes are sensitive to, have wavelength about 10⁻⁷ m […]. This is still a thousand times bigger than the size of an atom. […] To have any chance of seeing molecules and atoms we need light with wavelengths much shorter than these. [And so we move into the world of X-ray crystallography and particle accelerators] […] To probe deep within atoms we need a source of very short wavelength. […] the technique is to use the basic particles […], such as electrons and protons, and speed them in electric fields. The higher their speed, the greater their energy and momentum and the shorter their associated wavelength. So beams of high-energy particles can resolve things as small as atoms.”
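The energy–wavelength link the quote describes is the de Broglie relation, λ = h/p. A quick back-of-the-envelope sketch (not from the book; standard constants, non-relativistic approximation) shows that even a modest 1 keV electron beam already has a wavelength smaller than an atom:

```python
import math

H = 6.626e-34       # Planck's constant, J*s
M_E = 9.109e-31     # electron mass, kg
EV = 1.602e-19      # one electronvolt in joules

def de_broglie_wavelength(kinetic_energy_ev):
    """Non-relativistic de Broglie wavelength of an electron, in metres."""
    p = math.sqrt(2 * M_E * kinetic_energy_ev * EV)  # momentum from E = p^2 / 2m
    return H / p

# A 1 keV electron: wavelength ~4e-11 m, already smaller than an atom (~1e-10 m).
print(de_broglie_wavelength(1e3))
```

At higher energies the approximation breaks down and the relativistic formula is needed, but the trend the book describes holds: more energy, shorter wavelength, finer resolution.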
“About 400 billion neutrinos from the Sun pass through each one of us each second.”
“For a century beams of particles have been used to reveal the inner structure of atoms. These have progressed from naturally occurring alpha and beta particles, courtesy of natural radioactivity, through cosmic rays to intense beams of electrons, protons, and other particles at modern accelerators. […] Different particles probe matter in complementary ways. It has been by combining the information from [the] various approaches that our present rich picture has emerged. […] It was the desire to replicate the cosmic rays under controlled conditions that led to modern high-energy physics at accelerators. […] Electrically charged particles are accelerated by electric forces. Apply enough electric force to an electron, say, and it will go faster and faster in a straight line […] Under the influence of a magnetic field, the path of a charged particle will curve. By using electric fields to speed them, and magnetic fields to bend their trajectory, we can steer particles round circles over and over again. This is the basic idea behind huge rings, such as the 27-km-long accelerator at CERN in Geneva. […] our ability to learn about the origins and nature of matter have depended upon advances on two fronts: the construction of ever more powerful accelerators, and the development of sophisticated means of recording the collisions.”
Weak interaction (‘good article’).
Resonance (particle physics).
Particle accelerator/Cyclotron/Synchrotron/Linear particle accelerator.
Sudbury Neutrino Observatory.
W and Z bosons.
Electroweak interaction (/theory).
Charm (quantum number).
Inverse beta decay.
“Among the hundreds of laws that describe the universe, there lurks a mighty handful. These are the laws of thermodynamics, which summarize the properties of energy and its transformation from one form to another. […] The mighty handful consists of four laws, with the numbering starting inconveniently at zero and ending at three. The first two laws (the ‘zeroth’ and the ‘first’) introduce two familiar but nevertheless enigmatic properties, the temperature and the energy. The third of the four (the ‘second law’) introduces what many take to be an even more elusive property, the entropy […] The second law is one of the all-time great laws of science […]. The fourth of the laws (the ‘third law’) has a more technical role, but rounds out the structure of the subject and both enables and foils its applications.”
“Classical thermodynamics is the part of thermodynamics that emerged during the nineteenth century before everyone was fully convinced about the reality of atoms, and concerns relationships between bulk properties. You can do classical thermodynamics even if you don’t believe in atoms. Towards the end of the nineteenth century, when most scientists accepted that atoms were real and not just an accounting device, there emerged the version of thermodynamics called statistical thermodynamics, which sought to account for the bulk properties of matter in terms of its constituent atoms. The ‘statistical’ part of the name comes from the fact that in the discussion of bulk properties we don’t need to think about the behaviour of individual atoms but we do need to think about the average behaviour of myriad atoms. […] In short, whereas dynamics deals with the behaviour of individual bodies, thermodynamics deals with the average behaviour of vast numbers of them.”
“In everyday language, heat is both a noun and a verb. Heat flows; we heat. In thermodynamics heat is not an entity or even a form of energy: heat is a mode of transfer of energy. It is not a form of energy, or a fluid of some kind, or anything of any kind. Heat is the transfer of energy by virtue of a temperature difference. Heat is the name of a process, not the name of an entity.”
“The supply of 1J of energy as heat to 1 g of water results in an increase in temperature of about 0.2°C. Substances with a high heat capacity (water is an example) require a larger amount of heat to bring about a given rise in temperature than those with a small heat capacity (air is an example). In formal thermodynamics, the conditions under which heating takes place must be specified. For instance, if the heating takes place under conditions of constant pressure with the sample free to expand, then some of the energy supplied as heat goes into expanding the sample and therefore to doing work. Less energy remains in the sample, so its temperature rises less than when it is constrained to have a constant volume, and therefore we report that its heat capacity is higher. The difference between heat capacities of a system at constant volume and at constant pressure is of most practical significance for gases, which undergo large changes in volume as they are heated in vessels that are able to expand.”
“Heat capacities vary with temperature. An important experimental observation […] is that the heat capacity of every substance falls to zero when the temperature is reduced towards absolute zero (T = 0). A very small heat capacity implies that even a tiny transfer of heat to a system results in a significant rise in temperature, which is one of the problems associated with achieving very low temperatures when even a small leakage of heat into a sample can have a serious effect on the temperature”.
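The two quotes above both turn on the same simple relation, ΔT = q/C: a large heat capacity means a small temperature rise per joule, and vice versa. A minimal sketch (not from the book; the specific heat values are standard textbook figures):

```python
def temperature_rise(heat_j, mass_g, specific_heat_j_per_g_k):
    """Temperature rise from supplying heat_j joules to a sample, ignoring losses."""
    return heat_j / (mass_g * specific_heat_j_per_g_k)

# 1 J into 1 g of water (c ~ 4.18 J/(g*K)) gives roughly the 0.2 C the quote mentions:
print(temperature_rise(1, 1, 4.18))
# Air has a much smaller specific heat (~1.0 J/(g*K)), so the same joule heats it more:
print(temperature_rise(1, 1, 1.0))
```

The same arithmetic explains the low-temperature problem: as C falls towards zero near T = 0, even a tiny heat leak q produces a large ΔT.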
“A crude restatement of Clausius’s statement is that refrigerators don’t work unless you turn them on.”
“The Gibbs energy is of the greatest importance in chemistry and in the field of bioenergetics, the study of energy utilization in biology. Most processes in chemistry and biology occur at constant temperature and pressure, and so to decide whether they are spontaneous and able to produce non-expansion work we need to consider the Gibbs energy. […] Our bodies live off Gibbs energy. Many of the processes that constitute life are non-spontaneous reactions, which is why we decompose and putrefy when we die and these life-sustaining reactions no longer continue. […] In biology a very important ‘heavy weight’ reaction involves the molecule adenosine triphosphate (ATP). […] When a terminal phosphate group is snipped off by reaction with water […], to form adenosine diphosphate (ADP), there is a substantial decrease in Gibbs energy, arising in part from the increase in entropy when the group is liberated from the chain. Enzymes in the body make use of this change in Gibbs energy […] to bring about the linking of amino acids, and gradually build a protein molecule. It takes the effort of about three ATP molecules to link two amino acids together, so the construction of a typical protein of about 150 amino acid groups needs the energy released by about 450 ATP molecules. […] The ADP molecules, the husks of dead ATP molecules, are too valuable just to discard. They are converted back into ATP molecules by coupling to reactions that release even more Gibbs energy […] and which reattach a phosphate group to each one. These heavy-weight reactions are the reactions of metabolism of the food that we need to ingest regularly.”
Links of interest below – the stuff covered in the links is the sort of stuff covered in this book:
Laws of thermodynamics (article includes links to many other articles of interest, including links to each of the laws mentioned above).
Intensive and extensive properties.
Conservation of energy.
Microscopic view of heat.
Reversible process (thermodynamics).
Coefficient of performance.
Helmholtz free energy.
Gibbs free energy.
I have added some observations from the book below, as well as some links covering people/ideas/stuff discussed/mentioned in the book.
“On average, out of every 100 newly born star systems, 60 are binaries and 40 are triples. Solitary stars like the Sun are later ejected from triple systems formed in this way.”
“…any object will become a black hole if it is sufficiently compressed. For any mass, there is a critical radius, called the Schwarzschild radius, for which this occurs. For the Sun, the Schwarzschild radius is just under 3 km; for the Earth, it is just under 1 cm. In either case, if the entire mass of the object were squeezed within the appropriate Schwarzschild radius it would become a black hole.”
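The critical radius in the quote is given by r_s = 2GM/c². A quick check (not from the book; standard constants) reproduces both of its figures:

```python
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8      # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Critical radius r_s = 2GM/c^2 below which a compressed mass becomes a black hole."""
    return 2 * G * mass_kg / C**2

print(schwarzschild_radius(1.989e30))   # Sun:   ~2950 m, just under 3 km
print(schwarzschild_radius(5.972e24))   # Earth: ~0.0089 m, just under 1 cm
```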
“It only became possible to study the centre of our Galaxy when radio telescopes and other instruments that do not rely on visible light became available. There is a great deal of dust in the plane of the Milky Way […] This blocks out visible light. But longer wavelengths penetrate the dust more easily. That is why sunsets are red – short wavelength (blue) light is scattered out of the line of sight by dust in the atmosphere, while the longer wavelength red light gets through to your eyes. So our understanding of the galactic centre is largely based on infrared and radio observations.”
“there is strong evidence that the Milky Way Galaxy is a completely ordinary disc galaxy, a typical representative of its class. Since that is the case, it means that we can confidently use our inside knowledge of the structure and evolution of our own Galaxy, based on close-up observations, to help our understanding of the origin and nature of disc galaxies in general. We do not occupy a special place in the Universe; but this was only finally established at the end of the 20th century. […] in the decades following Hubble’s first measurements of the cosmological distance scale, the Milky Way still seemed like a special place. Hubble’s calculation of the distance scale implied that other galaxies are relatively close to our Galaxy, and so they would not have to be very big to appear as large as they do on the sky; the Milky Way seemed to be by far the largest galaxy in the Universe. We now know that Hubble was wrong. […] the value he initially found for the Hubble Constant was about seven times bigger than the value accepted today. In other words, all the extragalactic distances Hubble inferred were seven times too small. But this was not realized overnight. The cosmological distance scale was only revised slowly, over many decades, as observations improved and one error after another was corrected. […] The importance of determining the cosmological distance scale accurately, more than half a century after Hubble’s pioneering work, was still so great that it was a primary justification for the existence of the Hubble Space Telescope (HST).”
“The key point to grasp […] is that the expansion described by [Einstein’s] equations is an expansion of space as time passes. The cosmological redshift is not a Doppler effect caused by galaxies moving outward through space, as if fleeing from the site of some great explosion, but occurs because the space between the galaxies is stretching. So the spaces between galaxies increase while light is on its way from one galaxy to another. This stretches the light waves to longer wavelengths, which means shifting them towards the red end of the spectrum. […] The second key point about the universal expansion is that it does not have a centre. There is nothing special about the fact that we observe galaxies receding with redshifts proportional to their distances from the Milky Way. […] whichever galaxy you happen to be sitting in, you will see the same thing – redshift proportional to distance.”
“The age of the Universe is determined by studying some of the largest things in the Universe, clusters of galaxies, and analysing their behaviour using the general theory of relativity. Our understanding of how stars work, from which we calculate their ages, comes from studying some of the smallest things in the Universe, the nuclei of atoms, and using the other great theory of 20th-century physics, quantum mechanics, to calculate how nuclei fuse with one another to release the energy that keeps stars shining. The fact that the two ages agree with one another, and that the ages of the oldest stars are just a little bit less than the age of the Universe, is one of the most compelling reasons to think that the whole of 20th-century physics works and provides a good description of the world around us, from the very small scale to the very large scale.”
“Planets are small objects orbiting a large central mass, and the gravity of the Sun dominates their motion. Because of this, the speed with which a planet moves […] is inversely proportional to the square root of its distance from the centre of the Solar System. Jupiter is farther from the Sun than we are, so it moves more slowly in its orbit than the Earth, as well as having a larger orbit. But all the stars in the disc of a galaxy move at the same speed. Stars farther out from the centre still have bigger orbits, so they still take longer to complete one circuit of the galaxy. But they are all travelling at essentially the same orbital speed through space.”
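For a body on a circular orbit around a dominant central mass, the speed is v = √(GM/r), which is the 1/√r fall-off the quote describes. A quick sketch (not from the book; standard constants and mean orbital radii) for Earth and Jupiter:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg

def keplerian_speed(orbit_radius_m):
    """Circular orbital speed v = sqrt(GM/r) around a dominant central mass."""
    return math.sqrt(G * M_SUN / orbit_radius_m)

print(keplerian_speed(1.496e11))   # Earth:   ~29.8 km/s
print(keplerian_speed(7.785e11))   # Jupiter: ~13.1 km/s, slower despite the bigger orbit
```

The flat rotation curves of galaxies — stars at all radii moving at roughly the same speed instead of following this Keplerian fall-off — are precisely what the dark-matter hypothesis was invoked to explain.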
“The importance of studying objects at great distances across the Universe is that when we look at an object that is, say, 10 billion light years away, we see it by light which left it 10 billion years ago. This is the ‘look back time’, and it means that telescopes are in a sense time machines, showing us what the Universe was like when it was younger. The light from a distant galaxy is old, in the sense that it has been a long time on its journey; but the galaxy we see using that light is a young galaxy. […] For distant objects, because light has taken a long time on its journey to us, the Universe has expanded significantly while the light was on its way. […] This raises problems defining exactly what you mean by the ‘present distance’ to a remote galaxy”
“Among the many advantages that photographic and electronic recording methods have over the human eye, the most fundamental is that the longer they look, the more they see. Human eyes essentially give us a real-time view of our surroundings, and allow us to see things – such as stars – that are brighter than a certain limit. If an object is too faint to see, once your eyes have adapted to the dark no amount of staring in its direction will make it visible. But the detectors attached to modern telescopes keep on adding up the light from faint sources as long as they are pointing at them. A longer exposure will reveal fainter objects than a short exposure does, as the photons (particles of light) from the source fall on the detector one by one and the total gradually grows.”
“Nobody can be quite sure where the supermassive black holes at the hearts of galaxies today came from, but it seems at least possible that […] merging of black holes left over from the first generation of stars [in the universe] began the process by which supermassive black holes, feeding off the matter surrounding them, formed. […] It seems very unlikely that supermassive black holes formed first and then galaxies grew around them; they must have formed together, in a process sometimes referred to as co-evolution, from the seeds provided by the original black holes of a few hundred solar masses and the raw materials of the dense clouds of baryons in the knots in the filamentary structure. […] About one in a hundred of the galaxies seen at low redshifts are actively involved in the late stages of mergers, but these processes take so little time, compared with the age of the Universe, that the statistics imply that about half of all the galaxies visible nearby are the result of mergers between similarly sized galaxies in the past seven or eight billion years. Disc galaxies like the Milky Way seem themselves to have been built up from smaller sub-units, starting out with the spheroid and adding bits and pieces as time passed. […] there were many more small galaxies when the Universe was young than we see around us today. This is exactly what we would expect if many of the small galaxies have either grown larger through mergers or been swallowed up by larger galaxies.”
Links of interest:
Galaxy (‘featured article’).
The Great Debate.
Henrietta Swan Leavitt (‘good article’).
Ejnar Hertzsprung. (Before reading this book, I had no idea one of the people behind the famous Hertzsprung–Russell diagram was a Dane. I blame my physics teachers. I was probably told this by one of them, but if the guy in question had been a better teacher, I’d have listened, and I’d have known this.)
Globular cluster (‘featured article’).
Redshift (‘featured article’).
Refracting telescope/Reflecting telescope.
General relativity (featured).
The Big Bang theory (featured).
Age of the universe.
Type Ia supernova.
Cosmic microwave background.
Cold dark matter.
Active galactic nucleus.
Hubble Ultra-Deep Field.
Ultimate fate of the universe.
i. Fire works a little differently than people imagine. A great ask-science comment. See also AugustusFink-nottle’s comment in the same thread.
iii. I was very conflicted about whether to link to this because I haven’t actually spent any time looking at it myself so I don’t know if it’s any good, but according to somebody (?) who linked to it on SSC the people behind this stuff have academic backgrounds in evolutionary biology, which is something at least (whether you think this is a good thing or not will probably depend greatly on your opinion of evolutionary biologists, but I’ve definitely learned a lot more about human mating patterns, partner interaction patterns, etc. from evolutionary biologists than I have from personal experience, so I’m probably in the ‘they-sometimes-have-interesting-ideas-about-these-topics-and-those-ideas-may-not-be-terrible’-camp). I figure these guys are much more application-oriented than were some of the previous sources I’ve read on related topics, such as e.g. Kappeler et al. I add the link mostly so that if I in five years time have a stroke that obliterates most of my decision-making skills, causing me to decide that entering the dating market might be a good idea, I’ll have some idea where it might make sense to start.
“Are stereotypes accurate or inaccurate? We summarize evidence that stereotype accuracy is one of the largest and most replicable findings in social psychology. We address controversies in this literature, including the long-standing and continuing but unjustified emphasis on stereotype inaccuracy, how to define and assess stereotype accuracy, and whether stereotypic (vs. individuating) information can be used rationally in person perception. We conclude with suggestions for building theory and for future directions of stereotype (in)accuracy research.”
A few quotes from the paper:
“Demographic stereotypes are accurate. Research has consistently shown moderate to high levels of correspondence accuracy for demographic (e.g., race/ethnicity, gender) stereotypes […]. Nearly all accuracy correlations for consensual stereotypes about race/ethnicity and gender exceed .50 (compared to only 5% of social psychological findings; Richard, Bond, & Stokes-Zoota, 2003).[…] Rather than being based in cultural myths, the shared component of stereotypes is often highly accurate. This pattern cannot be easily explained by motivational or social-constructionist theories of stereotypes and probably reflects a “wisdom of crowds” effect […] personal stereotypes are also quite accurate, with correspondence accuracy for roughly half exceeding r =.50.”
“We found 34 published studies of racial-, ethnic-, and gender-stereotype accuracy. Although not every study examined discrepancy scores, when they did, a plurality or majority of all consensual stereotype judgments were accurate. […] In these 34 studies, when stereotypes were inaccurate, there was more evidence of underestimating than overestimating actual demographic group differences […] Research assessing the accuracy of miscellaneous other stereotypes (e.g., about occupations, college majors, sororities, etc.) has generally found accuracy levels comparable to those for demographic stereotypes”
“A common claim […] is that even though many stereotypes accurately capture group means, they are still not accurate because group means cannot describe every individual group member. […] If people were rational, they would use stereotypes to judge individual targets when they lack information about targets’ unique personal characteristics (i.e., individuating information), when the stereotype itself is highly diagnostic (i.e., highly informative regarding the judgment), and when available individuating information is ambiguous or incompletely useful. People’s judgments robustly conform to rational predictions. In the rare situations in which a stereotype is highly diagnostic, people rely on it (e.g., Crawford, Jussim, Madon, Cain, & Stevens, 2011). When highly diagnostic individuating information is available, people overwhelmingly rely on it (Kunda & Thagard, 1996; effect size averaging r = .70). Stereotype biases average no higher than r = .10 ( Jussim, 2012) but reach r = .25 in the absence of individuating information (Kunda & Thagard, 1996). The more diagnostic individuating information people have, the less they stereotype (Crawford et al., 2011; Krueger & Rothbart, 1988). Thus, people do not indiscriminately apply their stereotypes to all individual members of stereotyped groups.” (Funder incidentally talked about this stuff as well in his book Personality Judgment).
One thing worth mentioning in the context of stereotypes is that if you look at stuff like crime data – which sadly not many people do – and stratify by e.g. country of origin, the sub-group differences you observe tend to be very large. These differences are often not on the order of something like 10%, the sort of difference which could probably be ignored without major consequences; some subgroup differences can easily be on the order of one or two orders of magnitude. In some contexts the differences are so large as to make it downright idiotic to assume there are none – it doesn’t make sense, it’s frankly a stupid thing to do. To give an example, in Germany the probability that a random person, about whom you know nothing, has been a suspect in a thievery case is 22% if that random person happens to be of Algerian extraction, whereas it’s only 0.27% if you’re dealing with an immigrant from China. Roughly one in 13 of those Algerians has also been involved in a case of bodily harm, which is true of fewer than one in 400 of the Chinese immigrants.
v. Assessing Immigrant Integration in Sweden after the May 2013 Riots. Some data from the article:
“Today, about one-fifth of Sweden’s population has an immigrant background, defined as those who were either born abroad or born in Sweden to two immigrant parents. The foreign born comprised 15.4 percent of the Swedish population in 2012, up from 11.3 percent in 2000 and 9.2 percent in 1990 […] Of the estimated 331,975 asylum applicants registered in EU countries in 2012, 43,865 (or 13 percent) were in Sweden. […] More than half of these applications were from Syrians, Somalis, Afghanis, Serbians, and Eritreans. […] One town of about 80,000 people, Södertälje, since the mid-2000s has taken in more Iraqi refugees than the United States and Canada combined.”
“Coupled with […] macroeconomic changes, the largely humanitarian nature of immigrant arrivals since the 1970s has posed challenges of labor market integration for Sweden, as refugees often arrive with low levels of education and transferable skills […] high unemployment rates have disproportionately affected immigrant communities in Sweden. In 2009-10, Sweden had the highest gap between native and immigrant employment rates among OECD countries. Approximately 63 percent of immigrants were employed compared to 76 percent of the native-born population. This 13 percentage-point gap is significantly greater than the OECD average […] Explanations for the gap include less work experience and domestic formal qualifications such as language skills among immigrants […] Among recent immigrants, defined as those who have been in the country for less than five years, the employment rate differed from that of the native born by more than 27 percentage points. In 2011, the Swedish newspaper Dagens Nyheter reported that 35 percent of the unemployed registered at the Swedish Public Employment Service were foreign born, up from 22 percent in 2005.”
“As immigrant populations have grown, Sweden has experienced a persistent level of segregation — among the highest in Western Europe. In 2008, 60 percent of native Swedes lived in areas where the majority of the population was also Swedish, and 20 percent lived in areas that were virtually 100 percent Swedish. In contrast, 20 percent of Sweden’s foreign born lived in areas where more than 40 percent of the population was also foreign born.”
vi. Book recommendations. Or rather, author recommendations. A while back I asked ‘the people of SSC’ if they knew of any fiction authors I hadn’t read yet which were both funny and easy to read. I got a lot of good suggestions, and the roughly 20 Dick Francis novels I’ve read during the fall I’ve read as a consequence of that thread.
“On the basis of an original survey among native Christians and Muslims of Turkish and Moroccan origin in Germany, France, the Netherlands, Belgium, Austria and Sweden, this paper investigates four research questions comparing native Christians to Muslim immigrants: (1) the extent of religious fundamentalism; (2) its socio-economic determinants; (3) whether it can be distinguished from other indicators of religiosity; and (4) its relationship to hostility towards out-groups (homosexuals, Jews, the West, and Muslims). The results indicate that religious fundamentalist attitudes are much more widespread among Sunnite Muslims than among native Christians, even after controlling for the different demographic and socio-economic compositions of these groups. […] Fundamentalist believers […] show very high levels of out-group hostility, especially among Muslims.”
ix. Portal: Dinosaurs. It would have been so incredibly awesome to have had access to this kind of stuff back when I was a child. The portal includes links to articles with names like ‘Bone Wars‘ – what’s not to like? Again, awesome!
x. “you can’t determine if something is truly random from observations alone. You can only determine if something is not truly random.” (link) An important insight well expressed.
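The asymmetry in the quote can be made concrete with a toy statistical test (my own illustration, not from the linked discussion; it is a simplified version of the standard monobit frequency test). A test can only ever reject randomness; a sequence that passes may still be fully deterministic:

```python
import math

def frequency_test_statistic(bits):
    """Normalized excess of 1s over 0s. A large value lets us reject randomness;
    a small value proves nothing, since deterministic sequences can pass too."""
    s = sum(1 if b else -1 for b in bits)
    return abs(s) / math.sqrt(len(bits))

biased = [1] * 1000                         # obviously non-random: statistic is huge
alternating = [i % 2 for i in range(1000)]  # fully deterministic, yet passes this test
print(frequency_test_statistic(biased))       # far above any plausible threshold
print(frequency_test_statistic(alternating))  # 0.0 - not rejected, but not proven random
```

Real test batteries run many such tests, but the logical point survives: each one can only falsify randomness, never confirm it.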
xi. Chessprogramming. If you’re interested in having a look at how chess programs work, this is a neat resource. The wiki contains lots of links with information on specific sub-topics of interest. Also chess-related: The World Championship match between Carlsen and Karjakin has started. To the extent that I’ll be following the live coverage, I’ll be following Svidler et al.’s coverage on chess24. Robin van Kampen and Eric Hansen – both 2600+ elo GMs – did quite well yesterday, in my opinion.
xii. Justified by More Than Logos Alone (Razib Khan).
“Very few are Roman Catholic because they have read Aquinas’ Five Ways. Rather, they are Roman Catholic, in order of necessity, because God aligns with their deep intuitions, basic cognitive needs in terms of cosmological coherency, and because the church serves as an avenue for socialization and repetitive ritual which binds individuals to the greater whole. People do not believe in Catholicism as often as they are born Catholics, and the Catholic religion is rather well fitted to a range of predispositions to the typical human.”
i. On the youtube channel of the Institute for Advanced Study there has been a lot of activity over the last week or two (far more than 100 new lectures have been uploaded, and it seems new uploads are still being added at this point), and I’ve been watching a few of the recently uploaded astrophysics lectures. They’re quite technical, but you can follow enough of the content to have an enjoyable time despite not understanding everything:
This is a good lecture, very interesting. One major point made early on: “the take-away message is that the most common planet in the galaxy, at least at shorter periods, are planets for which there is no analogue in the solar system. The most common kind of planet in the galaxy is a planet with a radius of two Earth radii.” Another big take-away message is that small planets seem to be quite common (as noted in the conclusions, “16% of Sun-like stars have an Earth-sized planet”).
Of the lectures included in this post this was the one I liked the least; there are too many (‘obstructive’) questions/interactions between lecturer and attendants along the way, and those interactions are difficult to hear/understand. If you consider watching both this lecture and the one below, it would probably be wise to watch the one below first; in retrospect I concluded that some of the observations made early on in the lecture below would have been useful to know about before watching this one. (The first half of the lecture below was incidentally somewhat easier for me to follow than the second half, but especially the first half hour of it is really quite good, despite the bad start (which one can always blame on Microsoft…).)
ii. Words I’ve encountered recently (…or ‘recently’ – it’s been a while since I last posted one of these lists): Divagations, periphrasis, reedy, architrave, sett, pedipalp, tout, togs, edentulous, moue, tatty, tearaway, prorogue, piscine, fillip, sop, panniers, auxology, roister, prepossessing, cantle, catamite, couth, ordure, biddy, recrudescence, parvenu, scupper, husting, hackle, expatiate, affray, tatterdemalion, eructation, coppice, dekko, scull, fulmination, pollarding, grotty, secateurs, bumf (I must admit that I like this word – it seems fitting, somehow, to use that word for this concept…), durophagy, randy, (brief note to self: Advise people having children who ask me about suggestions for how to name them against using this name (or variants such as Randi), it does not seem like a great idea), effete, apricity, sororal, bint, coition, abaft, eaves, gadabout, lugubriously, retroussé, landlubber, deliquescence, antimacassar, inanition.
iii. “The point of rigour is not to destroy all intuition; instead, it should be used to destroy bad intuition while clarifying and elevating good intuition. It is only with a combination of both rigorous formalism and good intuition that one can tackle complex mathematical problems; one needs the former to correctly deal with the fine details, and the latter to correctly deal with the big picture. Without one or the other, you will spend a lot of time blundering around in the dark (which can be instructive, but is highly inefficient). So once you are fully comfortable with rigorous mathematical thinking, you should revisit your intuitions on the subject and use your new thinking skills to test and refine these intuitions rather than discard them. One way to do this is to ask yourself dumb questions; another is to relearn your field.” (Terry Tao, There’s more to mathematics than rigour and proofs)
iv. A century of trends in adult human height. A figure from the paper (Figure 3 – Change in adult height between the 1896 and 1996 birth cohorts):
(Click to view full size. WordPress seems to have changed the way you add images to a blog post – if this one has ended up annoyingly large, I apologize; I have tried to minimize it while still retaining detail, but the original file is huge). An observation from the paper:
“Men were taller than women in every country, on average by ~11 cm in the 1896 birth cohort and ~12 cm in the 1996 birth cohort […]. In the 1896 birth cohort, the male-female height gap in countries where average height was low was slightly larger than in taller nations. In other words, at the turn of the 20th century, men seem to have had a relative advantage over women in undernourished compared to better-nourished populations.”
v. I found this paper, on Exercise and Glucose Metabolism in Persons with Diabetes Mellitus, interesting in part because I’ve been very surprised a few times by offhand online statements made by diabetic athletes, who had observed that their blood glucose really didn’t drop all that fast during exercise. Rapid and annoyingly large drops in blood glucose during exercise have been a really consistent feature of my own life with diabetes during adulthood. It seems that there may be big inter-individual differences in terms of the effects of exercise on glucose in diabetics. From the paper:
“Typically, prolonged moderate-intensity aerobic exercise (i.e., 30–70% of one’s VO2max) causes a reduction in glucose concentrations because of a failure in circulating insulin levels to decrease at the onset of exercise.[12] During this type of physical activity, glucose utilization may be as high as 1.5 g/min in adolescents with type 1 diabetes[13] and exceed 2.0 g/min in adults with type 1 diabetes,[14] an amount that quickly lowers circulating glucose levels. Persons with type 1 diabetes have large interindividual differences in blood glucose responses to exercise, although some intraindividual reproducibility exists.[15] The wide ranging glycemic responses among individuals appears to be related to differences in pre-exercise blood glucose concentrations, the level of circulating counterregulatory hormones and the type/duration of the activity.[2]”
“Einstein emerges from this collection of quotes, drawn from many different sources, as a complete and fully rounded human being […] Knowledge of the darker side of Einstein’s life makes his achievement in science and in public affairs even more miraculous. This book shows him as he was – not a superhuman genius but a human genius, and all the greater for being human.”
I’ve recently read The Ultimate Quotable Einstein, from the foreword of which the above quote is taken. The book contains roughly 1600 quotes by or about Albert Einstein; most of the quotes are by Einstein himself, but it also includes more than 50 pages towards the end containing quotes by others about him. I was probably not in the main target group, but I do like good quote collections and figured there might be enough good quotes in the book to make it worth a try. On the other hand, after having read the foreword by Freeman Dyson I knew there would probably be a lot of quotes in the book which I wouldn’t find too interesting; I’m not really sure why I should give a crap if/why a guy who died more than 60 years ago, and whom I have never met and never will, was having an affair during the early 1920s, or why I should care what Einstein thought about his mother or his ex-wife – but if that kind of stuff interests you, the book covers those kinds of things as well. My own interest in Einstein, such as it is, is mainly in ‘Einstein the scientist’ (and perhaps also, in this particular context, ‘Einstein the aphorist’), not ‘Einstein the father’ or ‘Einstein the husband’. I also don’t find his political views very interesting, but again, if you want to know what Einstein thought about things like Zionism, pacifism, and world government, the book includes quotes on such topics as well.
Overall I should say that I was a little underwhelmed by the book and the quotes it includes, but I would also note that people who are interested in knowing more about Einstein will likely find a lot of valuable source material here, and that I did give the book 3 stars on goodreads. I did learn a lot of new things about Einstein by reading it, but this is not surprising given how little I knew about him before I started; for example I had no idea that he was offered the presidency of Israel a few years before his death. I noticed only two quotes which were included more than once (a quote on pages 187-188 was repeated on page 453, and a quote on page 295 was repeated on page 455), and although I cannot guarantee that there aren’t any other repeats, almost all quotes in the book are unique, in the sense that they’re only included once in the coverage. It should also be mentioned that there are a few quotes on specific themes which are very similar to other quotes included elsewhere, but I consider this unavoidable given the number of quotes included.
I have included some sample quotes from the book below – I have tried to include quotes on a wide variety of topics. All quotes without a source below are sourced quotes by Einstein (the book also contains a small collection of quotes ‘attributed to Einstein’, many of which are either not sourced or sourced in such a manner that Calaprice did not feel convinced that the quote was actually by Einstein – none of the quotes from that part of the book’s coverage are included below).
“When a blind beetle crawls over the surface of a curved branch, it doesn’t notice that the track it has covered is indeed curved. I was lucky enough to notice what the beetle didn’t notice.” (“in answer to his son Eduard’s question about why he is so famous, 1922.”)
“Teaching should be such that what is offered is perceived as a valuable gift and not as a hard duty.”
“I am not prepared to accept all his conclusions, but I consider his work an immensely valuable contribution to the science of human behavior.” (Einstein said this about Sigmund Freud during an interview. Yeah…)
“I consider him the best of the living writers.” (on Bertrand Russell. Russell incidentally also admired Einstein immensely – the last part of the book, including quotes by others about Einstein, includes this one by him: “Of all the public figures that I have known, Einstein was the one who commanded my most wholehearted admiration.”)
“I cannot understand the passive response of the whole civilized world to this modern barbarism. Doesn’t the world see that Hitler is aiming for war?” (1933. Related link.)
“Children don’t heed the life experience of their parents, and nations ignore history. Bad lessons always have to be learned anew.”
“Few people are capable of expressing with equanimity opinions that differ from the prejudices of their social environment. Most people are even incapable of forming such opinions.”
“Sometimes one pays most for things one gets for nothing.”
“Thanks to my fortunate idea of introducing the relativity principle into physics, you (and others) now enormously overrate my scientific abilities, to the point where this makes me quite uncomfortable.” (To Arnold Sommerfeld, 1908)
“No fairer destiny could be allotted to any physical theory than that it should of itself point out the way to the introduction of a more comprehensive theory, in which it lives on as a limiting case.”
“Mother nature, or more precisely an experiment, is a resolute and seldom friendly referee […]. She never says “yes” to a theory; but only “maybe” under the best of circumstances, and in most cases simply “no”.”
“The aim of science is, on the one hand, a comprehension, as complete as possible, of the connection between the sense experiences in their totality, and, on the other hand, the accomplishment of this aim by the use of a minimum of primary concepts and relations.” A related quote from the book: “Although it is true that it is the goal of science to discover rules which permit the association and foretelling of facts, this is not its only aim. It also seeks to reduce the connections discovered to the smallest possible number of mutually independent conceptual elements. It is in this striving after the rational unification of the manifold that it encounters its greatest successes.”
“According to general relativity, the concept of space detached from any physical content does not exist. The physical reality of space is represented by a field whose components are continuous functions of four independent variables – the coordinates of space and time.”
“One thing I have learned in a long life: that all our science, measured against reality, is primitive and childlike – and yet it is the most precious thing we have.”
“”Why should I? Everybody knows me there” (upon being told by his wife to dress properly when going to the office). “Why should I? No one knows me there” (upon being told to dress properly for his first big conference).”
“Marriage is but slavery made to appear civilized.”
“Nothing is more destructive of respect for the government and the law of the land than passing laws that cannot be enforced.”
“Einstein would be one of the greatest theoretical physicists of all time even if he had not written a single line on relativity.” (Max Born)
“Einstein’s [violin] playing is excellent, but he does not deserve his world fame; there are many others just as good.” (“A music critic on an early 1920s performance, unaware that Einstein’s fame derived from physics, not music. Quoted in Reiser, Albert Einstein, 202-203”)
Below are three new lectures from the Institute for Advanced Study. As far as I’ve gathered they’re all from an IAS symposium called ‘Lens of Computation on the Sciences’ – all three lecturers are computer scientists, but you don’t have to be a computer scientist to watch these lectures.
Should computer scientists and economists band together more and try to use the insights from one field to help solve problems in the other field? Roughgarden thinks so, and provides examples of how this might be done/has been done. Applications discussed in the lecture include traffic management and auction design. I’m not sure how much of this lecture is easy to follow for people who don’t know anything about either topic (i.e., computer science and economics), but I found it not too difficult to follow – it probably helped that I’ve actually done work on a few of the things he touches upon in the lecture, such as basic auction theory, the fixed point theorems and related proofs, basic queueing theory and basic discrete maths/graph theory. Either way there are certainly much more technical lectures than this one available at the IAS channel.
I don’t have Facebook and I’m not planning on ever getting a FB account, so I’m not really sure I care about the things this guy is trying to do, but the lecturer does touch upon some interesting topics in network theory. It’s not a great lecture in my opinion – occasionally I think the lecturer ‘drifts’ a bit, talking without saying very much – but it’s also not a terrible one. A few times I was really annoyed that you can’t see where he’s pointing that damn laser pointer, but this issue should not stop you from watching the video, especially not if you have an interest in analytical aspects of how to approach and make sense of ‘Big Data’.
I’ve noticed that Scott Alexander has said some nice things about Scott Aaronson a few times, but until now I’ve never actually read any of the latter guy’s stuff or watched any lectures by him. I agree with Scott (Alexander) that Scott (Aaronson) is definitely a smart guy. This is an interesting lecture; I won’t pretend I understood all of it, but it has some thought-provoking ideas and important points in the context of quantum computing and it’s actually a quite entertaining lecture; I was close to laughing a couple of times.
Here’s my goodreads review of the book. As mentioned in the review, the book was overall a slightly disappointing read – but it contained some decent quotes, and I decided I ought to put up a post with some sample quotes here, as it would be relatively easy to write. Do note while reading this post that the book had a lot of bad quotes, so you should not take the sample quotes posted below to be representative of the book’s coverage in general.
i. “The aim of science is to seek the simplest explanation of complex facts. We are apt to fall into the error of thinking that the facts are simple because simplicity is the goal of our quest. The guiding motto in the life of every natural philosopher should be “Seek simplicity and distrust it.”” (Alfred North Whitehead)
ii. “Poor data and good reasoning give poor results. Good data and poor reasoning give poor results. Poor data and poor reasoning give rotten results.” (Edmund C. Berkeley)
iii. “By no process of sound reasoning can a conclusion drawn from limited data have more than a limited application.” (J.W. Mellor)
iv. “The energy produced by the breaking down of the atom is a very poor kind of thing. Anyone who expects a source of power from the transformation of these atoms is talking moonshine.” (Ernest Rutherford, 1933).
v. “An experiment is a question which science poses to Nature, and a measurement is the recording of Nature’s answer.” (Max Planck)
vi. “A fact doesn’t have to be understood to be true.” (Heinlein)
vii. “God was invented to explain mystery. God is always invented to explain those things that you do not understand. Now, when you finally discover how something works, you get some laws which you’re taking away from God; you don’t need him anymore. But you need him for the other mysteries. So therefore you leave him to create the universe because we haven’t figured that out yet; you need him for understanding those things which you don’t believe the laws will explain, such as consciousness, or why you only live to a certain length of time – life and death – stuff like that. God is always associated with those things that you do not understand.” (Feynman)
viii. “Hypotheses are the scaffolds which are erected in front of a building and removed when the building is completed. They are indispensable to the worker; but he must not mistake the scaffolding for the building.” (Goethe)
ix. “We are to admit no more cause of natural things than such as are both true and sufficient to explain their appearances.” (Newton)
x. “It is the province of knowledge to speak and it is the privilege of wisdom to listen.” (Oliver Wendell Holmes)
xi. “Light crosses space with the prodigious velocity of 6,000 leagues per second.
La Science Populaire
April 28, 1881”
“A typographical error slipped into our last issue that is important to correct. The speed of light is 76,000 leagues per hour – and not 6,000.
La Science Populaire
May 19, 1881”
“A note correcting a first error appeared in our issue number 68, indicating that the speed of light is 76,000 leagues per hour. Our readers have corrected this new error. The speed of light is approximately 76,000 leagues per second.
La Science Populaire”
xii. “All models are wrong but some are useful.” (G. E. P. Box)
xiii. “the downward movement of a mass of gold or lead, or of any other body endowed with weight, is quicker in proportion to its size.” (Aristotle)
xiv. “those whom devotion to abstract discussions has rendered unobservant of the facts are too ready to dogmatize on the basis of a few observations” (Aristotle again).
xv. “it may properly be asked whether science can be undertaken without taking the risk of skating on the possibly thin ice of supposition. The important thing to know is when one is on the more solid ground of observation and when one is on the ice.” (W. M. O’Neil)
xvi. “If I could remember the names of all these particles, I’d be a botanist.” (Enrico Fermi)
xvii. “Theoretical physicists are accustomed to living in a world which is removed from tangible objects by two levels of abstraction. From tangible atoms we move by one level of abstraction to invisible fields and particles. A second level of abstraction takes us from fields and particles to the symmetry-groups by which fields and particles are related. The superstring theory takes us beyond symmetry-groups to two further levels of abstraction. The third level of abstraction is the interpretation of symmetry-groups in terms of states in ten-dimensional space-time. The fourth level is the world of the superstrings by whose dynamical behavior the states are defined.” (Freeman Dyson)
xviii. “Space tells matter how to move . . . and matter tells space how to curve.” (John Wheeler)
xix. “the universe is not a rigid and inimitable edifice where independent matter is housed in independent space and time; it is an amorphous continuum, without any fixed architecture, plastic and variable, constantly subject to change and distortion. Wherever there is matter and motion, the continuum is disturbed. Just as a fish swimming in the sea agitates the water around it, so a star, a comet, or a galaxy distorts the geometry of the space-time through which it moves.” (Lincoln Barnett)
xx. “most physicists today place the probability of the existence of tachyons only slightly higher than the existence of unicorns” (Nick Herbert).
I was debating whether to post this, but considering how long it’s been since my last post I decided to do it. A large number of lectures have recently been uploaded by the Institute for Advanced Study, and despite the fact that most of my ‘blogging-related activities’ these days relate to book reading I have watched a few of those lectures, so I decided to post a couple of them here:
I liked this lecture. Part II in particular, starting around the 38-minute mark, dealt with stuff reasonably closely related to things I’d read about before (‘relatively’…) recently, back when I read Lammer’s text (blog coverage here); so although I didn’t remember the stuff covered in Lammer’s text in much detail, it was definitely helpful to have worked with this material before. That said, I do believe you can watch the lecture and sort of understand what she’s talking about without knowing a great deal about these topics, at least if you don’t care too much about understanding all the details (I’d note that there is a lot going on ‘behind the scenes’ here, and that a lot could be said about topics closely related to this talk, like outgassing processes and how they relate to things like volcanism, as well as e.g. the dynamic interactions between atmospheric molecules and the solar wind taking place in the early stages of stellar evolution). As is always the case for IAS lectures it’s really hard to hear the questions being asked, which is annoying, but Schlichting is reasonably good at repeating the questions or answering them in a way that lets you gather what’s going on; the inaudible questions are in my opinion a somewhat bigger problem in the lecture below. (Relatedly, you can actually also see where the laser pointer is pointing in this lecture, at least some of the time – you can’t in the lecture below.)
As mentioned this one was harder to follow, at least for me.
I hope to find time to blog a bit more in the days to come. One of several reasons why I’ve not blogged more in recent weeks is that I recently realized that if I put in a bit of effort I’d be able to reach 150 books this year (I’m currently at 143 books, but very close to 144), with 50 non-fiction books (I think going for 52 would be a bit too much, but I’m not ruling it out yet – I’m currently at 47 non-fiction books (…but very close to 48)). I should note that I update the book post to which I link above much more often than I update ‘the blog’ in general with new posts. The reason why the ‘read 150 books this year’ goal is relevant is of course that every time I blog a book here, it takes away a substantial amount of time which I can’t spend actually reading books. Goodreads has incidentally recently made a nice ‘book of the year’ profile where you can see more details about the books I’ve read etc. From that profile I realized that my implicit working goal of reading 100 pages/day over the year has already been met (I’m currently at ~42,000 pages).
i. Two lectures from the Institute for Advanced Study:
The IAS has recently uploaded a large number of lectures on youtube, and the ones I blog here are a few of those where you can actually tell from the title what the lecture is about; I find it outright weird that these people don’t include the topic covered in the lecture in their lecture titles.
As for the video above, as usual for the IAS videos it’s annoying that you can’t hear the questions asked by the audience, but the sound quality of this video is at least quite a bit better than that of the video below. The latter has a couple of really annoying sequences, in particular around the 15–16 minute mark (it gets better), where the image is also causing problems, and in the last couple of minutes of the Q&A things are also not exactly optimal, as the lecturer leaves the area covered by the camera in order to write something on the blackboard – but you don’t know what he’s writing, and you can’t see him, because the camera isn’t following him. I found most of the above lecture easier to follow than the lecture posted below, though in either case you’ll probably not understand all of it unless you’re an astrophysicist – you definitely won’t in the case of the latter lecture. I found it helpful to look up a few topics along the way, e.g. the wiki articles about the virial theorem (also dealing with virial mass/radius), active galactic nucleus (this is the ‘AGN’ she refers to repeatedly), and the Tully–Fisher relation.
Given how many questions are asked along the way, it’s really annoying that in most cases you can’t hear what people are asking – this is definitely an area where there’s room for improvement in the context of the IAS videos. The lecture was not easy to follow, but I figured along the way that I understood enough of it to make it worth watching to the end (though I’d say you’ll not miss much if you stop after the lecture proper – around the 1:05 mark – and skip the subsequent Q&A). I’ve relatively recently read about related topics, e.g. pulsar formation and wave and fluid dynamics, and if I had not, I probably would not have watched this lecture to the end.
ii. A vocabulary.com update. I’m slowly working my way up to the ‘Running Dictionary’ rank (I’m only a walking dictionary at this point); here’s some stuff from my progress page:
I recently learned from a note added to a list that I’ve actually learned a very large proportion of all words available on vocabulary.com, which probably also means that I may have been too harsh on the word selection algorithm in past posts here on the blog; if there aren’t (/m)any new words left to learn, it should not be surprising that the algorithm presents me with words I’ve already mastered, and it’s not the algorithm’s fault that there aren’t more words available for me to learn (well, it is to the extent that you’re of the opinion that questions should be automatically created by the algorithm as well, but I don’t think we’re quite there yet). The aforementioned note was added in June, and here’s the important part: “there are words on your list that Vocabulary.com can’t teach yet. Vocabulary.com can teach over 12,000 words, but sadly, these aren’t among them”. ‘Over 12,000’ – and I’ve mastered 11,300. When the proportion of mastered words is this high, not only will the default random word algorithm mostly present you with questions related to words you’ve already mastered; it also starts to get hard to find lists with many words you’ve not already mastered – I’ll often load lists with one hundred words and then realize that I’ve mastered every word on the list. This is annoying if you have a desire to continually be presented with both new words and old ones. Unless vocabulary.com increases the rate at which they add new words I’ll run out of new words to learn, and if that happens I’m sure it’ll be much more difficult for me to find motivation to use the site.
With all that stuff out of the way, if you’re not a regular user of the site I should note – again – that it’s an excellent resource if you desire to increase your vocabulary. Below is a list of words I’ve encountered on the site in recent weeks(/months?):
Copacetic, frumpy, elision, termagant, harridan, quondam, funambulist, phantasmagoria, eyelet, cachinnate, wilt, quidnunc, flocculent, galoot, frangible, prevaricate, clarion, trivet, noisome, revenant, myrmidon (I have included this word once before in a post of this type, but it is in my opinion a very nice word with which more people should be familiar…), debenture, teeter, tart, satiny, romp, auricular, terpsichorean, poultice, ululation, fusty, tangy, honorarium, eyas, bumptious, muckraker, bayou, hobble, omphaloskepsis, extemporize, virago, rarefaction, flibbertigibbet, finagle, emollient.
iii. I don’t think I’d do things exactly the way she’s suggesting here, but the general idea/approach seems to me appealing enough for it to be worth at least keeping in mind if I ever decide to start dating/looking for a partner.
iv. Some wikipedia links:
Tarrare (featured). A man with odd eating habits and an interesting employment history (“Dr. Courville was keen to continue his investigations into Tarrare’s eating habits and digestive system, and approached General Alexandre de Beauharnais with a suggestion that Tarrare’s unusual abilities and behaviour could be put to military use. A document was placed inside a wooden box which was in turn fed to Tarrare. Two days later, the box was retrieved from his excrement, with the document still in legible condition. Courville proposed to de Beauharnais that Tarrare could thus serve as a military courier, carrying documents securely through enemy territory with no risk of their being found if he were searched.” Yeah…).
1740 Batavia massacre (featured).
v. I am also fun.
The Institute for Advanced Study recently released a number of new lectures on youtube and I’ve watched a few of them.
Both this lecture and the one below start abruptly with no introduction, but I don’t think much was covered before the beginning of this recording. The material in both lectures is ‘reasonably’ closely related to content covered in the book on pulsars/supernovae/neutron stars by McNamara which I recently finished (goodreads link) (…for some definitions of ‘reasonably’, I should perhaps add – it’s not that closely related, and for example Ramirez’s comment around the 50 minute mark that they’re disregarding magnetic fields seemed weird to me in the context of McNamara’s coverage). The first lecture was definitely much easier for me to follow than the last one. The fact that you can’t hear the questions being asked I found annoying, but there aren’t that many questions asked along the way. I was surprised to learn via google that Ramirez seems to be affiliated with the Niels Bohr Institute in Copenhagen (link).
Here’s a third lecture from the IAS:
I really didn’t think much of this lecture, but some of you might like it. It’s very non-technical compared to the first two lectures above, and unlike them the video recording did not start abruptly in the ‘middle’ of the lecture – which in this case, on the other hand, also means that you can easily skip the first 6-7 minutes without missing out on anything. Given the stuff he talks about in roughly the last 10 minutes of the lecture (aside from the concluding remarks), this is probably a reasonable place to remind you that Feynman’s lectures on the character of physical law are available on youtube and uploaded on this blog (see the link). If you have not watched those lectures, I actually think you should probably do that before watching a lecture like the one above – it’s in all likelihood a better use of your time. If you’re curious about things like cosmological scales and haven’t watched any of the videos in the Khan Academy cosmology and astronomy lecture series, this is incidentally a good place to go have a look; the first few videos in the series are really nice. Tegmark talks in his lecture about how we’ve underestimated how large the universe is, but I don’t really think the lecture adequately conveys just how mindbogglingly large the universe is; I think Salman Khan’s lectures are much better if you want to get ‘a proper perspective’ on these things, to the extent that obtaining such a perspective is even possible given the limitations of the human mind.
Lastly, a couple more lectures from khanacademymedicine:
This is a neat little overview, especially if you’re unfamiliar with the topic.
i. Motte-and-bailey castle (‘good article’).
“A motte-and-bailey castle is a fortification with a wooden or stone keep situated on a raised earthwork called a motte, accompanied by an enclosed courtyard, or bailey, surrounded by a protective ditch and palisade. Relatively easy to build with unskilled, often forced labour, but still militarily formidable, these castles were built across northern Europe from the 10th century onwards, spreading from Normandy and Anjou in France, into the Holy Roman Empire in the 11th century. The Normans introduced the design into England and Wales following their invasion in 1066. Motte-and-bailey castles were adopted in Scotland, Ireland, the Low Countries and Denmark in the 12th and 13th centuries. By the end of the 13th century, the design was largely superseded by alternative forms of fortification, but the earthworks remain a prominent feature in many countries. […]
Various methods were used to build mottes. Where a natural hill could be used, scarping could produce a motte without the need to create an artificial mound, but more commonly much of the motte would have to be constructed by hand. Four methods existed for building a mound and a tower: the mound could either be built first, and a tower placed on top of it; the tower could alternatively be built on the original ground surface and then buried within the mound; the tower could potentially be built on the original ground surface and then partially buried within the mound, the buried part forming a cellar beneath; or the tower could be built first, and the mound added later.
Regardless of the sequencing, artificial mottes had to be built by piling up earth; this work was undertaken by hand, using wooden shovels and hand-barrows, possibly with picks as well in the later periods. Larger mottes took disproportionately more effort to build than their smaller equivalents, because of the volumes of earth involved. The largest mottes in England, such as Thetford, are estimated to have required up to 24,000 man-days of work; smaller ones required perhaps as little as 1,000. […] Taking into account estimates of the likely available manpower during the period, historians estimate that the larger mottes might have taken between four and nine months to build. This contrasted favourably with stone keeps of the period, which typically took up to ten years to build. Very little skilled labour was required to build motte and bailey castles, which made them very attractive propositions if forced peasant labour was available, as was the case after the Norman invasion of England. […]
The type of soil would make a difference to the design of the motte, as clay soils could support a steeper motte, whilst sandier soils meant that a motte would need a more gentle incline. Where available, layers of different sorts of earth, such as clay, gravel and chalk, would be used alternatively to build in strength to the design. Layers of turf could also be added to stabilise the motte as it was built up, or a core of stones placed as the heart of the structure to provide strength. Similar issues applied to the defensive ditches, where designers found that the wider the ditch was dug, the deeper and steeper the sides of the scarp could be, making it more defensive. […]
Although motte-and-bailey castles are the best known castle design, they were not always the most numerous in any given area. A popular alternative was the ringwork castle, involving a palisade being built on top of a raised earth rampart, protected by a ditch. The choice of motte and bailey or ringwork was partially driven by terrain, as mottes were typically built on low ground, and on deeper clay and alluvial soils. Another factor may have been speed, as ringworks were faster to build than mottes. Some ringwork castles were later converted into motte-and-bailey designs, by filling in the centre of the ringwork to produce a flat-topped motte. […]
In England, William invaded from Normandy in 1066, resulting in three phases of castle building in England, around 80% of which were in the motte-and-bailey pattern. […] around 741 motte-and-bailey castles [were built] in England and Wales alone. […] Many motte-and-bailey castles were occupied relatively briefly and in England many were being abandoned by the 12th century, and others neglected and allowed to lapse into disrepair. In the Low Countries and Germany, a similar transition occurred in the 13th and 14th centuries. […] One factor was the introduction of stone into castle building. The earliest stone castles had emerged in the 10th century […] Although wood was a more powerful defensive material than was once thought, stone became increasingly popular for military and symbolic reasons.”
ii. Battle of Midway (featured). Lots of good stuff in there. One aspect I had not been aware of beforehand was that Allied codebreakers played a key role here as well (I was quite familiar with the work of Turing and others at Bletchley Park):
“Admiral Nimitz had one priceless advantage: cryptanalysts had partially broken the Japanese Navy’s JN-25b code. Since the early spring of 1942, the US had been decoding messages stating that there would soon be an operation at objective “AF”. It was not known where “AF” was, but Commander Joseph J. Rochefort and his team at Station HYPO were able to confirm that it was Midway; Captain Wilfred Holmes devised a ruse of telling the base at Midway (by secure undersea cable) to broadcast an uncoded radio message stating that Midway’s water purification system had broken down. Within 24 hours, the code breakers picked up a Japanese message that “AF was short on water.” HYPO was also able to determine the date of the attack as either 4 or 5 June, and to provide Nimitz with a complete IJN order of battle. Japan had a new codebook, but its introduction had been delayed, enabling HYPO to read messages for several crucial days; the new code, which had not yet been cracked, came into use shortly before the attack began, but the important breaks had already been made.[nb 8]
As a result, the Americans entered the battle with a very good picture of where, when, and in what strength the Japanese would appear. Nimitz knew that the Japanese had negated their numerical advantage by dividing their ships into four separate task groups, all too widely separated to be able to support each other.[nb 9] […] The Japanese, by contrast, remained almost totally unaware of their opponent’s true strength and dispositions even after the battle began. […] Four Japanese aircraft carriers — Akagi, Kaga, Soryu and Hiryu, all part of the six-carrier force that had attacked Pearl Harbor six months earlier — and a heavy cruiser were sunk at a cost of the carrier Yorktown and a destroyer. After Midway and the exhausting attrition of the Solomon Islands campaign, Japan’s capacity to replace its losses in materiel (particularly aircraft carriers) and men (especially well-trained pilots) rapidly became insufficient to cope with mounting casualties, while the United States’ massive industrial capabilities made American losses far easier to bear. […] The Battle of Midway has often been called “the turning point of the Pacific”. However, the Japanese continued to try to secure more strategic territory in the South Pacific, and the U.S. did not move from a state of naval parity to one of increasing supremacy until after several more months of hard combat. Thus, although Midway was the Allies’ first major victory against the Japanese, it did not radically change the course of the war. Rather, it was the cumulative effects of the battles of Coral Sea and Midway that reduced Japan’s ability to undertake major offensives.”
One thing which really strikes you (well, struck me) when reading this stuff is how incredibly capital-intensive the war at sea really was; this was one of the most important sea battles of the Second World War, yet the total Japanese death toll at Midway was just 3,057. To put that number into perspective, it is significantly smaller than the average number of people killed each day in Stalingrad (according to one estimate, the Soviets alone suffered 478,741 killed or missing during those roughly 5 months (~150 days), which comes out at roughly 3000/day).
iii. History of time-keeping devices (featured). ‘Exactly what it says on the tin’, as they’d say on TV Tropes.
It took a long time to get from where we were to where we are today; the horologists of the past faced a lot of problems you’ve most likely never even thought about. What do you, for example, do if your ingenious water clock has trouble keeping time because variation in water temperature causes issues? Well, you use mercury instead of water, of course! (“Since Yi Xing’s clock was a water clock, it was affected by temperature variations. That problem was solved in 976 by Zhang Sixun by replacing the water with mercury, which remains liquid down to −39 °C (−38 °F).”).
iv. Microbial metabolism.
“Microbial metabolism is the means by which a microbe obtains the energy and nutrients (e.g. carbon) it needs to live and reproduce. Microbes use many different types of metabolic strategies and species can often be differentiated from each other based on metabolic characteristics. The specific metabolic properties of a microbe are the major factors in determining that microbe’s ecological niche, and often allow for that microbe to be useful in industrial processes or responsible for biogeochemical cycles. […]
All microbial metabolisms can be arranged according to three principles:
1. How the organism obtains carbon for synthesising cell mass:
- autotrophic – carbon is obtained from carbon dioxide (CO2)
- heterotrophic – carbon is obtained from organic compounds
- mixotrophic – carbon is obtained from both organic compounds and by fixing carbon dioxide
2. How the organism obtains reducing equivalents used either in energy conservation or in biosynthetic reactions:
- lithotrophic – reducing equivalents are obtained from inorganic compounds
- organotrophic – reducing equivalents are obtained from organic compounds
3. How the organism obtains energy for living and growing:
- chemotrophic – energy is obtained from external chemical compounds
- phototrophic – energy is obtained from light
In practice, these terms are almost freely combined. […] Most microbes are heterotrophic (more precisely chemoorganoheterotrophic), using organic compounds as both carbon and energy sources. […] Heterotrophic microbes are extremely abundant in nature and are responsible for the breakdown of large organic polymers such as cellulose, chitin or lignin which are generally indigestible to larger animals. Generally, the breakdown of large polymers to carbon dioxide (mineralization) requires several different organisms, with one breaking down the polymer into its constituent monomers, one able to use the monomers and excreting simpler waste compounds as by-products, and one able to use the excreted wastes. There are many variations on this theme, as different organisms are able to degrade different polymers and secrete different waste products. […]
Biochemically, prokaryotic heterotrophic metabolism is much more versatile than that of eukaryotic organisms, although many prokaryotes share the most basic metabolic models with eukaryotes, e.g. using glycolysis (also called EMP pathway) for sugar metabolism and the citric acid cycle to degrade acetate, producing energy in the form of ATP and reducing power in the form of NADH or quinols. These basic pathways are well conserved because they are also involved in biosynthesis of many conserved building blocks needed for cell growth (sometimes in reverse direction). However, many bacteria and archaea utilize alternative metabolic pathways other than glycolysis and the citric acid cycle. […] The metabolic diversity and ability of prokaryotes to use a large variety of organic compounds arises from the much deeper evolutionary history and diversity of prokaryotes, as compared to eukaryotes. […]
Many microbes (phototrophs) are capable of using light as a source of energy to produce ATP and organic compounds such as carbohydrates, lipids, and proteins. Of these, algae are particularly significant because they are oxygenic, using water as an electron donor for electron transfer during photosynthesis. Phototrophic bacteria are found in the phyla Cyanobacteria, Chlorobi, Proteobacteria, Chloroflexi, and Firmicutes. Along with plants these microbes are responsible for all biological generation of oxygen gas on Earth. […] As befits the large diversity of photosynthetic bacteria, there are many different mechanisms by which light is converted into energy for metabolism. All photosynthetic organisms locate their photosynthetic reaction centers within a membrane, which may be invaginations of the cytoplasmic membrane (Proteobacteria), thylakoid membranes (Cyanobacteria), specialized antenna structures called chlorosomes (Green sulfur and non-sulfur bacteria), or the cytoplasmic membrane itself (heliobacteria). Different photosynthetic bacteria also contain different photosynthetic pigments, such as chlorophylls and carotenoids, allowing them to take advantage of different portions of the electromagnetic spectrum and thereby inhabit different niches. Some groups of organisms contain more specialized light-harvesting structures (e.g. phycobilisomes in Cyanobacteria and chlorosomes in Green sulfur and non-sulfur bacteria), allowing for increased efficiency in light utilization. […]
Most photosynthetic microbes are autotrophic, fixing carbon dioxide via the Calvin cycle. Some photosynthetic bacteria (e.g. Chloroflexus) are photoheterotrophs, meaning that they use organic carbon compounds as a carbon source for growth. Some photosynthetic organisms also fix nitrogen […] Nitrogen is an element required for growth by all biological systems. While extremely common (80% by volume) in the atmosphere, dinitrogen gas (N2) is generally biologically inaccessible due to its high activation energy. Throughout all of nature, only specialized bacteria and Archaea are capable of nitrogen fixation, converting dinitrogen gas into ammonia (NH3), which is easily assimilated by all organisms. These prokaryotes, therefore, are very important ecologically and are often essential for the survival of entire ecosystems. This is especially true in the ocean, where nitrogen-fixing cyanobacteria are often the only sources of fixed nitrogen, and in soils, where specialized symbioses exist between legumes and their nitrogen-fixing partners to provide the nitrogen needed by these plants for growth.
Nitrogen fixation can be found distributed throughout nearly all bacterial lineages and physiological classes but is not a universal property. Because the enzyme nitrogenase, responsible for nitrogen fixation, is very sensitive to oxygen which will inhibit it irreversibly, all nitrogen-fixing organisms must possess some mechanism to keep the concentration of oxygen low. […] The production and activity of nitrogenases is very highly regulated, both because nitrogen fixation is an extremely energetically expensive process (16–24 ATP are used per N2 fixed) and due to the extreme sensitivity of the nitrogenase to oxygen.” (A lot of the stuff above was of course for me either review or closely related to stuff I’ve already read in the coverage provided in Beer et al., a book I’ve talked about before here on the blog).
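The three-axis classification quoted above combines almost freely into compound names like ‘chemoorganoheterotrophic’; here’s a toy Python sketch (my own illustration, not from the article) of how the prefixes compose:

```python
# Toy illustration of the trophic naming scheme: one prefix from each of the
# three axes (energy source, electron source, carbon source), glued together.
energy = {"chemo": "external chemical compounds", "photo": "light"}
electrons = {"litho": "inorganic compounds", "organo": "organic compounds"}
carbon = {"auto": "carbon dioxide", "hetero": "organic compounds", "mixo": "both"}

def trophic_term(energy_src, electron_src, carbon_src):
    """Combine one prefix per axis into a compound term such as 'chemoorganoheterotroph'."""
    for prefix, table in ((energy_src, energy), (electron_src, electrons), (carbon_src, carbon)):
        if prefix not in table:
            raise ValueError(f"unknown prefix: {prefix}")
    return energy_src + electron_src + carbon_src + "troph"

# Most microbes, per the article: organic compounds as both carbon and energy source.
print(trophic_term("chemo", "organo", "hetero"))  # chemoorganoheterotroph
# Algae and cyanobacteria: light for energy, water (inorganic) as electron donor, CO2 fixation.
print(trophic_term("photo", "litho", "auto"))     # photolithoautotroph
```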
v. Uranium (featured). It’s hard to know what to include here as the article has a lot of stuff, but I found this part in particular, well, interesting:
“During the Cold War between the Soviet Union and the United States, huge stockpiles of uranium were amassed and tens of thousands of nuclear weapons were created using enriched uranium and plutonium made from uranium. Since the break-up of the Soviet Union in 1991, an estimated 600 short tons (540 metric tons) of highly enriched weapons grade uranium (enough to make 40,000 nuclear warheads) have been stored in often inadequately guarded facilities in the Russian Federation and several other former Soviet states. Police in Asia, Europe, and South America on at least 16 occasions from 1993 to 2005 have intercepted shipments of smuggled bomb-grade uranium or plutonium, most of which was from ex-Soviet sources. From 1993 to 2005 the Material Protection, Control, and Accounting Program, operated by the federal government of the United States, spent approximately US $550 million to help safeguard uranium and plutonium stockpiles in Russia. This money was used for improvements and security enhancements at research and storage facilities. Scientific American reported in February 2006 that in some of the facilities security consisted of chain link fences which were in severe states of disrepair. According to an interview from the article, one facility had been storing samples of enriched (weapons grade) uranium in a broom closet before the improvement project; another had been keeping track of its stock of nuclear warheads using index cards kept in a shoe box.”
Some other observations from the article below:
“Uranium is a naturally occurring element that can be found in low levels within all rock, soil, and water. Uranium is the 51st element in order of abundance in the Earth’s crust. Uranium is also the highest-numbered element to be found naturally in significant quantities on Earth and is almost always found combined with other elements. Along with all elements having atomic weights higher than that of iron, it is only naturally formed in supernovae. The decay of uranium, thorium, and potassium-40 in the Earth’s mantle is thought to be the main source of heat that keeps the outer core liquid and drives mantle convection, which in turn drives plate tectonics. […]
Natural uranium consists of three major isotopes: uranium-238 (99.28% natural abundance), uranium-235 (0.71%), and uranium-234 (0.0054%). […] Uranium-238 is the most stable isotope of uranium, with a half-life of about 4.468×10⁹ years, roughly the age of the Earth. Uranium-235 has a half-life of about 7.13×10⁸ years, and uranium-234 has a half-life of about 2.48×10⁵ years. For natural uranium, about 49% of its alpha rays are emitted by 238U, and also 49% by 234U (since the latter is formed from the former), and about 2.0% of them by the 235U. When the Earth was young, probably about one-fifth of its uranium was uranium-235, but the percentage of 234U was probably much lower than this. […]
Worldwide production of U3O8 (yellowcake) in 2013 amounted to 70,015 tonnes, of which 22,451 t (32%) was mined in Kazakhstan. Other important uranium mining countries are Canada (9,331 t), Australia (6,350 t), Niger (4,518 t), Namibia (4,323 t) and Russia (3,135 t). […] Australia has 31% of the world’s known uranium ore reserves and the world’s largest single uranium deposit, located at the Olympic Dam Mine in South Australia. There is a significant reserve of uranium in Bakouma a sub-prefecture in the prefecture of Mbomou in Central African Republic. […] Uranium deposits seem to be log-normal distributed. There is a 300-fold increase in the amount of uranium recoverable for each tenfold decrease in ore grade. In other words, there is little high grade ore and proportionately much more low grade ore available.”
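Incidentally, the ~49%/49%/2% split of alpha activity quoted above follows directly from the abundances and half-lives, since an isotope’s activity is proportional to its abundance divided by its half-life; a quick sanity check in Python (my own, using the numbers quoted):

```python
from math import log

# (abundance fraction, half-life in years), as quoted in the article
isotopes = {
    "U-238": (0.9928, 4.468e9),
    "U-235": (0.0071, 7.13e8),
    "U-234": (0.000054, 2.48e5),
}

# Activity is N * lambda, with decay constant lambda = ln(2) / half-life,
# so each isotope's share of the total activity is abundance / half-life
# (the ln(2) factor cancels in the ratio, but is kept for clarity).
activity = {name: ab * log(2) / t for name, (ab, t) in isotopes.items()}
total = sum(activity.values())
for name, a in sorted(activity.items()):
    print(f"{name}: {100 * a / total:.1f}% of alpha activity")
# U-238 and U-234 each come out near 49%, U-235 near 2%, matching the article.
```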
vi. Radiocarbon dating (featured).
“Radiocarbon dating (also referred to as carbon dating or carbon-14 dating) is a method of determining the age of an object containing organic material by using the properties of radiocarbon (14C), a radioactive isotope of carbon. The method was invented by Willard Libby in the late 1940s and soon became a standard tool for archaeologists. Libby received the Nobel Prize for his work in 1960. The radiocarbon dating method is based on the fact that radiocarbon is constantly being created in the atmosphere by the interaction of cosmic rays with atmospheric nitrogen. The resulting radiocarbon combines with atmospheric oxygen to form radioactive carbon dioxide, which is incorporated into plants by photosynthesis; animals then acquire 14C by eating the plants. When the animal or plant dies, it stops exchanging carbon with its environment, and from that point onwards the amount of 14C it contains begins to reduce as the 14C undergoes radioactive decay. Measuring the amount of 14C in a sample from a dead plant or animal such as a piece of wood or a fragment of bone provides information that can be used to calculate when the animal or plant died. The older a sample is, the less 14C there is to be detected, and because the half-life of 14C (the period of time after which half of a given sample will have decayed) is about 5,730 years, the oldest dates that can be reliably measured by radiocarbon dating are around 50,000 years ago, although special preparation methods occasionally permit dating of older samples.
The idea behind radiocarbon dating is straightforward, but years of work were required to develop the technique to the point where accurate dates could be obtained. […]
The development of radiocarbon dating has had a profound impact on archaeology. In addition to permitting more accurate dating within archaeological sites than did previous methods, it allows comparison of dates of events across great distances. Histories of archaeology often refer to its impact as the “radiocarbon revolution”.”
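The decay arithmetic behind the ~50,000-year limit mentioned above is easy to sketch in Python (my own illustration; the half-life is the one quoted):

```python
from math import log2

HALF_LIFE = 5730  # years, the 14C half-life quoted above

def radiocarbon_age(fraction_remaining):
    """Age in years of a sample retaining the given fraction of its original 14C."""
    if not 0 < fraction_remaining <= 1:
        raise ValueError("fraction must be in (0, 1]")
    return -HALF_LIFE * log2(fraction_remaining)

print(radiocarbon_age(0.5))   # one half-life: 5730.0 years
print(radiocarbon_age(0.25))  # two half-lives: 11460.0 years
# At the ~50,000-year practical limit, under 0.3% of the original 14C remains,
# which is why older samples are so hard to date reliably:
print(2 ** (-50000 / HALF_LIFE))
```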
I’ve read about these topics before in a textbook setting (e.g. here), but/and I should note that the article provides quite detailed coverage and I think most people will encounter some new information by having a look at it even if they’re superficially familiar with this topic. The article has a lot of stuff about e.g. ‘what you need to correct for’, which some of you might find interesting.
vii. Raccoon (featured). One interesting observation from the article:
“One aspect of raccoon behavior is so well known that it gives the animal part of its scientific name, Procyon lotor; “lotor” is neo-Latin for “washer”. In the wild, raccoons often dabble for underwater food near the shore-line. They then often pick up the food item with their front paws to examine it and rub the item, sometimes to remove unwanted parts. This gives the appearance of the raccoon “washing” the food. The tactile sensitivity of raccoons’ paws is increased if this rubbing action is performed underwater, since the water softens the hard layer covering the paws. However, the behavior observed in captive raccoons in which they carry their food to water to “wash” or douse it before eating has not been observed in the wild. Naturalist Georges-Louis Leclerc, Comte de Buffon, believed that raccoons do not have adequate saliva production to moisten food thereby necessitating dousing, but this hypothesis is now considered to be incorrect. Captive raccoons douse their food more frequently when a watering hole with a layout similar to a stream is not farther away than 3 m (10 ft). The widely accepted theory is that dousing in captive raccoons is a fixed action pattern from the dabbling behavior performed when foraging at shores for aquatic foods. This is supported by the observation that aquatic foods are doused more frequently. Cleaning dirty food does not seem to be a reason for “washing”. Experts have cast doubt on the veracity of observations of wild raccoons dousing food.”
And here’s another interesting set of observations:
“In Germany—where the racoon is called the Waschbär (literally, “wash-bear” or “washing bear”) due to its habit of “dousing” food in water—two pairs of pet raccoons were released into the German countryside at the Edersee reservoir in the north of Hesse in April 1934 by a forester upon request of their owner, a poultry farmer. He released them two weeks before receiving permission from the Prussian hunting office to “enrich the fauna.” Several prior attempts to introduce raccoons in Germany were not successful. A second population was established in eastern Germany in 1945 when 25 raccoons escaped from a fur farm at Wolfshagen, east of Berlin, after an air strike. The two populations are parasitologically distinguishable: 70% of the raccoons of the Hessian population are infected with the roundworm Baylisascaris procyonis, but none of the Brandenburgian population has the parasite. The estimated number of raccoons was 285 animals in the Hessian region in 1956, over 20,000 animals in the Hessian region in 1970 and between 200,000 and 400,000 animals in the whole of Germany in 2008. By 2012 it was estimated that Germany now had more than a million raccoons.”
I recently finished this book. I gave the book two stars on goodreads; however, I also added this comment to the information about the book on my 2014 book list: “Close to three stars, but the poor language of the publication made it difficult for me to justify giving it that rating.” I liked reading the book, but I do not condone sloppy and/or unclear language, and this is something I’ll usually punish.
When I started reading the book I’d assumed there might be some overlap with Gale’s book, which I haven’t covered here, but that actually turned out not really to be the case; the two books focus on different things even if a few of the questions they ask are quite similar, and so the coverage of the two books doesn’t actually overlap much. Most of this was new stuff, which was nice. The book is in my opinion more technical and harder to read than was Gale’s book, but again, they deal with different stuff so I’m not sure how much sense it makes to compare them. Lammer doesn’t shy away from covering relevant math formulas where they might be helpful to improve understanding, and/but some of these are not easy to understand if you do not have a background in physics; you need to know some stuff about things like electromagnetism, thermodynamics, and plasma physics to understand all of the material in this book. I didn’t – I certainly didn’t know ‘enough’ – but his coverage is fortunately such that even if you mentally skip a few formulas without really understanding the details of the dynamics they model, you’ll usually still be able to figure out roughly how they work wrt. the specific issue at hand, because he also talks about how they work and which conclusions to draw; though you’ll likely still need to look up some unfamiliar terms along the way in order not to be completely confused.
I should make clear that although it may sound from the above as if this is really a rather dull book about mathematical formulas and complicated physics, one thing I find it really hard to term this book is ‘boring’. The book talks about what the Earth was like back when Earth was covered by magma oceans. Freaking magma oceans! It talks about how the Earth was quite likely early on in its ‘lifetime’ (before life on the planet, it should perhaps be noted) covered by a huge hydrogen atmosphere, and how that early atmosphere was blown away by a Sun which was spinning much faster than it does today, bombarding the early proto-atmospheres of the newly formed planets with huge numbers of highly charged particles despite the sun shining ‘less brightly’ back then than it does now. It talks about how a slightly different atmospheric composition back then, with more hydrogen, might have led to the Earth being unable to get rid of all that hydrogen, in which case the Earth would most likely have ended up as a ‘waterworld’ without continents, completely covered by water. It talks about how the Sun slowed down after what was most likely just a few million years, and how it has since then been doing things quite differently from the way it did things in the beginning. Phenomena such as outgassing and impact events are discussed. The book talks about how conditions were different on Mars and Venus from the way they were on Earth, and what role various factors might have played in terms of explaining how the atmospheres got to be the way they are now, and why those planets turned out quite different. The role of gravity, the role of a magnetosphere, which concrete processes lead to loss of (/which) atmospheric components. There’s a lot of stuff in this book, and much of it I found really quite interesting. But it is also hard to read, sometimes hard to understand, and certainly far from always particularly well-written. The topics covered I found quite interesting though.
I was wondering how to cover this book, but I decided early on that given how many things I was looking up along the way it would make a lot of sense to bookmark some relevant links and add them to this post; so below I have added a list of terms and concepts covered in the book. Some of the concepts are much better covered in the book than in the links (the wiki article on atmospheric escape for example has very little material on this topic compared to what the book includes), but in other cases there’s a lot of stuff in the wiki article which was not included in the book (naturally, or it would not have made sense for me to look up stuff there). So the stuff in the links doesn’t add up to the material covered in the book, but the articles should give you a clue what kind of book this is. Below the list I have added a few quotes from the book. As should be obvious from the number of links, the book has a lot of content despite the relatively low page-count.
Atmosphere of Earth.
Energetic neutral atom.
“As contrasted to meteorology which studies the properties and behavior of the lower atmosphere between the surface and the tropopause where the weather phenomena are generated, aeronomy is a division of atmospheric science that studies physics and chemistry of the upper atmosphere that extends from above the troposphere up to the altitudes where it is modified by the solar wind plasma. […] The central part of [this] monograph presents a detailed discussion of the atmospheric loss mechanisms due to the action of various thermal and non-thermal escape processes for the neutral and ionized particles from a hot, extended atmospheric corona. Scenarios for the formation and evolution of the atmospheres of Earth, Venus, and Mars, that is, the planets orbiting within the habitable zone around the Sun, are considered. A crucial role of the magnetosphere of a planet in protecting its hot, extended, and partially ionized corona from the solar wind erosion is discussed. […] The book presents a brief review of the present state of knowledge of the aeronomy of planetary atmospheres and of their evolution during the lifetime of their host stars by taking into account conventionally accepted concepts, as well as recent observational and theoretical results.”
“the classical concept of the habitable zone and its related questions of what makes a planet habitable is much more complex than having a big rocky body located at the right distance from its host star. […] A careful study of various astrophysical and geophysical aspects indicate that Earth-analogue class I habitats have to be located at the right distance of the habitable zone from their host stars, must lose their protoatmospheres during the right time period, should maintain plate tectonics over the planet’s lifetime, should have nitrogen as the main atmospheric species after the stellar activity decreased to moderate values and finally, the planet’s interior should have developed conditions that an intrinsic strong global magnetic field could evolve.” [I should probably add here that this specific stuff is covered extensively in Gale’s book, but doesn’t make up too much of the coverage of this book].
“The mantle solidification of a magma ocean is a fast process and ends at ∼10⁵ years for Earth-size planets with low volatile contents and at ≤3Myr [million years, US] for planets with higher volatile contents and magma ocean depths of ≤2,000km […] During the magma ocean solidification process, H2O and CO2 molecules can enter the solidifying minerals in relative low quantities [8, 9]. As a result the H2O/CO2 volatiles will degas into dense steam atmospheres […] If the early Earth would have obtained slightly more material from water-rich planetesimals, its CO2 content would have been much higher and Earth’s oceans could have been tens to hundreds of kilometers deep […]. Such environmental conditions would have resulted in a globally covered water world [43, 44] which is surrounded by a Venus-type dense CO2 atmosphere and a hydrogen envelope.” [I tried while reading this to imagine a magma ocean which was something like 2000 kilometers deep, but I failed to do so. Just think about this…]
“There is observational evidence from solar proxies of younger age than the present Sun that during the early history of the Solar System the EUV flux was up to ∼100 times larger than it is today […] The evolution of planetary atmospheres can only be understood if one considers that the radiation and particle environment of the Sun or a planet’s host star changed during their lifetime. The magnetic activity of solar-type stars declines steadily during their evolution on the Zero-Age-Main-Sequence (ZAMS). According to the solar standard model, the Sun’s photospheric luminosity was ∼30% lower ∼4.5 Gyr ago […] when the Sun arrived on the ZAMS compared to present levels. The observed faster rotation of young stars is responsible for an enhanced magnetic activity and related heating processes in the chromosphere; X-ray emissions are ≥1,000, and EUV and UV ∼100 and ∼10 times higher compared to today’s solar values. Moreover, the production rate of high-energy particles is orders of magnitude higher at young stars, and from observable stellar mass loss-activity relations one can also expect a much stronger solar/stellar wind during the active stellar phase.”
“The nuclear evolution of the Sun is well known from stellar evolutionary theory and backed by helioseismological observations of the internal solar structure. The results of these evolutionary solar models indicate that the young Sun was ∼10% cooler and ∼15% smaller compared to the modern Sun ∼4.6 Gyr ago. According to the solar standard model, due to accelerating nuclear reactions in the Sun’s core, the Sun is a slowly evolving variable G-type star that has undergone an ∼30% increase in luminosity over the past ∼4.5 Gyr. […] the outward flowing plasma carries away angular momentum from the star [which explains] the observed spin-down to slower rotation of young stars after their arrival at the ZAMS [the book mentions elsewhere that it’s been estimated based on observations of other star systems that the young Sun was rotating more than 10 times as fast as it does now]. […] the early Earth may have lost during [the first 100 million years] an amount of hydrogen equivalent to ∼20 EOs [Earth Oceans] thermally […] after the loss of [a large amount of the original steam atmosphere,] the Earth’s atmosphere environment near the surface reached the critical temperature of ∼650 K. After reaching this temperature the remaining H2O-vapor of ∼1 EO could condense and collapse into the liquid water ocean. Additional amount[s] of water could also have been delivered continuously via impacts, but the bulk of the early Earth’s initial water inventory is most likely a by-product of a condensed fraction of the catastrophically outgassed steam atmosphere. […] One should […] note that in the case of the early Earth, due to the Moon-forming impact, a fraction of ≤30% of the atmosphere could also have been lost to space.”
“The present average atmospheric mass loss of hydrogen, oxygen, and nitrogen ions from the Earth is ∼1.3 × 10³ g s⁻¹”
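To put that loss rate into perspective, here is a back-of-the-envelope check of my own (not from the book): converting it into an annual mass loss and comparing it with the approximate mass of Earth’s oceans, using standard textbook values for the constants.

```python
# Present-day atmospheric ion loss rate quoted above: ~1.3e3 g/s.
loss_rate_g_per_s = 1.3e3          # hydrogen, oxygen and nitrogen ions, g/s
seconds_per_year = 3.156e7         # ~365.25 days

annual_loss_g = loss_rate_g_per_s * seconds_per_year
annual_loss_tonnes = annual_loss_g / 1e6    # 1 tonne = 1e6 g

earth_ocean_g = 1.4e24             # approximate total mass of Earth's oceans, g
years_to_lose_one_ocean = earth_ocean_g / annual_loss_g

print(f"annual loss: ~{annual_loss_tonnes:,.0f} tonnes/yr")
print(f"time to strip one Earth ocean at this rate: ~{years_to_lose_one_ocean:.1e} years")
```

At the present rate it would take tens of trillions of years to strip away a single ocean’s worth of mass, which gives some idea of how much more extreme the young Sun’s EUV and wind environment must have been for the early Earth to lose the ∼20 Earth oceans of hydrogen mentioned above in its first 100 million years.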
“The first protoatmosphere consists of captured and accumulated hydrogen- and helium-rich gas envelopes from the nebula. Depending on the planetary formation time, the nebula dissipation time, the number of additional planets including gas giants in the system, the protoplanet’s gravity, its orbit location, and the host star’s radiation and plasma environment, terrestrial planets may capture tens or even several hundreds of Earth ocean equivalent amounts of hydrogen around their rocky cores.
The second protoatmosphere depends on the initial volatile content of the protoplanet when accretion finished. During the magma ocean solidification […] steam atmospheres with surface pressures ranging from ∼100 to several 10⁴ bar can be catastrophically outgassed.
Finally, secondary atmospheres will be produced by tectonic activity such as volcanoes and by the delivery of volatiles via large impacts. The origin and initial state of a planet’s protoatmosphere, therefore, determines a planet’s atmospheric evolution and finally whether the planet will evolve into an Earth-analog class I habitat or not. […] The efficiency of the solar/stellar forcing is essentially inversely proportional to the square of the distance to the planet’s host star. From that, it follows that the closer a planet orbits around its host star, the more efficient are the atmospheric escape processes. The main effects caused by the stellar radiation and plasma environment on the atmospheres of an affected planet are to ionize, chemically modify, heat, expand, and slowly erode the upper atmosphere throughout the lifetime of a planet. The highest thermal and non-thermal atmospheric escape rates are obtained during the early active phase of the planet’s host star […] Besides the orbital location, a planet’s gravity constitutes an additional major protection mechanism, especially for thermal escape of its atmosphere, while the non-thermal escape processes are affected on a weaker scale.”
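The inverse-square scaling of the stellar forcing mentioned in the quote is easy to illustrate. Here is a minimal sketch of my own (not from the book), comparing the relative radiation flux received at the orbits of Venus, Earth and Mars:

```python
# Stellar forcing (radiation flux) scales as 1/d^2 with orbital distance d.
# Orbital distances below are semi-major axes in astronomical units (AU).

def relative_forcing(distance_au: float) -> float:
    """Stellar flux at distance_au, normalized so Earth (1 AU) receives 1.0."""
    return 1.0 / distance_au ** 2

orbits = {"Venus": 0.723, "Earth": 1.0, "Mars": 1.524}
for planet, d in orbits.items():
    print(f"{planet}: {relative_forcing(d):.2f}x Earth's flux")
```

Venus thus receives roughly twice Earth’s flux and Mars well under half, which is part of why orbital location matters so much for the escape processes discussed above.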
Some random observations and some links:
i. I’ve written about diabetic hypoglycemia before – I even blogged a book on the topic just a few weeks ago. So I’ll keep this short. Here’s the key observation from the post to which I link: “Hypoglycemia causes functional brain failure that is corrected in the vast majority of instances after the plasma glucose concentration is raised”.
Functional brain failure is pretty much what it sounds like – the brain stops working. The point I want to make here is that hypoglycemia can strike pretty much at any point in time, including when I’m doing stuff like blogging or commenting. I sometimes develop hypoglycemia while deeply engrossed in some intellectual activity, like reading, writing or chess, in part because in those situations I have a tendency to forget to listen to my body’s signals – perhaps I forget to eat because this stuff is really much more interesting than food, perhaps I don’t really care that I should probably take a blood test now because I’d really much rather just finish this book chapter/chess game/blogpost/whatever. That happens. When it happens while I’m blogging, what comes out the other end may look funny. I occasionally write stuff that’s incoherent and stupid. Sometimes the explanation is simple: I’m an idiot. Sometimes other things play a role as well.
This is a variable you cannot observe, but which I have a lot of information about. It’s a variable I’d like readers of this blog to at least be aware of.
ii. Maxwell wrote this post, which you should consider reading. I won’t pretend to have good reasons/justifications for disliking people I conceive of as arrogant, but I do want to note that I do this and always have. Arrogance is a trait I dislike immensely.
iii. Over the last few days I’ve been reading Okasha’s great book Evolution and the Levels of Selection (I’ve almost finished it and I expect to blog it tomorrow) – so of course when Zach Weiner came up with this joke yesterday, I laughed. Loudly:
(Click to view full size. The comic of course has almost nothing to do with the content of the book, but I’ll take any excuse I can get for blogging that comic…)
iv. The Feynman Lectures on Physics. Available to you, online, free of charge. Stuff like this sometimes makes me think we live in a very nice world at this point.
But then I read posts/watch videos like this one and I’m reminded that things are, well, complicated.
v. A few Khan Academy lectures:
i. Albert Stevens.
“Albert Stevens (1887–1966), also known as patient CAL-1, was the subject of a human radiation experiment, and survived the highest known accumulated radiation dose in any human. On May 14, 1945, he was injected with 131 kBq (3.55 µCi) of plutonium without his knowledge or informed consent.
Plutonium remained present in his body for the remainder of his life, the amount decaying slowly through radioactive decay and biological elimination. Stevens died of heart disease some 20 years later, having accumulated an effective radiation dose of 64 Sv (6400 rem) over that period. The current annual permitted dose for a radiation worker in the United States is 5 rem. […] Stevens’s annual dose was approximately 60 times this amount.”
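The ‘approximately 60 times’ figure follows directly from the numbers quoted. A quick check (my arithmetic, using the dose and timespan from the quote):

```python
# Stevens's accumulated dose vs. the modern occupational limit, using the
# figures quoted above: 6400 rem accumulated over roughly 21 years
# (injection in May 1945, death in 1966), against a 5 rem/yr worker limit.

total_dose_rem = 6400
years = 21                          # "some 20 years later"
worker_limit_rem_per_year = 5

average_annual_dose = total_dose_rem / years
ratio_to_limit = average_annual_dose / worker_limit_rem_per_year

print(f"average annual dose: ~{average_annual_dose:.0f} rem/yr")
print(f"ratio to occupational limit: ~{ratio_to_limit:.0f}x")
```

That works out to roughly 300 rem per year on average, about 60 times the modern annual limit, consistent with the quote.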
“Plutonium was handled extensively by chemists, technicians, and physicists taking part in the Manhattan Project, but the effects of plutonium exposure on the human body were largely unknown. A few mishaps in 1944 had caused certain alarm amongst project leaders, and contamination was becoming a major problem in and outside the laboratories. […] As the Manhattan Project continued to use plutonium, airborne contamination began to be a major concern. Nose swipes were taken frequently of the workers, with numerous cases of moderate and high readings. […] Tracer experiments were begun in 1944 with rats and other animals with the knowledge of all of the Manhattan project managers and health directors of the various sites. In 1945, human tracer experiments began with the intent to determine how to properly analyze excretion samples to estimate body burden. Numerous analytic methods were devised by the lead doctors at the Met Lab (Chicago), Los Alamos, Rochester, Oak Ridge, and Berkeley. The first human plutonium injection experiments were approved in April 1945 for three tests: April 10 at the Manhattan Project Army Hospital in Oak Ridge, April 26 at Billings Hospital in Chicago, and May 14 at the University of California Hospital in San Francisco. Albert Stevens was the person selected in the California test and designated CAL-1 in official documents. […] The plutonium experiments were not isolated events. During this time, cancer researchers were attempting to discover whether certain radioactive elements might be useful to treat cancer. Recent studies on radium, polonium, and uranium proved foundational to the study of Pu toxicity. […] The mastermind behind this human experiment with plutonium was Dr. Joseph Gilbert Hamilton, a Manhattan Project doctor in charge of the human experiments in California. Hamilton had been experimenting on people (including himself) since the 1930s at Berkeley. 
[…] Hamilton eventually succumbed to the radiation that he explored for most of his adult life: he died of leukemia at the age of 49.”
“Although Stevens was the person who received the highest dose of radiation during the plutonium experiments, he was neither the first nor the last subject to be studied. Eighteen people aged 4 to 69 were injected with plutonium. Subjects who were chosen for the experiment had been diagnosed with a terminal disease. They lived from 6 days up to 44 years past the time of their injection. Eight of the 18 died within 2 years of the injection. All died from their preexisting terminal illness, or cardiac illnesses. […] As with all radiological testing during World War II, it would have been difficult to receive informed consent for Pu injection studies on civilians. Within the Manhattan Project, plutonium was referred to often by its code “49” or simply the “product.” Few outside of the Manhattan Project would have known of plutonium, much less of the dangers of radioactive isotopes inside the body. There is no evidence that Stevens had any idea that he was the subject of a secret government experiment in which he would be subjected to a substance that would have no benefit to his health.”
The best part is perhaps this: Stevens was not terminal: “He had checked into the University of California Hospital in San Francisco with a gastric ulcer that was misdiagnosed as terminal cancer.” Given that one of the people involved in these experiments survived for 44 years, and that four other experimentees were still alive by the time Stevens died, it seems pretty obvious that he was not the only one who was misdiagnosed. One interpretation of the fact that more than half survived beyond two years is that the definition of ‘terminal’ applied in this context may have been, well, slightly flexible (especially considering that large injections of radioactive poisons into these people can hardly have increased their life expectancies). Today people usually use this term for conditions which people can expect to die from within 6 months – 2 years is a long time in this context. It may however also to some extent just have reflected the state of medical science at the time – also illustrative in that respect is how the surgeons screwed him over during his illness: “Half of the left lobe of the liver, the entire spleen, most of the ninth rib, lymph nodes, part of the pancreas, and a portion of the omentum… were taken out” to help prevent the spread of the cancer that Stevens did not have. In case you were wondering, not only did they not tell him he was part of an experiment; they also never told him he had been misdiagnosed with cancer.
ii. Aberration of light.
“The aberration of light (also referred to as astronomical aberration or stellar aberration) is an astronomical phenomenon which produces an apparent motion of celestial objects about their locations dependent on the velocity of the observer. Aberration causes objects to appear to be angled or tilted towards the direction of motion of the observer compared to when the observer is stationary. The change in angle is typically very small, on the order of v/c where c is the speed of light and v the velocity of the observer. In the case of “stellar” or “annual” aberration, the apparent position of a star to an observer on Earth varies periodically over the course of a year as the Earth’s velocity changes as it revolves around the Sun […] Aberration is historically significant because of its role in the development of the theories of light, electromagnetism and, ultimately, the theory of Special Relativity. […] In 1729, James Bradley provided a classical explanation for it in terms of the finite speed of light relative to the motion of the Earth in its orbit around the Sun, which he used to make one of the earliest measurements of the speed of light. However, Bradley’s theory was incompatible with 19th century theories of light, and aberration became a major motivation for the aether drag theories of Augustin Fresnel (in 1818) and G. G. Stokes (in 1845), and for Hendrik Lorentz’s aether theory of electromagnetism in 1892. The aberration of light, together with Lorentz’s elaboration of Maxwell’s electrodynamics, the moving magnet and conductor problem, the negative aether drift experiments, as well as the Fizeau experiment, led Albert Einstein to develop the theory of Special Relativity in 1905, which provided a conclusive explanation for the aberration phenomenon. […]
Aberration may be explained as the difference in angle of a beam of light in different inertial frames of reference. A common analogy is to the apparent direction of falling rain: If rain is falling vertically in the frame of reference of a person standing still, then to a person moving forwards the rain will appear to arrive at an angle, requiring the moving observer to tilt their umbrella forwards. The faster the observer moves, the more tilt is needed.
The net effect is that light rays striking the moving observer from the sides in a stationary frame will come angled from ahead in the moving observer’s frame. This effect is sometimes called the “searchlight” or “headlight” effect.
In the case of annual aberration of starlight, the direction of incoming starlight as seen in the Earth’s moving frame is tilted relative to the angle observed in the Sun’s frame. Since the direction of motion of the Earth changes during its orbit, the direction of this tilting changes during the course of the year, and causes the apparent position of the star to differ from its true position as measured in the inertial frame of the Sun.
While classical reasoning gives intuition for aberration, it leads to a number of physical paradoxes […] The theory of Special Relativity is required to correctly account for aberration.”
The article has much more, in particular it has a lot of stuff about historical aspects pertaining to this topic.
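To make the ‘on the order of v/c’ claim concrete, here’s a worked example of my own: plugging in Earth’s mean orbital speed gives the classical annual aberration angle (the constants below are standard values, not from the article).

```python
import math

# Annual stellar aberration: the apparent tilt angle is approximately v/c,
# where v is Earth's orbital speed and c the speed of light.

v_earth = 29.78            # Earth's mean orbital speed, km/s
c = 299_792.458            # speed of light, km/s

angle_rad = v_earth / c                        # small-angle approximation
angle_arcsec = math.degrees(angle_rad) * 3600  # radians -> arcseconds

print(f"v/c = {angle_rad:.2e} rad = {angle_arcsec:.1f} arcseconds")
```

The result, about 20.5 arcseconds, is the ‘constant of aberration’ – a tiny angle, which is why it took until Bradley in the 18th century for anyone to measure it.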
iii. Spanish Armada.
“The Spanish Armada (Spanish: Grande y Felicísima Armada or Armada Invencible, literally “Great and Most Fortunate Navy” or “Invincible Fleet”) was a Spanish fleet of 130 ships that sailed from A Coruña in August 1588 under the command of the Duke of Medina Sidonia with the purpose of escorting an army from Flanders to invade England. The strategic aim was to overthrow Queen Elizabeth I of England and the Tudor establishment of Protestantism in England, with the expectation that this would put a stop to English interference in the Spanish Netherlands and to the harm caused to Spanish interests by English and Dutch privateering.
The Armada chose not to attack the English fleet at Plymouth, then failed to establish a temporary anchorage in the Solent, after one Spanish ship had been captured by Francis Drake in the English Channel, and finally dropped anchor off Calais. While awaiting communications from the Duke of Parma‘s army the Armada was scattered by an English fireship attack. In the ensuing Battle of Gravelines the Spanish fleet was damaged and forced to abandon its rendezvous with Parma’s army, who were blockaded in harbour by Dutch flyboats. The Armada managed to regroup and, driven by southwest winds, withdrew north, with the English fleet harrying it up the east coast of England. The commander ordered a return to Spain, but the Armada was disrupted during severe storms in the North Atlantic and a large portion of the vessels were wrecked on the coasts of Scotland and Ireland. Of the initial 130 ships over a third failed to return. […] The expedition was the largest engagement of the undeclared Anglo-Spanish War (1585–1604). The following year England organised a similar large-scale campaign against Spain, the Drake-Norris Expedition, also known as the Counter-Armada of 1589, which was also unsuccessful. […]
The fleet was composed of 130 ships, 8,000 sailors and 18,000 soldiers, and bore 1,500 brass guns and 1,000 iron guns. […] In the Spanish Netherlands 30,000 soldiers awaited the arrival of the armada, the plan being to use the cover of the warships to convey the army on barges to a place near London. All told, 55,000 men were to have been mustered, a huge army for that time. […] The English fleet outnumbered the Spanish, with 200 ships to 130, while the Spanish fleet outgunned the English—its available firepower was 50% more than that of the English. The English fleet consisted of the 34 ships of the royal fleet (21 of which were galleons of 200 to 400 tons), and 163 other ships, 30 of which were of 200 to 400 tons and carried up to 42 guns each; 12 of these were privateers owned by Lord Howard of Effingham, Sir John Hawkins and Sir Francis Drake. […] The Armada was delayed by bad weather […], and was not sighted in England until 19 July, when it appeared off The Lizard in Cornwall. The news was conveyed to London by a system of beacons that had been constructed all the way along the south coast.”
“During all the engagements, the Spanish heavy guns could not easily be run in for reloading because of their close spacing and the quantities of supplies stowed between decks […] Instead the gunners fired once and then jumped to the rigging to attend to their main task as marines ready to board enemy ships, as had been the practice in naval warfare at the time. In fact, evidence from Armada wrecks in Ireland shows that much of the fleet’s ammunition was never spent. Their determination to fight by boarding, rather than cannon fire at a distance, proved a weakness for the Spanish; it had been effective on occasions such as the battles of Lepanto and Ponta Delgada (1582), but the English were aware of this strength and sought to avoid it by keeping their distance. With its superior manoeuvrability, the English fleet provoked Spanish fire while staying out of range. The English then closed, firing repeated and damaging broadsides into the enemy ships. This also enabled them to maintain a position to windward so that the heeling Armada hulls were exposed to damage below the water line. Many of the gunners were killed or wounded, and the task of manning the cannon often fell to the regular foot soldiers on board, who did not know how to operate the guns. The ships were close enough for sailors on the upper decks of the English and Spanish ships to exchange musket fire. […] The outcome seemed to vindicate the English strategy and resulted in a revolution in naval battle tactics with the promotion of gunnery, which until then had played a supporting role to the tasks of ramming and boarding.”
“In September 1588 the Armada sailed around Scotland and Ireland into the North Atlantic. The ships were beginning to show wear from the long voyage, and some were kept together by having their hulls bundled up with cables. Supplies of food and water ran short. The intention would have been to keep well to the west of the coast of Scotland and Ireland, in the relative safety of the open sea. However, there being at that time no way of accurately measuring longitude, the Spanish were not aware that the Gulf Stream was carrying them north and east as they tried to move west, and they eventually turned south much further to the east than planned, a devastating navigational error. Off the coasts of Scotland and Ireland the fleet ran into a series of powerful westerly winds […] Because so many anchors had been abandoned during the escape from the English fireships off Calais, many of the ships were incapable of securing shelter as they reached the coast of Ireland and were driven onto the rocks. Local men looted the ships. […] more ships and sailors were lost to cold and stormy weather than in direct combat. […] Following the gales it is reckoned that 5,000 men died, by drowning, starvation and slaughter at the hands of English forces after they were driven ashore in Ireland; only half of the Spanish Armada fleet returned home to Spain. Reports of the passage around Ireland abound with strange accounts of hardship and survival.
In the end, 67 ships and fewer than 10,000 men survived. Many of the men were near death from disease, as the conditions were very cramped and most of the ships ran out of food and water. Many more died in Spain, or on hospital ships in Spanish harbours, from diseases contracted during the voyage.”
iv. Viral hemorrhagic septicemia.
“Viral hemorrhagic septicemia (VHS) is a deadly infectious fish disease caused by the viral hemorrhagic septicemia virus (VHSV, or VHSv). It afflicts over 50 species of freshwater and marine fish in several parts of the northern hemisphere. Different strains of the virus occur in different regions, and affect different species. There are no signs that the disease affects human health. VHS is also known as “Egtved disease,” and VHSV as “Egtved virus.”
Historically, VHS was associated mostly with freshwater salmonids in western Europe, documented as a pathogenic disease among cultured salmonids since the 1950s. Today it is still a major concern for many fish farms in Europe and is therefore being watched closely by the European Community Reference Laboratory for Fish Diseases. It was first discovered in the US in 1988 among salmon returning from the Pacific in Washington State. This North American genotype was identified as a distinct, more marine-stable strain than the European genotype. VHS has since been found afflicting marine fish in the northeastern Pacific Ocean, the North Sea, and the Baltic Sea. Since 2005, massive die-offs have occurred among a wide variety of freshwater species in the Great Lakes region of North America.”
The article isn’t that great but I figured I should include it anyway because I find it sort of fascinating how almost all humans alive can and do live their entire lives without necessarily ever knowing anything about stuff like this. Humans have some really obvious blind spots when it comes to knowledge about some of the stuff we put into our mouths on a regular basis.
v. Bird migration.
“Bird migration is the regular seasonal movement, often north and south along a flyway between breeding and wintering grounds, undertaken by many species of birds. Migration, which carries high costs in predation and mortality, including from hunting by humans, is driven primarily by availability of food. Migration occurs mainly in the Northern Hemisphere where birds are funnelled on to specific routes by natural barriers such as the Mediterranean Sea or the Caribbean Sea.”
“Historically, migration has been recorded as much as 3,000 years ago by Ancient Greek authors including Homer and Aristotle […] Aristotle noted that cranes traveled from the steppes of Scythia to marshes at the headwaters of the Nile. […] Aristotle however suggested that swallows and other birds hibernated. […] It was not until the end of the eighteenth century that migration as an explanation for the winter disappearance of birds from northern climes was accepted […] [and Aristotle’s hibernation] belief persisted as late as 1878, when Elliott Coues listed the titles of no less than 182 papers dealing with the hibernation of swallows.”
“Approximately 1800 of the world’s 10,000 bird species are long-distance migrants. […] Within a species not all populations may be migratory; this is known as “partial migration”. Partial migration is very common in the southern continents; in Australia, 44% of non-passerine birds and 32% of passerine species are partially migratory. In some species, the population at higher latitudes tends to be migratory and will often winter at lower latitude. The migrating birds bypass the latitudes where other populations may be sedentary, where suitable wintering habitats may already be occupied. This is an example of leap-frog migration. Many fully migratory species show leap-frog migration (birds that nest at higher latitudes spend the winter at lower latitudes), and many show the alternative, chain migration, where populations ‘slide’ more evenly North and South without reversing order.
Within a population, it is common for different ages and/or sexes to have different patterns of timing and distance. […] Many, if not most, birds migrate in flocks. For larger birds, flying in flocks reduces the energy cost. Geese in a V-formation may conserve 12–20% of the energy they would need to fly alone. […] Seabirds fly low over water but gain altitude when crossing land, and the reverse pattern is seen in landbirds. However most bird migration is in the range of 150 m (500 ft) to 600 m (2000 ft). Bird strike aviation records from the United States show most collisions occur below 600 m (2000 ft) and almost none above 1800 m (6000 ft). Bird migration is not limited to birds that can fly. Most species of penguin migrate by swimming.”
“Some Bar-tailed Godwits have the longest known non-stop flight of any migrant, flying 11,000 km from Alaska to their New Zealand non-breeding areas. Prior to migration, 55 percent of their bodyweight is stored fat to fuel this uninterrupted journey. […] The Arctic Tern has the longest-distance migration of any bird, and sees more daylight than any other, moving from its Arctic breeding grounds to the Antarctic non-breeding areas. One Arctic Tern, ringed (banded) as a chick on the Farne Islands off the British east coast, reached Melbourne, Australia in just three months from fledging, a sea journey of over 22,000 km (14,000 mi). […] The most pelagic species, mainly in the ‘tubenose’ order Procellariiformes, are great wanderers, and the albatrosses of the southern oceans may circle the globe as they ride the “roaring forties” outside the breeding season. The tubenoses spread widely over large areas of open ocean, but congregate when food becomes available. Many are also among the longest-distance migrants; Sooty Shearwaters nesting on the Falkland Islands migrate 14,000 km (8,700 mi) between the breeding colony and the North Atlantic Ocean off Norway. Some Manx Shearwaters do this same journey in reverse. As they are long-lived birds, they may cover enormous distances during their lives; one record-breaking Manx Shearwater is calculated to have flown 8 million km (5 million miles) during its over-50 year lifespan.”
“Bird migration is primarily, but not entirely, a Northern Hemisphere phenomenon. This is because land birds in high northern latitudes, where food becomes scarce in winter, leave for areas further south (including the Southern Hemisphere) to overwinter, and because the continental landmass is much larger in the Northern Hemisphere [see also this post]. In contrast, among (pelagic) seabirds, species of the Southern Hemisphere are more likely to migrate. This is because there is a large area of ocean in the Southern Hemisphere, and more islands suitable for seabirds to nest.”
This is where you share interesting stuff you’ve come across since the last time I posted one of these.
I figured I should post a bit of content as well, so here we go:
(Chichen Itza is not located in ‘Southern America’, but aside from that I don’t have a lot of stuff to complain about in relation to that lecture. As I’ve mentioned before I generally like Crawford’s lectures.)
ii. I haven’t read this (yet? Maybe I won’t – I hate when articles are gated; even if I can usually get around that, I take this sort of approach to matters as a strong signal that the authors don’t really want me to read it in the first place (if they wanted me to read it, why would they make it so difficult for me to do so?)), but as it sort of conceptually relates to some of the work Boyd & Richerson talk about in their book, which I read some chapters of yesterday, I figured I should link to it anyway: Third-party punishment increases cooperation in children through (misaligned) expectations and conditional cooperation. Here’s the abstract:
“The human ability to establish cooperation, even in large groups of genetically unrelated strangers, depends upon the enforcement of cooperation norms. Third-party punishment is one important factor to explain high levels of cooperation among humans, although it is still somewhat disputed whether other animal species also use this mechanism for promoting cooperation. We study the effectiveness of third-party punishment to increase children’s cooperative behavior in a large-scale cooperation game. Based on an experiment with 1,120 children, aged 7 to 11 y, we find that the threat of third-party punishment more than doubles cooperation rates, despite the fact that children are rarely willing to execute costly punishment. We can show that the higher cooperation levels with third-party punishment are driven by two components. First, cooperation is a rational (expected payoff-maximizing) response to incorrect beliefs about the punishment behavior of third parties. Second, cooperation is a conditionally cooperative reaction to correct beliefs that third party punishment will increase a partner’s level of cooperation.”
I should note that I yesterday also started reading a book on conflict resolution which covers the behavioural patterns of social animals in some detail, and which actually also ‘sort of relates, a bit’ to this type of stuff. A lot of stuff that people do they do for different reasons than the ones they usually apply themselves to explain their behaviours (if they even bother to do that at all…), but scientists in many different areas of research are making progress in terms of finding out ‘what’s really going on’, and there are probably a lot more potentially useful approaches to these types of problems than most people usually imagine. Many smart people seem to me at this point to be familiar with some of the results of the heuristics-and-biases literature/approach to human behaviour because that stuff’s been popularized a lot over the last decade or two, and they probably have a tendency to interpret human behaviour using that sort of contextual framework, perhaps combined with the usual genes/environment-type conceptual approaches. Perhaps they combine that stuff with the approaches that are most common among people with their educational backgrounds (people with a medical degree may be prone to using biological models, an economist might apply game theory, and an evolutionary biologist might ask what a chimpanzee would have done). This isn’t a problem as such, but many people might do well to keep in mind every now and then that there are a lot of other theoretical frameworks one might apply in order to make sense of what humans do than the one(s) they usually apply themselves, and that some of these may actually add a lot of information even if they’re much less well-known. Some of the methodological differences relate to levels of analysis (are we trying to understand one individual or a group of individuals?), but that’s far from the whole story.
To take a different kind of example, it has turned out that animal models are actually really nice tools if you want to understand some of the details involved in addictive behaviours, and they seem to be useful if you want to deal with conflict resolution stuff as well, at least judging from the stuff I’ve read in that new book so far (one could of course consider animal models to be a subset of the genetic modeling framework, but in an applied context it makes a lot of sense to keep them separate from each other and to consider them to be distinct subfields…). I have a nagging suspicion that animal models may also be very useful when it comes to explaining various forms of what people usually refer to as ‘emotional behaviours’, and that despite the fact that a lot of people tend to consider that kind of stuff ‘unanalyzable’, it probably isn’t if you use the right tools and ask the right questions. You don’t need to be a doctor or a biologist to see why hard-to-observe purely ‘biological effects’ having behavioural effects may be important, but are these sorts of dynamics taken sufficiently into account when people interact with each other? I’m not sure. Mathematical modeling approaches like the one above are other ways (of course various approaches can be combined, making this stuff even more complicated…) to proceed and they seem to me to be, certainly when they generate testable predictions, potentially useful as well – not necessarily always only because we learn whether the predictions are correct or not, but also because mathematical thinking in general allows/requires you to think more carefully about stuff and identify relevant variables and pathways (but I’ve talked about this before).
I should point out that I wrote the passage above in part because very occasionally I encounter a Fan of The Hard Sciences (on the internet) who seems to think that rejecting all kinds of human behavioural theory/-research (‘Social Science’) on account of it not being Hard Enough to generate trustworthy inferences is a good way to go – I actually had a brief encounter with one of those not too long ago, which was part of what motivated me to write the stuff above (and the stuff below). That guy expressed the opinion that you’d learn more about human nature by reading a Dostoyevsky novel than you would by reading e.g. Leary & Hoyle’s textbook. I’m perhaps now being rather more blunt than I usually am, but I thought I should make it clear here, so that there are no misunderstandings, that I tend to consider people with that kind of approach to things to be clueless fools who don’t have any idea what they’re talking about. Perhaps I should also mention that I have in fact read both so I feel qualified to judge on the matter, but this is arguably beside the point; the disagreement goes much deeper than just the truth content of the specific statement in question, as the much bigger problem is the methodological divide. Some skepticism is often required in behavioural sciences, among other things because establishing causal inference is really hard in many areas, but if you want your skepticism to make sense and be taken seriously you need to know enough about the topic and potential problems to actually formulate a relevant and cogent criticism. In that context I emphasize that ‘unbundling’ is really important – if you’re speaking to someone who’s familiar with at least some part of ‘the field of social science’, criticizing ‘The Social Sciences’ in general terms will probably just make you look stupid unless you add a lot of caveats. That’s because it’s not one field.
Do the same sort of problems arise when people evaluate genetic models of human behavioural variance and ‘sociological approaches’? Applied microeconomics? Attachment theory? Evolutionary biology? All of these areas, and many others, play some role and provide part of the picture as to why people behave the way they do. Quantum physics and cell biology are arguably more closely connected than are some of the various subfields which might be categorized as belonging to ‘the field’ of ‘social science’. Disregarding this heterogeneity seems to be more common than I’d wish it was, as is ‘indiscriminatory skepticism’ (‘all SS is BS’). A problem with indiscriminatory skepticism of this sort is incidentally that it’s sort of self-perpetuating in a way; that approach to matters pretty much precludes you from ever learning anything about the topic, because anyone who has anything to teach you will think you’re a fool whom it’s not worth spending time talking to (certainly if they’re in a bad mood on account of having slept badly last night…). This dynamic may not seem problematic at all to people who think all SS is BS, but of course it might be worth pointing out to those kinds of people that by maintaining that sort of approach to the subject matter they’re probably also cutting themselves off from learning about research taking place in areas they hadn’t even considered to belong to the field of social science in the first place. Symptom analyses of medical problems are usually not considered to be research belonging to the social sciences, but that’s mostly just the result of a categorization convention; medical problems, or the absence of them, impact our social behaviours in all kinds of ways we’re often not aware of. Is it medical science when a doctor performs the analysis, but social science when the psychologist analyzes the same data? Is what that guy is doing social science or statistics? Sometimes the lines seem to get really blurry to me.
Discriminatory skepticism is better (and probably justified, given methodological differences across areas), but contains its own host of problems. Often discriminatory skepticism seems to imply that you disregard certain levels of analysis completely – instead of ‘all SS is BS’, it becomes ‘all SS belonging to this level of analysis is BS’. Maybe that’s better than the most sensible alternative (‘perhaps it’s not all BS’) if the science is really bad, but even in those situations you’ll have contagion effects as well which may cause problems (‘culture? That’s the sort of crap cultural anthropologists deal with, isn’t it? Those people are full of crap. I’m not going to spend time on that stuff.’ So you disregard those aspects of behaviour completely, even if perhaps they do matter and can be subject to scientific analysis of a different type than the one the Assigned Bad Guys (‘Cultural Anthropologists’) usually apply).
I don’t think we’ll ever get to the point where we have a Big All-Encompassing Theory of How Humans Work because there are too many variables, but that does not mean that the analysis of specific behaviours and specific variables is without merit. Understanding that I may feel argumentative right now because I’ve misjudged my insulin requirements (or didn’t sleep enough, or haven’t had enough to eat, or had a fight with my mother yesterday, or…) is important knowledge to take into account, and you can add a lot of other similarly useful observations to your toolbox if you spend some time on this type of stuff. A big problem with not doing the research is that not doing the research does not protect you from adopting faulty models – rather it seems to me that it almost guarantees that you do. Humans need explanations for why things happen, and ‘things that happen’ include social behaviours; they/we need causal models to make sense of the world, and having no good information will not stop them from coming up with theories about why people behave the way they do (social scientists realized that a while back..). And as a result of this, people might end up using a novel written 150 years ago to obtain insights into why humans behave the way they do, instead of perhaps relying on a textbook written last year containing the combined insights of hundreds of researchers who looked at these things in a lot of detail. The researchers might be wrong, sure, but even so this approach still seems … stupid. ‘I don’t trust the social scientists, so instead I’ll rely on the life lessons and social rules taught to me by my illiterate grandmother when I was a child.’ Or whatever. You can easily end up doing stuff like this, without ever even suspecting, much less realizing, that that’s what you’re doing.
Comments on the topics covered above are welcome, but I must admit that I didn’t really write this stuff to start a discussion about these things – it was more of a, ‘this is where I’m coming from and these are some thoughts on this topic which I’ve had, and now you know’-posting.
iii. Enough lecturing. Let’s have a chess video. International Master Christof Sielecki recently played a tournament in Mallorca, and he’s made some excellent videos talking about his games. Here’s one of those videos:
I incidentally think I have learned quite a bit from watching his material on youtube. I may have talked about his youtube channel here on the blog before, but even if I have I don’t mind repeating myself as you should know about it if you’re interested in chess. He is one of the strongest players online providing this sort of content, and he provides a lot of content. If you’re a beginner some of his material may be beyond you, but not all of it; I don’t think his opening videos for example are particularly difficult to understand or follow, even if you’re not a very strong player. And if you’re a ‘strong club player’ I think this is the best chess channel on youtube.
i. Great Fire of London (featured).
“The Great Fire of London was a major conflagration that swept through the central parts of the English city of London, from Sunday, 2 September to Wednesday, 5 September 1666. The fire gutted the medieval City of London inside the old Roman city wall. It threatened, but did not reach, the aristocratic district of Westminster, Charles II’s Palace of Whitehall, and most of the suburban slums. It consumed 13,200 houses, 87 parish churches, St. Paul’s Cathedral and most of the buildings of the City authorities. It is estimated to have destroyed the homes of 70,000 of the City’s 80,000 inhabitants.”
Do note that even though this fire was a really big deal the ‘70,000 out of 80,000’ number can be misleading as many Londoners didn’t actually live in the City proper:
“By the late 17th century, the City proper—the area bounded by the City wall and the River Thames—was only a part of London, covering some 700.0 acres (2.833 km²; 1.0938 sq mi), and home to about 80,000 people, or one sixth of London’s inhabitants. The City was surrounded by a ring of inner suburbs, where most Londoners lived.”
I thought I should include a few observations related to how well people behaved in this terrible situation – humans are really wonderful sometimes, and of course the people affected by the fire did everything they could to stick together and help each other out:
“Order in the streets broke down as rumours arose of suspicious foreigners setting fires. The fears of the homeless focused on the French and Dutch, England’s enemies in the ongoing Second Anglo-Dutch War; these substantial immigrant groups became victims of lynchings and street violence.” […] [no, wait…]
“Suspicion soon arose in the threatened city that the fire was no accident. The swirling winds carried sparks and burning flakes long distances to lodge on thatched roofs and in wooden gutters, causing seemingly unrelated house fires to break out far from their source and giving rise to rumours that fresh fires were being set on purpose. Foreigners were immediately suspects because of the current Second Anglo-Dutch War. As fear and suspicion hardened into certainty on the Monday, reports circulated of imminent invasion, and of foreign undercover agents seen casting “fireballs” into houses, or caught with hand grenades or matches. There was a wave of street violence. William Taswell saw a mob loot the shop of a French painter and level it to the ground, and watched in horror as a blacksmith walked up to a Frenchman in the street and hit him over the head with an iron bar.
The fears of terrorism received an extra boost from the disruption of communications and news as facilities were devoured by the fire. The General Letter Office in Threadneedle Street, through which post for the entire country passed, burned down early on Monday morning. The London Gazette just managed to put out its Monday issue before the printer’s premises went up in flames (this issue contained mainly society gossip, with a small note about a fire that had broken out on Sunday morning and “which continues still with great violence”). The whole nation depended on these communications, and the void they left filled up with rumours. There were also religious alarms of renewed Gunpowder Plots. As suspicions rose to panic and collective paranoia on the Monday, both the Trained Bands and the Coldstream Guards focused less on fire fighting and more on rounding up foreigners, Catholics, and any odd-looking people, and arresting them or rescuing them from mobs, or both together.”
I didn’t really know what to think about this part:
“An example of the urge to identify scapegoats for the fire is the acceptance of the confession of a simple-minded French watchmaker, Robert Hubert, who claimed he was an agent of the Pope and had started the Great Fire in Westminster. He later changed his story to say that he had started the fire at the bakery in Pudding Lane. Hubert was convicted, despite some misgivings about his fitness to plead, and hanged at Tyburn on 28 September 1666. After his death, it became apparent that he had not arrived in London until two days after the fire started.”
Just one year before the fire, London had incidentally been hit by a plague outbreak which “is believed to have killed a sixth of London’s inhabitants, or 80,000 people”. Being a Londoner during the 1660s probably wasn’t a great deal of fun. On the other hand this disaster was actually not that big of a deal when compared to e.g. the 1556 Shaanxi earthquake.
ii. Sea (featured). I was considering reading an oceanography textbook a while back, but I decided against it and I read this article ‘instead’. Some interesting stuff in there. A few observations from the article:
“About 97.2 percent of the Earth’s water is found in the sea, some 1,360,000,000 cubic kilometres (330,000,000 cu mi) of salty water. Of the rest, 2.15 percent is accounted for by ice in glaciers, surface deposits and sea ice, and 0.65 percent by vapour and liquid fresh water in lakes, rivers, the ground and the air.”
“The water in the sea was once thought to come from the Earth’s volcanoes, starting 4 billion years ago, released by degassing from molten rock.(pp24–25) More recent work suggests that much of the Earth’s water may have come from comets.” (This stuff covers 70 percent of the planet and we still are not completely sure how it got to be here. I’m often amazed at how much stuff we know about the world, but very occasionally I also get amazed at the things we don’t know. This seems like the sort of thing we somehow ‘ought to know’..)
“An important characteristic of seawater is that it is salty. Salinity is usually measured in parts per thousand (expressed with the ‰ sign or “per mil”), and the open ocean has about 35 grams (1.2 oz) of solids per litre, a salinity of 35‰ (about 90% of the water in the ocean has between 34‰ and 35‰ salinity). […] The constituents of table salt, sodium and chloride, make up about 85 percent of the solids in solution. […] The salinity of a body of water varies with evaporation from its surface (increased by high temperatures, wind and wave motion), precipitation, the freezing or melting of sea ice, the melting of glaciers, the influx of fresh river water, and the mixing of bodies of water of different salinities.”
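The quoted salinity figures are easy to sanity-check: ‘parts per thousand’ (‰) is just grams of dissolved solids per thousand grams of seawater. A minimal sketch in Python (the function name and the ‘1 kg of seawater ≈ 1 litre’ simplification are my own, not from the article):

```python
def salinity_permil(grams_solids: float, kg_seawater: float = 1.0) -> float:
    """Salinity in parts per thousand (per mil):
    grams of dissolved solids per 1,000 g of seawater."""
    return grams_solids / (kg_seawater * 1000.0) * 1000.0

# Open-ocean seawater: roughly 35 g of solids per litre -> 35 per mil,
# matching the 'salinity of 35 per mil' figure in the quote above.
print(salinity_permil(35.0))  # 35.0
```

Strictly speaking a litre of seawater weighs slightly more than a kilogram (density ~1.025 kg/L), so grams-per-litre and grams-per-kilogram differ by a couple of percent, but at this level of precision the quoted numbers come out the same.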
“Sea temperature depends on the amount of solar radiation falling on its surface. In the tropics, with the sun nearly overhead, the temperature of the surface layers can rise to over 30 °C (86 °F) while near the poles the temperature in equilibrium with the sea ice is about −2 °C (28 °F). There is a continuous circulation of water in the oceans. Warm surface currents cool as they move away from the tropics, and the water becomes denser and sinks. The cold water moves back towards the equator as a deep sea current, driven by changes in the temperature and density of the water, before eventually welling up again towards the surface. Deep seawater has a temperature between −2 °C (28 °F) and 5 °C (41 °F) in all parts of the globe.”
“The amount of light that penetrates the sea depends on the angle of the sun, the weather conditions and the turbidity of the water. Much light gets reflected at the surface, and red light gets absorbed in the top few metres. […] There is insufficient light for photosynthesis and plant growth beyond a depth of about 200 metres (660 ft).”
“Over most of geologic time, the sea level has been higher than it is today.(p74) The main factor affecting sea level over time is the result of changes in the oceanic crust, with a downward trend expected to continue in the very long term. At the last glacial maximum, some 20,000 years ago, the sea level was 120 metres (390 ft) below its present-day level.” (this of course had some very interesting ecological effects – van der Geer et al. had some interesting observations on that topic)
“On her 68,890-nautical-mile (127,580 km) journey round the globe, HMS Challenger discovered about 4,700 new marine species, and made 492 deep sea soundings, 133 bottom dredges, 151 open water trawls and 263 serial water temperature observations.”
“Seaborne trade carries more than US $4 trillion worth of goods each year.”
“Many substances enter the sea as a result of human activities. Combustion products are transported in the air and deposited into the sea by precipitation. Industrial outflows and sewage contribute heavy metals, pesticides, PCBs, disinfectants, household cleaning products and other synthetic chemicals. These become concentrated in the surface film and in marine sediment, especially estuarine mud. The result of all this contamination is largely unknown because of the large number of substances involved and the lack of information on their biological effects. The heavy metals of greatest concern are copper, lead, mercury, cadmium and zinc which may be bio-accumulated by marine invertebrates. They are cumulative toxins and are passed up the food chain.
Much floating plastic rubbish does not biodegrade, instead disintegrating over time and eventually breaking down to the molecular level. Rigid plastics may float for years. In the centre of the Pacific gyre there is a permanent floating accumulation of mostly plastic waste and there is a similar garbage patch in the Atlantic. […] Run-off of fertilisers from agricultural land is a major source of pollution in some areas and the discharge of raw sewage has a similar effect. The extra nutrients provided by these sources can cause excessive plant growth. Nitrogen is often the limiting factor in marine systems, and with added nitrogen, algal blooms and red tides can lower the oxygen level of the water and kill marine animals. Such events have created dead zones in the Baltic Sea and the Gulf of Mexico.”
iii. List of chemical compounds with unusual names. Technically this is not an article, but I decided to include it here anyway. A few examples from the list:
“Sonic hedgehog: A protein named after Sonic the Hedgehog.”
iv. Operation Proboi. When trying to make sense of e.g. the reactions of people living in the Baltic countries to Russia’s ‘current activities’ in the Ukraine, it probably helps to know stuff like this. 1949 isn’t that long ago – if my father had been born in Latvia he might have been one of the people in the photo.
v. Schrödinger equation. I recently started reading A. C. Phillips’ Introduction to Quantum Mechanics – chapter 2 deals with this topic. Due to the technical nature of the book I’m incidentally not sure to what extent I’ll cover the book here (or for that matter whether I’ll be able to finish it..) – if I do decide to cover it in some detail I’ll probably include relevant links to wikipedia along the way. The wiki has a lot of stuff on these topics, but textbooks are really helpful in terms of figuring out the order in which you should proceed.
vi. Happisburgh footprints. ‘A small step for man, …’
“The Happisburgh footprints were a set of fossilized hominin footprints that date to the early Pleistocene. They were discovered in May 2013 in a newly uncovered sediment layer on a beach at Happisburgh […] in Norfolk, England, and were destroyed by the tide shortly afterwards. Results of research on the footprints were announced on 7 February 2014, and identified them as dating to more than 800,000 years ago, making them the oldest known hominin footprints outside Africa. Before the Happisburgh discovery, the oldest known footprints in Britain were at Uskmouth in South Wales, from the Mesolithic and carbon-dated to 4,600 BC.”
The fact that we found these footprints is awesome. The fact that we can tell that they are as old as they are is awesome. There’s a lot of awesome stuff going on here – Happisburgh also simply seems to be a gift that keeps on giving:
“Happisburgh has produced a number of significant archaeological finds over many years. As the shoreline is subject to severe coastal erosion, new material is constantly being exposed along the cliffs and on the beach. Prehistoric discoveries have been noted since 1820, when fishermen trawling oyster beds offshore found their nets had brought up teeth, bones, horns and antlers from elephants, rhinos, giant deer and other extinct species. […]
In 2000, a black flint handaxe dating to between 600,000 and 800,000 years ago was found by a man walking on the beach. In 2012, for the television documentary Britain’s Secret Treasures, the handaxe was selected by a panel of experts from the British Museum and the Council for British Archaeology as the most important item on a list of fifty archaeological discoveries made by members of the public. Since its discovery, the palaeolithic history of Happisburgh has been the subject of the Ancient Human Occupation of Britain (AHOB) and Pathways to Ancient Britain (PAB) projects […] Between 2005 and 2010 eighty palaeolithic flint tools, mostly cores, flakes and flake tools were excavated from the foreshore in sediment dating back to up to 950,000 years ago.”
vii. Keep (‘good article’).
“A keep (from the Middle English kype) is a type of fortified tower built within castles during the Middle Ages by European nobility. Scholars have debated the scope of the word keep, but usually consider it to refer to large towers in castles that were fortified residences, used as a refuge of last resort should the rest of the castle fall to an adversary. The first keeps were made of timber and formed a key part of the motte and bailey castles that emerged in Normandy and Anjou during the 10th century; the design spread to England as a result of the Norman invasion of 1066, and in turn spread into Wales during the second half of the 11th century and into Ireland in the 1170s. The Anglo-Normans and French rulers began to build stone keeps during the 10th and 11th centuries; these included Norman keeps, with a square or rectangular design, and circular shell keeps. Stone keeps carried considerable political as well as military importance and could take up to a decade to build.
During the 12th century new designs began to be introduced – in France, quatrefoil-shaped keeps were introduced, while in England polygonal towers were built. By the end of the century, French and English keep designs began to diverge: Philip II of France built a sequence of circular keeps as part of his bid to stamp his royal authority on his new territories, while in England castles were built that abandoned the use of keeps altogether. In Spain, keeps were increasingly incorporated into both Christian and Islamic castles, although in Germany tall towers called Bergfriede were preferred to keeps in the western fashion. In the second half of the 14th century there was a resurgence in the building of keeps. In France, the keep at Vincennes began a fashion for tall, heavily machicolated designs, a trend adopted in Spain most prominently through the Valladolid school of Spanish castle design. Meanwhile, in England tower keeps became popular amongst the most wealthy nobles: these large keeps, each uniquely designed, formed part of the grandest castles built during the period.
By the 16th century, however, keeps were slowly falling out of fashion as fortifications and residences. Many were destroyed between the 17th and 18th centuries in civil wars, or incorporated into gardens as an alternative to follies. During the 19th century, keeps became fashionable once again and in England and France a number were restored or redesigned by Gothic architects. Despite further damage to many French and Spanish keeps during the wars of the 20th century, keeps now form an important part of the tourist and heritage industry in Europe. […]
“By the 15th century it was increasingly unusual for a lord to build both a keep and a large gatehouse at the same castle, and by the early 16th century the gatehouse had easily overtaken the keep as the more fashionable feature: indeed, almost no new keeps were built in England after this period. The classical Palladian style began to dominate European architecture during the 17th century, causing a further move away from the use of keeps. […] From the 17th century onwards, some keeps were deliberately destroyed. In England, many were destroyed after the end of the Second English Civil War in 1649, when Parliament took steps to prevent another royalist uprising by slighting, or damaging, castles so as to prevent them from having any further military utility. Slighting was quite expensive and took considerable effort to carry out, so damage was usually done in the most cost efficient fashion with only selected walls being destroyed. Keeps were singled out for particular attention in this process because of their continuing political and cultural importance, and the prestige they lent their former royalist owners […] There was some equivalent destruction of keeps in France in the 17th and 18th centuries […] The Spanish Civil War and First and Second World Wars in the 20th century caused damage to many castle keeps across Europe; in particular, the famous keep at Coucy was destroyed by the German Army in 1917. By the late 20th century, however, the conservation of castle keeps formed part of government policy across France, England, Ireland and Spain. In the 21st century in England, most keeps are ruined and form part of the tourism and heritage industries, rather than being used as functioning buildings – the keep of Windsor Castle being a rare exception.
This is in contrast to the fate of bergfried towers in Germany, large numbers of which were restored as functional buildings in the late 19th and early 20th century, often as government offices or youth hostels, or the modern conversion of tower houses, which in many cases have become modernised domestic homes.”
viii. Battles of Khalkhyn Gol.

“The Battles of Khalkhyn Gol […] constituted the decisive engagement of the undeclared Soviet–Japanese border conflicts fought among the Soviet Union, Mongolia and the Empire of Japan in 1939. The conflict was named after the river Khalkhyn Gol, which passes through the battlefield. In Japan, the decisive battle of the conflict is known as the Nomonhan Incident […] after a nearby village on the border between Mongolia and Manchuria. The battles resulted in the defeat of the Japanese Sixth Army. […]
While this engagement is little-known in the West, it played an important part in subsequent Japanese conduct in World War II. This defeat, together with other factors, moved the Imperial General Staff in Tokyo away from the policy of the North Strike Group favored by the Army, which wanted to seize Siberia as far as Lake Baikal for its resources. […] Other factors included the signing of the Nazi-Soviet non-aggression pact, which deprived the Army of the basis of its war policy against the USSR. Nomonhan earned the Kwantung Army the displeasure of officials in Tokyo, not so much due to its defeat, but because it was initiated and escalated without direct authorization from the Japanese government. Politically, the defeat also shifted support to the South Strike Group, favored by the Navy, which wanted to seize the resources of Southeast Asia, especially the petroleum and mineral-rich Dutch East Indies. Two days after the Eastern Front of World War II broke out, the Japanese army and navy leaders adopted on 24 June 1941 a resolution “not intervening in German Soviet war for the time being”. In August 1941, Japan and the Soviet Union reaffirmed their neutrality pact. Since the European colonial powers were weakening and suffering early defeats in the war with Germany, coupled with their embargoes on Japan (especially of vital oil) in the second half of 1941, Japan’s focus was ultimately focused on the south, and led to its decision to launch the attack on Pearl Harbor, on 7 December that year.”
Note that there’s some disagreement in the reddit thread as to how important Khalkhin Gol really was – one commenter e.g. argues that: “Khalkhin Gol is overhyped as a factor in the Japanese decision for the southern plan.”
ix. Medical aspects, Hiroshima, Japan, 1946. Technically this is also not a wikipedia article, but multiple wikipedia articles link to it and it is a wikipedia link. The link is to a video featuring multiple people who were harmed by the first nuclear weapon used by humans in warfare. Extensive tissue damage, severe burns, scars – it’s worth having in mind that dying from cancer is not the only concern facing people who survive a nuclear blast. A few related links: a) How did cleanup in Nagasaki and Hiroshima proceed following the atom bombs? b) Minutes of the second meeting of the Target Committee Los Alamos, May 10-11, 1945. c) Keloid. d) Japan in the 1950s (pictures).
It is occasionally slightly annoying that you can’t tell what she’s pointing at (a recurring problem in these lectures), but aside from this it’s a nice lecture – and this is a rather minor problem.
Most of this stuff was review to me, but it’s a nice overview lecture in case you have never had a closer look at this topic. There are some sound issues along the way, but otherwise the coverage is quite nice.
This one is technically not a lecture as much as a conversation, but I figured I should cover it somewhere and this may be as good a place as any. If you’re going to watch both this one and the lecture above, you should know that the order I posted them in is not random – the lectures overlap a little (Ed Copeland is one of those “lots of people [who] are playing with that idea” which Crawford mentions towards the end of her lecture) and I think it makes most sense to watch Crawford’s lecture before you watch Brady and Ed Copeland’s discussion if you’re going to watch both.
Incidentally the fact that this is not a lecture does not in my opinion detract from the coverage provided in the video – if anything I think it may well add. Instead of a lecturer talking to hundreds of people simply following a script without really knowing whether they understand what he’s talking about due to lack of feedback, here you have one expert talking to a very curious individual who asks quite a few questions along the way and makes sure the ideas presented are explained and clarified whenever explanation or clarification is needed. Of course the standard lecture does have its merits as well, but I really like these ‘longer-than-average’ Sixty Symbols conversation videos.
Again I’m not sure I’d categorize this as a lecture, but it’s close enough for me to include it here. Unfortunately if you’re not an at least reasonably strong player who knows some basic concepts I assume some of the stuff covered may well be beyond you – I’ve seen it remarked before in the comments to some of Sielecki’s videos that there are other channels which are better suited for new/weak players – and I’m not sure how many people might find the video interesting, but I figured I might as well include it anyway. If comments like “this move is terrible because black loses control over the f5 square – which means his position is basically lost” (he doesn’t actually say this in the video, but it’s the kind of thing he might say) would be hard for you to understand (‘why would I care about the f5 square?’ ‘Why is it lost? What are you talking about? The position looks fine to me!’ …or perhaps even: ‘the f5 square? What’s that?’), this video may not be for you (in the latter case it most certainly isn’t).
At some point I should probably read his lectures, but I don’t see that happening anytime soon. In the meantime, lectures like the ones posted below are good, if imperfect, substitutes: they are very enjoyable to watch. He repeats himself quite a bit; I assume part of the reason is that this stuff is from before internet lectures became a thing, so there would have been no way for people to learn what he’d said in previous lectures – repeating the main points made in previous lectures was thus a reasonable strategy to keep newcomers from being completely lost.
The sound is really awful, especially in the beginning of the second lecture, but a lot of the stuff covered there is review and the sound problem gets fixed around 17 minutes in. More generally, the sound quality varies somewhat and isn’t that great. Neither is the image quality – it’s quite grainy most of the time, which sometimes makes it hard to see what he’s written/drawn on the blackboard. The last lecture in particular would presumably have been much easier to follow if you could actually tell apart the various colours of chalk he’s using. All the videos also have a problem with the image freezing up around the one-hour mark (the sound keeps working, so he’ll talk without you being able to see what he’s doing), but fortunately this lasts only a very short while (30 seconds or so). In my opinion minor technical issues such as these really should not keep you from watching these lectures – they were given before I was even born, by a Nobel Prize-winning physicist – the fact that you can watch them at all is quite remarkable.
I had fun watching these lectures. Here’s one neat quote from the third lecture: “Now in order to describe both the space and the time pictures, I’m going to make a kind of graph which we call… – which is very handy – if I call it by its name you’ll be frightened so I’m not going to call it by its name.” I couldn’t hold back a brief laugh at that point – I’m sure some of you understand why. Here’s another nice one, related to Eddington‘s work on the coupling constant: “The first idea was by Eddington, and experiments were very crude in those days and the number looked very close to 136, so he proved by pure logic that it had to be 136. Then it turned out that the experiments showed that that was a little wrong, that it was closer to 137, so he found a slight error in the logic and proved [loud laughter in the background] with pure logic that it had to be exactly the integer 137.” There are a lot more of these in the lectures, and incidentally if you manage to watch them without at any point feeling a desire to laugh, your sense of humour is most likely very different from mine. I’m sure you’ll have a lot more fun watching these lectures than you’ll have reading articles like this one.
I will emphasize that these lectures are meant for the general public. Knowledge of topics like vector algebra, modular arithmetic and complex numbers is not required, even though he implicitly covers such material in the lectures. He tries very hard to keep things as simple as possible while still dealing with the main ideas; if you’re the least bit curious, don’t miss out on these lectures due to some faulty assumption that the material is somehow beyond you. Either way you’ll probably have fun watching them, whether or not you understand everything he covers.
Oh right, the lectures:
(This is the one I talked about with the really bad sound in the beginning; as mentioned, the issue is resolved approximately 17 minutes in.)