It’s been a long time since I last posted one of these link roundups, so a great number of links of interest has accumulated in my bookmarks. I intended to include many of them here, which of course means I won’t be able to cover each link in anywhere near the amount of detail it deserves, but that can’t be helped.
“For those diagnosed with ASD in childhood, most will become adults with a significant degree of disability […] Seltzer et al […] concluded that, despite considerable heterogeneity in social outcomes, “few adults with autism live independently, marry, go to college, work in competitive jobs or develop a large network of friends”. However, the trend within individuals is for some functional improvement over time, as well as a decrease in autistic symptoms […]. Some authors suggest that a sub-group of 15–30% of adults with autism will show more positive outcomes […]. Howlin et al. (2004), and Cederlund et al. (2008) assigned global ratings of social functioning based on achieving independence, friendships/a steady relationship, and education and/or a job. These two papers described respectively 22% and 27% of groups of higher functioning (IQ above 70) ASD adults as attaining “Very Good” or “Good” outcomes.”
“[W]e evaluated the adult outcomes for 45 individuals diagnosed with ASD prior to age 18, and compared this with the functioning of 35 patients whose ASD was identified after 18 years. Concurrent mental illnesses were noted for both groups. […] Comparison of adult outcome within the group of subjects diagnosed with ASD prior to 18 years of age showed significantly poorer functioning for those with co-morbid Intellectual Disability, except in the domain of establishing intimate relationships [my emphasis. To make this point completely clear, one way to look at these results is that apparently in the domain of partner-search autistics diagnosed during childhood are doing so badly in general that being intellectually disabled on top of being autistic is apparently conferring no additional disadvantage]. Even in the normal IQ group, the mean total score, i.e. the sum of the 5 domains, was relatively low at 12.1 out of a possible 25. […] Those diagnosed as adults had achieved significantly more in the domains of education and independence […] Some authors have described a subgroup of 15–27% of adult ASD patients who attained more positive outcomes […]. Defining an arbitrary adaptive score of 20/25 as “Good” for our normal IQ patients, 8 of thirty four (25%) of those diagnosed as adults achieved this level. Only 5 of the thirty three (15%) diagnosed in childhood made the cutoff. (The cut off was consistent with a well, but not superlatively, functioning member of society […]). None of the Intellectually Disabled ASD subjects scored above 10. […] All three groups had a high rate of co-morbid psychiatric illnesses. Depression was particularly frequent in those diagnosed as adults, consistent with other reports […]. Anxiety disorders were also prevalent in the higher functioning participants, 25–27%. 
[…] Most of the higher functioning ASD individuals, whether diagnosed before or after 18 years of age, were functioning well below the potential implied by their normal range intellect.”
ii. Premature mortality in autism spectrum disorder. This is a Swedish matched case cohort study. Some observations from the paper:
“The aim of the current study was to analyse all-cause and cause-specific mortality in ASD using nationwide Swedish population-based registers. A further aim was to address the role of intellectual disability and gender as possible moderators of mortality and causes of death in ASD. […] Odds ratios (ORs) were calculated for a population-based cohort of ASD probands (n = 27 122, diagnosed between 1987 and 2009) compared with gender-, age- and county of residence-matched controls (n = 2 672 185). […] During the observed period, 24 358 (0.91%) individuals in the general population died, whereas the corresponding figure for individuals with ASD was 706 (2.60%; OR = 2.56; 95% CI 2.38–2.76). Cause-specific analyses showed elevated mortality in ASD for almost all analysed diagnostic categories. Mortality and patterns for cause-specific mortality were partly moderated by gender and general intellectual ability. […] Premature mortality was markedly increased in ASD owing to a multitude of medical conditions. […] Mortality was significantly elevated in both genders relative to the general population (males: OR = 2.87; females OR = 2.24)”.
“Individuals in the control group died at a mean age of 70.20 years (s.d. = 24.16, median = 80), whereas the corresponding figure for the entire ASD group was 53.87 years (s.d. = 24.78, median = 55), for low-functioning ASD 39.50 years (s.d. = 21.55, median = 40) and high-functioning ASD 58.39 years (s.d. = 24.01, median = 63) respectively. […] Significantly elevated mortality was noted among individuals with ASD in all analysed categories of specific causes of death except for infections […] ORs were highest in cases of mortality because of diseases of the nervous system (OR = 7.49) and because of suicide (OR = 7.55), in comparison with matched general population controls.”
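As a quick sanity check on the headline figure, one can recompute a crude odds ratio from the reported counts – keeping in mind that the paper’s OR of 2.56 comes from a matched analysis, so a crude unmatched estimate from the 2×2 table will differ somewhat. A minimal sketch:

```python
from math import exp, log, sqrt

# Reported counts from the Swedish cohort (crude, unmatched 2x2 table)
asd_dead, asd_total = 706, 27_122
ctrl_dead, ctrl_total = 24_358, 2_672_185

asd_alive = asd_total - asd_dead
ctrl_alive = ctrl_total - ctrl_dead

# Crude odds ratio with a Woolf (log-normal) 95% confidence interval.
# The matched/adjusted OR reported in the paper (2.56) differs from
# this crude figure, as expected.
or_crude = (asd_dead / asd_alive) / (ctrl_dead / ctrl_alive)
se_log_or = sqrt(1/asd_dead + 1/asd_alive + 1/ctrl_dead + 1/ctrl_alive)
ci_low = exp(log(or_crude) - 1.96 * se_log_or)
ci_high = exp(log(or_crude) + 1.96 * se_log_or)

print(f"crude OR = {or_crude:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```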
iii. Adhesive capsulitis of shoulder. This one is related to a health scare I had a few months ago. A few quotes:
“Adhesive capsulitis (also known as frozen shoulder) is a painful and disabling disorder of unclear cause in which the shoulder capsule, the connective tissue surrounding the glenohumeral joint of the shoulder, becomes inflamed and stiff, greatly restricting motion and causing chronic pain. Pain is usually constant, worse at night, and with cold weather. Certain movements or bumps can provoke episodes of tremendous pain and cramping. […] People who suffer from adhesive capsulitis usually experience severe pain and sleep deprivation for prolonged periods due to pain that gets worse when lying still and restricted movement/positions. The condition can lead to depression, problems in the neck and back, and severe weight loss due to long-term lack of deep sleep. People who suffer from adhesive capsulitis may have extreme difficulty concentrating, working, or performing daily life activities for extended periods of time.”
Some other related links below:
The prevalence of a diabetic condition and adhesive capsulitis of the shoulder.
“Adhesive capsulitis is characterized by a progressive and painful loss of shoulder motion of unknown etiology. Previous studies have found the prevalence of adhesive capsulitis to be slightly greater than 2% in the general population. However, the relationship between adhesive capsulitis and diabetes mellitus (DM) is well documented, with the incidence of adhesive capsulitis being two to four times higher in diabetics than in the general population. It affects about 20% of people with diabetes and has been described as the most disabling of the common musculoskeletal manifestations of diabetes.”
Adhesive Capsulitis (review article).
“Patients with type I diabetes have a 40% chance of developing a frozen shoulder in their lifetimes […] Dominant arm involvement has been shown to have a good prognosis; associated intrinsic pathology or insulin-dependent diabetes of more than 10 years are poor prognostic indicators.15 Three stages of adhesive capsulitis have been described, with each phase lasting for about 6 months. The first stage is the freezing stage in which there is an insidious onset of pain. At the end of this period, shoulder ROM [range of motion] becomes limited. The second stage is the frozen stage, in which there might be a reduction in pain; however, there is still restricted ROM. The third stage is the thawing stage, in which ROM improves, but can take between 12 and 42 months to do so. Most patients regain a full ROM; however, 10% to 15% of patients suffer from continued pain and limited ROM.”
Musculoskeletal Complications in Type 1 Diabetes.
“The development of periarticular thickening of skin on the hands and limited joint mobility (cheiroarthropathy) is associated with diabetes and can lead to significant disability. The objective of this study was to describe the prevalence of cheiroarthropathy in the well-characterized Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) cohort and examine associated risk factors […] This cross-sectional analysis was performed in 1,217 participants (95% of the active cohort) in EDIC years 18/19 after an average of 24 years of follow-up. Cheiroarthropathy — defined as the presence of any one of the following: adhesive capsulitis, carpal tunnel syndrome, flexor tenosynovitis, Dupuytren’s contracture, or a positive prayer sign [related link] — was assessed using a targeted medical history and standardized physical examination. […] Cheiroarthropathy was present in 66% of subjects […] Cheiroarthropathy is common in people with type 1 diabetes of long duration (∼30 years) and is related to longer duration and higher levels of glycemia. Clinicians should include cheiroarthropathy in their routine history and physical examination of patients with type 1 diabetes because it causes clinically significant functional disability.”
Musculoskeletal disorders in diabetes mellitus: an update.
“Diabetes mellitus (DM) is associated with several musculoskeletal disorders. […] The exact pathophysiology of most of these musculoskeletal disorders remains obscure. Connective tissue disorders, neuropathy, vasculopathy or combinations of these problems, may underlie the increased incidence of musculoskeletal disorders in DM. The development of musculoskeletal disorders is dependent on age and on the duration of DM; however, it has been difficult to show a direct correlation with the metabolic control of DM.”
Musculoskeletal Disorders of the Hand and Shoulder in Patients with Diabetes.
“In addition to micro- and macroangiopathic complications, diabetes mellitus is also associated with several musculoskeletal disorders of the hand and shoulder that can be debilitating (1,2). Limited joint mobility, also termed diabetic hand syndrome or cheiropathy (3), is characterized by skin thickening over the dorsum of the hands and restricted mobility of multiple joints. While this syndrome is painless and usually not disabling (2,4), other musculoskeletal problems occur with increased frequency in diabetic patients, including Dupuytren’s disease [“Dupuytren’s disease […] may be observed in up to 42% of adults with diabetes mellitus, typically in patients with long-standing T1D” – link], carpal tunnel syndrome [“The prevalence of [carpal tunnel syndrome, CTS] in patients with diabetes has been estimated at 11–30 % […], and is dependent on the duration of diabetes. […] Type I DM patients have a high prevalence of CTS with increasing duration of disease, up to 85 % after 54 years of DM” – link], palmar flexor tenosynovitis or trigger finger [“The incidence of trigger finger [/stenosing tenosynovitis] is 7–20 % of patients with diabetes comparing to only about 1–2 % in nondiabetic patients” – link], and adhesive capsulitis of the shoulder (5–10). The association of adhesive capsulitis with pain, swelling, dystrophic skin, and vasomotor instability of the hand constitutes the “shoulder-hand syndrome,” a rare but potentially disabling manifestation of diabetes (1,2).”
“The prevalence of musculoskeletal disorders was greater in diabetic patients than in control patients (36% vs. 9%, P < 0.01). Adhesive capsulitis was present in 12% of the diabetic patients and none of the control patients (P < 0.01), Dupuytren’s disease in 16% of diabetic and 3% of control patients (P < 0.01), and flexor tenosynovitis in 12% of diabetic and 2% of control patients (P < 0.04), while carpal tunnel syndrome occurred in 12% of diabetic patients and 8% of control patients (P = 0.29). Musculoskeletal disorders were more common in patients with type 1 diabetes than in those with type 2 diabetes […]. Forty-three patients [out of 100] with type 1 diabetes had either hand or shoulder disorders (37 with hand disorders, 6 with adhesive capsulitis of the shoulder, and 10 with both syndromes), compared with 28 patients [again out of 100] with type 2 diabetes (24 with hand disorders, 4 with adhesive capsulitis of the shoulder, and 3 with both syndromes, P = 0.03).”
Association of Diabetes Mellitus With the Risk of Developing Adhesive Capsulitis of the Shoulder: A Longitudinal Population-Based Followup Study.
“A total of 78,827 subjects with at least 2 ambulatory care visits with a principal diagnosis of DM in 2001 were recruited for the DM group. The non-DM group comprised 236,481 age- and sex-matched randomly sampled subjects without DM. […] During a 3-year followup period, 946 subjects (1.20%) in the DM group and 2,254 subjects (0.95%) in the non-DM group developed ACS. The crude HR of developing ACS for the DM group compared to the non-DM group was 1.333 […] the association between DM and ACS may be explained at least in part by a DM-related chronic inflammatory process with increased growth factor expression, which in turn leads to joint synovitis and subsequent capsular fibrosis.”
It is important to note when interpreting the results of the above paper that these results are based on Taiwanese population-level data, and type 1 diabetes – which is obviously the high-risk diabetes subgroup in this particular context – is rare in East Asian populations (as observed in Sperling et al., “A child in Helsinki, Finland is almost 400 times more likely to develop diabetes than a child in Sichuan, China”. Taiwanese incidence of type 1 DM in children is estimated at ~5 in 100.000).
iv. Parents who let diabetic son starve to death found guilty of first-degree murder. It’s been a while since I last saw one of these ‘boost-your-faith-in-humanity’ cases, but in my impression they do pop up every now and then. I should probably keep one of these articles at hand in case my parents ever express worry to me that they weren’t good parents; they could have done a lot worse…
v. Freedom of medicine. One quote from the conclusion of Cochran’s post:
“[I]t is surely possible to materially improve the efficacy of drug development, of medical research as a whole. We’re doing better than we did 500 years ago – although probably worse than we did 50 years ago. But I would approach it by learning as much as possible about medical history, demographics, epidemiology, evolutionary medicine, theory of senescence, genetics, etc. Read Koch, not Hayek. There is no royal road to medical progress.”
I agree, and I was considering including some related comments and observations about health economics in this post – however, I ultimately decided against it, in part because the post was growing unwieldy; I might include those observations in a later post. Here’s another somewhat older Westhunt post I bookmarked at some point – I particularly like the following neat quote from the comments, which expresses a view I have of course expressed myself in the past here on this blog:
“When you think about it, falsehoods, stupid crap, make the best group identifiers, because anyone might agree with you when you’re obviously right. Signing up to clear nonsense is a better test of group loyalty. A true friend is with you when you’re wrong. Ideally, not just wrong, but barking mad, rolling around in your own vomit wrong.”
“Approximately 59% of all health care expenditures attributed to diabetes are for health resources used by the population aged 65 years and older, much of which is borne by the Medicare program […]. The population 45–64 years of age incurs 33% of diabetes-attributed costs, with the remaining 8% incurred by the population under 45 years of age. The annual attributed health care cost per person with diabetes […] increases with age, primarily as a result of increased use of hospital inpatient and nursing facility resources, physician office visits, and prescription medications. Dividing the total attributed health care expenditures by the number of people with diabetes, we estimate the average annual excess expenditures for the population aged under 45 years, 45–64 years, and 65 years and above, respectively, at $4,394, $5,611, and $11,825.”
“Our logistic regression analysis with NHIS data suggests that diabetes is associated with a 2.4 percentage point increase in the likelihood of leaving the workforce for disability. This equates to approximately 541,000 working-age adults leaving the workforce prematurely and 130 million lost workdays in 2012. For the population that leaves the workforce early because of diabetes-associated disability, we estimate that their average daily earnings would have been $166 per person (with the amount varying by demographic). Presenteeism accounted for 30% of the indirect cost of diabetes. The estimate of a 6.6% annual decline in productivity attributed to diabetes (in excess of the estimated decline in the absence of diabetes) equates to 113 million lost workdays per year.”
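The lost-workdays figure is easy to roughly reconcile with the headcount estimate; assuming ~240 working days per year (my assumption, not stated in the quote):

```python
# Rough consistency check on the quoted figures. The 240 workdays/year
# figure is an assumed benchmark, not taken from the quote itself.
workers_leaving = 541_000      # working-age adults leaving the workforce
workdays_per_year = 240        # assumed
daily_earnings = 166           # quoted average daily earnings, USD

lost_workdays = workers_leaving * workdays_per_year
lost_earnings = lost_workdays * daily_earnings

print(f"{lost_workdays / 1e6:.0f} million lost workdays, "
      f"≈ ${lost_earnings / 1e9:.1f} billion in forgone earnings")
```

Which matches the quoted “130 million lost workdays” almost exactly, and puts a rough dollar figure on the forgone earnings.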
viii. Effect of longer term modest salt reduction on blood pressure: Cochrane systematic review and meta-analysis of randomised trials. Did I blog this paper at some point in the past? I could not find any coverage of it on the blog when I searched for it so I decided to include it here, even if I have a nagging suspicion I may have talked about these findings before. What did they find? The short version is this:
“A modest reduction in salt intake for four or more weeks causes significant and, from a population viewpoint, important falls in blood pressure in both hypertensive and normotensive individuals, irrespective of sex and ethnic group. Salt reduction is associated with a small physiological increase in plasma renin activity, aldosterone, and noradrenaline and no significant change in lipid concentrations. These results support a reduction in population salt intake, which will lower population blood pressure and thereby reduce cardiovascular disease.”
ix. Some wikipedia links:
Heroic Age of Antarctic Exploration (featured).
Kuiper belt (featured).
Treason (one quote worth including here: “Currently, the consensus among major Islamic schools is that apostasy (leaving Islam) is considered treason and that the penalty is death; this is supported not in the Quran but in the Hadith.“).
Savant syndrome (“It is estimated that 10% of those with autism have some form of savant abilities”). A small sidenote of interest to Danish readers: The Danish Broadcasting Corporation recently featured a series about autistics with ‘special abilities’ called ‘The hidden talents’ (De skjulte talenter), and after multiple people had nagged me to watch it I ended up doing so. Most of the people in that show presumably had some degree of ‘savantism’ combined with autism at the milder end of the spectrum, i.e. Asperger’s. I was somewhat conflicted about what to think of the show and did consider blogging it in detail (in Danish?), but decided against it. However, I do want to remind Danish readers who’ve seen the show that a) the great majority of autistics do not have abilities like these, b) many autistics with abilities like these presumably do quite poorly, and c) many autistics have even greater social impairments than do people like e.g. (the very likeable, I have to add…) Louise Wille from the show.
Black Death (“Over 60% of Norway’s population died in 1348–1350”).
Renault FT (“among the most revolutionary and influential tank designs in history”).
Weierstrass function (“an example of a pathological real-valued function on the real line. The function has the property of being continuous everywhere but differentiable nowhere”).
Void coefficient. (“a number that can be used to estimate how much the reactivity of a nuclear reactor changes as voids (typically steam bubbles) form in the reactor moderator or coolant. […] Reactivity is directly related to the tendency of the reactor core to change power level: if reactivity is positive, the core power tends to increase; if it is negative, the core power tends to decrease; if it is zero, the core power tends to remain stable. […] A positive void coefficient means that the reactivity increases as the void content inside the reactor increases due to increased boiling or loss of coolant; for example, if the coolant acts as a neutron absorber. If the void coefficient is large enough and control systems do not respond quickly enough, this can form a positive feedback loop which can quickly boil all the coolant in the reactor. This happened in the RBMK reactor that was destroyed in the Chernobyl disaster.”).
Gregor MacGregor (featured) (“a Scottish soldier, adventurer, and confidence trickster […] MacGregor’s Poyais scheme has been called one of the most brazen confidence tricks in history.”).
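As for the Weierstrass function linked above, it has a simple explicit construction: W(x) = Σ aⁿcos(bⁿπx), where 0 < a < 1, b is a positive odd integer, and ab > 1 + 3π/2 (Weierstrass’s original conditions). A quick sketch of the partial sums – any finite truncation is of course smooth, so this only illustrates the construction, not the nowhere-differentiability:

```python
from math import cos, pi

def weierstrass_partial(x, a=0.5, b=13, n_terms=30):
    """Partial sum of W(x) = sum_{n>=0} a^n * cos(b^n * pi * x).

    Weierstrass's conditions: 0 < a < 1, b a positive odd integer,
    and a*b > 1 + 3*pi/2 (here a*b = 6.5 > ~5.71). The full series
    is continuous everywhere but differentiable nowhere.

    Note: for x != 0 and large n, the argument b**n * pi * x exceeds
    float precision, so only modest n_terms give meaningful values.
    """
    return sum(a**n * cos(b**n * pi * x) for n in range(n_terms))

# At x = 0 every cosine term is 1, so the partial sum is the geometric
# series sum of a^n = (1 - a^N) / (1 - a), which tends to 2 as N grows.
print(weierstrass_partial(0.0))
```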
I find it difficult to find the motivation to finish the half-finished drafts I have lying around, so this will have to do. Some random stuff below.
(15.000 views… In some sense that seems really ‘unfair’ to me, but on the other hand I doubt neither Beethoven nor Gilels care; they’re both long dead, after all…)
ii. New/newish words I’ve encountered in books, on vocabulary.com or elsewhere:
Agley, peripeteia, dissever, halidom, replevin, socage, organdie, pouffe, dyarchy, tauricide, temerarious, acharnement, cadger, gravamen, aspersion, marronage, adumbrate, succotash, deuteragonist, declivity, marquetry, machicolation, recusal.
iii. A lecture:
It’s been a long time since I watched it so I don’t have anything intelligent to say about it now, but I figured it might be of interest to one or two of the people who still subscribe to the blog despite the infrequent updates.
iv. A few wikipedia articles (I won’t comment much on the contents or quote extensively from the articles the way I’ve done in previous wikipedia posts – the links shall have to suffice for now):
Russian political jokes. Some of those made me laugh, e.g. this one: “A judge walks out of his chambers laughing his head off. A colleague approaches him and asks why he is laughing. ‘I just heard the funniest joke in the world!’ ‘Well, go ahead, tell me!’ says the other judge. ‘I can’t – I just gave someone ten years for it!’”
v. World War 2, if you think of it as a movie, has a highly unrealistic and implausible plot, according to this amusing post by Scott Alexander. Having recently read a rather long book about these topics, one aspect I’d have added had I written the piece myself is a further factor making the setting seem even more implausible: how so many presumably quite smart people were so – what at least in retrospect seems – unbelievably stupid about Hitler’s ideas and intentions before the war. Speaking of Churchill, a movie about his life during the war – which could probably be based fairly easily on his own copious and widely shared notes – could be made into a quite decent film. His own comments, remarks, and observations certainly made for a great book.
i. Some new words I’ve encountered (not all of them are from vocabulary.com, but many of them are):
Uxoricide, persnickety, logy, philoprogenitive, impassive, hagiography, gunwale, flounce, vivify, pelage, irredentism, pertinacity, callipygous, valetudinarian, recrudesce, adjuration, epistolary, dandle, picaresque, humdinger, newel, lightsome, lunette, inflect, misoneism, cormorant, immanence, parvenu, sconce, acquisitiveness, lingual, macaronic, divot, mettlesome, logomachy, raffish, marginalia, omnifarious, tatter, licit.
ii. A lecture:
I got annoyed a few times by the fact that you can’t tell where he’s pointing when he’s talking about the slides, which makes the lecture harder to follow than it ought to be, but it’s still an interesting lecture.
iii. Facts about Dihydrogen Monoxide. Includes coverage of important neglected topics such as ‘What is the link between Dihydrogen Monoxide and school violence?’ After reading the article, I am frankly outraged that this stuff’s still legal!
iv. Some wikipedia links of interest:
“Steganography […] is the practice of concealing a file, message, image, or video within another file, message, image, or video. The word steganography combines the Greek words steganos (στεγανός), meaning “covered, concealed, or protected”, and graphein (γράφειν) meaning “writing”. […] Generally, the hidden messages appear to be (or be part of) something else: images, articles, shopping lists, or some other cover text. For example, the hidden message may be in invisible ink between the visible lines of a private letter. Some implementations of steganography that lack a shared secret are forms of security through obscurity, whereas key-dependent steganographic schemes adhere to Kerckhoffs’s principle.
The advantage of steganography over cryptography alone is that the intended secret message does not attract attention to itself as an object of scrutiny. Plainly visible encrypted messages—no matter how unbreakable—arouse interest, and may in themselves be incriminating in countries where encryption is illegal. Thus, whereas cryptography is the practice of protecting the contents of a message alone, steganography is concerned with concealing the fact that a secret message is being sent, as well as concealing the contents of the message.”
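The basic idea is easy to illustrate with a toy least-significant-bit scheme, hiding each bit of a secret message in the lowest bit of successive cover bytes, where it barely perturbs the cover data. A minimal sketch, not a robust implementation – a real scheme would use e.g. image pixel data and a key, per Kerckhoffs’s principle:

```python
def hide(cover: bytes, secret: bytes) -> bytes:
    """Embed each bit of `secret` (most significant bit first) in the
    least significant bit of successive cover bytes."""
    bits = [(byte >> i) & 1 for byte in secret for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for secret")
    stego = bytearray(cover)
    for pos, bit in enumerate(bits):
        stego[pos] = (stego[pos] & 0xFE) | bit  # overwrite only the LSB
    return bytes(stego)

def reveal(stego: bytes, n_bytes: int) -> bytes:
    """Recover `n_bytes` of hidden message from the LSBs."""
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for bit in stego[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (bit & 1)
        out.append(byte)
    return bytes(out)

# The 'cover' would normally be image pixel data; plain bytes suffice here.
cover = bytes(range(256))
stego = hide(cover, b"hi")
assert reveal(stego, 2) == b"hi"
```

Note that the stego bytes differ from the cover bytes by at most the lowest bit, which is what makes the hidden message inconspicuous.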
H. H. Holmes. A really nice guy.
“Herman Webster Mudgett (May 16, 1861 – May 7, 1896), better known under the name of Dr. Henry Howard Holmes or more commonly just H. H. Holmes, was one of the first documented serial killers in the modern sense of the term. In Chicago, at the time of the 1893 World’s Columbian Exposition, Holmes opened a hotel which he had designed and built for himself specifically with murder in mind, and which was the location of many of his murders. While he confessed to 27 murders, of which nine were confirmed, his actual body count could be up to 200. He brought an unknown number of his victims to his World’s Fair Hotel, located about 3 miles (4.8 km) west of the fair, which was held in Jackson Park. Besides being a serial killer, H. H. Holmes was also a successful con artist and a bigamist. […]
Holmes purchased an empty lot across from the drugstore where he built his three-story, block-long hotel building. Because of its enormous structure, local people dubbed it “The Castle”. The building was 162 feet long and 50 feet wide. […] The ground floor of the Castle contained Holmes’ own relocated drugstore and various shops, while the upper two floors contained his personal office and a labyrinth of rooms with doorways opening to brick walls, oddly-angled hallways, stairways leading to nowhere, doors that could only be opened from the outside and a host of other strange and deceptive constructions. Holmes was constantly firing and hiring different workers during the construction of the Castle, claiming that “they were doing incompetent work.” His actual reason was to ensure that he was the only one who fully understood the design of the building.”
“The Minnesota Starvation Experiment […] was a clinical study performed at the University of Minnesota between November 19, 1944 and December 20, 1945. The investigation was designed to determine the physiological and psychological effects of severe and prolonged dietary restriction and the effectiveness of dietary rehabilitation strategies.
The motivation of the study was twofold: First, to produce a definitive treatise on the subject of human starvation based on a laboratory simulation of severe famine and, second, to use the scientific results produced to guide the Allied relief assistance to famine victims in Europe and Asia at the end of World War II. It was recognized early in 1944 that millions of people were in grave danger of mass famine as a result of the conflict, and information was needed regarding the effects of semi-starvation—and the impact of various rehabilitation strategies—if postwar relief efforts were to be effective.”
“[M]ost of the subjects experienced periods of severe emotional distress and depression. There were extreme reactions to the psychological effects during the experiment including self-mutilation (one subject amputated three fingers of his hand with an axe, though the subject was unsure if he had done so intentionally or accidentally). Participants exhibited a preoccupation with food, both during the starvation period and the rehabilitation phase. Sexual interest was drastically reduced, and the volunteers showed signs of social withdrawal and isolation. […] One of the crucial observations of the Minnesota Starvation Experiment […] is that the physical effects of the induced semi-starvation during the study closely approximate the conditions experienced by people with a range of eating disorders such as anorexia nervosa and bulimia nervosa.”
Post-vasectomy pain syndrome. The risk that a vasectomy may fail (spontaneous recanalization) is probably one people know about, but this syndrome also seems worth being aware of if one is considering having a vasectomy.
Transport in the Soviet Union (‘good article’). A few observations from the article:
“By the mid-1970s, only eight percent of the Soviet population owned a car. […] From 1924 to 1971 the USSR produced 1 million vehicles […] By 1975 only 8 percent of rural households owned a car. […] Growth of motor vehicles had increased by 224 percent in the 1980s, while hardcore surfaced roads only increased by 64 percent. […] By the 1980s Soviet railways had become the most intensively used in the world. Most Soviet citizens did not own private transport, and if they did, it was difficult to drive long distances due to the poor conditions of many roads. […] Road transport played a minor role in the Soviet economy, compared to domestic rail transport or First World road transport. According to historian Martin Crouch, road traffic of goods and passengers combined was only 14 percent of the volume of rail transport. It was only late in its existence that the Soviet authorities put emphasis on road construction and maintenance […] Road transport as a whole lagged far behind that of rail transport; the average distance moved by motor transport in 1982 was 16.4 kilometres (10.2 mi), while the average for railway transport was 930 km per ton and 435 km per ton for water freight. In 1982 there was a threefold increase in investment since 1960 in motor freight transport, and more than a thirtyfold increase since 1940.”
i. Two lectures from the Institute for Advanced Studies:
The IAS has recently uploaded a large number of lectures on youtube, and the ones I blog here are a few of those where you can actually tell from the title what the lecture is about; I find it outright weird that these people don’t include the topic covered in the lecture in their lecture titles.
As for the video above, as usual for the IAS videos it’s annoying that you can’t hear the questions asked by the audience, but the sound quality is at least quite a bit better than in the video below. That one has a couple of really annoying sequences, in particular around the 15–16 minute mark (it gets better), where the image is also causing problems; and in the last couple of minutes of the Q&A the lecturer leaves the area covered by the camera in order to write something on the blackboard, so you can see neither the lecturer nor what he’s writing. I found most of the above lecture easier to follow than the one posted below, though in either case you’ll probably not understand all of it unless you’re an astrophysicist – you definitely won’t in the case of the latter lecture. I found it helpful to look up a few topics along the way, e.g. the wiki articles about the virial theorem (also dealing with virial mass/radius), active galactic nucleus (this is the ‘AGN’ she refers to repeatedly), and the Tully–Fisher relation.
Given how many questions are asked along the way, it’s really annoying that in most cases you can’t hear what people are asking about – this is definitely an area where there’s room for improvement in the IAS videos. The lecture was not easy to follow, but I figured along the way that I understood enough of it to make it worth watching to the end (though I’d say you won’t miss much if you stop once the lecture proper ends – around the 1.05 hours mark – and skip the subsequent Q&A). I’ve relatively recently read about related topics, e.g. pulsar formation and wave and fluid dynamics, and if I had not I probably would not have watched this lecture to the end.
ii. A vocabulary.com update. I’m slowly working my way up to the ‘Running Dictionary’ rank (I’m only a walking dictionary at this point); here’s some stuff from my progress page:
I recently learned from a note added to a list that I’ve actually learned a very large proportion of all words available on vocabulary.com, which probably also means that I may have been too harsh on the word selection algorithm in past posts here on the blog; if there aren’t (/m)any new words left to learn, it should not be surprising that the algorithm presents me with words I’ve already mastered, and it’s not the algorithm’s fault that there aren’t more words available for me to learn (well, it is to the extent that you’re of the opinion that questions should be automatically created by the algorithm as well, but I don’t think we’re quite there yet). The aforementioned note was added in June, and here’s the important part: “there are words on your list that Vocabulary.com can’t teach yet. Vocabulary.com can teach over 12,000 words, but sadly, these aren’t among them”. ‘Over 12,000’ – and I’ve mastered 11,300. When the proportion of mastered words is this high, not only will the default random word algorithm mostly present you with questions about words you’ve already mastered; it also starts to get hard to find lists with many words you’ve not already mastered – I’ll often load a list with one hundred words and then realize that I’ve mastered every word on it. This is annoying if you want to be continually presented with both new words and old ones. Unless vocabulary.com increases the rate at which they add new words I’ll run out of new words to learn, and if that happens I’m sure it’ll be much more difficult for me to find the motivation to use the site.
With all that stuff out of the way, if you’re not a regular user of the site I should note – again – that it’s an excellent resource if you desire to increase your vocabulary. Below is a list of words I’ve encountered on the site in recent weeks(/months?):
Copacetic, frumpy, elision, termagant, harridan, quondam, funambulist, phantasmagoria, eyelet, cachinnate, wilt, quidnunc, flocculent, galoot, frangible, prevaricate, clarion, trivet, noisome, revenant, myrmidon (I have included this word once before in a post of this type, but it is in my opinion a very nice word with which more people should be familiar…), debenture, teeter, tart, satiny, romp, auricular, terpsichorean, poultice, ululation, fusty, tangy, honorarium, eyas, bumptious, muckraker, bayou, hobble, omphaloskepsis, extemporize, virago, rarefaction, flibbertigibbet, finagle, emollient.
iii. I don’t think I’d do things exactly the way she’s suggesting here, but the general idea/approach seems appealing enough to me that it’s worth at least keeping in mind if I ever decide to start dating/looking for a partner.
iv. Some wikipedia links:
Tarrare (featured). A man with odd eating habits and an interesting employment history (“Dr. Courville was keen to continue his investigations into Tarrare’s eating habits and digestive system, and approached General Alexandre de Beauharnais with a suggestion that Tarrare’s unusual abilities and behaviour could be put to military use. A document was placed inside a wooden box which was in turn fed to Tarrare. Two days later, the box was retrieved from his excrement, with the document still in legible condition. Courville proposed to de Beauharnais that Tarrare could thus serve as a military courier, carrying documents securely through enemy territory with no risk of their being found if he were searched.” Yeah…).
1740 Batavia massacre (featured).
v. I am also fun.
i. Motte-and-bailey castle (‘good article’).
“A motte-and-bailey castle is a fortification with a wooden or stone keep situated on a raised earthwork called a motte, accompanied by an enclosed courtyard, or bailey, surrounded by a protective ditch and palisade. Relatively easy to build with unskilled, often forced labour, but still militarily formidable, these castles were built across northern Europe from the 10th century onwards, spreading from Normandy and Anjou in France, into the Holy Roman Empire in the 11th century. The Normans introduced the design into England and Wales following their invasion in 1066. Motte-and-bailey castles were adopted in Scotland, Ireland, the Low Countries and Denmark in the 12th and 13th centuries. By the end of the 13th century, the design was largely superseded by alternative forms of fortification, but the earthworks remain a prominent feature in many countries. […]
Various methods were used to build mottes. Where a natural hill could be used, scarping could produce a motte without the need to create an artificial mound, but more commonly much of the motte would have to be constructed by hand. Four methods existed for building a mound and a tower: the mound could either be built first, and a tower placed on top of it; the tower could alternatively be built on the original ground surface and then buried within the mound; the tower could potentially be built on the original ground surface and then partially buried within the mound, the buried part forming a cellar beneath; or the tower could be built first, and the mound added later.
Regardless of the sequencing, artificial mottes had to be built by piling up earth; this work was undertaken by hand, using wooden shovels and hand-barrows, possibly with picks as well in the later periods. Larger mottes took disproportionately more effort to build than their smaller equivalents, because of the volumes of earth involved. The largest mottes in England, such as Thetford, are estimated to have required up to 24,000 man-days of work; smaller ones required perhaps as little as 1,000. […] Taking into account estimates of the likely available manpower during the period, historians estimate that the larger mottes might have taken between four and nine months to build. This contrasted favourably with stone keeps of the period, which typically took up to ten years to build. Very little skilled labour was required to build motte and bailey castles, which made them very attractive propositions if forced peasant labour was available, as was the case after the Norman invasion of England. […]
The type of soil would make a difference to the design of the motte, as clay soils could support a steeper motte, whilst sandier soils meant that a motte would need a more gentle incline. Where available, layers of different sorts of earth, such as clay, gravel and chalk, would be used alternately to build in strength to the design. Layers of turf could also be added to stabilise the motte as it was built up, or a core of stones placed as the heart of the structure to provide strength. Similar issues applied to the defensive ditches, where designers found that the wider the ditch was dug, the deeper and steeper the sides of the scarp could be, making it more defensive. […]
Although motte-and-bailey castles are the best known castle design, they were not always the most numerous in any given area. A popular alternative was the ringwork castle, involving a palisade being built on top of a raised earth rampart, protected by a ditch. The choice of motte and bailey or ringwork was partially driven by terrain, as mottes were typically built on low ground, and on deeper clay and alluvial soils. Another factor may have been speed, as ringworks were faster to build than mottes. Some ringwork castles were later converted into motte-and-bailey designs, by filling in the centre of the ringwork to produce a flat-topped motte. […]
In England, William invaded from Normandy in 1066, resulting in three phases of castle building in England, around 80% of which were in the motte-and-bailey pattern. […] around 741 motte-and-bailey castles [were built] in England and Wales alone. […] Many motte-and-bailey castles were occupied relatively briefly and in England many were being abandoned by the 12th century, and others neglected and allowed to lapse into disrepair. In the Low Countries and Germany, a similar transition occurred in the 13th and 14th centuries. […] One factor was the introduction of stone into castle building. The earliest stone castles had emerged in the 10th century […] Although wood was a more powerful defensive material than was once thought, stone became increasingly popular for military and symbolic reasons.”
ii. Battle of Midway (featured). Lots of good stuff in there. One aspect I had not been aware of beforehand was that Allied codebreakers played a key role here as well (I was quite familiar with the work of Turing and others at Bletchley Park):
“Admiral Nimitz had one priceless advantage: cryptanalysts had partially broken the Japanese Navy’s JN-25b code. Since the early spring of 1942, the US had been decoding messages stating that there would soon be an operation at objective “AF”. It was not known where “AF” was, but Commander Joseph J. Rochefort and his team at Station HYPO were able to confirm that it was Midway; Captain Wilfred Holmes devised a ruse of telling the base at Midway (by secure undersea cable) to broadcast an uncoded radio message stating that Midway’s water purification system had broken down. Within 24 hours, the code breakers picked up a Japanese message that “AF was short on water.” HYPO was also able to determine the date of the attack as either 4 or 5 June, and to provide Nimitz with a complete IJN order of battle. Japan had a new codebook, but its introduction had been delayed, enabling HYPO to read messages for several crucial days; the new code, which had not yet been cracked, came into use shortly before the attack began, but the important breaks had already been made.[nb 8]
As a result, the Americans entered the battle with a very good picture of where, when, and in what strength the Japanese would appear. Nimitz knew that the Japanese had negated their numerical advantage by dividing their ships into four separate task groups, all too widely separated to be able to support each other.[nb 9] […] The Japanese, by contrast, remained almost totally unaware of their opponent’s true strength and dispositions even after the battle began. […] Four Japanese aircraft carriers — Akagi, Kaga, Soryu and Hiryu, all part of the six-carrier force that had attacked Pearl Harbor six months earlier — and a heavy cruiser were sunk at a cost of the carrier Yorktown and a destroyer. After Midway and the exhausting attrition of the Solomon Islands campaign, Japan’s capacity to replace its losses in materiel (particularly aircraft carriers) and men (especially well-trained pilots) rapidly became insufficient to cope with mounting casualties, while the United States’ massive industrial capabilities made American losses far easier to bear. […] The Battle of Midway has often been called “the turning point of the Pacific”. However, the Japanese continued to try to secure more strategic territory in the South Pacific, and the U.S. did not move from a state of naval parity to one of increasing supremacy until after several more months of hard combat. Thus, although Midway was the Allies’ first major victory against the Japanese, it did not radically change the course of the war. Rather, it was the cumulative effects of the battles of Coral Sea and Midway that reduced Japan’s ability to undertake major offensives.”
One thing which really strikes you (well, struck me) when reading this stuff is how incredibly capital-intensive the war at sea really was; this was one of the most important sea battles of the Second World War, yet the total Japanese death toll at Midway was just 3,057. To put that number into perspective, it is significantly smaller than the average number of people killed each day in Stalingrad (according to one estimate, the Soviets alone suffered 478,741 killed or missing during those roughly 5 months (~150 days), which comes out to roughly 3,000 per day).
iii. History of timekeeping devices (featured). ‘Exactly what it says on the tin’, as they’d say on TV Tropes.
It took a long time to get from where we were to where we are today; the horologists of the past faced a lot of problems you’ve most likely never even thought about. What do you do, for example, if your ingenious water clock has trouble keeping time because variation in water temperature causes issues? Well, you use mercury instead of water, of course! (“Since Yi Xing’s clock was a water clock, it was affected by temperature variations. That problem was solved in 976 by Zhang Sixun by replacing the water with mercury, which remains liquid down to −39 °C (−38 °F).”).
iv. Microbial metabolism.
“Microbial metabolism is the means by which a microbe obtains the energy and nutrients (e.g. carbon) it needs to live and reproduce. Microbes use many different types of metabolic strategies and species can often be differentiated from each other based on metabolic characteristics. The specific metabolic properties of a microbe are the major factors in determining that microbe’s ecological niche, and often allow for that microbe to be useful in industrial processes or responsible for biogeochemical cycles. […]
All microbial metabolisms can be arranged according to three principles:
1. How the organism obtains carbon for synthesising cell mass:
- autotrophic – carbon is obtained from carbon dioxide (CO2)
- heterotrophic – carbon is obtained from organic compounds
- mixotrophic – carbon is obtained from both organic compounds and by fixing carbon dioxide
2. How the organism obtains reducing equivalents used either in energy conservation or in biosynthetic reactions:
- lithotrophic – reducing equivalents are obtained from inorganic compounds
- organotrophic – reducing equivalents are obtained from organic compounds
3. How the organism obtains energy for living and growing:
- chemotrophic – energy is obtained from external chemical compounds
- phototrophic – energy is obtained from light
In practice, these terms are almost freely combined. […] Most microbes are heterotrophic (more precisely chemoorganoheterotrophic), using organic compounds as both carbon and energy sources. […] Heterotrophic microbes are extremely abundant in nature and are responsible for the breakdown of large organic polymers such as cellulose, chitin or lignin which are generally indigestible to larger animals. Generally, the breakdown of large polymers to carbon dioxide (mineralization) requires several different organisms, with one breaking down the polymer into its constituent monomers, one able to use the monomers and excreting simpler waste compounds as by-products, and one able to use the excreted wastes. There are many variations on this theme, as different organisms are able to degrade different polymers and secrete different waste products. […]
Biochemically, prokaryotic heterotrophic metabolism is much more versatile than that of eukaryotic organisms, although many prokaryotes share the most basic metabolic models with eukaryotes, e.g. using glycolysis (also called EMP pathway) for sugar metabolism and the citric acid cycle to degrade acetate, producing energy in the form of ATP and reducing power in the form of NADH or quinols. These basic pathways are well conserved because they are also involved in biosynthesis of many conserved building blocks needed for cell growth (sometimes in reverse direction). However, many bacteria and archaea utilize alternative metabolic pathways other than glycolysis and the citric acid cycle. […] The metabolic diversity and ability of prokaryotes to use a large variety of organic compounds arises from the much deeper evolutionary history and diversity of prokaryotes, as compared to eukaryotes. […]
Many microbes (phototrophs) are capable of using light as a source of energy to produce ATP and organic compounds such as carbohydrates, lipids, and proteins. Of these, algae are particularly significant because they are oxygenic, using water as an electron donor for electron transfer during photosynthesis. Phototrophic bacteria are found in the phyla Cyanobacteria, Chlorobi, Proteobacteria, Chloroflexi, and Firmicutes. Along with plants these microbes are responsible for all biological generation of oxygen gas on Earth. […] As befits the large diversity of photosynthetic bacteria, there are many different mechanisms by which light is converted into energy for metabolism. All photosynthetic organisms locate their photosynthetic reaction centers within a membrane, which may be invaginations of the cytoplasmic membrane (Proteobacteria), thylakoid membranes (Cyanobacteria), specialized antenna structures called chlorosomes (Green sulfur and non-sulfur bacteria), or the cytoplasmic membrane itself (heliobacteria). Different photosynthetic bacteria also contain different photosynthetic pigments, such as chlorophylls and carotenoids, allowing them to take advantage of different portions of the electromagnetic spectrum and thereby inhabit different niches. Some groups of organisms contain more specialized light-harvesting structures (e.g. phycobilisomes in Cyanobacteria and chlorosomes in Green sulfur and non-sulfur bacteria), allowing for increased efficiency in light utilization. […]
Most photosynthetic microbes are autotrophic, fixing carbon dioxide via the Calvin cycle. Some photosynthetic bacteria (e.g. Chloroflexus) are photoheterotrophs, meaning that they use organic carbon compounds as a carbon source for growth. Some photosynthetic organisms also fix nitrogen […] Nitrogen is an element required for growth by all biological systems. While extremely common (80% by volume) in the atmosphere, dinitrogen gas (N2) is generally biologically inaccessible due to its high activation energy. Throughout all of nature, only specialized bacteria and Archaea are capable of nitrogen fixation, converting dinitrogen gas into ammonia (NH3), which is easily assimilated by all organisms. These prokaryotes, therefore, are very important ecologically and are often essential for the survival of entire ecosystems. This is especially true in the ocean, where nitrogen-fixing cyanobacteria are often the only sources of fixed nitrogen, and in soils, where specialized symbioses exist between legumes and their nitrogen-fixing partners to provide the nitrogen needed by these plants for growth.
Nitrogen fixation can be found distributed throughout nearly all bacterial lineages and physiological classes but is not a universal property. Because the enzyme nitrogenase, responsible for nitrogen fixation, is very sensitive to oxygen which will inhibit it irreversibly, all nitrogen-fixing organisms must possess some mechanism to keep the concentration of oxygen low. […] The production and activity of nitrogenases is very highly regulated, both because nitrogen fixation is an extremely energetically expensive process (16–24 ATP are used per N2 fixed) and due to the extreme sensitivity of the nitrogenase to oxygen.” (A lot of the stuff above was of course either review for me or closely related to stuff I’d already read in the coverage provided in Beer et al., a book I’ve talked about before here on the blog).
v. Uranium (featured). It’s hard to know what to include here as the article has a lot of stuff, but I found this part in particular, well, interesting:
“During the Cold War between the Soviet Union and the United States, huge stockpiles of uranium were amassed and tens of thousands of nuclear weapons were created using enriched uranium and plutonium made from uranium. Since the break-up of the Soviet Union in 1991, an estimated 600 short tons (540 metric tons) of highly enriched weapons grade uranium (enough to make 40,000 nuclear warheads) have been stored in often inadequately guarded facilities in the Russian Federation and several other former Soviet states. Police in Asia, Europe, and South America on at least 16 occasions from 1993 to 2005 have intercepted shipments of smuggled bomb-grade uranium or plutonium, most of which was from ex-Soviet sources. From 1993 to 2005 the Material Protection, Control, and Accounting Program, operated by the federal government of the United States, spent approximately US $550 million to help safeguard uranium and plutonium stockpiles in Russia. This money was used for improvements and security enhancements at research and storage facilities. Scientific American reported in February 2006 that in some of the facilities security consisted of chain link fences which were in severe states of disrepair. According to an interview from the article, one facility had been storing samples of enriched (weapons grade) uranium in a broom closet before the improvement project; another had been keeping track of its stock of nuclear warheads using index cards kept in a shoe box.”
Some other observations from the article below:
“Uranium is a naturally occurring element that can be found in low levels within all rock, soil, and water. Uranium is the 51st element in order of abundance in the Earth’s crust. Uranium is also the highest-numbered element to be found naturally in significant quantities on Earth and is almost always found combined with other elements. Along with all elements having atomic weights higher than that of iron, it is only naturally formed in supernovae. The decay of uranium, thorium, and potassium-40 in the Earth’s mantle is thought to be the main source of heat that keeps the outer core liquid and drives mantle convection, which in turn drives plate tectonics. […]
“Natural uranium consists of three major isotopes: uranium-238 (99.28% natural abundance), uranium-235 (0.71%), and uranium-234 (0.0054%). […] Uranium-238 is the most stable isotope of uranium, with a half-life of about 4.468×10^9 years, roughly the age of the Earth. Uranium-235 has a half-life of about 7.13×10^8 years, and uranium-234 has a half-life of about 2.48×10^5 years. For natural uranium, about 49% of its alpha rays are emitted by 238U atoms, another 49% by 234U (since the latter is formed from the former), and about 2.0% by 235U. When the Earth was young, probably about one-fifth of its uranium was uranium-235, but the percentage of 234U was probably much lower than this. […]
Worldwide production of U3O8 (yellowcake) in 2013 amounted to 70,015 tonnes, of which 22,451 t (32%) was mined in Kazakhstan. Other important uranium mining countries are Canada (9,331 t), Australia (6,350 t), Niger (4,518 t), Namibia (4,323 t) and Russia (3,135 t). […] Australia has 31% of the world’s known uranium ore reserves and the world’s largest single uranium deposit, located at the Olympic Dam Mine in South Australia. There is a significant reserve of uranium in Bakouma a sub-prefecture in the prefecture of Mbomou in Central African Republic. […] Uranium deposits seem to be log-normal distributed. There is a 300-fold increase in the amount of uranium recoverable for each tenfold decrease in ore grade. In other words, there is little high grade ore and proportionately much more low grade ore available.”
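The 49/49/2 split of alpha activity mentioned in the quote follows directly from the isotopes’ abundances and half-lives, since a nuclide’s activity is proportional to (number of atoms) × ln 2 / half-life. Here’s a quick back-of-the-envelope check of that arithmetic – my own sketch, not something from the article – using the abundance and half-life figures quoted above:

```python
import math

# (abundance fraction, half-life in years), values as quoted in the article
isotopes = {
    "U-238": (0.9928, 4.468e9),
    "U-235": (0.0071, 7.13e8),
    "U-234": (0.000054, 2.48e5),
}

# Activity of each isotope is proportional to N * ln(2) / half-life
activity = {name: ab * math.log(2) / t for name, (ab, t) in isotopes.items()}
total = sum(activity.values())
shares = {name: a / total for name, a in activity.items()}

for name, s in shares.items():
    print(f"{name}: {s:.1%} of alpha activity")
# U-238 and U-234 each come out near 49%, and U-235 near 2%,
# matching the figures in the quote.
```

The ln 2 factor cancels when taking shares, but keeping it makes the activity formula explicit: the rare, short-lived 234U decays so much faster per atom that it matches the activity of the abundant, long-lived 238U.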
vi. Radiocarbon dating (featured).
“Radiocarbon dating (also referred to as carbon dating or carbon-14 dating) is a method of determining the age of an object containing organic material by using the properties of radiocarbon (14C), a radioactive isotope of carbon. The method was invented by Willard Libby in the late 1940s and soon became a standard tool for archaeologists. Libby received the Nobel Prize for his work in 1960. The radiocarbon dating method is based on the fact that radiocarbon is constantly being created in the atmosphere by the interaction of cosmic rays with atmospheric nitrogen. The resulting radiocarbon combines with atmospheric oxygen to form radioactive carbon dioxide, which is incorporated into plants by photosynthesis; animals then acquire 14C by eating the plants. When the animal or plant dies, it stops exchanging carbon with its environment, and from that point onwards the amount of 14C it contains begins to reduce as the 14C undergoes radioactive decay. Measuring the amount of 14C in a sample from a dead plant or animal such as a piece of wood or a fragment of bone provides information that can be used to calculate when the animal or plant died. The older a sample is, the less 14C there is to be detected, and because the half-life of 14C (the period of time after which half of a given sample will have decayed) is about 5,730 years, the oldest dates that can be reliably measured by radiocarbon dating are around 50,000 years ago, although special preparation methods occasionally permit dating of older samples.
The idea behind radiocarbon dating is straightforward, but years of work were required to develop the technique to the point where accurate dates could be obtained. […]
The development of radiocarbon dating has had a profound impact on archaeology. In addition to permitting more accurate dating within archaeological sites than did previous methods, it allows comparison of dates of events across great distances. Histories of archaeology often refer to its impact as the “radiocarbon revolution”.”
I’ve read about these topics before in a textbook setting (e.g. here), but/and I should note that the article provides quite detailed coverage and I think most people will encounter some new information by having a look at it even if they’re superficially familiar with this topic. The article has a lot of stuff about e.g. ‘what you need to correct for’, which some of you might find interesting.
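The dating arithmetic the quoted passage describes is a simple exponential-decay calculation: given the half-life of about 5,730 years, a sample’s age follows from the fraction of the original 14C that remains, t = t½ · log2(N0/N). Here’s a minimal sketch of that calculation – my own illustration, not code from any actual dating software:

```python
import math

HALF_LIFE_C14 = 5730  # years, as given in the quoted article

def radiocarbon_age(remaining_fraction):
    """Age in years from the fraction of original 14C still present."""
    # N/N0 = 2^(-t / t_half)  =>  t = t_half * log2(N0/N)
    return HALF_LIFE_C14 * math.log2(1 / remaining_fraction)

print(radiocarbon_age(0.5))    # one half-life: 5730 years
print(radiocarbon_age(0.25))   # two half-lives: 11460 years

# At the ~50,000-year practical limit mentioned above, only about
# 0.2% of the original 14C is left, which is why older samples are
# so hard to date:
print(2 ** (-50000 / HALF_LIFE_C14))
```

In practice, as the article discusses at length, such raw ages are then corrected and calibrated (e.g. for variations in atmospheric 14C over time); the sketch only shows the underlying decay arithmetic.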
vii. Raccoon (featured). One interesting observation from the article:
“One aspect of raccoon behavior is so well known that it gives the animal part of its scientific name, Procyon lotor; “lotor” is neo-Latin for “washer”. In the wild, raccoons often dabble for underwater food near the shore-line. They then often pick up the food item with their front paws to examine it and rub the item, sometimes to remove unwanted parts. This gives the appearance of the raccoon “washing” the food. The tactile sensitivity of raccoons’ paws is increased if this rubbing action is performed underwater, since the water softens the hard layer covering the paws. However, the behavior observed in captive raccoons in which they carry their food to water to “wash” or douse it before eating has not been observed in the wild. Naturalist Georges-Louis Leclerc, Comte de Buffon, believed that raccoons do not have adequate saliva production to moisten food thereby necessitating dousing, but this hypothesis is now considered to be incorrect. Captive raccoons douse their food more frequently when a watering hole with a layout similar to a stream is not farther away than 3 m (10 ft). The widely accepted theory is that dousing in captive raccoons is a fixed action pattern from the dabbling behavior performed when foraging at shores for aquatic foods. This is supported by the observation that aquatic foods are doused more frequently. Cleaning dirty food does not seem to be a reason for “washing”. Experts have cast doubt on the veracity of observations of wild raccoons dousing food.”
And here’s another interesting set of observations:
“In Germany—where the raccoon is called the Waschbär (literally, “wash-bear” or “washing bear”) due to its habit of “dousing” food in water—two pairs of pet raccoons were released into the German countryside at the Edersee reservoir in the north of Hesse in April 1934 by a forester upon request of their owner, a poultry farmer. He released them two weeks before receiving permission from the Prussian hunting office to “enrich the fauna.” Several prior attempts to introduce raccoons in Germany were not successful. A second population was established in eastern Germany in 1945 when 25 raccoons escaped from a fur farm at Wolfshagen, east of Berlin, after an air strike. The two populations are parasitologically distinguishable: 70% of the raccoons of the Hessian population are infected with the roundworm Baylisascaris procyonis, but none of the Brandenburgian population has the parasite. The estimated number of raccoons was 285 animals in the Hessian region in 1956, over 20,000 animals in the Hessian region in 1970 and between 200,000 and 400,000 animals in the whole of Germany in 2008. By 2012 it was estimated that Germany now had more than a million raccoons.”
Sorry for the infrequent updates. I realized blogging Wodehouse books takes more time than I’d imagined, so posting this sort of stuff is probably a better idea.
“On the first day of the evacuation, only 7,669 men were evacuated, but by the end of the eighth day, a total of 338,226 soldiers had been rescued by a hastily assembled fleet of over 800 boats. Many of the troops were able to embark from the harbour’s protective mole onto 39 British destroyers and other large ships, while others had to wade out from the beaches, waiting for hours in the shoulder-deep water. Some were ferried from the beaches to the larger ships by the famous little ships of Dunkirk, a flotilla of hundreds of merchant marine boats, fishing boats, pleasure craft, and lifeboats called into service for the emergency. The BEF lost 68,000 soldiers during the French campaign and had to abandon nearly all of their tanks, vehicles, and other equipment.”
One way to make sense of the scale of the operations here is to compare them with the naval activities on D-day four years later. The British evacuated more people from France during three consecutive days in 1940 (30th and 31st of May, and 1st of June) than the Allies (Americans and British combined) landed on D-day four years later, and the British evacuated roughly as many people on the 31st of May (68,014) as they landed by sea on D-day (75,215). Here’s a part of the story I did not know:
“Three British divisions and a host of logistic and labour troops were cut off to the south of the Somme by the German “race to the sea”. At the end of May, a further two divisions began moving to France with the hope of establishing a Second BEF. The majority of the 51st (Highland) Division was forced to surrender on 12 June, but almost 192,000 Allied personnel, 144,000 of them British, were evacuated through various French ports from 15–25 June under the codename Operation Ariel. […] More than 100,000 evacuated French troops were quickly and efficiently shuttled to camps in various parts of southwestern England, where they were temporarily lodged before being repatriated. British ships ferried French troops to Brest, Cherbourg, and other ports in Normandy and Brittany, although only about half of the repatriated troops were deployed against the Germans before the surrender of France. For many French soldiers, the Dunkirk evacuation represented only a few weeks’ delay before being killed or captured by the German army after their return to France.”
ii. A pretty awesome display by the current world chess champion:
If you feel the same way I do about Maurice Ashley, you’ll probably want to skip the first few minutes of this video. Don’t miss the games, though – this is great stuff. Do keep in mind when watching this video that the clock is a really important part of this event; other players in the past have played a lot more people at the same time while blindfolded than Carlsen does here – “Although not a full-time chess professional [Najdorf] was one of the world’s leading chess players in the 1950s and 1960s and he excelled in playing blindfold chess: he broke the world record twice, by playing blindfold 40 games in Rosario, 1943, and 45 in São Paulo, 1947, becoming the world blindfold chess champion” (link) – but a game clock changes things a lot. A few comments and discussion here.
In very slightly related news, I recently got my first win against a grandmaster in a bullet game on the ICC.
iii. Gastric-brooding frog.
“The genus was unique because it contained the only two known frog species that incubated the prejuvenile stages of their offspring in the stomach of the mother. […] What makes these frogs unique among all frog species is their form of parental care. Following external fertilization by the male, the female would take the eggs or embryos into her mouth and swallow them. […] Eggs found in females measured up to 5.1 mm in diameter and had large yolk supplies. These large supplies are common among species that live entirely off yolk during their development. Most female frogs had around 40 ripe eggs, almost double that of the number of juveniles ever found in the stomach (21–26). This means one of two things, that the female fails to swallow all the eggs or the first few eggs to be swallowed are digested. […] During the period that the offspring were present in the stomach the frog would not eat. […] The birth process was widely spaced and may have occurred over a period of as long as a week. However, if disturbed the female may regurgitate all the young frogs in a single act of propulsive vomiting.”
Fascinating creatures. Unfortunately they’re no longer around (they’re classified as extinct).
iv. I’m sort of conflicted about what to think about this:
“Epidemiological studies show that patients with type-2-diabetes (T2DM) and individuals with a diabetes-independent elevation in blood glucose have an increased risk for developing dementia, specifically dementia due to Alzheimer’s disease (AD). These observations suggest that abnormal glucose metabolism likely plays a role in some aspects of AD pathogenesis, leading us to investigate the link between aberrant glucose metabolism, T2DM, and AD in murine models. […] Recent epidemiological studies demonstrate that individuals with type-2 diabetes (T2DM) are 2–4 times more likely to develop AD (3–5), individuals with elevated blood glucose levels are at an increased risk to develop dementia (5), and those with elevated blood glucose levels have a more rapid conversion from mild cognitive impairment (MCI) to AD (6), suggesting that disrupted glucose homeostasis could play a […] causal role in AD pathogenesis. Although several prominent features of T2DM, including increased insulin resistance and decreased insulin production, are at the forefront of AD research (7–10), questions regarding the effects of elevated blood glucose independent of insulin resistance on AD pathology remain largely unexplored. In order to investigate the potential role of glucose metabolism in AD, we combined glucose clamps and in vivo microdialysis as a method to measure changes in brain metabolites in awake, freely moving mice during a hyperglycemic challenge. Our findings suggest that acute hyperglycemia raises interstitial fluid (ISF) Aβ levels by altering neuronal activity, which increases Aβ production. […] Since extracellular Aβ, and subsequently tau, aggregate in a concentration-dependent manner during the preclinical period of AD while individuals are cognitively normal (27), our findings suggest that repeated episodes of transient hyperglycemia, such as those found in T2DM, could both initiate and accelerate plaque accumulation. 
Thus, the correlation between hyperglycemia and increased ISF Aβ provides one potential explanation for the increased risk of AD and dementia in T2DM patients or individuals with elevated blood glucose levels. In addition, our work suggests that KATP channels within the hippocampus act as metabolic sensors and couple alterations in glucose concentrations with changes in electrical activity and extracellular Aβ levels. Not only does this offer one mechanistic explanation for the epidemiological link between T2DM and AD, but it also provides a potential therapeutic target for AD. Given that FDA-approved drugs already exist for the modulation of KATP channels and previous work demonstrates the benefits of sulfonylureas for treating animal models of AD (26), the identification of these channels as a link between hyperglycemia and AD pathology creates an avenue for translational research in AD.”
Why am I conflicted? Well, on the one hand it’s nice to know that they’re making progress in terms of figuring out why people get Alzheimer’s and potential therapeutic targets are being identified. On the other hand this – “our findings suggest that repeated episodes of transient hyperglycemia […] could both initiate and accelerate plaque accumulation” – is bad news if you’re a type 1 diabetic (I’d much rather have them identify risk factors to which I’m not exposed).
v. I recently noticed that Khan Academy has put up some videos about diabetes. From the few I’ve had a look at they don’t seem to contain much stuff I don’t already know, so I’m not sure I’ll explore this playlist in any more detail, but I figured I might as well share a few of the videos here; the first one is about the pathophysiology of type 1 diabetes and the second one’s about diabetic nephropathy (kidney disease):
vi. On Being the Right Size, by J. B. S. Haldane. A neat little text. A few quotes:
“To the mouse and any smaller animal [gravity] presents practically no dangers. You can drop a mouse down a thousand-yard mine shaft; and, on arriving at the bottom, it gets a slight shock and walks away, provided that the ground is fairly soft. A rat is killed, a man is broken, a horse splashes. For the resistance presented to movement by the air is proportional to the surface of the moving object. Divide an animal’s length, breadth, and height each by ten; its weight is reduced to a thousandth, but its surface only to a hundredth. So the resistance to falling in the case of the small animal is relatively ten times greater than the driving force.
An insect, therefore, is not afraid of gravity; it can fall without danger, and can cling to the ceiling with remarkably little trouble. It can go in for elegant and fantastic forms of support like that of the daddy-longlegs. But there is a force which is as formidable to an insect as gravitation to a mammal. This is surface tension. A man coming out of a bath carries with him a film of water of about one-fiftieth of an inch in thickness. This weighs roughly a pound. A wet mouse has to carry about its own weight of water. A wet fly has to lift many times its own weight and, as everyone knows, a fly once wetted by water or any other liquid is in a very serious position indeed. An insect going for a drink is in as great danger as a man leaning out over a precipice in search of food. If it once falls into the grip of the surface tension of the water—that is to say, gets wet—it is likely to remain so until it drowns. A few insects, such as water-beetles, contrive to be unwettable; the majority keep well away from their drink by means of a long proboscis. […]
It is an elementary principle of aeronautics that the minimum speed needed to keep an aeroplane of a given shape in the air varies as the square root of its length. If its linear dimensions are increased four times, it must fly twice as fast. Now the power needed for the minimum speed increases more rapidly than the weight of the machine. So the larger aeroplane, which weighs sixty-four times as much as the smaller, needs one hundred and twenty-eight times its horsepower to keep up. Applying the same principle to the birds, we find that the limit to their size is soon reached. An angel whose muscles developed no more power weight for weight than those of an eagle or a pigeon would require a breast projecting for about four feet to house the muscles engaged in working its wings, while to economize in weight, its legs would have to be reduced to mere stilts. Actually a large bird such as an eagle or kite does not keep in the air mainly by moving its wings. It is generally to be seen soaring, that is to say balanced on a rising column of air. And even soaring becomes more and more difficult with increasing size. Were this not the case eagles might be as large as tigers and as formidable to man as hostile aeroplanes.
But it is time that we pass to some of the advantages of size. One of the most obvious is that it enables one to keep warm. All warmblooded animals at rest lose the same amount of heat from a unit area of skin, for which purpose they need a food-supply proportional to their surface and not to their weight. Five thousand mice weigh as much as a man. Their combined surface and food or oxygen consumption are about seventeen times a man’s. In fact a mouse eats about one quarter its own weight of food every day, which is mainly used in keeping it warm. For the same reason small animals cannot live in cold countries. In the arctic regions there are no reptiles or amphibians, and no small mammals. The smallest mammal in Spitzbergen is the fox. The small birds fly away in winter, while the insects die, though their eggs can survive six months or more of frost. The most successful mammals are bears, seals, and walruses.” [I think he’s a bit too categorical in his statements here and this topic is more contested today than it probably was when he wrote his text – see wikipedia’s coverage of Bergmann’s rule].
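Haldane’s back-of-the-envelope scaling arguments are easy to check numerically. Here’s a minimal sketch (my own, not from the essay) of the square-cube bookkeeping behind the falling mouse and the oversized aeroplane:

```python
# Square-cube law: under a linear rescaling by factor k,
# weight (volume) scales as k^3 but surface as k^2.

def scale(length_factor):
    """Return (weight_factor, surface_factor) for a linear rescaling."""
    return length_factor ** 3, length_factor ** 2

# Divide length, breadth, and height each by ten:
w, s = scale(1 / 10)
print(w)        # 0.001 -> weight reduced to a thousandth
print(s)        # 0.01  -> surface reduced to a hundredth
print(s / w)    # 10.0  -> air resistance relative to weight is ten times greater

# The aeroplane example: minimum flying speed scales as sqrt(length),
# so quadrupling linear dimensions doubles the required speed.
# Power ~ drag * speed, and drag ~ surface * speed^2, so
# power ~ surface * speed^3 = 16 * 8 = 128 times as much.
k = 4
weight = k ** 3                       # 64x the weight
power = k ** 2 * (k ** 0.5) ** 3      # 128x the horsepower
print(weight, power)                  # 64 128.0
```

This reproduces Haldane’s numbers exactly: the quadrupled aeroplane weighs sixty-four times as much but needs one hundred and twenty-eight times the horsepower.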
i. Lock (water transport). Zumerchik and Danver’s book covered this kind of stuff as well, sort of, and I figured that since I’m not going to blog the book – for reasons provided in my goodreads review here – I might as well add a link or two here instead. The words ‘sort of’ above are in my opinion justified because the book coverage is so horrid you’d never even know what a lock is used for from reading that book; you’d need to look that up elsewhere.
On a related note there’s a lot of stuff in that book about the history of water transport etc. which you probably won’t get from these articles, but having a look here will give you some idea about which sort of topics many of the chapters of the book are dealing with. Also, stuff like this and this. The book coverage of the latter topic is incidentally much, much more detailed than that wiki article, and the article – as well as many other articles about related topics (economic history, etc.) on the wiki, to the extent that they even exist – could clearly be improved greatly by adding content from books like this one. However I’m not going to be the guy doing that.
ii. Congruence (geometry).
iii. Geography and ecology of the Everglades. I’d note that this is a topic which seems to be reasonably well covered on wikipedia; there’s for example also a ‘good article’ on the Everglades and a featured article about the Everglades National Park. A few quotes and observations from the article:
“The geography and ecology of the Everglades involve the complex elements affecting the natural environment throughout the southern region of the U.S. state of Florida. Before drainage, the Everglades were an interwoven mesh of marshes and prairies covering 4,000 square miles (10,000 km2). […] Although sawgrass and sloughs are the enduring geographical icons of the Everglades, other ecosystems are just as vital, and the borders marking them are subtle or nonexistent. Pinelands and tropical hardwood hammocks are located throughout the sloughs; the trees, rooted in soil inches above the peat, marl, or water, support a variety of wildlife. The oldest and tallest trees are cypresses, whose roots are specially adapted to grow underwater for months at a time.”
“A vast marshland could only have been formed due to the underlying rock formations in southern Florida. The floor of the Everglades formed between 25 million and 2 million years ago when the Florida peninsula was a shallow sea floor. The peninsula has been covered by sea water at least seven times since the earliest bedrock formation. […] At only 5,000 years of age, the Everglades is a young region in geological terms. Its ecosystems are in constant flux as a result of the interplay of three factors: the type and amount of water present, the geology of the region, and the frequency and severity of fires. […] Water is the dominant element in the Everglades, and it shapes the land, vegetation, and animal life of South Florida. The South Florida climate was once arid and semi-arid, interspersed with wet periods. Between 10,000 and 20,000 years ago, sea levels rose, submerging portions of the Florida peninsula and causing the water table to rise. Fresh water saturated the limestone, eroding some of it and creating springs and sinkholes. The abundance of fresh water allowed new vegetation to take root, and through evaporation formed thunderstorms. Limestone was dissolved by the slightly acidic rainwater. The limestone wore away, and groundwater came into contact with the surface, creating a massive wetland ecosystem. […] Only two seasons exist in the Everglades: wet (May to November) and dry (December to April). […] The Everglades are unique; no other wetland system in the world is nourished primarily from the atmosphere. […] Average annual rainfall in the Everglades is approximately 62 inches (160 cm), though fluctuations of precipitation are normal.”
“Between 1871 and 2003, 40 tropical cyclones struck the Everglades, usually every one to three years.”
“Islands of trees featuring dense temperate or tropical trees are called tropical hardwood hammocks. They may rise between 1 and 3 feet (0.30 and 0.91 m) above water level in freshwater sloughs, sawgrass prairies, or pineland. These islands illustrate the difficulty of characterizing the climate of the Everglades as tropical or subtropical. Hammocks in the northern portion of the Everglades consist of more temperate plant species, but closer to Florida Bay the trees are tropical and smaller shrubs are more prevalent. […] Islands vary in size, but most range between 1 and 10 acres (0.40 and 4.05 ha); the water slowly flowing around them limits their size and gives them a teardrop appearance from above. The height of the trees is limited by factors such as frost, lightning, and wind: the majority of trees in hammocks grow no higher than 55 feet (17 m). […] There are more than 50 varieties of tree snails in the Everglades; the color patterns and designs unique to single islands may be a result of the isolation of certain hammocks. […] An estimated 11,000 species of seed-bearing plants and 400 species of land or water vertebrates live in the Everglades, but slight variations in water levels affect many organisms and reshape land formations.”
“Because much of the coast and inner estuaries are built by mangroves—and there is no border between the coastal marshes and the bay—the ecosystems in Florida Bay are considered part of the Everglades. […] Sea grasses stabilize sea beds and protect shorelines from erosion by absorbing energy from waves. […] Sea floor patterns of Florida Bay are formed by currents and winds. However, since 1932, sea levels have been rising at a rate of 1 foot (0.30 m) per 100 years. Though mangroves serve to build and stabilize the coastline, seas may be rising more rapidly than the trees are able to build.”
iv. Chang and Eng Bunker. Not a long article, but interesting:
“Chang (Chinese: 昌; pinyin: Chāng; Thai: จัน, Jan, rtgs: Chan) and Eng (Chinese: 恩; pinyin: Ēn; Thai: อิน In) Bunker (May 11, 1811 – January 17, 1874) were Thai-American conjoined twin brothers whose condition and birthplace became the basis for the term “Siamese twins”.”
I loved some of the implicit assumptions in this article: “Determined to live as normal a life they could, Chang and Eng settled on their small plantation and bought slaves to do the work they could not do themselves. […] Chang and Adelaide [his wife] would become the parents of eleven children. Eng and Sarah [‘the other wife’] had ten.”
A ‘normal life’ indeed… The women the twins married were incidentally sisters who ended up disliking each other (I can’t imagine why…).
v. Genie (feral child). This is a very long article, and you should be warned that many parts of it may not be pleasant to read. From the article:
“Genie (born 1957) is the pseudonym of a feral child who was the victim of extraordinarily severe abuse, neglect and social isolation. Her circumstances are prominently recorded in the annals of abnormal child psychology. When Genie was a baby her father decided that she was severely mentally retarded, causing him to dislike her and withhold as much care and attention as possible. Around the time she reached the age of 20 months Genie’s father decided to keep her as socially isolated as possible, so from that point until she reached 13 years, 7 months, he kept her locked alone in a room. During this time he almost always strapped her to a child’s toilet or bound her in a crib with her arms and legs completely immobilized, forbade anyone from interacting with her, and left her severely malnourished. The extent of Genie’s isolation prevented her from being exposed to any significant amount of speech, and as a result she did not acquire language during childhood. Her abuse came to the attention of Los Angeles child welfare authorities on November 4, 1970.
In the first several years after Genie’s early life and circumstances came to light, psychologists, linguists and other scientists focused a great deal of attention on Genie’s case, seeing in her near-total isolation an opportunity to study many aspects of human development. […] In early January 1978 Genie’s mother suddenly decided to forbid all of the scientists except for one from having any contact with Genie, and all testing and scientific observations of her immediately ceased. Most of the scientists who studied and worked with Genie have not seen her since this time. The only post-1977 updates on Genie and her whereabouts are personal observations or secondary accounts of them, and all are spaced several years apart. […]
Genie’s father had an extremely low tolerance for noise, to the point of refusing to have a working television or radio in the house. Due to this, the only sounds Genie ever heard from her parents or brother on a regular basis were noises when they used the bathroom. Although Genie’s mother claimed that Genie had been able to hear other people talking in the house, her father almost never allowed his wife or son to speak and viciously beat them if he heard them talking without permission. They were particularly forbidden to speak to or around Genie, so what conversations they had were therefore always very quiet and out of Genie’s earshot, preventing her from being exposed to any meaningful language besides her father’s occasional swearing. […] Genie’s father fed Genie as little as possible and refused to give her solid food […]
In late October 1970, Genie’s mother and father had a violent argument in which she threatened to leave if she could not call her parents. He eventually relented, and later that day Genie’s mother was able to get herself and Genie away from her husband while he was out of the house […] She and Genie went to live with her parents in Monterey Park. Around three weeks later, on November 4, after being told to seek disability benefits for the blind, Genie’s mother decided to do so in nearby Temple City, California and brought Genie along with her.
On account of her near-blindness, instead of the disabilities benefits office Genie’s mother accidentally entered the general social services office next door. The social worker who greeted them instantly sensed something was not right when she first saw Genie and was shocked to learn Genie’s true age was 13, having estimated from her appearance and demeanor that she was around 6 or 7 and possibly autistic. She notified her supervisor, and after questioning Genie’s mother and confirming Genie’s age they immediately contacted the police. […]
Upon admission to Children’s Hospital, Genie was extremely pale and grossly malnourished. She was severely undersized and underweight for her age, standing 4 ft 6 in (1.37 m) and weighing only 59 pounds (27 kg) […] Genie’s gross motor skills were extremely weak; she could not stand up straight nor fully straighten any of her limbs. Her movements were very hesitant and unsteady, and her characteristic “bunny walk”, in which she held her hands in front of her like claws, suggested extreme difficulty with sensory processing and an inability to integrate visual and tactile information. She had very little endurance, only able to engage in any physical activity for brief periods of time. […]
Despite tests conducted shortly after her admission which determined Genie had normal vision in both eyes she could not focus them on anything more than 10 feet (3 m) away, which corresponded to the dimensions of the room she was kept in. She was also completely incontinent, and gave no response whatsoever to extreme temperatures. As Genie never ate solid food as a child she was completely unable to chew and had very severe dysphagia, completely unable to swallow any solid or even soft food and barely able to swallow liquids. Because of this she would hold anything which she could not swallow in her mouth until her saliva broke it down, and if this took too long she would spit it out and mash it with her fingers. She constantly salivated and spat, and continually sniffed and blew her nose on anything that happened to be nearby.
Genie’s behavior was typically highly anti-social, and proved extremely difficult for others to control. She had no sense of personal property, frequently pointing to or simply taking something she wanted from someone else, and did not have any situational awareness whatsoever, acting on any of her impulses regardless of the setting. […] Doctors found it extremely difficult to test Genie’s mental age, but on two attempts they found Genie scored at the level of a 13-month-old. […] When upset Genie would wildly spit, blow her nose into her clothing, rub mucus all over her body, frequently urinate, and scratch and strike herself. These tantrums were usually the only times Genie was at all demonstrative in her behavior. […] Genie clearly distinguished speaking from other environmental sounds, but she remained almost completely silent and was almost entirely unresponsive to speech. When she did vocalize, it was always extremely soft and devoid of tone. Hospital staff initially thought that the responsiveness she did show to them meant she understood what they were saying, but later determined that she was instead responding to nonverbal signals that accompanied their speaking. […] Linguists later determined that in January 1971, two months after her admission, Genie only showed understanding of a few names and about 15–20 words. Upon hearing any of these, she invariably responded to them as if they had been spoken in isolation. Hospital staff concluded that her active vocabulary at that time consisted of just two short phrases, “stop it” and “no more”. Beyond negative commands, and possibly intonation indicating a question, she showed no understanding of any grammar whatsoever. […] Genie had a great deal of difficulty learning to count in sequential order. During Genie’s stay with the Riglers, the scientists spent a great deal of time attempting to teach her to count. 
She did not start to do so at all until late 1972, and when she did her efforts were extremely deliberate and laborious. By 1975 she could only count up to 7, which even then remained very difficult for her.”
“From January 1978 until 1993, Genie moved through a series of at least four additional foster homes and institutions. In some of these locations she was further physically abused and harassed to extreme degrees, and her development continued to regress. […] Genie is a ward of the state of California, and is living in an undisclosed location in the Los Angeles area. In May 2008, ABC News reported that someone who spoke under condition of anonymity had hired a private investigator who located Genie in 2000. She was reportedly living a relatively simple lifestyle in a small private facility for mentally underdeveloped adults, and appeared to be happy. Although she only spoke a few words, she could still communicate fairly well in sign language.”
i. Invasion of Poland. I recently realized I had no idea e.g. how long it took for the Germans and Soviets to defeat Poland during WW2 (the answer is one month and five days). The Germans attacked more than two weeks before the Soviets did. The article has lots of links, like most articles about such topics on wikipedia. Incidentally the question of why France and Britain applied a double standard and declared war only on Germany, and not the Soviet Union, is discussed in much detail in the links provided by u/OldWorldGlory here.
ii. Huaynaputina. From the article:
“A few days before the eruption, someone reported booming noise from the volcano and fog-like gas being emitted from its crater. The locals scrambled to appease the volcano, preparing girls, pets, and flowers for sacrifice.”
This makes sense – what else would one do in a situation like that? Finding a few virgins, dogs and flowers seems like the sensible approach – yes, you have to love humans and how they always react in sensible ways to such crises.
I’m not really sure the rest of the article is all that interesting, but I found the above sentence both amusing and depressing enough to link to it here.
iii. Albert Pierrepoint. This guy killed hundreds of people.
On the other hand people were fine with it – it was his job. Well, sort of, this is actually slightly complicated. (“Pierrepoint was often dubbed the Official Executioner, despite there being no such job or title”).
Anyway this article is clearly the story of a guy who achieved his childhood dream – though unlike other children, he did not dream of becoming a fireman or a pilot, but rather of becoming the Official Executioner of the country. I’m currently thinking of using Pierrepoint as the main character in the motivational story I plan to tell my nephew when he’s a bit older.
iv. Second Crusade (featured). Considering how many different ‘states’ and ‘kingdoms’ were involved, a surprisingly small number of people were actually fighting; the article notes that “[t]here were perhaps 50,000 troops in total” on the Christian side when the attack on Damascus was initiated. It wasn’t enough, as the outcome of the crusade was a decisive Muslim victory in the ‘Holy Land’ (Middle East).
v. 0.999… (featured). This thing is equal to one, but it can sometimes be really hard to get even very smart people to accept this fact. Lots of details and some proofs presented in the article.
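The shortest of the arguments the article covers is the geometric-series one: write the decimal as an infinite sum and evaluate it,

```latex
0.999\ldots \;=\; \sum_{n=1}^{\infty} \frac{9}{10^{n}}
\;=\; \frac{9/10}{1 - 1/10}
\;=\; \frac{9/10}{9/10}
\;=\; 1 .
```

The article also presents the even simpler algebraic manipulation (set x = 0.999…, then 10x − x = 9, so x = 1), along with a discussion of why these arguments nevertheless strike many people as unconvincing.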
vi. Shapley–Folkman lemma (‘good article’ – but also a somewhat technical article).
vii. Multituberculata. This article is not that special, but I add it here also because I think it ought to be and I’m actually sort of angry that it’s not; sometimes the coverage provided on wikipedia simply strikes me as grossly unfair, even if this is perhaps a slightly odd way to think about stuff. As pointed out in the article (Agustí points this out in his book as well), “The multituberculates existed for about 120 million years, and are often considered the most successful, diversified, and long-lasting mammals in natural history.” Yet notice how much (/little) coverage the article provides. Now compare the article with this article, or this.
It’s been quite a while since the last time I posted a ‘here’s some interesting stuff I’ve found online’-post, so I’ll do that now even though I actually don’t spend much time randomly looking around for interesting stuff online these days. I added some wikipedia links I’d saved for a ‘wikipedia articles of interest’-post because it usually takes quite a bit of time to write a standard wikipedia post (as it takes time to figure out what to include and what not to include in the coverage) and I figured that if I didn’t add those links here I’d never get around to blogging them.
iii. I found this article about the so-called “Einstellung” effect in chess interesting. I’m however not sure how important this stuff really is. I don’t think it’s sub-optimal for a player to spend a significant amount of time in positions like the ones they analyzed on ideas that don’t work, because usually you’ll only have to spot one idea that does work to win the game. It’s obvious that one can argue people spend ‘too much’ time looking for a winning combination in positions where by design no winning combinations exist, but the fact of the matter is that in positions where ‘familiar patterns’ pop up, winning resources often do exist, and you don’t win games by overlooking those or by failing to spend time looking for them; occasional suboptimal moves in some contexts may be a reasonable price to pay for increasing your likelihood of finding/playing the best/winning moves when those do exist. Here’s a slightly related link dealing with the question of the potential number of games/moves in chess. Here’s a good wiki article about pawn structures, and here’s one about swindles in chess. I incidentally very recently became a member of the ICC, and I’m frankly impressed with the player pool – which is huge and includes some really strong players (players like Morozevich and Tomashevsky seem to play there regularly). Since I started out on the site I’ve already beaten 3 IMs in bullet and lost a game against Icelandic GM Henrik Danielsen. The IMs I’ve beaten were far from the strongest players in the player pool, but in my experience you don’t get to play titled players nearly as often as that on other sites if you’re at my level.
v. You may already have seen this one, but in case you have not: A Philosopher Walks Into A Coffee Shop. More than one of these made me laugh out loud. If you like the post you should take a look at the comments too; some of them are brilliant.
vi. Amdahl’s law.
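Since the link above is bare, here’s the idea in brief, as a minimal Python sketch of my own (the function name is mine): if a fraction p of a program’s work can be parallelised, Amdahl’s law bounds the overall speedup achievable on n processors.

```python
# Amdahl's law: with parallel fraction p on n processors,
# the overall speedup is S(n) = 1 / ((1 - p) + p / n).

def speedup(p, n):
    """Overall speedup for parallelisable fraction p on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelisable, the speedup saturates:
print(round(speedup(0.95, 10), 2))      # 6.9
print(round(speedup(0.95, 1000), 2))    # 19.63
# The limit as n grows without bound is 1 / (1 - p) = 20.
```

The serial fraction, however small, ends up dominating – which is the point of the article.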
vii. Eigendecomposition of a matrix. On a related note I’m currently reading Imboden and Pfenninger’s Introduction to Systems Analysis (which goodreads for some reason has listed under a wrong title, as the goodreads book title is really the subtitle of the book), and today I had a look at the wiki article on Jacobian matrices and determinants for that reason (the book is about as technical as you’d expect from a book with a title like that).
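As a tiny illustration of what an eigendecomposition actually computes – a sketch of my own, not taken from the article or the book – the eigenvalues of a symmetric 2×2 matrix can be written down in closed form, and the trace and determinant identities give a quick sanity check:

```python
import math

# For the symmetric matrix A = [[a, b], [b, d]] the eigenvalues are
#   lambda = (a + d)/2 +/- sqrt(((a - d)/2)^2 + b^2),
# and A factors as Q diag(l1, l2) Q^T with orthonormal eigenvectors.

def eigen2x2(a, b, d):
    """Eigenvalues of the symmetric matrix [[a, b], [b, d]]."""
    mean = (a + d) / 2.0
    r = math.hypot((a - d) / 2.0, b)
    return mean + r, mean - r

l1, l2 = eigen2x2(2.0, 1.0, 2.0)   # the matrix [[2, 1], [1, 2]]
print(l1, l2)                      # 3.0 1.0

# Sanity checks: the trace equals the sum of the eigenvalues,
# and the determinant equals their product.
assert abs((l1 + l2) - (2.0 + 2.0)) < 1e-12
assert abs(l1 * l2 - (2.0 * 2.0 - 1.0 * 1.0)) < 1e-12
```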
i. Pendle witches.
“The trials of the Pendle witches in 1612 are among the most famous witch trials in English history, and some of the best recorded of the 17th century. The twelve accused lived in the area around Pendle Hill in Lancashire, and were charged with the murders of ten people by the use of witchcraft. All but two were tried at Lancaster Assizes on 18–19 August 1612, along with the Samlesbury witches and others, in a series of trials that have become known as the Lancashire witch trials. One was tried at York Assizes on 27 July 1612, and another died in prison. Of the eleven who went to trial – nine women and two men – ten were found guilty and executed by hanging; one was found not guilty.
The official publication of the proceedings by the clerk to the court, Thomas Potts, in his The Wonderfull Discoverie of Witches in the Countie of Lancaster, and the number of witches hanged together – nine at Lancaster and one at York – make the trials unusual for England at that time. It has been estimated that all the English witch trials between the early 15th and early 18th centuries resulted in fewer than 500 executions; this series of trials accounts for more than two per cent of that total.”
“One of the accused, Demdike, had been regarded in the area as a witch for fifty years, and some of the deaths the witches were accused of had happened many years before Roger Nowell started to take an interest in 1612. The event that seems to have triggered Nowell’s investigation, culminating in the Pendle witch trials, occurred on 21 March 1612.
On her way to Trawden Forest, Demdike’s granddaughter, Alizon Device, encountered John Law, a pedlar from Halifax, and asked him for some pins. Seventeenth-century metal pins were handmade and relatively expensive, but they were frequently needed for magical purposes, such as in healing – particularly for treating warts – divination, and for love magic, which may have been why Alizon was so keen to get hold of them and why Law was so reluctant to sell them to her. Whether she meant to buy them, as she claimed, and Law refused to undo his pack for such a small transaction, or whether she had no money and was begging for them, as Law’s son Abraham claimed, is unclear. A few minutes after their encounter Alizon saw Law stumble and fall, perhaps because he suffered a stroke; he managed to regain his feet and reach a nearby inn. Initially Law made no accusations against Alizon, but she appears to have been convinced of her own powers; when Abraham Law took her to visit his father a few days after the incident, she reportedly confessed and asked for his forgiveness.
Alizon Device, her mother Elizabeth, and her brother James were summoned to appear before Nowell on 30 March 1612. Alizon confessed that she had sold her soul to the Devil, and that she had told him to lame John Law after he had called her a thief. Her brother, James, stated that his sister had also confessed to bewitching a local child. Elizabeth was more reticent, admitting only that her mother, Demdike, had a mark on her body, something that many, including Nowell, would have regarded as having been left by the Devil after he had sucked her blood.”
“The Pendle witches were tried in a group that also included the Samlesbury witches, Jane Southworth, Jennet Brierley, and Ellen Brierley, the charges against whom included child murder and cannibalism; Margaret Pearson, the so-called Padiham witch, who was facing her third trial for witchcraft, this time for killing a horse; and Isobel Robey from Windle, accused of using witchcraft to cause sickness.
Some of the accused Pendle witches, such as Alizon Device, seem to have genuinely believed in their guilt, but others protested their innocence to the end.”
“Nine-year-old Jennet Device was a key witness for the prosecution, something that would not have been permitted in many other 17th-century criminal trials. However, King James had made a case for suspending the normal rules of evidence for witchcraft trials in his Daemonologie. As well as identifying those who had attended the Malkin Tower meeting, Jennet also gave evidence against her mother, brother, and sister. […] When Jennet was asked to stand up and give evidence against her mother, Elizabeth began to scream and curse her daughter, forcing the judges to have her removed from the courtroom before the evidence could be heard. Jennet was placed on a table and stated that she believed her mother had been a witch for three or four years. She also said her mother had a familiar called Ball, who appeared in the shape of a brown dog. Jennet claimed to have witnessed conversations between Ball and her mother, in which Ball had been asked to help with various murders. James Device also gave evidence against his mother, saying he had seen her making a clay figure of one of her victims, John Robinson. Elizabeth Device was found guilty.
James Device pleaded not guilty to the murders by witchcraft of Anne Townley and John Duckworth. However he, like Chattox, had earlier made a confession to Nowell, which was read out in court. That, and the evidence presented against him by his sister Jennet, who said that she had seen her brother asking a black dog he had conjured up to help him kill Townley, was sufficient to persuade the jury to find him guilty.”
“Many of the allegations made in the Pendle witch trials resulted from members of the Demdike and Chattox families making accusations against each other. Historian John Swain has said that the outbreaks of witchcraft in and around Pendle demonstrate the extent to which people could make a living either by posing as a witch, or by accusing or threatening to accuse others of being a witch. Although it is implicit in much of the literature on witchcraft that the accused were victims, often mentally or physically abnormal, for some at least, it may have been a trade like any other, albeit one with significant risks. There may have been bad blood between the Demdike and Chattox families because they were in competition with each other, trying to make a living from healing, begging, and extortion.”
This article is the only one of the five ‘main articles’ in this post which is not a featured article. I looked this one up because the Burnham & Anderson book I’m currently reading talks about this stuff quite a bit. The book will probably be one of the most technical books I’ll read this year, and I’m not sure how much of it I’ll end up covering here. Basically most of the book deals with the stuff ‘covered’ in the (very short) ‘Relationship between models and reality’ section of the wiki article. There are a lot of details the article left out… The same could be said about the related wiki article about AIC (both articles incidentally include the book in their references).
The first thing that would spring to mind if someone asked me what I knew about Jupiter’s atmosphere would probably be something along the lines of: “…well, it’s huge…”
…and it is. But we know a lot more than that – some observations from the article:
“The atmosphere of Jupiter is the largest planetary atmosphere in the Solar System. It is mostly made of molecular hydrogen and helium in roughly solar proportions; other chemical compounds are present only in small amounts […] The atmosphere of Jupiter lacks a clear lower boundary and gradually transitions into the liquid interior of the planet. […] The Jovian atmosphere shows a wide range of active phenomena, including band instabilities, vortices (cyclones and anticyclones), storms and lightning. […] Jupiter has powerful storms, always accompanied by lightning strikes. The storms are a result of moist convection in the atmosphere connected to the evaporation and condensation of water. They are sites of strong upward motion of the air, which leads to the formation of bright and dense clouds. The storms form mainly in belt regions. The lightning strikes on Jupiter are hundreds of times more powerful than those seen on Earth.” [However, do note that later on in the article it is stated that: “On Jupiter lightning strikes are on average a few times more powerful than those on Earth.”]
“The composition of Jupiter’s atmosphere is similar to that of the planet as a whole. Jupiter’s atmosphere is the most comprehensively understood of those of all the gas giants because it was observed directly by the Galileo atmospheric probe when it entered the Jovian atmosphere on December 7, 1995. Other sources of information about Jupiter’s atmospheric composition include the Infrared Space Observatory (ISO), the Galileo and Cassini orbiters, and Earth-based observations.”
“The visible surface of Jupiter is divided into several bands parallel to the equator. There are two types of bands: lightly colored zones and relatively dark belts. […] The alternating pattern of belts and zones continues until the polar regions at approximately 50 degrees latitude, where their visible appearance becomes somewhat muted. The basic belt-zone structure probably extends well towards the poles, reaching at least to 80° North or South.
The difference in the appearance between zones and belts is caused by differences in the opacity of the clouds. Ammonia concentration is higher in zones, which leads to the appearance of denser clouds of ammonia ice at higher altitudes, which in turn leads to their lighter color. On the other hand, in belts clouds are thinner and are located at lower altitudes. The upper troposphere is colder in zones and warmer in belts. […] The Jovian bands are bounded by zonal atmospheric flows (winds), called jets. […] The location and width of bands, speed and location of jets on Jupiter are remarkably stable, having changed only slightly between 1980 and 2000. […] However bands vary in coloration and intensity over time […] These variations were first observed in the early seventeenth century.”
“Jupiter radiates much more heat than it receives from the Sun. It is estimated that the ratio between the power emitted by the planet and that absorbed from the Sun is 1.67 ± 0.09.”
“Wife selling in England was a way of ending an unsatisfactory marriage by mutual agreement that probably began in the late 17th century, when divorce was a practical impossibility for all but the very wealthiest. After parading his wife with a halter around her neck, arm, or waist, a husband would publicly auction her to the highest bidder. […] Although the custom had no basis in law and frequently resulted in prosecution, particularly from the mid-19th century onwards, the attitude of the authorities was equivocal. At least one early 19th-century magistrate is on record as stating that he did not believe he had the right to prevent wife sales, and there were cases of local Poor Law Commissioners forcing husbands to sell their wives, rather than having to maintain the family in workhouses.”
“Until the passing of the Marriage Act of 1753, a formal ceremony of marriage before a clergyman was not a legal requirement in England, and marriages were unregistered. All that was required was for both parties to agree to the union, so long as each had reached the legal age of consent, which was 12 for girls and 14 for boys. Women were completely subordinated to their husbands after marriage, the husband and wife becoming one legal entity, a legal status known as coverture. […] Married women could not own property in their own right, and were indeed themselves the property of their husbands. […] Five distinct methods of breaking up a marriage existed in the early modern period of English history. One was to sue in the ecclesiastical courts for separation from bed and board (a mensa et thoro), on the grounds of adultery or life-threatening cruelty, but it did not allow a remarriage. From the 1550s, until the Matrimonial Causes Act became law in 1857, divorce in England was only possible, if at all, by the complex and costly procedure of a private Act of Parliament. Although the divorce courts set up in the wake of the 1857 Act made the procedure considerably cheaper, divorce remained prohibitively expensive for the poorer members of society.[nb 1] An alternative was to obtain a “private separation”, an agreement negotiated between both spouses, embodied in a deed of separation drawn up by a conveyancer. Desertion or elopement was also possible, whereby the wife was forced out of the family home, or the husband simply set up a new home with his mistress. Finally, the less popular notion of wife selling was an alternative but illegitimate method of ending a marriage.”
“Although some 19th-century wives objected, records of 18th-century women resisting their sales are non-existent. With no financial resources, and no skills on which to trade, for many women a sale was the only way out of an unhappy marriage. Indeed the wife is sometimes reported as having insisted on the sale. […] Although the initiative was usually the husband’s, the wife had to agree to the sale. An 1824 report from Manchester says that “after several biddings she [the wife] was knocked down for 5s; but not liking the purchaser, she was put up again for 3s and a quart of ale”. Frequently the wife was already living with her new partner. In one case in 1804 a London shopkeeper found his wife in bed with a stranger to him, who, following an altercation, offered to purchase the wife. The shopkeeper agreed, and in this instance the sale may have been an acceptable method of resolving the situation. However, the sale was sometimes spontaneous, and the wife could find herself the subject of bids from total strangers. In March 1766, a carpenter from Southwark sold his wife “in a fit of conjugal indifference at the alehouse”. Once sober, the man asked his wife to return, and after she refused he hanged himself. A domestic fight might sometimes precede the sale of a wife, but in most recorded cases the intent was to end a marriage in a way that gave it the legitimacy of a divorce.”
“Prices paid for wives varied considerably, from a high of £100 plus £25 each for her two children in a sale of 1865 (equivalent to about £12,500 in 2015) to a low of a glass of ale, or even free. […] According to authors Wade Mansell and Belinda Meteyard, money seems usually to have been a secondary consideration; the more important factor was that the sale was seen by many as legally binding, despite it having no basis in law. […] In Sussex, inns and public houses were a regular venue for wife-selling, and alcohol often formed part of the payment. […] in Ninfield in 1790, a man who swapped his wife at the village inn for half a pint of gin changed his mind and bought her back later. […] Estimates of the frequency of the ritual usually number about 300 between 1780 and 1850, relatively insignificant compared to the instances of desertion, which in the Victorian era numbered in the tens of thousands.”
“In 1825 a man named Johnson was charged with “having sung a song in the streets describing the merits of his wife, for the purpose of selling her to the highest bidder at Smithfield.” Such songs were not unique; in about 1842 John Ashton wrote “Sale of a Wife”.[nb 6] The arresting officer claimed that the man had gathered a “crowd of all sorts of vagabonds together, who appeared to listen to his ditty, but were in fact, collected to pick pockets.” The defendant, however, replied that he had “not the most distant idea of selling his wife, who was, poor creature, at home with her hungry children, while he was endeavouring to earn a bit of bread for them by the strength of his lungs.” He had also printed copies of the song, and the story of a wife sale, to earn money. Before releasing him, the Lord Mayor, judging the case, cautioned Johnson that the practice could not be allowed, and must not be repeated. In 1833 the sale of a woman was reported at Epping. She was sold for 2s. 6d., with a duty of 6d. Once sober, and placed before the Justices of the Peace, the husband claimed that he had been forced into marriage by the parish authorities, and had “never since lived with her, and that she had lived in open adultery with the man Bradley, by whom she had been purchased”. He was imprisoned for “having deserted his wife”.”
v. Bog turtle.
“The bog turtle (Glyptemys muhlenbergii) is a semiaquatic turtle endemic to the eastern United States. […] It is the smallest North American turtle, measuring about 10 centimeters (4 in) long when fully grown. […] The bog turtle can be found from Vermont in the north, south to Georgia, and west to Ohio. Diurnal and secretive, it spends most of its time buried in mud and – during the winter months – in hibernation. The bog turtle is omnivorous, feeding mainly on small invertebrates.”
“The bog turtle is native only to the eastern United States,[nb 1] congregating in colonies that often consist of fewer than 20 individuals. […] densities can range from 5 to 125 individuals per 0.81 hectares (2.0 acres). […] The bog turtle spends its life almost exclusively in the wetland where it hatched. In its natural environment, it has a maximum lifespan of perhaps 50 years or more, and the average lifespan is 20–30 years.”
“The bog turtle is primarily diurnal, active during the day and sleeping at night. It wakes in the early morning, basks until fully warm, then begins its search for food. It is a seclusive species, making it challenging to observe in its natural habitat. During colder days, the bog turtle will spend much of its time in dense underbrush, underwater, or buried in mud. […] Day-to-day, the bog turtle moves very little, typically basking in the sun and waiting for prey. […] Various studies have found different rates of daily movement in bog turtles, varying from 2.1 to 23 meters (6.9 to 75.5 ft) in males and 1.1 to 18 meters (3.6 to 59.1 ft) in females.”
“Changes to the bog turtle’s habitat have resulted in the disappearance of 80 percent of the colonies that existed 30 years ago. Because of the turtle’s rarity, it is also in danger of illegal collection, often for the worldwide pet trade. […] The bog turtle was listed as critically endangered in the 2011 IUCN Red List.”
“Saffron has been a key seasoning, fragrance, dye, and medicine for over three millennia. One of the world’s most expensive spices by weight, saffron consists of stigmas plucked from the vegetatively propagated and sterile Crocus sativus, known popularly as the saffron crocus. The resulting dried “threads”[N 1] are distinguished by their bitter taste, hay-like fragrance, and slight metallic notes. The saffron crocus is unknown in the wild; its most likely precursor, Crocus cartwrightianus, originated in Crete or Central Asia. The saffron crocus is native to Southwest Asia and was first cultivated in what is now Greece.
From antiquity to modern times the history of saffron is full of applications in food, drink, and traditional herbal medicine: from Africa and Asia to Europe and the Americas the brilliant red threads were—and are—prized in baking, curries, and liquor. It coloured textiles and other items and often helped confer the social standing of political elites and religious adepts. Ancient peoples believed saffron could be used to treat stomach upsets, bubonic plague, and smallpox.
“Saffron crocus cultivation has long centred on a broad belt of Eurasia stretching from the Mediterranean Sea in the southwest to India and China in the northeast. The major producers of antiquity—Iran, Spain, India, and Greece—continue to dominate the world trade. […] Iran has accounted for around 90–93 percent of recent annual world production and thereby dominates the export market on a by-quantity basis. […]
The high cost of saffron is due to the difficulty of manually extracting large numbers of minute stigmas, which are the only part of the crocus with the desired aroma and flavour. An exorbitant number of flowers need to be processed in order to yield marketable amounts of saffron. Obtaining 1 lb (0.45 kg) of dry saffron requires the harvesting of some 50,000 flowers, the equivalent of an association football pitch’s area of cultivation, or roughly 7,140 m2 (0.714 ha). By another estimate some 75,000 flowers are needed to produce one pound of dry saffron. […] Another complication arises in the flowers’ simultaneous and transient blooming. […] Bulk quantities of lower-grade saffron can reach upwards of US$500 per pound; retail costs for small amounts may exceed ten times that rate. In Western countries the average retail price is approximately US$1,000 per pound. Prices vary widely elsewhere, but on average tend to be lower. The high price is somewhat offset by the small quantities needed in kitchens: a few grams at most in medicinal use and a few strands, at most, in culinary applications; there are between 70,000 and 200,000 strands in a pound.”
ii. Scramble for Africa.
“The “Scramble for Africa” (also the Partition of Africa and the Conquest of Africa) was the invasion and occupation, colonization and annexation of African territory by European powers during the period of New Imperialism, between 1881 and 1914. In 1870, 10 percent of Africa was under European control; by 1914 it was 90 percent of the continent, with only Abyssinia (Ethiopia) and Liberia still independent.”
Here’s a really neat illustration from the article:
“Germany became the third largest colonial power in Africa. Nearly all of its overall empire of 2.6 million square kilometres and 14 million colonial subjects in 1914 was found in its African possessions of Southwest Africa, Togoland, the Cameroons, and Tanganyika. Following the 1904 Entente cordiale between France and the British Empire, Germany tried to isolate France in 1905 with the First Moroccan Crisis. This led to the 1905 Algeciras Conference, in which France’s influence on Morocco was compensated by the exchange of other territories, and then to the Agadir Crisis in 1911. Along with the 1898 Fashoda Incident between France and Britain, this succession of international crises reveals the bitterness of the struggle between the various imperialist nations, which ultimately led to World War I. […]
David Livingstone‘s explorations, carried on by Henry Morton Stanley, excited imaginations. But at first, Stanley’s grandiose ideas for colonisation found little support owing to the problems and scale of action required, except from Léopold II of Belgium, who in 1876 had organised the International African Association (the Congo Society). From 1879 to 1884, Stanley was secretly sent by Léopold II to the Congo region, where he made treaties with several African chiefs along the Congo River and by 1882 had sufficient territory to form the basis of the Congo Free State. Léopold II personally owned the colony from 1885 and used it as a source of ivory and rubber.
While Stanley was exploring Congo on behalf of Léopold II of Belgium, the Franco-Italian marine officer Pierre de Brazza travelled into the western Congo basin and raised the French flag over the newly founded Brazzaville in 1881, thus occupying today’s Republic of the Congo. Portugal, which also claimed the area due to old treaties with the native Kongo Empire, made a treaty with Britain on 26 February 1884 to block off the Congo Society’s access to the Atlantic.
By 1890 the Congo Free State had consolidated its control of its territory between Leopoldville and Stanleyville, and was looking to push south down the Lualaba River from Stanleyville. At the same time, the British South Africa Company of Cecil Rhodes was expanding north from the Limpopo River, sending the Pioneer Column (guided by Frederick Selous) through Matabeleland, and starting a colony in Mashonaland.
To the West, in the land where their expansions would meet, was Katanga, site of the Yeke Kingdom of Msiri. Msiri was the most militarily powerful ruler in the area, and traded large quantities of copper, ivory and slaves — and rumours of gold reached European ears. The scramble for Katanga was a prime example of the period. Rhodes and the BSAC sent two expeditions to Msiri in 1890 led by Alfred Sharpe, who was rebuffed, and Joseph Thomson, who failed to reach Katanga. Leopold sent four CFS expeditions. First, the Le Marinel Expedition could only extract a vaguely worded letter. The Delcommune Expedition was rebuffed. The well-armed Stairs Expedition was given orders to take Katanga with or without Msiri’s consent. Msiri refused, was shot, and the expedition cut off his head and stuck it on a pole as a “barbaric lesson” to the people. The Bia Expedition finished the job of establishing an administration of sorts and a “police presence” in Katanga.
Thus, the half million square kilometres of Katanga came into Leopold’s possession and brought his African realm up to 2,300,000 square kilometres (890,000 sq mi), about 75 times larger than Belgium. The Congo Free State imposed such a terror regime on the colonised people, including mass killings and forced labour, that Belgium, under pressure from the Congo Reform Association, ended Leopold II’s rule and annexed it in 1908 as a colony of Belgium, known as the Belgian Congo. […]
“Britain’s administration of Egypt and the Cape Colony contributed to a preoccupation over securing the source of the Nile River. Egypt was overrun by British forces in 1882 (although not formally declared a protectorate until 1914, and never an actual colony); Sudan, Nigeria, Kenya and Uganda were subjugated in the 1890s and early 20th century; and in the south, the Cape Colony (first acquired in 1795) provided a base for the subjugation of neighbouring African states and the Dutch Afrikaner settlers who had left the Cape to avoid the British and then founded their own republics. In 1877, Theophilus Shepstone annexed the South African Republic (or Transvaal – independent from 1857 to 1877) for the British Empire. In 1879, after the Anglo-Zulu War, Britain consolidated its control of most of the territories of South Africa. The Boers protested, and in December 1880 they revolted, leading to the First Boer War (1880–81). British Prime Minister William Gladstone signed a peace treaty on 23 March 1881, giving self-government to the Boers in the Transvaal. […] The Second Boer War, fought between 1899 and 1902, was about control of the gold and diamond industries; the independent Boer republics of the Orange Free State and the South African Republic (or Transvaal) were this time defeated and absorbed into the British Empire.”
There are a lot of unsourced claims in the article and some parts of it actually aren’t very good, but this is a topic about which I did not know much (I had no idea most of colonial Africa was acquired by the European powers as late as was actually the case). This is another good map from the article to have a look at if you just want the big picture.
iii. Cursed soldiers.
“The cursed soldiers (that is, “accursed soldiers” or “damned soldiers”; Polish: Żołnierze wyklęci) is a name applied to a variety of Polish resistance movements formed in the later stages of World War II and afterwards. Created by some members of the Polish Secret State, these clandestine organizations continued their armed struggle against the Stalinist government of Poland well into the 1950s. The guerrilla warfare included an array of military attacks launched against the new communist prisons as well as MBP state security offices, detention facilities for political prisoners, and concentration camps set up across the country. Most of the Polish anti-communist groups ceased to exist in the late 1940s or 1950s, hunted down by MBP security services and NKVD assassination squads. However, the last known ‘cursed soldier’, Józef Franczak, was killed in an ambush as late as 1963, almost 20 years after the Soviet take-over of Poland. […] Similar eastern European anti-communists fought on in other countries. […]
Armia Krajowa (or simply AK), the main Polish resistance movement in World War II, had officially disbanded on 19 January 1945 to prevent a slide into armed conflict with the Red Army, including an increasing threat of civil war over Poland’s sovereignty. However, many units decided to continue on with their struggle under new circumstances, seeing the Soviet forces as new occupiers. Meanwhile, Soviet partisans in Poland had already been ordered by Moscow on June 22, 1943 to engage Polish Leśni partisans in combat. They commonly fought Poles more often than the Germans. The main forces of the Red Army (Northern Group of Forces) and the NKVD had begun conducting operations against AK partisans already during and directly after the Polish Operation Tempest, designed by the Poles as a preventive action to assure Polish rather than Soviet control of the cities after the German withdrawal. Soviet premier Joseph Stalin aimed to ensure that an independent Poland would never reemerge in the postwar period. […]
The first Polish communist government, the Polish Committee of National Liberation, was formed in July 1944, but declined jurisdiction over AK soldiers. Consequently, for more than a year, it was Soviet agencies like the NKVD that dealt with the AK. By the end of the war, approximately 60,000 soldiers of the AK had been arrested, and 50,000 of them were deported to the Soviet Union’s gulags and prisons. Most of those soldiers had been captured by the Soviets during or in the aftermath of Operation Tempest, when many AK units tried to cooperate with the Soviets in a nationwide uprising against the Germans. Other veterans were arrested when they decided to approach the government after being promised amnesty. In 1947, an amnesty was passed for most of the partisans; the Communist authorities expected around 12,000 people to give up their arms, but the actual number of people to come out of the forests eventually reached 53,000. Many of them were arrested despite promises of freedom; after repeated broken promises during the first few years of communist control, AK soldiers stopped trusting the government. […]
The persecution of the AK members was only a part of the reign of Stalinist terror in postwar Poland. In the period of 1944–56, approximately 300,000 Polish people were arrested, though by some accounts the number was as high as two million. There were 6,000 death sentences issued, the majority of them carried out. Possibly, over 20,000 people died in communist prisons including those executed “in the majesty of the law” such as Witold Pilecki, a hero of Auschwitz. A further six million Polish citizens (i.e., one out of every three adult Poles) were classified as suspected members of a ‘reactionary or criminal element’ and subjected to investigation by state agencies.”
This article is actually related to the Delusion and self-deception book, which covered some of the stuff included in this article, but I decided I might as well include the link in this post. I think some parts of the article are written in a somewhat different manner than most wiki articles – there are specific paragraphs briefly covering the results of specific meta-analyses conducted in this field. I can’t really tell from this article if I actually like this way of writing a wiki article or not.
v. Hamming distance. Not a long article, but this is a useful concept to be familiar with:
“In information theory, the Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different. In other words, it measures the minimum number of substitutions required to change one string into the other, or the minimum number of errors that could have transformed one string into the other. […]
The Hamming distance is named after Richard Hamming, who introduced it in his fundamental paper on Hamming codes Error detecting and error correcting codes in 1950. It is used in telecommunication to count the number of flipped bits in a fixed-length binary word as an estimate of error, and therefore is sometimes called the signal distance. Hamming weight analysis of bits is used in several disciplines including information theory, coding theory, and cryptography. However, for comparing strings of different lengths, or strings where not just substitutions but also insertions or deletions have to be expected, a more sophisticated metric like the Levenshtein distance is more appropriate.”
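The definition above is easy to make concrete. Here is a minimal Python sketch; the function name and the example strings are my own choices for illustration, not something taken from the article:

```python
def hamming_distance(a: str, b: str) -> int:
    """Count the positions at which corresponding symbols differ."""
    if len(a) != len(b):
        raise ValueError("Hamming distance is only defined for equal-length strings")
    return sum(x != y for x, y in zip(a, b))

# "karolin" and "kathrin" differ at positions 2, 3 and 4:
print(hamming_distance("karolin", "kathrin"))  # 3
# Two 7-bit binary words differing in two bit positions:
print(hamming_distance("1011101", "1001001"))  # 2
```

For binary words the same quantity is simply the number of set bits (the Hamming weight) of the XOR of the two words, which is why it is so cheap to compute in error-detection settings.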
vi. Menstrual synchrony. I came across this one recently in a book, and it was obvious that the author had not read this article and lacked some of the knowledge included in it (the phenomenon was assumed to be real in the coverage, and theory was developed on the assumption that it was real, which would not make sense if it was not). I figured that if that person didn’t know this stuff, a lot of other people – including people reading along here – probably do not either, so I should cover this topic somewhere. This is an obvious place to do so. Okay, on to the article coverage:
“Menstrual synchrony, also called the McClintock effect, is the alleged process whereby women who begin living together in close proximity experience their menstrual cycle onsets (i.e., the onset of menstruation or menses) becoming closer together in time than previously. “For example, the distribution of onsets of seven female lifeguards was scattered at the beginning of the summer, but after 3 months spent together, the onset of all seven cycles fell within a 4-day period.”
Martha McClintock’s 1971 paper, published in Nature, says that menstrual cycle synchronization happens when the menstrual cycle onsets of two or more women become closer together in time than they were several months earlier. Several mechanisms have been hypothesized to cause synchronization.
After the initial studies, several papers were published reporting methodological flaws in studies reporting menstrual synchrony including McClintock’s study. In addition, other studies were published that failed to find synchrony. The proposed mechanisms have also received scientific criticism. A 2013 review of menstrual synchrony concluded that menstrual synchrony is doubtful. […] in a recent systematic review of menstrual synchrony, Harris and Vitzthum concluded that “In light of the lack of empirical evidence for MS [menstrual synchrony] sensu stricto, it seems there should be more widespread doubt than acceptance of this hypothesis.” […]
The experience of synchrony may be the result of the mathematical fact that menstrual cycles of different frequencies repeatedly converge and diverge over time and not due to a process of synchronization. It may also be due to the high probability of menstruation overlap that occurs by chance.”
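That last point is easy to illustrate with a toy computation. In the sketch below the cycle lengths (28 and 30 days) and the observation window are purely illustrative values of my own, not figures from the article; the point is only that two strictly periodic cycles of slightly different lengths drift out of phase and back into phase by simple arithmetic, with no synchronization mechanism at all:

```python
# Two strictly periodic cycles with slightly different (hypothetical) lengths.
cycle_a, cycle_b = 28, 30  # days

def onset_gaps(days, ca, cb):
    """For each onset of cycle A, the distance in days to the nearest onset of cycle B."""
    onsets_a = range(0, days, ca)
    onsets_b = list(range(0, days, cb))
    return [min(abs(a - b) for b in onsets_b) for a in onsets_a]

gaps = onset_gaps(420, cycle_a, cycle_b)
print(gaps)
# The gaps grow from 0 days up to a maximum and then shrink back toward 0.
# An observer who happens to sample near the convergent part of this pattern
# would see apparent "synchrony" that no mechanism produced.
```

With these numbers the two cycles realign every lcm(28, 30) = 420 days, so over a long enough observation period phases of apparent convergence are guaranteed.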
(A minor note: These days when I’m randomly browsing wikipedia and not just looking up concepts or terms found in the books I read, I’m mostly browsing the featured content on wikipedia. There’s a lot of featured stuff, and on average such articles are more interesting than random articles. As a result of this approach, all articles covered in the post below are featured articles. A related consequence of this shift may be that I may cover fewer articles in future wikipedia posts than I have in the past; this post only contains five articles, which I believe is slightly fewer than usual for these posts – a big reason for this being that it sometimes takes a lot of time to read a featured article.)
i. Woolly mammoth.
“The woolly mammoth (Mammuthus primigenius) was a species of mammoth, the common name for the extinct elephant genus Mammuthus. The woolly mammoth was one of the last in a line of mammoth species, beginning with Mammuthus subplanifrons in the early Pliocene. M. primigenius diverged from the steppe mammoth, M. trogontherii, about 200,000 years ago in eastern Asia. Its closest extant relative is the Asian elephant. […] The earliest known proboscideans, the clade which contains elephants, existed about 55 million years ago around the Tethys Sea. […] The family Elephantidae existed six million years ago in Africa and includes the modern elephants and the mammoths. Among many now extinct clades, the mastodon is only a distant relative of the mammoths, and part of the separate Mammutidae family, which diverged 25 million years before the mammoths evolved. […] The woolly mammoth coexisted with early humans, who used its bones and tusks for making art, tools, and dwellings, and the species was also hunted for food. It disappeared from its mainland range at the end of the Pleistocene 10,000 years ago, most likely through a combination of climate change, consequent disappearance of its habitat, and hunting by humans, though the significance of these factors is disputed. Isolated populations survived on Wrangel Island until 4,000 years ago, and on St. Paul Island until 6,400 years ago.”
“The appearance and behaviour of this species are among the best studied of any prehistoric animal due to the discovery of frozen carcasses in Siberia and Alaska, as well as skeletons, teeth, stomach contents, dung, and depiction from life in prehistoric cave paintings. […] Fully grown males reached shoulder heights between 2.7 and 3.4 m (9 and 11 ft) and weighed up to 6 tonnes (6.6 short tons). This is almost as large as extant male African elephants, which commonly reach 3–3.4 m (9.8–11.2 ft), and is less than the size of the earlier mammoth species M. meridionalis and M. trogontherii, and the contemporary M. columbi. […] Woolly mammoths had several adaptations to the cold, most noticeably the layer of fur covering all parts of the body. Other adaptations to cold weather include ears that are far smaller than those of modern elephants […] The small ears reduced heat loss and frostbite, and the tail was short for the same reason […] They had a layer of fat up to 10 cm (3.9 in) thick under the skin, which helped to keep them warm. […] The coat consisted of an outer layer of long, coarse “guard hair”, which was 30 cm (12 in) on the upper part of the body, up to 90 cm (35 in) in length on the flanks and underside, and 0.5 mm (0.020 in) in diameter, and a denser inner layer of shorter, slightly curly under-wool, up to 8 cm (3.1 in) long and 0.05 mm (0.0020 in) in diameter. The hairs on the upper leg were up to 38 cm (15 in) long, and those of the feet were 15 cm (5.9 in) long, reaching the toes. The hairs on the head were relatively short, but longer on the underside and the sides of the trunk. The tail was extended by coarse hairs up to 60 cm (24 in) long, which were thicker than the guard hairs. It is likely that the woolly mammoth moulted seasonally, and that the heaviest fur was shed during spring.”
“Woolly mammoths had very long tusks, which were more curved than those of modern elephants. The largest known male tusk is 4.2 m (14 ft) long and weighs 91 kg (201 lb), but 2.4–2.7 m (7.9–8.9 ft) and 45 kg (99 lb) was a more typical size. Female tusks averaged at 1.5–1.8 m (4.9–5.9 ft) and weighed 9 kg (20 lb). About a quarter of the length was inside the sockets. The tusks grew spirally in opposite directions from the base and continued in a curve until the tips pointed towards each other. In this way, most of the weight would have been close to the skull, and there would be less torque than with straight tusks. The tusks were usually asymmetrical and showed considerable variation, with some tusks curving down instead of outwards and some being shorter due to breakage.”
“Woolly mammoths needed a varied diet to support their growth, like modern elephants. An adult of six tonnes would need to eat 180 kg (397 lb) daily, and may have foraged as long as twenty hours every day. […] Woolly mammoths continued growing past adulthood, like other elephants. Unfused limb bones show that males grew until they reached the age of 40, and females grew until they were 25. The frozen calf “Dima” was 90 cm (35 in) tall when it died at the age of 6–12 months. At this age, the second set of molars would be in the process of erupting, and the first set would be worn out at 18 months of age. The third set of molars lasted for ten years, and this process was repeated until the final, sixth set emerged when the animal was 30 years old. A woolly mammoth could probably reach the age of 60, like modern elephants of the same size. By then the last set of molars would be worn out, the animal would be unable to chew and feed, and it would die of starvation.”
“The habitat of the woolly mammoth is known as “mammoth steppe” or “tundra steppe”. This environment stretched across northern Asia, many parts of Europe, and the northern part of North America during the last ice age. It was similar to the grassy steppes of modern Russia, but the flora was more diverse, abundant, and grew faster. Grasses, sedges, shrubs, and herbaceous plants were present, and scattered trees were mainly found in southern regions. This habitat was not dominated by ice and snow, as is popularly believed, since these regions are thought to have been high-pressure areas at the time. The habitat of the woolly mammoth also supported other grazing herbivores such as the woolly rhinoceros, wild horses and bison. […] A 2008 study estimated that changes in climate shrank suitable mammoth habitat from 7,700,000 km2 (3,000,000 sq mi) 42,000 years ago to 800,000 km2 (310,000 sq mi) 6,000 years ago. Woolly mammoths survived an even greater loss of habitat at the end of the Saale glaciation 125,000 years ago, and it is likely that humans hunted the remaining populations to extinction at the end of the last glacial period. […] Several woolly mammoth specimens show evidence of being butchered by humans, which is indicated by breaks, cut-marks, and associated stone tools. It is not known how much prehistoric humans relied on woolly mammoth meat, since there were many other large herbivores available. Many mammoth carcasses may have been scavenged by humans rather than hunted. Some cave paintings show woolly mammoths in structures interpreted as pitfall traps. Few specimens show direct, unambiguous evidence of having been hunted by humans.”
“While frozen woolly mammoth carcasses had been excavated by Europeans as early as 1728, the first fully documented specimen was discovered near the delta of the Lena River in 1799 by Ossip Schumachov, a Siberian hunter. Schumachov let it thaw until he could retrieve the tusks for sale to the ivory trade. [Aargh!] […] The 1901 excavation of the “Berezovka mammoth” is the best documented of the early finds. It was discovered by the Berezovka River, and the Russian authorities financed its excavation. Its head was exposed, and the flesh had been scavenged. The animal still had grass between its teeth and on the tongue, showing that it had died suddenly. […] By 1929, the remains of 34 mammoths with frozen soft tissues (skin, flesh, or organs) had been documented. Only four of them were relatively complete. Since then, about that many more have been found.”
ii. Daniel Lambert.
“Daniel Lambert (13 March 1770 – 21 June 1809) was a gaol keeper and animal breeder from Leicester, England, famous for his unusually large size. After serving four years as an apprentice at an engraving and die casting works in Birmingham, he returned to Leicester around 1788 and succeeded his father as keeper of Leicester’s gaol. […] At the time of Lambert’s return to Leicester, his weight began to increase steadily, even though he was athletically active and, by his own account, abstained from drinking alcohol and did not eat unusual amounts of food. In 1805, Lambert’s gaol closed. By this time, he weighed 50 stone (700 lb; 318 kg), and had become the heaviest authenticated person up to that point in recorded history. Unemployable and sensitive about his bulk, Lambert became a recluse.
In 1806, poverty forced Lambert to put himself on exhibition to raise money. In April 1806, he took up residence in London, charging spectators to enter his apartments to meet him. Visitors were impressed by his intelligence and personality, and visiting him became highly fashionable. After some months on public display, Lambert grew tired of exhibiting himself, and in September 1806, he returned, wealthy, to Leicester, where he bred sporting dogs and regularly attended sporting events. Between 1806 and 1809, he made a further series of short fundraising tours.
In June 1809, he died suddenly in Stamford. At the time of his death, he weighed 52 stone 11 lb (739 lb; 335 kg), and his coffin required 112 square feet (10.4 m2) of wood. Despite the coffin being built with wheels to allow easy transport, and a sloping approach being dug to the grave, it took 20 men almost half an hour to drag his casket into the trench, in a newly opened burial ground to the rear of St Martin’s Church.”
“Sensitive about his weight, Daniel Lambert refused to allow himself to be weighed, but sometime around 1805, some friends persuaded him to come with them to a cock fight in Loughborough. Once he had squeezed his way into their carriage, the rest of the party drove the carriage onto a large scale and jumped out. After deducting the weight of the (previously weighed) empty carriage, they calculated that Lambert’s weight was now 50 stone (700 lb; 318 kg), and that he had thus overtaken Edward Bright, the 616-pound (279 kg) “Fat Man of Maldon”, as the heaviest authenticated person in recorded history.
Despite his shyness, Lambert badly needed to earn money, and saw no alternative to putting himself on display, and charging his spectators. On 4 April 1806, he boarded a specially built carriage and travelled from Leicester to his new home at 53 Piccadilly, then near the western edge of London. For five hours each day, he welcomed visitors into his home, charging each a shilling (about £3.5 as of 2014). […] Lambert shared his interests and knowledge of sports, dogs and animal husbandry with London’s middle and upper classes, and it soon became highly fashionable to visit him, or become his friend. Many called repeatedly; one banker made 20 visits, paying the admission fee on each occasion. […] His business venture was immediately successful, drawing around 400 paying visitors per day. […] People would travel long distances to see him (on one occasion, a party of 14 travelled to London from Guernsey), and many would spend hours speaking with him on animal breeding.”
“After some months in London, Lambert was visited by Józef Boruwłaski, a 3-foot 3-inch (99 cm) dwarf then in his seventies. Born in 1739 to a poor family in rural Pokuttya, Boruwłaski was generally considered to be the last of Europe’s court dwarfs. He was introduced to the Empress Maria Theresa in 1754, and after a short time residing with deposed Polish king Stanisław Leszczyński, he exhibited himself around Europe, thus becoming a wealthy man. At age 60, he retired to Durham, where he became such a popular figure that the City of Durham paid him to live there and he became one of its most prominent citizens […] The meeting of Lambert and Boruwłaski, the largest and smallest men in the country, was the subject of enormous public interest”
“There was no autopsy, and the cause of Lambert’s death is unknown. While many sources say that he died of a fatty degeneration of the heart or of stress on his heart caused by his bulk, his behaviour in the period leading to his death does not match that of someone suffering from cardiac insufficiency; witnesses agree that on the morning of his death he appeared well, before he became short of breath and collapsed. Bondeson (2006) speculates that the most consistent explanation of his death, given his symptoms and medical history, is that he had a sudden pulmonary embolism.”
iii. Geology of the Capitol Reef area.
“The exposed geology of the Capitol Reef area presents a record of mostly Mesozoic-aged sedimentation in an area of North America in and around Capitol Reef National Park, on the Colorado Plateau in southeastern Utah.
Nearly 10,000 feet (3,000 m) of sedimentary strata are found in the Capitol Reef area, representing nearly 200 million years of geologic history of the south-central part of the U.S. state of Utah. These rocks range in age from Permian (as old as 270 million years old) to Cretaceous (as young as 80 million years old). Rock layers in the area reveal ancient climates as varied as rivers and swamps (Chinle Formation), Sahara-like deserts (Navajo Sandstone), and shallow ocean (Mancos Shale).
The area’s first known sediments were laid down as a shallow sea invaded the land in the Permian. At first sandstone was deposited but limestone followed as the sea deepened. After the sea retreated in the Triassic, streams deposited silt before the area was uplifted and underwent erosion. Conglomerate followed by logs, sand, mud and wind-transported volcanic ash were later added. Mid to Late Triassic time saw increasing aridity, during which vast amounts of sandstone were laid down along with some deposits from slow-moving streams. As another sea started to return it periodically flooded the area and left evaporite deposits. Barrier islands, sand bars and later, tidal flats, contributed sand for sandstone, followed by cobbles for conglomerate and mud for shale. The sea retreated, leaving streams, lakes and swampy plains to become the resting place for sediments. Another sea, the Western Interior Seaway, returned in the Cretaceous and left more sandstone and shale only to disappear in the early Cenozoic.”
“The Laramide orogeny compacted the region from about 70 million to 50 million years ago and in the process created the Rocky Mountains. Many monoclines (a type of gentle upward fold in rock strata) were also formed by the deep compressive forces of the Laramide. One of those monoclines, called the Waterpocket Fold, is the major geographic feature of the park. The 100 mile (160 km) long fold has a north-south alignment with a steeply east-dipping side. The rock layers on the west side of the Waterpocket Fold have been lifted more than 7,000 feet (2,100 m) higher than the layers on the east. Thus older rocks are exposed on the western part of the fold and younger rocks on the eastern part. This particular fold may have been created due to movement along a fault in the Precambrian basement rocks hidden well below any exposed formations. Small earthquakes centered below the fold in 1979 may be from such a fault. […] Ten to fifteen million years ago the entire region was uplifted several thousand feet (well over a kilometer) by the creation of the Colorado Plateaus. This time the uplift was more even, leaving the overall orientation of the formations mostly intact. Most of the erosion that carved today’s landscape occurred after the uplift of the Colorado Plateau with much of the major canyon cutting probably occurring between 1 and 6 million years ago.”
iv. Problem of Apollonius.
“Apollonius of Perga (ca. 262 BC – ca. 190 BC) posed and solved this famous problem in his work Ἐπαφαί (Epaphaí, “Tangencies”); this work has been lost, but a 4th-century report of his results by Pappus of Alexandria has survived. Three given circles generically have eight different circles that are tangent to them […] and each solution circle encloses or excludes the three given circles in a different way […] The general statement of Apollonius’ problem is to construct one or more circles that are tangent to three given objects in a plane, where an object may be a line, a point or a circle of any size. These objects may be arranged in any way and may cross one another; however, they are usually taken to be distinct, meaning that they do not coincide. Solutions to Apollonius’ problem are sometimes called Apollonius circles, although the term is also used for other types of circles associated with Apollonius. […] A rich repertoire of geometrical and algebraic methods has been developed to solve Apollonius’ problem, which has been called “the most famous of all” geometry problems.”
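To give a flavour of the algebraic methods mentioned: writing tangency to circle i as (x − xi)² + (y − yi)² = (r + si·ri)² for a sign choice si ∈ {+1, −1}, subtracting the first equation from the other two leaves relations linear in x, y and r, and substituting back gives one quadratic in r per sign choice, eight in all. A minimal sketch of this approach (the three example circles are my own choice, not from the article):

```python
from itertools import product

def apollonius(c1, c2, c3, eps=1e-9):
    """Circles tangent to three given circles (each a tuple (x, y, r)).
    For signs si in {+1, -1}, tangency to circle i reads
    (x - xi)^2 + (y - yi)^2 = (r + si*ri)^2.  Differencing the equations
    is linear in x, y, r; substituting back into the first equation
    leaves one quadratic in r per sign choice."""
    x1, y1, r1 = c1
    sols = set()
    for s1, s2, s3 in product((1, -1), repeat=3):
        rows = []
        for (xi, yi, ri), si in ((c2, s2), (c3, s3)):
            # E1 - Ei rearranged as  A*x + B*y = D + C*r
            rows.append((2 * (xi - x1), 2 * (yi - y1),
                         2 * (s1 * r1 - si * ri),
                         r1**2 - ri**2 - x1**2 - y1**2 + xi**2 + yi**2))
        (A1, B1, C1, D1), (A2, B2, C2, D2) = rows
        det = A1 * B2 - A2 * B1
        if abs(det) < eps:            # collinear centers: not handled here
            continue
        ax, bx = (D1 * B2 - D2 * B1) / det, (C1 * B2 - C2 * B1) / det
        ay, by = (A1 * D2 - A2 * D1) / det, (A1 * C2 - A2 * C1) / det
        u, v = ax - x1, ay - y1       # so x = ax + bx*r, y = ay + by*r
        a = bx**2 + by**2 - 1         # quadratic a*r^2 + b*r + c = 0
        b = 2 * (u * bx + v * by - s1 * r1)
        c = u**2 + v**2 - r1**2
        disc = b * b - 4 * a * c
        if abs(a) < eps or disc < 0:  # line solution / no circle: skip
            continue
        for r in ((-b + disc**0.5) / (2 * a), (-b - disc**0.5) / (2 * a)):
            if r > eps:
                sols.add((round(ax + bx * r, 6),
                          round(ay + by * r, 6), round(r, 6)))
    return sorted(sols)

# Three mutually external unit circles yield all eight tangent circles:
eight = apollonius((0, 0, 1), (4, 0, 1), (2, 3, 1))
```

For three mutually external circles like these, all eight sign choices produce a genuine solution circle; degenerate configurations (collinear centers, tangent-line solutions) are simply skipped in this sketch.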
v. Globular cluster.
“A globular cluster is a spherical collection of stars that orbits a galactic core as a satellite. Globular clusters are very tightly bound by gravity, which gives them their spherical shapes and relatively high stellar densities toward their centers. The name of this category of star cluster is derived from the Latin globulus—a small sphere. A globular cluster is sometimes known more simply as a globular.
Globular clusters, which are found in the halo of a galaxy, contain considerably more stars and are much older than the less dense galactic, or open clusters, which are found in the disk. Globular clusters are fairly common; there are about 150 to 158 currently known globular clusters in the Milky Way, with perhaps 10 to 20 more still undiscovered. Large galaxies can have more: Andromeda, for instance, may have as many as 500. […]
Every galaxy of sufficient mass in the Local Group has an associated group of globular clusters, and almost every large galaxy surveyed has been found to possess a system of globular clusters. The Sagittarius Dwarf galaxy and the disputed Canis Major Dwarf galaxy appear to be in the process of donating their associated globular clusters (such as Palomar 12) to the Milky Way. This demonstrates how many of this galaxy’s globular clusters might have been acquired in the past.
Although it appears that globular clusters contain some of the first stars to be produced in the galaxy, their origins and their role in galactic evolution are still unclear.”
i. Dodo.
“The dodo (Raphus cucullatus) is an extinct flightless bird that was endemic to the island of Mauritius, east of Madagascar in the Indian Ocean. Its closest genetic relative was the also extinct Rodrigues solitaire, the two forming the subfamily Raphinae of the family of pigeons and doves. […] Subfossil remains show the dodo was about 1 metre (3.3 feet) tall and may have weighed 10–18 kg (22–40 lb) in the wild. The dodo’s appearance in life is evidenced only by drawings, paintings and written accounts from the 17th century. Because these vary considerably, and because only some illustrations are known to have been drawn from live specimens, its exact appearance in life remains unresolved. Similarly, little is known with certainty about its habitat and behaviour.”
“The first recorded mention of the dodo was by Dutch sailors in 1598. In the following years, the bird was hunted by sailors, their domesticated animals, and invasive species introduced during that time. The last widely accepted sighting of a dodo was in 1662. Its extinction was not immediately noticed, and some considered it to be a mythical creature. In the 19th century, research was conducted on a small quantity of remains of four specimens that had been brought to Europe in the early 17th century. Among these is a dried head, the only soft tissue of the dodo that remains today. Since then, a large amount of subfossil material has been collected from Mauritius […] The dodo was anatomically similar to pigeons in many features. […] The dodo differed from other pigeons mainly in the small size of the wings and the large size of the beak in proportion to the rest of the cranium. […] Many of the skeletal features that distinguish the dodo and the Rodrigues solitaire, its closest relative, from pigeons have been attributed to their flightlessness. […] The lack of mammalian herbivores competing for resources on these islands allowed the solitaire and the dodo to attain very large sizes.” [If the last sentence sparked your interest and you’d like to know more, I have previously covered a great book on related topics here on the blog]
“The etymology of the word dodo is unclear. Some ascribe it to the Dutch word dodoor for “sluggard”, but it is more probably related to Dodaars, which means either “fat-arse” or “knot-arse”, referring to the knot of feathers on the hind end. […] The traditional image of the dodo is of a very fat and clumsy bird, but this view may be exaggerated. The general opinion of scientists today is that many old European depictions were based on overfed captive birds or crudely stuffed specimens.”
“Like many animals that evolved in isolation from significant predators, the dodo was entirely fearless of humans. This fearlessness and its inability to fly made the dodo easy prey for sailors. Although some scattered reports describe mass killings of dodos for ships’ provisions, archaeological investigations have found scant evidence of human predation. […] The human population on Mauritius (an area of 1,860 km2 or 720 sq mi) never exceeded 50 people in the 17th century, but they introduced other animals, including dogs, pigs, cats, rats, and crab-eating macaques, which plundered dodo nests and competed for the limited food resources. At the same time, humans destroyed the dodo’s forest habitat. The impact of these introduced animals, especially the pigs and macaques, on the dodo population is currently considered more severe than that of hunting. […] Even though the rareness of the dodo was reported already in the 17th century, its extinction was not recognised until the 19th century. This was partly because, for religious reasons, extinction was not believed possible until later proved so by Georges Cuvier, and partly because many scientists doubted that the dodo had ever existed. It seemed altogether too strange a creature, and many believed it a myth.”
I found some of the contemporary accounts and illustrations included in the article, from which behavioural patterns etc. have been inferred, quite depressing. Two illustrative quotes and a contemporary engraving are included below:
“Blue parrots are very numerous there, as well as other birds; among which are a kind, conspicuous for their size, larger than our swans, with huge heads only half covered with skin as if clothed with a hood. […] These we used to call ‘Walghvogel’, for the reason that the longer and oftener they were cooked, the less soft and more insipid eating they became. Nevertheless their belly and breast were of a pleasant flavour and easily masticated.”
“I have seen in Mauritius birds bigger than a Swan, without feathers on the body, which is covered with a black down; the hinder part is round, the rump adorned with curled feathers as many in number as the bird is years old. […] We call them Oiseaux de Nazaret. The fat is excellent to give ease to the muscles and nerves.”
ii. Armero tragedy.
“The Armero tragedy […] was one of the major consequences of the eruption of the Nevado del Ruiz stratovolcano in Tolima, Colombia, on November 13, 1985. After 69 years of dormancy, the volcano’s eruption caught nearby towns unaware, even though the government had received warnings from multiple volcanological organizations to evacuate the area when volcanic activity had been detected in September 1985.
As pyroclastic flows erupted from the volcano’s crater, they melted the mountain’s glaciers, sending four enormous lahars (volcanically induced mudslides, landslides, and debris flows) down its slopes at 50 kilometers per hour (30 miles per hour). The lahars picked up speed in gullies and coursed into the six major rivers at the base of the volcano; they engulfed the town of Armero, killing more than 20,000 of its almost 29,000 inhabitants. Casualties in other towns, particularly Chinchiná, brought the overall death toll to 23,000. […] The relief efforts were hindered by the composition of the mud, which made it nearly impossible to move through without becoming stuck. By the time relief workers reached Armero twelve hours after the eruption, many of the victims with serious injuries were dead. The relief workers were horrified by the landscape of fallen trees, disfigured human bodies, and piles of debris from entire houses. […] The event was a foreseeable catastrophe exacerbated by the populace’s unawareness of the volcano’s destructive history; geologists and other experts had warned authorities and media outlets about the danger over the weeks and days leading up to the eruption.”
“The day of the eruption, black ash columns erupted from the volcano at approximately 3:00 pm local time. The local Civil Defense director was promptly alerted to the situation. He contacted INGEOMINAS, which ruled that the area should be evacuated; he was then told to contact the Civil Defense directors in Bogotá and Tolima. Between 5:00 and 7:00 pm, the ash stopped falling, and local officials instructed people to “stay calm” and go inside. Around 5:00 pm an emergency committee meeting was called, and when it ended at 7:00 pm, several members contacted the regional Red Cross over the intended evacuation efforts at Armero, Mariquita, and Honda. The Ibagué Red Cross contacted Armero’s officials and ordered an evacuation, which was not carried out because of electrical problems caused by a storm. The storm’s heavy rain and constant thunder may have overpowered the noise of the volcano, and with no systematic warning efforts, the residents of Armero were completely unaware of the continuing activity at Ruiz. At 9:45 pm, after the volcano had erupted, Civil Defense officials from Ibagué and Murillo tried to warn Armero’s officials, but could not make contact. Later they overheard conversations between individual officials of Armero and others; famously, a few heard the Mayor of Armero speaking on a ham radio, saying “that he did not think there was much danger”, when he was overtaken by the lahar.”
“The lahars, formed of water, ice, pumice, and other rocks, incorporated clay from eroding soil as they traveled down the volcano’s flanks. They ran down the volcano’s sides at an average speed of 60 kilometers (40 mi) per hour, dislodging rock and destroying vegetation. After descending thousands of meters down the side of the volcano, the lahars followed the six river valleys leading from the volcano, where they grew to almost four times their original volume. In the Gualí River, a lahar reached a maximum width of 50 meters (160 ft).
Survivors in Armero described the night as “quiet”. Volcanic ash had been falling throughout the day, but residents were informed it was nothing to worry about. Later in the afternoon, ash began falling again after a long period of quiet. Local radio stations reported that residents should remain calm and ignore the material. One survivor reported going to the fire department to be informed that the ash was “nothing”. […] At 11:30 pm, the first lahar hit, followed shortly by the others. One of the lahars virtually erased Armero; three-quarters of its 28,700 inhabitants were killed. Proceeding in three major waves, this lahar was 30 meters (100 ft) deep, moved at 12 meters per second (39 ft/s), and lasted ten to twenty minutes. Traveling at about 6 meters (20 ft) per second, the second lahar lasted thirty minutes and was followed by smaller pulses. A third major pulse brought the lahar’s duration to roughly two hours; by that point, 85 percent of Armero was enveloped in mud. Survivors described people holding on to debris from their homes in attempts to stay above the mud. Buildings collapsed, crushing people and raining down debris. The front of the lahar contained boulders and cobbles which would have crushed anyone in their path, while the slower parts were dotted by fine, sharp stones which caused lacerations. Mud moved into open wounds and other open body parts – the eyes, ears, and mouth – and placed pressure capable of inducing traumatic asphyxia in one or two minutes upon people buried in it.”
“The volcano continues to pose a serious threat to nearby towns and villages. Of the threats, the one with the most potential for danger is that of small-volume eruptions, which can destabilize glaciers and trigger lahars. Although much of the volcano’s glacier mass has retreated, a significant volume of ice still sits atop Nevado del Ruiz and other volcanoes in the Ruiz–Tolima massif. Melting just 10 percent of the ice would produce lahars with a volume of up to 200 million cubic meters – similar to the lahar that destroyed Armero in 1985. In just hours, these lahars can travel up to 100 km along river valleys. Estimates show that up to 500,000 people living in the Combeima, Chinchina, Coello-Toche, and Guali valleys are at risk, with 100,000 individuals being considered to be at high risk.”
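The quoted numbers fit together on the back of an envelope. The sketch below treats the lahar volume as equal to the melted ice volume (a simplification, since lahars also grow by entraining debris, so the ice figure is an upper-bound-style estimate of my own), and borrows the roughly 60 km/h average lahar speed reported above for the 1985 event:

```python
# "Melting just 10 percent of the ice would produce lahars with a volume
# of up to 200 million cubic meters" -- so the implied remaining ice is:
lahar_volume_m3 = 200e6
ice_volume_km3 = (lahar_volume_m3 / 0.10) / 1e9
# -> roughly 2 cubic kilometers of ice still atop the Ruiz-Tolima massif

# "In just hours, these lahars can travel up to 100 km along river valleys":
reach_km = 100      # quoted reach
speed_kmh = 60      # average lahar speed reported for the 1985 event
travel_time_h = reach_km / speed_kmh
# -> well under two hours to reach valleys 100 km away
```

The short travel time is the crux of the hazard: even with a working warning system, towns in those valleys would have very little time to evacuate.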
iii. Asteroid belt (featured).
“The asteroid belt is the region of the Solar System located roughly between the orbits of the planets Mars and Jupiter. It is occupied by numerous irregularly shaped bodies called asteroids or minor planets. The asteroid belt is also termed the main asteroid belt or main belt to distinguish its members from other asteroids in the Solar System such as near-Earth asteroids and trojan asteroids. About half the mass of the belt is contained in the four largest asteroids, Ceres, Vesta, Pallas, and Hygiea. Vesta, Pallas, and Hygiea have mean diameters of more than 400 km, whereas Ceres, the asteroid belt’s only dwarf planet, is about 950 km in diameter. The remaining bodies range down to the size of a dust particle.”
“The asteroid belt formed from the primordial solar nebula as a group of planetesimals, the smaller precursors of the planets, which in turn formed protoplanets. Between Mars and Jupiter, however, gravitational perturbations from Jupiter imbued the protoplanets with too much orbital energy for them to accrete into a planet. Collisions became too violent, and instead of fusing together, the planetesimals and most of the protoplanets shattered. As a result, 99.9% of the asteroid belt’s original mass was lost in the first 100 million years of the Solar System’s history.”
“In an anonymous footnote to his 1766 translation of Charles Bonnet‘s Contemplation de la Nature, the astronomer Johann Daniel Titius of Wittenberg noted an apparent pattern in the layout of the planets. If one began a numerical sequence at 0, then included 3, 6, 12, 24, 48, etc., doubling each time, and added four to each number and divided by 10, this produced a remarkably close approximation to the radii of the orbits of the known planets as measured in astronomical units. This pattern, now known as the Titius–Bode law, predicted the semi-major axes of the six planets of the time (Mercury, Venus, Earth, Mars, Jupiter and Saturn) provided one allowed for a “gap” between the orbits of Mars and Jupiter. […] On January 1, 1801, Giuseppe Piazzi, Chair of Astronomy at the University of Palermo, Sicily, found a tiny moving object in an orbit with exactly the radius predicted by the Titius–Bode law. He dubbed it Ceres, after the Roman goddess of the harvest and patron of Sicily. Piazzi initially believed it a comet, but its lack of a coma suggested it was a planet. Fifteen months later, Heinrich Wilhelm Olbers discovered a second object in the same region, Pallas. Unlike the other known planets, the objects remained points of light even under the highest telescope magnifications instead of resolving into discs. Apart from their rapid movement, they appeared indistinguishable from stars. Accordingly, in 1802 William Herschel suggested they be placed into a separate category, named asteroids, after the Greek asteroeides, meaning “star-like”. […] The discovery of Neptune in 1846 led to the discrediting of the Titius–Bode law in the eyes of scientists, because its orbit was nowhere near the predicted position. […] One hundred asteroids had been located by mid-1868, and in 1891 the introduction of astrophotography by Max Wolf accelerated the rate of discovery still further. A total of 1,000 asteroids had been found by 1921, 10,000 by 1981, and 100,000 by 2000. 
Modern asteroid survey systems now use automated means to locate new minor planets in ever-increasing quantities.”
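Titius's recipe is simple enough to sketch in a few lines of code (a minimal illustration of the rule described in the quote; the function name and the list of bodies are mine, not from the article):

```python
def titius_bode(n):
    """Predicted orbital radius in AU for the n-th body (n = 0, 1, 2, ...).

    Start the sequence at 0, then 3, doubling thereafter (6, 12, 24, ...);
    add four to each number and divide by ten, as in the 1766 footnote.
    """
    seq = 0 if n == 0 else 3 * 2 ** (n - 1)
    return (seq + 4) / 10

# Mercury, Venus, Earth, Mars, the Mars-Jupiter "gap", Jupiter, Saturn
predictions = [titius_bode(n) for n in range(7)]
print(predictions)  # → [0.4, 0.7, 1.0, 1.6, 2.8, 5.2, 10.0]
```

The n = 4 slot, 2.8 AU, is the “gap” that Ceres (semi-major axis about 2.77 AU) appeared to fill; as the quote notes, the rule breaks down completely at Neptune.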
“In 1802, shortly after discovering Pallas, Heinrich Olbers suggested to William Herschel that Ceres and Pallas were fragments of a much larger planet that once occupied the Mars–Jupiter region, this planet having suffered an internal explosion or a cometary impact many million years before. Over time, however, this hypothesis has fallen from favor. […] Today, most scientists accept that, rather than fragmenting from a progenitor planet, the asteroids never formed a planet at all. […] The asteroids are not samples of the primordial Solar System. They have undergone considerable evolution since their formation, including internal heating (in the first few tens of millions of years), surface melting from impacts, space weathering from radiation, and bombardment by micrometeorites. […] collisions between asteroids occur frequently (on astronomical time scales). Collisions between main-belt bodies with a mean radius of 10 km are expected to occur about once every 10 million years. A collision may fragment an asteroid into numerous smaller pieces (leading to the formation of a new asteroid family). Conversely, collisions that occur at low relative speeds may also join two asteroids. After more than 4 billion years of such processes, the members of the asteroid belt now bear little resemblance to the original population. […] The current asteroid belt is believed to contain only a small fraction of the mass of the primordial belt. Computer simulations suggest that the original asteroid belt may have contained mass equivalent to the Earth. Primarily because of gravitational perturbations, most of the material was ejected from the belt within about a million years of formation, leaving behind less than 0.1% of the original mass. Since their formation, the size distribution of the asteroid belt has remained relatively stable: there has been no significant increase or decrease in the typical dimensions of the main-belt asteroids.”
“Contrary to popular imagery, the asteroid belt is mostly empty. The asteroids are spread over such a large volume that it would be improbable to reach an asteroid without aiming carefully. Nonetheless, hundreds of thousands of asteroids are currently known, and the total number ranges in the millions or more, depending on the lower size cutoff. Over 200 asteroids are known to be larger than 100 km, and a survey in the infrared wavelengths has shown that the asteroid belt has 0.7–1.7 million asteroids with a diameter of 1 km or more. […] The total mass of the asteroid belt is estimated to be 2.8×10²¹ to 3.2×10²¹ kilograms, which is just 4% of the mass of the Moon. […] Several otherwise unremarkable bodies in the outer belt show cometary activity. Because their orbits cannot be explained through capture of classical comets, it is thought that many of the outer asteroids may be icy, with the ice occasionally exposed to sublimation through small impacts. Main-belt comets may have been a major source of the Earth’s oceans, because the deuterium–hydrogen ratio is too low for classical comets to have been the principal source. […] Of the 50,000 meteorites found on Earth to date, 99.8 percent are believed to have originated in the asteroid belt.”
iv. Series (mathematics). This article has a lot of stuff, including lots of links to other stuff.
v. Occupation of Japan. Interesting article, I haven’t really read very much about this before. Some quotes:
“At the head of the Occupation administration was General MacArthur who was technically supposed to defer to an advisory council set up by the Allied powers, but in practice did everything himself. As a result, this period was one of significant American influence […] MacArthur’s first priority was to set up a food distribution network; following the collapse of the ruling government and the wholesale destruction of most major cities, virtually everyone was starving. Even with these measures, millions of people were still on the brink of starvation for several years after the surrender.”
“By the end of 1945, more than 350,000 U.S. personnel were stationed throughout Japan. By the beginning of 1946, replacement troops began to arrive in the country in large numbers and were assigned to MacArthur’s Eighth Army, headquartered in Tokyo’s Dai-Ichi building. Of the main Japanese islands, Kyūshū was occupied by the 24th Infantry Division, with some responsibility for Shikoku. Honshū was occupied by the First Cavalry Division. Hokkaido was occupied by the 11th Airborne Division.
By June 1950, all these army units had suffered extensive troop reductions and their combat effectiveness was seriously weakened. When North Korea invaded South Korea (see Korean War), elements of the 24th Division were flown into South Korea to try to stem the massive invasion force there, but the green occupation troops, while acquitting themselves well when suddenly thrown into combat almost overnight, suffered heavy casualties and were forced into retreat until other Japan occupation troops could be sent to assist.”
“During the Occupation, GHQ/SCAP abolished many of the financial coalitions known as the Zaibatsu, which had previously monopolized industry. […] A major land reform was also conducted […] Between 1947 and 1949, approximately 5,800,000 acres (23,000 km²) of land (approximately 38% of Japan’s cultivated land) were purchased from the landlords under the government’s reform program and resold at extremely low prices (after inflation) to the farmers who worked them. By 1950, three million peasants had acquired land, dismantling a power structure that the landlords had long dominated.”
“There are allegations that during the three months in 1945 when Okinawa was gradually occupied there were rapes committed by U.S. troops. According to some accounts, US troops committed thousands of rapes during the campaign.
Many Japanese civilians in the Japanese mainland feared that the Allied occupation troops were likely to rape Japanese women. The Japanese authorities set up a large system of prostitution facilities (RAA) in order to protect the population. […] However, there was a resulting large rise in venereal disease among the soldiers, which led MacArthur to close down the prostitution in early 1946. The incidence of rape increased after the closure of the brothels, possibly eight-fold; […] “According to one calculation the number of rapes and assaults on Japanese women amounted to around 40 daily while the RAA was in operation, and then rose to an average of 330 a day after it was terminated in early 1946.” Michael S. Molasky states that while rape and other violent crime was widespread in naval ports like Yokosuka and Yokohama during the first few weeks of occupation, according to Japanese police reports and journalistic studies, the number of incidents declined shortly after and were not common on mainland Japan throughout the rest of occupation. Two weeks into the occupation, the Occupation administration began censoring all media. This included any mention of rape or other sensitive social issues.”
“Post-war Japan was chaotic. The air raids on Japan’s urban centers left millions displaced, and food shortages, created by bad harvests and the demands of the war, worsened when the seizure of food from Korea, Taiwan, and China ceased. Repatriation of Japanese living in other parts of Asia only aggravated the problems in Japan as these displaced people put more strain on already scarce resources. Over 5.1 million Japanese returned to Japan in the fifteen months following October 1, 1945. Alcohol and drug abuse became major problems. Deep exhaustion, declining morale and despair were so widespread that it was termed the “kyodatsu condition” (虚脱状態 kyodatsu jōtai, lit. “state of lethargy”). Inflation was rampant and many people turned to the black market for even the most basic goods. These black markets in turn were often places of turf wars between rival gangs, like the Shibuya incident in 1946.”
i. Albert Stevens.
“Albert Stevens (1887–1966), also known as patient CAL-1, was the subject of a human radiation experiment, and survived the highest known accumulated radiation dose in any human. On May 14, 1945, he was injected with 131 kBq (3.55 µCi) of plutonium without his knowledge or informed consent.
Plutonium remained present in his body for the remainder of his life, the amount diminishing slowly through radioactive decay and biological elimination. Stevens died of heart disease some 20 years later, having accumulated an effective radiation dose of 64 Sv (6400 rem) over that period. The current annual permitted dose for a radiation worker in the United States is 5 rem. […] Stevens’s annual dose was approximately 60 times this amount.”
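The “approximately 60 times” figure follows directly from the quoted numbers; here is a quick back-of-the-envelope check (the even 20-year averaging is my simplification of “some 20 years later”):

```python
total_rem = 6400   # Stevens's accumulated effective dose, 64 Sv (6400 rem)
years = 20         # roughly the interval between injection and his death
permitted = 5      # current annual US limit for radiation workers, in rem

annual_rem = total_rem / years   # average dose absorbed per year
ratio = annual_rem / permitted   # multiple of the permitted annual dose

print(annual_rem, ratio)  # → 320.0 64.0
```

Averaged this way the multiple comes out at 64; the quote presumably rounds this to “approximately 60 times”.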
“Plutonium was handled extensively by chemists, technicians, and physicists taking part in the Manhattan Project, but the effects of plutonium exposure on the human body were largely unknown. A few mishaps in 1944 had caused considerable alarm amongst project leaders, and contamination was becoming a major problem in and outside the laboratories. […] As the Manhattan Project continued to use plutonium, airborne contamination began to be a major concern. Nose swipes were taken frequently of the workers, with numerous cases of moderate and high readings. […] Tracer experiments were begun in 1944 with rats and other animals with the knowledge of all of the Manhattan project managers and health directors of the various sites. In 1945, human tracer experiments began with the intent to determine how to properly analyze excretion samples to estimate body burden. Numerous analytic methods were devised by the lead doctors at the Met Lab (Chicago), Los Alamos, Rochester, Oak Ridge, and Berkeley. The first human plutonium injection experiments were approved in April 1945 for three tests: April 10 at the Manhattan Project Army Hospital in Oak Ridge, April 26 at Billings Hospital in Chicago, and May 14 at the University of California Hospital in San Francisco. Albert Stevens was the person selected in the California test and designated CAL-1 in official documents. […] The plutonium experiments were not isolated events. During this time, cancer researchers were attempting to discover whether certain radioactive elements might be useful to treat cancer. Recent studies on radium, polonium, and uranium proved foundational to the study of Pu toxicity. […] The mastermind behind this human experiment with plutonium was Dr. Joseph Gilbert Hamilton, a Manhattan Project doctor in charge of the human experiments in California. Hamilton had been experimenting on people (including himself) since the 1930s at Berkeley.
[…] Hamilton eventually succumbed to the radiation that he explored for most of his adult life: he died of leukemia at the age of 49.”
“Although Stevens was the person who received the highest dose of radiation during the plutonium experiments, he was neither the first nor the last subject to be studied. Eighteen people aged 4 to 69 were injected with plutonium. Subjects who were chosen for the experiment had been diagnosed with a terminal disease. They lived from 6 days up to 44 years past the time of their injection. Eight of the 18 died within 2 years of the injection. All died from their preexisting terminal illness, or cardiac illnesses. […] As with all radiological testing during World War II, it would have been difficult to receive informed consent for Pu injection studies on civilians. Within the Manhattan Project, plutonium was referred to often by its code “49” or simply the “product.” Few outside of the Manhattan Project would have known of plutonium, much less of the dangers of radioactive isotopes inside the body. There is no evidence that Stevens had any idea that he was the subject of a secret government experiment in which he would be subjected to a substance that would have no benefit to his health.”
The best part is perhaps this: Stevens was not terminal: “He had checked into the University of California Hospital in San Francisco with a gastric ulcer that was misdiagnosed as terminal cancer.” Given that one of the subjects survived for 44 years after injection, and that four others were still alive by the time Stevens died, he was evidently not the only one who was misdiagnosed, and one interpretation of the fact that more than half survived beyond two years is that the definition of ‘terminal’ applied in this context may have been, well, slightly flexible (especially considering that large injections of radioactive poisons will not exactly have increased these people’s life expectancies). Today the term is usually reserved for conditions people can expect to die from within six months; two years is a long time in this context. It may, however, also to some extent just have reflected the state of medical science at the time – also illustrative in that respect is how the surgeons screwed him over during his illness: “Half of the left lobe of the liver, the entire spleen, most of the ninth rib, lymph nodes, part of the pancreas, and a portion of the omentum… were taken out” to help prevent the spread of the cancer that Stevens did not have. In case you were wondering, not only did they not tell him he was part of an experiment; they also never told him he had been misdiagnosed with cancer.
ii. Aberration of light.
“The aberration of light (also referred to as astronomical aberration or stellar aberration) is an astronomical phenomenon which produces an apparent motion of celestial objects about their locations dependent on the velocity of the observer. Aberration causes objects to appear to be angled or tilted towards the direction of motion of the observer compared to when the observer is stationary. The change in angle is typically very small, on the order of v/c where c is the speed of light and v the velocity of the observer. In the case of “stellar” or “annual” aberration, the apparent position of a star to an observer on Earth varies periodically over the course of a year as the Earth’s velocity changes as it revolves around the Sun […] Aberration is historically significant because of its role in the development of the theories of light, electromagnetism and, ultimately, the theory of Special Relativity. […] In 1729, James Bradley provided a classical explanation for it in terms of the finite speed of light relative to the motion of the Earth in its orbit around the Sun, which he used to make one of the earliest measurements of the speed of light. However, Bradley’s theory was incompatible with 19th century theories of light, and aberration became a major motivation for the aether drag theories of Augustin Fresnel (in 1818) and G. G. Stokes (in 1845), and for Hendrik Lorentz’s aether theory of electromagnetism in 1892. The aberration of light, together with Lorentz’s elaboration of Maxwell’s electrodynamics, the moving magnet and conductor problem, the negative aether drift experiments, as well as the Fizeau experiment, led Albert Einstein to develop the theory of Special Relativity in 1905, which provided a conclusive explanation for the aberration phenomenon. […]
Aberration may be explained as the difference in angle of a beam of light in different inertial frames of reference. A common analogy is to the apparent direction of falling rain: If rain is falling vertically in the frame of reference of a person standing still, then to a person moving forwards the rain will appear to arrive at an angle, requiring the moving observer to tilt their umbrella forwards. The faster the observer moves, the more tilt is needed.
The net effect is that light rays striking the moving observer from the sides in a stationary frame will come angled from ahead in the moving observer’s frame. This effect is sometimes called the “searchlight” or “headlight” effect.
In the case of annual aberration of starlight, the direction of incoming starlight as seen in the Earth’s moving frame is tilted relative to the angle observed in the Sun’s frame. Since the direction of motion of the Earth changes during its orbit, the direction of this tilting changes during the course of the year, and causes the apparent position of the star to differ from its true position as measured in the inertial frame of the Sun.
While classical reasoning gives intuition for aberration, it leads to a number of physical paradoxes […] The theory of Special Relativity is required to correctly account for aberration.”
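The v/c order-of-magnitude claim is easy to make concrete with a quick numerical sketch (Earth's mean orbital speed of roughly 29.78 km/s is my assumed input, not a figure from the article):

```python
import math

v_earth = 29.78      # Earth's mean orbital speed, km/s (assumed value)
c = 299_792.458      # speed of light, km/s

theta_rad = v_earth / c                        # aberration angle, small-angle approximation
theta_arcsec = math.degrees(theta_rad) * 3600  # convert radians to arcseconds

print(round(theta_arcsec, 1))  # → 20.5
```

This roughly 20.5″ figure is the well-known constant of annual aberration: the maximum apparent displacement a star traces out over the course of a year as the Earth's velocity vector swings around its orbit.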
The article has much more, in particular it has a lot of stuff about historical aspects pertaining to this topic.
iii. Spanish Armada.
“The Spanish Armada (Spanish: Grande y Felicísima Armada or Armada Invencible, literally “Great and Most Fortunate Navy” or “Invincible Fleet”) was a Spanish fleet of 130 ships that sailed from A Coruña in August 1588 under the command of the Duke of Medina Sidonia with the purpose of escorting an army from Flanders to invade England. The strategic aim was to overthrow Queen Elizabeth I of England and the Tudor establishment of Protestantism in England, with the expectation that this would put a stop to English interference in the Spanish Netherlands and to the harm caused to Spanish interests by English and Dutch privateering.
The Armada chose not to attack the English fleet at Plymouth, then failed to establish a temporary anchorage in the Solent, after one Spanish ship had been captured by Francis Drake in the English Channel, and finally dropped anchor off Calais. While awaiting communications from the Duke of Parma‘s army the Armada was scattered by an English fireship attack. In the ensuing Battle of Gravelines the Spanish fleet was damaged and forced to abandon its rendezvous with Parma’s army, who were blockaded in harbour by Dutch flyboats. The Armada managed to regroup and, driven by southwest winds, withdrew north, with the English fleet harrying it up the east coast of England. The commander ordered a return to Spain, but the Armada was disrupted during severe storms in the North Atlantic and a large portion of the vessels were wrecked on the coasts of Scotland and Ireland. Of the initial 130 ships over a third failed to return. […] The expedition was the largest engagement of the undeclared Anglo-Spanish War (1585–1604). The following year England organised a similar large-scale campaign against Spain, the Drake-Norris Expedition, also known as the Counter-Armada of 1589, which was also unsuccessful. […]
The fleet was composed of 130 ships, 8,000 sailors and 18,000 soldiers, and bore 1,500 brass guns and 1,000 iron guns. […] In the Spanish Netherlands 30,000 soldiers awaited the arrival of the armada, the plan being to use the cover of the warships to convey the army on barges to a place near London. All told, 55,000 men were to have been mustered, a huge army for that time. […] The English fleet outnumbered the Spanish, with 200 ships to 130, while the Spanish fleet outgunned the English—its available firepower was 50% more than that of the English. The English fleet consisted of the 34 ships of the royal fleet (21 of which were galleons of 200 to 400 tons), and 163 other ships, 30 of which were of 200 to 400 tons and carried up to 42 guns each; 12 of these were privateers owned by Lord Howard of Effingham, Sir John Hawkins and Sir Francis Drake. […] The Armada was delayed by bad weather […], and was not sighted in England until 19 July, when it appeared off The Lizard in Cornwall. The news was conveyed to London by a system of beacons that had been constructed all the way along the south coast.”
“During all the engagements, the Spanish heavy guns could not easily be run in for reloading because of their close spacing and the quantities of supplies stowed between decks […] Instead the gunners fired once and then jumped to the rigging to attend to their main task as marines ready to board enemy ships, as had been the practice in naval warfare at the time. In fact, evidence from Armada wrecks in Ireland shows that much of the fleet’s ammunition was never spent. Their determination to fight by boarding, rather than cannon fire at a distance, proved a weakness for the Spanish; it had been effective on occasions such as the battles of Lepanto and Ponta Delgada (1582), but the English were aware of this strength and sought to avoid it by keeping their distance. With its superior manoeuvrability, the English fleet provoked Spanish fire while staying out of range. The English then closed, firing repeated and damaging broadsides into the enemy ships. This also enabled them to maintain a position to windward so that the heeling Armada hulls were exposed to damage below the water line. Many of the gunners were killed or wounded, and the task of manning the cannon often fell to the regular foot soldiers on board, who did not know how to operate the guns. The ships were close enough for sailors on the upper decks of the English and Spanish ships to exchange musket fire. […] The outcome seemed to vindicate the English strategy and resulted in a revolution in naval battle tactics with the promotion of gunnery, which until then had played a supporting role to the tasks of ramming and boarding.”
“In September 1588 the Armada sailed around Scotland and Ireland into the North Atlantic. The ships were beginning to show wear from the long voyage, and some were kept together by having their hulls bundled up with cables. Supplies of food and water ran short. The intention would have been to keep well to the west of the coast of Scotland and Ireland, in the relative safety of the open sea. However, there being at that time no way of accurately measuring longitude, the Spanish were not aware that the Gulf Stream was carrying them north and east as they tried to move west, and they eventually turned south much further to the east than planned, a devastating navigational error. Off the coasts of Scotland and Ireland the fleet ran into a series of powerful westerly winds […] Because so many anchors had been abandoned during the escape from the English fireships off Calais, many of the ships were incapable of securing shelter as they reached the coast of Ireland and were driven onto the rocks. Local men looted the ships. […] more ships and sailors were lost to cold and stormy weather than in direct combat. […] Following the gales it is reckoned that 5,000 men died, by drowning, starvation and slaughter at the hands of English forces after they were driven ashore in Ireland; only half of the Spanish Armada fleet returned home to Spain. Reports of the passage around Ireland abound with strange accounts of hardship and survival.
In the end, 67 ships and fewer than 10,000 men survived. Many of the men were near death from disease, as the conditions were very cramped and most of the ships ran out of food and water. Many more died in Spain, or on hospital ships in Spanish harbours, from diseases contracted during the voyage.”
iv. Viral hemorrhagic septicemia.
“Viral hemorrhagic septicemia (VHS) is a deadly infectious fish disease caused by the viral hemorrhagic septicemia virus (VHSV, or VHSv), different strains of which occur in different regions and affect different species. It afflicts over 50 species of freshwater and marine fish in several parts of the northern hemisphere. There are no signs that the disease affects human health. VHS is also known as “Egtved disease,” and VHSV as “Egtved virus.”
Historically, VHS was associated mostly with freshwater salmonids in western Europe, documented as a pathogenic disease among cultured salmonids since the 1950s. Today it is still a major concern for many fish farms in Europe and is therefore being watched closely by the European Community Reference Laboratory for Fish Diseases. It was first discovered in the US in 1988 among salmon returning from the Pacific in Washington State. This North American genotype was identified as a distinct, more marine-stable strain than the European genotype. VHS has since been found afflicting marine fish in the northeastern Pacific Ocean, the North Sea, and the Baltic Sea. Since 2005, massive die-offs have occurred among a wide variety of freshwater species in the Great Lakes region of North America.”
The article isn’t that great but I figured I should include it anyway because I find it sort of fascinating how almost all humans alive can and do live their entire lives without necessarily ever knowing anything about stuff like this. Humans have some really obvious blind spots when it comes to knowledge about some of the stuff we put into our mouths on a regular basis.
v. Bird migration.
“Bird migration is the regular seasonal movement, often north and south along a flyway between breeding and wintering grounds, undertaken by many species of birds. Migration, which carries high costs in predation and mortality, including from hunting by humans, is driven primarily by availability of food. Migration occurs mainly in the Northern Hemisphere where birds are funnelled on to specific routes by natural barriers such as the Mediterranean Sea or the Caribbean Sea.”
“Historically, migration has been recorded as much as 3,000 years ago by Ancient Greek authors including Homer and Aristotle […] Aristotle noted that cranes traveled from the steppes of Scythia to marshes at the headwaters of the Nile. […] Aristotle however suggested that swallows and other birds hibernated. […] It was not until the end of the eighteenth century that migration as an explanation for the winter disappearance of birds from northern climes was accepted […] [and Aristotle’s hibernation] belief persisted as late as 1878, when Elliott Coues listed the titles of no less than 182 papers dealing with the hibernation of swallows.”
“Approximately 1800 of the world’s 10,000 bird species are long-distance migrants. […] Within a species not all populations may be migratory; this is known as “partial migration”. Partial migration is very common in the southern continents; in Australia, 44% of non-passerine birds and 32% of passerine species are partially migratory. In some species, the population at higher latitudes tends to be migratory and will often winter at lower latitude. The migrating birds bypass the latitudes where other populations may be sedentary, where suitable wintering habitats may already be occupied. This is an example of leap-frog migration. Many fully migratory species show leap-frog migration (birds that nest at higher latitudes spend the winter at lower latitudes), and many show the alternative, chain migration, where populations ‘slide’ more evenly North and South without reversing order.
Within a population, it is common for different ages and/or sexes to have different patterns of timing and distance. […] Many, if not most, birds migrate in flocks. For larger birds, flying in flocks reduces the energy cost. Geese in a V-formation may conserve 12–20% of the energy they would need to fly alone. […] Seabirds fly low over water but gain altitude when crossing land, and the reverse pattern is seen in landbirds. However most bird migration is in the range of 150 m (500 ft) to 600 m (2000 ft). Bird strike aviation records from the United States show most collisions occur below 600 m (2000 ft) and almost none above 1800 m (6000 ft). Bird migration is not limited to birds that can fly. Most species of penguin migrate by swimming.”
“Some Bar-tailed Godwits have the longest known non-stop flight of any migrant, flying 11,000 km from Alaska to their New Zealand non-breeding areas. Prior to migration, 55 percent of their bodyweight is stored fat to fuel this uninterrupted journey. […] The Arctic Tern has the longest-distance migration of any bird, and sees more daylight than any other, moving from its Arctic breeding grounds to the Antarctic non-breeding areas. One Arctic Tern, ringed (banded) as a chick on the Farne Islands off the British east coast, reached Melbourne, Australia in just three months from fledging, a sea journey of over 22,000 km (14,000 mi). […] The most pelagic species, mainly in the ‘tubenose’ order Procellariiformes, are great wanderers, and the albatrosses of the southern oceans may circle the globe as they ride the “roaring forties” outside the breeding season. The tubenoses spread widely over large areas of open ocean, but congregate when food becomes available. Many are also among the longest-distance migrants; Sooty Shearwaters nesting on the Falkland Islands migrate 14,000 km (8,700 mi) between the breeding colony and the North Atlantic Ocean off Norway. Some Manx Shearwaters do this same journey in reverse. As they are long-lived birds, they may cover enormous distances during their lives; one record-breaking Manx Shearwater is calculated to have flown 8 million km (5 million miles) during its over-50 year lifespan.”
“Bird migration is primarily, but not entirely, a Northern Hemisphere phenomenon. This is because land birds in high northern latitudes, where food becomes scarce in winter, leave for areas further south (including the Southern Hemisphere) to overwinter, and because the continental landmass is much larger in the Northern Hemisphere [see also this post]. In contrast, among (pelagic) seabirds, species of the Southern Hemisphere are more likely to migrate. This is because there is a large area of ocean in the Southern Hemisphere, and more islands suitable for seabirds to nest.“
i. Great Fire of London (featured).
“The Great Fire of London was a major conflagration that swept through the central parts of the English city of London, from Sunday, 2 September to Wednesday, 5 September 1666. The fire gutted the medieval City of London inside the old Roman city wall. It threatened, but did not reach, the aristocratic district of Westminster, Charles II‘s Palace of Whitehall, and most of the suburban slums. It consumed 13,200 houses, 87 parish churches, St. Paul’s Cathedral and most of the buildings of the City authorities. It is estimated to have destroyed the homes of 70,000 of the City’s 80,000 inhabitants.”
Do note that even though this fire was a really big deal the ‘70,000 out of 80,000’ number can be misleading as many Londoners didn’t actually live in the City proper:
“By the late 17th century, the City proper—the area bounded by the City wall and the River Thames—was only a part of London, covering some 700 acres (2.833 km²; 1.0938 sq mi), and home to about 80,000 people, or one sixth of London’s inhabitants. The City was surrounded by a ring of inner suburbs, where most Londoners lived.”
I thought I should include a few observations related to how well people behaved in this terrible situation – humans are really wonderful sometimes, and of course the people affected by the fire did everything they could to stick together and help each other out:
“Order in the streets broke down as rumours arose of suspicious foreigners setting fires. The fears of the homeless focused on the French and Dutch, England‘s enemies in the ongoing Second Anglo-Dutch War; these substantial immigrant groups became victims of lynchings and street violence.” […] [no, wait…]
“Suspicion soon arose in the threatened city that the fire was no accident. The swirling winds carried sparks and burning flakes long distances to lodge on thatched roofs and in wooden gutters, causing seemingly unrelated house fires to break out far from their source and giving rise to rumours that fresh fires were being set on purpose. Foreigners were immediately suspects because of the current Second Anglo-Dutch War. As fear and suspicion hardened into certainty on the Monday, reports circulated of imminent invasion, and of foreign undercover agents seen casting “fireballs” into houses, or caught with hand grenades or matches. There was a wave of street violence. William Taswell saw a mob loot the shop of a French painter and level it to the ground, and watched in horror as a blacksmith walked up to a Frenchman in the street and hit him over the head with an iron bar.
The fears of terrorism received an extra boost from the disruption of communications and news as facilities were devoured by the fire. The General Letter Office in Threadneedle Street, through which post for the entire country passed, burned down early on Monday morning. The London Gazette just managed to put out its Monday issue before the printer’s premises went up in flames (this issue contained mainly society gossip, with a small note about a fire that had broken out on Sunday morning and “which continues still with great violence”). The whole nation depended on these communications, and the void they left filled up with rumours. There were also religious alarms of renewed Gunpowder Plots. As suspicions rose to panic and collective paranoia on the Monday, both the Trained Bands and the Coldstream Guards focused less on fire fighting and more on rounding up foreigners, Catholics, and any odd-looking people, and arresting them or rescuing them from mobs, or both together.”
I didn’t really know what to think about this part:
“An example of the urge to identify scapegoats for the fire is the acceptance of the confession of a simple-minded French watchmaker, Robert Hubert, who claimed he was an agent of the Pope and had started the Great Fire in Westminster. He later changed his story to say that he had started the fire at the bakery in Pudding Lane. Hubert was convicted, despite some misgivings about his fitness to plead, and hanged at Tyburn on 28 September 1666. After his death, it became apparent that he had not arrived in London until two days after the fire started.”
Just one year before the fire, London had incidentally been hit by a plague outbreak which “is believed to have killed a sixth of London’s inhabitants, or 80,000 people”. Being a Londoner during the 1660s probably wasn’t a great deal of fun. On the other hand this disaster was actually not that big of a deal when compared to e.g. the 1556 Shaanxi earthquake.
ii. Sea (featured). I was considering reading an oceanography textbook a while back, but I decided against it and I read this article ‘instead’. Some interesting stuff in there. A few observations from the article:
“About 97.2 percent of the Earth’s water is found in the sea, some 1,360,000,000 cubic kilometres (330,000,000 cu mi) of salty water. Of the rest, 2.15 percent is accounted for by ice in glaciers, surface deposits and sea ice, and 0.65 percent by vapour and liquid fresh water in lakes, rivers, the ground and the air.”
“The water in the sea was once thought to come from the Earth’s volcanoes, starting 4 billion years ago, released by degassing from molten rock.(pp24–25) More recent work suggests that much of the Earth’s water may have come from comets.” (This stuff covers 70 percent of the planet and we still are not completely sure how it got to be here. I’m often amazed at how much stuff we know about the world, but very occasionally I also get amazed at the things we don’t know. This seems like the sort of thing we somehow ‘ought to know’..)
“An important characteristic of seawater is that it is salty. Salinity is usually measured in parts per thousand (expressed with the ‰ sign or “per mil”), and the open ocean has about 35 grams (1.2 oz) of solids per litre, a salinity of 35‰ (about 90% of the water in the ocean has between 34‰ and 35‰ salinity). […] The constituents of table salt, sodium and chloride, make up about 85 percent of the solids in solution. […] The salinity of a body of water varies with evaporation from its surface (increased by high temperatures, wind and wave motion), precipitation, the freezing or melting of sea ice, the melting of glaciers, the influx of fresh river water, and the mixing of bodies of water of different salinities.”
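The ‰ notation above is really just grams-per-kilogram arithmetic; here’s a minimal sketch in Python (my own illustration, not from the article, and treating a litre of seawater as roughly one kilogram):

```python
def salinity_permil(grams_solids: float, kg_seawater: float = 1.0) -> float:
    """Return salinity in parts per thousand (per mil), i.e. grams of
    dissolved solids per kilogram of seawater."""
    return grams_solids / kg_seawater

# The open ocean's ~35 g of solids per litre (roughly 1 kg) gives:
print(salinity_permil(35.0))  # 35.0 per mil

# Sodium and chloride make up about 85% of those solids:
print(0.85 * 35.0)  # roughly 29.75 g of the total per kg
```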
“Sea temperature depends on the amount of solar radiation falling on its surface. In the tropics, with the sun nearly overhead, the temperature of the surface layers can rise to over 30 °C (86 °F) while near the poles the temperature in equilibrium with the sea ice is about −2 °C (28 °F). There is a continuous circulation of water in the oceans. Warm surface currents cool as they move away from the tropics, and the water becomes denser and sinks. The cold water moves back towards the equator as a deep sea current, driven by changes in the temperature and density of the water, before eventually welling up again towards the surface. Deep seawater has a temperature between −2 °C (28 °F) and 5 °C (41 °F) in all parts of the globe.”
“The amount of light that penetrates the sea depends on the angle of the sun, the weather conditions and the turbidity of the water. Much light gets reflected at the surface, and red light gets absorbed in the top few metres. […] There is insufficient light for photosynthesis and plant growth beyond a depth of about 200 metres (660 ft).”
“Over most of geologic time, the sea level has been higher than it is today.(p74) The main factor affecting sea level over time is the result of changes in the oceanic crust, with a downward trend expected to continue in the very long term. At the last glacial maximum, some 20,000 years ago, the sea level was 120 metres (390 ft) below its present-day level.” (this of course had some very interesting ecological effects – van der Geer et al. had some interesting observations on that topic)
“On her 68,890-nautical-mile (127,580 km) journey round the globe, HMS Challenger discovered about 4,700 new marine species, and made 492 deep sea soundings, 133 bottom dredges, 151 open water trawls and 263 serial water temperature observations.”
“Seaborne trade carries more than US $4 trillion worth of goods each year.”
“Many substances enter the sea as a result of human activities. Combustion products are transported in the air and deposited into the sea by precipitation. Industrial outflows and sewage contribute heavy metals, pesticides, PCBs, disinfectants, household cleaning products and other synthetic chemicals. These become concentrated in the surface film and in marine sediment, especially estuarine mud. The result of all this contamination is largely unknown because of the large number of substances involved and the lack of information on their biological effects. The heavy metals of greatest concern are copper, lead, mercury, cadmium and zinc which may be bio-accumulated by marine invertebrates. They are cumulative toxins and are passed up the food chain.
Much floating plastic rubbish does not biodegrade, instead disintegrating over time and eventually breaking down to the molecular level. Rigid plastics may float for years. In the centre of the Pacific gyre there is a permanent floating accumulation of mostly plastic waste and there is a similar garbage patch in the Atlantic. […] Run-off of fertilisers from agricultural land is a major source of pollution in some areas and the discharge of raw sewage has a similar effect. The extra nutrients provided by these sources can cause excessive plant growth. Nitrogen is often the limiting factor in marine systems, and with added nitrogen, algal blooms and red tides can lower the oxygen level of the water and kill marine animals. Such events have created dead zones in the Baltic Sea and the Gulf of Mexico.”
iii. List of chemical compounds with unusual names. Technically this is not an article, but I decided to include it here anyway. A few examples from the list:
“Sonic hedgehog: A protein named after Sonic the Hedgehog.”
iv. Operation Priboi. When trying to make sense of e.g. the reactions of people living in the Baltic countries to Russia’s ‘current activities’ in the Ukraine, it probably helps to know stuff like this. 1949 isn’t that long ago – if my father had been born in Latvia he might have been one of the people in the photo.
v. Schrödinger equation. I recently started reading A. C. Phillips’ Introduction to Quantum Mechanics – chapter 2 deals with this topic. Due to the technical nature of the book I’m incidentally not sure to what extent I’ll cover the book here (or for that matter whether I’ll be able to finish it..) – if I do decide to cover it in some detail I’ll probably include relevant links to wikipedia along the way. The wiki has a lot of stuff on these topics, but textbooks are really helpful in terms of figuring out the order in which you should proceed.
vi. Happisburgh footprints. ‘A small step for man, …’
“The Happisburgh footprints were a set of fossilized hominin footprints that date to the early Pleistocene. They were discovered in May 2013 in a newly uncovered sediment layer on a beach at Happisburgh […] in Norfolk, England, and were destroyed by the tide shortly afterwards. Results of research on the footprints were announced on 7 February 2014, and identified them as dating to more than 800,000 years ago, making them the oldest known hominin footprints outside Africa. Before the Happisburgh discovery, the oldest known footprints in Britain were at Uskmouth in South Wales, from the Mesolithic and carbon-dated to 4,600 BC.”
The fact that we found these footprints is awesome. The fact that we can tell that they are as old as they are is awesome. There’s a lot of awesome stuff going on here – Happisburgh also simply seems to be a gift that keeps on giving:
“Happisburgh has produced a number of significant archaeological finds over many years. As the shoreline is subject to severe coastal erosion, new material is constantly being exposed along the cliffs and on the beach. Prehistoric discoveries have been noted since 1820, when fishermen trawling oyster beds offshore found their nets had brought up teeth, bones, horns and antlers from elephants, rhinos, giant deer and other extinct species. […]
In 2000, a black flint handaxe dating to between 600,000 and 800,000 years ago was found by a man walking on the beach. In 2012, for the television documentary Britain’s Secret Treasures, the handaxe was selected by a panel of experts from the British Museum and the Council for British Archaeology as the most important item on a list of fifty archaeological discoveries made by members of the public. Since its discovery, the palaeolithic history of Happisburgh has been the subject of the Ancient Human Occupation of Britain (AHOB) and Pathways to Ancient Britain (PAB) projects […] Between 2005 and 2010 eighty palaeolithic flint tools, mostly cores, flakes and flake tools were excavated from the foreshore in sediment dating back to up to 950,000 years ago.”
vii. Keep (‘good article’).
“A keep (from the Middle English kype) is a type of fortified tower built within castles during the Middle Ages by European nobility. Scholars have debated the scope of the word keep, but usually consider it to refer to large towers in castles that were fortified residences, used as a refuge of last resort should the rest of the castle fall to an adversary. The first keeps were made of timber and formed a key part of the motte and bailey castles that emerged in Normandy and Anjou during the 10th century; the design spread to England as a result of the Norman invasion of 1066, and in turn spread into Wales during the second half of the 11th century and into Ireland in the 1170s. The Anglo-Normans and French rulers began to build stone keeps during the 10th and 11th centuries; these included Norman keeps, with a square or rectangular design, and circular shell keeps. Stone keeps carried considerable political as well as military importance and could take up to a decade to build.
During the 12th century new designs began to be introduced – in France, quatrefoil-shaped keeps were introduced, while in England polygonal towers were built. By the end of the century, French and English keep designs began to diverge: Philip II of France built a sequence of circular keeps as part of his bid to stamp his royal authority on his new territories, while in England castles were built that abandoned the use of keeps altogether. In Spain, keeps were increasingly incorporated into both Christian and Islamic castles, although in Germany tall towers called Bergfriede were preferred to keeps in the western fashion. In the second half of the 14th century there was a resurgence in the building of keeps. In France, the keep at Vincennes began a fashion for tall, heavily machicolated designs, a trend adopted in Spain most prominently through the Valladolid school of Spanish castle design. Meanwhile, in England tower keeps became popular amongst the most wealthy nobles: these large keeps, each uniquely designed, formed part of the grandest castles built during the period.
By the 16th century, however, keeps were slowly falling out of fashion as fortifications and residences. Many were destroyed between the 17th and 18th centuries in civil wars, or incorporated into gardens as an alternative to follies. During the 19th century, keeps became fashionable once again and in England and France a number were restored or redesigned by Gothic architects. Despite further damage to many French and Spanish keeps during the wars of the 20th century, keeps now form an important part of the tourist and heritage industry in Europe. […]
“By the 15th century it was increasingly unusual for a lord to build both a keep and a large gatehouse at the same castle, and by the early 16th century the gatehouse had easily overtaken the keep as the more fashionable feature: indeed, almost no new keeps were built in England after this period. The classical Palladian style began to dominate European architecture during the 17th century, causing a further move away from the use of keeps. […] From the 17th century onwards, some keeps were deliberately destroyed. In England, many were destroyed after the end of the Second English Civil War in 1649, when Parliament took steps to prevent another royalist uprising by slighting, or damaging, castles so as to prevent them from having any further military utility. Slighting was quite expensive and took considerable effort to carry out, so damage was usually done in the most cost efficient fashion with only selected walls being destroyed. Keeps were singled out for particular attention in this process because of their continuing political and cultural importance, and the prestige they lent their former royalist owners […] There was some equivalent destruction of keeps in France in the 17th and 18th centuries […] The Spanish Civil War and First and Second World Wars in the 20th century caused damage to many castle keeps across Europe; in particular, the famous keep at Coucy was destroyed by the German Army in 1917. By the late 20th century, however, the conservation of castle keeps formed part of government policy across France, England, Ireland and Spain. In the 21st century in England, most keeps are ruined and form part of the tourism and heritage industries, rather than being used as functioning buildings – the keep of Windsor Castle being a rare exception.
This is in contrast to the fate of bergfried towers in Germany, large numbers of which were restored as functional buildings in the late 19th and early 20th century, often as government offices or youth hostels, or the modern conversion of tower houses, which in many cases have become modernised domestic homes.”
viii. Battles of Khalkhyn Gol.
“The Battles of Khalkhyn Gol […] constituted the decisive engagement of the undeclared Soviet–Japanese border conflicts fought among the Soviet Union, Mongolia and the Empire of Japan in 1939. The conflict was named after the river Khalkhyn Gol, which passes through the battlefield. In Japan, the decisive battle of the conflict is known as the Nomonhan Incident […] after a nearby village on the border between Mongolia and Manchuria. The battles resulted in the defeat of the Japanese Sixth Army. […]
While this engagement is little-known in the West, it played an important part in subsequent Japanese conduct in World War II. This defeat, together with other factors, moved the Imperial General Staff in Tokyo away from the policy of the North Strike Group favored by the Army, which wanted to seize Siberia as far as Lake Baikal for its resources. […] Other factors included the signing of the Nazi-Soviet non-aggression pact, which deprived the Army of the basis of its war policy against the USSR. Nomonhan earned the Kwantung Army the displeasure of officials in Tokyo, not so much due to its defeat, but because it was initiated and escalated without direct authorization from the Japanese government. Politically, the defeat also shifted support to the South Strike Group, favored by the Navy, which wanted to seize the resources of Southeast Asia, especially the petroleum and mineral-rich Dutch East Indies. Two days after the Eastern Front of World War II broke out, the Japanese army and navy leaders adopted on 24 June 1941 a resolution “not intervening in German Soviet war for the time being”. In August 1941, Japan and the Soviet Union reaffirmed their neutrality pact. Since the European colonial powers were weakening and suffering early defeats in the war with Germany, coupled with their embargoes on Japan (especially of vital oil) in the second half of 1941, Japan’s focus ultimately turned to the south, leading to its decision to launch the attack on Pearl Harbor on 7 December that year.”
Note that there’s some disagreement in the reddit thread as to how important Khalkhin Gol really was – one commenter e.g. argues that: “Khalkhin Gol is overhyped as a factor in the Japanese decision for the southern plan.”
ix. Medical aspects, Hiroshima, Japan, 1946. Technically this is also not a wikipedia article, but multiple wikipedia articles link to it and it is a wikipedia link. The link is to a video featuring multiple people who were harmed by the first nuclear weapon used by humans in warfare. Extensive tissue damage, severe burns, scars – it’s worth having in mind that dying from cancer is not the only concern facing people who survive a nuclear blast. A few related links: a) How did cleanup in Nagasaki and Hiroshima proceed following the atom bombs? b) Minutes of the second meeting of the Target Committee Los Alamos, May 10-11, 1945. c) Keloid. d) Japan in the 1950s (pictures).
I’ve never really thought about myself as ‘gifted’, but during a conversation with a friend not too long ago I was reminded that my parents discussed with my teachers at one point early on if it would be better for me to skip a grade or not. This was probably in the third grade or so. I was asked, and I seem to remember not wanting to – during my conversation with the friend I brought up some reasons I had (…may have had?) for not wanting to, but I’m not sure if I remember the context correctly and so perhaps it’s better to just say that I can’t recall precisely why I was against this idea, but that I was. Neither of my parents were all that keen on the idea anyway. Incidentally the question of grade-skipping was asked in a Mensa survey answered by a sizeable proportion of all Danish members last year; I’m not allowed to cover that data here (or I would have already), but I don’t think I’ll get in trouble by saying that grade-skipping was quite rare even in this group of people – this surprised me a bit.
Anyway, a snippet from the article:
“There are widespread myths about the psychological vulnerability of gifted students and therefore fears that acceleration will lead to an increase in disturbances such as anxiety, depression, delinquent behavior, and lowered self-esteem. In fact, a comprehensive survey of the research on this topic finds no evidence that gifted students are any more psychologically vulnerable than other students, although boredom, underachievement, perfectionism, and succumbing to the effects of peer pressure are predictable when needs for academic advancement and compatible peers are unmet (Neihart, Reis, Robinson, & Moon, 2002). Questions remain, however, as to whether acceleration may place some students more at risk than others.”
Note incidentally that relative age effects (how the grades and other academic outcomes of individual i are affected by the age difference between individual i and his/her classmates) vary across countries, but are usually not insignificant; most places you look, the older students in the classroom do better than their younger classmates, all else equal. It’s worth having both such effects and the cross-country heterogeneities (and the mechanisms behind them) in mind when considering the potential impact of acceleration on academic performance – given differences across countries there’s no good reason why ‘acceleration effects’ should be homogeneous across countries either. Relative age effects are sizeable in most countries – see e.g. this. I read a very nice study a while back investigating the impact of relative age on the tracking options of German students and later life outcomes (the effects were quite large), but I’m too lazy to go look for it now – I may add it to this post later (but I probably won’t).
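To make the idea of a relative age effect concrete, here’s a toy simulation in Python (entirely my own, with a made-up effect size of 0.05 SD per month of relative age – the real estimates vary across countries, as noted above):

```python
import random

random.seed(0)

def simulate_score(age_months_above_cutoff: float) -> float:
    # Assumed effect: +0.05 SD per month of relative age, plus unit-variance noise.
    return 0.05 * age_months_above_cutoff + random.gauss(0, 1)

# A cohort where birth month (0-11 months above the school entry cutoff)
# is the only systematic difference between pupils.
cohort = [(m, simulate_score(m)) for m in range(12) for _ in range(500)]

# Compare the oldest third of the cohort with the youngest third:
young = [s for m, s in cohort if m < 4]
old = [s for m, s in cohort if m >= 8]
gap = sum(old) / len(old) - sum(young) / len(young)
print(f"score gap, oldest vs youngest third: {gap:.2f} SD")
```

With these assumed numbers the oldest third ends up roughly 0.4 SD ahead of the youngest third, purely because of when their birthdays fall relative to the cutoff.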
ii. Publishers withdraw more than 120 gibberish papers. (…still a lot of papers to go – do remember that at this point it’s only a small minority of all published gibberish papers which are computer-generated…)
Nope, this is not another article about how drinking during pregnancy is bad for the fetus (for stuff on that, see instead e.g. this post – link i.); this one is about how alcohol exposure before conception may harm the child:
“It has been well documented that maternal alcohol exposure during fetal development can have devastating neurological consequences. However, less is known about the consequences of maternal and/or paternal alcohol exposure outside of the gestational time frame. Here, we exposed adolescent male and female rats to a repeated binge EtOH exposure paradigm and then mated them in adulthood. Hypothalamic samples were taken from the offspring of these animals at postnatal day (PND) 7 and subjected to a genome-wide microarray analysis followed by qRT-PCR for selected genes. Importantly, the parents were not intoxicated at the time of mating and were not exposed to EtOH at any time during gestation therefore the offspring were never directly exposed to EtOH. Our results showed that the offspring of alcohol-exposed parents had significant differences compared to offspring from alcohol-naïve parents. Specifically, major differences were observed in the expression of genes that mediate neurogenesis and synaptic plasticity during neurodevelopment, genes important for directing chromatin remodeling, posttranslational modifications or transcription regulation, as well as genes involved in regulation of obesity and reproductive function. These data demonstrate that repeated binge alcohol exposure during pubertal development can potentially have detrimental effects on future offspring even in the absence of direct fetal alcohol exposure.”
I haven’t read all of it but I thought I should post it anyway. It is a study on rats who partied a lot early on in their lives and then mated later on after they’d been sober for a while, so I have no idea about the external validity (…I’m sure some people will say the study design is unrealistic – on account of the rats not also being drunk while having sex…) – but good luck setting up a similar prospective study on humans. I think it’ll be hard to do much more than just gather survey data (with a whole host of potential problems) and perhaps combine this kind of stuff with studies comparing outcomes (which?) across different geographical areas using things like legal drinking age reforms or something like that as early alcohol exposure instruments. I’d say that even if such effects are there they’ll be very hard to measure/identify and they’ll probably get lost in the noise.
iv. The relationship between obesity and type 2 diabetes is complicated. I’ve seen it reported elsewhere that this study ‘proved’ that there’s no link between obesity and diabetes or something like that – apparently you need headlines like that to sell ads. Such headlines make me very tired.
vi. If people from the future write an encyclopedic article about your head, does that mean you did well in life? How you answer that question may depend on what they focus on when writing about the head in question. Interestingly this guy didn’t get an article like that.
i. Stirling engine.
“A Stirling engine is a heat engine operating by cyclic compression and expansion of air or other gas, the working fluid, at different temperature levels such that there is a net conversion of heat energy to mechanical work. Or more specifically, a closed-cycle regenerative heat engine with a permanently gaseous working fluid, where closed-cycle is defined as a thermodynamic system in which the working fluid is permanently contained within the system, and regenerative describes the use of a specific type of internal heat exchanger and thermal store, known as the regenerator. It is the inclusion of a regenerator that differentiates the Stirling engine from other closed cycle hot air engines.
The Stirling engine is noted for its high efficiency compared to steam engines, quiet operation, and the ease with which it can use almost any heat source. This compatibility with alternative and renewable energy sources has become increasingly significant as the price of conventional fuels rises, and also in light of concerns such as peak oil and climate change. This engine is currently exciting interest as the core component of micro combined heat and power (CHP) units, in which it is more efficient and safer than a comparable steam engine. […]
In contrast to internal combustion engines, Stirling engines have the potential to use renewable heat sources more easily, to be quieter, and to be more reliable with lower maintenance. They are preferred for applications that value these unique advantages, particularly if the cost per unit energy generated is more important than the capital cost per unit power. On this basis, Stirling engines are cost competitive up to about 100 kW.
Compared to an internal combustion engine of the same power rating, Stirling engines currently have a higher capital cost and are usually larger and heavier. However, they are more efficient than most internal combustion engines. Their lower maintenance requirements make the overall energy cost comparable. The thermal efficiency is also comparable (for small engines), ranging from 15% to 30%. For applications such as micro-CHP, a Stirling engine is often preferable to an internal combustion engine. Other applications include water pumping, astronautics, and electrical generation from plentiful energy sources that are incompatible with the internal combustion engine, such as solar energy, and biomass such as agricultural waste and other waste such as domestic refuse. Stirlings are also used as a marine engine in Swedish Gotland-class submarines. However, Stirling engines are generally not price-competitive as an automobile engine, due to high cost per unit power, low power density and high material costs.”
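The quoted 15–30% thermal efficiency figures can be put in context with the ideal (Carnot) limit that bounds any heat engine, 1 − T_cold/T_hot with temperatures in kelvin. A quick sketch (the operating temperatures below are my own assumed example values, not figures from the article):

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Ideal heat-engine efficiency between two reservoirs (temperatures in kelvin)."""
    return 1.0 - t_cold_k / t_hot_k

# Hypothetical operating point: 700 K hot end, 300 K cold end.
ideal = carnot_efficiency(700.0, 300.0)
print(f"Carnot limit: {ideal:.0%}")  # ~57%
```

So a real Stirling engine in the quoted 15–30% range at temperatures like these is reaching roughly a quarter to a half of the theoretical limit, which is respectable for small engines.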
Sixty Symbols at one point made a neat little video about these engines – you can watch the video here.
ii. Doolittle Raid.
“The Doolittle Raid, also known as the Tokyo Raid, on 18 April 1942, was an air raid by the United States on the Japanese capital Tokyo and other places on Honshu island during World War II, the first air raid to strike the Japanese Home Islands. It demonstrated that Japan itself was vulnerable to American air attack, was retaliation for the Japanese attack on Pearl Harbor on 7 December 1941, provided an important boost to U.S. morale, and damaged Japanese morale. The raid was planned and led by Lieutenant Colonel James “Jimmy” Doolittle, U.S. Army Air Forces.
Sixteen U.S. Army Air Forces B-25B Mitchell medium bombers were launched without fighter escort from the U.S. Navy’s aircraft carrier USS Hornet deep in the Western Pacific Ocean, each with a crew of five men. The plan called for them to bomb military targets in Japan, and to continue westward to land in China—landing a medium bomber on the Hornet was impossible. Fifteen of the aircraft reached China, and the other one landed in the Soviet Union. All but three of the crew survived, but all the aircraft were lost. […]
The raid caused negligible material damage to Japan, only hitting non-military targets or missing completely […] but it succeeded in its goal of helping American morale and casting doubt in Japan on the ability of its military leaders. It also caused Japan to withdraw its powerful aircraft carrier force from the Indian Ocean to defend their Home Islands, and the raid contributed to Admiral Isoroku Yamamoto’s decision to attack Midway—an attack that turned into a decisive strategic defeat of the Imperial Japanese Navy (IJN) by the U.S. Navy near Midway Island in the Central Pacific. […]
Immediately following the raid, Doolittle told his crew that he believed the loss of all 16 aircraft, coupled with the relatively minor damage to targets, had rendered the attack a failure, and that he expected a court-martial upon his return to the United States. Instead, the raid bolstered American morale to such an extent that Doolittle was awarded the Medal of Honor by President Roosevelt, and was promoted two grades to brigadier general, skipping the rank of colonel. […]
Following the Doolittle Raid, most of the B-25 crews who had reached China eventually achieved safety with the help of Chinese civilians and soldiers. Of the 80 airmen who participated in the raid, 69 escaped capture or death. When the Chinese helped the Americans escape, the grateful Americans in turn gave them whatever they had on hand. The people who helped them, however, paid dearly for sheltering the Americans.”
The Japanese were pretty pissed afterwards:
“The Japanese military began the Zhejiang-Jiangxi Campaign to intimidate the Chinese from helping the American airmen. All airfields in a range of some 20,000 square miles (50,000 km2) in the areas where the Raiders had landed were torn up. Germ warfare was used and atrocities committed, and those found with American items were shot. The Japanese killed an estimated 250,000 Chinese civilians during their search for Doolittle’s men. […] On 28 August 1942, pilot Hallmark, pilot Farrow and gunner Spatz faced a war crimes trial by the Japanese for allegedly strafing Japanese civilians. At 16:30 on 15 October 1942 they were taken by truck to Public Cemetery Number 1, and executed by firing squad. The other captured airmen remained in military confinement on a starvation diet, their health rapidly deteriorating. In April 1943, they were moved to Nanking, where Meder died on 1 December 1943. The remaining men, Nielsen, Hite, Barr and DeShazer, eventually began receiving slightly better treatment and were given a copy of the Bible and a few other books. They were freed by American troops in August 1945.”
iii. Missouri Executive Order 44.
“Missouri Executive Order 44, also known as the Extermination Order in Latter Day Saint history, was an executive order issued on October 27, 1838 by the governor of Missouri, Lilburn Boggs. It was issued in the aftermath of the Battle of Crooked River, a clash between Mormons and a unit of the Missouri State Guard in northern Ray County, Missouri, during the Mormon War of 1838. Claiming that the Mormons had committed “open and avowed defiance of the laws”, and had “made war upon the people of this State,” Boggs directed that “the Mormons must be treated as enemies, and must be exterminated or driven from the State if necessary for the public peace—their outrages are beyond all description”.
While Executive Order 44 is often referred to as the “Mormon Extermination Order” due to the phrasing used by Boggs, no one is known to have been killed by the militia or anyone else specifically because of it. There were, however, other associated deaths: the militia and other state authorities used Boggs’ missive as a pretext to expel the Mormons from their lands in the state, and force them to migrate to Illinois. This forced expulsion in difficult, wintry conditions posed a substantial threat to the health and safety of the affected Mormons, and an unknown number died from hardship and exposure. Furthermore, a group of men and boys were killed by Livingston County militia in the Haun’s Mill massacre three days after the order was issued; however, there is no evidence that the militiamen had any knowledge of it, nor did they ever use the order to justify their actions.
Mormons did not begin to return to Missouri until 25 years later, when they found a more welcoming environment and were able to establish homes there once more. […] Boggs’ extermination order, long unenforced and forgotten by nearly everyone outside the Latter Day Saint community, was formally rescinded by Governor Christopher S. Bond on June 25, 1976, 137 years after being signed.”
iv. Ug99. This is probably the kind of thing you’d prefer most people never to learn anything about. At least I’d say that if something like this ever becomes a household name, this is very bad news:
“Ug99 is a lineage of wheat stem rust (Puccinia graminis f. sp. tritici), which is present in wheat fields in several countries in Africa and the Middle East and is predicted to spread rapidly through these regions and possibly further afield, potentially causing a wheat production disaster that would affect food security worldwide. It can cause up to 100% crop losses and is virulent against many resistance genes which have previously protected wheat against stem rust.
Although Ug99-resistant varieties of wheat do exist, a screen of 200,000 wheat varieties used in 22 African and Asian countries found that only 5–10% of the area of wheat grown in these countries consisted of varieties with adequate resistance.”
v. Egyptian temples (featured).
Some quotes from the article:
“Each temple had a principal deity, and most were dedicated to other gods as well. However, not all deities had temples dedicated to them. Many demons and household gods were involved primarily in magical or private religious practice, with little or no presence in temple ceremonies. There were also other gods who had significant roles in the cosmos but, for uncertain reasons, were not honored with temples of their own. Of those gods who did have temples of their own, many were venerated mainly in certain areas of Egypt, though many gods with a strong local tie were also important across the nation. Even deities whose worship spanned the country were strongly associated with the cities where their chief temples were located. In Egyptian creation myths, the first temple originated as a shelter for a god—which god it was varied according to the city—that stood on the mound of land where the process of creation began. Each temple in Egypt, therefore, was equated with this original temple and with the site of creation itself. As the primordial home of the god and the mythological location of the city’s founding, the temple was seen as the hub of the region, from which the city’s patron god ruled over it.
Pharaohs also built temples where offerings were made to sustain their spirits in the afterlife, often linked with or located near their tombs. These temples are traditionally called “mortuary temples” and regarded as essentially different from divine temples. However, in recent years some Egyptologists, such as Gerhard Haeny, have argued that there is no clear division between the two.” […]
“The earliest known primitive shrines appeared in Egypt by the late Predynastic Period, in the late fourth millennium BC. […] Temple-building continued down until the 4th century AD. However, with the rise of the Christian Roman Emperors temples lost their traditional state funding, had their treasures melted down, and the proceeds redirected towards the building of churches. In AD 391 all pagan cults were banned by Theodosius I and in this same year the Serapeum of Alexandria was destroyed by Christians. Attacks on pagans and temples were widespread throughout Egypt. A few temples, such as Luxor, were converted into churches, while many others went completely disused. In AD 550, Philae, the last functioning temple in Egypt, was closed.”
“Some estimate that by the New Kingdom period, temples owned about 33% of the arable land.”
“A temple needed many people to perform its rituals and support duties. Priests performed the temple’s essential ritual functions, but in Egyptian religious ideology they were far less important than the king. As temple decoration illustrates, all ceremonies were, in theory, acts by the king, and priests merely stood in his place. The priests were therefore subject to the king’s authority, and he had the right to appoint anyone he wished to the priesthood. In fact, in the Old and Middle Kingdoms most priests were government officials who left their secular duties for part of the year to serve the temple in shifts. Once the priesthood became more professional, the king seems to have used his power over appointments mainly for the highest-ranking positions, usually to reward a favorite official with a job or to intervene for political reasons in the affairs of an important cult. […] Besides its priests, a large temple employed singers, musicians, and dancers to perform during rituals, plus the farmers, bakers, artisans, builders, and administrators who supplied and managed its practical needs. A major cult […] could have well over 150 full or part-time priests, with tens of thousands of non-priestly employees working on its lands across the country. These numbers contrast with mid-sized temples, which may have had 10 to 25 priests, and with the smallest provincial temples, which might have only one.” […]
“After their original religious activities ceased, Egyptian temples suffered slow decay. Many were defaced or dismantled by Christians trying to erase the remnants of paganism. Over time locals carried off their stones to use as material for new buildings. What humans left intact was still subject to natural weathering. Temples in desert areas could be covered by drifts of sand, while those near the Nile, particularly in Lower Egypt, were often completely buried under layers of river-borne silt. Thus, some major temple sites like Memphis and Heliopolis were reduced to ruin, while many temples far from the Nile and centers of population remained mostly intact. […] Nineteenth-century Egyptologists studied the temples intensively, but their emphasis was on collection of artifacts to send to their own countries, and their slipshod excavation methods often did further harm. Slowly, however, the antique-hunting attitude toward Egyptian monuments gave way to careful study and preservation efforts. […] Today there are dozens of sites with substantial temple remains, although many more once existed, and none of the major temples in Lower or Middle Egypt are well preserved. […] Archaeological work continues as well, as many temple remains still lie buried and many extant temples are not yet fully studied.”
i. Metric expansion of space.

(Actually I’m not sure this one is really that interesting, but it was one of the articles I was browsing a couple of days ago while reading McPhee et al. After a brief look at the article, I concluded that I didn’t really need to understand the specific line of reasoning in the book that had made me look that stuff up.)
“The metric expansion of space is the increase of the distance between two distant parts of the universe with time. It is an intrinsic expansion whereby the scale of space itself is changed. That is, a metric expansion is defined by an increase in distance between parts of the universe even without those parts “moving” anywhere. […]
This kind of expansion is different from all kinds of expansions and explosions commonly seen in nature. What we see normally as “space” and “distance” are not absolutes, but are determined by a metric that can change. In the metric expansion of space, rather than objects in a fixed “space” moving apart into “emptiness”, it is space itself which is changing. It is as if without objects themselves moving, space is somehow growing or shrinking between them: if it were possible to place a tape measure between even stationary objects, one would observe the scale of the tape measure changing to show more distance between them.
Because this expansion is caused by changes in the distance-defining metric, and not by objects themselves moving in space, this expansion (and the resultant movement apart of objects) is not restricted by the speed of light upper bound of special relativity. So objects can be moving at sub-light speed yet appear to be moving apart faster than light. […]
“The expansion of space is sometimes described as a force which acts to push objects apart. Though this is an accurate description of the effect of the cosmological constant, it is not an accurate picture of the phenomenon of expansion in general. For much of the universe’s history the expansion has been due mainly to inertia. The matter in the very early universe was flying apart for unknown reasons (most likely as a result of cosmic inflation) and has simply continued to do so, though at an ever-decreasing rate due to the attractive effect of gravity.
In addition to slowing the overall expansion, gravity causes local clumping of matter into stars and galaxies. Once objects are formed and bound by gravity, they “drop out” of the expansion and do not subsequently expand under the influence of the cosmological metric, there being no force compelling them to do so.”
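As a rough illustration of why “faster than light” recession isn’t a contradiction here, one can sketch Hubble’s law (v = H₀ · d) in a few lines of Python. Note that the value of the Hubble constant below (~70 km/s/Mpc) is an assumed round figure of my own, not something stated in the article:

```python
# Back-of-the-envelope Hubble's law: apparent recession velocity v = H0 * d.
# H0 ~ 70 km/s/Mpc is an assumed round value, not a figure from the article.

C = 299_792.458          # speed of light, km/s
H0 = 70.0                # Hubble constant, km/s per megaparsec (assumption)
MPC_PER_GLY = 306.601    # megaparsecs per billion light-years

def recession_velocity(distance_gly):
    """Apparent recession velocity (km/s) for a proper distance given in Gly."""
    return H0 * distance_gly * MPC_PER_GLY

# Beyond the 'Hubble radius' the apparent recession exceeds c -- without
# anything actually moving through space faster than light.
hubble_radius_gly = C / (H0 * MPC_PER_GLY)
print(f"Hubble radius: ~{hubble_radius_gly:.1f} billion light-years")
print(f"At 30 Gly the recession speed is ~{recession_velocity(30) / C:.2f} c")
```

With these numbers an object roughly 14 billion light-years away already recedes at the speed of light, and more distant objects faster still, which is why the most remote observable objects can sit at comoving distances far beyond 14 billion light-years.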
This is complicated (if also fascinating) stuff, as there’s a lot of math doing work ‘behind the scenes’ and reasonably few people around who actually understand all that math. As they put it in the introduction:
“Due to the non-intuitive nature of the subject and what has been described by some as “careless” choices of wording, certain descriptions of the metric expansion of space and the misconceptions to which such descriptions can lead are an ongoing subject of discussion in the realm of pedagogy and communication of scientific concepts.”
Despite the fact that the article deals with very complex stuff, this is not a ‘mathy’ article; I’d say the article is not too technical for most people with an interest in these matters to read it and obtain a greater understanding of the universe in which he or she lives. Note that progress in the field of observational cosmology is continually being made, so it’s not like the final version of this article has been written at this point. For example the article states that the most distant quasar currently known is 28 billion light years away (comoving distance); however, the most distant object we have observed (discovered earlier this month) is now roughly 30 billion light years away – even if that’s technically a galaxy and not a quasar, it’s highly likely that another, more distant quasar will be found in the future (if you don’t feel like clicking the link to the wiki article about that galaxy, here’s one sentence from the article that may change your mind: “The galaxy in its observable timeframe was producing stars at a phenomenal rate, equivalent in mass to about 300 suns per year.“).
ii. Cell membrane. These things are very important, yet most people probably don’t know a great deal about them. This article will teach you more. Khan Academy has stuff on this topic as well (you can start here – I’ve been thinking about blogging these videos, and maybe I will later on).
iii. Axe Murder incident. Short version: Two US Army officers got killed by North Korean soldiers while trying to cut down a tree in the Joint Security Area between North and South Korea. Some people higher up got angry about that and decided that that tree had to go, and so Operation Paul Bunyan was launched. With the aid of 23 American and South Korean vehicles, some detonation charges, two 30-man security platoons, a 64-man South Korean special forces company (..and a total task force of 813 men), a U.S. infantry company in 20 utility helicopters and 7 Cobra attack helicopters, some B-52 Stratofortresses escorted by F-4 Phantom II fighter-bombers and South Korean F-5 Freedom Fighters, as well as a nearby aircraft carrier and a dozen C-130s ready to provide support plus 12,000 additional troops which were ordered to Korea, two eight-man teams of military engineers managed to cut down the tree. Here’s what all the fuss was about:
(Do note that the reason why the tree was not cut down completely was not interference from NK soldiers: “The stump of the tree, almost 6 m (20 ft) tall, was deliberately left standing.”)
iv. Maggot therapy.
“Maggot therapy is also known as maggot debridement therapy (MDT), larval therapy, larva therapy, larvae therapy, biodebridement or biosurgery. It is a type of biotherapy involving the introduction of live, disinfected maggots (fly larvae) into the non-healing skin and soft tissue wound(s) of a human or animal for the purpose of cleaning out the necrotic (dead) tissue within a wound (debridement) and disinfection. […]
While at Johns Hopkins University in 1929, Dr. Baer introduced maggots into 21 patients with intractable chronic osteomyelitis. He observed rapid debridement, reductions in the number of pathogenic organisms, reduced odor levels, alkalinization of wound beds, and ideal rates of healing. All 21 patients’ open lesions were completely healed and they were released from the hospital after two months of maggot therapy.
After the publication of Dr. Baer’s results in 1931, maggot therapy for wound care became very common, particularly in the United States. The Lederle pharmaceutical company commercially produced “Surgical Maggots”, larvae of the green bottle fly, which primarily feed on the necrotic (dead) tissue of the living host without attacking living tissue. Between 1930 and 1940, more than 100 medical papers were published on maggot therapy. Medical literature of this time contains many references to the successful use of maggots in chronic or infected wounds including osteomyelitis, abscesses, burns, sub-acute mastoiditis, and chronic empyema.
More than 300 American hospitals employed maggot therapy during the 1940s. The extensive use of maggot therapy prior to World War II was curtailed when the discovery and growing use of penicillin caused it to be deemed outdated. […] While in the past it was believed that maggots do not damage healthy tissue, this is in doubt now. […]
The wound must be of a type which can actually benefit from the application of maggot therapy. A moist, exudating wound with sufficient oxygen supply is a prerequisite. Not all wound-types are suitable: wounds which are dry, or open wounds of body cavities do not provide a good environment for maggots to feed. […] In about 1/3 of all patients pain is increased.”
v. Oil shale (featured article).
“Oil shale, also known as kerogen shale, is an organic-rich fine-grained sedimentary rock containing kerogen (a solid mixture of organic chemical compounds) from which liquid hydrocarbons called shale oil (not to be confused with tight oil—crude oil occurring naturally in shales) can be produced. Shale oil is a substitute for conventional crude oil; however, extracting shale oil from oil shale is more costly than the production of conventional crude oil both financially and in terms of its environmental impact. […]
Heating oil shale to a sufficiently high temperature causes the chemical process of pyrolysis to yield a vapor. Upon cooling the vapor, the liquid shale oil—an unconventional oil—is separated from combustible oil-shale gas (the term shale gas can also refer to gas occurring naturally in shales). Oil shale can also be burned directly in furnaces as a low-grade fuel for power generation and district heating or used as a raw material in chemical and construction-materials processing.
Oil shale gains attention as a potential abundant source of oil whenever the price of crude oil rises. At the same time, oil-shale mining and processing raise a number of environmental concerns, such as land use, waste disposal, water use, waste-water management, greenhouse-gas emissions and air pollution. Estonia and China have well-established oil shale industries, and Brazil, Germany, and Russia also utilize oil shale. […]
Oil shale, an organic-rich sedimentary rock, belongs to the group of sapropel fuels. It does not have a definite geological definition nor a specific chemical formula, and its seams do not always have discrete boundaries. Oil shales vary considerably in their mineral content, chemical composition, age, type of kerogen, and depositional history and not all oil shales would necessarily be classified as shales in the strict sense. According to the petrologist Adrian C. Hutton of the University of Wollongong, oil shales are not “geological nor geochemically distinctive rock but rather ‘economic’ term.” Their common feature is low solubility in low-boiling organic solvents and generation of liquid organic products on thermal decomposition. […]
The largest deposits in the world occur in the United States in the Green River Formation, which covers portions of Colorado, Utah, and Wyoming; about 70% of this resource lies on land owned or managed by the United States federal government. Deposits in the United States constitute 62% of world resources; together, the United States, Russia and Brazil account for 86% of the world’s resources in terms of shale-oil content. These figures remain tentative, with exploration or analysis of several deposits still outstanding. […] As of 2009, 80% of oil shale used globally is extracted in Estonia […]
The shale oil derived from oil shale does not directly substitute for crude oil in all applications. It may contain higher concentrations of olefins, oxygen, and nitrogen than conventional crude oil. Some shale oils may have higher sulfur or arsenic content. […] The higher concentrations of these materials means that the oil must undergo considerable upgrading (hydrotreating) before serving as oil-refinery feedstock. […] Shale oil serves best for producing middle-distillates such as kerosene, jet fuel, and diesel fuel.”
i. Golden Eagle.
I checked out the wiki article after I’d come across this article with some amazing pictures (“While the three photos were taken over the course of just two seconds, they made scientific history.”).
“The Golden Eagle (Aquila chrysaetos) is one of the best-known birds of prey in the Northern Hemisphere. It is the most widely distributed species of eagle. Like all eagles, it belongs to the family Accipitridae. These birds are dark brown, with lighter golden-brown plumage on their napes. Immature eagles of this species typically have white on the tail and often have white markings on the wings. Golden Eagles use their agility and speed combined with extremely powerful feet and massive, sharp talons to snatch up a variety of prey (mainly hares, rabbits, marmots and other ground squirrels). […]
“Golden Eagles are sometimes considered the most superlative fliers among eagles and perhaps among all raptorial birds. They are equipped with broad, long wings with somewhat finger-like indentations on the tips of the wing. Golden Eagles are unique among their genus in that they often fly in a slight dihedral, which means the wings are often held in a slight, upturned V. When they must engage in flapping flight, Golden Eagles appear at their most labored but this flight method is generally less common than soaring or gliding flights. Flapping flight usually consists of 6–8 deep wing-beats, interspersed with 2 to 3 second glides. While soaring the wings and tail are held in one plane with the primary tips often spread. A typical, unhurried soaring speed in Golden Eagles is around 45–52 kilometers per hour (28–32 mph). When hunting or displaying, the Golden Eagle is capable of very fast gliding, attaining speeds of up to 190 km/h (120 mph). When diving (or stooping) in the direction of prey or during territorial displays, the eagle holds its wings tight and partially closed against its body and its legs up against its tail. In a full stoop, a Golden Eagle can reach spectacular speeds of up to 240 to 320 kilometers per hour (150 to 200 mph) when diving after prey. Although less agile and maneuverable, the Golden Eagle is apparently quite the equal and possibly even the superior of the Peregrine Falcon’s stooping and gliding speeds. This places the Golden Eagle as one of the two fastest-moving living animals on earth. [Note though that: “The Peregrine is renowned for its speed, reaching over 322 km/h (200 mph) during its characteristic hunting stoop (high speed dive), making it the fastest member of the animal kingdom. According to a National Geographic TV programme, the highest measured speed of a Peregrine Falcon is 389 km/h (242 mph).” – from the (featured) wikipedia article about the Peregrine Falcon]. […]
One of the most fascinating, though relatively little studied, aspects of the Golden Eagle’s biology is how it interacts with other predators in a natural environment, especially other large predatory birds. The Golden Eagle is a powerful hunter with few avian rivals in size or strength, although what it gains in these areas it loses somewhat in its agility and speed. Golden Eagles are avian apex predators, meaning a healthy adult is not generally preyed upon. There are several other large birds of prey that inhabit the Northern Hemisphere that may be attracted to the same prey, habitats and nesting sites as the Golden Eagles. Two examples are the Common Raven and Peregrine Falcon (Falco peregrinus) as these are two fairly large-bodied, mostly predatory birds that co-exist with Golden Eagles in almost every part of their range, although the former occurs in much larger numbers and the latter has a much larger natural distribution in more varied habitats. Both the Raven and the Peregrine are often attracted to much the same precipitous habitat as the Golden Eagle. However, both are generally dominated by the much larger eagle and will actively avoid nesting in the same area as a Golden Eagle pair.”
It’s a very long article, so there’s a lot of stuff there if you’re curious to learn more.
ii. Triton (moon) (featured).
“Triton is the largest moon of the planet Neptune, discovered on October 10, 1846, by English astronomer William Lassell. It is the only large moon in the Solar System with a retrograde orbit, which is an orbit in the opposite direction to its planet’s rotation. At 2,700 kilometres (1,700 mi) in diameter, it is the seventh-largest moon in the Solar System. Because of its retrograde orbit and composition similar to Pluto‘s, Triton is thought to have been captured from the Kuiper belt. Triton has a surface of mostly frozen nitrogen, a mostly water ice crust, an icy mantle and a substantial core of rock and metal. The core makes up two-thirds of its total mass. Triton has a mean density of 2.061 grams per cubic centimetre (0.0745 lb/cu in) and is composed of approximately 15–35% water ice.
Triton is one of the few moons in the Solar System known to be geologically active. As a consequence, its surface is relatively young, with a complex geological history revealed in intricate and mysterious cryovolcanic and tectonic terrains. Part of its crust is dotted with geysers thought to erupt nitrogen. […]
Triton’s revolution around Neptune has become a nearly perfect circle with an eccentricity of almost zero. Viscoelastic damping from tides alone is not thought to be capable of circularizing Triton’s orbit in the time since the origin of the system, and gas drag from a prograde debris disc is likely to have played a substantial role. Tidal interactions also cause Triton’s orbit, already closer to Neptune than the Moon’s to Earth, to slowly decay further; predictions are that some 3.6 billion years from now, Triton will pass within Neptune’s Roche limit. This will result in either a collision with Neptune’s atmosphere or the breakup of Triton, forming a ring system similar to that found around Saturn. […]
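The Roche-limit prediction above can be roughed out with the classic rigid-body formula d = R · (2ρ_planet / ρ_moon)^(1/3). Only Triton’s density (2.061 g/cm³) comes from the quoted article; Neptune’s radius and density and Triton’s current orbital radius below are assumed textbook values, so treat this as a sketch rather than the article’s own calculation:

```python
# Rough rigid-body Roche limit for Triton around Neptune:
#   d = R_neptune * (2 * rho_neptune / rho_triton) ** (1/3)
# Neptune's radius/density and Triton's orbit are assumed textbook values;
# Triton's density (2.061 g/cm^3) is the figure quoted in the article.

R_NEPTUNE_KM = 24_622      # Neptune mean radius, km (assumption)
RHO_NEPTUNE = 1.638        # Neptune mean density, g/cm^3 (assumption)
RHO_TRITON = 2.061         # Triton mean density, g/cm^3 (from the article)
TRITON_ORBIT_KM = 354_759  # Triton's current orbital radius, km (assumption)

roche_km = R_NEPTUNE_KM * (2 * RHO_NEPTUNE / RHO_TRITON) ** (1 / 3)
print(f"Rigid-body Roche limit: ~{roche_km:,.0f} km from Neptune's center")
print(f"Triton currently orbits ~{TRITON_ORBIT_KM / roche_km:.0f}x farther out")
```

The fluid-body version of the formula (coefficient 2.44 instead of 2^(1/3)) gives a limit roughly twice as far out; either way, Triton’s slowly decaying orbit has billions of years to go before it gets there, consistent with the article’s ~3.6-billion-year figure.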
iii. Myth of the Flat Earth.
“The myth of the Flat Earth is the modern misconception that the prevailing cosmological view during the Middle Ages saw the Earth as flat, instead of spherical. The idea seems to have been widespread during the first half of the 20th century, so that the Members of the Historical Association in 1945 stated that: […]
During the early Middle Ages, virtually all scholars maintained the spherical viewpoint first expressed by the Ancient Greeks. From at least the 14th century, belief in a flat Earth among the educated was almost nonexistent, despite fanciful depictions in art, such as the exterior of Hieronymus Bosch‘s famous triptych The Garden of Earthly Delights, in which a disc-shaped Earth is shown floating inside a transparent sphere. […]
Since the early 20th century, a number of books and articles have documented the flat earth error as one of a number of widespread misconceptions in popular views of the Middle Ages. Both E.M.W. Tillyard’s book The Elizabethan World Picture and C.S. Lewis’ The Discarded Image are devoted to a broad survey of how the universe was viewed in Renaissance and medieval times, and both extensively discuss how the educated classes knew the world was round. […] Although the misconception was frequently refuted in historical scholarship since at least 1920, it persisted in popular culture and in some school textbooks into the 1960s.”
This is what a tank looked like in 1918 (…’A’):
And here’s what a tank looked like 21 years later (…’B’):
v. Golden Horde.
“The Golden Horde (Tatar: Алтын Урда Altın Urda; Mongolian: Зүчийн улс, Züchii-in Uls; Russian: Золотая Орда, tr. Zolotaya Orda) was a Mongol and later Turkicized khanate, established in the 13th century, which comprised the northwestern sector of the Mongol Empire. The khanate is also known as the Kipchak Khanate or as the Ulus of Jochi.
After the death of Batu Khan in 1255, the prosperity of his dynasty lasted for a full century, until 1359, though the intrigues of Nogai did instigate a partial civil war in the late 1290s. The Horde’s military power peaked during the reign of Uzbeg (1312–41), who adopted Islam. The territory of the Golden Horde at its peak included most of Eastern Europe from the Urals to the right bank of the Danube River, extending east deep into Siberia. In the south, the Golden Horde’s lands bordered on the Black Sea, the Caucasus Mountains, and the territories of the Mongol dynasty known as the Ilkhanate.
The khanate experienced violent internal political disorder beginning in 1359, before it was briefly reunited under Tokhtamysh in 1381. However, soon after the 1396 invasion of Tamerlane, it broke into smaller Tatar khanates that declined steadily in power. At the start of the 15th century the Horde began to fall apart. By 1433 it was being referred to simply as the Great Horde. Within its territories there emerged numerous, predominantly Turkic-speaking, khanates. These internal struggles allowed the northern vassal state of Muscovy to rid itself of the “Tatar Yoke” at the Great stand on the Ugra river in 1480. The Crimean Khanate and the Kazakh Khanate, the last remnants of the Golden Horde, persisted until 1783 and 1847, respectively. […]
After Uzbeg (Öz-Beg) mounted the throne in 1313, he adopted Islam as the state religion. He proscribed Buddhism and Shamanism among the Mongols in Russia, thus reversing the spread of the Yuan culture. By 1315, Uzbeg had successfully Islamicized the Horde, killing Jochid princes and Buddhist lamas who opposed his religious policy and succession of the throne.
Mohammed Uzbeg Khan continued the alliance with the Mamluks which Berke and his predecessors had begun. He kept a friendly relationship with the Mamluk Sultan and his shadow Caliph in Cairo. After a long delay and much discussion, he married a princess of the blood to Al-Nasir Muhammad, Sultan of Egypt.
The Mongol rulers’ Rus’ policy was one of constantly switching alliances in an attempt to keep Russia and Eastern Europe weak and divided. […]
Uzbeg, whose total army exceeded 300,000, repeatedly raided Thrace, partly in service of Bulgaria’s war against both Byzantium and Serbia from 1319 on. The Byzantine Empire, beginning in the reign of Andronikos II Palaiologos and continuing in that of Andronikos III Palaiologos, was raided by the Golden Horde between 1320 and 1341, until the Byzantine port of Vicina Macaria was occupied. Some sources report that Uzbeg also married Andronikos III’s illegitimate daughter, who had taken the name Bayalun, and who later, after relations between the Horde and the Byzantines deteriorated, fled back to the Byzantine Empire, apparently fearing her forced conversion to Islam. His armies pillaged Thrace for forty days in 1324 and for fifteen days in 1337, taking 300,000 captives. However, his attempt to reassert Mongol control over Serbia was unsuccessful in 1330.”
(The article has much more.)
vi. Caesium (featured).
It’s been a while since I’ve posted one of these, and my collection of ‘articles I’d like to blog at some point’ has been growing steadily in the meantime, so I decided to post a few more links than I usually do while also limiting coverage of the specific articles a little:
i. Halifax Explosion.
“The Halifax Explosion occurred near Halifax, Nova Scotia, Canada, on the morning of Thursday, December 6, 1917. SS Mont-Blanc, a French cargo ship fully laden with wartime explosives, collided with the Norwegian vessel SS Imo in the Narrows, a strait connecting the upper Halifax Harbour to Bedford Basin. Approximately twenty minutes later, a fire on board the French ship ignited her explosive cargo, causing a cataclysmic explosion that devastated the Richmond District of Halifax. Approximately 2,000 people were killed by debris, fires, and collapsed buildings, and it is estimated that nearly 9,000 others were injured. The blast was the largest man-made explosion prior to the development of nuclear weapons with an equivalent force of roughly 2.9 kilotons of TNT. […]
While the exact number killed by the disaster is unknown, a common estimate is 2,000. The Halifax Explosion Remembrance Book, an official database compiled in 2002 by the Nova Scotia Archives and Records Management identified 1,950 victims. As many as 1,600 people died immediately in the blast, the tsunami, and collapse of buildings. The last body, a caretaker killed at the Exhibition Grounds, was not recovered until the summer of 1919. An additional 9,000 were injured, 6,000 of them seriously; 1,630 homes were destroyed in the explosion and fires, with 12,000 more houses damaged. This disaster left roughly 6,000 people homeless and without shelter and 25,000 without adequate housing. The city’s industrial sector was in large part gone, with many workers among the casualties and the dockyard heavily damaged.
The explosion was responsible for the vast majority of Canada’s World War I-related civilian deaths and injuries, and killed more Nova Scotian residents than were killed in combat. Detailed estimates showed that among those killed, 600 were under the age of 15, 166 were labourers, 134 were soldiers and sailors, 125 were craftsmen, and 39 were workers for the railway.
Many of the wounds inflicted by the blast were permanently debilitating, with many people partially blinded by flying glass or by the flash of the explosion. Thousands of people had stopped to watch the ship burning in the harbour, with many people watching from inside buildings, leaving them directly in the path of flying glass from shattered windows. Roughly 600 people suffered eye injuries, and 38 of those lost their sight permanently.”
ii. Taman Shud Case.
“The Taman Shud Case, also known as the Mystery of the Somerton Man, is an unsolved case of an unidentified man found dead at 6:30 a.m., 1 December 1948, on Somerton beach in Adelaide, South Australia. It is named for a phrase, tamam shud, meaning “ended” or “finished”, on a scrap of the final page of The Rubaiyat, found in the hidden pocket of the man’s trousers.
Considered “one of Australia’s most profound mysteries” at the time, the case has been the subject of intense speculation over the years regarding the identity of the victim, the events leading up to his death, and the cause of death. Public interest in the case remains significant because of a number of factors: the death occurring at a time of heightened tensions during the Cold War, what appeared to be a secret code on a scrap of paper found in his pocket, the use of an undetectable poison, his lack of identification, and the possibility of unrequited love.”
iii. Bird strike. These are actually a much bigger deal than I’d imagined:
“A bird strike—sometimes called birdstrike, avian ingestion (only if in an engine), bird hit, or BASH (for Bird Aircraft Strike Hazard)—is a collision between an airborne animal (usually a bird or bat) and a human-made vehicle, especially aircraft. The term is also used for bird deaths resulting from collisions with human-made structures such as power lines, towers and wind turbines (see Bird-skyscraper collisions and Towerkill). A bug strike is an impairment of an aircraft or aviator by an airborne insect.
Bird strikes are a significant threat to flight safety, and have caused a number of accidents with human casualties. The number of major accidents involving civil aircraft is quite low and it has been estimated that there is only about 1 accident resulting in human death in one billion (10⁹) flying hours. The majority of bird strikes (65%) cause little damage to the aircraft; needless to say, the collision is usually fatal to the bird.
Most accidents occur when the bird hits the windscreen or flies into the engines. These cause annual damages that have been estimated at $400 million within the United States of America alone and up to $1.2 billion to commercial aircraft worldwide. […]
Jet engine ingestion is extremely serious due to the rotation speed of the engine fan and engine design. As the bird strikes a fan blade, that blade can be displaced into another blade and so forth, causing a cascading failure. Jet engines are particularly vulnerable during the takeoff phase when the engine is turning at a very high speed and the plane is at a low altitude where birds are more commonly found.
The force of the impact on an aircraft depends on the weight of the animal and the speed difference and direction at the impact. The energy of the impact increases with the square of the speed difference. Hence a low-speed impact of a small bird on a car windshield causes relatively little damage. High speed impacts, as with jet aircraft, can cause considerable damage and even catastrophic failure to the vehicle. The energy of a 5 kg (11 lb) bird moving at a relative velocity of 275 km/h (171 mph) approximately equals the energy of a 100 kg (220 lb) weight dropped from a height of 15 metres (49 ft). […]
The largest numbers of strikes happen during the spring and fall migrations. Bird strikes above 500 feet (150 m) altitude are about 7 times more common at night than during the day during the bird migration season.
Large land-bound animals, such as deer, can also be a problem to aircraft during takeoff and landing, and over 650 civil aircraft collisions with deer were reported in the U.S. between 1990 and 2004.
An animal hazard reported from London Stansted Airport in England is rabbits: they get run over by ground vehicles and planes, and they pass large amounts of droppings, which attract mice, which attract owls, which become another birdstrike hazard. […]
The energy that must be dissipated in the collision is approximately the relative kinetic energy [E_k] of the bird, defined by the equation [E_k = ½mv²], where [m] is the mass and [v] is the relative velocity (the difference of the velocities of the bird and the plane if they are flying in the same direction, and the sum if they are flying towards each other). Therefore the speed of the aircraft is much more important than the size of the bird when it comes to reducing energy transfer in a collision. The same can be said for jet engines: the slower the rotation of the engine, the less energy is imparted to the engine in a collision.
The body density of the bird is also a parameter that influences the amount of damage caused.”
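The article’s comparison above (a 5 kg bird at 275 km/h carrying about the same energy as a 100 kg weight dropped from 15 m) can be checked directly from [E_k = ½mv²]; here’s a minimal sketch in Python (the function names are mine, not the article’s):

```python
# Verify the bird-strike energy comparison: a 5 kg bird at a relative
# speed of 275 km/h versus a 100 kg weight dropped from 15 m.

def kinetic_energy_j(mass_kg: float, speed_kmh: float) -> float:
    """Kinetic energy in joules: E_k = (1/2) m v^2, with v in m/s."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return 0.5 * mass_kg * v ** 2

def drop_energy_j(mass_kg: float, height_m: float, g: float = 9.81) -> float:
    """Energy released by dropping a mass from a height: E_p = m g h."""
    return mass_kg * g * height_m

bird = kinetic_energy_j(5, 275)      # ~14.6 kJ
dropped = drop_energy_j(100, 15)     # ~14.7 kJ
print(f"bird: {bird:.0f} J, dropped weight: {dropped:.0f} J")
```

The two figures come out within about 1% of each other, and because the speed term is squared, doubling the aircraft’s speed quadruples the energy that must be dissipated.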
iv. Tacoma Narrows Bridge (1940).

“The 1940 Tacoma Narrows Bridge was the first Tacoma Narrows Bridge, a suspension bridge in the U.S. state of Washington that spanned the Tacoma Narrows strait of Puget Sound between Tacoma and the Kitsap Peninsula. It opened to traffic on July 1, 1940, and dramatically collapsed into Puget Sound on November 7 of the same year. At the time of its construction (and its destruction), the bridge was the third longest suspension bridge in the world in terms of main span length, behind the Golden Gate Bridge and the George Washington Bridge.
Construction on the bridge began in September 1938. From the time the deck was built, it began to move vertically in windy conditions, which led to construction workers giving the bridge the nickname Galloping Gertie. The motion was observed even when the bridge opened to the public. Several measures aimed at stopping the motion were ineffective, and the bridge’s main span finally collapsed under 40-mile-per-hour (64 km/h) wind conditions the morning of November 7, 1940.”
v. Myrtle Corbin.
“Josephine Myrtle Corbin (May 12, 1868 in Lincoln County, Tennessee – May 6, 1928 in Cleburne, Texas) was born a dipygus, meaning that she had two separate pelvises side by side from the waist down, the result of her body axis splitting as it developed. Each of her smaller inner legs was paired with one of her outer legs. She was said to be able to move her inner legs, but they were too weak for walking.”
Here’s an image from the article:
vi. Black mamba. (Add to the list of reasons why I’m glad I don’t live in Africa…)
“The black mamba (Dendroaspis polylepis), also called the common black mamba or black-mouthed mamba, is the longest venomous snake in Africa, averaging around 2.5 to 3.2 m (8.2 to 10 ft) in length, and sometimes growing to lengths of 4.45 m (14.6 ft). It is named for the black colour of the inside of the mouth rather than the colour of its scales, which varies from dull yellowish-green to a gun-metal grey. It is also the fastest snake in the world, capable of moving at 4.32 to 5.4 metres per second (16–20 km/h, 10–12 mph). The black mamba has a reputation for being very aggressive, but like most snakes it usually attempts to flee from humans unless it is threatened. Without rapid and vigorous antivenom therapy, a bite from a black mamba is almost always fatal. […]
“Among mambas, toxicity of individual specimens within the same species and subspecies can vary greatly based on several factors, including geographical region (there can be great variation in toxicity from one town or village to another). […] It is estimated that only 10 to 15 mg will kill a human adult, and its bite delivers about 50–120 mg of venom on average. The largest envenomation on record is 400 mg. Its bite is often called “the kiss of death” because, before antivenom was widely available, the mortality rate from a bite was 100%, since this species delivers a fatal dose of venom with every envenomation. Severe black mamba envenomation can kill a person in 30 minutes, but sometimes it takes up to 2–3 hours, depending on many factors. […] Due to antivenom, a bite from a black mamba is no longer a certain death sentence. But for the antivenom to be successful, vigorous antivenom therapy must be administered very rapidly after envenomation.”
Despite how dangerous these animals are, it’s not the snake species that causes the most deaths in Africa. That’s this one.
vii. Wernher von Braun.
“Wernher Magnus Maximilian, Freiherr von Braun (March 23, 1912 – June 16, 1977) was a German-born, later naturalized American rocket scientist, aerospace engineer, space architect, and one of the leading figures in the development of rocket technology in Nazi Germany during World War II and, subsequently, in the United States. He is credited as being the “Father of Rocket Science”.
In his 20s and early 30s, von Braun was the central figure in Germany’s rocket development program, responsible for the design and realization of the V-2 combat rocket during World War II. After the war, he and some select members of his rocket team were taken to the United States as part of the then-secret Operation Paperclip. Von Braun worked on the United States Army intermediate range ballistic missile (IRBM) program before his group was assimilated by NASA. Under NASA, he served as director of the newly formed Marshall Space Flight Center and as the chief architect of the Saturn V launch vehicle, the superbooster that propelled the Apollo spacecraft to the Moon.”
viii. Taiwan under Japanese rule.
“Between 1895 and 1945, Taiwan (including the Pescadores) was a dependency of the Empire of Japan. The expansion into Taiwan was a part of Imperial Japan’s general policy of southward expansion during the late 19th century.
As Taiwan was Japan’s first overseas colony, Japanese intentions were to turn the island into a showpiece “model colony”. As a result, much effort was made to improve the island’s economy, industry, and public works, and to change its culture.
The relative failures of immediate post–World War II rule by the Kuomintang led to a certain degree of nostalgia amongst the older generation of Taiwanese who experienced both. This has affected, to some degree, issues such as national identity, ethnic identity and the Taiwan independence movement. Partly as a result, the people of Taiwan in general feel much less antipathy towards the legacy of Japanese rule than other countries in Asia. […]
As part of the colonial government’s overall goal of keeping the anti-Japanese movement in check, public education became an important mechanism for facilitating both control and intercultural dialogue. While secondary education institutions were restricted mostly to Japanese nationals, the impact of compulsory primary education on the Taiwanese was immense.
On July 14, 1895, Isawa Shūji was appointed as the first Education Minister, and proposed that the Colonial Government implement a policy of compulsory primary education for children (a policy that had not even been implemented in Japan at the time). The Colonial Government established the first Western-style primary school in Taipei (the modern day Shilin Elementary School) as an experiment. Satisfied with the results, the government ordered the establishment of fourteen language schools in 1896, which were later upgraded to become public schools. During this period, schools were segregated by ethnicity. Kōgakkō (公學校, Public Schools) were established for Taiwanese children, while shōgakkō (小學校, Elementary Schools) were restricted to the children of Japanese nationals. Schools for aborigines were also established in aboriginal areas. Criteria were established for teacher selection, and several teacher training schools such as Taihoku Normal School were founded. Secondary and post-secondary educational institutions, such as Taihoku Imperial University were also established, but access was restricted primarily to Japanese nationals. The emphasis for locals was placed on vocational education, to help increase productivity. […]
By 1944, there were 944 primary schools in Taiwan with total enrollment rates of 71.3% for Taiwanese children, 86.4% for aboriginal children, and 99.6% for Japanese children in Taiwan. As a result, primary school enrollment rates in Taiwan were among the highest in Asia, second only to Japan itself.”
ix. LGM-118 Peacekeeper.
“The LGM-118A Peacekeeper, also known as the MX missile (for Missile-eXperimental), was a land-based ICBM deployed by the United States starting in 1986. The Peacekeeper was a MIRV missile; it could carry up to 10 Mk-21 re-entry vehicles, each armed with a 300-kiloton W87 warhead. A total of 50 missiles were deployed after a long and troubled development period.”
Long and troubled indeed:
“The operational missile was first manufactured in February 1984 and was deployed in December 1986 to the Strategic Air Command, 90th Strategic Missile Wing at the Francis E. Warren Air Force Base in Cheyenne, Wyoming in re-fitted Minuteman silos. However, the AIRS (Advanced Inertial Reference Sphere) guidance system was not yet ready, and the missiles were deployed with non-operational guidance units. The AIRS had 19,000 parts, and some of these required as many as 11,000 testing steps. Bogged down in paperwork due to government procurement policies, managers started bypassing official channels and buying replacement parts wherever they could be found, including claims that some of the parts were sourced at Radio Shack. In other cases, managers had created false shell companies to order needed test equipment.
When these allegations were made public by 60 Minutes and the Los Angeles Times, the fallout was immediate. Northrop was fined $130 million for late delivery, and when it retaliated against employees it was countersued in whistleblower suits. The Air Force also admitted that 11 of the 29 missiles deployed were not operational. A Congressional report stated that “Northrop was behind schedule before it even started” and noted that the Air Force knew as early as 1985 that there were “serious system deficiencies as well as a lack of effective progress”. The report complained that the Air Force should have come clean and simply pushed back the deployment date; instead, in order to foster the illusion of progress, the missiles were deployed in a non-operational state.”
x. The Middle Ages (featured). Much of this stuff’s known to me of course, but there’s still a lot of good stuff here.