# Econstudentlog

## Random stuff

i. Your Care Home in 120 Seconds. Some quotes:

“In order to get an overall estimate of mental power, psychologists have chosen a series of tasks to represent some of the basic elements of problem solving. The selection is based on looking at the sorts of problems people have to solve in everyday life, with particular attention to learning at school and then taking up occupations with varying intellectual demands. Those tasks vary somewhat, though they have a core in common.

Most tests include Vocabulary, examples: either asking for the definition of words of increasing rarity; or the names of pictured objects or activities; or the synonyms or antonyms of words.

Most tests include Reasoning, examples: either determining which pattern best completes the missing cell in a matrix (like Raven’s Matrices); or putting in the word which completes a sequence; or finding the odd word out in a series.

Most tests include visualization of shapes, examples: determining the correspondence between a 3-D figure and alternative 2-D figures; determining the pattern of holes that would result from a sequence of folds and a punch through folded paper; determining which combinations of shapes are needed to fill a larger shape.

Most tests include episodic memory, examples: number of idea units recalled across two or three stories; number of words recalled from across 1 to 4 trials of a repeated word list; number of words recalled when presented with a stimulus term in a paired-associate learning task.

Most tests include a rather simple set of basic tasks called Processing Skills. They are rather humdrum activities, like checking for errors, applying simple codes, and checking for similarities or differences in word strings or line patterns. They may seem low grade, but they are necessary when we try to organise ourselves to carry out planned activities. They tend to decline with age, leading to patchy, unreliable performance, and a tendency to muddled and even harmful errors. […]

A brain scan, for all its apparent precision, is not a direct measure of actual performance. Currently, scans are not as accurate in predicting behaviour as is a simple test of behaviour. This is a simple but crucial point: so long as you are willing to conduct actual tests, you can get a good understanding of a person’s capacities even on a very brief examination of their performance. […] There are several tests which have the benefit of being quick to administer and powerful in their predictions. […] All these tests are good at picking up illness-related cognitive changes, as in diabetes. (Intelligence testing is rarely criticized when used in medical settings). Delayed memory and working memory are both affected during diabetic crises. Digit Symbol is reduced during hypoglycaemia, as is Digits Backwards. Digit Symbol is very good at showing general cognitive changes from age 70 to 76. Again, although this is a limited time period in the elderly, the decline in speed is a notable feature. […]

The most robust and consistent predictor of cognitive change within old age, even after control for all the other variables, was the presence of the APOE e4 allele. APOE e4 carriers showed over half a standard deviation more general cognitive decline compared to noncarriers, with particularly pronounced decline in their Speed and numerically smaller, but still significant, declines in their verbal memory.

It is rare to have a big effect from one gene. Few people carry it, and it is not good to have.”

Apparently the OP had second thoughts about this query, so s/he deleted the question and marked the thread nsfw (??? …nothing remotely nsfw in that thread…). Fortunately the replies are all still there, and quite a few of them are good. I added some examples below:

“I think underestimating the domain/business side of things and focusing too much on tools and methodology. As a fairly new data scientist myself, I found myself humbled during this one project where I spent a lot of time tweaking parameters and making sure the numbers worked just right. After going into a meeting about it, it became clear pretty quickly that my little micro-optimizations were hardly important, and instead there were X Y Z big picture considerations I was missing in my analysis.”

[…]

• Forgetting to check how actionable the model (or features) are. It doesn’t matter if you have an amazing model for cancer prediction if it’s based on features from tests performed as part of the post-mortem. Similarly, predicting account fraud after the money has been transferred is not going to be very useful.

• Emphasis on lack of understanding of the business/domain.

• Lack of communication and presentation of the impact. If improving your model (which is a quarter of the overall pipeline) by 10% in reducing customer churn is worth just ~100K a year, then it may not be worth putting into production in a large company.

• Underestimating how hard it is to productionize models. This includes acting on the model’s outputs; it’s not just “run model, get score out per sample”.

• Forgetting about model and feature decay over time, concept drift.

• Underestimating the amount of time for data cleaning.

• Thinking that data cleaning errors will be complicated.

• Thinking that data cleaning will be simple to automate.

• Thinking that automation is always better than heuristics from domain experts.

• Focusing on modelling at the expense of [everything] else”

“unhealthy attachments to tools. It really doesn’t matter if you use R, Python, SAS or Excel, did you solve the problem?”

“Starting with actual modelling way too soon: you’ll end up with a model that’s really good at answering the wrong question.
First, make sure that you’re trying to answer the right question, with the right considerations. This is typically not what the client initially told you. It’s (mainly) a data scientist’s job to help the client with formulating the right question.”

iv. 5-HTTLPR: A Pointed Review. This one is hard to quote; you should read all of it. I did however decide to add a few quotes from the post, as well as a few quotes from the comments:

“…what bothers me isn’t just that people said 5-HTTLPR mattered and it didn’t. It’s that we built whole imaginary edifices, whole castles in the air on top of this idea of 5-HTTLPR mattering. We “figured out” how 5-HTTLPR exerted its effects, what parts of the brain it was active in, what sorts of things it interacted with, how its effects were enhanced or suppressed by the effects of other imaginary depression genes. This isn’t just an explorer coming back from the Orient and claiming there are unicorns there. It’s the explorer describing the life cycle of unicorns, what unicorns eat, all the different subspecies of unicorn, which cuts of unicorn meat are tastiest, and a blow-by-blow account of a wrestling match between unicorns and Bigfoot.

This is why I start worrying when people talk about how maybe the replication crisis is overblown because sometimes experiments will go differently in different contexts. The problem isn’t just that sometimes an effect exists in a cold room but not in a hot room. The problem is more like “you can get an entire field with hundreds of studies analyzing the behavior of something that doesn’t exist”. There is no amount of context-sensitivity that can help this. […] The problem is that the studies came out positive when they shouldn’t have. This was a perfectly fine thing to study before we understood genetics well, but the whole point of studying is that, once you have done 450 studies on something, you should end up with more knowledge than you started with. In this case we ended up with less. […] I think we should take a second to remember that yes, this is really bad. That this is a rare case where methodological improvements allowed a conclusive test of a popular hypothesis, and it failed badly. How many other cases like this are there, where there’s no geneticist with a 600,000 person sample size to check if it’s true or not? How many of our scientific edifices are built on air? How many useless products are out there under the guise of good science? We still don’t know.”

A few more quotes from the comment section of the post:

“most things that are obviously advantageous or deleterious in a major way aren’t gonna hover at 10%/50%/70% allele frequency.

Population variance where they claim some gene found in > [non trivial]% of the population does something big… I’ll mostly tend to roll to disbelieve.

But if someone claims a family/village with a load of weirdly depressed people (or almost any other disorder affecting anything related to the human condition in any horrifying way you can imagine) are depressed because of a genetic quirk… believable but still make sure they’ve confirmed it segregates with the condition or they’ve got decent backing.

And a large fraction of people have some kind of rare disorder […]. Long tail. Lots of disorders so quite a lot of people with something odd.

It’s not that single variants can’t have a big effect. It’s that really big effects either win and spread to everyone or lose and end up carried by a tiny minority of families where it hasn’t had time to die out yet.

Very few variants with big effect sizes are going to be half way through that process at any given time.

Exceptions are

1: mutations that confer resistance to some disease as a tradeoff for something else […] 2: Genes that confer a big advantage against something that’s only a very recent issue.”

“I think the summary could be something like:
A single gene determining 50% of the variance in any complex trait is inherently atypical, because variance depends on the population plus environment and the selection for such a gene would be strong, rapidly reducing that variance.
However, if the environment has recently changed or is highly variable, or there is a trade-off against adverse effects it is more likely.
Furthermore – if the test population is specifically engineered to target an observed trait following an apparently Mendelian inheritance pattern – such as a family group or a small genetically isolated population plus controls – 50% of the variance could easily be due to a single gene.”
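The ‘win and spread or lose and die out’ point is easy to illustrate with a toy model (my own sketch, not from the comments): under simple deterministic haploid selection, an allele’s frequency updates as p' = p(1+s) / (p(1+s) + (1-p)), and strongly selected variants spend very few generations at intermediate frequencies:

```python
def generations_in_midrange(s, p0=0.01, lo=0.1, hi=0.9, max_gen=100_000):
    """Generations a new advantageous allele spends at 'intermediate'
    frequency (between lo and hi) under deterministic haploid selection."""
    p, count = p0, 0
    for _ in range(max_gen):
        p = p * (1 + s) / (p * (1 + s) + (1 - p))  # selection update
        if lo < p < hi:
            count += 1
        if p > 0.999:  # effectively fixed
            break
    return count

# Stronger selection -> far fewer generations spent in the middle.
for s in (0.1, 0.01, 0.001):
    print(f"s = {s}: ~{generations_in_midrange(s)} generations between 10% and 90%")
```

With s = 0.1 the allele crosses the 10–90% band in a few dozen generations, while with s = 0.001 it lingers there for thousands; a big-effect variant observed at, say, 50% frequency would be a snapshot of a very brief transition.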

“The most over-used and under-analyzed statement in the academic vocabulary is surely “more research is needed”. These four words, occasionally justified when they appear as the last sentence in a Masters dissertation, are as often to be found as the coda for a mega-trial that consumed the lion’s share of a national research budget, or that of a Cochrane review which began with dozens or even hundreds of primary studies and progressively excluded most of them on the grounds that they were “methodologically flawed”. Yet however large the trial or however comprehensive the review, the answer always seems to lie just around the next empirical corner.

With due respect to all those who have used “more research is needed” to sum up months or years of their own work on a topic, this ultimate academic cliché is usually an indicator that serious scholarly thinking on the topic has ceased. It is almost never the only logical conclusion that can be drawn from a set of negative, ambiguous, incomplete or contradictory data.” […]

“Here is a quote from a typical genome-wide association study:

“Genome-wide association (GWA) studies on coronary artery disease (CAD) have been very successful, identifying a total of 32 susceptibility loci so far. Although these loci have provided valuable insights into the etiology of CAD, their cumulative effect explains surprisingly little of the total CAD heritability.”  [1]

The authors conclude that not only is more research needed into the genomic loci putatively linked to coronary artery disease, but that – precisely because the model they developed was so weak – further sets of variables (“genetic, epigenetic, transcriptomic, proteomic, metabolic and intermediate outcome variables”) should be added to it. By adding in more and more sets of variables, the authors suggest, we will progressively and substantially reduce the uncertainty about the multiple and complex gene-environment interactions that lead to coronary artery disease. […] We predict tomorrow’s weather, more or less accurately, by measuring dynamic trends in today’s air temperature, wind speed, humidity, barometric pressure and a host of other meteorological variables. But when we try to predict what the weather will be next month, the accuracy of our prediction falls to little better than random. Perhaps we should spend huge sums of money on a more sophisticated weather-prediction model, incorporating the tides on the seas of Mars and the flutter of butterflies’ wings? Of course we shouldn’t. Not only would such a hyper-inclusive model fail to improve the accuracy of our predictive modeling, there are good statistical and operational reasons why it could well make it less accurate.”

“Anyone who built software for a while knows that estimating how long something is going to take is hard. It’s hard to come up with an unbiased estimate of how long something will take, when fundamentally the work in itself is about solving something. One pet theory I’ve had for a really long time is that some of this is really just a statistical artifact.

Let’s say you estimate a project to take 1 week. Let’s say there are three equally likely outcomes: either it takes 1/2 week, or 1 week, or 2 weeks. The median outcome is actually the same as the estimate: 1 week, but the mean (aka average, aka expected value) is 7/6 = 1.17 weeks. The estimate is actually calibrated (unbiased) for the median (which is 1), but not for the mean.

A reasonable model for the “blowup factor” (actual time divided by estimated time) would be something like a log-normal distribution. If the estimate is one week, then let’s model the real outcome as a random variable distributed according to the log-normal distribution around one week. This has the property that the median of the distribution is exactly one week, but the mean is much larger […] Intuitively the reason the mean is so large is that tasks that complete faster than estimated have no way to compensate for the tasks that take much longer than estimated. We’re bounded by 0, but unbounded in the other direction.”

I like this way of conceptually framing the problem, and I definitely do not think it only applies to software development.
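The median-vs-mean asymmetry is easy to check numerically. Here is a minimal sketch (my own illustration; the uncertainty parameter sigma is an arbitrary choice) of the log-normal blowup model:

```python
import random
import statistics

random.seed(42)

# Blowup factor = actual / estimated time, modelled as log-normal with
# median 1 (i.e. mu = 0). sigma = 1 is an arbitrary illustrative choice.
sigma = 1.0
blowups = [random.lognormvariate(0.0, sigma) for _ in range(100_000)]

median_blowup = statistics.median(blowups)  # ~1.0: estimates hit the median
mean_blowup = statistics.fmean(blowups)     # ~exp(sigma**2 / 2), about 1.65

print(f"median blowup: {median_blowup:.2f}")
print(f"mean blowup:   {mean_blowup:.2f}")
```

The mean exceeds the median by the factor exp(sigma²/2), which grows rapidly with the task’s uncertainty, so the more uncertain the work, the worse the unbiased-looking estimate performs on average.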

“I filed this in my brain under “curious toy models” for a long time, occasionally thinking that it’s a neat illustration of a real world phenomenon I’ve observed. But surfing around on the interwebs one day, I encountered an interesting dataset of project estimation and actual times. Fantastic! […] The median blowup factor turns out to be exactly 1x for this dataset, whereas the mean blowup factor is 1.81x. Again, this confirms the hunch that developers estimate the median well, but the mean ends up being much higher. […]

If my model is right (a big if) then here’s what we can learn:

• People estimate the median completion time well, but not the mean.
• The mean turns out to be substantially worse than the median, due to the distribution being skewed (log-normally).
• When you add up the estimates for n tasks, things get even worse.
• Tasks with the most uncertainty (rather than the biggest size) can often dominate the mean time it takes to complete all tasks.”
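The last bullet is worth a quick simulation (again my own sketch, with made-up sigmas): ten tasks each estimated at one week, nine of them predictable and one highly uncertain.

```python
import random
import statistics

random.seed(0)

# Each task is estimated at 1 week (its median). Nine predictable tasks
# (sigma = 0.3) plus one very uncertain one (sigma = 2.0); values made up.
sigmas = [0.3] * 9 + [2.0]

totals = [
    sum(random.lognormvariate(0.0, s) for s in sigmas)
    for _ in range(50_000)
]

naive_estimate = float(len(sigmas))       # sum of medians: 10 weeks
mean_total = statistics.fmean(totals)     # dominated by the one risky task
median_total = statistics.median(totals)

print(f"naive estimate: {naive_estimate:.1f} weeks")
print(f"median actual:  {median_total:.1f} weeks")
print(f"mean actual:    {mean_total:.1f} weeks")
```

The single sigma = 2.0 task contributes exp(2²/2) ≈ 7.4 weeks to the mean all by itself, even though its median is one week, which is the sense in which the most uncertain task, not the biggest one, dominates the expected total.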

“…the relentless focus on inequality among politicians is usually quite narrow: they tend to consider inequality only in monetary terms, and to treat “inequality” as basically synonymous with “income inequality.” There are so many other types of inequality that get air time less often or not at all: inequality of talent, height, number of friends, longevity, inner peace, health, charm, gumption, intelligence, and fortitude. And finally, there is a type of inequality that everyone thinks about occasionally and that young single people obsess over almost constantly: inequality of sexual attractiveness. […] One of the useful tools that economists use to study inequality is the Gini coefficient. This is simply a number between zero and one that is meant to represent the degree of income inequality in any given nation or group. An egalitarian group in which each individual has the same income would have a Gini coefficient of zero, while an unequal group in which one individual had all the income and the rest had none would have a Gini coefficient close to one. […] Some enterprising data nerds have taken on the challenge of estimating Gini coefficients for the dating “economy.” […] The Gini coefficient for [heterosexual] men collectively is determined by [heterosexual] women’s collective preferences, and vice versa. If women all find every man equally attractive, the male dating economy will have a Gini coefficient of zero. If men all find the same one woman attractive and consider all other women unattractive, the female dating economy will have a Gini coefficient close to one.”
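For concreteness, here is a minimal sketch (mine, with made-up numbers) of the Gini computation over ‘likes’ counts:

```python
def gini(values):
    """Gini coefficient of non-negative 'incomes' (here: counts of likes).
    0 = everyone equally liked; approaches 1 as one person gets everything."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if total == 0:
        return 0.0
    # G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n, with 1-based ranks i
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * total) - (n + 1) / n

print(gini([5, 5, 5, 5]))     # 0.0  : perfectly egalitarian
print(gini([0, 0, 0, 20]))    # 0.75 : one person has all the likes (n = 4)
print(gini([1, 2, 3, 4, 5]))  # ~0.27
```

Note that in the all-to-one case the coefficient is (n − 1)/n rather than exactly 1, which is why the prose above says “close to one” for finite groups.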

“A data scientist representing the popular dating app “Hinge” reported on the Gini coefficients he had found in his company’s abundant data, treating “likes” as the equivalent of income. He reported that heterosexual females faced a Gini coefficient of 0.324, while heterosexual males faced a much higher Gini coefficient of 0.542. So neither sex has complete equality: in both cases, there are some “wealthy” people with access to more romantic experiences and some “poor” who have access to few or none. But while the situation for women is something like an economy with some poor, some middle class, and some millionaires, the situation for men is closer to a world with a small number of super-billionaires surrounded by huge masses who possess almost nothing. According to the Hinge analyst:

On a list of 149 countries’ Gini indices provided by the CIA World Factbook, this would place the female dating economy as 75th most unequal (average—think Western Europe) and the male dating economy as the 8th most unequal (kleptocracy, apartheid, perpetual civil war—think South Africa).”

Btw., I’m reasonably certain “Western Europe” as most people think of it is not average in terms of Gini, and that half-way down the list should rather be represented by some other region or country type, like, say, Mongolia or Bulgaria. A brief look at Gini lists seemed to support this impression.

“Quartz reported on this finding, and also cited another article about an experiment with Tinder that claimed that “the bottom 80% of men (in terms of attractiveness) are competing for the bottom 22% of women and the top 78% of women are competing for the top 20% of men.” These studies examined “likes” and “swipes” on Hinge and Tinder, respectively, which are required if there is to be any contact (via messages) between prospective matches. […] Yet another study, run by OkCupid on their huge datasets, found that women rate 80 percent of men as “worse-looking than medium,” and that this 80 percent “below-average” block received replies to messages only about 30 percent of the time or less. By contrast, men rate women as worse-looking than medium only about 50 percent of the time, and this 50 percent below-average block received message replies closer to 40 percent of the time or higher.

If these findings are to be believed, the great majority of women are only willing to communicate romantically with a small minority of men while most men are willing to communicate romantically with most women. […] It seems hard to avoid a basic conclusion: that the majority of women find the majority of men unattractive and not worth engaging with romantically, while the reverse is not true. Stated in another way, it seems that men collectively create a “dating economy” for women with relatively low inequality, while women collectively create a “dating economy” for men with very high inequality.”

I think the author goes a bit off the rails later in the post, but the data is interesting. It is however important to keep in mind in contexts like these that sexual selection pressures apply at multiple levels, not just one, and that partner preferences can be non-trivial to model satisfactorily; for example, as many women have learned the hard way, males may have very different standards for whom they will a) ‘engage with romantically’ and b) ‘consider a long-term partner’.

“Intermittent fasting (IF) is a term used to describe a variety of eating patterns in which no or few calories are consumed for time periods that can range from 12 hours to several days, on a recurring basis. Here we focus on the physiological responses of major organ systems, including the musculoskeletal system, to the onset of the metabolic switch – the point of negative energy balance at which liver glycogen stores are depleted and fatty acids are mobilized (typically beyond 12 hours after cessation of food intake). Emerging findings suggest the metabolic switch from glucose to fatty acid-derived ketones represents an evolutionarily conserved trigger point that shifts metabolism from lipid/cholesterol synthesis and fat storage to mobilization of fat through fatty acid oxidation and fatty-acid derived ketones, which serve to preserve muscle mass and function. Thus, IF regimens that induce the metabolic switch have the potential to improve body composition in overweight individuals. […] many experts have suggested IF regimens may have potential in the treatment of obesity and related metabolic conditions, including metabolic syndrome and type 2 diabetes.”

“In most studies, IF regimens have been shown to reduce overall fat mass and visceral fat, both of which have been linked to increased diabetes risk. IF regimens ranging in duration from 8 to 24 weeks have consistently been found to decrease insulin resistance. In line with this, many, but not all, large-scale observational studies have also shown a reduced risk of diabetes in participants following an IF eating pattern.”

“…we suggest that future randomized controlled IF trials should use biomarkers of the metabolic switch (e.g., plasma ketone levels) as a measure of compliance and the magnitude of negative energy balance during the fasting period. It is critical for this switch to occur in order to shift metabolism from lipidogenesis (fat storage) to fat mobilization for energy through fatty acid β-oxidation. […] As the health benefits and therapeutic efficacies of IF in different disease conditions emerge from RCTs, it is important to understand the current barriers to widespread use of IF by the medical and nutrition community and to develop strategies for broad implementation. One argument against IF is that, despite the plethora of animal data, some human studies have failed to show such significant benefits of IF over CR [Calorie Restriction]. Adherence to fasting interventions has been variable: some short-term studies have reported over 90% adherence, whereas in a one year ADMF study the dropout rate was 38% vs 29% in the standard caloric restriction group.”

June 2, 2019

## American Naval History (II)

I have added some observations and links related to the second half of the book’s coverage below.

“The revival of the U.S. Navy in the last two decades of the nineteenth century resulted from a variety of circumstances. The most immediate was the simple fact that the several dozen ships retained from the Civil War were getting so old that they had become antiques. […] In 1883 therefore Congress authorized the construction of three new cruisers and one dispatch vessel, its first important naval appropriation since Appomattox. […] By 1896 […] five […] new battleships had been completed and launched, and a sixth (the Iowa) joined them a year later. None of these ships had been built to meet a perceived crisis or a national emergency. Instead the United States had finally embraced the navalist argument that a mature nation-state required a naval force of the first rank. Soon enough circumstances would offer an opportunity to test both the ships and the theory. […] the United States declared war against Spain on April 25, 1898. […] Active hostilities lasted barely six months and were punctuated by two entirely one-sided naval engagements […] With the peace treaty signed in Paris in December 1898, Spain granted Cuba its independence, though the United States assumed significant authority on the island and in 1903 negotiated a lease that gave the U.S. Navy control of Guantánamo Bay on Cuba’s south coast. Spain also ceded the Philippines, Puerto Rico, Guam, and Wake Island to the United States, which paid Spain $20 million for them. Separately but simultaneously the annexation of the Kingdom of Hawaii, along with the previous annexation of Midway, gave the United States a series of Pacific Ocean stepping stones, each a potential refueling stop, that led from Hawaii to Midway, to Wake, to Guam, and to the Philippines. It made the United States not merely a continental power but a global power. […] between 1906 and 1908, no fewer than thirteen new battleships joined the fleet.”

“At root submarine warfare in the twentieth century was simply a more technologically advanced form of commerce raiding. In its objective it resembled both privateering during the American Revolution and the voyages of the CSS Alabama and other raiders during the Civil War. Yet somehow striking unarmed merchant ships from the depths, often without warning, seemed particularly heinous. Just as the use of underwater mines in the Civil War had horrified contemporaries before their use became routine, the employment of submarines against merchant shipping shocked public sentiment in the early months of World War I. […] American submarines accounted for 55 percent of all Japanese ship losses in the Pacific theater of World War II”.

“By late 1942 the first products of the Two-Ocean Navy Act of 1940 began to join the fleet. Whereas in June 1942, the United States had been hard-pressed to assemble three aircraft carriers for the Battle of Midway, a year later twenty-four new Essex-class aircraft carriers joined the fleet, each of them displacing more than 30,000 tons and carrying ninety to one hundred aircraft. Soon afterward nine more Independence-class carriers joined the fleet. […] U.S. shipyards also turned out an unprecedented number of cruisers, destroyers, and destroyer escorts, plus more than 2,700 Liberty Ships—the essential transport and cargo vessels of the war—as well as thousands of specialized landing ships essential to amphibious operations. In 1943 alone American shipyards turned out more than eight hundred of the large LSTs and LCIs, plus more than eight thousand of the smaller landing craft known as Higgins boats […] In the three weeks after D-Day, Allied landing ships and transports put more than 300,000 men, fifty thousand vehicles, and 150,000 tons of supplies ashore on Omaha Beach alone. By the first week of July the Allies had more than a million fully equipped soldiers ashore ready to break out of their enclave in Normandy and Brittany […] Having entered World War II with eleven active battleships and seven aircraft carriers, the U.S. Navy ended the war with 120 battleships and cruisers and nearly one hundred aircraft carriers (including escort carriers). Counting the smaller landing craft, the U.S. Navy listed an astonishing sixty-five thousand vessels on its register of warships and had more than four million men and women in uniform. It was more than twice as large as all the rest of the navies of the world combined. […] In the eighteen months after the end of the war, the navy processed out 3.5 million officers and enlisted personnel who returned to civilian life and their families, going back to work or attending college on the new G.I. Bill. 
In addition thousands of ships were scrapped or mothballed, assigned to what was designated as the National Defense Reserve Fleet and tied up in long rows at navy yards from California to Virginia. Though the navy boasted only about a thousand ships on active service by the end of 1946, that was still more than twice as many as before the war.”

“The Korean War ended in a stalemate, yet American forces, supported by troops from South Korea and other United Nations members, succeeded in repelling the first cross-border invasion by communist forces during the Cold War. That encouraged American lawmakers to continue support of a robust peacetime navy, and of military forces generally. Whereas U.S. military spending in 1950 had totaled $141 billion, for the rest of the 1950s it averaged over $350 billion per year. […] The overall architecture of American and Soviet rivalry influenced, and even defined, virtually every aspect of American foreign and defense policy in the Cold War years. Even when the issue at hand had little to do with the Soviet Union, every political and military dispute from 1949 onward was likely to be viewed through the prism of how it affected the East-West balance of power. […] For forty years the United States and the U.S. Navy had centered all of its attention on the rivalry with the Soviet Union. All planning for defense budgets, for force structure, and for the design of weapons systems grew out of assessments of the Soviet threat. The dissolution of the Soviet Union therefore compelled navy planners to revisit almost all of their assumptions. It did not erase the need for a global U.S. Navy, for even as the Soviet Union was collapsing, events in the Middle East and elsewhere provoked serial crises that led to the dispatch of U.S. naval combat groups to a variety of hot spots around the world. On the other hand, these new threats were so different from those of the Cold War era that the sophisticated weaponry the United States had developed to deter and, if necessary, defeat the Soviet Union did not necessarily meet the needs of what President George H. W. Bush called “a new world order.”

“The official roster of U.S. Navy warships in 2014 listed 283 “battle force ships” on active service. While that is fewer than at any time since World War I, those ships possess more capability and firepower than the rest of the world’s navies combined. […] For the present, […] as well as for the foreseeable future, the U.S. Navy remains supreme on the oceans of the world.”

## Random stuff

I have almost stopped writing posts like these, which has resulted in the accumulation of a very large number of links and studies that I figured I might like to blog at some point. This post is mainly an attempt to deal with the backlog – I won’t cover the material in too much detail.

i. Do Bullies Have More Sex? The answer seems to be a qualified yes. A few quotes:

“Sexual behavior during adolescence is fairly widespread in Western cultures (Zimmer-Gembeck and Helfland 2008) with nearly two thirds of youth having had sexual intercourse by the age of 19 (Finer and Philbin 2013). […] Bullying behavior may aid in intrasexual competition and intersexual selection as a strategy when competing for mates. In line with this contention, bullying has been linked to having a higher number of dating and sexual partners (Dane et al. 2017; Volk et al. 2015). This may be one reason why adolescence coincides with a peak in antisocial or aggressive behaviors, such as bullying (Volk et al. 2006). However, not all adolescents benefit from bullying. Instead, bullying may only benefit adolescents with certain personality traits who are willing and able to leverage bullying as a strategy for engaging in sexual behavior with opposite-sex peers. Therefore, we used two independent cross-sectional samples of older and younger adolescents to determine which personality traits, if any, are associated with leveraging bullying into opportunities for sexual behavior.”

“…bullying by males signals the ability to provide good genes, material resources, and protect offspring (Buss and Shackelford 1997; Volk et al. 2012) because bullying others is a way of displaying attractive qualities such as strength and dominance (Gallup et al. 2007; Reijntjes et al. 2013). As a result, this makes bullies attractive sexual partners to opposite-sex peers while simultaneously suppressing the sexual success of same-sex rivals (Gallup et al. 2011; Koh and Wong 2015; Zimmer-Gembeck et al. 2001). Females may denigrate other females, targeting their appearance and sexual promiscuity (Leenaars et al. 2008; Vaillancourt 2013), which are two qualities relating to male mate preferences. Consequently, derogating these qualities lowers a rival’s appeal as a mate and also intimidates or coerces rivals into withdrawing from intrasexual competition (Campbell 2013; Dane et al. 2017; Fisher and Cox 2009; Vaillancourt 2013). Thus, males may use direct forms of bullying (e.g., physical, verbal) to facilitate intersexual selection (i.e., appear attractive to females), while females may use relational bullying to facilitate intrasexual competition, by making rivals appear less attractive to males.”

The study relies on the use of self-report data, which I find very problematic – so I won’t go into the results here. I’m not quite clear on how those studies mentioned in the discussion ‘have found self-report data [to be] valid under conditions of confidentiality’ – and I remain skeptical. You’ll usually want data from independent observers (e.g. teacher or peer observations) when analyzing these kinds of things. Note in the context of the self-report data problem that if there’s a strong stigma associated with being bullied (there often is, or bullying wouldn’t work as well), asking people if they have been bullied is not much better than asking people if they’re bullying others.

ii. Some topical advice that some people might soon regret not having followed, from the wonderful Things I Learn From My Patients thread:

“If you are a teenage boy experimenting with fireworks, do not empty the gunpowder from a dozen fireworks and try to mix it in your mother’s blender. But if you do decide to do that, don’t hold the lid down with your other hand and stand right over it. This will result in the traumatic amputation of several fingers, burned and skinned forearms, glass shrapnel in your face, and a couple of badly scratched corneas as a start. You will spend months in rehab and never be able to use your left hand again.”

iii. I haven’t talked about the AlphaZero-Stockfish match, but I was of course aware of it and did read a bit about that stuff. Here’s a reddit thread where one of the Stockfish programmers answers questions about the match. A few quotes:

“Which of the two is stronger under ideal conditions is, to me, neither particularly interesting (they are so different that it’s kind of like comparing the maximum speeds of a fish and a bird) nor particularly important (since there is only one of them that you and I can download and run anyway). What is super interesting is that we have two such radically different ways to create a computer chess playing entity with superhuman abilities. […] I don’t think there is anything to learn from AlphaZero that is applicable to Stockfish. They are just too different, you can’t transfer ideas from one to the other.”

“Based on the 100 games played, AlphaZero seems to be about 100 Elo points stronger under the conditions they used. The current development version of Stockfish is something like 40 Elo points stronger than the version used in Google’s experiment. There is a version of Stockfish translated to hand-written x86-64 assembly language that’s about 15 Elo points stronger still. This adds up to roughly half the Elo difference between AlphaZero and Stockfish shown in Google’s experiment.”
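
For reference, an Elo gap translates into an expected score via the standard logistic formula; here is a quick sketch (nothing in it is specific to Stockfish or AlphaZero):

```python
def expected_score(elo_diff):
    """Expected score (win = 1, draw = 0.5) for the stronger player,
    given the Elo rating gap in its favour."""
    return 1 / (1 + 10 ** (-elo_diff / 400))

# A ~100 Elo edge corresponds to scoring roughly 64% of the points;
# the ~55 Elo (40 + 15) of improvements mentioned above would thus
# close about half of the observed gap.
```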

“It seems that Stockfish was playing with only 1 GB for transposition tables (the area of memory used to store data about the positions previously encountered in the search), which is way too little when running with 64 threads.” [I seem to recall a comp sci guy observing elsewhere that this was less than what was available to his smartphone version of Stockfish, but I didn’t bookmark that comment].

“The time control was a very artificial fixed 1 minute/move. That’s not how chess is traditionally played. Quite a lot of effort has gone into Stockfish’s time management. It’s pretty good at deciding when to move quickly, and when to spend a lot of time on a critical decision. In a fixed time per move game, it will often happen that the engine discovers that there is a problem with the move it wants to play just before the time is out. In a regular time control, it would then spend extra time analysing all alternative moves and trying to find a better one. When you force it to move after exactly one minute, it will play the move it already knows is bad. There is no doubt that this will cause it to lose many games it would otherwise have drawn.”

“Thrombolysis has been rigorously studied in >60,000 patients for acute thrombotic myocardial infarction, and is proven to reduce mortality. It is theorized that thrombolysis may similarly benefit ischemic stroke patients, though a much smaller number (8120) has been studied in relevant, large scale, high quality trials thus far. […] There are 12 such trials 1-12. Despite the temptation to pool these data the studies are clinically heterogeneous. […] Data from multiple trials must be clinically and statistically homogenous to be validly pooled.14 Large thrombolytic studies demonstrate wide variations in anatomic stroke regions, small- versus large-vessel occlusion, clinical severity, age, vital sign parameters, stroke scale scores, and times of administration. […] Examining each study individually is therefore, in our opinion, both more valid and more instructive. […] Two of twelve studies suggest a benefit […] In comparison, twice as many studies showed harm and these were stopped early. This early stoppage means that the number of subjects in studies demonstrating harm would have included over 2400 subjects based on originally intended enrollments. Pooled analyses are therefore missing these phantom data, which would have further eroded any aggregate benefits. In their absence, any pooled analysis is biased toward benefit. Despite this, there remain five times as many trials showing harm or no benefit (n=10) as those concluding benefit (n=2), and 6675 subjects in trials demonstrating no benefit compared to 1445 subjects in trials concluding benefit.”

“Thrombolytics for ischemic stroke may be harmful or beneficial. The answer remains elusive. We struggled therefore, debating between a ‘yellow’ or ‘red’ light for our recommendation. However, over 60,000 subjects in trials of thrombolytics for coronary thrombosis suggest a consistent beneficial effect across groups and subgroups, with no studies suggesting harm. This consistency was found despite a very small mortality benefit (2.5%), and a very narrow therapeutic window (1% major bleeding). In comparison, the variation in trial results of thrombolytics for stroke and the daunting but consistent adverse effect rate caused by ICH suggested to us that thrombolytics are dangerous unless further study exonerates their use.”

“There is a Cochrane review that pooled estimates of effect. 17 We do not endorse this choice because of clinical heterogeneity. However, we present the NNT’s from the pooled analysis for the reader’s benefit. The Cochrane review suggested a 6% reduction in disability […] with thrombolytics. This would mean that 17 were treated for every 1 avoiding an unfavorable outcome. The review also noted a 1% increase in mortality (1 in 100 patients die because of thrombolytics) and a 5% increase in nonfatal intracranial hemorrhage (1 in 20), for a total of 6% harmed (1 in 17 suffers death or brain hemorrhage).”
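
The NNT figures in the quote are just reciprocals of the absolute risk differences; a minimal check of the arithmetic:

```python
def nnt(absolute_risk_difference):
    """Number needed to treat (or to harm): the reciprocal of
    the absolute risk difference."""
    return 1 / absolute_risk_difference

# 6% reduction in disability  -> ~17 treated per 1 benefiting
# 1% increase in mortality    -> ~1 in 100 harmed
# 5% increase in nonfatal ICH -> ~1 in 20 harmed
```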

v. Suicide attempts in Asperger Syndrome. An interesting finding: “Over 35% of individuals with AS reported that they had attempted suicide in the past.”

“374 adults (256 men and 118 women) were diagnosed with Asperger’s syndrome in the study period. 243 (66%) of 367 respondents self-reported suicidal ideation, 127 (35%) of 365 respondents self-reported plans or attempts at suicide, and 116 (31%) of 368 respondents self-reported depression. Adults with Asperger’s syndrome were significantly more likely to report lifetime experience of suicidal ideation than were individuals from a general UK population sample (odds ratio 9·6 [95% CI 7·6–11·9], p<0·0001), people with one, two, or more medical illnesses (p<0·0001), or people with psychotic illness (p=0·019). […] Lifetime experience of depression (p=0·787), suicidal ideation (p=0·164), and suicide plans or attempts (p=0·06) did not differ significantly between men and women […] Individuals who reported suicide plans or attempts had significantly higher Autism Spectrum Quotient scores than those who did not […] Empathy Quotient scores and ages did not differ between individuals who did or did not report suicide plans or attempts (table 4). Patients with self-reported depression or suicidal ideation did not have significantly higher Autism Spectrum Quotient scores, Empathy Quotient scores, or age than did those without depression or suicidal ideation”.

The fact that people with Asperger’s are more likely to be depressed and to contemplate suicide is consistent with previous observations that they’re also more likely to die from suicide – for example, a paper I blogged a while back found that in that particular study (a large Swedish population-based cohort), people with ASD were more than 7 times as likely to die from suicide as the comparable controls.

This link has some great graphs and tables of suicide data from the US.

Also autism-related: Increased perception of loudness in autism. This is one of the ‘important ones’ for me personally – I am much more sound-sensitive than most people are.

“Earlier trials have shown that a routine invasive strategy improves outcomes in patients with acute coronary syndromes without ST-segment elevation. However, the optimal timing of such intervention remains uncertain. […] We randomly assigned 3031 patients with acute coronary syndromes to undergo either routine early intervention (coronary angiography ≤24 hours after randomization) or delayed intervention (coronary angiography ≥36 hours after randomization). The primary outcome was a composite of death, myocardial infarction, or stroke at 6 months. A prespecified secondary outcome was death, myocardial infarction, or refractory ischemia at 6 months. […] Early intervention did not differ greatly from delayed intervention in preventing the primary outcome, but it did reduce the rate of the composite secondary outcome of death, myocardial infarction, or refractory ischemia and was superior to delayed intervention in high-risk patients.”

vii. Some wikipedia links of interest:

Behrens–Fisher problem.
Sailing ship tactics (I figured I had to read up on this if I were to get anything out of the Aubrey-Maturin books).
Anatomical terms of muscle.
Phatic expression (“a phatic expression […] is communication which serves a social function such as small talk and social pleasantries that don’t seek or offer any information of value.”)
Three-domain system.
Beringian wolf (featured).
Subdural hygroma.
Cayley graph.
Schur polynomial.
Solar neutrino problem.
True polar wander.

viii. Determinant versus permanent (mathematics – technical).

ix. Some years ago I wrote a few English-language posts about various statistical/demographic properties of immigrants living in Denmark, based on numbers included in a publication by Statistics Denmark which was only published in Danish; the posts consisted of translations of the observations included in that publication. I briefly considered doing the same thing again when the 2017 data arrived, but I decided against it: I recalled that those posts took a lot of time to write back then, and it didn’t seem worth the effort. Danish readers might however be interested in having a look at the data, if they haven’t already – here’s a link to the publication Indvandrere i Danmark 2017.

x. A banter blitz session with grandmaster Peter Svidler, who recently became the first player ever to win the Russian Chess Championship 8 times. He’s currently in shared second place in the World Rapid Championship after 10 rounds and is now in the top 10 of the live rating list in both classical and rapid – it seems like he’s had a very decent year.

xi. I recently discovered Dr. Whitecoat’s blog. The patient encounters are often interesting.

December 28, 2017

## The Antarctic

“A very poor book with poor coverage, mostly about politics and history (and a long collection of names of treaties and organizations). I would definitely not have finished it if it were much longer than it is.”

That was what I wrote about the book in my goodreads review. I was strongly debating whether to blog it at all, but in the end I decided to settle for some very lazy coverage consisting only of links to content covered in the book. I cover it here mainly so that I have at least some chance of later remembering which kinds of things it dealt with.

## Random stuff

It’s been a long time since I last posted one of these posts, so a great number of interesting links have accumulated in my bookmarks. I intend to include a large number of them in this post, which of course means that I won’t cover each specific link in anywhere near the amount of detail it deserves, but that can’t be helped.

“For those diagnosed with ASD in childhood, most will become adults with a significant degree of disability […] Seltzer et al […] concluded that, despite considerable heterogeneity in social outcomes, “few adults with autism live independently, marry, go to college, work in competitive jobs or develop a large network of friends”. However, the trend within individuals is for some functional improvement over time, as well as a decrease in autistic symptoms […]. Some authors suggest that a sub-group of 15–30% of adults with autism will show more positive outcomes […]. Howlin et al. (2004), and Cederlund et al. (2008) assigned global ratings of social functioning based on achieving independence, friendships/a steady relationship, and education and/or a job. These two papers described respectively 22% and 27% of groups of higher functioning (IQ above 70) ASD adults as attaining “Very Good” or “Good” outcomes.”

ii. Premature mortality in autism spectrum disorder. This is a Swedish matched case cohort study. Some observations from the paper:

“The aim of the current study was to analyse all-cause and cause-specific mortality in ASD using nationwide Swedish population-based registers. A further aim was to address the role of intellectual disability and gender as possible moderators of mortality and causes of death in ASD. […] Odds ratios (ORs) were calculated for a population-based cohort of ASD probands (n = 27 122, diagnosed between 1987 and 2009) compared with gender-, age- and county of residence-matched controls (n = 2 672 185). […] During the observed period, 24 358 (0.91%) individuals in the general population died, whereas the corresponding figure for individuals with ASD was 706 (2.60%; OR = 2.56; 95% CI 2.38–2.76). Cause-specific analyses showed elevated mortality in ASD for almost all analysed diagnostic categories. Mortality and patterns for cause-specific mortality were partly moderated by gender and general intellectual ability. […] Premature mortality was markedly increased in ASD owing to a multitude of medical conditions. […] Mortality was significantly elevated in both genders relative to the general population (males: OR = 2.87; females OR = 2.24)”.

“Individuals in the control group died at a mean age of 70.20 years (s.d. = 24.16, median = 80), whereas the corresponding figure for the entire ASD group was 53.87 years (s.d. = 24.78, median = 55), for low-functioning ASD 39.50 years (s.d. = 21.55, median = 40) and high-functioning ASD 58.39 years (s.d. = 24.01, median = 63) respectively. […] Significantly elevated mortality was noted among individuals with ASD in all analysed categories of specific causes of death except for infections […] ORs were highest in cases of mortality because of diseases of the nervous system (OR = 7.49) and because of suicide (OR = 7.55), in comparison with matched general population controls.”

iii. Adhesive capsulitis of shoulder. This one is related to a health scare I had a few months ago. A few quotes:

“Adhesive capsulitis (also known as frozen shoulder) is a painful and disabling disorder of unclear cause in which the shoulder capsule, the connective tissue surrounding the glenohumeral joint of the shoulder, becomes inflamed and stiff, greatly restricting motion and causing chronic pain. Pain is usually constant, worse at night, and with cold weather. Certain movements or bumps can provoke episodes of tremendous pain and cramping. […] People who suffer from adhesive capsulitis usually experience severe pain and sleep deprivation for prolonged periods due to pain that gets worse when lying still and restricted movement/positions. The condition can lead to depression, problems in the neck and back, and severe weight loss due to long-term lack of deep sleep. People who suffer from adhesive capsulitis may have extreme difficulty concentrating, working, or performing daily life activities for extended periods of time.”

The prevalence of a diabetic condition and adhesive capsulitis of the shoulder.
“Adhesive capsulitis is characterized by a progressive and painful loss of shoulder motion of unknown etiology. Previous studies have found the prevalence of adhesive capsulitis to be slightly greater than 2% in the general population. However, the relationship between adhesive capsulitis and diabetes mellitus (DM) is well documented, with the incidence of adhesive capsulitis being two to four times higher in diabetics than in the general population. It affects about 20% of people with diabetes and has been described as the most disabling of the common musculoskeletal manifestations of diabetes.”

“Patients with type I diabetes have a 40% chance of developing a frozen shoulder in their lifetimes […] Dominant arm involvement has been shown to have a good prognosis; associated intrinsic pathology or insulin-dependent diabetes of more than 10 years are poor prognostic indicators.15 Three stages of adhesive capsulitis have been described, with each phase lasting for about 6 months. The first stage is the freezing stage in which there is an insidious onset of pain. At the end of this period, shoulder ROM [range of motion] becomes limited. The second stage is the frozen stage, in which there might be a reduction in pain; however, there is still restricted ROM. The third stage is the thawing stage, in which ROM improves, but can take between 12 and 42 months to do so. Most patients regain a full ROM; however, 10% to 15% of patients suffer from continued pain and limited ROM.”

Musculoskeletal Complications in Type 1 Diabetes.
“The development of periarticular thickening of skin on the hands and limited joint mobility (cheiroarthropathy) is associated with diabetes and can lead to significant disability. The objective of this study was to describe the prevalence of cheiroarthropathy in the well-characterized Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) cohort and examine associated risk factors […] This cross-sectional analysis was performed in 1,217 participants (95% of the active cohort) in EDIC years 18/19 after an average of 24 years of follow-up. Cheiroarthropathy — defined as the presence of any one of the following: adhesive capsulitis, carpal tunnel syndrome, flexor tenosynovitis, Dupuytren’s contracture, or a positive prayer sign [related link] — was assessed using a targeted medical history and standardized physical examination. […] Cheiroarthropathy was present in 66% of subjects […] Cheiroarthropathy is common in people with type 1 diabetes of long duration (∼30 years) and is related to longer duration and higher levels of glycemia. Clinicians should include cheiroarthropathy in their routine history and physical examination of patients with type 1 diabetes because it causes clinically significant functional disability.”

Musculoskeletal disorders in diabetes mellitus: an update.
“Diabetes mellitus (DM) is associated with several musculoskeletal disorders. […] The exact pathophysiology of most of these musculoskeletal disorders remains obscure. Connective tissue disorders, neuropathy, vasculopathy or combinations of these problems, may underlie the increased incidence of musculoskeletal disorders in DM. The development of musculoskeletal disorders is dependent on age and on the duration of DM; however, it has been difficult to show a direct correlation with the metabolic control of DM.”

Musculoskeletal Disorders of the Hand and Shoulder in Patients with Diabetes.
“In addition to micro- and macroangiopathic complications, diabetes mellitus is also associated with several musculoskeletal disorders of the hand and shoulder that can be debilitating (1,2). Limited joint mobility, also termed diabetic hand syndrome or cheiropathy (3), is characterized by skin thickening over the dorsum of the hands and restricted mobility of multiple joints. While this syndrome is painless and usually not disabling (2,4), other musculoskeletal problems occur with increased frequency in diabetic patients, including Dupuytren’s disease [“Dupuytren’s disease […] may be observed in up to 42% of adults with diabetes mellitus, typically in patients with long-standing T1D” – link], carpal tunnel syndrome [“The prevalence of [carpal tunnel syndrome, CTS] in patients with diabetes has been estimated at 11–30 % […], and is dependent on the duration of diabetes. […] Type I DM patients have a high prevalence of CTS with increasing duration of disease, up to 85 % after 54 years of DM” – link], palmar flexor tenosynovitis or trigger finger [“The incidence of trigger finger [/stenosing tenosynovitis] is 7–20 % of patients with diabetes comparing to only about 1–2 % in nondiabetic patients” – link], and adhesive capsulitis of the shoulder (5–10). The association of adhesive capsulitis with pain, swelling, dystrophic skin, and vasomotor instability of the hand constitutes the “shoulder-hand syndrome,” a rare but potentially disabling manifestation of diabetes (1,2).”

“The prevalence of musculoskeletal disorders was greater in diabetic patients than in control patients (36% vs. 9%, P < 0.01). Adhesive capsulitis was present in 12% of the diabetic patients and none of the control patients (P < 0.01), Dupuytren’s disease in 16% of diabetic and 3% of control patients (P < 0.01), and flexor tenosynovitis in 12% of diabetic and 2% of control patients (P < 0.04), while carpal tunnel syndrome occurred in 12% of diabetic patients and 8% of control patients (P = 0.29). Musculoskeletal disorders were more common in patients with type 1 diabetes than in those with type 2 diabetes […]. Forty-three patients [out of 100] with type 1 diabetes had either hand or shoulder disorders (37 with hand disorders, 6 with adhesive capsulitis of the shoulder, and 10 with both syndromes), compared with 28 patients [again out of 100] with type 2 diabetes (24 with hand disorders, 4 with adhesive capsulitis of the shoulder, and 3 with both syndromes, P = 0.03).”

Association of Diabetes Mellitus With the Risk of Developing Adhesive Capsulitis of the Shoulder: A Longitudinal Population-Based Followup Study.
“A total of 78,827 subjects with at least 2 ambulatory care visits with a principal diagnosis of DM in 2001 were recruited for the DM group. The non-DM group comprised 236,481 age- and sex-matched randomly sampled subjects without DM. […] During a 3-year followup period, 946 subjects (1.20%) in the DM group and 2,254 subjects (0.95%) in the non-DM group developed ACS. The crude HR of developing ACS for the DM group compared to the non-DM group was 1.333 […] the association between DM and ACS may be explained at least in part by a DM-related chronic inflammatory process with increased growth factor expression, which in turn leads to joint synovitis and subsequent capsular fibrosis.”

It is important to note when interpreting the results of the above paper that they are based on Taiwanese population-level data, and that type 1 diabetes – which is obviously the high-risk diabetes subgroup in this particular context – is rare in East Asian populations (as observed in Sperling et al., “A child in Helsinki, Finland is almost 400 times more likely to develop diabetes than a child in Sichuan, China”; the Taiwanese incidence of type 1 DM in children is estimated at ~5 in 100,000).
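
The crude 3-year cumulative incidences in the quote above can be recovered directly from the reported counts. Note that the crude risk ratio (~1.26) comes out slightly below the quoted crude HR of 1.333, which is based on a time-to-event model rather than raw proportions:

```python
# Counts as reported in the quoted paper
dm_cases, dm_total = 946, 78_827          # DM group
ctrl_cases, ctrl_total = 2_254, 236_481   # matched non-DM group

risk_dm = dm_cases / dm_total        # ~0.0120, i.e. the quoted 1.20%
risk_ctrl = ctrl_cases / ctrl_total  # ~0.0095, i.e. the quoted 0.95%
crude_risk_ratio = risk_dm / risk_ctrl   # ~1.26
```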

iv. Parents who let diabetic son starve to death found guilty of first-degree murder. It’s been a while since I last saw one of these ‘boost-your-faith-in-humanity’ cases, but in my impression they do pop up every now and then. I should probably keep one of these articles at hand in case my parents ever express worry to me that they weren’t good parents; they could have done a lot worse…

v. Freedom of medicine. One quote from the conclusion of Cochran’s post:

“[I]t is surely possible to materially improve the efficacy of drug development, of medical research as a whole. We’re doing better than we did 500 years ago – although probably worse than we did 50 years ago. But I would approach it by learning as much as possible about medical history, demographics, epidemiology, evolutionary medicine, theory of senescence, genetics, etc. Read Koch, not Hayek. There is no royal road to medical progress.”

I agree, and I was considering including some related comments and observations about health economics in this post – however, I ultimately decided against it, in part because the post was growing unwieldy; I might include those observations in another post later on. Here’s another, somewhat older Westhunt post I at some point decided to bookmark – I particularly like the following neat quote from the comments, which expresses a view I have of course expressed myself in the past here on this blog:

“When you think about it, falsehoods, stupid crap, make the best group identifiers, because anyone might agree with you when you’re obviously right. Signing up to clear nonsense is a better test of group loyalty. A true friend is with you when you’re wrong. Ideally, not just wrong, but barking mad, rolling around in your own vomit wrong.”

“Approximately 59% of all health care expenditures attributed to diabetes are for health resources used by the population aged 65 years and older, much of which is borne by the Medicare program […]. The population 45–64 years of age incurs 33% of diabetes-attributed costs, with the remaining 8% incurred by the population under 45 years of age. The annual attributed health care cost per person with diabetes […] increases with age, primarily as a result of increased use of hospital inpatient and nursing facility resources, physician office visits, and prescription medications. Dividing the total attributed health care expenditures by the number of people with diabetes, we estimate the average annual excess expenditures for the population aged under 45 years, 45–64 years, and 65 years and above, respectively, at \$4,394, \$5,611, and \$11,825.”

“Our logistic regression analysis with NHIS data suggests that diabetes is associated with a 2.4 percentage point increase in the likelihood of leaving the workforce for disability. This equates to approximately 541,000 working-age adults leaving the workforce prematurely and 130 million lost workdays in 2012. For the population that leaves the workforce early because of diabetes-associated disability, we estimate that their average daily earnings would have been \$166 per person (with the amount varying by demographic). Presenteeism accounted for 30% of the indirect cost of diabetes. The estimate of a 6.6% annual decline in productivity attributed to diabetes (in excess of the estimated decline in the absence of diabetes) equates to 113 million lost workdays per year.”
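
The lost-workdays figure is easy to sanity-check: 130 million days spread across the 541,000 people estimated to have left the workforce works out to roughly one full working year (~240 workdays) per person:

```python
people_leaving = 541_000        # working-age adults leaving the workforce
lost_workdays = 130_000_000     # lost workdays in 2012, per the quote

days_per_person = lost_workdays / people_leaving  # ~240, i.e. ~one working year each
```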

viii. Effect of longer term modest salt reduction on blood pressure: Cochrane systematic review and meta-analysis of randomised trials. Did I blog this paper at some point in the past? I could not find any coverage of it on the blog when I searched for it so I decided to include it here, even if I have a nagging suspicion I may have talked about these findings before. What did they find? The short version is this:

“A modest reduction in salt intake for four or more weeks causes significant and, from a population viewpoint, important falls in blood pressure in both hypertensive and normotensive individuals, irrespective of sex and ethnic group. Salt reduction is associated with a small physiological increase in plasma renin activity, aldosterone, and noradrenaline and no significant change in lipid concentrations. These results support a reduction in population salt intake, which will lower population blood pressure and thereby reduce cardiovascular disease.”

Heroic Age of Antarctic Exploration (featured).

Kuiper belt (featured).

Treason (one quote worth including here: “Currently, the consensus among major Islamic schools is that apostasy (leaving Islam) is considered treason and that the penalty is death; this is supported not in the Quran but in the Hadith.[42][43][44][45][46][47]”).

Black Death (“Over 60% of Norway’s population died in 1348–1350”).

Renault FT (“among the most revolutionary and influential tank designs in history”).

Weierstrass function (“an example of a pathological real-valued function on the real line. The function has the property of being continuous everywhere but differentiable nowhere”).

Void coefficient. (“a number that can be used to estimate how much the reactivity of a nuclear reactor changes as voids (typically steam bubbles) form in the reactor moderator or coolant. […] Reactivity is directly related to the tendency of the reactor core to change power level: if reactivity is positive, the core power tends to increase; if it is negative, the core power tends to decrease; if it is zero, the core power tends to remain stable. […] A positive void coefficient means that the reactivity increases as the void content inside the reactor increases due to increased boiling or loss of coolant; for example, if the coolant acts as a neutron absorber. If the void coefficient is large enough and control systems do not respond quickly enough, this can form a positive feedback loop which can quickly boil all the coolant in the reactor. This happened in the RBMK reactor that was destroyed in the Chernobyl disaster.”).

Gregor MacGregor (featured) (“a Scottish soldier, adventurer, and confidence trickster […] MacGregor’s Poyais scheme has been called one of the most brazen confidence tricks in history.”).

March 10, 2017

## Random stuff

I find it difficult to find the motivation to finish the half-finished drafts I have lying around, so this will have to do. Some random stuff below.

i.

(15,000 views… In some sense that seems really ‘unfair’ to me, but on the other hand I doubt either Beethoven or Gilels cares; they’re both long dead, after all…)

ii. New/newish words I’ve encountered in books, on vocabulary.com or elsewhere:

iii. A lecture:

It’s been a long time since I watched it so I don’t have anything intelligent to say about it now, but I figured it might be of interest to one or two of the people who still subscribe to the blog despite the infrequent updates.

iv. A few wikipedia articles (I won’t comment much on the contents or quote extensively from the articles the way I’ve done in previous wikipedia posts – the links shall have to suffice for now):

Russian political jokes. Some of those made me laugh (e.g. this one: “A judge walks out of his chambers laughing his head off. A colleague approaches him and asks why he is laughing. “I just heard the funniest joke in the world!” “Well, go ahead, tell me!” says the other judge. “I can’t – I just gave someone ten years for it!”).

v. World War 2, if you think of it as a movie, has a highly unrealistic and implausible plot, according to this amusing post by Scott Alexander. Having recently read a rather long book about these topics, one aspect I’d have added had I written the piece myself is that the setting seems even more implausible when you consider how many presumably quite smart people were – at least in retrospect – unbelievably stupid when it came to Hitler’s ideas and intentions before the war. Going back to Churchill’s own life, I’d also add that a movie about his life during the war – which you could probably make relatively easily just by basing it on his own copious and widely shared notes – could turn out quite decent. His own comments, remarks, and observations certainly made for a great book.

May 15, 2016

## Random Stuff

i. Some new words I’ve encountered (not all of them are from vocabulary.com, but many of them are):

ii. A lecture:

I got annoyed a few times by the fact that you can’t tell where he’s pointing when he’s talking about the slides, which makes the lecture harder to follow than it ought to be, but it’s still an interesting lecture.

iii. Facts about Dihydrogen Monoxide. Includes coverage of important neglected topics such as ‘What is the link between Dihydrogen Monoxide and school violence?’ After reading the article, I am frankly outraged that this stuff’s still legal!

iv. Some wikipedia links of interest:

Steganography […] is the practice of concealing a file, message, image, or video within another file, message, image, or video. The word steganography combines the Greek words steganos (στεγανός), meaning “covered, concealed, or protected”, and graphein (γράφειν) meaning “writing”. […] Generally, the hidden messages appear to be (or be part of) something else: images, articles, shopping lists, or some other cover text. For example, the hidden message may be in invisible ink between the visible lines of a private letter. Some implementations of steganography that lack a shared secret are forms of security through obscurity, whereas key-dependent steganographic schemes adhere to Kerckhoffs’s principle.[1]

The advantage of steganography over cryptography alone is that the intended secret message does not attract attention to itself as an object of scrutiny. Plainly visible encrypted messages—no matter how unbreakable—arouse interest, and may in themselves be incriminating in countries where encryption is illegal.[2] Thus, whereas cryptography is the practice of protecting the contents of a message alone, steganography is concerned with concealing the fact that a secret message is being sent, as well as concealing the contents of the message.”
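The basic idea described above can be illustrated with a toy least-significant-bit scheme – one common textbook approach, not the method of any particular tool. The function names and the choice of embedding one bit per cover byte are my own; real implementations typically also encrypt the payload and spread it across the cover:

```python
# Toy LSB steganography sketch: hide a message in the lowest bit of each
# byte of some cover data (e.g. raw pixel bytes of an uncompressed image).
# Each cover byte changes by at most 1, so the cover looks unchanged.

def hide(cover: bytes, message: bytes) -> bytes:
    """Embed the message, MSB-first, into the least-significant bits."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    out = bytearray(cover)
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def reveal(stego: bytes, length: int) -> bytes:
    """Read `length` bytes back out of the least-significant bits."""
    bits = [b & 1 for b in stego[:length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[n:n + 8]))
        for n in range(0, len(bits), 8)
    )

cover = bytes(range(256)) * 2            # stand-in for image pixel data
stego = hide(cover, b"attack at dawn")
assert reveal(stego, 14) == b"attack at dawn"
```

Note that this sketch has no shared secret at all, so per the quoted passage it is pure security through obscurity; a Kerckhoffs-compliant scheme would make the embedding positions depend on a key.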

H. H. Holmes. A really nice guy.

Herman Webster Mudgett (May 16, 1861 – May 7, 1896), better known under the name of Dr. Henry Howard Holmes or more commonly just H. H. Holmes, was one of the first documented serial killers in the modern sense of the term.[1][2] In Chicago, at the time of the 1893 World’s Columbian Exposition, Holmes opened a hotel which he had designed and built for himself specifically with murder in mind, and which was the location of many of his murders. While he confessed to 27 murders, of which nine were confirmed, his actual body count could be up to 200.[3] He brought an unknown number of his victims to his World’s Fair Hotel, located about 3 miles (4.8 km) west of the fair, which was held in Jackson Park. Besides being a serial killer, H. H. Holmes was also a successful con artist and a bigamist. […]

Holmes purchased an empty lot across from the drugstore where he built his three-story, block-long hotel building. Because of its enormous structure, local people dubbed it “The Castle”. The building was 162 feet long and 50 feet wide. […] The ground floor of the Castle contained Holmes’ own relocated drugstore and various shops, while the upper two floors contained his personal office and a labyrinth of rooms with doorways opening to brick walls, oddly-angled hallways, stairways leading to nowhere, doors that could only be opened from the outside and a host of other strange and deceptive constructions. Holmes was constantly firing and hiring different workers during the construction of the Castle, claiming that “they were doing incompetent work.” His actual reason was to ensure that he was the only one who fully understood the design of the building.[3]

“The Minnesota Starvation Experiment […] was a clinical study performed at the University of Minnesota between November 19, 1944 and December 20, 1945. The investigation was designed to determine the physiological and psychological effects of severe and prolonged dietary restriction and the effectiveness of dietary rehabilitation strategies.

The motivation of the study was twofold: First, to produce a definitive treatise on the subject of human starvation based on a laboratory simulation of severe famine and, second, to use the scientific results produced to guide the Allied relief assistance to famine victims in Europe and Asia at the end of World War II. It was recognized early in 1944 that millions of people were in grave danger of mass famine as a result of the conflict, and information was needed regarding the effects of semi-starvation—and the impact of various rehabilitation strategies—if postwar relief efforts were to be effective.”

“most of the subjects experienced periods of severe emotional distress and depression.[1]:161 There were extreme reactions to the psychological effects during the experiment including self-mutilation (one subject amputated three fingers of his hand with an axe, though the subject was unsure if he had done so intentionally or accidentally).[5] Participants exhibited a preoccupation with food, both during the starvation period and the rehabilitation phase. Sexual interest was drastically reduced, and the volunteers showed signs of social withdrawal and isolation.[1]:123–124 […] One of the crucial observations of the Minnesota Starvation Experiment […] is that the physical effects of the induced semi-starvation during the study closely approximate the conditions experienced by people with a range of eating disorders such as anorexia nervosa and bulimia nervosa.”

Post-vasectomy pain syndrome. Vasectomy reversal is a risk people probably know about, but this one seems to also be worth being aware of if one is considering having a vasectomy.

Transport in the Soviet Union (‘good article’). A few observations from the article:

“By the mid-1970s, only eight percent of the Soviet population owned a car. […]  From 1924 to 1971 the USSR produced 1 million vehicles […] By 1975 only 8 percent of rural households owned a car. […] Growth of motor vehicles had increased by 224 percent in the 1980s, while hardcore surfaced roads only increased by 64 percent. […] By the 1980s Soviet railways had become the most intensively used in the world. Most Soviet citizens did not own private transport, and if they did, it was difficult to drive long distances due to the poor conditions of many roads. […] Road transport played a minor role in the Soviet economy, compared to domestic rail transport or First World road transport. According to historian Martin Crouch, road traffic of goods and passengers combined was only 14 percent of the volume of rail transport. It was only late in its existence that the Soviet authorities put emphasis on road construction and maintenance […] Road transport as a whole lagged far behind that of rail transport; the average distance moved by motor transport in 1982 was 16.4 kilometres (10.2 mi), while the average for railway transport was 930 km per ton and 435 km per ton for water freight. In 1982 there was a threefold increase in investment since 1960 in motor freight transport, and more than a thirtyfold increase since 1940.”

March 3, 2016

## A couple of lectures and a little bit of random stuff

i. Two lectures from the Institute for Advanced Studies:

The IAS has recently uploaded a large number of lectures on youtube, and the ones I blog here are a few of those where you can actually tell from the title what the lecture is about; I find it outright weird that these people don’t include the topic covered in their lecture titles.

As for the video above, as usual for the IAS videos it’s annoying that you can’t hear the questions asked by the audience, but the sound quality is at least quite a bit better than in the video below. The latter has a couple of really annoying sequences, in particular around the 15-16 minutes mark (it gets better), where the image is also causing problems; and the last couple of minutes of the Q&A are not exactly optimal either, as the lecturer leaves the area covered by the camera in order to write something on the blackboard – you can’t see what he’s writing, and you can’t see the lecturer, because the camera isn’t following him. I found most of the above lecture easier to follow than the lecture posted below, though in either case you’ll probably not understand all of it unless you’re an astrophysicist – you definitely won’t in the case of the latter lecture. I found it helpful to look up a few topics along the way, e.g. the wiki articles about the virial theorem (/also dealing with virial mass/radius), active galactic nucleus (this is the ‘AGN’ she refers to repeatedly), and the Tully–Fisher relation.

Given how many questions are asked along the way, it’s really annoying that in most cases you can’t hear what people are asking about – this is definitely an area where there’s room for improvement in the IAS videos. The lecture was not easy to follow, but I figured along the way that I understood enough of it to make it worth watching to the end (though I’d say you’ll not miss much if you stop after the lecture – around the 1.05 hours mark – and skip the subsequent Q&A). I’ve relatively recently read about related topics, e.g. pulsar formation and wave and fluid dynamics, and if I had not I probably would not have watched this lecture to the end.

ii. A vocabulary.com update. I’m slowly working my way up to the ‘Running Dictionary’ rank (I’m only a walking dictionary at this point); here’s some stuff from my progress page:

I recently learned from a note added to a list that I’ve actually learned a very large proportion of all words available on vocabulary.com, which probably also means that I may have been too harsh on the word selection algorithm in past posts here on the blog; if there aren’t (/m)any new words left to learn, it should not be surprising that the algorithm presents me with words I’ve already mastered, and it’s not the algorithm’s fault that there aren’t more words available for me to learn (well, it is to the extent that you think questions should be automatically created by the algorithm as well, but I don’t think we’re quite there yet). The aforementioned note was added in June, and here’s the important part: “there are words on your list that Vocabulary.com can’t teach yet. Vocabulary.com can teach over 12,000 words, but sadly, these aren’t among them”. ‘Over 12,000’ – and I’ve mastered 11,300. When the proportion of mastered words is this high, not only will the default random word algorithm mostly present you with questions related to words you’ve already mastered; it also starts to get hard to find lists with many words you’ve not already mastered – I’ll often load a list of one hundred words and then realize that I’ve mastered every word on it. This is annoying if you want to be continually presented with both new words and old ones. Unless vocabulary.com increases the rate at which they add new words I’ll run out of new words to learn, and if that happens I’m sure it’ll be much more difficult for me to find the motivation to use the site.

With all that stuff out of the way, if you’re not a regular user of the site I should note – again – that it’s an excellent resource if you desire to increase your vocabulary. Below is a list of words I’ve encountered on the site in recent weeks(/months?):

Copacetic, frumpy, elision, termagant, harridan, quondam, funambulist, phantasmagoria, eyelet, cachinnate, wilt, quidnunc, flocculent, galoot, frangible, prevaricate, clarion, trivet, noisome, revenant, myrmidon (I have included this word once before in a post of this type, but it is in my opinion a very nice word with which more people should be familiar…), debenture, teeter, tart, satiny, romp, auricular, terpsichorean, poultice, ululation, fusty, tangy, honorarium, eyas, bumptious, muckraker, bayou, hobble, omphaloskepsis, extemporize, virago, rarefaction, flibbertigibbet, finagle, emollient.

iii. I don’t think I’d do things exactly the way she’s suggesting here, but the general idea/approach seems to me appealing enough for it to be worth at least keeping in mind if I ever decide to start dating/looking for a partner.

Tarrare (featured). A man with odd eating habits and an interesting employment history (“Dr. Courville was keen to continue his investigations into Tarrare’s eating habits and digestive system, and approached General Alexandre de Beauharnais with a suggestion that Tarrare’s unusual abilities and behaviour could be put to military use.[9] A document was placed inside a wooden box which was in turn fed to Tarrare. Two days later, the box was retrieved from his excrement, with the document still in legible condition.[9][17] Courville proposed to de Beauharnais that Tarrare could thus serve as a military courier, carrying documents securely through enemy territory with no risk of their being found if he were searched.” Yeah…).

Cauda equina syndrome, Castleman’s disease, Astereognosis, Familial dysautonomia, Homonymous hemianopsia, Amaurosis fugax. All of these are of course related to content covered in the Handbook.

1740 Batavia massacre (featured).

October 30, 2015

## Wikipedia articles of interest

i. Motte-and-bailey castle (‘good article’).

“A motte-and-bailey castle is a fortification with a wooden or stone keep situated on a raised earthwork called a motte, accompanied by an enclosed courtyard, or bailey, surrounded by a protective ditch and palisade. Relatively easy to build with unskilled, often forced labour, but still militarily formidable, these castles were built across northern Europe from the 10th century onwards, spreading from Normandy and Anjou in France, into the Holy Roman Empire in the 11th century. The Normans introduced the design into England and Wales following their invasion in 1066. Motte-and-bailey castles were adopted in Scotland, Ireland, the Low Countries and Denmark in the 12th and 13th centuries. By the end of the 13th century, the design was largely superseded by alternative forms of fortification, but the earthworks remain a prominent feature in many countries. […]

Various methods were used to build mottes. Where a natural hill could be used, scarping could produce a motte without the need to create an artificial mound, but more commonly much of the motte would have to be constructed by hand.[19] Four methods existed for building a mound and a tower: the mound could either be built first, and a tower placed on top of it; the tower could alternatively be built on the original ground surface and then buried within the mound; the tower could potentially be built on the original ground surface and then partially buried within the mound, the buried part forming a cellar beneath; or the tower could be built first, and the mound added later.[25]

Regardless of the sequencing, artificial mottes had to be built by piling up earth; this work was undertaken by hand, using wooden shovels and hand-barrows, possibly with picks as well in the later periods.[26] Larger mottes took disproportionately more effort to build than their smaller equivalents, because of the volumes of earth involved.[26] The largest mottes in England, such as Thetford, are estimated to have required up to 24,000 man-days of work; smaller ones required perhaps as little as 1,000.[27] […] Taking into account estimates of the likely available manpower during the period, historians estimate that the larger mottes might have taken between four and nine months to build.[29] This contrasted favourably with stone keeps of the period, which typically took up to ten years to build.[30] Very little skilled labour was required to build motte and bailey castles, which made them very attractive propositions if forced peasant labour was available, as was the case after the Norman invasion of England.[19] […]

The type of soil would make a difference to the design of the motte, as clay soils could support a steeper motte, whilst sandier soils meant that a motte would need a more gentle incline.[14] Where available, layers of different sorts of earth, such as clay, gravel and chalk, would be used alternatively to build in strength to the design.[32] Layers of turf could also be added to stabilise the motte as it was built up, or a core of stones placed as the heart of the structure to provide strength.[33] Similar issues applied to the defensive ditches, where designers found that the wider the ditch was dug, the deeper and steeper the sides of the scarp could be, making it more defensive. […]

Although motte-and-bailey castles are the best known castle design, they were not always the most numerous in any given area.[36] A popular alternative was the ringwork castle, involving a palisade being built on top of a raised earth rampart, protected by a ditch. The choice of motte and bailey or ringwork was partially driven by terrain, as mottes were typically built on low ground, and on deeper clay and alluvial soils.[37] Another factor may have been speed, as ringworks were faster to build than mottes.[38] Some ringwork castles were later converted into motte-and-bailey designs, by filling in the centre of the ringwork to produce a flat-topped motte. […]

In England, William invaded from Normandy in 1066, resulting in three phases of castle building in England, around 80% of which were in the motte-and-bailey pattern. […] around 741 motte-and-bailey castles [were built] in England and Wales alone. […] Many motte-and-bailey castles were occupied relatively briefly and in England many were being abandoned by the 12th century, and others neglected and allowed to lapse into disrepair.[96] In the Low Countries and Germany, a similar transition occurred in the 13th and 14th centuries. […] One factor was the introduction of stone into castle building. The earliest stone castles had emerged in the 10th century […] Although wood was a more powerful defensive material than was once thought, stone became increasingly popular for military and symbolic reasons.”
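The man-day figures quoted above imply workforce sizes that are easy to back out. A quick check of my own (assuming roughly 30 working days per month, which is not stated in the article):

```python
# Back-of-envelope: a 24,000 man-day motte finished in 4-9 months
# implies a workforce of very roughly 90-200 labourers.
man_days = 24_000
days_per_month = 30  # assumed
for months in (4, 9):
    workers = man_days / (months * days_per_month)
    print(f"{months} months -> ~{workers:.0f} workers")
```

Which gives about 200 workers for the four-month estimate and about 89 for the nine-month one – plausible numbers for forced peasant labour, and consistent with the article’s point that little skilled labour was needed.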

ii. Battle of Midway (featured). Lots of good stuff in there. One aspect I had not been aware of beforehand was that Allied codebreakers played a key role here as well (I was quite familiar with the work of Turing and others at Bletchley Park):

As a result, the Americans entered the battle with a very good picture of where, when, and in what strength the Japanese would appear. Nimitz knew that the Japanese had negated their numerical advantage by dividing their ships into four separate task groups, all too widely separated to be able to support each other.[50][nb 9] […] The Japanese, by contrast, remained almost totally unaware of their opponent’s true strength and dispositions even after the battle began.[27] […] Four Japanese aircraft carriers — Akagi, Kaga, Soryu and Hiryu, all part of the six-carrier force that had attacked Pearl Harbor six months earlier — and a heavy cruiser were sunk at a cost of the carrier Yorktown and a destroyer. After Midway and the exhausting attrition of the Solomon Islands campaign, Japan’s capacity to replace its losses in materiel (particularly aircraft carriers) and men (especially well-trained pilots) rapidly became insufficient to cope with mounting casualties, while the United States’ massive industrial capabilities made American losses far easier to bear. […] The Battle of Midway has often been called “the turning point of the Pacific”.[140] However, the Japanese continued to try to secure more strategic territory in the South Pacific, and the U.S. did not move from a state of naval parity to one of increasing supremacy until after several more months of hard combat.[141] Thus, although Midway was the Allies’ first major victory against the Japanese, it did not radically change the course of the war. Rather, it was the cumulative effects of the battles of Coral Sea and Midway that reduced Japan’s ability to undertake major offensives.[9]

One thing which really strikes you (well, struck me) when reading this stuff is how incredibly capital-intensive the war at sea really was; this was one of the most important sea battles of the Second World War, yet the total Japanese death toll at Midway was just 3,057. To put that number into perspective, it is significantly smaller than the average number of people killed each day in Stalingrad (according to one estimate, the Soviets alone suffered 478,741 killed or missing during those roughly 5 months (~150 days), which comes out at roughly 3000/day).
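The comparison above is easy to verify from the numbers given:

```python
# Checking the Stalingrad comparison (my own arithmetic, using the
# figures quoted in the text):
soviet_losses = 478_741   # killed or missing, per the cited estimate
days = 150                # ~5 months
per_day = soviet_losses / days
print(round(per_day))     # ~3,192 - i.e. roughly 3,000/day

midway_japanese_dead = 3_057
print(per_day > midway_japanese_dead)  # True: one average Stalingrad day
                                       # exceeds the entire Midway toll
```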

iii. History of time-keeping devices (featured). ‘Exactly what it says on the tin’, as they’d say on TV Tropes.

It took a long time to get from where we were to where we are today; the horologists of the past faced a lot of problems you’ve most likely never even thought about. What do you do, for example, if your ingenious water clock has trouble keeping time because variation in water temperature causes issues? Well, you use mercury instead of water, of course! (“Since Yi Xing’s clock was a water clock, it was affected by temperature variations. That problem was solved in 976 by Zhang Sixun by replacing the water with mercury, which remains liquid down to −39 °C (−38 °F).”).

iv. Microbial metabolism. “Microbial metabolism is the means by which a microbe obtains the energy and nutrients (e.g. carbon) it needs to live and reproduce. Microbes use many different types of metabolic strategies and species can often be differentiated from each other based on metabolic characteristics. The specific metabolic properties of a microbe are the major factors in determining that microbe’s ecological niche, and often allow for that microbe to be useful in industrial processes or responsible for biogeochemical cycles. […]

All microbial metabolisms can be arranged according to three principles:

1. How the organism obtains carbon for synthesising cell mass: autotrophic (carbon is fixed from carbon dioxide) or heterotrophic (carbon is obtained from organic compounds);

2. How the organism obtains reducing equivalents used either in energy conservation or in biosynthetic reactions: lithotrophic (reducing equivalents come from inorganic compounds) or organotrophic (reducing equivalents come from organic compounds);

3. How the organism obtains energy for living and growing: phototrophic (energy comes from light) or chemotrophic (energy comes from chemical compounds).

In practice, these terms are almost freely combined. […] Most microbes are heterotrophic (more precisely chemoorganoheterotrophic), using organic compounds as both carbon and energy sources. […] Heterotrophic microbes are extremely abundant in nature and are responsible for the breakdown of large organic polymers such as cellulose, chitin or lignin which are generally indigestible to larger animals. Generally, the breakdown of large polymers to carbon dioxide (mineralization) requires several different organisms, with one breaking down the polymer into its constituent monomers, one able to use the monomers and excreting simpler waste compounds as by-products, and one able to use the excreted wastes. There are many variations on this theme, as different organisms are able to degrade different polymers and secrete different waste products. […]

Biochemically, prokaryotic heterotrophic metabolism is much more versatile than that of eukaryotic organisms, although many prokaryotes share the most basic metabolic models with eukaryotes, e. g. using glycolysis (also called EMP pathway) for sugar metabolism and the citric acid cycle to degrade acetate, producing energy in the form of ATP and reducing power in the form of NADH or quinols. These basic pathways are well conserved because they are also involved in biosynthesis of many conserved building blocks needed for cell growth (sometimes in reverse direction). However, many bacteria and archaea utilize alternative metabolic pathways other than glycolysis and the citric acid cycle. […] The metabolic diversity and ability of prokaryotes to use a large variety of organic compounds arises from the much deeper evolutionary history and diversity of prokaryotes, as compared to eukaryotes. […]

Many microbes (phototrophs) are capable of using light as a source of energy to produce ATP and organic compounds such as carbohydrates, lipids, and proteins. Of these, algae are particularly significant because they are oxygenic, using water as an electron donor for electron transfer during photosynthesis.[11] Phototrophic bacteria are found in the phyla Cyanobacteria, Chlorobi, Proteobacteria, Chloroflexi, and Firmicutes.[12] Along with plants these microbes are responsible for all biological generation of oxygen gas on Earth. […] As befits the large diversity of photosynthetic bacteria, there are many different mechanisms by which light is converted into energy for metabolism. All photosynthetic organisms locate their photosynthetic reaction centers within a membrane, which may be invaginations of the cytoplasmic membrane (Proteobacteria), thylakoid membranes (Cyanobacteria), specialized antenna structures called chlorosomes (Green sulfur and non-sulfur bacteria), or the cytoplasmic membrane itself (heliobacteria). Different photosynthetic bacteria also contain different photosynthetic pigments, such as chlorophylls and carotenoids, allowing them to take advantage of different portions of the electromagnetic spectrum and thereby inhabit different niches. Some groups of organisms contain more specialized light-harvesting structures (e.g. phycobilisomes in Cyanobacteria and chlorosomes in Green sulfur and non-sulfur bacteria), allowing for increased efficiency in light utilization. […]

Most photosynthetic microbes are autotrophic, fixing carbon dioxide via the Calvin cycle. Some photosynthetic bacteria (e.g. Chloroflexus) are photoheterotrophs, meaning that they use organic carbon compounds as a carbon source for growth. Some photosynthetic organisms also fix nitrogen […] Nitrogen is an element required for growth by all biological systems. While extremely common (80% by volume) in the atmosphere, dinitrogen gas (N2) is generally biologically inaccessible due to its high activation energy. Throughout all of nature, only specialized bacteria and Archaea are capable of nitrogen fixation, converting dinitrogen gas into ammonia (NH3), which is easily assimilated by all organisms.[14] These prokaryotes, therefore, are very important ecologically and are often essential for the survival of entire ecosystems. This is especially true in the ocean, where nitrogen-fixing cyanobacteria are often the only sources of fixed nitrogen, and in soils, where specialized symbioses exist between legumes and their nitrogen-fixing partners to provide the nitrogen needed by these plants for growth.

Nitrogen fixation can be found distributed throughout nearly all bacterial lineages and physiological classes but is not a universal property. Because the enzyme nitrogenase, responsible for nitrogen fixation, is very sensitive to oxygen which will inhibit it irreversibly, all nitrogen-fixing organisms must possess some mechanism to keep the concentration of oxygen low. […] The production and activity of nitrogenases is very highly regulated, both because nitrogen fixation is an extremely energetically expensive process (16–24 ATP are used per N2 fixed) and due to the extreme sensitivity of the nitrogenase to oxygen.” (A lot of the stuff above was of course for me either review or closely related to stuff I’ve already read in the coverage provided in Beer et al., a book I’ve talked about before here on the blog).

v. Uranium (featured). It’s hard to know what to include here as the article has a lot of stuff, but I found this part in particular, well, interesting:

“During the Cold War between the Soviet Union and the United States, huge stockpiles of uranium were amassed and tens of thousands of nuclear weapons were created using enriched uranium and plutonium made from uranium. Since the break-up of the Soviet Union in 1991, an estimated 600 short tons (540 metric tons) of highly enriched weapons grade uranium (enough to make 40,000 nuclear warheads) have been stored in often inadequately guarded facilities in the Russian Federation and several other former Soviet states.[12] Police in Asia, Europe, and South America on at least 16 occasions from 1993 to 2005 have intercepted shipments of smuggled bomb-grade uranium or plutonium, most of which was from ex-Soviet sources.[12] From 1993 to 2005 the Material Protection, Control, and Accounting Program, operated by the federal government of the United States, spent approximately US \$550 million to help safeguard uranium and plutonium stockpiles in Russia.[12] This money was used for improvements and security enhancements at research and storage facilities. Scientific American reported in February 2006 that in some of the facilities security consisted of chain link fences which were in severe states of disrepair. According to an interview from the article, one facility had been storing samples of enriched (weapons grade) uranium in a broom closet before the improvement project; another had been keeping track of its stock of nuclear warheads using index cards kept in a shoe box.[45]

Some other observations from the article below:

“Uranium is a naturally occurring element that can be found in low levels within all rock, soil, and water. Uranium is the 51st element in order of abundance in the Earth’s crust. Uranium is also the highest-numbered element to be found naturally in significant quantities on Earth and is almost always found combined with other elements.[10] Along with all elements having atomic weights higher than that of iron, it is only naturally formed in supernovae.[46] The decay of uranium, thorium, and potassium-40 in the Earth’s mantle is thought to be the main source of heat[47][48] that keeps the outer core liquid and drives mantle convection, which in turn drives plate tectonics. […]

Natural uranium consists of three major isotopes: uranium-238 (99.28% natural abundance), uranium-235 (0.71%), and uranium-234 (0.0054%). […] Uranium-238 is the most stable isotope of uranium, with a half-life of about 4.468×10⁹ years, roughly the age of the Earth. Uranium-235 has a half-life of about 7.13×10⁸ years, and uranium-234 has a half-life of about 2.48×10⁵ years.[82] For natural uranium, about 49% of its alpha rays are emitted by 238U, about 49% by 234U (since the latter is formed from the former) and about 2.0% by 235U. When the Earth was young, probably about one-fifth of its uranium was uranium-235, but the percentage of 234U was probably much lower than this. […]

Worldwide production of U3O8 (yellowcake) in 2013 amounted to 70,015 tonnes, of which 22,451 t (32%) was mined in Kazakhstan. Other important uranium mining countries are Canada (9,331 t), Australia (6,350 t), Niger (4,518 t), Namibia (4,323 t) and Russia (3,135 t).[55] […] Australia has 31% of the world’s known uranium ore reserves[61] and the world’s largest single uranium deposit, located at the Olympic Dam Mine in South Australia.[62] There is a significant reserve of uranium in Bakouma, a sub-prefecture in the prefecture of Mbomou in the Central African Republic. […] Uranium deposits seem to be log-normally distributed. There is a 300-fold increase in the amount of uranium recoverable for each tenfold decrease in ore grade.[75] In other words, there is little high grade ore and proportionately much more low grade ore available.”
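The ‘about one-fifth’ claim for the young Earth actually follows from the half-lives and present-day abundances quoted in the article. A quick back-calculation of my own (ignoring the trace 234U, which is in equilibrium with 238U):

```python
# Undo 4.468 billion years of decay to recover the early Earth's
# uranium-235 fraction from the present-day figures quoted above.
t = 4.468e9            # years since Earth formed (~ one 238U half-life)
u238_half = 4.468e9
u235_half = 7.13e8
u238_now, u235_now = 0.9928, 0.0072   # present-day atom fractions

# multiply by 2^(t / half-life) to reverse the exponential decay
u238_then = u238_now * 2 ** (t / u238_half)
u235_then = u235_now * 2 ** (t / u235_half)
fraction = u235_then / (u235_then + u238_then)
print(f"{fraction:.2f}")   # ~0.22, i.e. roughly one-fifth
```

The much shorter half-life of 235U means it has been depleted by a factor of about 77 since the Earth formed, versus only a factor of 2 for 238U, which is exactly why the isotope ratio has shifted so dramatically.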

vi. Radiocarbon dating. “The idea behind radiocarbon dating is straightforward, but years of work were required to develop the technique to the point where accurate dates could be obtained. […]

The development of radiocarbon dating has had a profound impact on archaeology. In addition to permitting more accurate dating within archaeological sites than did previous methods, it allows comparison of dates of events across great distances. Histories of archaeology often refer to its impact as the “radiocarbon revolution”.”

I’ve read about these topics before in a textbook setting (e.g. here), but/and I should note that the article provides quite detailed coverage and I think most people will encounter some new information by having a look at it even if they’re superficially familiar with this topic. The article has a lot of stuff about e.g. ‘what you need to correct for’, which some of you might find interesting.

vii. Raccoon (featured). One interesting observation from the article:

“One aspect of raccoon behavior is so well known that it gives the animal part of its scientific name, Procyon lotor; “lotor” is neo-Latin for “washer”. In the wild, raccoons often dabble for underwater food near the shore-line. They then often pick up the food item with their front paws to examine it and rub the item, sometimes to remove unwanted parts. This gives the appearance of the raccoon “washing” the food. The tactile sensitivity of raccoons’ paws is increased if this rubbing action is performed underwater, since the water softens the hard layer covering the paws.[126] However, the behavior observed in captive raccoons in which they carry their food to water to “wash” or douse it before eating has not been observed in the wild.[127] Naturalist Georges-Louis Leclerc, Comte de Buffon, believed that raccoons do not have adequate saliva production to moisten food, thereby necessitating dousing, but this hypothesis is now considered to be incorrect.[128] Captive raccoons douse their food more frequently when a watering hole with a layout similar to a stream is not farther away than 3 m (10 ft).[129] The widely accepted theory is that dousing in captive raccoons is a fixed action pattern from the dabbling behavior performed when foraging at shores for aquatic foods.[130] This is supported by the observation that aquatic foods are doused more frequently. Cleaning dirty food does not seem to be a reason for “washing”.[129] Experts have cast doubt on the veracity of observations of wild raccoons dousing food.[131]”

And here’s another interesting set of observations:

“In Germany—where the raccoon is called the Waschbär (literally, “wash-bear” or “washing bear”) due to its habit of “dousing” food in water—two pairs of pet raccoons were released into the German countryside at the Edersee reservoir in the north of Hesse in April 1934 by a forester upon request of their owner, a poultry farmer.[186] He released them two weeks before receiving permission from the Prussian hunting office to “enrich the fauna”.[187] Several prior attempts to introduce raccoons in Germany were not successful.[188] A second population was established in eastern Germany in 1945 when 25 raccoons escaped from a fur farm at Wolfshagen, east of Berlin, after an air strike. The two populations are parasitologically distinguishable: 70% of the raccoons of the Hessian population are infected with the roundworm Baylisascaris procyonis, but none of the Brandenburgian population has the parasite.[189] The estimated number of raccoons was 285 animals in the Hessian region in 1956, over 20,000 animals in the Hessian region in 1970 and between 200,000 and 400,000 animals in the whole of Germany in 2008.[158][190] By 2012 it was estimated that Germany had more than a million raccoons.[191]”

June 14, 2015

## Stuff

Sorry for the infrequent updates. I realized blogging Wodehouse books takes more time than I’d imagined, so posting this sort of stuff is probably a better idea.

i. Dunkirk evacuation (wikipedia ‘good article’). Fascinating article, as are a few of the related ones which I’ve also been reading (e.g. Operation Ariel).

“On the first day of the evacuation, only 7,669 men were evacuated, but by the end of the eighth day, a total of 338,226 soldiers had been rescued by a hastily assembled fleet of over 800 boats. Many of the troops were able to embark from the harbour’s protective mole onto 39 British destroyers and other large ships, while others had to wade out from the beaches, waiting for hours in the shoulder-deep water. Some were ferried from the beaches to the larger ships by the famous little ships of Dunkirk, a flotilla of hundreds of merchant marine boats, fishing boats, pleasure craft, and lifeboats called into service for the emergency. The BEF lost 68,000 soldiers during the French campaign and had to abandon nearly all of their tanks, vehicles, and other equipment.”

One way to make sense of the scale of the operations here is to compare them with the naval activities on D-day four years later. The British evacuated more people from France during three consecutive days in 1940 (30th and 31st of May, and 1st of June) than the Allies (Americans and British combined) landed on D-day four years later, and the British evacuated roughly as many people on the 31st of May (68,014) as they landed by sea on D-day (75,215). Here’s a part of the story I did not know:

“Three British divisions and a host of logistic and labour troops were cut off to the south of the Somme by the German “race to the sea”. At the end of May, a further two divisions began moving to France with the hope of establishing a Second BEF. The majority of the 51st (Highland) Division was forced to surrender on 12 June, but almost 192,000 Allied personnel, 144,000 of them British, were evacuated through various French ports from 15–25 June under the codename Operation Ariel.[104] […] More than 100,000 evacuated French troops were quickly and efficiently shuttled to camps in various parts of southwestern England, where they were temporarily lodged before being repatriated.[106] British ships ferried French troops to Brest, Cherbourg, and other ports in Normandy and Brittany, although only about half of the repatriated troops were deployed against the Germans before the surrender of France. For many French soldiers, the Dunkirk evacuation represented only a few weeks’ delay before being killed or captured by the German army after their return to France.[107]”

ii. A pretty awesome display by the current world chess champion:

If you feel the same way I do about Maurice Ashley, you’ll probably want to skip the first few minutes of this video. Don’t miss the games, though – this is great stuff. Do keep in mind when watching this video that the clock is a really important part of this event; other players in the past have played a lot more people at the same time while blindfolded than Carlsen does here – “Although not a full-time chess professional, [Najdorf] was one of the world’s leading chess players in the 1950s and 1960s and he excelled in playing blindfold chess: he broke the world record twice, by playing blindfold 40 games in Rosario, 1943,[8] and 45 in São Paulo, 1947, becoming the world blindfold chess champion” (link) – but a game clock changes things a lot. A few comments and discussion here.
In very slightly related news, I recently got in my first win against a grandmaster in a bullet game on the ICC.

iii. Rheobatrachus (the gastric-brooding frogs). From the article:

“The genus was unique because it contained the only two known frog species that incubated the prejuvenile stages of their offspring in the stomach of the mother.[3] […] What makes these frogs unique among all frog species is their form of parental care. Following external fertilization by the male, the female would take the eggs or embryos into her mouth and swallow them.[19] […] Eggs found in females measured up to 5.1 mm in diameter and had large yolk supplies. These large supplies are common among species that live entirely off yolk during their development. Most female frogs had around 40 ripe eggs, almost double the number of juveniles ever found in the stomach (21–26). This means one of two things: either the female fails to swallow all the eggs, or the first few eggs to be swallowed are digested. […] During the period that the offspring were present in the stomach the frog would not eat. […] The birth process was widely spaced and may have occurred over a period of as long as a week. However, if disturbed the female may regurgitate all the young frogs in a single act of propulsive vomiting.”

Fascinating creatures… Unfortunately they’re no longer around (they’re classified as extinct).

iv. Why am I conflicted? Well, on the one hand it’s nice to know that they’re making progress in terms of figuring out why people get Alzheimer’s, and that potential therapeutic targets are being identified. On the other hand this – “our findings suggest that repeated episodes of transient hyperglycemia […] could both initiate and accelerate plaque accumulation” – is bad news if you’re a type 1 diabetic (I’d much rather have them identify risk factors to which I’m not exposed).

v. I recently noticed that Khan Academy has put up some videos about diabetes. From the few I’ve had a look at, they don’t seem to contain much stuff I don’t already know, so I’m not sure I’ll explore this playlist in any more detail, but I figured I might as well share a few of the videos here; the first one is about the pathophysiology of type 1 diabetes and the second one’s about diabetic nephropathy (kidney disease):

vi. On Being the Right Size, by J. B. S. Haldane. A neat little text. A few quotes:

“To the mouse and any smaller animal [gravity] presents practically no dangers. You can drop a mouse down a thousand-yard mine shaft; and, on arriving at the bottom, it gets a slight shock and walks away, provided that the ground is fairly soft. A rat is killed, a man is broken, a horse splashes. For the resistance presented to movement by the air is proportional to the surface of the moving object. Divide an animal’s length, breadth, and height each by ten; its weight is reduced to a thousandth, but its surface only to a hundredth. So the resistance to falling in the case of the small animal is relatively ten times greater than the driving force.

An insect, therefore, is not afraid of gravity; it can fall without danger, and can cling to the ceiling with remarkably little trouble. It can go in for elegant and fantastic forms of support like that of the daddy-longlegs. But there is a force which is as formidable to an insect as gravitation to a mammal. This is surface tension. A man coming out of a bath carries with him a film of water of about one-fiftieth of an inch in thickness. This weighs roughly a pound. A wet mouse has to carry about its own weight of water. A wet fly has to lift many times its own weight and, as everyone knows, a fly once wetted by water or any other liquid is in a very serious position indeed. An insect going for a drink is in as great danger as a man leaning out over a precipice in search of food. If it once falls into the grip of the surface tension of the water—that is to say, gets wet—it is likely to remain so until it drowns. A few insects, such as water-beetles, contrive to be unwettable; the majority keep well away from their drink by means of a long proboscis. […]

It is an elementary principle of aeronautics that the minimum speed needed to keep an aeroplane of a given shape in the air varies as the square root of its length. If its linear dimensions are increased four times, it must fly twice as fast. Now the power needed for the minimum speed increases more rapidly than the weight of the machine. So the larger aeroplane, which weighs sixty-four times as much as the smaller, needs one hundred and twenty-eight times its horsepower to keep up. Applying the same principle to the birds, we find that the limit to their size is soon reached. An angel whose muscles developed no more power weight for weight than those of an eagle or a pigeon would require a breast projecting for about four feet to house the muscles engaged in working its wings, while to economize in weight, its legs would have to be reduced to mere stilts. Actually a large bird such as an eagle or kite does not keep in the air mainly by moving its wings. It is generally to be seen soaring, that is to say balanced on a rising column of air. And even soaring becomes more and more difficult with increasing size. Were this not the case eagles might be as large as tigers and as formidable to man as hostile aeroplanes.

But it is time that we pass to some of the advantages of size. One of the most obvious is that it enables one to keep warm. All warmblooded animals at rest lose the same amount of heat from a unit area of skin, for which purpose they need a food-supply proportional to their surface and not to their weight. Five thousand mice weigh as much as a man. Their combined surface and food or oxygen consumption are about seventeen times a man’s. In fact a mouse eats about one quarter its own weight of food every day, which is mainly used in keeping it warm. For the same reason small animals cannot live in cold countries. In the arctic regions there are no reptiles or amphibians, and no small mammals. The smallest mammal in Spitzbergen is the fox. The small birds fly away in winter, while the insects die, though their eggs can survive six months or more of frost. The most successful mammals are bears, seals, and walruses.” [I think he’s a bit too categorical in his statements here and this topic is more contested today than it probably was when he wrote his text – see wikipedia’s coverage of Bergmann’s rule].
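Haldane’s falling-mouse argument is the square–cube law: scale every linear dimension by a factor L and surface area goes as L², weight as L³. Restated as a tiny sketch of my own (exact fractions, to keep the arithmetic clean):

```python
from fractions import Fraction

def isometric_scaling(length_factor):
    """Scale every linear dimension by length_factor:
    surface area goes as L**2, volume (and hence weight) as L**3."""
    return {"surface": length_factor ** 2, "weight": length_factor ** 3}

small = isometric_scaling(Fraction(1, 10))  # divide length, breadth, height by ten
print(small["weight"])   # 1/1000: weight is reduced to a thousandth
print(small["surface"])  # 1/100: surface only to a hundredth

# Air resistance scales with surface, the driving force (weight) with volume,
# so the resistance-to-weight ratio is ten times greater for the small animal:
print(small["surface"] / small["weight"])  # 10
```

The same L² vs. L³ mismatch drives his later points about surface tension, flight, and heat loss.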

May 26, 2015

## Wikipedia articles of interest

i. Lock (water transport). Zumerchik and Danver’s book covered this kind of stuff as well, sort of, and I figured that since I’m not going to blog the book – for reasons provided in my goodreads review here – I might as well add a link or two here instead. The words ‘sort of’ above are in my opinion justified because the book coverage is so horrid you’d never even know what a lock is used for from reading that book; you’d need to look that up elsewhere.

On a related note there’s a lot of stuff in that book about the history of water transport etc. which you probably won’t get from these articles, but having a look here will give you some idea about which sort of topics many of the chapters of the book are dealing with. Also, stuff like this and this. The book coverage of the latter topic is incidentally much, much more detailed than is that wiki article, and the article – as well as many other articles about related topics (economic history, etc.) on the wiki, to the extent that they even exist – could clearly be improved greatly by adding content from books like this one. However I’m not going to be the guy doing that.

ii. Congruence (geometry).

iii. Geography and ecology of the Everglades. This is a topic which seems to be reasonably well covered on wikipedia; there’s for example also a ‘good article’ on the Everglades and a featured article about the Everglades National Park. A few quotes and observations from the article:

“The geography and ecology of the Everglades involve the complex elements affecting the natural environment throughout the southern region of the U.S. state of Florida. Before drainage, the Everglades were an interwoven mesh of marshes and prairies covering 4,000 square miles (10,000 km2). […] Although sawgrass and sloughs are the enduring geographical icons of the Everglades, other ecosystems are just as vital, and the borders marking them are subtle or nonexistent. Pinelands and tropical hardwood hammocks are located throughout the sloughs; the trees, rooted in soil inches above the peat, marl, or water, support a variety of wildlife. The oldest and tallest trees are cypresses, whose roots are specially adapted to grow underwater for months at a time.”

“A vast marshland could only have been formed due to the underlying rock formations in southern Florida.[15] The floor of the Everglades formed between 25 million and 2 million years ago when the Florida peninsula was a shallow sea floor. The peninsula has been covered by sea water at least seven times since the earliest bedrock formation. […] At only 5,000 years of age, the Everglades is a young region in geological terms. Its ecosystems are in constant flux as a result of the interplay of three factors: the type and amount of water present, the geology of the region, and the frequency and severity of fires. […] Water is the dominant element in the Everglades, and it shapes the land, vegetation, and animal life of South Florida. The South Florida climate was once arid and semi-arid, interspersed with wet periods. Between 10,000 and 20,000 years ago, sea levels rose, submerging portions of the Florida peninsula and causing the water table to rise. Fresh water saturated the limestone, eroding some of it and creating springs and sinkholes. The abundance of fresh water allowed new vegetation to take root, and through evaporation formed thunderstorms. Limestone was dissolved by the slightly acidic rainwater. The limestone wore away, and groundwater came into contact with the surface, creating a massive wetland ecosystem. […] Only two seasons exist in the Everglades: wet (May to November) and dry (December to April). […] The Everglades are unique; no other wetland system in the world is nourished primarily from the atmosphere. […] Average annual rainfall in the Everglades is approximately 62 inches (160 cm), though fluctuations of precipitation are normal.”

“Between 1871 and 2003, 40 tropical cyclones struck the Everglades, usually every one to three years.”

“Islands of trees featuring dense temperate or tropical trees are called tropical hardwood hammocks.[38] They may rise between 1 and 3 feet (0.30 and 0.91 m) above water level in freshwater sloughs, sawgrass prairies, or pineland. These islands illustrate the difficulty of characterizing the climate of the Everglades as tropical or subtropical. Hammocks in the northern portion of the Everglades consist of more temperate plant species, but closer to Florida Bay the trees are tropical and smaller shrubs are more prevalent. […] Islands vary in size, but most range between 1 and 10 acres (0.40 and 4.05 ha); the water slowly flowing around them limits their size and gives them a teardrop appearance from above.[42] The height of the trees is limited by factors such as frost, lightning, and wind: the majority of trees in hammocks grow no higher than 55 feet (17 m). […] There are more than 50 varieties of tree snails in the Everglades; the color patterns and designs unique to single islands may be a result of the isolation of certain hammocks.[44] […] An estimated 11,000 species of seed-bearing plants and 400 species of land or water vertebrates live in the Everglades, but slight variations in water levels affect many organisms and reshape land formations.”

“Because much of the coast and inner estuaries are built by mangroves—and there is no border between the coastal marshes and the bay—the ecosystems in Florida Bay are considered part of the Everglades. […] Sea grasses stabilize sea beds and protect shorelines from erosion by absorbing energy from waves. […] Sea floor patterns of Florida Bay are formed by currents and winds. However, since 1932, sea levels have been rising at a rate of 1 foot (0.30 m) per 100 years.[81] Though mangroves serve to build and stabilize the coastline, seas may be rising more rapidly than the trees are able to build.[82]”

iv. Chang and Eng Bunker. Not a long article, but interesting:

Chang (pinyin: Chāng; Thai: จัน, Jan; RTGS: Chan) and Eng (pinyin: Ēn; Thai: อิน, In) Bunker (May 11, 1811 – January 17, 1874) were Thai-American conjoined twin brothers whose condition and birthplace became the basis for the term “Siamese twins”.[1][2][3]

I loved some of the implicit assumptions in this article: “Determined to live as normal a life as they could, Chang and Eng settled on their small plantation and bought slaves to do the work they could not do themselves. […] Chang and Adelaide [his wife] would become the parents of eleven children. Eng and Sarah [‘the other wife’] had ten.”

A ‘normal life’ indeed… The women the twins married were incidentally sisters who ended up disliking each other (I can’t imagine why…).

v. Genie (feral child). This is a very long article, and you should be warned that many parts of it may not be pleasant to read. From the article:

Genie (born 1957) is the pseudonym of a feral child who was the victim of extraordinarily severe abuse, neglect and social isolation. Her circumstances are prominently recorded in the annals of abnormal child psychology.[1][2] When Genie was a baby her father decided that she was severely mentally retarded, causing him to dislike her and withhold as much care and attention as possible. Around the time she reached the age of 20 months Genie’s father decided to keep her as socially isolated as possible, so from that point until she reached 13 years, 7 months, he kept her locked alone in a room. During this time he almost always strapped her to a child’s toilet or bound her in a crib with her arms and legs completely immobilized, forbade anyone from interacting with her, and left her severely malnourished.[3][4][5] The extent of Genie’s isolation prevented her from being exposed to any significant amount of speech, and as a result she did not acquire language during childhood. Her abuse came to the attention of Los Angeles child welfare authorities on November 4, 1970.[1][3][4]

In the first several years after Genie’s early life and circumstances came to light, psychologists, linguists and other scientists focused a great deal of attention on Genie’s case, seeing in her near-total isolation an opportunity to study many aspects of human development. […] In early January 1978 Genie’s mother suddenly decided to forbid all of the scientists except for one from having any contact with Genie, and all testing and scientific observations of her immediately ceased. Most of the scientists who studied and worked with Genie have not seen her since this time. The only post-1977 updates on Genie and her whereabouts are personal observations or secondary accounts of them, and all are spaced several years apart. […]

Genie’s father had an extremely low tolerance for noise, to the point of refusing to have a working television or radio in the house. Due to this, the only sounds Genie ever heard from her parents or brother on a regular basis were noises when they used the bathroom.[8][43] Although Genie’s mother claimed that Genie had been able to hear other people talking in the house, her father almost never allowed his wife or son to speak and viciously beat them if he heard them talking without permission. They were particularly forbidden to speak to or around Genie, so what conversations they had were therefore always very quiet and out of Genie’s earshot, preventing her from being exposed to any meaningful language besides her father’s occasional swearing.[3][13][43] […] Genie’s father fed Genie as little as possible and refused to give her solid food […]

In late October 1970, Genie’s mother and father had a violent argument in which she threatened to leave if she could not call her parents. He eventually relented, and later that day Genie’s mother was able to get herself and Genie away from her husband while he was out of the house […] She and Genie went to live with her parents in Monterey Park.[13][20][56] Around three weeks later, on November 4, after being told to seek disability benefits for the blind, Genie’s mother decided to do so in nearby Temple City, California and brought Genie along with her.[3][56]

On account of her near-blindness, instead of the disabilities benefits office Genie’s mother accidentally entered the general social services office next door.[3][56] The social worker who greeted them instantly sensed something was not right when she first saw Genie and was shocked to learn Genie’s true age was 13, having estimated from her appearance and demeanor that she was around 6 or 7 and possibly autistic. She notified her supervisor, and after questioning Genie’s mother and confirming Genie’s age they immediately contacted the police. […]

Upon admission to Children’s Hospital, Genie was extremely pale and grossly malnourished. She was severely undersized and underweight for her age, standing 4 ft 6 in (1.37 m) and weighing only 59 pounds (27 kg) […] Genie’s gross motor skills were extremely weak; she could not stand up straight nor fully straighten any of her limbs.[83][84] Her movements were very hesitant and unsteady, and her characteristic “bunny walk”, in which she held her hands in front of her like claws, suggested extreme difficulty with sensory processing and an inability to integrate visual and tactile information.[62] She had very little endurance, only able to engage in any physical activity for brief periods of time.[85] […]

Despite tests conducted shortly after her admission which determined Genie had normal vision in both eyes she could not focus them on anything more than 10 feet (3 m) away, which corresponded to the dimensions of the room she was kept in.[86] She was also completely incontinent, and gave no response whatsoever to extreme temperatures.[48][87] As Genie never ate solid food as a child she was completely unable to chew and had very severe dysphagia, completely unable to swallow any solid or even soft food and barely able to swallow liquids.[80][88] Because of this she would hold anything which she could not swallow in her mouth until her saliva broke it down, and if this took too long she would spit it out and mash it with her fingers.[50] She constantly salivated and spat, and continually sniffed and blew her nose on anything that happened to be nearby.[83][84]

Genie’s behavior was typically highly anti-social, and proved extremely difficult for others to control. She had no sense of personal property, frequently pointing to or simply taking something she wanted from someone else, and did not have any situational awareness whatsoever, acting on any of her impulses regardless of the setting. […] Doctors found it extremely difficult to test Genie’s mental age, but on two attempts they found Genie scored at the level of a 13-month-old. […] When upset Genie would wildly spit, blow her nose into her clothing, rub mucus all over her body, frequently urinate, and scratch and strike herself.[102][103] These tantrums were usually the only times Genie was at all demonstrative in her behavior. […] Genie clearly distinguished speaking from other environmental sounds, but she remained almost completely silent and was almost entirely unresponsive to speech. When she did vocalize, it was always extremely soft and devoid of tone. Hospital staff initially thought that the responsiveness she did show to them meant she understood what they were saying, but later determined that she was instead responding to nonverbal signals that accompanied their speaking. […] Linguists later determined that in January 1971, two months after her admission, Genie only showed understanding of a few names and about 15–20 words. Upon hearing any of these, she invariably responded to them as if they had been spoken in isolation. Hospital staff concluded that her active vocabulary at that time consisted of just two short phrases, “stop it” and “no more”.[27][88][99] Beyond negative commands, and possibly intonation indicating a question, she showed no understanding of any grammar whatsoever. […] Genie had a great deal of difficulty learning to count in sequential order. During Genie’s stay with the Riglers, the scientists spent a great deal of time attempting to teach her to count. 
She did not start to do so at all until late 1972, and when she did her efforts were extremely deliberate and laborious. By 1975 she could only count up to 7, which even then remained very difficult for her.”

“From January 1978 until 1993, Genie moved through a series of at least four additional foster homes and institutions. In some of these locations she was further physically abused and harassed to extreme degrees, and her development continued to regress. […] Genie is a ward of the state of California, and is living in an undisclosed location in the Los Angeles area.[3][20] In May 2008, ABC News reported that someone who spoke under condition of anonymity had hired a private investigator who located Genie in 2000. She was reportedly living a relatively simple lifestyle in a small private facility for mentally underdeveloped adults, and appeared to be happy. Although she only spoke a few words, she could still communicate fairly well in sign language.[3]”

April 20, 2015

## Wikipedia articles of interest

i. Invasion of Poland. I recently realized I had no idea e.g. how long it took for the Germans and Soviets to defeat Poland during WW2 (the answer is one month and five days). The Germans attacked more than two weeks before the Soviets did. The article has lots of links, like most articles about such topics on wikipedia. Incidentally the question of why France and Britain applied a double standard and only declared war on Germany, and not the Soviet Union, is discussed in much detail in the links provided by u/OldWorldGlory here.

ii. Huaynaputina. From the article:

“A few days before the eruption, someone reported booming noises from the volcano and fog-like gas being emitted from its crater. The locals scrambled to appease the volcano, preparing girls, pets, and flowers for sacrifice.”

This makes sense – what else would one do in a situation like that? Finding a few virgins, dogs and flowers seems like the sensible approach – yes, you have to love humans and how they always react in sensible ways to such crises.

I’m not sure the rest of the article is all that interesting, but I found the above sentence both amusing and depressing enough to link to it here.

iii. Albert Pierrepoint. This guy killed hundreds of people.

On the other hand people were fine with it – it was his job. Well, sort of, this is actually slightly complicated. (“Pierrepoint was often dubbed the Official Executioner, despite there being no such job or title”).

Anyway this article is clearly the story of a guy who achieved his childhood dream – though unlike other children, he did not dream of becoming a fireman or a pilot, but rather of becoming the Official Executioner of the country. I’m currently thinking of using Pierrepoint as the main character in the motivational story I plan to tell my nephew when he’s a bit older.

iv. Second Crusade (featured). Considering how many different ‘states’ and ‘kingdoms’ were involved, a surprisingly small number of people were actually fighting; the article notes that “[t]here were perhaps 50,000 troops in total” on the Christian side when the attack on Damascus was initiated. It wasn’t enough, as the outcome of the crusade was a decisive Muslim victory in the ‘Holy Land’ (Middle East).

v. 0.999… (featured). This thing is equal to one, but it can sometimes be really hard to get even very smart people to accept this fact. Lots of details and some proofs presented in the article.
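The shortest of the standard arguments (reproduced here from memory; the article gives several more rigorous versions) goes:

```latex
\begin{aligned}
x &= 0.999\ldots \\
10x &= 9.999\ldots \\
10x - x &= 9.999\ldots - 0.999\ldots = 9 \\
9x &= 9 \quad\Longrightarrow\quad x = 1.
\end{aligned}
```

Alternatively, treat it as a geometric series: $0.999\ldots = \sum_{k=1}^{\infty} 9 \cdot 10^{-k} = \frac{9/10}{1 - 1/10} = 1$. The article explains why the digit-manipulation version, though convincing, needs the series machinery behind it to be fully rigorous.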

vi. Shapley–Folkman lemma (‘good article’ – but also a somewhat technical article).

vii. Multituberculata. This article is not that special, but I add it here also because I think it ought to be and I’m actually sort of angry that it’s not; sometimes the coverage provided on wikipedia simply strikes me as grossly unfair, even if this is perhaps a slightly odd way to think about stuff. As pointed out in the article (Agustí points this out in his book as well), “The multituberculates existed for about 120 million years, and are often considered the most successful, diversified, and long-lasting mammals in natural history.” Yet notice how much (/little) coverage the article provides. Now compare the article with this article, or this.

February 25, 2015

It’s been quite a while since the last time I posted a ‘here’s some interesting stuff I’ve found online’-post, so I’ll do that now even though I actually don’t spend much time randomly looking around for interesting stuff online these days. I added some wikipedia links I’d saved for a ‘wikipedia articles of interest’-post because it usually takes quite a bit of time to write a standard wikipedia post (as it takes time to figure out what to include and what not to include in the coverage) and I figured that if I didn’t add those links here I’d never get around to blogging them.

i. Battle of Dyrrhachium. Found via this link, which has a lot of stuff.

iii. I found this article about the so-called “Einstellung” effect in chess interesting. I’m however not sure how important this stuff really is. I don’t think it’s sub-optimal for a player to spend a significant amount of time in positions like the ones they analyzed on ideas that don’t work, because usually you’ll only have to spot one idea that does to win the game. It’s obvious that one can argue people spend ‘too much’ time looking for a winning combination in positions where by design no winning combinations exist, but the fact of the matter is that in positions where ‘familiar patterns’ pop up winning resources often do exist, and you don’t win games by overlooking those or by failing to spend time looking for them; occasional suboptimal moves in some contexts may be a reasonable price to pay for increasing your likelihood of finding/playing the best/winning moves when those do exist. Here’s a slightly related link dealing with the question of the potential number of games/moves in chess. Here’s a good wiki article about pawn structures, and here’s one about swindles in chess. I incidentally very recently became a member of the ICC, and I’m frankly impressed with the player pool – which is huge and includes some really strong players (players like Morozevich and Tomashevsky seem to play there regularly). Since I started out on the site I’ve already beaten 3 IMs in bullet and lost a game against Icelandic GM Henrik Danielsen. The IMs I’ve beaten were far from the strongest players in the player pool, but in my experience you don’t get to play titled players nearly as often as that on other sites if you’re at my level.

iv. A picture of the Andromeda galaxy. A really big picture. Related link here.

v. You may already have seen this one, but in case you have not: A Philosopher Walks Into A Coffee Shop. More than one of these made me laugh out loud. If you like the post you should take a look at the comments too; there are some brilliant ones there as well.

vi. Amdahl’s law.
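The linked article boils down to one formula, which is simple enough to sketch in a few lines of Python (my own illustrative code, not from the article): if a fraction p of a workload can be parallelized across n processors, the overall speedup is 1/((1 − p) + p/n), and it can never exceed 1/(1 − p) no matter how large n gets.

```python
def amdahl_speedup(p, n):
    """Overall speedup when a fraction p of the work is parallelized
    across n processors; the remaining (1 - p) stays serial and caps
    the total gain at 1 / (1 - p)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 1,000 processors, a 95%-parallel workload speeds up by
# less than a factor of 20, since 1 / (1 - 0.95) = 20 is the ceiling.
thousand_procs = amdahl_speedup(0.95, 1000)
```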

vii. Eigendecomposition of a matrix. On a related note I’m currently reading Imboden and Pfenninger’s Introduction to Systems Analysis (which goodreads for some reason has listed under a wrong title, as the goodreads book title is really the subtitle of the book), and today I had a look at the wiki article on Jacobian matrices and determinants for that reason (the book is about as technical as you’d expect from a book with a title like that).
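As a toy illustration of the linked concept (my own sketch, not from either the wiki article or the book): for a symmetric 2×2 matrix the eigenvalues follow from the characteristic polynomial, and the matrix can then be rebuilt from its eigenpairs as A = λ₁v₁v₁ᵀ + λ₂v₂v₂ᵀ, which is exactly what the decomposition asserts.

```python
import math

def eig_sym_2x2(a, b, d):
    """Eigendecomposition of the symmetric matrix [[a, b], [b, d]].

    Eigenvalues solve the characteristic polynomial
    λ² − (a + d)λ + (ad − b²) = 0 via the quadratic formula.
    Returns two (eigenvalue, unit eigenvector) pairs.
    """
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr / 4.0 - det)
    def vec(lam):
        # Eigenvector from (A − λI)v = 0; for b ≠ 0 take v = (b, λ − a).
        if b != 0:
            vx, vy = b, lam - a
        else:  # diagonal matrix: eigenvectors are the coordinate axes
            vx, vy = ((1.0, 0.0) if lam == a else (0.0, 1.0))
        n = math.hypot(vx, vy)
        return (vx / n, vy / n)
    lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc
    return (lam1, vec(lam1)), (lam2, vec(lam2))

# A = [[2, 1], [1, 2]] has eigenvalues 3 and 1; rebuilding
# A = λ1·v1v1ᵀ + λ2·v2v2ᵀ recovers the original matrix.
(l1, v1), (l2, v2) = eig_sym_2x2(2.0, 1.0, 2.0)
a_rebuilt = [[l1 * v1[i] * v1[j] + l2 * v2[i] * v2[j] for j in range(2)]
             for i in range(2)]
```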

viii. If you’ve been wondering how I’ve found the quotes I’ve posted here on this blog (I’ve posted roughly 150 posts with quotes so far), links like these are very useful.

February 7, 2015

## Wikipedia articles of interest

i. Pendle witches.

“The trials of the Pendle witches in 1612 are among the most famous witch trials in English history, and some of the best recorded of the 17th century. The twelve accused lived in the area around Pendle Hill in Lancashire, and were charged with the murders of ten people by the use of witchcraft. All but two were tried at Lancaster Assizes on 18–19 August 1612, along with the Samlesbury witches and others, in a series of trials that have become known as the Lancashire witch trials. One was tried at York Assizes on 27 July 1612, and another died in prison. Of the eleven who went to trial – nine women and two men – ten were found guilty and executed by hanging; one was found not guilty.

The official publication of the proceedings by the clerk to the court, Thomas Potts, in his The Wonderfull Discoverie of Witches in the Countie of Lancaster, and the number of witches hanged together – nine at Lancaster and one at York – make the trials unusual for England at that time. It has been estimated that all the English witch trials between the early 15th and early 18th centuries resulted in fewer than 500 executions; this series of trials accounts for more than two per cent of that total.”

“One of the accused, Demdike, had been regarded in the area as a witch for fifty years, and some of the deaths the witches were accused of had happened many years before Roger Nowell started to take an interest in 1612.[13] The event that seems to have triggered Nowell’s investigation, culminating in the Pendle witch trials, occurred on 21 March 1612.[14]

On her way to Trawden Forest, Demdike’s granddaughter, Alizon Device, encountered John Law, a pedlar from Halifax, and asked him for some pins.[15] Seventeenth-century metal pins were handmade and relatively expensive, but they were frequently needed for magical purposes, such as in healing – particularly for treating warts – divination, and for love magic, which may have been why Alizon was so keen to get hold of them and why Law was so reluctant to sell them to her.[16] Whether she meant to buy them, as she claimed, and Law refused to undo his pack for such a small transaction, or whether she had no money and was begging for them, as Law’s son Abraham claimed, is unclear.[17] A few minutes after their encounter Alizon saw Law stumble and fall, perhaps because he suffered a stroke; he managed to regain his feet and reach a nearby inn.[18] Initially Law made no accusations against Alizon,[19] but she appears to have been convinced of her own powers; when Abraham Law took her to visit his father a few days after the incident, she reportedly confessed and asked for his forgiveness.[20]

Alizon Device, her mother Elizabeth, and her brother James were summoned to appear before Nowell on 30 March 1612. Alizon confessed that she had sold her soul to the Devil, and that she had told him to lame John Law after he had called her a thief. Her brother, James, stated that his sister had also confessed to bewitching a local child. Elizabeth was more reticent, admitting only that her mother, Demdike, had a mark on her body, something that many, including Nowell, would have regarded as having been left by the Devil after he had sucked her blood.”

“The Pendle witches were tried in a group that also included the Samlesbury witches, Jane Southworth, Jennet Brierley, and Ellen Brierley, the charges against whom included child murder and cannibalism; Margaret Pearson, the so-called Padiham witch, who was facing her third trial for witchcraft, this time for killing a horse; and Isobel Robey from Windle, accused of using witchcraft to cause sickness.[33]

Some of the accused Pendle witches, such as Alizon Device, seem to have genuinely believed in their guilt, but others protested their innocence to the end.”

“Nine-year-old Jennet Device was a key witness for the prosecution, something that would not have been permitted in many other 17th-century criminal trials. However, King James had made a case for suspending the normal rules of evidence for witchcraft trials in his Daemonologie.[42] As well as identifying those who had attended the Malkin Tower meeting, Jennet also gave evidence against her mother, brother, and sister. […] When Jennet was asked to stand up and give evidence against her mother, Elizabeth began to scream and curse her daughter, forcing the judges to have her removed from the courtroom before the evidence could be heard.[48] Jennet was placed on a table and stated that she believed her mother had been a witch for three or four years. She also said her mother had a familiar called Ball, who appeared in the shape of a brown dog. Jennet claimed to have witnessed conversations between Ball and her mother, in which Ball had been asked to help with various murders. James Device also gave evidence against his mother, saying he had seen her making a clay figure of one of her victims, John Robinson.[49] Elizabeth Device was found guilty.[47]

James Device pleaded not guilty to the murders by witchcraft of Anne Townley and John Duckworth. However he, like Chattox, had earlier made a confession to Nowell, which was read out in court. That, and the evidence presented against him by his sister Jennet, who said that she had seen her brother asking a black dog he had conjured up to help him kill Townley, was sufficient to persuade the jury to find him guilty.[50][51]”

“Many of the allegations made in the Pendle witch trials resulted from members of the Demdike and Chattox families making accusations against each other. Historian John Swain has said that the outbreaks of witchcraft in and around Pendle demonstrate the extent to which people could make a living either by posing as a witch, or by accusing or threatening to accuse others of being a witch.[17] Although it is implicit in much of the literature on witchcraft that the accused were victims, often mentally or physically abnormal, for some at least, it may have been a trade like any other, albeit one with significant risks.[74] There may have been bad blood between the Demdike and Chattox families because they were in competition with each other, trying to make a living from healing, begging, and extortion.”

ii. Atmosphere of Jupiter. The first thing that would spring to mind if someone asked me what I knew about it would probably be something along the lines of: “…well, it’s huge…”

…and it is. But we know a lot more than that – some observations from the article:

“The atmosphere of Jupiter is the largest planetary atmosphere in the Solar System. It is mostly made of molecular hydrogen and helium in roughly solar proportions; other chemical compounds are present only in small amounts […] The atmosphere of Jupiter lacks a clear lower boundary and gradually transitions into the liquid interior of the planet. […] The Jovian atmosphere shows a wide range of active phenomena, including band instabilities, vortices (cyclones and anticyclones), storms and lightning. […] Jupiter has powerful storms, always accompanied by lightning strikes. The storms are a result of moist convection in the atmosphere connected to the evaporation and condensation of water. They are sites of strong upward motion of the air, which leads to the formation of bright and dense clouds. The storms form mainly in belt regions. The lightning strikes on Jupiter are hundreds of times more powerful than those seen on Earth.” [However, do note that later in the article it is stated that: “On Jupiter lightning strikes are on average a few times more powerful than those on Earth.”]

“The composition of Jupiter’s atmosphere is similar to that of the planet as a whole.[1] Jupiter’s atmosphere is the most comprehensively understood of those of all the gas giants because it was observed directly by the Galileo atmospheric probe when it entered the Jovian atmosphere on December 7, 1995.[26] Other sources of information about Jupiter’s atmospheric composition include the Infrared Space Observatory (ISO),[27] the Galileo and Cassini orbiters,[28] and Earth-based observations.”

“The visible surface of Jupiter is divided into several bands parallel to the equator. There are two types of bands: lightly colored zones and relatively dark belts. […] The alternating pattern of belts and zones continues until the polar regions at approximately 50 degrees latitude, where their visible appearance becomes somewhat muted.[30] The basic belt-zone structure probably extends well towards the poles, reaching at least to 80° North or South.[5]

The difference in the appearance between zones and belts is caused by differences in the opacity of the clouds. Ammonia concentration is higher in zones, which leads to the appearance of denser clouds of ammonia ice at higher altitudes, which in turn leads to their lighter color.[15] On the other hand, in belts clouds are thinner and are located at lower altitudes.[15] The upper troposphere is colder in zones and warmer in belts.[5] […] The Jovian bands are bounded by zonal atmospheric flows (winds), called jets. […] The location and width of bands, speed and location of jets on Jupiter are remarkably stable, having changed only slightly between 1980 and 2000. […] However bands vary in coloration and intensity over time […] These variations were first observed in the early seventeenth century.”

“Jupiter radiates much more heat than it receives from the Sun. It is estimated that the ratio between the power emitted by the planet and that absorbed from the Sun is 1.67 ± 0.09.”

“Wife selling in England was a way of ending an unsatisfactory marriage by mutual agreement that probably began in the late 17th century, when divorce was a practical impossibility for all but the very wealthiest. After parading his wife with a halter around her neck, arm, or waist, a husband would publicly auction her to the highest bidder. […] Although the custom had no basis in law and frequently resulted in prosecution, particularly from the mid-19th century onwards, the attitude of the authorities was equivocal. At least one early 19th-century magistrate is on record as stating that he did not believe he had the right to prevent wife sales, and there were cases of local Poor Law Commissioners forcing husbands to sell their wives, rather than having to maintain the family in workhouses.”

“Until the passing of the Marriage Act of 1753, a formal ceremony of marriage before a clergyman was not a legal requirement in England, and marriages were unregistered. All that was required was for both parties to agree to the union, so long as each had reached the legal age of consent,[8] which was 12 for girls and 14 for boys.[9] Women were completely subordinated to their husbands after marriage, the husband and wife becoming one legal entity, a legal status known as coverture. […] Married women could not own property in their own right, and were indeed themselves the property of their husbands. […] Five distinct methods of breaking up a marriage existed in the early modern period of English history. One was to sue in the ecclesiastical courts for separation from bed and board (a mensa et thoro), on the grounds of adultery or life-threatening cruelty, but it did not allow a remarriage.[11] From the 1550s, until the Matrimonial Causes Act became law in 1857, divorce in England was only possible, if at all, by the complex and costly procedure of a private Act of Parliament.[12] Although the divorce courts set up in the wake of the 1857 Act made the procedure considerably cheaper, divorce remained prohibitively expensive for the poorer members of society.[13][nb 1] An alternative was to obtain a “private separation”, an agreement negotiated between both spouses, embodied in a deed of separation drawn up by a conveyancer. Desertion or elopement was also possible, whereby the wife was forced out of the family home, or the husband simply set up a new home with his mistress.[11] Finally, the less popular notion of wife selling was an alternative but illegitimate method of ending a marriage.”

“Although some 19th-century wives objected, records of 18th-century women resisting their sales are non-existent. With no financial resources, and no skills on which to trade, for many women a sale was the only way out of an unhappy marriage.[17] Indeed the wife is sometimes reported as having insisted on the sale. […] Although the initiative was usually the husband’s, the wife had to agree to the sale. An 1824 report from Manchester says that “after several biddings she [the wife] was knocked down for 5s; but not liking the purchaser, she was put up again for 3s and a quart of ale”.[27] Frequently the wife was already living with her new partner.[28] In one case in 1804 a London shopkeeper found his wife in bed with a stranger to him, who, following an altercation, offered to purchase the wife. The shopkeeper agreed, and in this instance the sale may have been an acceptable method of resolving the situation. However, the sale was sometimes spontaneous, and the wife could find herself the subject of bids from total strangers.[29] In March 1766, a carpenter from Southwark sold his wife “in a fit of conjugal indifference at the alehouse”. Once sober, the man asked his wife to return, and after she refused he hanged himself. A domestic fight might sometimes precede the sale of a wife, but in most recorded cases the intent was to end a marriage in a way that gave it the legitimacy of a divorce.”

“Prices paid for wives varied considerably, from a high of £100 plus £25 each for her two children in a sale of 1865 (equivalent to about £12,500 in 2015)[34] to a low of a glass of ale, or even free. […] According to authors Wade Mansell and Belinda Meteyard, money seems usually to have been a secondary consideration;[4] the more important factor was that the sale was seen by many as legally binding, despite it having no basis in law. […] In Sussex, inns and public houses were a regular venue for wife-selling, and alcohol often formed part of the payment. […] in Ninfield in 1790, a man who swapped his wife at the village inn for half a pint of gin changed his mind and bought her back later.[42] […] Estimates of the frequency of the ritual usually number about 300 between 1780 and 1850, relatively insignificant compared to the instances of desertion, which in the Victorian era numbered in the tens of thousands.[43]”

v. Bog turtle.

“The bog turtle (Glyptemys muhlenbergii) is a semiaquatic turtle endemic to the eastern United States. […] It is the smallest North American turtle, measuring about 10 centimeters (4 in) long when fully grown. […] The bog turtle can be found from Vermont in the north, south to Georgia, and west to Ohio. Diurnal and secretive, it spends most of its time buried in mud and – during the winter months – in hibernation. The bog turtle is omnivorous, feeding mainly on small invertebrates.”

“The bog turtle is native only to the eastern United States,[nb 1] congregating in colonies that often consist of fewer than 20 individuals.[23] […] densities can range from 5 to 125 individuals per 0.81 hectares (2.0 acres). […] The bog turtle spends its life almost exclusively in the wetland where it hatched. In its natural environment, it has a maximum lifespan of perhaps 50 years or more,[47] and the average lifespan is 20–30 years.”

“The bog turtle is primarily diurnal, active during the day and sleeping at night. It wakes in the early morning, basks until fully warm, then begins its search for food.[31] It is a seclusive species, making it challenging to observe in its natural habitat.[11] During colder days, the bog turtle will spend much of its time in dense underbrush, underwater, or buried in mud. […] Day-to-day, the bog turtle moves very little, typically basking in the sun and waiting for prey. […] Various studies have found different rates of daily movement in bog turtles, varying from 2.1 to 23 meters (6.9 to 75.5 ft) in males and 1.1 to 18 meters (3.6 to 59.1 ft) in females.”

“Changes to the bog turtle’s habitat have resulted in the disappearance of 80 percent of the colonies that existed 30 years ago.[7] Because of the turtle’s rarity, it is also in danger of illegal collection, often for the worldwide pet trade. […] The bog turtle was listed as critically endangered in the 2011 IUCN Red List.[53]”

January 3, 2015

## Wikipedia articles of interest

i. Saffron.

“Saffron has been a key seasoning, fragrance, dye, and medicine for over three millennia.[1] One of the world’s most expensive spices by weight,[2] saffron consists of stigmas plucked from the vegetatively propagated and sterile Crocus sativus, known popularly as the saffron crocus. The resulting dried “threads”[N 1] are distinguished by their bitter taste, hay-like fragrance, and slight metallic notes. The saffron crocus is unknown in the wild; its most likely precursor, Crocus cartwrightianus, originated in Crete or Central Asia;[3] The saffron crocus is native to Southwest Asia and was first cultivated in what is now Greece.[4][5][6]

From antiquity to modern times the history of saffron is full of applications in food, drink, and traditional herbal medicine: from Africa and Asia to Europe and the Americas the brilliant red threads were—and are—prized in baking, curries, and liquor. It coloured textiles and other items and often helped confer the social standing of political elites and religious adepts. Ancient peoples believed saffron could be used to treat stomach upsets, bubonic plague, and smallpox.

Saffron crocus cultivation has long centred on a broad belt of Eurasia bounded by the Mediterranean Sea in the southwest to India and China in the northeast. The major producers of antiquity—Iran, Spain, India, and Greece—continue to dominate the world trade. […] Iran has accounted for around 90–93 percent of recent annual world production and thereby dominates the export market on a by-quantity basis. […]

The high cost of saffron is due to the difficulty of manually extracting large numbers of minute stigmas, which are the only part of the crocus with the desired aroma and flavour. An exorbitant number of flowers need to be processed in order to yield marketable amounts of saffron. Obtaining 1 lb (0.45 kg) of dry saffron requires the harvesting of some 50,000 flowers, the equivalent of an association football pitch’s area of cultivation, or roughly 7,140 m2 (0.714 ha).[14] By another estimate some 75,000 flowers are needed to produce one pound of dry saffron. […] Another complication arises in the flowers’ simultaneous and transient blooming. […] Bulk quantities of lower-grade saffron can reach upwards of US$500 per pound; retail costs for small amounts may exceed ten times that rate. In Western countries the average retail price is approximately US$1,000 per pound.[5] Prices vary widely elsewhere, but on average tend to be lower. The high price is somewhat offset by the small quantities needed in kitchens: a few grams at most in medicinal use and a few strands, at most, in culinary applications; there are between 70,000 and 200,000 strands in a pound.”

ii. Scramble for Africa.

“The “Scramble for Africa” (also the Partition of Africa and the Conquest of Africa) was the invasion and occupation, colonization and annexation of African territory by European powers during the period of New Imperialism, between 1881 and 1914. In 1870, 10 percent of Africa was under European control; by 1914 it was 90 percent of the continent, with only Abyssinia (Ethiopia) and Liberia still independent.”

Here’s a really neat illustration from the article:

“Germany became the third largest colonial power in Africa. Nearly all of its overall empire of 2.6 million square kilometres and 14 million colonial subjects in 1914 was found in its African possessions of Southwest Africa, Togoland, the Cameroons, and Tanganyika. Following the 1904 Entente cordiale between France and the British Empire, Germany tried to isolate France in 1905 with the First Moroccan Crisis. This led to the 1905 Algeciras Conference, in which France’s influence on Morocco was compensated by the exchange of other territories, and then to the Agadir Crisis in 1911. Along with the 1898 Fashoda Incident between France and Britain, this succession of international crises reveals the bitterness of the struggle between the various imperialist nations, which ultimately led to World War I. […]

David Livingstone‘s explorations, carried on by Henry Morton Stanley, excited imaginations. But at first, Stanley’s grandiose ideas for colonisation found little support owing to the problems and scale of action required, except from Léopold II of Belgium, who in 1876 had organised the International African Association (the Congo Society). From 1869 to 1874, Stanley was secretly sent by Léopold II to the Congo region, where he made treaties with several African chiefs along the Congo River and by 1882 had sufficient territory to form the basis of the Congo Free State. Léopold II personally owned the colony from 1885 and used it as a source of ivory and rubber.

While Stanley was exploring Congo on behalf of Léopold II of Belgium, the Franco-Italian marine officer Pierre de Brazza travelled into the western Congo basin and raised the French flag over the newly founded Brazzaville in 1881, thus occupying today’s Republic of the Congo. Portugal, which also claimed the area due to old treaties with the native Kongo Empire, made a treaty with Britain on 26 February 1884 to block off the Congo Society’s access to the Atlantic.

By 1890 the Congo Free State had consolidated its control of its territory between Leopoldville and Stanleyville, and was looking to push south down the Lualaba River from Stanleyville. At the same time, the British South Africa Company of Cecil Rhodes was expanding north from the Limpopo River, sending the Pioneer Column (guided by Frederick Selous) through Matabeleland, and starting a colony in Mashonaland.

To the West, in the land where their expansions would meet, was Katanga, site of the Yeke Kingdom of Msiri. Msiri was the most militarily powerful ruler in the area, and traded large quantities of copper, ivory and slaves — and rumours of gold reached European ears. The scramble for Katanga was a prime example of the period. Rhodes and the BSAC sent two expeditions to Msiri in 1890 led by Alfred Sharpe, who was rebuffed, and Joseph Thomson, who failed to reach Katanga. Leopold sent four CFS expeditions. First, the Le Marinel Expedition could only extract a vaguely worded letter. The Delcommune Expedition was rebuffed. The well-armed Stairs Expedition was given orders to take Katanga with or without Msiri’s consent. Msiri refused, was shot, and the expedition cut off his head and stuck it on a pole as a “barbaric lesson” to the people. The Bia Expedition finished the job of establishing an administration of sorts and a “police presence” in Katanga.

Thus, the half million square kilometres of Katanga came into Leopold’s possession and brought his African realm up to 2,300,000 square kilometres (890,000 sq mi), about 75 times larger than Belgium. The Congo Free State imposed such a terror regime on the colonised people, including mass killings and forced labour, that Belgium, under pressure from the Congo Reform Association, ended Leopold II’s rule and annexed it in 1908 as a colony of Belgium, known as the Belgian Congo. […]”

“Britain’s administration of Egypt and the Cape Colony contributed to a preoccupation over securing the source of the Nile River. Egypt was overrun by British forces in 1882 (although not formally declared a protectorate until 1914, and never an actual colony); Sudan, Nigeria, Kenya and Uganda were subjugated in the 1890s and early 20th century; and in the south, the Cape Colony (first acquired in 1795) provided a base for the subjugation of neighbouring African states and the Dutch Afrikaner settlers who had left the Cape to avoid the British and then founded their own republics. In 1877, Theophilus Shepstone annexed the South African Republic (or Transvaal – independent from 1857 to 1877) for the British Empire. In 1879, after the Anglo-Zulu War, Britain consolidated its control of most of the territories of South Africa. The Boers protested, and in December 1880 they revolted, leading to the First Boer War (1880–81). British Prime Minister William Gladstone signed a peace treaty on 23 March 1881, giving self-government to the Boers in the Transvaal. […] The Second Boer War, fought between 1899 and 1902, was about control of the gold and diamond industries; the independent Boer republics of the Orange Free State and the South African Republic (or Transvaal) were this time defeated and absorbed into the British Empire.”

There are a lot of unsourced claims in the article and some parts of it actually aren’t very good, but this is a topic about which I did not know much (I had no idea most of colonial Africa was acquired by the European powers as late as it actually was). This is another good map from the article to have a look at if you just want the big picture.

iii. Cursed soldiers.

“The cursed soldiers (that is, “accursed soldiers” or “damned soldiers”; Polish: Żołnierze wyklęci) is a name applied to a variety of Polish resistance movements formed in the later stages of World War II and afterwards. Created by some members of the Polish Secret State, these clandestine organizations continued their armed struggle against the Stalinist government of Poland well into the 1950s. The guerrilla warfare included an array of military attacks launched against the new communist prisons as well as MBP state security offices, detention facilities for political prisoners, and concentration camps set up across the country. Most of the Polish anti-communist groups ceased to exist in the late 1940s or 1950s, hunted down by MBP security services and NKVD assassination squads.[1] However, the last known ‘cursed soldier’, Józef Franczak, was killed in an ambush as late as 1963, almost 20 years after the Soviet take-over of Poland.[2][3] […] Similar eastern European anti-communists fought on in other countries. […]

Armia Krajowa (or simply AK), the main Polish resistance movement in World War II, had officially disbanded on 19 January 1945 to prevent a slide into armed conflict with the Red Army, including an increasing threat of civil war over Poland’s sovereignty. However, many units decided to continue on with their struggle under new circumstances, seeing the Soviet forces as new occupiers. Meanwhile, Soviet partisans in Poland had already been ordered by Moscow on June 22, 1943 to engage Polish Leśni partisans in combat.[6] They commonly fought Poles more often than they did the Germans.[4] The main forces of the Red Army (Northern Group of Forces) and the NKVD had begun conducting operations against AK partisans already during and directly after the Polish Operation Tempest, designed by the Poles as a preventive action to assure Polish rather than Soviet control of the cities after the German withdrawal.[5] Soviet premier Joseph Stalin aimed to ensure that an independent Poland would never reemerge in the postwar period.[7] […]

The first Polish communist government, the Polish Committee of National Liberation, was formed in July 1944, but declined jurisdiction over AK soldiers. Consequently, for more than a year, it was Soviet agencies like the NKVD that dealt with the AK. By the end of the war, approximately 60,000 soldiers of the AK had been arrested, and 50,000 of them were deported to the Soviet Union’s gulags and prisons. Most of those soldiers had been captured by the Soviets during or in the aftermath of Operation Tempest, when many AK units tried to cooperate with the Soviets in a nationwide uprising against the Germans. Other veterans were arrested when they decided to approach the government after being promised amnesty. In 1947, an amnesty was passed for most of the partisans; the Communist authorities expected around 12,000 people to give up their arms, but the actual number of people to come out of the forests eventually reached 53,000. Many of them were arrested despite promises of freedom; after repeated broken promises during the first few years of communist control, AK soldiers stopped trusting the government.[5] […]

The persecution of the AK members was only a part of the reign of Stalinist terror in postwar Poland. In the period of 1944–56, approximately 300,000 Polish people had been arrested,[21] or up to two million, by different accounts.[5] There were 6,000 death sentences issued, the majority of them carried out.[21] Possibly, over 20,000 people died in communist prisons including those executed “in the majesty of the law” such as Witold Pilecki, a hero of Auschwitz.[5] A further six million Polish citizens (i.e., one out of every three adult Poles) were classified as suspected members of a ‘reactionary or criminal element’ and subjected to investigation by state agencies.”

iv. Affective neuroscience.

“Affective neuroscience is the study of the neural mechanisms of emotion. This interdisciplinary field combines neuroscience with the psychological study of personality, emotion, and mood.[1]”

This article is actually related to the Delusion and self-deception book, which covered some of the stuff included in this article, but I decided I might as well include the link in this post. I think some parts of the article are written in a somewhat different manner than most wiki articles – there are specific paragraphs briefly covering the results of specific meta-analyses conducted in this field. I can’t really tell from this article if I actually like this way of writing a wiki article or not.

v. Hamming distance. Not a long article, but this is a useful concept to be familiar with:

“In information theory, the Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different. In another way, it measures the minimum number of substitutions required to change one string into the other, or the minimum number of errors that could have transformed one string into the other. […]

The Hamming distance is named after Richard Hamming, who introduced it in his fundamental paper on Hamming codes Error detecting and error correcting codes in 1950.[1] It is used in telecommunication to count the number of flipped bits in a fixed-length binary word as an estimate of error, and therefore is sometimes called the signal distance. Hamming weight analysis of bits is used in several disciplines including information theory, coding theory, and cryptography. However, for comparing strings of different lengths, or strings where not just substitutions but also insertions or deletions have to be expected, a more sophisticated metric like the Levenshtein distance is more appropriate.”
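The definition translates almost verbatim into code; here’s a minimal sketch of my own (the example strings are ones commonly used to illustrate the concept, not necessarily those in the article):

```python
def hamming_distance(s, t):
    """Number of positions at which the corresponding symbols differ."""
    if len(s) != len(t):
        raise ValueError("Hamming distance is only defined for equal-length strings")
    return sum(a != b for a, b in zip(s, t))

# "karolin" vs "kathrin" differ in 3 positions; for fixed-length binary
# words the distance counts flipped bits, e.g. "1011101" vs "1001001" -> 2.
```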

vi. Menstrual synchrony. I came across that one recently in a book, and when I did it was obvious that the author had not read this article and lacked some of the knowledge included in it (the phenomenon was assumed to be real in the coverage, and theory was developed on the assumption that it was real, which would not make sense if it was not). I figured that if that person didn’t know this stuff, a lot of other people – including people reading along here – probably don’t either, so I should cover this topic somewhere. This is an obvious place to do so. Okay, on to the article coverage:

Menstrual synchrony, also called the McClintock effect,[2] is the alleged process whereby women who begin living together in close proximity experience their menstrual cycle onsets (i.e., the onset of menstruation or menses) becoming closer together in time than previously. “For example, the distribution of onsets of seven female lifeguards was scattered at the beginning of the summer, but after 3 months spent together, the onset of all seven cycles fell within a 4-day period.”[3]

Martha McClintock’s 1971 paper, published in Nature, says that menstrual cycle synchronization happens when the menstrual cycle onsets of two or more women become closer together in time than they were several months earlier.[3] Several mechanisms have been hypothesized to cause synchronization.[4]

After the initial studies, several papers were published reporting methodological flaws in studies reporting menstrual synchrony, including McClintock’s study. In addition, other studies were published that failed to find synchrony. The proposed mechanisms have also received scientific criticism. A 2013 review of menstrual synchrony concluded that menstrual synchrony is doubtful.[4] […] in a recent systematic review of menstrual synchrony, Harris and Vitzthum concluded that “In light of the lack of empirical evidence for MS [menstrual synchrony] sensu stricto, it seems there should be more widespread doubt than acceptance of this hypothesis.” […]

The experience of synchrony may be the result of the mathematical fact that menstrual cycles of different frequencies repeatedly converge and diverge over time and not due to a process of synchronization.[12] It may also be due to the high probability of menstruation overlap that occurs by chance.[6]
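That convergence-and-divergence point is easy to illustrate with a toy calculation (my own sketch with hypothetical numbers, not from the article): two cycles of different lengths, with no interaction whatsoever, still drift into near-coincidence and back out again.

```python
# Toy illustration: two cycles of 28 and 30 days whose first onsets are
# 14 days apart. The cycles do not interact in any way, yet the gap
# between onsets shrinks to zero and then widens again -- apparent
# "synchrony" produced purely by the differing frequencies.
CYCLE_A, CYCLE_B = 28, 30  # hypothetical cycle lengths in days

def onset_gap(month: int) -> int:
    """Days between the two onsets in a given month, folded into 0..CYCLE_A/2."""
    diff = abs(month * CYCLE_A - (month * CYCLE_B + 14)) % CYCLE_A
    return min(diff, CYCLE_A - diff)

gaps = [onset_gap(m) for m in range(10)]
print(gaps)  # → [14, 12, 10, 8, 6, 4, 2, 0, 2, 4]
```

An observer who happened to measure around month 7 would see near-perfect "synchrony"; one who measured a few months later would see the onsets drifting apart again, with no synchronizing mechanism involved at any point.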

December 4, 2014

## Wikipedia articles of interest

(A minor note: These days when I’m randomly browsing wikipedia and not just looking up concepts or terms found in the books I read, I’m mostly browsing the featured content on wikipedia. There’s a lot of featured stuff, and on average such articles are more interesting than random articles. As a result of this approach, all articles covered in the post below are featured articles. A related consequence of this shift is that I may cover fewer articles in future wikipedia posts than I have in the past; this post only contains five articles, which I believe is slightly fewer than usual for these posts – a big reason for this being that it sometimes takes a lot of time to read a featured article.)

“The woolly mammoth (Mammuthus primigenius) was a species of mammoth, the common name for the extinct elephant genus Mammuthus. The woolly mammoth was one of the last in a line of mammoth species, beginning with Mammuthus subplanifrons in the early Pliocene. M. primigenius diverged from the steppe mammoth, M. trogontherii, about 200,000 years ago in eastern Asia. Its closest extant relative is the Asian elephant. […] The earliest known proboscideans, the clade which contains elephants, existed about 55 million years ago around the Tethys Sea. […] The family Elephantidae existed six million years ago in Africa and includes the modern elephants and the mammoths. Among many now extinct clades, the mastodon is only a distant relative of the mammoths, and part of the separate Mammutidae family, which diverged 25 million years before the mammoths evolved.[12] […] The woolly mammoth coexisted with early humans, who used its bones and tusks for making art, tools, and dwellings, and the species was also hunted for food.[1] It disappeared from its mainland range at the end of the Pleistocene 10,000 years ago, most likely through a combination of climate change, consequent disappearance of its habitat, and hunting by humans, though the significance of these factors is disputed. Isolated populations survived on Wrangel Island until 4,000 years ago, and on St. Paul Island until 6,400 years ago.”

“The appearance and behaviour of this species are among the best studied of any prehistoric animal due to the discovery of frozen carcasses in Siberia and Alaska, as well as skeletons, teeth, stomach contents, dung, and depiction from life in prehistoric cave paintings. […] Fully grown males reached shoulder heights between 2.7 and 3.4 m (9 and 11 ft) and weighed up to 6 tonnes (6.6 short tons). This is almost as large as extant male African elephants, which commonly reach 3–3.4 m (9.8–11.2 ft), and is less than the size of the earlier mammoth species M. meridionalis and M. trogontherii, and the contemporary M. columbi. […] Woolly mammoths had several adaptations to the cold, most noticeably the layer of fur covering all parts of the body. Other adaptations to cold weather include ears that are far smaller than those of modern elephants […] The small ears reduced heat loss and frostbite, and the tail was short for the same reason […] They had a layer of fat up to 10 cm (3.9 in) thick under the skin, which helped to keep them warm. […] The coat consisted of an outer layer of long, coarse “guard hair”, which was 30 cm (12 in) on the upper part of the body, up to 90 cm (35 in) in length on the flanks and underside, and 0.5 mm (0.020 in) in diameter, and a denser inner layer of shorter, slightly curly under-wool, up to 8 cm (3.1 in) long and 0.05 mm (0.0020 in) in diameter. The hairs on the upper leg were up to 38 cm (15 in) long, and those of the feet were 15 cm (5.9 in) long, reaching the toes. The hairs on the head were relatively short, but longer on the underside and the sides of the trunk. The tail was extended by coarse hairs up to 60 cm (24 in) long, which were thicker than the guard hairs. It is likely that the woolly mammoth moulted seasonally, and that the heaviest fur was shed during spring.”

“Woolly mammoths had very long tusks, which were more curved than those of modern elephants. The largest known male tusk is 4.2 m (14 ft) long and weighs 91 kg (201 lb), but 2.4–2.7 m (7.9–8.9 ft) and 45 kg (99 lb) was a more typical size. Female tusks averaged at 1.5–1.8 m (4.9–5.9 ft) and weighed 9 kg (20 lb). About a quarter of the length was inside the sockets. The tusks grew spirally in opposite directions from the base and continued in a curve until the tips pointed towards each other. In this way, most of the weight would have been close to the skull, and there would be less torque than with straight tusks. The tusks were usually asymmetrical and showed considerable variation, with some tusks curving down instead of outwards and some being shorter due to breakage.”

“Woolly mammoths needed a varied diet to support their growth, like modern elephants. An adult of six tonnes would need to eat 180 kg (397 lb) daily, and may have foraged as long as twenty hours every day. […] Woolly mammoths continued growing past adulthood, like other elephants. Unfused limb bones show that males grew until they reached the age of 40, and females grew until they were 25. The frozen calf “Dima” was 90 cm (35 in) tall when it died at the age of 6–12 months. At this age, the second set of molars would be in the process of erupting, and the first set would be worn out at 18 months of age. The third set of molars lasted for ten years, and this process was repeated until the final, sixth set emerged when the animal was 30 years old. A woolly mammoth could probably reach the age of 60, like modern elephants of the same size. By then the last set of molars would be worn out, the animal would be unable to chew and feed, and it would die of starvation.[53]”

“The habitat of the woolly mammoth is known as “mammoth steppe” or “tundra steppe”. This environment stretched across northern Asia, many parts of Europe, and the northern part of North America during the last ice age. It was similar to the grassy steppes of modern Russia, but the flora was more diverse, abundant, and grew faster. Grasses, sedges, shrubs, and herbaceous plants were present, and scattered trees were mainly found in southern regions. This habitat was not dominated by ice and snow, as is popularly believed, since these regions are thought to have been high-pressure areas at the time. The habitat of the woolly mammoth also supported other grazing herbivores such as the woolly rhinoceros, wild horses and bison. […] A 2008 study estimated that changes in climate shrank suitable mammoth habitat from 7,700,000 km2 (3,000,000 sq mi) 42,000 years ago to 800,000 km2 (310,000 sq mi) 6,000 years ago.[81][82] Woolly mammoths survived an even greater loss of habitat at the end of the Saale glaciation 125,000 years ago, and it is likely that humans hunted the remaining populations to extinction at the end of the last glacial period.[83][84] […] Several woolly mammoth specimens show evidence of being butchered by humans, which is indicated by breaks, cut-marks, and associated stone tools. It is not known how much prehistoric humans relied on woolly mammoth meat, since there were many other large herbivores available. Many mammoth carcasses may have been scavenged by humans rather than hunted. Some cave paintings show woolly mammoths in structures interpreted as pitfall traps. Few specimens show direct, unambiguous evidence of having been hunted by humans.”

“While frozen woolly mammoth carcasses had been excavated by Europeans as early as 1728, the first fully documented specimen was discovered near the delta of the Lena River in 1799 by Ossip Schumachov, a Siberian hunter.[90] Schumachov let it thaw until he could retrieve the tusks for sale to the ivory trade. [Aargh!] […] The 1901 excavation of the “Berezovka mammoth” is the best documented of the early finds. It was discovered by the Berezovka River, and the Russian authorities financed its excavation. Its head was exposed, and the flesh had been scavenged. The animal still had grass between its teeth and on the tongue, showing that it had died suddenly. […] By 1929, the remains of 34 mammoths with frozen soft tissues (skin, flesh, or organs) had been documented. Only four of them were relatively complete. Since then, about that many more have been found.”

ii. Daniel Lambert.

“Daniel Lambert (13 March 1770 – 21 June 1809) was a gaol keeper[n 1] and animal breeder from Leicester, England, famous for his unusually large size. After serving four years as an apprentice at an engraving and die casting works in Birmingham, he returned to Leicester around 1788 and succeeded his father as keeper of Leicester’s gaol. […] At the time of Lambert’s return to Leicester, his weight began to increase steadily, even though he was athletically active and, by his own account, abstained from drinking alcohol and did not eat unusual amounts of food. In 1805, Lambert’s gaol closed. By this time, he weighed 50 stone (700 lb; 318 kg), and had become the heaviest authenticated person up to that point in recorded history. Unemployable and sensitive about his bulk, Lambert became a recluse.

In 1806, poverty forced Lambert to put himself on exhibition to raise money. In April 1806, he took up residence in London, charging spectators to enter his apartments to meet him. Visitors were impressed by his intelligence and personality, and visiting him became highly fashionable. After some months on public display, Lambert grew tired of exhibiting himself, and in September 1806, he returned, wealthy, to Leicester, where he bred sporting dogs and regularly attended sporting events. Between 1806 and 1809, he made a further series of short fundraising tours.

In June 1809, he died suddenly in Stamford. At the time of his death, he weighed 52 stone 11 lb (739 lb; 335 kg), and his coffin required 112 square feet (10.4 m2) of wood. Despite the coffin being built with wheels to allow easy transport, and a sloping approach being dug to the grave, it took 20 men almost half an hour to drag his casket into the trench, in a newly opened burial ground to the rear of St Martin’s Church.”

“Sensitive about his weight, Daniel Lambert refused to allow himself to be weighed, but sometime around 1805, some friends persuaded him to come with them to a cock fight in Loughborough. Once he had squeezed his way into their carriage, the rest of the party drove the carriage onto a large scale and jumped out. After deducting the weight of the (previously weighed) empty carriage, they calculated that Lambert’s weight was now 50 stone (700 lb; 318 kg), and that he had thus overtaken Edward Bright, the 616-pound (279 kg) “Fat Man of Maldon”,[23] as the heaviest authenticated person in recorded history.[20][24]

Despite his shyness, Lambert badly needed to earn money, and saw no alternative to putting himself on display, and charging his spectators.[20] On 4 April 1806, he boarded a specially built carriage and travelled from Leicester[26] to his new home at 53 Piccadilly, then near the western edge of London.[20] For five hours each day, he welcomed visitors into his home, charging each a shilling (about £3.5 as of 2014).[18][25] […] Lambert shared his interests and knowledge of sports, dogs and animal husbandry with London’s middle and upper classes,[27] and it soon became highly fashionable to visit him, or become his friend.[27] Many called repeatedly; one banker made 20 visits, paying the admission fee on each occasion.[17] […] His business venture was immediately successful, drawing around 400 paying visitors per day. […] People would travel long distances to see him (on one occasion, a party of 14 travelled to London from Guernsey),[n 5] and many would spend hours speaking with him on animal breeding.”

“After some months in London, Lambert was visited by Józef Boruwłaski, a 3-foot 3-inch (99 cm) dwarf then in his seventies.[44] Born in 1739 to a poor family in rural Pokuttya,[45] Boruwłaski was generally considered to be the last of Europe’s court dwarfs.[46] He was introduced to the Empress Maria Theresa in 1754,[47] and after a short time residing with deposed Polish king Stanisław Leszczyński,[44] he exhibited himself around Europe, thus becoming a wealthy man.[48] At age 60, he retired to Durham,[49] where he became such a popular figure that the City of Durham paid him to live there[50] and he became one of its most prominent citizens […] The meeting of Lambert and Boruwłaski, the largest and smallest men in the country,[51] was the subject of enormous public interest”

“There was no autopsy, and the cause of Lambert’s death is unknown.[65] While many sources say that he died of a fatty degeneration of the heart or of stress on his heart caused by his bulk, his behaviour in the period leading to his death does not match that of someone suffering from cardiac insufficiency; witnesses agree that on the morning of his death he appeared well, before he became short of breath and collapsed.[65] Bondeson (2006) speculates that the most consistent explanation of his death, given his symptoms and medical history, is that he had a sudden pulmonary embolism.[65]”

“The exposed geology of the Capitol Reef area presents a record of mostly Mesozoic-aged sedimentation in an area of North America in and around Capitol Reef National Park, on the Colorado Plateau in southeastern Utah.

Nearly 10,000 feet (3,000 m) of sedimentary strata are found in the Capitol Reef area, representing nearly 200 million years of geologic history of the south-central part of the U.S. state of Utah. These rocks range in age from Permian (as old as 270 million years old) to Cretaceous (as young as 80 million years old.)[1] Rock layers in the area reveal ancient climates as varied as rivers and swamps (Chinle Formation), Sahara-like deserts (Navajo Sandstone), and shallow ocean (Mancos Shale).

The area’s first known sediments were laid down as a shallow sea invaded the land in the Permian. At first sandstone was deposited but limestone followed as the sea deepened. After the sea retreated in the Triassic, streams deposited silt before the area was uplifted and underwent erosion. Conglomerate followed by logs, sand, mud and wind-transported volcanic ash were later added. Mid to Late Triassic time saw increasing aridity, during which vast amounts of sandstone were laid down along with some deposits from slow-moving streams. As another sea started to return it periodically flooded the area and left evaporite deposits. Barrier islands, sand bars and later, tidal flats, contributed sand for sandstone, followed by cobbles for conglomerate and mud for shale. The sea retreated, leaving streams, lakes and swampy plains to become the resting place for sediments. Another sea, the Western Interior Seaway, returned in the Cretaceous and left more sandstone and shale only to disappear in the early Cenozoic.”

“The Laramide orogeny compacted the region from about 70 million to 50 million years ago and in the process created the Rocky Mountains. Many monoclines (a type of gentle upward fold in rock strata) were also formed by the deep compressive forces of the Laramide. One of those monoclines, called the Waterpocket Fold, is the major geographic feature of the park. The 100 mile (160 km) long fold has a north-south alignment with a steeply east-dipping side. The rock layers on the west side of the Waterpocket Fold have been lifted more than 7,000 feet (2,100 m) higher than the layers on the east.[23] Thus older rocks are exposed on the western part of the fold and younger rocks on the eastern part. This particular fold may have been created due to movement along a fault in the Precambrian basement rocks hidden well below any exposed formations. Small earthquakes centered below the fold in 1979 may be from such a fault.[24] […] Ten to fifteen million years ago the entire region was uplifted several thousand feet (well over a kilometer) by the creation of the Colorado Plateaus. This time the uplift was more even, leaving the overall orientation of the formations mostly intact. Most of the erosion that carved today’s landscape occurred after the uplift of the Colorado Plateau with much of the major canyon cutting probably occurring between 1 and 6 million years ago.”

“In Euclidean plane geometry, Apollonius’s problem is to construct circles that are tangent to three given circles in a plane (Figure 1).

Apollonius of Perga (ca. 262 BC – ca. 190 BC) posed and solved this famous problem in his work Ἐπαφαί (Epaphaí, “Tangencies”); this work has been lost, but a 4th-century report of his results by Pappus of Alexandria has survived. Three given circles generically have eight different circles that are tangent to them […] and each solution circle encloses or excludes the three given circles in a different way […] The general statement of Apollonius’ problem is to construct one or more circles that are tangent to three given objects in a plane, where an object may be a line, a point or a circle of any size.[1][2][3][4] These objects may be arranged in any way and may cross one another; however, they are usually taken to be distinct, meaning that they do not coincide. Solutions to Apollonius’ problem are sometimes called Apollonius circles, although the term is also used for other types of circles associated with Apollonius. […] A rich repertoire of geometrical and algebraic methods has been developed to solve Apollonius’ problem,[9][10] which has been called “the most famous of all” geometry problems.[3]”

“A globular cluster is a spherical collection of stars that orbits a galactic core as a satellite. Globular clusters are very tightly bound by gravity, which gives them their spherical shapes and relatively high stellar densities toward their centers. The name of this category of star cluster is derived from the Latin globulus—a small sphere. A globular cluster is sometimes known more simply as a globular.

Globular clusters, which are found in the halo of a galaxy, contain considerably more stars and are much older than the less dense galactic, or open clusters, which are found in the disk. Globular clusters are fairly common; there are about 150[2] to 158[3] currently known globular clusters in the Milky Way, with perhaps 10 to 20 more still undiscovered.[4] Large galaxies can have more: Andromeda, for instance, may have as many as 500. […]

Every galaxy of sufficient mass in the Local Group has an associated group of globular clusters, and almost every large galaxy surveyed has been found to possess a system of globular clusters.[8] The Sagittarius Dwarf galaxy and the disputed Canis Major Dwarf galaxy appear to be in the process of donating their associated globular clusters (such as Palomar 12) to the Milky Way.[9] This demonstrates how many of this galaxy’s globular clusters might have been acquired in the past.

Although it appears that globular clusters contain some of the first stars to be produced in the galaxy, their origins and their role in galactic evolution are still unclear.”

October 23, 2014

## Wikipedia articles of interest

“The dodo (Raphus cucullatus) is an extinct flightless bird that was endemic to the island of Mauritius, east of Madagascar in the Indian Ocean. Its closest genetic relative was the also extinct Rodrigues solitaire, the two forming the subfamily Raphinae of the family of pigeons and doves. […] Subfossil remains show the dodo was about 1 metre (3.3 feet) tall and may have weighed 10–18 kg (22–40 lb) in the wild. The dodo’s appearance in life is evidenced only by drawings, paintings and written accounts from the 17th century. Because these vary considerably, and because only some illustrations are known to have been drawn from live specimens, its exact appearance in life remains unresolved. Similarly, little is known with certainty about its habitat and behaviour.”

“The first recorded mention of the dodo was by Dutch sailors in 1598. In the following years, the bird was hunted by sailors, their domesticated animals, and invasive species introduced during that time. The last widely accepted sighting of a dodo was in 1662. Its extinction was not immediately noticed, and some considered it to be a mythical creature. In the 19th century, research was conducted on a small quantity of remains of four specimens that had been brought to Europe in the early 17th century. Among these is a dried head, the only soft tissue of the dodo that remains today. Since then, a large amount of subfossil material has been collected from Mauritius […] The dodo was anatomically similar to pigeons in many features. […] The dodo differed from other pigeons mainly in the small size of the wings and the large size of the beak in proportion to the rest of the cranium. […] Many of the skeletal features that distinguish the dodo and the Rodrigues solitaire, its closest relative, from pigeons have been attributed to their flightlessness. […] The lack of mammalian herbivores competing for resources on these islands allowed the solitaire and the dodo to attain very large sizes.[19]” [If the last sentence sparked your interest and/or might be something about which you’d like to know more, I have previously covered a great book on related topics here on the blog]

“The etymology of the word dodo is unclear. Some ascribe it to the Dutch word dodoor for “sluggard”, but it is more probably related to Dodaars, which means either “fat-arse” or “knot-arse”, referring to the knot of feathers on the hind end. […] The traditional image of the dodo is of a very fat and clumsy bird, but this view may be exaggerated. The general opinion of scientists today is that many old European depictions were based on overfed captive birds or crudely stuffed specimens.[44]

“Like many animals that evolved in isolation from significant predators, the dodo was entirely fearless of humans. This fearlessness and its inability to fly made the dodo easy prey for sailors.[79] Although some scattered reports describe mass killings of dodos for ships’ provisions, archaeological investigations have found scant evidence of human predation. […] The human population on Mauritius (an area of 1,860 km2 or 720 sq mi) never exceeded 50 people in the 17th century, but they introduced other animals, including dogs, pigs, cats, rats, and crab-eating macaques, which plundered dodo nests and competed for the limited food resources.[37] At the same time, humans destroyed the dodo’s forest habitat. The impact of these introduced animals, especially the pigs and macaques, on the dodo population is currently considered more severe than that of hunting. […] Even though the rareness of the dodo was reported already in the 17th century, its extinction was not recognised until the 19th century. This was partly because, for religious reasons, extinction was not believed possible until later proved so by Georges Cuvier, and partly because many scientists doubted that the dodo had ever existed. It seemed altogether too strange a creature, and many believed it a myth.”

I found some of the contemporary accounts and illustrations included in the article – from which behavioural patterns etc. have been inferred – quite depressing. Two illustrative quotes and a contemporary engraving are included below:

“Blue parrots are very numerous there, as well as other birds; among which are a kind, conspicuous for their size, larger than our swans, with huge heads only half covered with skin as if clothed with a hood. […] These we used to call ‘Walghvogel’, for the reason that the longer and oftener they were cooked, the less soft and more insipid eating they became. Nevertheless their belly and breast were of a pleasant flavour and easily masticated.[40]”

“I have seen in Mauritius birds bigger than a Swan, without feathers on the body, which is covered with a black down; the hinder part is round, the rump adorned with curled feathers as many in number as the bird is years old. […] We call them Oiseaux de Nazaret. The fat is excellent to give ease to the muscles and nerves.[7]”

“The Armero tragedy […] was one of the major consequences of the eruption of the Nevado del Ruiz stratovolcano in Tolima, Colombia, on November 13, 1985. After 69 years of dormancy, the volcano’s eruption caught nearby towns unaware, even though the government had received warnings from multiple volcanological organizations to evacuate the area when volcanic activity had been detected in September 1985.[1]

As pyroclastic flows erupted from the volcano’s crater, they melted the mountain’s glaciers, sending four enormous lahars (volcanically induced mudslides, landslides, and debris flows) down its slopes at 50 kilometers per hour (30 miles per hour). The lahars picked up speed in gullies and coursed into the six major rivers at the base of the volcano; they engulfed the town of Armero, killing more than 20,000 of its almost 29,000 inhabitants.[2] Casualties in other towns, particularly Chinchiná, brought the overall death toll to 23,000. […] The relief efforts were hindered by the composition of the mud, which made it nearly impossible to move through without becoming stuck. By the time relief workers reached Armero twelve hours after the eruption, many of the victims with serious injuries were dead. The relief workers were horrified by the landscape of fallen trees, disfigured human bodies, and piles of debris from entire houses. […] The event was a foreseeable catastrophe exacerbated by the populace’s unawareness of the volcano’s destructive history; geologists and other experts had warned authorities and media outlets about the danger over the weeks and days leading up to the eruption.”

“The day of the eruption, black ash columns erupted from the volcano at approximately 3:00 pm local time. The local Civil Defense director was promptly alerted to the situation. He contacted INGEOMINAS, which ruled that the area should be evacuated; he was then told to contact the Civil Defense directors in Bogotá and Tolima. Between 5:00 and 7:00 pm, the ash stopped falling, and local officials instructed people to “stay calm” and go inside. Around 5:00 pm an emergency committee meeting was called, and when it ended at 7:00 pm, several members contacted the regional Red Cross over the intended evacuation efforts at Armero, Mariquita, and Honda. The Ibagué Red Cross contacted Armero’s officials and ordered an evacuation, which was not carried out because of electrical problems caused by a storm. The storm’s heavy rain and constant thunder may have overpowered the noise of the volcano, and with no systematic warning efforts, the residents of Armero were completely unaware of the continuing activity at Ruiz. At 9:45 pm, after the volcano had erupted, Civil Defense officials from Ibagué and Murillo tried to warn Armero’s officials, but could not make contact. Later they overheard conversations between individual officials of Armero and others; famously, a few heard the Mayor of Armero speaking on a ham radio, saying “that he did not think there was much danger”, when he was overtaken by the lahar.[20]”

“The lahars, formed of water, ice, pumice, and other rocks,[25] incorporated clay from eroding soil as they traveled down the volcano’s flanks.[26] They ran down the volcano’s sides at an average speed of 60 kilometers (40 mi) per hour, dislodging rock and destroying vegetation. After descending thousands of meters down the side of the volcano, the lahars followed the six river valleys leading from the volcano, where they grew to almost four times their original volume. In the Gualí River, a lahar reached a maximum width of 50 meters (160 ft).[25]

Survivors in Armero described the night as “quiet”. Volcanic ash had been falling throughout the day, but residents were informed it was nothing to worry about. Later in the afternoon, ash began falling again after a long period of quiet. Local radio stations reported that residents should remain calm and ignore the material. One survivor reported going to the fire department to be informed that the ash was “nothing”.[27] […] At 11:30 pm, the first lahar hit, followed shortly by the others.[28] One of the lahars virtually erased Armero; three-quarters of its 28,700 inhabitants were killed.[25] Proceeding in three major waves, this lahar was 30 meters (100 ft) deep, moved at 12 meters per second (39 ft/s), and lasted ten to twenty minutes. Traveling at about 6 meters (20 ft) per second, the second lahar lasted thirty minutes and was followed by smaller pulses. A third major pulse brought the lahar’s duration to roughly two hours; by that point, 85 percent of Armero was enveloped in mud. Survivors described people holding on to debris from their homes in attempts to stay above the mud. Buildings collapsed, crushing people and raining down debris. The front of the lahar contained boulders and cobbles which would have crushed anyone in their path, while the slower parts were dotted by fine, sharp stones which caused lacerations. Mud moved into open wounds and other open body parts – the eyes, ears, and mouth – and placed pressure capable of inducing traumatic asphyxia in one or two minutes upon people buried in it.”

“The volcano continues to pose a serious threat to nearby towns and villages. Of the threats, the one with the most potential for danger is that of small-volume eruptions, which can destabilize glaciers and trigger lahars.[51] Although much of the volcano’s glacier mass has retreated, a significant volume of ice still sits atop Nevado del Ruiz and other volcanoes in the Ruiz–Tolima massif. Melting just 10 percent of the ice would produce lahars with a volume of up to 200 million cubic meters – similar to the lahar that destroyed Armero in 1985. In just hours, these lahars can travel up to 100 km along river valleys.[33] Estimates show that up to 500,000 people living in the Combeima, Chinchina, Coello-Toche, and Guali valleys are at risk, with 100,000 of them considered to be at high risk.”

“The asteroid belt is the region of the Solar System located roughly between the orbits of the planets Mars and Jupiter. It is occupied by numerous irregularly shaped bodies called asteroids or minor planets. The asteroid belt is also termed the main asteroid belt or main belt to distinguish its members from other asteroids in the Solar System such as near-Earth asteroids and trojan asteroids. About half the mass of the belt is contained in the four largest asteroids, Ceres, Vesta, Pallas, and Hygiea. Vesta, Pallas, and Hygiea have mean diameters of more than 400 km, whereas Ceres, the asteroid belt’s only dwarf planet, is about 950 km in diameter.[1][2][3][4] The remaining bodies range down to the size of a dust particle.”

“The asteroid belt formed from the primordial solar nebula as a group of planetesimals, the smaller precursors of the planets, which in turn formed protoplanets. Between Mars and Jupiter, however, gravitational perturbations from Jupiter imbued the protoplanets with too much orbital energy for them to accrete into a planet. Collisions became too violent, and instead of fusing together, the planetesimals and most of the protoplanets shattered. As a result, 99.9% of the asteroid belt’s original mass was lost in the first 100 million years of the Solar System’s history.[5]

“In an anonymous footnote to his 1766 translation of Charles Bonnet‘s Contemplation de la Nature,[8] the astronomer Johann Daniel Titius of Wittenberg[9][10] noted an apparent pattern in the layout of the planets. If one began a numerical sequence at 0, then included 3, 6, 12, 24, 48, etc., doubling each time, and added four to each number and divided by 10, this produced a remarkably close approximation to the radii of the orbits of the known planets as measured in astronomical units. This pattern, now known as the Titius–Bode law, predicted the semi-major axes of the six planets of the time (Mercury, Venus, Earth, Mars, Jupiter and Saturn) provided one allowed for a “gap” between the orbits of Mars and Jupiter. […] On January 1, 1801, Giuseppe Piazzi, Chair of Astronomy at the University of Palermo, Sicily, found a tiny moving object in an orbit with exactly the radius predicted by the Titius–Bode law. He dubbed it Ceres, after the Roman goddess of the harvest and patron of Sicily. Piazzi initially believed it a comet, but its lack of a coma suggested it was a planet.[12] Fifteen months later, Heinrich Wilhelm Olbers discovered a second object in the same region, Pallas. Unlike the other known planets, the objects remained points of light even under the highest telescope magnifications instead of resolving into discs. Apart from their rapid movement, they appeared indistinguishable from stars. Accordingly, in 1802 William Herschel suggested they be placed into a separate category, named asteroids, after the Greek asteroeides, meaning “star-like”. […] The discovery of Neptune in 1846 led to the discrediting of the Titius–Bode law in the eyes of scientists, because its orbit was nowhere near the predicted position. 
[…] One hundred asteroids had been located by mid-1868, and in 1891 the introduction of astrophotography by Max Wolf accelerated the rate of discovery still further.[22] A total of 1,000 asteroids had been found by 1921,[23] 10,000 by 1981,[24] and 100,000 by 2000.[25] Modern asteroid survey systems now use automated means to locate new minor planets in ever-increasing quantities.”
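The doubling-and-dividing rule quoted above is simple enough to sketch in a few lines; here is a minimal version (the comparison values in the comments – Ceres at ~2.77 AU, Neptune at ~30.1 AU – are standard figures, not taken from the article):

```python
def titius_bode(count):
    """Titius–Bode rule: take the sequence 0, 3, 6, 12, 24, ... (doubling
    after the 3), add four to each term, and divide by 10 to get predicted
    orbital radii in astronomical units."""
    seq = [0, 3]
    while len(seq) < count:
        seq.append(seq[-1] * 2)
    return [(n + 4) / 10 for n in seq[:count]]

# Predicted radii for Mercury..Uranus, with the Mars–Jupiter "gap" at index 4:
print(titius_bode(8))  # [0.4, 0.7, 1.0, 1.6, 2.8, 5.2, 10.0, 19.6]
# Ceres sits at ~2.77 AU, strikingly close to the 2.8 AU "gap" value.
# Neptune (~30.1 AU) falls nowhere near the next term, 38.8 AU — which is
# why its discovery discredited the law.
```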

“In 1802, shortly after discovering Pallas, Heinrich Olbers suggested to William Herschel that Ceres and Pallas were fragments of a much larger planet that once occupied the Mars–Jupiter region, this planet having suffered an internal explosion or a cometary impact many million years before.[26] Over time, however, this hypothesis has fallen from favor. […] Today, most scientists accept that, rather than fragmenting from a progenitor planet, the asteroids never formed a planet at all. […] The asteroids are not samples of the primordial Solar System. They have undergone considerable evolution since their formation, including internal heating (in the first few tens of millions of years), surface melting from impacts, space weathering from radiation, and bombardment by micrometeorites.[34] […] collisions between asteroids occur frequently (on astronomical time scales). Collisions between main-belt bodies with a mean radius of 10 km are expected to occur about once every 10 million years.[63] A collision may fragment an asteroid into numerous smaller pieces (leading to the formation of a new asteroid family). Conversely, collisions that occur at low relative speeds may also join two asteroids. After more than 4 billion years of such processes, the members of the asteroid belt now bear little resemblance to the original population. […] The current asteroid belt is believed to contain only a small fraction of the mass of the primordial belt. Computer simulations suggest that the original asteroid belt may have contained mass equivalent to the Earth.[37] Primarily because of gravitational perturbations, most of the material was ejected from the belt within about a million years of formation, leaving behind less than 0.1% of the original mass.[29] Since their formation, the size distribution of the asteroid belt has remained relatively stable: there has been no significant increase or decrease in the typical dimensions of the main-belt asteroids.[38]

“Contrary to popular imagery, the asteroid belt is mostly empty. The asteroids are spread over such a large volume that it would be improbable to reach an asteroid without aiming carefully. Nonetheless, hundreds of thousands of asteroids are currently known, and the total number ranges in the millions or more, depending on the lower size cutoff. Over 200 asteroids are known to be larger than 100 km,[44] and a survey in the infrared wavelengths has shown that the asteroid belt has 0.7–1.7 million asteroids with a diameter of 1 km or more. […] The total mass of the asteroid belt is estimated to be 2.8×1021 to 3.2×1021 kilograms, which is just 4% of the mass of the Moon.[2] […] Several otherwise unremarkable bodies in the outer belt show cometary activity. Because their orbits cannot be explained through capture of classical comets, it is thought that many of the outer asteroids may be icy, with the ice occasionally exposed to sublimation through small impacts. Main-belt comets may have been a major source of the Earth’s oceans, because the deuterium–hydrogen ratio is too low for classical comets to have been the principal source.[56] […] Of the 50,000 meteorites found on Earth to date, 99.8 percent are believed to have originated in the asteroid belt.[67]
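The “just 4% of the mass of the Moon” figure is easy to check against the quoted mass range (the Moon’s mass, ~7.342×10²² kg, is a standard value I am supplying here, not one given in the excerpt):

```python
# Midpoint of the quoted estimate for the total mass of the asteroid belt:
belt_mass = (2.8e21 + 3.2e21) / 2   # kg
moon_mass = 7.342e22                # kg, standard value (assumption)

fraction = belt_mass / moon_mass
print(f"{fraction:.1%}")  # 4.1% — consistent with the quoted "just 4%"
```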

iv. Series (mathematics). This article has a lot of stuff, including lots of links to other stuff.

“At the head of the Occupation administration was General MacArthur who was technically supposed to defer to an advisory council set up by the Allied powers, but in practice did everything himself. As a result, this period was one of significant American influence […] MacArthur’s first priority was to set up a food distribution network; following the collapse of the ruling government and the wholesale destruction of most major cities, virtually everyone was starving. Even with these measures, millions of people were still on the brink of starvation for several years after the surrender.”

“By the end of 1945, more than 350,000 U.S. personnel were stationed throughout Japan. By the beginning of 1946, replacement troops began to arrive in the country in large numbers and were assigned to MacArthur’s Eighth Army, headquartered in Tokyo’s Dai-Ichi building. Of the main Japanese islands, Kyūshū was occupied by the 24th Infantry Division, with some responsibility for Shikoku. Honshū was occupied by the First Cavalry Division. Hokkaido was occupied by the 11th Airborne Division.

By June 1950, all these army units had suffered extensive troop reductions and their combat effectiveness was seriously weakened. When North Korea invaded South Korea (see Korean War), elements of the 24th Division were flown into South Korea to try to stem the massive invasion force there, but the green occupation troops, while acquitting themselves well when suddenly thrown into combat almost overnight, suffered heavy casualties and were forced into retreat until other Japan occupation troops could be sent to assist.”

“During the Occupation, GHQ/SCAP abolished many of the financial coalitions known as the Zaibatsu, which had previously monopolized industry.[20] […] A major land reform was also conducted […] Between 1947 and 1949, approximately 5,800,000 acres (23,000 km2) of land (approximately 38% of Japan’s cultivated land) were purchased from the landlords under the government’s reform program and resold at extremely low prices (after inflation) to the farmers who worked them. By 1950, three million peasants had acquired land, dismantling a power structure that the landlords had long dominated.[22]

“There are allegations that during the three months in 1945 when Okinawa was gradually occupied there were rapes committed by U.S. troops. According to some accounts, US troops committed thousands of rapes during the campaign.[36][37]

Many Japanese civilians in the Japanese mainland feared that the Allied occupation troops were likely to rape Japanese women. The Japanese authorities set up a large system of prostitution facilities (RAA) in order to protect the population. […] However, there was a resulting large rise in venereal disease among the soldiers, which led MacArthur to close down the prostitution in early 1946.[39] The incidence of rape increased after the closure of the brothels, possibly eight-fold; […] “According to one calculation the number of rapes and assaults on Japanese women amounted to around 40 daily while the RAA was in operation, and then rose to an average of 330 a day after it was terminated in early 1946.”[40] Michael S. Molasky states that while rape and other violent crime was widespread in naval ports like Yokosuka and Yokohama during the first few weeks of occupation, according to Japanese police reports and journalistic studies, the number of incidents declined shortly after and were not common on mainland Japan throughout the rest of occupation.[41] Two weeks into the occupation, the Occupation administration began censoring all media. This included any mention of rape or other sensitive social issues.”

“Post-war Japan was chaotic. The air raids on Japan’s urban centers left millions displaced and food shortages, created by bad harvests and the demands of the war, worsened when the seizure of food from Korea, Taiwan, and China ceased.[58] Repatriation of Japanese living in other parts of Asia only aggravated the problems in Japan as these displaced people put more strain on already scarce resources. Over 5.1 million Japanese returned to Japan in the fifteen months following October 1, 1945.[59] Alcohol and drug abuse became major problems. Deep exhaustion, declining morale and despair were so widespread that it was termed the “kyodatsu condition” (虚脱状態 kyodatsu jōtai, lit. “state of lethargy”).[60] Inflation was rampant and many people turned to the black market for even the most basic goods. These black markets in turn were often places of turf wars between rival gangs, like the Shibuya incident in 1946.”

August 16, 2014

## Wikipedia articles of interest

“Albert Stevens (1887–1966), also known as patient CAL-1, was the subject of a human radiation experiment, and survived the highest known accumulated radiation dose in any human.[1] On May 14, 1945, he was injected with 131 kBq (3.55 µCi) of plutonium without his knowledge or informed consent.[2]

Plutonium remained present in his body for the remainder of his life, the amount decaying slowly through radioactive decay and biological elimination. Stevens died of heart disease some 20 years later, having accumulated an effective radiation dose of 64 Sv (6400 rem) over that period. The current annual permitted dose for a radiation worker in the United States is 5 rem. […] Stevens’s annual dose was approximately 60 times this amount.”
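The “approximately 60 times” figure is easy to reproduce as a back-of-the-envelope calculation, taking the roughly-20-year interval between injection and death at face value:

```python
total_dose_rem = 6400      # 64 Sv accumulated over roughly 20 years
years = 20
annual_limit_rem = 5       # current US annual limit for radiation workers

annual_dose = total_dose_rem / years        # 320 rem per year on average
print(annual_dose / annual_limit_rem)       # 64.0 — roughly 60 times the limit
```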

“Plutonium was handled extensively by chemists, technicians, and physicists taking part in the Manhattan Project, but the effects of plutonium exposure on the human body were largely unknown.[2] A few mishaps in 1944 had caused certain alarm amongst project leaders, and contamination was becoming a major problem in and outside the laboratories.[2] […] As the Manhattan Project continued to use plutonium, airborne contamination began to be a major concern.[2] Nose swipes were taken frequently of the workers, with numerous cases of moderate and high readings.[2][5] […] Tracer experiments were begun in 1944 with rats and other animals with the knowledge of all of the Manhattan project managers and health directors of the various sites. In 1945, human tracer experiments began with the intent to determine how to properly analyze excretion samples to estimate body burden. Numerous analytic methods were devised by the lead doctors at the Met Lab (Chicago), Los Alamos, Rochester, Oak Ridge, and Berkeley.[2] The first human plutonium injection experiments were approved in April 1945 for three tests: April 10 at the Manhattan Project Army Hospital in Oak Ridge, April 26 at Billings Hospital in Chicago, and May 14 at the University of California Hospital in San Francisco. Albert Stevens was the person selected in the California test and designated CAL-1 in official documents.[2] […] The plutonium experiments were not isolated events.[2] During this time, cancer researchers were attempting to discover whether certain radioactive elements might be useful to treat cancer.[2] Recent studies on radium, polonium, and uranium proved foundational to the study of Pu toxicity. […] The mastermind behind this human experiment with plutonium was Dr. Joseph Gilbert Hamilton, a Manhattan Project doctor in charge of the human experiments in California.[6] Hamilton had been experimenting on people (including himself) since the 1930s at Berkeley. 
[…] Hamilton eventually succumbed to the radiation that he explored for most of his adult life: he died of leukemia at the age of 49.”

“Although Stevens was the person who received the highest dose of radiation during the plutonium experiments, he was neither the first nor the last subject to be studied. Eighteen people aged 4 to 69 were injected with plutonium. Subjects who were chosen for the experiment had been diagnosed with a terminal disease. They lived from 6 days up to 44 years past the time of their injection.[2] Eight of the 18 died within 2 years of the injection.[2] All died from their preexisting terminal illness, or cardiac illnesses. […] As with all radiological testing during World War II, it would have been difficult to receive informed consent for Pu injection studies on civilians. Within the Manhattan Project, plutonium was referred to often by its code “49” or simply the “product.” Few outside of the Manhattan Project would have known of plutonium, much less of the dangers of radioactive isotopes inside the body. There is no evidence that Stevens had any idea that he was the subject of a secret government experiment in which he would be subjected to a substance that would have no benefit to his health.[2][6]

The best part is perhaps this: Stevens was not terminal: “He had checked into the University of California Hospital in San Francisco with a gastric ulcer that was misdiagnosed as terminal cancer.” Given that one of the people involved in these experiments survived for 44 years, and that four other experimentees were still alive by the time Stevens died, it seems pretty obvious that he was not the only one who was misdiagnosed. One interpretation of the fact that more than half survived beyond two years might be that the definition of ‘terminal’ applied in this context was, well, slightly flexible (especially considering that large injections of radioactive poisons may not exactly have increased these people’s life expectancies). Today the term is usually reserved for conditions people can expect to die from within 6 months – 2 years is a long time in this context. It may however also to some extent just have reflected the state of medical science at the time – also illustrative in that respect is how the surgeons screwed him over during his illness: “Half of the left lobe of the liver, the entire spleen, most of the ninth rib, lymph nodes, part of the pancreas, and a portion of the omentum… were taken out”[1] to help prevent the spread of the cancer that Stevens did not have. In case you were wondering, not only did they not tell him he was part of an experiment; they also never told him he had been misdiagnosed with cancer.

“The aberration of light (also referred to as astronomical aberration or stellar aberration) is an astronomical phenomenon which produces an apparent motion of celestial objects about their locations dependent on the velocity of the observer. Aberration causes objects to appear to be angled or tilted towards the direction of motion of the observer compared to when the observer is stationary. The change in angle is typically very small, on the order of v/c where c is the speed of light and v the velocity of the observer. In the case of “stellar” or “annual” aberration, the apparent position of a star to an observer on Earth varies periodically over the course of a year as the Earth’s velocity changes as it revolves around the Sun […] Aberration is historically significant because of its role in the development of the theories of light, electromagnetism and, ultimately, the theory of Special Relativity. […] In 1729, James Bradley provided a classical explanation for it in terms of the finite speed of light relative to the motion of the Earth in its orbit around the Sun,[1][2] which he used to make one of the earliest measurements of the speed of light. However, Bradley’s theory was incompatible with 19th century theories of light, and aberration became a major motivation for the aether drag theories of Augustin Fresnel (in 1818) and G. G. Stokes (in 1845), and for Hendrik Lorentz’s aether theory of electromagnetism in 1892. The aberration of light, together with Lorentz’s elaboration of Maxwell’s electrodynamics, the moving magnet and conductor problem, the negative aether drift experiments, as well as the Fizeau experiment, led Albert Einstein to develop the theory of Special Relativity in 1905, which provided a conclusive explanation for the aberration phenomenon.[3] […]

Aberration may be explained as the difference in angle of a beam of light in different inertial frames of reference. A common analogy is to the apparent direction of falling rain: If rain is falling vertically in the frame of reference of a person standing still, then to a person moving forwards the rain will appear to arrive at an angle, requiring the moving observer to tilt their umbrella forwards. The faster the observer moves, the more tilt is needed.

The net effect is that light rays striking the moving observer from the sides in a stationary frame will come angled from ahead in the moving observer’s frame. This effect is sometimes called the “searchlight” or “headlight” effect.

In the case of annual aberration of starlight, the direction of incoming starlight as seen in the Earth’s moving frame is tilted relative to the angle observed in the Sun’s frame. Since the direction of motion of the Earth changes during its orbit, the direction of this tilting changes during the course of the year, and causes the apparent position of the star to differ from its true position as measured in the inertial frame of the Sun.

While classical reasoning gives intuition for aberration, it leads to a number of physical paradoxes […] The theory of Special Relativity is required to correctly account for aberration.”
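The “on the order of v/c” remark pins down just how small the effect is; a minimal sketch, using the Earth’s mean orbital speed (a standard value I am assuming here, not one given in the article):

```python
import math

v_earth = 29.78e3       # m/s, mean orbital speed of the Earth (assumed value)
c = 299_792_458.0       # m/s, speed of light

theta_rad = v_earth / c                       # first-order classical aberration angle
theta_arcsec = math.degrees(theta_rad) * 3600
print(f'{theta_arcsec:.1f}"')  # 20.5" — the classical "constant of aberration"
```

So a star's apparent position shifts by at most about twenty arcseconds over the year, which is small but well within the reach of 18th-century instruments, hence Bradley's discovery.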

The article has much more, in particular it has a lot of stuff about historical aspects pertaining to this topic.

“The Spanish Armada (Spanish: Grande y Felicísima Armada or Armada Invencible, literally “Great and Most Fortunate Navy” or “Invincible Fleet”) was a Spanish fleet of 130 ships that sailed from A Coruña in August 1588 under the command of the Duke of Medina Sidonia with the purpose of escorting an army from Flanders to invade England. The strategic aim was to overthrow Queen Elizabeth I of England and the Tudor establishment of Protestantism in England, with the expectation that this would put a stop to English interference in the Spanish Netherlands and to the harm caused to Spanish interests by English and Dutch privateering.

The Armada chose not to attack the English fleet at Plymouth, then failed to establish a temporary anchorage in the Solent, after one Spanish ship had been captured by Francis Drake in the English Channel, and finally dropped anchor off Calais.[10] While awaiting communications from the Duke of Parma‘s army the Armada was scattered by an English fireship attack. In the ensuing Battle of Gravelines the Spanish fleet was damaged and forced to abandon its rendezvous with Parma’s army, who were blockaded in harbour by Dutch flyboats. The Armada managed to regroup and, driven by southwest winds, withdrew north, with the English fleet harrying it up the east coast of England. The commander ordered a return to Spain, but the Armada was disrupted during severe storms in the North Atlantic and a large portion of the vessels were wrecked on the coasts of Scotland and Ireland. Of the initial 130 ships over a third failed to return.[11] […] The expedition was the largest engagement of the undeclared Anglo-Spanish War (1585–1604). The following year England organised a similar large-scale campaign against Spain, the Drake-Norris Expedition, also known as the Counter-Armada of 1589, which was also unsuccessful. […]

The fleet was composed of 130 ships, 8,000 sailors and 18,000 soldiers, and bore 1,500 brass guns and 1,000 iron guns. […] In the Spanish Netherlands 30,000 soldiers[17] awaited the arrival of the armada, the plan being to use the cover of the warships to convey the army on barges to a place near London. All told, 55,000 men were to have been mustered, a huge army for that time. […] The English fleet outnumbered the Spanish, with 200 ships to 130,[18] while the Spanish fleet outgunned the English—its available firepower was 50% more than that of the English.[19] The English fleet consisted of the 34 ships of the royal fleet (21 of which were galleons of 200 to 400 tons), and 163 other ships, 30 of which were of 200 to 400 tons and carried up to 42 guns each; 12 of these were privateers owned by Lord Howard of Effingham, Sir John Hawkins and Sir Francis Drake.[1] […] The Armada was delayed by bad weather […], and was not sighted in England until 19 July, when it appeared off The Lizard in Cornwall. The news was conveyed to London by a system of beacons that had been constructed all the way along the south coast.”

“In September 1588 the Armada sailed around Scotland and Ireland into the North Atlantic. The ships were beginning to show wear from the long voyage, and some were kept together by having their hulls bundled up with cables. Supplies of food and water ran short. The intention would have been to keep well to the west of the coast of Scotland and Ireland, in the relative safety of the open sea. However, there being at that time no way of accurately measuring longitude, the Spanish were not aware that the Gulf Stream was carrying them north and east as they tried to move west, and they eventually turned south much further to the east than planned, a devastating navigational error. Off the coasts of Scotland and Ireland the fleet ran into a series of powerful westerly winds […] Because so many anchors had been abandoned during the escape from the English fireships off Calais, many of the ships were incapable of securing shelter as they reached the coast of Ireland and were driven onto the rocks. Local men looted the ships. […] more ships and sailors were lost to cold and stormy weather than in direct combat. […] Following the gales it is reckoned that 5,000 men died, by drowning, starvation and slaughter at the hands of English forces after they were driven ashore in Ireland; only half of the Spanish Armada fleet returned home to Spain.[30] Reports of the passage around Ireland abound with strange accounts of hardship and survival.[31]

In the end, 67 ships and fewer than 10,000 men survived.[32] Many of the men were near death from disease, as the conditions were very cramped and most of the ships ran out of food and water. Many more died in Spain, or on hospital ships in Spanish harbours, from diseases contracted during the voyage.”

“Viral hemorrhagic septicemia (VHS) is a deadly infectious fish disease caused by the viral hemorrhagic septicemia virus (VHSV, or VHSv), different strains of which occur in different regions and affect different species. It afflicts over 50 species of freshwater and marine fish in several parts of the northern hemisphere.[1] There are no signs that the disease affects human health. VHS is also known as “Egtved disease,” and VHSV as “Egtved virus.”[2]

Historically, VHS was associated mostly with freshwater salmonids in western Europe, documented as a pathogenic disease among cultured salmonids since the 1950s.[3] Today it is still a major concern for many fish farms in Europe and is therefore being watched closely by the European Community Reference Laboratory for Fish Diseases. It was first discovered in the US in 1988 among salmon returning from the Pacific in Washington State.[4] This North American genotype was identified as a distinct, more marine-stable strain than the European genotype. VHS has since been found afflicting marine fish in the northeastern Pacific Ocean, the North Sea, and the Baltic Sea.[3] Since 2005, massive die-offs have occurred among a wide variety of freshwater species in the Great Lakes region of North America.”

The article isn’t that great but I figured I should include it anyway because I find it sort of fascinating how almost all humans alive can and do live their entire lives without necessarily ever knowing anything about stuff like this. Humans have some really obvious blind spots when it comes to knowledge about some of the stuff we put into our mouths on a regular basis.

“Bird migration is the regular seasonal movement, often north and south along a flyway between breeding and wintering grounds, undertaken by many species of birds. Migration, which carries high costs in predation and mortality, including from hunting by humans, is driven primarily by availability of food. Migration occurs mainly in the Northern Hemisphere where birds are funnelled on to specific routes by natural barriers such as the Mediterranean Sea or the Caribbean Sea.”

“Historically, migration has been recorded as much as 3,000 years ago by Ancient Greek authors including Homer and Aristotle […] Aristotle noted that cranes traveled from the steppes of Scythia to marshes at the headwaters of the Nile. […] Aristotle however suggested that swallows and other birds hibernated. […] It was not until the end of the eighteenth century that migration as an explanation for the winter disappearance of birds from northern climes was accepted […] [and Aristotle’s hibernation] belief persisted as late as 1878, when Elliott Coues listed the titles of no less than 182 papers dealing with the hibernation of swallows.”

“Approximately 1800 of the world’s 10,000 bird species are long-distance migrants.[9][10] […] Within a species not all populations may be migratory; this is known as “partial migration”. Partial migration is very common in the southern continents; in Australia, 44% of non-passerine birds and 32% of passerine species are partially migratory.[17] In some species, the population at higher latitudes tends to be migratory and will often winter at lower latitude. The migrating birds bypass the latitudes where other populations may be sedentary, where suitable wintering habitats may already be occupied. This is an example of leap-frog migration.[18] Many fully migratory species show leap-frog migration (birds that nest at higher latitudes spend the winter at lower latitudes), and many show the alternative, chain migration, where populations ‘slide’ more evenly North and South without reversing order.[19]

Within a population, it is common for different ages and/or sexes to have different patterns of timing and distance. […] Many, if not most, birds migrate in flocks. For larger birds, flying in flocks reduces the energy cost. Geese in a V-formation may conserve 12–20% of the energy they would need to fly alone.[21][22] […] Seabirds fly low over water but gain altitude when crossing land, and the reverse pattern is seen in landbirds.[25][26] However, most bird migration is in the range of 150 m (500 ft) to 600 m (2000 ft). Bird strike aviation records from the United States show most collisions occur below 600 m (2000 ft) and almost none above 1800 m (6000 ft).[27] Bird migration is not limited to birds that can fly. Most species of penguin migrate by swimming.”

“Some Bar-tailed Godwits have the longest known non-stop flight of any migrant, flying 11,000 km from Alaska to their New Zealand non-breeding areas.[36] Prior to migration, 55 percent of their bodyweight is stored fat to fuel this uninterrupted journey. […] The Arctic Tern has the longest-distance migration of any bird, and sees more daylight than any other, moving from its Arctic breeding grounds to the Antarctic non-breeding areas.[37] One Arctic Tern, ringed (banded) as a chick on the Farne Islands off the British east coast, reached Melbourne, Australia in just three months from fledging, a sea journey of over 22,000 km (14,000 mi). […] The most pelagic species, mainly in the ‘tubenose’ order Procellariiformes, are great wanderers, and the albatrosses of the southern oceans may circle the globe as they ride the “roaring forties” outside the breeding season. The tubenoses spread widely over large areas of open ocean, but congregate when food becomes available. Many are also among the longest-distance migrants; Sooty Shearwaters nesting on the Falkland Islands migrate 14,000 km (8,700 mi) between the breeding colony and the North Atlantic Ocean off Norway. Some Manx Shearwaters do this same journey in reverse. As they are long-lived birds, they may cover enormous distances during their lives; one record-breaking Manx Shearwater is calculated to have flown 8 million km (5 million miles) during its over-50 year lifespan.[39]

“Bird migration is primarily, but not entirely, a Northern Hemisphere phenomenon.[50] This is because land birds in high northern latitudes, where food becomes scarce in winter, leave for areas further south (including the Southern Hemisphere) to overwinter, and because the continental landmass is much larger in the Northern Hemisphere [see also this post]. In contrast, among (pelagic) seabirds, species of the Southern Hemisphere are more likely to migrate. This is because there is a large area of ocean in the Southern Hemisphere, and more islands suitable for seabirds to nest.[51]

July 10, 2014

## Wikipedia articles of interest

i. Great Fire of London (featured).

“The Great Fire of London was a major conflagration that swept through the central parts of the English city of London, from Sunday, 2 September to Wednesday, 5 September 1666.[1] The fire gutted the medieval City of London inside the old Roman city wall. It threatened, but did not reach, the aristocratic district of Westminster, Charles II‘s Palace of Whitehall, and most of the suburban slums.[2] It consumed 13,200 houses, 87 parish churches, St. Paul’s Cathedral and most of the buildings of the City authorities. It is estimated to have destroyed the homes of 70,000 of the City’s 80,000 inhabitants.”

Do note that even though this fire was a really big deal the ‘70,000 out of 80,000’ number can be misleading as many Londoners didn’t actually live in the City proper:

“By the late 17th century, the City proper—the area bounded by the City wall and the River Thames—was only a part of London, covering some 700 acres (2.8 km2; 1.1 sq mi),[7] and home to about 80,000 people, or one sixth of London’s inhabitants. The City was surrounded by a ring of inner suburbs, where most Londoners lived.”

I thought I should include a few observations related to how well people behaved in this terrible situation – humans are really wonderful sometimes, and of course the people affected by the fire did everything they could to stick together and help each other out:

“Order in the streets broke down as rumours arose of suspicious foreigners setting fires. The fears of the homeless focused on the French and Dutch, England‘s enemies in the ongoing Second Anglo-Dutch War; these substantial immigrant groups became victims of lynchings and street violence.” […] [no, wait…]

“Suspicion soon arose in the threatened city that the fire was no accident. The swirling winds carried sparks and burning flakes long distances to lodge on thatched roofs and in wooden gutters, causing seemingly unrelated house fires to break out far from their source and giving rise to rumours that fresh fires were being set on purpose. Foreigners were immediately suspects because of the current Second Anglo-Dutch War. As fear and suspicion hardened into certainty on the Monday, reports circulated of imminent invasion, and of foreign undercover agents seen casting “fireballs” into houses, or caught with hand grenades or matches.[37] There was a wave of street violence.[38] William Taswell saw a mob loot the shop of a French painter and level it to the ground, and watched in horror as a blacksmith walked up to a Frenchman in the street and hit him over the head with an iron bar.

The fears of terrorism received an extra boost from the disruption of communications and news as facilities were devoured by the fire. The General Letter Office in Threadneedle Street, through which post for the entire country passed, burned down early on Monday morning. The London Gazette just managed to put out its Monday issue before the printer’s premises went up in flames (this issue contained mainly society gossip, with a small note about a fire that had broken out on Sunday morning and “which continues still with great violence”). The whole nation depended on these communications, and the void they left filled up with rumours. There were also religious alarms of renewed Gunpowder Plots. As suspicions rose to panic and collective paranoia on the Monday, both the Trained Bands and the Coldstream Guards focused less on fire fighting and more on rounding up foreigners, Catholics, and any odd-looking people, and arresting them or rescuing them from mobs, or both together.”

“An example of the urge to identify scapegoats for the fire is the acceptance of the confession of a simple-minded French watchmaker, Robert Hubert, who claimed he was an agent of the Pope and had started the Great Fire in Westminster.[55] He later changed his story to say that he had started the fire at the bakery in Pudding Lane. Hubert was convicted, despite some misgivings about his fitness to plead, and hanged at Tyburn on 28 September 1666. After his death, it became apparent that he had not arrived in London until two days after the fire started.”

Just one year before the fire, London had incidentally been hit by a plague outbreak which “is believed to have killed a sixth of London’s inhabitants, or 80,000 people”. Being a Londoner during the 1660s probably wasn’t a great deal of fun. On the other hand this disaster was actually not that big of a deal when compared to e.g. the 1556 Shaanxi earthquake.

ii. Sea (featured). I was considering reading an oceanography textbook a while back, but I decided against it and I read this article ‘instead’. Some interesting stuff in there. A few observations from the article:

“About 97.2 percent of the Earth’s water is found in the sea, some 1,360,000,000 cubic kilometres (330,000,000 cu mi) of salty water.[12] Of the rest, 2.15 percent is accounted for by ice in glaciers, surface deposits and sea ice, and 0.65 percent by vapour and liquid fresh water in lakes, rivers, the ground and the air.[12]”
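The three percentages quoted above should between them exhaust the Earth’s water budget, and the quoted seawater volume then pins down the implied total. A quick sanity check on the figures (the numbers come straight from the quote; the script itself is purely illustrative):

```python
# Distribution of Earth's water, per the figures quoted above (percent).
sea = 97.2    # salty water in the sea
ice = 2.15    # ice in glaciers, surface deposits and sea ice
fresh = 0.65  # vapour and liquid fresh water in lakes, rivers, ground and air

# The three categories should account for all of it.
assert abs((sea + ice + fresh) - 100.0) < 1e-9

# Implied total water volume, given 1,360,000,000 km^3 of seawater:
sea_volume_km3 = 1_360_000_000
total_km3 = sea_volume_km3 / (sea / 100)
print(f"total water: {total_km3:.3g} km^3")  # roughly 1.4 billion km^3
```

The implied total of roughly 1.4 billion cubic kilometres is consistent with commonly cited estimates of the Earth’s total water volume.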

“The water in the sea was once thought to come from the Earth’s volcanoes, starting 4 billion years ago, released by degassing from molten rock.[3](pp24–25) More recent work suggests that much of the Earth’s water may have come from comets.[16]” (This stuff covers 70 percent of the planet and we still are not completely sure how it got to be here. I’m often amazed at how much stuff we know about the world, but very occasionally I also get amazed at the things we don’t know. This seems like the sort of thing we somehow ‘ought to know’..)

“An important characteristic of seawater is that it is salty. Salinity is usually measured in parts per thousand (expressed with the ‰ sign or “per mil”), and the open ocean has about 35 grams (1.2 oz) of solids per litre, a salinity of 35‰ (about 90% of the water in the ocean has between 34‰ and 35‰ salinity[17]). […] The constituents of table salt, sodium and chloride, make up about 85 percent of the solids in solution. […] The salinity of a body of water varies with evaporation from its surface (increased by high temperatures, wind and wave motion), precipitation, the freezing or melting of sea ice, the melting of glaciers, the influx of fresh river water, and the mixing of bodies of water of different salinities.”

“Sea temperature depends on the amount of solar radiation falling on its surface. In the tropics, with the sun nearly overhead, the temperature of the surface layers can rise to over 30 °C (86 °F) while near the poles the temperature in equilibrium with the sea ice is about −2 °C (28 °F). There is a continuous circulation of water in the oceans. Warm surface currents cool as they move away from the tropics, and the water becomes denser and sinks. The cold water moves back towards the equator as a deep sea current, driven by changes in the temperature and density of the water, before eventually welling up again towards the surface. Deep seawater has a temperature between −2 °C (28 °F) and 5 °C (41 °F) in all parts of the globe.[23]”

“The amount of light that penetrates the sea depends on the angle of the sun, the weather conditions and the turbidity of the water. Much light gets reflected at the surface, and red light gets absorbed in the top few metres. […] There is insufficient light for photosynthesis and plant growth beyond a depth of about 200 metres (660 ft).[27]”

“Over most of geologic time, the sea level has been higher than it is today.[3](p74) The main factor affecting sea level over time is the result of changes in the oceanic crust, with a downward trend expected to continue in the very long term.[73] At the last glacial maximum, some 20,000 years ago, the sea level was 120 metres (390 ft) below its present-day level.” (this of course had some very interesting ecological effects – van der Geer et al. had some interesting observations on that topic)

“On her 68,890-nautical-mile (127,580 km) journey round the globe, HMS Challenger discovered about 4,700 new marine species, and made 492 deep sea soundings, 133 bottom dredges, 151 open water trawls and 263 serial water temperature observations.[115]”

“Seaborne trade carries more than US \$4 trillion worth of goods each year.[139]”

“Many substances enter the sea as a result of human activities. Combustion products are transported in the air and deposited into the sea by precipitation. Industrial outflows and sewage contribute heavy metals, pesticides, PCBs, disinfectants, household cleaning products and other synthetic chemicals. These become concentrated in the surface film and in marine sediment, especially estuarine mud. The result of all this contamination is largely unknown because of the large number of substances involved and the lack of information on their biological effects.[199] The heavy metals of greatest concern are copper, lead, mercury, cadmium and zinc which may be bio-accumulated by marine invertebrates. They are cumulative toxins and are passed up the food chain.[200]

Much floating plastic rubbish does not biodegrade, instead disintegrating over time and eventually breaking down to the molecular level. Rigid plastics may float for years.[201] In the centre of the Pacific gyre there is a permanent floating accumulation of mostly plastic waste[202] and there is a similar garbage patch in the Atlantic.[203] […] Run-off of fertilisers from agricultural land is a major source of pollution in some areas and the discharge of raw sewage has a similar effect. The extra nutrients provided by these sources can cause excessive plant growth. Nitrogen is often the limiting factor in marine systems, and with added nitrogen, algal blooms and red tides can lower the oxygen level of the water and kill marine animals. Such events have created dead zones in the Baltic Sea and the Gulf of Mexico.[205]”

iii. List of chemical compounds with unusual names. Technically this is not an article, but I decided to include it here anyway. A few examples from the list:

“Ranasmurfin: A blue protein from the foam nests of a tropical frog, named after the Smurfs.”

“Sonic hedgehog: A protein named after Sonic the Hedgehog.”

“Arsole: (C4H5As), an analogue of pyrrole in which an arsenic atom replaces the nitrogen atom.[16]”

“DAMN: Diaminomaleonitrile, a cyanocarbon that contains two amine groups and two nitrile groups bound to an ethylene backbone.”

“fucK: The name of the gene that encodes L-fuculokinase, an enzyme that catalyzes the phosphorylation of L-fuculose by ATP, yielding L-fuculose-1-phosphate and ADP.[3]”

“Moronic acid: (3-oxoolean-18-en-28-oic acid), a natural triterpene.”

“Draculin: An anticoagulant found in the saliva of vampire bats.[27]”

iv. Operation Proboi. When trying to make sense of e.g. the reactions of people living in the Baltic countries to Russia’s ‘current activities’ in the Ukraine, it probably helps to know stuff like this. 1949 isn’t that long ago – if my father had been born in Latvia he might have been one of the people in the photo.

v. Schrödinger equation. I recently started reading A. C. Phillips’ Introduction to Quantum Mechanics – chapter 2 deals with this topic. Due to the technical nature of the book I’m incidentally not sure to what extent I’ll cover the book here (or for that matter whether I’ll be able to finish it..) – if I do decide to cover it in some detail I’ll probably include relevant links to wikipedia along the way. The wiki has a lot of stuff on these topics, but textbooks are really helpful in terms of figuring out the order in which you should proceed.

vi. Happisburgh footprints. ‘A small step for man, …’

“The Happisburgh footprints were a set of fossilized hominin footprints that date to the early Pleistocene. They were discovered in May 2013 in a newly uncovered sediment layer on a beach at Happisburgh […] in Norfolk, England, and were destroyed by the tide shortly afterwards.  Results of research on the footprints were announced on 7 February 2014, and identified them as dating to more than 800,000 years ago, making them the oldest known hominin footprints outside Africa.[1][2][3] Before the Happisburgh discovery, the oldest known footprints in Britain were at Uskmouth in South Wales, from the Mesolithic and carbon-dated to 4,600 BC.[4]”

The fact that we found these footprints is awesome. The fact that we can tell that they are as old as they are is awesome. There’s a lot of awesome stuff going on here – Happisburgh also simply seems to be a gift that keeps on giving:

“Happisburgh has produced a number of significant archaeological finds over many years. As the shoreline is subject to severe coastal erosion, new material is constantly being exposed along the cliffs and on the beach. Prehistoric discoveries have been noted since 1820, when fishermen trawling oyster beds offshore found their nets had brought up teeth, bones, horns and antlers from elephants, rhinos, giant deer and other extinct species. […]

In 2000, a black flint handaxe dating to between 600,000 and 800,000 years ago was found by a man walking on the beach. In 2012, for the television documentary Britain’s Secret Treasures, the handaxe was selected by a panel of experts from the British Museum and the Council for British Archaeology as the most important item on a list of fifty archaeological discoveries made by members of the public.[14][15] Since its discovery, the palaeolithic history of Happisburgh has been the subject of the Ancient Human Occupation of Britain (AHOB) and Pathways to Ancient Britain (PAB) projects […] Between 2005 and 2010 eighty palaeolithic flint tools, mostly cores, flakes and flake tools were excavated from the foreshore in sediment dating back to up to 950,000 years ago.”

vii. Keep (‘good article’).

“A keep (from the Middle English kype) is a type of fortified tower built within castles during the Middle Ages by European nobility. Scholars have debated the scope of the word keep, but usually consider it to refer to large towers in castles that were fortified residences, used as a refuge of last resort should the rest of the castle fall to an adversary. The first keeps were made of timber and formed a key part of the motte and bailey castles that emerged in Normandy and Anjou during the 10th century; the design spread to England as a result of the Norman invasion of 1066, and in turn spread into Wales during the second half of the 11th century and into Ireland in the 1170s. The Anglo-Normans and French rulers began to build stone keeps during the 10th and 11th centuries; these included Norman keeps, with a square or rectangular design, and circular shell keeps. Stone keeps carried considerable political as well as military importance and could take up to a decade to build.

During the 12th century new designs began to be introduced – in France, quatrefoil-shaped keeps were introduced, while in England polygonal towers were built. By the end of the century, French and English keep designs began to diverge: Philip II of France built a sequence of circular keeps as part of his bid to stamp his royal authority on his new territories, while in England castles were built that abandoned the use of keeps altogether. In Spain, keeps were increasingly incorporated into both Christian and Islamic castles, although in Germany tall towers called Bergfriede were preferred to keeps in the western fashion. In the second half of the 14th century there was a resurgence in the building of keeps. In France, the keep at Vincennes began a fashion for tall, heavily machicolated designs, a trend adopted in Spain most prominently through the Valladolid school of Spanish castle design. Meanwhile, in England tower keeps became popular amongst the most wealthy nobles: these large keeps, each uniquely designed, formed part of the grandest castles built during the period.

By the 16th century, however, keeps were slowly falling out of fashion as fortifications and residences. Many were destroyed between the 17th and 18th centuries in civil wars, or incorporated into gardens as an alternative to follies. During the 19th century, keeps became fashionable once again and in England and France a number were restored or redesigned by Gothic architects. Despite further damage to many French and Spanish keeps during the wars of the 20th century, keeps now form an important part of the tourist and heritage industry in Europe. […]”

“By the 15th century it was increasingly unusual for a lord to build both a keep and a large gatehouse at the same castle, and by the early 16th century the gatehouse had easily overtaken the keep as the more fashionable feature: indeed, almost no new keeps were built in England after this period.[99] The classical Palladian style began to dominate European architecture during the 17th century, causing a further move away from the use of keeps. […] From the 17th century onwards, some keeps were deliberately destroyed. In England, many were destroyed after the end of the Second English Civil War in 1649, when Parliament took steps to prevent another royalist uprising by slighting, or damaging, castles so as to prevent them from having any further military utility. Slighting was quite expensive and took considerable effort to carry out, so damage was usually done in the most cost efficient fashion with only selected walls being destroyed.[103] Keeps were singled out for particular attention in this process because of their continuing political and cultural importance, and the prestige they lent their former royalist owners […] There was some equivalent destruction of keeps in France in the 17th and 18th centuries […] The Spanish Civil War and First and Second World Wars in the 20th century caused damage to many castle keeps across Europe; in particular, the famous keep at Coucy was destroyed by the German Army in 1917.[111] By the late 20th century, however, the conservation of castle keeps formed part of government policy across France, England, Ireland and Spain.[112] In the 21st century in England, most keeps are ruined and form part of the tourism and heritage industries, rather than being used as functioning buildings – the keep of Windsor Castle being a rare exception. This is in contrast to the fate of bergfried towers in Germany, large numbers of which were restored as functional buildings in the late 19th and early 20th century, often as government offices or youth hostels, or the modern conversion of tower houses, which in many cases have become modernised domestic homes.[113]”

viii. Battles of Khalkhin Gol. I decided to look up that stuff because of some of the comments in this thread.

“The Battles of Khalkhyn Gol […] constituted the decisive engagement of the undeclared Soviet–Japanese border conflicts fought among the Soviet Union, Mongolia and the Empire of Japan in 1939. The conflict was named after the river Khalkhyn Gol, which passes through the battlefield. In Japan, the decisive battle of the conflict is known as the Nomonhan Incident […] after a nearby village on the border between Mongolia and Manchuria. The battles resulted in the defeat of the Japanese Sixth Army. […]

While this engagement is little-known in the West, it played an important part in subsequent Japanese conduct in World War II. This defeat, together with other factors, moved the Imperial General Staff in Tokyo away from the policy of the North Strike Group favored by the Army, which wanted to seize Siberia as far as Lake Baikal for its resources. […] Other factors included the signing of the Nazi-Soviet non-aggression pact, which deprived the Army of the basis of its war policy against the USSR. Nomonhan earned the Kwantung Army the displeasure of officials in Tokyo, not so much due to its defeat, but because it was initiated and escalated without direct authorization from the Japanese government. Politically, the defeat also shifted support to the South Strike Group, favored by the Navy, which wanted to seize the resources of Southeast Asia, especially the petroleum and mineral-rich Dutch East Indies. Two days after the Eastern Front of World War II broke out, the Japanese army and navy leaders adopted on 24 June 1941 a resolution “not intervening in German Soviet war for the time being”. In August 1941, Japan and the Soviet Union reaffirmed their neutrality pact.[38] Since the European colonial powers were weakening and suffering early defeats in the war with Germany, coupled with their embargoes on Japan (especially of vital oil) in the second half of 1941, Japan’s attention was ultimately focused on the south, and this led to its decision to launch the attack on Pearl Harbor on 7 December that year.”

Note that there’s some disagreement in the reddit thread as to how important Khalkhin Gol really was – one commenter e.g. argues that: “Khalkhin Gol is overhyped as a factor in the Japanese decision for the southern plan.”

ix. Medical aspects, Hiroshima, Japan, 1946. Technically this is also not a wikipedia article, but multiple wikipedia articles link to it and it is a wikipedia link. The link is to a video featuring multiple people who were harmed by the first nuclear weapon used by humans in warfare. Extensive tissue damage, severe burns, scars – it’s worth having in mind that dying from cancer is not the only concern facing people who survive a nuclear blast. A few related links: a) How did cleanup in Nagasaki and Hiroshima proceed following the atom bombs? b) Minutes of the second meeting of the Target Committee Los Alamos, May 10-11, 1945. c) Keloid. d) Japan in the 1950s (pictures).

April 11, 2014

## Random stuff

Anyway, a snippet from the article:

“There are widespread myths about the psychological vulnerability of gifted students and therefore fears that acceleration will lead to an increase in disturbances such as anxiety, depression, delinquent behavior, and lowered self-esteem. In fact, a comprehensive survey of the research on this topic finds no evidence that gifted students are any more psychologically vulnerable than other students, although boredom, underachievement, perfectionism, and succumbing to the effects of peer pressure are predictable when needs for academic advancement and compatible peers are unmet (Neihart, Reis, Robinson, & Moon, 2002). Questions remain, however, as to whether acceleration may place some students more at risk than others.”

Note incidentally that relative age effects (how the grades and other academic outcomes of individual i are affected by the age difference between individual i and his/her classmates) vary across countries, but are usually not insignificant; in most places the older students in the classroom do better than their younger classmates, all else equal. It’s worth having both such effects as well as the cross-country heterogeneities (and the mechanisms behind them) in mind when considering the potential impact of acceleration on academic performance – given differences across countries there’s no good reason why ‘acceleration effects’ should be homogeneous across countries either. Relative age effects are sizeable in most countries – see e.g. this. I read a very nice study a while back investigating the impact of relative age on tracking options of German students and later life outcomes (the effects were quite large), but I’m too lazy to go look for it now – I may add it to this post later (but I probably won’t).

ii. Publishers withdraw more than 120 gibberish papers. (…still a lot of papers to go – do remember that at this point it’s only a small minority of all published gibberish papers which are computer-generated…)

iii. Nope, this is not another article about how drinking during pregnancy is bad for the fetus (for stuff on that, see instead e.g. this post – link i.); this one is about how alcohol exposure before conception may harm the child:

“It has been well documented that maternal alcohol exposure during fetal development can have devastating neurological consequences. However, less is known about the consequences of maternal and/or paternal alcohol exposure outside of the gestational time frame. Here, we exposed adolescent male and female rats to a repeated binge EtOH exposure paradigm and then mated them in adulthood. Hypothalamic samples were taken from the offspring of these animals at postnatal day (PND) 7 and subjected to a genome-wide microarray analysis followed by qRT-PCR for selected genes. Importantly, the parents were not intoxicated at the time of mating and were not exposed to EtOH at any time during gestation therefore the offspring were never directly exposed to EtOH. Our results showed that the offspring of alcohol-exposed parents had significant differences compared to offspring from alcohol-naïve parents. Specifically, major differences were observed in the expression of genes that mediate neurogenesis and synaptic plasticity during neurodevelopment, genes important for directing chromatin remodeling, posttranslational modifications or transcription regulation, as well as genes involved in regulation of obesity and reproductive function. These data demonstrate that repeated binge alcohol exposure during pubertal development can potentially have detrimental effects on future offspring even in the absence of direct fetal alcohol exposure.”

I haven’t read all of it but I thought I should post it anyway. It is a study on rats who partied a lot early on in their lives and then mated later on after they’d been sober for a while, so I have no idea about the external validity (…I’m sure some people will say the study design is unrealistic – on account of the rats not also being drunk while having sex…) – but good luck setting up a similar prospective study on humans. I think it’ll be hard to do much more than just gather survey data (with a whole host of potential problems) and perhaps combine this kind of stuff with studies comparing outcomes (which?) across different geographical areas using things like legal drinking age reforms or something like that as early alcohol exposure instruments. I’d say that even if such effects are there they’ll be very hard to measure/identify and they’ll probably get lost in the noise.

iv. The relationship between obesity and type 2 diabetes is complicated. I’ve seen it reported elsewhere that this study ‘proved’ that there’s no link between obesity and diabetes or something like that – apparently you need headlines like that to sell ads. Such headlines make me very tired.

v. Scientific Freud. On a related note I have been considering reading the Handbook of Cognitive Behavioral Therapy, but I haven’t gotten around to that yet.

vi. If people from the future write an encyclopedic article about your head, does that mean you did well in life? How you answer that question may depend on what they focus on when writing about the head in question. Interestingly this guy didn’t get an article like that.

March 1, 2014