I have previously posted multiple lectures per post here on the blog, or combined a lecture with other stuff (e.g. links such as those in the previous ‘random stuff’ post). I think such approaches have made me less likely to post lectures at all (if I don’t post a lecture soon after I’ve watched it, experience tells me that I not infrequently simply never get around to posting it), and combined with this issue is the fact that I don’t really watch a lot of lectures these days. For these reasons I have decided to start posting single-lecture posts here on the blog. When I think about the time expenditure of people reading along here, this approach actually also seems justified: although it might take me as much time/work to watch and cover, say, four lectures as it would take me to read and cover 100 pages of a textbook, the time expenditure required of a reader of the blog would be very different in those two cases. You’ll usually be able to read a post that took me multiple hours to write in a short amount of time, whereas ‘the time advantage’ of the reader is close to negligible in the case of lectures (maybe not completely; search costs are not completely irrelevant). By posting multiple lectures in the same post I probably decrease the expected value of the time readers spend watching the content I upload, which seems suboptimal.
Here’s the YouTube description of the lecture, which was posted a few days ago on the IAS YouTube account:
“Over the past two decades, information theory has reemerged within computational complexity theory as a mathematical tool for obtaining unconditional lower bounds in a number of models, including streaming algorithms, data structures, and communication complexity. Many of these applications can be systematized and extended via the study of information complexity – which treats information revealed or transmitted as the resource to be conserved. In this overview talk we will discuss the two-party information complexity and its properties – and the interactive analogues of classical source coding theorems. We will then discuss applications to exact communication complexity bounds, hardness amplification, and quantum communication complexity.”
He actually decided to skip the quantum communication complexity stuff because of time constraints. I should note that the lecture was ‘easy enough’ for me to follow most of it, so it is not really that difficult, at least not if you know some basic information theory.
A few links to related stuff (you can take these links as indications of what sort of stuff the lecture is about/discusses, if you’re on the fence about whether or not to watch it):
Computational complexity theory.
Shannon’s source coding theorem.
From Information to Exact Communication (in the lecture he discusses some aspects covered in this paper).
Unique games conjecture (Two-prover proof systems).
A Counterexample to Strong Parallel Repetition (another paper mentioned/briefly discussed during the lecture).
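Since Shannon’s source coding theorem (linked above) underlies the ‘interactive analogues of classical source coding theorems’ mentioned in the talk description, here’s a minimal sketch of my own (not from the lecture) of what the classical theorem says: the expected length of any uniquely decodable code is bounded below by the entropy of the source, and for dyadic probabilities an optimal prefix code meets the bound exactly.

```python
from math import log2

def entropy(p):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(q * log2(q) for q in p if q > 0)

# A source with dyadic probabilities: the optimal prefix code
# (e.g. codewords 0, 10, 11) has code lengths 1, 2, 2.
p = [0.5, 0.25, 0.25]
lengths = [1, 2, 2]

H = entropy(p)
avg_len = sum(q * l for q, l in zip(p, lengths))
print(H, avg_len)  # both 1.5: the code meets the entropy bound exactly
```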
An interesting aspect I once again noted during this lecture is the loose linkage you sometimes observe between the topics of game theory/microeconomics and computer science. Of course the link is made explicit a few minutes later in the talk when he discusses the unique games conjecture to which I link above, but it’s perhaps worth noting that the link is on display even before that point is reached. Around 38 minutes into the lecture he mentions that one of the relevant proofs ‘involves such things as Lagrange multipliers and optimization’. I was far from surprised, as from a certain point of view the problem he discusses at that point is conceptually very similar to some problems encountered in auction theory, where Lagrange multipliers and optimization problems are frequently encountered. If you are too unfamiliar with that field to see how a similar problem might appear in an auction theory context: what you have there are auction participants who prefer not to reveal their true willingness to pay, and some auction designs actually work in a manner very similar to the (pseudo-)protocol described in the lecture, and are thus used to reveal it (for some subset of participants, at least).
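For readers who’ve never seen a Lagrange multiplier in action, here’s a purely illustrative toy of my own (it has nothing to do with the specific proof in the lecture or with any particular auction): maximize f(x, y) = x·y subject to x + y = 10. The multiplier condition ∇f = λ∇g gives y = λ and x = λ, so x = y = 5.

```python
# Toy constrained optimization: maximize f(x, y) = x*y subject to x + y = 10.
# Lagrange condition: grad f = lam * grad g, i.e. (y, x) = lam * (1, 1),
# so x = y = lam; the constraint x + y = 10 then gives x = y = 5.

def f(x, y):
    return x * y

# Closed-form solution from the multiplier condition:
x_star = y_star = 5.0
lam = 5.0

# Check stationarity, grad f - lam * grad g = 0:
grad_f = (y_star, x_star)   # (df/dx, df/dy)
grad_g = (1.0, 1.0)         # gradient of g(x, y) = x + y - 10
assert all(abs(fi - lam * gi) < 1e-12 for fi, gi in zip(grad_f, grad_g))

# Brute-force check that no feasible point does better:
candidates = [(x, 10 - x) for x in [i / 100 for i in range(1001)]]
assert max(f(x, y) for x, y in candidates) <= f(x_star, y_star) + 1e-9
print(f(x_star, y_star))  # 25.0
```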
It’s been a long time since I last posted one of these posts, so a great number of links of interest have accumulated in my bookmarks. I intended to include a large number of these here, which of course means that I surely won’t cover each specific link included in this post in anywhere near the amount of detail it deserves, but that can’t be helped.
“For those diagnosed with ASD in childhood, most will become adults with a significant degree of disability […] Seltzer et al […] concluded that, despite considerable heterogeneity in social outcomes, “few adults with autism live independently, marry, go to college, work in competitive jobs or develop a large network of friends”. However, the trend within individuals is for some functional improvement over time, as well as a decrease in autistic symptoms […]. Some authors suggest that a sub-group of 15–30% of adults with autism will show more positive outcomes […]. Howlin et al. (2004), and Cederlund et al. (2008) assigned global ratings of social functioning based on achieving independence, friendships/a steady relationship, and education and/or a job. These two papers described respectively 22% and 27% of groups of higher functioning (IQ above 70) ASD adults as attaining “Very Good” or “Good” outcomes.”
“[W]e evaluated the adult outcomes for 45 individuals diagnosed with ASD prior to age 18, and compared this with the functioning of 35 patients whose ASD was identified after 18 years. Concurrent mental illnesses were noted for both groups. […] Comparison of adult outcome within the group of subjects diagnosed with ASD prior to 18 years of age showed significantly poorer functioning for those with co-morbid Intellectual Disability, except in the domain of establishing intimate relationships [my emphasis. To make this point completely clear, one way to look at these results is that apparently in the domain of partner-search autistics diagnosed during childhood are doing so badly in general that being intellectually disabled on top of being autistic is apparently conferring no additional disadvantage]. Even in the normal IQ group, the mean total score, i.e. the sum of the 5 domains, was relatively low at 12.1 out of a possible 25. […] Those diagnosed as adults had achieved significantly more in the domains of education and independence […] Some authors have described a subgroup of 15–27% of adult ASD patients who attained more positive outcomes […]. Defining an arbitrary adaptive score of 20/25 as “Good” for our normal IQ patients, 8 of thirty four (25%) of those diagnosed as adults achieved this level. Only 5 of the thirty three (15%) diagnosed in childhood made the cutoff. (The cut off was consistent with a well, but not superlatively, functioning member of society […]). None of the Intellectually Disabled ASD subjects scored above 10. […] All three groups had a high rate of co-morbid psychiatric illnesses. Depression was particularly frequent in those diagnosed as adults, consistent with other reports […]. Anxiety disorders were also prevalent in the higher functioning participants, 25–27%. 
[…] Most of the higher functioning ASD individuals, whether diagnosed before or after 18 years of age, were functioning well below the potential implied by their normal range intellect.”
ii. Premature mortality in autism spectrum disorder. This is a Swedish matched case cohort study. Some observations from the paper:
“The aim of the current study was to analyse all-cause and cause-specific mortality in ASD using nationwide Swedish population-based registers. A further aim was to address the role of intellectual disability and gender as possible moderators of mortality and causes of death in ASD. […] Odds ratios (ORs) were calculated for a population-based cohort of ASD probands (n = 27 122, diagnosed between 1987 and 2009) compared with gender-, age- and county of residence-matched controls (n = 2 672 185). […] During the observed period, 24 358 (0.91%) individuals in the general population died, whereas the corresponding figure for individuals with ASD was 706 (2.60%; OR = 2.56; 95% CI 2.38–2.76). Cause-specific analyses showed elevated mortality in ASD for almost all analysed diagnostic categories. Mortality and patterns for cause-specific mortality were partly moderated by gender and general intellectual ability. […] Premature mortality was markedly increased in ASD owing to a multitude of medical conditions. […] Mortality was significantly elevated in both genders relative to the general population (males: OR = 2.87; females OR = 2.24)”.
“Individuals in the control group died at a mean age of 70.20 years (s.d. = 24.16, median = 80), whereas the corresponding figure for the entire ASD group was 53.87 years (s.d. = 24.78, median = 55), for low-functioning ASD 39.50 years (s.d. = 21.55, median = 40) and high-functioning ASD 58.39 years (s.d. = 24.01, median = 63) respectively. […] Significantly elevated mortality was noted among individuals with ASD in all analysed categories of specific causes of death except for infections […] ORs were highest in cases of mortality because of diseases of the nervous system (OR = 7.49) and because of suicide (OR = 7.55), in comparison with matched general population controls.”
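As a back-of-the-envelope sanity check on the headline numbers (this is my own arithmetic, not from the paper): you can recover a crude odds ratio from the raw counts quoted above. Note that it comes out somewhat higher than the OR of 2.56 the paper reports, which I take to reflect the fact that the published estimate comes from the matched analysis rather than the pooled counts.

```python
# Crude (unmatched) odds ratio from the counts quoted above.
deaths_asd, n_asd = 706, 27_122
deaths_ctrl, n_ctrl = 24_358, 2_672_185

odds_asd = deaths_asd / (n_asd - deaths_asd)
odds_ctrl = deaths_ctrl / (n_ctrl - deaths_ctrl)
crude_or = odds_asd / odds_ctrl
print(round(crude_or, 2))  # ~2.91, vs the paper's matched OR of 2.56
```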
iii. Adhesive capsulitis of shoulder. This one is related to a health scare I had a few months ago. A few quotes:
“Adhesive capsulitis (also known as frozen shoulder) is a painful and disabling disorder of unclear cause in which the shoulder capsule, the connective tissue surrounding the glenohumeral joint of the shoulder, becomes inflamed and stiff, greatly restricting motion and causing chronic pain. Pain is usually constant, worse at night, and with cold weather. Certain movements or bumps can provoke episodes of tremendous pain and cramping. […] People who suffer from adhesive capsulitis usually experience severe pain and sleep deprivation for prolonged periods due to pain that gets worse when lying still and restricted movement/positions. The condition can lead to depression, problems in the neck and back, and severe weight loss due to long-term lack of deep sleep. People who suffer from adhesive capsulitis may have extreme difficulty concentrating, working, or performing daily life activities for extended periods of time.”
Some other related links below:
The prevalence of a diabetic condition and adhesive capsulitis of the shoulder.
“Adhesive capsulitis is characterized by a progressive and painful loss of shoulder motion of unknown etiology. Previous studies have found the prevalence of adhesive capsulitis to be slightly greater than 2% in the general population. However, the relationship between adhesive capsulitis and diabetes mellitus (DM) is well documented, with the incidence of adhesive capsulitis being two to four times higher in diabetics than in the general population. It affects about 20% of people with diabetes and has been described as the most disabling of the common musculoskeletal manifestations of diabetes.”
Adhesive Capsulitis (review article).
“Patients with type I diabetes have a 40% chance of developing a frozen shoulder in their lifetimes […] Dominant arm involvement has been shown to have a good prognosis; associated intrinsic pathology or insulin-dependent diabetes of more than 10 years are poor prognostic indicators.15 Three stages of adhesive capsulitis have been described, with each phase lasting for about 6 months. The first stage is the freezing stage in which there is an insidious onset of pain. At the end of this period, shoulder ROM [range of motion] becomes limited. The second stage is the frozen stage, in which there might be a reduction in pain; however, there is still restricted ROM. The third stage is the thawing stage, in which ROM improves, but can take between 12 and 42 months to do so. Most patients regain a full ROM; however, 10% to 15% of patients suffer from continued pain and limited ROM.”
Musculoskeletal Complications in Type 1 Diabetes.
“The development of periarticular thickening of skin on the hands and limited joint mobility (cheiroarthropathy) is associated with diabetes and can lead to significant disability. The objective of this study was to describe the prevalence of cheiroarthropathy in the well-characterized Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications (DCCT/EDIC) cohort and examine associated risk factors […] This cross-sectional analysis was performed in 1,217 participants (95% of the active cohort) in EDIC years 18/19 after an average of 24 years of follow-up. Cheiroarthropathy — defined as the presence of any one of the following: adhesive capsulitis, carpal tunnel syndrome, flexor tenosynovitis, Dupuytren’s contracture, or a positive prayer sign [related link] — was assessed using a targeted medical history and standardized physical examination. […] Cheiroarthropathy was present in 66% of subjects […] Cheiroarthropathy is common in people with type 1 diabetes of long duration (∼30 years) and is related to longer duration and higher levels of glycemia. Clinicians should include cheiroarthropathy in their routine history and physical examination of patients with type 1 diabetes because it causes clinically significant functional disability.”
Musculoskeletal disorders in diabetes mellitus: an update.
“Diabetes mellitus (DM) is associated with several musculoskeletal disorders. […] The exact pathophysiology of most of these musculoskeletal disorders remains obscure. Connective tissue disorders, neuropathy, vasculopathy or combinations of these problems, may underlie the increased incidence of musculoskeletal disorders in DM. The development of musculoskeletal disorders is dependent on age and on the duration of DM; however, it has been difficult to show a direct correlation with the metabolic control of DM.”
Musculoskeletal Disorders of the Hand and Shoulder in Patients with Diabetes.
“In addition to micro- and macroangiopathic complications, diabetes mellitus is also associated with several musculoskeletal disorders of the hand and shoulder that can be debilitating (1,2). Limited joint mobility, also termed diabetic hand syndrome or cheiropathy (3), is characterized by skin thickening over the dorsum of the hands and restricted mobility of multiple joints. While this syndrome is painless and usually not disabling (2,4), other musculoskeletal problems occur with increased frequency in diabetic patients, including Dupuytren’s disease [“Dupuytren’s disease […] may be observed in up to 42% of adults with diabetes mellitus, typically in patients with long-standing T1D” – link], carpal tunnel syndrome [“The prevalence of [carpal tunnel syndrome, CTS] in patients with diabetes has been estimated at 11–30 % […], and is dependent on the duration of diabetes. […] Type I DM patients have a high prevalence of CTS with increasing duration of disease, up to 85 % after 54 years of DM” – link], palmar flexor tenosynovitis or trigger finger [“The incidence of trigger finger [/stenosing tenosynovitis] is 7–20 % of patients with diabetes comparing to only about 1–2 % in nondiabetic patients” – link], and adhesive capsulitis of the shoulder (5–10). The association of adhesive capsulitis with pain, swelling, dystrophic skin, and vasomotor instability of the hand constitutes the “shoulder-hand syndrome,” a rare but potentially disabling manifestation of diabetes (1,2).”
“The prevalence of musculoskeletal disorders was greater in diabetic patients than in control patients (36% vs. 9%, P < 0.01). Adhesive capsulitis was present in 12% of the diabetic patients and none of the control patients (P < 0.01), Dupuytren’s disease in 16% of diabetic and 3% of control patients (P < 0.01), and flexor tenosynovitis in 12% of diabetic and 2% of control patients (P < 0.04), while carpal tunnel syndrome occurred in 12% of diabetic patients and 8% of control patients (P = 0.29). Musculoskeletal disorders were more common in patients with type 1 diabetes than in those with type 2 diabetes […]. Forty-three patients [out of 100] with type 1 diabetes had either hand or shoulder disorders (37 with hand disorders, 6 with adhesive capsulitis of the shoulder, and 10 with both syndromes), compared with 28 patients [again out of 100] with type 2 diabetes (24 with hand disorders, 4 with adhesive capsulitis of the shoulder, and 3 with both syndromes, P = 0.03).”
Association of Diabetes Mellitus With the Risk of Developing Adhesive Capsulitis of the Shoulder: A Longitudinal Population-Based Followup Study.
“A total of 78,827 subjects with at least 2 ambulatory care visits with a principal diagnosis of DM in 2001 were recruited for the DM group. The non-DM group comprised 236,481 age- and sex-matched randomly sampled subjects without DM. […] During a 3-year followup period, 946 subjects (1.20%) in the DM group and 2,254 subjects (0.95%) in the non-DM group developed ACS. The crude HR of developing ACS for the DM group compared to the non-DM group was 1.333 […] the association between DM and ACS may be explained at least in part by a DM-related chronic inflammatory process with increased growth factor expression, which in turn leads to joint synovitis and subsequent capsular fibrosis.”
It is important to note when interpreting the results of the above paper that these results are based on Taiwanese population-level data, and type 1 diabetes – which is obviously the high-risk diabetes subgroup in this particular context – is rare in East Asian populations (as observed in Sperling et al., “A child in Helsinki, Finland is almost 400 times more likely to develop diabetes than a child in Sichuan, China”. Taiwanese incidence of type 1 DM in children is estimated at ~5 in 100,000).
iv. Parents who let diabetic son starve to death found guilty of first-degree murder. It’s been a while since I last saw one of these ‘boost-your-faith-in-humanity’ cases, but in my impression they do pop up every now and then. I should probably keep one of these articles at hand in case my parents ever express worry to me that they weren’t good parents; they could have done a lot worse…
v. Freedom of medicine. One quote from the conclusion of Cochran’s post:
“[I]t is surely possible to materially improve the efficacy of drug development, of medical research as a whole. We’re doing better than we did 500 years ago – although probably worse than we did 50 years ago. But I would approach it by learning as much as possible about medical history, demographics, epidemiology, evolutionary medicine, theory of senescence, genetics, etc. Read Koch, not Hayek. There is no royal road to medical progress.”
I agree, and I was considering including some related comments and observations about health economics in this post – however, I ultimately decided against doing so, in part because the post was growing unwieldy; I might include those observations in another post later on. Here’s another somewhat older Westhunt post I at some point decided to bookmark – I particularly like the following neat quote from the comments, which expresses a view I have of course expressed myself in the past here on this blog:
“When you think about it, falsehoods, stupid crap, make the best group identifiers, because anyone might agree with you when you’re obviously right. Signing up to clear nonsense is a better test of group loyalty. A true friend is with you when you’re wrong. Ideally, not just wrong, but barking mad, rolling around in your own vomit wrong.”
“Approximately 59% of all health care expenditures attributed to diabetes are for health resources used by the population aged 65 years and older, much of which is borne by the Medicare program […]. The population 45–64 years of age incurs 33% of diabetes-attributed costs, with the remaining 8% incurred by the population under 45 years of age. The annual attributed health care cost per person with diabetes […] increases with age, primarily as a result of increased use of hospital inpatient and nursing facility resources, physician office visits, and prescription medications. Dividing the total attributed health care expenditures by the number of people with diabetes, we estimate the average annual excess expenditures for the population aged under 45 years, 45–64 years, and 65 years and above, respectively, at $4,394, $5,611, and $11,825.”
“Our logistic regression analysis with NHIS data suggests that diabetes is associated with a 2.4 percentage point increase in the likelihood of leaving the workforce for disability. This equates to approximately 541,000 working-age adults leaving the workforce prematurely and 130 million lost workdays in 2012. For the population that leaves the workforce early because of diabetes-associated disability, we estimate that their average daily earnings would have been $166 per person (with the amount varying by demographic). Presenteeism accounted for 30% of the indirect cost of diabetes. The estimate of a 6.6% annual decline in productivity attributed to diabetes (in excess of the estimated decline in the absence of diabetes) equates to 113 million lost workdays per year.”
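The two headline figures in the quote above are consistent with a standard work-year assumption (this is my own arithmetic; the paper’s actual methodology may differ, and the ~240 working days per year figure is my assumption, not theirs):

```python
# Back-of-the-envelope check: 541,000 adults out of the workforce for a full
# year, at an assumed ~240 working days per year, gives roughly the quoted
# 130 million lost workdays.
adults_out = 541_000
workdays_per_year = 240  # assumption, not stated in the quoted passage

lost_workdays = adults_out * workdays_per_year
print(lost_workdays)  # 129,840,000 -- close to the quoted 130 million
```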
viii. Effect of longer term modest salt reduction on blood pressure: Cochrane systematic review and meta-analysis of randomised trials. Did I blog this paper at some point in the past? I could not find any coverage of it on the blog when I searched for it so I decided to include it here, even if I have a nagging suspicion I may have talked about these findings before. What did they find? The short version is this:
“A modest reduction in salt intake for four or more weeks causes significant and, from a population viewpoint, important falls in blood pressure in both hypertensive and normotensive individuals, irrespective of sex and ethnic group. Salt reduction is associated with a small physiological increase in plasma renin activity, aldosterone, and noradrenaline and no significant change in lipid concentrations. These results support a reduction in population salt intake, which will lower population blood pressure and thereby reduce cardiovascular disease.”
ix. Some wikipedia links:
Heroic Age of Antarctic Exploration (featured).
Kuiper belt (featured).
Treason (one quote worth including here: “Currently, the consensus among major Islamic schools is that apostasy (leaving Islam) is considered treason and that the penalty is death; this is supported not in the Quran but in the Hadith.”).
Savant syndrome (“It is estimated that 10% of those with autism have some form of savant abilities”). A small sidenote of interest to Danish readers: The Danish Broadcasting Corporation recently featured a series about autistics with ‘special abilities’ – the show was called ‘The hidden talents’ (De skjulte talenter), and after multiple people had nagged me to watch it I ended up doing so. Most of the people in the show presumably had some degree of ‘savantism’ combined with autism at the milder end of the spectrum, i.e. Asperger’s. I was somewhat conflicted about what to think of the show and did consider blogging it in detail (in Danish?), but I decided against it. However I do want to add here, for Danish readers who’ve seen the show, that they would do well to keep in mind that a) the great majority of autistics do not have abilities like these, b) many autistics with abilities like these presumably do quite poorly, and c) many autistics have even greater social impairments than do people like e.g. (the very likeable, I have to add…) Louise Wille from the show.
Black Death (“Over 60% of Norway’s population died in 1348–1350”).
Renault FT (“among the most revolutionary and influential tank designs in history”).
Weierstrass function (“an example of a pathological real-valued function on the real line. The function has the property of being continuous everywhere but differentiable nowhere”).
Void coefficient (“a number that can be used to estimate how much the reactivity of a nuclear reactor changes as voids (typically steam bubbles) form in the reactor moderator or coolant. […] Reactivity is directly related to the tendency of the reactor core to change power level: if reactivity is positive, the core power tends to increase; if it is negative, the core power tends to decrease; if it is zero, the core power tends to remain stable. […] A positive void coefficient means that the reactivity increases as the void content inside the reactor increases due to increased boiling or loss of coolant; for example, if the coolant acts as a neutron absorber. If the void coefficient is large enough and control systems do not respond quickly enough, this can form a positive feedback loop which can quickly boil all the coolant in the reactor. This happened in the RBMK reactor that was destroyed in the Chernobyl disaster.”).
Gregor MacGregor (featured) (“a Scottish soldier, adventurer, and confidence trickster […] MacGregor’s Poyais scheme has been called one of the most brazen confidence tricks in history.”).
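Regarding the Weierstrass function linked above, a quick snippet of my own (standard parameter choices, nothing from the article beyond the definition): a partial sum of W(x) = Σ aⁿ cos(bⁿπx), with 0 < a < 1 and ab > 1 + 3π/2, already oscillates wildly, and at x = 0 the full series is geometric and sums to 1/(1−a).

```python
from math import cos, pi

def weierstrass(x, a=0.5, b=13, n_terms=30):
    """Partial sum of the Weierstrass function: sum of a^n * cos(b^n * pi * x).

    Classical conditions: 0 < a < 1, b an odd integer, a*b > 1 + 3*pi/2
    (satisfied here: 0.5 * 13 = 6.5 > ~5.71). The infinite series is
    continuous everywhere but differentiable nowhere; the partial sum is of
    course smooth, but its oscillations already hint at the pathology.
    """
    return sum(a**n * cos(b**n * pi * x) for n in range(n_terms))

# At x = 0 every cosine equals 1, so the full series is geometric:
# sum of a^n = 1/(1-a) = 2 for a = 0.5.
print(weierstrass(0.0))  # ~2.0 (a partial sum, so very slightly less)
```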
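Regarding the void coefficient link above, here’s a deliberately crude toy model of my own (not real reactor kinetics; all numbers are invented purely for illustration) of why the sign of the coefficient matters: if reactivity feeds back positively on power through void formation, power keeps climbing, whereas a negative coefficient is self-stabilizing.

```python
# Crude toy: power grows at a rate proportional to reactivity, and reactivity
# is the void coefficient times the void fraction, which itself rises with
# power (more power, more boiling). Invented numbers, illustration only.

def simulate(void_coeff, steps=2000, dt=0.01):
    power = 1.0  # arbitrary units
    for _ in range(steps):
        void = min(1.0, 0.1 * power)    # toy: boiling increases with power
        reactivity = void_coeff * void  # sign of the coefficient sets feedback
        power += reactivity * power * dt
    return power

print(simulate(void_coeff=+0.5))  # positive coefficient: power climbs and keeps climbing
print(simulate(void_coeff=-0.5))  # negative coefficient: power settles back down
```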
“A recent study estimated that 234 million surgical procedures requiring anaesthesia are performed worldwide annually. Anaesthesia is the largest hospital specialty in the UK, with over 12,000 practising anaesthetists […] In this book, I give a short account of the historical background of anaesthetic practice, a review of anaesthetic equipment, techniques, and medications, and a discussion of how they work. The risks and side effects of anaesthetics will be covered, and some of the subspecialties of anaesthetic practice will be explored.”
I liked the book, and I gave it three stars on goodreads; I was closer to four stars than two. Below I have added a few sample observations from the book, as well as what turned out to be a quite considerable number of links (more than 60, from a brief count) to topics/people/etc. discussed or mentioned in the text. I decided to spend a bit more time finding relevant links than I’ve previously done when writing link-heavy posts, so in this post I have not limited myself to wikipedia articles and I e.g. also link directly to primary literature discussed in the coverage. The links provided are, as usual, meant to be indicators of which kind of stuff is covered in the book, rather than an alternative to the book; some of the wikipedia articles in particular I assume are not very good (the main point of a link to a wikipedia article of questionable quality should probably be taken as an indication that I consider ‘awareness of the existence of concept X’ to be of interest/importance also to people who have not read this book, even if no great resource on the topic was immediately at hand to me).
Sample observations from the book:
“[G]eneral anaesthesia is not sleep. In physiological terms, the two states are very dissimilar. The term general anaesthesia refers to the state of unconsciousness which is deliberately produced by the action of drugs on the patient. Local anaesthesia (and its related terms) refers to the numbness produced in a part of the body by deliberate interruption of nerve function; this is typically achieved without affecting consciousness. […] The purpose of inhaling ether vapour [in the past] was so that surgery would be painless, not so that unconsciousness would necessarily be produced. However, unconsciousness and immobility soon came to be considered desirable attributes […] For almost a century, lying still was the only reliable sign of adequate anaesthesia.”
“The experience of pain triggers powerful emotional consequences, including fear, anger, and anxiety. A reasonable word for the emotional response to pain is ‘suffering’. Pain also triggers the formation of memories which remind us to avoid potentially painful experiences in the future. The intensity of pain perception and suffering also depends on the mental state of the subject at the time, and the relationship between pain, memory, and emotion is subtle and complex. […] The effects of adrenaline are responsible for the appearance of someone in pain: pale, sweating, trembling, with a rapid heart rate and breathing. Additionally, a hormonal storm is activated, readying the body to respond to damage and fight infection. This is known as the stress response. […] Those responses may be abolished by an analgesic such as morphine, which will counteract all those changes. For this reason, it is routine to use analgesic drugs in addition to anaesthetic ones. […] Typical anaesthetic agents are poor at suppressing the stress response, but analgesics like morphine are very effective. […] The hormonal stress response can be shown to be harmful, especially to those who are already ill. For example, the increase in blood coagulability which evolved to reduce blood loss as a result of injury makes the patient more likely to suffer a deep venous thrombosis in the leg veins.”
“If we monitor the EEG of someone under general anaesthesia, certain identifiable changes to the signal occur. In general, the frequency spectrum of the signal slows. […] Next, the overall power of the signal diminishes. In very deep general anaesthesia, short periods of electrical silence, known as burst suppression, can be observed. Finally, the overall randomness of the signal, its entropy, decreases. In short, the EEG of someone who is anaesthetized looks completely different from someone who is awake. […] Depth of anaesthesia is no longer considered to be a linear concept […] since it is clear that anaesthesia is not a single process. It is now believed that the two most important components of anaesthesia are unconsciousness and suppression of the stress response. These can be represented on a three-dimensional diagram called a response surface. [Here’s incidentally a recent review paper on related topics, US]”
“Before the widespread advent of anaesthesia, there were very few painkilling options available. […] Alcohol was commonly given as a means of enhancing the patient’s courage prior to surgery, but alcohol has almost no effect on pain perception. […] For many centuries, opium was the only effective pain-relieving substance known. […] For general anaesthesia to be discovered, certain prerequisites were required. On the one hand, the idea that surgery without pain was achievable had to be accepted as possible. Despite tantalizing clues from history, this idea took a long time to catch on. The few workers who pursued this idea were often openly ridiculed. On the other, an agent had to be discovered that was potent enough to render a patient suitably unconscious to tolerate surgery, but not so potent that overdose (hence accidental death) was too likely. This agent also needed to be easy to produce, tolerable for the patient, and easy enough for untrained people to administer. The herbal candidates (opium, mandrake) were too unreliable or dangerous. The next reasonable candidate, and every agent since, was provided by the proliferating science of chemistry.”
“Inducing anaesthesia by intravenous injection is substantially quicker than the inhalational method. Inhalational induction may take several minutes, while intravenous induction happens in the time it takes for the blood to travel from the needle to the brain (30 to 60 seconds). The main benefit of this is not convenience or comfort but patient safety. […] It was soon discovered that the ideal balance is to induce anaesthesia intravenously, but switch to an inhalational agent […] to keep the patient anaesthetized during the operation. The template of an intravenous induction followed by maintenance with an inhalational agent is still widely used today. […] Most of the drawbacks of volatile agents disappear when the patient is already anaesthetized [and] volatile agents have several advantages for maintenance. First, they are predictable in their effects. Second, they can be conveniently administered in known quantities. Third, the concentration delivered or exhaled by the patient can be easily and reliably measured. Finally, at steady state, the concentration of volatile agent in the patient’s expired air is a close reflection of its concentration in the patient’s brain. This gives the anaesthetist a reliable way of ensuring that enough anaesthetic is present to ensure the patient remains anaesthetized.”
“All current volatile agents are colourless liquids that evaporate into a vapour which produces general anaesthesia when inhaled. All are chemically stable, which means they are non-flammable, and not likely to break down or be metabolized to poisonous products. What distinguishes them from each other are their specific properties: potency, speed of onset, and smell. Potency of an inhalational agent is expressed as MAC, the minimum alveolar concentration required to keep 50% of adults unmoving in response to a standard surgical skin incision. MAC as a concept was introduced […] in 1963, and has proven to be a very useful way of comparing potencies of different anaesthetic agents. […] MAC correlates with observed depth of anaesthesia. It has been known for over a century that potency correlates very highly with lipid solubility; that is, the more soluble an agent is in lipid […], the more potent an anaesthetic it is. This is known as the Meyer-Overton correlation […] Speed of onset is inversely proportional to water solubility. The less soluble in water, the more rapidly an agent will take effect. […] Where immobility is produced at around 1.0 MAC, amnesia is produced at a much lower dose, typically 0.25 MAC, and unconsciousness at around 0.5 MAC. Therefore, a patient may move in response to a surgical stimulus without either being conscious of the stimulus, or remembering it afterwards.”
“The most useful way to estimate the body’s physiological reserve is to assess the patient’s tolerance for exercise. Exercise is a good model of the surgical stress response. The greater the patient’s tolerance for exercise, the better the perioperative outcome is likely to be […] For a smoker who is unable to quit, stopping for even a couple of days before the operation improves outcome. […] Dying ‘on the table’ during surgery is very unusual. Patients who die following surgery usually do so during convalescence, their weakened state making them susceptible to complications such as wound breakdown, chest infections, deep venous thrombosis, and pressure sores.”
“Mechanical ventilation is based on the principle of intermittent positive pressure ventilation (IPPV), gas being ‘blown’ into the patient’s lungs from the machine. […] Inflating a patient’s lungs is a delicate process. Healthy lung tissue is fragile, and can easily be damaged by overdistension (barotrauma). While healthy lung tissue is light and spongy, and easily inflated, diseased lung tissue may be heavy and waterlogged and difficult to inflate, and therefore may collapse, allowing blood to pass through it without exchanging any gases (this is known as shunt). Simply applying higher pressures may not be the answer: this may just overdistend adjacent areas of healthier lung. The ventilator must therefore provide a series of breaths whose volume and pressure are very closely controlled. Every aspect of a mechanical breath may now be adjusted by the anaesthetist: the volume, the pressure, the frequency, and the ratio of inspiratory time to expiratory time are only the basic factors.”
“All anaesthetic drugs are poisons. Remember that in achieving a state of anaesthesia you intend to poison someone, but not kill them – so give as little as possible. [Introductory quote to a chapter, from an Anaesthetics textbook – US] […] Other cells besides neurons use action potentials as the basis of cellular signalling. For example, the synchronized contraction of heart muscle is performed using action potentials, and action potentials are transmitted from nerves to skeletal muscle at the neuromuscular junction to initiate movement. Local anaesthetic drugs are therefore toxic to the heart and brain. In the heart, local anaesthetic drugs interfere with normal contraction, eventually stopping the heart. In the brain, toxicity causes seizures and coma. To avoid toxicity, the total dose is carefully limited”.
Links of interest:
Arthur Ernest Guedel.
Henry Hill Hickman.
William Thomas Green Morton.
James Young Simpson.
Joseph Thomas Clover.
Principles of Total Intravenous Anaesthesia (TIVA).
Laryngeal mask airway.
Gate control theory of pain.
Hartmann’s solution (…what this is called seems to depend on whom you ask, but it’s called Hartmann’s solution in the book…).
Epidural nerve block.
Intensive care medicine.
Bjørn Aage Ibsen.
Pearse et al. (results of paper briefly discussed in the book).
Awareness under anaesthesia (skip the first page).
Pollard et al. (2007).
Postoperative nausea and vomiting.
Postoperative cognitive dysfunction.
Monk et al. (2008).
i. Fire works a little differently than people imagine. A great ask-science comment. See also AugustusFink-nottle’s comment in the same thread.
iii. I was very conflicted about whether to link to this because I haven’t actually spent any time looking at it myself so I don’t know if it’s any good, but according to somebody (?) who linked to it on SSC the people behind this stuff have academic backgrounds in evolutionary biology, which is something at least (whether you think this is a good thing or not will probably depend greatly on your opinion of evolutionary biologists, but I’ve definitely learned a lot more about human mating patterns, partner interaction patterns, etc. from evolutionary biologists than I have from personal experience, so I’m probably in the ‘they-sometimes-have-interesting-ideas-about-these-topics-and-those-ideas-may-not-be-terrible’-camp). I figure these guys are much more application-oriented than were some of the previous sources I’ve read on related topics, such as e.g. Kappeler et al. I add the link mostly so that if, in five years’ time, I have a stroke that obliterates most of my decision-making skills, causing me to decide that entering the dating market might be a good idea, I’ll have some idea where it might make sense to start.
“Are stereotypes accurate or inaccurate? We summarize evidence that stereotype accuracy is one of the largest and most replicable findings in social psychology. We address controversies in this literature, including the long-standing and continuing but unjustified emphasis on stereotype inaccuracy, how to define and assess stereotype accuracy, and whether stereotypic (vs. individuating) information can be used rationally in person perception. We conclude with suggestions for building theory and for future directions of stereotype (in)accuracy research.”
A few quotes from the paper:
“Demographic stereotypes are accurate. Research has consistently shown moderate to high levels of correspondence accuracy for demographic (e.g., race/ethnicity, gender) stereotypes […]. Nearly all accuracy correlations for consensual stereotypes about race/ethnicity and gender exceed .50 (compared to only 5% of social psychological findings; Richard, Bond, & Stokes-Zoota, 2003).[…] Rather than being based in cultural myths, the shared component of stereotypes is often highly accurate. This pattern cannot be easily explained by motivational or social-constructionist theories of stereotypes and probably reflects a “wisdom of crowds” effect […] personal stereotypes are also quite accurate, with correspondence accuracy for roughly half exceeding r =.50.”
“We found 34 published studies of racial-, ethnic-, and gender-stereotype accuracy. Although not every study examined discrepancy scores, when they did, a plurality or majority of all consensual stereotype judgments were accurate. […] In these 34 studies, when stereotypes were inaccurate, there was more evidence of underestimating than overestimating actual demographic group differences […] Research assessing the accuracy of miscellaneous other stereotypes (e.g., about occupations, college majors, sororities, etc.) has generally found accuracy levels comparable to those for demographic stereotypes”
“A common claim […] is that even though many stereotypes accurately capture group means, they are still not accurate because group means cannot describe every individual group member. […] If people were rational, they would use stereotypes to judge individual targets when they lack information about targets’ unique personal characteristics (i.e., individuating information), when the stereotype itself is highly diagnostic (i.e., highly informative regarding the judgment), and when available individuating information is ambiguous or incompletely useful. People’s judgments robustly conform to rational predictions. In the rare situations in which a stereotype is highly diagnostic, people rely on it (e.g., Crawford, Jussim, Madon, Cain, & Stevens, 2011). When highly diagnostic individuating information is available, people overwhelmingly rely on it (Kunda & Thagard, 1996; effect size averaging r = .70). Stereotype biases average no higher than r = .10 ( Jussim, 2012) but reach r = .25 in the absence of individuating information (Kunda & Thagard, 1996). The more diagnostic individuating information people have, the less they stereotype (Crawford et al., 2011; Krueger & Rothbart, 1988). Thus, people do not indiscriminately apply their stereotypes to all individual members of stereotyped groups.” (Funder incidentally talked about this stuff as well in his book Personality Judgment).
One thing worth mentioning in the context of stereotypes is that if you look at stuff like crime data – which sadly not many people do – and you stratify based on stuff like country of origin, the sub-group differences you observe tend to be very large. They are not in the order of something like 10%, the sort of difference which could probably be ignored without major consequences; some subgroup differences can easily amount to one or two orders of magnitude. In some contexts the differences are so large that it’s frankly idiotic to assume there are none. To give an example, in Germany the probability that a random person, about whom you know nothing, has been a suspect in a thievery case is 22% if that person happens to be of Algerian extraction, whereas it’s only 0.27% if you’re dealing with an immigrant from China. Roughly one in 13 of those Algerians have also been involved in a case of bodily harm, which is true of less than one in 400 of the Chinese immigrants.
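To put those numbers side by side, here’s a quick Python sketch (the figures are the ones quoted above; the ratios are mine):

```python
# Shares of each group registered as suspects, per the German figures
# quoted above (thievery: 22% vs. 0.27%; bodily harm: 1 in 13 vs. 1 in 400).
p_thievery = {"Algeria": 0.22, "China": 0.0027}
p_bodily_harm = {"Algeria": 1 / 13, "China": 1 / 400}

ratio_thievery = p_thievery["Algeria"] / p_thievery["China"]
ratio_bodily_harm = p_bodily_harm["Algeria"] / p_bodily_harm["China"]

print(f"thievery: {ratio_thievery:.0f}x")        # roughly 81x
print(f"bodily harm: {ratio_bodily_harm:.0f}x")  # roughly 31x
```

So ‘one or two orders of magnitude’ is not hyperbole here; the thievery gap alone is close to two.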
v. Assessing Immigrant Integration in Sweden after the May 2013 Riots. Some data from the article:
“Today, about one-fifth of Sweden’s population has an immigrant background, defined as those who were either born abroad or born in Sweden to two immigrant parents. The foreign born comprised 15.4 percent of the Swedish population in 2012, up from 11.3 percent in 2000 and 9.2 percent in 1990 […] Of the estimated 331,975 asylum applicants registered in EU countries in 2012, 43,865 (or 13 percent) were in Sweden. […] More than half of these applications were from Syrians, Somalis, Afghanis, Serbians, and Eritreans. […] One town of about 80,000 people, Södertälje, since the mid-2000s has taken in more Iraqi refugees than the United States and Canada combined.”
“Coupled with […] macroeconomic changes, the largely humanitarian nature of immigrant arrivals since the 1970s has posed challenges of labor market integration for Sweden, as refugees often arrive with low levels of education and transferable skills […] high unemployment rates have disproportionately affected immigrant communities in Sweden. In 2009-10, Sweden had the highest gap between native and immigrant employment rates among OECD countries. Approximately 63 percent of immigrants were employed compared to 76 percent of the native-born population. This 13 percentage-point gap is significantly greater than the OECD average […] Explanations for the gap include less work experience and domestic formal qualifications such as language skills among immigrants […] Among recent immigrants, defined as those who have been in the country for less than five years, the employment rate differed from that of the native born by more than 27 percentage points. In 2011, the Swedish newspaper Dagens Nyheter reported that 35 percent of the unemployed registered at the Swedish Public Employment Service were foreign born, up from 22 percent in 2005.”
“As immigrant populations have grown, Sweden has experienced a persistent level of segregation — among the highest in Western Europe. In 2008, 60 percent of native Swedes lived in areas where the majority of the population was also Swedish, and 20 percent lived in areas that were virtually 100 percent Swedish. In contrast, 20 percent of Sweden’s foreign born lived in areas where more than 40 percent of the population was also foreign born.”
vi. Book recommendations. Or rather, author recommendations. A while back I asked ‘the people of SSC’ if they knew of any fiction authors I hadn’t read yet who were both funny and easy to read. I got a lot of good suggestions, and the roughly 20 Dick Francis novels I’ve read during the fall were a consequence of that thread.
“On the basis of an original survey among native Christians and Muslims of Turkish and Moroccan origin in Germany, France, the Netherlands, Belgium, Austria and Sweden, this paper investigates four research questions comparing native Christians to Muslim immigrants: (1) the extent of religious fundamentalism; (2) its socio-economic determinants; (3) whether it can be distinguished from other indicators of religiosity; and (4) its relationship to hostility towards out-groups (homosexuals, Jews, the West, and Muslims). The results indicate that religious fundamentalist attitudes are much more widespread among Sunnite Muslims than among native Christians, even after controlling for the different demographic and socio-economic compositions of these groups. […] Fundamentalist believers […] show very high levels of out-group hostility, especially among Muslims.”
ix. Portal: Dinosaurs. It would have been so incredibly awesome to have had access to this kind of stuff back when I was a child. The portal includes links to articles with names like ‘Bone Wars‘ – what’s not to like? Again, awesome!
x. “you can’t determine if something is truly random from observations alone. You can only determine if something is not truly random.” (link) An important insight well expressed.
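A quick Python sketch of why (my own illustration): a seeded pseudorandom generator sails through a simple frequency test, yet it is fully deterministic. Passing the test is compatible with non-randomness; only a failure would tell you something.

```python
import random

# Fully deterministic: the same seed always yields the same sequence.
rng = random.Random(42)
bits = [rng.randint(0, 1) for _ in range(10_000)]

# A naive 'randomness test': the frequency of ones should be near 0.5.
freq_ones = sum(bits) / len(bits)
print(freq_ones)  # close to 0.5 – yet nothing here is truly random
```

Any battery of statistical tests has the same limitation: it can reject randomness, never certify it.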
xi. Chessprogramming. If you’re interested in having a look at how chess programs work, this is a neat resource. The wiki contains lots of links with information on specific sub-topics of interest. Also chess-related: The World Championship match between Carlsen and Karjakin has started. To the extent that I’ll be following the live coverage, I’ll be following Svidler et al.’s coverage on chess24. Robin van Kampen and Eric Hansen – both 2600+ elo GMs – did quite well yesterday, in my opinion.
xii. Justified by More Than Logos Alone (Razib Khan).
“Very few are Roman Catholic because they have read Aquinas’ Five Ways. Rather, they are Roman Catholic, in order of necessity, because God aligns with their deep intuitions, basic cognitive needs in terms of cosmological coherency, and because the church serves as an avenue for socialization and repetitive ritual which binds individuals to the greater whole. People do not believe in Catholicism as often as they are born Catholics, and the Catholic religion is rather well fitted to a range of predispositions to the typical human.”
As I’ve observed many times before, a wordpress blog like mine is not a particularly nice place to cover mathematical topics involving equations and lots of Greek letters, so the coverage below will be more or less purely conceptual; don’t take this to mean that the book doesn’t contain formulas. Some parts of the book look like this:
That of course makes the book hard to blog, for other reasons than just the fact that it’s typographically hard to deal with the equations. In general it’s hard to talk about the content of a book like this one without going into a lot of details outlining how you get from A to B to C – usually you’re only really interested in C, but you need A and B to make sense of C. At this point I’ve sort of concluded that when covering books like this one I’ll only cover some of the main themes which are easy to discuss in a blog post, and skip (potentially important) points which are difficult to discuss in a small amount of space, which is unfortunately often the case. I should perhaps observe that although I noted in my goodreads review that in a way there was a bit too much philosophy and a bit too little statistics in the coverage for my taste, you should definitely not take that objection to mean that this book is full of fluff; a lot of the philosophical stuff is ‘formal logic’-type material and related comments, and the book in general is quite dense. As I also noted in the goodreads review I didn’t read this book as carefully as I might have done – for example I skipped a couple of the technical proofs because they didn’t seem to be worth the effort – and I’d probably need to read it again to fully understand some of the minor points made throughout the more technical parts of the coverage. That’s a related reason why I don’t cover the book in a great amount of detail here: it’s hard work just to read the damn thing, and talking about the technical stuff in detail here as well would definitely be overkill, even if it would surely make me understand the material better.
I have added some observations from the coverage below. I’ve tried to indicate beforehand which question/topic each quote deals with, to ease reading.
On how statistical methods are related to experimental science:
“statistical methods have aims similar to the process of experimental science. But statistics is not itself an experimental science, it consists of models of how to do experimental science. Statistical theory is a logical — mostly mathematical — discipline; its findings are not subject to experimental test. […] The primary sense in which statistical theory is a science is that it guides and explains statistical methods. A sharpened statement of the purpose of this book is to provide explanations of the senses in which some statistical methods provide scientific evidence.”
On mathematics and axiomatic systems (the book goes into much more detail than this):
“It is not sufficiently appreciated that a link is needed between mathematics and methods. Mathematics is not about the world until it is interpreted and then it is only about models of the world […]. No contradiction is introduced by either interpreting the same theory in different ways or by modeling the same concept by different theories. […] In general, a primitive undefined term is said to be interpreted when a meaning is assigned to it and when all such terms are interpreted we have an interpretation of the axiomatic system. It makes no sense to ask which is the correct interpretation of an axiom system. This is a primary strength of the axiomatic method; we can use it to organize and structure our thoughts and knowledge by simultaneously and economically treating all interpretations of an axiom system. It is also a weakness in that failure to define or interpret terms leads to much confusion about the implications of theory for application.”
It’s all about models:
“The scientific method of theory checking is to compare predictions deduced from a theoretical model with observations on nature. Thus science must predict what happens in nature but it need not explain why. […] whether experiment is consistent with theory is relative to accuracy and purpose. All theories are simplifications of reality and hence no theory will be expected to be a perfect predictor. Theories of statistical inference become relevant to scientific process at precisely this point. […] Scientific method is a practice developed to deal with experiments on nature. Probability theory is a deductive study of the properties of models of such experiments. All of the theorems of probability are results about models of experiments.”
But given a frequentist interpretation you can test your statistical theories with the real world, right? Right? Well…
“How might we check the long run stability of relative frequency? If we are to compare mathematical theory with experiment then only finite sequences can be observed. But for the Bernoulli case, the event that frequency approaches probability is stochastically independent of any sequence of finite length. […] Long-run stability of relative frequency cannot be checked experimentally. There are neither theoretical nor empirical guarantees that, a priori, one can recognize experiments performed under uniform conditions and that under these circumstances one will obtain stable frequencies.” [related link]
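To see the finite-sequence point concretely, here’s a sketch (mine, not the book’s): every possible finite outcome has positive probability under every p strictly between 0 and 1, so no finite record of flips can, on its own, confirm long-run stability at any particular p.

```python
from math import comb

# Probability of observing exactly k successes in n Bernoulli trials.
n, k = 100, 50
for p in (0.3, 0.5, 0.7):
    prob = comb(n, k) * p**k * (1 - p) ** (n - k)
    print(f"P({k} of {n} | p={p}) = {prob:.3e}")  # positive for every p in (0, 1)
```

Fifty heads in a hundred flips is of course most likely under p = 0.5, but it is a possible outcome under any p in the open unit interval, which is the sense in which the observation cannot settle the matter.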
What should we expect to get out of mathematical and statistical theories of inference?
“What can we expect of a theory of statistical inference? We can expect an internally consistent explanation of why certain conclusions follow from certain data. The theory will not be about inductive rationality but about a model of inductive rationality. Statisticians are used to thinking that they apply their logic to models of the physical world; less common is the realization that their logic itself is only a model. Explanation will be in terms of introduced concepts which do not exist in nature. Properties of the concepts will be derived from assumptions which merely seem reasonable. This is the only sense in which the axioms of any mathematical theory are true […] We can expect these concepts, assumptions, and properties to be intuitive but, unlike natural science, they cannot be checked by experiment. Different people have different ideas about what “seems reasonable,” so we can expect different explanations and different properties. We should not be surprised if the theorems of two different theories of statistical evidence differ. If two models had no different properties then they would be different versions of the same model […] We should not expect to achieve, by mathematics alone, a single coherent theory of inference, for mathematical truth is conditional and the assumptions are not “self-evident.” Faith in a set of assumptions would be needed to achieve a single coherent theory.”
On disagreements about the nature of statistical evidence:
“The context of this section is that there is disagreement among experts about the nature of statistical evidence and consequently much use of one formulation to criticize another. Neyman (1950) maintains that, from his behavioral hypothesis testing point of view, Fisherian significance tests do not express evidence. Royall (1997) employs the “law” of likelihood to criticize hypothesis as well as significance testing. Pratt (1965), Berger and Selke (1987), Berger and Berry (1988), and Casella and Berger (1987) employ Bayesian theory to criticize sampling theory. […] Critics assume that their findings are about evidence, but they are at most about models of evidence. Many theoretical statistical criticisms, when stated in terms of evidence, have the following outline: According to model A, evidence satisfies proposition P. But according to model B, which is correct since it is derived from “self-evident truths,” P is not true. Now evidence can’t be two different ways so, since B is right, A must be wrong. Note that the argument is symmetric: since A appears “self-evident” (to adherents of A) B must be wrong. But both conclusions are invalid since evidence can be modeled in different ways, perhaps useful in different contexts and for different purposes. From the observation that P is a theorem of A but not of B, all we can properly conclude is that A and B are different models of evidence. […] The common practice of using one theory of inference to critique another is a misleading activity.”
Is mathematics a science?
“Is mathematics a science? It is certainly systematized knowledge much concerned with structure, but then so is history. Does it employ the scientific method? Well, partly; hypothesis and deduction are the essence of mathematics and the search for counter examples is a mathematical counterpart of experimentation; but the question is not put to nature. Is mathematics about nature? In part. The hypotheses of most mathematics are suggested by some natural primitive concept, for it is difficult to think of interesting hypotheses concerning nonsense syllables and to check their consistency. However, it often happens that as a mathematical subject matures it tends to evolve away from the original concept which motivated it. Mathematics in its purest form is probably not natural science since it lacks the experimental aspect. Art is sometimes defined to be creative work displaying form, beauty and unusual perception. By this definition pure mathematics is clearly an art. On the other hand, applied mathematics, taking its hypotheses from real world concepts, is an attempt to describe nature. Applied mathematics, without regard to experimental verification, is in fact largely the “conditional truth” portion of science. If a body of applied mathematics has survived experimental test to become trustworthy belief then it is the essence of natural science.”
Then what about statistics – is statistics a science?
“Statisticians can and do make contributions to subject matter fields such as physics, and demography but statistical theory and methods proper, distinguished from their findings, are not like physics in that they are not about nature. […] Applied statistics is natural science but the findings are about the subject matter field not statistical theory or method. […] Statistical theory helps with how to do natural science but it is not itself a natural science.”
I should note that I am, and have for a long time been, in broad agreement with the author’s remarks on the nature of science and mathematics above. Popper, among many others, discussed this topic a long time ago, e.g. in The Logic of Scientific Discovery, and for probably a decade I’ve basically been of the opinion that (‘pure’) mathematics is not science but rather ‘something else’ – which doesn’t mean it’s not useful. I’ve had a harder time coming to terms with how precisely to deal with statistics in these terms, and in that context the book has been conceptually helpful.
Below I’ve added a few links to other stuff also covered in the book:
Radon-Nikodym theorem. (Not covered in the book, but the necessity of using ‘a Radon-Nikodym derivative’ to obtain an answer to a question being asked was remarked upon at one point, and I had no clue what he was talking about – the stuff in the link seems to be it.)
A very specific and relevant link: Berger and Wolpert (1984). The material on Birnbaum’s argument from p. 24 (p. 40) onward is covered in some detail in the book; the author is critical of the model and explains in some detail why. See also: On the foundations of statistical inference (Birnbaum, 1962).
i. World Happiness Report 2013. A few figures from the publication:
“As the Internet has become a nearly ubiquitous resource for acquiring knowledge about the world, questions have arisen about its potential effects on cognition. Here we show that searching the Internet for explanatory knowledge creates an illusion whereby people mistake access to information for their own personal understanding of the information. Evidence from 9 experiments shows that searching for information online leads to an increase in self-assessed knowledge as people mistakenly think they have more knowledge “in the head,” even seeing their own brains as more active as depicted by functional MRI (fMRI) images.”
A little more from the paper:
“If we go to the library to find a fact or call a friend to recall a memory, it is quite clear that the information we seek is not accessible within our own minds. When we go to the Internet in search of an answer, it seems quite clear that we are consciously seeking outside knowledge. In contrast to other external sources, however, the Internet often provides much more immediate and reliable access to a broad array of expert information. Might the Internet’s unique accessibility, speed, and expertise cause us to lose track of our reliance upon it, distorting how we view our own abilities? One consequence of an inability to monitor one’s reliance on the Internet may be that users become miscalibrated regarding their personal knowledge. Self-assessments can be highly inaccurate, often occurring as inflated self-ratings of competence, with most people seeing themselves as above average [here’s a related link] […] For example, people overestimate their own ability to offer a quality explanation even in familiar domains […]. Similar illusions of competence may emerge as individuals become immersed in transactive memory networks. They may overestimate the amount of information contained in their network, producing a “feeling of knowing,” even when the content is inaccessible […]. In other words, they may conflate the knowledge for which their partner is responsible with the knowledge that they themselves possess (Wegner, 1987). And in the case of the Internet, an especially immediate and ubiquitous memory partner, there may be especially large knowledge overestimations. As people underestimate how much they are relying on the Internet, success at finding information on the Internet may be conflated with personally mastered information, leading Internet users to erroneously include knowledge stored outside their own heads as their own.
That is, when participants access outside knowledge sources, they may become systematically miscalibrated regarding the extent to which they rely on their transactive memory partner. It is not that they misattribute the source of their knowledge, they could know full well where it came from, but rather they may inflate the sense of how much of the sum total of knowledge is stored internally.
We present evidence from nine experiments that searching the Internet leads people to conflate information that can be found online with knowledge “in the head.” […] The effect derives from a true misattribution of the sources of knowledge, not a change in understanding of what counts as internal knowledge (Experiment 2a and b) and is not driven by a “halo effect” or general overconfidence (Experiment 3). We provide evidence that this effect occurs specifically because information online can so easily be accessed through search (Experiment 4a–c).”
iii. Some words I’ve recently encountered on vocabulary.com: hortatory, adduce, obsequious, enunciate, ineluctable, guerdon, chthonic, condign, philippic, coruscate, exceptionable, colophon, lapidary, rubicund, frumpish, raiment, prorogue, sonorous, metonymy.
v. I have no idea how accurate this test of chess strength is (some people in this thread argue that there are probably some calibration issues at the low end), but I thought I should link to it anyway. I’d be very cautious about drawing strong conclusions about over-the-board strength without knowing how they’ve validated the tool. In over-the-board chess you have at minimum a couple of minutes per move on average, and this tool never gives you more than 30 seconds, so some slow players will probably suffer using it (I’d imagine this is why u/ViktorVamos got such a low estimate). For what it’s worth my Elo estimate was 2039 (95% CI: 1859, 2220).
In related news, I recently defeated my first IM – Pablo Garcia Castro – in a blitz (3 minutes/player) game. It actually felt a bit like an anticlimax and afterwards I was thinking that it would probably have felt like a bigger deal if I’d not lately been getting used to winning the occasional bullet game against IMs on the ICC. Actually I think my two wins against WIM Shiqun Ni during the same bullet session at the time felt like a bigger accomplishment, because that specific session was played during the Women’s World Chess Championship and I realized while looking up my opponent that this woman was actually stronger than one of the contestants who made it to the quarter-finals in that event (Meri Arabidze). On the other hand bullet isn’t really chess, so…
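In case anyone wonders how big such rating gaps are in practice, the standard Elo expected-score formula translates rating differences into expected results. (The 2450 below is just an assumed IM-level rating for illustration, not my opponent’s actual one.)

```python
def elo_expected_score(r_a: float, r_b: float) -> float:
    """Standard Elo expected score for player A against player B."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

# My estimated 2039 against an assumed 2450 IM: under a 9% expected score,
# so the occasional win is exactly that - occasional.
print(round(elo_expected_score(2039, 2450), 3))  # 0.086
```

A 400-point gap corresponds to roughly a 9% expected score, which is why blitz and bullet upsets against titled players feel notable without being statistically shocking.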
“This report shows trends and group differences in current marital status, with a focus on first marriages among women and men aged 15–44 years in the United States. Trends and group differences in the timing and duration of first marriages are also discussed. […] The analyses presented in this report are based on a nationally representative sample of 12,279 women and 10,403 men aged 15–44 years in the household population of the United States.”
“In 2006–2010, […] median age at first marriage was 25.8 for women and 28.3 for men.”
“Among women, 68% of unions formed in 1997–2001 began as a cohabitation rather than as a marriage (8). If entry into any type of union, marriage or cohabitation, is taken into account, then the timing of a first union occurs at roughly the same point in the life course as marriage did in the past (9). Given the place of cohabitation in contemporary union formation, descriptions of marital behavior, particularly those concerning trends over time, are more complete when cohabitation is also measured. […] Trends in the current marital statuses of women using the 1982, 1995, 2002, and 2006–2010 NSFG indicate that the percentage of women who were currently in a first marriage decreased over the past several decades, from 44% in 1982 to 36% in 2006–2010 […]. At the same time, the percentage of women who were currently cohabiting increased steadily from 3.0% in 1982 to 11% in 2006–2010. In addition, the proportion of women aged 15–44 who were never married at the time of interview increased from 34% in 1982 to 38% in 2006–2010.”
“In 2006–2010, the probability of first marriage by age 25 was 44% for women compared with 59% in 1995, a decrease of 25%. By age 35, the probability of first marriage was 84% in 1995 compared with 78% in 2006–2010 […] By age 40, the difference in the probability of age at first marriage for women was not significant between 1995 (86%) and 2006–2010 (84%). These findings suggest that between 1995 and 2006–2010, women married for the first time at older ages; however, this delay was not apparent by age 40.”
“In 2006–2010, the probability of a first marriage lasting at least 10 years was 68% for women and 70% for men. Looking at 20 years, the probability that the first marriages of women and men will survive was 52% for women and 56% for men in 2006–2010. These levels are virtually identical to estimates based on vital statistics from the early 1970s (24). For women, there was no significant change in the probability of a first marriage lasting 20 years between the 1995 NSFG (50%) and the 2006–2010 NSFG (52%)”
“Women who had no births when they married for the first time had a higher probability of their marriage surviving 20 years (56%) compared with women who had one or more births at the time of first marriage (33%). […] Looking at spousal characteristics, women whose first husbands had been previously married (38%) had a lower probability of their first marriage lasting 20 years compared with women whose first husband had never been married before (54%). Women whose first husband had children from previous relationships had a lower probability that their first marriage would last 20 years (37%) compared with first husbands who had no other children (54%). For men, […] patterns of first marriage survival […] are similar to those shown for women for marriages that survived up to 15 years.”
“These data show trends that are consistent with broad demographic changes in the American family that have occurred in the United States over the last several decades. One such trend is an increase in the time spent unmarried among women and men. For women, there was a continued decrease in the percentage currently married for the first time — and an increase in the percent currently cohabiting — in 2006–2010 compared with earlier years. For men, there was also an increase in the percentage unmarried and in the percentage currently cohabiting between 2002 and 2006–2010. Another trend is an increase in the age at first marriage for women and men, with men continuing to marry for the first time at older ages than women. […] Previous research suggests that women with more education and better economic prospects are more likely to delay first marriage to older ages, but are ultimately more likely to become married and to stay married […]. Data from the 2006–2010 NSFG support these findings”
ii. Involuntary Celibacy: A life course analysis (review). This is not a link to the actual paper – the paper is not freely available, which is why I do not link to it – but rather a link to a report talking about what’s in that paper. However, I found some of the stuff interesting:
“A member of an on-line discussion group for involuntary celibates approached the first author of the paper via email to ask about research on involuntary celibacy. It soon became apparent that little had been done, and so the discussion group volunteered to be interviewed and a research team was put together. An initial questionnaire was mailed to 35 group members, and they got a return rate of 85%. They later posted it to a web page so that other potential respondents had access to it. Eventually 60 men and 22 women took the survey.”
“Most were between the ages of 25-34, 28% were married or living with a partner, 89% had attended or completed college. Professionals (45%) and students (16%) were the two largest groups. 85% of the sample was white, 89% were heterosexual. 70% lived in the U.S. and the rest primarily in Western Europe, Canada and Australia. […] the value of this research lies in the rich descriptive data obtained about the lives of involuntary celibates, a group about which little is known. […] The questionnaire contained 13 categorical, close-ended questions assessing demographic data such as age, sex, marital status, living arrangement, income, education, employment type, area of residence, race/ethnicity, sexual orientation, religious preference, political views, and time spent on the computer. 58 open-ended questions investigated such areas as past sexual experiences, current relationships, initiating relationships, sexuality and celibacy, nonsexual relationships and the consequences of celibacy. They started out by asking about childhood experiences, progressed to questions about teen and early adult years and finished with questions about current status and the effects of celibacy.”
“78% of this sample had discussed sex with friends, 84% had masturbated as teens. The virgins and singles, however, differed from national averages in their dating and sexual experiences.”
“91% of virgins and 52% of singles had never dated as teenagers. Males reported hesitancy in initiating dates, and females reported a lack of invitations by males. For those who did date, their experiences tended to be very limited. Only 29% of virgins reported first sexual experiences that involved other people, and they frequently reported no sexual activity at all except for masturbation. Singles were more likely than virgins to have had an initial sexual experience that involved other people (76%), but they tended to report that they were dissatisfied with the experience. […] While most of the sample had discussed sex with friends and masturbated as teens, most virgins and singles did not date. […] Virgins and singles may have missed important transitions, and as they got older, their trajectories began to differ from those of their age peers. Patterns of sexuality in young adulthood are significantly related to dating, steady dating and sexual experience in adolescence. It is rare for a teenager to initiate sexual activity outside of a dating relationship. While virginity and lack of experience are fairly common in teenagers and young adults, by the time these respondents reached their mid-twenties, they reported feeling left behind by age peers. […] Even for the heterosexuals in the study, it appears that lack of dating and sexual experimentation in the teen years may be precursors to problems in adult sexual relationships.”
“Many of the virgins reported that becoming celibate involved a lack of sexual and interpersonal experience at several different transition points in adolescence and young adulthood. They never or rarely dated, had little experience with interpersonal sexual activity, and had never had sexual intercourse. […] In contrast, partnered celibates generally became sexually inactive by a very different process. All had initially been sexually active with their partners, but at some point stopped. At the time of the survey, sexual intimacy no longer or very rarely occurred in their relationships. The majority of them (70%) started out having satisfactory relationships, but they slowly stopped having sex as time went on.”
“shyness was a barrier to developing and maintaining relationships for many of the respondents. Virgins (94%) and singles (84%) were more likely to report shyness than were partnered respondents (20%). The men (89%) were more likely to report being shy than women (77%). 41% of virgins and 23% of singles reported an inability to relate to others socially. […] 1/3 of the respondents thought their weight, appearance, or physical characteristics were obstacles to attracting potential partners. 47% of virgins and 56% of singles mentioned these factors, compared to only 9% of partnered people. […] Many felt that their sexual development had somehow stalled in an earlier stage of life; feeling different from their peers and feeling like they will never catch up. […] All respondents perceived their lack of sexual activity in a negative light and in all likelihood, the relationship between involuntary celibacy and unhappiness, anger and depression is reciprocal, with involuntary celibacy contributing to negative feelings, but these negative feelings also causing people to feel less self-confident and less open to sexual opportunities when they occur. The longer the duration of the celibacy, the more likely our respondents were to view it as a permanent way of life. Virginal celibates tended to see their condition as temporary for the most part, but the older they were, the more likely they were to see it as permanent, and the same was true for single celibates.”
It seems to me from ‘a brief look around’ that not a lot of research has been done on this topic, which I find annoying. Because yes, I’m well aware these are old data and that the sample is small and ‘convenient’. Here’s a brief related study on the ‘Characteristics of adult women who abstain from sexual intercourse’ – the main findings:
“Of the 1801 respondents, 244 (14%) reported abstaining from intercourse in the past 6 months. Univariate analysis revealed that abstinent women were less likely than sexually active women to have used illicit drugs [odds ratio (OR) 0.47; 95% CI 0.35–0.63], to have been physically abused (OR 0.44, 95% CI 0.31–0.64), to be current smokers (OR 0.59, 95% CI 0.45–0.78), to drink above risk thresholds (OR 0.66, 95% CI 0.49–0.90), to have high Mental Health Inventory-5 scores (OR 0.7, 95% CI 0.54–0.92) and to have health insurance (OR 0.74, 95% CI 0.56–0.98). Abstinent women were more likely to be aged over 30 years (OR 1.98, 95% CI 1.51–2.61) and to have a high school education (OR 1.38, 95% CI 1.01–1.89). Logistic regression showed that age >30 years, absence of illicit drug use, absence of physical abuse and lack of health insurance were independently associated with sexual abstinence.
Prolonged sexual abstinence was not uncommon among adult women. Periodic, voluntary sexual abstinence was associated with positive health behaviours, implying that abstinence was not a random event. Future studies should address whether abstinence has a causal role in promoting healthy behaviours or whether women with a healthy lifestyle are more likely to choose abstinence.”
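The study above reports its findings as odds ratios with 95% confidence intervals. For readers unfamiliar with how those numbers arise, here’s a minimal sketch of the standard calculation (odds ratio plus Wald-type confidence interval from a 2×2 table); the counts in the usage example are hypothetical, not taken from the paper:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome,    b = unexposed with outcome,
    c = exposed without outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is sqrt of the sum of reciprocal cell counts
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts, for illustration only
or_, lo, hi = odds_ratio_ci(10, 20, 30, 40)
```

So e.g. the reported OR of 0.47 for illicit drug use means the abstinent women had roughly half the odds of reporting drug use compared with the sexually active women; an interval excluding 1 is what makes the association ‘significant’ in the univariate analysis.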
Here’s another more recent study – Prevalence and Predictors of Sexual Inexperience in Adulthood (unfortunately I haven’t been able to locate a non-gated link) – which I found and may have a closer look at later. A few quotes/observations:
“By adulthood, sexual activity is nearly universal: 97% of men and 98% of women between the ages of 25-44 report having had vaginal intercourse (Mosher, Chandra, & Jones, 2005). […] Although the majority of individuals experience this transition during adolescence or early adulthood, a small minority remain sexually inexperienced far longer. Data from the NSFG indicate that about 5% of males and 3% of females between the ages of 25 and 29 report never having had vaginal sex (Mosher et al., 2005). While the percentage of sexually inexperienced participants drops slightly among older age groups, between 1 and 2% of both males and females continue to report that they have never had vaginal sex even into their early 40s. Other nationally representative surveys have yielded similar estimates of adult sexual inexperience (Billy, Tanfer, Grady, & Klepinger, 1993)”
“Individuals who have not experienced any type of sexual activity as adults […] may differ from those who only abstain from vaginal intercourse. For example, vaginal virgins who engage in “everything but” vaginal sex – sometimes referred to as “technical virgins” […] – may abstain from vaginal sex in order to avoid its potential negative consequences […]. In contrast, individuals who have neither coital nor noncoital experience may have been unable to attract sexual partners or may have little interest in sexual involvement. Because prior analyses have generally conflated these two populations, we know virtually nothing about the prevalence or characteristics of young adults who have abstained from all types of sexual activity.”
“We used data from 2,857 individuals who participated in Waves I–IV of the National Longitudinal Study of Adolescent Health (Add Health) and reported no sexual activity (i.e., oral-genital, vaginal, or anal sex) by age 18 to identify, using discrete-time survival models, adolescent sociodemographic, biosocial, and behavioral characteristics that predicted adult sexual inexperience. The mean age of participants at Wave IV was 28.5 years (SD = 1.92). Over one out of eight participants who did not initiate sexual activity during adolescence remained abstinent as young adults. Sexual non-attraction significantly predicted sexual inexperience among both males (aOR = 0.5) and females (aOR = 0.6). Males also had lower odds of initiating sexual activity after age 18 if they were non-Hispanic Asian, reported later than average pubertal development, or were rated as physically unattractive (aORs = 0.6–0.7). Females who were overweight, had lower cognitive performance, or reported frequent religious attendance had lower odds of sexual experience (aORs = 0.7–0.8) while those who were rated by the interviewers as very attractive or whose parents had lower educational attainment had higher odds of sexual experience (aORs = 1.4–1.8). Our findings underscore the heterogeneity of this unique population and suggest that there are a number of different pathways that may lead to either voluntary or involuntary adult sexual inexperience.”
“Breastfeeding has clear short-term benefits, but its long-term consequences on human capital are yet to be established. We aimed to assess whether breastfeeding duration was associated with intelligence quotient (IQ), years of schooling, and income at the age of 30 years, in a setting where no strong social patterning of breastfeeding exists. […] A prospective, population-based birth cohort study of neonates was launched in 1982 in Pelotas, Brazil. Information about breastfeeding was recorded in early childhood. At 30 years of age, we studied the IQ (Wechsler Adult Intelligence Scale, 3rd version), educational attainment, and income of the participants. For the analyses, we used multiple linear regression with adjustment for ten confounding variables and the G-formula. […] From June 4, 2012, to Feb 28, 2013, of the 5914 neonates enrolled, information about IQ and breastfeeding duration was available for 3493 participants. In the crude and adjusted analyses, the durations of total breastfeeding and predominant breastfeeding (breastfeeding as the main form of nutrition with some other foods) were positively associated with IQ, educational attainment, and income. We identified dose-response associations with breastfeeding duration for IQ and educational attainment. In the confounder-adjusted analysis, participants who were breastfed for 12 months or more had higher IQ scores (difference of 3.76 points, 95% CI 2.20–5.33), more years of education (0.91 years, 0.42–1.40), and higher monthly incomes (341.0 Brazilian reals, 93.8–588.3) than did those who were breastfed for less than 1 month. The results of our mediation analysis suggested that IQ was responsible for 72% of the effect on income.”
This is a huge effect size.
iv. Grandmaster blunders (chess). This is quite a nice little collection; some of the best players in the world have actually played some really terrible moves over the years, which I find oddly comforting in a way..
v. History of the United Kingdom during World War I (wikipedia, ‘good article’). A few observations from the article:
“In 1915, the Ministry of Munitions under David Lloyd George was formed to control munitions production and had considerable success. By April 1915, just two million rounds of shells had been sent to France; by the end of the war the figure had reached 187 million, and a year’s worth of pre-war production of light munitions could be completed in just four days by 1918.”
“During the war, average calorie intake [in Britain] decreased only three percent, but protein intake six percent.”
“Energy was a critical factor for the British war effort. Most of the energy supplies came from coal mines in Britain, where the issue was labour supply. Critical however was the flow of oil for ships, lorries and industrial use. There were no oil wells in Britain so everything was imported. The U.S. pumped two-thirds of the world’s oil. In 1917, total British consumption was 827 million barrels, of which 85 percent was supplied by the United States, and 6 percent by Mexico.”
“In the post war publication Statistics of the Military Effort of the British Empire During the Great War 1914–1920 (The War Office, March 1922), the official report lists 908,371 ‘soldiers’ as being either killed in action, dying of wounds, dying as prisoners of war or missing in action in the World War. (This is broken down into the United Kingdom and its colonies 704,121; British India 64,449; Canada 56,639; Australia 59,330; New Zealand 16,711; South Africa 7,121.) […] The civilian death rate exceeded the prewar level by 292,000, which included 109,000 deaths due to food shortages and 183,577 from Spanish Flu.”
vi. House of Plantagenet (wikipedia, ‘good article’).
vii. r/Earthp*rn. There are some really nice pictures here…
i. I’ve been slightly more busy than usual lately, which has had as a consequence that I’ve been reading slightly less than usual. In a way this stuff has had bigger ‘tertiary effects’ (on blogging) than ‘secondary effects’ (on reading); I’ve not read that much less than usual, but reading and blogging are two different things, and blog-posts don’t write themselves. Sometimes it’s much easier for me to justify reading books than it is for me to justify spending time blogging books I’ve read. I just finished Newman and Kohn’s excellent book Evidence-Based Diagnosis, but despite this being an excellent book and despite me having already written a bit of stuff about the book in a post draft, I just don’t feel like finishing that blog post now. But I also don’t feel like letting any more time pass without an update – thus this post.
ii. On another reading-related matter, I should note that even assuming (a strong assumption here) the people they asked weren’t lying, these numbers seem low:
“Descriptive analysis indicated that the hours students spent weekly (M) on academic reading (AR), extracurricular reading (ER), and the Internet (INT), were 7.72 hours, 4.24 hours, and 8.95 hours, respectively.”
But on the other hand the estimate of 19.4 hours of weekly reading reported here (table 1, page 281) actually seems to match that estimate reasonably well (the sum of the numbers in the quote is ~20.9). Incidentally don’t you also just love when people report easily convertible metrics/units like these – ‘8.95 hours’..? Anyway, if the estimates are true, (some samples of…) college students read roughly 3 hours per day on average over the course of a week, including internet reading (which makes up almost half of the total and may or may not – you can’t really tell from the abstract – include academic stuff like stuff from journals…). I sometimes get curious about these sorts of things, and/but then I usually quickly get annoyed because it’s so difficult to get good data, and no good data seem to exist anywhere on such matters. This is in a way perfectly understandable (but also frustrating); I don’t even have a good idea what would be a good estimate of the ‘average’ number of hours I spend reading on an ‘average’ day, and I’m painfully aware of the fact that you can’t get access to that sort of information just by doing something simple like recording the number of hours/minutes spent reading during the day each day, for obvious reasons; the number would likely cease to be particularly relevant once the data recording process were to stop, even assuming there was no measurement error (’rounding up’). Such schemes might be a way to increase the amount of reading short-term (but if they are, why are they not already used in schools? Or perhaps they are?), but unless the scheme is implemented permanently the data derived from it are not going to be particularly relevant to anything later on. 
I don’t think unsophisticated self-reports which simply ask people how much they read are particularly useful, but if one assumes such estimates will always tend to overestimate the amount of reading going on, such metrics still do add some value (this is related to a familiar point also made in Newman & Kohn; knowing that an estimate is biased is very different from having to conclude that the estimate is useless. Biased estimates can often add information even if you know they’re biased, and this is especially the case if you know in which direction the estimate is most likely to be biased). Having said this, here are some more numbers from a different source:
“Nearly 52 percent of Americans 18–24 years of age, and just over 50 percent of all American adults, read books for pleasure […] Bibby, et al. (2009) reported that 47 percent of Canadian teenagers 15–19 years of age received a “great deal” or “quite a bit” of pleasure from reading. […] Young Canadian readers were more likely to be female than male: 56 percent of those who reported pleasure reading were female, while only 35 percent were male […] In 2009, the publishing industry reported that men in the United States only accounted for 29 percent of purchases made within the adult fiction market, compared to 40 percent of the U.K. market (Bowker LLC, 2009). The NEA surveys also consistently suggest that more women read than men: about 42 percent of men are voluntary readers of literature (defined as novels, short stories, poems, or plays in print or online), compared to 58 percent of women […] Unfortunately the NEA studies do not include in-depth reading for work or school. If this were included, the overall rates and breakdowns by sex might look very different. […] While these studies suggest that reading is enjoyed by a substantial number of North Americans, on the flip side, about half of the populations surveyed are not readers.”
“In 2008, 98 percent of Canadian high school students aged 15 to 19 were using computers one hour a day or more (Bibby, et al., 2009). About one half of those teenagers were using their computers at least two hours a day, while another 20 percent were on their computers for three to four hours, and 20 percent used their computers five hours or more each day […]. More recently it has been reported that 18–34 year old Canadians are spending an average of 20 hours a week online (Ipsos, 2010). […] A Canadian study using the Statistics Canada 2005 General Social Survey found that both heavy and moderate Internet users spend more time reading books than people who do not use the Internet, although people in all three categories of Internet usage read similar numbers of magazines and newspapers”
It was my impression while reading this that it did not seem to have occurred to the researchers here that one might use a personal computer to read books (instead of an e-reader); that people don’t just use computers to read stuff online (…and play games, and watch movies, etc.), but that you can also use a computer to read books. It may not just be that ‘the sort of people who spend much time online are also the sort of people who’re more likely to read books when they’re not online’; it may also be that some of those ‘computer hours’ are actually ‘book hours’. I much prefer to read books on my computer to reading books on my e-reader if both options are available (of course one point of having an e-reader is that it’s often the case that both options are not available), and I don’t see any good reason to assume that I’m the only person feeling that way.
ii. Here’s a list of words I’ve encountered on vocabulary.com recently:
While writing this post I realized that the Merriam-Webster site also has a quiz one can play around with if one likes. I don’t think it’s nearly as useful as vocabulary.com’s approach if you want to learn new words, but I’m not sure how fair it is to even compare the two. I scored much higher than average the four times I took the test, but I didn’t like a couple of the questions in the second test because it seemed to me there were multiple correct answers. One of the ways in which vocabulary.com is clearly superior to this sort of test is of course that you’re able to provide them with feedback about issues like these, which in the long run should serve to minimize the number of problematic questions in the sample.
If you haven’t read along here very long you’ll probably not be familiar with the vocabulary.com site, and in that case you might want to read this previous post on the topic.
iii. A chess kibitzing video:
Just to let you know this is a thing, in case you didn’t know. I enjoy watching strong players play chess; it’s often quite a bit more fun than playing yourself.
iv. “a child came to the hospital with cigarette burns dotting his torso. almost every patch of skin that could be covered with a tee shirt was scarred. some of the marks were old, some were very fresh.
his parents said it was a skin condition.”
Lots of other heartwarming stories in this reddit thread. I’m actually not quite sure why I even read those; some of them are really terrible.
It’s been a long time since I had one of these. Questions? Comments? Random observations?
I hate posting posts devoid of content, so here’s some random stuff:
If you think the stuff above is all fun and games I should note that the topic of chirality, which is one of the things talked about in the lecture above, was actually covered in some detail in Gale’s book, which is hardly a book that spends a great deal of time talking about esoteric mathematical concepts. On a related note, the main reason why I have not blogged that book is incidentally that I lost all notes and highlights I’d made in the first 200 pages of the book when my computer broke down, and I just can’t face reading that book again simply in order to blog it. It’s a good book, with interesting stuff, and I may decide to blog it later, but I don’t feel like doing it at the moment; without highlights and notes it’s a real pain to blog a book, and right now it’s just not worth it to reread the book. Rereading books can be fun – I’ve incidentally been rereading Darwin lately and I may decide to blog this book soon; I imagine I might also choose to reread some of Asimov’s books before long – but it’s not much fun if you’re finding yourself having to do it simply because the computer deleted your work.
Here’s the abstract:
“Statistical power analysis provides the conventional approach to assess error rates when designing a research study. However, power analysis is flawed in that a narrow emphasis on statistical significance is placed as the primary focus of study design. In noisy, small-sample settings, statistically significant results can often be misleading. To help researchers address this problem in the context of their own studies, we recommend design calculations in which (a) the probability of an estimate being in the wrong direction (Type S [sign] error) and (b) the factor by which the magnitude of an effect might be overestimated (Type M [magnitude] error or exaggeration ratio) are estimated. We illustrate with examples from recent published research and discuss the largest challenge in a design calculation: coming up with reasonable estimates of plausible effect sizes based on external information.”
If a study has low power, you can get into a lot of trouble. Some problems are well known, others probably aren’t. A bit more from the paper:
“design calculations can reveal three problems:
1. Most obvious, a study with low power is unlikely to “succeed” in the sense of yielding a statistically significant result.
2. It is quite possible for a result to be significant at the 5% level — with a 95% confidence interval that entirely excludes zero — and for there to be a high chance, sometimes 40% or more, that this interval is on the wrong side of zero. Even sophisticated users of statistics can be unaware of this point — that the probability of a Type S error is not the same as the p value or significance level.
3. Using statistical significance as a screener can lead researchers to drastically overestimate the magnitude of an effect (Button et al., 2013).
Design analysis can provide a clue about the importance of these problems in any particular case.”
“Statistics textbooks commonly give the advice that statistical significance is not the same as practical significance, often with examples in which an effect is clearly demonstrated but is very small […]. In many studies in psychology and medicine, however, the problem is the opposite: an estimate that is statistically significant but with such a large uncertainty that it provides essentially no information about the phenomenon of interest. […] There is a range of evidence to demonstrate that it remains the case that too many small studies are done and preferentially published when “significant.” We suggest that one reason for the continuing lack of real movement on this problem is the historic focus on power as a lever for ensuring statistical significance, with inadequate attention being paid to the difficulties of interpreting statistical significance in underpowered studies. There is a common misconception that if you happen to obtain statistical significance with low power, then you have achieved a particularly impressive feat, obtaining scientific success under difficult conditions.
However, that is incorrect if the goal is scientific understanding rather than (say) publication in a top journal. In fact, statistically significant results in a noisy setting are highly likely to be in the wrong direction and invariably overestimate the absolute values of any actual effect sizes, often by a substantial factor.”
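The Type S and Type M quantities the paper discusses are easy to get a feel for by simulation. Here’s a minimal Monte Carlo sketch of a design calculation (not the authors’ own code): assume the estimate is normally distributed around a plausible true effect with a known standard error, keep only the ‘significant’ results, and see what they look like.

```python
import random
import statistics

def retrodesign(true_effect, se, n_sims=100_000, seed=1):
    """Design calculation in the spirit of Gelman & Carlin:
    power, Type S (sign) error rate, and Type M exaggeration ratio
    for a two-sided z-test at the 5% level."""
    random.seed(seed)
    crit = 1.96 * se  # significance threshold for the estimate
    estimates = [random.gauss(true_effect, se) for _ in range(n_sims)]
    sig = [est for est in estimates if abs(est) > crit]
    power = len(sig) / n_sims
    # Type S: among significant results, fraction with the wrong sign
    type_s = sum(est * true_effect < 0 for est in sig) / len(sig)
    # Type M: average |significant estimate| relative to the true effect
    exaggeration = statistics.mean(abs(est) for est in sig) / abs(true_effect)
    return power, type_s, exaggeration

# A hypothetical noisy study: true effect 1.0, standard error 2.0
power, type_s, exaggeration = retrodesign(1.0, 2.0)
```

With these (made-up) numbers the study has power below 10%, a non-trivial chance that a significant result points in the wrong direction, and significant estimates that overstate the true effect roughly fourfold – which is exactly the pattern the quoted passage warns about.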
iii. I’m sure most people who might be interested in following the match are already well aware that Anand and Carlsen are currently competing for the world chess championship, and I’m not going to talk about that match here. However I do want to mention to people interested in improving their chess that I recently came across this site, and that I quite like it. It only deals with endgames, but endgames are really important. If you don’t know much about endgames you may find the videos available here, here and here to be helpful.
iv. A link: Cross Validated: “Cross Validated is a question and answer site for people interested in statistics, machine learning, data analysis, data mining, and data visualization.”
A friend recently told me about this resource. I knew about the existence of StackExchange, but I haven’t really spent much time there. These days I mostly stick to books and a few sites I already know about; I rarely look for new interesting stuff online. This also means you should not automatically assume I surely already know about X when you’re considering whether to tell me about X in an Open Thread.
i. A while back I promised an update on the clinical trial I’ve been enrolled in, but I forgot about that stuff. Anyway I have learned that I was one of the patients who got the active drug, and they’d like me to continue taking the drug for another two years. I’ve decided to stay in the trial (technically I’m enrolling in a new trial, but…) and keep taking the drug.
ii. Given that the World Chess Championship has just started, this paper about chess ratings seems timely. Here’s incidentally the World Chess Championship main site. It’s started out with two draws, the last of them lasting only one hour and fifteen minutes or so – really disappointing but perhaps not that surprising; most championship match games tend in my opinion to be rather boring.
iii. This weekend I went to a Mensa ‘Game Day’ meetup – basically we got together and played various games (mostly board-games in my case, but no chess..) the entire day. This is one of the few ways I’m currently trying to step outside my comfort zone. It was sort of an okay experience and I’m glad I gave it a shot. But I did get bored towards the end and I felt very drained afterwards. I learned that this kind of thing is an inefficient way to get to know people. I was reminded that when you feel socially isolated and lonely you tend to think of social interaction with other people as much nicer than it actually often is in real life.
Sorry for the infrequent updates. What have you been up to? Read something interesting? Watched a good movie?
i. The Living Dead: Bacterial Community Structure of a Cadaver at the Onset and End of the Bloat Stage of Decomposition. There are a lot of questions one might ask about how the world works. Incidentally I should note that when I die I really wouldn’t mind contributing to a study like this. Here’s the abstract, with a couple of links added to ease understanding:
“Human decomposition is a mosaic system with an intimate association between biotic and abiotic factors. Despite the integral role of bacteria in the decomposition process, few studies have catalogued bacterial biodiversity for terrestrial scenarios. To explore the microbiome of decomposition, two cadavers were placed at the Southeast Texas Applied Forensic Science facility and allowed to decompose under natural conditions. The bloat stage of decomposition, a stage easily identified in taphonomy and readily attributed to microbial physiology, was targeted. Each cadaver was sampled at two time points, at the onset and end of the bloat stage, from various body sites including internal locations. Bacterial samples were analyzed by pyrosequencing of the 16S rRNA gene. Our data show a shift from aerobic bacteria to anaerobic bacteria in all body sites sampled and demonstrate variation in community structure between bodies, between sample sites within a body, and between initial and end points of the bloat stage within a sample site. These data are best not viewed as points of comparison but rather additive data sets. While some species recovered are the same as those observed in culture-based studies, many are novel. Our results are preliminary and add to a larger emerging data set; a more comprehensive study is needed to further dissect the role of bacteria in human decomposition.”
The introduction contains a good description of how decomposition in humans proceeds:
“A cadaver is far from dead when viewed as an ecosystem for a suite of bacteria, insects, and fungi, many of which are obligate and documented only in such a context. Decomposition is a mosaic system with an intimate association between biotic factors (i.e., the individuality of the cadaver, intrinsic and extrinsic bacteria and other microbes, and insects) and abiotic factors (i.e., weather, climate, and humidity) and therefore a function of a specific ecological scenario. Slight alteration of the ecosystem, such as exclusion of insects or burial, may lead to a unique trajectory for decomposition and potentially anomalous results; therefore, it is critical to forensics that the interplay of these factors be understood. Bacteria are often credited as a major driving force for the process of decomposition but few studies cataloging the microbiome of decomposition have been published […]
“A body passes through several stages as decomposition progresses, driven by dehydration and discernible by characteristic gross taphonomic changes. The early stages of decomposition are wet and marked by discoloration of the flesh and the onset and cessation of bacterially-induced bloat. During early decay, intrinsic bacteria begin to digest the intestines from the inside out, eventually digesting away the surrounding tissues. Enzymes from within the dead cells of the cadaver also begin to break down tissues (autolysis). During putrefaction, bacteria undergo anaerobic respiration and produce gases as by-products such as hydrogen sulfide, methane, cadaverine, and putrescine. The buildup of resulting gas creates pressure, inflating the cadaver, and eventually forcing fluids out. This purging event marks the shift from early decomposition to late decomposition and may not be uniform; the head may purge before the trunk, for example. Purge may also last for some period of time in some parts of the body even as other parts of the body enter the most advanced stages of decomposition. In the trunk, purge is associated with an opening of the abdominal cavity to the environment. At this point, the rate of decay is reported by several authors to greatly increase as larval flies remove large portions of tissues; however, mummification may also occur, thus serving to preserve tissues. The final stages of decomposition last through to skeletonization and are the driest stages.”
It’s really quite an interesting paper, but you probably don’t want to read this while you’re having dinner. A few other interesting observations and conclusions:
“Many factors can influence the bacteria detected in and on a cadaver, including the individual’s “starting” microbiome, differences in the decomposition environments of the two cadavers, and differences in the sites sampled at end-bloat. The integrity of organs at end-bloat varied between cadavers (as decomposition varied between cadavers) and did not allow for consistent sampling of sites across cadavers. Specifically, STAFS 2011-016 no longer had a sigmoidal colon at the end-bloat sample time.” […]
“With the exception of the fecal sample from STAFS 2011-006, which was the least rich sample in the study with only 26 unique OTUs [operational taxonomic units – US] detected, fecal samples were the richest of all body sites sampled, with an average of nearly 400 OTUs detected. The stomach sample was the second least rich sample, with small intestine and mouth samples slightly richer. The body cavity, transverse colon, and sigmoidal colon samples were much richer. Overall, these data show that as one moves from the upper gastrointestinal tract (mouth, stomach, and small intestine) to the lower gastrointestinal tract (colon and rectal/fecal), microbiome richness increases.” […]
“It is important to note that while difference in abundance seen in particular species between this study and the others noted above could be due to the discussed constraints of culturing bacteria, differences could also be due to a variety of factors such as individual variability between the cadaver microbiomes, seasonality, climate, and species of colonizing insects. Finally, abundance does not necessarily indicate metabolic significance for decomposition, a point of importance that our study cannot address.” […]
“Our data represent initial insights into the bacteria populating decomposing human cadavers and an early start to discovering successive changes through time. While our data support the findings of previous culture studies, they also demonstrate that bacteria not detected by culture-based methods comprise a large portion of the community. No definitive conclusion regarding a shift in community structure through time can be made with this data set.”
“Background: Diabetic renal disease (diabetic nephropathy) is a leading cause of end-stage renal failure. Once the process has started, it cannot be reversed by glycaemic control, but progression might be slowed by control of blood pressure and protein restriction.
Objectives: To assess the effects of dietary protein restriction on the progression of diabetic nephropathy in patients with diabetes.
Search strategy: We searched The Cochrane Library, MEDLINE, EMBASE, ISI Proceedings, Science Citation Index Expanded and bibliographies of included studies.
Selection criteria: Randomised controlled trials (RCTs) and before and after studies of the effects of a modified or restricted protein diet on diabetic renal function in people with type 1 or type 2 diabetes following diet for at least four months were considered.
Data collection and analysis: Two reviewers performed data extraction and evaluation of quality independently. Pooling of results was done by means of a random-effects model.
Main results: Twelve studies were included, nine RCTs and three before and after studies. Only one study explored all-cause mortality and end-stage renal disease (ESRD) as endpoints. The relative risk (RR) of ESRD or death was 0.23 (95% confidence interval (CI) 0.07 to 0.72) for patients assigned to a low protein diet (LPD). Pooling of the seven RCTs in patients with type 1 diabetes resulted in a non-significant reduction in the decline of glomerular filtration rate (GFR) of 0.1 ml/min/month (95% CI -0.1 to 0.3) in the LPD group. For type 2 diabetes, one trial showed a small insignificant improvement in the rate of decline of GFR in the protein-restricted group and a second found a similar decline in both the intervention and control groups. Actual protein intake in the intervention groups ranged from 0.7 to 1.1 g/kg/day. One study noted malnutrition in the LPD group. We found no data on the effects of LPDs on health-related quality of life and costs.
Authors’ conclusions: The results show that reducing protein intake appears to slightly slow progression to renal failure but not statistically significantly so. However, questions concerning the level of protein intake and compliance remain. Further longer-term research on large representative groups of patients with both type 1 and type 2 diabetes mellitus is necessary.”
The paper has a lot more. Do note that due to the link between kidney disease and dietary protein intake, at least one diabetic I know has actually considered the question of whether to adjust protein intake at an even earlier point in the disease process than the one contemplated in these studies, i.e. before the lab tests show that the kidneys have started to fail – this is hardly an outrageous idea given evidence in related fields. I do think, however, that the evidence is much too inconclusive in the case of diabetic nephropathy for anything like this to make much sense at this point. Lowering salt intake seems far more likely to have positive effects. I’d be curious to know whether the (very tentative…) finding that the type of dietary protein (‘chicken and fish vs red meat’) may matter for outcomes, and not just the amount of protein, holds; this seems very unclear at this point, but it’s potentially important as it also relates to the compliance/adherence problem.
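As an aside, the ‘random-effects model’ pooling mentioned in the review’s methods section can be sketched in a few lines. This is a generic DerSimonian-Laird estimator applied to made-up study effects (the per-study numbers below are purely illustrative, and the actual review of course used proper meta-analysis software):

```python
def random_effects_pool(effects, variances):
    """Pool per-study effect estimates with a DerSimonian-Laird
    random-effects model; returns (pooled effect, standard error)."""
    # inverse-variance (fixed-effect) weights and pooled estimate
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic and the tau^2 estimate
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # re-weight with the between-study variance added in
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = (1.0 / sum(w_re)) ** 0.5
    return pooled, se

# made-up per-study GFR-decline differences (ml/min/month) and variances
effects = [0.05, 0.20, 0.10, -0.02]
variances = [0.010, 0.015, 0.008, 0.012]
pooled, se = random_effects_pool(effects, variances)
print(f"pooled effect {pooled:.2f}, 95% CI "
      f"{pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f}")
```

The key difference from a fixed-effect pool is the tau² term: when the studies disagree more than sampling error alone would explain, every study gets down-weighted toward more equal weights and the confidence interval widens.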
“Archaeological excavations at a U-shaped pyramid in the northern Lake Titicaca Basin of Peru have documented a continuous 5-m-deep stratigraphic sequence of metalworking remains. The sequence begins in the first millennium AD and ends in the Spanish Colonial period ca. AD 1600. The earliest dates associated with silver production are 1960 ± 40 BP (2-sigma cal. 40 BC to AD 120) and 1870 ± 40 BP (2-sigma cal. AD 60 to 240) representing the oldest known silver smelting in South America. Scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS) analysis of production debris indicate a complex, multistage, high temperature technology for producing silver throughout the archaeological sequence. These data hold significant theoretical implications including the following: (i) silver production occurred before the development of the first southern Andean state of Tiwanaku, (ii) the location and process of silverworking remained consistent for 1,500 years even though political control of the area cycled between expansionist states and smaller chiefly polities, and (iii) that U-shaped structures were the location of ceremonial, residential, and industrial activities.”
A little more from the paper:
“Our data establish an initial date for silverworking that is at least three centuries earlier than previous studies had indicated. […] Three independent lines of evidence establish the chronological integrity of the deposit: 1) a ceramic sequence in uninterrupted stratigraphic layers, 2) absolute radiocarbon dates, and 3) absolute ceramic thermoluminescence (TL) dates (1). […] the two absolute dating methods are internally consistent, and […] these match the relative sequence derived from analyzing the diagnostic pottery or ceramics. The unit excavated at Huajje represents a rare instance of an intact, well-demarcated stratigraphic deposit that allows us to precisely define the material changes through time in silver production. […] The steps required for silver extraction include mining, beneficiation (i.e., crushing of the ore and sorting of metal-bearing mineral), optional roasting to remove sulfur via oxidation, followed by smelting, and cupellation […] Archaeological or ethnographic evidence for most of these steps is extremely scarce, making this a very significant assemblage for our understanding of early silver production. A total of 3,457 (7,215.84 g) smelting-related artifacts were collected.”
i. Yesterday I had a very bad and prolonged hypoglycemic episode which lasted hours. I was in a semi-conscious state for a long time before realizing there was a problem, and the situation did not improve much even after intake of significant amounts of dextrose. This is by far the closest I’ve been to a hospital admission for more than a year – I had both severe neurological symptoms and GI-tract involvement. I don’t think I’ve ever been admitted without GI-tract involvement, and this tends to worsen outcomes significantly – it’s hard to reverse a disease process the main treatment of which is putting stuff into your stomach and keeping it there if you have severe nausea and vomit up the stuff you eat.
I really hope that if something like this happens again I’ll be smart enough to actually call an ambulance, or at the very least involve other people so that they can help me if things go really wrong. I like to tell myself that I am a very self-reliant and independent person in general – the sort of person who doesn’t like to ask other people for help and so rarely does. And nobody likes to be seen and judged by others when they’re at their weakest. Combine these facts with the inherent difficulty of assessing whether a situation such as this one is sufficiently severe to merit involving other people while you’re having neurological symptoms impacting your thought processes and impairing judgment, and you have the perfect recipe for a situation where you end up making bad decisions and running a major risk of things going very wrong by not getting help. I should really become better at reminding myself (to the extent that it’s possible; as mentioned, impaired judgment is a symptom here, so this stuff is not completely under my control) that when I’m in a state like this I’m just a very sick person who may very well need other people’s help simply to survive. Type 1 diabetics die from such hypoglycemic episodes all the time.
Here’s a related post from the past.
ii. (Yet) A(nother) medical lecture:
iii. An event that changed the world:
“Objectives: To determine whether parachutes are effective in preventing major trauma related to gravitational challenge.
Design: Systematic review of randomised controlled trials.
Data sources: Medline, Web of Science, Embase, and the Cochrane Library databases; appropriate internet sites and citation lists.
Study selection: Studies showing the effects of using a parachute during free fall.
Main outcome measure: Death or major trauma, defined as an injury severity score > 15.
Results: We were unable to identify any randomised controlled trials of parachute intervention.
Conclusions: As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.”
v. On Being Sane in Insane Places, by David L. Rosenhan.
“At its heart, the question of whether the sane can be distinguished from the insane (and whether degrees of insanity can be distinguished from each other) is a simple matter: Do the salient characteristics that lead to diagnoses reside in the patients themselves or in the environments and contexts in which observers find them? From Bleuler, through Kretchmer, through the formulators of the recently revised Diagnostic and Statistical Manual of the American Psychiatric Association, the belief has been strong that patients present symptoms, that those symptoms can be categorized, and, implicitly, that the sane are distinguishable from the insane. More recently, however, this belief has been questioned. Based in part on theoretical and anthropological considerations, but also on philosophical, legal, and therapeutic ones, the view has grown that psychological categorization of mental illness is useless at best and downright harmful, misleading, and pejorative at worst. Psychiatric diagnoses, in this view, are in the minds of observers and are not valid summaries of characteristics displayed by the observed. [3-5]
Gains can be made in deciding which of these is more nearly accurate by getting normal people (that is, people who do not have, and have never suffered, symptoms of serious psychiatric disorders) admitted to psychiatric hospitals and then determining whether they were discovered to be sane and, if so, how. If the sanity of such pseudopatients were always detected, there would be prima facie evidence that a sane individual can be distinguished from the insane context in which he is found. Normality (and presumably abnormality) is distinct enough that it can be recognized wherever it occurs, for it is carried within the person. If, on the other hand, the sanity of the pseudopatients were never discovered, serious difficulties would arise for those who support traditional modes of psychiatric diagnosis. Given that the hospital staff was not incompetent, that the pseudopatient had been behaving as sanely as he had been out of the hospital, and that it had never been previously suggested that he belonged in a psychiatric hospital, such an unlikely outcome would support the view that psychiatric diagnosis betrays little about the patient but much about the environment in which an observer finds him.
This article describes such an experiment.”
Here’s the wikipedia article about the experiment. Below some more stuff from the paper:
“Eight sane people gained secret admission to 12 different hospitals. […] the pseudopatients were never detected. Admitted, except in one case, with a diagnosis of schizophrenia, each was discharged with a diagnosis of schizophrenia “in remission.” The label “in remission” should in no way be dismissed as a formality, for at no time during any hospitalization had any question been raised about any pseudopatient’s simulation. Nor are there any indications in the hospital records that the pseudopatient’s status was suspect. Rather, the evidence is strong that, once labeled schizophrenic, the pseudopatient was stuck with that label. If the pseudopatient was to be discharged, he must naturally be “in remission”; but he was not sane, nor, in the institution’s view, had he ever been sane. […] Length of hospitalization ranged from 7 to 52 days, with an average of 19 days.” […]
“Failure to detect sanity during the course of hospitalization may be due to the fact that physicians operate with a strong bias toward what statisticians call the Type 2 error. This is to say that physicians are more inclined to call a healthy person sick (a false positive, Type 2) than a sick person healthy (a false negative, Type 1). The reasons for this are not hard to find: it is clearly more dangerous to misdiagnose illness than health. Better to err on the side of caution, to suspect illness even among the healthy.” […]
“The following experiment was arranged at a research and teaching hospital whose staff had heard these findings but doubted that such an error could occur in their hospital. The staff was informed that at some time during the following three months, one or more pseudopatients would attempt to be admitted into the psychiatric hospital. Each staff member was asked to rate each patient who presented himself at admissions or on the ward according to the likelihood that the patient was a pseudopatient. A 10-point scale was used, with a 1 and 2 reflecting high confidence that the patient was a pseudopatient.
Judgments were obtained on 193 patients who were admitted for psychiatric treatment. All staff who had had sustained contact with or primary responsibility for the patient — attendants, nurses, psychiatrists, physicians, and psychologists — were asked to make judgments. Forty-one patients were alleged, with high confidence, to be pseudopatients by at least one member of the staff. Twenty-three were considered suspect by at least one psychiatrist. Nineteen were suspected by one psychiatrist and one other staff member. Actually, no genuine pseudopatient (at least from my group) presented himself during this period.
The experiment is instructive. It indicates that the tendency to designate sane people as insane can be reversed when the stakes (in this case, prestige and diagnostic acumen) are high. But what can be said of the 19 people who were suspected of being “sane” by one psychiatrist and another staff member? Were these people truly “sane” or was it rather the case that in the course of avoiding the Type 2 error the staff tended to make more errors of the first sort — calling the crazy “sane”? There is no way of knowing. But one thing is certain: any diagnostic process that lends itself too readily to massive errors of this sort cannot be a very reliable one. […]
It is clear that we cannot distinguish the sane from the insane in psychiatric hospitals. The hospital itself imposes a special environment in which the meaning of behavior can easily be misunderstood. The consequences to patients hospitalized in such an environment — the powerlessness, depersonalization, segregation, mortification, and self-labeling — seem undoubtedly counter-therapeutic.”
“The share of one-person households in the U.S. maintained by men ages 15 to 64 rose to 34% in 2012, up from 23% in 1970, according to a Census report on the status of families released Tuesday. For women of the same age, this figure actually dropped slightly, to 30% in 2012 from 31% in 1970.
The findings may reflect, in part, the sharp increase in divorce rates in the U.S. throughout the 1970s, Census said. The dominant living arrangement for children following their parents’ divorce is custody by mothers.”
I would have preferred to read the actual Census report and I did go have a look for it; but when I click the pdf link to the report in question at the census site all I get is an error message (link) – they seem to have put up a corrupt link. Annoying. Here are some related Danish numbers which I blogged a while ago. Although the 2012 report doesn’t seem to be available, there’s a lot of 2009-2011 data on related matters here. I messed around a little with that data – below some stuff from that source:
Naturally there’s a big gender disparity; at the age range of 24-29, 89.1% of males have never married whereas only 80.7% of the females have never married. For people in the 25-29 year age range 64% of males and 50.1% of females have never married. You’d expect the numbers to converge somewhat ‘over time’ (/as people get older) and they do, but not until we reach the age group of 55-64 year olds does the proportion of females who have never married surpass the proportion of males who have not (and these numbers are quite small – less than 9% have never married at that age, both when looking at males and females).
Higher earnings seem to confer an advantage when it comes to minimizing the risk of never getting married, which is of course a big surprise. For example, of the 45-49 year old people with a reported income of $25,000 to $39,999, 17.6% have never married, whereas the corresponding number for people with an income of $40,000-$75,000 is 11.5%. For people with incomes in the $75,000-$100,000 range the number is 5.5%, and incidentally the number of 45-49 year olds with incomes above $100k who’ve never married is also 5.5%. The relationship is not perfectly linear, but it’s clear that people with higher earnings have a higher likelihood of getting married. Incidentally almost a third of people in that age range who reported annual earnings less than $5,000 have never married (29.2%).
The numbers above are from the first third of the first document. There’s a lot of data available here if you’re curious.
ii. Global Reality of Type 1 Diabetes Care in 2013. Not much to see here – here’s why I bookmarked it:
“from a global perspective, the most common cause of death for a child with type 1 diabetes is lack of access to insulin (2). Yet, this is not just a problem for low-income countries, with one recent study in the U.S. noting that discontinuation of insulin therapy represents the leading precipitating cause of diabetic ketoacidosis (3). Indeed, lack of insulin explained 68% of such episodes in people living in an inner-city setting, with approximately one-third of people reporting a lack of financial resources to buy insulin and eking out their insulin supplies.”
We’re talking about the United States of America, a very rich country – and in fact the country in the world with the highest health care expenditures. And still you have type 1 diabetics who go into ketoacidosis because they can’t afford their drugs. That’s messed up. Note that low medical subsidies to type 1s may not necessarily be cost saving at a systemic level, as hospital admissions are very expensive; based on the average estimates at the link and these length of stay estimates, a back of the envelope estimate of the average cost of a DKA-related hospital admission would be $5,500. This estimate is probably too low, as this study (which I may blog in more detail later) estimated non-compliance-related DKA admissions to cost on average roughly $7,500 (and the non-compliance admissions were actually significantly cheaper than the other admissions on a per-case basis). To put this estimate into perspective, the mean annual cost of intensive diabetes care per diabetic patient in the U.S. is $4,000 (same link).
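For what it’s worth, the back-of-the-envelope calculation here is just average daily cost times average length of stay; the inputs below are illustrative placeholders, not the actual figures from the linked sources:

```python
# Back-of-the-envelope cost of a DKA-related hospital admission.
# Both inputs are assumed illustrative values, not the figures from
# the cost and length-of-stay links in the post.
cost_per_day = 1375    # assumed average daily hospital cost, USD
avg_stay_days = 4      # assumed average length of stay for DKA

admission_cost = cost_per_day * avg_stay_days
print(f"estimated cost per admission: ${admission_cost:,}")  # → $5,500
```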
iii. Related to i., but I figured it deserved to be linked to separately: A theory of marriage, by Gary Becker.
iv. Some maps illustrating racial segregation patterns in the US. Don’t miss the sixth map of Detroit. The one of Saint Louis is also…
v. Vocabulary.com. I haven’t used it much yet, so I don’t really know if it’s any good – but it looks interesting and I’ve missed such a resource. I sometimes feel a bit guilty about not working harder on improving my vocabulary, especially on account of the fact that I’ve basically ended up only speaking two languages – I used to speak French reasonably well, but that’s many years ago and at this point I’d rather spend time improving my English than spend a lot of effort on a third language which most likely will only be of very limited use to me.
“Robots offer new possibilities for investigating animal social behaviour. This method enhances controllability and reproducibility of experimental techniques, and it allows also the experimental separation of the effects of bodily appearance (embodiment) and behaviour. In the present study we examined dogs’ interactive behaviour in a problem solving task (in which the dog has no access to the food) with three different social partners, two of which were robots and the third a human behaving in a robot-like manner. The Mechanical UMO (Unidentified Moving Object) and the Mechanical Human differed only in their embodiment, but showed similar behaviour toward the dog. In contrast, the Social UMO was interactive, showed contingent responsiveness and goal-directed behaviour and moved along varied routes. The dogs showed shorter looking and touching duration, but increased gaze alternation toward the Mechanical Human than to the Mechanical UMO. This suggests that dogs’ interactive behaviour may have been affected by previous experience with typical humans. We found that dogs also looked longer and showed more gaze alternations between the food and the Social UMO compared to the Mechanical UMO. These results suggest that dogs form expectations about an unfamiliar moving object within a short period of time and they recognise some social aspects of UMOs’ behaviour. This is the first evidence that interactive behaviour of a robot is important for evoking dogs’ social responsiveness.”
From the discussion:
“The aim of this study was to investigate whether dogs are able to differentiate agents on the basis of their behaviour and show social behaviours toward an UMO (Unidentified Moving Object) if the agent behaves appropriately in an interactive situation. In order to observe such interaction we modelled an experimental situation in which the dog is faced with inaccessible food. Miklósi et al. showed that in this case dogs increase their looking time at a human helper and show gaze alternation between the inaccessible food and the human. These observations have been replicated by Gaunet and Horn et al., and the authors implicated that the dogs’ behaviour reflects communicative intentions. The present experiment showed that these behaviour features also emerge in the dogs while they are interacting with an UMO, moreover the onset of these behaviours is facilitated by the social features of the UMO: Dogs look longer and show more gaze alternation if the UMO carries eyes, shows variations in its path of movement, displays goal-directed behaviour and contingent reactivity (reacts to the looking action of the dog by retrieving the inaccessible food item).”
If you’re curious about how they actually did this stuff, don’t miss the neat video towards the end.
“Previous studies have shown that estimations of the calorie content of an unhealthy main meal food tend to be lower when the food is shown alongside a healthy item (e.g. fruit or vegetables) than when shown alone. This effect has been called the negative calorie illusion and has been attributed to averaging the unhealthy (vice) and healthy (virtue) foods leading to increased perceived healthiness and reduced calorie estimates. The current study aimed to replicate and extend these findings to test the hypothesized mediating effect of ratings of healthiness of foods on calorie estimates. […] The first two studies failed to replicate the negative calorie illusion. In a final study, the use of a reference food, closely following a procedure from a previously published study, did elicit a negative calorie illusion. No evidence was found for a mediating role of healthiness estimates. […] The negative calorie illusion appears to be a function of the contrast between a food being judged and a reference, supporting the hypothesis that the negative calorie illusion arises from the use of a reference-dependent anchoring and adjustment heuristic and not from an ‘averaging’ effect, as initially proposed. This finding is consistent with existing data on sequential calorie estimates, and highlights a significant impact of the order in which foods are viewed on how foods are evaluated.” […]
The basic idea behind the ‘averaging effect’ above is that your calorie estimate depends on how ‘healthy’ you assume the dish to be; the intuition is that if you see an apple next to an ice cream, you may think of the dish as healthier than if the apple weren’t there, and that might lead to faultier estimates of the actual number of calories in the dish (incidentally, such an effect is presumably detectable even if people correctly infer that the latter dish has more calories than the former; what’s of interest here is the estimation error, not the estimate itself). These guys have a hard time finding a negative calorie illusion at all (they don’t in the first two studies), and in the case where they do, the mechanism is different from the one initially proposed; it seems to them that the story to be told is one about anchoring effects. I like it when replication attempts get published, especially when they fail to replicate – such studies are important. Here are a few more remarks from the study, about ‘real-world implications’:
“Calorie estimates are a simple measure of participant’s perception of foods; however they almost certainly do not reflect actual factual knowledge about a food’s calorie content. It is not currently known whether calorie estimates are related to the expected satiety for a food, or anticipated tastiness. The data from the current studies fail to show that calorie estimates are derived directly from the healthiness ratings of foods. Other studies have shown that calorie estimates are influenced by the restaurant from which a food is purchased , as well as the order in which foods are presented [current study, 11], very much supporting the contextually sensitive nature of calorie estimates. And there is some evidence that erroneous calorie estimates alter portion size selection  and that lower calorie estimates for a main meal item have been shown to alter selection for drinks and side dishes .
Based on the current data, a negative calorie illusion is unlikely to be driving systematic failures in calorie estimations when incidental “healthy foods”, such as fruit and vegetables, are viewed alongside energy dense nutrition poor foods in advertisements or food labels. Foods would need to be viewed in a pre-determined sequence for systematic errors in real-world instances of calorie estimates. A couple of examples when this might occur are when food items are viewed in a meal with courses (starter, main, dessert) or when foods are seen in a specified order as they are positioned on a food menu or within the pathway around a supermarket from the entrance to the checkout tills.”
iii. You can read some pages from Popper’s Conjectures and Refutations here. A few quotes:
“I found that those of my friends who were admirers of Marx, Freud, and Adler, were impressed by a number of points common to these theories, and especially by their apparent explanatory power. These theories appeared to be able to explain practically everything that happened within the fields to which they referred. The study of any of them seemed to have the effect of an intellectual conversion or revelation, opening your eyes to a new truth hidden from those not yet initiated. Once your eyes were thus opened you saw confirming instances everywhere: the world was full of verifications of the theory. Whatever happened always confirmed it. Thus its truth appeared manifest; and unbelievers were clearly people who did not want to see the manifest truth; who refused to see it, either because it was against their class interest, or because of their repressions which were still “un-analysed” and crying aloud for treatment.
The most characteristic element in this situation seemed to me the incessant stream of confirmations, of observations which “verified” the theories in question; and this point was constantly emphasized by their adherents. A Marxist could not open a newspaper without finding on every page confirming evidence for his interpretation of history; not only in the news, but also in its presentation – which revealed the class bias of the paper – and especially of course in what the paper did not say. The Freudian analysts emphasized that their theories were constantly verified by their “clinical observations.” […] I could not think of any human behaviour which could not be interpreted in terms of either theory [Freud or Adler]. It was precisely this fact – that they always fitted, that they were always confirmed – which in the eyes of their admirers constituted the strongest argument in favour of these theories. It began to dawn on me that this apparent strength was in fact their weakness. […]
These considerations led me in the winter of 1919–20 to conclusions which I may now reformulate as follows.
(1) It is easy to obtain confirmations, or verifications, for nearly every theory – if we look for confirmations.
(2) Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory – an event which would have refuted the theory.
(3) Every “good” scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.
(4) A theory which is not refutable by any conceivable event is nonscientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.
(5) Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability; some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.
(6) Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of “corroborating evidence.”)
(7) Some genuinely testable theories, when found to be false, are still upheld by their admirers – for example by introducing ad hoc some auxiliary assumption, or by re-interpreting the theory ad hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status. (I later described such a rescuing operation as a “conventionalist twist” or a “conventionalist stratagem.”)
One can sum up all this by saying that the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.”
iv. A couple of physics videos:
“Global eradication of polio has been the ultimate game of Whack-a-Mole for the past decade; when it seems the virus has been beaten into submission in a final refuge, up it pops in a new region. Now, as vanquishing polio worldwide appears again within reach, another insidious threat may be in store from infection sources hidden in plain view.
Polio’s latest redoubts are “chronic excreters,” people with compromised immune systems who, having swallowed weakened polioviruses in an oral vaccine as children, generate and shed live viruses from their intestines and upper respiratory tracts for years. Healthy children react to the vaccine by developing antibodies that shut down viral replication, thus gaining immunity to infection. But chronic excreters cannot quite complete that process and instead churn out a steady supply of viruses. The oral vaccine’s weakened viruses can mutate and regain wild polio’s hallmark ability to paralyze the people it infects. After coming into wider awareness in the mid-1990s, the condition shocked researchers. […] Chronic excreters are generally only discovered when they develop polio after years of surreptitiously spreading the virus.”
Wikipedia incidentally has a featured article about Poliomyelitis here.
i. How To Write Badly Well. The blog is no longer active, but there’s a lot of good stuff in the archives. A collection of good links here. Some posts from the blog that made me smile or laugh: Choose a narrator who is peripheral to the story, Select words for their impressiveness rather than their relevance, Find the bone mote, Let your characters explain themselves, Underestimate your audience.
“The tendency for genetically unrelated individuals to build large-scale cooperative networks in human societies is a major exception in the animal kingdom . Researchers have suggested that the principle of indirect reciprocity–the idea that altruistic (or prosocial) behavior toward an individual is returned by another individual–is crucial in enabling these cooperative networks , . Three different forms of indirect reciprocity exist: social indirect (downstream) –, generalized (upstream) , , and generalized indirect . In this study, we focus on social indirect reciprocity (SIR), which means that if A helps B, then C will help A, who acted cooperatively toward B; this is based on individuals’ evaluations of others’ prior behaviors toward third parties –. SIR is associated with social evaluation or moral judgment in humans and seems to be most important form for human prosociality. SIR is more elaborate than the other two forms of indirect reciprocity and requires individuals to recognize and select those with whom they cooperate , . Through computer simulations and analytic models, previous studies have demonstrated that SIR could evolve when individuals act according to particular strategies , . In all such strategies, individuals have the tendency (1) to reward helpful individuals and (2) to detect and avoid helping cheaters , .
In reality, studies with human adults have demonstrated a behavioral tendency toward SIR in the decision to cooperate or defect in game experiments , . However, there are relatively few studies on SIR in children. Therefore, investigating whether young children have a tendency toward SIR, as well as the manner in which such reciprocity develops during the early developmental stages, will help us understand how and when this tendency, that is so fundamental in organizing cooperative interactions between adults, takes root in people’s lives.
Prosocial behavior can be observed from the first year of a child’s life  and becomes common between ages 1 and 2 . Additionally, even 14-month-olds have been shown to be capable of helping others achieve their goals . However, this early prosocial tendency does not seem to be selective with regard to recipients , . Such selectivity begins to appear between toddlerhood and the preschool period. For example, prosocial behavior becomes selective in terms of partners’ gender and personality , , familiarity between partners , , or the existence of prior prosocial behavior from the partners, thereby suggesting that children engage in direct reciprocity –. However, this selectivity is based on the partners’ own characteristics or behavior toward the potential helper itself. In order to build cooperative relationships through SIR, children require a more elaborate selective ability based on the social evaluation of a partner’s behavior toward a third party. […]
In this study, we investigated whether preschool children, in their natural interactions, have a tendency to behave prosocially or affiliatively toward a peer after they have directly observed the peer behaving prosocially toward a third party. The results showed that bystanders performed prosocial and affiliative behaviors toward focal children more frequently after the focal children’s prosocial behavior toward third parties, compared with control situations. This indicates that 5- to 6-year-olds have a behavioral tendency toward prosocial and affiliative behaviors according to the recipient peers’ prior prosocial behavior toward another peer. […] bystanders tended to engage in prosocial and affiliative behaviors toward peers more frequently soon after observing the peers’ prosocial behavior toward other peers than in control situations, even when the possibility that bystanders had imitated the first recipient’s or other bystanders’ prosocial behavior was eliminated. […]
we conclude that preschool children have an essential behavioral tendency to establish SIR when interacting with their peers in naturalistic settings, as well as in their interactions with puppets or adult actors, as found in previous studies –. Our results, which extend the findings of previous studies –, suggest that preschool children not only have the ability to evaluate partners on the basis of their prosocial behavior toward a third party, they can also use this ability in natural interactions with their peers. This study is the first to provide evidence indicating that children’s prosocial interactions can be formed for the benefits derived from exchanging prosocial behaviors according to the mechanism of SIR in natural interactions with their peers. […]
SIR arises from two aspects of motivation: reward helpful individuals and avoid helping (or punish) harmful individuals , –. The present study explored only the behavioral tendency related to the former aspect and did not examine the latter. Previous studies have demonstrated that a “negativity bias” (a greater impact of negative information as compared with positive information) affects the behavioral tendencies of infants , , . Vaish et al.  has shown that 3-year-olds’ prosocial behavior decreased toward a harmful individual but did not increase toward a helpful individual. Negativity bias has also been demonstrated in nonhuman primate: capuchin monkeys avoided non-reciprocal and non-helpful individuals with intentionality rather than express a preference for reciprocal or helpful ones , .”
iii. Benford’s very strange law:
I don’t actually think this lecture is all that great, but I watched it all and figured I might as well blog it. I think I’ve written about Benford’s law before; I’ve certainly read about it before. Here’s wikipedia on the subject.
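Benford’s law says that in many naturally occurring data sets the leading digit d appears with probability log10(1 + 1/d), so ‘1’ leads about 30% of the time. A quick illustration – the powers of 2 are a standard example of a sequence that obeys the law:

```python
import math
from collections import Counter

# Benford's law: P(leading digit = d) = log10(1 + 1/d).
def leading_digit(n: int) -> int:
    return int(str(n)[0])

# Tally the leading digits of 2^1 .. 2^1000.
counts = Counter(leading_digit(2 ** k) for k in range(1, 1001))
total = sum(counts.values())

for d in range(1, 10):
    observed = counts[d] / total
    predicted = math.log10(1 + 1 / d)
    print(f"digit {d}: observed {observed:.3f}, Benford predicts {predicted:.3f}")
```

The observed frequencies track the predicted ones closely: roughly 30% of the powers of 2 start with a 1, while fewer than 5% start with a 9.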
iv. I couldn’t help noticing the results of a new study suggesting that alcohol consumption (/during adolescence?) may be a significant risk factor for young-onset dementia (note the hazard ratio). I assume the study will be put up on the author’s website at some point – I’m annoyed I can’t access it yet.
This is a huge study (n=488,484), so it’s the kind of thing you’ll want to check out if you want to know more about this topic. Alcohol consumption has incidentally been linked to mental decline and dementia risk for a very long time, see e.g. this publication from around the year 2001.
v. A little stuff about mitochondrial DNA and related matters, via John Hawks. Do watch the video – it’s not long. Perhaps not surprisingly, Dawkins covered related stuff in The Ancestor’s Tale.
vi. In one of his recent posts, Ed Yong tells you a little about what happens inside you when a mosquito bites.
vii. Eyjafjallajökull and 9/11: The Impact of Large-Scale Disasters on Worldwide Mobility. I’ve only just briefly skimmed this, but it looks interesting. A quote:
“We model the impact of Eyjafjallajökull and 9/11 on the WAN [worldwide air transportation network] by removing the same set of airports that were closed in response to these events (see Text S1 Sec. S2.1) together with the non-stop flights to and from these airports. Our method captures the dynamic re-routing of passengers at functional airports to avoid obstructed multi-stop connections through airports that close. At their peak, on April 16th 2010, the closures due to Eyjafjallajökull’s ash cloud interrupted 20.5% of the total traffic and closed 10.5% of all airports. The closure of American and Canadian airspace as a response to the 9/11 attacks was even more severe, removing 37.7% of air traffic and 19.6% of all airports.”
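The static part of the removal step they describe – delete a set of airports together with all non-stop flights touching them, and measure the fraction of traffic lost – is easy to sketch. The airports and routes below are invented for illustration; the real study used the full worldwide network plus dynamic passenger re-routing, which this toy version does not attempt:

```python
# Toy sketch of static airport closure: remove a set of airports and
# count the fraction of flights (edges) interrupted. Routes are made up;
# the paper's model additionally re-routes passengers dynamically.
flights = {
    ("JFK", "LHR"), ("JFK", "CDG"), ("LHR", "CDG"),
    ("LHR", "KEF"), ("KEF", "CPH"), ("CPH", "CDG"),
    ("JFK", "YYZ"), ("YYZ", "LHR"),
}

def traffic_lost(flights, closed):
    """Fraction of flights interrupted when the airports in `closed` shut down."""
    removed = {f for f in flights if f[0] in closed or f[1] in closed}
    return len(removed) / len(flights)

print(traffic_lost(flights, {"KEF"}))          # closing KEF cuts 2 of 8 routes
print(traffic_lost(flights, {"JFK", "YYZ"}))   # a wider closure cuts 4 of 8
```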
i. I’ve read The Murder of Roger Ackroyd. I’ll say very little about the book here because I don’t want to spoil it in any way – but I do want to say that the book is awesome. I read it in one sitting, and I gave it 5 stars on goodreads (av.: 4.09); I think it’s safe to say it’s one of the best crime novels I’ve ever read (and I’ll remind you again that even though I haven’t read that much crime fiction, I have read some – e.g. every Sherlock Holmes story ever published and every inspector Morse novel written by Colin Dexter). The cleverness of the plot reminded me of a few Asimov novels I read a long time ago. A short while after I’d finished the book, I was in the laundry room about to start the washing machine when a big smile spread across my face – I was actually close to laughing – because damn, the book is just so clever, so brilliant!
I highly recommend the book.
ii. I have been watching a few of the videos in the Introduction to Higher Mathematics youtube-series by Bill Shillito, here are a couple of examples:
I’m not super impressed by these videos at this point, but I figured I might as well link to them anyway. There are 19 videos in the playlist.
iii. Mind the Gap: Disparity Between Research Funding and Costs of Care for Diabetic Foot Ulcers. A brief comment from this month’s issue of Diabetes Care. The main point:
“Diabetic foot ulceration (DFU) is a serious and prevalent complication of diabetes, ultimately affecting some 25% of those living with the disease (1). DFUs have a consistently negative impact on quality of life and productivity […] Patients with DFUs also have morbidity and mortality rates equivalent to aggressive forms of cancer (2). These ulcers remain an important risk factor for lower-extremity amputation as up to 85% of amputations are preceded by foot ulcers (6). It should therefore come as no surprise that some 33% of the $116 billion in direct costs generated by the treatment of diabetes and its complications was linked to the treatment of foot ulcers (7). Another study has suggested that 25–50% of the costs related to inpatient diabetes care may be directly related to DFUs (2). […] The cost of care of people with diabetic foot ulcers is 5.4 times higher in the year after the first ulcer episode than the cost of care of people with diabetes without foot ulcers (10). […]
We identified 22,531 NIH-funded projects in diabetes between 2002–2011. Remarkably, of these, only 33 (0.15%) were specific to DFUs. Likewise, these 22,531 NIH-funded projects yielded $7,161,363,871 in overall diabetes funding, and of this, only $11,851,468 (0.17%) was specific to DFUs. Thus, a 604-fold difference exists between overall diabetes funding and that allocated to DFUs. […] As DFUs are prevalent and have a negative impact on the quality of life of patients with diabetes, it would stand to reason that U.S. federal funding specifically for DFUs would be proportionate with this burden. Unfortunately, this yawning gap in funding (and commensurate development of a culture of sub-specialty research) stands in stark contrast to the outsized influence of DFUs on resource utilization within diabetes care. This disparity does not appear to be isolated to [the US].”
I’ve read about diabetic foot care before, but I had no idea about this stuff. Of the roughly 175,000 peer-reviewed publications about diabetes published in the period 2000–2009, only 1,200 of them – 0.69% – were about the diabetic foot. You can quibble over the cost estimates and argue that perhaps they’re overstated because these guys want more money, but I think it’s highly unlikely that the uncertainties related to the cost estimates are big enough to make the current (research) resource allocation scheme appear cost-efficient in a CBA with reasonable assumptions – there simply has to be some low-hanging fruit here.
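The headline ratios are easy to reproduce from the figures given in the quote:

```python
# Figures taken from the quoted Diabetes Care comment (NIH, 2002-2011).
total_diabetes_funding = 7_161_363_871   # overall diabetes funding ($)
dfu_funding = 11_851_468                 # of which specific to DFUs ($)

fold_difference = total_diabetes_funding / dfu_funding
dfu_share = dfu_funding / total_diabetes_funding

print(f"{fold_difference:.0f}-fold difference")   # 604-fold
print(f"DFU share of funding: {dfu_share:.2%}")   # 0.17%

# Same exercise for the publication counts (175,000 diabetes papers,
# 1,200 on the diabetic foot, 2000-2009):
print(f"{1_200 / 175_000:.2%} of diabetes papers concerned the diabetic foot")
```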
A slightly related (if you stretch the definition of ‘related’ a little) article which I also found interesting here.
iv. “How quickly would the ocean’s drain if a circular portal 10 meters in radius leading into space was created at the bottom of Challenger Deep, the deepest spot in the ocean? How would the Earth change as the water is being drained?”
And, “Supposing you did Drain the Oceans, and dumped the water on top of the Curiosity rover, how would Mars change as the water accumulated?”
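As a rough sanity check on the timescale involved – this is my own back-of-envelope estimate, not the article’s calculation – one can treat the portal as a drain with Torricelli outflow v = sqrt(2gh) at a head equal to the depth of Challenger Deep. The head actually falls as the ocean empties, so this underestimates the time, but it gives the order of magnitude:

```python
import math

# Back-of-envelope estimate (my own assumptions, not the article's numbers):
# Torricelli outflow at constant head equal to the depth of Challenger Deep,
# and a total ocean volume of ~1.34e9 km^3. The head falls as the ocean
# drains, so the true timescale is longer than this.
g = 9.81                      # m/s^2
depth = 10_935                # m, Challenger Deep
radius = 10                   # m, portal radius
ocean_volume = 1.34e9 * 1e9   # km^3 -> m^3

v = math.sqrt(2 * g * depth)     # outflow speed, ~460 m/s
area = math.pi * radius ** 2     # portal area, ~314 m^2
flow = v * area                  # volumetric flow, m^3/s

years = ocean_volume / flow / (365.25 * 24 * 3600)
print(f"roughly {years:,.0f} years to drain")  # hundreds of thousands of years
```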
v. Take news of cancer ‘breakthrough’ with a big grain of salt. I’d have added the word ‘any’ and probably an ‘s’ to the word breakthrough as well if I’d authored the headline, in order to make a more general point – but be that as it may… The main thrust:
“scientific breakthroughs should not be announced at press conferences using the vocabulary of public relations professionals.
The language of science and medicine should be cautious and humble because diseases like cancer are relentless and humbling. […]
The reality is that biomedical research is a slow process that yields small incremental results. If there is a lesson to retain from the tale of CFI-400945, it’s that finding new treatments takes a lot of time and a lot of money. It is a venture worthy of support, but unworthy of exaggerated expectations and casual overstatement.
Hype only serves to create false hope.”
People who’re not familiar with how science actually works (and how related processes such as drug development work) often have weird ideas about how fast things tend to proceed, and about how likely – or unlikely – a ‘promising’ result in the lab is to be translated into, say, a new treatment option available to the general patient population. And yeah, that set of ‘people who’re not familiar with how science works’ includes almost everybody.
It should be noted, as I’m sure Picard knows, that it’s a lot easier to get funding for your project if you exaggerate benefits and downplay costs; if you’re too optimistic; if you say nice things about the guy writing the checks even though you think he’s an asshole; etc. Some types of dishonesty are probably best perceived as nothing more than ‘good salesmanship’, whereas other types might have different interpretations; but either way it’d be silly to pretend that stuff like false hope doesn’t sell a lot of tickets (and newspapers, and diluted soap water, and…). Given that, it’s hardly likely that things will change much anytime soon – the demand for information here is much higher than the demand for accurate information. But it’s nice to read an article like this one every now and then anyway.
I won’t talk much about these links or cover them in any detail – but I do encourage you to have a closer look if some of this stuff sounds interesting:
Given how long people have known about stuff like the Hawthorne effect, I almost can’t believe nobody got the idea of doing something like this before. I have no problem believing the results, however.
ii. Finnish war pics. Fascinating stuff.
iii. The kind of people who apparently receive elite research prizes in Denmark these years – exhibit B: Claudia Welz (Danish link). Unfortunately I couldn’t find a good English webpage describing her activities, in order to illustrate just how mad it is that a person like that receives that kind of money from the Danish taxpayers in order to do the kind of ‘research’ she does, and my life is definitely too short to translate the crap that’s put up at the Danish site.
Exhibit A is of course Milena Penkowa. Naturally more deserving people have received the prize as well this year – at least most of the recipients probably won’t feel any strong need to talk about imaginary entities in their publications.
Here’s a related link (in Danish). It’d be a lot cheaper to just give these people unemployment insurance. I’m sure not all of this research is equally useless, but even so my willingness to pay for this kind of stuff is – let’s put it diplomatically – not exactly super high. I don’t really understand why people cannot just study that kind of stuff (and less useless stuff…) on their own time, when they’re not working.
iv. A few more Steven Farmer pharmacology lectures:
There’s a bit of annoying microphone-related noise in parts of the second and third videos, but aside from that they’re quite good, and the noise shouldn’t stop you from watching if you find the topics covered interesting.
Some reviews I had a look at after browsing the site:
i. Omega 3 fatty acids for prevention and treatment of cardiovascular disease (Review), by Hooper et al.
“Main results Forty eight randomised controlled trials (36,913 participants) and 41 cohort analyses were included. Pooled trial results did not show a reduction in the risk of total mortality or combined cardiovascular events in those taking additional omega 3 fats (with significant statistical heterogeneity). Sensitivity analysis, retaining only studies at low risk of bias, reduced heterogeneity and again suggested no significant effect of omega 3 fats. […]
Authors’ conclusions It is not clear that dietary or supplemental omega 3 fats alter total mortality, combined cardiovascular events or cancers in people with, or at high risk of, cardiovascular disease or in the general population. There is no evidence we should advise people to stop taking rich sources of omega 3 fats, but further high quality trials are needed to confirm suggestions of a protective effect of omega 3 fats on cardiovascular health.”
(The review has 196 pages, so naturally there’s more stuff here if you’re interested…)
“A total of 53 trials met inclusion criteria for one or more of the comparisons in the review. Thirteen trials compared a group programme with a self-help programme; there was an increase in cessation with the use of a group programme (N = 4375, relative risk (RR) 1.98, 95% confidence interval (CI) 1.60 to 2.46). There was statistical heterogeneity between trials in the comparison of group programmes with no intervention controls so we did not estimate a pooled effect. We failed to detect evidence that group therapy was more effective than a similar intensity of individual counselling. There was limited evidence that the addition of group therapy to other forms of treatment, such as advice from a health professional or nicotine replacement, produced extra benefit. There was variation in the extent to which those offered group therapy accepted the treatment. Programmes which included components for increasing cognitive and behavioural skills were not shown to be more effective than same length or shorter programmes without these components.
Group therapy is better for helping people stop smoking than self help, and other less intensive interventions. There is not enough evidence to evaluate whether groups are more effective, or cost-effective, than intensive individual counselling. There is not enough evidence to support the use of particular psychological components in a programme beyond the support and skills training normally included.”
“36 trials were included but most were small and of duration less than three months. Nine trials were of six months duration (2016 patients). These longer trials were the more recent trials and generally were of adequate size, and conducted to a reasonable standard. Most trials tested the same standardised preparation of Ginkgo biloba, EGb 761, at different doses, which are classified as high or low.
The results from the more recent trials showed inconsistent results for cognition, activities of daily living, mood, depression and carer burden. Of the four most recent trials to report results three found no difference between Ginkgo biloba and placebo, and one reported very large treatment effects in favour of Ginkgo biloba. There are no significant differences between Ginkgo biloba and placebo in the proportion of participants experiencing adverse events. […]
Ginkgo biloba appears to be safe in use with no excess side effects compared with placebo. Many of the early trials used unsatisfactory methods, were small, and publication bias cannot be excluded. The evidence that Ginkgo biloba has predictable and clinically significant benefit for people with dementia or cognitive impairment is inconsistent and unreliable.”
“Four trials with a combined total of 44,012 patients met the inclusion criteria and are included in this review. Acetylsalicylic acid (ASA) did not reduce stroke or ’all cardiovascular events’ compared to placebo in primary prevention patients with elevated blood pressure and no prior cardiovascular disease. In one large trial ASA taken for 5 years reduced myocardial infarction (ARR 0.5%, NNT 200), increased major haemorrhage (ARI 0.7%, NNT 154), and did not reduce all cause mortality or cardiovascular mortality. In one trial there was no significant difference between ASA and clopidogrel for the composite endpoint of stroke, myocardial infarction or vascular death.
In two small trials warfarin alone or in combination with ASA did not reduce stroke or coronary events.
The ATC meta-analysis of antiplatelet therapy for secondary prevention in patients with elevated blood pressure reported an absolute reduction in vascular events of 4.1% as compared to placebo. Data on the 10,600 patients with elevated blood pressure from the 29 individual trials included in the ATC meta-analysis was requested but could not be obtained.
Antiplatelet therapy with ASA for primary prevention in patients with elevated blood pressure provides a benefit, reduction in myocardial infarction, which is negated by a harm of similar magnitude, increase in major haemorrhage.
The benefit of antiplatelet therapy for secondary prevention in patients with elevated blood pressure is many times greater than the harm. […]
Further trials of antithrombotic therapy including with newer agents and complete documentation of all benefits and harms are required in patients with elevated blood pressure.”
I have a paper deadline approaching, so I’ll be unlikely to blog much more this week. Below some links and stuff of interest:
“we surveyed the faculty and trainees at MD Anderson Cancer Center using an anonymous computerized questionnaire; we sought to ascertain the frequency and potential causes of non-reproducible data. We found that ~50% of respondents had experienced at least one episode of the inability to reproduce published data; many who pursued this issue with the original authors were never able to identify the reason for the lack of reproducibility; some were even met with a less than “collegial” interaction. […] These results suggest that the problem of data reproducibility is real. Biomedical science needs to establish processes to decrease the problem and adjudicate discrepancies in findings when they are discovered.”
ii. The development in the number of people killed in traffic accidents in Denmark over the last decade (link):
For people who don’t understand Danish: The x-axis displays the years, the y-axis displays deaths – I dislike it when people manipulate the y-axis (…it should start at 0, not 200…), but this decline is real; the number of Danes killed in traffic accidents has more than halved over the last decade (463 deaths in 2002; 220 deaths in 2011). The number of people sustaining traffic-related injuries dropped from 9254 in 2002 to 4259 in 2011. There’s a direct link to the data set at the link provided above if you want to know more.
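The ‘more than halved’ claim checks out against the figures cited:

```python
# Danish traffic casualty figures cited in the text (2002 vs 2011).
deaths_2002, deaths_2011 = 463, 220
injuries_2002, injuries_2011 = 9254, 4259

print(f"deaths: {1 - deaths_2011 / deaths_2002:.1%} decline")       # 52.5%
print(f"injuries: {1 - injuries_2011 / injuries_2002:.1%} decline") # 54.0%
```

Both series dropped by more than half over the decade.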
iii. Gender identity and relative income within households, by Bertrand, Kamenica & Pan.
“We examine causes and consequences of relative income within households. We establish that gender identity – in particular, an aversion to the wife earning more than the husband – impacts marriage formation, the wife’s labor force participation, the wife’s income conditional on working, marriage satisfaction, likelihood of divorce, and the division of home production. The distribution of the share of household income earned by the wife exhibits a sharp cliff at 0.5, which suggests that a couple is less willing to match if her income exceeds his. Within marriage markets, when a randomly chosen woman becomes more likely to earn more than a randomly chosen man, marriage rates decline. Within couples, if the wife’s potential income (based on her demographics) is likely to exceed the husband’s, the wife is less likely to be in the labor force and earns less than her potential if she does work. Couples where the wife earns more than the husband are less satisfied with their marriage and are more likely to divorce. Finally, based on time use surveys, the gender gap in non-market work is larger if the wife earns more than the husband.” […]
“In our preferred specification […] we find that if the wife earns more than the husband, spouses are 7 percentage points (15%) less likely to report that their marriage is very happy, 8 percentage points (32%) more likely to report marital troubles in the past year, and 6 percentage points (46%) more likely to have discussed separating in the past year.”
These are not trivial effects…
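The abstract reports each effect both in percentage points and in relative terms, and dividing one by the other backs out the implied baseline rate for each outcome – a useful way to see how large the effects are. A small sketch using only the figures quoted above:

```python
# The quoted effects are given in percentage points (pp) and as relative
# changes; pp / relative change recovers the implied baseline rate.

effects = {  # outcome: (percentage points, relative change)
    "very happy marriage (decrease)": (7, 0.15),
    "marital troubles past year":     (8, 0.32),
    "discussed separating":           (6, 0.46),
}

for outcome, (pp, rel) in effects.items():
    baseline = pp / rel  # baseline rate, in percent
    print(f"{outcome}: implied baseline ~{baseline:.0f}%")
```

So, for instance, a 7 pp drop being a 15% relative change implies that roughly 47% of couples report a very happy marriage at baseline.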
iv. Some Khan Academy videos of interest:
v. A paper on the ‘missing women’ phenomenon in India:
“Relative to developed countries, there are far fewer women than men in India. Estimates suggest that among the stock of women who could potentially be alive today, over 25 million are “missing”. Sex selection at birth and the mistreatment of young girls are widely regarded as key explanations. We provide a decomposition of missing women by age across the states. While we do not dispute the existence of severe gender bias at young ages, our computations yield some striking findings. First, the vast majority of missing women in India are of adult age. Second, there is significant variation in the distribution of missing women by age across different states. Missing girls at birth are most pervasive in some north-western states, but excess female mortality at older ages is relatively low. In contrast, some north-eastern states have the highest excess female mortality in adulthood but the lowest number of missing women at birth. The state-wise variation in the distribution of missing women across the age groups makes it very difficult to draw simple conclusions to explain the missing women phenomenon in India.”

A table from the paper:
“We estimate that a total of more than two million women in India are missing in a given year. Our age decomposition of this total yields some striking findings. First, the majority of missing women in India die in adulthood. Our estimates demonstrate that roughly 12% of missing women are found at birth, 25% die in childhood, 18% at the reproductive ages, and 45% die at older ages. […] There are just two states in which the majority of missing women are either never born or die in childhood (i e, [sic] before age 15), and these are Haryana and Rajasthan. Moreover, the missing women in these three [sic] states add up to well under 15% of the total missing women in India.
For all other states, the majority of missing women die in adulthood. […]
Because there is so much state-wise variation in the distribution of missing women across the age groups, it is difficult to provide a clear explanation for missing women in India. The traditional explanation for missing women, a strong preference for the birth of a son, is most likely driving a significant proportion of missing women in the two states of Punjab and Haryana where the biased sex ratios at birth are undeniable. However, the explanation for excess female deaths after birth is far from clear.”
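The age decomposition quoted above is worth translating into absolute numbers; a quick sketch combining the ~2 million total with the stated shares (the figures are the paper's, the multiplication is mine):

```python
# Rough decomposition of the ~2 million missing women per year using the
# age-group shares quoted above (12% at birth, 25% in childhood, 18% at
# reproductive ages, 45% at older ages).

total = 2_000_000  # "more than two million" per year, per the paper
shares = {"at birth": 0.12, "childhood": 0.25,
          "reproductive ages": 0.18, "older ages": 0.45}

assert abs(sum(shares.values()) - 1.0) < 1e-9  # shares sum to 100%

for group, share in shares.items():
    print(f"{group}: ~{int(total * share):,} women per year")
```

Put this way, excess female mortality at older ages alone accounts for roughly 900,000 women per year – far more than the roughly 240,000 missing at birth.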
“The finding of abnormal lung function in some diabetic subjects suggests that the lung should be considered a “target organ” in diabetes mellitus; however, the clinical implications of these findings in terms of respiratory disease are at present unknown.”
Malcolm Sandler wrote this almost 25 years ago. What’s happened since then? Well, I should perhaps point out that you still today have a situation where highly educated individuals who’ve had diabetes for decades may not even be aware that their disease may affect the lung tissue – I should know, because until a few years ago I didn’t know this. You care about the kidneys, the feet, the eyes, the heart, sometimes the autonomic nervous system – but your lungs aren’t very likely to be brought up in a discussion with the endocrinologist unless you happen to be a smoker, and in that case the concern is cancer risk and cardiovascular risk.
One main explanation is likely that the effects of the disease are minor, and so do not have much influence on the quality of life of the patient:
“Clear decrements in lung function have been reported in patients with diabetes over the past 2 decades, and many reports have suggested plausible pathophysiological mechanisms. However, at the present time, there are no reports of functional limitations of activities of daily living ascribable to pulmonary disease in patients with diabetes. Accordingly, this review is directed toward a description of the nature of reported lung dysfunction in diabetes, with an emphasis on the emerging potential clinical implications of such dysfunction.” (my emphasis, quote from this review)
I am interested in this matter because, well, at least partly because I’m just the kind of person who takes an interest in such matters. But recently I’ve also started to become a bit curious about whether the disease may already have had an impact on my own lung function, ‘compared to baseline’. It’s far from certain – most studies find that microvascular complications are correlated (if your eyes start to display signs of damage, it’s more likely that damage to the kidneys will also be observed) and that the link between those complications and metabolic control is strong; my metabolic control is close to optimal, and my eyes and kidneys look fine.
I’m a long-distance runner. I run ~35 km/week now (increasing by ~3 km/week), so of course I should not have breathing difficulties walking up and down stairs, and I don’t. And as the quote above makes clear, even for patients who may be impacted the damage is not likely to be all that major, so the fact that I don’t have any overt lung problems isn’t relevant – we wouldn’t expect such problems to present anyway. But it is worth asking whether I perform as well when I run as I would without my disease. The obvious answer would be ‘of course not’ – for reasons unrelated to my lungs (taking blood samples takes time, and loading up on carbohydrates after a blood sample is taken takes time – I can’t do these things while running). But is there an impact from the lungs as well? I don’t know. Maybe. You can’t observe the counterfactual.
Which is why I thought this recent-ish meta-analysis was interesting:
“Background: Research into the association between diabetes and pulmonary function has resulted in inconsistent outcomes among studies. We performed a meta-analysis to clarify this association.
Methods: From a systematic search of the literature, we included 40 studies describing pulmonary function data of 3,182 patients with diabetes and 27,080 control subjects. Associations were summarized pooling the mean difference (MD) (standard error) between patients with diabetes and control subjects of all studies for key lung function parameters.
Results: For all studies, the pooled MD for FEV1, FVC, and diffusion of the lungs for carbon monoxide were -5.1 (95% CI, -6.4 to -3.7; P<.001), -6.3 (95% CI, -8.0 to -4.7; P<.001), and -7.2 (95% CI, -10.0 to -4.4; P<.001) % predicted, respectively, and for FEV1/FVC 0.1% (95% CI, -0.8 to 1.0; P = .78). Meta-regression analyses showed that between-study heterogeneity was not explained by BMI, smoking, diabetes duration, or glycated hemoglobin (all P>.05).
Conclusions: Diabetes is associated with a modest, albeit statistically significant, impaired pulmonary function in a restrictive pattern. […]
Our meta-analysis shows that diabetes, in the absence of overt pulmonary disease, is associated with a modest, albeit statistically significant, impaired pulmonary function in a restrictive pattern. The results were irrespective of BMI, smoking, diabetes duration, and HbA1c levels. In subanalyses, the association seemed to be more pronounced in type 2 diabetes than in type 1 diabetes. Our study adds evidence for yet another organ system to be involved in both type 1 and type 2 diabetes. As a consequence of exclusion criteria, the levels of functional impairment fell within values that are generally considered to be normal. However, to place this in perspective, the magnitude of impairment found in our study closely resembles that of smoking per se [57]. Similarly, given the relatively high prevalence of diabetes in COPD [58], it is tempting to speculate that (uncontrolled) diabetes may accelerate progressive lung function decline. However, from our meta-analysis summarizing cross-sectional studies, it is difficult to draw conclusions on causality and progression into overt pulmonary diseases.” (my emphasis)
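A quick sanity check on the pooled estimates quoted above: for a 95% confidence interval the standard error is roughly (upper − lower) / (2 × 1.96), and the z-score MD / SE should comfortably exceed the P < .001 threshold (|z| > 3.29) for each parameter. The numbers are the ones from the abstract; the arithmetic is mine:

```python
# Recover approximate standard errors and z-scores from the pooled mean
# differences (MD) and 95% CIs quoted in the abstract (% predicted).

results = {  # parameter: (MD, CI lower, CI upper)
    "FEV1": (-5.1, -6.4, -3.7),
    "FVC":  (-6.3, -8.0, -4.7),
    "DLCO": (-7.2, -10.0, -4.4),  # diffusing capacity for carbon monoxide
}

for name, (md, lo, hi) in results.items():
    se = (hi - lo) / (2 * 1.96)  # SE implied by a symmetric 95% CI
    z = md / se
    print(f"{name}: SE ~{se:.2f}, z ~{z:.1f}")
```

All three z-scores come out in the −5 to −7.5 range, well past the |z| > 3.29 cutoff for P < .001, so the reported P-values are internally consistent with the intervals.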
The effect of smoking is certainly not trivial when you’re considering the fitness level of a long-distance runner! I know the effects are smaller for T1’s, but this is most certainly an effect to keep in mind. Back when I ran my marathon three years ago, both my brother and I were surprised that he did so much better than I did (he came in more than half an hour before me, despite the fact that we had both assumed beforehand that I was the one in better shape).
I consider some of the findings quite weird, and it’s hard to make heads or tails of some of this stuff:
“One would expect that a longer exposure to diabetes would proportionally increase the chance of connective tissue being nonenzymatically glycated. However, our study suggests that a longer duration is not necessarily associated with additional loss of pulmonary reserves. This is in line with previous longitudinal studies on this topic [59,60]. […]
It is intriguing to observe that the pulmonary system remains relatively spared in diabetes when compared with other organs with wide microvascular beds. It is speculated that the large pulmonary reserves protect against severe pulmonary dysfunction.
Because neither the duration of diabetes nor glycemic state appeared to influence the association in our study, one might question whether there is a causal relationship between diabetes and impaired pulmonary function.”
I’ll try to keep my eyes open for updates on this stuff – although the estimated effects may not be big enough to make people seek out medical advice, they’re huge if you’re a long-distance runner trying to decide whether it’s even worth participating in future official runs for the sake of improving your competition times.
On a sidenote I should point out that I no longer run in order to obtain a faster time in an official run – I run because I like to run, and I no longer have much desire to participate in official runs – but I’d be lying if I said I didn’t care at all about that stuff some years back, when I started out participating in such runs. Imagine what happens to your desire to participate in official runs if you don’t seem to be able to improve your time much even with strict adherence to running schedules, especially when other people who are otherwise similar to you can out-perform you without doing a lot of work. I was above 70 km/week and had several 30+ kilometer runs behind me before my marathon; my brother never even crossed the 40 km/week threshold. And he beat me by more than half an hour. Go figure. I had a bad run on the day for diabetes-related reasons, so this was not a surprising outcome, but it was a profoundly annoying one. And no, I was not ‘overtraining’; I was rather at the point where a 25+ km run was the ‘standard running distance’ – you know, the distance you manage without thinking much about it every Tuesday and Saturday, with a short 20 km run in between – and I tapered the kilometer count before the run as advised by the plan I was following (more or less stringently, though compared to the people I crossed the finish line with, ‘more’ is by far the more accurate word). And no, it’s not like I hadn’t heard about interval training, and it’s not like this stuff is hard to implement in a hilly place like Aarhus.
I did make progress from when I started running to the point where I decided ‘official runs’ weren’t really worth it anymore – the first half-marathon took me more than 2 hours; the best one I did in an hour and 47 minutes (a performance achieved at a point in time where I ran 65 km/week and at least cared somewhat about speed and time taken – so, yeah… Compare this again with my brother, whose next goal is 1:35, without his ever having been near 50 km/week). Right now my ‘standard running distance’ is 12-15 km – I like to run, but I have very limited desire to participate in official runs in the future. It’s not worth it – if I go back to very-high-intensity training I may improve my official performances, but that could just as easily be due to factors completely unrelated to my actual shape, like whether I was lucky with my starting blood glucose (fewer tests during the run, less time wasted on that) or whether I’d slept well. Who cares? And it’s not like I need to participate in these runs to motivate myself to get out there – I find running enjoyable as it is, especially in the summer when the weather is nice.
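To put the half-marathon times above in pace terms (the distance is the standard 21.0975 km; the times are the ones mentioned above, and the conversion is just my own arithmetic):

```python
# Convert half-marathon finishing times (in minutes) to average pace.

HALF_MARATHON_KM = 21.0975  # standard half-marathon distance

def pace(minutes):
    """Average pace in min/km for a half-marathon finished in `minutes`."""
    per_km = minutes / HALF_MARATHON_KM
    m, s = int(per_km), round((per_km % 1) * 60)
    return f"{m}:{s:02d} min/km"

print(pace(120))  # first half-marathon, just over 2 hours -> 5:41 min/km
print(pace(107))  # best: 1 hour 47 minutes                -> 5:04 min/km
print(pace(95))   # my brother's goal of 1:35              -> 4:30 min/km
```

Going from 2:00 to 1:47 is a drop of roughly 37 seconds per kilometer; getting from 1:47 to 1:35 would require shaving off roughly another 34.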
But in case you’d forgotten because of all the personal stuff at the end – to reiterate the main points that made me start writing this post:
“Diabetes is associated with a modest, albeit statistically significant, impaired pulmonary function in a restrictive pattern. […] the magnitude of impairment found in our study closely resembles that of smoking”.
This is perhaps also a good illustration of how dangerous diabetes is: the fact that the disease may impair lung function in a manner not too dissimilar from smoking is not even considered clinically relevant, because the patients have much bigger problems to worry about as it is.