I only ever covered two of Steven Farmer’s lectures here on the blog, and back when I blogged them I didn’t watch all of the lectures. Recently I went through some old bookmarks and decided to have a go at that stuff again. He’s pretty good:
Completely unrelated, but I figured I should mention it: tomorrow’s the first day of the London Chess Classic tournament. This chess tournament is as good as it gets: the world’s three highest-rated players are all playing, as is the World Champion and the world’s strongest female player. Last year the live commentary was provided mainly by IM Lawrence Trent and GM Daniel King. They did a splendid job, but this year the organizers have upped the ante and found some significantly stronger players to do the job: Nigel Short and David Howell, both of them former contestants in the tournament. As usual the tournament has an odd number of contestants, and the player with the bye round will join Short and Howell in the commentary box and give his/her views on the games as they proceed. I’ve been really impressed with the way the live commentary has been handled the last few years, and you can learn a lot by watching this stuff (here’s a direct link). The tournament uses a 3/1/0 rule (3 points for a win, 1 for a draw, 0 for a loss), so the number of ‘GM draws’ is likely to be lower than it often is in these kinds of tournaments – the organizers want to incentivize the players to actually play interesting games, and in the past I think they’ve been successful. If you like chess, this is the place to be for the next week and a half.
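To see why the 3/1/0 rule changes the incentives, a few lines of throwaway arithmetic suffice. This is purely my own illustration, nothing official from the tournament:

```python
# Compare tournament scores under the 3/1/0 ('football') rule and the
# classical 1/0.5/0 chess scoring. The game results are made up.
def score(wins, draws, win_pts=3, draw_pts=1):
    return wins * win_pts + draws * draw_pts

# A 'fighting' player with 2 wins and 2 losses in 4 games versus a
# 'solid' player with 4 draws:
fighting_3_1_0 = score(wins=2, draws=0)                    # 6 points
solid_3_1_0 = score(wins=0, draws=4)                       # 4 points

fighting_classical = score(2, 0, win_pts=1, draw_pts=0.5)  # 2.0 points
solid_classical = score(0, 4, win_pts=1, draw_pts=0.5)     # 2.0 points
# Classically the two would be tied; under 3/1/0 the fighting player
# is well ahead, so risking a loss to play for a win pays off.
```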
I’ve spent way too much money on books this autumn, and arguably too much time as well, so I’ve been feeling guilty about that. One consequence of this guilty conscience is that I didn’t stock up on reading materials after I’d read the interesting stuff from the last amazon batch, something I usually do so that I always have a few books available that I’d potentially like to read if I find myself in the mood. I basically haven’t had many interesting unread books standing on my shelf, and so I haven’t read very much – a fact that I’ve also felt guilty about. Yesterday the contribution to my guilty conscience from not engaging in offline book-reading finally surpassed the contribution from spending money and time on reading too much ‘irrelevant stuff’ (i.e. non-exam-related stuff), and so I ended up reading Ramachandran’s book.
Overall it’s better than Sacks, but I don’t really think that’s saying all that much. At least there aren’t any Wittgenstein quotes in this one (though there are Shakespeare quotes). I think this is the last book of this nature I’ll read – they’re too unsystematic, speculative and messy in their structure, and I’d learn a lot more from just reading some chapters in a textbook like this (I probably won’t start out with that, as I have this one standing on my shelf..). This is not to say that I didn’t learn anything from the book, and if you want to have a go at one of these easy-to-read introductory pop-sci neurology books, you can do worse (as I have realized). Compared to Sacks there’s less focus on the patients and more focus on the specifics of what goes wrong and what those specifics tell us about how particular elements of the human brain work; particularly important here is the fact that Ramachandran has included figures illustrating what the brain looks like and which structures are placed where, which was a big help to me during the reading.
I found the stuff on vision and how it works very interesting, so I’ll quote some stuff from that part of the book:
“The human brain contains multiple areas for processing images, each of which is composed of an intricate network of neurons that is specialized for extracting certain types of information from the image. […] every act of perception, even something as simple as viewing a drawing of a cube, involves an act of judgment by the brain.
In making these judgments, the brain takes advantage of the fact that the world we live in is not chaotic and amorphous; it has stable physical properties. During evolution—and partly during childhood as a result of learning— these stable properties became incorporated into the visual areas of the brain as certain “assumptions” or hidden knowledge about the world that can be used to eliminate ambiguity in perception. For example, when a set of dots move in unison—like the spots on a leopard—they usually belong to a single object. So, any time you see a set of dots moving together, your visual system makes the reasonable inference that they’re not moving like this just by coincidence—that they probably are a single object. And therefore, that’s what you see.” […]
“because of some quirk in our evolutionary history, each side of your brain sees the opposite half of the world (Figure 4.4). If you look straight ahead, the entire world on your left is mapped onto your right visual cortex and the world to the right of your center of gaze is mapped onto your left visual cortex. […] this first map serves as a sorting and editorial office where redundant or useless information is discarded wholesale and certain defining attributes of the visual image—such as edges—are strongly emphasized. […] This edited information is then relayed to an estimated thirty distinct visual areas in the human brain, each of which thus receives a complete or partial map of the visual world. […] Why do we need thirty areas? We really don’t know the answer, but they appear to be highly specialized for extracting different attributes from the visual scene—color, depth, motion and the like. When one or more areas are selectively damaged, you are confronted with paradoxical mental states of the kind seen in a number of neurological patients. […]
One of the most important principles in vision is that it tries to get away with as little processing as it can to get the job done. To economize on visual processing, the brain takes advantage of statistical regularities in the world—such as the fact that contours are generally continuous or that table surfaces are uniform—and these regularities are captured and wired into the machinery of the visual pathways early in visual processing. When you look at your desk, for instance, it seems likely that the visual system extracts information about its edges and creates a mental representation that resembles a cartoon sketch of the table (again, this initial extraction of edges occurs because your brain is mainly interested in regions of change, of abrupt discontinuity, at the edge of the desk, which is where the information is). The visual system might then apply surface interpolation to “fill in” the color and texture of the table, saying in effect, “Well, there’s this grainy stuff here; it must be the same grainy stuff all over.” This act of interpolation saves an enormous amount of computation; your brain can avoid the burden of scrutinizing every little section of the desk and can simply employ loose guesswork instead […] what we call perception is really the end result of a dynamic interplay between sensory signals and high-level stored information about visual images from the past. Each time one of us encounters an object, the visual system begins a constant questioning process. Fragmentary evidence comes in and the higher centers say, “Hmmmmm, maybe this is an animal.” Our brains then pose a series of visual questions: as in a twenty questions game. Is it a mammal? A cat? What kind of cat? Tame? Wild? Big? Small? Black or white or tabby? The higher visual centers then project partial “best fit” answers back to lower visual areas including the primary visual cortex. 
In this manner, the impoverished image is progressively worked on and refined (with bits “filled in,” when appropriate). I think that these massive feed forward and feedback projections are in the business of conducting successive iterations that enable us to home in on the closest approximation to the truth. To overstate the argument deliberately, perhaps we are hallucinating all the time and what we call perception is arrived at by simply determining which hallucination best conforms to the current sensory input.”
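Ramachandran’s ‘guess and refine’ loop can be caricatured in a few lines of code. This is purely my own toy sketch with invented templates and parameters, not anything from the book: stored ‘hypotheses’ pull a noisy input toward the best-fitting interpretation over repeated feedback iterations.

```python
import numpy as np

# Toy sketch of iterative perception (all numbers invented): high-level
# stored templates act as hypotheses; each feedback pass nudges the
# current percept toward whichever template best explains it.
rng = np.random.default_rng(0)

templates = {
    "cat": np.array([1.0, 0.0, 1.0, 0.0]),  # hypothetical feature vectors
    "dog": np.array([0.0, 1.0, 0.0, 1.0]),
}

true_signal = templates["cat"]
observation = true_signal + rng.normal(0.0, 0.3, size=4)  # impoverished input

percept = observation.copy()
for _ in range(10):  # feedforward/feedback iterations
    # feedforward: which stored hypothesis best fits the current percept?
    best = min(templates, key=lambda k: np.sum((percept - templates[k]) ** 2))
    # feedback: blend the percept toward that 'best fit' hypothesis
    percept = 0.7 * percept + 0.3 * templates[best]
```

After a handful of iterations the percept has settled near the stored ‘cat’ hypothesis rather than the raw noisy input – the ‘hallucination that best conforms to the sensory input’, in the book’s phrasing.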
When I include quotes like the ones above in the post, I feel that I also have to quote some different stuff in order to give you a more complete picture. Here’s one quote which says a lot: “Contrary to what many of my colleagues believe, the message preached by physicians like Deepak Chopra and Andrew Weil is not just New Age psychobabble. It contains important insights into the human organism—ones that deserve serious scientific scrutiny.” So, yeah… Fortunately that quote was on page 221 (if it had been on page 20, I would not have read the rest of the book). In all fairness, he calls for rigorous tests, but he also writes that “We have no idea which ones (if any) [of the alternative ‘medicine’ interventions] work and which ones do not” – which is a problematic claim. ‘Alternative medicine’ is ‘alternative’ because it doesn’t work – when health interventions of one kind or another can be shown to work in controlled experiments, they stop being ‘alternative’ treatments; the stuff that works is just called medicine. I know that there are institutional obstacles at play that keep out treatment options which likely work but will never be profitable enough to justify trying to get through FDA approval, to take an example, but at least as a first approximation that’s how it works. You should probably also know, before you rush out to find the book, that I felt compelled to write words like ‘fool’ and ‘WTF’ in the margin on various occasions – the quote is not the only one of its kind. It’s safe to say that I very rarely do this when I read a book.
Even though I mostly don’t post personal stuff here anymore, I felt a personal post this week was probably in order. I wrote another one of those earlier this week, but I pulled it quite fast (for many reasons) – we’ll see if I let this one through my implicit filter.
So, people who’ve read along for a while are probably starting to get worried at this point – ‘personal stuff’, that can’t be good… Well, no need to worry. I’ve had a good week. A close friend who needed a place to crash stayed with me for a few days; this is the first time I’ve ever been in such a situation. I can’t speak for my friend, but I had a good time. I value my privacy very highly, and I generally don’t like being around people for extended periods of time. So the fact that I had a good time is, I think, sort of important. I’ve been thinking that there might be things to learn from the experience so I’ve thought a bit about it along the way. An important insight did not occur to me until today, and that insight is what first motivated me to write this post. But I’ll get to that stuff later.
When my friend (let’s call the individual in question ‘X’) asked me, one of my first reactions was to feel flattered. I’m vain, like most people – sue me. Anyway I realized that I was now in a situation where I had a friend who felt comfortable asking me a favour like that, and I realized that that felt awesome. Especially as I was able to say yes; it felt awesome being able to say yes. I should perhaps point out that even though X would probably argue – indeed has argued (quoting X: “you’ve never had friends who are idiotic enough to get themselves in a situation in which they’d appreciate help like that”) – that it’s not necessarily a good thing that I now have a friend ‘like that’, I really couldn’t take that argument seriously.
When X asked me I also felt a little bit scared and uncomfortable, though I should make it clear that most of those thoughts only came later, after I’d said yes. What if it didn’t work out? What if I couldn’t stand spending so much time with X, or vice versa? As mentioned I’m a very private person, and given the circumstances we’d have to share the same room for a few days – what if that was too much? I really didn’t know if I could handle that; during the last ten years I don’t think I’ve ever been in a situation where I was more or less unable to retreat from other people for any extended period of time if it became too much for me. And what if it became too much for X – what if X couldn’t stand being around me that long? What helped me there, though, was that I knew that X knows at least as much about what’s going on in my life as do my own brothers, and it’s very safe to say that X is personality-wise more like me than anyone in my own family. If I couldn’t even handle a few days in the same room as X, well… As for whether X could handle spending so much time with me, I figured that as long as I at least tried to behave reasonably like the person I’d like to be – which is what I try, to a significant extent, to do on a day-to-day basis anyway (though with varying degrees of success) – it should be okay. So I ended up thinking that it would be fine and that it might even be fun and/or do me some good – the implicitly added social-control element making me marginally more likely to do useful and productive stuff while X was around (the Hawthorne effect) also had to be considered. On the other hand this element should not be overemphasized; X knows me quite well, and so I knew that I wouldn’t have to put up any kind of elaborate facade in order to behave in what X would consider an ‘acceptable manner’.
If that had not been the case I’d have been a lot more worried about the arrangement, because then I’d also have had to worry about significant foreseeable and ‘perceived necessary’ behavioural changes ‘draining me’.
Since I more or less stopped intrinsically caring about grades and how I did in school, I’ve tended to have a bit of a hard time figuring out what I was actually aiming for in life. My brain has tried to convince me that partnership and perhaps children are the sort of things I should aim for, and it has also tried to convince me that I’m not particularly likely to experience that kind of stuff during my life, which is annoying. I’ve long since convinced myself that career stuff is unlikely to be fulfilling on its own. So what else? An interesting notion here is that I’ve ‘traditionally’ been very skeptical about the value of friendships – close friendships were for people who couldn’t find a partner and then tried to fill the void in other ways. I’d think that even long-term friends aren’t actually all that close – and how many of the people who cannot even get/keep a partner manage to find/keep a close, long-term friend anyway? I’ve been skeptical.
Since my period of social isolation ended, to the extent that it has, I’ve so far tended to think of friendships as a way to avoid problems, as a strategy to avoid isolation. It was the main reason why I started out interacting with people again: to avoid problems, to avoid a repeat of the hikikomori experience. It wasn’t that I thought I’d find interesting people to interact with – I’d never had close friends at that point. Under the conceptual approach I employed, friends were perceived to have merely instrumental value – ‘it’s good for you to interact with others so you should do that from time to time’. And that was it. It no longer is. Friendships can be much, much more than that. My friendship with X is not ‘just’ a ‘friendship to avoid problems’-friendship. My friendship with X is at this point, at least to me, probably closer to an ‘X is awesome, I feel lucky we’ve found each other and now have the opportunity to interact and exchange ideas and views, and I’d feel devastated if I no longer had this’-friendship. I don’t interact with X because I know that ‘it’s good for me’; I do it because I want to, because I enjoy it. Maybe I was in the same situation three months ago and it has just taken this long for my self-awareness to truly catch up with me; it’s been a gradual process surely, but it just hit me today: ‘This friendship is an important part of your life, and you should be very careful not to underestimate how valuable it is.’ At this point I’m really starting to realize that a friendship isn’t ‘just’ anything; establishing and maintaining such a social relationship with another individual can meaningfully be considered one of the major life goals.
In case anyone was wondering, X is a female.
Regarding the “I feel lucky we’ve found each other and now have the opportunity to interact and exchange ideas and views”-part, I’m pretty sure I could say that about a commenter or two here as well. ‘Online friendships’ are different from real-life ones, but sometimes they end up overlapping, and I should probably mention that if one of you people feels like you’d like to know me better – and that I’d perhaps like to know you better as well – you’re welcome to reach out in this comment section. I’ve started to use Skype regularly, and it’s (…almost… – you can’t really disregard the time difference) as easy to skype with someone from Denmark as with someone who lives on a completely different continent. I’d probably prefer to establish contact with people who’ve commented here before and/or have read along for a while. And please don’t consider it a one-time offer; consider it a standing invitation.
“SUMMARY AND CONCLUSIONS
Documents provided by the Department of Energy reveal the frequent and systematic use of human subjects as guinea pigs for radiation experiments. Some experiments were conducted in the 1940s at the dawn of the nuclear age, and might be attributed to an ignorance of the long term effects of radiation exposure, or to the atomic hubris that accompanied the making of the first nuclear bombs. But other experiments were conducted during the supposedly more enlightened 1960s and 1970s. In either event, such experiments cannot be excused.
These experiments were conducted under the sponsorship of the Manhattan Project, the Atomic Energy Commission, or the Energy Research and Development Administration, all predecessor agencies of the Department of Energy. These experiments spanned roughly thirty years. This report presents the findings of the Subcommittee staff on this project.
Literally hundreds of individuals were exposed to radiation in experiments which provided little or no medical benefit to the subjects. The chief objectives of these experiments were to directly measure the biological effects of radioactive material; to measure doses from injected, ingested, or inhaled radioactive substances; or to measure the time it took radioactive substances to pass through the human body. American citizens thus became nuclear calibration devices.
In many cases, subjects willingly participated in experiments, but they became willing guinea pigs nonetheless. In some cases, the human subjects were captive audiences or populations that experimenters might frighteningly have considered “expendable”: the elderly, prisoners, hospital patients suffering from terminal diseases or who might not have retained their full faculties for informed consent. For some human subjects, informed consent was not obtained or there is no evidence that informed consent was granted. For a number of these same subjects, the government covered up the nature of the experiments and deceived the families of deceased victims as to what had transpired. In many experiments, subjects received doses that approached or even exceeded presently recognized limits for occupational radiation exposure. Doses were as great as 98 times the body burden recognized at the time the experiments were conducted.”
It seems that the Tuskegee syphilis experiment wasn’t quite as unique as I’d thought.
ii. Diuretic Treatment of Hypertension. Interesting, lots of stuff there I didn’t know.
“After adjusting for age, sex, education, and race/ethnicity, risk of death was higher in low-income than high-income group for both all-cause mortality (Hazard ratio [HR], 1.98; 95% confidence interval [CI]: 1.37, 2.85) and cardiovascular disease (CVD)/diabetes mortality (HR, 3.68; 95% CI: 1.64, 8.27). The combination of the four pathways attenuated 58% of the association between income and all-cause mortality and 35% of that of CVD/diabetes mortality. Health behaviors attenuated the risk of all-cause and CVD/diabetes mortality by 30% and 21%, respectively, in the low-income group. Health status attenuated 39% of all-cause mortality and 18% of CVD/diabetes mortality, whereas, health insurance and inflammation accounted for only a small portion of the income-associated mortality (≤6%).
Excess mortality associated with lower income can be largely accounted for by poor health status and unhealthy behaviors. Future studies should address behavioral modification, as well as possible strategies to improve health status in low-income people.”
iv. Influence of Opinion Dynamics on the Evolution of Games. I’ve only just skimmed this, but it looks interesting. Here’s the abstract:
“Under certain circumstances such as lack of information or bounded rationality, human players can take decisions on which strategy to choose in a game on the basis of simple opinions. These opinions can be modified after each round by observing own or others payoff results but can be also modified after interchanging impressions with other players. In this way, the update of the strategies can become a question that goes beyond simple evolutionary rules based on fitness and become a social issue. In this work, we explore this scenario by coupling a game with an opinion dynamics model. The opinion is represented by a continuous variable that corresponds to the certainty of the agents respect to which strategy is best. The opinions transform into actions by making the selection of an strategy a stochastic event with a probability regulated by the opinion. A certain regard for the previous round payoff is included but the main update rules of the opinion are given by a model inspired in social interchanges. We find that the fixed points of the dynamics of the coupled model are different from those of the evolutionary game or the opinion models alone. Furthermore, new features emerge such as the independence of the fraction of cooperators with respect to the topology of the social interaction network or the presence of a small fraction of extremist players.”
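The coupling the abstract describes can be caricatured with a toy model. Everything below (payoff values, update rates, network-free random matching) is my own invented simplification, not the paper’s actual model: each agent’s continuous opinion is its probability of cooperating in a prisoner’s dilemma, and opinions are updated both by payoffs and by a social averaging step.

```python
import random

# Toy coupled opinion/game model (all parameters invented): opinions in
# [0, 1] are probabilities of cooperating; updates come from a payoff
# channel and a social-exchange channel.
random.seed(1)

R, S, T, P = 3.0, 0.0, 5.0, 1.0  # standard prisoner's dilemma payoffs
N, ROUNDS = 50, 200

opinions = [random.random() for _ in range(N)]

def payoff(my_coop, other_coop):
    if my_coop:
        return R if other_coop else S
    return T if other_coop else P

for _ in range(ROUNDS):
    i, j = random.sample(range(N), 2)
    ci = random.random() < opinions[i]  # opinion -> stochastic action
    cj = random.random() < opinions[j]
    pi, pj = payoff(ci, cj), payoff(cj, ci)
    # payoff channel: nudge opinion toward whatever action just paid off
    for k, coop, pay in ((i, ci, pi), (j, cj, pj)):
        delta = 0.01 * pay * (1 if coop else -1)
        opinions[k] = min(1.0, max(0.0, opinions[k] + delta))
    # social channel: the pair's opinions move toward each other
    mid = (opinions[i] + opinions[j]) / 2
    opinions[i] += 0.2 * (mid - opinions[i])
    opinions[j] += 0.2 * (mid - opinions[j])

coop_fraction = sum(opinions) / N  # average propensity to cooperate
```

Even in this stripped-down version you can see the paper’s basic point: where the dynamics settle depends on both channels, not on the game’s fitness rules alone.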
v. This is awesome.
“Determining the fitness consequences of sibling interactions is pivotal for understanding the evolution of family living, but studies investigating them across lifetime are lacking. We used a large demographic dataset on preindustrial humans from Finland to study the effect of elder siblings on key life-history traits. The presence of elder siblings improved the chances of younger siblings surviving to sexual maturity, suggesting that despite a competition for parental resources, they may help rearing their younger siblings. After reaching sexual maturity however, same-sex elder siblings’ presence was associated with reduced reproductive success in the focal individual, indicating the existence of competition among same-sex siblings. Overall, lifetime fitness was reduced by same-sex elder siblings’ presence and increased by opposite-sex elder siblings’ presence. Our study shows opposite effects of sibling interactions depending on the life-history stage, and highlights the need for using long-term fitness measures to understand the selection pressures acting on sibling interactions.”
Where did they get their data? Well, it was hard for people living in the 18th and 19th centuries to avoid death or taxes too:
“The demographic dataset from historical Finnish populations was compiled from records of the Lutheran church, which was obliged by law to document all dates of births, marriages and deaths in the population for tax purposes [25–29]. As migration events were relatively rare and the migration records maintained by the church allowed us to follow dispersers in the majority of the cases, these records provide us with relatively accurate information on individual survival and reproductive histories (e.g. 91% of individuals with known birth date were followed to sexual maturity at age 15 years). Our study period is limited to the eighteenth and nineteenth centuries, before the transition to reduced birth and mortality rates.”
vii. I’ve posted about this topic before; here’s a new study on cancer screening procedures: Effect of Three Decades of Screening Mammography on Breast-Cancer Incidence. I think the results are depressing:
“The introduction of screening mammography in the United States has been associated with a doubling in the number of cases of early-stage breast cancer that are detected each year, from 112 to 234 cases per 100,000 women — an absolute increase of 122 cases per 100,000 women. Concomitantly, the rate at which women present with late-stage cancer has decreased by 8%, from 102 to 94 cases per 100,000 women — an absolute decrease of 8 cases per 100,000 women. With the assumption of a constant underlying disease burden, only 8 of the 122 additional early-stage cancers diagnosed were expected to progress to advanced disease. After excluding the transient excess incidence associated with hormone-replacement therapy and adjusting for trends in the incidence of breast cancer among women younger than 40 years of age, we estimated that breast cancer was overdiagnosed (i.e., tumors were detected on screening that would never have led to clinical symptoms) in 1.3 million U.S. women in the past 30 years. We estimated that in 2008, breast cancer was overdiagnosed in more than 70,000 women; this accounted for 31% of all breast cancers diagnosed.
Despite substantial increases in the number of cases of early-stage breast cancer detected, screening mammography has only marginally reduced the rate at which women present with advanced cancer. Although it is not certain which women have been affected, the imbalance suggests that there is substantial overdiagnosis, accounting for nearly a third of all newly diagnosed breast cancers, and that screening is having, at best, only a small effect on the rate of death from breast cancer.”
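The core numbers in the abstract are worth checking by hand; here’s the arithmetic (my own back-of-the-envelope restatement of the quoted figures):

```python
# Rates per 100,000 women, taken from the quoted abstract.
early_before, early_after = 112, 234
late_before, late_after = 102, 94

extra_early = early_after - early_before  # 122 additional early-stage cases
late_drop = late_before - late_after      # only 8 fewer late-stage cases

# Under a constant underlying disease burden, the 122 extra early
# detections should eventually have shown up as 122 fewer late-stage
# cases; the shortfall is the implied overdiagnosis rate.
implied_overdiagnosed = extra_early - late_drop  # 114 per 100,000

# 2008 figures: 70,000+ overdiagnosed women said to be 31% of all
# diagnoses, implying roughly 226,000 diagnoses that year.
implied_total_2008 = round(70_000 / 0.31)
```

So for every 122 extra early-stage cancers found by screening, only about 8 would have gone on to present as late-stage disease; the remaining 114 per 100,000 are the overdiagnosis the authors are pointing at.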
i. Globular cluster (featured). What a thing like that looks like:
“A globular cluster is a spherical collection of stars that orbits a galactic core as a satellite. Globular clusters are very tightly bound by gravity, which gives them their spherical shapes and relatively high stellar densities toward their centers. The name of this category of star cluster is derived from the Latin globulus—a small sphere. A globular cluster is sometimes known more simply as a globular.
Globular clusters, which are found in the halo of a galaxy, contain considerably more stars and are much older than the less dense galactic, or open clusters, which are found in the disk. Globular clusters are fairly common; there are about 150 to 158 currently known globular clusters in the Milky Way, with perhaps 10 to 20 more still undiscovered. Large galaxies can have more: Andromeda, for instance, may have as many as 500. Some giant elliptical galaxies, particularly those at the centers of galaxy clusters, such as M87, have as many as 13,000 globular clusters. These globular clusters orbit the galaxy out to large radii, 40 kiloparsecs (approximately 131,000 light-years) or more.
Every galaxy of sufficient mass in the Local Group has an associated group of globular clusters, and almost every large galaxy surveyed has been found to possess a system of globular clusters. The Sagittarius Dwarf and Canis Major Dwarf galaxies appear to be in the process of donating their associated globular clusters (such as Palomar 12) to the Milky Way. This demonstrates how many of this galaxy’s globular clusters might have been acquired in the past.
Although it appears that globular clusters contain some of the first stars to be produced in the galaxy, their origins and their role in galactic evolution are still unclear.”
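As a small sanity check on the distance figure quoted above, here’s the unit conversion (my own arithmetic, using the standard value of roughly 3.26 light-years per parsec):

```python
# 40 kiloparsecs expressed in light-years; 1 parsec ~ 3.2616 light-years.
LY_PER_PARSEC = 3.2616
kpc = 40
light_years = kpc * 1_000 * LY_PER_PARSEC  # ~130,500, consistent with the
                                           # 'approximately 131,000' quoted
```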
ii. Srinivasa Ramanujan (‘good article’). If you’ve seen Good Will Hunting the name will probably ring a bell. An interesting life, but much too short.
“The Gastropoda or gastropods, more commonly known as snails and slugs, are a large taxonomic class within the phylum Mollusca. The class Gastropoda includes snails and slugs of all kinds and all sizes from microscopic to large. There are many thousands of species of sea snails and sea slugs, as well as freshwater snails and freshwater limpets, as well as land snails and land slugs.
The class Gastropoda contains a vast total of named species, second only to the insects in overall number. The fossil history of this class goes back to the Late Cambrian. There are 611 families of gastropods, of which 202 families are extinct, being found only in the fossil record.
Gastropoda (previously known as univalves and sometimes spelled Gasteropoda) are a major part of the phylum Mollusca and are the most highly diversified class in the phylum, with 60,000 to 80,000 living snail and slug species. The anatomy, behavior, feeding and reproductive adaptations of gastropods vary significantly from one clade or group to another. Therefore, it is difficult to state many generalities for all gastropods. […]
At all taxonomic levels, gastropods are second only to the insects in terms of their diversity. […]
Although the name “snail” can be, and often is, applied to all the members of this class, commonly this word means only those species with an external shell large enough that the soft parts can withdraw completely into it. Those gastropods without a shell, and those with only a very reduced or internal shell, are usually known as slugs.”
iv. Borel-Cantelli lemma.
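For anyone who hasn’t met it, the first Borel–Cantelli lemma is quick to state (standard textbook formulation):

```latex
% First Borel–Cantelli lemma: if the probabilities of the events A_n are
% summable, then with probability one only finitely many of the A_n occur.
\sum_{n=1}^{\infty} \Pr(A_n) < \infty
\;\Longrightarrow\;
\Pr\!\Bigl(\limsup_{n\to\infty} A_n\Bigr) = 0,
\qquad
\limsup_{n\to\infty} A_n = \bigcap_{n=1}^{\infty}\,\bigcup_{k=n}^{\infty} A_k .
```

The limsup event is exactly ‘infinitely many of the A_n occur’, which is why the conclusion is often read as ‘almost surely, only finitely many of the events happen’.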
v. Vijayanagara Empire. (featured)
“The Vijayanagara Empire, referred to as the Kingdom of Bisnagar by the Portuguese, was an empire based in South India, in the Deccan Plateau region. It was established in 1336 by Harihara I and his brother Bukka Raya I of the Sangama Dynasty. The empire rose to prominence as a culmination of attempts by the southern powers to ward off Islamic invasions by the end of the 13th century. It lasted until 1646, although its power declined after a major military defeat in 1565 by the Deccan sultanates. The empire is named after its capital city of Vijayanagara, whose ruins surround present-day Hampi, now a World Heritage Site in Karnataka, India.”
vi. Donner Party (featured).
“The Donner Party was a group of 87 American pioneers who in 1846 set off from Missouri in a wagon train headed west for California, only to find themselves trapped by snow in the Sierra Nevada. The subsequent casualties resulting from starvation, exposure, disease, and trauma were extremely high, and many of the survivors resorted to cannibalism.
The wagons left in May 1846. Encouraged to try a new, faster route across Utah and Nevada, they opted to take the Hastings Cutoff proposed by Lansford Hastings, who had never taken the journey with wagons. The Cutoff required the wagons to traverse Utah’s Wasatch Mountains and the Great Salt Lake Desert, and slowed the party considerably, leading to the loss of wagons, horses, and cattle. It also forced them to engage in heavy labor by clearing the path ahead of them, and created deep divisions between members of the party. They had planned to be in California by September, but found themselves trapped in the Sierra Nevada by early November.
Most of the party took shelter in three cabins that had been constructed two years earlier at Truckee Lake (now Donner Lake), while a smaller group camped several miles away. Food stores quickly ran out, and a group of 15 men and women attempted to reach California on snowshoes in December, but became disoriented in the mountains before succumbing to starvation and cold. Only seven members of the snowshoe party survived, by eating the flesh of their dead companions. Meanwhile, the Mexican–American War delayed rescue attempts from California; family members and authorities in California tried to reach the stranded pioneers but were turned back by harsh weather.
The first rescue group reached the remaining members, who were starving and feeble, in February 1847. Weather conditions were so bad that three rescue groups were required to lead the rest to California, the last arriving in March. Most of these survivors also had resorted to cannibalism. Forty-eight members of the Donner Party survived to live in California. Although a minor incident in the record of westward migration in North America, the Donner Party became notorious for the reported claims of cannibalism. Efforts to memorialize the Donner Party were underway within a few years; historians have described the episode as one of the most spectacular tragedies in California history and in the record of western migration. […]
The group became lost and confused. After two more days without food, Patrick Dolan proposed that one of them should volunteer to die, to feed the others. Some suggested a duel, while another account describes an attempt to create a lottery to choose a member to sacrifice. Eddy suggested they keep moving until someone simply fell, but a blizzard forced the group to halt. Antonio, the animal handler, was the first to die; Franklin Graves was the next casualty.
As the blizzard progressed, Patrick Dolan began to rant deliriously, stripped off his clothes and ran into the woods. He returned shortly afterwards and died a few hours later. Not long after, possibly because 12-year-old Lemuel Murphy was near death, some of the group began to eat flesh from Dolan’s body. Lemuel’s sister tried to feed some to her brother, but he died shortly afterwards. Eddy, Salvador and Luis refused to eat. The next morning the group stripped the muscle and organs from the bodies of Antonio, Dolan, Graves, and Murphy and dried it to store for the days ahead, taking care to ensure that nobody would have to eat his or her relatives.
After three days rest they set off again, searching for the trail. Eddy eventually succumbed to his hunger and ate human flesh, but that was soon gone. They began to take apart their snowshoes to eat the oxhide webbing, and discussed killing Luis and Salvador for food; after Eddy warned the Indians they quietly left. During the night Jay Fosdick died, leaving only seven members of the party. Eddy and Mary Graves left to hunt, but when they returned with deer meat, Fosdick’s body had already been cut apart for food. After several more days—25 since they had left Truckee Lake—they came across Salvador and Luis, who had not eaten for about nine days and were close to death. William Foster, believing the flesh of the Indians was the group’s last hope of avoiding imminent death from starvation, shot the pair.
On January 12, the group stumbled into a Miwok camp looking so deteriorated that the Indians initially fled. The Miwoks gave them what they had to eat: acorns, grass, and pine nuts. After a few days, Eddy continued on with the help of a Miwok to a ranch in a small farming community at the edge of the Sacramento Valley. A hurriedly assembled rescue party found the other six survivors on January 17.”
“An endosymbiont is any organism that lives within the body or cells of another organism, i.e. forming an endosymbiosis (Greek: ἔνδον endon “within”, σύν syn “together” and βίωσις biosis “living”). Examples are nitrogen-fixing bacteria (called rhizobia) which live in root nodules on legume roots, single-celled algae inside reef-building corals, and bacterial endosymbionts that provide essential nutrients to about 10–15% of insects.
Many instances of endosymbiosis are obligate; that is, either the endosymbiont or the host cannot survive without the other, such as the gutless marine worms of the genus Riftia, which get nutrition from their endosymbiotic bacteria. The most common examples of obligate endosymbiosis are mitochondria and chloroplasts. Some human parasites, e.g. Wuchereria bancrofti and Mansonella perstans, thrive in their hosts because of an obligate endosymbiosis with Wolbachia spp. They can both be eliminated from their host by treatments that target this bacterium. However, not all endosymbioses are obligate. Also, some endosymbioses can be harmful to either of the organisms involved.
It is generally agreed that certain organelles of the eukaryotic cell, especially mitochondria and plastids such as chloroplasts, originated as bacterial endosymbionts. This theory is called the endosymbiotic theory, and was first articulated by the Russian botanist Konstantin Mereschkowski in 1905.”
This new article is rather awesome, if for no other reason than that it involves so many people and follows them over such a long time-frame:
“Objective To estimate, in a national cohort, the absolute risk of suicide within 36 years after the first psychiatric contact.
Design Prospective study of incident cases followed up for as long as 36 years. Median follow-up was 18 years.
Setting Individual data drawn from Danish longitudinal registers.
Participants A total of 176 347 persons born from January 1, 1955, through December 31, 1991, were followed up from their first contact with secondary mental health services after 15 years of age until death, emigration, disappearance, or the end of 2006. For each participant, 5 matched control individuals were included.”
176,347 people followed for roughly two decades on average. That’s a lot of data. What did they find? Some of the main results:
“Results Among men, the absolute risk of suicide (95% confidence interval [CI]) was highest for bipolar disorder, (7.77%; 6.01%-10.05%), followed by unipolar affective disorder (6.67%; 5.72%-7.78%) and schizophrenia (6.55%; 5.85%-7.34%). Among women, the highest risk was found among women with schizophrenia (4.91%; 95% CI, 4.03%-5.98%), followed by bipolar disorder (4.78%; 3.48%-6.56%). In the nonpsychiatric population, the risk was 0.72% (95% CI, 0.61%-0.86%) for men and 0.26% (0.20%-0.35%) for women. Comorbid substance abuse and comorbid unipolar affective disorder significantly increased the risk. The co-occurrence of deliberate self-harm increased the risk approximately 2-fold. Men with bipolar disorder and deliberate self-harm had the highest risk (17.08%; 95% CI, 11.19%-26.07%).”
As mentioned, they of course didn’t just limit themselves to following ‘the sick people’ – they also needed people to compare them with… So:
“To estimate the cumulative incidence of suicide among people with no history of mental illness, we adopted a slightly alternative strategy. For each person with a history of any mental illness (as defined in the “Assessment of Suicide and Mental Illness” subsection), we randomly selected 5 people of the same sex and same birth date who had no history of mental illness (time matched). Using the described strategy, we followed up this healthy population (881 735 persons) to provide absolute suicide risks. Because this healthy population was selected at random among all 2.46 million people included in the study population, the estimates obtained represent the absolute risk of suicide among all 2.46 million people without a mental disorder.”
Again, that’s a lot of data – representativeness really is unlikely to be an issue here (at least when dealing with the situation in Denmark). As they put it in the paper: “This is the first analysis of the absolute risk of suicide in a total national cohort of individuals followed up from the first psychiatric contact, and it represents, to our knowledge, the hitherto largest sample with the longest and most complete follow-up.”
Results in a bit more detail:
(click to view full size). I’ve previously seen it argued in papers on anorexia that it’s the psychiatric disorder with the highest mortality rate, so I was a bit surprised by the relatively low numbers here. On the other hand that may be related to the fact that anorexics tend to starve themselves to death rather than take their own lives in the traditional sense, which means that a lot of those excess deaths are not considered suicides. Note that a big majority of all suicides are committed by people with a mental illness and that the risk increase from a diagnosis is really quite significant; given the estimates, females with a mental illness are more than 8 times as likely to kill themselves as females without a mental illness, and males are 6 times as likely. Schizophrenic females are almost 20 times as likely to commit suicide as females without a mental illness. Add substance abuse as well and these females are more than 30 times as likely to commit suicide (the absolute risk is around 7% in that case). The risk is substantially increased for almost all groups when you add substance abuse.
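The relative-risk arithmetic here is just a matter of dividing the absolute risks quoted from the paper; a minimal sketch, using only the published point estimates (4.91% for women with schizophrenia, 0.26% for women without a mental illness, 7.77% for men with bipolar disorder, 0.72% for men without):

```python
# Relative risk: absolute risk in one group divided by the absolute risk
# in a comparison group. Figures are the point estimates quoted above.
def relative_risk(risk_group: float, risk_baseline: float) -> float:
    return risk_group / risk_baseline

# Women with schizophrenia vs. women without a mental illness:
print(round(relative_risk(4.91, 0.26), 1))  # 18.9 – 'almost 20 times as likely'
# Men with bipolar disorder vs. men without a mental illness:
print(round(relative_risk(7.77, 0.72), 1))  # 10.8
```

Keep in mind that the point estimates come with fairly wide confidence intervals, so ratios like these are rough.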
Do also note that not all people in the ‘mental illness’ group are actually people with a mental illness; personality disorders are not usually considered mental illnesses by health professionals, but the study includes in the group of people with mental illnesses people with: “any mental illness (any ICD-8 or ICD-10 code) if they had been admitted to a psychiatric hospital or had been in outpatient care with one of these diagnoses.” (The “any ICD-8 or ICD-10 code” means that people with personality disorders are included in the group as well). This is probably ‘fair enough’ given that at least some of these groups clearly have elevated suicide levels, but it’s worth having in mind that it should change the interpretation slightly. How about people who’ve attempted suicide?
The deliberate self-harm/attempted suicide group is obviously a high-risk group. The follow-up period is shorter than for the other estimates (30 years, rather than 36) so these estimates are perhaps best thought of as lower bounds. There’s some uncertainty regarding the estimates because the sample sizes aren’t that big (which is a good thing I think…), but roughly 1 in 6 Danish males with bipolar affective disorder killed themselves during the period. The absolute risks here are substantial; for the ‘any mental illness’ group, one in 12 committed suicide during the period. Although the female numbers are substantially lower for the group as a whole, for some illnesses the absolute risk is comparable to that of the males (and the excess risk much, much higher). More than one in ten females with schizophrenia and a suicide attempt in the past committed suicide during the follow-up period.
I should perhaps mention here that there may be some significant tail risk unaccounted for in the data, despite the long follow-up period which might lead you to think these are good estimates of the ‘lifetime probability of suicide’. The suicide-rate of Danish males above the age of 85 is the highest of all age groups, and it’s five times as high as the suicide risk of males at the age of 25-29 (Danish link). This is not just a Danish thing – similar dynamics have been observed elsewhere. Age matters a lot here, but people tend to care less when old people kill themselves than when young people do.
Did I ever blog any of these at some point? I’ve seen a couple of them before so I’m not sure, but I couldn’t find them in the archives and so I decided to post them here. I spent yesterday evening in great company:
i. Temporal view of the costs and benefits of self-deception, by Chance, Norton, Gino, and Ariely. The abstract:
“Researchers have documented many cases in which individuals rationalize their regrettable actions. Four experiments examine situations in which people go beyond merely explaining away their misconduct to actively deceiving themselves. We find that those who exploit opportunities to cheat on tests are likely to engage in self-deception, inferring that their elevated performance is a sign of intelligence. This short-term psychological benefit of self-deception, however, can come with longer-term costs: when predicting future performance, participants expect to perform equally well—a lack of awareness that persists even when these inflated expectations prove costly. We show that although people expect to cheat, they do not foresee self-deception, and that factors that reinforce the benefits of cheating enhance self-deception. More broadly, the findings of these experiments offer evidence that debates about the relative costs and benefits of self-deception are informed by adopting a temporal view that assesses the cumulative impact of self-deception over time.”
A bit more from the paper:
“People often rationalize their questionable behavior in an effort to maintain a positive view of themselves. We show that, beyond merely sweeping transgressions under the psychological rug, people can use the positive outcomes resulting from negative behavior to enhance their opinions of themselves—a mistake that can prove costly in the long run. We capture this form of self-deception in a series of laboratory experiments in which we give some people the opportunity to perform well on an initial test by allowing them access to the answers. We then examine whether the participants accurately attribute their inflated scores to having seen the answers, or whether they deceive themselves into believing that their high scores reflect new-found intelligence, and therefore expect to perform similarly well on future tests without the answer key.
Previous theorists have modeled self-deception after interpersonal deception, proposing that self-deception—one part of the self deceiving another part of the self—evolved in the service of deceiving others, since a lie can be harder to detect if the liar believes it to be true (1, 2). This interpersonal account reflects the calculated nature of lying; the liar is assumed to balance the immediate advantages of deceit against the risk of subsequent exposure. For example, people frequently lie in matchmaking contexts by exaggerating their own physical attributes, and though such deception might initially prove beneficial in convincing an attractive prospect to meet for coffee, the ensuing disenchantment during that rendezvous demonstrates the risks (3, 4). Thus, the benefits of deceiving others (e.g., getting a date, getting a job) often accrue in the short term, and the costs of deception (e.g., rejection, punishment) accrue over time.
The relative costs and benefits of self-deception, however, are less clear, and have spurred a theoretical debate across disciplines (5–10). […]
As we had expected, social recognition exacerbated self-deception: those who were commended for their answers-aided performance were even more likely to inflate their beliefs about their subsequent performance. The fact that social recognition, which so often accompanies self-deception in the real world, enhances self-deception has troubling implications for the prevalence and magnitude of self-deception in everyday life.”
ii. Nonverbal Communication, by Albert Mehrabian. Some time ago I decided that I wanted to know more about this stuff, but I haven’t really gotten around to it until now. It’s old stuff, but it’s quite interesting. Some quotes:
“The work of Condon and Ogston (1966, 1967) has dealt with the synchronous relations of a speaker’s verbal cues to his own and his addressee’s nonverbal behaviors. One implication of their work is the existence of a kind of coactive regulation of communicator-addressee behaviors which is an intrinsic part of social interaction and which is certainly not exhausted through a consideration of speech alone. Kendon (1967a) recognized these and other functions that are also served by implicit behaviors, particularly eye contact. He noted that looking at another person helps in getting information about how that person is behaving (that is, to monitor), in regulating the initiation and termination of speech, and in conveying emotionality or intimacy. With regard to the regulatory function, Kendon’s (1967a) findings showed that when the speaker and his listener are about to change roles, the speaker looks in the direction of his listener as he stops talking, and his listener in turn looks away as he starts speaking. Further, when speech is fluent, the speaker looks more in the direction of his listener than when his speech is disrupted with errors and hesitations. Looking away during these awkward moments implies recognition by the speaker that he has less to say, and is demanding less attention from his listener. It also provides the speaker with some relief to organize his thoughts.
The concept of regulation has also been studied by Scheflen (1964, 1965). According to him, a communicator may use changes in posture, eye contact, or position to indicate that (1) he is about to make a new point, (2) he is assuming an attitude relative to several points being made by himself or his addressee, or (3) he wishes to temporarily remove himself from the communication situation, as would be the case if he were to select a great distance from the addressee or begin to turn his back on him. There are many interesting aspects of this regulative function of nonverbal cues that have been dealt with only informally. […]
One of the first attempts for a more general characterization of the referents of implicit behavior and, therefore, possibly of the behaviors themselves, was made by Schlosberg (1954). He suggested a three-dimensional framework involving pleasantness-unpleasantness, sleep-tension, and attention-rejection. Any feeling could be assigned a value on each of these three dimensions, and different feelings would correspond to different points in this three-dimensional space. This shift away from the study of isolated feelings and their corresponding nonverbal cues and toward a characterization of the general referents of nonverbal behavior on a limited set of dimensions was seen as beneficial. It was hoped that it could aid in the identification of large classes of interrelated nonverbal behaviors.
Recent factor-analytic work by Williams and Sundene (1965) and Osgood (1966) provided further impetus for characterizing the referents of implicit behavior in terms of a limited set of dimensions. Williams and Sundene (1965) found that facial, vocal, or facial-vocal cues can be categorized primarily in terms of three orthogonal factors: general evaluation, social control, and activity.
For facial expression of emotion, Osgood (1966) suggested the following dimensions as primary referents: pleasantness (joy and glee versus dread and anxiety), control (annoyance, disgust, contempt, scorn, and loathing versus dismay, bewilderment, surprise, amazement, and excitement), and activation (sullen anger, rage, disgust, scorn, and loathing versus despair, pity, dreamy sadness, boredom, quiet pleasure, complacency, and adoration). […]
Scheflen (1964, 1965, 1966) provided detailed observations of an informal quality on the significance of postures and positions in interpersonal situations. Along similar lines, Kendon (1967a) and Exline and his colleagues explored the many-faceted significance of eye contact with, or observation of, another […] These investigations consistently found, among same-sexed pairs of communicators, that females generally had more eye contact with each other than did males; also, members of both sexes had less eye contact with one another when the interaction between them was aversive […] In generally positive exchanges, males had a tendency to decrease their eye contact over a period of time, whereas females tended to increase it (Exline and Winters, 1965). […]
extensive data provided by Kendon (1967a) showed that observation of another person during a social exchange varied from about 30 per cent to 70 per cent, and that corresponding figures for eye contact ranged from 10 per cent to 40 per cent. […]
Physical proximity, touching, eye contact, a forward lean rather than a reclining position, and an orientation of the torso toward rather than away from an addressee have all been found to communicate a more positive attitude toward him. A second set of cues that indicates postural relaxation includes asymmetrical placement of the limbs, a sideways lean and/or reclining position by the seated communicator, and specific relaxation measures of the hands or neck. This second set of cues relates primarily to status differences between the communicator and his addressee: there is more relaxation with an addressee of lower status, and less relaxation with one of higher status. […]
In sum, the findings from studies of posture and position and subtle variations in verbal statements […] show that immediacy cues primarily denote evaluation, and postural relaxation cues denote status or potency in a relationship. It is interesting to note a weaker effect: less relaxation of one’s posture also conveys a more positive attitude toward another. One way to interpret this overlap of the referential significance of less relaxation and more immediacy in communicating a more positive feeling is in terms of the implied positive connotations of higher status in our culture. A respectful attitude (that is, when one conveys that the other is of higher status) does indeed have implied positive connotations. Therefore it is not surprising that the communication of respect and of positive attitude exhibits some similarity in the nonverbal cues that they require. However, whereas the communication of liking is more heavily weighted by variations in immediacy, that of respect is weighted more by variations in relaxation.”
I should probably note here that whereas it makes a lot of sense to be skeptical of some of the reported findings in the book, simply to get an awareness of some of the key variables and some proposed dynamics may actually be helpful. I don’t know how deficient I am in these areas because I haven’t really given body language and similar stuff much thought; I assume most people haven’t/don’t, but I may be mistaken.
iii. A friend let me know about this resource and I thought I should share it here. It’s a collection of free online courses/lectures provided by Yale University.
iv. Prevalence, Heritability, and Prospective Risk Factors for Anorexia Nervosa. It’s a pretty neat setup: “During a 4-year period ending in 2002, all living, contactable, interviewable, and consenting twins in the Swedish Twin Registry (N = 31 406) born between January 1, 1935, and December 31, 1958, underwent screening for a range of disorders, including AN. Information collected systematically in 1972 to 1973, before the onset of AN, was used to examine prospective risk factors for AN.”
“Results The overall prevalence of AN was 1.20% and 0.29% for female and male participants, respectively. The prevalence of AN in both sexes was greater among those born after 1945. Individuals with lifetime AN reported lower body mass index, greater physical activity, and better health satisfaction than those without lifetime AN. […]
This study represents, to our knowledge, the largest twin study conducted to date of individuals with rigorously diagnosed AN. Our results confirm and extend the findings of previous studies on prevalence, risk factors, and heritability.
Consistent with several studies, the lifetime prevalence of AN identified by all sources was 1.20% in female participants and 0.29% in male participants, reflecting the typically observed disproportionate sex ratio. Similarly, our data show a clear increase in prevalence of DSM-IV AN (broadly and narrowly defined) with historical time in Swedish twins. The increase was apparent for both sexes. Hoek and van Hoeken3 also reported a consistent increase in prevalence, with a leveling out of the trajectory around the 1970s. Future studies in younger STR participants will allow verification of this observation.
Several observed differences between individuals with and without AN were expected, ie, more frequent endorsement of symptoms of eating disorders. Other differences are noteworthy. Consistent with previous observations, individuals with lifetime AN reported lower BMIs at the time of interview than did individuals with no history of AN. Although this could be partially accounted for by the presence of currently symptomatic individuals in the sample, our results remained unchanged when we excluded individuals likely to have current AN (ie, current BMI, ≤17.5). Previous studies have shown that, even after recovery, individuals with a history of AN have a low BMI.59 Although perhaps obvious, a history of AN appears to offer protection against becoming overweight. The protective effect also holds for obesity (BMI, ≥30), although there were too few individuals in the sample with histories of AN who had become obese for meaningful analyses. Despite the obvious nature of this observation, the mechanism whereby protection against overweight is afforded is not immediately clear. Those with a history of AN reported greater current exercise and a perception of being in better physical health. One possible interpretation of this pattern of findings is that individuals with a history of AN continue to display subthreshold symptoms of AN (ie, excessive exercise and caloric restriction) that contribute to their low BMIs. Alternatively, symptoms that were pathologic during acute phases of AN, such as excessive exercise and decreased caloric intake, may resolve over time into healthy behaviors, such as consistent exercise patterns and a healthful diet, that result in better weight control and self-rated health.
Regardless of which of these hypotheses is true, another intriguing difference is that individuals with lifetime AN report a lower age at highest BMI, although the magnitude of the highest lifetime BMI does not differ in those with and without a history of AN. Those with AN report their highest lifetime BMIs early in their fourth decade of life on average, whereas those without AN report their highest BMIs in the middle of their fifth decade of life (close to the age at interview). On a population level, adults tend to gain on average 2.25 kg (5 lb) per decade until reaching their eighth decade of life.60 Although more detailed data are necessary to make definitive statements about different weight trajectories, our results suggest not only that individuals with AN may maintain low BMIs but also that they may not follow the typical adult weight gain trajectories. These data are particularly intriguing in light of recent reports of AN being associated with reduced risk of certain cancers61 – 62 and protective against mortality due to diseases of the circulatory system.63 – 64 Energy intake is closely related to fat intake and obesity, both of which have also been related to cancer development65 – 66 and both of which are reduced in AN. Further detailed studies of the weight trajectories and health of individuals with histories of AN are required to explicate the nature and magnitude of these intriguing findings.
Of the variables assessed in 1972 to 1973, neuroticism emerged as the only significant prospective predictor of AN. This is notable because there have been few truly prospective risk factor studies of AN.”
v. The music is a bit much for me towards the end, but this is just an awesome video. I think I’d really have liked to know that guy:
vi. Political Sorting in Social Relationships: Evidence from an Online Dating Community, by Huber and Malhotra.
I found these data surprising (and I’m skeptical about the latter finding):
“Among paid content, online dating is the third largest driver of Internet traffic behind music and games (Jupiter Research 2011). A substantial number of marriages also result from interactions started online. For instance, a Harris Interactive study conducted in 2007 found that 2% of U.S. marriages could be traced back to relationships formed on eHarmony.com, a single online dating site (Bialik 2009).”
Anyway I’ll just post some data/results below and leave out the discussion (click to view tables in full size). Note that there are a lot of significant results here:
The last few figures are also interesting (people really care about that black/white thing when they date (online)…), but you can go have a look for yourself. As I’ve already mentioned there are a lot of significant results – they had a huge amount of data to work with (170,413 men and 132,081 women).
I recently found this gem on youtube and I thought I should share it:
From this WHO paper. It has 254 pages and I haven’t read them all – neither should you, as a lot of them are just pages of data. Anyway, some more stuff from the paper (click to view graphs and tables in full size):
“37 of the 40 countries with the lowest life expectancy are in Sub-Saharan Africa. HIV/AIDS is a major cause of the poor performance of many African countries in terms of health gains over the last decade or so. Overall, life expectancy in Sub-Saharan Africa has declined by 3-5 years in the 1990s due to increasing mortality from HIV/AIDS, with the estimated loss reaching 15-20 years in countries such as Botswana, Zimbabwe and Zambia.” [my emphasis] […]
“Of the 10.5 million deaths below age 5 estimated to have occurred in 1999, 99% of them were in developing regions (3). The probability of child death (5q0) is typically less than 1% in industrialized countries classified into the A Regional Strata (and 0.5% in Japan), but rises to 300-350 per 1000 in Niger and Sierra Leone. Levels of child mortality well in excess of 10% (100 per 1000) are still common throughout Africa and in parts of Asia (Mongolia, Cambodia, Laos, Afghanistan, Bhutan, Myanmar, Bangladesh and Nepal).
However, perhaps the widest disparities in mortality occur at the adult ages 15-59 years. In some Southern African countries such as Zimbabwe, Zambia and Botswana, where HIV/AIDS is now a major public health problem, 70% or more of adults who survive to age 15 can be expected to die before age 60 on current mortality rates [in the late 80s, the number for Zimbabwe was 15-20%, see p.25 – US]. In several others (e.g. Malawi, Namibia and Uganda) the risk exceeds 60%. The dramatic increase in 45q15 in South Africa is also noteworthy, with estimated levels of 601 per 1000 and 533 per 1000 for males and females respectively in 1999. At the other extreme, 45q15 levels of 90-100 per 1000 are common in most developed countries for men, with risks as low as half this again for women. […] HIV/AIDS was the cause of about 2.2 million deaths in Africa in 1999, making it by far the leading cause of death on the continent.”
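Indicators like 45q15 (the probability of dying between exact ages 15 and 60) are built by chaining single-year survival probabilities; a minimal sketch of that life-table arithmetic, where the flat 2.6% annual death probability is a made-up illustration rather than a figure from the paper:

```python
# 45q15: probability of dying between exact ages 15 and 60, built by
# multiplying single-year survival probabilities (1 - q_x) and taking
# the complement.
def q_45_15(qx_by_age: dict[int, float]) -> float:
    survival = 1.0
    for age in range(15, 60):
        survival *= 1 - qx_by_age[age]
    return 1 - survival

# Toy input: a flat 2.6% annual death probability at every adult age
# already yields roughly the ~70% adult mortality quoted for the
# worst-affected countries.
flat = {age: 0.026 for age in range(15, 60)}
print(round(q_45_15(flat), 3))
```

The point of the sketch is just that even a seemingly modest annual death probability compounds dramatically over 45 years.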
There’s a lot of variation in mortality rates:
…and Africa is not the only region that’s doing badly: “The extraordinary risks of premature adult death among men in Eastern Europe are also clear from the Figure (EUR C Region), with more than 1 in 3 who survive to age 15 in this Region likely to die before reaching age 60, at current risks compared with 10-12% in Western Europe, Japan and Australia.”
“Globally, some 56 million people are estimated to have died in 1999, 10.5 million below age five years. More males (29 million) than females (27 million) died, reflecting the systematically higher death rates for males at all ages in almost all countries. […] Worldwide, deaths at ages 15-59 in 1999 amounted to an estimated 15.5 million (9 million males, 6.5 million females), but with wide uncertainty. By any definition, these deaths (28% of the total over all ages) must be considered premature.”
The Danish life tables are at page 112 and I decided to post them below. The US life tables are at page 245. More fine-grained and newer US data are also available here.
Which variables are reported above? Well: “For each age, estimates of central death rates (nMx), the probability of dying (nqx), number of survivors (lx), and expectation of life (ex) are shown.” (p. 19) I didn’t have a clue what the ‘central death rate’ is but luckily one can look that kind of stuff up:
“For a given population or cohort, the central death rate at age x during a given period of 12 months is found by dividing the number of people who died during this period while aged x (that is, after they had reached the exact age x but before reached the exact age x+1) by the average number who were living in that age group during the period.”
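As a sketch of the difference between the central death rate and the probability of dying (the deaths and population figures below are made up for illustration, not taken from the life tables):

```python
# Central death rate m_x: deaths while aged x during a 12-month period,
# divided by the average (mid-year) number of people living at age x.
def central_death_rate(deaths: float, avg_population: float) -> float:
    return deaths / avg_population

# Under the usual assumption that deaths fall evenly over the year, the
# probability of dying q_x relates to m_x via q_x = m_x / (1 + m_x / 2):
# the denominator of q_x is the (larger) population at the start of the
# year, so q_x comes out slightly below m_x.
def prob_of_dying(m_x: float) -> float:
    return m_x / (1 + m_x / 2)

m = central_death_rate(deaths=50, avg_population=10_000)
print(m, prob_of_dying(m))  # q_x is slightly below m_x
```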
Do remember when looking at numbers such as these that it’s not just about how long you live – how you die matters a great deal.
File under: Stuff you probably didn’t know about that actually matters a great deal.
“Generation of electricity using coal started at the end of the 19th century. The first power stations had an efficiency of around 1%, and needed 12.3 kg of coal for the generation of 1 kWh. […] With increasing experience, in combination with research and development, these low efficiency levels improved rapidly. Increased technical experience with coal processing and combustion technology enabled a steady increase in the steam parameters ‘pressure’ and ‘temperature’, resulting in higher efficiency. By the 1910s, efficiency had already increased to 5%, reaching 20% by 1920. In the fifties, power plants achieved 30% efficiency, but the average efficiency of all operating power plants was still a modest 17%. […] continuous development resulted around the mid-80s in an average efficiency of 38% for all power stations, and best values of 43%. In the second half of the nineties, a Danish power plant set a world record at 47%. […] The average efficiency of all coal power stations in the world is around 31%. […] In the next 10 years [the paper is from 2005, US], efficiencies up to 55% can be expected.” […]
Often, the question is asked why the ‘other 45%’ cannot be converted into electricity. This relates to the laws of physics: the absolute maximum efficiency is the so-called ‘Carnot efficiency’. For a turbine operating with gases of 600°C, it is 67%. Then we need to take into account the exergy content of steam (around 94%). Also combustion has an efficiency less than 100% (around 95%). The transfer of combustion heat to steam in the boiler is for example 96% efficient. Losses due to friction can be around 5% (efficiency 95%). The efficiency of a generator is about 98% on average…
To obtain the combined efficiency, one needs to multiply the efficiency of each process. Taking the above mentioned components, one obtains 0.67 x 0.94 x 0.95 x 0.96 x 0.95 x 0.98 = 0.535 or 53.5%.
This does not yet take into account the efficiency of all components. The power station’s own power use for motors to grind coal, pumps, ventilators, … further reduces efficiency. In practice, net efficiency will be between 40 and 45%. Continuous load changes, i.e. following the load, and start-up/shutdown procedures further lower efficiency. The increasing variability of the load, through increased use of intermittent sources such as wind, will lead to increased swings in the load of the power station, reducing efficiency.”
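The arithmetic in the quote is easy to check, and the 67% Carnot figure itself follows from the turbine temperature. A small sketch (the ~15°C cold-side temperature is my assumption, not from the paper):

```python
# Reproducing the combined-efficiency arithmetic from the quote above.
# Carnot efficiency: 1 - T_cold / T_hot, with temperatures in kelvin.
T_hot = 600 + 273.15      # 600 °C turbine gas temperature from the quote
T_cold = 288.15           # ~15 °C ambient (my assumption, not the paper's)
carnot = 1 - T_cold / T_hot
print(f"Carnot limit: {carnot:.2f}")          # ≈ 0.67, matching the quote

# Stage efficiencies from the quote; the overall efficiency is their
# product because the stages operate in series.
stages = {
    "Carnot limit": 0.67,
    "exergy content of steam": 0.94,
    "combustion": 0.95,
    "boiler heat transfer": 0.96,
    "friction": 0.95,
    "generator": 0.98,
}
overall = 1.0
for eta in stages.values():
    overall *= eta
print(f"combined efficiency: {overall:.3f}")  # 0.535, i.e. the 53.5% above
```

The multiplication is the key point: because the losses compound, even six individually modest inefficiencies pull the ceiling down from 67% to about 53%, before the plant's own power use is subtracted.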
ii. Allostatic load as a marker of cumulative biological risk: MacArthur studies of successful aging. From the abstract:
“Allostatic load (AL) has been proposed as a new conceptualization of cumulative biological burden exacted on the body through attempts to adapt to life’s demands. Using a multisystem summary measure of AL, we evaluated its capacity to predict four categories of health outcomes, 7 years after a baseline survey of 1,189 men and women age 70–79. Higher baseline AL scores were associated with significantly increased risk for 7-year mortality as well as declines in cognitive and physical functioning and were marginally associated with incident cardiovascular disease events, independent of standard socio-demographic characteristics and baseline health status. The summary AL measure was based on 10 parameters of biological functioning, four of which are primary mediators in the cascade from perceived challenges to downstream health outcomes. Six of the components are secondary mediators reflecting primarily components of the metabolic syndrome (syndrome X). AL was a better predictor of mortality and decline in physical functioning than either the syndrome X or primary mediator components alone. The findings support the concept of AL as a measure of cumulative biological burden.
In elderly populations, comorbidity in the form of multiple co-occurring chronic conditions is the norm rather than the exception. For example, in the U.S. 61% of women and 47% of men age 70–79 report two or more chronic conditions. These figures rise to 70% of women and 53% of men age 80–89 with 2+ chronic conditions (1). No single form of comorbidity occurs with high frequency, but rather a multiplicity of diverse combinations are observed (e.g., osteoarthritis and diabetes, colon cancer, coronary heart disease, depression, and hypertension). This diversity underscores the need for an early warning system of biomarkers that can signal early signs of dysregulation across multiple physiological systems.
One response to this challenge was the introduction of the concept of allostatic load (AL) (2–4) as a measure of the cumulative physiological burden exacted on the body through attempts to adapt to life’s demands. The ability to successfully adapt to challenges has been referred to by Sterling and Eyer (5) as allostasis. This notion emphasizes the physiological imperative that, to survive, “an organism must vary parameters of its internal milieu and match them appropriately to environmental demands” (5). When the adaptive responses to challenge lie chronically outside of normal operating ranges, wear and tear on regulatory systems occurs and AL accumulates.”
They conclude that: “The analyses completed to date suggest that the concept of AL offers considerable insight into the cumulative risks to health from biological dysregulation across multiple regulatory systems.” I haven’t come across the concept before but I’ll try to keep it in mind. There’s a lot of stuff on this.
“a few years ago, I learned that it’s actually pretty common to survive a plane crash. Like most people, I’d assumed that the safety in flying came from how seldom accidents happened. Once you were in a crash situation, though, I figured you were probably screwed. But that’s not the case.
Looking at all the commercial airline accidents between 1983 and 2000, the National Transportation Safety Board found that 95.7% of the people involved survived. Even when they narrowed down to look at only the worst accidents, the overall survival rate was 76.6%. Yes, some plane crashes kill everyone on board. But those aren’t the norm. So you’re even safer than you think. Not only are crashes incredibly rare, you’re more likely to survive a crash than not. In fact, out of 568 accidents during those 17 years, only 71 resulted in any fatalities at all.”
iv. Now that we’re talking about planes: What does an airplane actually cost? Here’s one article on the subject:
“As for actual prices, airlines occasionally let numbers slip, either because of disclosure requirements or loose tongues.
Southwest Airlines Co., for example, recently published numbers related to its new order for Boeing 737 Max jetliners in a government filing. Mr. Liebowitz of Wells Fargo crunched the data and estimated an actual base price of roughly $35 million per plane, or a discount of around 64%. He noted that Southwest is one of Boeing’s best customers and that early buyers of new models get preferential pricing. A Southwest spokeswoman declined to comment.
Air India, in seeking funding last year for seven Boeing 787 Dreamliners it expects to receive this year, cited an average “net cost” of about $110 million per plane. The current list price is roughly $194 million, suggesting a 43% discount. Air India didn’t respond to a request for comment for this article.
In March 2011, Russian flag carrier Aeroflot mentioned in a securities filing that it would pay at most $1.16 billion for eight Boeing 777s…”
100+ million dollars for a plane. I had not seen that one coming. File under: Questions people don’t seem to be asking, which I think is sort of weird. Now that we’re at it, what about trains? Here’s a Danish article about our new IC4-trains. A conservative estimate is $1.09 billion (6.4 billion kroner) for 83 trains, which is ~$13.2 million per train (or rather per trainset, to use the US terminology), or ~77 million Danish kroner. That’s much cheaper than the big airplanes, but it sure is a lot of money. What about busses? I’ve often thought about this one, perhaps because it’s a mode of transportation I use far more frequently than the others. Here’s one bit of information about the situation in the US, which is surely different from the Danish one but not that different:
“Diesel buses are the most common type of bus in the United States, and they cost around $300,000 per vehicle, although a recent purchase by the Chicago Transit Authority found them paying almost $600,000 per diesel bus. Buses powered by natural gas are becoming more popular, and they cost about $30,000 more per bus than diesels do. Los Angeles Metro recently spent $400,000 per standard size bus and $670,000 per 45-foot bus running on natural gas.
Hybrid buses, which combine a gasoline or diesel engine with an electric motor much like a Toyota Prius, are much more expensive than either natural gas or diesel buses. Typically, they cost around $500,000 per bus with Greensboro, NC’s transit system spending $714,000 per vehicle.”
So of course you can’t actually compare these things this way because of the different ways the costs are calculated, but let’s just for fun assume you can: When you use the average price of a standard US diesel bus and compare it to the price of the recently bought Danish trains, the conclusion is that you could buy 44 busses for the price of one train. And you could buy 367 busses for the price of one of the Dreamliners.
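The back-of-the-envelope arithmetic behind those ratios can be sketched as follows (the kroner-per-dollar rate is the one implied by the article's own figures, and the prices are the ones quoted above – again, not really comparable in practice):

```python
# Back-of-the-envelope vehicle cost comparison using the figures quoted
# above (list prices, net costs and estimates - not directly comparable).
DKK_PER_USD = 5.87        # implied by the article's $1.09bn ≈ 6.4bn kroner

cost_usd = {
    "IC4 trainset": 6.4e9 / 83 / DKK_PER_USD,        # ~$13 million each
    "Boeing 787 (Air India net cost)": 110e6,
    "standard US diesel bus": 300e3,
}

bus = cost_usd["standard US diesel bus"]
print(f"busses per trainset: {cost_usd['IC4 trainset'] / bus:.0f}")
print(f"busses per Dreamliner: {cost_usd['Boeing 787 (Air India net cost)'] / bus:.0f}")
```

Which reproduces the 44 and 367 figures in the text.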
v. A new blog you might like: Collectively Unconscious. A sort of ‘The Onion’ type science-blog.
vi. I was considering including this stuff in a Wikipedia post, but I thought I’d include it here instead because what’s interesting is not the articles themselves but rather their differences: Try to compare this English-language article, about a flame tank designed in the United States, with this article about the same tank but written in Russian. I thought ‘this is weird’ – anybody have a good explanation for this state of affairs?
vii. The Emergence and Representation of Knowledge about Social and Nonsocial Hierarchies. I haven’t found an ungated version of the paper, but here’s the summary:
“Primates are remarkably adept at ranking each other within social hierarchies, a capacity that is critical to successful group living. Surprisingly little, however, is understood about the neurobiology underlying this quintessential aspect of primate cognition. In our experiment, participants first acquired knowledge about a social and a nonsocial hierarchy and then used this information to guide investment decisions. We found that neural activity in the amygdala tracked the development of knowledge about a social, but not a nonsocial, hierarchy. Further, structural variations in amygdala gray matter volume accounted for interindividual differences in social transitivity performance. Finally, the amygdala expressed a neural signal selectively coding for social rank, whose robustness predicted the influence of rank on participants’ investment decisions. In contrast, we observed that the linear structure of both social and nonsocial hierarchies was represented at a neural level in the hippocampus. Our study implicates the amygdala in the emergence and representation of knowledge about social hierarchies and distinguishes the domain-general contribution of the hippocampus.”
I’ve only actually watched the first 15 minutes (and I’m not sure I’ll watch the rest), but I assume some of you will find this interesting.
If you don’t know what I’m talking about, here’s the introduction.
I haven’t done as many sessions as I’d have liked, but at this point n is equal to 50 so I figured I might as well give you a scatter plot with the performance data so far:
Without the 2100+ performance at 17 mmol/l (the far right data point) R^2 would be 0.1463 – so n is still way too low to draw any conclusions. Perhaps aside from the fact that I don’t think the pattern looks completely random.
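To illustrate how much a single extreme point can matter at this sample size, here's a sketch with synthetic data – not my actual session data, just 50 invented points constructed to have exactly zero correlation, plus one high-leverage point far to the right (the scales loosely echo the rating/mmol-l axes but the numbers are made up):

```python
# Illustration (synthetic data, NOT the data behind the scatter plot)
# of how a single high-leverage point can dominate R^2 at small n.

def r_squared(xs, ys):
    """R^2 of a simple least-squares line fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                       # fitted slope
    a = my - b * mx                     # fitted intercept
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# 50 points constructed so x and y are exactly uncorrelated...
xs = [x for x in (5, 6, 7, 8, 9) for _ in range(10)]
ys = [1975, 2025] * 25
# ...plus one extreme point far to the right.
xs.append(17)
ys.append(2150)

print(f"R^2 with the extreme point:    {r_squared(xs, ys):.3f}")
print(f"R^2 without the extreme point: {r_squared(xs[:-1], ys[:-1]):.3f}")
```

Here one point takes R^2 from exactly zero to about 0.2 – which is why dropping that single far-right data point moves the fit so much in my plot too.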
I’ve become aware of the fact that there are just loads of omitted variables here (nearby road work done with extensive use of pneumatic drills being one of the major ones in the beginning) and it would take a lot of data to take them all into account.
I’ve also realized by now that the tactics trainer performance is not a super great tool to pick up on variation in mental ability, though I maintain it’s not completely crazy to use it as a proxy. A significant number of the problems during a session are either repeats or quite similar to other problems solved in the past, and I remember those patterns just as well with a high blood glucose as with a lower one. So most of the variation in performance is around a set baseline, and how much I deviate from that baseline depends on how many ‘new’ problems – where I actually do have to think a bit – are introduced during a session. My performance is quite sensitive to the type of problems presented during a session and the degree to which new problems/themes are introduced – the performance can easily vary by 200 points or more if I do two sessions ten minutes apart.
So I thought about this stuff a while ago while I was out for a walk, and I decided back then that I should blog it when I got home. When I did get home I’d forgotten all about it (it was a long walk). Today I was out walking again, and well…
Okay, so let’s assume a job interviewer asks you how you’d feel about working with X, X being the kind of stuff you could be expected to work with in the job function in question. The obvious answer to many people would be ‘I’d feel great about working with X, I’d be very excited to have that opportunity’ or something along those lines. Though ‘it’s what I’ve dreamt of my entire life’ is probably an unwise reply in some situations (desk clerk, bouncer, renovation worker..), in general it seems obvious that it makes a lot of sense to fake interest and excitement in such a situation; this is because such an approach is usually perceived to make you more likely to land the job.
But why is that again? Let’s think a little bit about the signalling aspects here. People who are intrinsically motivated need lower monetary compensation rates to motivate them to do their jobs than do people who are not; they’ll be happy with a lower wage because they like what they do, and if they really like what they do they’re less likely to complain about stuff like e.g. a poor work environment. So if you signal that you’re eager to work with this stuff, you signal that you have a lower reservation wage. This makes you more likely to land the job if you’re perceived to meet the task requirements, but the deceit should in equilibrium affect the employer’s expectations about your productivity – people who have lower reservation wages are all else equal less productive. On the other hand perhaps the reason why you’re eager is that you know a lot about the subject, which means that all else isn’t equal and that your interest might lead to higher productivity on the job or lower training costs. Depending on the specifics there are likely multiple optimal strategies here; and it’s worth having in mind that individual characteristics are highly likely to impact which strategy is optimal for a given individual in a given setting.
Now consider another variable that’s likely to come up in a job interview setting: Ambition. Again people are often implicitly encouraged to fake ambition because it’s perceived in some areas (though far from all) to increase their employment opportunities. If you’re ambitious you’re willing to work harder than the other guy. If you’re ambitious this means you care about the social hierarchy in the organisation, and if you care about that stuff you’ll be more likely to follow the instructions you’re given, which is often a useful ability for an employee to possess. If you’re ambitious you’re probably likely to be willing to do a lot of extra stuff to impress the people above you so that you can rise in the social hierarchy, which corresponds to working harder for a lower level of monetary compensation. On the other hand some employers prefer to limit the competition for the management spots by selecting people who are not ‘too ambitious’ for a given job function. And if a vacancy is created for a job function where it’s unlikely that a satisfactory performance will lead to further advancement in the organisational hierarchy, an employer may prefer an unambitious applicant, as he or she is less likely to become disgruntled by the absence of career advancement opportunities. Ambitious people are incidentally quite likely to be perceived as more aggressive than their unambitious counterparts, which also translates to higher expected wage demands (for the same amount of work).
If you’re perceived to be dishonest about your goals or attributes to a greater extent than is tolerated in such situations this will most likely harm your opportunities greatly, but it’s worth noting that the tolerated level of dishonesty may vary a lot across organisations. Note that organisations always have an incentive to create the illusion that honesty is your best bet at a job interview; that’s because it’s the best bet for the organisation, i.e. the strategy which, if applied by all applicants, would give the organisation the highest potential payoff. This is because if all applicants supply all the decision-relevant information to the organisation, this will make the organisation most likely to be able to pick the best applicant for the job. But here’s the thing; the organisational payoff should at the point where you’re not yet hired by the organisation be irrelevant to you. You don’t care about the organisational payoff at the job interview stage, at this stage you only care about your likelihood of landing the job and the expected pay; withholding information will most frequently be optimal if that information might make you less likely to land the job or likely to earn less. Please do not assume that just because firms implicitly punish deceit, complete honesty is the best strategy for you – in most settings, it’ll likely be a stochastically dominated strategy. On the other hand if you have to grossly misrepresent who you are in order to land the job, the expected derived utility from landing the job probably isn’t as high as you think it is; the employer is not the only one who should care about whether you’re a good match for the job. The optimal amount of deceit is non-zero, but the risk of getting the wrong job should be weighed against the risk of not getting the job. 
When deciding on the optimal level of deceit do recall that the firm may have an incentive to withhold information from you as well, either by lying to you about which types of information that are important to them when it comes to whom to hire (in order to stop people from trying to game the system and weed out dishonest candidates), by misrepresenting the career opportunities associated with the job (if applicants think the job is high-profile and is likely to increase their future job market opportunities, they’ll likely decrease their wage demands because of the human capital investment value of the job), or perhaps by misrepresenting to some extent what you’ll actually be doing when you get the job (bait-and-switch type strategies are likely sometimes optimal, because it can lead to lower wage demands).
Like in romantic settings, displaying a low level of self-confidence is likely sub-optimal here. If you can’t convince yourself you’re the applicant they should pick, this is a great example of the kind of information you should be trying to hide from them. Don’t give the people involved the impression that you’re doing them a favour by showing up to the interview. Most of the people who go to an interview don’t get the job, and from a certain point of view the firm you’re interviewing with is quite likely to simply be wasting your time.
I decided to do a post with mostly just chamber stuff – I don’t think I’ve done that before (and I figured if I included ‘a Hamelin’ as well, people who don’t like chamber music should still be able to appreciate some of the stuff in the post – because, you know, Hamelin is awesome..).
The rest is below the fold…
The data included in this post are from Statistics Denmark, Statistikbanken – “KRHFU1: Befolkningens højeste fuldførte uddannelse (15-69 år) efter område, herkomst, uddannelse alder og køn.” I had a look at the documentation and most of the data are registry data, but the data on immigrants are survey-based (and thus less reliable) – no surveys have been conducted since 2006, so all immigrants who’ve arrived since then have an ‘unknown education level’ in the data. If you’re more curious about that subject I presented some other, much more detailed, data on the same topic a while ago here. If you disaggregate the data on immigrants and descendants, the image looks worse than it does here because descendants of Western immigrants have a different age profile than do descendants of non-Western immigrants – one third of descendants of Western immigrants are above the age of 30, whereas only 6% of non-Western descendants are that old (also, only 10% of all descendants in Denmark have reached the age of 30). Another aspect adding to the confusion is that Western immigrants are quite well educated – if you look at ‘Danish immigrants’ as a single group, you’re basically mixing data drawn from two completely different distributions.
If you want to know more about the Danish education system this link may be of some use (note that there’s a lot of additional stuff in the sidebar there). Regarding the higher education stuff, a short-cycle higher education is at most two years long, a medium-cycle higher education is 2-4 years long. The latter differs from a ‘standard’ Danish bachelor’s degree: “Professionally oriented higher education programmes are offered at colleges. Whereas in other countries, similar programmes may be offered by universities, in Denmark they have traditionally been offered by specialised colleges” (from the link). Long-cycle higher education corresponds to a Master’s degree. Click to view graphs in full size. All data reported are from the 2012 data sets. There aren’t all that many data included in this post, but part of the reason is that the source only gave the raw data – not percentage stuff (which is much more informative) – and variable transformations take time. Anyway…
So, let’s have a look at the data… I figured it’d be interesting to look at the first cohort of ‘young people’ (30-34) consisting of people about whom we can say with relative certainty that almost all of them have completed their formal education. I’ll start out with the males:
Descendant n is not that high compared to the other groups, but I think it’s ‘high enough to draw conclusions’ from the data (n=3041). Descendants are more than twice as likely to not get any education aside from grundskole than are people of Danish origin. What about the females?
Females are more likely to get an education so the numbers generally look better. When including all groups, 17.7% of males and 12.9% of females end up with only grundskole – it’s a significant difference, but it’s not actually that high compared to some of the variation we see in these data. The female descendants (n=2873) are roughly twice as likely to only have a grundskole education (22.8% vs 11.9%) as are people of Danish origin. Male immigrants were much less likely to have attended vocational school than were males of Danish origin; that difference is almost gone when we’re looking at the females. As I’ve mentioned elsewhere, do have in mind that a lot of those ‘unknown education level’ immigrants are people with very little education, and/or education which is not worth a lot on the Danish labour market – immigrants who’ve studied here have known education levels, and most of the people with an unknown education level aren’t highly educated foreigners who love the Danish weather and our marginal tax rates.
Education rates have increased over time. Below I’ve compared the numbers for males at the age of 65-69 with the 30-34 year old cohort. I didn’t really see why I should care about the education levels of immigrants or descendants in that age group, so I only included people of Danish origin:
Note here that the mandatory education level back then was lower than it is now (7 years vs 9 years), so most of the people in this graph with only grundskole education have spent less time in school than have the groups in the sections above. The numbers are not identical, but they’re not that different given how much the educational system changed over that 35 year period. I think it’s interesting that ‘only high school’ (or technical/trade high school) was a less likely scenario for people in this group than for the younger generation, but on the other hand it’s not surprising.
How about the females?
The difference is significant. The number of uneducated women has been reduced dramatically and the number of females who are highly educated has gone up a lot. The proportion of 30-34 year old females of Danish origin with a long-cycle higher education is higher than the proportion of males of Danish origin with a long-cycle higher education (15.6% vs 13.2%). The same pattern is seen in the younger 25-29 year old cohort: In that age group, 6.86% of males of Danish origin (n=8791) have completed a long-cycle higher education, whereas the corresponding number for females of Danish origin (n=10168) is 8.22% (which is a 20% difference). Here’s a mapping of all relevant cohorts included in the data set:
It’s well known that females are more likely to get a medium-cycle higher education than are males (n=34921 or 25.6% of females and n=14750 or 10.6% of males at the ages of 30-34 have such an education), so I decided to also look at the proportion of the genders with any type of higher education (short-cycle, medium-cycle, bachelor, long-cycle or PhD) and condition on age. A lot of people would probably be surprised to learn that it looks this way:
In case you want to argue that the short-cycle ones are roughly equivalent to vocational schooling (‘females take short-cycle higher educations where males take vocational schooling’), it’s worth noting that for all age groups males are more likely to get a short-cycle higher education than are females. No, the difference derives from the other categories. It seems that females are better educated than males on average and have been for decades – I did not expect that.
The dataset also includes some information about geographical variation. The percentage of people with a long-cycle higher education varies a lot:
(Wikipedia can help you if you don’t know anything about the Danish regions). If you meet a random person above the age of 30 on the street, he or she is more than three times as likely to have a long-cycle higher education if the person is from Copenhagen than if the person is from the Region of Southern Denmark. If you restrict your search to include only people drawn from the younger cohorts, the differences become even larger. If you scale down further and look at the differences at the municipal level, they are just huge; to take but one example, 3.1% of Danes above 30 years of age from the Jammerbugt municipality have a long-cycle higher education, whereas the corresponding number for people living in the Gentofte municipality is 29.6%.
I played the game last Monday and it took approximately 4 hours. I know that a few of the readers find chess interesting, so I thought I might as well blog it. I didn’t play particularly well, but it was good enough for a win – my opponent made the last mistake, though for much of the game I was clearly worse. I guess if you don’t know but would like to have some idea how strong ‘average club players’ are when they play games with standard time control, you can sort of use this game as a starting point. You can watch the game here – I was black. Moves, diagrams and comments below the fold:
i. “One is easily fooled by that which one loves.” (On est aisément dupé par ce qu’on aime. Molière)
ii. “The more we love our friends, the less we flatter them; It is by excusing nothing that pure love shows itself.” (Plus on aime quelqu’un, moins il faut qu’on le flatte: À rien pardonner le pur amour éclate. -ll-)
iii. “A learned fool is more foolish than an ignorant one.” (Un sot savant est sot plus qu’un sot ignorant. -ll-)
iv. “the hatred that one has for oneself is probably the one for which there is no forgiveness.” (Georges Bernanos)
v. “We hate some persons because we do not know them; and we will not know them because we hate them.” (Charles Caleb Colton)
vi. “Many a man may thank his talent for his rank, but no man has ever been able to return the compliment by thanking his rank for his talent.” (-ll-)
vii. “Imitation is the sincerest of flattery.” (-ll-)
viii. “When you have nothing to say, say nothing; a weak defense strengthens your opponent, and silence is less injurious than a bad reply.” (-ll-)
ix. “Nice distinctions are troublesome. It is so much easier to say that a thing is black, than to discriminate the particular shade of brown, blue, or green, to which it really belongs. It is so much easier to make up your mind that your neighbour is good for nothing, than to enter into all the circumstances that would oblige you to modify that opinion. […]
Falsehood is so easy, truth so difficult. […] Examine your words well, and you will find that even when you have no motive to be false, it is a very hard thing to say the exact truth, even about your own immediate feelings — much harder than to say something fine about them which is not the exact truth.” (George Eliot)
x. “There are few prophets in the world; few sublimely beautiful women; few heroes. I can’t afford to give all my love and reverence to such rarities: I want a great deal of those feelings for my every-day fellow-men, especially for the few in the foreground of the great multitude, whose faces I know, whose hands I touch, for whom I have to make way with kindly courtesy.” (-ll-)
xi. “One gets a bad habit of being unhappy.” (-ll-)
xii. “Ignorance gives one a large range of probabilities.” (-ll-)
xiii. “Suicide is another thing that’s so frowned upon in this society, but honestly, life isn’t for everybody. It really isn’t. It’s sad when kids kill themselves ’cause they didn’t really give it a chance, but life is like a movie: if you’ve sat through more than half of it and it sucked every second so far, it probably isn’t gonna get great right at the very end for you and make it all worthwhile. No one should blame you for walking out early.” (Doug Stanhope)
xiv. “Conceal a flaw, and the world will imagine the worst.” (Martial)
xv. “Whoever makes great presents, expects great presents in return.” (Quisquis magna dedit, voluit sibi magna remitti. -ll-)
xvi. “An ordinary human being, with a personal conscience, personally answering for something to somebody and personally and directly taking responsibility, seems to be receding farther and farther from the realm of politics. Politicians seem to turn into puppets that only look human and move in a giant, rather inhuman theatre; they appear to become merely cogs in a huge machine, objects of a major civilizational automatism which has gotten out of control and for which nobody is responsible.” (Václav Havel)
xvii. “We should not write so that it is possible for [the reader] to understand us, but so that it is impossible for him to misunderstand us.” (Quintilian)
xviii. “Reason and love are sworn enemies.” (La raison et l’amour sont ennemis jurés. Pierre Corneille)
xix. “Desire increases when fulfillment is postponed.” (Le désir s’accroît quand l’effet se recule. -ll-)
xx. “False facts are highly injurious to the progress of science for they often endure long; but false hypotheses do little harm, as everyone takes a salutary pleasure in proving their falseness; and when this is done, one path toward error is closed and the road to truth is often at the same time opened.” (Charles Darwin, quoted in Phantoms in the Brain)
I’ve written a lot of stuff about models on this blog in the past, so some of the stuff I’m writing now I’ve probably covered before. I thought it was worth revisiting the subject anyway.
First off, one way to think about a mental model is to consider it a way of thinking about a problem. This also implies that if there’s a problem of some sort, you can construct a model. And thus, from a certain point of view (…the point of view of mathematicians, economists, engineers, or…), there’s always a model. It can be implicit, it can be explicit – but it’s there somewhere. A model is an explanation, and it’s always possible to come up with an explanation. So when you see a model you don’t like, it’s not very helpful to say that ‘it’s only a model’. What else would it be? Whatever alternative you’re considering is, from a certain point of view, also a model. If the model presented is an inaccurate representation of the problem at hand, then it’s the inaccuracy-part that should be the subject of criticism, not the model-part.
Most people dislike formal models that are very specific and give very precise estimates. They know instinctively that these models are simplistic and that the real world is much more complicated than the models – so the seemingly over-precise estimates may be way off and may even seem downright silly. Skepticism is warranted, surely. But the precision is also a very helpful aspect of such models, because precision allows us to be demonstrably wrong about something. I’d argue that this is also an important part of why such models are disliked. Many people who’ve worked a bit with models hold formal models in quite low regard because they know the assumptions are driving many of the results. They are skeptical and prefer the models in their own minds. Those ‘mind models’ are much less specific, much more flexible and much less likely to actually generate testable hypotheses. It’s not that they are necessarily wrong – it’s rather that they’re unlikely to ever be proven wrong. People who haven’t worked with models are skeptical of models too, and their mind models tend to be even less specific and testable than the rest.
Here’s the thing: If you think that it makes good sense to be skeptical of models where assumptions are clearly stated beforehand, where parameters/parameter estimates are generated through a clear and transparent process and where limitations are addressed, then you should be a lot more skeptical of models where these conditions are not met.
Most people prefer vague models because they are more convenient. You’re less likely to be proven wrong; you’re less likely to take a stance that is at odds with the tribe; and if the model is general enough it will be able to ‘predict’ anything, making you think that you’re always right. Such models are also often less costly to formulate.
Here’s one hypothesis from a model: ‘Immigrants from country X are 2,5 times as likely to have a criminal record as non-immigrants.’
Here’s another hypothesis: ‘Immigrants from country X are more likely to have a criminal record than non-immigrants.’
Here’s a third hypothesis: ‘Some immigrants from country X have a criminal record.’
Here’s a fourth hypothesis: ‘Some people commit crime.’
Which one of these hypotheses has the greatest information potential, that is, the potential to tell us the most about the world? The first one: if it is true, the other three are true as well. Which one is most likely to be considered correct when evaluated against the evidence? The last one.
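The ranking of these four hypotheses by specificity can be made concrete with a small sketch (hypothetical code, not from the post): treat each hypothesis as a predicate over possible ‘worlds’ – pairs of crime rates, in percent, for immigrants (i) and non-immigrants (n) – and count how many worlds each hypothesis is consistent with. The more worlds a hypothesis rules out, the more it tells us if it survives testing.

```python
def h1(i, n):  # 'exactly 2.5 times as likely' (2*i == 5*n, integers avoid float issues)
    return 2 * i == 5 * n

def h2(i, n):  # 'more likely than non-immigrants'
    return i > n

def h3(i, n):  # 'some immigrants have a criminal record'
    return i > 0

def h4(i, n):  # 'some people commit crime'
    return i > 0 or n > 0

# All possible worlds: each rate is an integer percentage from 0 to 100.
worlds = [(i, n) for i in range(101) for n in range(101)]
consistent = {h.__name__: sum(h(i, n) for i, n in worlds)
              for h in (h1, h2, h3, h4)}
# h1 is consistent with only 21 of the 10201 worlds; h4 with all but one.
# The precise hypothesis excludes far more possibilities, so it carries
# far more information – and is correspondingly easier to falsify.
```

The toy numbers matter less than the ordering: each step from h1 to h4 trades information for safety.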
From an information processing point of view, having nothing but correct beliefs you are certain about is not a good thing. That’s a sign that your models are very poor and don’t contain a lot of information. If you never seem to be (/realize you’re) wrong, that’s a sign that you’re doing things wrong.
Sometimes the ‘models’ we make use of when evaluating evidence are of the variety: ‘I’d like X to be true (because Y, Z), so obviously X is true.’ Sometimes that’s the model you use when you reject the presented formal model with a beta-estimate of 0,21 and a standard deviation of 0,06. This is worth keeping in mind.
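To see why rejecting such an estimate out of hand takes more than vague discomfort, here’s the arithmetic the numbers imply (assuming the 0,21/0,06 pair is a coefficient and its standard error, which is how such estimates are usually reported):

```python
import math

# Hypothetical numbers from the text: coefficient estimate 0.21,
# standard deviation (read: standard error) 0.06.
beta, se = 0.21, 0.06
z = beta / se                              # 3.5 standard errors from zero
p_two_sided = math.erfc(z / math.sqrt(2))  # two-sided p-value under normality
# z = 3.5 comfortably exceeds the conventional 1.96 cutoff, so the estimate
# is clearly distinguishable from zero on the model's own terms; dismissing
# it requires arguing against the model's assumptions, not the arithmetic.
```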
On a related note, of course not all models are about generating hypotheses and testing them – some are rather meant to illustrate certain aspects of a problem at hand in a simple and transparent manner. It’s always important to keep in mind what the model is trying to achieve. That goes for the ‘mind models’ too. Are you trying to learn new stuff about the world, or are you just trying to be right?