i. Invasion of Poland. I recently realized I had no idea e.g. how long it took for the Germans and Soviets to defeat Poland during WW2 (the answer: one month and five days). The Germans attacked more than two weeks before the Soviets did. Like most Wikipedia articles on such topics, this one has lots of links. Incidentally, the question of why France and Britain applied a double standard and declared war only on Germany, and not on the Soviet Union, is discussed in much detail in the links provided by u/OldWorldGlory here.
ii. Huaynaputina. From the article:
“A few days before the eruption, someone reported booming noise from the volcano and fog-like gas being emitted from its crater. The locals scrambled to appease the volcano, preparing girls, pets, and flowers for sacrifice.”
This makes sense – what else would one do in a situation like that? Finding a few virgins, dogs and flowers seems like the sensible approach – yes, you have to love humans and how they always react in sensible ways to such crises.
I’m not sure the rest of the article is all that interesting, but I found the above sentence both amusing and depressing enough to link to it here.
iii. Albert Pierrepoint. This guy killed hundreds of people.
On the other hand people were fine with it – it was his job. Well, sort of, this is actually slightly complicated. (“Pierrepoint was often dubbed the Official Executioner, despite there being no such job or title”).
Anyway this article is clearly the story of a guy who achieved his childhood dream – though unlike other children, he did not dream of becoming a fireman or a pilot, but rather of becoming the Official Executioner of the country. I’m currently thinking of using Pierrepoint as the main character in the motivational story I plan to tell my nephew when he’s a bit older.
iv. Second Crusade (featured). Considering how many different ‘states’ and ‘kingdoms’ were involved, a surprisingly small number of people were actually fighting; the article notes that “[t]here were perhaps 50,000 troops in total” on the Christian side when the attack on Damascus was initiated. It wasn’t enough, as the outcome of the crusade was a decisive Muslim victory in the ‘Holy Land’ (Middle East).
v. 0.999… (featured). This thing is equal to one, but it can sometimes be really hard to get even very smart people to accept this fact. Lots of details and some proofs presented in the article.
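For those who want the short version, the classic digit-manipulation argument (a version of which is among the proofs the article presents; it is a heuristic rather than a fully rigorous proof, since it assumes the arithmetic of infinite decimals behaves as expected) takes only a few lines:

```latex
\begin{align*}
x       &= 0.999\ldots \\
10x     &= 9.999\ldots \\
10x - x &= 9.999\ldots - 0.999\ldots = 9 \\
9x      &= 9 \quad\Longrightarrow\quad x = 1
\end{align*}
```

The fully rigorous versions in the article instead use the definition of a decimal expansion as the limit of its partial sums, i.e. the geometric series 9/10 + 9/100 + … = 1.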
vi. Shapley–Folkman lemma (‘good article’ – but also a somewhat technical article).
vii. Multituberculata. This article is not that special, but I add it here also because I think it ought to be and I’m actually sort of angry that it’s not; sometimes the coverage provided on wikipedia simply strikes me as grossly unfair, even if this is perhaps a slightly odd way to think about stuff. As pointed out in the article (Agustí points this out in his book as well), “The multituberculates existed for about 120 million years, and are often considered the most successful, diversified, and long-lasting mammals in natural history.” Yet notice how much (/little) coverage the article provides. Now compare the article with this article, or this.
I wasn’t quite sure how to rate the book, but I ended up at four stars on goodreads. The main thing holding me back from giving it a higher rating is that the book is actually quite hard to read, and there’s a lot of talk about teeth; one general point I learned from this book is that the teeth which animals of the past have left behind for us to find are often really useful, because they can help us make/support various inferences about other things, from animal behaviours to climatic developments. As for the ‘hard to read’-part, I (mostly) don’t blame the author for this, because a book like this would have to be a bit hard to read to provide the level of coverage that is provided; that’s part of why I give it four stars in spite of this. If you have a look at the links in the first post, you’ll notice the many Latin names. You’ll find a lot of those in the text as well. This is perfectly natural, as there were a lot of e.g. horse-like and rhino-like species living in the past, and you need to be clear about which of them you’re talking about at any given point, because they were all different, lived in different time periods, etc. For obvious reasons the book also covers many species/genera with no corresponding ‘familiar/popular’ names (like ‘cat’ or ‘dog’), and you need to keep track of the Latin names to make sense of the material, as well as of the various other Latin terms used e.g. in osteometry. So you’ll encounter passages discussing the differences between two groups whose names look pretty similar, where you’re told that one group had two teeth which were a bit longer than they were in the other group, and that the teeth also looked slightly different (and you’ll be told exactly which teeth we’re talking about, described in a language you’d probably have to be a dentist to understand without looking up a lot of stuff along the way).
Problems keeping track of the animals/groups encountered also stem from the fact that whereas some species encountered in the book do have modern counterparts, others don’t. The coverage helps you figure out which ecological niche a given group may have inhabited, but if you’re completely unfamiliar with the field of ecology, I’m not sure how easy it is to get into this mindset. The text does provide some help navigating this weird landscape of the past, and the many fascinating illustrations in the book make it easier to visualize what the animals encountered along the way might have looked like, but reading the book takes some work.
That said, it’s totally worth it, because this stuff’s just plain fascinating! The book isn’t quite ‘up there’ with Herrera et al. (it reminded me a bit more of van der Geer et al., and not only because of the slight coverage overlap), but some of the stuff in there is pretty damn awesome – and it’s stuff you ought to know, because it’ll probably change how you think about the world. The really neat thing about reading a book like this is that it exposes a lot of unwarranted assumptions about what the past used to be like which you’ve been making without knowing it. I’m almost certain anyone reading a book like this will encounter ideas which are very surprising to them. We look at the world through the eyes of the present, and it can be difficult to imagine just how many things used to be different. Vague and tentative ideas you might have had about what the world used to look like and how it used to work can, through reading books like this one, be replaced with a much clearer, and much better supported, picture of the past – even though there’s still a lot of stuff we don’t know, and will never know. I could mention almost countless examples of things I was very surprised to learn while reading this book, and I’m sure many readers would encounter even more of these, as I was already somewhat familiar with parts of the related literature before reading it.
I’ve added a few sample quotes and observations from the book below.
“Europe, although just an appendage of the Eurasian supercontinent, acted during most of its history as a crossroad where Asian, African, and American faunas passed one another, throughout successive dispersal and extinction events. But these events did not happen in an isolated context, since they were the response to climatic and environmental events of a higher order. Thus this book pays special attention to the abundant literature that for the past few decades has dedicated itself to the climatic evolution of our planet.”
“A common scenario tends to posit the early evolutionary radiation of placental mammals as occurring only after the extinction of the dinosaurs at the end of the Cretaceous period. The same scenario assumes a sudden explosion of forms immediately after the End Cretaceous Mass Extinction, filling the vacancies left by the vanished reptilian faunas. But a close inspection of the first epoch of the Cenozoic provides quite a different picture: the “explosion” began well before the end of the Cretaceous period and was not sudden, but lasted millions of years throughout the first division of the Cenozoic era, the Paleocene epoch. […] our knowledge of this remote time of mammalian evolution is much more obscure and incomplete than our understanding of the other periods of the Cenozoic. […] compared with our present world, and in contrast to the succeeding epochs, the Paleocene appears to us as a strange time, in which the present orders of mammals were absent or can hardly be distinguished: no rodents, no perissodactyls, no artiodactyls, bizarre noncarnivorous carnivorans. […] although the Paleocene was mammalian in character, we do not recognize it as a clear part of our own world; it looks more like an impoverished extension of the late Cretaceous world than the seed of the present Age of Mammals.”
“The diatrymas were human-size — up to 2 m tall — ground-running birds that inhabited the terrestrial ecosystems of Europe and North America in the Paleocene and the early to middle Eocene […] Besides the large diatrymas, a large variety of crocodiles — mainly terrestrial and amphibious eusuchian crocodiles — populated the marshes of the Paleocene rainforests. […] The high diversification of the crocodile fauna throughout the Paleocene and Eocene represents a significant ecological datum, since crocodiles do not tolerate temperatures below 10 to 15°C (exceptionally, they could survive in temperatures of about 5 or 6°C). Their existence in Europe indicates that during the first part of the Cenozoic the average temperature of the coldest month never fell below these values and that these mild conditions persisted at least until the middle Miocene.”
“At the end of the Paleocene, approximately 55.5 million years ago, there was a sudden, short-term warming known as the Latest Paleocene Thermal Maximum. Over a period of tens of thousands of years or less, the temperature of all the oceans increased by around 4°C. This was the highest warming during the entire Cenozoic, reaching global mean temperatures of around 20°C. There is some evidence that the Latest Paleocene Thermal Maximum resulted from a sudden increase in atmospheric CO2. Intense volcanic activity developed at the Paleocene–Eocene boundary, associated with the rifting process in the North Atlantic and the opening of the Norwegian-Greenland Sea. […] According to some analyses, atmospheric CO2 during the early Eocene may have been eight times its present concentration. […] The high temperatures and increasing humidity favored the extension of tropical rainforests over the middle and higher latitudes, as far north as Ellesmere Island, now in the Canadian arctic north. There, an abundant fauna — including crocodiles, monitor lizards, primates, rodents, multituberculates, early perissodactyls, and the pantodont Coryphodon — and a flora composed of tropical elements indicates the extension of the forests as far north as 78 degrees north latitude. […] The global oceanic level at the beginning of the Eocene was high, and extensive areas of Eurasia were still under the sea. In this context, Europe consisted of a number of emerged islands forming a kind of archipelago. A central European island consisted of parts of present-day England, France, and Germany, although it was placed in a much more southerly position, approximately at the present latitude of Naples. […] To the east, the growing Mediterranean opened into a wide sea, since the landmasses of Turkey, Iraq, and Iran were still below sea level. To the east of the Urals, the Turgai Strait still connected the warm waters of the Tethys Sea with the Polar Sea. 
[…] Despite the opening of the Greenland-Norwegian Sea, Europe and North America were still connected during most of the early and middle Eocene across two main land bridges […] the De Geer Corridor [and] the Thule Bridge […] these corridors must have been effective, since the European fossil record shows a massive entry of American elements […] The ischyromyid and ailuravid rodents, as well as the miacid carnivores, were among the oldest representatives of the modern orders of mammals to appear in Europe during the early Eocene. However, they were not the only ones, since the “modernization” of the mammalian communities at this time went even further, and groups such as the first true primates, bats (Chiroptera), flying lemurs (Dermoptera), and oddtoed (Perissodactyla) and even-toed (Artiodactyla) ungulates entered onto the scene, in both Europe and North America.”
“Although it was the first member of the horse lineage, Pliolophus certainly did not look like a horse. As classically stated, it had the dimensions of a medium dog (“a fox-terrier”), bearing four hooves on the front legs and three on the hind legs. […] the first rhino-related forms included Hyrachius, a small rhino about the size of a wolf that during the Eocene inhabited a wide geographic range, from North America to Europe and Asia.” (Yep, in case you didn’t know Europe had rhinos for millions and millions of years…) “The artiodactyls are among the most successful orders of mammals, having diversified in the past 10 million years into a wide array of families, subfamilies, tribes, and genera all around the world, including pigs, peccaries, hippos, chevrotains, camels, giraffes, deer, antelopes, gazelles, goats, and cattle. They are easily distinguished from the perissodactyls because each extremity is supported on the two central toes, instead of on the middle strengthened toe. […] The oldest member of the order is Diacodexis, […] a rabbit-size ungulate”
“Although the number of middle Eocene localities in Europe is quite restricted, we have excellent knowledge of the terrestrial communities of this time thanks to the extraordinary fossiliferous site of Messel, Germany. […] several specimens from Messel retain in their gut their last meal, providing a rare opportunity for testing the teeth-inferred dietary requirements of a number of extinct mammalian groups. […] A dense canopy forest surrounded Messel lake, formed of several tropical and paratropical taxa that today live in Southeast Asia”.
“At the end of the middle Eocene, things began to change in the European archipelago. Several late Paleocene and early Eocene survivors had become extinct […] The last part of the middle Eocene saw a clear change in the structure of the herbivore community as specialized browsing herbivores […] replaced the small to medium-size omnivorous/ frugivorous archaic ungulates of the early Eocene and became the dominant species. […] These changes among the mammalian faunas were most probably a response to the major tectonic transformations occurring at that time and the associated environmental changes. During the middle Eocene, the Indian plate collided with Asia, closing the Tethys Sea north of India. The collision of India and the compression between Africa and Europe formed an active alpine mountain belt along the southern border of Eurasia. In the western Mediterranean, strong compression occurred during the late Eocene, […] leading to the final emergence of the Pyrenees. To the south of the Pyrenees, the sea branch between the Iberian plate and Europe retreated”
“The European terrestrial ecosystems at the end of the Eocene were quite different from those inherited from the Paleocene, which were dominated by archaic, unspecialized groups. In contrast, a diversified fauna of specialized small and large browsing herbivores […] characterized the late Eocene. From our perspective, they looked much more “modern” than those of the early and early-middle Eocene and perfectly adapted to the new late Eocene environmental conditions characterized by the spread of more open habitats.”
“during the Eocene […] Australia and South America were still attached to Antarctica, as the last remnants of the ancient Gondwanan supercontinent. Today’s circumpolar current did not yet exist, and the equatorial South Atlantic and South Pacific waters went closer to the Antarctic coasts, thus transporting heat from the low latitudes to the high southern latitudes. However, this changed during the late Eocene, when a rifting process began to separate Australia from Antarctica. At the beginning of the Oligocene, between 34 and 33 million years ago, the spread between the two continents was large enough to allow a first phase of circumpolar circulation, which restricted the thermal exchange between the low-latitude equatorial waters and the Antarctic waters. A sudden and massive cooling took place, and mean global temperatures fell by about 5°C. […] During a few hundred thousand years (the estimated duration of this early Oligocene glacial episode), the ice sheets expanded and covered extensive areas of Antarctica, particularly in its western regions. […] The onset of Antarctic glaciation and the growing of the ice sheets in western Antarctica provoked an important global sea-level lowering of about 30 m. Several shallow epicontinental seas became continental areas, including those that surrounded the European Archipelago. The Turgai Strait, which during millions of years had isolated the European lands from Asia, vanished and opened a migration pathway for Asian and American mammals to the west. […] The tectonic movements led to the final split of the Tethys Sea into two main seas, the Mediterranean Sea to the south and the Paratethys Sea, the latter covering the formerly open ocean areas of central and eastern Europe. […] After the retreat of the Turgai Strait and the emergence of the Paratethys province, the European Archipelago ceased to exist, and Europe approached its present configuration. 
The ancient barriers that had prevented Asian faunas from settling in this continental area no longer existed, and a wave of new immigrants entered from the east. This coincided with the trend toward more temperate conditions and the spread of open environments initiated during the late Eocene. Consequently, most of the species that had characterized the middle and late Eocene declined or became completely extinct, replaced by herds of Asian newcomers.”
In this post I’ll cover a few more chapters from the book. Let’s start with some observations from the chapter about relationship satisfaction. When you compare distressed couples with satisfied couples, distressed couples tend to show a range of dysfunctional communicative behaviours, which include higher levels of criticism and complaining, hostility, defensiveness and disengagement, and not responding to the partner. “With regard to sequences of behavior, the “signature” of dissatisfied couples is the existence of reciprocated negative behavior that tends to escalate in intensity.” Attempts to repair the relationship usually employ meta-communication – e.g. “You’re not listening to me” – and these are typically delivered with negative affect (e.g. anger). The other party responds to the negative affect and reciprocates; in satisfied couples, on the other hand, the parties are usually more responsive to repair attempts. The demand-withdrawal pattern is another interaction pattern commonly observed in distressed couples, which I’ve talked about before in my coverage of this book; this pattern involves one party pressuring the other with demands, complaints and criticism, and the other party withdrawing and reacting with defensiveness and passive inaction. An argument can be made that conflict interaction patterns may be relatively stable over time; for example, researchers have looked at variables such as active listening, anger, and negative affect reciprocity in newlyweds and used these variables to successfully predict marital satisfaction and stability (presumably relationship dissolution risk, but this is not explicit in the text) six years later.
The chapter notes that research on cognitions in the relationship context has looked at the presence of unrealistic relationship beliefs early on in the relationship and used these beliefs to predict relationship outcomes/dynamics later; it turns out that unrealistic relationship beliefs predict relationship dissatisfaction and observed couple behaviours. Other studies have instead looked at what the chapter terms ‘functional’ unrealistic beliefs. I can’t recall if I’ve talked about this stuff before in my coverage, but I haven’t talked about this chapter before anyway, and the chapter notes that such studies have found e.g. that happy couples view their partners in a more positive light than the partners view themselves, and that “egocentrically assuming similarities between partner and self that do not exist is characteristic of being in a satisfying relationship.” It has been known for a long time that happy couples tend to overestimate the positive qualities and underestimate the negative qualities of their partners, whereas unhappy couples tend to do the opposite. As mentioned earlier in the coverage, “happy spouses [tend to] make egocentric attributions for negative relationships events (e.g., arguments) but partner-centric attributions for positive relationships events”. They observe in the chapter that “[m]ore work has been conducted on attributions in close relationships than on any other cognitive variable. Evidence for an association between attribution and relationship satisfaction is overwhelming, making it possibly the most robust, replicable phenomenon in the study of close relationships”. Attributions affect many dimensions, and one perhaps surprising variable involved is memory.
They mention a 5-year longitudinal study of dating couples in the chapter, which found that even though the participants’ self-reports of love for their partner declined every year of the study, participants at the end of each year still consistently reported that they loved their partner more than they had the year before. People are funny sometimes.
The next chapter in the book deals with the topic of ‘romantic love’. I found this part quite interesting:
“Data from animal studies […] support the hypothesis that elevated activities of central dopamine play a primary role in attraction in mammalian species. In rats, blocking the activities of dopamine diminishes specific proceptive behaviors, including hopping and darting […]. Further, when a female lab-raised prairie vole is mated with a male, she forms a distinct preference for this partner. This preference is associated with a 50% increase of dopamine in the nucleus accumbens […] when a dopamine antagonist is injected directly into the nucleus accumbens, females no longer prefer [the] partner and when a female is injected with a dopamine agonist, she begins to prefer a conspecific who is present at the time of infusion, even if the female has not mated with this male […] In sum, the considerable data on mate preference in mammalian (and avian) species, and the association of this mate preference with subcortical dopaminergic pathways in human and animal studies suggest that attraction in mammals (and its human counterpart, romantic love) is a specific biobehavioral brain system; that it is associated with at least one specific neurotransmitter, dopamine; and that this brain system evolved to facilitate a specific reproductive function: mate preference and pursuit of this preferred mating partner.”
When looking at what makes a person likeable, people are usually found to like people who are similar to themselves – “perceived shared attitudes plays a highly consistent role across many experiments” – “but when other variables are also free to vary, the effect sizes are often relatively small”, and the authors note that reduced attraction to perceived dissimilars may play a big role here; maybe it doesn’t matter so much whether someone is particularly similar to yourself, and what really matters is that the individual is not ‘too different’. They note that “perceived similarity is much more important than actual similarity”. As for the mere-exposure effect, the authors note that the main effect of the variable is through providing an opportunity for interaction/relationship formation, and that there is “little direct evidence for it playing much of a direct role in falling in love”. I found the research included on the ‘arousal at the time of meeting the partner’ variable interesting. A psychological experiment from the 1970s, in which males met a female confederate on a bridge, indicated that when the interaction took place on a shaky suspension bridge, the males were more attracted to a good-looking female confederate than they were when the two met on a solid, low bridge. Later studies have demonstrated similar effects in a variety of contexts involving both positive and negative sources of arousal. I guess if you don’t know the details of the follow-up studies and you’re a woman who’s found a potential partner you’d like to ask out, you might consider suggesting that the first date take place on a (poorly constructed?) suspension bridge (or near one)…
The next chapter deals with the topic of commitment. Various conceptual models which provide different ways to think about commitment are presented early in the chapter, but I won’t really talk about that part of the coverage. The first observation I thought worth including here is that if you want to understand topics like abusive relationships, you need to understand stuff like commitment and related topics; it’s not very helpful to explain abuse as the result of the irrationality and stupidity of the victims, and the authors argue that the early literature on such topics was too focused on e.g. relationship satisfaction, which made it hard to understand what was going on. When people started including variables such as the investments (time, money, etc.) people had put into the relationship and the available alternatives, some other ways to think about these things presented themselves – as they put it in the chapter, “Once researchers recognized the importance of commitment, it became evident that abuse victims may remain in their relationships because they are trapped – because they have poor alternatives (especially economic alternatives; e.g., limited financial resources, poor employment options) or because important investments bind them to their partners (e.g., young children, joint home ownership). […] recent empirical work supports the claim that persistence in abusive relationships is at least partially attributable to poor alternatives and high investments”.
Researchers in the field have argued that the reason why high commitment levels tend to keep relationships together is that they promote adaptive relationship-relevant acts, termed relationship maintenance phenomena. The important point is of course that you need a mechanism to explain why high commitment leads to different outcomes (it does), and that you need to look at behaviours. Although behaviours are important, so are cognitions (once again); to give a few examples, it’s been shown that people strongly committed to a relationship tend to shield themselves from attractive alternative partners by cognitively derogating tempting alternatives, and that people with strong commitment to a relationship react to periods of doubt or uncertainty by cognitively enhancing their partners and relationships. One behavioural mechanism supporting relationship persistence is that people with strong commitment are inclined to accommodate rather than retaliate when a partner engages in potentially destructive behaviours: instead of yelling when the partner is rude, the other party disengages and asks if s/he had a bad day at work. (I’m not sure I’d have termed this type of behaviour ‘accommodation’, but that’s how they frame it in the chapter – as people might notice, it parallels the ‘bad behaviour is (explained away as) situational, good behaviour is personality’ cognitive angle mentioned multiple times during my coverage of the book already.) Research has found that committed people are more likely to sacrifice their personal interests to promote the interests of the partner and relationship (I’m sure some would argue this is/ought to be part of the construct), and that they’re more likely to forgive when confronted with acts of betrayal.
A few quotes related to the above: “Maintenance acts such as accommodation and sacrifice are beneficial not only because they prevent the escalation of conflict and yield better immediate outcomes, but also because they help each partner recognize the extent of the other’s commitment. For this reason, the situations that call forth maintenance acts […] have been termed diagnostic situations […] Such situations are “diagnostic” in that it is possible to discern the strength of another’s commitment only in situations wherein the behavior that benefits a relationship is at odds with the behavior that would benefit the individual […] Why are diagnostic situations important? Confidence in a partner’s commitment is reflected in trust, defined as the strength of one’s conviction that the partner will be responsive to one’s needs […] As such, one person’s trust in the other is a rough gauge of the strength of the other’s commitment […] As people become increasingly trusting, they become more willing to place themselves in vulnerable positions relative to the partner by becoming increasingly dependent – that is, they not only become more satisfied with the relationship, but are also more willing to drive away or derogate alternative partners (i.e., burn their bridges) and invest in the relationship in material and non-material ways […] increasing dependence yields strengthened commitment, which in turn causes […] a variety of prosocial maintenance acts”. Of course for a variety of reasons a desirable cycle like that may be interrupted or fail to materialize, and when things are not going well the cognitive mechanisms which usually help support relationship maintenance will instead support relationship dissolution – an unrealistically favourable view of the partner will e.g. be replaced by an unrealistically favourable view of the available alternatives. 
Relationship satisfaction is closely linked to commitment, and one thing to note here is that fluctuations in satisfaction have been shown to predict breakups independently of the level of satisfaction. The authors note in the last part of the chapter that not all types of commitment are likely to be equal (e.g. enthusiastic vs. moral commitment) and that different types of commitment may have different effects on e.g. the risk of relationship dissolution, but it doesn’t seem like a lot of research had been done on this stuff when the book was published – they don’t really go into the details.
I don’t generally comment on current affairs and I was debating for a long time whether or not to post anything about this. I had actually decided not to, but then I changed my mind this afternoon.
I haven’t spent much time on political stuff for a few years, and I think this was a very good decision; so it’s not as though I’m suddenly starting out as a political blogger now. I won’t discuss the details of the event, but back when I was interested in political stuff I did have a look at some data which I thought might be worth briefly revisiting now. Here’s exhibit a), a quote from a Danish newspaper article summing up the main results of a major Danish opinion poll conducted a few years ago:
“Angreb på religion bør være strafbart i Danmark. Det svarer halvdelen af indvandrere og efterkommere fra muslimske lande i en meningsmåling, som Danmarks Statistik har gennemført for den liberale tænketank Cepos.
2.792 personer har svaret på, om loven bør forbyde film og bøger, der angriber religion. Ja, mener 50 pct. af både indvandrere og deres efterkommere. Nej, siger 35 pct. af indvandrerne og 40 pct. af deres efterkommere.”
[I found it hard to translate this ‘directly’ because it’s ‘newspaper language’, but here’s my attempt at a translation (you can always use google translate to get a second opinion)]:
“Attacks on religion ought to be a crime in Denmark. This was the response of half of immigrants and descendants from muslim countries who participated in a recent opinion poll conducted by Statistics Denmark for the liberal think tank Cepos.
2792 persons responded to the question of whether or not they think films and books that attack religion ought to be outlawed. 50 % of both immigrants and descendants answered yes to this question, whereas 35 % of the immigrants and 40 % of the descendants answered no.”
I should probably make clear that a rule of thumb I’ve seen a few times is that you’re usually doing okay in terms of representativeness/external validity if you’ve asked 1000 people in a Danish poll. 2792 respondents from a subgroup is way more than is technically required for the results of such a poll to be reliable in terms of making out-of-sample inferences.
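To make the rule of thumb concrete, here’s a quick back-of-the-envelope calculation of my own (this is an illustration, not something from the poll or the article): the worst-case 95% margin of error for a simple random sample, which is what the ‘1000 respondents is usually enough’ heuristic is implicitly based on.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a simple random sample of size n.

    p=0.5 maximizes the standard error of a proportion, so this is the
    most conservative (widest) bound; z=1.96 is the 95% normal quantile.
    """
    return z * math.sqrt(p * (1 - p) / n)

# The common '1000 respondents' rule of thumb gives roughly +/- 3 percentage points:
print(round(margin_of_error(1000) * 100, 1))   # ~3.1
# With 2792 respondents the worst-case bound shrinks to below 2 percentage points:
print(round(margin_of_error(2792) * 100, 1))   # ~1.9
```

Of course this says nothing about non-sampling issues such as non-response or question wording, but it shows why 2792 respondents is more than enough in purely statistical terms.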
Everybody with half a brain cell will condemn the terrorist attack over the next few days, if they haven’t already. Yet half of the muslims in Denmark would probably be in favour of putting people like Lars Vilks in jail, or of fining people like him until they stop talking.
Exhibit b) – who are the Danish anti-semites these days? It’s hard to tell, but here are some presumably relevant international data from this Pew report. The included results from muslim countries roughly mirror what I’d imagine you’d have got from a survey answered by members of the Waffen-SS in the early 1940s, allowing for a few drunk respondents etc.
If you want more data on what muslims around the world think about various things, have a look at these numbers. Did you know that roughly two out of three South-Asian muslims according to these numbers are in favour of killing apostates, or that more than half of South-Asian, Iraqi, Egyptian, and Palestinian muslims refused to say that honour killings are never justified?
I’ll allow comments here, but don’t expect me to engage.
Update: Some of you will already know this, but I thought I should point it out to people who don’t: A few years back I translated and blogged a substantial proportion of a statistical publication in Danish about immigrants living in Denmark. You can read those posts – which have quite a bit of data about which types of immigrants live in Denmark and how well they do – here, here, here, and here.
I more or less discontinued these types of posts during the last year, but I figured that given how infrequently I post these days I might as well revive them. I’m sure I’ve included some of the pieces below in previous posts, but I don’t really care – if a couple of people who read along when I first posted them still remember those pieces, they probably liked them anyway.
I’m currently reading this book. It’s quite nice so far, though the title is slightly misleading (I’ve read 82 pages so far and I’ve yet to come across any mammoths, sabertooths or hominids…). I mentioned yesterday that I wanted to cover the systems analysis text in more detail today, but that turned out to be really difficult to do without actually rewriting the book (or at the very least quoting very extensively), something I really don’t want to do. I decided to cover this book instead, though it’s admittedly slightly ‘lazy coverage’. Below I have added some links to stuff he talks about in the book. It’s the sort of book which is reasonably easy to blog, so I’m quite sure I’ll add more detail and context later, especially considering how most people presumably know far more (…okay, well, more) about the lives of the dinosaurs than they do about the lives of their much more recent ancestors, which lived during the Cenozoic.
The book frequently has more information about a given species/genus than does wikipedia’s corresponding article (and there’s stuff in here which wikipedia does not have articles about at all…), but I’ve tried to avoid linking to stubs below. Some of the articles below have decent coverage, but in general these are topics not well covered on wikipedia – I don’t think there’s a single featured article among the articles included. Even so, it’s probably worth having a look at some of them if you’re curious about what kind of material is covered in this book. Aside from the links, I decided to also include a few pictures from the articles.
“This book was originally developed alongside the lecture Systems Analysis at the Swiss Federal Institute of Technology (ETH) Zürich, on the basis of lecture notes developed over 12 years. The lecture, together with others on analysis, differential equations and linear algebra, belongs to the basic mathematical knowledge imparted on students of environmental sciences and other related areas at ETH Zürich. […] The book aims to be more than a mathematical treatise on the analysis and modeling of natural systems, yet a certain set of basic mathematical skills are still necessary. We will use linear differential equations, vector and matrix calculus, linear algebra, and even take a glimpse at nonlinear and partial differential equations. Most of the mathematical methods used are covered in the appendices. Their treatment there is brief however, and without proofs. Therefore it will not replace a good mathematics textbook for someone who has not encountered this level of math before. […] The book is firmly rooted in the algebraic formulation of mathematical models, their analytical solution, or — if solutions are too complex or do not exist — in a thorough discussion of the anticipated model properties.”
I finished the book yesterday – here’s my goodreads review (note that the first link in this post was not to the goodreads profile of the book, because goodreads has listed the book under the wrong title). I’ve never read a book about ‘systems analysis’ before, but as I also mention in the goodreads review it turned out that much of this stuff was stuff I’d seen before. There are 8 chapters in the book. Chapter one is a brief introductory chapter, the second chapter contains a short overview of mathematical models (static models, dynamic models, discrete and continuous time models, stochastic models…), the third chapter is a brief chapter about static models (the rest of the book is about dynamic models, but they want you to at least know the difference), the fourth chapter deals with linear (differential equation) models with one variable, chapter 5 extends the analysis to linear models with several variables, chapter 6 is about non-linear models (covers e.g. the Lotka-Volterra model (of course) and the Holling-Tanner model (both were covered in Ecological Dynamics, in much more detail)), chapter 7 deals briefly with time-discrete models and how they differ from continuous-time models (I liked Gurney and Nisbet’s coverage of this stuff a lot better, as that book had a lot more detail about these things), and chapter 8 concludes with models including both a time- and a space-dimension, which leads to coverage of concepts such as mixing and transformation, advection, diffusion and exchange in a model context.
How to derive solutions to various types of differential equations, how to calculate eigenvalues and what these tell you about the model dynamics (and how to deal with them when they’re imaginary), phase diagrams/phase planes and topographical maps of system dynamics, fixed points/steady states and their properties, what’s an attractor?, what’s hysteresis and in which model contexts might this phenomenon be present?, the difference between homogeneous and non-homogeneous differential equations and between first order- and higher-order differential equations, which role do the initial conditions play in various contexts?, etc. – it’s this kind of book. Applications included in the book are varied; some of the examples are (as already mentioned) derived from the field of ecology/mathematical biology (there are also e.g. models of phosphate distribution/dynamics in lakes and models of fish population dynamics), others are from chemistry (e.g. models dealing with gas exchange – Fick’s laws of diffusion are e.g. covered in the book, and they also talk about e.g. Henry’s law), physics (e.g. the harmonic oscillator, the Lorenz model) – there are even a few examples from economics (e.g. dealing with interest rates). As they put it in the introduction, “Although most of the examples used here are drawn from the environmental sciences, this book is not an introduction to the theory of aquatic or terrestrial environmental systems. Rather, a key goal of the book is to demonstrate the virtually limitless practical potential of the methods presented.” I’m not sure if they succeeded, but it’s certainly clear from the coverage that you can use the tools they cover in a lot of different contexts.
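To give a flavour of the eigenvalue analysis mentioned above, here’s a small sketch of my own (the model is the classic Lotka-Volterra predator-prey system, which the book covers; the specific parameter values are arbitrary illustrative choices of mine, not taken from the book). The idea is the standard one: linearize the system at a fixed point and inspect the eigenvalues of the Jacobian – real parts tell you about stability, imaginary parts about oscillation.

```python
import numpy as np

# Classic Lotka-Volterra predator-prey model:
#   dx/dt = a*x - b*x*y   (prey x)
#   dy/dt = -c*y + d*x*y  (predator y)
# Arbitrary illustrative parameter values:
a, b, c, d = 1.0, 0.5, 0.75, 0.25

# Interior fixed point, where both derivatives vanish:
x_star, y_star = c / d, a / b

# Jacobian of the system, evaluated at the fixed point:
J = np.array([[a - b * y_star, -b * x_star],
              [d * y_star,     -c + d * x_star]])

eigenvalues = np.linalg.eigvals(J)
print(eigenvalues)
# For this model the eigenvalues at the interior fixed point are a purely
# imaginary pair (+/- i*sqrt(a*c)): zero real part means neutral stability,
# i.e. the closed periodic orbits characteristic of Lotka-Volterra dynamics.
```

Negative real parts would instead indicate a stable fixed point (with spiralling approach if the imaginary parts are non-zero), positive real parts an unstable one – which is essentially the kind of reasoning the book walks you through.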
I’m not quite sure how much mathematics you’ll need to know in order to read and understand this book on your own. The coverage seems to me to assume some familiarity with linear algebra, multi-variable calculus, and complex analysis (and the related trigonometry), perhaps also basic combinatorics – factorials, for example, are used without any comment on how they work. You should probably take the authors at their word when they say above that the book “will not replace a good mathematics textbook for someone who has not encountered this level of math before”. A related observation is that regardless of whether you’ve seen this sort of stuff before, this is probably not the sort of book you’ll be able to read in a day or two.
I think I’ll try to cover the book in more detail (with much more specific coverage of some main points) tomorrow.
It’s been quite a while since the last time I posted a ‘here’s some interesting stuff I’ve found online’-post, so I’ll do that now even though I actually don’t spend much time randomly looking around for interesting stuff online these days. I added some wikipedia links I’d saved for a ‘wikipedia articles of interest’-post because it usually takes quite a bit of time to write a standard wikipedia post (as it takes time to figure out what to include and what not to include in the coverage) and I figured that if I didn’t add those links here I’d never get around to blogging them.
iii. I found this article about the so-called “Einstellung” effect in chess interesting. I’m however not sure how important this stuff really is. I don’t think it’s sub-optimal for a player to spend a significant amount of time in positions like the ones they analyzed on ideas that don’t work, because usually you only have to spot one idea that does work to win the game. It’s obvious that one can argue people spend ‘too much’ time looking for a winning combination in positions where by design no winning combinations exist, but the fact of the matter is that in positions where ‘familiar patterns’ pop up winning resources often do exist, and you don’t win games by overlooking those or by failing to spend time looking for them; occasional suboptimal moves in some contexts may be a reasonable price to pay for increasing your likelihood of finding/playing the best/winning moves when those do exist. Here’s a slightly related link dealing with the question of the potential number of games/moves in chess. Here’s a good wiki article about pawn structures, and here’s one about swindles in chess. I incidentally very recently became a member of the ICC, and I’m frankly impressed with the player pool – which is huge and includes some really strong players (players like Morozevich and Tomashevsky seem to play there regularly). Since I started out on the site I’ve already beaten 3 IMs in bullet and lost a game against the Icelandic GM Henrik Danielsen. The IMs I’ve beaten were far from the strongest players in the player pool, but in my experience you don’t get to play titled players nearly as often as that on other sites if you’re at my level.
v. You may already have seen this one, but in case you have not: A Philosopher Walks Into A Coffee Shop. More than one of these made me laugh out loud. If you like the post you should take a look at the comments as well – some of them are brilliant.
vi. Amdahl’s law.
vii. Eigendecomposition of a matrix. On a related note I’m currently reading Imboden and Pfenninger’s Introduction to Systems Analysis (which goodreads for some reason has listed under a wrong title, as the goodreads book title is really the subtitle of the book), and today I had a look at the wiki article on Jacobian matrices and determinants for that reason (the book is about as technical as you’d expect from a book with a title like that).
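As a quick illustration of what the eigendecomposition article is about (my own toy example, numbers chosen arbitrarily): a diagonalizable matrix A can be factored as A = Q·diag(w)·Q⁻¹, where the columns of Q are eigenvectors and w the eigenvalues.

```python
import numpy as np

# A small symmetric matrix, chosen arbitrarily for illustration:
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigendecomposition: A = Q diag(w) Q^{-1}
w, Q = np.linalg.eig(A)
A_reconstructed = Q @ np.diag(w) @ np.linalg.inv(Q)

print(np.allclose(A, A_reconstructed))  # True
print(sorted(w))  # eigenvalues approx. 1.0 and 3.0
```

For the Jacobian matrices mentioned above the same factorization is what lets you read off the local dynamics of a system from the eigenvalues alone.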
Here’s my first post about the book. In this post I’ll cover two more of the individual systems chapters – the first of the chapters I’ll talk about is the one about the renal system (kidneys). Some key symptoms which may suggest renal pathology are disorders of micturition (urination), disorders of urine volume, changes in urine composition, loin pain, oedema, and hypertension. Disorders of micturition can relate to frequency, poor urinary stream (typically caused by outflow obstructions) and dysuria (pain on micturition). There are 19 different causes of frequency mentioned in a table in the chapter, so there are a lot of possible causes. Volume changes may be termed polyuria (increase in volume), oliguria (decrease-), or anuria (total loss of urine output – this is bad); it’s important to note that frequency does not necessarily imply polyuria. Blood in the urine is called haematuria, a symptom which will often cause people to seek medical attention – for good reason: “Any patient above the age of 40 years with haematuria (visible or invisible) requires urgent evaluation by a urologist to look for malignant disease of the urinary tract.” It should however be noted that red/brown urine doesn’t necessarily indicate haematuria – other common causes are drugs and vegetable dyes – and relatedly it should be mentioned that blood in the urine may not be visible (haematuria is sometimes caught as an incidental finding by dip-stick analysis of the urine). When blood is present in the urine at the start of micturition only it usually indicates urethral bleeding, whereas bleeding towards the end of micturition is indicative of bladder/prostate bleeding. In the context of kidney disease pain patterns are inconsistent, but when there’s pain it’s usually due to renal tract inflammation or obstruction (e.g. due to a kidney stone). 
Cancer need not cause pain: “The cardinal feature of transitional cell carcinoma of the urinary tract is painless haematuria”, which may or may not be visible to the naked eye. In bladder cancer ‘local’ symptoms such as frequency and nocturia present before systemic symptoms such as weight loss, and the latter symptoms usually present late. Risk factors include smoking, occupational exposure to hydrocarbons, ionizing radiation (e.g. previous cancer treatment), prolonged immunosuppression, and bladder stones.
There are a number of inherited renal diseases, as well as a huge number of medical conditions associated with renal disease (18 of them are listed in the chapter). Aside from specific medical conditions a large number of drugs may also impact kidney function and the risk of developing renal disease. Pregnancy is a risk factor. Dietary factors may be important in some cases; for example excessive salt intake may lead to hypertension, as may alcohol, and hypertension is bad for the kidneys. Another example would be inadequate fluid intake or high intake of animal protein, both of which may promote lithiasis (stone formation). Tobacco is a significant risk factor for the development and progression of kidney disease. Of the many causes of kidney failure, diabetic renal disease is the most common cause of end-stage renal disease (-ESRD) in the Western world, according to the book accounting for 20-50% of new patients with end-stage renal disease (presumably the range is so wide due to major cross-country differences). Another important cause is autosomal dominant polycystic kidney disease (ADPKD), which accounts for 10 per cent of patients with ESRD. Women have urinary tract infections (-UTI) much more often than men, and 50-60 per cent of women have at least one UTI during their lives. In males the risk has been estimated to be 5/10,000/year. It’s noted in the next chapter of the book that “Urinary tract infections (UTIs) are common in women, but uncommon in men under 50 years old”, but that “[o]lder men may get UTIs secondary to bladder outflow obstruction from prostatic hypertrophy”. I won’t talk much about that chapter, about the genitourinary system, as I’ve talked quite a bit about these sorts of things before when covering Holmes et al., e.g.
here and here, but one other important quote is probably worth including here as well: “Seventy-five per cent of people infected with HSV [herpes simplex virus] are not aware that they have genital herpes either because their symptoms are very mild/absent or because the symptoms have been assumed to be due to something else (most commonly thrush).”
The other main chapter I’ll cover here is the chapter about the nervous system. I liked the way the author starts out the chapter – here’s a quote from the beginning: “Inexperienced clinicians often order sophisticated (and expensive!) investigations hoping that the diagnosis may be revealed, but sadly this rarely happens. Many investigations are relatively sensitive but not necessarily disease specific”. Later on he also notes that in his opinion: “Electroencephalograms (EEGs) are grossly overordered. They should not be used as a diagnostic tool in epilepsy as they are relatively non-specific and non-sensitive.” I liked this stuff in part because I’m the sort of person who cares about cost-effectiveness, but also because Eysenck and Keane’s Cognitive Psychology text, part of which I read last December, contained some reasonably detailed coverage of various imaging methods used in these contexts and what you can and cannot use them for; and I think it’s highly likely that the author of the chapter is right. I may go into much more detail about this kind of stuff later if I decide to cover E&K’s book, but I won’t talk about it here. One related observation worth including here is however that in the context of a seizure, something as ‘low-tech’ as an available eye-witness is often crucial (was there jerking? pallor? gaze aversion?) to make a diagnosis and distinguish between an epileptic seizure and a cardiovascular syncope (the most common diagnostic dilemma here).
Headaches are common. It’s useful to know that whereas an acute headache may be a sign of sinister pathology, chronic headaches rarely are. Acute headaches may be almost instantaneous (hyperacute), or they may develop over hours to days. Instantaneous headache may be (but of course needn’t be) due to life-threatening conditions such as subarachnoid haemorrhage, venous sinus thrombosis, cerebral haemorrhage, and phaeochromocytoma, all of which may present that way. The combination of neck stiffness and photophobia (together with headache) is called meningism and this is something that requires urgent investigation, as it may be due to meningitis or encephalitis. Muscle weakness is a common neurological symptom, and here it’s important to note that hyperacute limb weakness is usually caused by a stroke, and is most commonly unilateral (i.e. affecting e.g. only one arm or leg, rather than both), whereas bilateral weakness is a marker of spinal cord disease. Sensory symptoms may be either ‘positive’ (e.g. tingling, dysaesthesia) or negative (numbness); stroke usually causes negative symptoms, whereas various genetic or acquired disorders may also present with ‘positive’ symptoms. “Neuropathic pain (cf diabetes) is often lower limb predominant and described as burning, stinging or throbbing.” Relatedly: “Diabetes is the commonest cause of neuropathy in the UK; distal predominantly sensory neuropathy, diabetic amyotrophy (pain and wasting in femoral distribution), nerve entrapments (carpal tunnel syndrome), cranial neuropathy and autonomic neuropathy are relatively common complications.” As for the aforementioned strokes, they’re sometimes (in 15 per cent of cases, according to the book) preceded by a TIA (a transient ischaemic attack), a sort of ‘mini-stroke’ which causes a reversible neurological deficit lasting less than 24 hours (‘in practice much shorter duration’).
A recent TIA puts you in high immediate risk of stroke, which is probably useful to know – for more details, see this link.
The nervous system deals with a lot of stuff, so a lot of things can go wrong. Autonomic nervous system disorders may cause symptoms/problems such as: sphincter disturbances (e.g. incontinence), change in sweating patterns, photophobia (when the pupil is affected), night blindness, orthostatic hypotension, dry mouth, dry eyes, erectile/ejaculatory failure, and vomiting. Specific nerves doing specific things can cause specific symptoms when they stop working the way they’re supposed to, and these sorts of symptoms are very far from limited to ‘people being unable to move their arms or legs’; neurological problems can also cause you to e.g. go blind or deaf. The distinction between monolateral (‘vascular’) and bilateral (‘neurological’) symptoms and how this distinction relates to the underlying medical cause seems to apply not only to major limbs, but also to other areas of the body – for example if you’re experiencing vision loss in both eyes it’s more often a neurological problem, whereas problems caused by retinal pathology tend to cause unilateral symptoms. On a related note, in elderly people monocular loss of vision can be a harbinger of stroke. They mention in this chapter that neurological dysphagia (difficulty swallowing) may affect liquids first, whereas a mechanical obstruction (e.g. due to a tumour) will preferentially affect solids (I mentioned this in my last post about the book). “The duration of anterograde amnesia is an extremely useful indicator of the severity of head injury.” In the last part of the chapter the author talks about various specific conditions causing neurological problems, such as Parkinson’s disease, Motor Neuron Disease, Multiple Sclerosis, Myasthenia gravis, and Guillain–Barré syndrome – I won’t cover these in any detail as the book only covers them very briefly (you can google them if you’re curious).
I decided in this post to have a look at a few of the chapters in the first part(s) of the book. As earlier mentioned I lost my notes and highlights to these parts of the book due to computer trouble, making it much more difficult and time consuming to blog this stuff, but I wanted to cover some of that stuff even so because if I don’t I’ll forget the details (to the extent that I have not already – I should caution that this post provides relatively ‘lazy coverage’ as I felt it to be completely out of the question to select material from the book to talk about here using the same criteria I normally employ).
An obvious but important conclusion from the chapter on The Affective Structure of Marriage, which is a chapter that among other things covers multiple conceptual models dealing with relationship change, is that: “various models of marital change are useful because no single pathway describes changes in all, or even most, marriages. Even among couples sharing a similar outcome (e.g., divorce), there is considerable variation in the course toward that outcome. This implies that attempts to develop a single explanation or description of divorce are likely to be, at best, incomplete. Concluding that multiple models are useful is merely recognizing that there are multiple developmental processes in marriage.” Relationships may change for all sorts of reasons, and there is no full model out there which explains everything. This stuff is complicated. It’s noted in the chapter that some attempts to e.g. predict which couples would divorce over a given time period performed really quite well on one sample (using longitudinal data – this is not just unsophisticated cross-sectional analysis we’re dealing with), but it later turned out that they performed quite horribly on validation samples using the same type of data to predict outcomes in different couples. Similar stimuli may have different effects depending on how long people have been together. Different models deal with time frame aspects in different ways.
I’ll mention a few results from the literature covered in that chapter here. One is that couples who were initially more affectionate and less antagonistic were happier 13 years later than were other couples who were still together at that point in time but had lower initial levels of affection/higher antagonism. It’s also been found that couples which are high in antagonism early on in the relationship (‘lots of drama’) are more likely to divorce early on; disillusionment after a few years of marriage seems to be a better predictor of divorce years later, with initial affection being an important moderating variable in the sense that couples who were initially higher in affection were together for a longer period of time before they eventually divorced. Shorter relationship duration at the time of marriage seems to predict divorce. Personality characteristics such as (trait) (presumably also state-, US) anxiety and neuroticism are associated with relationship dissatisfaction and divorce risk. I should probably once again emphasize that the only reason why I’m not providing effect sizes here is that the authors do not, and so I’m not able to. Some conclusions from the chapter:
“one of the most exciting nascent trends in the marital literature involves the recognition that there is not a single unitary process leading to marital distress and divorce […]. Some couples begin marriage with lower marital satisfaction than most other couples but remain married indefinitely, whereas other couples begin marriage very satisfied but end up divorcing. Moreover, the predictors of dissatisfaction and divorce are not always the same; for instance, stable characteristics such as trait anxiety appear to be more strongly related to satisfaction than they are to divorce […]. Even the processes leading to divorce are not uniform, with some couples who eventually divorce beginning marriage with high levels of hostility and divorcing quickly, others beginning marriage with moderate amounts of both positive and negative elements before becoming quite low in affection, and still others beginning marriage with exceedingly high levels of affection that are not sustained over the early years of marriage. Also, the predictors of divorce are different for divorces that occur earlier in marriage compared with those that happen later in marriage. […] Being high in conscientiousness [for example] appears to diminish the chances that one will divorce early in marriage but does not appear to prevent eventual divorce”.
It should be noted, as they also do, that much of this research is based on what they in the book call ‘observational data’, which in this context means data obtained by actually observing the individuals, usually in a lab, and then coding specific behaviours in specific ways. They didn’t just ask people if they were affectionate towards each other; they tried to estimate whether or not they were, based on behaviours they could observe. There are problems with this sort of data and they talk about that in the chapter; for example it has been argued (I think I may have talked about this before in my coverage) that the most effective kind of support may well be invisible support (“actions that take place outside the recipients’ awareness”, or supportive actions which are provided “in such a skillful way that, although the information about the transaction is available to the recipient, the transaction is not coded as enacted support”) – and this sort of support is difficult to observe in a lab; whereas on the other hand the most visible sort of support, which is the easiest type to code by observers, may be counterproductive (such actions may provide a signal to the partner that the other party considers him/her too incompetent to handle the task on his/her own, which may lead to self-doubt etc. in the recipient), perhaps making interpretations slightly more difficult than one might think they are. A related problem seems to me to be that not providing support may in some situations be the optimal approach to take by the partner (‘my partner obviously doesn’t need my help right now, and if I were to provide support in this situation this would not be helpful’), and so such behaviours may be indicative of a strong relationship – yet that’s not how such behaviours will be coded in the studies. There are some problems here.
Next, a few observations dealing with divorce and postdivorce relationships. This data is old, but better than nothing: “Most divorced adults find another romantic partner. In the United States, the probability of cohabiting after the dissolution of first marriage is 70% after 10 years […] Census estimates project that in the United States nearly 85% of divorced people remarry […]. Although the remarriage rate is lower in other Western societies, most divorced people eventually cohabit or remarry […] It is an almost universal finding that children have more difficulty adapting to parental remarriage than do the adults.” I thought I should mention in a slightly unrelated context that I recently came across a Danish article about how children are dealt with here in the divorce context; I was not surprised to learn that women get custody in 90% of the cases – the politicians are thinking about changing this (this did surprise me), which has caused some organizations to argue that it’s a bad idea to change this state of affairs (again, not surprising). I’ve blogged US data on this stuff before – go have a look at the archives/use the search function if you’re curious, I’m too lazy to provide a link. I believe the US numbers are reasonably similar. An important observation made in the chapter is that parenting roles have evolved over time, and that the institutional setup had not really evolved with them at the time this book was written: “Child support policies have been predicated on the notion of fathers having only one set of children to support. 
In fact, increases in multiple marital and cohabiting relationships means that nearly 75% of remarried men have multiple sets of children to support (emotionally and financially) both inside and outside their current relationship.” It’s important to observe in this context that the proportion of all marriages which were remarriages was really high in the US, and that the remarried couples made up a big proportion of the total: “About half of all U.S. marriages are remarriages for one or both partners” (data from the U.S. Census Bureau, 2000). Things may or may not be different today.
Some observations from the chapter about personal relationships in adolescence and early adulthood: “Friendships and romantic relationships are tightly interwoven in adolescence and early adulthood. Unsupervised mixed-gender peer groups during adolescence provide opportunities and supportive environments for “pairing off” between group members. By mid-adolescence, most individuals have been involved in at least one romantic relationship; by the early years of early adulthood, most are currently participating in an ongoing romantic relationship (Collins, 2003). […] Existing findings point to a shift in the qualitative characteristics of dating relationships between the ages of 15 and 17 years, and dating among early adults seems similar in key ways to dating among late adolescents. After age 17, the likelihood of being involved in a romantic relationship changes little […] Having a romantic relationship and the quality of that relationship are associated positively with romantic self-concept and, in turn, with feelings of self-worth […], and longitudinal evidence indicates that by late adolescence, self-perceived competence in romantic relationships emerges as a reliable component of general competence […]. Whether adolescent romantic relationships play a distinctive role in identity formation during adolescence is not known, although considerable speculation and some theoretical contentions imply a link […] The most widely studied patterns have to do with variations in the timing of involvement in both romantic relationships and sexual activity, typically showing that early dating and sexual activity are risk factors for current and later problem behaviors and social and emotional difficulties […] The social worlds of those involved in romantic relationships differ from those who are not because romantic partners quickly become dominant in the relationship hierarchy […]. 
Although romantic interconnections initially are predicated on principles of social exchange, commitment drives participants to transform this voluntary relationship into one that is more obligatory and permanent […]. Eventually, most early adults marry and reproduce, further transforming the relationship and marginalizing remaining friendships, thus effectively ending the peer group’s dominance of relationship experiences”.
And finally some data and observations from the chapter about close relationships in middle and late adulthood: “The majority of adults in the United States are married, but the proportion is smaller in old age than earlier in adulthood (ages 35 to 54 years = 71.3%, 55 to 64 years = 74.2%, and 65 or older = 56.7%), and a notable sex difference in the proportion married exists between men and women aged 65 or older (75.7% versus 42.9%, respectively). The majority of households comprise family households (68%), usually of married couples (52%), but 32% of adults live in non-family households, including the 26% who live alone [do keep in mind that many of those 26% are involved in romantic relationships as well, though the characteristics of the relationships they have are different]. Among persons aged 75 years or older, however, the proportion living alone is much higher (39.6%) because of the greater likelihood of being widowed (ages 35 to 54 years = 1.6%, 55 to 64 years = 6.7%, 65 to 74 = 19.6%, and 75 or older = 41%, U.S. Census Bureau, 2003). […] the proportion of householders with children of any age at home remains above 50% even in the 45- to 54-year-old age group (Russell, 2001). […] One of the key findings of research on the causes and consequences of relational difficulties in adulthood is that negative dimensions of interactions have stronger effects than positive ones on relationship quality and satisfaction.”
(A slightly unusual post, but I hope you’ll bear with me…)
*Someone’s sitting on death row, condemned to death for a crime he did not commit.
*A young woman just got raped.
*A childless widow near retirement age just lost most of her life savings by participating in a financial scheme she did not understand.
*A child died of AIDS.
*A man got diagnosed with ALS.
*A traffic accident killed the parents of two children, who are now orphans.
*A married woman just realized her husband of two decades has been cheating on her for a long time. She does not know what to do.
*An old man decides to end his pathetic existence and shoots himself.
*A dog starved to death because the owner neglected to take care of it.
*Some people are employed to try to minimize the number of bullets which leave the barrel of a machine gun and do not proceed to later hit a human being.
*A young man lost most of his teeth in a bad mugging.
*A guy working in a saw-mill lost his right hand due to a work accident with a chain saw.
*A child forgot to look both ways before crossing the street, and was killed by a woman whose life will never be the same again.
*A decision is reached at the family council. The father summarizes and tells his two sons that they are to kill their sister next week in order to protect the honor of the family. They all feel that this is the best option.
*Yesterday people from the government came by and took her two children away from her.
*An old man with dementia who for the last few years has had no visitors dies at the local retirement home.
*A man who’d been married to his wife for fifteen years was yesterday told that she no longer loves him, and that she wants a divorce.
*A woman is unable to sleep due to pain from a broken arm. She broke her arm in a recent violent argument with her abusive husband, who refuses to let her see a doctor. She lives in a part of the world where there are no women’s shelters, and she has no money or friends who might be able to help her.
*A long-time smoker has been feeling scared for weeks because he’s started to cough up blood and is worried that this might be something serious. He’s too afraid to see a doctor.
*A couple was just told that their new-born child has Down’s Syndrome. Before the child was born they had no idea anything might be wrong with the child.
*A Chinese man in his forties has lost weight over the last months and some of his teeth have fallen out. He’s also had stomach pain and memory problems. He’s worried that this might have something to do with his long-time work at the local battery manufacturing plant, but he can’t afford to see a doctor about it.
*A poor alcoholic went blind from drinking methanol.
*While they were on vacation abroad, their house burned down.
*A sixty-five year old woman suddenly feels the onset of the worst headache she’s ever had while out shopping with her grandchild. She’s dead before she reaches the hospital.
*Three children are praying with their mother. A major storm hit the coast last evening, and their father is a fisherman who did not return from his fishing trip yesterday.
*A young man was recently in a bar fight. The other guy had a knife. The young man is now awaiting a kidney transplant.
*Yesterday a homeless man died from hypothermia.
*The morphine can no longer block out the pain. The woman starts to scream.
*A mother is told by her adult son that after thinking things over carefully, he’s decided that he’ll never speak to her again, because of what she did.
*A taxi-driver was involved in an accident and has just learned that he’ll need a wheelchair for the rest of his life.
*A couple is told that if they don’t pay the rent they owe next week, they’ll get kicked out of their flat. They already know they’ll never be able to raise the money in time, but they don’t know what to do.
*A young man learns that one of his former classmates, whom he used to bully in middle school, has recently taken his life. He feels partly responsible.
*A prostitute was beaten up by one of her clients.
(I’ve been thinking about writing/publishing ‘inspirational short stories based on real-life events’ for a while…) (No, not really…)
“The student of medicine has to learn both the ‘bottom up’ approach of constructing a differential diagnosis from individual clinical findings, and the ‘top down’ approach of learning the key features pertaining to a particular diagnosis. In this textbook we have integrated both approaches into a coherent working framework that will assist the reader in preparing for academic and professional examinations, and in every day practice. […] We have split this textbook into three sections. The first section introduces the basic skills underpinning much of what follows – how to take a history and perform an examination, how to devise a differential diagnosis and select appropriate investigations, and how to record your findings in the case notes and present cases on ward rounds. The second section takes a systems-based approach to history taking and examining patients, and also includes information on relevant diagnostic tests and common diagnoses for each system. Each chapter begins with the individual ‘building blocks’ of the history and examination, and ends by drawing these elements together into relevant diagnoses. […] The third and final section of the book covers ‘special situations’, including the assessment of the newborn, infants and children, the acutely ill patient, the patient with impaired consciousness, the older patient and death and the dying patient.”
The above quote is from the preface of the book. This is a medical textbook with 500 pages and 26 chapters written by 27 contributors, so it covers a lot of ground; I’ve been conflicted about how to blog it for this reason. It has a lot of stuff which is useful to know but which most people don’t, and I think it’s the sort of book I might be tempted to ‘consult’ later on; the various 100 Cases… books I’ve read include some similar useful observations, but I think it’d be more natural to consult this book first because it’s much more likely that this book will at least have something about the medical condition you’re curious about/can’t remember the details of. I think it was somewhat easier to read than McPhee et al., and I’m not sure this is only because I read McPhee et al. first (while I was reading McPhee et al. I was learning part of the vocabulary which is needed to read this book).
In the coverage below I have not talked about the stuff included in the first part; I don’t need to e.g. be able to take a medical history and navigate medical records, and if some of my readers do I’ll assume they have the necessary skills already, or know where/how to obtain such skills. In this post I’ll focus on the coverage of major systems in part two, with my coverage focused on ‘key variables’, and, well, ‘stuff I found interesting’ – which also means that I won’t talk about stuff like ‘this is how you palpate a liver’ and ‘this is how you grade heart murmurs’ (the book also covers that kind of stuff in some detail). Nor will I tell you what Buerger’s test or Trendelenburg’s test are used for, or give you a full account of the many, many different types of ‘named medical signs’ included and described in the book (Charcot’s triad, Cullen’s sign, Grey Turner’s sign, Murphy’s sign, Courvoisier’s sign, Kussmaul’s sign, Levine’s sign, etc. …).
I may in my coverage of this book tend to focus more on acute conditions than on chronic conditions, in part because it seems more useful to me to know/remember whether or not someone is, say, having a heart attack than whether or not someone with chronic kidney failure will be bothered by pitting edema. I think this approach makes sense.
The book has split the systems coverage in part 2 up into 15 chapters – there are specific chapters about: *The cardiovascular system, *the respiratory system, *the gastrointestinal system, *the renal system, *the genitourinary system, *the nervous system, *psychiatric assessment, *the musculoskeletal system, *the endocrine system, *the breast, *the haematological system, *skin, nails and hair, *the eye, *ear, nose and throat, and *infectious and tropical diseases. Most of the book coverage is devoted to this treatment of individual systems, as these 15 chapters make up roughly 350 pages of the total. I found it interesting that there was close to zero overlap between the coverage in this book and Newman and Kohn’s text; I’m not quite sure what to think about that.
In this post I’ll mostly talk about the first three ‘systems’ chapters. When dealing with cardiovascular disease, the major symptoms are chest discomfort, breathlessness, palpitation (an awareness of the heartbeat), dizziness and syncope (‘transient loss of consciousness resulting from transient global cerebral hypoperfusion’), and peripheral oedema (usually ankle swelling, most often associated with heart failure, often worse in the evening). An important observation is that myocardial ischemia (‘the heart muscle doesn’t get enough blood/oxygen’) can cause breathlessness and chest discomfort, and “in many cases breathlessness is the predominant symptom (particularly in women).” Deep vein thrombosis can be asymptomatic, but it commonly causes pain and swelling in the affected leg – the main acute risk factor associated with the condition (which is not particularly rare among elderly people) is that the blood clot travels to the lungs and causes a pulmonary embolism.
Next, the respiratory system: “respiratory conditions are common – accounting for more than 13 per cent of all emergency admissions and more than 20 per cent of general practitioner consultations”. I was very surprised the number was that high! I can’t provide a source as the authors did not provide one; there are no inline citations in this book, which is part of the reason why the book did not get five stars on goodreads. Six key symptoms of respiratory disease are chest pain (which may extend to chest sensations more generally), dyspnoea (shortness of breath/breathlessness), cough (“the commonest symptom that is associated with pure respiratory disease”), wheeze, sputum production, and haemoptysis (coughing up blood/blood in the sputum – this is, perhaps unsurprisingly, often, but not always, a ‘red flag symptom’: “Current recommendations indicate that urgent referral to a hospital clinic should be made when patients have haemoptysis, are over the age of 40, and are current or ex-smokers. However, a young patient who has a small amount of streak (lines in sputum) haemoptysis in the context of an upper respiratory tract infection usually will not require referral”).
In respiratory medicine, cough duration is an important variable in the diagnostic context; I was surprised that even simple respiratory tract infections may cause cough for up to three weeks, and that this is not necessarily something to worry about. If the cough lasts longer than that, however, it is less likely to be due to a self-limiting condition, and more likely to be due to either lung cancer or one of the many causes of chronic cough (a cough is not considered chronic until it has lasted longer than 6 months) – these causes include, but are not limited to, asthma, COPD, and GERD. As should be clear from the above, both heart and lung conditions may cause shortness of breath, so you can’t always conclude that shortness of breath is a lung issue. This is of course far from the only symptom which may present in different disease contexts, and the heart and lungs are connected in other ways as well; for example, problems in both systems may cause clubbing. When dealing with a case of pneumonia it’s useful to be familiar with the CURB-65 score to assess risk/severity. Lung cancer can be either ‘non-small cell’ or ‘small cell’ lung cancer – in terms of presenting symptoms they’re reasonably similar, but the latter is more often associated with paraneoplastic syndromes (though these are still rare in an absolute sense, presenting in 5% of small cell lung cancers and 1% of non-small cell lung cancers, according to the book).
The most common symptom is a cough, followed by persistent ‘chest infections’ (which are of course not infections) and bloody sputum/coughing up blood – but “some patients have remarkably few signs.” In the context of acute conditions affecting the lungs, pleuritic chest pain is an important symptom; this means pain which is made worse by breathing and which often has a sharp and stabbing quality to it – acute onset pleuritic chest pain can be due to a pulmonary embolism (60% of patients with PE have acute onset pleuritic chest pain; in another 25% there is a sudden onset of acute breathlessness) or a pneumothorax (‘collapsed lung’ – may also cause acute breathlessness). Although the two conditions are different, if you have either of them you want to get to a hospital, fast – sudden onset pleuritic chest pain seems to me a very good reason to call for an ambulance/visit the local emergency department.
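As an aside, the CURB-65 score mentioned above is simple enough to sketch in a few lines of code. The five criteria and thresholds below are the commonly cited ones from general clinical references rather than from this book, so treat this as an illustrative sketch, not clinical guidance:

```python
# Illustrative sketch of the CURB-65 pneumonia severity score.
# The thresholds are the standard published ones (an assumption here, as
# the book itself doesn't spell them out): one point each for Confusion,
# Urea > 7 mmol/L, Respiratory rate >= 30/min, Blood pressure low
# (systolic < 90 or diastolic <= 60 mmHg), and age >= 65.

def curb65(confusion: bool, urea_mmol_l: float, resp_rate: int,
           systolic_bp: int, diastolic_bp: int, age: int) -> int:
    """Return the CURB-65 score (0-5); higher means more severe pneumonia."""
    score = 0
    score += confusion                                   # C: new-onset confusion
    score += urea_mmol_l > 7                             # U: urea > 7 mmol/L
    score += resp_rate >= 30                             # R: resp. rate >= 30/min
    score += systolic_bp < 90 or diastolic_bp <= 60      # B: low blood pressure
    score += age >= 65                                   # 65: age >= 65
    return score
```

Scores of 0–1 are generally taken to indicate lower-risk disease manageable outside hospital, while 3 or more suggests severe pneumonia.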
“The gastrointestinal system includes the alimentary tract from mouth to anus, the liver, hepatobiliary structures including the gallbladder, pancreas and the biliary and pancreatic ductal systems.” This is a big system. And it’s often hard to get a good look at what the problem is: “Almost half of gastrointestinal problems are not associated with physical signs or positive test results. Hence, the diagnosis and management is often based entirely on the inferences drawn from a patient’s symptoms.” Difficulty swallowing is a ‘red flag’ symptom, because “many patients with this symptom will have clinically significant pathology.” Weight loss combined with worsening difficulty swallowing (solids first, liquids later) means that oesophageal cancer is likely to be the cause (this one has a really bad prognosis). A useful observation when it comes to distinguishing between angina (‘heart issue’) and heartburn (‘gastrointestinal issue’), which may cause somewhat similar symptoms, is that whereas angina is often worsened by physical exertion, heartburn is not and often occurs at rest. It’s worth noting that when dealing with gastrointestinal disorders, you can learn a lot by figuring out where exactly the pain is coming from – stomach pain isn’t just stomach pain. Pain localized to one specific section of the abdomen is much more likely to be due to condition X than condition Y (e.g., pain in the right upper quadrant = maybe biliary obstruction or hepatomegaly; pain in the left lower quadrant = maybe diverticulitis or infectious colitis). This may not be particularly useful for people in general to know, but I thought it was interesting.
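The ‘location matters’ point above lends itself to a small lookup table; the two entries below are just the examples given in the paragraph (a real differential-diagnosis table would of course be far larger and more nuanced):

```python
# Toy mapping from abdominal quadrant to example differentials; the two
# entries are the examples mentioned above, not a complete clinical list.
QUADRANT_DIFFERENTIALS = {
    "right upper": ["biliary obstruction", "hepatomegaly"],
    "left lower": ["diverticulitis", "infectious colitis"],
}

def possible_causes(quadrant: str) -> list[str]:
    """Return example candidate causes for pain in the given quadrant."""
    return QUADRANT_DIFFERENTIALS.get(quadrant, [])
```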
Duration of pain is a key variable: “Sudden onset of well-localized severe pain is likely to be due to catastrophic events [and] [p]ain present for weeks to months is often less life-threatening than pain presenting within hours of symptom onset.” The authors point out that the severity of abdominal pain can be underestimated in elderly people, very young patients, people who are immunosuppressed, and diabetics (the latter presumably due to autonomic/diabetes-associated enteric neuropathy). “Presence of blood in the stool points towards either inflammatory bowel disease or malignancy, but in those with infective diarrhoea it is highly specific for infections with a invasive organism.” The authors mention a few pointers to specific nutritional deficiencies which are probably useful to know about – iron deficiency may cause a flat angle or ‘spooning’ of the nails, and it may also (together with vitamin B12 deficiency) cause soreness/redness of the tongue. Redness and cracks at the angles of the mouth are also associated with deficiencies of iron and vitamin B12, as well as of riboflavin and folate.
This post will be brief, but since it’s been a while since I last posted anything and since I just finished reading this book, I wanted to add a few remarks about it here while it was still ‘fresh in my mind’. I’m gradually coming to the conclusion that if I’m to blog all the books I’m reading in the amount of detail I’d ideally like to, I’ll have to read a lot less. This option does not appeal to me; I’d rather provide limited coverage of a book I’ve actually read than not read a book in order to provide more extensive coverage of another.
Anyway, the book is a rather nice collection of interviews with mathematicians from MIT’s ‘early days’ (in some sense at least – MIT is a rather old institution, but at least some of the people interviewed in this book came along during the days before MIT was what it is today), who talk about the history of the mathematics department of MIT, among other things – the people interviewed include an Abel Prize winner, a few people who’ve been members of the Institute for Advanced Study, a former MacArthur Fellow, and a guy who used to be on the selection committee for the MacArthur Foundation. All of them are really, really smart, and some of them have lived quite interesting lives. As if these guys weren’t impressive enough on their own, some of them also knew people most non-mathematicians have probably heard of – this book includes contributions from people who were friends of John Nash, Grothendieck, Shannon, Minsky, and Chomsky, and who have met and talked to people like John von Neumann, Oppenheimer, Weyl, Heisenberg, and Albert Einstein. They talk a little bit about their work and the history of the mathematics department, but they also talk about other stuff; there are various amusing anecdotes along the way (for example one interviewee tells the story about the time he lectured in a gorilla suit at MIT), there are stories about the private parties and social lives of the MIT staff during the fifties (and later), we get some personal stories about mathematicians who fled Europe when the Nazis started to cause trouble, and there are stories about student protests in the late sixties and how they were dealt with – the book spans widely. There was some repetition across the interviews (various people answering similar questions in similar ways), and there was more talk about ‘administrative matters’ than I’d have liked – probably a natural consequence of the fact that at least three of the contributors were former department heads – which is part of why I didn’t give it five stars, but it’s really quite a nice book. I may or may not blog it later in more detail.
“When I retired from clinical practice in 1998, my intention was (and still is) to write a definitive, exhaustively referenced, history of diabetes, which would be of interest primarily to doctors. However, I jumped at the suggestion of the editors of this series at Oxford University Press that I should write a biography of diabetes that would be about a tenth of the length of a full history with a minimum of references, for a wide general readership.”
This book is the result. As I pointed out on goodreads, this book is really great. The book is not particularly technical compared to other books about diabetes which I’ve read in the past; however, this semi-critical review does make the point that the coverage is occasionally implicitly ‘asking too much’ even of diabetic readers (“There were parts of all this that lost my interest or that I lacked the background to appreciate”). Whereas the reviewer was apparently to some extent getting lost in the details, so was I – but in a completely different way; I was simply amazed at the number of small details and interesting observations included in the book that I did not know, and I loved every single chapter of the book. The author of the other review incidentally also states that: “I don’t recommend that anyone read this who is not already familiar with diabetes, either by having it or knowing someone with it.” I’m not sure I agree with this recommendation, to the extent that it’s even ‘relevant’ – these days people who don’t know anyone with diabetes might well be a bit hard to find, because diabetes has become a quite common illness. Presumably a significant proportion of the people who assume they don’t know anyone with the disease actually do anyway, because a very large number of people have type 2 diabetes without knowing it. I think a reader would get more out of the book if he or she has diabetes or knows someone with diabetes, but a lot of people who do not would also benefit from knowing the stuff in this book. Not only in a ‘now you know how bad type 2 is and why you should get checked out if you think you’re at risk’ sense (there’s incidentally also a lot of stuff about type 1), but also in the ‘the history of diabetes is really quite fascinating’ sense. I do think it is.
Have a look at this image. The book included a similar picture (not exactly the same one, but it’s of the same patient and the ‘before’ picture is obviously taken at the same time this one was), which is of Billy Leroy, a type 1 diabetic, before and after he started insulin. He was one of the first patients treated with insulin (the first human treated with insulin was Leonard Thompson, in 1922). Billy Leroy’s weight in the first picture, where he was 3 years old, was 6.8 kg (the 5 % (underweight) CDC body weight cutoff at the age of 3 is 12 kg) – during the three months after he started on insulin, his weight doubled. The author argues in the beginning of the book that: “When people are asked to rank diseases in order of seriousness, diabetes is usually at the mild end of the spectrum.” This may or may not be true, but the picture to which I link above certainly adds a detail which is important to keep in mind but easy to forget when evaluating ‘the severity’ of the disease today – type 1 diabetes in particular is not much fun if you don’t have access to insulin, and until the early 1920s people with this disease simply died, most of them quite fast. (They all still do – like all other humans – but they live a lot longer before they die…)
The author knows his stuff and the book has a lot of content, making it hard to know what to pick out and mention in particular in a review like this – however, below I have added a few quotes from the book and some observations made along the way. The content covering the late nineteenth century and the first couple of decades of the twentieth century, before it was discovered that insulin could save the lives of a lot of sick children, would in my opinion on its own be a strong reason for reading the book; but the chapters covering the periods that came after are wonderful as well. When insulin was discovered, a religiously inclined mind might well have been tempted to think of its effects on young type 1 diabetic children as almost miraculous; but gradually the doctors treating diabetics came to realize (the patients never knew, because they were not told – it is pointed out in the book that the idea that it might make a lot of sense to give patients with a disease like diabetes some discretion in terms of how to treat their illness is, in a historical context, a very new one; active patient involvement in medical decision-making is one of the cornerstones of current treatment regimes, for good reason, and I found it really surprising and frustrating to learn how this disease was treated in the past) that things might be more complicated than they had initially been assumed to be. Type 2 diabetics had suffered from late-stage complications like blindness and kidney failure for centuries, but such complications had never been observed in type 1 diabetics before insulin, because diabetes presenting in children was, pre-insulin, universally fatal. It turned out that many of the children who were initially ‘saved’ by insulin in the early 1920s ended up suffering from severe complications just a couple of decades later, and many of them died early from these complications:
“After the Second World War it became clear that [diabetic] kidney disease could also affect the young, and there were increasingly frequent reports of diabetics who had been saved by insulin as children only to succumb to kidney failure in their 20s and 30s. Fifty of Joslin’s child patients who had started insulin before 1929 were followed up in 1949, when a third had died at an average age of 25, after having had diabetes for an average of 17.6 years. One half had died of kidney failure and the other half of tuberculosis and other infections. […] In the experience of the Joslin group, only 2 per cent of deaths of young diabetic patients before 1937 were due to kidney disease, but, of those who died between 1944 and 1950, more than half had advanced kidney disease. Results in Europe were equally bad. In 1955 all of eighty-seven Swiss children had signs of kidney disease after sixteen years of diabetes, and after twenty-one years all had died. Most young people with diabetic kidney disease also had severe retinopathy and many became blind—by the mid 1950s diabetes was the commonest cause of new blindness in people under the age of 50. […] Such devastating cases were being increasingly reported in the medical literature in the late 1940s and early 1950s, but they were not publicized in the lay press, presumably to avoid spreading despair and despondency and puncturing the myth that insulin had solved the problem of diabetes […] The British Diabetic Association (founded in 1935) produced a quarterly Diabetic Journal for its lay members, but no issue from 1940 to 1960 mentions complications”.
The book makes it clear that patients were for many years basically kept in the dark about the severity of their condition, though in all fairness, for a long time the doctors treating them frankly didn’t know enough to give them good information on a lot of topics anyway. The book includes some really interesting observations about how medical men of the time thought about various aspects of the illness and its treatment, and how many of the things we know today, some of which ‘seem obvious’, really were not obvious to people at the time. Many attempts have been made over time to explain why people got diabetes, and especially type 1 was really quite hard to pin down – type 2 was somewhat easier because the lifestyle component was hard to miss; however, it was natural to explain the disease in terms of the symptoms it caused, and some of those symptoms in type 2 diabetics were complications which are best considered secondary to the ‘true’ disease process. For example, because many type 2 diabetics suffered from disorders of the nervous system, neuropathy, the nervous system was for a while assumed to be implicated in causing diabetes – but although disorders of the nervous system can and often do present in long-standing diabetes, they are not why type 2 diabetics get sick. Kidney problems were thought to be “part and parcel of diabetes in the 19th century.” Oskar Minkowski made it clear in 1889 that removal of the pancreas caused severe (‘type 1-like’) diabetes in dogs – but despite this discovery it still took a long time for people to figure out how it all worked. This wasn’t because people at the time were stupid. One problem faced at the time was that the pancreas actually looked quite normal in people who died from diabetes – the islet cells which are implicated in the disease weigh around 1-1.5 grams altogether, and make up only a very small proportion of the pancreas (1% or so).
Many doctors found it hard to imagine that the islet cells could be responsible for controlling carbohydrate metabolism (and other aspects of metabolism as well – “It is important to realize that diabetes is not just a glucose disease. There are also abnormalities of fat metabolism”). The pancreas wasn’t the only organ that looked normal – despite the excessive urination the kidneys did as well, and so did other organs, to the naked eye. All major features of diabetic retinopathy (diabetic eye disease) had been described by the year 1890 with the aid of the ophthalmoscope, so people knew the eyes of people with long-standing diabetes looked different; how to interpret these findings was however not clear at the time – some argued the eye damage found in diabetics was no different from eye damage caused by hypertension, and treatment options were non-existent anyway.
Many of the treatment options discussed among medical men before insulin were diets, and although dietary considerations are important in the treatment context today, it’s probably fair to say that not all of the supposed dietary remedies of the past were equally sensible: “One diet that had a short vogue in the 1850s was sugar feeding, brainchild of the well-known but eccentric French physician Pierre Piorry (1794–1879). He thought that diabetics lost weight and felt so weak because of the amount of sugar they lost in the urine and that replacing it should restore their strength”. (Aargh!). For the curious (or desperate) man, though, there were alternatives to diets: “A US government publication in 1894 listed no less than forty-two anti-diabetic remedies including bromides, uranium nitrate, and arsenic.” Relatedly, “in England until 1925, any drug could be advertised and marketed as a cure for any disease, even if it was completely ineffective”. Whether or not diets ‘worked’ depended in part on what those proposed diets included (see above..), whether people followed them, and whether people who presented were thin or fat. In the book Tattersall mentions that already from the middle of the nineteenth century many physicians thought that there were two different types of diabetes (there are more than two, but…). The thin young people presenting with symptoms were by many for decades considered hopeless cases (that they were hopeless cases was even noted in medical textbooks at the time), because they had this annoying habit of dying no matter what you did.
It should be noted that the book indirectly provides some insights into the general state of medical research and medical treatment options over time; as an example of the former, it is mentioned that the first clinical trial dealing with diabetes (with really poor randomization/selection mechanisms, it seems from the description in the book) was undertaken in the 1960s: “the FDA demanded randomized controlled trials for the first time in 1962, and [the University Group Diabetes Program (UGDP)] was the first in diabetes. Before 1962 the evidence in support of therapeutic efficacy put to the FDA was often just ‘testimonials’ from physicians who casually tested experimental drugs on their patients and were paid for doing so.” See also this link. An example of the latter would be the observation made in the book that: “until the 1970s treatment for a heart attack was bed rest for five or six weeks, while nature took its course.” Diabetics were not the only sick people who had a tough time in the past.
One interesting question related to what people didn’t know in the early years after the introduction of insulin was how the treatment might work long-term. The author notes that newspapers in the early years made people believe that insulin would be a cure; it was thought that insulin might nurse the islet cells back to health, so that they’d start producing insulin on their own again – which was actually not a completely stupid idea, as e.g. kidneys had the ability to recover after acute glomerulonephritis. The fact that diabetics often started on high doses which could then be lowered a month or two later even lent support to this idea; however it was discovered quite quickly that regeneration was not taking place. Remarkably, insulin was explored as a treatment option for other diseases in the 1920s, and was actually used to stimulate appetite in tuberculosis patients and ‘in the insane refusing food’, an idea which came about because one of its most obvious effects was weight gain. This effect was also part of the reason why insulin was for a long time not considered an attractive option for type 2 diabetics, who instead were treated only with diet unless this treatment failed to reduce blood sugar levels sufficiently (these were the only two treatment options until the 1950s); most of them were already overweight and insulin caused weight gain, and besides, insulin didn’t work nearly as well in them as it did in young and lean people with type 1 because of insulin resistance, which led to the requirement of high doses of the drug.
Throughout much of the history of diabetes, diabetics did not measure their blood glucose regularly – instead they tested their urine to determine whether it contained glucose or not (glucose in the urine indicates that the blood glucose is quite high). This meant that the only metric they had available to monitor their disease on a day-to-day basis was one which was unable to measure low blood glucose, and which could only (badly) distinguish between much-too-high blood glucose values and not-much-too-high values. Any type of treatment regime like the one I’m currently on would be completely impossible without regular blood tests on a daily basis, and I was very surprised by how late the idea of self-monitoring of blood glucose appeared; like the measurement of HbA1c, this innovation did not appear until the late 1970s. A few years after that, the first insulin pen revolutionized treatment and made regimes involving multiple injections each day much more common than they had been in the past, facilitating much better metabolic control.
The book has a lot of stuff about specific complications and the history of treatment advances – both the ones that worked and some of the ones that didn’t. If you’re a diabetic today, you tend to take a lot of stuff for granted – and reading a book like this will really make you appreciate how many ideas had to be explored, how many false starts there were, and how much work by so many different people actually went into giving you the options you have today, keeping you alive, and perhaps even relatively well. One example of the type of treatment options which were considered in the past but turned out not to work was curative pancreas transplants, which were explored in the 1960s and 1970s: “Pancreas transplantation offered a potential cure of type 1 diabetes. The first was done in 1966 […] Worldwide in the next eleven years, fifty-seven transplants were done, but only two worked for more than a year”. Recent attempts to stop people at risk of developing type 1 diabetes from becoming sick are also discussed in the last part of the book, and in this context the author makes a point I was familiar with: “[Repeated] failures [in this area] are particularly frustrating, because, in the best animal model of type 1 diabetes, the NOD mouse, over 100 different interventions can prevent diabetes.” This is one of the reasons why I tend to be skeptical about results from animal studies. Although he spends many pages on complications – which in a book like this makes a lot of sense given how common these complications were (and to some extent still are), and how important a role they have played in the lives of people suffering from diabetes throughout the ages – I have talked about many of these things before, just as I have talked about the results of various large-scale trials like the DCCT trial and the UKPDS (see e.g. this and this), so I will not discuss such topics in detail here.
I do however want to briefly remind people of what kind of a disease badly managed type 2 diabetes (by far the most common of the two) is, especially if it is true, as the author argues in the introduction, that many people perceive it as a relatively mild disease – so I’ll end the post with a few quotes from the book:
“I took over the diabetic clinic in Nottingham in 1975 and three years later met Lilian, an overweight 60-year-old woman who was on tablets for diabetes. She had had sugar in her urine during her last pregnancy in 1957 but was well until 1963, when genital itching (pruritus vulvae) led to a diagnosis of diabetes. She attended the clinic for two years but was then sent back to her GP with a letter that read: ‘I am discharging this lady with mild maturity onset diabetes back to your care.’ She continued to collect her tablets but had no other supervision. When I met her after she had had diabetes for eighteen years she was blind, had had a heart attack, and had had one leg amputated below the knee. The reason for the referral to me was an ulcer on her remaining foot, which would not heal. […] Someone whose course is not dissimilar to that of Lilian is Sue Townsend (b. 1946), author of the Adrian Mole books. She developed diabetes at the age of 38 and after only fifteen years was blind from retinopathy and wheelchair bound because of a Charcot foot, a condition in which the ankle disintegrates as a result of nerve damage. Neuropathy has also destroyed the nerve endings in her fingers, so that, like most other blind diabetics, she cannot read Braille. She blames her complications on the fact that she cavalierly disregarded the disease and kept her blood sugars high to avoid the inconvenience of hypoglycaemic (low-blood-sugar) attacks.”
This is a list of books I’ve read to completion this year so far. I’ll try to update the list regularly throughout the year. See my 2014 post for details on how these lists work.
3. The Eye of Zoltar (f). Jasper Fforde.
4. Statistical Models for Proportions and Probabilities (nf. Springer). Blog coverage here.
7. Chamberlain’s Symptoms and Signs in Clinical Medicine: An Introduction to Medical Diagnosis (4, nf. CRC Press). Blog coverage here and here.
8. Diabetes: The Biography (5, nf. Oxford University Press). My goodreads review is worth reposting here: “This book is awesome. This is simply a wonderful account of the history of diabetes. Highly recommended.” Blog coverage here.
10. Model Selection and Multi-Model Inference: A Practical Information-Theoretic Approach (5, nf. Springer). Goodreads review here. Blog coverage here.
11. Recountings: Conversations with MIT Mathematicians (4, nf. AK Peters). Blog coverage here.
12. Whose Body? (2, f). Dorothy Sayers.
13. Clouds of Witness (3, f). Dorothy Sayers.
14. Introduction to Systems Analysis: Mathematically Modeling Natural Systems (3, nf. Springer). Note that goodreads has listed this book under the wrong title, which is the reason why the title in this post deviates from the title on goodreads. Goodreads review here. Blog coverage here.
15. Unnatural Death (2, f). Dorothy Sayers.
16. Mammoths, Sabertooths, and Hominids: 65 Million Years of Mammalian Evolution in Europe (4, nf. Columbia University Press). Blog coverage here.
18. Lord Peter Views the Body (2, f). Dorothy Sayers.
“We wrote this book to introduce graduate students and research workers in various scientific disciplines to the use of information-theoretic approaches in the analysis of empirical data. These methods allow the data-based selection of a “best” model and a ranking and weighting of the remaining models in a pre-defined set. Traditional statistical inference can then be based on this selected best model. However, we now emphasize that information-theoretic approaches allow formal inference to be based on more than one model (multimodel inference). Such procedures lead to more robust inferences in many cases, and we advocate these approaches throughout the book. […] Information theory includes the celebrated Kullback–Leibler “distance” between two models (actually, probability distributions), and this represents a fundamental quantity in science. In 1973, Hirotugu Akaike derived an estimator of the (relative) expectation of Kullback–Leibler distance based on Fisher’s maximized log-likelihood. His measure, now called Akaike’s information criterion (AIC), provided a new paradigm for model selection in the analysis of empirical data. His approach, with a fundamental link to information theory, is relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. […] We do not claim that the information-theoretic methods are always the very best for a particular situation. They do represent a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and they are objective and practical to employ across a very wide class of empirical problems. 
Inference from multiple models, or the selection of a single “best” model, by methods based on the Kullback–Leibler distance are almost certainly better than other methods commonly in use now (e.g., null hypothesis testing of various sorts, the use of R2, or merely the use of just one available model).
This is an applied book written primarily for biologists and statisticians using models for making inferences from empirical data. […] This book might be useful as a text for a course for students with substantial experience and education in statistics and applied data analysis. A second primary audience includes honors or graduate students in the biological, medical, or statistical sciences […] Readers should ideally have some maturity in the quantitative sciences and experience in data analysis. Several courses in contemporary statistical theory and methods as well as some philosophy of science would be particularly useful in understanding the material. Some exposure to likelihood theory is nearly essential”.
The above quotes are from the preface of the book, which I have so far only briefly talked about here; this post will provide a lot more details. Aside from writing the post in order to mentally process the material and obtain a greater appreciation of the points made in the book, I have also, as a secondary goal, tried to write the post in such a way that people who are not necessarily experienced model-builders might derive some benefit from the coverage. Whether or not I was successful in that respect I do not know – given the quotes above, it should be obvious that there are limits to how ‘readable’ you can make stuff like this for people without a background in a semi-relevant field. I don’t think I have written specifically about the application of information criteria in the model selection context before here on the blog, at least not in any amount of detail, but I have written about ‘model-stuff’ before, also in ‘meta-contexts’ not necessarily related to the application of models in economics; so if you’re interested in ‘this kind of stuff’ but you don’t feel like having a go at a post dealing with a book which includes word combinations like ‘the (relative) expectation of Kullback–Leibler distance based on Fisher’s maximized log-likelihood’ in the preface, you can for example have a look at posts like this, this, this and this. I have also discussed here on the blog some stuff somewhat related to the multi-model inference part – how you can combine the results of various models to get a bigger picture of what’s going on – in these posts; they approach ‘the topic’ (these are in fact separate topics…) in a very different manner than does this book, but some key ideas should presumably transfer.
Having said all this, I should also point out that many of the basic points made in the coverage below should be relatively easy to understand, and I should perhaps repeat that I’ve tried to make this post readable to people who’re not too familiar with this kind of stuff. I have deliberately chosen to include no mathematical formulas in my coverage in this post. Please do not assume this is because the book does not contain mathematical formulas.
Before moving on to the main coverage I thought I’d add a note about the remark above that stuff like AIC is “little taught in statistics classes and far less understood in the applied sciences than should be the case”. The book was written a while back, and some things may have changed a bit since then. I have done coursework on the application of information criteria in model selection as it was a topic (briefly) covered in regression analysis(? …or an earlier course), so at least this kind of stuff is now being taught to students of economics where I study and has been for a while as far as I’m aware – meaning that coverage of such topics is probably reasonably widespread at least in this field. However I can hardly claim that I obtained a ‘great’ or ‘full’ understanding of the issues at hand from the work on these topics I did back then – and so I have only gradually, while reading this book, come to appreciate some of the deeper issues and tradeoffs involved in model selection. This could probably be taken as an argument that these topics are still ‘far less understood … than should be the case’ – and another, perhaps stronger, argument would be Seber’s comments in the last part of his book; if a statistician today may still ‘overlook’ information criteria when discussing model selection in a Springer text, it’s not hard to argue that the methods are perhaps not as well known as should ‘ideally’ be the case. It’s obvious from the coverage that a lot of people were not using the methods when the book was written, and I’m not sure things have changed as much as would be preferable since then.
What is the book about? A starting point for understanding the sort of questions the book deals with might be the simple question: when we set out to model stuff empirically and have different candidate models to choose from, how do we decide which of the models is ‘best’? A lot of other questions are dealt with in the coverage as well. What does the word ‘best’ mean? We might worry about both the functional form of the model and which variables should be included in ‘the best’ model – do we need separate mechanisms for dealing with concerns about functional form and concerns about variable selection, or can we deal with both at the same time? How do we best measure the effect of a variable we have access to and are considering including in our model(s)? Is it preferable to interpret the effect of a variable on an outcome based on the results obtained from a ‘best model’ in the set of candidate models, or is it sometimes better to combine the results of multiple models in the choice set, for example by taking an average of the variable’s effect across the proposed models as the best possible estimate? (As should by now be obvious to people who’ve read along here, there are some sometimes quite close parallels between stuff covered in this book and stuff covered in Borenstein & Hedges.) If we’re not sure which model is ‘right’, how might we quantify our uncertainty about these matters – and what happens if we don’t try to quantify our uncertainty about which model is correct? What is bootstrapping, and how can we use Monte Carlo methods to help us with model selection? If we apply information criteria to choose among models, what do these criteria tell us, and which sort of issues are they silent about?
Are some methods for deciding between models better than others in specific contexts – might it for example be a good idea to adjust our criteria when faced with small sample sizes, which make it harder for us to rely on the asymptotic properties of the criteria we apply? More generally, how might the sample size relate to the criterion we use to decide which model is ‘best’? Should ‘the best model’ depend upon how much data we have access to, and if the amount of data and the ‘optimal size of a model’ are related, how are the two related, and why? These questions relate to some fundamental differences between AIC (and similar measures) and BIC – but let’s not get ahead of ourselves. I may or may not go into details like these in my coverage of the book, but I certainly won’t cover stuff like that in this post. Some of the content is really technical: “Chapters 5 and 6 present more difficult material [than chapters 1-4] and some new research results. Few readers will be able to absorb the concepts presented here after just one reading of the material […] Underlying theory is presented in Chapter 7, and this material is much deeper and more mathematical.” – from the preface. The sample size considerations mentioned above relate to stuff covered in chapter 6. As you might already have realized, this book has a lot of stuff.
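To make the key quantities a bit more concrete, here is a minimal sketch (my own illustration, not code from the book) of how AIC, the small-sample corrected AICc, and Akaike weights might be computed for a hypothetical set of candidate models, given each model’s maximized log-likelihood and parameter count:

```python
import math

def aic(log_lik, k):
    # AIC = 2K - 2 ln(L): penalizes parameters, rewards fit (lower is better)
    return 2 * k - 2 * log_lik

def aicc(log_lik, k, n):
    # Small-sample correction to AIC, usually recommended when n/K is small
    return aic(log_lik, k) + (2 * k * (k + 1)) / (n - k - 1)

def akaike_weights(scores):
    # w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2), where delta_i is
    # each model's criterion value minus the best (smallest) value in the set
    best = min(scores)
    rel = [math.exp(-(s - best) / 2) for s in scores]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical candidate set: (maximized log-likelihood, number of parameters)
candidates = [(-120.3, 2), (-112.1, 4), (-111.8, 7)]
n = 60  # sample size
scores = [aicc(ll, k, n) for ll, k in candidates]
weights = akaike_weights(scores)
best_index = scores.index(min(scores))  # here the 4-parameter model wins:
# the 7-parameter model fits slightly better but is penalized for complexity
```

The weights can be read as the relative support for each model given the data and the candidate set, which is what makes model averaging across the set possible.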
When dealing with models, one way to think about these things is to consider two in some sense separate issues: on the one hand we might think about which model is most appropriate (model selection), and on the other hand we might think about how best to estimate parameter values and variance-covariance matrices given a specific model. As the book points out early on, “if one assumes or somehow chooses a particular model, methods exist that are objective and asymptotically optimal for estimating model parameters and the sampling covariance structure, conditional on that model. […] The sampling distributions of ML [maximum likelihood] estimators are often skewed with small samples, but profile likelihood intervals or log-based intervals or bootstrap procedures can be used to achieve asymmetric confidence intervals with good coverage properties. In general, the maximum likelihood method provides an objective, omnibus theory for estimation of model parameters and the sampling covariance matrix, given an appropriate model.” The problem is that it’s not ‘a given’ that the model we’re working on is actually appropriate. That’s where model selection mechanisms enter the picture. Such methods can help us figure out which of the models we’re considering might be the most appropriate one(s) to apply in the specific context (there are other things they can’t tell us, however – see below).
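As for the bootstrap procedures mentioned in the quote, a generic percentile-bootstrap sketch might look as follows (my own illustration with made-up data, not an example from the book) – the point being that the resulting interval is allowed to be asymmetric, which suits skewed sampling distributions better than a symmetric normal-theory interval:

```python
import random

def percentile_bootstrap_ci(data, stat, n_boot=5000, alpha=0.05, seed=42):
    # Resample the data with replacement, recompute the statistic each time,
    # and take the empirical alpha/2 and 1-alpha/2 quantiles of the replicates.
    rng = random.Random(seed)
    n = len(data)
    replicates = sorted(
        stat([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = replicates[int((alpha / 2) * n_boot)]
    hi = replicates[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# A small right-skewed sample (hypothetical numbers)
sample = [1.1, 0.4, 2.7, 0.9, 5.8, 1.3, 0.7, 9.2, 1.8, 0.6]
mean = lambda xs: sum(xs) / len(xs)
low, high = percentile_bootstrap_ci(sample, mean)
```

Note that this interval is conditional on the chosen model/statistic – the bootstrap says nothing about whether the model itself was an appropriate choice, which is exactly the gap model selection methods are meant to fill.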
Below I have added some quotes from the book and some further comments:
“Generally, alternative models will involve differing numbers of parameters; the number of parameters will often differ by at least an order of magnitude across the set of candidate models. […] The more parameters used, the better the fit of the model to the data that is achieved. Large and extensive data sets are likely to support more complexity, and this should be considered in the development of the set of candidate models. If a particular model (parametrization) does not make biological [/’scientific’] sense, this is reason to exclude it from the set of candidate models, particularly in the case where causation is of interest. In developing the set of candidate models, one must recognize a certain balance between keeping the set small and focused on plausible hypotheses, while making it big enough to guard against omitting a very good a priori model. While this balance should be considered, we advise the inclusion of all models that seem to have a reasonable justification, prior to data analysis. While one must worry about errors due to both underfitting and overfitting, it seems that modest overfitting is less damaging than underfitting (Shibata 1989).” (The key word here is ‘modest’ – and please don’t take these authors to be in favour of obviously overfitted models and data dredging strategies; they spend quite a few pages criticizing such models/approaches!).
“It is not uncommon to see biologists collect data on 50–130 “ecological” variables in the blind hope that some analysis method and computer system will “find the variables that are significant” and sort out the “interesting” results […]. This shotgun strategy will likely uncover mainly spurious correlations […], and it is prevalent in the naive use of many of the traditional multivariate analysis methods (e.g., principal components, stepwise discriminant function analysis, canonical correlation methods, and factor analysis) found in the biological literature [and elsewhere, US]. We believe that mostly spurious results will be found using this unthinking approach […], and we encourage investigators to give very serious consideration to a well-founded set of candidate models and predictor variables (as a reduced set of possible prediction) as a means of minimizing the inclusion of spurious variables and relationships. […] Using AIC and other similar methods one can only hope to select the best model from this set; if good models are not in the set of candidates, they cannot be discovered by model selection (i.e., data analysis) algorithms. […] statistically we can infer only that a best model (by some criterion) has been selected, never that it is the true model. […] Truth and true models are not statistically identifiable from data.”
“It is generally a mistake to believe that there is a simple “true model” in the biological sciences and that during data analysis this model can be uncovered and its parameters estimated. Instead, biological systems [and other systems! – US] are complex, with many small effects, interactions, individual heterogeneity, and individual and environmental covariates (most being unknown to us); we can only hope to identify a model that provides a good approximation to the data available. The words “true model” represent an oxymoron, except in the case of Monte Carlo studies, whereby a model is used to generate “data” using pseudorandom numbers […] A model is a simplification or approximation of reality and hence will not reflect all of reality. […] While a model can never be “truth,” a model might be ranked from very useful, to useful, to somewhat useful to, finally, essentially useless. Model selection methods try to rank models in the candidate set relative to each other; whether any of the models is actually “good” depends primarily on the quality of the data and the science and a priori thinking that went into the modeling. […] Proper modeling and data analysis tell what inferences the data support, not what full reality might be […] Even if a “true model” did exist and if it could be found using some method, it would not be good as a fitted model for general inference (i.e., understanding or prediction) about some biological system, because its numerous parameters would have to be estimated from the finite data, and the precision of these estimated parameters would be quite low.”
A key concept in the context of model selection is the tradeoff between bias and variance in a model framework:
“If the fit is improved by a model with more parameters, then where should one stop? Box and Jenkins […] suggested that the principle of parsimony should lead to a model with “. . . the smallest possible number of parameters for adequate representation of the data.” Statisticians view the principle of parsimony as a bias versus variance tradeoff. In general, bias decreases and variance increases as the dimension of the model (K) increases […] The fit of any model can be improved by increasing the number of parameters […]; however, a tradeoff with the increasing variance must be considered in selecting a model for inference. Parsimonious models achieve a proper tradeoff between bias and variance. All model selection methods are based to some extent on the principle of parsimony […] The concept of parsimony and a bias versus variance tradeoff is very important.”
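A toy illustration of the tradeoff (my own construction, not an example from the book): generate noisy data from a quadratic, fit polynomials of increasing order by least squares, and score each with the least-squares form of AIC. The residual sum of squares always falls as parameters are added, but the 2K penalty should stop the selection near the true dimension:

```python
import math
import random

def fit_polynomial(xs, ys, degree):
    # Least squares via the normal equations and Gaussian elimination
    # (no external libraries, to keep the sketch self-contained).
    m = degree + 1
    a = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    for col in range(m):  # elimination with partial pivoting
        piv = max(range(col, m), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = a[r][col] / a[col][col]
            for c in range(col, m):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):
        coef[r] = (b[r] - sum(a[r][c] * coef[c] for c in range(r + 1, m))) / a[r][r]
    return coef

def aic_gaussian(xs, ys, degree):
    # For least squares with normal errors: AIC = n ln(RSS/n) + 2K,
    # where K counts the polynomial coefficients plus the error variance
    coef = fit_polynomial(xs, ys, degree)
    rss = sum((y - sum(c * x ** i for i, c in enumerate(coef))) ** 2
              for x, y in zip(xs, ys))
    n = len(xs)
    k = degree + 2
    return n * math.log(rss / n) + 2 * k

rng = random.Random(0)
xs = [i / 10 for i in range(40)]
# True data-generating model is quadratic, with modest noise
ys = [1.0 + 2.0 * x - 0.5 * x * x + rng.gauss(0, 0.3) for x in xs]
scores = {d: aic_gaussian(xs, ys, d) for d in range(1, 7)}
best_degree = min(scores, key=scores.get)
```

The linear fit is heavily punished by its large residuals (underfitting), while the higher-order fits buy only small reductions in RSS at the cost of the parameter penalty (overfitting) – the parsimonious middle ground wins, or comes very close to winning.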
“we reserve the terms underfitted and overfitted for use in relation to a “best approximating model” […] Here, an underfitted model would ignore some important replicable (i.e., conceptually replicable in most other samples) structure in the data and thus fail to identify effects that were actually supported by the data. In this case, bias in the parameter estimators is often substantial, and the sampling variance is underestimated, both factors resulting in poor confidence interval coverage. Underfitted models tend to miss important treatment effects in experimental settings. Overfitted models, as judged against a best approximating model, are often free of bias in the parameter estimators, but have estimated (and actual) sampling variances that are needlessly large (the precision of the estimators is poor, relative to what could have been accomplished with a more parsimonious model). Spurious treatment effects tend to be identified, and spurious variables are included with overfitted models. […] The goal of data collection and analysis is to make inferences from the sample that properly apply to the population […] A paramount consideration is the repeatability, with good precision, of any inference reached. When we imagine many replicate samples, there will be some recognizable features common to almost all of the samples. Such features are the sort of inference about which we seek to make strong inferences (from our single sample). Other features might appear in, say, 60% of the samples yet still reflect something real about the population or process under study, and we would hope to make weaker inferences concerning these. Yet additional features appear in only a few samples, and these might be best included in the error term (σ2) in modeling. 
If one were to make an inference about these features quite unique to just the single data set at hand, as if they applied to all (or most all) samples (hence to the population), then we would say that the sample is overfitted by the model (we have overfitted the data). Conversely, failure to identify the features present that are strongly replicable over samples is underfitting. […] A best approximating model is achieved by properly balancing the errors of underfitting and overfitting.”
Model selection bias is a key concept in the model selection context, and I think this problem is quite similar/closely related to problems encountered in a meta-analytical context which I believe I’ve discussed before here on the blog (see the links above to the posts on meta-analysis) – if I’ve understood these authors correctly, one might choose to think of publication bias issues as partly the result of model selection bias issues. Let’s for a moment pretend you have a ‘true model’ which includes three variables (in the book example there are four, but I don’t think you need four…); one is very important, one is a sort of ‘60% of the samples’ variable as mentioned above, and the last one is a variable we might prefer to just include in the error term. Now the problem is this: when people look at samples where the last of these variables is ‘seen to matter’, the effect size of this variable will be biased away from zero (they don’t explain where this bias comes from in the book, but I’m reasonably sure it is a result of the probability of identification/inclusion of the variable in the model depending on the (‘local’/’sample’) effect size; the bigger the effect size of a specific variable in a specific sample, the more likely the variable is to be identified as important enough to be included in the model – Borenstein & Hedges talked about similar dynamics, for obvious reasons, and I think their reasoning ‘transfers’ to this situation and is applicable here as well). When models include variables such as the last one, you’ll have model selection bias: “When predictor variables [like these] are included in models, the associated estimator for a σ2 is negatively biased and precision is exaggerated. These two types of bias are called model selection bias”. Much later in the book they incidentally conclude that: “The best way to minimize model selection bias is to reduce the number of models fit to the data by thoughtful a priori model formulation.”
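The mechanism I’m speculating about above – that conditioning a variable’s inclusion on its observed effect biases the estimate away from zero – is easy to check with a small simulation (my own, stylized so that the ‘effect’ is just a sample mean with known unit error variance):

```python
import random

def simulate_selection_bias(true_beta=0.2, n=50, sims=2000, z_crit=1.96, seed=1):
    # Each replicate: estimate a weak true effect from a fresh sample.
    # "Selected" replicates are those where |beta_hat / se| clears the usual
    # 5% threshold -- i.e. samples in which the variable would have looked
    # important enough to make it into the model.
    rng = random.Random(seed)
    all_estimates, selected = [], []
    for _ in range(sims):
        xs = [true_beta + rng.gauss(0, 1) for _ in range(n)]
        beta_hat = sum(xs) / n
        se = 1 / n ** 0.5  # known unit error variance, for simplicity
        all_estimates.append(beta_hat)
        if abs(beta_hat / se) > z_crit:
            selected.append(beta_hat)
    mean_all = sum(all_estimates) / len(all_estimates)
    mean_selected = sum(selected) / len(selected)
    return mean_all, mean_selected

mean_all, mean_selected = simulate_selection_bias()
```

Unconditionally the estimator is unbiased (mean_all lands close to the true 0.2), but the replicates that pass the significance filter systematically overstate the effect – the same dynamic that drives publication bias in the meta-analysis setting.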
“Model selection has most often been viewed, and hence taught, in a context of null hypothesis testing. Sequential testing has most often been employed, either stepup (forward) or stepdown (backward) methods. Stepwise procedures allow for variables to be added or deleted at each step. These testing-based methods remain popular in many computer software packages in spite of their poor operating characteristics. […] Generally, hypothesis testing is a very poor basis for model selection […] There is no statistical theory that supports the notion that hypothesis testing with a fixed α level is a basis for model selection. […] Tests of hypotheses within a data set are not independent, making inferences difficult. The order of testing is arbitrary, and differing test order will often lead to different final models. [This is incidentally one, of several, key differences between hypothesis testing approaches and information theoretic approaches: “The order in which the information criterion is computed over the set of models is not relevant.”] […] Model selection is dependent on the arbitrary choice of α, but α should depend on both n and K to be useful in model selection”.
This will be my last post about the book. Below I have posted some observations from the second half of the book and some comments:
When we negotiate with others, we don’t just optimize economic payoffs. Process-concerns matter (e.g. stuff like reputation effects, whether people are perceived to behave in a fair manner during the negotiation, etc.), and ‘relationship management’-stuff may be very important. How well we know the other party and how much we trust him/her matters a lot: “As a general matter, it is easier for negotiators to reach integrative agreements if they already know and trust each other. For example, compared to mere acquaintances who negotiate, friends are more likely to achieve integrative agreements […] Friends are willing to sacrifice economic payoffs to reduce the conflict and negative externalities of negotiations […] When we negotiate with a friend, we prefer that our counterpart’s outcome be equal to our own; but when we negotiate with a stranger, we prefer to take more of the surplus for ourselves […] Regardless of the objective payoffs, positive relationships influence parties’ interpretations of those payoffs […] as a general matter, the stronger the relationship between negotiators, the more likely they are to share information (Greenhalgh & Chapman, 1998).”
“although we tend to rate ourselves more favorably than others on ambiguous traits like dependability, intelligence, and considerateness […], these ratings are likely to depend on the abstractness of the others in question. If the other is the “average person” then my comparison tends to be more favorable toward myself than if the other is an individual stranger sitting next to me […]. In other words, the less abstract the other person about whom I make a judgment, the more likely I am to judge that person as more similar to me. […] The “identifiable other” effect extends to other judgments besides estimates of personality traits. For example, people estimate that an “average person” making a decision will choose a riskier option than themselves, but that the stranger sitting next to them will choose a similar option to the one they just chose for themselves […]. Even more interesting for our purposes, the extent to which the other person is identifiable influences more than merely our judgments — it also affects our behavior toward other people. For example, responses to e-mail requests to participate in a survey have been shown to increase if the sender’s photograph is included in the e-mail, thereby making the person more identifiable […] people are more willing to help a target who is more identifiable than one who is more abstract, even when the act of identification conveys no information whatsoever about the characteristics of the target.”
“When we communicate via technology, we attend less to the other person and more to the message they are disseminating […]. This focus on the message has potential benefits. For example, in a study comparing negotiations that took place either over the phone or in person […], one negotiator in each dyad was given a strong case (i.e., a large number of high-quality arguments) to present whereas the other was assigned to present a weak case. The strong case was more successful in the phone condition (where negotiation partners were not visible) than in the face-to-face condition. By contrast, weak arguments were more successful when negotiations took place face to face as opposed to by phone. A clear implication of these findings is that the social constraint of the communication medium can affect the persuasion process that occurs during negotiations. […] communicating via technology can lead negotiators to focus more on the content or quality of arguments made by their negotiation partner rather than being (mis)led into agreement by more peripheral factors like those made salient in face-to-face interactions […] This greater focus on the message content can have particularly negative consequences when the content of the message that one receives is negative or confrontational. […] Ultimately, participants interacting via information technology often like their discussion partners less than those interacting face to face”.
A personal comment is perhaps worth inserting here: I generally don’t like debating ‘factual stuff’ offline in a social context where someone is openly disagreeing with me, certainly not compared to doing the same thing online (I don’t like engaging in disagreements online either, but it’s not as bad). Part of the reason is a strong, long-standing impression on my part that poor arguments are much, much harder to fight in a face-to-face interaction than they are in interactions taking place online. In the past I think I’ve mentally explained this in terms of me being ‘a bad communicator’/’not verbally skilled’ and similar things, but recently I have received some social feedback from face-to-face interactions suggesting that perhaps that’s not the (only?) reason (i.e. I’ve been given feedback to the effect that my verbal communication skills may be significantly better than I’ve tended to think they are). A conceptual distinction should probably be made in this context between communication and persuasion, though naturally there’s some overlap; the ability to accurately outline your point of view is to some extent different from the ability to convince others that your point of view is true, or perhaps that it is socially desirable to hold it (the latter concern is presumably often a more salient factor in a social context than is the truth content of the competing positions). It’s become clear to me that being right has little to do with whether or not you’ll win a verbal/face-to-face argument, and that the skill set required to actually win arguments of that sort is very different from the skill set required to actually be right about stuff. Incidentally, on an unrelated note, I suspect especially the penultimate sentence in the paragraph above may be rather important for some people on the autism spectrum to keep in mind.
“McGinn and Croson (2004) argue that in settings where social perceptions and intimacy have not been established, such as negotiations between strangers, the lack of visual access, synchronicity, and efficacy inherent in e-mail can result in less cooperation, coordination, truth telling, and rapport building. But where social perceptions and intimacy are already established, as in negotiations between friends, the medium might not make much or any difference. […] when communicators have positive-valenced goals such as relationship-building, computer-mediated interactions can be highly personal, resulting in the development of close relationships”
“Research by Bond (1983), Shweder and Bourne (1982), Miller (1984), and most recently Morris and Peng (1994), has documented that Asians [from East Asia, it should be emphasized] make more situational attributions while Americans make more dispositional ones. […] In non-Western [in the research context here, ‘non-Western’ = ‘East Asian’] cultures, social conceptions are group centered, reflecting interdependence […]. People believe that human behavior is constrained by roles and role constraints, by group norms to preserve relationships with others, and by scripts that prescribe proper situational behavior. This perspective is highly consistent with making situational attributions. It is also consistent with the hierarchical values of these cultures […] Given the same information about an event, people from different cultures give very different attributions […], and construct causal explanations that are consistent with their culturally instantiated belief system. […] these culturally linked attribution patterns [affect] perceptions, reasoning, and other cognitive processes, as well as negotiation and other behavior. […] [East Asians] prefer expressing conflict in indirect ways, both verbally and especially behaviorally […]. From a verbal perspective, non-Westerners in conflict prefer using words whose meaning requires inference, for example, contempt rather than anger; words that are less blunt […] and words that are more ambiguous and “avoid leaving an assertive impression” […]. The behaviors that non-Westerners use to manage conflict are also indirect. There are many different types of indirect confrontation behaviors. 
One set involves using diffused voice, which means broadcasting concerns publicly to a diffused audience rather than directly to the other person […] Leung (1987) also showed that compared to Americans, Chinese preferred indirect procedures involving third parties, such as mediation, to direct face-to-face adversarial procedures, because the indirect procedures were seen as more conducive to reducing animosity among the parties. […] Similarly, Ohbuchi and Takahashi (1994) found that disputants in Japan used indirect strategies of ingratiation, impression management, and appeasement in order to deal with conflict.”
“Because gender may affect negotiation behavior in some situations and not others, when researchers compare results across studies and do not account for the different situational factors across these same studies, it may appear that gender has an inconsistent impact.”
“Stuhlmacher and Walters (1999) […] showed that, across a wide range of studies, gender differences in negotiated outcomes were greater in distributive negotiations than in integrative ones. […] Barron (2003) documented a striking divergence in how men and women determine their worth in salary negotiations. Whereas the vast majority of men indicated their worth was self-determined, the overwhelming majority of women felt that their worth was determined by what the company would pay them.”
Here are my first two posts about the book. In this post I’ll continue my coverage of the chapters from the book which I thought were interesting. The first of these is the chapter on ‘Loneliness and Social Isolation’. I’ve written about this kind of stuff before (see e.g. these posts) and though I have not checked, it seems likely that some of my previous coverage of this topic on the blog will overlap with studies/results/distinctions mentioned here – I don’t mind that. I’m reasonably sure I’ve made the distinction between loneliness and social isolation clear before, but it’s important and a useful starting point for any discussion of these matters:
“loneliness is a subjective and negative experience, and the outcome of a cognitive evaluation of the match between the quantity and quality of existing relationships and relationship standards. The opposite of loneliness is belongingness or embeddedness. […] Social isolation concerns the objective characteristics of a situation and refers to the absence of relationships with other people. The central question is this: To what extent is he or she alone? […] Persons with a very small number of meaningful ties are, by definition, socially isolated. Loneliness is not directly connected to objective social isolation; the association is of a more complex nature. […] Socially isolated persons are not necessarily lonely, and lonely persons are not necessarily socially isolated”
“Using the [De Jong Gierveld loneliness scale (related link)] in self-administered questionnaires results in higher scale means than if the scale is used in face-to-face or telephone interviews […]. This finding is in line with Sudman and Bradburn’s (1974) observation that, compared with interviews, the more anonymous the setting in which self-administered surveys are completed, the more the results show self-disclosure and reduce the tendency of respondents to present themselves in a favorable light”
“A partner does not always provide protection against loneliness. Persons with a partner who is not their most supportive network member tend to be very lonely […]. Generally speaking, however, persons with a partner bond tend to be better protected from loneliness than persons without a partner bond […] Adult children are an important source of companionship, closeness, and sharing, particularly for those who live alone. […] Divorce often impairs the relationship between parents and children, especially in the case of fathers […] The low level of contact with adult children is the reason divorced fathers tend to be lonelier than divorced mothers [I’d probably at most have concluded that it is a reason, rather than being the reason]. […] Siblings serve a particularly important function in alleviating the loneliness of those who lack the intimate attachment of a partner and have no children […] best friends can step in and function as confidants and in doing so help alleviate emotional loneliness, in particular, for never partnered or childless adults […] Generally speaking, as the number of relationships in the social network increases and as the amount of emotional and social support exchanged increases, the intensity of loneliness decreases […]. The four closest ties in a person’s network provide the greatest degree of protection against loneliness. The protection provided by additional relationships is marginal […]. Diversity across relationship types also serves to protect against loneliness. People with networks composed of both strong and weak ties are less prone to loneliness than people with strong ties only […]. Moreover, research […] has shown that people with networks that consist primarily or entirely of kin ties are more vulnerable to loneliness than people with more heterogeneous networks. Those who are dependent on family members for social contacts because they lack alternatives tend to have the highest levels of loneliness.”
“Research has shown that over the course of time, men and women who have lost their partner by death start downplaying the advantages of having a partner and start upgrading the advantages of being single […]. In doing so, they free the way for other relationships. The less importance attached to having a partner, the less lonely the widowed were found to be. […] Feeling socially uncomfortable, fear of intimacy, being easily intimidated by others, being unable to communicate adequately to others and developmental deficits such as childhood neglect and abandonment are reported by lonely people as the main causes of their feelings of loneliness […]. Characteristics such as low self-esteem, shyness and low assertiveness can predispose people to loneliness and might also make it more difficult to recover from loneliness […] Loneliness is associated with a variety of measures of physical health. Those who are in poor health, whether this is measured objectively or subjectively, tend to report higher levels of loneliness […] The causal mechanisms underlying the association between loneliness and health are not well understood”
The next chapter deals with ‘Stress in Couples: The Process of Dyadic Coping’. Some observations from the chapter:
“Research has shown […] that couples are more likely to have fights at home when the husband has had a difficult day at work” (In other news, water is wet. But do romantic partners take this kind of stuff sufficiently into account when trying to figure out why they’re arguing with their partner?) A related observation is this: “Dyadic coping suffers when the demands of the stressor reduce the amount of time and attention that spouses devote to one another. Perceived neglect may lead to resentment.” (Note again: Perceived neglect. Attributions and cognitions matter a lot).
In the dyadic coping context, there are good ways and bad ways to deal with problems. Positive relationship-focused coping strategies mentioned in the coverage include empathy, support provision and compromise, whereas negative strategies include confronting, ignoring, blaming, and withdrawal. Some things may make it harder to handle problems the right way: “It is easier to cope constructively, in ways that foster a positive relationship climate, when the individual is not overwhelmed by situational demands and a lack of material and interpersonal resources.” This observation lends support to the notion that changing coping strategies may require significantly more than e.g. just pointing out that the coping strategy employed is suboptimal and that a different approach might be better; suboptimal coping may be employed at least partially for reasons outside the control of the individual.
“there may be two critical components to relationship maintenance in the context of severe stress. The first is preventing hurtful or counterproductive interaction patterns, such as pressuring one’s spouse to stop coping in a way that makes one uncomfortable (e.g., crying or expressing fear) or taking out one’s frustrations on the spouse. Some negative coping is probably inevitable, given the context of high stress. Thus, the second critical component may be forgiveness. Individuals who are facing the potential loss of physical functions, valued achievements, or the presence of pain or suffering in a loved one cannot always be empathic or even reasonable. An intervention model that combines awareness of appraisal processes, the implications of conflicting primary and secondary appraisals, and instruction on how to cope together rather than at cross-purposes may work best if it is tempered with the expectation that people will fail. Such failures can be normalized, rather than construed as major betrayals […] The most important contribution of interventions may be to educate people about the effects of stress on communication, the ability to express affection, and the capacity to process and react to information in a rational manner. It may be that our goal should not be “perfect dyadic coping” but multiple opportunities for redemption.” (“But do romantic partners take this kind of stuff sufficiently into account when trying to figure out why they’re arguing with their partner?” I asked above. It seems the authors believe that at least individuals in troubled relationships don’t.)
People in relationships lie to each other. In the previous coverage of the self-disclosure chapter I dealt briefly with some similar/related themes, but I didn’t go into much detail about lying – I’ll do this here, as the chapter ‘Lying and Deception in Close Relationships’ had some interesting stuff on the topic:
“the closeness of a relationship will affect the frequency of lying, the motivation for lying, the things lied about, the interactive manifestations of the lying process, the awareness of and desire to detect deception, the accuracy of that detection, the methods used to detect deception, and the consequences of the deception” (In other words, this stuff is complicated…)
“Rowatt, Cunningham, and Druen (1998) found both men and women saying they would be willing to lie about their intelligence, personal appearance, personality traits, income, past relationship outcomes, and career skills to a prospective date who was high in facial attractiveness. Even in get-acquainted conversations, participants are not averse to lying when they are asked to appear likable and/or competent […] it seems that lies are common, even expected, in the interactions that serve as a launching pad for close relationships [my bold, US]. […] In one survey of college students, 92% admitted to lying to a romantic partner about sexual issues […] Lies don’t cease when relationships are designated as “close,” however; they just decrease in frequency. DePaulo and Kashy (1998) found both community leaders and students reporting that they told fewer lies (relative to the total number of interactions) to those with whom they had closer relationships. Lies occurred about once in every 10 interactions in a broad range of close relationships that included spouses, best friends, family, children, nonspouse romantic partners, and mothers. However, it is worth noting that lying in close relationships does not seem to be equally low for all types of relationships. The closeness of the relationship is only one factor governing the frequency of lying. More lies, for instance, were reportedly told to mothers and nonspouse romantic partners. One in every three transactions with nonspouse romantic partners were reported to involve lying. Emotional closeness can be a powerful deterrent to lying in close relationships, and when lies do occur, they are often troubling for the liar. […] close relationships create an environment in which any given lie can take a huge toll on the degree of closeness felt.”
“Lies attributed to people in close relationships are not always lies. The dialogue associated with these attributions, however, may profoundly affect relationship closeness. […] People in close relationships are familiar with their partner’s communication style and do not expect their partner to lie to them. Nevertheless, one’s motives for not saying something a partner thought should have been said, forgetting something a partner thought should have been reported, or misunderstanding something a partner thought should have been understood are all subject to attributions of deception. Accusing close relationship partners of lying when they don’t believe they have can generate relationship-altering dialogue.”
“Close relationships […] are especially fertile ground for studying what Werth and Flaherty (1986) called collusion and Barnes (1994) called connivance. Connivance occurs when individuals in close relationships know they are being deceived by their partner, but deceptively act as if they didn’t know. […] Research typically focuses on lies of commission – false accounts, information, and stories that are invented by the liar. However, distinctions between lies of commission that invent a new reality for the target versus lies that involve secrets or simply allow the target to continue believing something false may be of special interest in close relationships […] Levinger and Senn (1967) found that concealing negative feelings, particularly about their mates, was far more characteristic of satisfied spouses than dissatisfied ones. Metts (1989) found spouses more likely to conceal information than to make deliberately false statements. […] One common reason for lying is to support and sustain our partners – to avoid hurting them, to tell them what they want to hear, to build and maintain their self-esteem, to help them accomplish their goals, and to show concern for their physical and mental states. Indeed, DePaulo and Kashy (1998) found what they called “altruistic” lies to be the most common type of lie told to friends and best friends. Lies told to close relationship partners are usually viewed by the lie teller as altruistically motivated, guilt inducing, spontaneous, justified by the situation, and/or provoked by the lie receiver […] More satisfied couples may […] tend to create a new partner reality through what Murray and Holmes (1996) called “positive illusions” – seeing virtues in their partner that aren’t there, turning faults into virtues, constructing excuses for misdeeds, and so on. What may begin as lies of support or as positive illusions may later, with the effects of self-persuasion and/or the self-fulfilling prophecy, be viewed as fact”
“Liars often view their own behavior as far less harmful, offensive, and consequential than the target of the lie. Liars often describe extenuating circumstances that they view as justification for their lie(s), but targets often do not share those views […] Liars feel especially good about lies that make their partner feel better […] Sometimes lies are not uncovered, but suspicion has been aroused to such an extent that trust in one’s partner is negatively affected. To the extent that the lie or lies told have powerful effects on the liar (e.g., guilt, anger, fear, embarrassment), these effects may manifest themselves in almost any dialogue with the liar’s partner. The target of the lie may wonder why, for no apparent reason, his or her partner seems so irritable over the slightest things. […] liars will sometimes denigrate and distrust the target, trying to make themselves feel better by believing that their partner is just like they are – that they lie too, that they invited the lie by being such an easy dupe, that they created a situation where they were going to get just as much punishment for telling the truth as for lying, and so on. […] Serious lies are often told to cover major transgressions of the relationship such as infidelity.”
“when Metts (1989) asked people to describe a time when they didn’t tell their relationship partner the whole truth, over a third of them mentioned deceptions involving emotional information – for example, feelings of love and commitment.”
“Several studies portray the consequences of deception to be more substantial for women than men. […] Women […] seem to view deception as more unacceptable than men, see it as a more significant relational event, and react more strongly to its discovery. They report being more distressed and anxious than men on the discovery that their partner in a close relationship has lied to them […] and more tearful and apologetic than men for the serious lies they tell. They may also maintain their bitterness about the transgression for a longer period of time than men”
“we know relatively little about any kind of lie that has positive effects and any kind of truth that has negative effects in close relationships. Most surveys find that truth telling is considered a necessary feature in establishing and maintaining a close relationship [On the other hand “it seems that lies are common, even expected, in the interactions that serve as a launching pad for close relationships” […] “92% admitted to lying to romantic partners about sexual issues.” This stuff is obviously complicated], but most people in those same surveys are willing to admit that lying may play a worthwhile role in close relationships [so this makes sense]. It is possible, of course, that lies that provide the bonding elements for close relationships in everyday dialogue may not even be thought of as lies by the relationship partners.”
Finally, a few observations from the chapter on ‘Temptation and Threat: Extradyadic Relations and Jealousy’:
“whereas about 30% of Asian Americans feel that violence is justified in case of a wife’s sexual infidelity […], among Arab American immigrants 48% of the women and 23% of the men approve of a man slapping a sexually unfaithful wife, with 18% of the women even approving a man killing his wife if she were to have an affair (Kulwicki & Miller, 1999). In general, attitudes toward infidelity are more permissive among younger individuals, among the better educated and those from the upper middle class, among persons who are less religious, among those living in urban areas, and among those holding liberal political orientations” [semi-related and more recent data here and here – I post these links partly because they’re ‘semi-relevant’ in this context, but also partly because I haven’t commented on the Charlie Hebdo attack here on the blog and I thought this was a good place to add a little bit of data to that discussion in case people reading along here want it. Another ‘Hebdo-related observation’ is that Statistics Denmark concluded a few years back, on the basis of a (very large) survey (the sample size, n = 2,792, was much larger than what is usually required for a Danish sample to be considered ‘representative’), that half of the Danish immigrants and descendants from Muslim countries (Danish link) were in favour of making it illegal for movies and books to ‘attack Islam’ (and no, the support for such a legal restriction on freedom of speech was not lower among descendants).]
“There is some evidence that individuals who engage in extradyadic sex are relatively often characterized by lower levels of wellbeing and mental health […], and this seems to apply in particular to women […] Especially wives low in conscientiousness, high in narcissism, and high in psychoticism […] or suffering from a histrionic personality disorder […] seem to be inclined to be unfaithful. They may do so because, in part, these personality characteristics reflect insecure attachment styles […]. Indeed, some studies have found that especially among women, an anxious–ambivalent attachment style is associated with a tendency to be unfaithful. […] extradyadic sex is more prevalent among individuals with a positive attitude toward sexuality. […] Several studies have found that, particularly among men, adultery often stems from feelings of sexual deprivation in the primary relationship. In contrast, among women emotional dissatisfaction with the relationship has been found to be related to adultery [I did not find this surprising, but it’s probably worth keeping in mind] […] lowered satisfaction, as well as lowered commitment have also been found to be important determinants of extradyadic sexual involvement or of the willingness to be involved in an extradyadic relationship […] In addition, there is evidence that […] extradyadic sex may be particularly likely to occur in relationships characterized by low dependency.”
“Most extradyadic relationships are kept secret from the primary partner, and even when this partner gets obvious clues that the other partner may be having an affair, such clues are often denied because the offended partners may not want to know they are being cheated on. […] In general, it seems that extradyadic sexual relationships will particularly likely lead to a divorce when they stem primarily from dissatisfaction with the primary relationship with the affair being a consequence rather than a cause of relational problems […]. There is evidence that even when the spouse accepts the extradyadic sexual involvement such as in sexually open marriages, relational and sexual satisfaction decreases substantially over time”
“several studies have found lowered self-esteem and increased jealousy to be related […] a substantial number of studies [have] found that, particularly among women, jealousy is related to low self-esteem […] for jealousy to occur, a rival is a necessary and defining condition. Overall, a rival who possesses qualities that are believed to be important to the opposite sex or to one’s partner tends to evoke more feelings of jealousy than a rival who does not possess those qualities […] individuals tend to report more jealousy as their rivals possess more self-relevant attributes, such as intelligence, popularity, athleticism, and certain professional skills […] Because jealousy is evoked by those characteristics that contribute most to the rival’s value as a partner, one would, from an evolutionary–psychological perspective, expect women to feel more jealous than men when their rival is physically attractive and men to feel more jealous than women when their rival possesses status-related characteristics. Several studies have found support for this hypothesis […] there is increasing evidence that, rather than evoking merely upset, sexual and emotional jealousy evoke different emotional responses. In general, emotional infidelity is more likely to evoke feelings of insecurity and threat whereas sexual infidelity is more likely to evoke feelings of betrayal, anger, and repulsion […] A recurrent finding is that, in response to a jealousy-evoking event, women in particular have the tendency to think that they are “not good enough.” […] many more men than women commit homicides out of jealousy […]. 
However, studies that have asked participants what they would do if a jealousy-evoking event would occur consistently show that women in particular are inclined to endorse aggressive action against their rival […] Possible explanations for this discrepancy are that women are more likely than men to admit intentions of violence toward their rival, women are less likely than men to convert their violent intentions into actual behavior, and, although women may physically injure their rivals, they do not kill them, whereas men do.”
Here’s my first post about the book. In this post I’ll talk some more about the findings described in the book; I have included more quotes from the book in this post than I did in the first post because it’s very time-consuming to blog books the way I’ve tried to do over the last week, and I’ll never cover anywhere near the amount of material I’d like to cover if I limit myself to doing things that way.
“experiments by Yukl (1974) traced the location of demands, goals, and limits over time. His data suggest that at the early stages of negotiation, demands are placed well in advance of goals and limits (called overbidding, which may reflect negotiator efforts to create an image of firmness). But over time, overbidding diminishes, and demands come close to or identical with goals. Goals, in turn, tend to approach limits, as wishful thinking becomes eroded. The upshot of these trends is that limits are usually the most stable and demands the least stable of demands, goals, and limits.
A large body of work suggests that the impact of goals on negotiation is similar to that of limits. Higher goals produce higher demands, smaller concessions, and slower agreements; because higher goals produce higher demands, they lead to larger profits if agreement is reached […] There is some evidence that an “anchoring-and-insufficient-adjustment” (Tversky & Kahneman, 1974) process may mediate the impact of goals on negotiation. […] anchoring effects hold for individuals as well as groups of negotiators, and generalize across novices (e.g., students) and professionals (Whyte & Sebenius, 1997).”
The book talks a bit about the dual concern model – see the wiki for a general description. The model is part of a literature which has focused on how the extent to which individuals care about the outcomes of others relates to their behaviours and outcomes during negotiations. Here’s a relevant quote:
“In a meta-analysis, De Dreu, Weingart, and Kwon (2000b) tested the effects of (a) individual differences (mainly social value orientation), (b) incentives, (c) instructions, and (d) implicit cues (group membership, future interaction, friend vs. stranger) on joint outcomes. For all four, effect sizes indicated that pro-social negotiators achieved higher joint outcomes than selfish negotiators and, importantly, there were no differences between the different operationalizations of social motive. This latter finding indicates a functional equivalence of various ways of implementing social motivation in negotiation.”
More specific findings in this area are probably more questionable than the meta-analytical results, but it’s for example noted in this part of the coverage that one study found that people who were more concerned about the outcomes of others (‘pro-social’) during negotiations saw their negotiation partners as more fair and trustworthy, made greater concessions, made more generous starting offers, and yet ended up doing better in the final agreement. Another study found that pro-social individuals were more likely to correctly recall joint-gain task features, whereas people who cared less about the outcomes of others more accurately recalled self-gain features. I would caution that the meta-review finding described above – that the differences in negotiation outcomes between groups of pro-social and selfish negotiators seem to be similar regardless of how the pro-social behaviour comes about – does not necessarily imply that other features of the negotiation process will be identical across implementation strategies.
People who enter negotiations like to look strong and project an image of toughness, and: “there is evidence that conditions that help negotiators maintain this sense of strength, yet allow them to make concessions, such as the help of a third party, increase the likelihood of agreement”.
“As for the literature on bargaining games, the base finding concerning learning is that inexperienced players often do not know how to maximize their value, but often improve across trials.” However it’s important to note that learning does not always lead to better outcomes: “one’s negotiation experience may also lead to less value creation. There are at least three kinds of reasons. First, prior experience with a skewed sample of negotiation situations (e.g., pure distributive haggling) may lead people to assume that value creation is not possible—and assuming it is not possible should make it less likely to occur […]. Second, prior experience with one kind of counterpart may make it difficult to create value with other kinds of counterparts because of misunderstandings […] Third, O’Connor and Arnold (2001) found that after an impasse, relative to those not experiencing an impasse, negotiators were less interested in working with their counterparts again, planned to share less information in the future, planned to behave less cooperatively, and felt that negotiation was a less effective means for resolving conflicts. And people appear to follow through with these behavioral intentions […], creating a self-reinforcing cycle of poor negotiated outcomes. In short, prior experience can yield lessons that are counterproductive.”
“broadly, there are two important conclusions […] concerning expertise and decision biases. First, practitioners with years of experience nonetheless show decision biases […]. Second, decision biases are surely more consequential for practitioners than novices because the former make more consequential decisions. […] there [has been] little attempt to define what constitutes expertise in decision making. For example, in the Northcraft and Neale (1987) study, the expert population was real estate agents with about 9 years of experience, engaging in about 16 transactions per year. This is only 144 total transactions — by contrast, Chase and Simon (1973) estimated that chess experts knew at least 50,000 chess patterns. Further, a given transaction does not provide all that much clear feedback, and there is little in the way of formal training to serve as a substitute. This is typical for negotiation as well — feedback is typically poor and subject to misinterpretation […], and formal training rarely exceeds a course or two. Impoverished training and poor feedback are troublesome because people’s intuitions are not particularly well honed for effective negotiation […], and expertise appears to require planning, monitoring, analyzing, and reflecting on one’s practice, not merely repeated performance […] trial and error is likely to be inefficient, can lead to misunderstandings, and is in itself a poor means of fostering reflection and reframing, given the paucity of information feedback alone provides. […] negotiators may more readily remember their interpretations about their negotiation than the perceptions on which the interpretations were based. This impedes learning, as the more one abstracts away from what happened, the more one is assimilating the experience to old categories (Argyris & Schon, 1996).”
“Most negotiators are neither novices nor experts. They have negotiated repeatedly in a particular setting […] For this reason, negotiators likely use situated, not general, concepts of negotiation […] Because negotiations occur in a wide variety of settings, people’s knowledge about negotiating is fragmented across situations […]. Not only are sophisticated negotiation strategies artificially limited in scope, people are also limited because they do not think of their actions in a variety of settings as all being negotiations. […] One result is that people’s assumptions about what a negotiation is are shaped by the limited situations they actually think about as being negotiations. For example, people may fail to claim value because they do not realize a situation is a negotiation and simply agree to another’s proposal (Babcock, Gelfand, Small, & Stayn, 2002).”
“Taken together, the articles reviewed here demonstrate that negotiators evaluate their own performance based both on their prenegotiation expectations and their perceptions of how opponents and similar (by role) others performed. […] Rationally, it should not matter how one’s opponent fares in a negotiation if one’s own interests are met, however, the empirical evidence reviewed above indicates that negotiators, quite irrationally, weigh their own success in negotiation against the perceived success of others.”
“Results indicate that a basis for a relationship (in-group status or self-disclosure) elicits positive affect within the negotiation, enhancing rapport and diminishing the likelihood of impasse. […] even trivial relationships facilitate rapport among negotiators, which may result in more cooperative behavior and better outcomes. […] The use of emotion-behaviors as tactical maneuvers in negotiation is often commented on and more frequently implied in discussion of emotion in negotiation. We are aware of just one published empirical paper (an edited volume chapter) that explicitly addressed the topic. Barry (1999) found that negotiators perceive the falsification of emotion (e.g., feigning pleasure, shock) more ethically appropriate than other deceptive negotiation tactics. Findings also suggest that negotiators differentiate between the strategic use of positive versus negative emotions, where feigning a negative emotion was considered less ethically appropriate than feigning a positive emotion.”
“An extensive literature has explored the effects of experienced mood or emotion on cognitive processes such as memory, information processing, and judgment. Several general patterns emerge from this research and may be summarized as follows […]: Affective experiences impact memory encoding and later recall; emotional impressions are often remembered more vividly than other details of social encounters. In addition, mood state during recall biases memory retrieval. Generally, happy (sad) memories are recalled when people are in positive (negative) moods. Information processing also seems to be guided by mood state; individuals in a given mood often seek out and pay greater attention to information that is congruent with that mood. Other research finds an increase in creativity and flexible problem solving when people are happy. […] In addition to the actual variability and stability of emotion over time, one’s predictions about affect in response to future events — “affective forecasting” — is likely to be important for decision making in negotiation. Gilbert, Wilson, and colleagues find consistent support for the theory that people overestimate the intensity and duration of their future emotional reactions to a particular event […] Bazerman
and Chugh […] link these heightened expectations to a broader tendency on the part of negotiators and others to overfocus on a narrow subset of information.”
“Empirical evidence confirms what our everyday experience tells us: some people are more emotionally expressive than others. For example, women perceive themselves to be more skilled at expressing emotion, where men view themselves as more skilled at controlling emotion […]. And indeed, women do tend to be more facially expressive of emotions than men […]. Men, on the other hand, report hiding their emotions more than women […] Emotional suppression is physically challenging for the suppressor, resulting in increases in cardiovascular activation and blood pressure […]. Suppression is also cognitively demanding, negatively impacting memory […] Particularly relevant for negotiation is research suggesting that emotional suppression inhibits relationship formation.”
“The book covers a lot of the research that exists in this field, but the research that has been done is in my opinion not very impressive. Another general problem I have with the book is that the authors much too rarely comment upon methodological issues in the research they cover – they’ll frequently make grand conclusions based on just a few studies (often just one or two papers), the validity of which is completely unknown(/unknowable) given the coverage. A simple word search for ‘sample size’ in this book will yield you exactly zero hits. Occasionally big and wide-reaching conclusions (promoting further theorizing later on in the coverage) will be drawn from data/research which frankly quite obviously provides completely insufficient support for the conclusion being drawn.
Even when you feel inclined to trust the results of the research you’ll often feel that it’s not really telling you very much, because the authors mostly frame the findings in terms of ‘it has been found that there’s an effect’, rather than ‘the effect size was…’. Where effect sizes are mentioned, sample sizes aren’t (and confidence intervals aren’t mentioned either), so it’s really hard to critically evaluate the coverage and the research results included without actually reading the papers in the references; which in a way makes reading the book a bit superfluous – why would you read a book if you need to read the papers the book ‘covers’ anyway in order to figure out if you can trust what the authors are telling you in the book?
There’s some useful content in there, it’s not like this stuff is all totally useless – some of the specific observations included are quite likely sound and important/useful to know about. Some chapters are better than others. But especially the second half of this book was a disappointing read.”
In the post below I’ll talk a little bit about some of the more specific ideas/research they cover, but before I do that I’ll note that there’s a lot of stuff they do not cover; specifically they talk very little about bargaining models from microeconomics, mechanism design, contract theory and related stuff. It’s not that kind of book – the book deals with the psychology of negotiation. There are some economic concepts included, you can’t write a book like this without them, but it’s occasionally quite obvious that some of the researchers are not particularly familiar with even reasonably simple ‘formal models’ in micro (such as those you might encounter at the advanced undergraduate level or early graduate level as a student of economics); and that this unfamiliarity might have led them to overlook some important factors/ideas, in particular in the context of the role of uncertainty and risk in negotiations. At least in a couple of situations I found it somewhat strange that parallels to micro models and what they might have to say about this stuff were not made. But anyway, the book applies an almost purely behavioural economics/psychological approach to the topic at hand. This also means that if you’ve read along for a while here on this blog or you are familiar with this kind of research, some of the specific findings might not be that surprising to you. I’ve included them anyway, but as implied by the goodreads review above I maintain a skeptical mind – I’m quite open to the idea that some of the results in this book are simply wrong.
It’s noted early on that in two-party negotiations, negotiators tend to be more concessionary in positively-framed negotiations than in negatively-framed ones. They tend to be inappropriately affected by anchors (which among other things means that the person presenting the first offer in a negotiation tends to have an advantage over the other party), and they tend to be overconfident and overly optimistic about outcomes that favour themselves. They often frame the negotiation in terms of a fixed pie even though the size of the pie is often partly endogenous: people tend to miss out in negotiations because they assume the other guy’s win is their loss, even though in many contexts the choice set includes deals which benefit both parties, because the negotiators place different valuations on specific variables. People have been shown to make the ‘fixed pie’ assumption in studies where by design the pie was not fixed. Many studies have shown that people often escalate conflict when a rational analysis would suggest changing strategy instead, and that they (very surprisingly!) interpret disputes in ways that favour themselves.
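The endogenous-pie point can be made concrete with a toy calculation. All the numbers below are hypothetical (they’re not from the book); the sketch just shows how, when two parties weight two issues differently, trading concessions on the issue each cares less about (‘logrolling’) beats a naive 50/50 compromise for both sides:

```python
# Toy two-issue negotiation (hypothetical numbers, not from the book).
# Each function returns a party's value for a deal, where a and b are the
# buyer's shares of issues A and B, each between 0 and 1.

def buyer_value(a, b):
    return 70 * a + 30 * b            # buyer cares mostly about issue A

def seller_value(a, b):
    return 30 * (1 - a) + 70 * (1 - b)  # seller cares mostly about issue B

# Naive 'fixed pie' thinking: split the difference on both issues.
naive = (buyer_value(0.5, 0.5), seller_value(0.5, 0.5))

# Integrative deal: buyer gets most of A, seller gets most of B.
logroll = (buyer_value(0.9, 0.1), seller_value(0.9, 0.1))

print(f"50/50 compromise: buyer={naive[0]:.0f}, seller={naive[1]:.0f}, "
      f"joint={sum(naive):.0f}")
print(f"logrolling:       buyer={logroll[0]:.0f}, seller={logroll[1]:.0f}, "
      f"joint={sum(logroll):.0f}")
```

Both parties do strictly better under the logrolling deal, which is exactly the kind of value creation the ‘fixed pie’ assumption makes people miss.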
Positive mood may increase the tendency of a negotiator to select a cooperative negotiating strategy, which may lead to better negotiating outcomes. I’m more uncertain about this one than about ‘the opposite’ result: that anger leads to poorer outcomes. Angry negotiators have been shown to be more likely to reject offers in ultimatum games, and anger seems to impair memory so that angry negotiators tend to forget details from negotiations. Angry people have also been shown to be less accurate when assessing the interests of the other party and to be more self-centered during negotiations. One study indicated that joint gains in negotiations involving angry people are lower than in negotiations involving controls.
Negotiations are complex interactions with a lot of uncertainty, and standard ‘cognitive miser’ models of human behaviour tell us that in such contexts we tend to cut down on uncertainty and simplify matters to ease decision-making and resolve tension associated with uncertainty e.g. by making use of cognitively available heuristics of various kinds, which may then influence negotiations in foreseeable ways. Like in other contexts, specific variables will affect the likelihood of people defaulting to heuristic reasoning; three specific variables are mentioned in this context in the book, the need for closure variable which I’ve talked about before, time pressure (less time for mental processing -> higher likelihood of relying on heuristics), and accuracy motivation (“Individuals differ in their accuracy motivation — or their desire to form accurate, rather than biased, judgments […] In general, the higher one’s level of accuracy motivation, the greater the likelihood that one will engage in systematic and thoughtful information processing […] Accuracy motivation has also been examined in negotiation contexts and has been found to reduce negotiators’ reliance on simple heuristics and improve the accuracy of their perceptions”).
What’s in focus for us during a negotiation matters a great deal; when we focus on something specific we tend to have a hard time attending to other things as well, because our attention span is limited. A few reviews specifically make the point in this context that “people often act on a limited set of data that prompts an affective response that cuts off cognitive deliberation.” It makes sense to believe that what’s most likely to come into focus in this context is stuff which is easily available to us; many predictions follow from this idea, some of which have been tested and are talked about elsewhere in the book. I think the authors are making a point similar to one made by a few of the authors in Yzerbyt et al., and popularized by people like Kahneman in books like Thinking, Fast and Slow (I skimmed the first 100 pages of that book a while back, before concluding that reading it would not be worth my time): that it may (always? often? in some contexts?) be a mistake to conceive of the primary effect of affect on decision-making as affect merely influencing the decisive cognitive processes; instead, affect may itself decide which decisions are made, more or less bypassing cognitive processes completely (we might rationalize decisions later, but such rationalizations may have little to nothing to do with why the decisions were made). All kinds of stuff gets ignored when people try to cut down on the level of complexity and uncertainty they face during negotiations, and “[s]ubstantial research on the Acquiring a Company problem suggests that bounded awareness leads decision makers to ignore or simplify the cognitions of opposing parties as well as the rules of the game”. The choice set is also affected.
It’s cognitively expensive to consider what your opponent might be thinking, and it’s cognitively expensive to come up with and explore/analyze new/additional ideas for how to solve the problem at hand, so when stuff gets complicated you focus on yourself and your preferences, and you cut down on the amount of work that goes into figuring out why your opponent thinks the way he does and the number of alternatives you’re willing to consider.
When considering the potential deals that might result from a negotiation, people will usually consider a range of outcomes to be acceptable. At the top of that range you have the aspiration price and at the bottom you have the reservation price; a related important concept to be familiar with is the BATNA (Best Alternative To a Negotiated Agreement). It matters whether you focus on the upper range or the lower range of acceptable outcomes when you negotiate, and the authors argue that given the level of complexity and uncertainty involved in negotiations, people will tend to focus either on one or the other: “In general, negotiators achieve better outcomes for themselves when they focus on their aspiration price than when they focus on their reservation price”. A related point made here is that: “The more difficult the goal a negotiator is trying to achieve, the better the outcome obtained from the negotiation”. Presumably there are some relevant constraints at play here which they don’t talk about in the text; if you make unrealistic offers, it seems likely that this sort of behaviour would increase the likelihood that the other party walks away, leading to a poor outcome.
Although (or perhaps because?) they don’t talk about this in the book, I thought I should caution that I’m not convinced the above comments show that risk-averse individuals who focus on the reservation price rather than the aspiration price are failing to optimize (‘are leaving money on the table’), or that a risk-averse individual will necessarily ‘do better’ by focusing on the aspiration price; the negotiation strategies people employ presumably in part reflect their risk profiles, and if you’re more risk averse than another individual you may well prefer an outcome with a lower expected value. The impression I got from reading the chapter in question was that most readers would be very tempted to conclude from the coverage that it’s always better to focus on the aspiration price during negotiations; I’m not sure they’ve actually shown that this is the case. There may be some reasons for assuming it is not; for example they note later in the chapter that people who achieve high outcomes during a negotiation have in multiple studies been found to be less satisfied with their outcomes than people who achieve worse outcomes – so the relationship between subjective and objective outcomes is perhaps different from what people might expect it to be. Of course how important such concerns are depends upon the context of the negotiation; in organizations you’ll often want people to optimize expected value, and (competent) owners will generally try to give people incentives to do that, but when people negotiate in other contexts other concerns may be more important. That said, a general point probably worth keeping in mind is that tradeoffs like these exist and influence outcomes. It’s easier to focus on your aspiration price (and get the other guy to focus on your aspiration price) if you’re the one to make the first offer; as already mentioned, people who make the initial offer in a negotiation do better than people who don’t.
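The point about risk profiles can be illustrated with a simple expected-utility sketch. All numbers are hypothetical, purely for illustration: the ‘aspiration-focused’ strategy has the higher expected monetary value, yet a risk-averse negotiator (modelled here with a concave square-root utility function, a standard textbook choice) rationally prefers the safe, reservation-price deal:

```python
import math

# Toy sketch (hypothetical numbers). Strategy 1: focus on the reservation
# price and take a safe deal worth 100 for sure. Strategy 2: hold out for
# the aspiration price -> deal worth 250 with probability 0.5, impasse
# (worth 0) otherwise. Strategy 2 has the higher expected monetary value.

def utility(x):
    return math.sqrt(x)  # concave utility models risk aversion

safe = 100
p, high = 0.5, 250

ev_safe, ev_risky = safe, p * high                      # 100 vs 125
eu_safe = utility(safe)                                 # utility of the sure deal
eu_risky = p * utility(high) + (1 - p) * utility(0)     # expected utility of gamble

print(f"Expected value:   safe={ev_safe}, risky={ev_risky}")        # risky wins
print(f"Expected utility: safe={eu_safe:.2f}, risky={eu_risky:.2f}")  # safe wins
```

So ‘lower expected value’ and ‘leaving money on the table’ are not the same thing once risk preferences enter the picture, which is the caveat I think the chapter should have spelled out.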
But they tend to be less happy about the outcome, and people who make the initial offer tend to be particularly dissatisfied with the result of the negotiation when their initial offer is immediately accepted by the other party.
Behavioural differences in negotiation contexts are, it must be noted, not all dispositional in nature; situational factors may also affect preferences and behaviour in negotiation contexts in all kinds of ways. And people who negotiate will often neglect to take situational factors into account when making attributions during negotiations – at least in Western societies; a later chapter argues that people in East Asian societies are more likely than Westerners to explain the behaviour of negotiation partners in terms of situational, rather than dispositional, factors.
Okay, so like most other people you have the impression that how the other party behaves during a negotiation may affect how you behave during the negotiation. But are your ideas about how the other party’s behaviour might influence you correct? People have looked at this stuff, and here’s a relevant quote: “Research by Diekmann and her colleagues […] suggests that negotiators are aware that the behaviors of their counterparts will influence their negotiation behaviors, but that they are not always accurate in forecasting how these counterpart behaviors will affect them. In a series of studies, these researchers found that negotiators expected that they would behave more competitively when negotiating against a competitive opponent than a cooperative opponent. However, in actual negotiations, this did not occur.”
When I started out writing this post I did not intend to write more than one post about the book, but I see now that this will certainly be necessary if I’m to cover all the stuff I’d ideally like to cover. In case I don’t end up covering the rest of the chapters, you should note that there’s a lot of stuff in the book which I did not get to talk about here.