“The student of medicine has to learn both the ‘bottom up’ approach of constructing a differential diagnosis from individual clinical findings, and the ‘top down’ approach of learning the key features pertaining to a particular diagnosis. In this textbook we have integrated both approaches into a coherent working framework that will assist the reader in preparing for academic and professional examinations, and in every day practice. […] We have split this textbook into three sections. The first section introduces the basic skills underpinning much of what follows – how to take a history and perform an examination, how to devise a differential diagnosis and select appropriate investigations, and how to record your findings in the case notes and present cases on ward rounds. The second section takes a systems-based approach to history taking and examining patients, and also includes information on relevant diagnostic tests and common diagnoses for each system. Each chapter begins with the individual ‘building blocks’ of the history and examination, and ends by drawing these elements together into relevant diagnoses. […] The third and final section of the book covers ‘special situations’, including the assessment of the newborn, infants and children, the acutely ill patient, the patient with impaired consciousness, the older patient and death and the dying patient.”
The above quote is from the preface of the book. This is a medical textbook of 500 pages and 26 chapters, written by 27 contributors, so it covers a lot of ground; I’ve been conflicted about how to blog it for this reason. It contains a lot of material which is useful to know but which most people don’t, and I think it’s the sort of book I might be tempted to ‘consult’ later on; the various 100 Cases… books I’ve read include some similar useful observations, but I think it’d be more natural to consult this book first, because it’s much more likely that this book will at least have something about the medical condition you’re curious about or can’t remember the details of. I think it was somewhat easier to read than McPhee et al., and I’m not sure this is only because I read McPhee et al. first (while reading McPhee et al. I was learning part of the vocabulary needed to read this book).
In the coverage below I have not talked about the material included in the first part; I don’t need to be able to, e.g., take a medical history and navigate medical records, and if some of my readers do, I’ll assume they already have the necessary skills or know where and how to obtain them. In this post I’ll focus on the major systems covered in part two, concentrating on ‘key variables’ and, well, ‘stuff I found interesting’ – which also means that I won’t talk about things like ‘this is how you palpate a liver’ and ‘this is how you grade heart murmurs’ (the book also covers that kind of material in some detail). Nor will I tell you what Buerger’s test or Trendelenburg’s test are used for, or give you a full account of the many, many different types of ‘named medical signs’ included and described in the book (Charcot’s triad, Cullen’s sign, Grey Turner’s sign, Murphy’s sign, Courvoisier’s sign, Kussmaul’s sign, Levine’s sign, etc.).
I may in my coverage of this book tend to focus more on acute conditions than on chronic conditions, in part because it seems more useful to me to know/remember whether or not someone is, say, having a heart attack than whether or not someone with chronic kidney failure will be bothered by pitting edema. I think this approach makes sense.
The book splits the systems coverage in part 2 into 15 chapters – there are specific chapters about: *the cardiovascular system, *the respiratory system, *the gastrointestinal system, *the renal system, *the genitourinary system, *the nervous system, *psychiatric assessment, *the musculoskeletal system, *the endocrine system, *the breast, *the haematological system, *skin, nails and hair, *the eye, *ear, nose and throat, and *infectious and tropical diseases. Most of the book is devoted to this treatment of individual systems, as these 15 chapters make up roughly 350 pages of the total. I found it interesting that there was close to zero overlap between the coverage in this book and Newman and Kohn’s text; I’m not quite sure what to think about that.
In this post I’ll mostly talk about the first three ‘systems’ chapters. When dealing with cardiovascular disease, the major symptoms are chest discomfort, breathlessness, palpitation (an awareness of the heartbeat), dizziness and syncope (‘transient loss of consciousness resulting from transient global cerebral hypoperfusion’), and peripheral oedema (usually ankle swelling, most often associated with heart failure, and often worse in the evening). An important observation is that myocardial ischemia (‘the heart muscle doesn’t get enough blood/oxygen’) can cause breathlessness as well as chest discomfort, and “in many cases breathlessness is the predominant symptom (particularly in women).” Deep vein thrombosis can be asymptomatic, but it commonly causes pain and swelling in the affected leg – the main acute risk associated with the condition (which is not particularly rare among elderly people) is that the blood clot travels to the lungs and causes a pulmonary embolism.
Next, the respiratory system: “respiratory conditions are common – accounting for more than 13 per cent of all emergency admissions and more than 20 per cent of general practitioner consultations”. I was very surprised the numbers were that high! I can’t provide a source, as the authors did not provide one; there are no inline citations in this book, which is part of the reason why I did not give the book five stars on goodreads. Six key symptoms of respiratory disease are chest pain (which may be extended to chest sensations more generally), dyspnoea (shortness of breath/breathlessness), cough (“the commonest symptom that is associated with pure respiratory disease”), wheeze, sputum production, and haemoptysis (coughing up blood/blood in the sputum – this is, perhaps unsurprisingly, often, but not always, a ‘red flag symptom’: “Current recommendations indicate that urgent referral to a hospital clinic should be made when patients have haemoptysis, are over the age of 40, and are current or ex-smokers. However, a young patient who has a small amount of streak (lines in sputum) haemoptysis in the context of an upper respiratory tract infection usually will not require referral”).
In respiratory medicine, cough duration is an important variable in the diagnostic context; I was surprised that even simple respiratory tract infections may cause cough for up to three weeks, and that this is not necessarily something to worry about. If a cough lasts longer than that, however, it’s less likely to be due to a self-limiting condition and more likely to be due to either lung cancer or one of the many causes of chronic cough (a cough is not chronic until it has lasted longer than 6 months) – these causes include, but are not limited to, asthma, COPD, and GERD. As should be clear from the above, both heart and lung conditions may cause shortness of breath, so you can’t always conclude that shortness of breath is a lung issue. This is of course far from the only symptom which may present in different disease contexts, and the heart and lungs are connected in other ways as well; for example, problems in both systems may cause clubbing. When dealing with a case of pneumonia it’s useful to be familiar with the CURB-65 score to assess risk/severity. Lung cancer can be either ‘non-small cell’ or ‘small cell’ lung cancer – in terms of presenting symptoms they’re reasonably similar, but the latter is more often associated with paraneoplastic syndromes (though these are still rare in an absolute sense, presenting in 5% of small cell lung cancers and 1% of non-small cell lung cancers, according to the book).
The most common symptom of lung cancer is a cough, followed by persistent ‘chest infections’ (which are of course not infections) and bloody sputum/coughing up blood – but “some patients have remarkably few signs.” In the context of acute conditions affecting the lungs, pleuritic chest pain is an important symptom; this means pain which is made worse by breathing and which often has a sharp, stabbing quality to it – acute-onset pleuritic chest pain can be due to a pulmonary embolism (60% of patients with PE have acute-onset pleuritic chest pain; in another 25% there is a sudden onset of acute breathlessness) or a pneumothorax (‘collapsed lung’ – which may also cause acute breathlessness). Although the two conditions are different, if you have either of them you want to get to a hospital, fast – sudden-onset pleuritic chest pain seems to me a very good reason to call for an ambulance or visit the local emergency department.
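The CURB-65 score mentioned above is a simple additive score for assessing pneumonia severity. The book’s own presentation of the criteria isn’t quoted here, so the sketch below uses the commonly published cut-offs (one point each for Confusion, Urea > 7 mmol/L, Respiratory rate ≥ 30/min, low Blood pressure, and age ≥ 65); treat the exact thresholds and risk bands as illustrative, not as the book’s wording:

```python
def curb65(confusion, urea_mmol_l, resp_rate, systolic_bp, diastolic_bp, age):
    """CURB-65 pneumonia severity score (0-5), one point per criterion.

    Thresholds are the commonly published ones, not taken from the book.
    """
    score = 0
    score += bool(confusion)                          # new-onset Confusion
    score += urea_mmol_l > 7                          # Urea > 7 mmol/L
    score += resp_rate >= 30                          # Respiratory rate >= 30/min
    score += systolic_bp < 90 or diastolic_bp <= 60   # low Blood pressure
    score += age >= 65                                # age 65 or over
    return score

def risk_band(score):
    # Commonly quoted (illustrative) management bands for the total score.
    if score <= 1:
        return "low risk (often manageable as an outpatient)"
    if score == 2:
        return "intermediate risk (consider hospital assessment)"
    return "high risk (hospitalize; consider intensive care assessment)"
```

For example, a confused 70-year-old with a urea of 8 mmol/L, a respiratory rate of 32 and a blood pressure of 85/55 scores the full 5 points.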
“The gastrointestinal system includes the alimentary tract from mouth to anus, the liver, hepatobiliary structures including the gallbladder, pancreas and the biliary and pancreatic ductal systems.” This is a big system. And it’s often hard to get a good look at what the problem is: “Almost half of gastrointestinal problems are not associated with physical signs or positive test results. Hence, the diagnosis and management is often based entirely on the inferences drawn from a patient’s symptoms.” Difficulty swallowing is a ‘red flag’ symptom, because “many patients with this symptom will have clinically significant pathology.” Weight loss combined with worsening difficulty swallowing (solids first, liquids later) means that oesophageal cancer is likely to be the cause (this one has a really bad prognosis). A useful observation when it comes to distinguishing between angina (‘heart issue’) and heartburn (‘gastrointestinal issue’), which may cause somewhat similar symptoms, is that whereas angina is often worsened by physical exertion, heartburn is not, and often occurs at rest. It’s worth noting that when dealing with gastrointestinal disorders, you can learn a lot by figuring out where exactly the pain is coming from – stomach pain isn’t just stomach pain. Pain localized to one specific section of the abdomen is much more likely to be due to condition X than condition Y (e.g., pain in the right upper quadrant suggests e.g. biliary obstruction or hepatomegaly; pain in the left lower quadrant suggests e.g. diverticulitis or infectious colitis). This may not be particularly useful for people in general to know, but I thought it was interesting.
Duration of pain is a key variable: “Sudden onset of well-localized severe pain is likely to be due to catastrophic events [and] [p]ain present for weeks to months is often less life-threatening than pain presenting within hours of symptom onset.” The authors point out that the severity of abdominal pain can be underestimated in elderly people, very young patients, people who are immunosuppressed, and diabetics (the latter presumably due to autonomic/diabetes-associated enteric neuropathy). “Presence of blood in the stool points towards either inflammatory bowel disease or malignancy, but in those with infective diarrhoea it is highly specific for infections with an invasive organism.” The authors mention a few pointers to specific nutritional deficiencies which are probably useful to know about – iron deficiency may cause a flat angle or ‘spooning’ of the nails, and it may also (together with vitamin B12 deficiency) cause soreness/redness of the tongue. Redness and cracks at the angles of the mouth are also associated with deficiencies of iron and vitamin B12, as well as deficiencies of riboflavin and folate.
This post will be brief but I thought that since it’s been a while since I last posted anything and since I just finished reading this book, I wanted to add a few remarks about it here while it was still ‘fresh in my mind’. I’m gradually coming to the conclusion that if I’m to blog all the books I’m reading in the amount of detail I’d ideally like to, I’ll have to read a lot less. This option does not appeal to me; I’d rather provide limited coverage of a book I’ve actually read than not read a book in order to provide more extensive coverage of another book.
Anyway, the book is a rather nice collection of interviews with mathematicians from MIT’s ‘early days’ (in some sense at least – MIT is a rather old institution, but at least some of the people interviewed in this book came along during the days before MIT was what it is today), who talk about the history of the mathematics department of MIT, and other stuff – the people interviewed include an Abel Prize winner, a few people who’ve been members of the Institute for Advanced Study, a former MacArthur Fellow, as well as a guy who used to be on the selection committee for the MacArthur Foundation. All of them are really, really smart, and some of them have lived quite interesting lives. In case these guys aren’t impressive enough on their own, some of them also knew some people most non-mathematicians have probably heard about – this book includes contributions from people who were friends of people like John Nash, Grothendieck, Shannon, Minsky, and Chomsky, and from people who’ve met and talked to people like John von Neumann, Oppenheimer, Weyl, Heisenberg, and Albert Einstein. They talk a little bit about their work and the history of the mathematics department, but they also talk about much else; there are various amusing anecdotes along the way (for example one interviewee tells the story about the time he lectured in a gorilla suit at MIT), there are stories about the private parties and social lives of the MIT staff during the fifties (and later), we get some personal stories about mathematicians who fled Europe when the Nazis started to cause trouble, and there are stories about student protests in the late sixties and how they were dealt with – the book spans widely. There was some repetition across the interviews (various people answering similar questions in similar ways), and there was more talk about ‘administrative matters’ than I’d have liked – probably a natural consequence of the fact that a few of them (at least three of the contributors, as far as I could tell) were former department heads – which is part of why I didn’t give it five stars, but it’s really quite a nice book. I may or may not blog it later in more detail.
“When I retired from clinical practice in 1998, my intention was (and still is) to write a definitive, exhaustively referenced, history of diabetes, which would be of interest primarily to doctors. However, I jumped at the suggestion of the editors of this series at Oxford University Press that I should write a biography of diabetes that would be about a tenth of the length of a full history with a minimum of references, for a wide general readership.”
This book is the result. As I pointed out on goodreads, this book is really great. It is not particularly technical compared to other books about diabetes which I’ve read in the past; however, this semi-critical review does make the point that the coverage occasionally implicitly ‘asks too much’ even of diabetic readers (“There were parts of all this that lost my interest or that I lacked the background to appreciate”). The reviewer was apparently to some extent getting lost in the details; so was I, but in a completely different way – I was simply amazed at the number of small details and interesting observations included in the book that I did not know, and I loved every single chapter. The author of the other review incidentally also states: “I don’t recommend that anyone read this who is not already familiar with diabetes, either by having it or knowing someone with it.” I’m not sure I agree with this recommendation, to the extent that it’s even ‘relevant’ – these days people who don’t know anyone with diabetes might well be a bit hard to find, because diabetes has become quite a common illness. Presumably a significant proportion of the people who assume they don’t know anyone with the disease actually do know someone with it, because a very large number of people have type 2 diabetes without knowing it. I think a reader will get more out of the book if he or she has diabetes or knows someone with diabetes, but a lot of people who do not would also benefit from knowing the material in this book. Not only in a ‘now you know how bad type 2 is and why you should get checked out if you think you’re at risk’ sense (there’s incidentally also a lot of material about type 1), but also in the ‘the history of diabetes is really quite fascinating’ sense. I do think it is.
Have a look at this image. The book included a similar picture (not exactly the same one, but it’s of the same patient and the ‘before’ picture is obviously taken at the same time this one was), which is of Billy Leroy, a type 1 diabetic, before and after he started insulin. He was one of the first patients treated with insulin (the first human treated with insulin was Leonard Thompson, in 1922). Billy Leroy’s weight in the first picture, where he was 3 years old, was 6.8 kg (the 5 % (underweight) CDC body weight cutoff at the age of 3 is 12 kg) – during the three months after he started on insulin, his weight doubled. The author argues in the beginning of the book that: “When people are asked to rank diseases in order of seriousness, diabetes is usually at the mild end of the spectrum.” This may or may not be true, but the picture to which I link above certainly adds a detail which is important to keep in mind but easy to forget when evaluating ‘the severity’ of the disease today – type 1 diabetes in particular is not much fun if you don’t have access to insulin, and until the early 1920s people with this disease simply died, most of them quite fast. (They all still do – like all other humans – but they live a lot longer before they die…)
The author knows his stuff and the book has a lot of content, which makes it hard to know what to pick out and mention in a review like this – but below I have added a few quotes from the book and some observations made along the way. The content covering the late nineteenth century and the first couple of decades of the twentieth century, before it was discovered that insulin could save the lives of a lot of sick children, would in my opinion on its own be a strong reason for reading the book; but the chapters covering the periods that came after are wonderful as well. When insulin was discovered, a religiously inclined mind might well have been tempted to think of its effects on young type 1 diabetic children as almost miraculous; but gradually the doctors treating diabetics came to realize that things might be more complicated than they had initially assumed. (The patients never knew, because they were not told – the book points out that the idea that it might make a lot of sense to give patients with a disease like diabetes some discretion in how to treat their illness is, in a historical context, a very new one; active patient involvement in medical decision-making is one of the cornerstones of current treatment regimes, for good reason, and I found it really surprising and frustrating to learn how this disease was treated in the past.) Type 2 diabetics had suffered from late-stage complications like blindness and kidney failure for centuries, but such complications had never before been observed in type 1 diabetics, because pre-insulin, diabetes presenting in children was universally fatal. It turned out that many of the children who were initially ‘saved’ by insulin in the early 1920s ended up suffering from severe complications just a couple of decades later, and many of them died early from these complications:
“After the Second World War it became clear that [diabetic] kidney disease could also affect the young, and there were increasingly frequent reports of diabetics who had been saved by insulin as children only to succumb to kidney failure in their 20s and 30s. Fifty of Joslin’s child patients who had started insulin before 1929 were followed up in 1949, when a third had died at an average age of 25, after having had diabetes for an average of 17.6 years. One half had died of kidney failure and the other half of tuberculosis and other infections. […] In the experience of the Joslin group, only 2 per cent of deaths of young diabetic patients before 1937 were due to kidney disease, but, of those who died between 1944 and 1950, more than half had advanced kidney disease. Results in Europe were equally bad. In 1955 all of eighty-seven Swiss children had signs of kidney disease after sixteen years of diabetes, and after twenty-one years all had died. Most young people with diabetic kidney disease also had severe retinopathy and many became blind—by the mid 1950s diabetes was the commonest cause of new blindness in people under the age of 50. […] Such devastating cases were being increasingly reported in the medical literature in the late 1940s and early 1950s, but they were not publicized in the lay press, presumably to avoid spreading despair and despondency and puncturing the myth that insulin had solved the problem of diabetes […] The British Diabetic Association (founded in 1935) produced a quarterly Diabetic Journal for its lay members, but no issue from 1940 to 1960 mentions complications”.
The book makes it clear that patients were for many years to a significant extent kept in the dark about the severity of their condition, though in all fairness, for a long time the doctors treating them frankly didn’t know enough to give them good information on a lot of topics anyway. The book includes some really interesting observations about how the medical men of the times thought about various aspects of the illness and its treatment, and how many of the things we know today, some of which ‘seem obvious’, really were not obvious to people at the time. Many attempts were made over time to explain why people got diabetes, and type 1 in particular was really quite hard to pin down – type 2 was somewhat easier because the lifestyle component was hard to miss; however, it was natural to explain the disease in terms of the symptoms it caused, and some of those symptoms in type 2 diabetics were complications best considered secondary to the ‘true’ disease process. For example, because many type 2 diabetics suffered from disorders of the nervous system, neuropathy, the nervous system was for a while assumed to be implicated in causing diabetes – but although disorders of the nervous system can and often do present in long-standing diabetes, they are not why type 2 diabetics get sick. Kidney problems were likewise thought to be “part and parcel of diabetes in the 19th century.” Oskar Minkowski made it clear in 1889 that removal of the pancreas caused severe (‘type 1-like’) diabetes in dogs – but despite this discovery it still took a long time for people to figure out how it all worked. This wasn’t because people at the time were stupid. One problem they faced was that the pancreas actually looked quite normal in people who died from diabetes – the islet cells implicated in the disease weigh around 1–1.5 grams altogether and make up only a very small proportion of the pancreas (1% or so).
Many doctors found it hard to imagine that the islet cells could be responsible for controlling carbohydrate metabolism (and other aspects of metabolism as well – “It is important to realize that diabetes is not just a glucose disease. There are also abnormalities of fat metabolism”). The pancreas wasn’t the only organ that looked normal – despite the excessive urination, the kidneys did as well, and so did other organs, to the naked eye. All major features of diabetic retinopathy (diabetic eye disease) had been described by the year 1890 with the aid of the ophthalmoscope, so people knew the eyes of people with long-standing diabetes looked different; how to interpret these findings was however not clear at the time – some argued that the eye damage found in diabetics was no different from eye damage caused by hypertension, and treatment options were non-existent anyway.
Many of the treatment options discussed among medical men before insulin were diets, and although dietary considerations are important in the treatment context today, it’s probably fair to say that not all of the supposed dietary remedies of the past were equally sensible: “One diet that had a short vogue in the 1850s was sugar feeding, brainchild of the well-known but eccentric French physician Pierre Piorry (1794–1879). He thought that diabetics lost weight and felt so weak because of the amount of sugar they lost in the urine and that replacing it should restore their strength”. (Aargh!). For the curious (or desperate) man, though, there were alternatives to diets: “A US government publication in 1894 listed no less than forty-two anti-diabetic remedies including bromides, uranium nitrate, and arsenic.” Relatedly, “in England until 1925, any drug could be advertised and marketed as a cure for any disease, even if it was completely ineffective”. Whether or not diets ‘worked’ depended in part on what those proposed diets included (see above), whether people followed them, and whether the people who presented were thin or fat. In the book Tattersall mentions that as early as the middle of the nineteenth century many physicians thought there were two different types of diabetes (there are more than two, but…). The thin young people presenting with symptoms were for decades considered by many to be hopeless cases (that they were hopeless cases was even noted in medical textbooks of the time), because they had this annoying habit of dying no matter what you did.
It should be noted that the book indirectly provides some insights into the general state of medical research and medical treatment options over time. As an example of the former, it is mentioned that the first clinical trial dealing with diabetes (with really poor randomization/selection mechanisms, it seems from the description in the book) was undertaken in the 1960s: “the FDA demanded randomized controlled trials for the first time in 1962, and [the University Group Diabetes Program (UGDP)] was the first in diabetes. Before 1962 the evidence in support of therapeutic efficacy put to the FDA was often just ‘testimonials’ from physicians who casually tested experimental drugs on their patients and were paid for doing so.” See also this link. An example of the latter is the observation made in the book that “until the 1970s treatment for a heart attack was bed rest for five or six weeks, while nature took its course.” Diabetics were not the only sick people who had a tough time in the past.
One interesting question related to what people didn’t know in the early years after the introduction of insulin was how the treatment might work long-term. The author notes that newspapers in the early years led people to believe that insulin would be a cure; it was thought that insulin might nurse the islet cells back to health, so that they’d start producing insulin on their own again – which was actually not a completely stupid idea, as e.g. the kidneys had shown the ability to recover after acute glomerulonephritis. The fact that diabetics often started on high doses which could then be lowered a month or two later even lent support to this idea; however, it was discovered quite fast that regeneration was not taking place. Remarkably, insulin was explored as a treatment option for other diseases in the 1920s, and it was actually used to stimulate appetite in tuberculosis patients and ‘in the insane refusing food’, an idea which came about because one of its most obvious effects was weight gain. This effect was also part of the reason why insulin was for a long time not considered an attractive option for type 2 diabetics, who instead were treated with diet alone unless this failed to reduce blood sugar levels sufficiently (these were the only two treatment options until the 1950s); most of them were already overweight and insulin caused weight gain, and besides, insulin didn’t work nearly as well in them as it did in young and lean people with type 1, because their insulin resistance led to a requirement for high doses of the drug.
Throughout much of the history of diabetes, diabetics did not measure their blood glucose regularly – instead they measured their urine, to figure out whether or not it contained glucose (glucose in the urine indicates that the blood glucose is quite high). This meant that the only metric available to them for monitoring their disease on a day-to-day basis was one which was unable to detect low blood glucose, and which could only (badly) distinguish between much-too-high blood glucose values and not-much-too-high values. Any treatment regime like the one I’m currently on would be completely impossible without regular daily blood tests, and I was very surprised at how late the idea of self-monitoring of blood glucose appeared; like the measurement of HbA1c, this innovation did not appear until the late 1970s. A few years after that, the first insulin pen revolutionized treatment regimes and made regimes involving multiple injections each day much more common than they had been in the past, facilitating much better metabolic control.
The book has a lot of material about specific complications and the history of treatment advances – both the ones that worked and some of the ones that didn’t. If you’re a diabetic today, you tend to take a lot of things for granted – and reading a book like this will really make you appreciate how many ideas had to be explored, how many false starts there were, and how much work by so many different people actually went into giving you the options you have today, keeping you alive, and perhaps even relatively well. One example of a type of treatment option which was considered in the past but turned out not to work is the curative pancreas transplant, explored in the 1960s and 1970s: “Pancreas transplantation offered a potential cure of type 1 diabetes. The first was done in 1966 […] Worldwide in the next eleven years, fifty-seven transplants were done, but only two worked for more than a year”. Recent attempts to stop people at risk of developing type 1 diabetes from becoming sick are also discussed in the last part of the book, and in this context the author makes a point I was familiar with: “[Repeated] failures [in this area] are particularly frustrating, because, in the best animal model of type 1 diabetes, the NOD mouse, over 100 different interventions can prevent diabetes.” This is one of the reasons why I tend to be skeptical about results from animal studies. Although he spends many pages on complications – which in a book like this makes a lot of sense, given how common these complications were (and to some extent still are) and how important a role they have played in the lives of people suffering from diabetes throughout the ages – I have talked about many of these things before, just as I have talked about the results of various large-scale trials like the DCCT trial and the UKPDS (see e.g. this and this), so I will not discuss such topics in detail here.
I do however want to briefly remind people of what kind of a disease badly managed type 2 diabetes (by far the more common of the two) is, especially if it is true, as the author argues in the introduction, that many people perceive it as a relatively mild disease – so I’ll end the post with a few quotes from the book:
“I took over the diabetic clinic in Nottingham in 1975 and three years later met Lilian, an overweight 60-year-old woman who was on tablets for diabetes. She had had sugar in her urine during her last pregnancy in 1957 but was well until 1963, when genital itching (pruritus vulvae) led to a diagnosis of diabetes. She attended the clinic for two years but was then sent back to her GP with a letter that read: ‘I am discharging this lady with mild maturity onset diabetes back to your care.’ She continued to collect her tablets but had no other supervision. When I met her after she had had diabetes for eighteen years she was blind, had had a heart attack, and had had one leg amputated below the knee. The reason for the referral to me was an ulcer on her remaining foot, which would not heal. […] Someone whose course is not dissimilar to that of Lilian is Sue Townsend (b. 1946), author of the Adrian Mole books. She developed diabetes at the age of 38 and after only fifteen years was blind from retinopathy and wheelchair bound because of a Charcot foot, a condition in which the ankle disintegrates as a result of nerve damage. Neuropathy has also destroyed the nerve endings in her fingers, so that, like most other blind diabetics, she cannot read Braille. She blames her complications on the fact that she cavalierly disregarded the disease and kept her blood sugars high to avoid the inconvenience of hypoglycaemic (low-blood-sugar) attacks.”
“We wrote this book to introduce graduate students and research workers in various scientific disciplines to the use of information-theoretic approaches in the analysis of empirical data. These methods allow the data-based selection of a “best” model and a ranking and weighting of the remaining models in a pre-defined set. Traditional statistical inference can then be based on this selected best model. However, we now emphasize that information-theoretic approaches allow formal inference to be based on more than one model (multimodel inference). Such procedures lead to more robust inferences in many cases, and we advocate these approaches throughout the book. […] Information theory includes the celebrated Kullback–Leibler “distance” between two models (actually, probability distributions), and this represents a fundamental quantity in science. In 1973, Hirotugu Akaike derived an estimator of the (relative) expectation of Kullback–Leibler distance based on Fisher’s maximized log-likelihood. His measure, now called Akaike’s information criterion (AIC), provided a new paradigm for model selection in the analysis of empirical data. His approach, with a fundamental link to information theory, is relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. […] We do not claim that the information-theoretic methods are always the very best for a particular situation. They do represent a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and they are objective and practical to employ across a very wide class of empirical problems. 
Inference from multiple models, or the selection of a single “best” model, by methods based on the Kullback–Leibler distance are almost certainly better than other methods commonly in use now (e.g., null hypothesis testing of various sorts, the use of R2, or merely the use of just one available model).
This is an applied book written primarily for biologists and statisticians using models for making inferences from empirical data. […] This book might be useful as a text for a course for students with substantial experience and education in statistics and applied data analysis. A second primary audience includes honors or graduate students in the biological, medical, or statistical sciences […] Readers should ideally have some maturity in the quantitative sciences and experience in data analysis. Several courses in contemporary statistical theory and methods as well as some philosophy of science would be particularly useful in understanding the material. Some exposure to likelihood theory is nearly essential”.
The above quotes are from the preface of the book, which I have so far only briefly talked about here; this post will provide a lot more details. Aside from writing the post in order to mentally process the material and gain a greater appreciation of the points made in the book, I have also, as a secondary goal, tried to write the post in such a manner that people who are not necessarily experienced model-builders might also derive some benefit from the coverage. Whether or not I was successful in that respect I do not know – given the outline above, it should be obvious that there are limits to how ‘readable’ you can make stuff like this to people without a background in a semi-relevant field. I don’t think I have written specifically about the application of information criteria in the model selection context before here on the blog, at least not in any amount of detail, but I have written about ‘model-stuff’ before, also in ‘meta-contexts’ not necessarily related to the application of models in economics; so if you’re interested in ‘this kind of stuff’ but you don’t feel like having a go at a post dealing with a book which includes word combinations like ‘the (relative) expectation of Kullback–Leibler distance based on Fisher’s maximized log-likelihood’ in the preface, you can for example have a look at posts like this, this, this and this. I have also discussed here on the blog some stuff somewhat related to the multi-model inference part – how you can combine the results of various models to get a bigger picture of what’s going on – in these posts; they approach ‘the topic’ (these are in fact separate topics…) in a very different manner than does this book, but some key ideas should presumably transfer. 
Having said all this, I should also point out that many of the basic points made in the coverage below should be relatively easy to understand, and I should perhaps repeat that I’ve tried to make this post readable to people who’re not too familiar with this kind of stuff. I have deliberately chosen to include no mathematical formulas in my coverage in this post. Please do not assume this is because the book does not contain mathematical formulas.
Before moving on to the main coverage I thought I’d add a note about the remark above that stuff like AIC is “little taught in statistics classes and far less understood in the applied sciences than should be the case”. The book was written a while back, and some things may have changed a bit since then. I have done coursework on the application of information criteria in model selection as it was a topic (briefly) covered in regression analysis(? …or an earlier course), so at least this kind of stuff is now being taught to students of economics where I study and has been for a while as far as I’m aware – meaning that coverage of such topics is probably reasonably widespread at least in this field. However I can hardly claim that I obtained a ‘great’ or ‘full’ understanding of the issues at hand from the work on these topics I did back then – and so I have only gradually, while reading this book, come to appreciate some of the deeper issues and tradeoffs involved in model selection. This could probably be taken as an argument that these topics are still ‘far less understood … than should be the case’ – and another, perhaps stronger, argument would be Seber’s comments in the last part of his book; if a statistician today may still ‘overlook’ information criteria when discussing model selection in a Springer text, it’s not hard to argue that the methods are perhaps not as well known as should ‘ideally’ be the case. It’s obvious from the coverage that a lot of people were not using the methods when the book was written, and I’m not sure things have changed as much as would be preferable since then.
What is the book about? A starting point for understanding the sort of questions the book deals with might be to consider the simple question: When we set out to model stuff empirically and we have different candidate models to choose from, how do we decide which of the models is ‘best’? There are a lot of other questions dealt with in the coverage as well. What does the word ‘best’ mean? We might worry over both the functional form of the model and which variables should be included in ‘the best’ model – do we need separate mechanisms for dealing with concerns about the functional form and concerns about variable selection, or can we deal with such things at the same time? How do we best measure the effect of a variable which we have access to and consider including in our model(s) – is it preferable to interpret the effect of a variable on an outcome based on the results you obtain from a ‘best model’ in the set of candidate models, or is it perhaps sometimes better to combine the results of multiple models in the choice set (for example by taking an average of the variable’s effects across the proposed models as the best possible estimate)? (As should by now be obvious to people who’ve read along here, there are sometimes quite close parallels between stuff covered in this book and stuff covered in Borenstein & Hedges.) If we’re not sure which model is ‘right’, how might we quantify our uncertainty about these matters – and what happens if we don’t try to quantify our uncertainty about which model is correct? What is bootstrapping, and how can we use Monte Carlo methods to help us with model selection? If we apply information criteria to choose among models, what do these criteria tell us, and which sort of issues are they silent about? 
Are some methods for deciding between models better than others in specific contexts – might it for example be a good idea to make criteria adjustments when faced with small sample sizes, which make it harder for us to rely on the asymptotic properties of the criteria we apply? How might the sample size more generally relate to the criterion we use to decide which model is ‘best’ – might (or should?) ‘the best model’ depend on how much data we have access to, and if the amount of data we have access to and the ‘optimal size of a model’ are related, how are the two related, and why? The questions included in the previous sentence relate to some fundamental differences between AIC (and similar measures) and BIC – but let’s not get ahead of ourselves. I may or may not go into details like these in my coverage of the book, but I certainly won’t cover stuff like that in this post. Some of the content is really technical: “Chapters 5 and 6 present more difficult material [than chapters 1-4] and some new research results. Few readers will be able to absorb the concepts presented here after just one reading of the material […] Underlying theory is presented in Chapter 7, and this material is much deeper and more mathematical.” – from the preface. The sample size considerations mentioned above relate to stuff covered in chapter 6. As you might already have realized, this book has a lot of stuff.
When dealing with models, one way to think about these things is to consider two in some sense separate issues: On the one hand we might think about which model is most appropriate (model selection), and on the other hand we might think about how best to estimate parameter values and variance-covariance matrices given a specific model. As the book points out early on, “if one assumes or somehow chooses a particular model, methods exist that are objective and asymptotically optimal for estimating model parameters and the sampling covariance structure, conditional on that model. […] The sampling distributions of ML [maximum likelihood] estimators are often skewed with small samples, but profile likelihood intervals or log-based intervals or bootstrap procedures can be used to achieve asymmetric confidence intervals with good coverage properties. In general, the maximum likelihood method provides an objective, omnibus theory for estimation of model parameters and the sampling covariance matrix, given an appropriate model.” The problem is that it’s not ‘a given’ that the model we’re working on is actually appropriate. That’s where model selection mechanisms enter the picture. Such methods can help us realize which of the models we’re considering might be the most appropriate one(s) to apply in the specific context (there are other things they can’t tell us, however – see below).
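The two-step distinction above can be made concrete with a small Python sketch – my own toy example, not from the book; the data-generating process and the two candidate models are invented for illustration. Given a fixed model, least squares/maximum likelihood handles the estimation step; AIC (2K − 2 ln L, with the ML error-variance estimate counted among the K parameters) then handles the comparison across candidate models:

```python
import numpy as np

def gaussian_aic(y, y_hat, n_params):
    """AIC = 2K - 2*ln(max likelihood) for a least-squares fit with Gaussian errors.

    The ML estimate of the error variance (RSS/n) is itself an estimated
    parameter, so K = n_params + 1.
    """
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    log_lik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    return 2 * (n_params + 1) - 2 * log_lik

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x + rng.normal(0, 0.3, size=50)   # data from a linear process

# Two nested candidate models; each is estimated by maximum likelihood
# (here equivalent to least squares), conditional on that model.
candidates = {"intercept only": np.ones((50, 1)),
              "linear": np.column_stack([np.ones(50), x])}
aics = {}
for name, X in candidates.items():
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    aics[name] = gaussian_aic(y, X @ beta, X.shape[1])
# The model-selection step then simply ranks the candidates by AIC.
```

The estimation step is uncontroversial and conditional on the model; the ranking in the last line is the separate model-selection step the book is about.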
Below I have added some quotes from the book and some further comments:
“Generally, alternative models will involve differing numbers of parameters; the number of parameters will often differ by at least an order of magnitude across the set of candidate models. […] The more parameters used, the better the fit of the model to the data that is achieved. Large and extensive data sets are likely to support more complexity, and this should be considered in the development of the set of candidate models. If a particular model (parametrization) does not make biological [/’scientific’] sense, this is reason to exclude it from the set of candidate models, particularly in the case where causation is of interest. In developing the set of candidate models, one must recognize a certain balance between keeping the set small and focused on plausible hypotheses, while making it big enough to guard against omitting a very good a priori model. While this balance should be considered, we advise the inclusion of all models that seem to have a reasonable justification, prior to data analysis. While one must worry about errors due to both underfitting and overfitting, it seems that modest overfitting is less damaging than underfitting (Shibata 1989).” (The key word here is ‘modest’ – and please don’t take these authors to be in favour of obviously overfitted models and data dredging strategies; they spend quite a few pages criticizing such models/approaches!).
“It is not uncommon to see biologists collect data on 50–130 “ecological” variables in the blind hope that some analysis method and computer system will “find the variables that are significant” and sort out the “interesting” results […]. This shotgun strategy will likely uncover mainly spurious correlations […], and it is prevalent in the naive use of many of the traditional multivariate analysis methods (e.g., principal components, stepwise discriminant function analysis, canonical correlation methods, and factor analysis) found in the biological literature [and elsewhere, US]. We believe that mostly spurious results will be found using this unthinking approach […], and we encourage investigators to give very serious consideration to a well-founded set of candidate models and predictor variables (as a reduced set of possible prediction) as a means of minimizing the inclusion of spurious variables and relationships. […] Using AIC and other similar methods one can only hope to select the best model from this set; if good models are not in the set of candidates, they cannot be discovered by model selection (i.e., data analysis) algorithms. […] statistically we can infer only that a best model (by some criterion) has been selected, never that it is the true model. […] Truth and true models are not statistically identifiable from data.”
“It is generally a mistake to believe that there is a simple “true model” in the biological sciences and that during data analysis this model can be uncovered and its parameters estimated. Instead, biological systems [and other systems! – US] are complex, with many small effects, interactions, individual heterogeneity, and individual and environmental covariates (most being unknown to us); we can only hope to identify a model that provides a good approximation to the data available. The words “true model” represent an oxymoron, except in the case of Monte Carlo studies, whereby a model is used to generate “data” using pseudorandom numbers […] A model is a simplification or approximation of reality and hence will not reflect all of reality. […] While a model can never be “truth,” a model might be ranked from very useful, to useful, to somewhat useful to, finally, essentially useless. Model selection methods try to rank models in the candidate set relative to each other; whether any of the models is actually “good” depends primarily on the quality of the data and the science and a priori thinking that went into the modeling. […] Proper modeling and data analysis tell what inferences the data support, not what full reality might be […] Even if a “true model” did exist and if it could be found using some method, it would not be good as a fitted model for general inference (i.e., understanding or prediction) about some biological system, because its numerous parameters would have to be estimated from the finite data, and the precision of these estimated parameters would be quite low.”
A key concept in the context of model selection is the tradeoff between bias and variance in a model framework:
“If the fit is improved by a model with more parameters, then where should one stop? Box and Jenkins […] suggested that the principle of parsimony should lead to a model with “. . . the smallest possible number of parameters for adequate representation of the data.” Statisticians view the principle of parsimony as a bias versus variance tradeoff. In general, bias decreases and variance increases as the dimension of the model (K) increases […] The fit of any model can be improved by increasing the number of parameters […]; however, a tradeoff with the increasing variance must be considered in selecting a model for inference. Parsimonious models achieve a proper tradeoff between bias and variance. All model selection methods are based to some extent on the principle of parsimony […] The concept of parsimony and a bias versus variance tradeoff is very important.”
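The tradeoff described in the quote is easy to demonstrate numerically. The following sketch (my own toy example, not from the book) fits polynomials of increasing degree to data generated from a quadratic: the residual sum of squares can only fall as parameters are added, but AIC’s 2K penalty eventually outweighs the improvement in fit:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 40
x = np.linspace(-1, 1, n)
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(0, 0.5, size=n)  # quadratic truth + noise

def fit_poly(degree):
    """Least-squares polynomial fit of the given degree; returns (RSS, AIC)."""
    X = np.vander(x, degree + 1)                 # nested design matrices
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = degree + 2                               # coefficients plus the error variance
    log_lik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    return rss, 2 * k - 2 * log_lik

rss_values = [fit_poly(d)[0] for d in range(9)]
aic_values = [fit_poly(d)[1] for d in range(9)]
best_degree = int(np.argmin(aic_values))
# RSS is non-increasing in the degree (the models are nested), so fit alone
# would always pick the most complex model; the 2K penalty is what stops it.
```

This is the bias–variance tradeoff in miniature: the underfitted low-degree models leave replicable structure in the residuals, while the high-degree models buy a tiny reduction in RSS at the price of a penalty (and, in repeated samples, needlessly variable estimates).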
“we reserve the terms underfitted and overfitted for use in relation to a “best approximating model” […] Here, an underfitted model would ignore some important replicable (i.e., conceptually replicable in most other samples) structure in the data and thus fail to identify effects that were actually supported by the data. In this case, bias in the parameter estimators is often substantial, and the sampling variance is underestimated, both factors resulting in poor confidence interval coverage. Underfitted models tend to miss important treatment effects in experimental settings. Overfitted models, as judged against a best approximating model, are often free of bias in the parameter estimators, but have estimated (and actual) sampling variances that are needlessly large (the precision of the estimators is poor, relative to what could have been accomplished with a more parsimonious model). Spurious treatment effects tend to be identified, and spurious variables are included with overfitted models. […] The goal of data collection and analysis is to make inferences from the sample that properly apply to the population […] A paramount consideration is the repeatability, with good precision, of any inference reached. When we imagine many replicate samples, there will be some recognizable features common to almost all of the samples. Such features are the sort of inference about which we seek to make strong inferences (from our single sample). Other features might appear in, say, 60% of the samples yet still reflect something real about the population or process under study, and we would hope to make weaker inferences concerning these. Yet additional features appear in only a few samples, and these might be best included in the error term (σ2) in modeling. 
If one were to make an inference about these features quite unique to just the single data set at hand, as if they applied to all (or most all) samples (hence to the population), then we would say that the sample is overfitted by the model (we have overfitted the data). Conversely, failure to identify the features present that are strongly replicable over samples is underfitting. […] A best approximating model is achieved by properly balancing the errors of underfitting and overfitting.”
Model selection bias is a key concept in the model selection context, and I think this problem is quite similar/closely related to problems encountered in a meta-analytical context which I believe I’ve discussed before here on the blog (see links above to the posts on meta-analysis) – if I’ve understood these authors correctly, one might choose to think of publication bias issues as partly the result of model selection bias issues. Let’s for a moment pretend you have a ‘true model’ which includes three variables (in the book example there are four, but I don’t think you need four…); one is very important, one is a sort of ‘60% of the samples variable’ mentioned above, and the last one would be a variable we might prefer to just include in the error term. Now the problem is this: When people look at samples where the last one of these variables is ‘seen to matter’, the effect size of this variable will be biased away from zero (they don’t explain where this bias comes from in the book, but I’m reasonably sure this is a result of the probability of identification/inclusion of the variable in the model depending on the (‘local’/’sample’) effect size; the bigger the effect size of a specific variable in a specific sample, the more likely the variable is to be identified as important enough to be included in the model – Borenstein and Hedges talked about similar dynamics, for obvious reasons, and I think their reasoning ‘transfers’ to this situation and is applicable here as well). When models include variables such as the last one, you’ll have model selection bias: “When predictor variables [like these] are included in models, the associated estimator for a σ2 is negatively biased and precision is exaggerated. These two types of bias are called model selection bias”. Much later in the book they incidentally conclude that: “The best way to minimize model selection bias is to reduce the number of models fit to the data by thoughtful a priori model formulation.”
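The mechanism I have in mind can be illustrated with a toy simulation of my own (not the book’s example): draw many sample estimates of a small true effect with a known standard error, keep only the ‘significant’ ones, and look at what survives:

```python
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.2      # a real but modest effect
se = 0.15              # sampling standard error of each estimate (treated as known)

# Each draw plays the role of one sample's estimated effect for the variable.
estimates = rng.normal(true_effect, se, size=5000)

# Suppose the variable is only 'identified as important' (kept in the model,
# or published) when the estimate is 'significant': |estimate| > 2 * se = 0.3.
selected = estimates[np.abs(estimates) > 2 * se]

mean_selected = float(np.mean(np.abs(selected)))
# Every surviving estimate exceeds 0.3 in magnitude by construction, even
# though the true effect is only 0.2: conditioning on inclusion biases the
# reported effect away from zero.
```

The point is that the selection rule itself does the damage: among the samples where the weak variable ‘is seen to matter’, the estimate is guaranteed to overstate the true effect, which is exactly the dynamic Borenstein and Hedges describe for publication bias.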
“Model selection has most often been viewed, and hence taught, in a context of null hypothesis testing. Sequential testing has most often been employed, either stepup (forward) or stepdown (backward) methods. Stepwise procedures allow for variables to be added or deleted at each step. These testing-based methods remain popular in many computer software packages in spite of their poor operating characteristics. […] Generally, hypothesis testing is a very poor basis for model selection […] There is no statistical theory that supports the notion that hypothesis testing with a fixed α level is a basis for model selection. […] Tests of hypotheses within a data set are not independent, making inferences difficult. The order of testing is arbitrary, and differing test order will often lead to different final models. [This is incidentally one, of several, key differences between hypothesis testing approaches and information theoretic approaches: “The order in which the information criterion is computed over the set of models is not relevant.”] […] Model selection is dependent on the arbitrary choice of α, but α should depend on both n and K to be useful in model selection”.
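For contrast with the testing-based approaches criticized in the quote, here is a small sketch of Akaike differences and weights (Δi = AICi − min AIC, wi = exp(−Δi/2)/Σj exp(−Δj/2)), the quantities the book uses for ranking and weighting models; the AIC values below are made up for illustration. Note that, unlike sequential testing, the result does not depend on the order in which the models are evaluated:

```python
import math

def akaike_weights(aic_values):
    """Akaike differences and weights for a set of candidate models."""
    best = min(aic_values)
    deltas = [a - best for a in aic_values]          # delta_i = AIC_i - min AIC
    raw = [math.exp(-d / 2) for d in deltas]
    total = sum(raw)
    return deltas, [r / total for r in raw]          # weights sum to one

# Hypothetical AIC values for four candidate models.
aics = [204.1, 202.7, 210.3, 203.4]
deltas, weights = akaike_weights(aics)

# Evaluating the same models in a different order yields the same weights,
# up to floating-point rounding - there is no analogue of 'test order'.
shuffled = [aics[2], aics[0], aics[3], aics[1]]
_, w_shuffled = akaike_weights(shuffled)
```

The weights can be read as the relative support for each model in the candidate set, which is also the starting point for the multimodel-inference (model-averaging) ideas mentioned earlier.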
This will be my last post about the book. Below I have posted some observations from the second half of the book and some comments:
When we negotiate with others, we don’t just optimize economic payoffs. Process-concerns matter (e.g. reputation effects, whether people are perceived to behave in a fair manner during the negotiation, etc.), and ‘relationship management’-stuff may be very important. How well we know the other party and how much we trust him/her matters a lot: “As a general matter, it is easier for negotiators to reach integrative agreements if they already know and trust each other. For example, compared to mere acquaintances who negotiate, friends are more likely to achieve integrative agreements […] Friends are willing to sacrifice economic payoffs to reduce the conflict and negative externalities of negotiations […] When we negotiate with a friend, we prefer that our counterpart’s outcome be equal to our own; but when we negotiate with a stranger, we prefer to take more of the surplus for ourselves […] Regardless of the objective payoffs, positive relationships influence parties’ interpretations of those payoffs […] as a general matter, the stronger the relationship between negotiators, the more likely they are to share information (Greenhalgh & Chapman, 1998).”
“although we tend to rate ourselves more favorably than others on ambiguous traits like dependability, intelligence, and considerateness […], these ratings are likely to depend on the abstractness of the others in question. If the other is the “average person” then my comparison tends to be more favorable toward myself than if the other is an individual stranger sitting next to me […]. In other words, the less abstract the other person about whom I make a judgment, the more likely I am to judge that person as more similar to me. […] The “identifiable other” effect extends to other judgments besides estimates of personality traits. For example, people estimate that an “average person” making a decision will choose a riskier option than themselves, but that the stranger sitting next to them will choose a similar option to the one they just chose for themselves […]. Even more interesting for our purposes, the extent to which the other person is identifiable influences more than merely our judgments — it also affects our behavior toward other people. For example, responses to e-mail requests to participate in a survey have been shown to increase if the sender’s photograph is included in the e-mail, thereby making the person more identifiable […] people are more willing to help a target who is more identifiable than one who is more abstract, even when the act of identification conveys no information whatsoever about the characteristics of the target.”
“When we communicate via technology, we attend less to the other person and more on the message they are disseminating […]. This focus on the message has potential benefits. For example, in a study comparing negotiations that took place either over the phone or in person […], one negotiator in each dyad was given a strong case (i.e., a large number of high-quality arguments) to present whereas the other was assigned to present a weak case. The strong case was more successful in the phone condition (where negotiation partners were not visible) than in the face-to-face condition. By contrast, weak arguments were more successful when negotiations took place face to face as opposed to by phone. A clear implication of these findings is that the social constraint of the communication medium can affect the persuasion process that occurs during negotiations. […] communicating via technology can lead negotiators to focus more on the content or quality of arguments made by their negotiation partner rather than being (mis)led into agreement by more peripheral factors like those made salient in face-to-face interactions […] This greater focus on the message content can have particularly negative consequences when the content of the message that one receives is negative or confrontational. […] Ultimately, participants interacting via information technology often like their discussion partners less than those interacting face to face”.
A personal comment is perhaps worth inserting here: I generally don’t like debating ‘factual stuff’ offline in a social context where someone is openly disagreeing with me, certainly not when compared to doing the same thing online (I don’t like engaging in disagreements online either, but it’s not as bad), and part of the reason is a strong long-standing impression on my part that poor arguments are much, much harder to fight in a face-to-face interaction than they are in interactions taking place online. In the past I think I’ve mentally explained this in terms of me being ‘a bad communicator’/’not verbally skilled’ and similar stuff, but recently I have received some social feedback from face-to-face interactions suggesting that perhaps that’s not the (only?) reason (i.e. I’ve been given feedback to the effect that my verbal communication skills may be significantly better than I’ve tended to think they are). A conceptual distinction should probably in this context be made between communication and persuasion, though naturally there’s some overlap; the ability to accurately outline your point of view is to some extent different from the ability to convince others that your point of view is true, or perhaps that it is socially desirable to hold this point of view (the latter concern is presumably often a more salient factor in a social context than is the truth content of the competing positions). It’s become clear to me that being right has little to do with whether or not you’ll win a verbal/face-to-face argument, and that the skill-set required to actually win arguments of that sort is very different from the skill-set required to actually be right about stuff. On an unrelated note, I suspect the penultimate sentence in the paragraph above in particular may be rather important for some people on the autism spectrum to keep in mind.
“McGinn and Croson (2004) argue that in settings where social perceptions and intimacy have not been established, such as negotiations between strangers, the lack of visual access, synchronicity, and efficacy inherent in e-mail can result in less cooperation, coordination, truth telling, and rapport building. But where social perceptions and intimacy are already established, as in negotiations between friends, the medium might not make much or any difference. […] when communicators have positive-valenced goals such as relationship-building, computer-mediated interactions can be highly personal, resulting in the development of close relationships”
“Research by Bond (1983), Shweder and Bourne (1982), Miller (1984), and most recently Morris and Peng (1994), has documented that Asians [from East Asia, it should be emphasized] make more situational attributions while Americans make more dispositional ones. […] In non-Western [in the research context here, ‘non-Western’ = ‘East Asian’] cultures, social conceptions are group centered, reflecting interdependence […]. People believe that human behavior is constrained by roles and role constraints, by group norms to preserve relationships with others, and by scripts that prescribe proper situational behavior. This perspective is highly consistent with making situational attributions. It is also consistent with the hierarchical values of these cultures […] Given the same information about an event, people from different cultures give very different attributions […], and construct causal explanations that are consistent with their culturally instantiated belief system. […] these culturally linked attribution patterns [affect] perceptions, reasoning, and other cognitive processes, as well as negotiation and other behavior. […] [East Asians] prefer expressing conflict in indirect ways, both verbally and especially behaviorally […]. From a verbal perspective, non-Westerners in conflict prefer using words whose meaning requires inference, for example, contempt rather than anger; words that are less blunt […] and words that are more ambiguous and “avoid leaving an assertive impression” […]. The behaviors that non-Westerners use to manage conflict are also indirect. There are many different types of indirect confrontation behaviors. 
One set involves using diffused voice, which means broadcasting concerns publicly to a diffused audience rather than directly to the other person […] Leung (1987) also showed that compared to Americans, Chinese preferred indirect procedures involving third parties, such as mediation, to direct face-to-face adversarial procedures, because the indirect procedures were seen as more conducive to reducing animosity among the parties. […] Similarly, Ohbuchi and Takahashi (1994) found that disputants in Japan used indirect strategies of ingratiation, impression management, and appeasement in order to deal with conflict.”
“Because gender may affect negotiation behavior in some situations and not others, when researchers compare results across studies and do not account for the different situational factors across these same studies, it may appear that gender has an inconsistent impact.”
“Stuhlmacher and Walters (1999) […] showed that, across a wide range of studies, gender differences in negotiated outcomes were greater in distributive negotiations than in integrative ones. […] Barron (2003) documented a striking divergence in how men and women determine their worth in salary negotiations. Whereas the vast majority of men indicated their worth was self-determined, the overwhelming majority of women felt that their worth was determined by what the company would pay them.”
Here are my first two posts about the book. In this post I’ll continue my coverage of chapters from the book which I thought were interesting. The first of these chapters is the chapter on ‘Loneliness and Social Isolation’. I’ve written about this kind of stuff before (see e.g. these posts) and though I have not checked, it seems likely that some of my previous coverage of this topic on the blog will overlap with studies/results/distinctions mentioned here – I don’t mind that. I’m reasonably sure I’ve made the distinction between loneliness and social isolation clear before here, but it’s important and a useful starting point for any discussion of these matters:
“loneliness is a subjective and negative experience, and the outcome of a cognitive evaluation of the match between the quantity and quality of existing relationships and relationship standards. The opposite of loneliness is belongingness or embeddedness. […] Social isolation concerns the objective characteristics of a situation and refers to the absence of relationships with other people. The central question is this: To what extent is he or she alone? […] Persons with a very small number of meaningful ties are, by definition, socially isolated. Loneliness is not directly connected to objective social isolation; the association is of a more complex nature. […] Socially isolated persons are not necessarily lonely, and lonely persons are not necessarily socially isolated”
“Using the [De Jong Gierveld loneliness scale (related link)] in self-administered questionnaires results in higher scale means than if the scale is used in face-to-face or telephone interviews […]. This finding is in line with Sudman and Bradburn’s (1974) observation that, compared with interviews, the more anonymous the setting in which self-administered surveys are completed, the more the results show self-disclosure and reduce the tendency of respondents to present themselves in a favorable light”
“A partner does not always provide protection against loneliness. Persons with a partner who is not their most supportive network member tend to be very lonely […]. Generally speaking, however, persons with a partner bond tend to be better protected from loneliness than persons without a partner bond […] Adult children are an important source of companionship, closeness, and sharing, particularly for those who live alone. […] Divorce often impairs the relationship between parents and children, especially in the case of fathers […] The low level of contact with adult children is the reason divorced fathers tend to be lonelier than divorced mothers [I’d probably at most have concluded that it is a reason, rather than being the reason]. […] Siblings serve a particularly important function in alleviating the loneliness of those who lack the intimate attachment of a partner and have no children […] best friends can step in and function as confidants and in doing so help alleviate emotional loneliness, in particular, for never partnered or childless adults […] Generally speaking, as the number of relationships in the social network increases and as the amount of emotional and social support exchanged increases, the intensity of loneliness decreases […]. The four closest ties in a person’s network provide the greatest degree of protection against loneliness. The protection provided by additional relationships is marginal […]. Diversity across relationship types also serves to protect against loneliness. People with networks composed of both strong and weak ties are less prone to loneliness than people with strong ties only […]. Moreover, research […] has shown that people with networks that consist primarily or entirely of kin ties are more vulnerable to loneliness than people with more heterogeneous networks. Those who are dependent on family members for social contacts because they lack alternatives tend to have the highest levels of loneliness.”
“Research has shown that over the course of time, men and women who have lost their partner by death start downplaying the advantages of having a partner and start upgrading the advantages of being single […]. In doing so, they free the way for other relationships. The less importance attached to having a partner, the less lonely the widowed were found to be. […] Feeling socially uncomfortable, fear of intimacy, being easily intimidated by others, being unable to communicate adequately to others and developmental deficits such as childhood neglect and abandonment are reported by lonely people as the main causes of their feelings of loneliness […]. Characteristics such as low self-esteem, shyness and low assertiveness can predispose people to loneliness and might also make it more difficult to recover from loneliness […] Loneliness is associated with a variety of measures of physical health. Those who are in poor health, whether this is measured objectively or subjectively, tend to report higher levels of loneliness […] The causal mechanisms underlying the association between loneliness and health are not well understood”
The next chapter deals with ‘Stress in Couples: The Process of Dyadic Coping’. Some observations from the chapter:
“Research has shown […] that couples are more likely to have fights at home when the husband has had a difficult day at work” (In other news, water is wet. But do romantic partners take this kind of stuff sufficiently into account when trying to figure out why they’re arguing with their partner?) A related observation is this: “Dyadic coping suffers when the demands of the stressor reduce the amount of time and attention that spouses devote to one another. Perceived neglect may lead to resentment.” (Note again: Perceived neglect. Attributions and cognitions matter a lot).
In the dyadic coping context, there are good ways and bad ways to deal with problems. Positive relationship-focused coping strategies mentioned in the coverage include empathy, support provision and compromise, whereas negative strategies include confronting, ignoring, blaming, and withdrawing. Some things may make it harder to handle problems the right way: “It is easier to cope constructively, in ways that foster a positive relationship climate, when the individual is not overwhelmed by situational demands and a lack of material and interpersonal resources.” This observation lends support to the notion that implementing changes in coping strategies may be significantly harder than e.g. just pointing out that the coping strategy employed is not optimal and that a different approach might be better; suboptimal coping may be employed at least in part for reasons outside the control of the individual.
“there may be two critical components to relationship maintenance in the context of severe stress. The first is preventing hurtful or counterproductive interaction patterns, such as pressuring one’s spouse to stop coping in a way that makes one uncomfortable (e.g., crying or expressing fear) or taking out one’s frustrations on the spouse. Some negative coping is probably inevitable, given the context of high stress. Thus, the second critical component may be forgiveness. Individuals who are facing the potential loss of physical functions, valued achievements, or the presence of pain or suffering in a loved one cannot always be empathic or even reasonable. An intervention model that combines awareness of appraisal processes, the implications of conflicting primary and secondary appraisals, and instruction on how to cope together rather than at cross-purposes may work best if it is tempered with the expectation that people will fail. Such failures can be normalized, rather than construed as major betrayals […] The most important contribution of interventions may be to educate people about the effects of stress on communication, the ability to express affection, and the capacity to process and react to information in a rational manner. It may be that our goal should not be “perfect dyadic coping” but multiple opportunities for redemption.” (“But do romantic partners take this kind of stuff sufficiently into account when trying to figure out why they’re arguing with their partner?” I asked above. It seems the authors believe that at least individuals in troubled relationships don’t.)
People in relationships lie to each other. In the previous coverage of the self-disclosure chapter I dealt briefly with some similar/related themes, but I didn’t go into much detail about lying – I’ll do this here, as the chapter ‘Lying and Deception in Close Relationships’ had some interesting stuff on the topic:
“the closeness of a relationship will affect the frequency of lying, the motivation for lying, the things lied about, the interactive manifestations of the lying process, the awareness of and desire to detect deception, the accuracy of that detection, the methods used to detect deception, and the consequences of the deception” (In other words, this stuff is complicated…)
“Rowatt, Cunningham, and Druen (1998) found both men and women saying they would be willing to lie about their intelligence, personal appearance, personality traits, income, past relationship outcomes, and career skills to a prospective date who was high in facial attractiveness. Even in get-acquainted conversations, participants are not averse to lying when they are asked to appear likable and/or competent […] it seems that lies are common, even expected, in the interactions that serve as a launching pad for close relationships [my bold, US]. […] In one survey of college students, 92% admitted to lying to a romantic partner about sexual issues […] Lies don’t cease when relationships are designated as “close,” however; they just decrease in frequency. DePaulo and Kashy (1998) found both community leaders and students reporting that they told fewer lies (relative to the total number of interactions) to those with whom they had closer relationships. Lies occurred about once in every 10 interactions in a broad range of close relationships that included spouses, best friends, family, children, nonspouse romantic partners, and mothers. However, it is worth noting that lying in close relationships does not seem to be equally low for all types of relationships. The closeness of the relationship is only one factor governing the frequency of lying. More lies, for instance, were reportedly told to mothers and nonspouse romantic partners. One in every three transactions with nonspouse romantic partners were reported to involve lying. Emotional closeness can be a powerful deterrent to lying in close relationships, and when lies do occur, they are often troubling for the liar. […] close relationships create an environment in which any given lie can take a huge toll on the degree of closeness felt.”
“Lies attributed to people in close relationships are not always lies. The dialogue associated with these attributions, however, may profoundly affect relationship closeness. […] People in close relationships are familiar with their partner’s communication style and do not expect their partner to lie to them. Nevertheless, one’s motives for not saying something a partner thought should have been said, forgetting something a partner thought should have been reported, or misunderstanding something a partner thought should have been understood are all subject to attributions of deception. Accusing close relationship partners of lying when they don’t believe they have can generate relationship-altering dialogue.”
“Close relationships […] are especially fertile ground for studying what Werth and Flaherty (1986) called collusion and Barnes (1994) called connivance. Connivance occurs when individuals in close relationships know they are being deceived by their partner, but deceptively act as if they didn’t know. […] Research typically focuses on lies of commission – false accounts, information, and stories that are invented by the liar. However, distinctions between lies of commission that invent a new reality for the target versus lies that involve secrets or simply allow the target to continue believing something false may be of special interest in close relationships […] Levenger and Senn (1967) found that concealing negative feelings, particularly about their mates, was far more characteristic of satisfied spouses than dissatisfied ones. Metts (1989) found spouses more likely to conceal information than to make deliberately false statements. […] One common reason for lying is to support and sustain our partners – to avoid hurting them, to tell them what they want to hear, to build and maintain their self-esteem, to help them accomplish their goals, and to show concern for their physical and mental states. Indeed, DePaulo and Kashy (1998) found what they called “altruistic” lies to be the most common type of lie told to friends and best friends. Lies told to close relationship partners are usually viewed by the lie teller as altruistically motivated, guilt inducing, spontaneous, justified by the situation, and/or provoked by the lie receiver […] More satisfied couples may […] tend to create a new partner reality through what Murray and Holmes (1996) called “positive illusions” – seeing virtues in their partner that aren’t there, turning faults into virtues, constructing excuses for misdeeds, and so on. What may begin as lies of support or as positive illusions may later, with the effects of self-persuasion and/or the self-fulfilling prophecy, be viewed as fact”
“Liars often view their own behavior as far less harmful, offensive, and consequential than the target of the lie. Liars often describe extenuating circumstances that they view as justification for their lie(s), but targets often do not share those views […] Liars feel especially good about lies that make their partner feel better […] Sometimes lies are not uncovered, but suspicion has been aroused to such an extent that trust in one’s partner is negatively affected. To the extent that the lie or lies told have powerful effects on the liar (e.g., guilt, anger, fear, embarrassment), these effects may manifest themselves in almost any dialogue with the liar’s partner. The target of the lie may wonder why, for no apparent reason, his or her partner seems so irritable over the slightest things. […] liars will sometimes denigrate and distrust the target, trying to make themselves feel better by believing that their partner is just like they are – that they lie too, that they invited the lie by being such an easy dupe, that they created a situation where they were going to get just as much punishment for telling the truth as for lying, and so on. […] Serious lies are often told to cover major transgressions of the relationship such as infidelity.”
“when Metts (1989) asked people to describe a time when they didn’t tell their relationship partner the whole truth, over a third of them mentioned deceptions involving emotional information – for example, feelings of love and commitment.”
“Several studies portray the consequences of deception to be more substantial for women than men. […] Women […] seem to view deception as more unacceptable than men, see it as a more significant relational event, and react more strongly to its discovery. They report being more distressed and anxious than men on the discovery that their partner in a close relationship has lied to them […] and more tearful and apologetic than men for the serious lies they tell. They may also maintain their bitterness about the transgression for a longer period of time than men”
“we know relatively little about any kind of lie that has positive effects and any kind of truth that has negative effects in close relationships. Most surveys find that truth telling is considered a necessary feature in establishing and maintaining a close relationship [On the other hand “it seems that lies are common, even expected, in the interactions that serve as a launching pad for close relationships” […] “92% admitted to lying to romantic partners about sexual issues.” This stuff is obviously complicated], but most people in those same surveys are willing to admit that lying may play a worthwhile role in close relationships [so this makes sense]. It is possible, of course, that lies that provide the bonding elements for close relationships in everyday dialogue may not even be thought of as lies by the relationship partners.”
Finally, a few observations from the chapter on ‘Temptation and Threat: Extradyadic Relations and Jealousy’:
“whereas about 30% of Asian Americans feel that violence is justified in case of a wife’s sexual infidelity […], among Arab American immigrants 48% of the women and 23% of the men approve of a man slapping a sexually unfaithful wife, with 18% of the women even approving a man killing his wife if she were to have an affair (Kulwicki & Miller, 1999). In general, attitudes toward infidelity are more permissive among younger individuals, among the better educated and those from the upper middle class, among persons who are less religious, among those living in urban areas, and among those holding liberal political orientations” [semi-related and more recent data here and here – I post these links partly because they’re ‘semi-relevant’ in this context, but also partly because I haven’t commented on the Charlie Hebdo attack here on the blog and I thought this was a good place to add a little bit of data to that discussion in case people reading along here want it. Another ‘Hebdo-related observation’ is that Statistics Denmark concluded a few years back, on the basis of a (very large) survey (the sample size was much larger than what is usually required for a Danish sample to be considered ‘representative’; n = 2,792), that half of the Danish immigrants and descendants from muslim countries (Danish link) were in favour of making it illegal for movies and books to ‘attack islam’ (and no, the support for such a legal restriction on freedom of speech was not lower for descendants).]
“There is some evidence that individuals who engage in extradyadic sex are relatively often characterized by lower levels of wellbeing and mental health […], and this seems to apply in particular to women […] Especially wives low in conscientiousness, high in narcissism, and high in psychoticism […] or suffering from a histrionic personality disorder […] seem to be inclined to be unfaithful. They may do so because, in part, these personality characteristics reflect insecure attachment styles […]. Indeed, some studies have found that especially among women, an anxious–ambivalent attachment style is associated with a tendency to be unfaithful. […] extradyadic sex is more prevalent among individuals with a positive attitude toward sexuality. […] Several studies have found that, particularly among men, adultery often stems from feelings of sexual deprivation in the primary relationship. In contrast, among women emotional dissatisfaction with the relationship has been found to be related to adultery [I did not find this surprising, but it’s probably worth keeping in mind] […] lowered satisfaction, as well as lowered commitment have also been found to be important determinants of extradyadic sexual involvement or of the willingness to be involved in an extradyadic relationship […] In addition, there is evidence that […] extradyadic sex may be particularly likely to occur in relationships characterized by low dependency.”
“Most extradyadic relationships are kept secret from the primary partner, and even when this partner gets obvious clues that the other partner may be having an affair, such clues are often denied because the offended partners may not want to know they are being cheated on. […] In general, it seems that extradyadic sexual relationships will particularly likely lead to a divorce when they stem primarily from dissatisfaction with the primary relationship with the affair being a consequence rather than a cause of relational problems […]. There is evidence that even when the spouse accepts the extradyadic sexual involvement such as in sexually open marriages, relational and sexual satisfaction decreases substantially over time”
“several studies have found lowered self-esteem and increased jealousy to be related […] a substantial number of studies [have] found that, particularly among women, jealousy is related to low self-esteem […] for jealousy to occur, a rival is a necessary and defining condition. Overall, a rival who possesses qualities that are believed to be important to the opposite sex or to one’s partner tends to evoke more feelings of jealousy than a rival who does not possess those qualities […] individuals tend to report more jealousy as their rivals possess more self-relevant attributes, such as intelligence, popularity, athleticism, and certain professional skills […] Because jealousy is evoked by those characteristics that contribute most to the rival’s value as a partner, one would, from an evolutionary–psychological perspective, expect women to feel more jealous than men when their rival is physically attractive and men to feel more jealous than women when their rival possesses status-related characteristics. Several studies have found support for this hypothesis […] there is increasing evidence that, rather than evoking merely upset, sexual and emotional jealousy evoke different emotional responses. In general, emotional infidelity is more likely to evoke feelings of insecurity and threat whereas sexual infidelity is more likely to evoke feelings of betrayal, anger, and repulsion […] A recurrent finding is that, in response to a jealousy-evoking event, women in particular have the tendency to think that they are “not good enough.” […] many more men than women commit homicides out of jealousy […]. 
However, studies that have asked participants what they would do if a jealousy-evoking event would occur consistently show that women in particular are inclined to endorse aggressive action against their rival […] Possible explanations for this discrepancy are that women are more likely than men to admit intentions of violence toward their rival, women are less likely than men to convert their violent intentions into actual behavior, and, although women may physically injure their rivals, they do not kill them, whereas men do.”
Here’s my first post about the book. In this post I’ll talk some more about the findings described in the book; I have included more quotes from the book in this post than I did in the first post because it’s very time-consuming to blog books the way I’ve tried to do over the last week, and I’ll never cover anywhere near the amount of material I’d like to cover if I limit myself to doing things that way.
“experiments by Yukl (1974) traced the location of demands, goals, and limits over time. His data suggest that at the early stages of negotiation, demands are placed well in advance of goals and limits (called overbidding, which may reflect negotiator efforts to create an image of firmness). But over time, overbidding diminishes, and demands come close to or identical with goals. Goals, in turn, tend to approach limits, as wishful thinking becomes eroded. The upshot of these trends is that limits are usually the most stable and demands the least stable of demands, goals, and limits.
A large body of work suggests that the impact of goals on negotiation is similar to those of limits. Higher goals produce higher demands, smaller concessions, and slower agreements; because higher goals produce higher demands, they lead to larger profits if agreement is reached […] There is some evidence that an “anchoring-and-insufficient- adjustment” (Tversky & Kahneman, 1974) process may mediate the impact of goals on negotiation. […] anchoring effects hold for individuals as well as groups of negotiators, and generalize across novices (e.g., students) and professionals (Whyte & Sebenius, 1997).”
The book talks a bit about the dual concern model – see the wiki for a general description. The model is part of a literature which has focused on how the extent to which individuals care about the outcomes of others relates to their behaviours and outcomes during negotiations. Here’s a relevant quote:
“In a meta-analysis, De Dreu, Weingart, and Kwon (2000b) tested the effects of (a) individual differences (mainly social value orientation), (b) incentives, (c) instructions, and (d) implicit cues (group membership, future interaction, friend vs. stranger) on joint outcomes. For all four, effect sizes indicated that pro-social negotiators achieved higher joint outcomes than selfish negotiators and, importantly, there were no differences between the different operationalizations of social motive. This latter finding indicates a functional equivalence of various ways of implementing social motivation in negotiation.”
More specific findings in this area are probably more questionable than the meta-analytical results, but it’s for example noted in this part of the coverage that one study found that people who were more concerned about the outcomes of others (‘pro-social’) during negotiations saw their negotiation partners as more fair and trustworthy, made more generous starting offers and greater concessions, and yet ended up doing better in the final agreement. Another study found that pro-social individuals were more likely to correctly recall joint-gain task features, whereas people who cared less about the outcomes of others more accurately recalled self-gain features. I would caution that the meta-review finding described above – that the differences in negotiation outcomes between groups of pro-social and selfish negotiators seem to be similar regardless of how the pro-social behaviour comes about – does not necessarily imply that other features of the negotiation process will be identical across implementation strategies.
People who enter negotiations like to look strong and project an image of toughness, and: “there is evidence that conditions that help negotiators maintain this sense of strength, yet allow them to make concessions, such as the help of a third party, increase the likelihood of agreement”.
“As for the literature on bargaining games, the base finding concerning learning is that inexperienced players often do not know how to maximize their value, but often improve across trials.” However, it’s important to note that learning does not always lead to better outcomes: “one’s negotiation experience may also lead to less value creation. There are at least three kinds of reasons. First, prior experience with a skewed sample of negotiation situations (e.g., pure distributive haggling) may lead people to assume that value creation is not possible—and assuming it is not possible should make it less likely to occur […]. Second, prior experience with one kind of counterpart may make it difficult to create value with other kinds of counterparts because of misunderstandings […] Third, O’Connor and Arnold (2001) found that after an impasse, relative to those not experiencing an impasse, negotiators were less interested in working with their counterparts again, planned to share less information in the future, planned to behave less cooperatively, and felt that negotiation was a less effective means for resolving conflicts. And people appear to follow through with these behavioral intentions […], creating a self-reinforcing cycle of poor negotiated outcomes. In short, prior experience can yield lessons that are counterproductive.”
“broadly, there are two important conclusions […] concerning expertise and decision biases. First, practitioners with years of experience nonetheless show decision biases […]. Second, decision biases are surely more consequential for practitioners than novices because the former make more consequential decisions. […] there [has been] little attempt to define what constitutes expertise in decision making. For example, in the Northcraft and Neale (1987) study, the expert population was real estate agents with about 9 years of experience, engaging in about 16 transactions per year. This is only 144 total transactions — by contrast, Chase and Simon (1973) estimated that chess experts knew at least 50,000 chess patterns. Further, a given transaction does not provide all that much clear feedback, and there is little in the way of formal training to serve as a substitute. This is typical for negotiation as well — feedback is typically poor and subject to misinterpretation […], and formal training rarely exceeds a course or two. Impoverished training and poor feedback are troublesome because people’s intuitions are not particularly well honed for effective negotiation […], and expertise appears to require planning, monitoring, analyzing, and reflecting on one’s practice, not merely repeated performance […] trial and error is likely to be inefficient, can lead to misunderstandings, and is in itself a poor means of fostering reflection and reframing, given the paucity of information feedback alone provides. […] negotiators may more readily remember their interpretations about their negotiation than the perceptions on which the interpretations were based. This impedes learning, as the more one abstracts away from what happened, the more one is assimilating the experience to old categories (Argyris & Schon, 1996).”
“Most negotiators are neither novices nor experts. They have negotiated repeatedly in a particular setting […] For this reason, negotiators likely use situated, not general, concepts of negotiation […] Because negotiations occur in a wide variety of settings, people’s knowledge about negotiating is fragmented across situations […]. Not only are sophisticated negotiation strategies artificially limited in scope, people are also limited because they do not think of their actions in a variety of settings as all being negotiations. […] One result is that people’s assumptions about what a negotiation is are shaped by the limited situations they actually think about as being negotiations. For example, people may fail to claim value because they do not realize a situation is a negotiation and simply agree to another’s proposal (Babcock, Gelfand, Small, & Stayn, 2002).”
“Taken together, the articles reviewed here demonstrate that negotiators evaluate their own performance based both on their prenegotiation expectations and their perceptions of how opponents and similar (by role) others performed. […] Rationally, it should not matter how one’s opponent fares in a negotiation if one’s own interests are met, however, the empirical evidence reviewed above indicates that negotiators, quite irrationally, weigh their own success in negotiation against the perceived success of others.”
“Results indicate that a basis for a relationship (in-group status or self-disclosure) elicits positive affect within the negotiation, enhancing rapport and diminishing the likelihood of impasse. […] even trivial relationships facilitate rapport among negotiators, which may result in more cooperative behavior and better outcomes. […] The use of emotion-behaviors as tactical maneuvers in negotiation is often commented on and more frequently implied in discussion of emotion in negotiation. We are aware of just one published empirical paper (an edited volume chapter) that explicitly addressed the topic. Barry (1999) found that negotiators perceive the falsification of emotion (e.g., feigning pleasure, shock) more ethically appropriate than other deceptive negotiation tactics. Findings also suggest that negotiators differentiate between the strategic use of positive versus negative emotions, where feigning a negative emotion was considered less ethically appropriate than feigning a positive emotion.”
“An extensive literature has explored the effects of experienced mood or emotion on cognitive processes such as memory, information processing, and judgment. Several general patterns emerge from this research and may be summarized as follows […]: Affective experiences impact memory encoding and later recall; emotional impressions are often remembered more vividly than other details of social encounters. In addition, mood state during recall biases memory retrieval. Generally, happy (sad) memories are recalled when people are in positive (negative) moods. Information processing also seems to be guided by mood state; individuals in a given mood often seek out and pay greater attention to information that is congruent with that mood. Other research finds an increase in creativity and flexible problem solving when people are happy. […] In addition to the actual variability and stability of emotion over time, one’s predictions about affect in response to future events — “affective forecasting” — is likely to be important for decision making in negotiation. Gilbert, Wilson, and colleagues find consistent support for the theory that people overestimate the intensity and duration of their future emotional reactions to a particular event […] Bazerman and Chugh […] link these heightened expectations to a broader tendency on the part of negotiators and others to overfocus on a narrow subset of information.”
“Empirical evidence confirms what our everyday experience tells us: some people are more emotionally expressive than others. For example, women perceive themselves to be more skilled at expressing emotion, where men view themselves as more skilled at controlling emotion […]. And indeed, women do tend to be more facially expressive of emotions than men […]. Men, on the other hand, report hiding their emotions more than women […] Emotional suppression is physically challenging for the suppressor, resulting in increases in cardiovascular activation and blood pressure […]. Suppression is also cognitively demanding, negatively impacting memory […] Particularly relevant for negotiation is research suggesting that emotional suppression inhibits relationship formation.”
“The book covers a lot of the research that exists in this field, but the research that has been done is in my opinion not very impressive. Another general problem I have with the book is that the authors much too rarely comment upon methodological issues in the research they cover – they’ll frequently make grand conclusions based on just a few studies (often just one or two papers), the validity of which is completely unknown(/unknowable) given the coverage. A simple word search for ‘sample size’ in this book yields exactly zero hits. Occasionally big and wide-reaching conclusions (promoting further theorizing later on in the coverage) will be drawn from data/research which frankly quite obviously provides completely insufficient support for the conclusion being drawn.
Even when you feel inclined to trust the results of the research you’ll often feel that it’s not really telling you very much, because the authors mostly frame the findings in terms of ‘it has been found that there’s an effect’, rather than ‘the effect size was…’. Where effect sizes are mentioned, sample sizes aren’t (and confidence intervals aren’t mentioned either), so it’s really hard to critically evaluate the coverage and the research results included without actually reading the papers in the references; which in a way makes reading the book a bit superfluous – why would you read a book if you need to read the papers the book ‘covers’ anyway in order to figure out if you can trust what the authors are telling you in the book?
There’s some useful content in there, it’s not like this stuff is all totally useless – some of the specific observations included are quite likely sound and important/useful to know about. Some chapters are better than others. But especially the second half of this book was a disappointing read.”
In the post below I’ll talk a little bit about some of the more specific ideas/research they cover, but before I do that I’ll note that there’s a lot of stuff they do not cover; specifically they talk very little about bargaining models from microeconomics, mechanism design, contract theory and related stuff. It’s not that kind of book – the book deals with the psychology of negotiation. There are some economic concepts included, you can’t write a book like this without them, but it’s occasionally quite obvious that some of the researchers are not particularly familiar with even reasonably simple ‘formal models’ in micro (such as those you might encounter at the advanced undergraduate level or early graduate level as a student of economics); and that this unfamiliarity might have led them to overlook some important factors/ideas, in particular in the context of the role of uncertainty and risk in negotiations. At least in a couple of situations I found it somewhat strange that parallels to micro models and what they might have to say about this stuff were not drawn. But anyway, the book applies an almost purely behavioural economics/psychological approach to the topic at hand. This also means that if you’ve read along for a while here on this blog or you are familiar with this kind of research, some of the specific findings might not be that surprising to you. I’ve included them anyway, but as implied by the goodreads review above I maintain a skeptical mind – I’m quite open to the idea that some of the results in this book are simply wrong.
It’s noted early on that in two-party negotiations, negotiators tend to be more concessionary in positively-framed negotiations than in negatively-framed negotiations. They tend to be inappropriately affected by anchors (which among other things also translates into the person presenting the first offer in a negotiation tending to have an advantage over the other party), and when it comes to outcomes that favour themselves they tend to be overconfident and overly optimistic. They often frame the negotiation context in terms of a fixed pie even though the size of the pie is often partly endogenous (i.e. people tend to miss out in negotiations because they have a tendency to assume the other guy’s win is their loss, even if in many contexts the potential choice set will include deals which will benefit both parties due to the existence of different tradeoffs/valuations of specific variables between/among negotiators. People have been shown to make the assumption that ‘the pie is fixed’ in studies where by design it was not). Many studies have shown that people often escalate conflict when a rational analysis would be to change the strategy instead, and that they (very surprisingly!) interpret disputes in ways that favour themselves.
Positive mood may increase the tendency of a negotiator to select a cooperative negotiating strategy, which may lead to better negotiating outcomes. I’m more uncertain about this one than ‘the opposite’ result; that anger leads to poorer outcomes. Angry negotiators have been shown to be more likely to reject offers in ultimatum games, and anger seems to impair memory so that angry negotiators tend to forget details from negotiations. Angry people have also been shown to be less accurate when assessing the interests of the other party and to be more self-centered during negotiations. One study indicated that joint gains in negotiations involving angry people are lower than in negotiations involving controls.
Negotiations are complex interactions with a lot of uncertainty, and standard ‘cognitive miser’ models of human behaviour tell us that in such contexts we tend to cut down on uncertainty and simplify matters to ease decision-making and resolve tension associated with uncertainty e.g. by making use of cognitively available heuristics of various kinds, which may then influence negotiations in foreseeable ways. Like in other contexts, specific variables will affect the likelihood of people defaulting to heuristic reasoning; three specific variables are mentioned in this context in the book, the need for closure variable which I’ve talked about before, time pressure (less time for mental processing -> higher likelihood of relying on heuristics), and accuracy motivation (“Individuals differ in their accuracy motivation — or their desire to form accurate, rather than biased, judgments […] In general, the higher one’s level of accuracy motivation, the greater the likelihood that one will engage in systematic and thoughtful information processing […] Accuracy motivation has also been examined in negotiation contexts and has been found to reduce negotiators’ reliance on simple heuristics and improve the accuracy of their perceptions”).
What’s in focus for us during a negotiation matters a great deal, because when we focus on something specific we tend to have a hard time focusing on other stuff as well, since our attention span is limited; a few reviews specifically make the point in this context that “people often act on a limited set of data that prompts an affective response that cuts off cognitive deliberation.” It makes sense to believe that what’s most likely to come into focus in this context is stuff which is easily available to us; many predictions follow from this idea, some of which have been tested and are talked about elsewhere in the book. I think the authors here are making a similar point as I believe was made by a few of the authors in Yzerbyt et al., a point also popularized by people like Kahneman in popular science books like Thinking, Fast and Slow (I skimmed the first 100 pages of this book a while back, before concluding that reading the book would not be worth my time): that it may (always? often? in some contexts?) be faulty to conceive of the primary effect of affect on decision-making as affect merely influencing the decisive cognitive processes, rather than affect itself determining which decisions are made – affect may in some sense more or less bypass cognitive processes completely in the decision-making process (we might rationalize decisions later, but such rationalizations may have little to nothing to do with why the decisions were made). All kinds of stuff gets ignored when people try to cut down on the level of complexity and uncertainty they face during negotiations, and “[s]ubstantial research on the Acquiring a Company problem suggests that bounded awareness leads decision makers to ignore or simplify the cognitions of opposing parties as well as the rules of the game”. The choice set is also affected.
It’s cognitively expensive to consider what your opponent might be thinking, and it’s cognitively expensive to come up with and explore/analyze new/additional ideas for how to solve the problem at hand, so when stuff gets complicated you focus on yourself and your preferences, and you cut down on the amount of work that goes into figuring out why your opponent thinks the way he does and the number of alternatives you’re willing to consider.
When considering the potential deals that might result from a negotiation, people will usually consider a range of outcomes to be acceptable. At the top of that range you have the aspiration price and at the bottom you have the reservation price; a related important concept to be familiar with is the BATNA (Best Alternative To a Negotiated Agreement). It matters whether you focus on the upper range or the lower range of acceptable outcomes when you negotiate, and the authors argue that given the level of complexity and uncertainty involved in negotiations, people will tend to focus either on one or the other: “In general, negotiators achieve better outcomes for themselves when they focus on their aspiration price than when they focus on their reservation price”. A related point made here is that: “The more difficult the goal a negotiator is trying to achieve, the better the outcome obtained from the negotiation”. Presumably there are some relevant constraints at play here which they don’t talk about in the text; if you make unrealistic offers, it seems likely that this sort of behaviour would increase the likelihood that the other party walks away, leading to a poor outcome.
Although (or perhaps because?) they don’t talk about this in the book, I thought I should caution that I’m not convinced the above comments indicate that risk-averse individuals who focus on reservation price rather than aspiration price are not optimizing (‘are leaving money on the table’), or that a risk-averse individual will necessarily ‘do better’ by focusing on the aspiration price; the negotiation strategies people employ presumably in part reflect their risk profiles, and if you’re more risk averse than another individual then you may well prefer an outcome with a lower expected value. The impression I got from reading the chapter in question was that most readers would be very tempted to conclude from the coverage that it’s always better to focus on the aspiration price during negotiations; I’m not sure they’ve actually shown that this is the case. There may be some reasons for assuming it is not; for example they note later on in the chapter that people who achieve high outcomes during a negotiation have in multiple studies been found to be less satisfied with their outcomes than people who achieve worse outcomes – so the relationship between subjective and objective outcomes is perhaps different from what people might expect it to be. Of course how important such concerns are depends upon the context of the negotiation; in organizations you’ll often want people to optimize expected value, and (competent) owners will generally give people incentives to do so, but when people negotiate in other contexts other concerns may be more important. That said, a general point probably worth keeping in mind is that tradeoffs like these exist and influence outcomes. It’s easier to focus on your aspiration price (and get the other guy to focus on your aspiration price) if you’re the one to make the first offer; as already mentioned, people who make the initial offer in a negotiation do better than people who don’t.
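To make the risk-aversion point concrete, here’s a toy numeric sketch; the payoffs, probabilities and the square-root utility function are my own illustrative assumptions, not numbers from the book:

```python
import math

# Invented choice between two negotiation strategies:
# - Aspiration focus (risky): 60% chance of closing at 100,
#   40% chance the other party walks away (payoff 0).
# - Reservation focus (safe): settle for a sure 55.
risky_ev = 0.6 * 100 + 0.4 * 0   # expected value of the risky strategy
safe_ev = 55.0                   # expected value of the sure thing

# A risk-neutral negotiator maximizes expected value and picks the
# aspiration-focused strategy (60 > 55). A risk-averse negotiator
# (modelled here with a concave sqrt utility) compares expected
# utilities instead:
risky_eu = 0.6 * math.sqrt(100) + 0.4 * math.sqrt(0)  # 6.0
safe_eu = math.sqrt(55)                               # about 7.4

print(risky_ev > safe_ev)  # the risky strategy has higher expected value...
print(safe_eu > risky_eu)  # ...yet the risk-averse agent prefers the sure deal
```

So a negotiator who ‘leaves money on the table’ in expected-value terms may still be acting perfectly consistently with her own risk preferences.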
But they tend to be less happy about the outcome, and in particular people who make the initial offer tend to become dissatisfied with the result of the negotiation if an initial offer made by them is immediately accepted by the other party.
Behavioural differences in negotiation contexts are, it must be noted, not all dispositional in nature; situational factors may also affect preferences and behaviour in negotiation contexts in all kinds of ways. And people who negotiate will often neglect to take situational factors into account when making attributions during negotiations (…at least in Western societies, a later chapter adds; there it’s argued that in East Asian societies people are more likely than Westerners to explain the behaviour of negotiation partners in terms of situational rather than dispositional factors).
Okay, so like most other people you have the impression that how the other party behaves during a negotiation may affect how you behave during the negotiation. But are your ideas about how the other party’s behaviour might influence you correct? People have looked at this stuff, and here’s a relevant quote: “Research by Diekmann and her colleagues […] suggests that negotiators are aware that the behaviors of their counterparts will influence their negotiation behaviors, but that they are not always accurate in forecasting how these counterpart behaviors will affect them. In a series of studies, these researchers found that negotiators expected that they would behave more competitively when negotiating against a competitive opponent than a cooperative opponent. However, in actual negotiations, this did not occur.”
When I started out writing this post I did not intend to write more than one post about the book, but I see now that this will certainly be necessary if I’m to cover all the stuff I’d ideally like to cover. If I don’t cover the rest of the chapters, you should note that there’s a lot of stuff in there which I did not get to talk about here.
“Most elementary statistics books discuss inference for proportions and probabilities, and the primary readership for this monograph is the student of statistics, either at an advanced undergraduate or graduate level. As some of the recommended so-called ‘‘large-sample’’ rules in textbooks have been found to be inappropriate, this monograph endeavors to provide more up-to-date information on these topics. I have also included a number of related topics not generally found in textbooks. The emphasis is on model building and the estimation of parameters from the models.
It is assumed that the reader has a background in statistical theory and inference and is familiar with standard univariate and multivariate distributions, including conditional distributions.”
The above quote is from the book‘s preface. The book is highly technical – here’s a screencap of a page roughly in the middle:
I think the above picture provides some background as to why I do not think it’s a good idea to provide detailed coverage of the book here. Not all pages are that bad, but this is a book on mathematical statistics. The technical nature of the book made it difficult for me to know how to rate it – I like to ask myself when reading books like this one if I would be able to spot an error in the coverage. In some contexts here I clearly would not be able to do that (given the time I was willing to spend on the book), and when that’s the case I always feel hesitant about rating(/’judging’) books of this nature. I should note that there are pretty much no spelling/formatting errors, and the language is easy to understand (‘if you know enough about statistics…’). I did have one major problem with part of the coverage towards the end of the book, but it didn’t much alter my general impression of the book. The problem was that the author seems to apply (/recommend?) a hypothesis-testing framework for model selection, a practice which, although widely used, is frankly considered bad statistics by Burnham and Anderson in their book on model selection. In the relevant section of the book Seber discusses an approach to modelling which starts out with a ‘full model’ including both primary effects and various (potentially multi-level) interaction terms (he deals specifically with data derived from multiple (independent?) multinomial distributions, but where the data comes from is not really important here), and then proceeds to use hypothesis tests of whether interaction terms are zero to determine whether those terms should be included in the model. For people who don’t know: this model selection method is very commonly used, but it is a methodologically invalid approach to model selection, something Burnham and Anderson talk a lot about in their book.
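To sketch the distinction between the two approaches, here’s a toy example; the log-likelihoods and parameter counts are invented for illustration (and I restrict the test to 2 degrees of freedom, where the chi-square survival function has the closed form exp(−x/2)):

```python
import math

def lr_select(ll_reduced, ll_full, alpha=0.05):
    """Hypothesis-testing approach: keep the two extra interaction terms
    only if a likelihood-ratio test rejects the reduced model at level alpha.
    With 2 degrees of freedom, the chi-square survival function is exp(-x/2)."""
    stat = 2.0 * (ll_full - ll_reduced)
    p = math.exp(-stat / 2.0)
    return "full" if p < alpha else "reduced"

def aic_select(models):
    """Information-criterion approach: pick the model minimizing
    AIC = 2k - 2*loglik. `models` maps name -> (loglik, n_params)."""
    return min(models, key=lambda m: 2 * models[m][1] - 2 * models[m][0])

# Invented fitted log-likelihoods: a main-effects-only model with 4
# parameters vs. a model adding two interaction terms (6 parameters).
models = {"reduced": (-120.0, 4), "full": (-117.5, 6)}

print(lr_select(-120.0, -117.5))  # 'reduced': p is about 0.08, test fails to reject
print(aic_select(models))         # 'full': AIC 247 beats 248
```

The two procedures can disagree, as in this example; part of Burnham and Anderson’s argument is that model selection is an estimation problem, not a sequence of significance tests, so the hypothesis-testing route answers the wrong question.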
I assume I’ll be covering Burnham and Anderson’s book in more detail later on here on the blog, so for now I’ll just make this key point here and then return to that stuff later – if you did not understand the comments above you shouldn’t worry too much about it, I’ll go into much more detail when talking about that stuff later. This problem was the only real problem I had with Seber’s book.
Although I’ll not talk a lot about what the book was about (not only because it might be hard for some readers to follow, I should point out, but also because detailed coverage would take a lot more time than I’d be willing to spend on this stuff), I decided to add a few links to relevant stuff he talks about in the book. Quite a few pages in the book are spent on talking about the properties of various distributions, how to estimate key parameters of interest, and how to construct confidence intervals to be used for hypothesis testing in those specific contexts.
Some of the links below deal with stuff covered in the book, a few others however just deal with stuff I had to look up in order to understand what was going on in the coverage:
Binomial proportion confidence interval. (Coverage of the Wilson score interval, Jeffreys interval, and the Clopper-Pearson interval included in the book).
Fisher’s exact test.
Factorial moment-generating function.
Multidimensional central limit theorem (the book applies this, but doesn’t really talk about it).
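As a taste of the first of those links, here’s a small sketch of the Wilson score interval; this is my own implementation of the standard textbook formula, not code from the book:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion.
    z = 1.96 gives an approximate 95% interval."""
    phat = successes / n
    denom = 1 + z ** 2 / n
    center = (phat + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(phat * (1 - phat) / n + z ** 2 / (4 * n ** 2))
    return center - half, center + half

# 7 successes in 10 trials: the interval is wide, and (unlike the naive
# Wald interval) it stays inside [0, 1] even for extreme counts.
lo, hi = wilson_interval(7, 10)
print(round(lo, 3), round(hi, 3))  # roughly 0.397 0.892
```

The appeal of the Wilson interval over the naive ‘point estimate ± z standard errors’ construction is exactly the kind of thing the book spends pages on: better coverage properties for small n and for proportions near 0 or 1.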
In this post I’ll talk about four more chapters from the book – the chapters following the ones I covered in the first post. The first one of these deals with ‘Self-Disclosure in Personal Relationships’. The chapter takes a look at individuals’ decision-making in terms of what, when, to whom, and how much we tell other people about how we feel and think, and how this stuff varies with relationship status, relationship length, and other factors. Some theoretical terms/concepts of interest include disclosure reciprocity, the extent to which disclosures are intended for a single recipient only (‘personalistic’), and conversational responsiveness. Studies looking at the latter two variables have found that disclosure input uniquely intended for the disclosure recipient may increase liking, and that the same is true for displays of high responsiveness (responsiveness here = “the extent to which and the way in which one participant’s actions address the previous actions, communications, needs, or wishes of another participant in that interaction”). Responsiveness may be indicated e.g. by response content, response style, and timing. In the context of the reciprocity variable, it’s noted in the chapter that there’s a significant amount of mutuality in terms of how much relationship partners share with each other; relationship partners who tend to disclose much are also likely to be recipients of high levels of disclosure from others. In close relationships people don’t always reciprocate right away, but over time the self-disclosure patterns seem to be somewhat similar (again, they don’t quantify the effects so I’m not sure how much this tells us or how relevant this is).
A distinction can be made between disclosures which are voluntary and disclosures which are not; the chapter only deals with the former kind of disclosures. The Opener scale is a way to measure/instrumentalize the self-disclosure variable – here’s a link. There are various other dimensions/variables one can consider when dealing with this stuff; for example the reward value of disclosure (both positive and negative outcomes may be related to sharing information with others, and this outcome will depend both upon the content of the information shared and the reactions of the people with whom one shares the information). The amount of information shared, usually measured in terms of topic breadth and depth, is also a relevant dimension, as is whether or not the information shared is true. A distinction can be made between self-disclosures which are of a personal nature and disclosures which are of a relational nature; the latter type of self-disclosures focus on one’s relationship with the interaction partner. It is argued that both forms of self-disclosures have consequences for the development and maintenance of close relationships, which is hardly surprising.
In a meta-analytic review from the ’90s, three conclusions were established: we disclose more to people we like, we like people who disclose to us better, and we like people to whom we have disclosed personal information better. According to researchers in this area, “all or most relationship partners will avoid talking about or conceal (or both) certain facts or feelings from significant others.” The chapter on ‘Lying and deception in close relationships’ of course has a lot more about such stuff, so I’ll cover this kind of thing in more detail later. It’s noted that the physical environment in which people interact may affect disclosure patterns, e.g. due to stuff like privacy regulation; how much, and which type of, information people are willing to share with others depends upon where they are sharing it. In the context of a social interaction, when people disclose may greatly impact both the likelihood of self-disclosure and how the message is received. Sharing information early may make it impossible to chicken out later, whereas waiting for a while before self-disclosing may enable the discloser to try to figure out if the (potential) recipient is ‘ready for the information’, or whether s/he’s perhaps too preoccupied with his/her own problems for it to be a good time to disclose. Disclosing at the end of a social interaction will minimize interaction/follow-up questions, but may also hurt the recipient because there’s no time to process the information, ask questions, etc. A distinction can be made between self-disclosures which are planned and ones that are not; people may often prefer to plan embarrassing disclosures.
They also briefly talk about health stuff in the chapter. Here’s a relevant quote:
“The research on the link between disclosure and health often focuses on the possible health benefits of self-disclosure in coping with negative life events and negative thoughts and feelings. But there may be psychological benefits from disclosing about pleasant events and positive emotions (e.g., getting a good grade, birth of a child, lower tuition rates). Gable, Reis, Impett, and Asher (2004) presented data on the phenomenon of capitalization, dealing with the benefits of sharing good things with significant others. Disclosing about positive personal events was associated with increases in daily positive affect as well higher relationship well-being (including intimacy and marital satisfaction) and was even more beneficial if the listener responded in an active and constructive manner to the information (e.g., “asks a lot of questions and shows genuine concern,” p. 50)”.
The next chapter in the book deals with ‘Close Relationships and Social Support: Implications for the Measurement of Social Support’. One general finding in this area of research is that reports of perceived available social support are not strongly associated with how much support is actually received. And when dealing with well-being and health measures, only the level of available support (not the amount of support received) seems to matter. The chapter notes that how support is perceived depends upon how people feel about their relationships, and that several studies have shown that marital satisfaction predicts how partners interpret supportive behaviours. For example, husbands low in marital satisfaction have been found to perceive their partner’s behaviours in a more negative light than husbands with a higher level of marital satisfaction. So if people are dissatisfied in a relationship, supportive behaviours may not have the same effect they would have in a relationship where people are satisfied; in a relationship where people are dissatisfied, the recipient of social support may be less likely to notice that the other person is being nice to her, and perhaps she’ll interpret some neutral interactions as being negatively charged. Here’s a relevant quote from the next chapter of the book:
“The quality of their marital relationship is an important factor in how marital partners view each other and relationship-related negative events […]. Spouses in positive relationships seem to make attributions that do not locate the cause of the problem in their partner but see it as a temporary thing unlikely to recur. In contrast, spouses in distressed relationships are likely to locate the cause of an untoward event in their partner and see it as stable or lasting and affecting many aspects of the relationship rather than a specific situation.”
I think I talked about this stuff in my last post as well, but it’s a theme brought up a lot because it’s important. This stuff is part of the reason why many marriage/relationship intervention protocols are not purely behavioural in scope; how you behave matters, but how people think about how you behave matters a lot as well.
One of the reasons why social support provided may not be helpful in stressful situations is that support provided might impact self-esteem in a negative manner; if you think you can handle something on your own, being offered help by your partner might hurt you because it might indicate that your partner does not think very highly of your ability to cope on your own. Concerns such as these need to be taken into account when considering whether to provide support, and which type of support to provide. Providing support which is ‘silent’/hidden may sometimes be more useful than providing support which is openly acknowledged.
The next chapter in the book deals with ‘Understanding Couple Conflict’. They note that most research on that topic has been done on married couples, so we know less about conflict patterns early on in the course of relationships. Not surprisingly, longitudinal research has suggested that what couples experience conflict about may change over the course of the relationship; one study following some couples over time found that money, jealousy, and relatives were top issues when people were asked before marriage, whereas later on, early in their marriage and after the birth of the first child, the same couples mentioned money (again), sex, and communication as the top issues. Conflicts involving children may be particularly important in the context of people who remarry. The authors state that to attribute negative outcomes of marriage to conflict mismanagement would be too simplistic; conflict is linked to relationship satisfaction, but so is a lot of other stuff. Some have suggested that different variables predict relationship dissatisfaction and divorce, an idea also covered in some detail elsewhere in the volume (lots of people stay in bad relationships, so showing that a variable predicts relationship satisfaction is not the same thing as showing that it predicts relationship dissolution). It’s been noted in some of the research that negative conflict behaviours seem to be more important in terms of predicting outcomes than are positive conflict behaviours, and so people might do well to focus on the ratio of the two; and to keep in mind that a very hurtful comment might have a much larger impact than a positive interaction. Some behavioural danger signs which have been identified in longitudinal research and seem to predict marital outcomes years in advance are invalidation, escalation, negative interpretations, and withdrawal.
They note in the paper that poor conflict management seems to be linked to depression in the spouse, though they also mention that the link is likely to be bidirectional. They observe towards the end of the chapter that: “A major theme in understanding the impact of conflict on romantic relationships is realizing that the amount of conflict may be less important than how that conflict is managed.”
The next chapter deals with ‘Sexuality in Close Relationships’. The numbers in the text were old, which was annoying, but they’re better than nothing:
“research indicates that by the age of 20, 80% to 90% of people have had sex and that the mean age of first sexual intercourse is around 16 to 17, although slightly higher for older generations and slightly lower for Blacks and Hispanics (e.g., Laumann et al., 1994). One of the most important predictors of early initiation to sex is being in a close, romantic relationship” (I’ve previously blogged some Danish numbers on that kind of stuff – unfortunately the coverage in the link is in Danish. The Danish numbers indicate that at the age of 18-20, 85% of all women (claim to) have had sex, and that 99% of all women in my age group (26-30) have had sex). In the chapter they also mention in this context that according to data provided by the General Social Survey (1998 numbers – as I said, the data are old), the overall mean number of sexual partners during adulthood was 7 (the median is in my opinion a much better variable to use here than the mean because the data are skewed, but they don’t report the median). Men systematically report a higher number of sexual partners than do women, which the authors mention experts in this field believe is due to the combined effects of men systematically overreporting and women systematically underreporting their number of partners.
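To illustrate why the median would be more informative here, consider a small made-up sample of partner counts (numbers invented, chosen so the mean matches the GSS figure of 7):

```python
import statistics

# Invented right-skewed partner counts: most respondents report few
# partners, one reports very many, pulling the mean well above the
# typical value.
partners = [1, 1, 2, 2, 3, 3, 4, 5, 8, 41]

print(statistics.mean(partners))    # mean is 7, dominated by the outlier
print(statistics.median(partners))  # median is 3, the typical respondent
```

With skewed data like these, the mean of 7 describes almost nobody in the sample, which is exactly why reporting only the mean is a poor choice.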
It might also make sense to ask how often people have sex. Large national surveys (in the US) have tried to answer that question as well, and numbers on this stuff are also included in the coverage:
“Large national data sets have also assessed how often people have sex. Although there is considerable variability in reports of sexual frequency, the overall average (mean or median) has been found to around 1 to 2 times a week. For example, with data from the National Survey of Family and Households (NSFH), Call, Sprecher, and Schwartz (1995) found that married respondents had an overall mean frequency of 6.3 times per month. In the NHSLS, the mean frequency of sexual activity was slightly more than 6.5 times per month (Laumann et al., 1994). Smith (1998) reported that married respondents in the GSS data reported engaging in sexual intercourse an average of 61 times per year, which is slightly more than once a week. […] These national data sets and other largescale studies […], as well as smaller geographically limited samples […], also indicate that sex declines with age and number of years married. Thus, early in marriage, couples generally have sex frequently, but they do so less often over time.”
People who say they are sexually satisfied in their relationships are likely to report higher overall relationship satisfaction, which is very surprising. No, wait… Sexual frequency (how often people have sex) is positively related to sexual satisfaction and overall relationship satisfaction, which the authors suggest may account for the link between sexual satisfaction and general relationship satisfaction. I could easily think of some potential problems with that interpretation, but the research is not covered in detail and I have no way of knowing whether the ‘problem variables’ I’m thinking about have been addressed in the research. Health is an obvious problem variable, as unhealthy people may be more likely to have relationship trouble than healthy people, and as unhealthy people may be unable/less likely to have sex (so the variable of interest may be health, not sexual frequency). There’s also some stuff going on in terms of age etc. – young people have more sex, and relationship satisfaction tends to be high early on in the relationship (because those first years are the time period during which a newly-born child tends to be most vulnerable and needs the protection and support of both parents, the evolutionary biologist might add) and then it drops later on – it may make some sense to conceive of the frequency variable as at least partly an outcome variable to be explained, rather than a predictor variable.
The chapter also briefly looks at what they term sexual communication, and note that behaviours such as initiations and refusals of sexual interactions are related to relationship satisfaction (both ‘global’ relationship satisfaction and sexual satisfaction). When a partner more often refuses sex this is bad for relationship satisfaction, and when a partner initiates more often relationship satisfaction is likely to be higher. Another type of sexual communication is the expression of preferences (likes and dislikes) for sexual behaviour; unsurprisingly, couples who communicate more with each other about what they like and do not like in the sexual context have higher levels of relationship satisfaction. The link between sexual conflict (e.g. disagreements about frequency, duration, etc.) and relationship satisfaction seems to be somewhat unclear, though most of the research mentioned in the chapter seems to support the idea that sexual conflict may be bad for relationship satisfaction.
I think I’ve talked about this before in coverage of books on evolutionary biology, but I’m not sure; the chapter notes that studies have found that the jealousy of males is more likely to be linked to a partner’s sexual infidelity (due to ‘paternity considerations’ – as long as other males do not have sex with the partner, he does not run a risk of raising another man’s offspring), whereas the jealousy of females is more likely to be linked to emotional infidelity of a partner (due to ‘provider stuff considerations’ – given the evolutionary context, a man having extra-pair copulations (EPCs) might not have been a big deal if he’s not intending to provide for the offspring resulting from those EPCs, whereas on the other hand if he’s bonding emotionally with another female this might from her point of view be taken as an indication that he might actually leave her, which would be bad news for her and her offspring). I was debating whether to include this in the coverage in part because I seem to recall it being observed elsewhere that the differences are small and that they only really show up when researchers apply a forced-choice paradigm (‘if you’re allowed to categorize both types of infidelity as the ‘most awful thing ever’ (…read: use a Likert scale…), people will do that’), but I decided to do it anyway – but be careful not to interpret these differences as somehow suggesting/requiring that males in general will not care about a partner’s emotional infidelity and that females in general will not care if their partners sleep around, as long as they don’t become emotionally attached to the women they sleep with. That’s not how it works, and the researchers know this.
At the end of the chapter it’s observed that a few studies have found that teenagers are more likely to limit or delay sex if they feel close to their parents and perceive the parents as supportive, and if the parents exert moderate levels of control over their lives (as opposed to too little or too much).
“Most HLM [Hierarchical Linear Modeling] programs also allow for heterogeneous compound symmetry, which results in estimation of separate random effects across a distinguishing variable.”
“A distinctive feature of transactionalism is its emphasis on holism, with an underlying philosophy of science that highlights Aristotle’s “formal cause” over the more traditional focus on “efficient cause” (i.e., cause–effect) relationships. That is, the goal of a transactional approach is to elucidate the patterned and changing nature of holistic events involving people, psychological and temporal processes, physical features, and cultural–historical forces.”
Both quotes above are quotes from this book. Before I started reading it, I’d worried there’d be no content of the former kind, and a lot of content of the second kind. There are 41 chapters in this book, so there’s a huge amount of variation in terms of quality. In the end I ended up at two stars, though I think it’s probably closer to three stars than one. This is, however, an ‘average rating’. Two of the chapters in the book were frankly written in a way that made me want to beat up the authors. A few other chapters were on the other hand really nice. Most chapters were sort of okay, but nothing more than that.
It’s sort of painful to have to give a book like this two stars. It contains some really important and useful observations and insights, and if everybody had read a book like this, many personal relationships might be very different. You want to know some of the stuff included in this book’s coverage, you really do. But although there’s some really nice stuff in there, there’s also a lot of stuff that’s really not all that great. The overall level of coverage I did not find to be impressive, but I should emphasize that in some ways this book is much better than the rating I’ve given it might imply.
As readers who’ve read along for a while would know, the book is not the first book I’ve read on the topic, but it is certainly the longest. The book has roughly 790 pages of content and although some of those pages are references, there’s a lot of stuff in there. As I’ve mentioned before, I’ve lost all work I had done on approximately the first 300 pages because of computer trouble. This is annoying because some of those chapters are in my opinion some of the best in the book. I can’t face reading those chapters again now just in order to blog them, so instead of doing that I’ll jump right into the book and in this post cover stuff from the chapters in the middle. This is probably not the optimal approach, but I see no justifiable alternative. This book took a lot of time to read (with ~16 pages/hour and ~800 pages of text, it amounts to ca. 50 hours, with 20 pages/hour it’s ca. 40 hours; I didn’t time it but these estimates are probably not far off), and it does not help that some of the most technical chapters in the book were in the first part of it.
The book deals with both romantic relationships and other personal relationships like friendships – the word ‘relationships’ in the title does not equate to ‘romantic relationships’, but most of the research does admittedly deal mostly with this kind of stuff. Almost everybody gets married eventually and nearly everybody finds romantic partners early on in their lives (there are some numbers on this stuff in the first part, which I know I highlighted when I read the book – but I lost the highlights and so I’m going to skip those numbers for now), so the book has very little stuff about the people who don’t, which I found annoying but also predictable. The book told me a lot of stuff about how people behave/interact in social contexts I’ve never experienced, so I have to assume I now know more about what it’s like being in a (/various types of) relationship(/s) than I used to. Some people might argue I should be one of the last people in the world to read a book like this because a lot of the knowledge included in the book is frankly completely irrelevant; on the other hand I like to learn more about how people work.
One annoying feature of the book I did not mention in my goodreads review is that it seems many of the authors did not read the other contributions in the volume, so that you’ll sometimes get an explanation of what concept x or theory y is all about, even if you were told basically the same thing by a different author 100 pages ago. I’m not sure it matters that much and in some cases it’s probably perfectly okay to cover a topic again even if other authors have talked about it before, because it might not be obvious how concept x relates to the new concept y they talk about in this chapter; but I did occasionally find this approach slightly irritating because it feels as if some of the authors repeat themselves this way (they’re not repeating themselves; technically they’re repeating what other people have already said, but if you don’t care who wrote a specific chapter in a book, it amounts to the same thing). In the specific context here, where I’m covering ‘stuff in the middle’, this behaviour on the part of the authors is probably a good thing, as the authors tend not to take it for granted that you have read all the chapters that came before (each chapter is reasonably self-contained), but I did find it annoying while reading the book.
In the coverage below I will talk about some of the key points made in a few specific chapters. I’ll skip a lot of stuff and try only to include observations which I found interesting. I ‘covered’ 6 chapters in this post, in the sense that I started on page 293 and then covered the next 6 chapters – however I only included material from four of these chapters. The chapters not covered were about ‘The Intimate Same-Sex Relationships of Sexual Minorities’, a topic I frankly couldn’t care less about, and ‘Physiology and Interpersonal Relationships’. The latter chapter was about stuff like how things like ‘social connectedness’ in middle age relates to blood pressure and heart disease (if you have a social network you do better on some health metrics), and how one can use various physiological variables to gauge e.g. emotional responses in a research setting. Some of that stuff I’ve talked about before here on the blog (e.g. here), other parts of it I just didn’t find interesting. I covered roughly 120 pages in this post, meaning that if all my posts about the book cover that amount of content I’ll need another 5-6 posts to deal with all the material included in the book. I don’t think it’s likely I’ll write that many posts about the content of this book, but we’ll see…
In the chapter on ‘Family Relationships and Depression’, the authors talk about how relationships and depression relate to each other. It’s observed that marital context and marital events impact/predict (the latter word perhaps justified via the use of longitudinal studies) depressive symptoms to some extent. I should add that a general problem I have with a lot of the coverage in this book is that these authors implicitly focus a great deal on whether or not a link/an effect has been established (was the result ‘significant’?), but much less on how large the effect size actually is and how it might vary across subgroups and similar more relevant considerations (multiple statistics books I’ve read recently and one book I’m currently reading have focused on – and criticized – such behaviours on the part of researchers; see e.g. Borenstein & Hedges or Burnham and Anderson, so I tend to take note of such issues these days); as is often the case in publications like these, authors rarely report sample sizes unless they are large and impressive. Some of the results in the book are presumably well supported, as they come from hundreds of studies and many thousands of individuals, but even in some of those cases I’d not be surprised if external validity might still sometimes be quite low, because most of the studies involved are the results of research on 20-year-old American psychology students, who may in some respects be quite different from 50-year-olds who’ve been married for two decades. In some contexts such concerns are much more relevant than they are in other contexts; it should be noted that in some specific contexts a lot of work has actually been put into trying to avoid methodological pitfalls which people who have not read the literature may still consider to be adequate reasons for discarding the research.
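The significance-vs-effect-size point can be made concrete with a bit of arithmetic (the numbers below are made up for illustration): for two independent groups of equal size n and unit variance, the t statistic is approximately d·√(n/2), where d is the standardized mean difference (Cohen’s d). So with a large enough sample, even an effect far too small to matter in practice comes out as impressively ‘significant’:

```python
from math import sqrt

def t_from_d(d, n_per_group):
    # For two independent, equal-sized groups with unit variance,
    # the t statistic is approximately d * sqrt(n/2).
    return d * sqrt(n_per_group / 2)

# A tiny effect (Cohen's d = 0.05) that is substantively negligible...
tiny_effect = 0.05

# ...is not 'significant' (|t| < 1.96) with n = 100 per group,
# but comfortably 'significant' with n = 10,000 per group:
print(round(t_from_d(tiny_effect, 100), 2))    # ~0.35
print(round(t_from_d(tiny_effect, 10000), 2))  # ~3.54
```

The effect is exactly as small in both cases; only the sample size differs, which is why a bare ‘significant’ verdict without an effect size tells you very little.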
In some research areas in this field people have been doing longitudinal studies for decades, and these studies have involved a lot of people. So no, if you’re looking at divorce risk models you’ll not be limited to cross-sectional research on college students; far from it. On the other hand there’s presumably still lots of stuff these studies have not accounted for or dealt with in an appropriate manner, and there are other areas of research in this field where somewhat ‘basic issues’ are likely to be of much greater concern.
Anyway, back to the depression/family relationship stuff. They note in the chapter that children who perceive their parents as controlling/intrusive/less warm and supportive are at greater risk for depression, and that this excess risk persists into young adulthood (they don’t even attempt to quantify the effects – I’ll stop commenting upon this now, but you should know that this annoys me, because it does). They note that in rat models it has been shown that maltreatment of pups can lead to enhanced glucocorticoid feedback sensitivity (you can think of this as increased stress sensitivity), which adds a biological basis for interpreting some of the observations. It’s obvious that the link between family distress and depression is bidirectional, and they note this in the chapter as well as talk about some of the reasons why this is the case – a quote seems relevant here:
“in his review of self-propagating processes in depression, Joiner (2000) highlighted the propensity for depressed persons to seek negative feedback, to engage in excessive reassurance seeking, to avoid conflict and so withdraw, and to elicit changes in the partner’s view of them. In each case, the behavior resulting from the individual’s depression carries the potential to generate increased interpersonal stress or to shift the response of others in a negative direction. Joiner suggested that increased interpersonal negativity, in turn, helps maintain depressive symptoms.”
In the next chapter, on ‘Communication: basic properties and their relevance to relationship research’, the authors talk about the so-called ‘demand-withdraw’ pattern, where one partner communicates in ‘demanding’ ways and the other partner tries to avoid the conversation/withdraw. This pattern has received some attention because it seems to predict marital (dis)satisfaction. Some studies have found that females are more likely to demand and males are more likely to withdraw, though this seems to depend upon the topics discussed and how important the topic is to the parties involved. There’s more on this topic later in the book which one might include here as well, but I decided to cover it later (perhaps in another post) as the communication chapter has a lot of other good stuff which it makes sense to cover first and as it’s really bothersome to jump around in a book like this while providing coverage as a lot of topics will be covered/mentioned in more than one chapter. The chapter talks about a dynamic at least conceptually related to the depression-research above, that people like to behave in ways that confirm the perceptions they have about how they relate to others; it’s far from the only chapter to note that how people think about their relationships with others can have substantial effects on how those relationships develop.
The chapter talks about some stuff which was also covered in Hargie – one key point being that communication is really complicated and that communication tends to convey multiple messages simultaneously at different levels. The chapter talks about one conceptualization of extra-linguistic features involved in conversations which is based on functional groupings, where seven different interactive functions are identified; these involve functions like intimacy, impression management, control, and dominance-power. The groupings seemed to me slightly arbitrary (there’s a grouping called ‘positive reinforcement’ – what about ‘negative reinforcement’?), but it’s certainly the case that conversations often involve a lot of stuff we’re not really aware of, and that a lot of that stuff is automatic. Given how much stuff is actually going on beneath the surface when we’re communicating with each other, a relevant question seems to be how we manage to attend to all the relevant signals, or how we figure out which signals are relevant – the question is asked in the chapter, though no firm answer is given. They observe that people are often extremely selective about which signals they pay attention to and that they employ a number of mental shortcuts that help them make sense of what’s going on. A related observation from the book:
“Communication is inherently strategic in the sense that people communicate for a purpose (to fulfill needs) and that symbols are selected in a manner that is responsive to constraints (e.g., social appropriateness) and adjusted to purposes (e.g., giving comfort) on an ongoing, moment-to-moment basis. […] much behavior that is goal-dependent, monitored, and adjusted on an ongoing basis occurs outside awareness. Such behavior is not limited to communication behaviors and routines that are initially mindful and then become automated through overlearning […] Rather, most communication “strategies” are tacitly acquired and employed, in the same manner that individuals acquire and appropriately use language rules without ever being directly cognizant of them”
The next chapter also talks about these sorts of things – here’s a relevant quote:
“The extent to which relationship events are subject to in-depth conscious analysis will vary considerably depending on the stage of relationship, individual differences, and the situational context. In long-term stable relationships, a great deal of communication will become routine, resulting in overlearned and stereotypical sequences of behavior. Two kinds of events have been shown to snap people back into conscious, controlled cognition (often accompanied by emotion): negative events and unexpected events (Berscheid, 1983; Fletcher & Thomas, 1996).”
People in relationships are biased in the ways they interpret communication from the people they know. Research has found that people in dissatisfying relationships tend to code communication in dissimilar ways, so that the receiver might attribute negative intent to a message where no negative intent was intended on the part of the sender. People in such relationships may also make self-serving attributions about all kinds of relationship-related features, like which partner is disclosing more, who’s being attentive (and who’s not), who’s collaborating and who’s criticizing, etc. Several studies have found small or no associations between the amount of information disclosed during communication and the level of mutual understanding possessed by the individuals involved, which is thought to be due to ‘motivated misunderstanding’; the idea being that people are motivated for various reasons to hold/maintain inaccurate conceptions about others. An important observation is that such motivated misunderstandings and biases seem to often be adaptive in the relationship context; there are all kinds of ways in which people who think their partners are (way more) awesome (than they really are) do better than people who don’t think that way (more on this below). Perceptions are really important. A few studies have also found that people seem to lack accurate meta-knowledge about their degree of understanding of others, in the sense that the level of confidence in people’s inferences does not predict empathetic accuracy (I did not find this surprising – accurate person perception is hard, and some variables are much more difficult to deal with than are others). 
A final key point from this chapter is that a study which looked at those things found no differences in marital satisfaction between two groups of ‘skilled’ and ‘unskilled’ communicators; they suggested that unhappy couples with good communication skills may simply use their communication skills to more effectively hurt each other. A not really important point but one I like to include before moving on is that the chapter includes coverage of a paper by Duck and Pond, which I thought was amusing.
In the next chapter, on ‘Social Cognition in Intimate Relationships’, they talk some more about how people in happy and unhappy relationships think about the world in different ways. They note that people in happier and more stable relationships exaggerate the extent to which they are similar to their partners, the extent to which they were happy in the past (during the relationship), the positive qualities of the partner, and how much their current partner resembles their ideal partner (“to name but a few” of the relevant findings in this area of research). In the context of attachment theory, one study found that anxiously attached individuals tend to use negative/pessimistic causal attributions when explaining partner behaviour – attributions which are stable, global, and internal – whereas securely attached individuals low in attachment anxiety and avoidance offered charitable explanations of behaviour which were unstable, specific and external. So the insecurely attached individual will explain your lack of attention (or perceived lack of attention) as a result of you being a jerk who’ll never change, whereas the securely attached individual might explain it by you having had a bad day at work, or a cold. The study is not the only one of its kind: “a large body of research has demonstrated that attachment working models predict behavior, such as communication, conflict-resolution style, and support seeking and giving, which, in turn, influence relationship quality and satisfaction”.
The next chapter, ‘Emotion in theories of close relationships’, included some really neat observations which sort of relate to stuff covered in the last part of Aureli et al. A key argument is that we as humans use emotions to help us keep track of whether people are behaving nicely towards us or not. Social exchange models of evolutionary biology maintain that we somehow keep track of whether or not we benefit from a specific relationship, or whether we perhaps put in more than we get out of the relationship. I think one of the reasons why people often find such conceptual models problematic to some extent is that it’s hard to figure out just how people ‘keep score’ of social exchanges the way you’d expect them to according to the theory; we don’t start up the computer and run our keeping-track-of-the-relationship-status-algorithm in Excel to figure out if we’re ‘overcontributing’ every time we fetch a cup of coffee for our partner or colleague (and what did people do before computers? Or coffee makers..?). One counterpoint which is not included in the coverage here but which I’ve seen elsewhere multiple times is the observation that, given significant selection pressures, even very simple organisms are really quite capable of displaying behaviours which look quite sophisticated to the outside observer, even if the organisms themselves are quite clearly unaware of the fact that they’re doing this. But considering emotional output as decision variables facilitating decision-making also helps make the ‘computational problems’ associated with such models easier to deal with. A few quotes from the chapter:
“Evolutionary theorists argue that emotions are hardwired “programs” that detect events that have recurred repeatedly over human evolution (e.g., the presence of a potential mate or rival; abandonment). Such events trigger discrete emotion programs and related perceptual, motivational, cognitive, and behavioral subprograms, selected over time as the most adaptive for dealing with such events […] Cosmides and Tooby (2000) argued from an evolutionary perspective that we must keep track of payoffs to make effective decisions about who to trust and how much to trust them. In survival terms, gullible or infinitely forgiving individuals are disadvantaged. […] Evolution has, in effect, provided an affect-based accounting system that interfaces nicely with social exchange theory. From this perspective, rather than computing profits and losses, people in close relationships may accumulate good and bad feelings toward others. […] In social exchange theories, specifically equity theory, emotions play an explicit role as signals or outcomes of unfair exchanges. […] Underbenefited people feel angry, overbenefited people feel guilty, and both are motivated to set things right (although perhaps not so much the guilty as the angry). Emotion theorists have no quarrel with the basic conclusion, but they suggest complications because of the complex appraisal processes underlying emotional states […]. For example, rather than feeling angry, underbenefited people may feel depressed if they believe they are helpless, hurt if they think the neglect is intentional, indifferent if they take it for granted, or guilty if they feel responsible for the inequity (Sprecher, 2001). Rather than guilt, overbenefited people may feel gratitude, hubris, contempt, happiness, or pride […] It depends on how they appraise the situation.”
A postscript completely unrelated to the book coverage above: I now once again have access to a computer on which I can actually blog books (it was not a coincidence that the first few posts this year were not book-related), so I’ll try to blog more ‘book stuff’ in the days to come.
i. “Years of love have been forgot, in the hatred of a minute.” (Edgar Allan Poe)
ii. “I am lonely, yet not everybody will do. I don’t know why, some people fill the gaps and others emphasize my loneliness.” (Anais Nin)
iii. “You don’t remember what happened. What you remember becomes what happened.” (John Green)
iv. “My best friend is he who rights my wrongs or reproaches my mistakes.” (José de San Martin)
v. “The covers of this book are too far apart.” (Ambrose Bierce)
vi. “Books are the quietest and most constant of friends; they are the most accessible and wisest of counsellors, and the most patient of teachers.” (Charles William Eliot)
vii. “You will get little or nothing from the printed page if you bring it nothing but your eye.” (Walter Pitkin)
viii. “It has been a thousand times observed, and I must observe it once more, that the hours we pass with happy prospects in view are more pleasing than those crowned with fruition.” (Oliver Goldsmith)
ix. “Where it is a duty to worship the sun it is pretty sure to be a crime to examine the laws of heat.” (John Morley)
x. “It is not enough to do good; one must do it the right way.” (-ll-)
xi. “Majorities are generally wrong, if only in their reasons for being right.” (George Saintsbury)
xii. “She was much loved by everybody, except her husband.” (said about Peter the Great’s first wife, Eudoxia Lopukhina – Will Cuppy, The Decline and Fall…)
xiii. “mankind always seeks an explanation for everything, whether it be true or false” (José Saramago)
xiv. “Although there exist many thousand subjects for elegant conversation, there are persons who cannot meet a cripple without talking about feet.” (Ernest Bramah)
xv. “Why does a man take it for granted that a girl who flirts with him wants him to kiss her — when, nine times out of ten, she only wants him to want to kiss her?” (Helen Rowland)
xvi. “Discussion is an exchange of knowledge; argument an exchange of ignorance.” (Robert Quillen)
xvii. “You can fool too many of the people too much of the time.” (James Thurber)
xviii. “Most of our ancestors were not perfect ladies and gentlemen. The majority of them weren’t even mammals.” (Robert Anton Wilson)
xix. “Anyone can do any amount of work provided it isn’t the work he is supposed to be doing at the moment.” (Robert Benchley)
xx. “History is a strange experience. The world is quite small now; but history is large and deep. Sometimes you can go much farther by sitting in your own home and reading a book of history, than by getting onto a ship or an airplane and traveling a thousand miles. When you go to Mexico City through space, you find it a sort of cross between modern Madrid and modern Chicago, with additions of its own; but if you go to Mexico City through history, back only 500 years, you will find it as distant as though it were on another planet: inhabited by cultivated barbarians, sensitive and cruel, highly organized and still in the Copper Age, a collection of startling, of unbelievable contrasts.” (Gilbert Highet)
i. Pendle witches.
“The trials of the Pendle witches in 1612 are among the most famous witch trials in English history, and some of the best recorded of the 17th century. The twelve accused lived in the area around Pendle Hill in Lancashire, and were charged with the murders of ten people by the use of witchcraft. All but two were tried at Lancaster Assizes on 18–19 August 1612, along with the Samlesbury witches and others, in a series of trials that have become known as the Lancashire witch trials. One was tried at York Assizes on 27 July 1612, and another died in prison. Of the eleven who went to trial – nine women and two men – ten were found guilty and executed by hanging; one was found not guilty.
The official publication of the proceedings by the clerk to the court, Thomas Potts, in his The Wonderfull Discoverie of Witches in the Countie of Lancaster, and the number of witches hanged together – nine at Lancaster and one at York – make the trials unusual for England at that time. It has been estimated that all the English witch trials between the early 15th and early 18th centuries resulted in fewer than 500 executions; this series of trials accounts for more than two per cent of that total.”
“One of the accused, Demdike, had been regarded in the area as a witch for fifty years, and some of the deaths the witches were accused of had happened many years before Roger Nowell started to take an interest in 1612. The event that seems to have triggered Nowell’s investigation, culminating in the Pendle witch trials, occurred on 21 March 1612.
On her way to Trawden Forest, Demdike’s granddaughter, Alizon Device, encountered John Law, a pedlar from Halifax, and asked him for some pins. Seventeenth-century metal pins were handmade and relatively expensive, but they were frequently needed for magical purposes, such as in healing – particularly for treating warts – divination, and for love magic, which may have been why Alizon was so keen to get hold of them and why Law was so reluctant to sell them to her. Whether she meant to buy them, as she claimed, and Law refused to undo his pack for such a small transaction, or whether she had no money and was begging for them, as Law’s son Abraham claimed, is unclear. A few minutes after their encounter Alizon saw Law stumble and fall, perhaps because he suffered a stroke; he managed to regain his feet and reach a nearby inn. Initially Law made no accusations against Alizon, but she appears to have been convinced of her own powers; when Abraham Law took her to visit his father a few days after the incident, she reportedly confessed and asked for his forgiveness.
Alizon Device, her mother Elizabeth, and her brother James were summoned to appear before Nowell on 30 March 1612. Alizon confessed that she had sold her soul to the Devil, and that she had told him to lame John Law after he had called her a thief. Her brother, James, stated that his sister had also confessed to bewitching a local child. Elizabeth was more reticent, admitting only that her mother, Demdike, had a mark on her body, something that many, including Nowell, would have regarded as having been left by the Devil after he had sucked her blood.”
“The Pendle witches were tried in a group that also included the Samlesbury witches, Jane Southworth, Jennet Brierley, and Ellen Brierley, the charges against whom included child murder and cannibalism; Margaret Pearson, the so-called Padiham witch, who was facing her third trial for witchcraft, this time for killing a horse; and Isobel Robey from Windle, accused of using witchcraft to cause sickness.
Some of the accused Pendle witches, such as Alizon Device, seem to have genuinely believed in their guilt, but others protested their innocence to the end.”
“Nine-year-old Jennet Device was a key witness for the prosecution, something that would not have been permitted in many other 17th-century criminal trials. However, King James had made a case for suspending the normal rules of evidence for witchcraft trials in his Daemonologie. As well as identifying those who had attended the Malkin Tower meeting, Jennet also gave evidence against her mother, brother, and sister. […] When Jennet was asked to stand up and give evidence against her mother, Elizabeth began to scream and curse her daughter, forcing the judges to have her removed from the courtroom before the evidence could be heard. Jennet was placed on a table and stated that she believed her mother had been a witch for three or four years. She also said her mother had a familiar called Ball, who appeared in the shape of a brown dog. Jennet claimed to have witnessed conversations between Ball and her mother, in which Ball had been asked to help with various murders. James Device also gave evidence against his mother, saying he had seen her making a clay figure of one of her victims, John Robinson. Elizabeth Device was found guilty.
James Device pleaded not guilty to the murders by witchcraft of Anne Townley and John Duckworth. However he, like Chattox, had earlier made a confession to Nowell, which was read out in court. That, and the evidence presented against him by his sister Jennet, who said that she had seen her brother asking a black dog he had conjured up to help him kill Townley, was sufficient to persuade the jury to find him guilty.”
“Many of the allegations made in the Pendle witch trials resulted from members of the Demdike and Chattox families making accusations against each other. Historian John Swain has said that the outbreaks of witchcraft in and around Pendle demonstrate the extent to which people could make a living either by posing as a witch, or by accusing or threatening to accuse others of being a witch. Although it is implicit in much of the literature on witchcraft that the accused were victims, often mentally or physically abnormal, for some at least, it may have been a trade like any other, albeit one with significant risks. There may have been bad blood between the Demdike and Chattox families because they were in competition with each other, trying to make a living from healing, begging, and extortion.”
This article is the only one of the five ‘main articles’ in this post which is not a featured article. I looked this one up because the Burnham & Anderson book I’m currently reading talks about this stuff quite a bit. The book will probably be one of the most technical books I’ll read this year, and I’m not sure how much of it I’ll end up covering here. Basically most of the book deals with the stuff ‘covered’ in the (very short) ‘Relationship between models and reality’ section of the wiki article. There are a lot of details the article left out… The same could be said about the related wiki article about AIC (both articles incidentally include the book in their references).
The first thing that would spring to mind if someone asked me what I knew about it would probably be something along the lines of: “…well, it’s huge…”
…and it is. But we know a lot more than that – some observations from the article:
“The atmosphere of Jupiter is the largest planetary atmosphere in the Solar System. It is mostly made of molecular hydrogen and helium in roughly solar proportions; other chemical compounds are present only in small amounts […] The atmosphere of Jupiter lacks a clear lower boundary and gradually transitions into the liquid interior of the planet. […] The Jovian atmosphere shows a wide range of active phenomena, including band instabilities, vortices (cyclones and anticyclones), storms and lightning. […] Jupiter has powerful storms, always accompanied by lightning strikes. The storms are a result of moist convection in the atmosphere connected to the evaporation and condensation of water. They are sites of strong upward motion of the air, which leads to the formation of bright and dense clouds. The storms form mainly in belt regions. The lightning strikes on Jupiter are hundreds of times more powerful than those seen on Earth.” [However do note that later on in the article it is stated that: “On Jupiter lightning strikes are on average a few times more powerful than those on Earth.”]
“The composition of Jupiter’s atmosphere is similar to that of the planet as a whole. Jupiter’s atmosphere is the most comprehensively understood of those of all the gas giants because it was observed directly by the Galileo atmospheric probe when it entered the Jovian atmosphere on December 7, 1995. Other sources of information about Jupiter’s atmospheric composition include the Infrared Space Observatory (ISO), the Galileo and Cassini orbiters, and Earth-based observations.”
“The visible surface of Jupiter is divided into several bands parallel to the equator. There are two types of bands: lightly colored zones and relatively dark belts. […] The alternating pattern of belts and zones continues until the polar regions at approximately 50 degrees latitude, where their visible appearance becomes somewhat muted. The basic belt-zone structure probably extends well towards the poles, reaching at least to 80° North or South.
The difference in the appearance between zones and belts is caused by differences in the opacity of the clouds. Ammonia concentration is higher in zones, which leads to the appearance of denser clouds of ammonia ice at higher altitudes, which in turn leads to their lighter color. On the other hand, in belts clouds are thinner and are located at lower altitudes. The upper troposphere is colder in zones and warmer in belts. […] The Jovian bands are bounded by zonal atmospheric flows (winds), called jets. […] The location and width of bands, speed and location of jets on Jupiter are remarkably stable, having changed only slightly between 1980 and 2000. […] However bands vary in coloration and intensity over time […] These variations were first observed in the early seventeenth century.”
“Jupiter radiates much more heat than it receives from the Sun. It is estimated that the ratio between the power emitted by the planet and that absorbed from the Sun is 1.67 ± 0.09.”
“Wife selling in England was a way of ending an unsatisfactory marriage by mutual agreement that probably began in the late 17th century, when divorce was a practical impossibility for all but the very wealthiest. After parading his wife with a halter around her neck, arm, or waist, a husband would publicly auction her to the highest bidder. […] Although the custom had no basis in law and frequently resulted in prosecution, particularly from the mid-19th century onwards, the attitude of the authorities was equivocal. At least one early 19th-century magistrate is on record as stating that he did not believe he had the right to prevent wife sales, and there were cases of local Poor Law Commissioners forcing husbands to sell their wives, rather than having to maintain the family in workhouses.”
“Until the passing of the Marriage Act of 1753, a formal ceremony of marriage before a clergyman was not a legal requirement in England, and marriages were unregistered. All that was required was for both parties to agree to the union, so long as each had reached the legal age of consent, which was 12 for girls and 14 for boys. Women were completely subordinated to their husbands after marriage, the husband and wife becoming one legal entity, a legal status known as coverture. […] Married women could not own property in their own right, and were indeed themselves the property of their husbands. […] Five distinct methods of breaking up a marriage existed in the early modern period of English history. One was to sue in the ecclesiastical courts for separation from bed and board (a mensa et thoro), on the grounds of adultery or life-threatening cruelty, but it did not allow a remarriage. From the 1550s, until the Matrimonial Causes Act became law in 1857, divorce in England was only possible, if at all, by the complex and costly procedure of a private Act of Parliament. Although the divorce courts set up in the wake of the 1857 Act made the procedure considerably cheaper, divorce remained prohibitively expensive for the poorer members of society.[nb 1] An alternative was to obtain a “private separation”, an agreement negotiated between both spouses, embodied in a deed of separation drawn up by a conveyancer. Desertion or elopement was also possible, whereby the wife was forced out of the family home, or the husband simply set up a new home with his mistress. Finally, the less popular notion of wife selling was an alternative but illegitimate method of ending a marriage.”
“Although some 19th-century wives objected, records of 18th-century women resisting their sales are non-existent. With no financial resources, and no skills on which to trade, for many women a sale was the only way out of an unhappy marriage. Indeed the wife is sometimes reported as having insisted on the sale. […] Although the initiative was usually the husband’s, the wife had to agree to the sale. An 1824 report from Manchester says that “after several biddings she [the wife] was knocked down for 5s; but not liking the purchaser, she was put up again for 3s and a quart of ale”. Frequently the wife was already living with her new partner. In one case in 1804 a London shopkeeper found his wife in bed with a stranger to him, who, following an altercation, offered to purchase the wife. The shopkeeper agreed, and in this instance the sale may have been an acceptable method of resolving the situation. However, the sale was sometimes spontaneous, and the wife could find herself the subject of bids from total strangers. In March 1766, a carpenter from Southwark sold his wife “in a fit of conjugal indifference at the alehouse”. Once sober, the man asked his wife to return, and after she refused he hanged himself. A domestic fight might sometimes precede the sale of a wife, but in most recorded cases the intent was to end a marriage in a way that gave it the legitimacy of a divorce.”
“Prices paid for wives varied considerably, from a high of £100 plus £25 each for her two children in a sale of 1865 (equivalent to about £12,500 in 2015) to a low of a glass of ale, or even free. […] According to authors Wade Mansell and Belinda Meteyard, money seems usually to have been a secondary consideration; the more important factor was that the sale was seen by many as legally binding, despite it having no basis in law. […] In Sussex, inns and public houses were a regular venue for wife-selling, and alcohol often formed part of the payment. […] in Ninfield in 1790, a man who swapped his wife at the village inn for half a pint of gin changed his mind and bought her back later. […] Estimates of the frequency of the ritual usually number about 300 between 1780 and 1850, relatively insignificant compared to the instances of desertion, which in the Victorian era numbered in the tens of thousands.”
“In 1825 a man named Johnson was charged with “having sung a song in the streets describing the merits of his wife, for the purpose of selling her to the highest bidder at Smithfield.” Such songs were not unique; in about 1842 John Ashton wrote “Sale of a Wife”.[nb 6] The arresting officer claimed that the man had gathered a “crowd of all sorts of vagabonds together, who appeared to listen to his ditty, but were in fact, collected to pick pockets.” The defendant, however, replied that he had “not the most distant idea of selling his wife, who was, poor creature, at home with her hungry children, while he was endeavouring to earn a bit of bread for them by the strength of his lungs.” He had also printed copies of the song, and the story of a wife sale, to earn money. Before releasing him, the Lord Mayor, judging the case, cautioned Johnson that the practice could not be allowed, and must not be repeated. In 1833 the sale of a woman was reported at Epping. She was sold for 2s. 6d., with a duty of 6d. Once sober, and placed before the Justices of the Peace, the husband claimed that he had been forced into marriage by the parish authorities, and had “never since lived with her, and that she had lived in open adultery with the man Bradley, by whom she had been purchased”. He was imprisoned for “having deserted his wife”.”
v. Bog turtle.
“The bog turtle (Glyptemys muhlenbergii) is a semiaquatic turtle endemic to the eastern United States. […] It is the smallest North American turtle, measuring about 10 centimeters (4 in) long when fully grown. […] The bog turtle can be found from Vermont in the north, south to Georgia, and west to Ohio. Diurnal and secretive, it spends most of its time buried in mud and – during the winter months – in hibernation. The bog turtle is omnivorous, feeding mainly on small invertebrates.”
“The bog turtle is native only to the eastern United States,[nb 1] congregating in colonies that often consist of fewer than 20 individuals. […] densities can range from 5 to 125 individuals per 0.81 hectares (2.0 acres). […] The bog turtle spends its life almost exclusively in the wetland where it hatched. In its natural environment, it has a maximum lifespan of perhaps 50 years or more, and the average lifespan is 20–30 years.”
“The bog turtle is primarily diurnal, active during the day and sleeping at night. It wakes in the early morning, basks until fully warm, then begins its search for food. It is a seclusive species, making it challenging to observe in its natural habitat. During colder days, the bog turtle will spend much of its time in dense underbrush, underwater, or buried in mud. […] Day-to-day, the bog turtle moves very little, typically basking in the sun and waiting for prey. […] Various studies have found different rates of daily movement in bog turtles, varying from 2.1 to 23 meters (6.9 to 75.5 ft) in males and 1.1 to 18 meters (3.6 to 59.1 ft) in females.”
“Changes to the bog turtle’s habitat have resulted in the disappearance of 80 percent of the colonies that existed 30 years ago. Because of the turtle’s rarity, it is also in danger of illegal collection, often for the worldwide pet trade. […] The bog turtle was listed as critically endangered in the 2011 IUCN Red List.”
This blogpost contains a list of books I’ve read during the year 2014, as well as links to blogposts on this site which I have written about the books.
Aside from links to relevant blogposts I’ve also added a little information about the books to the list – the numbers in the parentheses are the goodreads ratings I have given the books (for information about how to interpret those ratings, see incidentally the comments here and here), whereas the letters following those numbers indicate which ‘type’ of book it is – ‘f’ = fiction, ‘nf’ = non-fiction. I have included the names of the publishers of the non-fiction books in the list, as it took no mental effort to add this variable and it seemed like relevant information, and I have likewise added the names of the authors of the fiction books, as this also seemed like relevant information which was easy to add. Given that people reading along here can’t be expected to know all the publishers included, I have provided a link to some information about each of the publishers featured on the list – the link is added at the first place on the list where the publisher in question is mentioned – in order to make it easier to assess relatively quickly which type of book it might be. Aside from these things I’ve written very little about each book; I have added a few other remarks here and there where they seemed relevant, but I’ve tried to keep such comments brief and to the point. If you don’t, a post like this can get very long very fast.
I have not rated all the books on the list, but I have added the goodreads ratings in the great majority of cases where I have rated them – in the cases where no rating is provided, the posts about the book will usually give you both some idea of why I did not rate it and some idea of what I think about it. Books I have not rated often had some (to me) problematic features which I’ve felt somewhat ambivalent about. I have not blogged all the books I’ve read this year; this is partly because I decided a while back to limit fiction blogging to a minimum.
On the list below I have only included books which I have read in full and have actually finished, meaning that as usual some books are left out. I hope you’ll find the list helpful in terms of navigating the site and that it’ll make it easier for you to find stuff I’ve written here about things you consider interesting. This is probably also a good place to remind people of the existence of the category cloud in the sidebar (a lot of work went into making it as useful as it is now, so you’ll forgive me for pointing this resource out to you even if you’re already aware of its existence), as well as the search bar; there’s a lot of stuff on this blog at this point, but if you know how/where to look I’m not really sure it’s actually that difficult to navigate the site – at least I seem to manage reasonably well…
Okay, back to the list and the books. After the first 6 months of the year I’d read 53 books. As you’ll be able to tell from the list below, the complete list for the entire year contains 116 books. 46 (~40%) of the books are fiction, if you include both Lichtenberg and Cuppy in this category, which I’ve decided one probably should. That means that 70 (~60%) of the books were non-fiction. In terms of time expenditure this breakdown is of course highly misleading, as I spend much, much more time on non-fiction books than on fiction books.
Although I have not blogged all the books I’ve read, I have on the other hand blogged/reviewed some of the books I did not finish. For this reason I have added an addendum at the bottom of the post with a few of those books and links to relevant blogposts and goodreads reviews. It should perhaps be noted that it usually makes sense to distinguish between two categories of unfinished books: books which are simply bad/terrible, and books which take a lot of work. Do not take the fact that I did not finish a book to necessarily be an indication that the book was bad; maybe it was just very long or took a lot of work (one example: I’ve read over 650 pages of the textbook Sexually Transmitted Diseases so far, yet I’m not even a third of the way through that book).
If you want to know what the books I’ve read actually ‘look like’ and you’d like to get some sort of a ‘big picture’ look at what one might think of as my ‘digital book shelves’, goodreads has a list here with cover views of many of the publications which feature on the list below. That list however also includes books I did not finish.
1. Pathophysiology of disease (5, nf. Lange medical text. Long, takes a lot of work compared to most of the other books on this list). I took a few quite long breaks from the book along the way, which is why the posts are somewhat spread out over time. I decided in the end to add all relevant posts about the book here, even the ones which were not written this year. Blog coverage here, here, here, here, here, and here.
2. The Complete Maus (4, f). Art Spiegelman. I was seriously considering not including this one on the list at all (is this even a book?), but on the other hand it took me significantly more time to read this thing than it took me to read Calvino so I figured I might as well add it to the list.
3. Why Women Have Sex (2, nf. Times Books. This is one of a few non-fiction books on this list which are not either academic publications or technical publications. Don’t be fooled by the fact that the book is written by two university professors; the level of coverage here is much lower than that of pretty much every other non-fiction book on this list). Blog coverage here and here.
6. Handbook of Individual Differences in Social Behavior (4, nf. Guilford Press psychology text. Long). Blog coverage here, here, and here. Note that I changed my mind about the goodreads rating after I’d written the last of my posts about the book.
7. Evolution of Island Mammals: Adaptation and Extinction of Placental Mammals on Islands (4, nf. Wiley-Blackwell biology text). Blog coverage here.
11. Death in the clouds (4, f). Agatha Christie.
14. Screening for Depression and Other Psychological Problems in Diabetes: A Practical Guide (2, nf. Springer). Blog coverage here.
16. The Daughter Of Time (4, f). Josephine Tey.
17. Cards on the Table (5, f). Agatha Christie.
18. A Practical Manual of Diabetic Retinopathy Management (2, nf. Wiley-Blackwell medical text). Blog coverage here.
19. The Cambridge Economic History of Modern Europe: Volume 1, 1700-1870 (3, nf. Cambridge University Press economics text). Blog coverage here and here.
21. Personality Judgment: A Realistic Approach to Person Perception (3, nf. An Academic Press psychology publication). Blog coverage here and here.
22. The Remains of the Day (5, f). Kazuo Ishiguro.
23. The Origin and Evolution of Cultures (5, nf. Oxford University Press. Takes a lot of work, but it’s an awesome book – “Highly recommended. Probably the best book I’ve read this year”). I added this book to my list of favourite books on goodreads. Blog coverage here, here, here, here and here.
24. What Did the Romans Know?: An Inquiry into Science and Worldmaking (2, nf. University of Chicago Press). Blog coverage here and here.
25. Bioterrorism and Infectious Agents: A New Dilemma for the 21st Century (3, nf. Springer medical text). Blog coverage here.
29. Lost in a Good Book (5, f). Jasper Fforde.
32. Do Androids Dream of Electric Sheep? (3, f). Philip K. Dick.
34. Military Geography: For Professionals and the Public (3, nf. Potomac Books/University of Nebraska Press). Blog coverage here, here and here.
36. To Kill a Mockingbird (2, f). Harper Lee. Overrated.
37. Impact of Sleep and Sleep Disturbances on Obesity and Cancer (5, nf. Springer medical text). Blog coverage here and here.
39. Something Rotten (4, f). Jasper Fforde.
41. Plant Animal Interactions: An Evolutionary Approach (5, nf. Blackwell Publishing biology text). A really great book, high average goodreads rating. I added it to my list of favourite books on goodreads. Blog coverage here, here, and here.
43. Peril at End House (4, f). Agatha Christie.
48. Poirot Investigates (3, f). Agatha Christie. Unlike the other books by her on the list, this book is a collection of short stories, rather than a novel.
50. First Among Sequels (5, f). Jasper Fforde. Goodreads review: “(Maybe I shouldn’t give all these [Jasper Fforde] books five stars, but as long as they keep being awesome I’ll keep giving them five stars.)”. Blog coverage here.
53. Murder in Mesopotamia (4, f). Agatha Christie.
54. Discrete Time Stochastic Control and Dynamic Potential Games: The Euler Equation Approach (nf. Springer – SpringerBriefs in Mathematics). Blog coverage here.
55. The Big Four (1, f). Agatha Christie. A terrible book.
58. The Mystery of the Blue Train (3, f). Agatha Christie.
60. Hypoglycemia in Diabetes – Pathophysiology, Prevalence, and Prevention (3, nf. Published by the American Diabetes Association). Blog coverage here.
62. The Emergence of Animals: The Cambrian Breakthrough (4, nf. Columbia University Press). Blog coverage here and here.
67. Death on the Nile (4, f). Agatha Christie.
69. A murder is announced (2, f). Agatha Christie.
71. The True Believer: Thoughts on the Nature of Mass Movements (3, nf. Harper Perennial Modern Classics). Blog coverage here and here.
72. The Gospel According to Jesus Christ (2, f). José Saramago.
73. Aging – Facts and Theories (Interdisciplinary Topics in Gerontology, Vol. 39) (1, nf. Karger). Blog coverage here and here.
74. Ecological Dynamics (4, nf. Oxford University Press). A good book to point to when talking to people who believe biologists don’t ever really use complicated mathematical tools. Good coverage of various basic model concepts. Blog coverage here.
76. A Pocket Full Of Rye (4, f). Agatha Christie.
77. Hercule Poirot’s Christmas (5, f). Agatha Christie.
79. The Body in the Library (3, f). Agatha Christie.
80. Appointment with Death (3, f). Agatha Christie.
81. The Gambler (2, f). Fyodor Dostoyevsky. Technically I didn’t finish the book in which the novel was included because I found the other novel included in that work, The Double, to be completely unreadable; but as The Gambler is a novel in its own right I decided it was okay to include it here.
82. Sad Cypress (3, f). Agatha Christie.
84. Adolescents and Adults with Autism Spectrum Disorders (1, nf. Springer). Blog coverage here and here.
85. Taken at the Flood (4, f). Agatha Christie.
86. The Hollow (4, f). Agatha Christie.
87. Sexual Selection in Primates: New and Comparative Perspectives (5, nf. Cambridge University Press). I added this book to my list of favourite books on goodreads. Blog coverage here and here.
88. Murder in the Mews (2, f). Agatha Christie.
89. They do it with mirrors (2, f). Agatha Christie. Way too easy to figure out – close to one star.
90. Unobserved Variables: Models and Misunderstandings (2-3?, nf. Springer – SpringerBriefs in Statistics). Blog coverage here.
91. Astrobiology of Earth: The Emergence, Evolution, and Future of Life on a Planet in Turmoil (3, nf. Oxford University Press).
93. The Labours Of Hercules (2, f). Agatha Christie.
98. The Moon is a Harsh Mistress (f). Robert Heinlein. I almost didn’t read past page 15 – the book is silly – but I managed to ignore the silliness long enough to finish it. If you’re interested in reading classic science fiction, Asimov’s stuff is in my opinion much better than Heinlein’s.
99. Appointment Planning in Outpatient Clinics and Diagnostic Facilities (2, nf. Springer – SpringerBriefs in Health Care Management and Economics). I included a few comments about the book in the third paragraph of this post. More detailed coverage of related topics can be found here.
100. The Stars, Like Dust (4, f). Isaac Asimov.
101. Female Infidelity and Paternal Uncertainty: Evolutionary Perspectives on Male Anti-Cuckoldry Tactics (2, nf. Cambridge University Press). Blog coverage here.
102. The Currents of Space (3, f). Isaac Asimov.
104. Pebble in the Sky (2, f). Isaac Asimov. Time-travel, telepathy, mind-reading, laughably implausible biology – this book reminded me way too much of Heinlein.
105. Delusion and Self-Deception: Affective and Motivational Influences on Belief Formation (2, nf. Psychology Press). Blog coverage here and here.
106. Robot Dreams (2, f). Isaac Asimov. I prefer his novels to his short stories, but on the other hand some of the included narratives I simply would not have finished if they’d been longer (many stories were too silly, implausible, etc. for me to like them much).
107. Origin and Evolution of Planetary Atmospheres: Implications for Habitability (2, nf. Springer – SpringerBriefs in Astronomy). Close to three stars, but the poor language of the publication made it difficult for me to justify giving it that rating. Blog coverage here.
109. The Big Over Easy (4, f). Jasper Fforde.
111. The Decline and Fall of Practically Everybody (5, f? – I don’t really know how to categorize this one. ‘Humour’ is a more informative category than either ‘fiction’ or ‘non-fiction’). Will Cuppy.
112. The Fourth Bear (5, f). Jasper Fforde.
115. The Cambridge Handbook of Personal Relationships (2, nf. Cambridge University Press. Long). Please don’t draw strong conclusions about the book based on the 2-star rating before you’ve read my goodreads review. Blog coverage here, here, here, here, here, here, and here.
116. Pyramids (3, f). Terry Pratchett.
Addendum – a few books I have not finished, but which I have reviewed/covered either on goodreads or on this blog:
The Changing Nature of Pain Complaints over the Lifespan (1, nf. Springer). Blog coverage here.
Sexually Transmitted Diseases (nf. McGraw-Hill Professional Publishing). Blog coverage here, here, and here (I include here only posts written this year – note that the first post to which I link has links to additional coverage as well).
Books I did not finish and did not cover either here or on goodreads (some of these are books I expect to finish in 2015):
Introduction to Quantum Mechanics (nf. John Wiley & Sons). I’ve talked about this book a few times on the blog, but I’ve never really covered the book in any detail.
The State of Affairs: Explorations in Infidelity and Commitment (1, nf. Psychology Press). I did technically post a review on goodreads, but the review did not add much information about the book.
Cognitive Psychology: A Student’s Handbook (nf. Psychology Press).
This is just a quick note to inform you that I now have some internet/computer access. I sort of want to get back to regular blogging, however given the limitations imposed on me over the next week or so I’m not sure how much blogging you should expect. I have ordered an extra laptop so as to have one in reserve in the future, but it has not yet arrived, and at the moment I’m relying on an old laptop I’ve borrowed. The laptop is very old and almost completely useless, and it crashes many times each day for basically no reason. I have been updating my 2014 book list in order to publish a ‘final version’ of that post, and the computer crashed 5 times while I was doing those updates (even stuff like opening a new tab in your browser or switching pages in a pdf may cause this computer to become unresponsive and require a restart); this is not the sort of situation which gives you a great desire to blog.
We’ll see how it goes.